require 'spec_helper'

feature 'Settings for Klarna' do
  stub_authorization!

  scenario 'update' do
    visit spree.admin_path
    click_link 'Configuration'
    click_link 'Payment Methods'
    click_link 'New Payment Method'
    select 'Spree::PaymentMethod::KlarnaInvoice', from: 'gtwy-type'
    fill_in 'payment_method_name', with: 'klarna'
    click_button 'Create'
    fill_in 'payment_method_klarna_invoice_preferred_id', with: '123456'
    fill_in 'payment_method_klarna_invoice_preferred_shared_secret', with: 'asd123asd'
    click_button 'Update'

    expect(page).to have_content 'Payment Method has been successfully updated!'

    klarna = Spree::PaymentMethod::KlarnaInvoice.first
    expect(klarna.preferred_id).to eq '123456'
    expect(klarna.preferred_shared_secret).to eq 'asd123asd'
  end
end
| {
"redpajama_set_name": "RedPajamaGithub"
} | 513 |
{"url":"http:\/\/chronicle.com\/blognetwork\/castingoutnines\/tag\/digital-natives\/","text":"Tag Archives: Digital natives\n\nApril 5, 2010, 12:00 pm\n\nThe MATLAB class at midterm: Comfort level\n\nTo end the first half of the semester in the MATLAB course, I gave students a lengthier-than-usual survey about the course \u2014 a sort of mid-semester course evaluation. I have a load of interesting data to sift through and analyze, relating to various aspects of the course and tagged with metadata about gender, GPA, major, whether they live on or off campus, and so on. I hope to finish analyzing the data before the semester is over. (Ba-dum-ching.)\n\nOne of the questions I asked was a mirror of a question I asked in the beginning: On a scale of 0 (lowest) to 10 (highest), rate your personal comfort level with using computers to do the kinds of things we do in this class. I\u2019m thinking that there are affective issues about working with computers, and especially MATLAB, that are never discussed but which play a huge factor in student learning. (We seem to just tell engineers to suck it up and\u2026\n\nFebruary 17, 2010, 9:59 pm\n\nAnd so it begins: Lab #1 in the MATLAB course\n\nThe MATLAB course began in earnest on Monday this week with our first full-length lab activity session. This was the second overall meeting, the first one being some organizational stuff and a lengthy fly-through of the main features of MATLAB. What follows is a breakdown of what we did and how it went, which also serves as an invitation for critique and suggestions in the comments.\n\nFirst, some context. I intend for this course to be heavily hands-on with an emphasis on self-teaching within reasonable bounds. I laid a ground rule in the first class meeting that any question of the form \u201cHow do you do ____ in MATLAB?\u201d was going to be met with the responses \u201cWhat have you found in the MATLAB help documentation? What have you found via a Google search? 
What have you found out from your lab partner?\u201d I\u2019m not above giving hints to students in the class, but I insist that they exhaust all\u2026\n\nMarch 13, 2008, 10:54 am\n\nTrue library story\n\nI don\u2019t make it out of my building very often at work, but I needed to go over to our library this morning to reserve a computer lab and to look for a particular book. I didn\u2019t know the call number for the book, so I went to the nearest available kiosk computer to look it up in the online catalog.\n\nI should have known it was going to be trouble when the nearest computer was an ancient, beige tower PC with a sticker on the side proclaiming it to be \u201cDesigned for Windows 98 and Windows NT\u201c. And it was turned off, which is unusual for a public kiosk. So I turned it on, and it proceeded to literally rattle and whine while it booted. After entering in my login information, I was able to access the web browser \u2014 after 15 minutes had passed. 15 minutes from login to usability! I couldn\u2019t even walk away and get on with the stuff I had to do today, because once the interminable login procedure\u2026\n\nJanuary 27, 2008, 12:06 pm\n\nenVisionMATH\n\nHere\u2019s a promotional video for a new math curriculum from Pearson called enVisionMATH. (It must be a sign of the times that grade school math curricula have promotional videos.) Watch carefully.\n\n1. Should it be a requirement of parenthood that you must remember enough 5th grade math to teach it halfway decently to your kids?\n2. Does the smartboard come included with the textbooks?\n3. Did anybody else have the overwhelming urge to yell \u201cBingo!\u201d after about 2 minutes in?\n4. 
When will textbook companies stop drawing the conclusion that because kids today like to play video games, talk on cell phones, and listen to MP3 players, that they are therefore learning in a fundamentally different way than anybody else in history?\n\nThe last question is all about the research-free digital nativist assumption that is the source of many lucrative curriculum deals these \u2026\n\nNovember 1, 2007, 2:00 pm\n\nRetrospective: A proposal about digital natives (4.12.2007)\n\nEditorial: We\u2019re getting near the end of this week\u2019s look back at articles from the past here at CO9s. I\u2019ll have two more tomorrow and one more Saturday. Why twelve? Why, because 12 is an integer of the form $$3 \\times 2^n$$, of course. Didn\u2019t you know those are the best kinds of numbers?\n\nOne of the things I want to accomplish on this blog is question assumptions, especially where those assumptions have an impact on students and how we teach them. For me, there\u2019s no bigger source of unquestioned assumptions than the current movement built around the digital native hypothesis \u2014 the notion that children today are native to the digital world and come pre-loaded with technological skills that we \u201cdigital immigrants\u201d have to acquire. These assumptions simply don\u2019t square in any way with what I\u2019ve experienced as a teacher, and the extent to which these assumptions are driving\u2026\n\nSeptember 29, 2007, 9:56 am\n\nWhat's the best electronic medium for professor\/student interaction?\n\nThe comments at my last post are suggesting that email has been surpassed by IM, Facebook, and text messaging among the younger generation as the preferred means of electronic communication. (Maybe of any kind of communication.) 
That really gives me, as a professor, some pause as to my assumption that if I need to get information out to students in a timely way (say, about a change in an assignment or a last-minute announcement for class) or create a space for out-of-classroom discussion of ideas or assignments, email isn\u2019t nearly as reliable as I think it is.\n\nI\u2019m OK with that if it\u2019s true, but then there are two questions that come to mind as being pretty important from my perspective:\n\n\u2022 If I have information that I need to get out to my students quickly and be reasonably assured that they\u2019ll get it in time for it to be useful, what is the best way to do this? Is there no one best way,\u2026\n\nSeptember 27, 2007, 8:57 pm\n\nThese digital natives don't email\n\nIf you read enough edublogs, you begin to encounter the factions that believe that students today are digital natives and have all sorts of rich information experiences all the time in their everyday lives. This is usually taken to mean that they use all kinds of electronic means of sending and receiving information, such as email. I\u2019m already skeptical of that claim, and after the following experience from today I am even less sure about it.\n\nWe had some high school students visiting the math department at my college, and part of the program was a discussion panel with current math majors. One of the math majors was asked about some of the main differences between high school and college, and he mentioned the quantity of email that one has to keep up with as a major difference. He asked the high school students how often they checked their emails now. They all looked at each other\u2026\n\n\u2022 The Chronicle of Higher Education\n\u2022 1255 Twenty-Third St., N.W.\n\u2022 Washington, D.C. 
20037","date":"2014-11-26 12:50:32","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.28543275594711304, \"perplexity\": 1244.707402681787}, \"config\": {\"markdown_headings\": false, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 20, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2014-49\/segments\/1416931006855.76\/warc\/CC-MAIN-20141125155646-00247-ip-10-235-23-156.ec2.internal.warc.gz\"}"} | null | null |
Q: 'ManyToManyDescriptor' object has no attribute 'filter' I am making an application which obtains data and then displays it through charts. The problem is that I am running a query to find out which developers belong to a project, but it throws an error saying that the ManyToManyDescriptor object has no filter attribute.
My view:
class ProjectTemplateView(TemplateView):
    template_name = 'index.html'

    def count_developer(self):
        projects = Project.objects.all()
        for project in projects:
            developers = Developer.project_set.filter(project=project).count()
            print(developers)

    def get_context_data(self, **kwargs):
        context = super().get_context_data(**kwargs)
        context['projects'] = Project.objects.all()
        context['developers'] = self.count_developer()
        return context
This is my project model:
class Project(models.Model):
    STATUS_CHOICES = (
        ('approver', 'Aprovado'),
        ('process', 'En Proceso'),
        ('inactive', 'Inactivo'),
    )
    name = models.CharField(max_length=50, unique=True, verbose_name='Nombre')
    developer = models.ManyToManyField(Developer, verbose_name='Desarrollador')
    visibility = models.BooleanField(default=False, verbose_name='Visibilidad')
    status = models.CharField(
        max_length=10, choices=STATUS_CHOICES, verbose_name='Estatus')
    slug = models.SlugField(max_length=50, unique=True)
    created_at = models.DateTimeField(
        auto_now_add=True, verbose_name='Fecha de Creacion')
    update_at = models.DateTimeField(
        auto_now=True, verbose_name='Fecha de Actualizacion')
And this is my developer model:
class Developer(models.Model):
    user = models.OneToOneField(User, on_delete=models.CASCADE)
    created_at = models.DateTimeField(auto_now_add=True, verbose_name='Fecha de Creacion')
A: This may be useful:
def count_developer(self):
    projects = Project.objects.all()
    for project in projects:
        developers = project.developer.all()
        # gives you the whole queryset
        print(developers)
        # gives you the count
        print(developers.count())
One more thing: you should return the developers from the count_developer function so that you can access the developer count in your template.
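The root cause is that `project_set` is a descriptor: it only yields a usable related manager when accessed on a model *instance*; accessed on the class (`Developer.project_set`) it returns the `ManyToManyDescriptor` itself, which has no `filter`. A minimal stdlib-only sketch of this pattern (not Django's actual code; the `RelatedManager`/`RelatedDescriptor` names are illustrative):

```python
class RelatedManager:
    """Stand-in for Django's per-instance related manager."""
    def __init__(self, instance):
        self.instance = instance

    def all(self):
        return list(self.instance._related)


class RelatedDescriptor:
    """Mimics ManyToManyDescriptor: usable only via an instance."""
    def __get__(self, instance, owner):
        if instance is None:
            # Accessed on the class (Developer.project_set):
            # you get the bare descriptor, with no .all()/.filter()
            return self
        return RelatedManager(instance)


class Developer:
    project_set = RelatedDescriptor()

    def __init__(self, related):
        self._related = related


dev = Developer(["alpha", "beta"])
print(dev.project_set.all())                   # instance access works
print(hasattr(Developer.project_set, "all"))   # class access: no manager API
```

In the real view, `project.developer.count()` per project, or a single `Project.objects.annotate(Count('developer'))`, keeps you on instances/querysets and never touches the descriptor on the class.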
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 5,873 |
Q: Add point to center of polygon graphic in the Esri javascript api I'm using the ESRI JavaScript API 3.3 to develop an application. I am selecting parcels and adding a graphic for the selected polygon. What I would like to do is add a point marker to the center of the polygon graphic after the graphic is added. Does anyone know of a sample I could take a look at, or know how to quickly implement this functionality? I feel like I'm missing something simple.
A: Unless your parcel polygons are unusually shaped, you can get the center point of the extent of the parcel polygon and use that to draw your center graphic.
// in this example, the graphic variable is the graphic of the parcel you added to the map
var centerPoint;
switch (graphic.geometry.type) {
  case "point":
    // if the graphic is a point
    centerPoint = graphic.geometry;
    break;
  case "extent":
    // if the graphic is an extent
    centerPoint = graphic.geometry.getCenter();
    break;
  default:
    // if the graphic is a line or polygon, which for a parcel will probably
    // be the case
    centerPoint = graphic.geometry.getExtent().getCenter();
}
var centerGraphic = new esri.Graphic(centerPoint, centerSymbol);
...
A: So here are two quick solutions. I really do not think either is the optimal way, but both would be easy.
*
*Place a marker symbol where the user clicks. That way the point will visually sit on the polygon that was selected.
*Use the label points geometry service task to get a geometry telling you where to place the marker symbol. http://help.arcgis.com/en/webapi/javascript/arcgis/jsapi/geometryservice.html#GeometryService/onLabelPointsComplete
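For concave parcels the extent center can fall outside the polygon, which is why label points or a true centroid behave better than the extent center. The standard shoelace-based area-weighted centroid, sketched here in plain Python rather than the ESRI API (the function name is illustrative):

```python
def polygon_centroid(points):
    """Area-weighted centroid of a simple (non-self-intersecting) polygon.

    points: list of (x, y) vertices in order; the polygon is closed
    implicitly (last vertex connects back to the first).
    """
    area2 = 0.0  # twice the signed area (shoelace sum)
    cx = cy = 0.0
    n = len(points)
    for i in range(n):
        x0, y0 = points[i]
        x1, y1 = points[(i + 1) % n]
        cross = x0 * y1 - x1 * y0
        area2 += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    # Centroid = (1 / (6A)) * sums, and area2 == 2A
    return cx / (3.0 * area2), cy / (3.0 * area2)


# Unit square: centroid is (0.5, 0.5)
print(polygon_centroid([(0, 0), (1, 0), (1, 1), (0, 1)]))
```

Note that even the centroid can fall outside a sufficiently concave polygon (e.g. a crescent), which is exactly the case the label-points geometry service is designed for.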
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 6,055 |
Hackett shops used to have a post office, a Shell service station, a pharmacy, a butcher and a bakery. Today, it features a bike store, florist, skin clinic, exercise centre, hairdresser, Thai restaurant and osteopath. In December 1962, a four bedroom Hackett home cost just £6250. Since that era, Hackett has more dwellings, but fewer residents - 2,991 in 2016, compared with 4,384 in 1971.
It's said that understanding yourself starts with knowing your history and local geography. Thanks to a new history of Hackett, local residents can get a better insight into both. Produced by the Hackett Community Association, the book was launched at Hackett's recent birthday celebrations. Many former residents came along, including those who had attended the former Hackett Primary School.
The Hackett history contains the stories of local residents such as Steve, Arthur and Andrew Savoulidis, who helped revitalise the shops, and the late Hackett 'mayor' James Walker. Plus there's plenty of general Canberra tales: from photos of the 1965 snowfall, to fabulous old Canberra Times advertisements and anecdotes. | {
"redpajama_set_name": "RedPajamaC4"
} | 8,168 |
{"url":"https:\/\/cos495.github.io\/general\/2017\/02\/16\/backprop-slides.html","text":"Some of you looked shell-shocked after yesterday\u2019s barrage of indices. I added some more slides to backprop.pptx (jump to slide titled \u201cHigh-level view\u201d), which contain another attempt to explain the derivation of backprop from the gradient learning principle. The updated slide deck is in the same Google Drive folder. This is standard material, so you can also read about it in many textbooks. For example, you can try Nielsen\u2019s online book.","date":"2017-08-23 00:23:58","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8442317247390747, \"perplexity\": 1395.357022037263}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2017-34\/segments\/1502886116921.70\/warc\/CC-MAIN-20170823000718-20170823020718-00338.warc.gz\"}"} | null | null |
It Coulda Been Great is creating pop culture podcasts!
Have you ever walked out of a movie theatre or turned off a game and thought to yourself, "How can something so good be so bad? How does an amazing concept for a story miss the mark by nautical miles? Why is it that I hate this thing, but I desperately want to love it?" Well, welcome to the crowd here at It Coulda Been Great!!
It Coulda Been Great is a pop culture podcast about mostly mediocre media. Hosted by Diana Paparozzi and a rotating crowd of her good friends.
You can find us anywhere you find fine podcasts, but if you'd like to listen to us RIGHT THIS SECOND may I recommend listening on our Pippa page? (You can find that by clicking here!) You can listen right in your browser window!
We will be offering exclusive content here on our Patreon as a thank you for helping us make the thing we like making!
Patrons at this tier will be able to vote on topics for future episodes!
For only five bucks a month, you can submit one idea per month for the "It Can't Be Good" segment on the podcast to be added to the massive list we keep and decide by random dice roll.
Give us a 40-word message to say on our podcast! It can be a shoutout or an ad! Plus access to all previous tiers.
When we get to making $50 per month, we'll finally be self-sufficient! Huzzah! The podcast can stay on the air and we won't be turning out our pockets for change. | {
"redpajama_set_name": "RedPajamaC4"
} | 4,537 |
Cashless and Mobile: What are you waiting for?
Cash has a long and interesting history. In neolithic times people tired of unwieldy barter and decided to trade with rare and shiny pebbles and shells. But now, more than 3000 years later, we are seeing cash unseated as king of trading methods. It happened gradually over the past decade—and accelerated over the past two years.
We are encouraged to use cards, contactless, direct transfers, direct debits, mobile wallets—to the point that seeing a 50 or 100 pound note gives you a bit of a jolt. Cashless payments have now overtaken the use of notes and coins. Sweden has gone almost completely cashless with less than 10% of transactions involving cash, and Canada has abolished the penny.
So what does this all mean for business owners? As each new innovation is adopted, customers balk at having to return to previous payment methods. POS technology that can handle all payment methods becomes a necessity—and the benefits are numerous.
The advantages of upgrading to a POS system that can handle cashless, mobile phone payments as well as cash have been well documented. Attract younger customers, show that your business is in tune with today's trends and payment preferences, simplify and streamline record keeping. Increase spend at the till, and reduce crime and theft. The argument that this alienates older customers is also increasingly debunked as trends point to people in their 60s and beyond using mobile purchase technology. Sure, many still oppose a cashless society, but one by one those doubts are being replaced by technology adoption.
Don't expect to be able to purchase a drink on British Airways with cash—card is all they take for your mid-flight G&T. By 2020, 450 million people will use a mobile wallet on their smartphone to pay. Gone are the days of dashing to the CashPoint, cashless is the future and having a robust and adaptable payment terminal is an essential part of your business plan. | {
"redpajama_set_name": "RedPajamaC4"
} | 5,288 |
A frenum piercing is a type of body piercing made through the frenulum of the prepuce. The term is also sometimes used to refer to other piercings placed on the ventral side of the human penis.
The frenum piercing is, after the Prince Albert piercing, one of the most common piercings on male genitals.
Placement
Frenum piercings are placed perpendicular to the shaft of the penis, passing only through the skin of the frenulum, without piercing the penis itself or the urethra. In circumcised men, this type of piercing is only possible if some remnant of the frenulum was left after circumcision.
Healing
Healing time for this type of piercing varies. Depending on the person, it can take from 2 weeks to 4 months to heal completely.
Jewelry
The jewelry most commonly used in frenum piercings includes the barbell, circular barbell, curved barbell and ball closure ring (BCR). These pieces usually have thicknesses ranging from 2 mm to 3.2 mm, which should not be exceeded in order to avoid discomfort, pain or tearing of the frenulum during intercourse. A wide variety of chastity devices make use of frenum piercings as part of fetish or BDSM activities.
History and culture
The earliest reference to the use of frenum piercings is found in Die künstlichen Verunstaltungen des Körpers bei den Batta. Zeitschrift für Ethnologie (16:217-225, 1884), which stated that among the inhabitants of Timor there was an ethnic group whose members wore bronze rings in the frenulum in order to increase sexual stimulation during intercourse.
In contemporary society, frenum piercings were most common among members of gay BDSM subcultures, until body piercing became popular in the late 1970s and early 1980s.
Frenum piercings are often intended to provide sexual pleasure both for the wearer and for the person with whom they have sexual relations. They can also be used to attach chastity devices to the wearer, denying them sexual pleasure.
Variants
The frenum ladder consists of a series of piercings extending from the frenulum toward the base of the penis.
See also
Genital piercing
Genital beading (body modification)
References
External links
Frenulum
Male genital piercing | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 5,379 |
Help the Retiree by having a raffle at the retirement party. Have guests donate $1. Proceeds will go to the "Retirement Fund". Host can donate a prize that will be raffled off. Package comes with 25 unique perforated raffle cards with either a male or female retiree. Make your gender selection from the drop down menu. | {
"redpajama_set_name": "RedPajamaC4"
} | 5,173 |
is a Japanese voice actor and singer affiliated with Arts Vision. He is best known for his roles in Cautious Hero: The Hero Is Overpowered but Overly Cautious as Seiya Ryuuguuin, Cute High Earth Defense Club Love! as En Yufuin, Mobile Suit Gundam: Iron-Blooded Orphans as Eugene Sevenstark, The Legend of the Galactic Heroes: Die Neue These - Kaikou as Siegfried Kircheis, Goblin Slayer as Goblin Slayer, JoJo's Bizarre Adventure: Stone Ocean as Weather Report, Black Clover as Mars and Mobile Suit Gundam Narrative as Zoltan Akkanen. At the 10th Seiyu Awards, he won the Best Rookie Actors Award for his roles as Kurō Hazama in Young Black Jack and Wakasa in Merman in My Tub.
Umehara provides vocals and guitar for the pop rock band Sir Vanity, which he formed with Yoshiki Nakajima and two other musicians.
Personal life
On May 10, 2018, it was announced that Umehara had been hospitalized with acute disseminated encephalomyelitis. On July 30, 2018, Arts Vision announced that he had recovered. Arts Vision also stated that while he was being treated in the hospital, Umehara developed a complication of intracranial hypotension, but after medical treatment and rehabilitation his doctors discharged him from the hospital with no concerns about lasting effects.
Filmography
Anime
2014
Brynhildr in the Darkness (Math teacher, man A, student, man, police officer, worker, Hekusen'yakuto man)
Chaika - The Coffin Princess (Thug C)
Magimoji Rurumo (Urata)
Merman in My Tub (Wakasa)
Riddle Story of Devil (Student of the darts shop)
Wolf Girl and Black Prince (Student, clerk)
Yowamushi Pedal (Audience)
2015
Aquarion Logos (Hayato Kujō)
Cute High Earth Defense Club Love! (En Yufuin)
Gatchaman Crowds insight (Rhythm Suzuki)
Makura no Danshi (Ryūshi Theodore Emori)
Million Doll (Ryū-san)
Mobile Suit Gundam: Iron-Blooded Orphans (Eugene Sevenstark)
Pokémon: XY (Orunisu)
Seraph of the End: Vampire Reign (René Simm)
Seraph of the End: Battle in Nagoya (René Simm)
Shōnen Hollywood -Holly Stage for 50- (Friend)
Snow White with the Red Hair (Mitsuhide Lowen)
Young Black Jack (Kurō Hazama)
2016
Amanchu! (Makoto Ninomiya)
Battery (Kazuki Kaionji)
Cute High Earth Defense Club LOVE! LOVE! (En Yufuin)
Gate: Jieitai Kanochi nite, Kaku Tatakaeri - Enryuu-hen (Diabo)
Girlish Number (Gojou Karasuma)
Haruchika (College Student) (ep 10)
Magic★Kyun! Renaissance (Teika Ichijōji)
Snow White with the Red Hair 2nd Season (Mitsuhide Lowen)
Tiger Mask W (Fujii Takuma)
Trickster (Inoue Ryo)
Masamune Datenicle (3rd Lord Yoshihiro, Yoshihiro Date)
2017
Children of the Whales (Ouni)
Chiruran : Nibun no Ichi (Nagakura Shinpachi)
Classroom of the Elite (Manabu Horikita)
Dynamic Chord (Shinobu Kurosawa)
Ikemen Sengoku: Toki o Kakeru ga Koi wa Hajimaranai (Shingen Takeda)
Jūni Taisen (Ushii/Eiji Kashii)
Kabukibu! (Tonbo Murase)
Karada Sagashi (Sugimoto Kenji)
Kino's Journey -the Beautiful World- the Animated Series (Shizu)
A Polar Bear in Love (Polar Bear)
Rage of Bahamut: Virgin Soul (Charioce XVII)
Robomasters: The Animated Series (Tei)
Sengoku Night Blood (Masamune Date)
Star-Myu 2 (Ren Kitahara)
The IDOLM@STER SideM (Kyoji Takajo)
Tsukipro The Animation (Dai Murase )
Whistle! (ONA) (Ryoichi Tenjo)
2018
Amanchu! Advance (Makoto Ninomiya)
Asa Da Yo!Kaishain (Kaibura Kai)
Black Clover (Mars)
Caligula (Izuru Minezawa)
Captain Tsubasa (2018) (Ken Wakashimazu)
Dame×Prince (Vino von Ronzado)
Darling in the Franxx (Goro)
Gakuen Babysitters (Hayato Kamitani)
Gintama: Shirogane no Tamashii-hen (Enshou)
Goblin Slayer (Goblin Slayer)
Hakyū Hōshin Engi (Igo)
Last Hope (Jay Yoon)
The Legend of the Galactic Heroes: Die Neue These - Kaikou (Siegfried Kircheis)
Mobile Suit Gundam Narrative (Zoltan Akkanen)
Planet With (Hideo Torai)
Sword Gai The Animation (Ichijou Seiya)
Tada Never Falls in Love (Sugimoto Hajime)
The iDOLM@STER SideM: WakeAtte Mini! (Kyoji Takajo)
The Thousand Musketeers (Ieyasu)
Uchū no Hō: Reimei-hen (Alpha)
2019
Ace of Diamond Act II (Soiichiro Mima)
Ahiru no Sora (Shigenobu Yakuma)
Cautious Hero: The Hero Is Overpowered but Overly Cautious (Seiya Ryuuguuin)
Crayon Shin-chan (Ikemen)
Ensemble Stars! (Keito Hasumi)
Fire Force (Tōjō)
Fragtime (OVA) (TBA)
Kimi dake ni Motetainda (Shun Gotōda)
Meiji Tokyo Renka (Ozaki Kouyou )
One-Punch Man (Kuroi Sēshi) (Episode 22)
RobiHachi (Prince Chamechamecha) (Episode 7)
Stand My Heroes: Piece of Truth (Miyase Gou)
Star-Myu 3 (Ren Kitahara)
The Legend of the Galactic Heroes: Die Neue These - Seiran (Siegfried Kircheis)
Tsukipro The Animation 2nd Season (Dai Murase)
ZENONZARD The Animation (Ash Claude)
2020
Akudama Drive (Courier)
Ascendance of a Bookworm (Damuel Matthias)
Fruits Basket 2nd Season (Kureno Souma)
Goblin Slayer: Goblin's Crown (Goblin Slayer)
Golden Kamuy (Vasily)
Kapibara-san (Narrator, Zookeeper)
Plunderer (Jail Murdoch)
Shadowverse (Kiriyama Shirou)
Uchitama?! Have you seen my Tama? (Kuro Mikawa)
Woodpecker Detective's Office (Sakutarō Hagiwara)
Toilet-Bound Hanako-kun (Nene's secret crush/side character)
2021
2.43: Seiin High School Boys Volleyball Team (Misao Aoki)
Heaven's Design Team (Kimura)
Hetalia: World Stars (Portugal)
High-Rise Invasion (Sniper Mask)
Hortensia Saga (Defrost Danois)
I-Chu: Halfway Through the Idol (Lucas)
JoJo's Bizarre Adventure: Stone Ocean (Weather Report)
Let's Make a Mug Too (Tomonari Kusano)
My Hero Academia: World Heroes' Mission (Shidero)
Seven Knights Revolution: Hero Successor (Gales)
Skate-Leading Stars (Izumi Himekawa)
So I'm a Spider, So What? (Balto Phthalo)
SSSS.Dynazenon (Koyomi Yamanaka)
The Saint's Magic Power is Omnipotent (Erhart Hawke)
The Slime Diaries: That Time I Got Reincarnated as a Slime (Zegion)
Those Snow White Notes (Seiryū Kamiki)
Tsukipro The Animation 2nd Season (Dai Murase)
Words Bubble Up Like Soda Pop (Toughboy)
2022
Aoashi (Haruhisa Kuribayashi)
Bleach: Thousand-Year Blood War (Jugram Haschwalt)
Build Divide -#FFFFFF- Code White (Arkeld)
Cap Kakumei Bottleman DX (Shiman Ijūin)
Classroom of the Elite 2nd Season (Manabu Horikita)
Echigo Bafuku (Sawatari)
I'm the Villainess, So I'm Taming the Final Boss (Claude Jean Ellmeyer)
Legend of Mana: The Teardrop Crystal (Elazul)
Miss Kuroitsu from the Monster Development Department (Professor Sadamaki)
My Master Has No Tail (Rakuda)
Play It Cool, Guys (Takayuki Mima)
Romantic Killer (Tsukasa Kazuki)
Shoot! Goal to the Future (Atsushi Kamiya)
Tales of Luminaria -The Fateful Crossroad- (August Wallenstein)
Tokyo Mew Mew New (Pie)
Golden Kamuy Season 4 (Vasily)
I've Somehow Gotten Stronger When I Improved My Farm-Related Skills (Volpe Dorma)
2023
Ayaka: A Story of Bonds and Wounds (Aka Ibuki)
Classroom of the Elite 3rd Season (Manabu Horikita)
Goblin Slayer II (Goblin Slayer)
Gridman Universe (Koyomi Yamanaka)
High Card (Vijay Kumar Singh)
Mashle (Abel Walker)
Opus Colors (Iori Haijima)
Revenger (Yuen Usui)
Spy Classroom (Klaus)
The Iceblade Sorcerer Shall Rule the World (Evi Armstrong)
The Misfit of Demon King Academy 2nd Season (Anos Voldigoad)
The Reincarnation of the Strongest Exorcist in Another World (Haruyoshi Kugano)
Tokyo Mew Mew New Season 2 (Pie)
Tsurune: The Linking Shot (Reiji Aragaki)
Why Raeliana Ended Up at the Duke's Mansion (Noah Voltaire Wynknight)
Game
2014
DYNAMIC CHORD feat. (reve parfait) (Shinobu Kurosawa)
IDOL-RISM (Ichido Haruna)
The IDOLM@STER SideM (Mobage) (Kyoji Takajo)
Ikemen Bakumatsu - Unmei no Koi (Sakamoto Ryoma)
Majo no Nina to Tsuchikare no Senshi
Senjou no Wedding
Tenku no Craft Fleet (Damien, Hauness, Reel)
2015
Ai ★ Chū (Lucas)
BELIEVER! (Inami You)
Cute High Earth Defense Club Love! Game! (Yufuin En)
Ensemble Stars! (Keito Hasumi)
Gakuen Club ~Houkago no Himitsu~ (Kimiki Renji)
I DOLL U (Peter)
The IDOLM@STER SideM (Kyoji Takajo)
Ikémen Sengoku: Romances Across Time (Takeda Shingen)
Seraph of the End: Unmei no Hajimari (Rene Simm)
2016
Band Yarouze! (Shin Koganei)
DAMEXPRINCE (Vino von Ronzado)
Do s ni Koishite ~Suiteroom de Himitsu no Shihai~ (Kokonoe Naoki)
Icchibanketsu (Takemi Kazuchi)
Magic★Kyun! Renaissance (Teika Ichijoji)
Period Cube ~Torikago no Amadeus~ (Demento)
Toraware no Palm (Haruto Kisaragi)
The Caligula Effect (Izuru Minezawa)
Hortensia Saga: Ao no Kishidan (Defrost)
2017
Akane-sasu Sekai de Kimi to Utau (Ono no Imoko)
Kingdom Hearts HD 2.8 Final Chapter Prologue (Ira)
Hana Oboro: Sengoku-den Ranki (Hashiba Hideyoshi)
The IDOLM@STER SideM LIVE ON ST@GE! (Kyoji Takajo)
Sengoku Night Blood (Masamune Date)
SENSIL (Sakuraba Shion)
Shiro to Kuro no Alice (Rain)
White Cat Project (Liam)
Gakuen Club ~ Himitsu no Nightclub ~ PSVita (Kamiki Renji)
Dear my Magicalboys (Niki Mugendo)
Kimi to Kiri no Labyrinth (Hishikawa Hodaka)
Grand Summoners (Vox)
2018
Majestic ☆ Majolical (Jasper Beryl)
Shiro to Kuro no Alice -Twilight Line- (Rain)
Senjyushi: The Thousand Noble Musketeers (Ieyasu)
Servant of Thrones (Phiet Crestan)
Caligula Overdose (Izuru Minezawa)
Dream Collection ~Mukanshu~ (Seika)
Dynamic Chord JAM&JOIN!!!! (Kurosawa Shinobu)
Kannagi no Mori (Nishina Nao)
Quiz Magical Academy (Mysterious Black Mage)
Yoake no Bel Canto (Astoria Bragium, Aunaus Ryuusu)
Dash! (Lucas)
DYNAMIC CHORD feat.apple-polisher V edition (Kurosawa Shinobu)
Dekiai voice drama × Berry's Danshi (Takabata Ibuki)
Puzzle Cafe (Hiruma Seiki)
Koutetsujou no Kabaneri -ran- (Chihiro)
Tlicolity eyes (Mochizuki Yousuke)
Ikemen Sengoku Toki o kakeru Koi -Aratanaru Deai- (Takeda Shingen)
Alchemia Story (Shizu) (Collaboration Event with Kino's Journey)
Eternal Dungeon (Hijikata Toshizo)
Danmachi ~Memoria Friese~ (Shizu) (Collaboration Event with Kino's Journey)
Shinen Resist (Volker)
Meiji Tokyo Renka -Haikara Date- (Ozaki Kouyou)
Ayakashi Koi Mekuri (Gin'No Jou)
Ordinal Strata (Reinhardt)
Octopath Traveler (Cyrus Albright)
Black Clover : Quartet Knight (Mars)
Seikimatsu Days: Our Era's End (Toya Isui, Kusanagi Goshou)
23/7 (George A. Custer)
World End Heroes (Raijo Shigure)
Valkyrie Anatomia: The Origin (Goblin Slayer) (Collaboration Event with Goblin Slayer)
2019
ZENONZARD (Ash Claude)
BROWNDUST (Aaron)
Dragon Marked For Death (Warrior)
Kingdom Hearts III (Ira)
Criminal Girls X (Male Protagonist)
DRAGALIA LOST (Prometheus)
Caligula -OVERDOSE- (Nintendo Switch Edition) (Izuru Minezawa)
RELEASE THE SPYCE secret fragrance (Mrs. Chocolatier)
Grand Summoners (Goblin Slayer : Collaboration Event with Goblin Slayer), (Vox)
Tlicolity Eyes -twinkle showtime- (Mochizuki Yousuke)
Dear My Magical Boys (Nintendo Switch Edition) (Niki Mugendo)
Libra of Precatus (Claudio)
Graffiti Smash (Calm)
Toraware no Palm (Nintendo Switch Edition) (Haruto Kisaragi)
Ken Ga Toki (Shakushain)
Palette Parade (El Greco)
Gensou Kissa Enchanté (Canus Espada)
War of the Visions: Final Fantasy Brave Exvius (Sterne Leonis)
Gensou Maneji (Serge)
Sakura Wars (Xiaolong Yang)
Gunvolt Chronicles: Luminous Avenger iX (Dystnine)
Disney Twisted Wonderland (Leona Kingscholar)
Kaikan♥Phrase -CLIMAX- (Noah Walker)
Kannagi no Mori Satsuki Ame Tsuzuri (Nishina Nao)
The King of Fighters '98 (Goblin Slayer : Collaboration Event with Goblin Slayer)
Goblin Slayer -THE ENDLESS REVENGE- (Goblin Slayer)
2020
Birushana Senki ~Genpei Hikamu Sou~ (Musashibou Benkei)
Wind Boys (Hanashiro Seriya)
Digimon ReArise (Hackmon)
Ensemble Stars!! Basic/Music (Keito Hasumi)
Disney: Twisted-Wonderland (Leona Kingscholar)
I★Chu Étoile Stage (Lucas)
Bleach: Brave Souls (Jugram Haschwalt)
JACKJEANNE (Einishi Rokurou)
Hyakka Ryouran Sengoku Star (Tenkabito)
Miya no Kei -Palace Trick- (Emperor Bo Hokukou)
OVERLORD: MASS FOR THE DEAD (Ryuuguuin Seiya : Collaboration event with Cautious Hero)
World Flipper (Educeus)
Kingdom of Heroes Season 2 : The Broken King (Osric)
Touken Ranbu (Ochidori Jyumonjiyari)
Mitra Sphere (Prince voice)
Genshin Impact (Al-Haitham)
2021
Monster Hunter Rise (Merchant Kagero)
Meiji Restoration Tensho Keru Koi (Ōkubo Toshimichi)
Valkyrie Connect (Takamimusubi, Savior Dis)
London Labyrinth (Globley)
Octopath Traveler: Champions of the Continent (Cyrus)
Nekopara - Catboys Paradise (Sage)
Three Kingdoms Heroes, Three Kingdoms RPG (Hua Xiong, Gan Ning, Taishi Ci)
The IDOLM@STER SideM GROWING STARS (Kyoji Takajo)
Deep Insanity: Asylum (Wu Innominatus)
Fire Emblem Heroes (Raven)
The Legend of Heroes: Kuro no Kiseki (Kasim Al-Fayed)
My Next Life as a Villainess: All Routes Lead to Doom! ~The Pirate Who Summons Trouble~ (Albert)
Code Geass: Genesic Re;Code (Hijikata Tochizou)
Tarot Boys: 22 Apprentice Fortune Tellers (Ein Baphomet)
Dragon Quest X (Hakuou)
Ragnador: Ayashiki Koutei to Shuuen no Yashahime (Ginko)
Pokémon Masters EX (Darach)
Tales of Luminaria (August Wallenstein)
2022
Birushana Senki ~Ichijuu no Kaze~ (Musashibou Benkei)
Sentimental Photography (Saijou Mamoru)
Radiant Tale (Paschalia)
Shironeko Golf (Liam)
Gran Saga (Kaito)
Dream Meister and the Recollected Black Fairy (Kuchen)
Shadowverse (Magna Zero)
Soukaitenki (Reiji)
Genshin Impact (Alhaitham)
JoJo's Bizarre Adventure: Last Survivor (Weather Report)
LAST CHOUDIA (Thouzer)
Majestic☆Majolical [Nintendo Switch] (Jasper Beryl)
Shikōtei no michi e: Shichiyū no arasoi (Lord Xinling)
WORLD II WORLD (Col)
Eternal Tree (Hakuro)
ALICE Fiction (Lex)
JoJo's Bizarre Adventure: All Star Battle R (Weather Report)
2023
Master Detective Archives: Rain Code (Vivia Twilight)
Radiant Tale: Fanfare! (Paschalia)
Tower of Sky (Anker)
Alchemy Stars (Leyn)
DUEL MASTERS PLAY'S (Shuramaru)
Drama CD
2014
Exit Tunes Present Actors2 (Kiriyama)
GANGSTA. (Subordinate)
Mawazaka no Kenshi to Shoukan Maou (Loki)
Nozomubeku mo Nai (Friend A)
2015
FlyMEproject "MEDICODE" (Semimaru)
Zenryoku Shounen Tachi no O-ut (Toa Sakuraba)
2017
Goblin Slayer (Goblin Slayer)
2018
Blossom (Kiritani Yamato)
Koiiro Shihyou -Sweet Days- (Tokitsu Kaname)
2021
High Card (Vijay Kumar Singh)
BLCD
2015
Ai no Mitsu ni Yoe! (Dormitory Student)
Vomic
2014
My Hero Academia (Katsuki Bakugō)
Stage
2015
Hoshi no Koe (2015) (Terao Noboru)
2017
Homunculus (2017) (Julius)
2018
Eraser in My Head 11th letter (Stage Edition) (2018) (Kousuke)
2019
Chévere Note ~Story from Jeanne d'Arc~ (2019) (Étienne de Vignolles)
2020
EL GALLEON (2020) (William Dampier)
THANATOS (2020) (Dr. Edmund Earhart)
VOICARION (Noguchi Tamon)
Film
2020
Seiyuu Danshi Desu Ga...? (Himself)
CM
2015
MAMESHIBA GAKUEN (Midori Edao)
Dubbing
Live-action
DMZ (Skel)
A Dog's Purpose (Teenage Ethan Montgomery)
Goosebumps (Zach Cooper)
School of Rock (Freddie) (episodes 1 and 2, then returning from episode 12 onwards)
Spiral (Detective William Schenk)
Unforgotten (Tyler Da Silva)
Valley of the Boom (Marc Andreessen)
Animation
Bravest Warriors (Daniel "Danny" Vasquez)
Love, Death & Robots (Salesman) (Episode 12)
Miraculous Ladybug (Luka Couffaine)
External links
Official Profile at ArtsVision
1991 births
Living people
21st-century Japanese male actors
Japanese male video game actors
Japanese male voice actors
Male voice actors from Shizuoka Prefecture
Seiyu Award winners
Arts Vision voice actors
Harry Davenport
Active - 1915 - 1995 | Born - Jan 19, 1866 | Died - Aug 9, 1949 | Genres - Drama, Comedy, Romance
Biography by Hal Erickson
Harry Davenport was descended from a long and illustrious line of stage actors who could trace their heritage to famed 18th-century Irish thespian Jack Johnson. Davenport made his own stage bow at the age of five, racking up a list of theatrical credits that eventually would fill two pages of Equity magazine. He started his film career at the age of 48, co-starring with Rose Tapley as "Mr. and Mrs. Jarr" in a series of silent comedy shorts. He also directed several silent features in the pre-World War I era. Most of his film activity was in the sound era, with such rich characterizations as Dr. Mead in Gone With the Wind (1939) and Louis XI in The Hunchback of Notre Dame (1939) to his credit. He also essayed a few leading film roles, notably as a lovable hermit in the 1946 PRC programmer The Enchanted Forest. At the time of his final screen performance in Frank Capra's Riding High (1950), much was made in the press of the fact that this film represented Davenport's seventy-eighth year in show business. Married twice, Harry Davenport was the father of actors Arthur Rankin and Dorothy Davenport.
\section{Introduction}
Time-delay systems arise in practice for many reasons. For example, delays appear intrinsically in mechanical models such as the vibration absorber (see \cite{OLGAC199493}) or the delayed resonator (see \cite{opac-b1100602}), and neglecting them leads to an over-simplification of the original problem. That is why it is important to have a theory which provides a framework to work with. Indeed, although time-delay systems are a class of dynamical systems widely studied in control theory, time-honored methods such as the root locus are not straightforward to apply, particularly when robust stability criteria are needed.
Three main approaches have been developed to study the stability of such equations. The first one relies on the characteristic equation (see \cite{5687820} and references therein and \cite{BREDA2006305}) and pole location. These techniques give nearly exact stability conditions but suffer from several drawbacks. First of all, as they are based on pole-location approximations, they are not appropriate for uncertain and/or time-varying delay systems. Furthermore, these approaches cannot easily be used for the design of controllers or observers.
The other approaches are based either on the robust framework or on Lyapunov techniques. The robust approach consists in merging the delay uncertainty into an uncertain set and using classical robust analysis such as the Small Gain Theorem (\cite{fridman2008input}), Quadratic Separation (\cite{gouaisbaut2006delay}) or Integral Quadratic Constraints (\cite{kao2007stability}). Techniques based on Lyapunov-Krasovskii functionals use the LMI framework developed in the book by \cite{LMI}. This method enables exponential convergence with a guaranteed decay rate, robust analysis, synthesis of controllers and extension to multiple time-varying delay systems.
Despite these advantages, this approach is rather conservative. The complete Lyapunov-Krasovskii functional is known (\cite{kharitonov2003lyapunov}) but too complex to be solved efficiently, or even studied. A first step is therefore to introduce a simplified functional. Some works (for example by \cite{seuret:hal-01065142}) address how to relax the problem so that the conservatism introduced by the choice of the Lyapunov-Krasovskii functional can be measured. The second step is to use integral inequalities to transform non-manageable terms like $\int_{t-h}^t e^{-2\alpha s} x^{\top}(t+s) R x(t+s)ds$ into expressions suitable for LMIs. This last step is important because there exist powerful and efficient algorithms to find solutions of LMIs in polynomial time. The inequalities commonly used in these two steps are described by \cite{opac-b1100602}, and most of them rely on Jensen's inequality. A substantial number of papers has been dedicated to reducing the conservatism induced by such inequalities. Recently, \cite{wirtinger} introduced a Wirtinger-based inequality, known to be less conservative. The present paper uses this framework to establish exponential convergence with a guaranteed decay rate and to synthesize controllers.
Two approaches have been widely used in the literature to assess the exponential stability. The first one relies on a change of variable $z(t) = e^{\alpha t}x(t)$ and it can be proven that establishing asymptotic stability of $z$ implies an exponential stability of $x$ with a decay rate of $\alpha$ (\cite{tds}). The second one is based on some modified Lyapunov-Krasovskii functionals which incorporate in their structures the exponential rate.
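As a quick numerical illustration of the first approach (our addition, with an arbitrary scalar example not taken from the references), consider $\dot{x} = -2x$ with $\alpha = 1$: the rescaled variable $z(t) = e^{\alpha t}x(t)$ satisfies $\dot{z} = (\alpha - 2)z = -z$, so boundedness of $z$ yields an $\alpha$-exponential estimate on $x$.

```python
import numpy as np

# Scalar sketch of the change of variable z(t) = exp(alpha*t) * x(t):
# for xdot = -2 x and alpha = 1, z obeys zdot = (alpha - 2) z = -z,
# so z stays bounded and x decays at least like exp(-alpha*t).
alpha, a = 1.0, -2.0
t = np.linspace(0.0, 5.0, 501)
x = np.exp(a * t)              # solution of xdot = a x with x(0) = 1
z = np.exp(alpha * t) * x      # rescaled state, here z = exp(-t)

# z is non-increasing (it solves zdot = -z) ...
assert np.all(np.diff(z) <= 1e-12)
# ... hence x(t) <= exp(-alpha*t) * z(0): alpha-exponential decay of x.
assert np.all(x <= np.exp(-alpha * t) * z[0] + 1e-12)
```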
Since one of the first article by \cite{mori1982estimate} on exponential convergence of time-delay systems, several exponential estimates emerged from the literature: \cite{mondie2005exponential}, \cite{lam} or more recently \cite{trinh2016exponential}. But only a few of them used the Wirtinger-based inequality developed by \cite{wirtinger} to help synthesize observers or controllers for a discrete or distributed delay system. The aim of this article is to stabilize a specific class of time-delay systems as described in the problem statement using this inequality.
In Section 2, the problem is stated and some useful lemmas are reminded. Then in Section 3, an extension of exponential stability theorems with a Wirtinger-based inequality is introduced. The general results of the previous section are used for the computation of a feedback gain for a given system in Section 4 while Section 5 is dedicated to the design of an observer-based control. Finally, in the last section, a numerical comparison of efficiency between classical theorems and the one derived in this paper is performed.
\textbf{Notations.} Throughout the paper, $\mathbb{R}^n$ stands for the $n$ dimensional Euclidian space, $\mathbb{R}^{n \times m}$ for the set of all $n \times m$ matrices. $\mathbb{S}^n$ is the subset of $\mathbb{R}^{n \times n}$ of symmetric matrices such that $P \in \mathbb{S}_+^n$ or equivalently $P \succ 0$ denotes a symmetric positive definite matrix. For any square matrices $A$ and $B$, the operations '$\text{He}$' and '$\text{diag}$' are defined as follow: $\text{He}(A) = A + A^{\top}$ and $\text{diag}(A,B) = \left[ \begin{smallmatrix}A & 0\\ 0 & B \end{smallmatrix} \right]$. The notations $I_n$ and $0_{n \times m}$ denote the $n$ by $n$ identity matrix and the null matrix of size $n \times m$. The state variable $x$ can be represented using the Shimanov notation (\cite{kolmanovskii}):
$
x_t: \left\{ \begin{array}{rccl}
& [-h, 0] & \to & \mathbb{R}^n\\
& \tau & \mapsto & x(t+\tau)
\end{array}
\right.
$
\section{Problem Statement}
\subsection{System data}
The system to be controlled is the following one:
\begin{equation}
\left\{
\begin{array}{lcl}
\dot{x}(t) = Ax(t) + Bu(t), & \ & \forall t \geqslant 0, \\
\displaystyle y(t) = C \frac{1}{h} \int_{-h}^0 x_t(s) ds, & \ & \forall t \geqslant 0, \\
x(t) = \phi(t), & \ & \forall t \in [-h, 0],
\end{array}
\right.
\label{eq:sys}
\end{equation}
with $x(t) \in \mathbb{R}^n$ the instantaneous state vector, $h$ the time delay, $\phi$ the initial state function and $A$, $B$, $C$ three matrices of appropriate dimensions. The output is thus not the instantaneous state but its average over a sliding time window $[t-h, t]$, which differs significantly from classical control problems. Numerous measurement tools, in electronics for example, measure an average and not the instantaneous state.
The purpose of this paper is to find a control input $u$ computed only with the output measurement vector $y$ such that System \eqref{eq:sys} is exponentially stable with a decay rate of at least $\alpha \geqslant 0$. First of all, we recall the definition of exponential stability extended to time-delay systems:
\begin{de}
[\cite{Chen200795}] System \eqref{eq:sys} is said to be $\alpha$-stable if there exists $\alpha \geqslant 0$ and $\gamma \geqslant 1$ such that for every solution $x$ of \eqref{eq:sys} with a differentiable initial condition $\phi$ defined on $[-h; 0]$, the following exponential estimate holds:
\begin{equation}
\forall t > 0, \left| x(t) \right| \leqslant \gamma e^{-\alpha t} \left\lVert \phi \right\rVert_W
\label{eq:expoConvergence}
\end{equation}
where
$$\left\lVert \phi \right\rVert_W = \max\{ ||\phi||_h, ||\dot{\phi}||_h\} \text{ and } \displaystyle \left\lVert \phi \right\rVert_h = \sup_{\theta \in [-h, 0]} \left\lVert \phi(\theta) \right\rVert. $$
\end{de}
\begin{remark}
The norm $\lVert \cdot \rVert_W$ is slightly different from the one of \cite{mondie2005exponential}, which does not involve the derivative $\dot{x}$. This issue has also been dealt with by \cite{norm} by introducing the sum rather than the maximum. These definitions are nevertheless equivalent.
\end{remark}
\subsection{Preliminary Results}
We recall two lemmas useful in the sequel. The first lemma, introduced by \cite{wirtinger} proposes an integral inequality which is used in the proof of the main theorem.
\begin{lemma} [Wirtinger-based inequality] For a given matrix $R \in \mathbb{S}^n_+$, the following inequality holds for all continuously differentiable function $x$ in $[t-h, t] \to \mathbb{R}^n$:
\begin{equation*}
\int_{t-h}^t \dot{x}^{\top}(s) R \dot{x}(s) ds \geqslant \frac{1}{h} \xi^{\top}(t) F_2^{\top} \tilde{R} F_2 \xi(t),
\end{equation*}
where
\[
\begin{array}{ccc}
F_2 = \left[ \begin{matrix} I_n & -I_n & 0_n \\ I_n & I_n & -2I_n \end{matrix} \right], & \ \ \ \ \ &
\tilde{R} = \text{diag}\left( R, 3R \right), \\
\end{array} \\
\]
\[
\xi(t) = \left[ \begin{matrix} x^{\top}(t) & \ x^{\top}(t-h) \ & \frac{1}{h} \int_{t-h}^t x^{\top}(s)ds \end{matrix} \right]^{\top}.
\]
\label{sec:wirtinger}
\end{lemma}
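The lemma can be spot-checked numerically on a sample trajectory. The sketch below (our addition; the data $x(s) = \sin 3s$, $t = h = 1$, $R = 1$ are arbitrary) compares both sides of the inequality with a fine trapezoidal quadrature.

```python
import numpy as np

# Numerical check of the Wirtinger-based inequality for n = 1,
# x(s) = sin(3 s) on [t-h, t] with t = 1, h = 1 and R = 1.
t, h, R = 1.0, 1.0, 1.0
s = np.linspace(t - h, t, 100001)
ds = s[1] - s[0]
x = np.sin(3.0 * s)
xdot = 3.0 * np.cos(3.0 * s)

def trap(f):
    # composite trapezoidal rule on the grid s
    return float(np.sum((f[:-1] + f[1:]) * 0.5 * ds))

lhs = trap(xdot * R * xdot)                 # \int xdot^T R xdot ds
xi = np.array([x[-1], x[0], trap(x) / h])   # [x(t), x(t-h), average]
F2 = np.array([[1.0, -1.0,  0.0],
               [1.0,  1.0, -2.0]])
Rt = np.diag([R, 3.0 * R])
rhs = float(xi @ F2.T @ Rt @ F2 @ xi) / h
assert lhs >= rhs  # Lemma 1 holds on this trajectory
```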
The second lemma, called Finsler's lemma, is widely used to cope with non linearities in LMIs.
\begin{lemma} (\cite{Svariable}) \\
For any $Q \in \mathbb{S}^n$ and $M \in \mathbb{R}^{p \times n}$, the three following properties are equivalent:
\begin{enumerate}
\item $x^{\top} Q x \prec 0$ for all $x \in \mathbb{R}^n \text{ such that } Mx = 0$,
\item $\exists Y \in \mathbb{R}^{n \times p}, Q + \text{He} \left( M^{\top} Y \right) \prec 0$,
\item ${M^{\perp}}^{\top} Q M^{\perp} \prec 0$ where $MM^{\perp} = 0$.
\end{enumerate}
\label{sec:finsler}
\end{lemma}
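For intuition, the equivalence between the items of the lemma can be illustrated on a small example (our construction, not from the paper): the matrix $Q$ below is indefinite, yet negative definite on $\ker M$, and an explicit slack variable $Y$ certifies item 2.

```python
import numpy as np

# Spot-check of Finsler's lemma on a 2x2 example.
Q = np.array([[2.0, 3.0],
              [3.0, 2.0]])        # indefinite: eigenvalues 5 and -1
M = np.array([[1.0, 1.0]])        # constraint M x = 0
Mperp = np.array([[1.0], [-1.0]]) # basis of ker M, so M @ Mperp = 0
assert np.allclose(M @ Mperp, 0.0)

# Item 3: Q is negative definite on the kernel of M.
item3 = Mperp.T @ Q @ Mperp       # here the 1x1 matrix [[-2]]
assert np.all(np.linalg.eigvalsh(item3) < 0.0)

# Item 2: one certificate Y making Q + He(M^T Y) negative definite.
Y = np.array([[-3.0, -3.0]])
QY = Q + M.T @ Y + (M.T @ Y).T    # equals [[-4, -3], [-3, -4]]
assert np.all(np.linalg.eigvalsh(QY) < 0.0)
```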
\section{Exponential Stability}
Considering a feedback on System \eqref{eq:sys}, i.e. $u(t) = Ky(t)$, it is possible to transform our system into a more general one:
\begin{equation}
\left\{
\begin{array}{ll}
\displaystyle \dot{x}(t) = Ax(t) + A_d x_t(-h) + A_D\int_{-h}^0 x_t(s) ds, & \forall t \geqslant 0, \\
x(t) = \phi(t), & \forall t \in [-h, 0],
\end{array}
\right.
\label{eq:sysDelayed}
\end{equation}
with $x(t) \in \mathbb{R}^n$ the instantaneous state vector and matrices $A$, $A_d$ and $A_D$ of appropriate dimensions.
Based on the lemmas recalled above, we propose a first exponential stability result for the previous system.
\begin{theo}
Assume that, for given $h > 0$ and $\alpha \geqslant 0$, there exist matrices $P \in \mathbb{S}^{2n}$, $R, S \in \mathbb{S}^n_+$ and $Y \in \mathbb{R}^{n \times 4n}$ and a positive real $\beta_1$ such that the following LMIs are satisfied:
\begin{equation}
\begin{array}{l}
P + \tfrac{e^{-2 \alpha h}}{h} \text{diag}(0_n, S) + \tfrac{4 \alpha^2 h}{e^{2 \alpha h}-2h\alpha-1} \left[ \begin{smallmatrix} h^2R & -hR \\ -hR & R \end{smallmatrix} \right] \ \ \ \ \ \ \\
\hfill - \beta_1 \text{diag}\left( I_n, 0_n \right) \succ 0,
\end{array}
\label{eq:positivity}
\end{equation}
\begin{equation}
\Phi(\alpha, h) + \text{He}\left( F_4^{\top} Y \right) \prec 0,
\end{equation}
with
\begin{equation*}
\begin{array}{rcl}
\Phi(\alpha, h) & = & \text{He}\left( F_1^{\top} P (F_0 + \alpha F_1) \right) + \bar{S} + h^2 F_3^{\top} R F_3 \\
& & - e^{-2 \alpha h} F_2^{\top} \tilde{R} F_2 , \\
\end{array}
\end{equation*}
\begin{equation*}
\begin{array}{ll}
F_0 = \left[ \begin{matrix}0_n & 0_n & I_n & 0_n \\ I_n & -I_n & 0_n & 0_n \end{matrix} \right], &
F_1(h) = \left[ \begin{matrix} I_n & 0_n & 0_n & 0_n \\ 0_n & 0_n & 0_n & hI_n \end{matrix} \right], \\
F_2 = \left[ \begin{matrix} I_n & -I_n & 0_n & 0_n \\ I_n & I_n & 0_n & -2I_n \end{matrix} \right] ,
&
\begin{array}{l}
F_3 = \left[ \begin{matrix} 0_n & 0_n & I_n & 0_n \end{matrix} \right],\\
F_4 = \left[ \begin{matrix} A & A_d & -I_n & hA_D \end{matrix} \right],
\end{array} \\
\tilde{R} = \text{diag} \left( R, 3R \right), \ & \ \bar{S} = \text{diag} \left( S, - e^{-2 \alpha h} S, 0_{2n}\right),
\end{array}
\end{equation*}
then, time-delay system \eqref{eq:sysDelayed} is $\alpha$-exponentially stable i.e.:
\begin{equation*}
|| x(t) || \leqslant \sqrt{\beta_2 \beta_1^{-1}} e^{-\alpha t} ||\phi ||_W,
\end{equation*}
where $\beta_2 = \displaystyle (1 + h^2) \lambda_{max}(P) + h \lambda_{max}(S) + \frac{h^3}{2} \lambda_{max}(R)$.
\label{sec:thm}
\end{theo}
\begin{proof}
This proof is divided into two parts.
{\em Part 1: Stability of system \eqref{eq:sysDelayed}} \
Consider a slightly modified Lyapunov-Krasovskii functional originally proposed by \cite{wirtinger,mondie2005exponential}:
\begin{equation}
\begin{array}{lcl}
V(x_t, \dot{x}_t) & = & \displaystyle \bar{x}^{\top}(t) P \bar{x}(t) + \int_{t-h}^t e^{-2\alpha(t-s)} x^{\top}(s) S x(s) ds \ \ \\
& & \hfill \displaystyle + h\int_{t-h}^t \int_{\theta}^t e^{-2\alpha(t-s)} \dot{x}^{\top}(s) R \dot{x}(s) ds d\theta,
\end{array}
\label{eq:lyap2}
\end{equation}
with the extended state $\bar{x}(t) = \displaystyle \left[ x^{\top}(t) \ \ \int_{t-h}^t x^{\top}(s) ds \right]^{\top}$.
Let us firstly introduce functional $W_{\alpha}$ given by:
\[
W_{\alpha}(x_t, \dot{x}_t) = \dot{V}(x_t, \dot{x}_t) + 2 \alpha V(x_t, \dot{x}_t)
\]
We want to find an LMI condition so that inequality:
\begin{equation}
W_{\alpha}(x_t, \dot{x}_t) < 0,
\label{eq:newLMI}
\end{equation}
is guaranteed for system \eqref{eq:sysDelayed}.
The derivative of functional \eqref{eq:lyap2} along the trajectories of time-delay system \eqref{eq:sysDelayed} leads to:
\begin{equation}
\begin{array}{lcl}
W_{\alpha}(x_t, \dot{x}_t) & \leqslant &\displaystyle \dot{\bar{x}}^{\top}(t) P \bar{x}(t) + \bar{x}^{\top}(t) P \dot{\bar{x}}(t) + 2 \alpha \bar{x}^{\top}(t) P \bar{x}(t) \\
& &\displaystyle + x^{\top}(t) S x(t) - e^{-2 \alpha h} x^{\top}(t-h) S x(t-h) \\
& &\displaystyle + h^2 \dot{x}^{\top}(t) R \dot{x}(t) - h e^{-2\alpha h} \int_{t-h}^t \hspace{-0.2cm} \dot{x}^{\top}(s) R \dot{x}(s) ds
\end{array}
\label{eq:inequality1}
\end{equation}
Using the extended state variable
\[
\xi(t) = \left[ \begin{array}{cccc}\displaystyle x^{\top}(t) \ \ \ x^{\top}(t-h) \ \ \ \dot{x}^{\top}(t) \ \ \ \frac{1}{h} \int_{t-h}^t x^{\top}(s) ds \end{array}\right]^{\top},
\]
and the matrices defined in this theorem, inequality \eqref{eq:inequality1} can be rewritten as:
\begin{equation*}
\begin{array}{ll}
W_{\alpha}(x_t, \dot{x}_t) \leqslant & \displaystyle\xi^{\top}(t) \left[ \text{He}\left( F_1^{\top} P (F_0 + \alpha F_1) \right) + \bar{S} \right. \\
& \displaystyle \left. +h^2 F_3^{\top} R F_3 \right] \xi(t) - h e^{-2\alpha h} \hspace{-0.1cm}\int_{t-h}^t \hspace{-0.2cm} \dot{x}^{\top}(s) R \dot{x}(s) ds.
\end{array}
\label{eq:finsler1}
\end{equation*}
Then, using the integral inequality from Lemma \ref{sec:wirtinger}, we obtain:
\begin{equation*}
\begin{array}{ll}
W_{\alpha}(x_t, \dot{x}_t) \leqslant & \displaystyle \xi^{\top}(t) \left[ \text{He}\left( F_1^{\top} P (F_0 + \alpha F_1) \right) + \bar{S} \right. \\
& \displaystyle \left. + h^2 F_3^{\top} R F_3 - e^{-2 \alpha h} F_2^{\top} \tilde{R} F_2 \right] \xi(t),
\end{array}
\label{eq:finsler2}
\end{equation*}
where $\xi$ satisfies a linear constraint defined by $F_4 \xi = 0$. Therefore, using Lemma \ref{sec:finsler}, $\xi^{\top} \Phi(\alpha, h) \xi \leqslant 0$ with $F_4 \xi = 0$, inequality \eqref{eq:newLMI} is satisfied if the following LMI is also satisfied:
\begin{equation}
\displaystyle \exists Y \in \mathbb{R}^{n \times 4n}, \Phi(\alpha, h) + \text{He}\left( F_4^{\top} Y \right) \prec 0,
\label{eq:finsler3}
\end{equation}
which concludes the first part of the proof.
{\em Part 2: Exponential stability.} The proof of exponential stability is based on inequality \eqref{eq:finsler3}. Indeed, as noticed by \cite{mondie2005exponential}, inequality \eqref{eq:finsler3} leads to:
\begin{equation}
V(x_t, \dot{x}_t) \leqslant e^{-2\alpha t} V(\phi, \dot{\phi}),
\label{eq:expConv}
\end{equation}
To ensure the exponential stability of system \eqref{eq:sysDelayed}, one should find strictly positive reals $\beta_1$ and $\beta_2$ such that:
\begin{equation}
\beta_1 || x(t) ||^2 \leqslant V(x_t, \dot{x}_t) \leqslant \beta_2 ||x_t||^2_W
\label{eq:lyapIneq}
\end{equation}
A lower bound for equation \eqref{eq:lyap2} can be derived using Jensen's inequality and the inequality derived in Appendix \ref{sec:app1}. The Bessel-like inequality developed in Appendix \ref{sec:app1} is similar to Jensen's inequality but handles the exponential terms.
\begin{equation*}
\begin{array}{lcl}
V(x_t, \dot{x}_t) & \geqslant & \displaystyle \bar{x}^{\top}(t) P \bar{x}(t) \\
& & + h \int_{t-h}^t \int_{\theta}^t e^{-2\alpha(t-s)} \dot{x}^{\top}(s) R \dot{x}(s) ds d\theta\\
& & \displaystyle + \frac{e^{-2 \alpha h}}{h} \left( \int_{t-h}^t x^{\top}(s) ds \right) S \left( \int_{t-h}^t x(s) ds \right).\\
\end{array}
\end{equation*}
Then, by Jensen's inequality, we have:
\begin{equation*}
\begin{array}{lcl}
V(x_t, \dot{x}_t) & \displaystyle \geqslant & \bar{x}^{\top}(t) \left( P + \frac{e^{-2 \alpha h}}{h} \text{diag}(0, S) \right. \\
& & \hfill \displaystyle + \tfrac{4 \alpha^2 h}{e^{2 \alpha h}-2h\alpha-1} \left[ \begin{smallmatrix} h^2R & -hR \\ -hR & R \end{smallmatrix} \right] \\
& & \hfill \left. - \beta_1 \text{diag}\left( I_n, 0_n \right) \vphantom{\frac{e^{-2 \alpha h}}{h}} \right) \bar{x}(t) + \beta_1 || x(t) ||^2. \\
\end{array}
\end{equation*}
Assuming LMI \eqref{eq:positivity} holds, then the previous equation becomes:
\begin{equation}
V(x_t, \dot{x}_t) \geqslant \beta_1 || x(t) ||^2.
\end{equation}
Using equation \eqref{eq:expConv} and \eqref{eq:lyapIneq}, one can get:
\begin{equation*}
\beta_1 || x(t) ||^2 \leqslant V(x_t, \dot{x}_t) \leqslant e^{-2\alpha t} V(\phi, \dot{\phi}).
\end{equation*}
Calculating $V(\phi, \dot{\phi})$, one can get the following upper bound:
\begin{equation*}
\begin{array}{lcl}
V(\phi, \dot{\phi}) & = & \displaystyle \bar{\phi}^{\top}(0) P \bar{\phi}(0) + \int_{-h}^0 e^{2 \alpha s} \phi^{\top}(s) S \phi(s) ds \\
& & \displaystyle + h \int_{-h}^0 \int_{\theta}^0 e^{2 \alpha s} \dot{\phi}^{\top}(s) R \dot{\phi}(s) ds d\theta, \\
\end{array}
\end{equation*}
with $\bar{\phi}(0) = \displaystyle \left[ \phi^{\top}(0) \ \ \int_{-h}^0 \phi^{\top}(s) ds \right]^{\top}$. We get:
\begin{equation*}
\begin{array}{ccl}
V(\phi, \dot{\phi}) & \leqslant & \displaystyle \left( (1 + h^2) \lambda_{max}(P) + h \lambda_{max}(S) \right) ||\phi||_h^2 \\
&&+ \frac{h^3}{2} \lambda_{max}(R) ||\dot{\phi}||_h^2\\
& \leqslant & \displaystyle \beta_2 ||\phi ||^2_W, \\
\end{array}
\end{equation*}
with
\[
\beta_2 = \displaystyle (1+ h^2) \lambda_{max}(P) + h \lambda_{max}(S) + \frac{h^3}{2} \lambda_{max}(R).
\]
Using the previous equation and \eqref{eq:lyapIneq}, one can get:
\begin{equation*}
\beta_1 || x(t) ||^2 \leqslant V(x_t, \dot{x}_t) \leqslant e^{-2\alpha t} V(\phi, \dot{\phi}) \leqslant \beta_2 e^{-2\alpha t} ||\phi ||^2_W
\end{equation*}
which is equivalent to:
\begin{equation*}
|| x(t) || \leqslant \underbrace{\sqrt{\beta_2 \beta_1^{-1}}}_{\gamma} e^{-\alpha t} ||\phi ||_W,
\end{equation*}
and that concludes the proof.
\end{proof}
\begin{remark}
The lower bound $\beta_1$ has been stated explicitly so that an optimization of $\gamma$ is possible.
\end{remark}
\begin{remark}
By fixing $\alpha = 0$, one can recover the case of asymptotic stability developed by \cite{wirtinger}.
\end{remark}
\begin{remark}
In light of item 3 of Lemma \ref{sec:finsler}, using slack variables is not mandatory and brings nothing for analysis purposes. Nevertheless, we will show that it is suitable for design purposes.
\end{remark}
\begin{coro}
Assume that, for given $h > 0$ and $\varepsilon_1, \varepsilon_2, \varepsilon_3, \varepsilon_4$ in $\mathbb{R}$, $\alpha \geqslant 0$, there exist matrices $P \in \mathbb{S}^{2n}$, $R, S \in \mathbb{S}^n_+$ and $Z \in \mathbb{R}^{n \times n}$ and a positive real $\beta_1$ such that the positivity LMI \eqref{eq:positivity} and the following LMI are satisfied:
\begin{equation}
\Phi(\alpha, h) + \text{He}\left( F_4^{\top} Z F_{\varepsilon} \right) \prec 0, \\
\label{eq:LMIcor}
\end{equation}
with
\begin{equation*}
F_{\varepsilon} = \left[ \begin{matrix} \varepsilon_1 I_n & \varepsilon_2 I_n & \varepsilon_3 I_n & \varepsilon_4 I_n \end{matrix} \right],
\label{eq:cor}
\end{equation*}
Then system \eqref{eq:sysDelayed} is $\alpha$-stable and $Z$ is not singular.
\label{sec:cor}
\end{coro}
\begin{proof}
Applying Theorem \ref{sec:thm} with $Y = ZF_{\varepsilon}$ leads to this result. The $(3,3)$ diagonal block of LMI \eqref{eq:LMIcor} reads $h^2 R - \varepsilon_3 \left( Z^{\top} + Z \right) \prec 0$, which implies $\varepsilon_3 \neq 0$ and that $Z$ is not singular. This choice constrains $Y$, so the corollary is not equivalent to the previous theorem. Finsler's lemma can be seen as assessing the stability of two systems at the same time: considering $Y = ZF_{\varepsilon}$ and applying Finsler's lemma on equation \eqref{eq:finsler3} with $F_4$ the vector of slack variables leads to the stability of another system:
\[
\varepsilon_3 \dot{x}(t) = -\varepsilon_1 x(t) - \varepsilon_2 x(t-h) - \varepsilon_4 \frac{1}{h} \int_{-h}^0 x(t+s) ds
\]
There are then two possible choices for $F_{\varepsilon}$:
\begin{enumerate}
\item $F_{\varepsilon \not = 1}$ is $\varepsilon_3 = \varepsilon_1 = \varepsilon_4 = 1$ and $\varepsilon_2 = 0$
\item$F_{\varepsilon = 1}$ is $\varepsilon_3 = \varepsilon_1 = \varepsilon_4 = \varepsilon_2 = 1$
\end{enumerate}
The first choice treats the delayed term $x(t-h)$ as a perturbation: removing its effect, i.e. setting $\varepsilon_2 = 0$, may help stabilize the system. The other choice considers that the delayed term helps the stabilization of the system. The two choices are compared in the numerical simulations later on.
\end{proof}
\section{Control Design}
In this part, the problem of designing a controller for time-delay system \eqref{eq:sys} is discussed, i.e. the controller gain $K$ becomes a variable of the LMI. Theorem \ref{sec:thm} would lead to a non-linear matrix inequality, while Corollary \ref{sec:cor} avoids this at the price of a stronger constraint on the structure of the slack variables.
Considering the average of the whole state $x$ as the output ($C = I_{n}$), the system can be written in the following, more useful form:
\begin{equation}
\left\{
\begin{array}{ll}
\displaystyle \dot{x}(t) = Ax(t) + \frac{1}{h} B K \int_{t-h}^t x(s) ds, & \ \ \forall t \geqslant 0, \\
x(t) = \phi(t), & \ \ \forall t \in [-h; 0],
\end{array}
\right.
\label{eq:sys2}
\end{equation}
where $\phi$ is the initial condition and $x$ is the state.
The system is of the same form as the one defined in \eqref{eq:sysDelayed} with $A_D = \frac{1}{h}BK$ and $A_d = 0$. One can notice that $A_D$ depends on $K$, which is now a decision variable. The LMI framework cannot be applied directly because the problem is not linear in the variable $K$. The feedback gain $K$ for a given $h$ can be found using the following theorem:
\begin{theo}
\label{sec:thmFeedback}
Assume that, for given $h > 0$, $\varepsilon_1, \varepsilon_2, \varepsilon_3, \varepsilon_4 \in \mathbb{R}$ and $\alpha \geqslant 0$, there exist matrices $P \in \mathbb{S}^{2n}_+$, $R, S \in \mathbb{S}^n_+$, $X \in \mathbb{R}^{n \times n}$ invertible and a positive real $\beta_1$ such that the positivity LMI \eqref{eq:positivity} and the following LMI are satisfied:
\begin{equation}
\Phi(\alpha, h) + \text{He}\left( \left( N \tilde{X} + \left[ \begin{array}{cccc} 0_n & 0_n & 0_n & B\bar{K} \end{array} \right] \right)^{\top} F_{\varepsilon} \right) \prec 0, \\
\end{equation}
with the same notations as in Corollary \ref{sec:cor} but:
\[
\begin{array}{l}
N = \left[ \begin{array}{cccc} A & 0_n & -I_n & 0_n \end{array} \right], \\
\tilde{X} = \text{diag}(X,X,X,X), \\
\end{array}
\]
then time-delay system \eqref{eq:sys2} is $\alpha$-stable with the feedback gain $K = \bar{K} X^{-1}$.
\end{theo}
\begin{proof}
Since $Z$ is non-singular in the proof of Corollary \ref{sec:cor}, let us introduce $X = Z^{-1}$ and $F_4 = N + \left[ \begin{array}{cccc} 0_n & 0_n & 0_n & BK \end{array} \right]$ so that $F_4 \xi = 0$ is still valid.
Multiplying on the left by $\tilde{X}^{\top}$ and on the right by $\tilde{X}$, equation \eqref{eq:LMIcor} is equivalent to the following one:
\begin{equation}
\begin{array}{rl}
\left( F_0 \tilde{X} \right)^{\top} P F_1(h) \tilde{X} + \left(F_1(h) \tilde{X} \right)^{\top} P F_0 \tilde{X} &\\
+ 2 \left( \alpha F_1 \tilde{X} \right)^{\top} P F_1 \tilde{X} + \tilde{X}^{\top} \bar{S} \tilde{X} &\\
- e^{-2 \alpha h} \left( F_2 \tilde{X} \right)^{\top} \tilde{R} F_2 \tilde{X} + h^2 \left( F_3 \tilde{X} \right) ^{\top} R F_3 \tilde{X} & \\
+ \text{He}\left( \tilde{X}^{\top} F_4^{\top} X^{-1} F_{\varepsilon} \tilde{X} \right) & \prec 0.
\end{array}
\label{eq:LMIlemma}
\end{equation}
Noticing that $F_0 \tilde{X} = \bar{X} F_0$, $F_1 \tilde{X} = \bar{X} F_1$, $F_3 \tilde{X} = X F_3$ and $F_{\varepsilon} \tilde{X} = X F_{\varepsilon}$ with $\bar{X} = \text{diag}(X,X)$, equation \eqref{eq:LMIlemma} becomes:
\begin{equation*}
\begin{array}{rl}
F_0^{\top} P_2 F_1(h) + F_1^{\top}(h) P_2 F_0 + 2 \alpha F_1^{\top} P_2 F_1 & \\
- e^{-2 \alpha h} F_2^{\top} \tilde{R}_3 F_2 + h^2 F_3^{\top} R_3 F_3 & \\
+ \bar{S}_2 + \text{He}\left( \left( N \tilde{X} + \left[ \begin{array}{cccc} 0_n & 0_n & 0_n & B\bar{K} \end{array} \right] \right)^{\top} F_{\varepsilon} \right) & \prec 0,
\end{array}
\end{equation*}
with $\bar{K} = K X$, $P_2 = \bar{X}^{\top} P \bar{X} \succ 0$, $S_2 = X^{\top} S X$ and $R_3 = X^{\top} R X$. As $X$ is invertible, the positiveness of $P$ is equivalent to that of $P_2$, which concludes the proof.
\end{proof}
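As a complementary sanity check of the design problem, closed loop \eqref{eq:sys2} can be simulated directly. The sketch below (our addition; the scalar data $A = 0.2$, $B = 1$, $K = -1$, $h = 0.5$ are illustrative and not computed from the LMIs) integrates the distributed-delay loop with a forward-Euler scheme and a sliding history buffer for the running average.

```python
import numpy as np

# Forward-Euler simulation of
#   xdot(t) = A x(t) + (1/h) B K * integral_{t-h}^t x(s) ds
# with illustrative scalar data (not from the paper): A = 0.2
# (open loop unstable), B = 1, K = -1, h = 0.5, phi = 1 on [-h, 0].
A, B, K, h = 0.2, 1.0, -1.0, 0.5
dt = 1e-3
nh = int(round(h / dt))        # samples in the sliding window
buf = np.ones(nh)              # history of x over [t-h, t), phi = 1
x = 1.0
for _ in range(int(round(10.0 / dt))):   # simulate on [0, 10]
    avg = buf.mean()           # (1/h) * integral of x over the window
    x = x + dt * (A * x + B * K * avg)
    buf = np.roll(buf, -1)
    buf[-1] = x

# open loop grows like exp(0.2 t); the averaged feedback stabilizes it
assert abs(x) < 0.05
```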
\section{Observer-based Control}
Based on the preliminary section, we aim at developing an observer-based controller for time-delay system \eqref{eq:sys}.
Following the same procedure as the one described in \cite{glad2000control} for a linear time-invariant system, the estimate of $x$ is denoted $\hat{x}$ and the estimation error $\epsilon = x - \hat{x}$, so that:
\begin{subequations}
\begin{align}
\dot{\hat{x}} & = A \hat{x} + Bu + L \left( y - \frac{1}{h} C \int_{t-h}^t \hat{x}(s) ds \right) \label{eq:xHat}, \\
\dot{\epsilon} & = A \epsilon - \frac{1}{h} L C \int_{t-h}^t \epsilon (s) ds \label{eq:obsStab},
\end{align}
\label{eq:observer}
\end{subequations}
with $L$ an $n \times p$ matrix, the other matrices being the same as before. The stability of system \eqref{eq:obsStab} leads to the convergence of $\hat{x}$ to $x$. This observer has the same structure as a Kalman filter for LTI systems, but adapted to system \eqref{eq:sys}.
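A quick way to see the error dynamics \eqref{eq:obsStab} at work is to simulate them. The sketch below uses an illustrative scalar example (the values $a = 0.2$, $c = 1$, $L = 1$, $h = 0.5$ are assumptions for this sketch, not taken from the paper) with a forward-Euler discretization of the distributed-delay term:

```python
import numpy as np

# Scalar version of eps_dot = a*eps - (L*c/h) * int_{t-h}^{t} eps(s) ds,
# with illustrative values (not from the paper). The delayed integral is
# approximated by a rectangle rule over the stored history; while t < h,
# only the partial history is available and is used as-is.
a, c, L, h = 0.2, 1.0, 1.0, 0.5
dt = 1e-3
steps_h = int(round(h / dt))        # samples spanning the delay window
n_steps = int(round(20.0 / dt))     # simulate 20 time units

eps = np.empty(n_steps + 1)
eps[0] = 1.0                        # initial estimation error
for k in range(n_steps):
    lo = max(0, k - steps_h)
    integral = eps[lo:k + 1].sum() * dt
    eps[k + 1] = eps[k] + dt * (a * eps[k] - (L * c / h) * integral)

print(abs(eps[-1]) < 1e-3)          # the error has decayed towards zero
```

With these values, the quasi-static approximation $\int_{t-h}^t \epsilon(s)\,ds \approx h\epsilon$ gives $\dot{\epsilon} \approx (a - Lc)\epsilon = -0.8\,\epsilon$, consistent with the observed decay.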
\subsection{Convergence of the observer}
The following theorem holds for the error system \eqref{eq:observer}:
\begin{theo} \label{th:observerStability}
Assume that, for given $h > 0$, $\alpha \geqslant 0$, $\varepsilon_1, \varepsilon_2, \varepsilon_3, \varepsilon_4 \in \mathbb{R}$, there exist a matrix $P \in \mathbb{S}^{2n}$, matrices $R, S \in \mathbb{S}^{2n}_+$, an $n \times n$ invertible matrix $Z$, an $n \times p$ matrix denoted $\bar{L}$ and $\beta_1 > 0$ such that LMI \eqref{eq:positivity} and the following LMI are satisfied:
\begin{equation}
\Phi(\alpha, h) + \text{He}\left( N^{\top} Z F_{\varepsilon} + \left[ \begin{array}{cccc} 0_n & 0_n & 0_n & -\bar{L} C \end{array} \right]^{\top} F_{\varepsilon} \right) \prec 0,
\label{eq:lmiobsv}
\end{equation}
with the same notations as for Corollary \ref{sec:cor} and Theorem \ref{sec:thmFeedback} but:
\[
N = \left[ \begin{array}{cccc} A & 0_n & -I_n & 0_n \end{array} \right],
\]
then the error $\epsilon$ defined in \eqref{eq:obsStab} is $\alpha$-stable with the gain $L = Z^{-\top} \bar{L}$. That means $\hat{x}$ in \eqref{eq:xHat} converges exponentially to the instantaneous $x$.
\end{theo}
\begin{proof}
Starting from equation \eqref{eq:LMIcor} in Corollary \ref{sec:cor} with $F_4 = N + \left[ 0_{n, 3n} \ \ -LC\right]$, so that $F_4 \xi = 0$ still holds, we get:
\[
\Phi(\alpha, h) + \text{He}\left( \left( N + \left[ 0_{n, 3n} \ \ -LC\right] \right)^{\top} Z F_{\varepsilon} \right) \prec 0
\]
which leads to LMI \eqref{eq:lmiobsv}
with $\bar{L} = Z^{\top} L$, so that $L = Z^{-\top} \bar{L}$, which concludes the proof.
\end{proof}
\subsection{Feedback from reconstructed states}
Using equations \eqref{eq:observer} and $\hat{x} = x - \epsilon$, the original system can be transformed into:
\begin{equation}
\left\{
\begin{array}{l}
\dot{x}(t) = (A-BK) x(t) + BK \epsilon(t), \\
\displaystyle \dot{\epsilon}(t) = A \epsilon(t) - \frac{1}{h} LC \int_{t-h}^t \epsilon(s) ds, \\
u(t) = -K\hat{x}(t).
\end{array}
\right.
\label{eq:observerBased}
\end{equation}
Denoting $X(t) = \left[ x^{\top}(t) \ \ \epsilon^{\top}(t) \right]^{\top}$ leads to:
\begin{equation}
\dot{X}(t) = \left[ \begin{matrix} A-BK & BK \\ 0_n & A \end{matrix} \right] X(t) + \left[ \begin{matrix} 0_n & 0_n \\ 0_n & -\frac{1}{h} LC \end{matrix} \right] \int_{t-h}^t X(s) ds.
\label{eq:reconstructedFeedback}
\end{equation}
The following proposition gives a sufficient condition which ensures the stability of the closed loop \eqref{eq:observerBased}.
\begin{proposition}
\textbf{(Separation Principle)} The stability of the system using feedback from reconstructed states is ensured if the observer is stable and if there exists $K$ such that $A-BK$ only has eigenvalues with strictly negative real parts.
\end{proposition}
\begin{proof}
The characteristic matrix of equation \eqref{eq:reconstructedFeedback} is:
\begin{equation*}
\Delta(s) = \left[ \begin{matrix} sI_n - A+BK & -BK \\ 0_n & sI_n - A + \frac{1 - e^{-hs}}{hs} LC \end{matrix} \right]
\end{equation*}
Its characteristic equation is:
\begin{equation*}
\text{det}\left( \Delta(s) \right) = \text{det}(sI_n - A + BK) \text{det} \left( sI_n - A + \tfrac{1 - e^{-hs}}{hs} LC \right)
\end{equation*}
Using $L$ as defined by Theorem \ref{th:observerStability}, and using Theorem 1.5 proposed by \cite{opac-b1100602}, $\text{det} \left( sI_n - A + \frac{1 - e^{-hs}}{hs} LC \right) = 0$ only has roots with strictly negative real parts, and system \eqref{eq:reconstructedFeedback} is stable if $A-BK$ only has eigenvalues with strictly negative real parts.
\end{proof}
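The factorization of $\text{det}(\Delta(s))$ used above rests on the identity that the determinant of a block upper-triangular matrix is the product of the determinants of its diagonal blocks, which is why the closed-loop spectrum splits into a controller part and an observer part. A numerical sketch of this identity (random matrices, not from the paper):

```python
import numpy as np

# det([[M1, Q], [0, M2]]) = det(M1) * det(M2)
rng = np.random.default_rng(1)
n = 2
M1 = rng.standard_normal((n, n))
M2 = rng.standard_normal((n, n))
Q = rng.standard_normal((n, n))

Delta = np.block([[M1, Q],
                  [np.zeros((n, n)), M2]])
lhs = np.linalg.det(Delta)
rhs = np.linalg.det(M1) * np.linalg.det(M2)
print(np.isclose(lhs, rhs))  # True
```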
\section{Examples and Comparisons}
\subsection{Exponential convergence theorems}
\subsubsection{Example 1:}
Theorem \ref{sec:thm} and Corollary \ref{sec:cor} can ensure stability of system \eqref{eq:sysDelayed} for a given delay. Their efficiency can be compared with the theoretical bounds of \cite{Chen200795} on system \eqref{eq:sysDelayed} with:
\begin{equation}
A = \left[ \begin{matrix} 0.2 & 0 \\ 0.2 & 0.1 \end{matrix} \right], \quad
A_D = \left[ \begin{matrix} -1 & 0 \\ -1 & -1 \end{matrix} \right] \quad \text{and} \quad
A_d = 0_2.
\label{eq:system1}
\end{equation}
Table \ref{tab:stab1} shows a comparison of the upper and lower bounds on $h$ leading to a stable system using different theorems, obtained with YALMIP by \cite{1393890}.
\begin{table}[h]
\centering
\begin{tabular}{c|ccccc}
& EV & Th\ref{sec:thm} & Th\ref{sec:thm} & Cor\ref{sec:cor}${}_{\varepsilon=1}$ & Cor\ref{sec:cor}${}_{\varepsilon \not =1}$ \\
\hline
$\alpha$ & $0$ & $0$ & $0.5$ & $0$ & $0$ \\
\hline
$h_{min}$ & $0.2$ & $0.2001 $ & $0.6370$ & $0.2002$ & $0.2001$ \\
$h_{max}$ & $2.04$ & $1.9419$ & $1.0059$ & $1.8391$& $1.9108$
\end{tabular}
\caption{Upper and lower bound for the delay for the system \eqref{eq:sys2} and a given decay-rate}
\label{tab:stab1}
\vspace{-0.6cm}
\end{table}
EV stands for eigenvalue analysis; $h_{min}$ is the lower bound of the interval for asymptotic stability, while $h_{max}$ is the upper one. Results of Theorem \ref{sec:thm} are reported in Th\ref{sec:thm} for two different choices of $\alpha$. For $\alpha = 0$, this is equivalent to Theorem 6 derived by \cite{wirtinger}. Cor\ref{sec:cor}${}_{\varepsilon=1}$ stands for Corollary \ref{sec:cor} in the case where all the $\varepsilon_i$ equal $1$, while Cor\ref{sec:cor}${}_{\varepsilon \not =1}$ refers to $\varepsilon_1 = \varepsilon_3 = \varepsilon_4 = 1$ and $\varepsilon_2 = 0$.
On this numerical example, the efficiency of the Wirtinger-based inequality can be seen by comparing the first and the second columns. Setting $\alpha$ different from $0$ is a very restrictive condition for convergence: the range of feasible $h$ for $\alpha = 0.5$ is four times shorter than the one for asymptotic convergence. The use of a structure for $Y$ leads to poorer results, as expected. The choice $\varepsilon_1 = \varepsilon_3 = \varepsilon_4 = 1$ and $\varepsilon_2 = 0$ is used in the examples from now on because, in the examples presented in this article, it seems to give better results. As $\varepsilon_2$ is related to $A_d$, it is logical to set it to $0$.
Few theorems directly deal with distributed-delay systems, so Figure \ref{fig:h_alpha} only compares the efficiency of Theorem \ref{sec:thm} and of Corollary \ref{sec:cor} (with the choices of $\varepsilon$ explained in the previous paragraph) against a pseudo-spectral analysis conducted with \cite{freq}. The gap between the pseudo-spectral analysis and Theorem \ref{sec:thm} is a factor of nearly $2.5$ on the maximum $\alpha$ for a given $h$. Nevertheless, for small $h$ and small $\alpha$, the approximation is good and fits the pseudo-spectral curve. Possible explanations lie in the difference introduced in \eqref{eq:inequality1} and in the choice of the Lyapunov-Krasovskii functional \eqref{eq:lyap2}. The extension to Corollary \ref{sec:cor} introduces more conservatism, and the choice of $\varepsilon$ has to be made carefully because it can affect the stability assessment significantly.
\begin{figure}
\centering
\includegraphics[width=9cm]{h_alpha}
\vspace{-0.8cm}
\caption{Evolution of the decay-rate depending on the delay with Theorem \ref{sec:thm} and Corollary \ref{sec:cor} for system \eqref{eq:system1}.}
\label{fig:h_alpha}
\end{figure}
\subsubsection{Example 2:}
To be compared with other results of the literature, another system with a discrete delay only is considered:
\begin{equation}
A = \left[ \begin{matrix} -3 & -2 \\ 1 & 0 \end{matrix} \right], \quad
A_d = \left[ \begin{matrix} -0.5 & 0.1 \\ 0.3 & 0 \end{matrix} \right] \quad \text{and} \quad
A_D = 0_2.
\label{eq:system2}
\end{equation}
Results are shown in Figure \ref{fig:h_alpha2} using Theorem \ref{sec:thm}, Corollary \ref{sec:cor}, the results of \cite{mondie2005exponential} (denoted Mondie in the legend), those of \cite{lam} (denoted Xu), and a stability assessment using a pseudo-spectral approach (\cite{freq}).
First of all, Theorem \ref{sec:thm} leads to good results and fits the shape of the maximum $\alpha$. The stability theorem provided by \cite{lam} gives similar results, but a bit closer to the real boundary. These two theorems give a precise estimation at small $h$, which is not the case for \cite{mondie2005exponential}. Another important conclusion is the conservatism of Corollary \ref{sec:cor} compared to the other theorems for larger $h$: its curves decrease significantly faster. Nevertheless, the main interest of this corollary compared to Theorem \ref{sec:thm} is the possibility of designing controller or observer gains.
\begin{figure}
\centering
\includegraphics[width=9cm]{h_alpha2}
\vspace{-0.8cm}
\caption{Evolution of the decay-rate depending on the delay with different theorems for system \eqref{eq:system2}.}
\label{fig:h_alpha2}
\end{figure}
\subsection{Controller design}
Let system \eqref{eq:sys2} be defined by the matrices:
\[
A = \left[ \begin{matrix} 0.2 & 0 \\ 0.2 & 0.1 \end{matrix} \right] \quad \text{and} \quad
B = \left[ \begin{matrix} -1 & 0 \\ -1 & -1 \end{matrix} \right].
\]
This system is the same as time-delay system \eqref{eq:system1}, and it is not stable without feedback for $h = 0$.
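The open-loop instability at $h = 0$ can be read off the spectrum of $A$; since $A$ is lower triangular, its eigenvalues are simply its diagonal entries:

```python
import numpy as np

# Open-loop A of the controller-design example: lower triangular, so its
# eigenvalues are the diagonal entries 0.2 and 0.1 (both in the right
# half-plane), hence x_dot = A x is unstable without feedback.
A = np.array([[0.2, 0.0],
              [0.2, 0.1]])
eigs = np.linalg.eigvals(A)
print(sorted(eigs.real))  # both eigenvalues positive -> unstable
```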
\begin{table}[h]
\centering
\begin{tabular}{c|ccc}
& Th\ref{sec:thmFeedback}${}_{\varepsilon \not = 1}$ & Th\ref{sec:thmFeedback}${}_{\varepsilon \not =1}$ & Th\ref{sec:thmFeedback}${}_{\varepsilon \not =1}$ \\
\hline
$\alpha$ & $0$ & $0.5$ & $1$ \\
\hline
$h_{min}$ & $0$ & $0$ & $0$ \\
$h_{max}$ & $2.5189$ & $0.8688$ & $0.5479$
\end{tabular}
\caption{Upper and lower bound for the delay for the stabilized system \eqref{eq:sys2} and a given decay-rate}
\label{tab:controller}
\vspace{-0.7cm}
\end{table}
The simulations have been carried out with Matlab and the SDPT3 solver through the YALMIP toolbox\footnote{The codes are available at \\ \textit{https://homepages.laas.fr/mbarreau/drupal/content/publications}.}.
In Table \ref{tab:controller}, the lower bound ($h_{min}$) and the upper bound ($h_{max}$) of the delay for which there exists a matrix $K$ such that the closed-loop system is stable are summarized for different values of $\alpha$. The range of feasible delays shrinks as the decay-rate increases. The lower bound of $h$ for the stabilization problem is $0$ in the examples studied, which leads to the following assumption: if there exists a controller gain $K$ for a given $h > 0$ and $\alpha \geqslant 0$, then system \eqref{eq:sys2} is controllable for $h=0$.
For the observer-based control, the system to be studied has the same $A$ matrix as before and $C = I_2$. The same $h_{max}$ and $h_{min}$ are obtained using Theorem \ref{th:observerStability} and Theorem \ref{sec:thmFeedback} for this example, and the same conclusions can be drawn.
\section{Conclusion}
In this paper, we have provided a set of LMIs to assess the exponential convergence of time-delay systems using the Wirtinger-based inequality, and we have shown a performance comparable with existing theorems. The distinguishing feature of the main result is its use for the stabilization and observation of a special class of time-delay systems. An extension to non-linear systems is not straightforward but should be considered. Further work will improve the efficiency of the control and the bound for the exponential estimate by using Bessel-based inequalities. Another possible research direction is proving the assumption made in the last part, as well as a robustness study with respect to the unknown parameter $h$, for example.
https://mmajunkie.usatoday.com/2012/02/will-nick-diazs-retirement-stick-following-ufc-143-ufc-boss-not-so-sure
Will Nick Diaz's retirement stick? Following UFC 143, UFC boss not so sure
February 5, 2012 2:30 pm ET
LAS VEGAS – UFC president Dana White understands why Nick Diaz is upset, even if he doesn't agree with the fighter's belief that he won Saturday's UFC 143 main event.
He also understands if Diaz goes through with an announced retirement, even if he thinks it'd be financially foolish to do so.
Quite simply, White has given up on trying to predict the 28-year-old's often-erratic behavior.
"You never know with Nick Diaz," White said. "You never know. I think he's just upset right now, and I think he's emotional, but who knows?"
Diaz (27-8 MMA, 7-5 UFC) suffered a unanimous-decision loss to Condit (28-5 MMA, 5-1 UFC) at UFC 143. Saturday's pay-per-view fight, which took place at Las Vegas' Mandalay Bay Events Center, saw Condit implement the perfect game plan for defeating one of MMA's most effective and relentless fight styles. Condit stuck, moved, struck and reset to frustrate Diaz over the five-round fight, which earned Condit the UFC's interim welterweight title and a future unification bout with recovering titleholder Georges St-Pierre.
Soon after the scores were read – Condit earned the victory via 48-47, 49-46 and 49-46 scores – the fiery Diaz praised his opponent but didn't hide his contempt for the situation.
"You guys pay me a [expletive] load of money, but I don't think I'm getting enough to keep going on," he said. "I don't need this [expletive]. I pushed this guy backward, and he ran from me the whole fight. He ran the whole fight.
"I landed the harder shots. He ran the whole time. He kicked me in the leg with little baby leg kicks the whole fight. That's the way [you] win in here, so I don't want to play this game no more."
According to FightMetric, Condit outlanded Diaz 159-117 (including 151-105 in significant strikes). Diaz had a slight edge with head and body shots, but Condit's low kicks – he outlanded Diaz 68-6 in that department – surely played a large part in the final scores.
The loss snapped Diaz's 11-fight win streak and was his first defeat since a contentious TKO defeat (due to facial cuts) to K.J. Noons more than four years ago in EliteXC. After subsequently emerging as Strikeforce's dominant longtime champion and then returning to the UFC, where he beat down B.J. Penn at UFC 137, he's emerged as one of MMA's biggest stars.
That's why White thinks it'd be shortsighted to call it quits now.
"Let me tell you what: The kid's made a lot of money," he said. "If he didn't want to do it anymore, maybe he could retire. But why? He's in his prime. Fight for a few more years, and he'll have enough money to really do it and kick back the rest of his life."
Diaz often has shared his disdain for the sport, specifically judging and what he calls the politics of the fight game. He's also notoriously undependable ahead of fight time; he lost a title shot with St-Pierre in October after no-showing a pair of pre-event press conferences, and White said he inexplicably missed three flights for UFC 143 before he finally arrived in Sin City for this weekend's event.
So while White thinks Diaz ultimately will rescind his retirement offer – after all, the UFC boss didn't count out the possibility of an immediate rematch with Condit – he thinks it's for the best if the fighter's passion is gone.
"I think once he goes home and realizes and calms downs – look, Nick Diaz is a fighter," he said. "I don't see Nick Diaz retiring, but who knows? This isn't one of those sports where you want to be half in, half out.
"If that's how you feel, maybe you should retire."
For the latest on UFC 143, stay tuned to the UFC Events section of the site.
(Pictured: Carlos Condit)
Calafindești is a commune in Romania with 2,835 inhabitants, located in Suceava County, in the historical region of Bukovina.
The commune is formed by the union of 3 villages: Botoșanița Mare, Calafindești and Călinești.
In 2005, the village of Șerbăuți separated from Calafindești to form an autonomous commune.
Analysing Gordon's trade-off by adapting Thurow's approach of pure public good to the German energy sector
Holger Schlör,
Wolfgang Fischer and
Jürgen-Friedrich Hake
Published: 5 December 2016
We analyse Gordon's trade-off by adapting Thurow's approach of pure public good using the example of the German energy sector which is in a transition process to a low-carbon sustainable energy system (Energiewende). The income distribution and the energy expenditures of households are interpreted as public goods. Their distribution is measured with the Atkinson index, which determines how the quality of life, as measured in income and energy expenditures, is distributed among society.
We use disaggregated consumption and income data for 39.409 million German households. Our socio-economic analysis focuses on six household types.
Our analysis shows that among German households, energy expenditures are more equally distributed than private consumption in general and income. The rather (though by no means completely) equal distribution of energy expenditures confirms Smil's finding that energy is the universal currency (Sen, On Economic Inequality, 1973) for people's welfare and can be seen as an indicator of the basic needs of households irrespective of household income. Nevertheless, low-income households have to spend a higher share of their income on energy to avoid energy poverty. Further price increases could lead to a more unequal distribution and rising energy poverty.
The socio-economic conditions of society and its energy sector have to be addressed in transition processes. Energy poverty constitutes an infringement of the sustainability concept. If society does not take distributional effects into account, the transition process itself could be jeopardized.
Gordon's trade‐off
Atkinson index
Sustainable energy system
Sustainable development is a process in which society and political decision makers have to balance ecological, economic, and social targets. Equal rights and equality in terms of "equivalent living conditions" (Article 74 of the German constitution) are key elements of the social pillar of sustainability.
Gordon's trade-off
Modern societies are confronted with Gordon's trade-off [14]: their democratic constitutions guarantee all citizens the same political rights and obligations [27]. However, this democratic guarantee of equality is contrasted with economic inequality as the result of economic market forces, which produce unequal income, consumption opportunities, and life prospects [14, 29]. Individuals have the same political rights, but their social participation opportunities correlate not only with these rights but also with their individual success in economic processes [7, 14]. Individuals are affected by two institutions (economic market processes and the constitution) which grant different positions in society according to their specific institutional rules. The constitutions of democratic systems grant their citizens rights without any preconditions, whereas their position within the economic market system is based on their success in this system [27]. Economic institutions can "generate substantial disparities among citizens in living standards and material welfare" [14].
The political institutions of government are confronted, on the one hand, with a socio-economic democratic system that guarantees the same rights to each individual without any preconditions and, on the other hand, with an economic system in which individual success is based mainly on individual performance. Society and its government have to find a way to balance these two principles in order to avoid political tensions between social groups and households: "At some points along the way, society confronts choices that offer somewhat more equality at the expense of efficiency or somewhat more efficiency at the expense of equality. In the idiom of the economist, a trade-off emerges between equality and efficiency [14]." Political projects such as the German Energiewende can be implemented more easily if social justice, i.e. the distribution of the material welfare of society, is taken into account [26].
Hence, we can summarize that Gordon's trade-off is the result of the relations between two competing institutions (the democratic system and the economic market system). This competition is confirmed by Stiglitz, who illustrates that these conflicts arising from the trade-off are not the "result of the forces of nature, of abstract forces. [They are] the result of government policies that shape and direct the forces of technology and markets and broader societal forces [36]." In other words, Gordon's trade-off is politically shapeable by the institutions of society and has to be analysed so that this management process can avoid mismanagement on the basis of flawed data.
The need for such an analysis is also stressed by Acemoglu and Robinson [1], who argue "that economic analysis needs to identify, theoretically and empirically, conditions under which politics and economics run into conflict, and then evaluate policy proposals taking this conflict and the potential backlashes it creates into account [2]." These conflicts could endanger policy conceptions such as the German energy transition [13].
Our analysis tries to reveal societal obstacles in the socio-economic conditions of society which have to be addressed in transition processes and will show the necessity of political discourse concerning Gordon's trade-off, because transition processes are not only technical problems but increasingly also socio-economic problems that have to be solved. No one in society can escape from these unsolved problems. Hence, we will analyse Gordon's trade-off in the context of Thurow's theory of public goods.
Thurow—distribution of public goods
Private and public goods
The idea of public goods was developed in 1954 by Samuelson in his paper "The Pure Theory of Public Expenditure" [23]. He explains the characteristics of a public good: "that each individual's consumption of such a good leads to no subtraction from any other individual's consumption of that good [23]." Public goods "can be enjoyed by everyone and from which no one can be excluded [24]." Hence, we can classify the private and public goods consumed by households [10] and needed for the well-being of households [17] into four major categories [10] (Table 1).
Classification of goods

                  Rivalry              Non-rivalry
Excludable        Pure private good    Club good
Non-excludable    Impure public good   Pure public good

Source: D. Brümmerhoff [10]
In the case of private goods, the use of such a good by one consumer excludes other consumers from consuming it (e.g. food). In contrast, a dike is a pure public good, because everyone behind it is protected. A club good [11, 28] refers to, for instance, the use of a gym: once the monthly fee is paid, everyone in the gym may use the equipment. A congested road is an impure public good; no one can be excluded from using the road, but there will be rivalry in its use in the case of congestion [17].
Thurow's public good approach
The distribution of income was already interpreted as a pure public good by Thurow in 1971 [39], because every individual is confronted with the same distribution of income. No individual can be excluded from the advantages and disadvantages of a given distribution of income, and there is also non-rivalry in the consumption of the advantages and disadvantages [37, 40] of a given distribution of income [39]. Every individual is confronted with the same distribution of income, because as Joseph Stiglitz explains: "Widely unequal societies do not function efficiently and their economies are neither stable nor sustainable … there comes a point when inequality spirals into economic dysfunction for the whole society [37]." Everyone needs a functioning society to sustain their social position [37]. That is to say, the distribution of income is a pure public good [39] which sustains the functioning of society. It functions like a dike to stabilize the socio-economic system.
We will extend Thurow's public good approach by interpreting not only the income distribution of German households but also the distribution of their energy expenses as a public good, because the participation of all households in the energy system is an important factor in the success of any country's economy. The energy system is a dike for the socio-economic system, which needs a competitive infrastructure. We therefore also interpret the performance of the energy system as a public good for society, because no individual can be excluded from the advantages or disadvantages of the energy system and there is also non-rivalry in the consumption of these advantages or disadvantages.
Hence, we will expand Thurow's idea of a pure public good by including household energy consumption as a parameter for the quality of the German energy system. In the following, the distribution of the two public goods—income and energy system—will be analysed with the Atkinson index on the basis of the German household expenditure survey (EVS) database.
The index is based on social theories [5] and regards society as "a cooperative project for the mutual benefit" [5] of all members of society.
The Atkinson index is a normative distribution measure. The index is based on a social welfare function, which implies diminishing marginal utility of income [5, 15]. The index thereby assumes additive social welfare, which is the sum of the individual utility of society members. This concept is based on utilitarian individual philosophy [15]. In this philosophy, the welfare of the other members of society is not part of the individual utility function [5]: Each individual simply maximizes his own utility and does not care about the other individuals. The welfare of the individual is measured independently of the income of other individuals [5, 15]. Hence, the level of possible energy consumption is based on the net income, and energy consumption is part of the social welfare function (SWF), as the following definition of the welfare function shows:
$$ \mathrm{SWF}={\displaystyle \sum_{i=1}^n U\left(Y{\left(\mathrm{PC}\left(\mathrm{EC}\right)\right)}_i\right)}, $$ with $Y$ = income, $U$ = utility level, $n$ = number of households, $\mathrm{EC}$ = energy consumption and $\mathrm{PC}$ = total private consumption.
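The diminishing marginal utility embedded in this SWF implies that an equalizing income transfer raises aggregate welfare. A minimal sketch, assuming a logarithmic utility purely for illustration (the text only requires diminishing marginal utility):

```python
import math

# Additive utilitarian SWF with log utility (an assumed concrete choice,
# not prescribed by the text).
def swf(incomes):
    return sum(math.log(y) for y in incomes)

unequal = [2.0, 4.0]   # total income 6, unequally distributed
equal = [3.0, 3.0]     # same total income, equally distributed
print(swf(unequal) < swf(equal))  # True: the equalizing transfer raises SWF
```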
In our theoretical approach (utilitarianism), an "outside observer" has to compare the individual members of society with each other. His instrument is the Atkinson index [15]. The Atkinson index calculates how society can assess the distribution of individual income and consumption expenditures between the different income classes of the social groups. The index defines maximum inequality as 1 and maximum equality as 0 [26] and fulfils six mathematical axioms, thus allowing it to measure inequality [26].
The Atkinson index has a specific feature for calculating distribution, namely the epsilon parameter ε [3, 4]. The epsilon parameter of Eq. (1) "defines how sensitively the Atkinson index should interpret inequalities [25]." Its value ranges from zero to infinity. If society does not give any consideration to the distribution of income, then the value is zero (low inequality aversion). If society cares only about the lowest income group, then the value moves towards infinity (high inequality aversion). "The larger epsilon is, the more strongly the Atkinson index reacts to inequalities [27]." Epsilon can therefore represent the inequality aversion of society and can be interpreted as the mathematical parameter of Gordon's trade-off.
$$ \mathrm{Gordon}\hbox{'}\mathrm{s}\ \mathrm{Trade}\hbox{-} \mathrm{off}=\frac{\mathrm{Social}\ \mathrm{Equity}}{\mathrm{Economic}\ \mathrm{Efficiency}}=\mathrm{Inequality}\ \mathrm{Aversion}=\mathrm{Epsilon}\ \mathrm{Parameter}\ \mathrm{of}\ \mathrm{Atkinson}\ \mathrm{Index} $$
With the determination of the epsilon parameter, Gordon's trade-off becomes measurable by the Atkinson index. Epsilon relates two institutions to each other: the societal trade-off between social equality based on a democratic constitution and market economic efficiency. Researchers, social stakeholders, or legislators can define the social meaning of inequality for socio-economic development and can define Gordon's trade-off by the epsilon parameter. In a political discourse, society can develop a social view of its own understanding of how individuals treat and see each other in society which can also be expressed in the tax system. Epsilon confronts a society with its self-assessment as a just, fair society but also as an efficient market economy [25, 27].
We use the Atkinson index to determine the distributional effect of gross income, net income, private consumption, and energy expenditures [3]. The value of the Atkinson index is Thurow's public good. It defines the distribution of income and energy expenditure and the shape of the dike which prevents economic and social distortions of the socio-economic system.
For our analysis, we use the modified Atkinson index (AIXtype) to analyse the inequality of these issues:
$$ {\mathrm{AIX}}_{\mathrm{type}}=1-{\left[{\displaystyle \sum_{i=1}^n{\left(\frac{X_{i,\mathrm{type}}}{\overline{X_{\mathrm{type}}}}\right)}^{1-\varepsilon }}{f}_{i,\mathrm{type}}\right]}^{\frac{1}{1-\varepsilon }},\quad X={Y}^G,{Y}^N,\ \mathrm{PC},\ E,\ EK,\ EW,\ \text{for}\ \varepsilon \ne 1. $$
$$ {\mathrm{AIX}}_{\mathrm{type}}=1- \exp \left[{\displaystyle \sum_{i=1}^n{f}_{i,\mathrm{type}}\ { \log}_e\frac{X_{i,\mathrm{type}}}{\overline{X_{\mathrm{type}}}}}\right],\quad X={Y}^G,{Y}^N,\ \mathrm{PC},\ E,\ EK,\ EW,\ \text{for}\ \varepsilon =1. $$
\( {Y}_{i,\mathrm{type}}^G \) represents the gross income, \( {Y}_{i,\mathrm{type}}^N \) the net income, \( \mathrm{PC}_{i,\mathrm{type}} \) the consumption expenditure, \( E_{i,\mathrm{type}} \) the energy consumption expenditure, \( EW_{i,\mathrm{type}} \) the residential energy consumption expenditure and \( EK_{i,\mathrm{type}} \) the car energy consumption expenditure in the \( i \)th income class (\( n \) is the number of income classes) of the household type (singles, singles with child(ren), couples without child(ren), couples with child(ren), other households); \( f_{i,\mathrm{type}} \) is the proportion of the population of the particular household type with income in the \( i \)th income class; \( {\overline{X}}_{\mathrm{type}} \) is the mean household value of the six income and expenditure items (\( Y^G \), \( Y^N \), \( \mathrm{PC} \), \( E \), \( EK \), \( EW \)) for the household type; and the epsilon parameter (\( \varepsilon \)) is the same for all groups.
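The two branches of AIX above can be implemented directly. This is a sketch of the index (function name and data are illustrative, not taken from the EVS):

```python
import math

def atkinson(values, weights=None, eps=1.0):
    """Atkinson index for class means `values`, population shares `weights`
    (the f_i, uniform by default) and inequality aversion `eps`."""
    n = len(values)
    if weights is None:
        weights = [1.0 / n] * n
    mean = sum(f * x for f, x in zip(weights, values))  # weighted mean X-bar
    if eps == 1.0:
        # AIX = 1 - exp( sum_i f_i * ln(x_i / mean) )
        return 1.0 - math.exp(sum(f * math.log(x / mean)
                                  for f, x in zip(weights, values)))
    # AIX = 1 - [ sum_i f_i * (x_i / mean)^(1 - eps) ]^(1 / (1 - eps))
    s = sum(f * (x / mean) ** (1.0 - eps) for f, x in zip(weights, values))
    return 1.0 - s ** (1.0 / (1.0 - eps))

print(atkinson([1.0, 1.0]))                      # 0.0 -> perfect equality
print(round(atkinson([0.5, 1.5], eps=1.0), 3))   # 0.134
print(round(atkinson([0.5, 1.5], eps=2.0), 3))   # 0.25 -> larger eps reacts more
```

The `eps` argument mirrors the inequality-aversion parameter ε: `eps = 0` ignores the distribution entirely, while larger values weight the lowest income classes more heavily.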
Database—German household expenditure survey data
The German household expenditure survey (EVS) provides data sets on German economic life and the consumer behaviour of private households [34]. Every 5 years, the Federal Statistical Office questions a selection of German households (0.2% of all German households) about their income, expenditures, assets, consumer goods, and residential situation. The 2008 survey was the tenth survey, following surveys in 1962/63, 1969, 1973, 1978, 1983, 1988, 1993, 1998 and 2003 [16, 35]. The results of the 2008 EVS were published in 2011 [31, 32]; the 2013 EVS had not yet been published in 2015. The EVS data sets provide an overview of the social conditions and socio-economic development of the population in Germany. They are important not only for German social policy but also for all other socio-economic fields of politics [33].
Private households are the central object of investigation in the framework of the EVS.
Our analysis focuses on the following household types:
Single households
Single households with child(ren)
Couples without child(ren)
Couples with child(ren)
Other households5
In our model, we consider all 39.409 million households which took part in the EVS survey, of which 15.537 million (30.1%) are single households, 1.339 million are single households with child(ren) (2.6%), and 17.381 million are couples (33.7%) living in one household, while 11.441 million of the couples households have no children (22.2%) and 5.940 million of the couples households have child(ren) (11.5%). We also consider the 5.152 million other households ("sonstige Haushalte").
The following table shows how German households are distributed among social groups and income groups. We analyse nine income classes as Table 2 shows.
Table 2 Distribution of households 2008: German households among the different household types and income groups (number of households in 100). Columns: singles, singles with child(ren), couples without child(ren), couples with child(ren), and other households; rows: the nine income classes, each social group's proportion of all households in % of total households, and the distribution of the households among the different social groups.
Source: Schlör et al. 2015 [31, 32]
The table shows the distribution of the households over the nine income classes. The largest group of all households (25.8%) is in the income class € 2600–€ 3600, whereas within the single households the income class € 900–€ 1300 has the largest relative proportion (22%). Within the single households with child(ren), the largest relative grouping (26.1%) is the income class € 1500–€ 2000, while couples have their biggest share (25.1%) in the income class € 2600–€ 3600 and couples without children their highest share (24.9%) in the same income group. Couples with child(ren) have their biggest share (28.4%) in the income group € 3600–€ 5000. Nearly one third of the other households (29.3%) belong to the highest income group (€ 5000–€ 18,000).
Our paper measures the distribution of the public goods (income distribution and energy system) with the Atkinson index [3, 4].
In the first step, we analyse the first part of Gordon's trade-off: the success of the household groups in the economic process, i.e. the income and consumption expenditures of the different household types.
Real distribution
Disposable income of private households according to their social position
Our analysis focuses on five household types (single households, single households with child(ren), couples, couples without child(ren), couples with child(ren)), which are part of the group of all households. We analyse the real distribution of income, consumption, and energy expenses, beginning with their dispersion [12, 18–21, 38]. We define dispersion as the ratio of the income, consumption, or energy expenditure of the highest income group to that of the average household of the social group.
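The dispersion measure defined above is a simple ratio; as a minimal sketch (the figures are invented, not EVS values):

```python
def dispersion(top_group_value, group_average):
    """Ratio of the income/consumption/energy expenditure of the
    highest income group to that of the average household
    of the social group."""
    return top_group_value / group_average

# Hypothetical example: a top-group gross income of € 9000 against a
# group average of € 2250 gives a dispersion of 4.0
print(dispersion(9000, 2250))  # 4.0
```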
Couples without children achieved the highest average monthly gross income in 2008 (€ 9222), followed by other households (€ 9152) and couples (€ 9136). Singles and couples with child(ren) achieved nearly the same level of gross income (€ 9083, € 9037), whereas the gross income of singles with children in the highest income group is significantly lower (€ 7990).
The dispersion of the gross income varies significantly between the household types. We can identify three major groups: The highest dispersion is found in the single households group (4.14, 3.43). The second group consists of all couples and couples without children (1.97, 2.18). The income dispersion reaches its lowest value in the groups containing couples with child(ren) and other households (1.66, 1.67) (Table 3).
Table 3 Gross income of private households in Germany 2008 according to their household type (rows: net income groups in €; columns: the household types, singles with children, and all households; bottom row: gross income dispersiona).
Source: Own calculation based on German Federal Statistical Office, 2011. Italic numbers: own estimation
aIncome dispersion: ratio of the gross income of the highest income group to the gross income of the average household of the social group
Monthly net income
The monthly net income of private households also varies strongly with the social status of the main income recipient, as the following table shows (Table 4).
Table 4 Net income of private households in Germany 2008 according to their household type (rows: income groups in €; bottom row: net income dispersiona).
Source: Own calculation based on German Federal Statistical Office, 2011. / = no declaration, the number of cases is too small. Italic numbers: own estimation
aIncome dispersion: ratio of the net income of the highest available income group to the net income of the average household of the social group
Couples with children achieved the highest average monthly net income in 2008 (€ 4191), followed by couples (€ 3662), couples without child(ren) (€ 3387), and singles with and without child(ren) (€ 1943, € 1726). The dispersion of the net income varies significantly between the household types. Once again, the first group contains single households where the dispersion decreases from 4.09 to 3.3. The second group contains couples and couples without child(ren) (1.9, 2.1). They have a significantly lower dispersion than the single households. The income dispersion reaches its lowest value in the group containing couples with child(ren) and other households (1.6). The comparison of net and gross income shows that the German income tax system reduces the dispersion in this particular household type.
Expenditure of private households according to their social position
Monthly private consumption
Expenditure for private consumption also varies between the different household types, as the following Table 5 shows. The single households spend an average of € 1418 per month, singles with child(ren) € 1740, couples € 2757, couples without child(ren) € 2622, couples with child(ren) € 3017, and other households € 3142. The consumption expenditures increase with rising income without reaching a saturation point. The consumption dispersion is significantly lower than the income dispersion.
Table 5 Private consumption of private households in Germany 2008 according to their household type (bottom row: consumption dispersiona).
Source: Own calculation based on German Federal Statistical Office, 2011. / = no declaration, the number of cases is too small
aConsumption dispersion: ratio of the consumption of the highest available income group to the consumption of the average household of the social group
The consumption dispersion of singles (2.35) and singles with child(ren) (2.12) is the highest of all households analysed, followed by couples (1.53, 1.62, 1.38) and other households (1.46). Their dispersion is much lower, and they have more similar consumption patterns than the single households.
In the following, we analyse the energy expenditures of the households.
Monthly energy consumption
The expenditures for energy consumption of the households will be analysed in more detail to obtain a picture of the real distribution of energy consumption in Germany. This includes car energy and residential energy expenditures and total energy expenditures as summarized in Table 6.
Table 6 Energy consumption (car, residential, and total) of private households in Germany 2008 according to their social position (columns: the household types, including couples without child(ren) and couples with child(ren); rows: car energy expenditure in €, car energy dispersiona, residential energy expenditure in €, residential energy dispersionb, total energy expenditure in €, total energy dispersionc).
Source: Own calculation based on German Federal Statistical Office, 2011
aCar energy dispersion: ratio of the car energy expenditures of the highest income group to the car energy expenditure of the average household of the social group
bResidential energy dispersion: ratio of the residential energy expenditures of the highest income group to the residential energy expenditure of the average household of the social group
cTotal energy dispersion: ratio of the total energy expenditures of the highest income group to the total energy expenditure of the average household of the social group
Energy expenses for cars
Energy expenses for cars include expenses for fuel and lubricants in the six social groups. The single households without and with children spend nearly the same amount (€ 50 and € 67, respectively) on car energy, whereas the couples without child(ren) spend on average € 111 and the couples with child(ren) and couples spend € 150 and € 124, respectively. The other households have on average the highest expenditures on car energy: € 160. With rising income, expenses for car energy increase continuously without reaching a saturation point. The dispersion of energy expenditure between the household types is significantly lower compared to income and overall consumption. In the case of car energy expenditure, it ranges from 1.18 to 1.94.
Residential energy expenditure
With respect to expenses for residential energy, all three couple household types have nearly the same expenditures (€ 165, € 163, € 169). The single households with child(ren) (€ 119) have slightly higher residential energy expenditure than single households overall (€ 93). The other households have the highest expenditures for residential energy, with an average of € 201. With rising income, expenses for residential energy increase continuously, reaching a saturation point before the highest income group only in the case of singles with child(ren). In the other household types, residential energy expenditure increases without reaching a saturation point. Generally, the dispersion for residential energy is lower than that for car energy; all household types show a dispersion between 1.17 and 1.65.
Total energy expenditure
When we sum up the car and residential energy expenditures to calculate the total energy expenditures, we see that couples with child(ren) (€ 319) have the highest energy expenditures of the three couple household types (€ 274, € 289), whereas the two single household types have lower energy expenditures (€ 143, € 186). The other households have the highest energy expenditures overall: € 361.
With rising income, the total energy expenses increase and reach a saturation point before the highest income group only in the household type singles with child(ren). In the other household types, the total energy expenditures increase without reaching a saturation point before the highest income group.
Hence, the dispersion varies between households. Couple households show the lowest dispersion (1.18, 1.28, 1.33) and single households a slightly higher one (1.55, 1.75), whereas the other households have a dispersion similar to the couple households (1.32).
In the following, we also present the distribution of expenditures for another basic good: food and beverages. The comparison between food and energy enables us to classify the energy distribution results.
The expenditures for food and beverages differ among the households. But the dispersion of food expenditures is the lowest of all analysed types of consumption and income (Table 7).
Table 7 Food and beverage expenditures of private households in Germany 2008 according to their household type (bottom row: food dispersiona).
Source: Own calculation based on German Federal Statistical Office, 2011. Italic numbers: own estimation
aFood consumption dispersion: ratio of the food consumption of the highest income group to the food consumption of the average household of the social group
The single households spend on average € 182 per month on food and beverages. These expenditures reach their saturation point at € 222 per month in the highest income class. The food consumption of singles with children is on average about € 100 higher, at € 281 per month, and reaches its saturation point in the income group of € 3600–€ 5000 (€ 366), before the top income group, which consumes less (€ 357). The social group of couple households spends on average € 400 a month on food and beverages, reaching its highest value in the highest income group with € 486. Couples without children (€ 360, € 432) consume less than all couples both on average and in the top group. Food and beverage consumption of couples with children rises to € 478 a month on average and to € 547 in the top income group. The social group of other households has the highest monthly food consumption, with € 483 on average and € 603 in the top group. The food consumption dispersion of other households (1.25) and single parents (1.27) is the highest of all households analysed, followed by couples (1.2, 1.2, 1.14). Couples with children have food consumption patterns that are more similar than those of the other households.
Our analysis shows how the household types' heterogeneous levels of success in the economic system may be measured in income and consumption expenditures.
In the following, we examine how the real distribution of expenses and income is perceived by the households against the background of differing levels of inequality aversion within society, i.e. how society assesses the distribution of income and expenditures against their normative perception of inequality.
Normative distribution
In our analysis, the epsilon parameter of the Atkinson index ranges from 1 to 2.5, where ε = 1 and 1.5 represent a low inequality aversion of society and ε = 2 and 2.5 represent a high inequality aversion of German society.
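How the choice of epsilon changes the measured inequality can be illustrated by evaluating the Atkinson index of one fixed distribution at the four epsilon values used in the analysis; the distribution below is invented for illustration, not EVS data.

```python
import math

def atkinson(x, f, eps):
    """Atkinson index for grouped data (class means x, class shares f)."""
    x_bar = sum(fi * xi for xi, fi in zip(x, f))
    if eps == 1:
        return 1 - math.exp(sum(fi * math.log(xi / x_bar)
                                for xi, fi in zip(x, f)))
    return 1 - sum(fi * (xi / x_bar) ** (1 - eps)
                   for xi, fi in zip(x, f)) ** (1 / (1 - eps))

# Invented class means and population shares:
means = [900, 1500, 2600, 5000]
shares = [0.30, 0.30, 0.25, 0.15]
for eps in (1.0, 1.5, 2.0, 2.5):
    print(eps, round(atkinson(means, shares, eps), 3))
```

For a fixed unequal distribution, the index rises monotonically with epsilon: a society with a stronger aversion to inequality perceives the same distribution as more unequal.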
In the case of the single households, the net income (0.149–0.299) is more equally distributed than the gross income (0.176–0.356). This illustrates the effectiveness of the German tax system in reducing some of the inequality of the German economic market system.
The consumption patterns of the singles (0.066–0.149) are distributed more equally between the households than the two income types.
In the case of energy consumption, the expenditures on residential energy (0.023–0.053) are nearly equally distributed between the households. On the other hand, the expenditures for car energy (0.165–0.388) are more unequally distributed in this household group than the gross income. Residential energy expenditures are of central importance for the households irrespective of their income, whereas individual mobility (cars) is not necessarily required by all households; for the single households, the public transport system is an alternative. This explains why, for the single households, the Atkinson values for car energy are higher than those for residential energy. Table 8 shows that food is the most equally distributed item (0.006–0.018) of the analysed data sample. As expected, food is the main basic good for single households.
Table 8 Atkinson index 2008 of single households (rows: Atkinson epsilon; columns include residential energy, car energya, and foodb).
Source: Own calculations 2016
aCar energy = fuel and lubricants
bFood, beverages (non-alcoholic and alcoholic), and tobacco
Singles with child(ren)
As in the group of all single households, the net income of single households with child(ren) is more equally distributed than the gross income. The data confirms that the German tax system evens out the inequalities of the economic market system to some extent. The gross income of single households with child(ren) (0.125–0.258) is nevertheless more equally distributed than that of the group consisting of all single households. This is also valid for the net income.
We can also see that the distribution of private consumption (0.056–0.121) and of all energy expenditures (0.038–0.087) is more equal in this household type than car energy expenditures (0.106–0.262). Table 9 illustrates that also in this social group food consumption is the most equally distributed consumption issue.
Table 9 Atkinson index 2008 of single households with child(ren)
In the couple group, the gross income (0.138–0.323) is again more unequally distributed than the net income (0.118–0.277) due to the German tax system (Table 10).
Table 10 Atkinson index 2008 of all couples households
This is also valid for the consumption patterns (0.05–0.124) and the energy expenditures (0.025–0.067). The residential energy expenditures (0.025–0.034) are again the most equally distributed item in this household group. The results also show that car energy expenditures (0.047–0.139) are more unequally distributed than residential energy expenditures but more equally distributed than in the case of the single households. Food consumption is distributed in the same way in the couple households (0.011–0.038) as in the single households with children.
For the couples without child(ren), we see again that, because of the tax system, the net income (0.124–0.355) is more equally distributed than the gross income (0.150–0.355). We can assert that residential energy (0.017–0.041) is again the most equally distributed good. Private consumption (0.053–0.128) is distributed in a manner similar to car energy (0.057–0.148), and a little more unequally than the energy expenditures.
The food consumption of the couple households with children (0.008–0.020) is more equally distributed than that of all couples. Table 11 also documents the basic need character of food consumption, because it is the most equally distributed good of these households.
Table 11 Atkinson index 2008 of couples without child(ren)
The effects of the German tax system as an instrument to reduce income inequality can also be confirmed by the analysis of the gross (0.104–0.267) and net income (0.091–0.227) of couples with children (Table 12).
Table 12 Atkinson index 2008 of couples with child(ren)
Private consumption in this household group is relatively equally distributed. But the results show that car energy expenditures are also equally distributed and we can see a clear contrast to the single households, where car energy expenditures are distributed very unequally. We can conclude from this that car energy expenditures are not necessarily an essential good for single households, but for the couples, especially for those with children, they are indispensable. In the households of couples with children, food consumption is also very equally distributed, and the Atkinson index (0.007–0.018) is a good indicator of that.
The final household type in our analysis is the group containing other households. This household group also confirms the effects of the German tax system, which reduces income inequality between the members of that household type (0.176–0.337 to 0.149–0.326).
Table 13 shows that the inequality assessed by the modified Atkinson index increases with rising epsilon irrespective of which issue is analysed. The energy expenditures (0.048–0.133) of that group are more equally distributed than the overall private consumption (0.065–0.167). The residential energy expenditures (0.024–0.069) are more equally distributed than the car energy expenditures (0.030–0.129). Food consumption is more unequally distributed in the group of all other households than in the other household groups. The values of the Atkinson index (0.025–0.065) are near the values of the residential energy. The other households group, which includes, for example, parents-in-law, children over 18 and groups sharing an apartment, is more heterogeneous than the single and couple households, which explains the higher Atkinson index.
Table 13 Atkinson index 2008 of other households
We can therefore summarize that the household group of couples with child(ren) is the most homogeneous group and that their net income is more equally distributed than their gross income. Private consumption is more equally distributed than both income types, and energy services are distributed almost equally between the household types.
However, the single households are the most heterogeneous household group and show a more differentiated distribution picture than the couple households. In both single household types, the German tax system significantly reduces the inequality between households. In the case of epsilon 2.5—representing a high inequality aversion—the German tax system reduces the Atkinson index of single households from 0.356 to 0.299. But also in the single households, private consumption is more equally distributed than income, and energy expenditures are still the most equally distributed expenditure type (0.055–0.125). What is striking in this group is the fact that car energy expenditures are the most unevenly distributed expenditure type.
We have seen that energy expenditures are more equally distributed than private consumption and income types. The nearly equal distribution of energy expenditures confirms Smil's assumption that energy is the universal currency [30] for people's welfare and can be seen as an indicator of the basic needs of the households, whereby "basic" means something different in different countries—for Germany basic needs means an energy consumption which offers social participation. These basic energy needs are to a large extent, but not completely, independent of people's income situation.
This means that the lower income groups have to spend a much higher percentage of their income on energy services than the higher income groups (Table 14). Households with a net income lower than € 900 are divided into two major groups. The singles in this income group spend between 11.9 and 13.6% of their income on energy services, between 3 and 4 percentage points more than the average household in this social group and nearly 10 percentage points more than the highest income group.
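The regressive burden described here can be expressed as energy expenditure relative to net income; the household figures in this sketch are invented for illustration, not EVS values.

```python
def energy_share_pct(energy_spend, net_income):
    """Monthly energy expenditure as a percentage of monthly net income."""
    return 100.0 * energy_spend / net_income

# Hypothetical low-income vs. top-income single household:
low = energy_share_pct(110, 850)     # roughly 12.9 %
high = energy_share_pct(220, 5500)   # 4.0 %
print(round(low - high, 1), "percentage points regressive gap")
```

The gap in percentage points, rather than absolute euro amounts, is what makes rising energy prices regressive: the top-income household spends twice as many euros but a far smaller share of its income.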
Table 14 Energy consumption of private households in relation to net income*, Germany 2008, according to their social position (in %).
Source: Own calculation based on German Federal Statistical Office, 2011. Italic numbers: own estimation, limited data basis in this income group
*Total energy dispersion: ratio of the total energy expenditures of the highest income group to the total energy expenditures of the average household of the social group
However, we get a different picture in the social group of couples households: the couple households of the income group <€ 900 spend more than 25% of their net income on energy services. Rising energy prices would affect these households directly. In this case, they would have to rearrange the expenditures in their household budgets. They would have to reduce other expenditures to maintain their use of energy services at its current level; otherwise, they would lose access to modern energy services which are "crucial to human well-being and to a country's economic development" as the IEA stated. There is a danger that these households will be confronted with energy poverty, which can be defined as a "condition wherein a household is unable to access energy services [8]" at its accustomed level, and so there is a growing need for energy governance. Energy poverty constitutes an infringement of the sustainability concept: environmental, economic, and social targets have to be balanced in the transition to a low-carbon economy.
Our analysis reveals that energy poverty and the socio-economic conditions of society and its energy sector have to be addressed in transition processes to a sustainable society and have to be at the centre of any energy transition process and its political discourse. The analysis of Gordon's trade-off shows that transition processes such as the German Energiewende are not only technical problems but increasingly also socio-economic problems that have to be solved by energy governance [6], and because of Thurow's public good approach, no one in society can escape from the unsolved problems of Gordon's trade-off.
The analysis using the Atkinson index can reveal deeper insights into the self-perception of society and the conception of justice and equality, which are central pillars of a sustainable society. The epsilon parameter thereby enables us to parameterize this perception and conception in measuring the distribution of consumption and income. Our analysis is necessary, because every economic and political reform has distributional effects. If politicians do not consider these effects (energy poverty), they can endanger the total reform of the energy sector (Energiewende), because people will turn away from the goals of the reform [1, 9]. Acceptance of reforms such as the German Energiewende will thus decline.
The transformation of current energy systems into sustainable systems is on the agenda of all European countries (EU climate policy). Therefore, such a transformation could (and probably will) also lead to rising electricity prices, placing an above-average strain on the lowest income groups. Moreover, this regressive effect will appear in all categories of expenditure if prices increase, no matter whether this is caused by political decisions or market forces.
Our index can also be applied to other countries with respect to energy and other household expenditures, if the respective national statistical office provides the necessary household survey data for the analysis. Our index can then provide decision makers and institutions with information on how (un)equally the costs of transformation processes are distributed between the different income groups. We used energy in our analysis because it is one of the basic needs, and the energy sector is at the centre of the German transformation process: the Energiewende. Energy poverty caused by the Energiewende—as a synonym for a lack of societal participation in the transformation process, at least in highly developed countries—can endanger the whole transformation process. Political strategies to strengthen participation should therefore focus on the regressive effect of high energy prices.
Decision makers and political institutions can decide in a public discourse which categories of expenditures should be analysed and which are more important and relevant to justify political interventions to reduce the inequality caused by rising prices.
The index could also deliver information about the differences in income distribution in EU countries. For this analysis, we need reliable and comparable statistical data for the whole of Europe. However, in our view, two important political obstacles are looming: Firstly, it is difficult enough to find common political ground in domestic policy between the different political actors and interest groups in order to distribute the costs of national transformation policies. Secondly, this challenge is raised to a completely different level if wealth is to be redistributed between EU states (Euro crisis, Greek debt crisis) to a much larger extent than is the case today (EU Regional Fund, Structural Fund etc.).
To summarize, our concept has both a detection (revealing the implicit preferences) and potentially also an orientation function (defining explicit societal preferences with respect to the degree of homogeneity of a society).
Kermit Gordon (1916–1976) was Director of the United States Bureau of the Budget (now the Office of Management and Budget) (December 28, 1962–June 1, 1965) during the administration of Lyndon Johnson, and he was also the president of the Brookings Institution. He oversaw the creation of the first budgets for Johnson's Great Society domestic agenda. Gordon was a member of the Council of Economic Advisors, 1961–1962.
For our analysis, we take up the definition of an institution offered by Rawls. Institutions in Rawls's sense are the constitution, economic and social conditions, freedom of thought, freedom of conscience, economic markets with competition, and private property [22].
Nicholas Barr shows that the Gini coefficient has two disadvantages for measuring inequality, which are avoided by the Atkinson index [5]. The Gini coefficient is not an unambiguous measure because, as Hauser and Barr have shown, different distributions can lead to the same Gini coefficient [13, 52]. Hence, we decided to use the Atkinson index to estimate the distributional effects of increasing energy prices [27].
This analytical view is based on Rawls' theory of justice, where inequality is determined by the "position of the least advantaged members of society. Where epsilon lies between these extremes depends on the importance attached to redistribution towards the bottom [3]."
Other households include, e.g. parents-in-law, children over 18, and groups sharing an apartment.
HS initiated the research idea of analyzing Gordon's trade-off and developed the Atkinson model based on EVS data. HS and WF designed and organized all the research for this study. WF reviewed the theory of public goods. JFH had a leading role in the literature review and the analysis of the real distribution of the EVS data. All the authors contributed to the conclusion and the outlook of the study. All authors read and approved the final manuscript.
Forschungszentrum Jülich, Institute of Energy and Climate Research, IEK-STE: Systems Analysis and Technology Evaluation, Jülich, Germany
Acemoglu D, Robinson JA (2012) Why nations fail: the origins of power, prosperity, and poverty. Crown Business, New York
Acemoglu D, Robinson JA (2013) Economics versus politics: pitfalls of policy advice. J Econ Perspect 27(2):173–192. doi:10.1257/jep.27.2.173
Atkinson AB (1983) The economics of inequality. Clarendon, Oxford
Atkinson AB (1970) On the measurement of inequality. J Econ Theory 2:244–263
Barr N (1993) The economics of the welfare state. Stanford University Press, Stanford
Bazilian M, Nakhooda S, Van de Graaf T (2014) Energy governance and poverty. Energy Research & Social Science 1:217–225
Bell D (1996 [1976]) The cultural contradictions of capitalism. Basic Books, New York
Bouzarovski S, Petrova S, Sarlamanov R (2012) Energy poverty policies in the EU: a critical perspective. Energy Policy 49:76–82
Braunberger G (2013) Ökonomen verstehen zu wenig von Politik (und unterschätzen Verteilungsthemen). In: FAZ blogs (ed) Das Fazit-Wirtschaftsblog. FAZ, Frankfurt/M
Brümmerhoff D (2007) Finanzwissenschaft. Oldenbourg Verlag, Munich
Buchanan JM (1965) An economic theory of clubs. Economica 32:1–14
Edmond C, Veldkamp L (2009) Income dispersion and counter-cyclical markups. J Monet Econ 56:791–804
German Federal Ministry of Economics and Technology (BMWi) (2012) Germany's new energy policy. BMWi, Berlin
Gordon K (1975) Foreword. In: Okun AM (ed) Equality and efficiency: the big tradeoff. The Brookings Institution, Washington, D.C.
Hauser R (1996) Zur Messung individueller Wohlfahrt und ihrer Verteilung. In: Chlumsky J, Wiegert R (eds) Wohlfahrtsmessung - Aufgabe der Statistik im gesellschaftlichen Wandel. Statistisches Bundesamt, Wiesbaden, pp 13–38
Jung S (2001) Privater Verbrauch in Deutschland. DUV, Wiesbaden
Kaul I, Grunberg I, Stern MA (1999) Global public goods. UNDP, New York
Metwally MM, Jensen RC (1973) A note on the measurement of regional income dispersion. Econ Dev Cult Chang 22:135–136
Mulas-Granados C, Sanz I (2008) The dispersion of technology and income in Europe: evolution and mutual relationship across regions. Res Policy 37:836–848
Park J (2006) Dispersion of human capital and economic growth. J Macroecon 28:520–539
Ramos HM, Sordo MA (2003) Dispersion measures and dispersive orderings. Statistics & Probability Letters 61:123–131
Rawls J (1971) A theory of justice. Harvard University Press, Cambridge
Samuelson PA (1954) The pure theory of public expenditure. Rev Econ Stat 36:387–389
Samuelson PA, Nordhaus WD (2010) Economics. McGraw-Hill Education (Asia), New York
Schlör H, Fischer W, Hake J-F (2012) Measuring social welfare, energy and inequality in Germany. Appl Energy 97:135–142
Schlör H, Fischer W, Hake J-F (2012) Social welfare, income, consumption, energy, and the inequality aversion of society - a case study from Germany. J Eur Econ 11:356–377
Schlör H, Fischer W, Hake J-F (2013) Sustainable development, justice and the Atkinson index: measuring the distributional effects of the German energy transition. Appl Energy 112:1493–1499
Scotchmer S (2008) Clubs. In: Durlauf SN, Blume LE (eds) The new Palgrave dictionary of economics. Palgrave Macmillan, Basingstoke
Sen A (1973) On economic inequality. Norton, New York
Smil V (1994) Energy in world history. Westview, Boulder
Statistisches Bundesamt (2011) Einkommens- und Verbrauchsstichprobe - Einkommensverteilung in Deutschland 2008. Wirtschaftsrechnungen, Wiesbaden
Statistisches Bundesamt (2011) Einkommens- und Verbrauchsstichprobe - Einnahmen und Ausgaben privater Haushalte 2008. Wirtschaftsrechnungen, Wiesbaden
Statistisches Bundesamt (2013) Wirtschaftsrechnungen. Einkommens- und Verbrauchsstichprobe - Aufgabe, Methode und Durchführung. Fachserie 15, Heft 7
Statistisches Bundesamt (Federal Statistical Office) (2005) Einkommens- und Verbrauchsstichprobe - Aufgabe, Methode und Durchführung der EVS. Fachserie Wirtschaftsrechnungen 15, Heft 7Google Scholar
Statistisches Bundesamt (Federal Statistical Office) (2005) Einkommens- und Verbrauchsstichprobe - Einnahmen und Ausgaben privater Haushalte 2003. Fachserie Wirtschaftsrechnungen 15, Reihe 1Google Scholar
Stiglitz J (2012) Price of inequality. Norton, LondonGoogle Scholar
Stiglitz JE (2012) The 1 percent's problem. Vanity Fair, LondonGoogle Scholar
Theil H, Fiebig DG (1986) The measurement of income and price dispersion in cross-country demand analysis. Econ Lett 22:391–393View ArticleGoogle Scholar
Thurow LC (1971) The income distribution as a pure public good. Q J Econ 85:327–336View ArticleGoogle Scholar
Wilkinson R, Pickett K (2010) The spirit level. Why equality is better for everyone. Penguin, LondonGoogle Scholar
In these collections
Sustainable Energy; A Systems Approach | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 4,977 |
## Contents
Introduction
A is for **Almond**
Spicy almond butter dressing
Chilled almond soup with mojo rojo
Sicilian almond and tomato pesto
Chicken korma
Salted almond toffee
Almond, honey and fig cake
B is for **Blue Cheese**
Polenta with Gorgonzola and honeyed hazelnuts
Leek and Stilton steamed pudding
Roquefort and honey cheesecake with walnut and pear
Wedge salad with quick pickled onions and buttermilk blue cheese dressing
Blue cheese creamed spinach
Poached plum crumble with blue cheese ice cream
C is for **Caramel**
Roast duck with miso caramel
Vietnamese caramel and pork hotpot
Banoffee split
Pecan, bourbon and salted caramel cookies
Salted peanut caramel crispy cakes
Walnut caramel cream pie
D is for **Dumplings**
Canederli alla tirolese with Parmesan broth
Venison and port casserole with Stilton dumplings
Queenie and samphire crystal dumplings
Chickpea and spinach dumplings in a tomato and yoghurt sauce
Southern chicken and jalapeño dumplings
Spotted dick
E is for **Eggs**
Bacon devilled eggs
Deep-fried quail's eggs with celery salt mayonnaise
Baked eggs, creamed corn and spinach
Omelette farcie
Rum flip
Pandan and coconut burnt creams
F is for **Fat**
Cultured butter
Bacon refried beans
Red-braised pork
Lamb 'porchetta' with salsa verde
Bourbon and bacon butter
Coconut ice magic
G is for **Garlic**
Confit garlic, thyme and Parmesan tart
Hot and sour seafood soup with black garlic aïoli
Brined and slow-cooked lamb with flageolet beans, white wine and garlic
Duck fat garlic bread
Georgian griddled chicken on toast
Grand aïoli for heretics
H is for **Hot**
Blackened jalapeño and avocado slaw
Sweet sriracha cakes
Red lentil and tomato soup with harissa
Green chilli, New Mexico style
Lemongrass and chilli tofu
Meatball curry
Mexican chilli chocolate mousse
I is for **Ice**
Simple banana and peanut butter ice
Salted brown butter and buttermilk ice cream
Avocado and double lime sorbet
Rum punch ice cream
Simple persimmon, lime and ginger sorbet
Frangelico and espresso granita shots
Ricotta ice cream terrine with fig molasses
J is for **Junk**
Sweet paprika cheesy chips
Buttermilk onion rings
Vietnamese crispy pork and prawn pancakes (bánh xèo)
Texan queso dip
Homemade butterscotch 'Angel Delight'
Marathon pie
K is for **Kale and other greens**
Spinach soup with spiced anchovy butter toasts
Spicy cashew kale crisps
Fava e cavolo nero
Spinach, ricotta and feta tart with hard-boiled eggs
Homemade orecchiette with sausage and kale
Chard gratin with a Gruyère crumb
L is for **Leaves**
Nice salad
Green herb cauliflower 'tabbouleh'
Three pea salad with lemon butter dressing
Black kale salad with anchovy dressing
Chicory with beetroot, goat's cheese and walnuts
Mustard leaves and little gem with bacon vinaigrette and toasted walnuts
M is for **Malt**
Moules marinières écossaises
Single malt loaf
Rye and porter porridge with bacon, leeks and cheese
Malted milk creams
Triple chocolate malt cake
Black and white shake
N is for **Noodles**
Japanese carbonara
Baked ziti with sausage and kale
Spicy peanut butter noodles with sprouting broccoli
Beetroot noodles with goat's cheese, toasted walnuts and baby kale
Spätzle with cheese and onion
Spaghetti with courgette noodles and Parmesan
Vietnamese bún cha
O is for **Octopus and other cephalopods**
Cambodian stuffed frog-style squid
Coconut squid
Black risotto with eggs
Braised octopus with chickpeas and coriander
Maryland-style octopus sandwich
P is for **Potatoes**
Baked potato soup
Chorizo baked potatoes with avocado crema
Aloo tikki Scotch eggs
Northern potato salad
Potato, black kale and anchovy pie
Aligot
Tattie scones à la Arnold Bennett
Potato and cauliflower curry with coconut and cashew cream
Q is for **Quiver**
Tricolore jellies
Goat's cheese custards with honey-glazed hazelnuts and black olive toasts
Jelly cherry jubilee
Gooseberry and buttermilk pots
Caribbean milk punch jelly
Almond and rosewater blancmange
R is for **Rhubarb**
Mackerel and samphire tartare with pickled rhubarb
Pork rillettes with rhubarb chutney
Persian lamb and rhubarb stew
Rhubarb Bircher muesli
Rhubarb and marmalade sticky pudding
Rhubarb and custard trifle with an amaretto syllabub
Rhubarb gin granita
S is for **Smoke**
Charred squash soup with zhoug and toasted pumpkin seeds
Muhammara
Smoked cod's roe and beetroot dip
Kentucky pulled lamb
Kichri-kedgeree
Smoky black dal with eggs
Smoked mackerel and charred cauliflower gratin with smoked chilli breadcrumbs
Bacon and split peas with a quick mustard pickle
T is for **Toast**
Burnt toast powder
White beans on toast
Duck and sherry pâté with pickled figs and pistachios
Southern cheese on toast
Salmon and coriander tartare with avocado and wasabi cream on toasted rye
Mexican torta with black beans, chorizo, avocado and goat's cheese crema
U is for **Umami**
Shrimp and grits with bacon and Parmesan
Courgette fritters with bagna cauda hollandaise
Ox cheeks braised in Marmite
Chargrilled Caesar salad
Crunchy soy-braised pig's tails
Broccoli and edamame salad with Korean dressing
Dashi pickles
Green lamb kebabs
V is for **Violets and other edible flowers**
Crab with ricotta and lemon zest and an elderflower and cucumber salad
Fig and goat's cheese olive oil flatbread with lavender honey
Geranium and apple snow
Marzipan violets
Scandi saffron buns
Shrikhand, or spiced saffron and pistachio yoghurt
Rose petal vodka
W is for **Wild**
Roast new potatoes with wild garlic dressing
Scrambled eggs with crab and samphire
Wild garlic bread
Michaelmas mess
Almond rice pudding with blackberry and apple compote
Bramble old-fashioned
X is for **Xmas**
Bread and walnut sauce
Georgian aubergine rolls with walnut sauce and pomegranates
Brussels sprout, hazelnut and lemon zest salad with goat's cheese
Spiced pumpkin and Parmesan pie with chestnuts
Turkey mole poblano
Tangerine and pomegranate salad with spiced Pedro Ximénez syrup and Marcona almonds
Y is for **Yeast**
Georgian cheesebread (khachapuri)
Buckwheat pikelets
Pissaladière
Marmite and cheese mini doughnuts
German plum bread with almond cream
Wholesome loaf
Z is for **Zest**
Slow-roast tomato pasta with lemon salt, ricotta and basil
Mediterranean ceviche
Peach and mozzarella salad with crispy lemon zest and basil
Candied peel
Pistachio and pink grapefruit cake
Chocolate orange cheesecake
Pomelo sour
**Acknowledgements**
**Stockists and links**
**Follow Penguin**
##### For Molly and Theo, who were cooking at the same time
## Introduction
This is a book for people who are beyond cookbooks. For those brave souls who feel they have a fairly firm grasp on the basics, who know it's easier to make tomato sauce than to go out and buy a jar (even if they don't always bother), for whom fish holds no fear and baking birthday cakes is a cause for celebration, not panic – in other words, people who can already cook. (Or who, like me, feel they're on their way there at least.)
This is not a book to teach you how to fry an egg, or make a hollandaise; I reckon I've covered that all fairly comprehensively elsewhere. It's a rough guide when you're hungry for inspiration, not instruction, one I hope will make you look at familiar ingredients in a new way, and welcome new ones with open arms – in the chapters that follow, I've picked out twenty-six food ideas I love, each of which has something special to offer the adventurous cook. From basics like yeast and eggs to delicate wild flowers and decadent jellies, all deserve a place in your personal culinary arsenal.
Much as I enjoy tackling the classics for my Perfect column in the _Guardian_, it's been a real treat for me to cook with a completely free hand here, and an eye-opener too, shaking me out of well-worn gastronomic grooves and encouraging me to look beyond the more obvious possibilities of some of my favourite ingredients.
God knows how much garlic bread I've put away over the years without once wondering how it would taste made with something other than butter – the duck fat version here was a very happy revelation. And who would have thought Guinness would make such great jelly (see here), or that chilli sauce would pair so well with marshmallows (see here)?
Although I selected these ingredients for their culinary potential, in the course of writing this book I've been surprised and delighted anew by just how versatile many of them are. So, in a sense, we go hand in hand together here – I hope you enjoy the ride.
### A few practicalities
Though I won't offer an inventory of useful kitchen equipment here so as not to repeat myself (there's a complete list in _Perfect Too_), I do feel strongly enough about a handful of things to recommend them for getting the best results from the recipes that follow.
Measuring spoons: They're cheap, they last for ever, and they're essential for baking in particular. Teaspoons and tablespoons vary wildly in size depending on their design; measuring spoons do not.
Stick blender: More of an investment this, but a less significant one than the countertop variety, and far easier for soups, sauces and purées. Electric beaters are also very handy for anything more than a small amount of cream or mayonnaise, and cheaper and more versatile than a stand mixer.
Cooking thermometer: Why faff about trying to guess oil temperatures with breadcrumbs, the stages of molten sugar with a glass of water, and the progress of your roast pork by violating it with a skewer, when you can harness the wonders of modern technology and know for sure? Digital varieties with a probe on a lead are the most practical (or the point-and-shoot ones are even better if you can run to one).
Decent food processor: These don't come cheap, but they're almost infinitely useful. Make sure you get one with a small bowl for smaller amounts. I'd also suggest a pestle and mortar for crushing and a mandoline for super-thin slicing; invaluable for the potato pie here or the pissaladière here. (Always use the guard supplied for the last; I lost a fingertip to perfect dauphinoise.)
Oven thermometer: Few ovens, if any, are the same temperature from top to bottom, and many vary quite considerably from that shown on the dial. An oven thermometer will give you a good idea of how yours performs, and allow you to adjust cooking temperatures accordingly. My oven has a fan in it to help distribute the heat more evenly; if yours doesn't, then you'll need the higher conventional temperatures in the recipes that follow, or indeed the relevant gas mark if it's gas fired. Bear in mind that, baking aside, the temperature or cooking times are rough guides only; you should be the one to decide when your food is ready based on how it looks and smells. Trust your instincts.
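Since every recipe in this book states its temperature in the same "200°C/fan 180°C/gas 6" format, the relationship is easy to write down. A minimal sketch — the fan figure is simply the conventional temperature minus 20°C, and the gas-mark table below is the usual British approximation matched to the pairs these recipes use, not an exact conversion:

```python
# Rough British oven-temperature equivalents, as used in the recipes
# (e.g. "160°C/fan 140°C/gas 3"). Gas-mark values are approximations.
GAS_MARK_TO_CONVENTIONAL_C = {
    1: 140, 2: 150, 3: 160, 4: 180, 5: 190,
    6: 200, 7: 220, 8: 230, 9: 240,
}

def fan_equivalent(conventional_c: int) -> int:
    """Fan-oven setting for a given conventional temperature (rule of thumb: -20°C)."""
    return conventional_c - 20

def describe(gas_mark: int) -> str:
    """Format a temperature the way the recipes do, e.g. '200°C/fan 180°C/gas 6'."""
    conv = GAS_MARK_TO_CONVENTIONAL_C[gas_mark]
    return f"{conv}°C/fan {fan_equivalent(conv)}°C/gas {gas_mark}"
```

So if your oven lacks a fan, simply cook at the first (higher) figure; `describe(6)` reproduces the "200°C/fan 180°C/gas 6" used in the polenta recipe.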
All eggs in this book are medium unless otherwise specified. And all recipes are fair game for playing about with – please feel free to use them as a starting point for some new favourites of your own. Happy cooking!
## A is for Almond

I've thought long and hard about it (seriously), and almonds are definitely my favourite nut. Pistachios have their considerable merits, and it's surprisingly tough to imagine life without the humble peanut, but neither can touch the almond for elegance and versatility.
No nut slips so easily between the sweet and the savoury, or blends as happily into a rich Indian curry as a delicate French pastry, or indeed makes such an addictive accompaniment to a salty sherry. Truly the almond is the king of nuts. (Or perhaps the queen. Those lovely curves are decidedly feminine.)
Almonds are a booming business, in demand worldwide, but only happy in a very narrow climatic region, with mild wet winters and warm summers – and they're priced accordingly. California is the world's largest producer, followed by Spain and Italy, though they're cultivated from Afghanistan to Australia and I have heard boast of fruiting trees in UK gardens.
Strictly speaking, they're not a true nut at all, but a drupe, part of the _Prunus_ family, where their closest relative is the peach – look at an unshelled almond and a peach stone, and you'll see the resemblance. We prize most drupes for their juicy flesh, but that of the almond is thin and fibrous; instead, the real prize, the kernel, is hidden inside the pit. But this family has a dark side too: bitter almonds are laced with cyanide, and just a handful can prove fatal (though the flavour is so pungent that few people are likely to eat more than one). That said, the same highly aromatic quality that proved so useful to Agatha Christie's amateur detectives makes this variety popular for flavouring purposes: as hydrocyanic acid breaks down upon heating, they're perfectly safe to use in cooking. A little is usually added to marzipan (as in the marzipan violets here) and amaretti biscuits to round out the flavour, and anything sold as pure almond extract is likely to have been made from them too. Beware of artificial flavourings, which are only worth a look if you're catering for a nut allergy, and don't confuse almond extract with sweet almond oil, which is better for massage than marzipan.
Almonds have health benefits too – they're a high-protein snack rich in healthy monounsaturated fats, which is why they fill you up annoyingly fast (though they taste so good it's tempting to power through and finish the bowl anyway), and an excellent source of vitamin E, which is good for peachy skin. (There's also some evidence to suggest the latter slows the onset of Alzheimer's disease, but this is not yet conclusive at the time of writing.)
### Cooking
Almonds have been part of the Arabic and European culinary tradition for thousands of years – the original medieval blancmange, or 'white food', was made from shredded chicken, rice, sugar and almond milk (for a modern version, see here) – and they're still an important ingredient in the confectionery industry in the form of marzipan.
Soft and malleable, this was once a wildly popular choice for sculptural table centrepieces. If you'd like to render a marzipan manticore of your own, it's incredibly easy to make a basic version at home from ground almonds, sugar and egg (see the marzipan violets, here).
You may be lucky enough to find young green almonds in Middle Eastern grocers which, crunchy as a cucumber and refreshingly sour, can be eaten whole, but you're more likely to find the dried variety in most shops. Keep the skins on whenever possible – blanched almonds may be better for cooking, but if you're just popping them into your gaping maw, your body will thank you for it. Toasting always improves the flavour.
## Spicy almond butter dressing
##### serves 6
3 tablespoons almond butter
Juice of 1 lime
2 teaspoons kecap manis sauce (see intro)
1 teaspoon soy sauce
1 garlic clove, crushed
½ a small red chilli, deseeded and finely chopped
This sweetly nutty salad dressing works particularly well on a colourful Asian-style coleslaw of grated carrot, shredded red cabbage, cucumber ribbons and pepper slices, and it's also rather good with a cold rice noodle salad topped with chicken, pork or prawns, and a fistful of chopped coriander, mint or Thai basil.
If you can't find almond butter in the supermarket, health food shops almost always have it (like the peanut variety, it's great in a banana sandwich, or spread on slices of apple) – note that it may well need stirring back together before use. Kecap manis is a thick, treacly Indonesian soy sauce often stocked in the speciality ingredients aisle, as well as Asian supermarkets.

1. Whisk together the almond butter, lime juice, kecap manis and soy sauce with enough warm water to make it smooth and pourable – exactly how much will depend on the consistency of your almond butter.

2. Mix in the garlic and red chilli and taste, adding a little more kecap manis if you'd like it sweeter, soy sauce if you'd prefer it saltier, or lime for sourness.
## Chilled almond soup with mojo rojo
##### serves 4
200g blanched almonds
100g fresh white breadcrumbs
200ml olive oil
4 tablespoons natural yoghurt
1 teaspoon honey
A squeeze of lemon juice
##### _For the mojo rojo:_
4 dried ancho or other small red chillies
½ teaspoon cumin seeds
1 teaspoon coarse salt
1 teaspoon smoked paprika
2 small garlic cloves
50ml olive oil
1 teaspoon sherry vinegar
A late but enthusiastic convert to the cult of the chilled soup, once I'd mastered the art of gazpacho, ajo blanco, a punchy garlic soup thickened with ground almonds, and another stroke of Spanish genius, was next on my hit list. But, much as I love garlic (see here), I wanted to celebrate the natural sweetness of the nuts too.
This version separates the two, replacing the more usual grape garnish with a fiery chilli-red condiment from the Canary Islands, mojo rojo, which looks very fetching pooling against the smooth whiteness of the soup. Its pungency will vary according to the variety of dried chilli you use – it should be fiery, but not, of course, inedible.
1. Toast the almonds in a dry frying pan until beginning to colour. Set aside to cool. Meanwhile, soak the dried chillies for the sauce in hot water for 20 minutes.
2. Roughly chop the almonds and put into a food processor with the breadcrumbs and 400ml of cold water and whiz until smooth. Add the oil and yoghurt and whiz again until smooth. You can pass it through a sieve at this point if you're a perfectionist, but I have to admit I'm not averse to the odd shard of almond in my soup.
3. Stir in the honey and a squeeze of lemon juice and season to taste, then chill until ready to serve.
4. Toast the cumin seeds in a dry frying pan until fragrant, then allow to cool. Put into a pestle and mortar with the salt and crush finely, then add the smoked paprika and garlic and crush again. Add the drained chillies, having first removed any stalks (or transfer the lot to a mini chopper or the small bowl of a food processor if you have one), and crush or whiz until you have a smoothish paste. Stir in the oil and vinegar and check the seasoning.
5. When you're ready to serve, divide the soup between shallow bowls and dollop a few small blobs of mojo across the surface – it's powerful stuff, so don't go overboard. Put the rest on the table for people to help themselves to.
## Sicilian almond and tomato pesto
##### serves 4–6
120g blanched almonds
400g sweet cherry tomatoes
2 small garlic cloves, crushed
A handful of mint, leaves only
4 tablespoons extra virgin olive oil
I was beside myself with excitement when I happened upon a jar of Sicilian _pesto alla trapanese_ in the Polish-run Italian deli down the road – it felt as if it had been placed there by God, specifically to catch my almond-loving eye, and even at £4 a jar, I was sold.
But, good as it was, like all such sauces, it's even better fresh. Already lighter and more summery than traditional Genovese pesto, thanks to the addition of tomatoes and almonds, in a further nod to that sunny island's Arab heritage I've used mint rather than basil, and left out the cheese, although I do like to grate a little pecorino over the top before serving. (OK, quite a lot.)
If it's not tomato season, and you're faced with a tray of sour orange gobstoppers, replace some of them with a jar of semi-dried tomatoes instead (or the baked versions here), or the pesto will be watery and bland.
1. Heat the grill to maximum. Toast the almonds on a greased baking tray for a couple of minutes until just beginning to colour, being careful they don't burn. Spread out on a cold surface to cool, and meanwhile, put the tomatoes on the baking tray and grill until beginning to char.
2. Once the almonds are cool, very roughly chop and put into a food processor with the crushed garlic. Whiz until most are fairly finely chopped, with a few larger shards, then add the tomatoes and mint and pulse until just combined. Pour in the oil and whiz again, then season to taste and use as you would ordinary green pesto.
## Chicken korma
##### serves 4–6
4 tablespoons double cream
1 teaspoon saffron
1 tablespoon rosewater
4 tablespoons ghee
6 green cardamom pods, lightly crushed
2 cinnamon sticks, lightly crushed
4 cloves, lightly crushed
8 chicken thighs, skinned and boned, or 1 small chicken, jointed, skinned and boned
2 onions, finely sliced
2 tablespoons finely grated ginger
6 garlic cloves, crushed
100g ground almonds
1 teaspoon sugar
½ teaspoon ground nutmeg
250ml natural yoghurt
½ teaspoon salt
3 black cardamom pods, seeds only, ground (optional)
Mild and creamy, korma tends to be viewed as a starter curry in this country, the preserve of children and the spice-shy elderly, when in fact its delicate nutty flavour made it one of the most celebrated dishes in the repertoire of the Mughal court. Though it was a favourite at imperial banquets, its sugary British incarnation bears little resemblance to the nutmeg-, saffron- and rosewater-heavy original.
This version differs from the cashew-based one I wrote for the _Guardian_, being slightly sweeter and somewhat less thick. It's still best served with plain basmati rice or naan though, with perhaps a sharp vegetable pickle, and some fruit afterwards.
1. Heat the cream gently in a small pan or the microwave until hot, but not simmering, then stir in the saffron and half the rosewater. Set aside.
2. Heat half the ghee in a large lidded frying pan or casserole dish on a medium-high heat and fry the whole spices until aromatic. Scoop out with a slotted spoon and set aside.
3. Brown the chicken, in batches, until golden, adding more ghee if necessary, then set aside. Add the onions to the pan, turn the heat down slightly and cook until soft and starting to brown.
4. Add the ginger and garlic and cook for a couple of minutes, then add another spoonful of ghee, stir in the almonds, sugar and nutmeg, and return the spices to the pan.
5. Stir in the yoghurt, along with 150ml of water and the salt, and mix to make a gravy. Put the chicken back into the pan and simmer gently for about 40 minutes, until the meat is cooked through and the gravy thick and rich.
6. Stir in the crushed black cardamom if using, plus the saffron-infused cream, and season to taste, adding the remaining rosewater if necessary (they vary greatly in strength).
## Salted almond toffee
##### makes about 45 toffees
150g salted almonds, roughly chopped
225g caster sugar
45g soft light brown sugar
170g butter
260g golden syrup
480ml double cream
1 teaspoon vanilla extract
1 teaspoon salt
This is a homage to my granny's love of bags of brazil nut toffee – given that she was never shy of the salt cellar either, I hope she would have approved.
I think a digital thermometer is so worth buying for any number of culinary tasks that I haven't bothered including instructions for checking the set without one, but you can easily find them online.
1. Tip the chopped almonds on to a small baking tray lined with greaseproof paper. Slowly bring the sugars, butter, golden syrup and half the cream to the boil in a large pan, stirring until the sugars have dissolved.
2. Very gradually stir in the remaining cream, being careful not to disturb the boil. Simmer until it reaches 130°C on a digital or sugar thermometer, then quickly stir in the vanilla and salt. Tip on to the tray, smooth out quickly and leave to set. Once it's firmed up a bit, cut into squares and leave to harden completely.
## Almond, honey and fig cake
##### makes 1 x 20cm cake
4 tablespoons honey
1 tablespoon lemon juice
8 whole dried figs
##### _For the cake:_
300g ground almonds
2 teaspoons baking powder
A pinch of salt
3 large eggs
120ml honey
120ml olive oil
I'm a sucker for a dense, moist almond cake – my standard recipe, the sticky orange version in my second book, _Perfect Host_ , also includes semolina flour, but this one is all nut, which makes it satisfyingly sweet and squidgy. It's more of a dessert cake than a teatime slice, with a vaguely southern Mediterranean or Middle Eastern feel to it thanks to the olive oil and dried figs. You can continue the theme by serving it with thick Greek yoghurt, although I like the tanginess of crème fraîche. Interested parties should note that this is both gluten and dairy free.
1. Stir the 4 tablespoons of honey into 4 tablespoons of boiling water and add the lemon juice and figs. Leave to soak for at least an hour.
2. Heat the oven to 160°C/fan 140°C/gas 3 and grease and line a 20cm cake tin.
3. Put the almonds, baking powder and salt into a large mixing bowl and whisk to combine and break up any clumps. Whisk together the eggs, honey and olive oil in a separate bowl, then stir into the dry ingredients.
4. Pour into the prepared tin and arrange the figs on top, reserving the soaking liquid. Bake for 40–50 minutes, until golden brown on top, and firmish.
5. Use a cocktail stick to poke a few small holes in the top of the cake and pour over the fig soaking liquid, waiting for the cake to absorb each dose before adding more. Allow to cool completely before removing from the tin.
## B is for Blue Cheese

I must admit, I hesitated before choosing blue cheese for B. Not because I don't love it (my mum claims she once found the infant me casually munching a wedge of Stilton I'd grabbed from the supermarket shelf), but because so many other, apparently sane people do not.
The fact is, blue cheese tastes like nothing else on earth, and although you're under no obligation to love the stuff, I think all decent cooks should at least have a passing acquaintance with its rich, tangy, gorgeously savoury flavour. One day you'll realize what you're missing.
### Those mouldy bits
The bit that seems to freak many people out about blue cheese is the mould, though I suspect few know that we have the comfortingly familiar-sounding penicillium to thank for it, rather than anything nasty. This was traditionally introduced by leaving cheese in caves where such fungi grow naturally, but these days is injected straight into the curd (though Roquefort, the world's oldest blue cheese, is still matured in the same damp _grottes_ it has been for centuries, carefully watched over by attendants who, according to the late Alan Davidson, 'enjoy particularly good health as a result of sharing this strange environment with the cheeses'. Retirement plan sorted).
The veining is created by piercing the cheeses with needles loaded with the relevant penicillium, encouraging mould to develop along these paths of least resistance and then spread outwards, which is why, when you buy a large chunk of Stilton, you may notice tiny holes in the rind.
Most blue cheeses have been injected with _Penicillium roqueforti_ or _Penicillium glaucum_. As well as creating the veins, these moulds produce lipase enzymes which break down the fats in the milk into fatty acids and thus flavour. Quite a lot of flavour in fact; those suspicious of blue cheese should start with a mild variety like Dolcelatte, St Agur or Cambozola (which resembles a slightly spotty Brie) and work slowly upwards to a decent Roquefort.
### The three kings
Somewhat appropriately for a food often associated with Christmas time, the classic blue cheeses of Europe are known collectively as the three kings. King Emperor, in my opinion, is our very own Stilton, a hard cow's milk cheese from the English Midlands, but Roquefort, a softer ewe's milk cheese from the south-west of France, and the northern Italian cow's milk Gorgonzola also have their charms. In general, I favour the creaminess of the last for cooking, although the other two work well in dishes which make a virtue of their unapologetic saltiness.
It's no coincidence that festive favourite Stilton is at its best in December, when the cheeses made with the rich summer milk are matured to perfection. Although to qualify for the name they must be aged for a minimum of nine weeks, older 'vintage' varieties will have a far superior flavour. According to Paxton & Whitfield, who have been pushing cheese since 1742, ripe Stilton should be neither white nor crumbly but creamy both in colour and in consistency.
Roquefort, meanwhile, which is matured for at least three months in those famous caves, should, like Stilton (and indeed all blues), be well marbled, but the colour of the cheese itself will be paler thanks to the sheep's milk involved, with an almost buttery sheen.
Gorgonzola comes in two forms: piccante (or naturale) and dolce. Piccante is the traditional drier, more pungent sort, while the dolce is a younger, unpressed, creamier version introduced in the post-war period in response to market demand for milder cheeses (interesting research could no doubt be conducted into the effect of politics on people's taste buds). The dolce is whiter and more moist in texture, while the piccante is closer to a dark ivory; in fact, the colour of the cheese is a good indication of its strength.
Other favourites of mine include Dorset Blue Vinney, which is slightly sharper and spicier than Stilton, Tipperary's 'voluptuously creamy' Cashel Blue (I can't improve upon Neal's Yard Dairy's wonderful description), the buttery Jersey milk Barkham Blue from Berkshire and an unusual goat's milk variety from Devon, Harbourne Blue. All of these can be found at specialist cheesemongers, both on the high street and online.
### Storage
If your cheese has come wrapped in clingfilm or any kind of sweaty plastic, your first task should be to remove it to let the cheese breathe; you can buy waxed cheese paper online, but I rewrap mine in greaseproof paper or, at a pinch, foil.
That done, in the absence of a cool pantry or cellar, the salad drawer of your fridge is the best place for it. If there's no room, as is generally the case at Christmas time, don't panic; Paxton & Whitfield's cellars are 12°C, so anywhere dry, with a fairly cool, stable temperature, will do just fine, whether that's a garage, porch or car boot. If you're storing it for a while, putting something fresh like a carrot in with the cheese will stop it drying out.
Before serving, it's important to let it come to room temperature first, or the flavours will be muted – take it out about an hour ahead of time, but keep it wrapped so it doesn't lose moisture.
See also: Venison and port casserole with Stilton dumplings (here).
## Polenta with Gorgonzola and honeyed hazelnuts
##### serves 2 (but easily doubled)
1 litre weak chicken stock
150g polenta
100g Gorgonzola, crumbled
##### _For the honeyed hazelnuts:_
40g hazelnuts, skin off
1 tablespoon runny honey, plus extra to drizzle
It took me quite a long time to get the point of polenta, to see that its very blandness was a virtue, a warm, comforting blank canvas for any kind of flavour you care to throw at it, whether that's delicate wafers of cured pork fat in Bergamo or vast amounts of melted cheese, sausage and egg up in the Italian Alps.
This makes a very satisfying dinner for two on a cold evening – you can serve it just as is, and eat it with a spoon, but some steamed greens (cavolo nero, for example) will salve your conscience.
1. Heat the oven to 200°C/fan 180°C/gas 6. Toss the nuts together with the honey and some salt and bake on a lined tray for 15 minutes, shaking occasionally, then leave to cool; they'll firm up as they do so.
2. Bring the chicken stock to the boil in a large pan, then sprinkle over the polenta, stirring as you do so. Cook for about 30 minutes over a very gentle heat, stirring regularly, until soft and thick. Stir in most of the cheese, reserving a little as garnish, allow it to melt, and season to taste.
3. Divide the polenta between bowls, then roughly chop the nuts and sprinkle on top along with the remaining cheese. Drizzle with a little honey to serve.
## Leek and Stilton steamed pudding
##### serves 4–6
4 tablespoons butter
500g trimmed leeks, sliced into 2cm rounds
A whole nutmeg, to grate
2 tablespoons flour
300ml milk
100g crumbled Stilton
##### _For the suet pastry:_
250g plain flour
2 teaspoons baking powder
¼ teaspoon salt
½ teaspoon English mustard powder
105g chopped suet (vegetarian or otherwise)
3 sprigs of thyme, leaves finely chopped
Oil, to grease
Suet puddings are so much lighter than they sound, with a wonderful fluffiness thanks to this hard fat's high melting temperature, which means the pastry sets around it before it dissolves to leave a network of tiny holes in its place. There's also a pleasing theatre to cutting into one at the table to reveal the treasures concealed within.
The plain leek variety, a speciality of the English north-east, was introduced to me by a _Guardian_ reader – this is my variation, rich with nutmeg and cheese.
1. Melt half the butter in a large frying pan over a medium-low heat and add the leeks. Grate in a good pinch of nutmeg and season, then cook until soft but still keeping their shape.
2. Scoop the leeks out of the pan and add the remaining butter to the pan in their place. Sprinkle over the flour and cook for a couple of minutes, stirring, then gradually add the milk, stirring all the time to incorporate the flour. Cook, stirring, until it thickens into a white sauce, then mix in the cheese, and once it's melted, gently fold in the leeks.
3. Sift the flour and baking powder into a mixing bowl and add the salt and mustard powder. Rub in the suet briefly to mix, then add the thyme and enough cold water (about 150ml) to bring it to a firm dough. Pinch off a quarter of the dough and set aside, then roll out the rest to about ½cm thick. Grease a 1 litre pudding basin generously, and use the pastry to line it, being careful not to stretch it more than you have to.
4. Fill the pastry with leeks and sauce, stopping about 2cm from the top, then roll out the rest to make the lid and stick it on with a little cold water. Cover the basin with foil, leaving enough slack for the pastry to rise, and fashion a handle out of string to lift the basin out of the water.
5. Put the pudding into a large pan half-filled with boiling water, then cover and simmer for 2 hours, checking the water level regularly and topping up with more boiling water as necessary. Turn out and serve immediately.
## Roquefort and honey cheesecake with walnut and pear
##### serves 10–12
##### _For the base:_
200g plain, finely milled oatcakes
70g walnuts
125g melted butter, plus extra to grease
3 tablespoons honey
##### _For the topping:_
400g cream cheese
200g Roquefort, crumbled
3 eggs, beaten
3 tablespoons honey
1 pear
Because it's so gorgeously rich, a little of this goes a long way, which makes it perfect to feed a festive crowd – salty sweet, with a crunchy oatcake base, it's best served still quivery and warm (emphatically not hot), preferably accompanied by a sharply dressed green salad. And don't worry that the ratio of base to topping seems unusually high – it works, I promise.
1. To make the base, whiz the oatcakes and 50g of the walnuts in a food processor until finely chopped, then drizzle in the melted butter and the honey and whiz to combine.
2. Grease a 23cm springform tin with butter, making sure the bottom half of the sides is particularly generously greased. Press the mixture down firmly into the base of the tin. Whiz the remaining walnuts until finely chopped, then add to the tin and rotate it on its side so it is coated with walnut crumbs to about halfway up. Chill for at least an hour.
3. Heat the oven to 130°C/fan 110°C/gas ½. Beat together the cheeses until well combined, then beat in the eggs, one at a time, followed by 1 tablespoon of the honey and some black pepper. Pour into the tin and bake for 1½ hours, then remove from the oven and leave to cool in the tin while you finish the topping.
4. Turn the oven up to 200°C/fan 180°C/gas 6. Thinly slice the pear, removing the stalk, and put it on a greased baking tray. Brush with half the remaining honey and bake for 15 minutes.
5. Heat the grill, brush the pear slices with the rest of the honey and grill for about 5 minutes, until beginning to brown. Arrange on top of the cheesecake and serve warm, but not hot.
## Wedge salad with quick pickled onions and buttermilk blue cheese dressing
##### serves 4
4 rashers of smoked streaky bacon
A handful of pecans
200g Roquefort or other strong blue cheese
180ml soured cream
120ml buttermilk
4 tablespoons cider vinegar
2 teaspoons soft light brown sugar
1 iceberg lettuce
A small bunch of chives
##### _For the quick pickled onions:_
50ml cider vinegar
2 teaspoons soft light brown sugar
½ a small red onion, very thinly sliced
In America salads are a serious business, and the classic wedge is a case in point – perhaps the only decent use I've found for the iceberg lettuce. Its blandly juicy sweetness and frankly awesome crunch are the perfect foil for a rich blue cheese dressing that knocks the socks off that gloopy stuff they used to serve at the Deep Pan Pizza salad bar.
1. Start with the onions. Whisk together the vinegar and sugar to dissolve, then add the thinly sliced onion. Leave to sit for at least half an hour before use, though longer won't hurt – they keep well in the fridge.
2. Cook the bacon in a dry frying pan over a medium heat until crisp and brown. Set aside on kitchen paper to dry, and toast the pecans in the same pan until fragrant. Tip on to the kitchen paper and allow to cool.
3. Meanwhile, to make the dressing, crumble the Roquefort and set half aside. Put the other half into a jug with the soured cream, buttermilk, vinegar and sugar and whiz with a hand blender until smooth. Add the remaining crumbled cheese and taste; season if necessary.
4. Roughly chop the bacon and pecans. Cut the lettuce into 4 wedges. Put each on a plate and spoon over the dressing. Sprinkle over the bacon, pecans and a few of the drained onion slices, and snip over the chives to serve.
## Blue cheese creamed spinach
##### serves 2 generously or 4 with other sides
400g spinach
125ml single cream
50g creamy blue cheese, crumbled
2 tablespoons butter
1 garlic clove, finely chopped
A whole nutmeg, to grate
You can't go to the States without visiting a steakhouse, and you can't order an enormous T-bone without getting a mound of creamed spinach on the side. It's the rules.
Beefing it up with blue cheese makes it so delicious that you hardly need the steak – that said, it's also worth trying it tossed with pasta or gnocchi, heaped on a baked potato or polenta, or alongside a Sunday roast.
1. Blanch the spinach briefly in plenty of salted boiling water until wilted, then run under cold water to cool immediately. When cool enough to handle, squeeze out very well.
2. Simmer the cream in a small pan until it has thickened slightly and is beginning to smell faintly caramelized (almost an evaporated milk type smell). Crumble in the cheese, stir well to melt, and set aside.
3. Heat the butter in a frying pan until just beginning to brown, and sauté the garlic for a couple of minutes. Add the spinach and stir to coat, grate in a little nutmeg, and stir in the cream. Check the seasoning (it probably won't need any salt) and serve.
## Poached plum crumble with blue cheese ice cream
##### serves 4–6
##### _For the ice cream:_
100g Dolcelatte
4 egg yolks
300ml whole milk
100g honey
200ml whipping cream
A whole nutmeg, to grate
##### _For the crumble:_
800g plums
25g soft light brown sugar
½ teaspoon ground cinnamon
50g walnut pieces, toasted in a dry pan until fragrant
150g spelt flour
125g cold butter, cubed
50g demerara sugar
¼ teaspoon salt
I'll admit that cheese ice cream does tend to sort the adventurously minded sheep from the more conservative, or perhaps sensible, goats, but the sweetness of the honey and the freezing temperature both work to temper the saltiness of the Dolcelatte into a rich, subtle tang which works brilliantly with the plums and walnuts. Promise.
1. To make the ice cream, which needs to be done a good few hours ahead, finely chop or crumble the cheese and put into a large heatproof bowl with a sieve set over the top. Put the egg yolks into a medium heatproof bowl by the hob, then warm the milk to a simmer in a heavy-based saucepan, stirring regularly. Meanwhile, gently heat the honey in a small saucepan until runny, then set aside.
2. As soon as the milk comes to a simmer, take it off the heat and pour on to the egg yolks, whisking frantically. Return the mixture to the pan over a medium-low heat and stir until it thickens sufficiently to coat the back of a wooden spoon (it should be thick enough for you to be able to draw a distinct line in it with your finger).
3. Pour the mixture through the sieve into the cheese and stir until this has melted into the custard, then mix in the cream, honey and a generous pinch of grated nutmeg.
4. Allow to cool, then churn in an ice cream maker (or see instructions for still freezing here) and freeze.
5. Heat the oven to 190°C/fan 170°C/gas 5. Cut the plums in half, remove the stones and arrange in one tightly packed layer in an ovenproof dish. Mix together the sugar and cinnamon and scatter on top of the plums. Pour 4 tablespoons of water into the dish, and bake for about 30 minutes until they are soft, but still keep their shape.
6. Meanwhile, whiz the toasted walnuts in a food processor until relatively finely chopped. Add the flour and butter and pulse briefly until the mixture resembles very coarse breadcrumbs, with a few larger lumps. Stir in the sugar and salt, sprinkle with a little cold water and rake with a fork to make a lumpy, crumbly mixture. Put this into the freezer for 10 minutes.
7. Take the plums out of the oven and keep warm, then turn the oven up to 220°C/fan 200°C/gas 7. Spread the crumble out on a baking tray and cook for 15–20 minutes, until golden.
8. Get the ice cream out of the freezer 10 minutes before you want to serve. Put a spoonful of plums and their juices on each plate, scatter with crumble, and top with a scoop of ice cream.
The relentless rise of caramel in recent years, powered largely by the advent of the excellent salted variety, has improved life immeasurably. I've always preferred toffee to chocolate, favoured fudge over fruit pastilles, and once managed to inadvertently eat an entire tub of pralines and cream ice cream on the way to the freezer (in my head, I was just tidying up the melted bit). The addition of salt has only served to fan the flames of my passion; it gives caramel depth and balances its intense sweetness, allowing you to eat more. In short, I see salted caramel as an indisputably great leap forward for mankind.
Caramel, strictly speaking, is what you get when you heat sugar above 170°C (i.e. that scary point where it hovers between something delicious and something black, smoky and welded to the base of your pan), but the term is generally used to refer to all sorts of products of the sugar browning process, most of which occur at less terrifying temperatures.
When things begin to change colour, the flavour starts to get interesting, whether that's sugar itself, or the milk solids in a caramel sauce. Heat is the magic ingredient in all the recipes that follow, contributing acidity and, eventually, a bitterness bordering on the smoky to the basic, one-dimensional sweetness of white sugar.
It's heat that's responsible for breaking down sucrose into glucose and fructose, and then, eventually, into hundreds of different compounds, each with its own distinct flavour – bitter, yes, but also buttery, nutty and toasty. Heat can turn milk proteins rich and chewy, as in the dulce de leche recipe here, and heat can transform a thin sugar solution into those rich, viscous syrups that drop ever so slowly from a spoon on to your morning porridge. In short, heat + sugar = bliss.
### Making caramel
It's wise to treat anything which can reach such searing temperatures with respect, but that said, it's not a difficult skill to master. The most important ingredient here is patience; heat works in funny ways, so it's vital to give the sugar your full attention, rather than keeping half an eye on it while doing something else.
The simplest way of making caramel, by heating sugar in a dry pan, is not the easiest; the sugar tends to cook unevenly. Dissolving that sugar in water helps distribute it more evenly around the pan, which avoids this problem.
It's important not to interfere with the sugar as it melts, or you risk bringing undissolved sugar crystals into contact with dissolved ones, which can (if you're unlucky) cause the whole lot to solidify.
If this does happen, and you find yourself staring at a curiously pretty, but obstinately solid panful of crystals, the best thing to do is to take it off the heat, stir in a couple of tablespoons of hot water, and then continue stirring until you have a syrup again. You should then be able to proceed with the recipe as written. As you may have guessed, a sugar thermometer will make your life a lot easier here, saving you endless fiddling about with molten sugar and cold water. They're not expensive; if you're using a non-electronic one, ensure the bulb isn't resting on the base of the pan, as not only will this affect the accuracy of the reading, but it may cause it to overheat and shatter.
The other thing that will make the whole process less painful is filling the sink a quarter full of cold water, and boiling a kettle before you begin. The sink is to cool the caramel down rapidly if it looks like it's about to burn, and the kettle is to pour into the pan as soon as you've poured the finished caramel out – it will save you a lot of scrubbing later, although if you do burn anything to the base, putting the pan of hot water back on the heat will make slightly lighter work of it.
Lastly, lest all this talk of high temperatures hasn't hammered it home already, sugar gets very, very hot and will set on your tender flesh alarmingly quickly, so never dip a finger in it to taste, and stand back when adding cold liquid like cream for the sake of your precious eyesight.
See also: Salted almond toffee (here), Marathon pie (here).
## Roast duck with miso caramel
##### serves 4–6
1 duck, approximately 2kg
1 teaspoon baking powder
Coarse salt
A splash of white wine for gravy (optional)
##### _For the miso caramel:_
170g white sugar
120ml double cream
2 tablespoons white miso
Fermented soy beans may not be an ideal pairing with ice cream, but the intense combination of slightly smoky sugar and umami-rich miso is heaven with rich, fatty roast duck. Although the skin won't go as earth-shatteringly crisp as the well-tanned Cantonese window-decorations in Chinatown, it will develop a burnished lacquer just as addictive.
I like it served with some plain rice and steamed greens, but am also tempted by the idea of roast potatoes or root veg cooked in its fat – if you don't go down that route, try it in garlic bread (see here). Note that the caramel is also very good with pork belly.
1. Remove the duck from any wrappings and lightly score the skin on the breast. Rub with baking powder and coarse salt and put into the fridge, uncovered, for at least 24 hours to dry out.
2. Take the duck out of the fridge an hour before cooking to bring it to room temperature. Meanwhile, make the caramel. Put the sugar into a wide pan and pour over 60ml of water. Bring to the boil over a medium-high heat, swirling the pan initially to help dissolve the sugar.
3. When the caramel turns a rich amber colour, take off the heat and stir in the cream until well combined, then stir in the miso. Set aside.
4. Heat the oven to 180°C/fan 160°C/gas 4. Boil a full kettle of water and pour it over the duck, then leave to drain for 5 minutes. Dry well with kitchen paper and put into a roasting tray.
5. Coat the duck with caramel (if it's too solid to spread, put it back on a gentle heat for a couple of minutes), then roast for 30 minutes. Pour off the fat and baste again with caramel.
6. Return the duck to the oven and roast for another hour, basting every 15 minutes with the caramel and pouring off fat as required, until it is a deep burnished brown. Check the temperature at the thickest part of the thigh; it should be at least 65°C for medium rare, or the juices should run clear-ish; continue cooking until you've achieved this. Allow to rest for at least 20 minutes before carving.
7. To make gravy, spoon the fat from the pan (or use a gravy separator if you have one), then heat the pan on the hob and deglaze with a little wine and some hot water. Bubble for a couple of minutes and season to taste.
## Vietnamese caramel and pork hotpot
##### serves 4
600g pork belly, skin removed
2 tablespoons grated ginger
2 tablespoons minced garlic
4 tablespoons fish sauce
½ teaspoon ground black pepper
4 tablespoons white sugar
250ml coconut water
1 tablespoon soy sauce, to finish
Vietnamese food is justly famous for its fresh flavours, the vast bunches of herbs, tiny, vicious chillies and zingy citrus, but up near the Chinese border you'll also find plainer, heartier dishes, such as this rich, sweet clay-pot pork I was taught at a cookery school in the northern city of Hanoi. It's ridiculously easy to make, and needs nothing more than a mound of steamed rice, and perhaps a few steamed greens, on the side.
1. Cut the pork into bite-sized chunks and put into a bowl with the ginger, garlic, fish sauce and pepper. Cover and leave to marinate for about an hour.
2. Boil 100ml of water in a kettle. Put the sugar into a wide, heavy-based pan over a medium heat and leave for about 3 minutes until beginning to melt, then stir until the grains have dissolved and the sugar is golden. Pour in the boiling water, stirring all the time until the caramel has re-dissolved, then add the pork and stir to coat. Pour in the coconut water and bring to a simmer.
3. Cover, turn down the heat and cook for about an hour and a half, until the meat is falling apart and the sauce has thickened. Stir in the soy sauce and serve with rice.
## Banoffee split
##### serves 4
150ml double or whipping cream
50g pecans
4 bananas
1 tub of coffee ice cream
1 tub of vanilla ice cream
##### _For the_ cajeta _(or use ready-made dulce de leche):_
1 litre goat's milk (or cow's if you're not a fan)
150g white sugar
¼ teaspoon coarse salt
¼ teaspoon bicarbonate of soda
Bananas and caramel, or indeed toffee, are a match made in heaven – two such sweet things shouldn't work so well together, but somehow they do.
This Mexican goat's milk version of dulce de leche, _cajeta_ , has a slightly savoury, farmyardy edge which I love, but use cow's milk instead if you prefer, or indeed, substitute ready-made dulce de leche if you're short on time. With that, ice cream and bananas, you can't really go wrong.
1. To make the _cajeta_ , put the milk, sugar and salt into a large pan over a medium-low heat and bring to a simmer. Meanwhile, dissolve the bicarb in 2 teaspoons of water.
2. Take the milk off the heat and stir in the bicarb (beware, it will bubble up, hence the large pan) then put back on the heat and simmer gently, stirring occasionally, for about 45 minutes, until it's a pale caramel colour.
3. Stir more regularly for about another 30–45 minutes, until it's a deeper, toffee colour; once it starts to thicken, stir continuously to stop it burning. It's ready when it's thickish but still easily pourable. Take off the heat so it doesn't solidify any further (if you've taken it too far, stirring in a little more milk or cream over the heat should thin it down satisfactorily).
4. To assemble the splits, whip the cream to soft peaks and toast the pecans in a dry pan. Cut the bananas in half lengthways and arrange, slightly apart, in shallow bowls.
5. Put a scoop of coffee and two scoops of vanilla in between the two banana halves, then drizzle generously with the _cajeta_. Put a small dollop of whipped cream on top of each scoop of ice cream, and a pecan on top of each dollop of cream. Roughly chop any extras and scatter around the sides with a little more caramel. Serve immediately.
## Pecan, bourbon and salted caramel cookies
##### makes about 15
50g pecans
120g salted butter, at room temperature
75g soft light brown sugar
75g granulated sugar
A pinch of salt
1 egg
3 tablespoons bourbon
240g plain flour
½ teaspoon bicarbonate of soda
80g white chocolate chips
##### _For the toffee (or use 15 small ready-made toffees):_
170g white sugar
2 tablespoons golden syrup
2 tablespoons butter, at room temperature
120ml double cream, at room temperature
1 teaspoon salt
These started off as blondies, but I couldn't get enough of the crisp, overbaked bits stuck to the sides of the pan, which led, inevitably, to the idea of cookies.
The toffee recipe makes more than you'll need for this batch, but it's harder to make in smaller quantities, and I'm sure you'll find some way to dispose of the excess.
1. If making the toffee, you'll need to start at least 4 hours before you want to bake the cookies, to give it time to set. Line a small, shallow tin or dish with greased baking paper.
2. Put the sugar and golden syrup into a medium high-sided pan over a medium heat with 2 tablespoons of water. Swirl the pan to moisten all the sugar, and heat until it has dissolved to a deep amber syrup and the temperature reaches 155°C.
3. Whip off the heat and stir in the butter until melted, quickly followed by the cream and salt. Put back on the heat and cook until the temperature gets back up to 120°C, then pour the toffee into the tin or dish and leave to set.
4. Meanwhile (ideally, do this just after you make the toffee, to give the mixture time to rest), put the pecans into a dry frying pan and toast until fragrant, then roughly chop. Beat together the butter, sugars and salt in a food mixer until well combined, then mix in the egg, followed by the bourbon.
5. Fold in the flour and bicarb, followed by the pecans and chocolate chips. Cover and chill until the toffee is set (or up to 48 hours).
6. Heat the oven to 200°C/fan 180°C/gas 6 and line a couple of baking trays with greased baking paper. Roll the dough into golf-ball sized lumps, then cut or pinch off a little nugget of toffee and tuck it into the middle. Space well apart on the trays and bake for about 15 minutes, until golden. Allow to cool on the trays for 5 minutes, then move to a rack to cool completely (yeah right).
## Salted peanut caramel crispy cakes
##### makes about 12
200g Rice Krispies
100g roasted salted peanuts, roughly chopped
100g butter, diced
100g soft light brown sugar
50ml double cream
40g milk chocolate, broken into pieces
I have a nostalgic fondness for chocolate crispy cakes – these, based on the excellent Paul A. Young recipe for salted caramel with milk chocolate, are both deliciously sticky and dangerously light. The salted peanuts rescue them from overbearing sweetness, though feel free to substitute other nuts, or indeed chocolate chips if you'd prefer.
1. Mix together the Rice Krispies and peanuts. Grease a shallow tray roughly 28 x 18cm.
2. Melt the butter and sugar together in a wide pan until they come to a simmer. Simmer gently for 5 minutes, until amber and beginning to smoke, then take off the heat and stir in the cream, followed by the chocolate and a pinch of salt.
3. Once you have a smooth mixture, stir in the Rice Krispies and peanuts until well mixed and spoon into the prepared tray. Smooth the top and leave to set; once cool, refrigerate to help it along.
## Walnut caramel cream pie
##### serves 6–8
##### _For the base:_
300g dark chocolate digestive biscuits
75g butter, melted
3 tablespoons cocoa powder
##### _For the filling:_
200g walnut pieces
400g white sugar
200g butter, cubed
200ml crème fraîche
1–2 teaspoons flaky sea salt
300ml double or whipping cream
¼–½ teaspoon coffee granules
The gorgeous love-child of a banoffee and pecan pie, this sticky, creamy confection is saved from sugar overload by the slight bitterness of the toasted walnuts and the crunchy dark chocolate base. A little goes a long way, but it's horribly addictive.
1. Roughly break up the biscuits and put into a food processor. Whiz to crumbs, then mix in the melted butter and cocoa powder along with a pinch of salt until well combined. Use to line a roughly 23cm loose-based tart tin, pressing down firmly with your fingers so the mixture goes well up the sides. Cover with clingfilm and refrigerate while you make the filling.
2. Toast the walnuts in a large dry pan, then set aside.
3. Put the sugar into a wide shallow pan along with 250ml of water, making sure all the sugar is moistened. Set over a medium heat and swirl the pan to help the sugar to dissolve, then keep a close eye on it – make sure you have the butter, crème fraîche and salt close to hand.
4. Once it's a rich amber (be careful not to let it get too dark), whip it off the heat and immediately stir in the butter, frantically whisking to melt it. Stir in the crème fraîche and salt to taste (remember to let a little cool on a spoon before tasting) – if you have bits stuck to the bottom of the pan, put it briefly back on a low heat and keep stirring.
5. Spread the walnuts across the bottom of your tart, then pour on the caramel to fill – you may not need it all, depending on the depth of your tin, but excess caramel sauce is never a bad thing to have on hand, and it will store in a jar in the fridge for a few weeks. Leave the tart to cool, then refrigerate until set.
6. Just before serving, whip the cream to soft peaks with ¼ teaspoon of coffee granules, adding a little more if you prefer a stronger flavour. Spoon artistically on top of the tart.
A marvellously apt-sounding name for a very satisfying culinary concept: who can fail to feel a certain warmth towards the dumpling, whether it conjures up nostalgic memories of Granny's cooking, or the clatter of dim sum trolleys on a Sunday afternoon?
Few cultures are oblivious to their charms, from the obvious examples, like Italian gnocchi and Japanese gyoza, to the lesser-known varieties: the wonderfully named _chlupaté knedlíky_ , or hairy dumplings, from the Czech Republic, the meaty Anatolian _manti_ and the Puerto Rican green plantain _bollitos_ , to name just a few friends you may not have met yet.
As _The Oxford Companion to Food_ puts it so beautifully, a dumpling is a food with 'few, indeed no, social pretensions', which evolved, in all its forms, as a way of making a little go a long way. While the nobs may have been feasting on barons of beef, the rest of us had to make do with stretching the same animal's scrawny tail out a bit further by cooking it in a stew, and then topping it with plain, starchy dumplings – or even making those dumplings a meal in themselves.
They still fulfil much the same function today; though we may often cook dumplings because we love them (indeed, like the Yorkshire pudding element of a Sunday lunch, many of us are secretly more excited about these cheap and cheerful sides than the pricier star attraction), they do also permit a certain thriftiness with the other ingredients, allowing you to bypass further carby accompaniment if desired.
Back when many households baked their own loaves, dumplings would have been no more than small pieces of ordinary bread dough. Whatever they're made from nowadays, whether cornmeal or breadcrumbs, suet or olive oil, the basic principle of the dumpling remains the same: they're a starchy filler whose very blandness is perfectly designed to soak up the flavours of their cooking liquid or sauce. Submit to the dumpling's warm embrace. You will not regret it.
## Canederli alla tirolese with Parmesan broth
##### serves 3–4
##### _For the broth:_
50g Parmesan rinds (see intro)
750ml good chicken stock
2 garlic cloves, squashed
2 large handfuls of baby spinach (optional)
##### _For the dumplings:_
A knob of butter
75g speck, smoked pancetta or streaky bacon, finely chopped
½ an onion, finely chopped
1 leek, finely chopped
150g crustless sourdough (or other sturdy, chewy bread), cut into small cubes
1 tablespoon chopped parsley
1 tablespoon chopped chives
2 eggs, beaten
40g plain flour
Mountain fare from the South Tirol, a gorgeous region where Austria and Italy collide, home to spectacular skiing, epic walking, and the kind of rib-sticking food that's half the point of doing either. These are a great way to use up stale bread, and endlessly versatile in terms of flavouring, while the savoury broth will dispense with those odds and ends of cheese cluttering up the fridge door (you can use any other hard cheese rind, as long as it hasn't been waxed or cloth bound – Gruyère is another good candidate). Thrifty and tasty; you can't say smugger than that.
1. To make the broth, put the rinds into a medium saucepan with the stock, the garlic and 750ml of water. Bring to a simmer, then turn down the heat and simmer gently for about an hour, stirring occasionally to make sure the rinds don't weld themselves to the bottom of the pan.
2. Meanwhile, heat the butter in a frying pan and add the speck. Cook for a couple of minutes until the fat starts to run, then add the onion and leek. Season and cook until soft.
3. Tip into a large bowl and add the bread, herbs and eggs. Stir well, then mix in the flour. You should have a mixture firm enough to shape into balls – if not, add a little more flour. (If, on the other hand, it's too dry, add a splash of milk.) Season, then, with wet hands, form into dumplings about the size of a walnut, shaping them well until smooth (doing this with wet hands will help them stay together).
4. Bring a pan of well-salted water to the boil, turn down the heat to a gentle simmer and add the dumplings. Cook for 15 minutes.
5. Strain the broth to remove the cheese rinds and garlic, then bring back to a simmer. Just before the dumplings are ready, add the spinach, if using, and as soon as it wilts, divide the broth and leaves between bowls. Put the dumplings in the middle, and serve.
## Venison and port casserole with Stilton dumplings
##### serves 4–6
500g braising venison
2 tablespoons plain flour, seasoned with salt and pepper
2 tablespoons lard or oil
100g lardons or bacon chunks
2 small red onions, sliced
2 sprigs of thyme, leaves only
2 carrots, peeled and finely diced
6 baby turnips, trimmed and cut into chunky wedges
300ml port
300ml beef stock
##### _For the dumplings:_
100g plain flour
1 teaspoon baking powder
50g suet
75g Stilton, crumbled
Leaves from a couple of sprigs of thyme, chopped
This rich, fruity sauce studded with sweet root vegetables proves the perfect pair for venison's savoury, almost earthy flavour, while fluffy suet dumplings are, of course, the ideal accompaniment to just about any stew. This is definitely one for chilly evenings; I'd be tempted to add some sautéd Savoy cabbage or other greens on the side to help mop up the glorious gravy.
1. Cut the venison into chunks if it's not been done already, and toss in the seasoned flour.
2. Heat the fat in a large lidded casserole dish over a medium-high heat until smoking and brown the venison in batches, being careful not to overcrowd the pan. Scoop out and set aside. Heat the oven to 170°C/fan 150°C/gas 3.
3. Turn the heat down under the pan and add the bacon. Cook until the fat begins to render, then stir in the onions and cook until soft. Add the thyme, carrots and turnips and cook for a couple of minutes more.
4. Return the meat to the pan, pour in the port and stock and scrape the base of the pan to dislodge any nice crusty flavourful bits of flour. Bring to a simmer, then put in the oven and bake for 2 hours.
5. Forty minutes before it's finished cooking, mix together the flour, baking powder, suet, crumbled Stilton and thyme in a bowl. Season and add just enough cold water (about 70ml) to bring the mixture together. Roll into six dumplings and plop on top of the stew. Replace the lid and put back into the oven for the remainder of the cooking time, by which point the dumplings should be cooked through and fluffy.
## Queenie and samphire crystal dumplings
##### makes about 25
A dash of neutral oil
100g smoked streaky bacon, finely chopped
3 garlic cloves, crushed
100g samphire, finely chopped
25 queen scallops, roe on (about 150g, shelled weight)
Chilli oil and Chinkiang rice vinegar, to serve
##### _For the dumpling skins:_
125g wheat starch
60g tapioca flour
¼ teaspoon fine salt
240ml boiling water
4 teaspoons neutral oil
Though a fully paid-up member of the ancient and honourable cult of the Roast, a Sunday devoted to the delights of the dim sum trolley is never a Sunday wasted. What the ceremony lacks in goose fat and roast potatoes it more than makes up for with endless cups of tea to soothe a groggy head, and a welter of buns and dumplings to tickle the jaded palate.
Pick of the bunch, in my book, are _har gow_ , sometimes known as crystal dumplings for their gloriously delicate, translucent skin, through which glows the luminous pink of plump minced prawns. But even more colourful than the prawn is the electric orange roe of the scallop, the diminutive queen variety of which is the perfect size for the purpose. Salty samphire and rich fatty bacon are the perfect accompaniments.
Depending on where you live, you may need to order the sweet little queenies from a fishmonger – make sure they have the roe still attached (in extremis you can cut the bigger ones to size, but it seems a shame). The tapioca flour and super-fine wheat starch are essential for the stretchy, slightly chewy, nearly see-through texture of the wrappers, and will be easy to find in an oriental supermarket, or online.
(Note that this is not the traditional method of shaping the wrappers, but I find it the easiest with the dough. If you are more adept, more authentic instructions can easily be found online, along with numerous excellent videos demonstrating crimping technique better than mere words could ever explain it.)
1. Start by making the filling. Heat a frying pan with a splash of oil on a medium heat and fry the bacon until it begins to release its fat. Add the garlic and fry for a minute or so, stirring so it doesn't catch, then drop in the samphire and stir-fry for a minute until coated with the fat. Set aside, off the heat.
2. Put the wheat starch, tapioca flour and salt into a mixing bowl and stir in the water and the oil to make a soft, pliable dough. Knead until smooth; it shouldn't be at all dry, or sticky (if it is, add a tiny bit more water or flour as necessary). Divide in half and put one half under a damp cloth.
3. Roll out the dough on a lightly floured surface as thinly as possible. Use a cutter about 10cm in diameter to cut out circles, then cover these with a damp cloth and repeat with the remaining dough and any scraps that can be re-rolled.
4. Bring a pan of water with a steamer in or over it to the boil. Meanwhile, fill the dumplings. Put a scant teaspoon of the bacon mixture in the middle of the dumpling, then add a scallop. Fold over and press to seal completely, then crimp. Repeat.
5. Steam the dumplings in batches for about 5–6 minutes until translucent, and serve immediately, with chilli oil and rice vinegar to dip.
## Chickpea and spinach dumplings in a tomato and yoghurt sauce
##### serves 4
##### _For the dumplings:_
250g spinach
½ teaspoon cumin seeds
2 garlic cloves
½ teaspoon salt
1 tablespoon grated ginger
2 small green chillies, deseeded and finely chopped
2 teaspoons melted ghee
250g chickpea (gram) flour
##### _For the sauce:_
3 tablespoons ghee
½ teaspoon black mustard seeds
½ teaspoon cumin seeds
1 Indian bay leaf (the sort with 3 central veins)
1 onion, finely chopped
2 garlic cloves, finely chopped
½ teaspoon turmeric
½ teaspoon asafoetida
½–1 teaspoon chilli powder
½ teaspoon ground coriander
1 x 400g tin of plum tomatoes, roughly chopped
350ml natural yoghurt
Coriander, to finish
These nutty chickpea flour dumplings have their origins in Rajasthan, in north-western India, which, along with a princely number of handsome palaces and cities, is home to the vast Thar desert, where I once spent a couple of miserable days atop a camel. It's a magnificently bleak place, where, as with many such magnificently bleak places, very little grows in the way of vegetables, and these thrifty dumplings are the result.
The spinach is my addition; not only do the flavours work very well together, but it lightens the texture a little, and as the dumplings need no bread or rice accompaniment, makes them into a complete meal. That said, the tanginess of the sauce means the dish is even nicer with a sweet chutney on the side.
1. Blanch the spinach in a large pan of salted water for about 10–20 seconds, until wilted, then drain and run under cold water to cool. Squeeze out as much moisture as you can, then spread out to dry. Roughly chop.
2. Toast the cumin seeds for the dumplings in a small dry pan until fragrant, then tip into a mixing bowl. Crush the garlic, salt, ginger and chillies into a paste, then add to the bowl along with the ghee, flour and spinach. Add just enough water to bring together into a firm dough; it probably won't need more than 50ml.
3. Turn out on to a lightly floured surface and roll into smallish dumplings; they're quite dense, so about the size of a walnut is ideal. Set aside and put a large pan of salted water on to boil while you make the sauce.
4. Heat the ghee in a large frying pan over a medium-high heat and add the mustard and cumin seeds and the bay leaf. Once the seeds begin to pop, add the onion and turn down the heat, then cook until softened. Add the garlic and cook for another couple of minutes, then stir in the ground spices and cook for another minute, stirring.
5. Add the tomatoes and cook until the oil begins to separate and pool around the edge of the pan. Meanwhile, once the water has come to the boil in the other pan, plop in the dumplings, stirring once to stop them sticking. Once they bob to the surface, cook for 5 minutes, then cut one open to see if it's cooked through. Once it is, drain the entire pan and set aside.
6. Whisk the yoghurt with 250ml of cold water, then stir energetically into the sauce and keep stirring until it comes to a simmer – if you don't, you risk it curdling.
7. Add the drained dumplings and simmer gently for about 5 minutes, until they're heated through. Serve topped with roughly chopped coriander.
## Southern chicken and jalapeño dumplings
##### serves 4–6
8 bone-in, skin-on chicken thighs
500ml chicken stock
50g butter
4 rashers of smoked streaky bacon, finely sliced
1 onion, finely chopped
50g flour
350ml milk
1 tablespoon cider vinegar, or to taste
100g sweetcorn kernels (tinned is fine, but make sure they're unsweetened)
##### _For the jalapeño cornmeal dumplings:_
100g plain flour
100g fine or medium cornmeal (polenta)
2 teaspoons baking powder
½ teaspoon salt
15g cold lard or butter
1 egg, beaten
75ml milk
1 green jalapeño chilli, deseeded and finely chopped (optional)
Thick, creamy and intensely savoury, with little pops of sweetness from the corn, this is pure unadulterated comfort for those dreary days that demand a warm duvet of a dinner.
Proper Southern dumplings are wide, flat strips of dough, almost like noodles, but I prefer these cornmeal ones, which will merge as they cook to form a fluffy, cobbler-like topping. Good served with spring greens, Savoy cabbage or broccoli.
1. Put the chicken into a large pan and cover with stock. Bring to a simmer, then turn down the heat and poach gently for about 15–20 minutes, until cooked through. Scoop the chicken out with a slotted spoon and set aside to cool, reserving the cooking liquid.
2. Melt a knob of the butter in a large lidded ovenproof pan and fry the bacon and onion until beginning to brown. Scoop out and set aside. Heat the oven to 200°C/fan 180°C/gas 6.
3. Melt the rest of the butter in the same pan, then whisk in the flour. Cook for a couple of minutes, then little by little whisk in the stock the chicken was cooked in, followed by the milk, until you have a smooth sauce. Bring to a simmer, allow to thicken, then stir in the vinegar and season to taste, adding more vinegar if you think it needs it.
4. Strip the chicken meat from the bones and skin, and tear into large chunks. Stir into the sauce along with the bacon and onion and the sweetcorn. Cover and bake for 40 minutes.
5. Meanwhile, make the dumplings. Whisk together the flour and cornmeal in a bowl with the baking powder and salt, and then cut the fat into the bowl in small pieces. Rub in with your fingertips, then stir in the egg and milk (and the chilli if using) to make a dough.
6. Once the chicken has been in the oven for 20 minutes, take it out and dot teaspoonfuls of the dumpling dough on the surface. Cover and put back into the oven for the remaining 20 minutes, until the dumplings are crisp on top.
## Spotted dick
##### serves 6–8
225g plain flour
2½ teaspoons baking powder
A pinch of salt
2 tablespoons soft light brown sugar
1 teaspoon mixed spice
125g suet
150g currants
25g candied peel
175–200ml milk
At school, where we were spoilt for choice in the matter of hot stodgy puddings every day of the week, spotted dick was never met with enthusiasm, but some years later I've come to appreciate its unassuming charms. Rich and comfortingly fluffy, it has a tangy vine-fruit sweetness that makes it an excellent partner for some thick yellow custard (Bird's for maximum authenticity).
I haven't dared mess with the recipe too much, except for adding some mixed peel, because I love the citrussy bitterness and a little spice – otherwise, it's a fairly canonical version that will be recognizable to anyone educated in Britain. Nostalgia of the most currant kind.
1. Whisk together the flour, baking powder, salt, sugar and spice, then stir in the suet, followed by the currants and peel. Add just enough milk – probably about 175ml – to allow it to come together into a dough.
2. Shape the dough with floured hands into a sausage shape about 20cm long and wrap loosely in greaseproof paper. Twist the ends to seal and secure with string, then put into a steamer and cook for 90 minutes, topping up the water as necessary. Slice to serve.
I've waxed lyrical about the joys of eggs before; the single most useful ingredient you can keep in your kitchen, they also have the benefit of being quite absurdly good value.
With a box of eggs in the house, you need nothing more than heat and a pinch of salt for a satisfying meal – add butter or olive oil and a sprinkle of herbs or spices and you have a veritable feast. And think of all the wonderful recipes where they're cast as best supporting actor; adding richness to sauces and custards and lightness to meringues, soufflés and sponges. Uninspiring leftovers? Stick a fried egg on top to see them undergo a miraculous three-minute makeover.
In fact, much as I hate the word in a culinary context, there's no other way to say it; eggs make everything sexy. It's something about the vaguely pornographic way the golden yolk spills out on to the plate, which has made it as much an internet phenomenon as grumpy cats and sweary toddlers, but with far more justification.
All of us, vegans aside, can appreciate the beauty of a well-cooked egg. It's the Rosetta Stone of the kitchen – the key that unlocks so many of the secrets of cooking.
Little wonder that many chefs test potential employees by asking them to make an omelette. A simple dish, on the plate in less than two minutes, but one that requires real skill, care and finesse – a chewy chammy leather of an egg pancake is easy enough, but a perfectly fluffy _baveuse_ beauty takes practice.
### The nutrition bit
Eggs are officially good for you. The average medium-sized egg contains about 70kcal, which isn't much given how filling they are. Most of these are in the yolk, but as this is also the most nutritious, and delicious, part of the egg, I wouldn't recommend switching to egg-white omelettes except in cases of dire need: a meringue is a far better use for excess whites.
Eggs are a complete source of easily digested protein, which means they contain all eight amino acids our bodies need, and which we can't make ourselves, and at a far lower cost than many other, meat-based 'complete' alternatives. In fact, eggs are so protein rich that they're used as the benchmark against which all other sources are based.
They also contain most of the vitamins known to science, with the notable exception of vitamin C (just add orange juice), and are a particularly good source of vitamin B12 and riboflavin, which help maintain blood and nerve cells, ward off certain types of anaemia, and allow the body to absorb other nutrients. And, just when you thought that was enough of all the nutrition stuff (almost too much of a good thing, these eggs), they also contain handy amounts of iodine for your thyroid, the antioxidant selenium and phosphorus for bone health.
Lastly (I promise), eggs are a decent source of the long chain omega-3 fatty acids so important for brain function and vision, which is particularly useful if you're not a fan of the oily fish we're all constantly encouraged to eat more of. (Start every day with kippers and a poached egg and you're all but guaranteed a Nobel prize.)
But, though eggs may be simple, that doesn't mean that you can't teach your grandmother anything on the subject.
### Choosing and storing
Most of the recipes in this chapter assume you're using hen's eggs, but it would be very easy to swap in duck or goose eggs, both of which have a larger ratio of yolk to white and a higher protein content, which makes them taste richer and produces particularly light, well-risen cakes. In volume terms a duck egg is roughly equivalent to a large hen's egg (though they look bigger, the shells are thicker), while a goose egg will replace two large hen's eggs.
A word on choosing eggs – pale blue and speckled eggs look very pretty, but they taste just the same as the ordinary brown variety. The colour of an eggshell is determined by the breed that laid it (and, fact fans, can often be guessed by the colour of the chicken's earlobe, though this is not an absolute rule – the cream Legbars that lay those blue eggs have whitish earlobes) and is no indication of the quality of the egg within. Even the colour of the yolk is down to the chicken's diet, rather than any particular nutritional value.
That said, if you care about the welfare of the creatures that produce your food, free-range is the only choice – though I prefer, unless buying from a producer I know, to go organic, for the simple reason that the hens in such systems must have outdoor access all year round, and stocking levels are less dense than for other free-range birds.
Eggs sold as enriched with omega-3 have been laid by chickens fed a diet rich in these fatty acids (fish oil, flaxseed, etc.) – though I'd always prefer to get mine from some pilchards on toast.
The Lion mark you'll find on British eggs (except those from very small producers) indicates adherence to high food safety standards – including, most importantly, vaccination against salmonella. Our egg industry is now considered to be a salmonella-free zone, and where periodic outbreaks can be traced back to eggs, they tend to have been imported, which is why many health experts now see no reason that pregnant women, and other vulnerable groups, should avoid runny eggs.
Unless it's very hot, or you need to keep them for months, eggs don't need to be stored in the fridge; indeed, they're less likely to crack when put into boiling water, and will bind better with other ingredients, if they're kept at room temperature. (This is not the case in countries where salmonella is still prevalent.)
Eggs must be sold within twenty-one days of laying, though they'll remain perfectly edible for much longer.
Very fresh eggs (a day old or so) are easier to poach, as the thicker whites hold together better, making for a neater result. Conversely, the thinner whites of slightly older eggs are easier to whip up into meringues and the like. Older eggs will also be less of a pain to peel.
### Cooking basics
The simplest and easiest way to cook an egg is to boil it, and unless you want a runny yolk, you'll get the best, creamiest results by cooking them from cold.
— _Soft-boiled (firmish white, runny yolk)_ : lower into boiling water, turn down the heat and cook for 4 minutes
— _Medium-boiled (firm white, soft yolk)_ : lower egg into cold water, bring to the boil, turn down the heat and cook for 5 minutes
— _Hard-boiled (firm white, firm yolk)_ : lower egg into cold water, bring to the boil, turn down the heat and cook for 7 minutes
If you want to peel the eggs, or won't be eating them immediately, run them under cold water or drop into iced water to stop them cooking any further.
When it comes to omelettes, scrambled and fried eggs, remember that eggs are protein rich, and leaving those proteins on a high heat for too long will cause them to coil so tightly that the texture cannot be anything but tough. Thus they should be cooked briefly over a fairly high heat (an omelette) or slowly over a gentle one (scrambled eggs), but nothing in between. Which gives them something surprising in common with the creatures in Octopus and other cephalopods (see here).
See also: Mexican chilli chocolate mousse (here), Spinach, ricotta and feta tart with hard-boiled eggs (here), Japanese carbonara (here), Black risotto with eggs (here), Aloo tikki Scotch eggs (here), Goat's cheese custards with honey-glazed hazelnuts and black olive toasts (here), Kichri-kedgeree (here), Smoky black dal with eggs (here), Scrambled eggs with crab and samphire (here), Michaelmas mess (here), Pomelo sour (here).
## Bacon devilled eggs
##### makes 12
6 eggs
½ teaspoon smoked paprika
A small bunch of chives, finely snipped
##### _For the bacon mayonnaise:_
10 slices of smoked streaky bacon with plenty of fat
75ml neutral oil (vegetable, sunflower, groundnut, etc.)
1 egg yolk
½ teaspoon Dijon mustard
1 teaspoon white wine vinegar
Devilled eggs are proper party food, and these are even better than the original thanks to the bacon-fat mayonnaise. (This is also pretty excellent in chicken sandwiches, by the way – don't try and refrigerate it, though, or it will start to set). Should you need any further convincing, note you'll be left with six slices of crisp bacon; perfect for a post-party breakfast, if your guests don't get there first.
1. Put the bacon into a dry frying pan over a medium-low heat and fry gently until browned on both sides, pressing the rashers down towards the end to squeeze out as much fat as possible. Lift out the bacon and put on kitchen paper to dry. Pour the fat into a measuring jug; you should have about 50ml. Allow to cool to warm room temperature, then pour in the other oil.
2. While it's cooling, put the 6 whole eggs into a pan and cover with cold water. Bring to the boil, then turn down the heat and cook at a bare shiver for 7 minutes. Drain and run under cold water until completely cool.
3. Whisk together the raw egg yolk, mustard and vinegar in a medium bowl and then slowly drizzle in the oil, whisking all the time, until it thickens into a mayonnaise, at which point you can start adding it slightly faster. Once the oil is all incorporated, season to taste, and add a splash of water if it seems too thick.
4. Roll the boiled eggs along the counter to crack the shells and then carefully peel. Cut them in half through both ends and gently scoop out the yolks. Finely chop these and add to the mayonnaise. Snip 4 rashers of bacon into small shards and mix three-quarters of these into the mayonnaise. Taste and season if necessary.
5. Spoon the mayonnaise into the holes left by the yolks. Arrange on a serving plate and sprinkle with the remaining bacon bits, smoked paprika and chives.
## Deep-fried quail's eggs with celery salt mayonnaise
##### makes 24 (though the dip will probably take more – it's also nice on crudités, crisps, etc.)
24 quail's eggs
50g plain flour
1 egg, beaten
50g panko breadcrumbs
Neutral oil, to fry
##### _For the celery salt mayonnaise:_
1 head of celery with leaves intact (lots of leaves, not a couple of wisps)
Flaky sea salt (about 75g)
1 egg yolk, at room temperature
1 teaspoon English mustard powder
250ml groundnut or sunflower oil
25ml extra virgin olive oil
1 tablespoon lemon juice
Boiled quail's eggs and celery salt seem like a canapé from a different age: if I had to position them in history, I'd go for a country house weekend some time in the 1930s.
That's all very well, of course, but if you'd like to take the same flavours and bring them kicking and screaming into the twenty-first century, the answer is simple: coat them in ultra-crunchy Japanese breadcrumbs and deep-fry the hell out of them. Soft and runny inside, hot and crisp without, dipped in a punchy homemade celery salt mayonnaise, they're even better than the original.
You can make the celery salt, and indeed the mayonnaise, and boil and peel the eggs, well ahead of time. The extra salt is very fine in Bloody Marys.
1. To make the celery salt, strip the celery leaves from the stalks and wash, then drain. Heat the oven to 200°C/fan 180°C/gas 6 while you leave them to dry, then finish the job very thoroughly with paper towels or a clean tea towel. Arrange in a single layer on one or two baking sheets and bake for 5–6 minutes, until dried out but not browned. They should feel crisp, but they will continue to dry out as they cool.
2. Once cool, crumble the leaves to a fine-ish powder with your fingers – at this point you will probably find some little bits of stalk which won't have dried out, so discard these as you don't want any moisture in the salt. Put in a jar and top up with the same volume of flaky sea salt, then shake to combine.
3. Put the yolk into a large bowl with the mustard powder and anchor the bowl by putting a damp tea towel beneath it. Whisk well until the colour lightens, then start to beat in the neutral oil, a little at a time, whisking all the while to incorporate it into the sauce. Do not be tempted to rush this stage – it will split. As it thickens, you can add the oil a little more quickly. Switch to the extra virgin olive oil once the neutral oil is all incorporated, then lastly, whisk in the lemon juice. If it still seems a little thick, add a drop of room temperature water. Then add celery salt to taste – I use just over a teaspoon.
4. Once you're ready to cook, gently lower the quail's eggs into a small pan of boiling water and cook for 2½ minutes. Meanwhile, prepare a bowl of iced water and, once they're done, transfer the eggs quickly to this to cool down.
5. Gently roll the eggs against a hard surface to crack the shells, then very carefully peel the shells off. Set out the dishes of flour, beaten egg and breadcrumbs near the hob, prepare a plate for the eggs (with some kitchen paper nearby for once they're cooked), and put a large pan a third full of oil on a high heat.
6. Roll each egg in turn in the flour, egg, breadcrumbs, egg and breadcrumbs again. When the oil comes to about 150°C, or is hot enough that a breadcrumb sizzles and turns golden when dropped in, lower a batch of eggs in and cook for 1 minute. Scoop out on to the kitchen paper and season. Serve hot with the mayonnaise to dip.
## Baked eggs, creamed corn and spinach
##### serves 4
4 ears of corn
2 tablespoons butter
1 tablespoon flour
2 teaspoons sugar
½–1 teaspoon salt
A whole nutmeg, to grate
200ml whole milk
250g spinach
2 tablespoons soured cream
4 eggs
I can't believe I lived so long in ignorance of the glorious existence of creamed corn – until the serendipitous day I stumbled across a video of a very jolly woman knocking some up in her Memphis kitchen. The internet is a wonderful place.
This version, using milk rather than cream, is a little less rich than hers, allowing the natural sweetness of the corn to take centre stage, ably backed up by a generous grating of nutmeg, an old friend to both spinach and eggs. It is an utterly delicious breakfast or brunch.
1. Remove the leaves from the corn if necessary, then stand one up on a chopping board and cut down its length to remove the kernels. Rotate and repeat until they're all stripped off, then tip these into a bowl and, holding the corn over the bowl, run the back of a knife down the stripped husks to squeeze out all the liquid. Repeat with the remaining ears.
2. Melt the butter in a medium saucepan and stir in the flour. Cook for a minute or so until it smells toasty, then stir in the sugar, salt, the corn and a generous grating of nutmeg. Stir to coat, cook for another minute, then stir in the milk.
3. Bring to a simmer, then turn the heat down low and cook, stirring regularly, for about 15–25 minutes, depending on how firm you like your corn. Meanwhile, heat the oven to 200°C/fan 180°C/gas 6.
4. Stir the spinach into the corn mixture and allow to wilt, then stir in the soured cream and check the seasoning. Spoon into four ovenproof dishes, or one large one. Crack an egg into a cup, make a divot in the corn and pour the egg into it. Repeat with the rest, then grate a little more nutmeg over the top and bake for 13–15 minutes, until the whites are set and the yolk is still runny inside.
## Omelette farcie
##### serves 1, decadently
##### _For the scrambled eggs:_
3 eggs, lightly beaten
1 tablespoon butter
½ tablespoon chopped chives
2 teaspoons lumpfish or salmon roe (optional)
##### _For the omelette:_
A generous knob of butter
2 eggs, lightly beaten and seasoned
Yes, it's an omelette stuffed with eggs. What of it?
When I first read Daniel Boulud's recipe for this Gallic classic I couldn't believe my eyes – I made it for breakfast the very next day, just to see if such a thing was even possible, and was blown away by the clever contrast in textures, the ridiculously creamy, slow-cooked scramble spilling out of a firmer, fluffier jacket. When scaling it down, I reluctantly decided adding a third in the form of a rich hollandaise would be over-egging the pudding, so instead I've substituted salty little fish eggs.
1. To make the scrambled eggs, set a heatproof bowl about 4cm above a pan of gently simmering water. Add the eggs and whisk until they foam, then continue to stir until they come together into smooth, creamy scrambled eggs. Immediately take the bowl off the pan (it will be hot) and stir in the butter to stop them cooking any further, along with the chives.
2. To make the omelette, heat the butter in a small frying pan over a medium heat. Once the foam has died down, tip in the eggs and cook for about 20 seconds, until they start to set.
3. Using a spatula or fork, draw in the sides of the eggs to the centre while shaking the pan to redistribute the liquid to the edges. The omelette is done when still slightly runny in the middle.
4. Take off the heat, add the scrambled eggs and fold the two edges into the middle. Shake the pan so they roll together, then tilt it and turn your omelette on to a warm plate. Add a dollop of roe if using.
## Rum flip
##### makes 1
300ml still cider
2 teaspoons soft brown sugar, or more according to taste
A grating of nutmeg
A good pinch of ground ginger
25ml rum (or more, as you see fit)
1 egg yolk
This is a very old recipe which would originally have been warmed with a red hot poker. Considerably less practical these days, the idea itself stands the test of time, a mix of fiery spices and spirits with sweet cider and brown sugar which gives mulled wine a serious run for its money. Feel free to use ale instead of cider, or whisky (or just about any other spirit) in place of the rum, as you like.
1. Warm the cider in a small pan until hot, but not simmering. Take off the heat, then stir in the sugar and spices to dissolve, followed by the rum. Taste and add more sugar or spice if necessary.
2. Whisk in the egg yolk, pour into a heatproof glass and serve (drink, obviously).
## Pandan and coconut burnt creams
##### makes 4
4 pandan leaves
325ml coconut cream
3 egg yolks
50g caster sugar
2 tablespoons desiccated coconut
2 tablespoons demerara sugar
I first encountered the aromatic pandan leaf in Singapore, where I fell in love with its strikingly aromatic, almost soapy flavour. The pretty pale green colour the pandan essence gives these dairy-free custards is an added bonus.
You can get the leaves, either fresh or frozen, in oriental supermarkets (which should also stock various pandan-flavoured cakes and sweets to give you an idea of whether to invest), or online, but if you can't find them, or don't care for the stuff, you can replace it with any other flavour that goes with coconut: vanilla essence, for example, or lime zest would work well.
1. Heat the oven to 170°C/fan 150°C/gas 3. Cut the pandan leaves into smallish pieces, then blend with 50ml of water, using a stick or mini blender, to make a bright green liquid. Strain through a sieve and discard the solids.
2. Bring the coconut cream to a simmer in a small pan, adding the pandan liquid to taste (I like about 3 tablespoons). Whisk together the yolks and caster sugar in a heatproof bowl next to the hob.
3. Pour the hot cream on to the yolks, whisking constantly. Divide the mixture between four ramekins and bake in a bain-marie (roasting tin of water) for about 40 minutes, until set. Cool, then chill until set completely.
4. Heat the grill (unless you have a cook's blowtorch). Divide the coconut between the dishes and sprinkle the demerara sugar on top. Grill until the sugar is bubbling, then serve.
Until recently the culinary love that dared not speak its name, after several decades of self-denial we seem to finally be coming to our senses with regards to the benefits, both medical and otherwise, of a certain amount of fat in our diet. And thank God for that – the world would certainly be a poorer place without peanut butter, or duck fat roast potatoes.
But from a gastronomic point of view, fat has always been a good thing. Fat in food makes it rich and smooth, while food cooked in fat will be savoury and full of flavour. Fillet steaks and white fish have their place, of course, but think of the deeper joys of slow-cooked beef shin, braised until it melts from the bone, or oily salmon with brown bread and sweet, creamy butter.
There are few foodstuffs, however healthy, that aren't enhanced by fat, whether that's dal makhani laced with ghee and cream, or deep-fried tofu, hot and crisp, yielding to an almost panna-cotta-like softness within, or indeed a wholesome bowl of vegetable minestrone set off with a generous pour of green olive oil. Most of the flavour in meat comes from fat: try stripping all the fat from lean pieces of beef and lamb; you'll find it surprisingly hard to distinguish between them.
Humans, like all animals, have embraced fat from the get-go, and after the whole fire thing took off, it assumed an even greater importance as a cooking medium; vital if you weren't to burn that precious mammoth steak to a cinder. But in recent years, we've turned our back on millennia of pleasure, and given fat the cold shoulder. We do not seem to be any healthier for it.
Our bodies need fat to function. Somewhat amazingly, the human brain is almost 60 per cent fat. It is essential to the functioning of every cell in our bodies, from the workings of the immune system to the way our hair looks.
That said, not all fats are created equal; in fact, as anyone following a diet for the past forty years will be aware, there are a bewildering number of different fats out there, some of which are better for us than others. For the information below I am indebted to two very patient chemist friends.
The most common terms bandied about are saturated and unsaturated fats, though actually, as Jennifer McLagan points out in her excellent book _Fat_ , 'there is no such thing as a completely saturated or completely unsaturated fat; every fat is a combination of both saturated and unsaturated fatty acids'.
In a very small, but rather dense nutshell, fats are made up of fatty acids, themselves chains of carbon atoms, with each carbon atom in the chain bonded to hydrogen atoms.
The difference between saturated and unsaturated fats lies in the arrangement of these atoms. In the carbon chains of unsaturated fatty acids, one or more of the carbon-carbon bonds can be a double bond. As each carbon atom can only be involved in four bonds in total, the carbon atoms involved in such a double bond can only thus be bound to one hydrogen atom, instead of two.
If there is one double bond in the fatty acid chain, the fat is monounsaturated; more than one and it's known as polyunsaturated. All the carbon-carbon bonds in saturated fatty acids, meanwhile, are single bonds, which leaves more room for hydrogen atoms along the chain.
(Omega-3 and -6 fatty acids, which are often singled out for special praise, have their first double bond at the third and sixth carbon atoms from the end of the chain respectively. The body is unable to create these fatty acids itself, which is why they're so vital to our diet, and hence why we're always being urged to eat more oily fish and seeds.)
This is all important because these double bonds change the structure of the fat. Carbon-carbon double bonds are more reactive than single bonds, which means that the more saturated fatty acids a fat contains, the more stable it is at room temperature, and the less likely it is to spoil.
Animal fats tend to be about half saturated fatty acids, as compared to only 15 per cent of vegetable fats, which is why lard is solid in the pantry and olive oil liquid. The more unsaturated the fat, the quicker it will go rancid on contact with air, microbes and so on; beef stays fresh for longer than white meats like chicken for the simple reason that it contains more saturated fat.
The third class of fats, which has only come to the attention of most of us relatively recently, so long have we been told that saturated fat is the enemy, is hydrogenated or trans fats. These do occur at low levels in nature, but the ones that make the headlines are vegetable fats that have been chemically altered to make them solid at room temperature, and improve their shelf life – 'hydrogenated' to turn some of the double bonds in the chain into single bonds by adding hydrogen atoms, a process that can also flip any remaining double bonds into the unnatural 'trans' configuration. Unfortunately our bodies find these hard to process in the normal way, and trans fats appear to raise the levels of undesirable cholesterol in the blood.
Fortunately, trans fats are not widely used in this country, but there is no obligation for manufacturers to highlight them on the label – the words hydrogenated or partially hydrogenated fats or oils should ring warning bells.
### Sources of fat
The good news is that saturated fats, labelled as the enemy in the 1970s and 80s for their link to coronary heart disease, have largely been exonerated. Many people I know still cut the fat off chops, and regard the skin of a chicken with deep suspicion – goose fat may have been rehabilitated thanks to some good publicity from television cooks, but few would give house room to the more homely charms of dripping or lard.
Although studies are still inconclusive, more recent research seems to agree that the only fats associated with an increased risk of heart disease are those aforementioned trans fatty acids. Indeed, replacing fat in the diet with carbohydrates, as was once suggested by the diet lobby, seems to actually encourage weight gain. Because it takes the body a while to digest fat, it keeps us feeling fuller for longer, while carbohydrates, particularly the refined sort, like pasta and bread, prompt a short-term spike in sugar levels, followed by a crash – which is when, if you're anything like me, you reach for the snacks.
It seems that the old adage, a little of what you fancy does you good, holds true; while I'm not suggesting we all go completely Atkins, there is little evidence a low-fat regime is good for you. Instead, that boring thing, a balanced diet, with moderate amounts of both animal and vegetable fats, and without any trans fats, seems the way to go. So I hope you fancy a little of some of the recipes in this chapter.
See also: Vietnamese caramel and pork hotpot (here), Duck fat garlic bread (here), Salted brown butter and buttermilk ice cream (here), Pork rillettes with rhubarb chutney (here).
## Cultured butter
##### makes 1 pat (about 200g)
400ml double cream, at room temperature
2 tablespoons live natural yoghurt, at room temperature
½ teaspoon sea salt flakes (optional)
One of those things which shouldn't be worth making at home, but somehow is – not just for the magic of producing something so fundamental to our cooking in a quarter of an hour, but because it seems to taste better, especially if you seek out some really great cream (farmers' markets are a good source). Taking the time to culture (or ferment) it overnight is not strictly necessary, but will give the butter a more interesting, complex flavour. If you don't want to do this, omit the yoghurt and start at step 2.
Obviously once it's made you can add any extra flavourings you fancy at step 4 – herbs, chilli, sugar and spice, as you wish, but I'm not sure you can beat simple salt. The buttermilk left over is excellent in smoothies, or indeed the buckwheat pikelets here.
1. Stir together the cream and yoghurt in the bowl of a stand mixer, then cover and leave in a warmish place for 8 hours. Check it at regular intervals after this – it's ready when the surface is bubbly and it smells faintly sour and tangy.
2. Whisk the mixture at a medium-low speed, scraping down the bowl as necessary, for about 8–10 minutes, until it separates into a solid, cottage-cheese-like mass (which will stick to the whisk) and a milky liquid. Alternatively you can put it into a large jar (much larger than the volume of cream) and shake it vigorously until it reaches this point, but it will take longer.
3. Drain the butter in a sieve set over a bowl to catch the buttermilk. Scoop it up and rinse it well under cold water to get rid of any remaining whey, which will cause it to spoil more quickly, then squeeze out any water.
4. If you're planning to add salt, do so now, kneading it into the butter until evenly distributed. Shape the butter into a pat, or put into a bowl, and refrigerate to firm it up a bit, or eat it immediately, toast optional.
## Bacon refried beans
##### serves 4
200g dried pinto beans, soaked overnight
1½ onions
¼ teaspoon Mexican oregano (optional)
12 rashers of dry-cured streaky bacon, or 4 rashers and 50ml bacon drippings
Tinned refried beans are a vaguely guilty pleasure of mine, but these taste even better thanks to a goodly, and authentically Mexican, dollop of smoky pork fat. If you keep a pot by the stove for bacon drippings, as I do, you can use those instead, but cook some specially and you'll be left with a few rashers of crisp streaky to top.
1. Put the drained soaked beans into a large pan with the half onion and cover with plenty of cold water. Bring to the boil, skim off the scum, then turn down the heat and stir in the oregano if using. Simmer until very tender – about 2 hours, but the time varies wildly depending on the age of your beans, so check regularly. Don't allow the pan to boil too dry, as you'll be needing some cooking liquid later.
2. Meanwhile, if you don't have the benefit of a big pot of bacon drippings, put a large frying pan on a medium-low heat and line with a layer of bacon. Cook gently until golden brown on both sides, then tip into a sieve set over a bowl, making sure you get all the fat out of the pan. Repeat with the remaining bacon; you should have a generous amount of bacon fat in the bowl by the end. Don't bother to wash up the frying pan.
3. When the beans are very tender, drain, reserving about 250ml of the cooking liquid. Mash them well along with a splash of liquid, or use a stick blender if you'd prefer a smoother texture. Finely chop the remaining onion.
4. Melt a generous few spoonfuls of bacon fat in the frying pan over a medium-high heat and add the chopped onion. Fry until soft, then add the beans. Fry for a minute or so, stirring, then stir in the reserved cooking liquid until you have a loose-ish paste – you probably won't need it all. Season to taste.
5. Finely chop 4 of the bacon rashers and stir into the pan just before serving, along with another spoonful of bacon fat if you're feeling authentic/reckless.
## Red-braised pork
##### serves 6 with other dishes
1kg pork belly, skin on
Groundnut oil
1½ star anise
2 cloves
1 cinnamon stick
½ teaspoon Sichuan peppercorns (optional)
4 garlic cloves, squashed with the back of a knife
A large chunk of ginger (about 60g), squashed with the back of a knife
4 spring onions, cut into 3–4 pieces each
50g soft light brown sugar
2 tablespoons dark soy sauce
4 tablespoons Shaoxing rice wine
1½ teaspoons salt
I first came across this idea when I was putting together a selection of _Guardian_ reader recipes for the Chinese New Year celebrations – and was immediately struck by its simplicity, and the joy of producing something so rich and intensely flavoured with so little effort. OK, so it has a fair few ingredients, but none of them are hard to find, with the possible exception of the peculiarly tingly Sichuan peppercorns, and there's little more to do than simply stick them all into a pan, cover and wait for heat and time to work their magic.
Fatty and gorgeously sticky and savoury, this is best served with plain rice and some simply steamed greens. Like many slow-cooked dishes, it reheats well – you'll probably need to add a splash of hot water as you do so, but don't be tempted to spoon off all the fat.
1. Bring a large pot of salted water to the boil, then add the pork, in one piece if possible, and blanch for 4 minutes. Drain well and cut into chunks approximately 4 x 4cm.
2. Heat a good splash of oil in a wide, lidded pan over a high flame until it begins to smoke, then sauté the pork in batches for a couple of minutes until it starts to brown.
3. Lightly bruise the spices in a pestle and mortar, then add to the pan with the last batch of pork. Fry for 30 seconds, stirring, then add the garlic, ginger and onions. Finally stir in 600ml of water and the remaining ingredients, scraping the bottom of the pan to deglaze.
4. Replace the rest of the pork and bring the pan to a simmer, then turn down the heat and cook, partially covered, for about 90 minutes to 2 hours, until the pork is very tender and the sauce well reduced and clinging to the meat. (Although this isn't a dish with a great deal of sauce, keep an eye on it and stir in a little more hot water if the pork starts to stick.)
## Lamb 'porchetta' with salsa verde
##### serves 6
2 tablespoons black peppercorns
½–1 tablespoon red chilli flakes (I use mild _pul biber_ , or Aleppo pepper, but if you use another chilli, you may want to err on the side of caution)
3 tablespoons fennel seeds
1.5kg boned lamb breast (probably 2 or 3)
6 garlic cloves, crushed
4 tablespoons chopped thyme and rosemary
½ teaspoon bicarbonate of soda
##### _For the salsa verde:_
1 large bunch of basil
1 large bunch of flat-leaf parsley
6 anchovies (rinsed if packed in salt)
2 tablespoons capers (rinsed if packed in salt)
1 garlic clove, crushed
Juice of ½ a lemon
1 teaspoon Dijon mustard
Olive oil
You don't see a lot of lamb breast around, so if you're not familiar with it, the best way to think of it is as the ruminant equivalent of pork belly – fatty, yes, but cooked right, utterly melt-in-the-mouth delicious.
As this recipe shows, pretty much anything you can do with belly you can do with breast, and actually, I think the garlicky, herbaceous flavours of a classic rolled porchetta work even better with the sweet mellow flavour of lamb, especially when offset by a zingy green sauce. It remains extraordinarily good value, and any decent butcher should be able to get you some without too much trouble.
1. Between 16 and 48 hours before you want to eat the lamb, depending on how long you have to marinate it, put the peppercorns, chilli flakes and fennel seeds into a hot dry frying pan and toast for a minute or so or until aromatic. Allow to cool slightly, then crush in a pestle and mortar.
2. Lay the lamb breast or breasts out flat on a board, fat side down, and salt generously. Spread over the crushed garlic (unfortunately, fingers are the easiest thing to use – rub them with lemon juice afterwards to help neutralize the smell), followed by the crushed spices and chopped herbs. Roll up tightly from one of the short ends and tie with string in several places. Rub the skin with bicarbonate of soda and a little more salt, then refrigerate overnight, or for up to 48 hours.
3. Take the meat out of the fridge an hour or so before you want to cook it, to bring it up to room temperature. Heat the oven to 240°C/fan 220°C/gas 9 and roast the lamb for about 30 minutes, until golden, then turn down the heat to 170°C/fan 150°C/gas 3 and roast for a further 2–2½ hours, or until the meat is very tender. Rest for at least 20 minutes in a warm place.
4. To make the salsa verde, whiz the herbs, anchovies, capers and garlic up in a food processor (or roughly chop and then pound in a pestle and mortar if you're feeling more energetic), then beat in the lemon juice and mustard, followed by enough olive oil to make a thick sauce – it doesn't need to be super smooth. Taste and season or add more lemon juice if necessary.
5. Cut the lamb into thick slices and serve with the salsa verde.
## Bourbon and bacon butter
##### makes 125g
2 rashers of smoked streaky bacon
120g butter, at room temperature
1–2 tablespoons bourbon
1 teaspoon soft light brown sugar
It's hard to improve on good butter, but if you're going to try, you may as well go all-out: Parmesan and garlic, anchovies and chilli, or this smoky sweet all-American version, which is particularly great on barbecued corn and, I must confess, also works disgustingly well on toast.
1. Dry fry the bacon until crisp, then drain, reserving both the bacon and the liquid fat from the pan. Chop the bacon into small pieces.
2. Once the fat has cooled slightly, beat 1 teaspoon into the butter, plus the bourbon and sugar. Taste, season and add a little more booze, or indeed sugar, then once you're happy with the flavour, stir in the bacon pieces until evenly distributed.
3. Chill until ready to serve (you can roll it into a cylinder before chilling if you like the idea of neat little discs of butter, but I'm happy just to pass the bowl round at the table for people to scoop as much or as little as they want).
## Coconut ice magic
##### serves 6
65g dark chocolate, chopped
50g coconut oil
2 tablespoons golden syrup (optional)
A generous pinch of salt
Anyone who grew up in 1970s and 80s Britain will have fond memories of Bird's Ice Magic, the sweet gooey sauce that set to a brittle shell on contact with cheap vanilla ice cream, just right for shattering with an aggressively wielded teaspoon.
Sadly it was just too magic for the market, and seems to have disappeared from shelves, along with its almost equally thrilling squeezy cone-shaped bottle – but never fear, because help is at hand from an unlikely source.
Extra virgin coconut oil may not have been a kitchen staple in the 1980s, but its high melting point means it hardens as it cools – which is exactly what we want here. For the sweet flavour of the original, albeit with a totally tropical coconut taste, add a little golden syrup – if you want to pretend sophistication, leave it out.
1. Melt all the ingredients together in a heatproof bowl set over a pan of simmering water, stirring to combine.
2. Pour into a jug and serve with ice cream – pour over while warm, and, within 30 seconds, it should have set to a shell.
I feel blessed to be born in an age where it's socially acceptable to indulge in a love of garlic. I probably wouldn't risk it on a first date or just before a job interview, but at least few British people these days would claim, like the eighteenth-century Scottish writer Tobias Smollett, to be 'grievously offended' by the stuff.
How anyone, literary lion or not, could be blind to its charms is beyond me; juicy and almost sweet in its fresh green form, sharp and emphatically savoury when dried, garlic has a flavour more complex than the onion, more pungent and spicy than the chive or the leek, and somehow more addictive than the rest of the family put together – the more you eat, in my experience, the more you crave.
But garlic phobia is not a peculiarly British, or indeed even a modern complaint; though it was much eaten in the ancient world for its medicinal qualities, there seems to have been a rather aristocratic disdain for its powerful odour. The Roman poet Horace describes garlic as 'more baneful than hemlock' in his _Odes_ , while Pliny warmly recommends it . . . for repelling scorpions, snakes and 'every kind of beast'.
Worse was to come: by the Middle Ages, garlic, a notably pungent bulb even in a memorably pungent age, stood accused of encouraging intemperance and lechery, and the sixteenth-century herbalist John Gerard claimed that consumption 'ingendreth naughty and sharpe bloud'.
Indeed, garlic did not find any sort of popularity in this country until the latter half of the twentieth century, with John Evelyn declaring in 1699 it was fit only for 'rustic northerns' thanks to its 'intolerable rankness . . . 'tis not for ladies palates, nor those who court them', and Mrs Beeton including it in just one recipe, for mango chutney, followed by a note describing garlic as the most 'acrimonious in its taste of the whole of the alliaceous tribe'.
It wasn't until we began to travel more widely in the 1960s, bringing the Mediterranean flavours of our holidays home with us, that garlic began to make inroads into the British kitchen. It is also, of course, a key ingredient in the Asian cookery we took to our hearts around the same time, though it tends to be used in conjunction with other spices rather than as the star attraction.
Tolerance for garlic, like chilli, is a very personal thing, and some people seem to be able to take far more of it than others. When following other people's recipes I generally add more than they suggest, and you should feel free to adjust my quantities according to your own taste.
_NB: for wild garlic, a related plant with a similar flavour, see W is for Wild._
### Chemistry
Interestingly, garlic does not smell until it is cut – breaking the cell membranes brings an enzyme called alliinase into contact with a sulphoxide called alliin, and the two combine to create allicin, which is responsible for garlic's pungent scent. This explains the mysterious phenomenon by which crushed garlic smells far more strongly than its sliced or chopped counterpart; the more membranes are destroyed, the more allicin is produced.
These allicin molecules are highly volatile, and once released change readily into other organic, sulphurous compounds, including those responsible for garlic's many miraculous qualities, for example its antibacterial and anti-clotting properties. Though we rarely consume enough to think of it counting towards our daily intake of fruit and vegetables, garlic is also a good source of vitamins B1 and C.
### Buying and storing
Though most of us don't imagine cultivated garlic having a season because we generally eat it dried, the milder green sort, known as 'wet garlic', appears in May, with the main crop being pulled from July.
The two are the same thing at different stages of development (wet garlic is the immature bulb, harvested before the cloves have had a chance to form completely) and can be used pretty interchangeably, but it's nice to take advantage of the former's sweeter, more delicate flavour by pairing it with easily overwhelmed ingredients like fresh cheeses, eggs and salads. The comically large elephant garlic, meanwhile, which you might sometimes see at markets, is actually a kind of leek, which explains its muted flavour.
When buying garlic, look for firm heads with plump cloves, avoiding any with green tips that suggest they have started to sprout, and store them in a cool dry place to discourage this.
## Confit garlic, thyme and Parmesan tart
##### makes a 22cm tart (serves 6–8)
240ml milk
240ml double cream
3 eggs, beaten
120g Parmesan, finely grated
2 bushy sprigs of thyme, leaves only, plus an extra sprig for garnish
##### _For the confit garlic:_
2 heads of garlic
250ml olive oil
##### _For the herb pastry (or use 200g ready-made shortcrust):_
120g plain flour
60g cold butter, finely diced
¼ teaspoon fine salt
½ teaspoon herbes de Provence or dried thyme and rosemary
3 tablespoons ice-cold water
Garlic, slow cooked in oil, is a remarkable thing – the raw sharpness completely melts away, leaving a soft, caramelized, even toffeeish sweetness that needs something savoury to play off. Hopelessly wobbly and rich, this Parmesan custard is the perfect foil. It makes a lovely lunch, served with a plain green salad.
1. Peel the garlic cloves (the best way to do this in quantity is to put the separated cloves into a large bowl and invert a similarly-sized bowl over the top to make a lid, or put them in a large lidded saucepan or jar, and shake the bejesus out of them), then put them into a small pan with the oil. Bring to just below a simmer, then turn down the heat and cook gently for half an hour. Strain into a bowl so they stop cooking, then chill immediately (please don't ignore this – the refrigeration is important if you want to be sure of avoiding any botulism growth on the garlic). The leftover oil makes great salad dressing.
2. Make the pastry by whizzing together the flour and butter to make coarse crumbs, then add the salt, herbs and just enough icy water to bring it together into a dough; you'll probably need about 2 tablespoons. Wrap well and chill for at least 30 minutes.
3. Heat the oven to 200°C/fan 180°C/gas 6. Grease a 22cm tart tin, put it on a baking tray and roll out the pastry on a lightly floured surface until large enough to line the tin. Gently press it in, prick the base a few times with a fork, then line with baking paper or foil and baking beans, and blind bake for 15 minutes.
4. Remove the paper and beans and bake for another 5 minutes until golden. Take out of the oven and turn the heat down to 180°C/fan 160°C/gas 4.
5. Whisk together the filling ingredients and season lightly. Arrange half the garlic on the base of the tart, then put the baking tray with the tart on it back into the oven and pour the filling into the pastry. Bake for 20 minutes, then push the remaining garlic into the setting tart, put back into the oven and bake for 25–30 more minutes, until set but slightly wobbly, checking its progress regularly after 15 minutes. Allow to cool for at least 20 minutes before serving with a sprig of thyme snipped on top.
## Hot and sour seafood soup with black garlic aïoli
##### serves 4
8 raw shell-on king prawns
A dash of oil
1.5 litres good fish stock, not too strong
1 long red dried chilli
4 stalks of lemongrass, trimmed
4 kaffir lime leaves, torn
3 thick slices of galangal
2 Thai shallots, roughly chopped
A small bunch of coriander, with roots if possible
1 tablespoon palm sugar
24 mussels, cleaned
200g cod cheeks (or other meaty white fish), cut into chunks
2 medium squid, cut into bite-sized pieces and lightly scored
2–5 red bird's-eye chillies, finely sliced
3 tablespoons lime juice, or to taste
2 tablespoons fish sauce, or to taste
##### _For the black garlic aïoli:_
1 egg yolk
2 cloves of black garlic
¼ teaspoon coarse salt
150ml groundnut or other neutral oil
A kind of south-east Asian take on the classic Provençal bouillabaisse, this zingy Thai seafood soup is paired with a sauce made with sweet, richly flavoured aged black garlic in place of the usual rouille. Defiantly spicy and sour, shot through with the funky flavour of the garlic, consider this a wake-up call to your palate. Black garlic can be found in Asian supermarkets, fancy grocers, and very easily online.
1. To make the aïoli, mash together the egg yolk, garlic and salt in a pestle and mortar until smooth, then add 2 teaspoons of tepid water and mash to incorporate.
2. Transfer to a larger bowl and very gradually whisk in the oil (you can also do this in a food processor if you prefer) until you have a smooth emulsion. Taste and adjust the seasoning if necessary.
3. Shell and devein the prawns, saving the shells, and set the meat aside. Heat the oil in a large pan on a medium heat and fry the shells until pink. Add the stock and dried chilli, bring to the boil and simmer for a couple of minutes, then strain, discarding the shells but returning the chilli to the pan.
4. Meanwhile, bruise the lemongrass, lime leaves, galangal and shallots, plus the roots of the coriander if you have them, in a pestle and mortar. Finely chop the coriander leaves.
5. Add the contents of the mortar to the stock along with the sugar and simmer for a minute or so.
6. Add the mussels to the pan and cover for a couple of minutes. Once they have begun opening, add the fish and squid and cook for a minute or so, then take off the heat.
7. Add the bird's-eye chillies, plus lime juice and fish sauce to taste. Garnish with the black garlic aïoli and the coriander leaves.
## Brined and slow-cooked lamb with flageolet beans, white wine and garlic
##### serves 6
1 large lamb shoulder, about 2kg
500g dried flageolet beans
1 head of garlic
1 lemon
2 rosemary sprigs, bruised with the back of a knife
400ml white wine
500ml chicken stock
##### _For the brine:_
350g coarse sea salt
225g sugar
4 garlic cloves, peeled and squashed with the back of a knife
2 rosemary sprigs, bruised with the back of a knife
Like cassoulet? Then you'll love this. Minus the duck fat and the sausage, it's a (slightly) lighter take on that south-western French classic.
You don't have to brine the lamb beforehand if you're pressed for time, but I'd recommend it for the infusion of savoury flavour it gives the dish.
1. Put the salt and sugar into a very large pan with 2 litres of cold water. Bring to a simmer, stirring to dissolve, then add the garlic and rosemary and allow to cool. Add another 2 litres of water and the lamb (or transfer to a larger container if necessary), and refrigerate for between 24 and 48 hours, turning occasionally.
2. The night before you want to cook, soak the beans in water.
3. Take the lamb out of the fridge, drain it and bring it to room temperature. Pat dry with kitchen towel. Drain the beans and put into a large pan. Cover with cold water and bring to the boil. Skim off the scum from the top, and simmer for about 30–40 minutes, until just tender – they'll cook further in the oven.
4. Meanwhile, heat the oven to 250°C/fan 230°C/gas 10 (or your oven's hottest temperature if lower). Put the lamb into a roasting tin and bake for 30 minutes, until the fat is golden.
5. Drain the beans. Take the lamb out of the oven and turn it down to 160°C/fan 140°C/gas 3. Put the lamb into a lidded, flameproof casserole dish just big enough to hold it, and cover with the beans. Cut the head of garlic in half laterally and push into the beans, cut sides down, then squeeze the lemon in and add the cut halves to the beans along with the rosemary. Pour over the wine and stock, which should come just to the top of the meat. Bring to a simmer on the hob, then cover and bake for about 3½–4 hours, until the lamb is soft enough to spoon, stirring occasionally to ensure the beans cook evenly.
6. Remove the lamb and spoon apart. Taste the seasoning of the beans and adjust if necessary, then serve the two together with a green salad.
## Duck fat garlic bread
##### makes 1 loaf
4 garlic cloves
¼ teaspoon salt
100g duck fat, at cool room temperature (solid, but spreadable)
A small bunch of parsley, finely chopped
1 baguette
No groundbreaker here, but if you've got some decent duck fat left over, say, from the miso caramel roasted bird here, this is an indecently delicious way to use it up.
1. Heat the oven to 200°C/fan 180°C/gas 6. Mash the garlic to a smooth paste with the salt, then blend with the fat and the parsley.
2. Put the bread on a large sheet of foil. Cut into slices, being careful not to go all the way through the bread. Carefully stuff the duck fat into the cuts – this is quite a messy job, but make sure you get it all in (if your hands stink afterwards, squeeze some lemon juice over them).
3. Wrap up and bake for 15 minutes, then open the foil a little and bake for another 5–8 minutes, until golden and crisp on top. Serve immediately.
## Georgian griddled chicken on toast
##### serves 4 (with sides)
##### _For the chicken:_
2 plump garlic cloves, roughly chopped
1 teaspoon salt
½ teaspoon paprika
1 small chicken, about 1.2kg
2 large, thick (about 2cm) slices of robust chewy bread
25g butter
##### _For the sauce:_
4 garlic cloves, roughly chopped
¼ teaspoon salt
¼ teaspoon paprika
300ml chicken stock
A small bunch of coriander, roughly chopped
Crisp, smoky, buttery, this is a dish that demands to be eaten messily with fingers – I've stolen the brilliant idea of roasting the bird on toast, which I've never come across in a Georgian restaurant, from Stevie Parle's east London restaurant Rotorino. The combination of garlic and butter seemed too good an opportunity to pass up, and sodden with these, and the sticky, savoury chicken juices, it's a treat worth fighting over.
1. Mash the garlic, salt and paprika for the chicken to a paste in a pestle and mortar. Put the chicken, breast side up, on a board. Untie the legs and wings, and use a heavy knife to cut down the middle of the bird, through the backbone. Turn it over and use a meat mallet or some other heavy item to flatten the bird out. Rub all over with the paste, then cover and leave to marinate for at least an hour (if you want to leave it much longer, refrigerate it, but bring it back to room temperature before cooking).
2. Heat the oven to 200°C/fan 180°C/gas 6 and put a greased griddle pan or large frying pan on a medium-high heat. Find a heavy heatproof chopping board or baking tray and a couple of heavy heatproof objects (I use my pestle and mortar). Put the chicken on to the hot griddle, put the board on top and weight it down. Cook for 7 minutes, then turn over and repeat.
3. Put the bread in a roasting tin, put the chicken on top, then the butter on top of that. Roast for 10 minutes, then add the board or tray and weights and roast for about another 20 minutes, until cooked through, checking the colour of the juices after 15 minutes.
4. Set the chicken and bread aside to rest while you make the sauce. Mash the remaining garlic with the salt and paprika to make a paste. Heat the stock in a small pan along with any juices from the chicken (most will probably have gone into the bread), then whisk the garlic paste into it.
5. Carve the chicken and cut the bread into smaller pieces (unless there are just the two of you, in which case you can be greedy and have one slice each). Stir the coriander into the sauce and serve the two together.
## Grand aïoli for heretics
##### serves 8
##### _For the aïoli sauce:_
1 head of garlic
2 egg yolks
450ml olive oil
Juice of ½ a lemon
A small bunch of basil
##### _For the salt fish (optional):_
900g thick fillet of pollack, cod or other firm white fish, skin on
Coarse salt, to cover the fish
1 bay leaf
Fronds from the top of the fennel (see below)
##### _To accompany (as desired):_
16 small new potatoes, boiled in their skins
2 red peppers, deseeded, brushed with a little oil and charred on a hot griddle until soft and blackened
2 heads of fennel, cut into wedges and chargrilled as above
2 courgettes, cut into strips and chargrilled as above
16 quail's eggs, hard-boiled (put into a pan of cold water, bring to the boil, simmer for 2½ minutes, then run under cold water)
8 large ripe tomatoes
800g large cooked prawns
1 baguette
For all their ancient reputation, many regions of France are disappointingly restrained with the garlic – I think of Michel Roux's half a clove rubbed around the gratin dish for dauphinoise – but down in the south, in Provence, they have no such qualms. This pungent sauce, which is served with everything from barbecued sausages to fish and warm salads, is also the centrepiece of the classic Provençal feast, the Grand Aïoli, flanked by an army of vegetables and seafood, simply cooked so as not to distract from its glorious garlickiness.
I love salt cod, but the other traditional accompaniments of green beans and carrots, beetroot and cauliflower have always seemed too northern to me – shaking off the yoke of tradition, I prefer more stereotypically Mediterranean vegetables, sweet fennel and peppers, even courgette and aubergine batons, plus a few big pink nutty prawns for good measure.
You, however, can use whatever you like; I've given a few ideas under 'To accompany', but the sauce is the point here. I've also taken the liberty of adding basil and a little lemon juice to it, which is definitely against the rules, but gives it a gorgeous green colour, and a slightly peppery, fresh flavour. Be warned, however, it's still very, very garlicky.
1. If you're making the salt fish, 48 hours before you want to eat, find a dish just large enough to hold it. Cover the base of the dish with 1cm of coarse salt, then lay the fish on top. Add another centimetre of salt, then cover and refrigerate for 24 hours.
2. Rinse the fish well, discarding the brine, then put into a large bowl of cold water and leave to soak for 24 hours, changing the water three times during that time. (If using bought salt cod, start by rinsing and soaking it.)
3. To make the aïoli, peel the garlic and pound to a smooth paste in a pestle and mortar with a hefty pinch of salt. Add the egg yolks, one at a time, and pound to combine. If you're feeling energetic, you can add the oil, very gradually, in the same way, but at this point I prefer to transfer the mixture into a large clean bowl and use electric beaters, whisking in the oil little by little until you have a thick mayonnaise.
4. Add the lemon juice and enough warm water to give a thick but creamy dipping consistency. Roughly chop the basil, then add to the aïoli and use a hand-held mixer to whiz to a vibrant green. Check the seasoning.
5. Put the salt fish into a wide pan and just cover with cold water. Add the bay leaf and fennel tops, then bring gently to the boil. Cover the pan, take off the heat, and leave the fish to poach for 15 minutes before taking it out of the water, removing the skin and flaking the flesh into large chunks.
6. Serve the aïoli sauce as the centrepiece of a platter, flanked by potatoes, peppers, fennel, courgettes, quail's eggs, tomatoes, prawns and salt cod, with the baguette on the table too for everyone to tear into as desired.
You can't beat a bit of heat: English mustard spread so thickly on a bacon sandwich that the first bite makes you sneeze, the sinus-cleansing satisfaction of a Sichuan chilli chicken or the sour buzz of a pickled jalapeño with a cold beer – there's something about the sensation that reminds you you're alive. It's the reason I slosh sriracha on a hangover breakfast with such abandon, or crave tom yum when I'm feeling under the weather; heat is a sharp kick to the palate.
Of course, you can't avoid the fact that said wake-up call comes in the form of pain – childhood memories of being fed 'green beans' by my brother at the local Indian restaurant are burned deeply into my psyche. A taste for heat is not the same as a taste for sugar, or fat – heat doesn't register with our taste buds, but with pain receptors on our tongues; we're the only animal known to seek out danger or discomfort for kicks, whether that comes in the form of an insanely spicy curry or an insanely dangerous base jumping holiday.
Although I'm secretly quite proud of my tolerance for chilli (I once ate an entire bhut jolokia, at that time the world's hottest variety, and felt mildly euphoric for a full twenty minutes after I stopped crying), I'm not one of those perverts who seek out foods just for their heat; for me, the spice has to be justified by flavour.
Though capsaicin, the compound responsible for the feeling of heat, is flavourless by itself, chillies, like any fruit, do have a taste; they can be sweet or smoky, grassy or citrussy, and which variety you choose has a great bearing on the character of the final dish.
Those crazier, often American, 'hot sauces' such as Grim Reaper, 100% Pain or Ass Blaster, which are pure capsaicin, are, to my mind, as pointless as an artificial sweetener – there's no flavour there, just burn.
If you need any more reason to avoid products that come with a pipette and a safety warning, remember that synthetic capsaicin is often used in self-defence pepper sprays, which should be some indication that it's not a good thing to ingest, mugger or not.
### Chilli immunity
It's often thought that chillies are something we can develop a tolerance for; after all, children from all cultures react badly to early experiences of them, however spicy the food around them. In fact, studies focusing on Mexicans and Americans found little correlation between either age or custom and tolerance for capsaicin – the Mexicans who ate hot food regularly didn't seem any less sensitive to the pain of the heat, they just enjoyed the sensation more. You grow to love the burn, in other words. This chapter should help.
### Chilli first aid
As capsaicin isn't very soluble in water, cold drinks are only helpful if you hold them in the mouth to cool down the troubled receptors – they won't wash the stuff away. Roughly textured foods, like rice crackers or crusty bread, will distract the tongue from its predicament. And, from personal experience, I took on the bhut jolokia with a tableful of thick Turkish yoghurt and flatbreads, and came out alive.
### The science of heat
Capsaicin, an irritant alkaloid, is mostly concentrated in the placental tissues of the fruit – which is a weird, if biologically accurate, way of describing the soft pale stringy stuff that the seeds are attached to – though clearly it's also present in the rest of the flesh, or a carefully trimmed chilli would have no heat at all.
It's made up of at least five different chemical components that hit the tongue in different places, so what starts as a sharp burn in the throat will progress to a less intense, but lingering heat on the tongue.
Capsaicin triggers a response in receptors on the tongue similar to that of heat – so although the flesh isn't actually burning, or indeed damaged at all, the brain is tricked into thinking it is. This is why, despite the searing sensation, your mouth remains as puzzlingly cool as ever.
It's almost impossible to judge the potential heat of an individual fruit before you tuck in, though gently nibbling on the pointed end, the mildest part, will give you some idea. The heat will vary from season to season, soil to soil, and even among the fruits hanging on the same plant at the same time, depending on how ripe they are, so chillies really are a game of chance.
Hot dry weather increases capsaicin production, which begins at pollination and stops when the fruit begins to ripen, which means that, contrary to popular belief, chillies are hottest at about the time they start to change colour from green to red or yellow.
Chilli heat is traditionally measured on the Scoville scale, which is based on how far a chilli extract has to be diluted with sugar water before tasters can no longer detect it in the liquid. It has largely been replaced with less subjective methods among the scientific community, but is still commonly cited in culinary circles.
See also: Hot and sour seafood soup with black garlic aïoli (here), Spicy peanut butter noodles with sprouting broccoli (here), Turkey mole poblano (here).
## Blackened jalapeño and avocado slaw
##### serves 4–6
1 large raw beetroot, peeled
2 large carrots, peeled
3 limes
½ teaspoon salt
3–5 green jalapeño chillies (depending on heat tolerance)
3 spring onions
2 garlic cloves, unpeeled
1 ripe avocado
A small bunch of coriander
Slaw, in this case, because there's no cabbage involved – sweet beetroot and carrot just seemed a more apt pairing with the zingy, creamy avocado dressing, but really you could use any thinly sliced vegetable that takes your fancy. Toss it all together before serving if it's more convenient, but I like the contrast between the colourful vegetables and the pale green dressing.
Jalapeños vary in heat, so I'd advise cooking five, then starting off with three in the dressing and tasting before adding more, unless you like a Russian roulette element to your salads.
1. Grate the beetroot and carrots and squeeze the limes over the top, along with ¼ teaspoon of salt.
2. Heat a dry griddle pan over a high heat until smoking, then cook the chillies, spring onions and garlic until well charred on all sides. When cool enough to handle, peel the garlic, trim and roughly chop the onions and trim and deseed the chillies.
3. Put these charred vegetables into a food processor with the avocado and whiz until smooth. Add the remaining salt and 60–75ml of cold water – just enough to bring it to the consistency of a thinnish mayonnaise. Season to taste.
4. Roughly chop the coriander and toss through the vegetables. Serve with the avocado sauce on the side for people to drizzle over at the table (see above).
## Sweet sriracha cakes
##### makes about 30 squares
35g peanut butter
15g coconut oil
1 teaspoon fine sea salt
2 teaspoons chilli flakes
300g marshmallows
180g Special K or similar crunchy cereal
Oil, to grease
1 tablespoon sesame seeds (optional)
Sriracha or other hot sauce, to finish
This is a mash-up of those horribly addictive little chilli crackers sold in pubs, and the sticky treats popular at children's party teas. The sweet heat of the sriracha goes strangely well with the sugary marshmallows – in my experience, people are usually thrown by the first bite, and then several squares later are begging for them to be taken away. Excellent make-ahead no-cook party food to go with some ice-cold beer.
1. Put the peanut butter into a pan over a medium heat with the coconut oil. When they've both melted, stir together, then stir in the salt and chilli flakes. Add the marshmallows and heat, stirring regularly, until melted.
2. Meanwhile, put the cereal into a large bowl and lightly grease a small baking tin.
3. When the marshmallows have melted into a bubbling mass, pour this over the cereal and stir quickly to mix before it sets. Tip into the tin and press down with a lightly greased spatula or greased hands to flatten. Top with a sprinkling of sesame seeds, if using, and leave to set.
4. Just before serving, drizzle artistically with sriracha and cut into small squares.
## Red lentil and tomato soup with harissa
##### serves 4
2 tablespoons olive oil
1 red onion, finely chopped
2 garlic cloves, finely chopped
1 teaspoon cumin seeds
½ teaspoon ground cinnamon
200g red lentils
½ a tin of plum tomatoes, roughly chopped
1 litre chicken or vegetable stock
5 teaspoons harissa, or to taste
4 teaspoons plain yoghurt (optional)
Coriander, to garnish
Comfortingly thick, with a sucker punch of spice, this is one of the best winter soups in my repertoire, and surprisingly quick to put together. If you're feeling in need of some extra bolstering, a spoonful of plain yoghurt adds richness – and is a good way of remedying a heavy hand with the harissa.
1. Heat the oil in a large pan over a medium heat and add the onion. Cook for about 7 minutes until completely softened, then stir in the garlic and cumin seeds and cook for a further couple of minutes. Stir through the cinnamon and cook for another minute.
2. Stir in the lentils, followed by the tomatoes and their juice, mashing them well, and finally the stock. Bring to a simmer, then turn down the heat and cook for about 20 minutes, until the lentils have broken down completely and the soup is thick.
3. Stir in the harissa to taste; brands vary considerably in their heat, so do this very gradually until you reach the level that suits you, then season.
4. Serve with the yoghurt swirled on top, if you're using it, and the coriander roughly snipped over it.
## Green chilli, New Mexico style
##### serves 4
6 green jalapeño chillies (or more if you want it smokin' hot)
400g tomatillos or 200g gooseberries
1 tablespoon lard or vegetable oil
800g boneless pork shoulder, diced
1 large onion, finely sliced
6 garlic cloves, crushed
2 teaspoons Mexican oregano
2 teaspoons ground cumin
2 teaspoons ground coriander
800ml chicken stock
A small bunch of coriander, roughly chopped
New Mexicans are passionate about their local Hatch chillies, which are sold fresh and green, or left to ripen to a rich mellow red, and sometimes dried – according to locals, the green version has a fruitier flavour, while the red boasts an earthier heat.
Sadly it's impossible to get them outside the Southwest, but charred jalapeños make a very decent substitute, and tomatillos, a relative of the Cape gooseberry, are easily found online. Surprisingly, however, the very British gooseberry makes a decent alternative – I promise, it works.
Good with all the things you'd usually serve chilli con carne with – rice, cornbread, tortillas, you know the drill. A dollop of soured cream wouldn't go amiss either.
1. Char the chillies and fresh tomatillos under a hot grill for about 10 minutes, turning, until blackened (if using tinned tomatillos or gooseberries, come to them in step 3). Set aside to cool. Heat the oven to 160°C/fan 140°C/gas 3.
2. Heat the fat in an ovenproof casserole over a medium-high heat. Brown the meat in batches, making sure not to overcrowd the pan. Set the meat aside and turn the heat down to medium.
3. Add the onion to the pan (you can add a little more fat if necessary) and soften. Meanwhile, deseed the chillies and roughly chop them and the tomatillos, fresh or tinned. If using gooseberries instead, top, tail and roughly chop.
4. Once the onion is soft, stir in the garlic, chillies and tomatillos or gooseberries and cook for a minute before adding the oregano, spices and a little more fat if necessary. Stir until fragrant, then return the meat to the pan and add the stock. Scrape the bottom and bring to a simmer, then cover and put into the oven for 1½–2 hours, until the meat can be cut with a fork.
5. Season to taste and allow to rest for at least 15 minutes before stirring in the coriander.
## Lemongrass and chilli tofu
##### serves 2
350g firm tofu
2 teaspoons salt
1 stalk of lemongrass
2 bird's-eye chillies
2 garlic cloves
Oil, to fry
I admit, I'm not a heavy user of tofu (it's so often bland and spongy with oil) but in a higgledy-piggledy hip little restaurant in a suburb of Saigon with some friends, I had an epiphany. I can't remember why we ordered so out of character – perhaps the waiter recommended it? – but my God, when it came, crisp on the outside, creamily rich and soft within, and perfectly seasoned, the whole table was momentarily silenced.
I've never had anything as good since, which, as Vietnam isn't exactly round the corner, forced me to try and recreate it at home. I'm pretty pleased with the results – and I beg fellow tofu sceptics to give it a try. (It took me a while to twig just how gentle you have to be with tofu, but if yours does stick, or break up, don't worry, it will taste good, even if it looks like scrambled eggs.)
1. Cut the tofu into roughly 4cm chunks. Dissolve the salt in 500ml of hot water, then add the tofu and leave for 15 minutes.
2. Meanwhile, finely chop the inner part of the lemongrass stalk, discarding the tough outer leaves, and deseed the chillies. Finely chop these and the garlic.
3. Carefully lift the tofu out of the water (it's fragile stuff) on to a bed of kitchen towel and gently blot dry on both sides. Heat enough oil in a large frying pan or wok to come about a third of the way up the tofu chunks.
4. When the oil is shimmering, add the tofu. Leave for a couple of minutes, then gently turn one of the pieces; if it has a golden crust, flip the others, being very careful not to break them up or disturb the crust. A thin, flexible metal spatula is the ideal tool.
5. Sprinkle with the lemongrass, chilli and garlic and fry for a couple more minutes, then scoop out and serve immediately.
## Meatball curry
##### serves 4
##### _For the meatballs:_
1 teaspoon coriander seeds
1 teaspoon cumin seeds
60g yellow split peas, soaked in cold water for at least an hour
400g minced lamb
1–2 small green chillies (to taste), deseeded and finely chopped
4 garlic cloves, minced
½ teaspoon salt
4 shallots, chopped
2 tablespoons poppy seeds
2 tablespoons fennel seeds
##### _For the sauce:_
2 tablespoons neutral oil
1 onion, finely chopped
5 garlic cloves, crushed
5cm piece of ginger, finely grated
½ teaspoon chilli powder
½ teaspoon turmeric
A small bunch of coriander, stems finely chopped
1 x 400g tin of plum tomatoes, roughly chopped
1 tablespoon tomato purée
Beef mince I can largely take or leave (hence its absence from this book), but I get really quite excited by the lamb and pork varieties, which contain enough fat and flavour to make them worth cooking with.
Kofta curry is one of those staples of home cooking which is puzzlingly hard to find in British Indian restaurants, which is all the more reason to make it at home. Spicy, juicy little meatballs in a rich tomato gravy – it's a familiar combination, but in this instance probably better served with flatbreads or rice than spaghetti.
1. Toast the coriander and cumin seeds in a hot dry pan until fragrant, then tip into a pestle and mortar. Allow to cool, then grind to a powder.
2. Drain the split peas and put them, along with ½ teaspoon of the coriander and cumin powder, into a food processor along with the remaining ingredients for the meatballs. Whiz until well combined. Heat a little oil in a frying pan and cook a bit of the mix to test the seasoning, adjusting if required. Roll the mixture into little meatballs and refrigerate to firm up if necessary.
3. Heat the oil in a wide frying pan over a medium-high heat until smoking, and cook the meatballs until nicely browned all round. Scoop out and set aside.
4. Turn the heat down and add the onion to the same pan. Cook, stirring, until soft and golden, then stir in the garlic and ginger. Cook for a couple of minutes then stir in the spices, including the remaining coriander and cumin mixture, and the chopped coriander stems.
5. Pour in the tomatoes and purée, and stir the bottom of the pan. Bring to a simmer, then cook until the oil just begins to separate around the edge of the pan. Put the meatballs back into the pan and cook, covered, for 30 minutes. Season the sauce to taste. Top with the coriander leaves, roughly chopped.
## Mexican chilli chocolate mousse
##### serves 4
¼ teaspoon chipotle chilli flakes, plus extra to serve
½ teaspoon ground cinnamon
½ teaspoon ground nutmeg
½ teaspoon ground ginger
40g caster sugar
A pinch of salt
175g dark chocolate, broken into pieces
7 egg whites (or 14 tablespoons egg white if you buy it in a carton)
Ten years on, Oaxaca's vast Mercado 20 de Noviembre remains as vivid in my mind as ever. It's that kind of place. One of the things I remember making the biggest impression, aside from the wicker baskets of deep-fried grasshoppers that I never quite plucked up the courage to try, was the hot chocolate. Thick, and at once astonishingly sweet and powerfully bitter, the spices brought out the flavour of the cocoa like a dream. This is the mousse version.
A word of caution from one who's been there; though all egg-white mousses are beautifully light, you really do have to be quick when combining the ingredients in step 4, or the chocolate will seize. I keep extra ingredients on hand in case of such disaster.
1. Grind the chipotle flakes into a fine powder, then mix with the other spices, sugar and salt.
2. Put the chocolate into a heatproof bowl set over, but not touching, a pan of simmering water, and melt, stirring to help it along.
3. Meanwhile, whisk the egg whites in a large bowl until they hold soft peaks. Whisk in the spiced sugar to stiff peak stage, being careful not to overwhisk (if they droop, you'll have to start again).
4. Once the chocolate has melted, take the bowl off the pan and, working very quickly, vigorously whisk in a third of the whites; you need to do this as fast as possible or the chocolate will seize and harden – the mixture should be thick, but not dull or grainy.
5. Gently fold in the remaining egg whites with a large metal spoon until the mixture has no white streaks, being careful to keep as much air in as possible. Divide between four glasses or bowls and chill until ready to serve. Top with a few flakes of chilli for extra drama.
However old you get, or however much you eat, ice cream retains the excitement of a special treat; something you'd be allowed on a summer's afternoon if you were really, really good, or if your mum was just too exhausted to say no to the prospect of five minutes' peace courtesy of the magic singing van.
It's delicious in the sweltering heat (which calls for a citrussy ice lolly rather than a full-on clotted cream number – the classic lemonade sparkle is my usual) but a rich, barely sweet, plain milk ice cream also makes a marvellous accompaniment to steamy, stodgy wintery desserts, like crumble, sticky toffee or Christmas pudding – it's all in the contrast between the textures and temperatures. In short, there is no wrong time to indulge in good ice cream (who am I kidding, I'll eat, and enjoy, a Mr Whippy if that's all that's going).
That said, though there are many, many excellent ice creams available to buy these days, they tend to come in disappointingly tiny tubs for the enthusiastic consumer, and unless you're lucky enough to live near a good ice cream parlour, the range of flavours tends to be fairly limited. Yes, I like chocolate, but I prefer rum, or fig, or avocado and lime, and if you're going to eat a big bowl of ice cream it ought to be one that you really, really want. Plus, it's easier to make than you might imagine.
### Equipment
With the exception of kulfi and granita, all sorbets and ice creams require churning before their final freezing, to incorporate air into the mixture, or they will set like a house brick. I would highly recommend, if you have any interest at all in the subject, investing in an ice cream maker, preferably one with its own refrigeration unit, so you can churn the ice cream as it freezes. They're bulky, but make life so much easier that you're almost guaranteed to get more use out of one than the simpler, churn-only sort, and working on the price per wear principle I employ when trying to justify expensive clothing purchases to myself, this makes them better value.
That said, you can make ice creams and sorbets without them thanks to a method known as still freezing, which resembles that used for granita (although if you do it right, the results should be quite different). Chill the mixture before use, then pour it into a container large enough for the liquid inside to be about 4cm deep. Cover and freeze for an hour, then check at regular intervals; once it has frozen around the edge, use electric beaters, a food processor, or a very vigorously applied hand whisk, to beat it all back into a homogeneous slush, then refreeze. Repeat every hour (or however long it takes for the mixture to start solidifying again) for the next two hours, then leave undisturbed for at least another half an hour to set firm before serving.
See also: Poached plum crumble with blue cheese ice cream (here), Rhubarb gin granita (here).
## Simple banana and peanut butter ice
##### serves 2–4 (2 greedily, 4 more moderately)
4 very ripe bananas
2 tablespoons peanut butter (see intro)
A handful of salted roasted peanuts, to top (optional)
The idea is an old one, but it was such a revelation to me that I just had to share it in case anyone else was languishing in dark ignorance with regard to the miraculous properties of frozen bananas. This is so unbelievably creamy that you won't miss the dairy one bit – the peanut butter is optional, and can be left out or substituted with honey, chocolate spread or chips, nuts, spice, maple syrup . . . you get the idea. Ideal for children, and best eaten as soon as it's made rather than frozen.
_NB: the bananas must be really ripe, or they won't be sweet enough._
1. Peel the bananas, chop into even slices, and freeze for at least 3 hours.
2. Put into a food processor and whiz until smooth and creamy (you'll probably need to keep sticking a spatula in to stop it clumping into large frozen balls, but it will happen, I promise).
3. Add the peanut butter, or any other flavourings, and a pinch of salt and whiz to incorporate, then serve with a few roughly chopped peanuts scattered on top, or, indeed, a generous drizzle of chocolate sauce (see the black and white shake, here).
## Salted brown butter and buttermilk ice cream
##### serves 4
75g salted butter
4 egg yolks
50g soft light brown sugar
¼ teaspoon salt
200ml whole milk
200ml buttermilk
Butter ice cream may sound outrageous, but trust me, this is so, so good any scruples will fly out the window with the button on your trousers. (Although actually, unlike most ice creams, this is made with naturally low-fat buttermilk, which not only makes it not as bad as it could be, but supplies a tangy edge to cut through all that butter, almost like a frozen yoghurt. If you want to go to hell in a handbasket, however, replace it with cream.)
1. Melt the butter over a medium-low heat, then turn up the heat slightly and cook until, under the froth, the milk solids turn brown. Take off the heat immediately and pour into a bowl so it stops cooking. Set aside to cool to room temperature.
2. Once the butter is cool, whisk the egg yolks, sugar and salt together until they turn distinctly paler, and voluminous. Gradually whisk in the butter until it's all incorporated.
3. Heat the whole milk in a medium pan until it comes to a simmer, then pour, whisking all the time, into the yolk mix. Pour back into the pan and gently cook, stirring with a wooden spoon, until the mixture thickens sufficiently to coat the back of the spoon, and a finger drawn down the back leaves a distinct line.
4. Allow to cool until warm, then whisk in the buttermilk. Chill for at least 4 hours if you have time, then freeze in an ice cream maker, or according to the directions here.
## Avocado and double lime sorbet
##### serves 8
150g white sugar
12 kaffir lime leaves
2 large, ripe Hass avocados
Juice of 5 limes
The first time I tried avocado in a sweet context, I wasn't convinced. But after a Damascene moment with an avocado and chocolate mousse, I've come round to the idea – the creaminess of a really ripe example makes it the perfect base for all sorts of dairy-free desserts, and here it adds a richness to a sharp, zesty lime sorbet, aromatic with tropical lime leaves, without weighing it down. Indeed, though I love it on a hot summer's afternoon (occasionally with a shot of cold rum poured over the top), it would also make an excellent pre-pudding palate cleanser at a particularly fancy dinner party.
Lime leaves can be found in oriental supermarkets, often frozen. You can leave them out if you can't find them, but they do add a lovely perfume to the sorbet.
1. Put the sugar into a pan with 150ml of water and the lime leaves and heat gently until the sugar has dissolved. Simmer for about 5–8 minutes, until slightly thickened and syrupy. Set aside to cool completely.
2. Peel the avocados and scoop into a bowl. Add the lime juice and whiz to a smooth paste using a stick blender. Whisk in the cooled syrup, discarding the lime leaves, then churn and freeze in an ice cream maker, or according to the directions here.
## Rum punch ice cream
##### serves 8
6 egg yolks
360ml whipping cream
360ml whole milk
130g soft brown sugar
4 tablespoons rum
A whole nutmeg, to grate
A dash of bitters
On holiday a couple of years ago in Barbados, my then-boyfriend's father made it his solemn duty to sample the rum punch at every restaurant we visited – it seemed rude not to join him. The best versions were rich with the island's sweet spices, balanced with a deft dash of bitters, with a healthy helping of rum. John, this one's for you.
1. Whisk the egg yolks in a medium heatproof bowl. Pour half the cream into a larger heatproof bowl and put a sieve on top.
2. Put the milk, remaining cream and sugar into a medium pan and heat, stirring to dissolve the sugar, until it comes to a simmer. Pour the hot mixture on the egg yolks, whisking constantly, until well combined, then pour back into the pan on a medium-low heat.
3. Stir constantly until the mixture begins to thicken slightly – about 5–10 minutes – then strain through the sieve into the remaining cream. Stir in the rum, grated nutmeg, bitters and a pinch of salt, and cool, then chill for at least 4 hours if you have time.
4. Churn in an ice cream maker until frozen, then freeze until solid.
## Simple persimmon, lime and ginger sorbet
##### serves 2–4
4 very ripe persimmons
2 tablespoons chilled coconut cream
3 tablespoons finely grated ginger
Juice of 3 limes
Honey, to taste (optional, see intro)
Another remarkably creamy fruity sorbet on the same lines as the classic banana version here. Persimmons always remind me, oddly enough, of set custard (perhaps something to do with the contrast between the tough skin and the soft, honeyed, almost jellied flesh beneath), so it's unsurprising they make great ice cream.
Because they're so sweet, they can handle the zing of the lime juice and the heat of the ginger, which gives the whole thing a refreshing, south-east Asian feel – but do make sure your fruit is properly ripe; it should be really squashy, almost bursting from its skin.
(Such very sweet fruit shouldn't need any extra help in the form of honey, but as it's impossible to know until you've made the sorbet, it's wise to keep it at hand, just in case.)
1. Freeze the persimmons until solid (this will take a good few hours). Make sure the coconut cream is chilled.
2. Prepare the remaining ingredients and put them near the food processor. Holding each persimmon with a tea towel to save your fingers, peel it. Rinse the peeler in warm water every now and then to help.
3. Stand each fruit on its flatter, stalk end and cut it in half with a stout knife (I use a cleaver), then trim off the stalk and cut into large chunks.
4. Put the persimmon into a food processor and whiz until almost smooth, then add the coconut cream and ginger and whiz again.
5. Add two-thirds of the lime juice and taste; depending on the sweetness of your fruit, you may want to add more, or honey to taste. Once you're happy with the results, serve immediately.
## Frangelico and espresso granita shots
##### serves 10
100g sugar
300ml strongish coffee
120ml Frangelico
Cold milk, to top
A friend of mine, who shall remain nameless, introduced me to an exciting new digestif in an Italian ski resort. It consisted of a shot of Jägermeister, the herbal liqueur favoured by ancient Austrian hunters and drunk stags, deposited in a glass of milk rather than the usual noxious energy drink. The bemused barman obligingly made up a round for us, but once they were down the hatch, he brought over his own version, which replaced the Jäger with a considerably less challenging shot of hazelnut liqueur. Only one of them inspired a recipe in this book, Gemma.
1. Put the sugar into a small pan with 300ml of water and bring to a simmer, stirring to dissolve the sugar. Simmer for about 5 minutes, until slightly syrupy, then take off the heat, stir in the coffee and Frangelico and allow to cool completely.
2. Pour the granita into a tray – it should be about 2cm deep. Unless you have very steady hands, you may find it easier to pour it out again into a jug once you've chosen the right tray, put the tray into the freezer, then pour the mixture in once it's in there.
3. Freeze for about an hour, then check – once it's started to solidify around the edges, scrape into the middle with a fork. Repeat roughly every 30 minutes for the next 2½ hours, until you have a dish full of large crunchy crystals.
4. To serve, scoop some into a small glass and pour over milk to top. Consume immediately.
## Ricotta ice cream terrine with fig molasses
##### serves 8
##### _For the fig molasses (or use about 75ml honey and 6 semi-dried figs for the finished ice):_
1kg dried figs
##### _For the ice cream:_
550ml whole milk
140g caster sugar
3 egg yolks
250g ricotta, drained
Ricotta and figs, drizzled with honey, are a match made in Mediterranean breakfast heaven – and one of the best ways to use any really ripe figs you're lucky enough to come across in this country, or indeed on your travels. But at that almost indecent stage of ripeness, when the syrupy juice runs from their thin skins, they don't travel well, so my consumption is largely of the dried sort. Here these are simmered into submission, creating a rich kind of figgy molasses in the process, which makes the perfect pairing with the mild, creamy cheese.
1. To make the molasses, put the figs into a large pan and cover with 2 litres of water. Bring to a simmer, then turn down the heat and simmer gently for 2 hours, keeping an eye on the water situation – it should reduce by about half, but if your figs are particularly parched, you may need to add more to stop them boiling dry.
2. After 2 hours the figs should be very soft. Place a sieve over a large bowl and drain, reserving the cooking liquid. Press as much liquid through as possible (alternatively you can use a piece of cheesecloth suspended over a bowl and squeeze them dry when they're cool enough).
3. Once you're content there's no more moisture left in the fruit, set the figs aside and pour the liquid into a pan. Bring to the boil, then turn down the heat slightly and reduce until syrupy but still liquid, the consistency of warm honey.
4. Meanwhile, make the custard base. Put the milk and half the sugar into a medium saucepan and bring to a simmer, stirring to dissolve the sugar. Whisk together the remaining sugar with the yolks in a heatproof bowl.
5. Pour the simmering milk on to the yolks, whisking all the time, then pour back into the pan and heat very gently, stirring with a wooden spoon, until it has thickened enough to thinly coat the back of the spoon (a line drawn with your finger should hold its shape).
6. Take off the heat and beat in the ricotta until smooth (I find a stick blender useful here) along with a pinch of salt. Allow to cool, chill for at least 4 hours if you have time, then churn in an ice cream maker until thick but not solid (or see the still freezing method here).
7. Grease a small loaf tin roughly 16 x 9cm and line with clingfilm. Spoon a quarter of the ice cream into the base, then drizzle a layer of molasses over the top. Add another quarter of the ice cream, then stud a line of figs down the centre, remembering to snip off the hard little stalks if necessary. Add another quarter, drizzle with molasses, then add the rest and smooth the top. Drizzle with molasses and swirl with a skewer or toothpick.
8. Freeze for an hour to set the top, then wrap the clingfilm over the top and freeze for at least another 2 hours until solid. Turn out and remove the clingfilm to serve.
Not an entirely fair term, I've always thought, suggesting as it does food that's no better than rubbish – but one with a certain undeniable allure. Bombarded as we are by healthy eating messages, by the unwelcome certainty that we'd be doing ourselves a favour by opting for fruit salad instead of an ice cream, it's all the more wonderful sometimes to throw wisdom to the wind and choose the wrong thing.
Some of my favourite vices contain so many ingredients not found in nature that it would be pointless to try and recreate them at home – the kind of cheap, aggressively cheesy corn snacks that coat your fingers with orange powder, for example, or the frozen potato waffle (best appreciated topped with baked beans and cheese, should you be lost for a serving suggestion).
I'm not ashamed to admit I prefer Bird's custard to the egg yolk sort, and would always choose tinned tomato soup over the lumpy, oven-roasted, heirloom variety strewn with fresh basil, for all its Italian virtues.
Some of these tastes can be attributed to nostalgia, no doubt, though very few of these things were part of my diet growing up – the potato waffle, for example, retains the attraction of forbidden fruit, consumed only at other people's houses, where the distinctions between 'proper food' and rubbish were less closely observed. But largely I like them because they appeal to the basest of human tastes, that primitive part of us that craves salt and fat and sugar, an instant addictive calorie hit to keep us warm in the cold of the cave, and something interesting to break the monotony of chewy roots and stringy meat. I am by no means suggesting you should incorporate any of these recipes into your daily diet. They're for special occasions only. And nothing says special like Angel Delight, right?
See also: Salted almond toffee (here), Banoffee split (here), Pecan, bourbon and salted caramel cookies (here), Salted peanut caramel crispy cakes (here), Walnut caramel cream pie (here), Bacon refried beans (here), Coconut ice magic (here), Duck fat garlic bread (here), Sweet sriracha cakes (here), Malted milk creams (here), Triple chocolate malt cake (here), Black and white shake (here), Coconut squid (here), Maryland-style octopus sandwich (here), Aloo tikki Scotch eggs (here), Caribbean milk punch jelly (here), Crunchy soy-braised pig's tails (here), Marzipan violets (here), Wild garlic bread (here), Georgian cheesebread (khachapuri) (here), Pissaladière (here), Marmite and cheese mini doughnuts (here), Chocolate orange cheesecake (here).
## Sweet paprika cheesy chips
##### serves 4
700g sweet potato (roughly 2 medium ones)
Sunflower or other neutral oil, to grease
2 tablespoons cornflour
2 teaspoons smoked paprika
50g Parmesan or other hard cheese, finely grated
If I learnt anything of lasting value at university, it was the beauty of chips and cheese. But these are a cut above those served at my favourite kebab van (sorry, Hassan) – sweet potato makes excellent fries, dense and fudgy, with crisp edges, and the perfect foil for salty savoury Parmesan and smoky paprika. Don't be tempted to skip the soaking process, or you'll end up with soggy fries.
1. Peel the sweet potatoes and cut into chips of your desired width. Put into a large bowl as you cut them and cover with plenty of cold water, then leave to soak for at least 30 minutes.
2. Heat the oven to 240°C/fan 220°C/gas 9. Once it has come to temperature, put two baking trays, well greased with oil, in there to heat. Meanwhile, drain the chips and dry thoroughly with a tea towel or kitchen roll. Dry the bowl too.
3. Put the dry chips back into the dry bowl and toss with the cornflour, paprika and a generous shake of salt until well coated. Divide between the trays, spreading them well out and tossing them as you add them, to coat with oil.
4. Bake for about 20–25 minutes, until crisp and beginning to blacken – keep an eye on them, as the exact time depends on both your oven and the thickness of your chips.
5. When they look almost ready, whip them out of the oven and transfer to an ovenproof serving dish. Sprinkle over the cheese and put back into the oven for 3–5 minutes, until melted, then serve immediately, while they're still finger-burningly hot.
## Buttermilk onion rings
##### serves 4 (as a side)
1 large onion
280ml buttermilk
100ml milk
About 1.5 litres sunflower, vegetable or groundnut oil, to cook
80g flour
20g cornmeal (or use 100g flour)
1 teaspoon black onion seeds
1 teaspoon smoked paprika
½ teaspoon salt
Who doesn't love onion rings? Even when they're bad they're good, which makes these ones out of this world. The buttermilk tames some of the fire of the onion – ordinary milk will give much the same result, though with less tang – while the cornmeal brings crunch to the trashy party in your mouth.
1. Slice the onion into thickish rings (½–1cm) and separate them. Put them into a bowl with the buttermilk and milk and leave to soak for at least 30 minutes.
2. Heat a deep pan a third full of oil on a medium-high heat. While you're waiting for it to come to the right temperature (180°C, when a breadcrumb dropped in should sizzle), mix together the flour, cornmeal, onion seeds, paprika and salt in a wide bowl and put the oven on to warm. Remove the onions from the buttermilk and shake off any excess, then drop into the bowl in batches and toss to coat.
3. Once the oil has come to sizzling temperature, drop in a handful of onion rings (don't overcrowd the pan) and stir once, then cook until golden. Scoop out with a slotted spoon, season and put into the oven to keep warm while you repeat the process.
## Vietnamese crispy pork and prawn pancakes (bánh xèo)
##### serves 4–6
Coconut or vegetable oil, to cook
300g cooked pork, diced (I use 3 thin shoulder steaks, thinly sliced and poached, but any leftovers will be fine, especially pork belly)
150g cooked prawns
8 spring onions, finely sliced
120g beansprouts
2 little gem lettuces, separated into leaves
A small bunch of coriander and mint, stalks trimmed
Hot sauce, to serve
##### _For the batter:_
60g moong dal
120ml coconut milk
225g rice flour
A generous pinch of turmeric
1 teaspoon fine salt
These were my absolute favourite breakfast discovery in Vietnam; with that addictive crunch that only comes from hot fat (the 'xèo' imitates the sizzle the batter makes as it hits the pan), they seemed like the Vietnamese equivalent of our own fried egg sandwich, only served with rather more in the way of fresh herbs on the side.
1. Soak the dal in hot water for 30 minutes, then drain and put into a large bowl with the coconut milk. Whiz to a paste with a stick blender, then stir in the flour, turmeric, salt and 570ml of water. Whisk well to combine, then leave to stand for at least 30 minutes (though you can keep it overnight if you like).
2. Grease a non-stick frying pan well and put it over a medium heat. Leave it to get nice and hot, so a drop of batter sizzles as it hits the surface. Arrange the pork, prawns, onions and beansprouts near the stove.
3. Once the pan is good and hot, whisk the batter to bring it back together, then pour a ladleful into the pan, quickly swirling it to spread it out; it should be very thin, and, if the pan is hot enough, full of little holes. Pour a little more round the sides, swirling again. (This may take a few practice goes to get right.)
4. Drop some pork, prawns, spring onion and beansprouts over one side of the pancake and cook until the edges start to curl and come away from the sides of the pan, then very gently check the base. Once it's golden, and the top of the pancake is dry and cooked through, fold in half and slide on to a plate.
5. Serve with plenty of lettuce, herbs and a good squiggle of hot sauce.
## Texan queso dip
##### makes as much as 4 of you should probably eat at one time, but easily doubled for a party
175g grated mature Cheddar
85g grated Gouda (the young, rubbery sort) or Monterey Jack
2 tablespoons cornflour
60ml whole milk
½ a white onion, finely minced (yellow onion will do at a pinch, but make sure it's very finely chopped)
1 tablespoon pickled jalapeño rings, chopped, plus 2 tablespoons of their pickling juice
That gloopy yolk-yellow nacho dip served in cinemas promises so much; after all, isn't all cheese just processed milk, so what's not to love about a molten bowl of the stuff? The answer, sadly, is the taste: cloying, artificial and actually just plain nasty.
This is closer to the original version, ubiquitous in the great state of Texas, though without that suspicious 'cheese product' orange colouring. Monterey Jack would be used stateside, but a young Gouda is easier to get hold of in my neck of the woods, and melts just as well. Eat with loads of nachos. Obviously.
1. Toss the grated cheeses with the cornflour, and put into a medium pan over a low heat. Add the milk and allow the cheeses to melt, stirring regularly, until smooth.
2. Mix in the onion, chilli and pickle juice and serve immediately if possible; you can keep it warm over a low heat, or in a bain-marie (a heatproof bowl set over a pan of simmering water) if necessary for about half an hour, but it will start to solidify, so keep stirring.
## Homemade butterscotch 'Angel Delight'
##### serves 6–8
75g butter
100g soft light brown sugar
¼ teaspoon salt
600ml whipping cream
2 egg whites
2 tablespoons caster sugar
Angel Delight is one of my basest pleasures – a dollop of this unpromisingly beige wobbly stuff takes me straight back to school days, only this time around, I can eat the whole packet on my own (they claim they serve two, but I've never found this to be the case). It is a guilty pleasure though, whereas this version, heavier on the cream and lighter on the old tetrasodium diphosphate, is just pure unadulterated joy, especially with some stewed apple.
1. Put the butter, brown sugar, salt and 50ml of the cream into a small pan over a medium heat, stirring until the sugar has dissolved and you have a smooth sauce. Allow to cool to warm room temperature, stirring occasionally to keep it liquid.
2. Whisk the egg whites to soft peaks, then whisk in the caster sugar. In a new, larger bowl, whisk the remaining cream to soft peaks, then fold in the caramel sauce.
3. Fold a spoonful of egg white into the cream mixture to loosen it, then gently fold the remainder in until well combined.
4. Spoon into serving dishes and chill until required.
## Marathon pie
##### serves 8–10
##### _For the base (makes about 28 digestive-sized biscuits):_
195g butter, at room temperature, plus a little extra to grease
200g granulated sugar
110g cocoa powder, sifted
1 egg, beaten
200g plain flour
##### _For the caramel layer:_
100g salted roasted peanuts, roughly chopped
165g white granulated sugar (golden stuff is fine, but will make life harder)
50g butter, at room temperature, diced
110ml double cream, at room temperature
½ teaspoon salt flakes
##### _For the nougat mousse:_
5 egg yolks
150g honey
3 gelatine leaves
375ml whipping cream
##### _To top:_
5 tablespoons white sugar
1 Snickers bar, chilled
I'll admit, this is some serious self-indulgence – when I put the picture online, someone asked if the name came from the fact you have to run a marathon to justify it; I suspect an ultra event might be the bare minimum. That said, I've done neither, and I'm still alive, and the combination of a crunchy, cocoa-rich base, salty peanut caramel and light honey mousse is surely worth the sacrifice of a few hours of life at the other end.
The different stages make this a little bit of a project (you could use ready-made bitter chocolate biscuits for the base if you want to speed up the process – something like Oreos would do, though because of the filling you won't need as much butter to stick it together), but there's nothing particularly complicated here, and most of it is chilling time. (Which you could always use for that thirty-mile run, of course.)
1. Start by making the biscuit base. Cream together 165g of the butter with the sugar until fluffy, then beat in the cocoa and a pinch of salt. Scrape down the sides of the bowl, then beat in the egg, followed by the flour. Once it comes together into a dough, form into a ball and flatten. Wrap well and chill for at least an hour, until firm.
2. Heat the oven to 200°C/fan 180°C/gas 6. Roll out the dough to about 3mm thick and cut out circles about 7cm in diameter. Arrange on a lined baking tray and bake for 9 minutes. Allow to cool.
3. Melt the remaining 30g of butter. Blitz 350g of the biscuits (about 12 digestive-sized ones) to coarse crumbs, add the melted butter, whiz until finely ground, then press the crumbs into a roughly 22cm loose-based pie dish, making sure they reach up the sides. (If you don't have enough, whiz a couple more biscuits with a little more melted butter.) Chill in the fridge for at least 30 minutes before making the caramel.
4. Scatter the peanuts evenly over the crust. To make the caramel, put all the ingredients close to the hob, including the dish. Pour the sugar into a large heavy-based pan over a medium-high heat and allow to melt. Once it's done so, leave it until it turns amber, then whisk in the butter and continue whisking until this has melted. Take off the heat and whisk in the cream (careful, it will bubble up), followed by the salt, pour into the dish and allow to cool, then chill until set.
5. To make the nougat mousse, whisk the egg yolks in a heatproof bowl until thickened and pale. Put the honey into a pan with 75ml of water and bring to a simmer. Cook until it reaches 115°C. Meanwhile, soak the gelatine in cold water, then squeeze out. Whip the cream to soft peaks, keeping an eye on the syrup all the time.
6. Working quickly, whisk all but a couple of tablespoons of hot syrup into the egg yolks. Dissolve the gelatine in the remaining syrup, then whisk this into the yolk mixture too. Continue to whisk until cool, then fold in the cream, spoon on to the caramel, and allow to set in the fridge, which will take at least 3 hours.
7. Put the remaining sugar into a light-coloured pan with a splash of water over a medium heat until toffee-coloured, then use to decorate the top of the cake, along with thin slivers of Snickers.
Whoever does the PR for kale, I want in. This is a tough vegetable to warm to, quite literally – chewy and woody, with a distinctive bitter flavour, it's an unlikely candidate for hipness, but in the last five years it's somehow gone from cattle fodder to the culinary catwalk.
Take this quote from the _New York Times_ , which I had to read twice to be sure it wasn't an April Fool: '"For some reason when you go to a restaurant and they have a kale salad on the menu, you automatically accept that it's a cool spot," said Chelsea Leyland, a D.J. and downtown fixture. "It's like playing the right music of the moment. It gives it that stamp of coolness."'
Yet so unpopular was kale as human food until relatively recently that Jane Grigson's excellent _Vegetable Book_ , first published in 1978, devotes a whole chapter to wild seakale, but makes no mention of the cultivated sort, while Nigel Slater's _Tender_ , published just over thirty years later, gives it a twelve-page hagiography. In a single generation, kale has been reborn. And hurrah for that – for all that I've said, like coffee or Campari, once you've developed a taste for the stuff, it's hard to remember what your problem was in the first place.
As hardy as its texture suggests, it thrives in climes other species find challenging (it was common in wartime kitchen gardens), so it's perhaps no surprise that our modern word comes from the Scottish name; in English, kale was known as cole (see also coleslaw, which comes to us from the German, and cole-flower, or cauliflower). The flavour, like that of traditional Brussels sprouts, actually improves after a frost, which is why it was so very prized in the depths of winter, when vitamins and variety were thin on the frozen ground.
As the selection in this chapter suggests, I like greens that, like kale, demand a bit of effort. Give me adult spinach over the boring baby stuff any day; crinkly Savoy cabbage over the smooth white variety – if we're going to eat leaves, let's at least go for those with some personality.
So often relegated to the role of a warm side salad; obligatory for a balanced meal, but not deemed worthy of anything more than the most basic preparation (steamed then dropped damply on to the plate), at the risk of coming over all sincere, they're capable of so much more if we let them shine. Greens can easily be the centrepiece of a dish, like the chard gratin and spinach tart in this chapter, but they're so quick to cook that they can also be stirred into almost any soup, stew or noodle dish at the last minute.
In short, eat your greens. They're good for the soul.
### Health benefits
I tend to treat anything labelled as a superfood with a certain amount of suspicion, but leafy greens like kale, spinach and chard at least have some claim to the title; they're a great source of vitamins K, A and C, and contain good amounts of folate, calcium and other minerals too.
### Kale
Unsurprisingly, just like the latest denim, trendy kale comes in a range of styles. The most common in this country is curly kale, the green variety of which is (rather annoyingly to my mind) often sold ready chopped in bags in the supermarket. This looks like a labour-saving boon, but seems generally to mean you purchase a dispiriting assortment of older leaves and thick chunks of browning stalk along with the good bits – much better to go somewhere where they sell it in pretty frilly bunches for you to trim as you see fit. If you want to use it in a salad, make sure the leaves are young and relatively tender, or you'll still be chewing come doomsday.
The other variety you'll often see is cavolo nero, or black kale (sometimes excitingly known, especially in the States, as dinosaur kale). This has broad, spear-shaped purple-green leaves with the same crinkly texture as a Savoy cabbage, and is one of the most beautiful vegetables I know. Despite the Italian name, it works well with all sorts of different flavours, though it does seem particularly suited to chucking into a minestrone with a healthy glug of olive oil. Look for proud, firm leaves rather than limp, slug-nibbled ones. (The less common flat-leaf, or Russian, kale is like a cross between the two – frilly at the edges, flat in the middle.)
Kale is harvested from midsummer onwards, but as mentioned it's at its best in midwinter, after a frost (and when there's not much else going on, perhaps more importantly).
### Spinach
Spinach is another vegetable that's badly served by the supermarkets, who have collectively decided to sell only the tender baby leaves. These are fine in a salad, but are far too delicate for cooking with, and, to my mind, have a less interesting, more strongly iron-tinged flavour of the kind likely to leave you with oddly furry teeth. Mature spinach is still sold at street markets and grocers (and, near me at least, in little Turkish corner shops), and though the washing can be a pain, the flavour makes it well worth the effort, not to mention the reduced cost.
Whatever sort of spinach you use, however, it is remarkable both for the speed at which it collapses down to nothing (you'll need sinkfuls to make a decent portion for more than a couple of people) and, perhaps thankfully in that case, the speed at which it cooks. You barely need to show it the heat, so be very careful not to leave it too long, and if you think you may have done so, rinse it under cold water before any further damage is wreaked.
Homegrown spinach is in season from spring to midsummer.
### Chard
Finally, as far as this chapter is concerned (though most of the recipes below would work equally well with Savoy or green cabbage leaves and spring greens), we come to chard, close cousin of spinach, a member of the beetroot family, and perhaps the blingiest green vegetable of them all.
It takes its name from its wide ridged stems, known as chards, which come in a gorgeous rainbow of colours, from sunshine yellow to vivid pink; even the workaday white-stemmed Swiss chard has a certain handsome quality. Only perhaps the candy-striped beetroot can rival it for sheer Liberace-style exuberance.
Chard has an earthy, slightly minerally sweetness, but the width of those stems, and the delicate, spinach-like quality of the leaves, means that all but the youngest examples should be divided between stems and leaves, and the two cooked separately if you're to avoid boiling the latter to death while waiting for the former to soften (though even the toughest stalks are unlikely to take more than three minutes). If this sounds like too much work, look for the smallest leaves, which, at the farmers' market, can generally be found lurking at the bottom of the box. Chard is in season from June to November.
All these vegetables are best stored in the salad drawer in the fridge, and consumed as soon as possible after purchase, though kale in particular does keep fairly well.
### What they go with
A simple dressing of olive oil, lemon juice and salt suits all three just fine, but they also have a great affinity with umami flavours like bacon, anchovies and Parmesan, and creamy sauces too. I find orange and nuts surprisingly good pairings and they can also take a certain amount of chilli heat, especially if you're generous with the garlic.
See also: Blue cheese creamed spinach (here), Black kale salad with anchovy dressing (here), Potato, black kale and anchovy pie (here).
## Spinach soup with spiced anchovy butter toasts
##### serves 4
2 tablespoons butter
2 shallots, roughly chopped
A whole nutmeg, to grate
600g spinach
1 litre chicken stock
2 tablespoons double cream (optional)
1 baguette
##### _For the anchovy butter:_
125g unsalted butter, softened
½ teaspoon cayenne pepper
¼ teaspoon finely ground black pepper
¼ teaspoon finely ground nutmeg
¼ teaspoon finely ground mace
¼ teaspoon finely ground cinnamon
¼ teaspoon ground ginger
50g anchovy fillets in olive oil, drained and roughly chopped
2 teaspoons lemon juice
The almost nutty sweetness of spinach works particularly well with this emphatically savoury anchovy relish.
1. Start by making the anchovy butter. Melt about a quarter of the butter in a small pan over a medium heat, then add the spices. Cook for a minute or so, stirring, then add the anchovies and cook for another couple of minutes, mashing them up with a spatula or wooden spoon as they begin to soften. Take off the heat and allow to cool to warm.
2. Put the anchovy mixture into a pestle and mortar and mash until fairly smooth, then stir in the lemon juice and gradually work in the remaining butter. Taste and add more lemon juice or cayenne pepper if you think it needs it, then shape into a sausage, wrap in clingfilm and chill until sliceable.
3. To make the soup, melt the butter in a large pan over a medium-low heat and soften the shallots with a pinch of salt and a good grating of nutmeg. Meanwhile, wash the spinach well.
4. When the shallots are soft and golden, add the washed spinach to the pan with a pinch of salt, turn up the heat slightly and cover. Cook until wilted, shaking the pan occasionally to make sure it cooks evenly.
5. Add the stock to the pan, bring to a simmer, then take off the heat and allow to cool slightly. Purée, then stir in the cream, if using, and taste; you can add a little more if you like, but bear in mind that the more you add, the less vibrant the colour. Season.
6. Cut the baguette into thin rounds and toast under the grill until golden and crisp. Serve the soup with a couple of baguette croutons per bowl, topped with a disc of anchovy butter.
## Spicy cashew kale crisps
##### makes 1 large bowl
200g cavolo nero
60g cashew butter
1 teaspoon nam pla (fish sauce)
1 teaspoon kecap manis (see here)
2 teaspoons soft light brown sugar
2 tablespoons crispy fried shallots (optional, see intro), crumbled
1 teaspoon togarashi or other chilli powder, or to taste
Let's get one thing straight: kale crisps are not the same as potato crisps, whatever those health bloggers might claim. Just as addictive, sure, but not the same – thinner and more friable, they shatter in your mouth like savoury honeycomb, flooding it with a rich green, earthy and very savoury flavour.
The trick here is the low, slow cooking – kale crisps baked hot and fast tend to dissolve into scorched dust – and the use of robust black kale, which stands up better to cooking than the more delicate curly variety, though you can substitute that if you prefer; just keep an eye on it during cooking.
_NB: I use the crispy dry shallots sold in Asian supermarkets as a topping, but they're entirely optional._
1. Wash the kale and dry very well – I like to do this an hour or so ahead and spread it out on paper towels to dry. (Water is the enemy of crispness.) Roughly chop into large pieces, cutting out the central stems if the leaves are very large, or you don't like chewing, and bearing in mind it will shrink significantly during cooking. Grease two large baking sheets.
2. Heat the oven to 120°C/fan 100°C/gas ½. Whisk the cashew butter with 1 tablespoon of warmish water to loosen, then whisk in the fish sauce, kecap manis and sugar.
3. Put the kale into a large bowl and add the dressing. Massage it into the leaves, then spread them out over the baking sheets in a single layer and dust with the crumbled shallots, if using, and chilli powder.
4. Bake for about 1 hour 45 minutes, until the leaves are crisp and dry, turning the sheets round every half hour or so, then run a thin fish slice under them to detach them from the sheets and leave to cool before transferring to an airtight container, or better still eating immediately while they're still lovely and crisp.
## Fava e cavolo nero
##### serves 4
500g dried, split broad beans (see intro)
700ml chicken or vegetable stock
2 tablespoons extra virgin olive oil
##### _For the kale:_
1kg cavolo nero
4 tablespoons olive oil
8 garlic cloves, thinly sliced
1 teaspoon chilli flakes
Dried fava (or broad) beans are a starchy staple in Puglia, in Italy's heel, where I first came across them, though I was later to find they're a key ingredient in falafel as well (which means they can be found in Middle Eastern grocers). I remember eating a vast plate of this creamy, nutty, slightly bitter purée, as comforting in its own way as a bowl of mash, with a pile of wilted chicory, slick with olive oil, in a tiny stuffy restaurant in the ancient city of Lecce one baking hot lunchtime. As a fan of all things starchy, I was an instant convert, though I must say I think it works better on a cold British afternoon.
Unless you grow it yourself, it's well nigh impossible to buy Italian dandelion, or cutting chicory, here, but kale makes a decently bitter substitute, and you could also use spinach, chard, or indeed any kind of greens. Be aware that the purée sets quite solid when cool, so if you want to reheat it, add a little more liquid.
1. Soak the beans in cold water for at least 6 hours, or overnight. Drain, rinse, and put into a pan with the stock. Bring to the boil, then skim, turn down the heat to medium, and simmer for about an hour until they begin to dissolve into a mush.
2. Meanwhile, blanch the kale for a minute in a large pan of boiling salted water, then drain and roughly chop.
3. Heat the 4 tablespoons of oil in a frying pan over a medium heat, then add the garlic. Fry until golden, then scoop out and set aside. Replace with the chopped kale and fry, stirring occasionally, until dark and beginning to crisp around the edges. Return the garlic to the pan along with the chilli and season to taste.
4. Mash the beans into a sloppy purée, or use a stick blender if you'd prefer a smoother texture (careful, they'll spit like angry snakes). Season to taste and stir in 1 tablespoon of the extra virgin olive oil. Divide between shallow bowls and plonk the kale on top. Drizzle with the remaining extra virgin olive oil and serve.
## Spinach, ricotta and feta tart with hard-boiled eggs
##### serves 6–8
3 eggs
1.3kg mature spinach, trimmed and well washed, or 900g frozen whole leaf spinach, defrosted
2 tablespoons olive oil, plus extra to glaze
1 large red onion, finely sliced
6 garlic cloves, finely sliced
A whole nutmeg, to grate
250g ricotta
100g feta, crumbled
Zest of 1 lemon
A dash of olive oil
3 tablespoons pine nuts
##### _For the polenta pastry:_
190g cornmeal
190g plain flour
½ teaspoon salt
120ml olive oil
A pleasingly substantial, and incidentally vegetarian main course which plays merry havoc with the flavours of southern Europe – it's a little bit Greek, a little bit Italian, with a ridiculously easy pastry that's rich with olive oil and crunchy with polenta, and some sunny eggs on top for colour.
You really need mature spinach for this one – see the introduction here for advice on sourcing. The whole leaf frozen sort is fine.
1. Put the cornmeal and flour into a large mixing bowl and whisk together with the salt. Whisk the olive oil with 120ml of cold water, then make a hollow in the middle of the flour, pour in the liquid and stir to make a soft dough that comes cleanly away from the sides of the bowl (if it doesn't, add a tiny bit more flour). Wrap well and put into the fridge while you prepare the filling.
2. Put the eggs into a pan of cold water, cover, bring to the boil, then simmer for 5 minutes. Run under cold water to cool, then set aside.
3. If using fresh spinach, bring a very large pan of salted water to the boil and blanch the spinach for a minute until wilted, working in batches for ease. Drain in a colander and, when cool enough to handle, squeeze very well until no more water comes out; you'll be amazed at how much is in there. If using defrosted frozen spinach, skip straight to the squeezing stage.
4. Heat the oil in a large frying pan over a medium-low heat and cook the onion until pink and soft. Stir in the garlic and cook for another couple of minutes, then add the spinach. Turn the heat up slightly and cook until dry. Grate in a generous amount of nutmeg and season well. Heat the oven to 220°C/fan 200°C/gas 7.
5. Grease a 26cm tart tin with oil. Roll out the pastry on a generously floured surface – it will be soft, but elastic – then use to line the tin. Line with foil or baking paper and baking beans and bake for 20 minutes.
6. Meanwhile, mix the ricotta with the feta, lemon zest and a dash of olive oil. Season to taste.
7. Remove the beans and paper and bake the tart for a further 7 minutes, then spread the base with the ricotta mixture, followed by the spinach. Bake for 15 minutes, then sprinkle the top with pine nuts and put back into the oven for another 5–10 minutes, until the nuts are golden. Meanwhile, peel the eggs and slice in half.
8. When the tart is ready, poke little hollows in the spinach and arrange the eggs in them. Serve hot or cold.
## Homemade orecchiette with sausage and kale
##### serves 4
##### _For the orecchiette:_
155g '00' pasta flour
300g semolina flour (available from Italian delis or online)
1 teaspoon salt
About 255ml warm water
##### _For the topping:_
500g cavolo nero
2 tablespoons olive oil, plus extra to grease
4 meaty Italian pork sausages, preferably with chilli or fennel seeds
4 garlic cloves, thinly sliced
120ml white wine
Zest of 1 lemon
Much as I love eating homemade pasta, it cannot be denied that the making part can be a bit of a faff – all that rolling and cutting and hanging of floury noodles over chairbacks means it's definitely a weekend project rather than a run-of-the-mill kitchen task as far as I'm concerned.
Orecchiette, or little ears, are an honourable exception – no rolling required, just a satisfying squidging of pasta into tiny mouse-sized hats, they may not be the ideal quick after-work dinner, but they are a nice way to spend an hour or so on a more leisurely evening, and a task that everyone can pitch in with.
I remember seeing black-clad women sitting on kitchen chairs on the pavement outside their houses in Puglia, gossiping as they shaped their orecchiette; throw in a glass of wine, and that's an ideal scenario here.
1. Put the flours into a large bowl with the salt and whisk. Make a well in the middle and pour in most of the water, then mix together. Add just enough water to bring the mixture into a coherent dough.
2. Lightly oil a work surface and turn the dough out. Knead for about 8–10 minutes until smooth and elastic, then wrap in clingfilm and leave at room temperature for about an hour before shaping.
3. To shape the orecchiette, put the dough under a damp cloth and prepare a couple of lightly floured trays.
4. Pinch off a piece roughly the size of a shelled hazelnut, then use your thumb to squash and drag it towards you, flattening it in the process, to form a thin disc with a slightly thicker rim. Shape this over the top of your thumb to make a little hat. Put on the lightly floured tray and repeat. You can leave them at this point to dry for a couple of hours, or cook immediately.
5. Wash and roughly shred the cavolo nero, then steam for a couple of minutes until just wilted. Prepare a large pot of boiling well-salted water for the pasta (you can use the steaming water as a start).
6. Heat the oil in a frying pan over a medium-high heat and slit the sausages down the middle, scooping the meat from the inside into the pan. Fry, breaking it up with a spatula, until beginning to brown and crisp. Meanwhile, add the pasta to the boiling water, stirring vigorously as you do so to stop it sticking together. Cook for 5 minutes, then begin checking it at regular intervals until it's done to your liking (the exact time will depend on how thick your orecchiette are, and how chewy you like them).
7. Add the garlic to the pan with the sausage and fry for a couple of minutes, stirring, then add the cavolo nero and stir to coat. Turn up the heat, then pour in the wine and stir to deglaze the pan.
8. Once the pasta is done, drain well and add to the pan. Toss together, season with salt, pepper and lemon zest, and serve immediately.
## Chard gratin with a Gruyère crumb
##### serves 4
275ml double cream
1 fat clove of garlic
A whole nutmeg, to grate
200g chard
Butter, to grease
50g hazelnuts
A little oil
20g breadcrumbs
50g Gruyère, grated
Luxuriously, creamily rich, with a crunchy, nutty crumb, this is a killer side dish for something plain – a roast chicken, perhaps, or even just a dollop of mash or polenta.
1. Pour the cream into a small pan, crush in the garlic and add a good grating of nutmeg. Bring to a bare simmer, then turn off the heat and leave to infuse.
2. Heat the oven to 180°C/fan 160°C/gas 4. Bring a large pan of salted water to the boil. Separate the chard leaves and stalks. Add the stalks to the pan and blanch for 2–4 minutes, depending on thickness, then add the leaves and blanch for a further minute. Drain and rinse under cold water, then squeeze dry.
3. Butter a small oven dish. Stir the chard into the cream, season and spoon into the dish. Bake for 30 minutes.
4. Meanwhile, toast the hazelnuts in a dry frying pan until fragrant, then tip out and set aside to cool. Put a little oil into the pan, allow to heat up, then toast the breadcrumbs until pale golden and crisp. Roughly grind the hazelnuts in a food processor (or finely chop), add the cheese and pulse briefly. Stir this into the breadcrumbs, then tip on to the gratin and bake for 15 minutes more, until golden.
Having gone into rhapsodies over cooked leaves in the previous chapter, here I'd like to sing the praises of the raw variety, in the form of salad. That once sad side dish has become a seriously slick operation, bursting with fresh zesty herbs, crunchy stems and peppery stalks; all micro greens and maximum flavour – and the epitome of culinary cool.
Since the health food revolution, you can serve a salad as the main event at a lunch or dinner party and no one will bat an eyelid. But, though they're always healthy, salads don't have to be dull diet fodder; fresh green leaves provide the ideal foil for rich dressings like the zesty lemon butter here or the savoury bacon vinaigrette here, and, of course, they can make quite a substantial meal – just think of the classic salade Niçoise (or my version here).
Even the most jaded of palates will perk up at a simple, well-dressed plate of judiciously chosen leaves, and it's one of the things the French still do so well. A modest steak frites, nothing to write home about, at a service station cafeteria will come flanked by a bowl of greenery tossed with just the right amount of piquant mustardy vinaigrette – and there's no better accompaniment. Making good salad isn't difficult, it just takes a little care.
### How to build a salad
### 1. Choose your base
### 2. Match this to a dressing
### 3. Balance your toppings
### On washing
Salad leaves make great hiding places for small creatures, so they do need washing unless the packaging states otherwise. The easiest way to do this is to fill a sink, or large bowl, with cold water and submerge the leaves, swishing them gently about so the dirt falls to the bottom, then scooping them out.
Bear in mind, however, that leaves are very delicate and need careful handling. They should be dried thoroughly before dressing, or you'll end up with a soggy heap of mulch; if you have the space for such a gadget, a salad spinner is a good tool, but if not, some tender patting with kitchen towel will do the trick.
## Nice salad
##### serves 4
8 quail's eggs
8 small new potatoes, scrubbed
8 asparagus spears
2 fillets of hot-smoked trout
1 small ridged cucumber (or ½ a larger cucumber)
100g watercress
100g pea shoots
A small bunch of mint, leaves picked
##### _For the dressing:_
1 egg
1 egg yolk
¼ teaspoon honey
½ teaspoon mustard powder
1 teaspoon water
2 tablespoons cider vinegar
150ml single cream
A small bunch of chives, finely chopped
My take on the classic salade Niçoise, using the best ingredients the British summer can offer. Packed with asparagus, pea shoots, watercress and cucumber, and topped with creamy new potatoes and blushing pink flakes of fish, the flavours are subtler and more delicate than the original, but just as delicious. The dressing is based on Eliza Acton's 1845 recipe for English salad sauce (as opposed to 'French salad dressing') – and has very little in common with salad cream, I promise.
1. Put the whole egg for the dressing into a small pan, barely cover with cold water, cover the pan and bring to the boil. Uncover, turn down the heat, cook for 7 minutes, then scoop out with a slotted spoon and run under cold water. Add the quail's eggs to the pan and cook for 2 minutes, then drain and run under cold water.
2. When the eggs are cool enough to handle, peel them all and cut in half. Set the quail's eggs aside. Scoop out the yolk of the hen's egg, crumble it into a small bowl with the raw yolk, honey, mustard powder and water, and whisk together. Whisk in the vinegar, then the cream, then add the chives and season to taste. (The egg white can be finely chopped and added to the salad if you like, or fed to the dog.)
3. Bring a medium pan of well-salted water to the boil. Add the potatoes and cook until tender (how long will depend on size), then scoop out with a slotted spoon. Prepare a large bowl or sink of iced water, then add the asparagus to the pan, leaving the tips sticking out of the water, cover and cook for about 3–4 minutes, until just tender. Drain and cool in the iced water.
4. Cut the potatoes in half, and chop the asparagus into shortish lengths. Flake the trout and thinly slice the cucumber. Put the watercress and pea shoots into a large salad bowl and toss with enough dressing to lightly coat. Scatter over the potatoes, asparagus, trout and quail's eggs and top with a few mint leaves. Serve immediately.
## Green herb cauliflower 'tabbouleh'
##### serves 2–4
1 smallish cauliflower
3 tablespoons butter
3 tablespoons sultanas
1 tablespoon barberries (or dried sour cherries or cranberries if unavailable)
3 tablespoons pine nuts
4 slim spring onions
20g chives
25g tarragon, leaves picked
25g dill
25g coriander
25g mint, leaves picked
25g flat-leaf parsley
A squeeze of lemon juice
Those ridiculously flavourful leaves we single out as herbs are the star of this dish. Inspired by both the Persian _sabzi_ , or herb 'salad', and Middle Eastern tabbouleh, the bland, creamy sweetness of cauliflower makes it the ideal base for a plethora of zesty green flavours and sweet dried fruits. This is an incredibly moreish addition to a selection of mezze, or a side dish for lamb or chicken, and looks even lovelier scattered with pomegranate seeds.
1. Cut the cauliflower in half and cut out the core. Discard the woody base from the core and roughly chop the rest, then break the head of the cauliflower into florets. Put it all into a food processor and pulse briefly until chopped into couscous-size pieces.
2. Melt 1 tablespoon of butter in a large frying pan over a medium-high heat and fry the cauliflower with a little salt for a couple of minutes until just tender. Scoop into a large salad bowl.
3. Melt another tablespoon of butter in the pan and fry the sultanas and barberries for a minute until plump, then tip into the bowl and toast the pine nuts in the remaining butter. Tip into the bowl.
4. Trim and roughly chop the spring onions, then put into the food processor and whiz until more finely chopped. Add the herbs and whiz again until it's all fairly finely chopped, then tip in with the cauliflower and toss everything together with a squeeze of lemon juice, and salt and pepper to taste.
## Three pea salad with lemon butter dressing
##### serves 4
200g mangetout
160g shelled peas (frozen are fine)
120g pea shoots
##### _For the butter dressing:_
4 tablespoons butter
3 tablespoons lemon juice
Three peas are better than one, and this fresh, sweet salad of sharply dressed, delicate little pea shoots is the proof – as well as an elegant starter, it makes a lovely accompaniment to fish or chicken.
1. Melt the butter in a small, preferably light-coloured pan over a medium-low heat. Skim the froth from the surface, then carefully pour off the clear yellow liquid beneath, leaving the milky solids in the pan. Allow the liquid to cool slightly, then whisk with the lemon juice and season generously with salt and black pepper.
2. Heat a large pan of well-salted water. Prepare a large bowl or sink of iced water. Blanch the mangetout for 90 seconds, then scoop out with a slotted spoon and put into the iced water. Blanch the peas for about a minute, depending on size, until tender but not mushy, then drain and add to the mangetout.
3. Put the pea shoots into a large salad bowl and add the drained mangetout and peas. Toss together with just enough dressing to coat, and serve immediately.
## Black kale salad with anchovy dressing
##### serves 4
350g young cavolo nero, well washed
150ml olive oil
Juice of ½ a lemon
1 small garlic clove
4 anchovy fillets in oil
1 egg yolk
A handful of finely grated Parmesan
This recipe was inspired by a vast and addictively savoury salad I enjoyed in the bar of the Soho Grand Hotel, New York, washed down by a couple of equally generous martinis. They know how to make both a great salad and a great cocktail over there: healthy, but by no means health food (the salad, I mean. The martini is obviously both).
1. Rip or cut the central stem from the kale, discard, and tear the leaves into shreds. Massage vigorously with your fingers for a couple of minutes with a drizzle of oil, a squeeze of lemon and some salt, until softened.
2. Mash the garlic and anchovies together in a pestle and mortar, then pound in the egg yolk until well combined.
3. Transfer to a larger bowl (unless your pestle and mortar is vast) and slowly whisk in the olive oil, a little at a time, followed by the lemon juice, until you have a thick salad dressing. Season to taste with salt and black pepper; depending on the saltiness of your anchovies, and your tolerance, you may not need any extra salt, though I usually do.
4. Toss through the kale along with the Parmesan just before serving.
## Chicory with beetroot, goat's cheese and walnuts
##### makes about 18
3 beetroots
50g walnut pieces
2 tablespoons cider vinegar
2 heads of chicory
##### _For the whipped goat's cheese:_
250g soft goat's cheese
2 tablespoons walnut oil
1 teaspoon coarsely cracked black pepper, or to taste
Great canapés, not only do they look rather gorgeous, the hot pink of the beetroot against the cool green of the leaves, but the refreshing crunch of the chicory makes a nice change from starchy crisps or breads. You can use ready-cooked beetroot if you like, but baking your own will give a more intense flavour (they can be done several days ahead, when you've got the oven on for something else, then refrigerated until use).
1. Heat the oven to 220°C/fan 200°C/gas 7. Trim the beetroots, wrap in foil and bake for about 50 minutes to an hour, until tender all the way through. Allow to cool, then peel (the skin should just rub off, though it is a messy business – so wear rubber gloves or wash your hands immediately afterwards) and roughly chop.
2. While the beetroots are cooling, toast the walnuts in a dry pan until fragrant, and set aside.
3. Put the beetroot into a bowl, add the vinegar and purée with a stick blender (or use a food processor). Add salt to taste.
4. Put the goat's cheese into a bowl and whisk to loosen, then whisk in the walnut oil until well incorporated. Add pepper to taste.
5. Separate the leaves of the chicory and top each with a spoon of beetroot, followed by a blob of cheese, and finally a piece of walnut.
## Mustard leaves and little gem with bacon vinaigrette and toasted walnuts
##### serves 2
2 spring onions, finely sliced
8 walnut halves
50g mustard leaves (see intro)
1 little gem lettuce
##### _For the bacon vinaigrette:_
75g smoked pancetta or diced streaky bacon
1 tablespoon red wine vinegar
½ teaspoon Dijon mustard
1 tablespoon vegetable oil
2 tablespoons walnut oil
Spicy mustard leaves are something you'll either have to grow yourself (they're pretty hardy), or seek out at a farmers' market, but I think they're well worth the effort – their distinctive heat adds interest to any salad, and cuts through the richness of the salty bacon and creamy nuts beautifully.
1. Heat a frying pan over a medium heat and add the pancetta. Cook until it has browned and the fat has rendered, then pour off the fat into a small bowl and set the pancetta aside.
2. Put the pan back on the heat and add the spring onions. Cook for a couple of minutes until just softened, then scoop out and add to the pancetta. Turn up the heat slightly and toast the walnuts for a minute or so until fragrant.
3. Whisk the vinegar and mustard into the bacon fat, then whisk in the remaining oils until emulsified and season to taste.
4. Put the mustard leaves into a salad bowl and separate the leaves from the little gem and add them too. Toss with the dressing, then divide between plates and top with the pancetta, spring onions and walnuts.
I'd describe malt as the sweet equivalent of umami – a flavour that brings out the best in others. For all its recent popularity, it still tends to play a supporting role, rather than hogging the limelight, bringing an attractive complexity and depth to everything from milkshakes to beef and ale pie. The taste itself is hard to describe; toasty is probably the first word that springs to mind, with a certain earthy sweetness and a lingering nuttiness.
Malt is a broad church, the base of many great things, from beer to whisky, as well as that wonderfully savoury vinegar that so many heretics like to drown their chips in. It's a key ingredient in bagels and the criminally underrated rich tea biscuit. And, of course, it plays the lead in malt loaf and Horlicks; two of the most comforting things I can possibly imagine in a crisis.
### The science bit
Malting is a process familiar to anyone who has ever been on a tour of a brewery or a whisky distillery, in which cereal grains, usually barley or wheat, are soaked in water to encourage them to sprout. As the seeds germinate, their starches turn into sugars and other digestive enzymes, at which point the grains are heated to halt the germination process and dry them out.
The dried grain, or malt, can then be ground to make malt powder, which may be used to make beer or whisky, or mixed with dried milk, salt and sugar to produce malted milk powder. Malt powder can also be turned into the malt syrup I use in the malt loaf recipe in this chapter.
### Health benefits
Those of a certain age will remember malt syrup being proffered during childhood as something 'good for you', though as it was often mixed with cod liver oil, those memories may not be entirely fond ones.
Nowadays, the high sugar content means malt is unlikely to win on its nutritional value alone (though it does contain protein, vitamins and minerals), but you can't argue with the flavour – look for it in whole and health food shops and chemists, in a large, dark brown jar.
Malted milk powder also has a medicinal history. It was launched in the States by a British pharmacist, James Horlick, and his brother William in the late nineteenth century as a 'granulated food for infants'. Light and easy to transport, it found favour with explorers, and made its way on to expeditions to both the North and South Poles in the early twentieth century.
Taken to all corners of the empire by the British, Horlicks (made from buffalo rather than cow's milk) remains incredibly popular in India to this day, where it's the best-selling packaged drink after bottled water, outselling Pepsi two to one.
Other brands of malted milk powder are available of course: chocolaty Ovaltine, for example (a homely, old-fashioned name that, somewhat to my surprise, turns out to be Swiss) and even more chocolaty Milo, but my loyalty lies with Horlicks, which is the next best thing to a single malt nightcap as far as I'm concerned. Use whichever sort you have to hand in recipes that call for malted milk powder, but avoid the light versions, which won't work as well (and often contain artificial sweeteners).
And once it's installed on your shelf, try adding it to biscuits, cakes and other puddings as takes your fancy – it goes brilliantly with chocolate, coffee and dairy flavours in particular, and adds a certain old-fashioned _je ne sais quoi_ to most sweet things.
## Moules marinières écossaises
##### serves 2
1kg mussels
A knob of butter
2 shallots, finely chopped
1 garlic clove, finely chopped
2 sprigs of thyme, leaves picked
50ml whisky
120ml double cream
A small bunch of parsley, roughly chopped
A dish that started off life as a Burns Night starter, but which stands proudly alone as a milder, richer, fruitier version of the classic wine-based dish. It's also lovely tossed through cooked linguine or spaghetti as a kind of northern take on an Italian favourite usually made with tomatoes.
Note, if you'd prefer to have it as a starter or light lunch, the amount here should serve four. Hot crisp fries are optional, but bread to mop up the sauce is not.
1. Rinse the mussels well under cold water and scrub if necessary, discarding any with broken shells. If any are open, give them a sharp tap – live mussels will slowly close. Any that remain open should be discarded. Pull out the little beards hanging from the shells by tugging them sharply towards the hinge end of the mollusc. You can leave them in cold water for a couple of hours if you like, though most mussels these days tend to be grit free, rendering this step unnecessary.
2. Heat the butter in a large pan over a medium heat and sauté the shallots until soft. Add the garlic and thyme and sauté for a further minute, then add the whisky, turn the heat up and cook for a minute or so before tipping in the drained mussels.
3. Cover and cook until most have opened: 3–5 minutes. Take off the heat and stir in the cream, season well with both salt and black pepper, sprinkle over the parsley and divide between two bowls, discarding any closed mussels and making sure each has a good helping of whisky cream.
## Single malt loaf
##### makes 1 small loaf
9 tablespoons malt extract
2 tablespoons treacle
100ml strong warm tea
50ml whisky
75g dried prunes
75g dried figs
75g soft light brown sugar
150g spelt or wholemeal flour
100g plain flour
3 teaspoons baking powder
½ teaspoon salt
½ teaspoon ground ginger
50g flaked almonds
OK, so you don't actually need to use a single malt whisky for this (although a really salty, peaty one does work surprisingly well with the fruit), but I couldn't resist the name. Any cheap old blend will work, giving this teatime classic a bit of a kick – it keeps very well (indeed, it's even stickier a day or two after baking) and reaches its apotheosis with the addition of cold salty butter.
1. Whisk together the malt extract, treacle, tea and whisky. Finely chop the prunes and figs, add to the bowl and leave to soak for 30 minutes. Heat the oven to 200°C/fan 180°C/gas 6.
2. Stir the sugar into the tea mixture, then whisk together the rest of the ingredients and fold them in too.
3. Grease a 1lb loaf tin (about 20 x 10cm) and spoon in the mixture. Level the top and bake for an hour, turning halfway through so it bakes evenly. Leave to cool in the tin.
## Rye and porter porridge with bacon, leeks and cheese
##### serves 4
300g rolled rye flakes
50g butter
6 rashers of streaky bacon, finely chopped
2 large leeks, finely chopped
700ml porter or other dark beer
500ml chicken stock
2 teaspoons honey
100g Gruyère or similar Alpine cheese, grated
A kind of northern European version of polenta, inspired by the Danish rye bread and beer porridge usually eaten sweet with milk for breakfast. The malty, savoury flavour of beer marries particularly well with nutty cheeses, like Gruyère, though you could use any sweetish, hard variety with good melting properties. Be warned, this isn't a pretty dish, or a light one, but once you taste it, you won't care – it's a great wintery lunch or supper.
1. Toast the rye in a large dry frying pan until it smells nutty, then set aside.
2. Melt a good knob of the butter in a large saucepan over a medium heat and soften the bacon and leeks until they begin to caramelize. Scoop out of the pan and set aside, then add a little of the beer and scrape the bottom of the pan to deglaze. Pour in the remaining beer and the stock, stir in the rye, then bring to a simmer.
3. Turn down the heat and cook, stirring very regularly, until the rye has broken down and absorbed the liquid to produce a thick, porridgy consistency. Stir in the honey and cheese until melted, then add the leeks and bacon, and season to taste.
## Malted milk creams
##### makes about 16
125g plain flour
75g cocoa powder
1 teaspoon bicarbonate of soda
¼ teaspoon baking powder
½ teaspoon salt
200g caster sugar
140g butter, softened and diced
1 egg, beaten
##### _For the filling:_
100g butter, softened
100g icing sugar
25g malted milk powder (e.g. Horlicks or Ovaltine)
A splash of milk
This is a homemade take on the famous American Oreo cookie, but with added malt – because bitter chocolate and sweet milky malt go together like Mickey and Minnie, or Homer and Marge. They are utterly, utterly gorgeous with a large glass of cold milk, for extra wholesome American goodness.
1. Heat the oven to 200°C/fan 180°C/gas 6 and line two baking trays.
2. Sift the dry ingredients into a food processor, then add the butter and egg and pulse until the mixture comes together into a dough.
3. Pinch the mixture into balls about 15g in weight, and flatten them with your hand. Spread out on the trays and bake for 9 minutes, turning the trays round halfway so they bake evenly. Leave to cool for 5 minutes on the trays, then lift on to wire racks to cool completely.
4. Beat together the first three filling ingredients with a pinch of salt and add a splash of milk to loosen to a spreadable consistency, then use to sandwich the cooled biscuits together.
## Triple chocolate malt cake
##### serves 8
50g dark chocolate
250g butter, softened
250g soft light brown sugar
½ teaspoon salt
100g cocoa powder
150g malted milk powder (plain Horlicks does nicely)
100g plain flour
3 teaspoons baking powder
3 eggs, beaten
250ml milk
##### _For the decoration:_
140g butter, softened
50g malted milk powder
200g icing sugar
4 tablespoons milk
10 Oreo biscuits
A handful of Maltesers
This, if I say so myself, is a real stunner of a cake – and remarkably easy to make. If you can't find, or don't want to buy Oreos (or can't risk keeping the rest of the packet in the house), then the biscuit recipe on the previous page makes an excellent homemade substitute.
1. Heat the oven to 200°C/fan 180°C/gas 6 and grease and base-line two 20cm sandwich tins. Melt the chocolate in a heatproof bowl set over, but not touching, a pan of simmering water.
2. Cream together the butter, sugar and salt until fluffy. Meanwhile, sift the cocoa, malted milk powder, flour and baking powder together.
3. With the mixer still running, add the eggs to the butter and sugar mixture, then, once well combined, fold in half the sifted dry ingredients, followed by the melted chocolate, then the rest. Finally, add enough milk to give a soft dropping consistency – i.e. it drops easily from a spoon, but doesn't run off.
4. Divide the mixture between the tins (I weigh them to make sure they're even) and smooth the tops. Bake for 25–30 minutes, until just firm in the middle. Allow to cool for 10 minutes in the tins, then turn out on to a wire rack to cool completely.
5. Meanwhile, make the icing by beating the butter until very soft, then beating in the malted milk powder, sugar and a pinch of salt, followed by a little milk to loosen the mixture.
6. Once the cakes are completely cool, put the less flat or attractive one on a serving plate and spread with a third of the icing, banking it up round the edge a little. Top with the other cake, and spread the remaining icing on top. Crush the biscuits by putting them into a clean plastic bag and whacking repeatedly with a rolling pin, then sprinkle these on top. Finish with the Maltesers, lightly crushed.
## Black and white shake
##### serves 2, very generously (plus extra syrup)
500g plain or vanilla ice cream, slightly softened
250ml cold milk
2 tablespoons malted milk powder (e.g. Horlicks)
6 Maltesers, crushed
##### _For the chocolate syrup:_
165g soft light brown sugar
65g cocoa powder
A dash of vanilla extract
An old-fashioned diner classic with a certain wow factor thanks to the contrasting layers of creamy shake and dark syrup. The syrup recipe makes more than you will need for two drinks, but keeps well in the fridge for next time life throws you some lemons. For really bad days, add a splash of bourbon.
1. To make the syrup, whisk together the sugar and cocoa in a small saucepan with 180ml of cold water to make a smooth paste. Bring to the boil, then turn down the heat and simmer for about 5 minutes, until slightly thickened and syrupy with a glossy sheen. Stir in a dash of vanilla extract and salt to taste, then set aside to cool.
2. Put the ice cream and milk into a blender with half the malted milk powder and whiz until well combined, adding a little more milk if you'd prefer it thinner. Taste and add more powder as you see fit.
3. Pour the syrup down the side of a glass, rotating it so it coats the inside, then carefully pour the shake into the middle so it doesn't disturb the syrup. Top with the crushed Maltesers and serve immediately.
Such a pleasingly onomatopoeic word, noodles – instantly conjuring up hundreds of happy, slurpy memories.
In Britain, the term has come to refer to almost all strips of unleavened dough, from Germanic spätzle to Japanese soba – except for the most famous noodle of them all, Italian pasta. This distinction seems to me both puzzling and arbitrary, and as I love pasta with a burning passion, and P is rightly occupied by the mighty potato, I'm going to invite it to this particular party instead. _Vi do il benvenuto, Signora Pasta._
In reality, the difference between Eastern and Western noodles is not great. The wheat versions eaten in northern China have more in common with the Italian variety than the rice noodles of the south.
But while almost all European noodles are wheat-based (though ancient buckwheat and chestnut flour varieties linger on in odd corners), in Asia they can be made from everything from rice to mung beans, all of which, of course, offer yet more choice for the lucky consumer.
In addition, pasta is almost always cooked to the same al dente consistency, whatever the dish or shape, but further east, noodles can be bouncy and firm or doughy and soft, slippery or crunchy, boiled, sautéd, deep-fried or even eaten cold.
Yet all noodles, whether spaghetti or soba, have the same allure; a pleasure more textural than tasty. There's something fundamentally satisfying about a big bowl of noodles, steaming in a delicately spiced meaty phở broth in Saigon, peeping out from underneath a rich goulash in Szeged, or doused in butter and Marmite and eaten in front of the television on a Monday evening in Salford. There's a noodle for every occasion, and they always seem to hit the spot.
### Pasta
As with many basic foods, pasta has a fuzzy history – though ancient Greeks, Romans and Etruscans all made unleavened doughs that sound as if they may have been a bit like the stuff we know and love today, there is little evidence as to what they looked like, or indeed how they were cooked.
More conclusive early mentions come from the Middle East, but the first definite European pasta sighting was in twelfth-century Sicily, where strings of dough were reported a full century before Marco Polo is said to have brought the idea back from China. Macaroni pops up in England surprisingly soon afterwards, often paired with cheese – yet more proof of the astonishingly cosmopolitan nature of the (aristocratic) medieval diet.
At its simplest, pasta is made from durum wheat (a very hard variety also used for semolina and couscous) and water, though eggs can be added to enrich the dough, and other ingredients like spinach juice or squid ink deployed for colour and flavour. When making it at home, you can swap in fine white flour, as they do in northern Italy, which gives a softer, silkier result.
Contrary to popular belief, fresh pasta is not better than the dried variety; they're simply used for different things. Fresh pasta, which is lighter and more delicate, and will absorb more of whatever sauce you're adding to it, is best paired with subtle flavours, usually dairy based – Marcella Hazan reckons that olive oil 'obliterates its fine texture . . . and strong flavours deaden it'. More robust dry pasta, meanwhile, can take the weight of oil, tomato and hearty meat-based sauces.
The right shape, of course, depends on the dish: the thicker the sauce, the chunkier the pasta required. Ragù works best with hollow rigatoni or large conchiglie shells, while smoother, thinner sauces suggest spaghetti, or even the very thin spaghettini. (For more information on this subject, I'd recommend consulting Caz Hildebrand and Jacob Kenedy's comprehensive and beautiful book _The Geometry of Pasta_.)
Don't be tempted to scrimp on dried pasta; as it should form the bulk of the dish, it's worth spending an extra pound for the good stuff. This will have a slightly rough surface, which enables it to trap the sauce; cheaper varieties are slippery and wormlike, and the sauce will run off them like oil on a non-stick pan. Although it's sometimes hard to judge the texture from the packet, look for clues like 'bronze die' (the mould through which the dough is formed to shape it).
Cooking pasta is simple. You need a large pan of boiling water, larger than you might think necessary (crowding the pan will encourage the pasta to stick together), generously salted. It should taste like the sea (don't worry, your pasta won't).
Once it comes to the boil, add the pasta, stir once, and cover until it comes back to the boil. Uncover and cook for a couple of minutes shy of the time recommended on the label, then begin checking it – only you will know when it's done to your taste,* but err on the side of underdone, as it will cook a little more in the sauce.
Scoop out a cupful of cooking water, drain the pasta well and stir it into the sauce while it's still hot, adding any extra cooking water as necessary. That's it; no rinsing, and no oil until the end. (Then you can add as much as you like.)
### Asian noodles
An even more dauntingly broad topic than pasta, and one which deserves far more space than I can give it here. As well as the usual suspects, I'd recommend MiMi Aye's _Noodle!_ , to which I am heavily indebted for the following information, as an excellent overview of noodle cuisine from Japan to Jakarta.
Although noodles in general are thought to have originated in the Arab world, where they are little eaten today, they have a long history in the Far East. A decade or so ago, archaeologists found a 4,000-year-old bowl of millet noodles buried under three metres of earth in northern China.
These days, wheat noodles are more popular in the north and Japan, while rice noodles predominate in southern China and south-east Asia, and mung and soya bean, buckwheat, tapioca and yam noodles are found scattered throughout the region too. Here's a very brief guide to a few of the most common types in this country, though a quick browse of the noodle aisle at any oriental supermarket is likely to tempt you into more exotic territory almost immediately.
## Japanese carbonara
##### serves 2
200g dried udon noodles
1 tablespoon vegetable oil
4 spring onions, roughly chopped
2 eggs, plus 1 yolk
2 tablespoons bonito flakes (see intro), plus a little extra to serve
Togarashi seasoning, to serve (optional, see intro)
##### _For the dashi soy sauce:_
100ml light soy sauce
5g kombu (dried kelp)
3 teaspoons mirin rice wine
3g bonito flakes
Inspired by the traditional kamatama udon, which is served with a raw egg cracked into it, this version has the same rich, umami flavour as the Italian variety, and is just as satisfying to eat, though the dashi soy sauce requires a little more in the way of advance preparation.
You can easily order things like the dried tuna flakes (bonito) and kombu seaweed online if you don't happen to live near an oriental supermarket – they're super light and keep for ages, so they're good things to have in the cupboard. Togarashi seasoning is a spicy mix of chilli, peppers, seaweed, roasted orange zest and sesame seeds, and is increasingly widely available, but a pinch of chilli flakes would also do nicely.
1. Put the soy sauce, kombu and mirin into a small pan and bring to the boil. Stir in the bonito flakes, then leave to cool and infuse for at least a couple of hours before straining.
2. Cook the udon in boiling salted water until just al dente. Meanwhile, heat the oil in a medium saucepan on a medium heat and cook the spring onions until soft. Whisk together the eggs, yolk and remaining bonito flakes in a bowl and place next to the hob.
3. Drain the noodles. Pour 4 tablespoons of the infused soy sauce into the hot pan with the spring onions and stir in the noodles to coat. Take off the heat and immediately tip the eggs into the pan, stirring furiously so they don't scramble. Once the sauce has begun to thicken, divide between bowls and sprinkle with a little more bonito, if you like, and some togarashi if using.
## Baked ziti with sausage and kale
##### serves 4–6
6 Italian sausages
Olive oil, to fry
1 large red onion
400g large tubular pasta, preferably ziti if you can find it, but large penne or tortiglioni will do
200g kale, trimmed and roughly chopped
250g firm mozzarella (of the sort sold for pizza)
4 tablespoons extra virgin olive oil
200g ricotta
##### _For the sauce:_
8 garlic cloves, crushed
3 tablespoons olive oil
1 teaspoon chilli flakes
4 x 400g tins of plum tomatoes
2 teaspoons sugar
A generous dash of red wine vinegar
This Italian-American classic is dedicated to the memory of Carmela Soprano, whose cooking fascinated me for six seasons of mob violence and family feuding. With big, strong flavours worthy of Tony himself, this 'zee-tee' is great food to feed a crowd, and can be made well in advance up to the end of step 5, then baked to finish.
1. For the sauce, fry the garlic gently in the oil in a wide pan for a couple of minutes, then stir in the chilli and fry for 30 seconds. Add the tomatoes, rinsing out the tins with a dash of water, the sugar and vinegar. Bring to a simmer, turn down the heat and cook for 30 minutes, until thickened. Season to taste.
2. Meanwhile, strip the sausages of their casings and roll the meat into little balls. Fry in a little oil over a medium-high heat until well browned.
3. While the meatballs are cooking, finely slice the onion. Scoop out the meatballs and fry the onion in their fat until well softened. Turn off the heat and tip the meatballs back in.
4. Bring a large pan of salted water to the boil and heat the oven to 200°C/fan 180°C/gas 6 if baking immediately. Cook the pasta for 6½ minutes, then scoop out with a slotted spoon and add to the pan with the meatballs and onion. Cook the kale in the pasta water for a minute or so until softened, then drain thoroughly and add to the pasta. Toss together until it's all well coated with oil.
5. Finely dice the mozzarella and add three-quarters to the pan. Pour in the sauce and stir it all together, then pour it into a large baking dish.
6. Cover with foil and bake for 20 minutes (or 30 if from cold). Whisk the extra virgin olive oil into the ricotta and season well. Once the timer goes off, uncover the dish and sprinkle the remaining mozzarella on top, then dot spoonfuls of ricotta on top of that. Return to the oven for 15–20 minutes, until melted and bubbling. Allow to cool slightly before serving.
## Spicy peanut butter noodles with sprouting broccoli
##### serves 2
140g soba noodles
2 tablespoons crunchy peanut butter
2 tablespoons gochujang (see intro)
2 tablespoons Chinkiang vinegar
1 teaspoon sugar
1 teaspoon soy sauce
90g purple sprouting broccoli, stalks chopped and separated from the heads (see note)
2 spring onions, sliced on the diagonal
2 tablespoons roughly chopped peanuts
Though I wish I could take the credit for the genius idea of putting peanut butter and noodles together, I've appropriated it from the wonderful Fuchsia Dunlop, tweaked it a little bit, and added the rich, sweet heat of Korean gochujang fermented chilli paste. This is available from oriental grocers and online, but feel free to substitute rival chilli condiments as preferred – you may need to add more or less, and more sugar or soy sauce, depending on their heat and flavour profile.
_Note: most other quick-cooking vegetables will work if you don't happen to have any broccoli, and feel free to chuck in whatever prawns, cooked chicken, omelette or tofu you have to hand; this is a very easy-going dish._
1. Bring a large pan of salted water to the boil and add the noodles. Set the timer for 2 minutes before they ought to be done (which will depend on the noodles: usually about 3–5 minutes, but check the packaging).
2. Meanwhile, whisk together the peanut butter, gochujang, vinegar, sugar and soy sauce, and add a few tablespoons of the cooking water to loosen to a pouring consistency.
3. Once the timer goes off, add the broccoli stalks to the pan of noodles and cook for 1 minute, then add the broccoli heads, cook for a further minute, then drain and rinse briefly under cold running water.
4. Return the noodles and broccoli to the pan, still on the heat, add the sauce and toss through until heated. Divide between bowls and sprinkle over the spring onions and peanuts. Serve immediately.
## Beetroot noodles with goat's cheese, toasted walnuts and baby kale
##### serves 2
200g spaghetti or other pasta of your choice
50g walnuts
300ml beetroot juice
4 big handfuls of baby kale or other young greens
100g soft goat's cheese
Again, I must confess the clever notion of cooking pasta in vegetable juice is not my own; I read about it in an American food magazine on the tube one evening and could hardly wait to get home and try it. As well as turning the noodles a shockingly lovely pink, the reduced juice lends them a sticky vegetable sweetness which works particularly well with creamy, lactic goat's cheese and bitter toasted walnuts, though that first evening I used a tiny hunk of salty pecorino that had been falling from the fridge door with irritating regularity for some weeks, and that worked just fine too.
1. Bring a large pan of well-salted water to a rolling boil, then add the pasta. Cook for 5 minutes.
2. Meanwhile, toast the walnuts in a dry pan until aromatic, then roughly chop and set aside. Bring the beetroot juice to a simmer in a medium pan.
3. Drain the pasta and add to the pan with the beetroot juice. Cook for about another 5 minutes, until the noodles are al dente (exactly how long will depend on your pasta and your preferences) and the juice is thick – be careful they don't stick. If it does look a little dry before they're done, stir in a splash more juice.
4. Stir in the kale to wilt, then season well to taste; the juice will be quite sweet, so it will be able to take a generous amount of salt and black pepper.
5. Divide between bowls and scatter with chopped nuts and blobs of cheese – the cheese can be stirred in by the eater, but it looks prettier pristine and white against the pink pasta. Serve immediately.
## Spätzle with cheese and onion
##### serves 4
A knob of butter
5 rashers of pancetta, chopped (or a further 1 tablespoon butter if you'd prefer to make it vegetarian)
1 large onion, finely sliced
75ml chicken stock (or vegetable if preferred)
A large handful of grated Gruyère
##### _For the noodles:_
150g plain flour
85g semolina flour (or the same weight of plain flour if you can't find this)
1 teaspoon salt
A whole nutmeg, to grate
2 eggs, beaten
120ml milk
Hearty mountain food, these chunky central European noodles, whose name, rather charmingly, means 'little sparrows', are more forgiving than most pastas. The technique for shaping them takes a bit of practice, and you'll probably end up with a few monsters to begin with (which will still taste pretty great), but as you work through the dough, you'll get the hang of it.
Spätzle are also very good served with sautéd cabbage and caraway seeds, or butter and herbs, or just about anything you might put with gnocchi.
1. To make the spätzle, put the flours into a large bowl with the salt and a pinch of nutmeg and stir in the eggs, followed by just enough milk to make a softish dough. Cover and leave for about half an hour.
2. Meanwhile, melt the butter in a frying pan and add the pancetta if using (if not, add the extra butter and skip straight to the onion). Cook until most of the fat has been released, then add the onion. Sprinkle with salt, and cook gently, stirring regularly, until soft, golden and beginning to caramelize, which will take at least 25 minutes.
3. Bring a large pan of salted water to the boil. Meanwhile, shape your spätzle dough into a long rectangle on a damp chopping board (nothing too heavy) and prepare a large bowl of iced water.
4. When the water comes to the boil, hold the board over the pan and use a palette knife or similar to cut and then flick tiny nuggets of the dough into the water – bear in mind they'll expand as they cook, so make them much smaller than you might think you need to. Once they start rising to the top, stop; you'll need to do this in several batches. Cook for a couple of minutes until the texture is firm but chewy, and they taste cooked, then scoop out with a slotted spoon and deposit in the iced water. Repeat until all the dough is used up.
5. Add the chicken stock to the onion and pancetta and bring to a simmer. Drain the spätzle well, then toss in along with the cheese and a little seasoning. Stir until the cheese begins to melt, then divide between bowls and serve.
## Spaghetti with courgette noodles and Parmesan
##### serves 2, easily doubled
300g courgette (1 large one)
3 tablespoons extra virgin olive oil
2 garlic cloves, finely chopped
200g linguine or spaghetti
40g pecorino or Parmesan
A small bunch of basil
A whole nutmeg, to grate
One of the best pasta dishes I have ever eaten is also one of the simplest, the second act of a dinner in a tatty old palazzo somewhere north of Rome where they took in paying guests with an air of politely aristocratic resignation.
After a glorious fortnight of ripe red tomato sauces and sweet Neapolitan seafood, a bowl of penne with some yellowing, mushy-looking courgettes seemed a bit of a let-down. Yet we could not stop eating it: the mild, sweet courgettes, slowly cooked in olive oil until they melted on the tongue, the perfectly bouncy pasta and salty clouds of pecorino – utter bliss.
1. Bring a large pan of well-salted water to the boil. Cut the courgette into long thin strips with a spiralizer, mandoline, or the coarse side of a box grater (lay the last on its side and grate the courgette lengthways for longer strips).
2. Heat the oil in a frying pan over a medium-low flame and add the garlic. Cook for a minute or so, then add the courgette strips and 2 tablespoons of water, turn the heat down low and cook for about 15 minutes, until the water has evaporated and the courgettes are very soft, adding more water if necessary.
3. Meanwhile, cook the pasta in the boiling water until al dente (check the packet), finely grate the cheese and pick the basil leaves, tearing any large ones up.
4. Drain the pasta, reserving a little of the cooking water, and toss with the courgettes, a pinch of freshly grated nutmeg and half the cheese, then add a splash of the cooking water to loosen it. Divide between plates, top with the basil and a little more cheese and serve with the remaining cheese on the side.
## Vietnamese bún chả
##### serves 4
400g thin rice noodles
1 soft lettuce, shredded
A small bunch of mint, leaves only
A small bunch of coriander, leaves only
##### _For the meatballs:_
2 tablespoons white sugar
450g pork mince (not too lean)
1 large shallot, finely chopped
2 garlic cloves, finely chopped
3 tablespoons fish sauce
½ teaspoon ground black pepper
½ teaspoon salt
##### _For the sauce:_
3 tablespoons rice vinegar
2 tablespoons palm or soft light brown sugar
3 tablespoons fish sauce
2 garlic cloves, finely chopped
Juice of 1 lime
1 red bird's-eye chilli, finely chopped
Crouched on tiny plastic stalls at the edge of a dusty, noisy road, straight off the plane, this dish, a Hanoi speciality of grilled pork, cold noodles and forests of fresh herbs, proved well worth the twelve-hour flight.
1. Start with the meatballs. Put the sugar into a small pan with 3 tablespoons of water and heat until it turns a rich brown colour. Have 2 tablespoons of water handy by the hob, then once it's ready, take off the heat and add this water to the pan, swirling quickly to combine. Set aside.
2. Put the remaining meatball ingredients into a bowl, mix well, then stir in the caramel sauce from the pan. Chill for at least an hour, then form into small balls.
3. Meanwhile, make the dipping sauce by bringing the vinegar, sugar, fish sauce and 120ml of water to a simmer in a small pan, stirring to dissolve the sugar. Allow to cool, then add the garlic, lime juice and chilli. Taste and adjust the seasoning if necessary.
4. Put the noodles into a large pan of boiling water, turn off the heat and leave for 15 minutes, then drain and rinse under cold water. Drain well.
5. Heat a griddle pan until smoking hot. Fry the meatballs until slightly charred on the outside and cooked through.
6. Divide the noodles between bowls and top with the meatballs. Put the lettuce and herbs into a bowl, and serve separately with the sauce, allowing diners to help themselves to both.
I have a soft spot for a tentacle or two. Though I wouldn't take it quite as far as the Japanese and their erotic octopus anime, there's a strange beauty in those long, sinuous arms with their polka-dot suckers.
Strange because, as the late Alan Davidson put it, the octopus, squid and cuttlefish 'look like bags with heads on top and eight or ten arms or tentacles sprouting therefrom . . . this construction, which is entirely logical for the life they lead, appears strange, even repugnant to human eyes, and accounts for the widespread reluctance to eat them'. Appearances be damned – they taste great.
All three are on the mild side, with a certain sweetness about them – the cuttlefish more than the squid, while the octopus has an earthier, more savoury flavour, though still a surprisingly delicate one for something that looks so monstrous. This subtlety means they're best prepared simply, rather than overwhelmed with rich or spicy sauces.
I suspect that mostly, however, we relish the cephalopod family for its texture, rather than its taste – an unusual thing in the Western world, but one which may go some way to explain why the Japanese are so fond of the things. Cooked correctly, they should be tender, but with a slight springiness to the flesh, while the ends of the tentacles, and the octopus's suckers, will crisp up beautifully over a high heat.
There are two ways to cook these lovely creatures: low and slow, or hot and fast; anything in between and you'll be left with the rubber bands beloved of bad Mediterranean restaurants. (The first, of course, does not mean you can't finish them off on the heat – a gently braised octopus or squid chucked on to a smoking hot barbecue until it's charred and smoky, then bathed in olive oil and lemon juice, is a seaside dream come true.)
### Sourcing
Though squid is easy to find (choose whole, rather than rings), octopus and cuttlefish will probably require a trip to a fishmonger, who ought to be able to order you some in – if you don't have one nearby, the freezers in oriental supermarkets are a surprisingly fertile hunting ground. (In this case, freezing actually benefits the cook, because it helps to tenderize the creature, so, in the case of the octopus at least, I prefer to buy them frozen.)
### Squid
Found in oceans and seas throughout the world – _Loligo forbesi_ , which can grow up to 90cm long, is the most common species in British waters, though the slightly smaller Mediterranean _Loligo vulgaris_ is also found in the English Channel.
Anatomically, they're fairly simple: the body is a tube with two rear fins, while the head boasts two long tentacles to drag in prey, eight arms, and unnervingly large eyes (the biggest relative to their body size in the animal kingdom). These formidable predators live fast and die young.
To clean them (though many squid are sold ready to eat, and a fishmonger should happily do it for you), pull the head apart from the tube-like body and reach into the body and remove the hard quill, the last vestiges of the squid's skeleton, from inside if it's still there.
Pull or cut off the wings (the flaps at the side) if large. Trim the tough bit that attaches them to the body and set aside. Peel or rub off any membrane from the outside using a sharp knife, then turn the body inside out to give it a really good rinse with cold water.
Decisively cut the tentacles off just below the eyes. If there is a pale blueish pouch still attached to the head, carefully remove it – it will be full of ink, which you can use in a sauce or risotto if you like. Discard the rest of the head and rinse the tentacles well.
### Cuttlefish
Like squid, cuttlefish are highly efficient predators, with a razor-sharp beak and one of the largest brain-to-body size ratios of any invertebrate (though I wouldn't let that put you off eating them).
_Sepia officinalis_ , which can grow up to 25cm long, is the species most often found in British waters, and can be cleaned in much the same way as the squid, but with extra care given to avoid coating the entire kitchen in ink, as they have far more of the stuff.
### Octopus
The southern _Octopus vulgaris_ is generally considered better eating than the tougher lesser octopus more reliably found in British waters. (If you're not sure, have a look at the tentacles: the common octopus has two rows of suckers – hurrah! – while the lesser version has to make do with just the one.)
As anyone who remembers the late lamented Paul the Psychic Octopus from the 2010 World Cup will be aware, they are very intelligent creatures, and, like the squid and the cuttlefish, keen hunters.
The common octopus is generally found ready cleaned and frozen in this country, which is a great boon to cooks, as it takes much of the hard work out of the equation.
_Note: only the smallest baby octopuses are suitable for the quick-cook treatment – larger ones will need to be braised to tenderness._
## Cambodian stuffed frog-style squid
##### serves 2
6 stalks of lemongrass, chopped
35g galangal, chopped
5 shallots, roughly chopped
5 garlic cloves, roughly chopped
5g kaffir lime leaves, shredded
2 teaspoons ground turmeric
60g roasted peanuts, roughly chopped
250g pork mince
1 tablespoon neutral oil, plus extra to brush
4 medium squid, cleaned
4 cocktail sticks
##### _For the sauce:_
2 tablespoons oil
4 shallots, chopped
5 garlic cloves, finely chopped
1 teaspoon ground turmeric
4 tablespoons ground roasted peanuts
2 tablespoons sugar
1 tablespoon fish sauce
1 teaspoon salt
250ml coconut cream
The best meal I had in Cambodia was in a curious little place sitting on its own on a main road, on a jungly evening so wet that the power kept cutting out. I ordered the frog to entertain my companions, and then was glad none of them wanted to share its fragrant pork and peanut stuffing.
Big meaty frogs aren't so easy to come by here, but squid are, and they taste better too.
1. To make the stuffing, combine the lemongrass, galangal, shallots, garlic and lime leaves in a food processor and whiz until finely chopped. Add the turmeric and half the peanuts, and pulse until the peanuts are ground. Add the pork mince and the remaining nuts and pulse to combine. You can make this several hours ahead if you like.
2. To make the dipping sauce, heat the oil in a frying pan over a medium heat and fry the shallots until soft. Add the garlic and fry for another minute or so, then stir in the turmeric and cook for a further 30 seconds. Add the peanuts, sugar, fish sauce and salt, stir for a minute, then stir in the coconut cream to give a thickish sauce.
3. Heat the oil for the stuffing in a large frying pan and, when hot, fry the pork mince until just cooked through. Divide between the squid, stuffing them tightly but leaving enough room at the end to secure with a cocktail stick.
4. Heat a griddle pan until smoking hot. Brush the squid with a little oil, then cook for a couple of minutes on each side until charred (you can do the tentacles as well). Serve immediately with the sauce for dipping.
## Coconut squid
##### serves 4
400g small squid
200ml coconut milk
50g sweetened desiccated coconut
50g panko breadcrumbs
1 teaspoon fine salt
50g plain flour
2 eggs
Neutral oil, to cook
2 teaspoons chilli flakes
Lime wedges, to serve
Inspired by the sweet, juicy coconut prawns of the Caribbean, I think squid, a creature whose anatomy lends itself perfectly to deep-frying, works even better, as well as being considerably easier on the wallet in this country.
This makes an excellent starter with a few rum punches.
1. Separate the tentacles from the bodies of the squid. Cut the bodies into triangles and score one side lightly with a knife (or cut into thick rings if you prefer). Put into a bowl with the tentacles and cover with the coconut milk. Leave to soak for up to 12 hours (though even half an hour is better than nothing).
2. Mix together the desiccated coconut, breadcrumbs and salt in a shallow bowl. Put the flour into a second shallow bowl, and lightly beat the eggs in a third. Heat a large saucepan a third full of oil on a medium-high heat.
3. While the oil heats, coat the squid. Take a piece from the marinade, shaking off any excess, and blot in the flour. Dunk in the egg, shaking off any excess, then roll in the coconut mixture. Repeat with the rest.
4. When the oil is hot enough that a breadcrumb sizzles immediately (170°C), add the squid in batches, frying until golden and turning over once during cooking, then scoop out with a slotted spoon, drain on kitchen paper and sprinkle with a few chilli flakes.
5. Make sure the oil comes back up to heat before adding any more – too cool and the squid will be pale and oily, too hot and the coconut will burn. Serve with the lime wedges.
## Black risotto with eggs
##### serves 2, easily doubled
75g butter, diced
2 shallots, finely chopped
1.5 litres hot fish stock (extra dilute if using stock cubes, as they can be very salty)
250g risotto rice (arborio or carnaroli)
150ml dry white wine
4 x 4g sachets of cuttlefish or squid ink
A little oil, to grease
2 quail's eggs
100g octopus or squid pieces in olive oil
2 teaspoons salmon roe or other colourful fish roe (optional)
2 tablespoons flat-leaf parsley, roughly chopped
This is one of those dishes that's as big on visual appeal as it is on taste – inky black rice, vivid orange eggs and green parsley make for a striking dinner. Sachets of squid or cuttlefish ink can be found in fishmongers and some delicatessens, as can squid or octopus pieces in oil, but you could always cook the latter from fresh if you prefer. Do be careful about the fish stock you use; too much salt, and you'll ruin the whole dish, so taste and dilute if necessary before starting.
1. Heat a third of the butter in a wide pan over a medium heat and soften the shallots. Meanwhile, keep the stock warm on a gentle simmer on another ring of the hob.
2. Add the rice to the shallots and stir to coat with butter. Once most of the grains start to look translucent, turn up the heat and add a little wine; if it sizzles, the pan is hot enough and you can pour in the rest. Stir until it has been absorbed, then stir in half of the squid ink.
3. Turn down the heat slightly and add a ladleful of stock. Cook, stirring, until most of it has been absorbed, then add another. Continue in this way until the rice is just tender – you may not need all the stock, so start checking the texture after about 20 minutes.
4. Once the rice is done, add the rest of the butter, cover the pan and leave for 5 minutes. Meanwhile, crack the quail's eggs and fry in a little oil.
5. Stir the by-now melted butter into the risotto with the remaining squid ink, divide between shallow dishes and put a quail's egg in the centre of each. Dot with the octopus or squid pieces, the roe if using, and the parsley, and serve.
## Braised octopus with chickpeas and coriander
##### serves 6
1 octopus, about 1.5kg, cleaned and defrosted if necessary
2 large onions, sliced
10 garlic cloves, sliced
2 x 400g tins of chickpeas, drained
150ml olive oil
800g new potatoes
Juice of 3 lemons
A large bunch of coriander
Cooking a whole octopus can be a daunting business, however much you love them – imprisoned within their icy packets, tentacles coiled up against the shrink-wrap, the frozen sort always remind me of something out of a particularly nightmarish manga tale. Once you get over that, however, they're simplicity itself to deal with, needing little more than time to soften into sweet submission.
One word of caution: octopus can be pretty salty, so the potatoes make a good, bland foil – don't be tempted to season the water.
1. Line the base of a pan large enough to hold the octopus with the onions and garlic, and add the chickpeas and olive oil. Sit the octopus on top, then cover and cook gently until very tender – this will probably take at least 2½ hours, but the longer you cook it, the better it will get.
2. About 20 minutes before you want to eat, cook the potatoes in their skins in unsalted water until tender.
3. When your octopus is done, lift it out of the pan and cut into chunks. Spoon the chickpea base into a large shallow serving dish with a slotted spoon and squeeze over lemon juice to taste; it will probably be quite salty, so you shouldn't need to season it. Roughly chop the coriander and potatoes and stir both through the chickpeas, adding a little of the cooking liquid if they seem dry, then arrange the octopus chunks on top to serve.
## Maryland-style octopus sandwich
##### makes 2
35g plain flour
35g cornflour
½ teaspoon celery salt (see recipe here or use ready-made stuff)
½ teaspoon hot chilli powder
1 tablespoon butter
2 tablespoons oil
4 baby octopuses or 8 small cuttlefish or squid, cleaned
2 soft white rolls
2 ripe tomatoes, sliced (see intro)
4 little gem lettuce leaves
##### _For the mayonnaise:_
100g mayonnaise
1 teaspoon mustard powder
½ teaspoon paprika
½ teaspoon celery salt
½ teaspoon hot chilli powder
¼ teaspoon ground black pepper
¼ teaspoon ground allspice
2 pinches of ground nutmeg
2 pinches of ground cinnamon
Inspired by the simple, but compulsively tasty soft-shell crab sandwiches served on the shore of Maryland's Chesapeake Bay, a beautiful place with a sadly familiar story of declining stocks – but octopus, cuttlefish or squid make a surprisingly good substitute. If ripe tomatoes are but a distant dream, the baked ones here work very well indeed.
1. Start by mixing all the mayonnaise ingredients together and adjusting to taste.
2. Whisk together the flours, celery salt and chilli in a wide bowl. Heat the butter and oil together in a frying pan over a medium-high heat and dredge the octopus or squid in the flour mixture, shaking off the excess.
3. Once the oil is hot enough that a pinch of flour sizzles when it hits it, add the octopus or squid, in batches if the pan is too small to hold them comfortably in one layer, and fry until golden and crusted on both sides.
4. Meanwhile, split the rolls and spread the bottom halves with mayonnaise. Top with tomato and lettuce. Once the octopus or squid is ready, drain briefly on kitchen paper and divide between the rolls, making sure the tentacles hang from the sides for best effect. Add the top halves and serve immediately.
There was never any doubt in my mind that potatoes deserved a place in this alphabet. No other foodstuff is quite so supremely satisfying, or quietly comforting – I don't know whether it's nostalgia or just the soporific effect of all that starch, but I'd be quite happy to be buried under a big mound of mash for all eternity.
They're so wonderfully versatile, for a start, as at home in a light, summery salad as they are in a big cheesy winter gratin. A baked potato can be a meal in itself, or sit elegantly alongside the main attraction in the form of a boulangère or dauphinoise; it can be fluffy, like a tattie scone, or dense and firm like a tartiflette; crunchy, like a crisp, or gooey like a pomme purée.
You can roast potatoes to deep golden perfection, or whip them to silky smoothness, and, perhaps most importantly for many people, you can deep-fry the hell out of them in the quest for the perfect chip.
In short, with potatoes in the house, you will never go to bed hungry. Or, perhaps, entirely unhappy.
### History
The homely spud is about as foreign as you can get – native to the Andes, they didn't arrive here until the end of the sixteenth century and, in Britain at least, were treated with suspicion for at least another hundred years after that, though they found favour in Ireland almost immediately.
Once they'd caught on, however, they quickly went native, as all of those mourning the decline of fish and chips as our national dish (first seen on these streets in the 1860s, some centuries after the curry) attest. Perfectly suited to our damp climate, economic and easy to grow in small spaces, with a long season and good storing potential, it was no wonder that they became a staple food for the poor.
The Victorian social reformer Henry Mayhew writes fascinatingly of the baked potato sellers who did a brisk trade on London's streets from August to April, reporting that one vendor at Smithfield, still a bustling meat market at that time, could hope to sell up to 1,000 spuds on market day, 'and to take upwards of £2' for his trouble.
Though consumption has dropped sharply in recent years in favour of quicker-cooking starchy rivals like rice and pasta, we still put away 90kg each annually, giving us quite a respectable showing in the world rankings of devoted spud munchers.
### Nutrition
Nutrition-wise, potatoes are not as bad as their starchy reputation has us believe – they're a decent source of vitamin C, and not too bad on the fibre front either, especially if you leave the skins on. (And why wouldn't you, given that's where most of the flavour lies?)
Plus, of course, they're a good source of energy; indeed, it's possible to survive for a surprisingly long time eating nothing but potatoes.
It's a sad truth, however, that potatoes only come into their own when you add fat and salt; we love them more for their soothing, satiating properties than any particular nutritional benefits.
### Varieties, storage and preparation
For a country apparently so enamoured by the potato, we pay surprisingly little attention to what kind we're eating. The most important distinction between the different varieties is waxy versus floury. Floury potatoes have higher levels of dry starch in their cells, which swell up and separate when cooked, resulting in a looser, fluffier texture better for mashed, roast and baked potatoes.
The cells in waxy potatoes, meanwhile, remain stuck together after cooking, which gives them the firmer, denser consistency required for potato salads and other dishes where you'd like them to keep their shape, such as sautéd potatoes, or pommes dauphinoise.
Because supermarkets often don't tell you what their potatoes are good for, here are a few of the most common names that appear on our shelves. Farmers' markets and farm shops are a handy source of more unusual varieties:
— Floury (good for mashing, roast potatoes with a fluffy texture, baked potatoes): Fianna, Golden Wonder, Kerr's Pink, King Edward, Maris Piper, Purple Majesty (dark purple skin, vivid purple flesh, very striking on the plate), Rooster, Shetland Black (dark purple outer skin, white flesh)
— Waxy (good for salads, boiling and gratins): Anya (good for salads), Charlotte (good for salads), Desiree (red-skinned), Estima, Lady Balfour, Maris Peer, Nicola, Pink Fir Apple, Vivaldi
Potatoes should be removed from any plastic wrapping as soon as you get them home, and stored in a cool, dark place. Not only are the unwashed sort cheaper, but they keep longer too.
When it comes to preparing them, always leave the skins on if possible; not only are they the best bit of the potato, nutritionally speaking, they're also where most of the flavour lies, which is why I usually parboil my roast potatoes in a pan with their peelings. It sounds mad, but, as with so many ideas from Heston Blumenthal, there is method in it – taste the cooking water if you don't believe me.
### Potatoes in the kitchen
The potato's bland starchiness makes them a good match for almost anything you can throw at them, but my absolute favourite pairing is probably potatoes and cheese, be that mild goat's curd, fresh from the market, or a chunk of dry Cheddar from the back of the fridge.
As any Scandinavian will tell you, this affinity with savoury ingredients also makes them a very good partner to cured and smoked fish and seafood, as well as, of course, its meaty equivalent. Crisp little potato and bacon cakes, fried in bacon fat, are a joy indeed on a brisk morning, especially if you add another of their natural pairings: an allium of some kind.
The creamy foil they offer for strong flavours also makes them the ideal blank canvas for spice (as the extensive repertoire of potato curries and dry-fried potato dishes from the Indian subcontinent will attest) and, of course, a very good date for a richly flavoured stew, whether that's a spicy Mexican mole or a Lancashire hotpot.
Fats like butter, olive oil and crème fraîche are vital for bringing the best out in them, and a good pinch of salt is important too. Treat them in the same way as pasta, and salt the cooking water liberally; some of the best new potatoes I have ever eaten were simmered in sea water.
I hope all this inspires you to play around a bit with your potatoes, rather than relegating them to the role of best supporting carb. Why eat them boiled when you could have aloo tikki Scotch eggs instead?
See also: Braised octopus with chickpeas and coriander (here), Roast new potatoes with wild garlic dressing (here).
## Baked potato soup
##### serves 2 but easily scaled up (or down)
2 medium baking potatoes
20g butter
4 rashers of smoked streaky bacon (optional)
4 spring onions
750ml chicken stock (or vegetable if you'd prefer a meat-free dish)
1 tablespoon soured cream or crème fraîche
1 tablespoon chopped chives
This magnificently warming, velvety-textured soup combines the starchy creaminess of the spud with the savouriness of bacon and onions, enriched with a dollop of dairy. It's also a great way to use up leftover baked potatoes; just start from step 3.
1. Heat the oven to 220°C/fan 200°C/gas 7 (I usually bake my potatoes slightly hotter than this, but in this case we're not after a crisp skin). Wash and dry the potatoes well and prick in several places with a fork (this is insurance against them exploding in the oven and coating it in something with the properties of quick-drying cement), then wrap them in foil and bake for an hour.
2. Remove the foil and bake the potatoes for 15 minutes more, then take out of the oven and leave to cool slightly.
3. Meanwhile, melt the butter in a medium pan over a medium-low heat and finely chop the bacon if using. Add it to the pan and cook until beginning to brown. Roughly chop the spring onions. Scoop out about a quarter of the bacon with a slotted spoon and set aside as a garnish, then add the spring onions to the pan and soften for a couple of minutes. Roughly chop the potatoes, skins and all.
4. Add a little of the stock to the pan and scrape to deglaze, then tip in the potato and stir to coat with butter. Add the rest of the stock, bring to a simmer, then turn down the heat and cook gently for about 15 minutes.
5. Allow to cool slightly, then whiz with a hand blender until smoothish (don't overdo it or you'll end up with a gluey texture). Taste for seasoning (it probably won't need any further salt).
6. To serve, divide between bowls. Dribble a swirl of soured cream in the centre of each, and top with the chives and reserved bacon.
## Chorizo baked potatoes with avocado crema
##### serves 4
4 large baking potatoes
Coarse salt
Olive oil
200g cooking chorizo, cut into small dice
5 spring onions, finely sliced
1 ripe avocado
2 tablespoons soured cream
Juice of 1 lime
A small bunch of coriander, roughly chopped
Simple, but effective – the lovely paprika-spiked oil from the sausages seasoning the creamy flesh of the potato, and setting off the zingy green avocado crema a treat. (NB: if you'd prefer to keep it dairy free, you could leave out the soured cream; the sauce will just be slightly thicker, more like guacamole.)
1. Heat the oven to 240°C/fan 220°C/gas 9. Wash the potatoes and half dry, so they're still a bit damp, then prick each one a few times with a fork to stop it exploding in the oven. Shake a layer of coarse salt on to a small plate, then roll each potato in it so it sticks in patches, and put them on a baking tray. Bake for about an hour to an hour and a quarter, depending on size, until the skin is crisp and the insides tender, then take out of the oven, turning it down to 200°C/fan 180°C/gas 6, cut a large cross in the top of each and leave to cool a little.
2. While they're cooling, heat a small frying pan with a dash of olive oil over a medium heat and add the chorizo. Cook until the pieces have released their own orange oil, then turn up the heat slightly and fry until beginning to crisp. Add the spring onions and fry for a minute until softened, then set aside.
3. When the potatoes are cool enough to handle, scoop the flesh into a bowl, being careful to leave the skins intact, and mash until smooth. Tip in the chorizo and onion mixture, making sure you get all the oil, and mix well. Season to taste, then put back into the skins and bake for 10–15 minutes, until hot all the way through.
4. Meanwhile, cut the avocado in half and scoop out the flesh. Mash with a fork into a rough purée, then add the soured cream and half the lime juice and use a stick blender to whiz until smooth. Stir in the coriander, season to taste, adding more lime juice if necessary, and then serve the jacket potatoes with a dollop of crema on top, with the rest on the table for people to help themselves to.
## Aloo tikki Scotch eggs
##### makes 10 (or 18 smaller versions)
12 eggs (or 18 quail's eggs and 2 hen's eggs)
800g floury potatoes
50g root ginger, peeled
2–3 small green chillies, deseeded, depending on taste
5 round shallots (or 1 large red onion)
Neutral oil, to fry
1 teaspoon cumin seeds
2 teaspoons mustard seeds
1 teaspoon garam masala
½ teaspoon ground turmeric
150g peas, defrosted
1 teaspoon salt
A small bunch of coriander, finely chopped
100g flour
200g panko breadcrumbs
A mash-up (forgive me) of two picnic classics from very different parts of the world, these are rich with spice, but only mildly hot, with a lovely fresh sweetness from the peas.
The hen's egg versions are quite hefty propositions, a satisfying lunch on their own, so if you'd prefer to make them just one part of a picnic, or as party food, try them with quail's eggs instead. They are good hot or at room temperature, and pair well with mango chutney, sweet and sour date and tamarind chutney or a coriander and mint raita.
1. Gently lower 10 of the eggs (or all of the quail's eggs) into a pan of boiling water and cook for 4½ minutes (2½ minutes for quail's eggs). Meanwhile, prepare a large bowl or sink of iced water and, once they're done, transfer the eggs quickly to this to cool down.
2. Cut the potatoes into large, roughly equal chunks and put into a large pan. Cover with cold water, salt liberally, put a lid on the pan and bring to the boil, then uncover, turn down the heat and simmer until tender. Drain and allow to cool, then peel off the skins and discard (doing it this way may be more fiddly, but the flavour is far better).
3. Meanwhile, use a pestle and mortar to mash the ginger and chillies to a paste and finely chop the shallots. Heat 2 tablespoons of oil over a medium heat in a medium frying pan and fry the shallots until soft, then add the ginger and chilli paste and fry for a minute. Turn up the heat slightly and add the cumin and mustard seeds. Fry for 30 seconds, then stir in the other spices, adding a splash more oil if they start sticking, and fry for another minute or so, stirring. Take off the heat.
4. Mash the potatoes until smooth, then add three-quarters of the peas and mash roughly. Stir in the spice mixture and salt and, once well combined, add the remaining peas and the coriander and distribute evenly. Taste for seasoning.
5. Carefully peel the eggs. Take a roughly 125g lump (or about 60g for quail's eggs) of aloo tikki mixture and form into a ball, then poke a hole in the middle. Put the egg in it and seal up, then repeat with the rest.
6. Put three bowls next to the hob: one of flour, lightly seasoned; one of the remaining hen's eggs, lightly beaten; and one of breadcrumbs. Fill a large pan a third full of oil, and set over a medium heat until it comes to 190°C. Meanwhile, roll each egg in turn in flour, egg and breadcrumbs and then (for hen's eggs only) a second time in egg and breadcrumbs. Put a plate lined with kitchen paper next to the hob and get ready a slotted spoon.
7. Once the oil has come to temperature, lower the eggs in with the slotted spoon, two or three at a time (be careful not to overcrowd the pan or they won't crisp up) and fry for about 2–3 minutes until golden. Lift them out with the slotted spoon, salt lightly and drain on kitchen paper while you cook the rest, making sure the oil comes back up to temperature first.
## Northern potato salad
##### serves 4
600g small, waxy potatoes (Jersey Royals or Charlottes are ideal)
1½ tablespoons cider vinegar
3 tablespoons neutral oil
4 teaspoons grated horseradish
A small bunch of dill, chopped
3 smoked mackerel fillets
A jar of cornichons or gherkins
4 tablespoons soured cream or crème fraîche
##### _For the quick-pickled red onions:_
1 small red onion
90ml cider vinegar
¼ teaspoon salt
¼ teaspoon sugar
1 teaspoon black peppercorns, bruised
I'm not a big fan of bland, gloopy, mayonnaise-based potato salads, but I do love the creamy flavour and texture of cold waxy potatoes with strong smoked fish and acidic pickles, and such fashionably Scandinavian ingredients are just crying out for a little peppery horseradish heat. This is an excellent and very satisfying weekend lunch. Note that you'll need to make the onions at least an hour in advance.
1. Very finely slice the onion and put into a colander. Pour half a kettle of boiling water over it and leave to drain. Whisk together the vinegar, salt and sugar, then put the drained onions into a jar with the peppercorns, cover with the vinegar and allow to sit for at least an hour (though you can make this days in advance if you like).
2. Cut the potatoes into equally sized pieces without peeling, then put into a pan and cover with cold water and a generous amount of salt. Bring to the boil, then turn down the heat and simmer until tender.
3. Meanwhile, whisk together the vinegar and oil with a generous pinch of salt, then stir in the horseradish. When the potatoes are done, drain well, then put into a bowl and toss with the dressing. Leave to sit for at least 30 minutes.
4. Toss the potatoes with the dill, then flake in the mackerel. Roughly chop a few cornichons or gherkins (how many depends on their size, so use your own judgement) and scatter over the top along with some of the pickled onion. Either dollop the soured cream in the middle, or divide between plates and then add it to each – or leave people to spoon on their own if you prefer.
## Potato, black kale and anchovy pie
##### serves 6
750g large-ish waxy potatoes
A knob of butter, plus extra to grease
8–12 anchovies, rinsed well if packed in salt, and finely chopped
4 small garlic cloves, crushed
300ml double cream
150ml milk
200g trimmed black kale (cavolo nero), shredded
##### _For the pastry:_
150g butter
400g plain flour
½ teaspoon salt
1½ tablespoons mustard powder
1 egg, beaten with a little milk or water, to glaze (optional but handsome)
A cross between a French dauphinoise and a very British potato pie, with some salty little fish thrown in to add a touch of Scandi-chic, this is the kind of straightforward cold-weather food I love; creamy, rich with umami and comfortingly carby, it's an ideal lunch or dinner after a wintery walk, though you may need to go for a snooze afterwards. The kale makes it a complete meal in itself, but I usually serve a green salad too, just to balance things up a bit. It also cuts well for transportation, and makes for a nice, if rather decadent packed lunch in colder months.
The hot water crust pastry is soft, but very forgiving, almost like working with play-dough, but you can substitute shortcrust if you prefer, or, indeed, dispense with the pastry altogether and bake it like a gratin – 30 minutes covered with foil at 180°C/fan 160°C/gas 4, then a further 10–15 minutes uncovered, until browned on top.
1. Use a food processor or mandoline to thinly slice the potatoes.
2. Melt a knob of butter or some oil from the anchovies in a very large saucepan (you're going to have to get the potatoes in there eventually) over a medium-low heat and add 8 of the anchovies. Cook, stirring, until they dissolve into the fat, then add the garlic, cook for about 30 seconds, then stir in the cream and milk and bring slowly to the boil. Taste, and add more finely chopped anchovies if you'd prefer a stronger flavour.
3. Put the potatoes into the pan, cover and simmer gently for 10 minutes, turning occasionally to redistribute, or until they are softened but not cooked through. Meanwhile, bring a large pan of well-salted water to the boil and cook the kale until just tender (about 1–1½ minutes), then drain well.
4. Heat the oven to 200°C/fan 180°C/gas 6 and grease a medium pie dish (I use one about 20 x 26cm) with butter.
5. To make the pastry, put the butter in a small pan with 110ml of water and heat until melted. Bring to a simmer. Meanwhile, put the flour in a mixing bowl with the salt and mustard powder and whisk together well. Pour in the hot butter and water mixture and stir until it comes together into a dough.
6. Set a third of the pastry aside and roll out the rest to about double the size of the pie dish, then carefully lift it into place (it will be very soft, so if you end up doing it in scraps, don't worry!), pressing it into the corners.
7. Spoon half the potatoes into the dish, followed by the kale, followed by the remaining potatoes. Roll out the rest of the pastry and place over the top, then crimp together the edges to seal. Brush with the egg wash, poke a couple of holes in the top for the steam to escape, then bake for about 45 minutes, until golden. Allow to cool a little before serving.
## Aligot
##### serves 2–4 depending on accompaniments and greed
500g waxy potatoes
125g butter
110ml double cream
1 small garlic clove
200g Lancashire cheese, very finely chopped
A whole nutmeg, to grate
Potatoes, butter, cream, garlic, and vast amounts of cheese – really, there is no way you can go wrong with this classic recipe from south-west France. It's very hard to come by the fresh Tomme cheese traditionally used to make it outside its home region, but a creamy Lancashire is a fairly close match for flavour, and gives a delicious, if not entirely authentic, result.
This would generally be served with meat, but I find it so rich that I serve it with nothing more than some steamed greens, or even a sharp salad.
1. Cut the potatoes into evenly sized pieces but do not peel. Put them into a pan with a good shake of salt. Barely cover with cold water and bring to the boil, then turn down the heat and simmer until tender all the way through. Drain.
2. Melt the butter and the double cream together in a small pan with the crushed garlic clove. When the potatoes are just cool enough to handle (don't wait too long), peel them, then mash or put through a ricer and return to a low heat. Use a stick blender, if you have one, to whiz them, along with a splash of the melted butter and cream, to a smooth, gluey purée.
3. Beat in the remaining cream and butter mixture vigorously with a wooden spoon or the stick blender – at this point you will be moved to check you've got the amounts right, as they will seem outrageous, but don't worry, it will all be absorbed.
4. Beat in the cheese until smooth and very gluey, followed by a good grating of nutmeg. This will take a lot of elbow grease, but when it's ready the aligot should have a stringy, elastic texture. Serve immediately.
## Tattie scones à la Arnold Bennett
##### serves 2
250g floury potatoes, e.g. Maris Pipers or King Edwards
150g smoked haddock, or other smoked firm white fish
200ml milk
50g plain flour
3 spring onions, finely sliced
A whole nutmeg, to grate
200g spinach
##### _For the hollandaise:_
2 egg yolks
125g butter, diced
A dash of white wine vinegar or lemon juice
This dish is loosely based on the outrageously rich omelette created for, or at least enjoyed by, the Edwardian novelist Arnold Bennett at the Savoy Hotel.
It's lighter than the original, but if you really want to push the boat out you could also add a poached egg.
1. Put the potatoes, unpeeled but cut into equally sized pieces, into a pan, cover with cold water, salt liberally and bring to the boil. Turn down the heat and simmer until tender all the way through.
2. Meanwhile, put the haddock into a smallish pan just big enough to hold it in one layer with the milk, over a medium heat. Bring to a simmer, then turn down the heat and cook gently for 5 minutes, turning the fish over halfway through. Remove the fish from the milk and set aside.
3. Drain the cooked potatoes. Mash until smooth, then beat in just enough of the haddock-infused milk that the mixture grudgingly drops off the spoon – you certainly won't need it all. Stir in the flour to make a soft dough, then season and mix in the spring onions and a good grating of nutmeg until evenly distributed.
4. Roll the dough out on a well-floured surface to about ½cm thick. Cut round a side plate to make circles, then flour the tops and prick all over with a fork.
5. To make the hollandaise, put the egg yolks into a small pan with the diced butter and 1 teaspoon of cold water. Heat very gently, stirring constantly, until they melt together and the sauce thickens to your liking. Stir in a dash of white wine vinegar or lemon juice and keep warm.
6. Wash the spinach and put, still wet, into a large pan. Cover and heat gently for a couple of minutes, until wilted. Drain, squeeze out any excess water and keep warm.
7. Heat a well-greased griddle or heavy-based frying pan on a medium heat and fry the scones for about 3 minutes on each side, until golden. Cut into quarters and keep warm while you fry the rest. While they're cooking, flake the haddock.
8. To serve, put a tattie scone on a plate and top with spinach, followed by haddock, followed by a generous dollop of hollandaise. Season with a little black pepper for colour and serve immediately.
## Potato and cauliflower curry with coconut and cashew cream
##### serves 2 as a main dish, 4–6 as a side
##### _For the aloo gobi:_
400g waxy potatoes
1 tablespoon vegetable oil
½ teaspoon chilli powder
½ teaspoon ground turmeric
½ teaspoon mustard seeds
½ teaspoon salt
1 small cauliflower
##### _For the curry:_
50g cashew nuts
100g creamed coconut (the solid stuff that comes in blocks)
2 tablespoons neutral oil
1 large onion, thinly sliced
2 small green chillies, 1 finely chopped and 1 slit down its length but left whole
3 garlic cloves, crushed
1 tablespoon ginger, finely grated
2 teaspoons ground coriander
½ teaspoon ground turmeric
½ teaspoon ground fennel seeds
¼ teaspoon garam masala
A handful of coriander
This is my Keralan-inspired take on the classic northern Indian aloo gobi, with a rich, thick sauce of coconut cream and cashew nuts. It's hefty enough to make a meal on its own with some plain rice and a punchy pickle of some sort, but you could also serve it as part of a curry feast if you prefer – and it would be great with sautéd greens or grilled fish.
1. Heat the oven to 200°C/fan 180°C/gas 6 and cut the potatoes into halves or quarters depending on their size. Mix together the oil, spices and salt and toss half of this together with the potatoes, then spread out on a baking tray and bake for 15 minutes. Meanwhile, cut the cauliflower into florets and then, after 15 minutes, toss together with the remaining spice mix and add to the baking tray. Bake for another 15–20 minutes, until the potatoes are about cooked through.
2. Meanwhile, soak the cashew nuts in a generous amount of hot water for 15 minutes and dissolve the coconut in 250ml of hot water. Drain the cashews, retaining the soaking water, and whiz them up to a purée along with 5 tablespoons of the soaking water to make a loose-ish paste.
3. Heat the oil in a frying pan and add the onion. Cook until soft and golden, then stir in the chopped chilli, garlic and ginger. Cook for another couple of minutes, then stir in the dry spices and a pinch of salt and cook for a further 2 or 3 minutes.
4. Stir in the coconut cream and cashew purée, scraping the bottom of the pan, followed by the potatoes, cauliflower and remaining chilli. Simmer until the sauce thickly coats the vegetables, then serve with plenty of chopped coriander.
I defy anyone to watch a jelly wobble with a straight face. It's like fireworks, or the first sunny day of spring – a joy that never gets old.
Not only are jellies mobile in a way rare in foods not actively trying to escape their fate, but they're incredibly biddable; it's possible to set almost anything that takes your fancy – the amorphous jelly can take on just about any shape, hue, flavour and texture that the heart desires. It also has the benefit, as far as visual effects are concerned, of being, at its most basic, completely transparent; something almost unique in the culinary world, and strangely alluring. A clear wine jelly, studded with jewel-like fruits, with the sun shining through it, is a vision fit for a Dutch still life.
Jellies, panna cottas and their ilk are also a great way to impress guests: quick and easy to produce, happy to sit in the fridge for several days, and, most importantly, stunning to look at – there's something quite wonderful about the way they seem to defy the ordinary laws of physics.
### Science
Sadly for vegetarians, gelatine is by far the best setting agent for jellies; vegetarian alternatives such as the seaweed-based agar agar tend to lack the smooth texture, and clarity, of their animal-based counterpart.
Gelatine is extracted from a tough fibrous protein called collagen, found in the connective tissues of an animal's skin, bones and muscles; the richest sources are those cuts which require long slow cooking: feet, ears, tails and other good things. It's made up of a sequence of tightly wound chains of amino acids – when these are heated above body temperature they relax and disperse, and will not begin to reassemble until the temperature falls again.
As the mixture cools, surrounded by a stiffening mesh of gelatine molecules, the liquid in your jelly mix can no longer flow freely, and thus becomes a shaky kind of solid. This sensitivity to temperature helps explain why gelatine must not be heated to boiling point – above a certain temperature, its amino acids start to break down, and your jelly will not set.
Interestingly, jellies set firmer if allowed to cool slowly, rather than having their temperature brought down more rapidly in the fridge.
### Jelly basics
If the idea of ditching the familiar fruity cubes leaves you feeling quivery, it's helpful to remember the very simple principle behind all types of jellies, blancmanges and panna cottas:
_Liquid + setting agent = jelly_
There's no more mystery to it than that.
The only even vaguely tricky business is working out how much setting agent you need to achieve the consistency you're after; too little and your jelly won't hold together, too much and though it will look impressive, it's likely to be unpalatably firm and rubbery on the spoon.
Helpfully, however, _one leaf of gelatine will set about 100ml liquid_ firm enough to turn out on to a plate – if you'd prefer a softer consistency, and plan to serve it in whatever you've set it in (generally a glass dish, for best effect), then you can get away with less.
Dairy-based jellies will also require less, thanks to the proteins in the milk, as will jellies made with thicker liquids, such as fruit purées. Conversely, it's wise to use more gelatine than specified if the jelly is to be served in a very warm environment, or you're attempting an ambitiously large edifice, or one containing large amounts of alcohol.
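The rule of thumb above is simple arithmetic, and can be sketched as a little calculator if you like. This is a hedged illustration only: the one-leaf-per-100ml base ratio and the directions of the adjustments come from the text, but the function name and the exact multipliers for soft-set, dairy and alcoholic jellies are my own illustrative assumptions, not the author's figures.

```python
import math

def gelatine_leaves(volume_ml, soft_set=False, dairy=False, alcoholic=False):
    """Estimate leaf gelatine needed, using the 1-leaf-per-100ml rule of thumb.

    Base ratio is from the text; the 0.75 reduction for a softer set or a
    dairy base, and the 1.25 increase for alcoholic jellies, are illustrative
    guesses only.
    """
    leaves = volume_ml / 100          # base: one leaf sets ~100ml firmly
    if soft_set or dairy:
        leaves *= 0.75                # softer set / milk proteins aid setting
    if alcoholic:
        leaves *= 1.25                # alcohol weakens the set
    return math.ceil(leaves * 2) / 2  # round up to the nearest half leaf

# A firm 600ml jelly works out at 6 leaves on the base ratio.
```

In practice, of course, you would round to whatever your packet of leaves makes convenient, and err on the generous side for a large moulded jelly.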
I find leaf gelatine much easier to work with than the powdered stuff (which has an off-putting whiff of hoof about it); you'll find it in the baking section of most supermarkets. It requires soaking in water before use – make sure you use enough that the sheets don't stick together, and don't leave them soaking for much longer than they take to soften, or they may disintegrate. Squeeze out well before adding to your liquid, which must be above 37°C (body temperature) for the gelatine to melt, but below a simmer for it to work effectively.
If using vegetarian alternatives, follow the directions on the packet for setting the appropriate amount of liquid.
See also: Pandan and coconut burnt creams (here).
For how to unmould a jelly see here.
## Tricolore jellies
##### makes 6
##### _For the tomato jelly:_
600g ripe tomatoes, halved or quartered
1 small garlic clove, crushed
100ml tomato juice
¼ teaspoon sugar
2½ gelatine leaves
Neutral oil, to grease
##### _For the mozzarella panna cotta:_
1 burrata (you won't need all of it)
100ml whole milk
1 gelatine leaf
##### _For the basil jelly:_
1 lemon
40g fresh basil leaves, plus a few extra
1½ gelatine leaves
Familiar flavours – the tang of tomato, the creaminess of mozzarella, the sweet pepperiness of basil – cast in a new and unexpected form: miniature jellies.
You can make them a couple of days before if you like, which is always handy – and though it looks like you've gone to great effort, the work involved is both minimal and basic.
1. For the tomato jelly, whiz up the tomatoes and garlic with the juice, the sugar and a pinch of salt and pepper in a food processor until coarsely chopped. Line a sieve with muslin or a clean tea towel, set it over a large bowl and pour in the tomatoes, then gather up the sides of the material over the tomatoes and secure the top of the bundle with an elastic band. Suspend this above the bowl (I do this from the arm of my stand mixer, but any hook or cupboard handle will do) and leave to drain for at least 3 hours, squeezing the bag occasionally to help it along.
2. Once you've drained off most of the tomato liquid (you should have about 300ml – if it's significantly less, top up with tomato juice; if more, make the excess into a Bloody Mary shot), soak the gelatine leaves in a bowl of cold water until soft and scrunchable. Meanwhile, bring the juice to a simmer in a small pan. Squeeze out the gelatine and stir into the warm juice until dissolved.
3. Grease six small dariole moulds, or small glass dishes if you don't want to turn them out, and divide the tomato mixture between them. Chill until set.
4. When the tomato jelly is beginning to set, measure out 75g of the burrata, making sure you get a good lot of the cream inside. Finely chop the solid skin. Put into a small pan with the milk and a generous pinch of salt and heat gently, stirring once warm to encourage the cheese to melt. Meanwhile, soak the gelatine in cold water until soft. Once the dairy mixture is smoothish, squeeze out the gelatine and stir into the milk, then allow to cool to warm room temperature, stirring occasionally. Pour over the back of a spoon on top of the set tomato jelly (to stop them merging) and refrigerate.
5. For the basil jelly, bring a small pan of salted water to the boil and prepare a large bowl of iced water with the juice of the lemon squeezed into it. Blanch the basil for 15 seconds, then scoop out into the iced water. Reserve 180ml of the blanching water, and allow it to cool slightly. Meanwhile, soak the gelatine as before. Stir it into the warm blanching water and allow to cool, stirring occasionally, then drain and roughly chop or coarsely purée the basil and stir it into the gelatine mixture with a pinch of salt. Pour on top of the panna cotta and refrigerate until set.
6. Turn out on to plates if you're feeling brave, or serve in the dishes, with a basil leaf on top, a drizzle of extra virgin olive oil and some toasted ciabatta.
## Goat's cheese custards with honey-glazed hazelnuts and black olive toasts
##### makes 4
150ml single cream
150ml whole milk
1 sprig of rosemary, bruised with the back of a knife
85g strong hard goat's cheese, finely grated
2 egg yolks
Butter, to grease
##### _For the honey-glazed hazelnuts:_
15g butter
1 tablespoon hot water
1 tablespoon honey
100g hazelnuts
½ teaspoon salt
¼ teaspoon sugar
Leaves from 3 sprigs of rosemary
##### _For the black olive toasts:_
150g stoned black olives
2 tablespoons capers, rinsed if salted
4 anchovies, roughly chopped
Juice of ½ a lemon
3 tablespoons extra virgin olive oil
4 thin slices of bread
Not strictly a jelly, but wobbly enough that they qualify for this chapter anyway. Inspired by Rowley Leigh's dreamy Parmesan custards, but with a southern French twist, they pair piquant goat's cheese with sweet, honeyed nuts. The tapenade toasts aren't essential, but as with Leigh's anchovy variety, the two salty flavours work surprisingly well together.
An ideal dinner starter, all the elements can be prepared well in advance, and the custards, which are lovely both warm and cold, grilled just before serving.
1. Heat the cream, milk and rosemary together in a small pan and whisk in 75g (or all but a tablespoon) of the finely grated cheese until melted and smooth. Take off the heat and allow to cool.
2. Meanwhile, turn your attention to the hazelnuts. Melt the butter in a small pan over a medium-high heat, and whisk the hot water and honey together. Add the nuts and cook until they start to colour, then add the salt and sugar. Cook until the liquid has mostly evaporated. Meanwhile, line a baking tray with foil or greaseproof paper. Stir the rosemary leaves into the glazed nuts and spoon the nuts on to the tray, spreading them out well. Cool.
3. Heat the oven to 150°C/fan 130°C/gas 2 and boil a kettle. Fish out the rosemary from the cheese mixture and discard, then pass through a fine sieve into a jug and whisk in the egg yolks. Grease four small ramekins and put them into a baking tin. Divide the mixture between them and put the tin into the oven. Pour in the boiling water to come halfway up the ramekins. Bake for 20–30 minutes, until set on top, but still wobbly in the middle. Allow to cool to warm, or completely if you prefer.
4. Meanwhile, whiz the olives, capers and anchovies together until smoothish, then whisk in the lemon juice and oil. Taste and adjust as necessary.
5. When ready to serve, heat the grill. Scatter the remaining cheese over the top of the custards and grill until golden and bubbling (you could also use a blowtorch).
6. Toast the bread until crisp. Serve the custards with a few roughly chopped hazelnuts and rosemary needles scattered on top, and the toast, thinly spread with the olive paste and cut into soldiers.
## Jelly cherry jubilee
##### serves a large party (10–12)
##### _For the cherry jelly:_
1 litre unsweetened cherry juice (see intro)
8 gelatine leaves
Neutral oil, to grease
200g cherries, stoned and halved
##### _For the kirsch cream:_
275ml double cream
850ml whole milk
8 tablespoons white sugar
8 gelatine leaves
150ml kirsch
Should you ever have had your fill of cherries straight from the paper bag (competitive pit-spitting entirely optional), try this impressive stripy jelly, based on the flavours of a flambéd cherry pudding created by the great French chef Escoffier for Queen Victoria's Diamond Jubilee in 1897, but far easier to make.
Unsweetened cherry juice can be found at health food shops; if you can't lay your hands on any _kirschwasser_, substitute any other clear, unsweetened brandy instead. (Cherry brandy, despite the name, is a sugary liqueur which won't do at all.)
1. Heat the cherry juice in a pan until quite warm, but not hot. Meanwhile, soak the gelatine leaves for the cherry jelly in cold water. Once the juice is warm, take off the heat, squeeze out the gelatine thoroughly and whisk into the pan, then set aside to cool.
2. Heat the cream and milk in a new pan until quite warm, but not hot, whisking in the sugar until dissolved. Meanwhile, soak the remaining gelatine leaves in cold water. Once the milk is warm, take off the heat, stir in the kirsch, squeeze out the gelatine thoroughly and whisk into the pan, then set aside to cool, whisking each jelly mixture regularly as it cools.
3. Grease a 2 litre mould (a bundt tin makes for an impressive shape), then arrange a ring of cherries around the base. Gently pour in a layer of the cooled cherry jelly (you're aiming for three layers of each, but it will depend on the shape of your mould), then put into the fridge to set.
4. Once set, top with a layer of the milk jelly, pouring it on to a spoon angled just above the set jelly so the pressure doesn't disturb the surface, followed by a layer of the cherries and cherry jelly and so on. Cover and chill until completely set; I like to leave mine overnight if possible.
5. Dip the mould briefly into hand-hot water, then invert on to a plate.
## Gooseberry and buttermilk pots
##### makes 6
##### _For the gooseberry jelly:_
225g gooseberries
55g caster sugar
2 gelatine leaves
2 tablespoons elderflower cordial
##### _For the panna cotta:_
100ml double cream
40g caster sugar
1½ gelatine leaves
250ml buttermilk
The sadly under-appreciated gooseberry is one of my favourite summer flavours – so wonderfully green and tart, it's a shame we seem to have fallen out of love with them as a nation, because we grow the finest examples in the world, particularly in the northern wilds of Scotland, where these prickly, gnarled bushes are one of the few things to withstand the punishing wind.
Their natural acidity makes them the perfect candidate for rich creamy flavours – gooseberry fool is the most obvious example. This is but a slightly more elegant take on that most excellent of desserts, perfect for when you'd like to impress. If your gooseberries are very sweet and ripe, the kind you can just about eat raw, then you may want to cook them for slightly less time, so they keep more of their shape and colour.
1. Make the jelly first. Top and tail the gooseberries and put into a small pan with the sugar. Cover and heat gently for about 15 minutes, until the fruit is soft, but still mostly retains its shape.
2. Allow to cool slightly. Meanwhile, soak the gelatine for the gooseberry jelly in cold water for a few minutes until soft, then wring out and stir into the slightly cooled fruit, along with the cordial and 175ml of water. Mix well and pour into six glass ramekins or small glasses. Put into the fridge for a couple of hours to set.
3. Once the jellies are beginning to firm up, put the cream into a small pan with the sugar over a low heat and stir to dissolve. Bring to a simmer, then take off the heat and allow to cool slightly. Meanwhile, soak the remaining gelatine in cold water for a few minutes until soft, then wring out and stir into the cooling cream.
4. Pour in the buttermilk and stir well, then divide between the glasses and chill until set.
## Caribbean milk punch jelly
##### serves 8–10
150ml condensed milk
150ml whole milk
9 gelatine leaves
100g soft light brown sugar
600ml stout or porter, preferably chocolate or milk stout
1½ tablespoons cocoa powder
Neutral oil, to grease
A whole nutmeg, to grate
A rich brown, almost black underneath, with a creamy white top, this is a lovely thing to bring in, gently wobbling, at the end of a meal, and a killer choice for St Patrick's Day.
1. Put the condensed milk and whole milk into a small pan and heat until quite warm, but not hot. Meanwhile, soak two of the gelatine leaves in cold water. Once the milks are warm, take off the heat, squeeze out the gelatine thoroughly and whisk into the pan, then set aside to cool.
2. Meanwhile, dissolve the sugar in 100ml of water in a medium pan, then bring to the boil. Simmer for about 5 minutes, until syrupy, then take off the heat and add the beer and cocoa powder, stirring to dissolve the cocoa. Put back on the heat and warm through. Soak the remaining gelatine as before and whisk in when the beer is warm. Take off the heat and leave to cool, stirring both pans regularly.
3. Lightly grease a 1 litre jelly mould. Pour the cooled condensed milk jelly into it and put into the fridge to set, remembering to keep stirring the beer jelly.
4. Once the milk jelly has set, gently pour the beer jelly on to a spoon held just over its surface (this helps to stop the two merging), then cover and chill for at least 8 hours, until set. Turn out and top with freshly grated nutmeg.
### How to unmould a jelly
Metal or plastic moulds are easier to work with than glass ones, but I still like to coat them with a thin film of neutral oil before filling them to make unmoulding easier. If you do this, make sure the jelly is cool before pouring it into the mould, or this layer of fat will be melted.
## Almond and rosewater blancmange
##### serves 8–10
1 litre unsweetened almond milk
175g–200g white sugar (depending on sweetness of tooth)
3 tablespoons rice flour
8 gelatine leaves
¼ teaspoon rosewater, or to taste
¼ teaspoon almond essence, or to taste
Neutral oil, to grease
Blanched almonds, crystallized rose petals, edible fresh flowers or coloured sugar, to decorate
Blancmange, still a trembly feature of children's birthday parties in the 1980s, has disappeared entirely from our diet, unmarked and unmourned, which seems a shame after nearly a millennium. To be fair, those stout floury rabbits of my youth, their lurid colours making a mockery of the 'white food' name, were but a distant relation of the original, delicately spiced courtly dish, which was made from rice, almonds and finely minced chicken.
This version takes it back to its dairy-free medieval roots, using almond milk and rice flour as a thickener. If you'd like to keep it vegan, replace the gelatine with agar agar or carragheen according to packet instructions for setting a litre of liquid.
1. Put the almond milk and sugar into a medium pan and bring to a simmer, stirring to dissolve the sugar. As it heats, put the rice flour into a large heatproof bowl and whisk in a little of the warm milk to make a smooth paste, then pour the simmering milk on to it, whisking to combine well.
2. Pour back into the pan and stir until just thickened sufficiently that a finger down the back of a wooden spoon leaves a clear line. Take off the heat and allow to cool until warm, rather than hot.
3. Meanwhile, soak the gelatine leaves in cold water until soft, then squeeze out any excess liquid. Once the mixture has cooled a little, add the rosewater and almond essence to taste (brands vary greatly in strength, so this is a minimum amount), followed by the squeezed-out gelatine, stirring vigorously to dissolve.
4. Grease a 1 litre mould and pour in the blancmange mixture. Cover and chill until set – this will take several hours. Once you're sure it's solid, dip the mould briefly into a bowl of hot water, then upend on a plate. Decorate with your choice of garnish, then serve.
A taste for rhubarb has always seemed a peculiarly British peccadillo – eye-wateringly tart and uncompromisingly stringy, with the look of overgrown celery, it should be a hard one to love, and yet love it we do.
We're not alone – though it still baffles the French, rhubarb is a popular addition to Norwegian baking, the key ingredient in one of Italy's beloved bittersweet aperitifs, rabarbaro, and is made into cold soup in Poland, while in Afghanistan they dip it in salt and eat it raw.
And this plant, forever associated for me with damp hockey boots and curdled custard, has a surprisingly interesting history. A native of north and central Asia, it's been valued for its medicinal qualities for millennia, and was imported into ancient Greece as a laxative (though don't worry – you have to eat quite a bit to feel the effects).
The seventeenth-century English herbalist Nicholas Culpeper claimed, somewhat hopefully perhaps, that it 'heals jaundice . . . provokes urine . . . is very effective for reins [gonorrhoea] and helps gout, sciatica . . . toothache . . . [kidney] stones and . . . dimness of sight' – little wonder that rhubarb powder was once worth more than opium. It wasn't to become popular as a foodstuff for another century, and it's no coincidence that its fortunes changed around the same time as Britain's burgeoning empire made sugar an affordable luxury for the first time.
The green stalks the girth of a terrier's leg typically found in British gardens bear little resemblance to the slender pink stems that you'll spot in greengrocers from Christmas to Easter. Forced rhubarb, a practice that always sounds faintly cruel, is grown in darkness, so the plants shoot straight up in their desperate quest for light, giving them a tender texture and delicate flavour.
Visiting one of the handful of growers still operating in the once mighty Rhubarb Triangle between Wakefield, Leeds and Bradford, I found stepping into the warm, dark sheds a distinctly unsettling experience – the plants, with their sickly yellow leaves and neon stalks thrusting vainly towards the ceiling, can put on five centimetres a day, a rate of growth that had a whiff of the triffids to me.
The summer stuff may not be as pretty, but it's criminally easy to grow – even a plant neglected at the bottom of the garden will provide you with more puddings than you can probably handle.
Season: forced indoor rhubarb, December to March; outdoor, April to September.
### Cooking
That said, older rhubarb does need a little more care taken with its preparation, as those sturdy stalks can be stringy. Like celery, however, this is a problem easily solved with a peeler.
Rhubarb's high water content means it needs little in the way of liquid to cook – indeed, you're more likely to be troubled by the excess of pale pink juice that has a tendency to leave the most robust pastry soggy at the knees, and curdle your carefully made custard – so it's a great candidate for roasting, which concentrates the flavour. (The juices can come in useful though; Nigel Slater tops them up with sparkling water for a pretty pink drink, Nigella Lawson makes them into jelly, and I've gone for a deceptively innocent-looking gin-soaked granita.)
Though it's most often eaten as a dessert in this country, rhubarb's piercing astringency makes it an excellent accompaniment to rich savoury dishes too; classically paired with oily fish like mackerel, it will also cut through the natural fattiness of meats like pork, duck and lamb, and makes a killer partner for cheese.
### Rhubarb loves ...
## Mackerel and samphire tartare with pickled rhubarb
##### serves 4 as a substantial starter
½ a small red onion, very finely chopped
4 very fresh mackerel fillets, skinned and boned
Juice of 2 limes
A small bunch of coriander, finely chopped
A dash of olive or rapeseed oil
A handful of samphire or 2 tablespoons capers
##### _For the pickled rhubarb:_
125g caster sugar
2 tablespoons coarse salt
½ teaspoon yellow mustard seeds
½ teaspoon peppercorns
1 dried red chilli
120ml cider vinegar
4cm piece of slim root ginger, peeled and thinly sliced
150g rhubarb, destringed if large, and thinly sliced
Apart from barbecued on the beach it was landed on, my favourite way to eat mackerel is raw; when super fresh, its oily flesh is almost creamily rich, making it (in my opinion) a better choice than the more usual salmon for the tartare treatment, though salmon makes a good substitute here if you prefer. Serve with a salad and some thin rye bread toasts.
1. Make the rhubarb pickle at least 24 hours before you want to serve it. Combine all the ingredients except the ginger and rhubarb in a small pan and bring to the boil, stirring to dissolve the salt and sugar. Add the ginger. Meanwhile, pack the rhubarb into a medium jar, then pour the hot pickling liquor over it to cover. Allow to cool slightly, then tighten the lid.
2. When you're ready to eat, soak the finely chopped red onion in iced water for 5 minutes while you prepare the rest of the tartare. Cut the mackerel into small dice, then squeeze over the lime and add the coriander and a dash of oil. Season and toss together with the well-drained onion and capers, if using them instead of samphire.
3. Scatter over the samphire fronds, if using, and add a generous helping of pickled rhubarb. Taste to check the seasoning, then serve immediately.
## Pork rillettes with rhubarb chutney
##### serves 6
##### _For the rillettes:_
500g pork belly, skin removed
450g pork shoulder
2 teaspoons salt
1 bay leaf
2 garlic cloves, crushed with the back of a knife
##### _For the rhubarb chutney:_
100g soft light brown sugar
100ml cider vinegar
1 red onion, finely chopped
1 unwaxed orange, zest only
½ teaspoon salt
½ teaspoon fennel seeds
1 star anise
½ teaspoon Sichuan peppercorns
500g rhubarb, roughly chopped
Meltingly soft meat spread on crisp toast with a lightly spiced, sharp rhubarb chutney to cut through all that rich, creamy fat – an indulgent lunch indeed, but in small quantities this is also an excellent make-ahead starter for a dinner party.
Originally conceived as a method of preservation, if you seal them well, the rillettes should keep for a good few months, as will the chutney; the flavour certainly improves after 3 or 4 days if you can wait that long.
1. To make the rillettes, heat the oven to 170°C/fan 150°C/gas 3. Cut the pork into rough 4cm chunks and put into an ovenproof casserole dish with the other ingredients. Add about 600ml of cold water to just barely cover, then put on the heat and bring to a simmer. Cover and bake for 3–4 hours, until most of the fat has melted, and the meat is falling apart.
2. Meanwhile, for the chutney, put all the ingredients apart from the rhubarb into a large pan, bring to the boil, stirring to dissolve the sugar and salt, and boil for 5 minutes. Add the rhubarb, bring to the boil again, then turn down the heat and simmer until the rhubarb has broken down and the mixture is thick and jammy. Spoon into clean jars.
3. When the pork is ready, place a sieve over a large bowl and separate the solids from the liquid. Decant the liquid back into the pan and simmer, adding any remaining whole pieces of fat, then once this has melted, pour the liquid back into the same bowl, or a gravy separator if you have one. Meanwhile, shred the meat in the sieve into strands with a fork, or your fingers.
4. Once the fat has risen to the surface of the liquid, spoon it off into a fresh bowl, and pour the brown meaty juices beneath on to the meat. Check the seasoning of the meat and pack into ramekins or a jar or pot, then, once cool, pour the fat on top (you may need to reheat it if it has solidified in the meantime). Refrigerate until ready to eat (if you want to keep it for longer than a few days, buy some lard, melt, and use to create a really solid seal on top of the ramekins or jars, which should mean it keeps for up to 4 months in the fridge).
5. Serve the rillettes at room temperature, with the rhubarb chutney and some crisp toast.
## Persian lamb and rhubarb stew
##### serves 4
3 tablespoons olive oil
1 large onion, finely sliced
½ teaspoon ground turmeric
500g boned shoulder of lamb, cut into bite-sized chunks
A large bunch of parsley, roughly chopped
A small bunch of mint, leaves only, roughly chopped
A generous pinch of saffron
4 stalks of rhubarb, cut into 4cm lengths
2 tablespoons honey, or to taste
2 tablespoons flaked almonds, toasted, to serve (optional)
A wonderfully vivid green dish where the sharpness of the rhubarb makes the perfect foil for the rich slow-cooked lamb, though you could substitute beef shin, or even chicken thighs if you prefer. Serve with basmati rice to soak up the turmeric yellow sauce, and a fresh herb salad.
1. Heat 2 tablespoons of oil in a casserole dish, and soften the onion with a pinch of salt. When golden, add the turmeric and cook for a couple of minutes, then turn up the heat and add the lamb, in batches if necessary, and brown, stirring so the onions don't burn.
2. Pour in 500ml of water and scrape the bottom, then bring to a simmer, cover, turn down the heat and simmer for 1 hour.
3. Heat the remaining oil in a frying pan and fry the chopped herbs for a couple of minutes until they wilt. Add to the casserole along with the saffron and cook for another 15 minutes.
4. Add the rhubarb, cover and cook for about 15 minutes, until broken down into the sauce. Stir in the honey and taste for seasoning – depending on your rhubarb, you may also want to add a little more honey if you find it too sour. Sprinkle with the almonds just before serving, if using.
## Rhubarb Bircher muesli
##### makes 6 servings
550g rhubarb, cut into 4cm batons
200ml apple juice
4 tablespoons honey
125g jumbo rolled oats
150g natural yoghurt
30g shelled pistachios, toasted and roughly chopped
30g almonds, toasted and roughly chopped
More usually made with grated apples (there's a classic recipe in _Perfect Host_ ), this is the spring and summer equivalent.
For the sake of aesthetics, you can keep a few chunks of rhubarb back to top the dishes, but when I make this for myself at home, I just stir it all together. Muesli, after all, does mean 'mash'. This keeps well in the fridge for several days.
1. Put the rhubarb into a small pan with the apple juice and 3 tablespoons of honey. Heat gently, stirring to dissolve the honey, until the rhubarb has softened and begun to break down – some pieces should still be intact.
2. Meanwhile, toast the oats in a hot dry frying pan until fragrant. Allow the rhubarb to cool in the syrup, then tip into a sieve set over a bowl to collect the juices.
3. Put the oats into a large-ish bowl and tip over the rhubarb juices. Leave to soak overnight, then stir in the yoghurt, a pinch of salt, half the nuts and the rhubarb, reserving a few whole chunks to top. Refrigerate until ready to serve.
4. Serve with a couple of chunks of rhubarb in the middle, scattered with the remaining nuts and a drizzle of honey.
## Rhubarb and marmalade sticky pudding
##### serves 6
1 stick of rhubarb, cut into 4cm lengths
6 tablespoons marmalade with peel
2 tablespoons golden syrup
150g softened butter, plus extra to grease
120g soft light brown sugar
2 eggs, beaten
100g spelt flour
50g plain flour
2 teaspoons baking powder
90ml milk
Rhubarb, marmalade and sticky steamed pudding; you'd be hard pressed to find a more British pudding than this, unless you served it with Bird's custard. Which, of course, you absolutely must.
1. Put the rhubarb, 3 tablespoons of marmalade and the golden syrup into a small pan and heat gently for about 5 minutes, until the marmalade and syrup have melted together and the rhubarb has begun to soften. Take off the heat and set aside.
2. Grease a 900ml pudding basin with butter, then spoon the contents of the pan into the base.
3. Beat together the butter and sugar with a pinch of salt until fluffy, then mix in the eggs. Fold in the flours and baking powder until well combined, then stir in the rest of the marmalade and taste – depending on what sort you use, you might want to add a little more for a stronger flavour. Add just enough milk so that the mixture drops easily from a spoon, then spoon it into the basin, leaving a couple of centimetres' gap at the top for the pudding to rise.
4. If your basin lacks a lid, cover with a pleated piece of parchment paper (again so the pudding has room to rise), then secure with a double layer of foil, and make a string handle if you'd like to make life easier for yourself later.
5. Put into a saucepan, pour in enough boiling water to come halfway up the basin, then bring back to the boil, cover and steam for 2 hours, topping up the water regularly.
6. Uncover, run a skewer round the edge of the basin and turn out on to a serving dish.
## Rhubarb and custard trifle with an amaretto syllabub
##### serves 8–10
800g rhubarb
5 heaped tablespoons caster sugar
10 boudoir biscuits
12 amaretti biscuits
5 tablespoons amaretto liqueur
##### _For the custard:_
200ml milk
400ml double cream
1 vanilla pod, slit in half and seeds scraped out
6 egg yolks
3 tablespoons caster sugar
2 tablespoons cornflour
##### _For the syllabub:_
150ml amaretto liqueur
Juice of 1 lemon
1 tablespoon soft brown sugar
250ml double cream
I've never met a trifle I didn't like; this one is inspired by that classic school dinner combination, rhubarb and custard, razzed up with a generous slug of almond. The natural sharpness of rhubarb acts as a counterpoint to the sweetness of the other layers.
1. Heat the oven to 200°C/fan 180°C/gas 6. Destring the rhubarb if large and old (no need if slim) and cut into 4cm pieces, or smaller if thick. Arrange in a roasting tin with the sugar and 4 tablespoons of water, then cover with foil and bake for 25–45 minutes, depending on the thickness of your rhubarb, until soft but still holding its shape. It will be quite sharp, but don't worry, the custard will see to that.
2. Meanwhile, make the custard. Put the milk and cream into a heavy-based pan along with the vanilla pod and seeds, and heat gently to just below a simmer. Beat the yolks, sugar and cornflour together in a large heatproof bowl. Pour the simmering milk and cream into this bowl, beating all the time, then turn the heat down and pour the custard back into the pan.
3. Stir until it's thick enough to coat the back of a wooden spoon, being careful it doesn't overheat and turn into scrambled eggs (I often fill the sink a quarter full of cold water when making custard just in case – if it threatens to turn, plunge the pan into the sink and stir vigorously to see if it can be rescued). Allow to cool.
4. Line the base of a glass bowl with boudoir biscuits (you may not need them all), then crumble over half the amaretti biscuits and sprinkle with the 5 tablespoons of amaretto. Carefully spoon the rhubarb on top, along with its juices, making sure it looks pretty around the edge. Pour the cooled custard on top, and refrigerate, covered, until set.
5. To make the syllabub, whisk together the amaretto, lemon juice and sugar until the last has dissolved. Beat in the cream until it forms soft peaks, then spoon gently on top of the trifle. Decorate with the remaining amaretti, crushed, just before serving.
## Rhubarb gin granita
##### serves 8
225g caster sugar
450g rhubarb, roughly chopped
50ml lemon juice
1 tablespoon rosewater (optional)
250ml gin
Sweet, pink and happily wicked, this is an excellent way to round off a heavy meal. As a bonus, it also produces enough stewed rhubarb for several breakfasts. Also great served in little shot glasses, with a dash more gin.
1. Put the sugar into a large pan with 950ml of water and heat, stirring, until dissolved. Add the rhubarb and the lemon juice and cook until the rhubarb has broken down completely. Allow to cool, then strain the pink juice into a large flat dish and set the stewed pulp aside for another use: it's pretty good with yoghurt or cereal.
2. Stir in half the rosewater, if using, and the gin and taste, adding more rosewater if you think it needs it (brands vary greatly in strength). Freeze for 1½ hours, until beginning to solidify, then run a fork through it, stirring the frozen bits from the edges back into the middle and breaking it all up.
3. Repeat every hour or so (the exact timings don't matter too much) until it's fully frozen, then cover.
This chapter is dedicated to an ingredient you can't buy in the shops, or keep in the cupboard – indeed, you can't even hold it in your hand. There are other worthy candidates for the letter S, of course: shellfish and saffron, sausages and sandwiches (and indeed sausage sandwiches), but none of them has the same strange power to beguile. If I see the word 'smoke' on a menu, I want it.
The flavour of smoke is at once simple and impossible to describe; it tastes, of course, as it smells – of charred wood and bitter bonfires, both intensely savoury and deeply primeval. Our attraction to the pungent whiff of smoke is as ancient as our fascination with fire.
It's similarly universal too: the smoky flavours of the Merkén pepper used by the Mapuche Indians of Chile and the paperbark wrappings of the Australian Aborigines, the smoked mutton of Iceland and the smoked shrimp of West Africa – the whole world loves a smoke.
### Smoke, the great preserver
We have been smoking food in these islands for thousands of years, and fish preserved in this way became particularly important in medieval Britain, when the numerous fast days dictated by the Church took meat off the menu for the few that could afford it in the first place.
The same technique was used to preserve meat; hams and bacons, yes, but also cold smoked mutton joints in upland areas, beef, duck and goat. (Hot-smoking, where the food is exposed to heat as well as smoke, is far less effective at stopping spoilage, and is thus done only for flavour – and very nice the results are too.)
Almost anything can be smoked if you have a yen to do so; the only food I've tried that should have been left well alone was a smoked Stilton, which proved umami overload. But Cheddar and vodka, butter and garlic, almonds and apricots: all fair game.
### Smoke without fire
You don't need actual smoke to get the flavour into your food though; here are a few slightly quicker alternatives:
— Smoked paprika: An easy way to add a touch of char to dishes, this mild, earthy paprika is made from peppers that have been dried, smoked and then ground into a powder. I use it for everything from tomato soup to sprinkling over a roast chicken.
— Chipotle chilli: These smoked jalapeño chillies, available whole, flaked or as a very versatile paste, are a hotter, richer alternative to smoked paprika, and an essential ingredient in chilli con carne and the like, as well as on eggs the morning after the night before.
— Liquid smoke: More popular in the States, though available online here, this smoke-infused liquid made by passing smoke through water is held in disdain by those with the space, time and equipment not to call on its services, but has its benefits for the rest of us. Though it's never going to transform a herring into a kipper, a drop or two added to some slow-roasted pulled pork, beans, burgers or a bourbon cocktail is quite transformative. Critics often carp on about its carcinogenic qualities, while ignoring the considerable health risks that come with too much smoke of any kind.
— Tea powder: Grind lapsang souchong tea to a powder and use as a smoky seasoning. It works particularly well on poultry, game and fish, but is also nice on stone fruits and creamy summery puddings. You can also use lapsang to hot-smoke with.
— Toast powder: See Burnt toast powder here.
— Charcoal infusing: See Smoky black dal with eggs here.
See also: Sicilian almond and tomato pesto (here), Blackened jalapeño and avocado slaw (here), Mexican chilli chocolate mousse (here), Kentucky pulled lamb (here).
## Charred squash soup with zhoug and toasted pumpkin seeds
##### serves 4
1 medium butternut squash
Olive oil, to grease
40g pumpkin seeds (you can use some of the squash seeds if you like)
A squeeze of lemon juice
750ml chicken or vegetable stock
##### _For the zhoug:_
A large bunch of coriander, roughly chopped
A small bunch of flat-leaf parsley, roughly chopped
1–3 small green chillies, depending on heat and tolerance, deseeded and chopped
1 garlic clove, crushed
½ teaspoon ground cumin
½ teaspoon salt
10 tablespoons extra virgin olive oil
Almost any vegetable is improved by baking – the heat concentrates and intensifies the flavours, and if you take things a step further, as here, you get a hit of smokiness too.
Zhoug is a hot, aromatic Yemeni sauce which can be drizzled over the soup much like a pesto, but which is also fabulous on everything from potato salad to grilled fish, boiled eggs or a humble hunk of bread, so though the recipe below makes more than you'll probably need, you should have no problem finding other homes for it.
1. Heat the oven to 240°C/fan 220°C/gas 9. Peel the squash and cut into chunks about 5cm across, discarding the seeds and fibrous strands around them. Put on a lightly greased baking tray and toss with olive oil. Season well. Bake until the squash is soft and charred – about 45 minutes.
2. Meanwhile, make the zhoug by whizzing the herbs, chillies, garlic, cumin and salt to a purée in a food processor, then drizzling in oil to make a loose paste. Taste for seasoning.
3. When the squash is cooked, remove from the oven. Toss the pumpkin seeds on a baking tray with a little oil, salt and lemon juice and bake for about 3 minutes, until lightly toasted, then set aside and turn the oven off.
4. Put the squash into a pan, add a little stock and purée with a stick blender, adding more stock until you reach your desired consistency. Reheat and season to taste.
5. Serve drizzled with zhoug, and with pumpkin seeds scattered across the top.
## Muhammara
##### serves 6–8
6 red peppers
150g walnut pieces
4 tablespoons white breadcrumbs
2 garlic cloves, crushed
2–4 tablespoons pomegranate molasses
2 tablespoons lemon juice
1 teaspoon smoked paprika
1 teaspoon salt
This smoky sweet and sour Syrian red pepper dip is utterly beguiling – it really knocks hummus, or even my beloved (and similarly smoky) babaganoush, into a cocked hat for flavour. Great with toasted flatbreads or crudités.
1. Heat the oven to 240°C/fan 220°C/gas 9. Pierce the peppers with a skewer in a couple of places, then put on a greased baking tray and roast until blackened and collapsed. Allow to cool, then peel, deseed and roughly chop.
2. Toast the walnuts in a hot dry frying pan, then allow to cool slightly and put in a food processor. Whiz until coarsely ground, then tip out.
3. Put the peppers, breadcrumbs and garlic into the processor and whiz to a purée, then add the molasses, lemon juice, paprika and salt along with the walnuts. Whiz to combine, then taste and adjust the seasoning if necessary.
## Smoked cod's roe and beetroot dip
##### serves 6
50g stale crustless white bread
Juice of ½ a lemon
170g smoked cod's roe
1 garlic clove, crushed
75g mascarpone cheese
50g Greek yoghurt
100g cooked and peeled beetroot (1 smallish one)
1 teaspoon grated or creamed horseradish (to taste)
This started off as a taramasalata, but after I'd gone beyond the slightly heretical, though at least Greek, yoghurt and added a soft Italian cheese and the very northern European horseradish, I decided it was a bit of a case of the philosopher's axe (or, at least, his dip). Authentic or not, it's pretty delicious. Roe varies in smokiness, so try it first, and if it threatens to overpower, soak it in cold water for an hour or so to soften the flavour.
1. Put the bread into a shallow bowl and squeeze over the lemon juice. Leave to soften while you prepare the rest of the ingredients.
2. Scoop the roe from the skin encasing it and put in a food processor or a large bowl (if you have a hand-held blender) along with the garlic, cheese and yoghurt. Roughly chop the beetroot, tear the bread into pieces and add both to the bowl. Whiz until smooth.
3. Stir in the horseradish and taste for seasoning, adding a little more lemon juice if necessary. Great with toasted rye bread, or crudités like radishes, carrot and cucumber batons, or cool baby new potatoes with dill.
## Kentucky pulled lamb
##### serves 6
1.5kg lamb shoulder, bone in, at room temperature
1 tablespoon salt
1 tablespoon dark sugar
1 tablespoon smoked paprika
1 teaspoon liquid smoke (optional)
##### _For the sauce (mop):_
240ml water
2 tablespoons Worcestershire sauce
2 tablespoons cider vinegar
1 teaspoon dark brown sugar
¼ teaspoon ground allspice
1 teaspoon lemon juice
Barbecue in the States is a gloriously varied regional art, but they only seem to export the edited highlights. Pulled pork and burnt ends are, of course, undeniably delicious, but they're not the be-all-and-end-all, and there's no excuse for ignoring the Bluegrass State's particular speciality: slow-smoked mutton served with a thin, black, outrageously tangy sauce, bread, and often a thin spicy meat stew known as burgoo (got to love the idea of garnishing meat with more meat).
Mutton is still annoyingly hard to get hold of here, but if you can find a shoulder of that or hogget, by all means substitute it – you'll need to allow longer to cook, but the flavour will be its own reward.
1. Preheat the oven to 240°C/fan 220°C/gas 9. Put the lamb into a roasting tin. Combine the salt, sugar and paprika and rub into the meat, then cook for 20 minutes, until well browned. Remove from the oven and reduce the temperature to 150°C/fan 130°C/gas 2.
2. Pour 1 litre of water into the tin, cover with foil and bake for about 7 hours, until soft enough to pull off the bone and shred. Add the liquid smoke if using.
3. Towards the end of the lamb cooking time, make the sauce by putting all the ingredients into a small pan. Bring to the boil, then simmer for 10–15 minutes, until well reduced.
4. When the lamb is shredded, spoon the mop over the meat to taste.
## Kichri-kedgeree
##### serves 4, generously
250g yellow moong dal
200g basmati rice
2 smoked haddock or other smoked firm white fish fillets
50g ghee
2 onions, finely sliced
2 teaspoons salt
Ground seeds of 5 cardamom pods
1 tablespoon curry powder
1 teaspoon ground ginger
4 eggs
400g spinach
This satisfying supper dish is perhaps best thought of as Kedgeree: The Prequel – one of the many steps kichri might have gone through in its mutation from Indian dal and rice to Edwardian breakfast, boasting both the original pulses and the very British smoked fish, as well as some spinach, which happens to go beautifully with both.
1. Rinse the dal and rice well under running water, then leave to soak in lukewarm water for 30 minutes. Put the fish into a shallow pan on a low heat, cover with 1.25 litres of boiling water and leave to sit for 10 minutes.
2. Lift the fish out of the water (do not tip this away!) and break into large flakes.
3. Melt the ghee in a large saucepan over a medium-low heat. Add the onions and fry gently until golden brown. Scoop a third out and set aside as garnish. Stir in the salt and spices and fry for another couple of minutes, then stir in the drained dal and rice and the fish cooking water.
4. Bring to the boil, cover, and cook over a low heat for 25 minutes without lifting the lid. Meanwhile, put the eggs into a pan of cold water, bring to the boil, then turn down the heat and simmer for 6 minutes. Wash the spinach, drain and put into a large pan over a medium heat with no more water than still clings to the leaves. Cover and allow to wilt, then press out as much water as possible. Drain the cooked eggs and run under cold water to cool, then peel and cut in half.
5. When the rice is done, stir in the spinach and fish, add the eggs and scatter with the reserved onion to serve.
## Smoky black dal with eggs
##### serves 4 as a main course
200g urad dal (black lentils), soaked overnight
4 eggs (optional)
3 tablespoons ghee or vegetable oil
1 onion, finely sliced
2 tablespoons ginger, grated
6 garlic cloves, crushed
2 small green chillies, sliced into thin rounds, seeds removed
4 black cardamom pods, seeds only, crushed
1 teaspoon ground cinnamon
1 teaspoon garam masala
1 x 400g tin of plum tomatoes, roughly chopped
2 tablespoons tomato purée
4 tablespoons Greek yoghurt
1 piece of natural charcoal (nothing impregnated with lighter fuel)
This is a slightly lighter, more tomatoey version of the outrageously rich Punjabi dal makhani, infused with the smoky flavour of black cardamom and lightly smoked before serving. If you don't have a gas hob, however, you can leave out step 6; it will still be delicious.
The eggs make it into quite a substantial main dish, but feel free to leave them out too if you'd prefer to serve it as a side.
1. Put the drained dal into a pan and cover with cold water. Bring to the boil, skim, boil hard for 10 minutes, then turn down the heat and simmer until very soft (how long this takes depends on the age of the pulses, but expect about an hour to an hour and a half). Do not drain.
2. Put the eggs, if using, into a pan of cold water, bring to the boil, then turn down the heat and simmer for 6 minutes. Drain and run under cold water to cool, then set aside.
3. Heat 2 tablespoons of ghee or oil in a frying pan (with a lid, as you'll need this later; if not, use foil) over a medium heat and cook the onion until soft and golden. Add the ginger, garlic and chillies and fry, stirring, for a further couple of minutes.
4. Stir in the crushed and ground spices and cook for another minute, still stirring, until you can smell them. Tip in the tomatoes and purée. Drain the dal over a bowl, retaining the cooking water, and add to the pan along with 300ml of its liquid. Bring to the boil, then turn down the heat to medium and simmer until the sauce has thickened and started to separate, and oil begins to pool around the sides of the pan. Meanwhile, peel and halve the eggs.
5. Turn the heat off, stir in the yoghurt and arrange the eggs on top, leaving a space in the middle for the next step.
6. Put a piece of charcoal on the flame of your hob, and heat until it glows red. Place a small metal dish in the middle of the frying pan. Using metal tongs (this is important, obviously), place the charcoal in the dish, and spoon over the remaining ghee or oil. Cover the pan tightly and leave for 5 minutes to infuse before checking the seasoning and serving, without the charcoal.
## Smoked mackerel and charred cauliflower gratin with smoked chilli breadcrumbs
##### serves 2 (easily doubled)
1 small cauliflower
Oil, to grease
250ml double cream
100ml whole milk
2 small garlic cloves, crushed
2 smoked mackerel fillets
1 teaspoon smoked chipotle chilli paste (see intro)
A generous handful of breadcrumbs
This is an indulgent autumnal or winter supper; the smoky golden cream cloaks the white clouds of cauliflower in a dangerously seductive fashion, offset by the crunchy chipotle-spiked topping.
I am borderline obsessed with Gran Luchito smoked chilli paste, but if you don't have time to order any online, and can't find another brand in the shops, heat a little oil and fry a pinch of the more widely available chipotle chilli flakes for a minute or so before stirring in the breadcrumbs in step 3.
1. Heat the oven to 240°C/fan 220°C/gas 9. Break the cauliflower into large-ish florets and toss with a little oil and salt on a baking tray. Bake for about 20 minutes, checking regularly, until it's just turning golden and beginning to char round the edges.
2. Meanwhile, put the cream and milk into a small saucepan and bring to a simmer. Add the crushed garlic and flake in the mackerel in large chunks, removing the skin if necessary. Take off the heat and leave to infuse while the cauliflower cooks.
3. Heat the chilli paste in a frying pan and, when hot, stir in the breadcrumbs to coat. Take off the heat.
4. Take the cauliflower out of the oven and turn the heat down to 200°C/fan 180°C/gas 6. Tip it into the cream and mackerel mixture and stir to coat. Season to taste, then spoon into a shallow ovenproof dish.
5. Sprinkle over the breadcrumbs and bake for about 15–20 minutes, until bubbling.
## Bacon and split peas with a quick mustard pickle
##### serves 6
1 tablespoon vegetable oil or lard
1 onion, finely chopped
1 leek, finely sliced
1 large carrot, finely diced
1 bay leaf
1.25kg smoked boneless collar of bacon
500g yellow split peas
2 tablespoons butter
2 teaspoons brown mustard seeds
##### _For the pickle:_
3 teaspoons soft light brown sugar
2 teaspoons salt
5 teaspoons mustard powder
120ml cider vinegar
1 large carrot, peeled and finely diced
2 spring onions, chunkily sliced
Before the potato ruled these isles, dried peas were the staple starch for the masses, and this recipe always gives me a pleasing sense of connection with the past. It's thrifty winter comfort food par excellence – the kind of dish to send you into a contented coma afterwards if you so much as dare to look at the sofa.
Ask your butcher, or consult the label on your bacon collar, to find out whether it needs pre-soaking overnight. The homemade pickle is very quick, but does need marinating time; piccalilli would work well if you don't have that luxury.
1. To make the pickle, whisk together the sugar, salt, mustard powder and vinegar until dissolved. Put the chopped vegetables into a small, clean jar and pour in the marinade to cover. Leave for at least 12 hours.
2. To cook the bacon and split peas, heat the fat in a large casserole dish. Soften the onion, leek and carrot with the bay leaf for a few minutes, then add the bacon. Pour over the split peas and 1.5 litres of water and bring to the boil. Skim off any scum from the top, then cover and cook for about 1¾–2 hours, until the meat is falling apart and the peas are thick. Remove the meat from the peas.
3. Heat the butter in a small frying pan over a medium-high heat. Add the mustard seeds and cook until they sizzle and pop. Stir into the split peas and taste – they shouldn't need any further seasoning.
4. Carve the bacon (though it won't take much cutting) and serve in chunks with a spoonful of split peas and the pickles on the side. Some boiled greens wouldn't go amiss either.
In five years of writing my _Guardian_ column, I've discovered one thing that holds true for every recipe. Post something simple online – or at least something that appears simple, given there are almost as many ways to fry an egg as there are to turn it into a soufflé – and some wag will always, without fail, respond underneath: 'What next, a recipe for toast?'
And, once in a blue moon, I reply, yes, yes please, I would dearly love to address the best way to toast bread. Because, though any fool can put a slice of bread in a toaster, it takes practice to get really superlative results out of it.
In my teenage years, I must have eaten a minimum of three slices a day. Even now, walking past certain caffs in the morning, the smell of cheap toast in quantity gives me a Proustian rush.
Nowadays, perhaps in reaction to the pasty sliced white of my schooldays, I like my toast aggressively crunchy, with enough structural integrity to bear the weight of several toppings, which generally means sourdough, though I have a soft spot for very seedy brown bread (of the sort here) and a decent soft white bloomer, done well. But I also like the tangy flavour of sourdough, the way it keeps for ever in the bread bin, and the generous smattering of holes, just large enough for the butter to pool in as it melts. (Good toast also demands patience for this coming together of ingredients, the ingress of rich, salty fat into bread, before tucking in.)
Back to the toasting. Sliced bread is undoubtedly more convenient, especially for the cack-handed among us, but the rougher surface of the home-sliced kind gives a crisper result. Your call. The bread itself should be slightly stale, not just because fresh bread deserves to be eaten as such, but because the lower moisture content makes for better toast.
Call me nerdy, but I like to preheat the toaster by putting it on for a cycle before inserting the bread, and then flip the bread a couple of times, so it cooks evenly – whatever manufacturers claim, the middle of the toaster is always hotter than the walls. (And despite what you read online, those numbers on the dial do not usually represent minutes, a pernicious rumour that caused me to doubt my toasting prowess for about three weeks last year before I thought to test the theory out.)
Thicker items are better done under a grill to prevent the edges burning before the middle is heated through, and the crumpet, a specialist subject of mine, should be toasted face-side down until the base sounds hollow before being flipped over. (Mad as it sounds, I do not like to let anyone else cook my crumpets for me. I just can't trust them to take it seriously enough.)
But we can agree to disagree over the specifics – if you want it virtually raw, or charred and smoking, if you have an inexplicable attachment to cheap white sliced that gums and wads on the roof of your mouth, or like to bake loaves with a mother older than your own, feel free. Most of the recipes below will work well with almost any kind of bread.
Nothing soothes like toast. As an American blogger living here wrote: 'If I was depressed, I'd want something like a plate of meatloaf and a carton of Ben and Jerry's to cheer me up. If I was British, apparently you could appease me with a piece of toasted bread. It makes me think that maybe they're a bit simple.' And thank God, sometimes, for a little simplicity.
### Quick ideas for toast
Breakfast:
— Muesli on toast. Sounds odd, but it works: spread toast (brown seems apt here) with a thick coating of Greek yoghurt, drizzle with honey and scatter with chopped dried figs or other fruit of your choice and nuts (I like almonds), mixed peel and seeds, and a pinch of salt.
— Ricotta, ripe figs or apricots, and honey
— Thick-cut marmalade, English mustard and bacon
— Marmite or peanut butter and ripe banana
— Tahini with honey and sea salt
— Ripe avocado, salt, chipotle honey (or honey and chipotle flakes)
Lunch and supper:
— Tinned sardines, mashed with a little lemon juice and Tabasco (an oldie but a goodie)
— Hard-boiled eggs mashed with a little butter, snipped chives, salt and lots of freshly ground pepper
— Cold roast meat topped with mayonnaise mixed with Dijon mustard, capers and green herbs
— Meat drippings and pickles (finely chopped gherkins, sauerkraut or kimchi)
— Steamed kale or sprouting broccoli, sautéd with a little garlic and lots of olive oil
— Washed rind cheese, sliced apple and chopped walnuts, grilled
— Chickpeas mashed with olive oil, seasoned and topped with smoked paprika
— A rub of garlic, squashed tomatoes and a sprinkle of sea salt and olive oil
— Mushy peas and shredded ham
— Melted cheese, mango chutney and a dusting of cayenne pepper
— Masala beans: Fry half a finely chopped onion and 2 crushed garlic cloves in a small pan until soft, then stir in a teaspoon of curry powder and ½ teaspoon of chilli powder. Cook for a minute or so, then add a tin of baked beans and simmer until thickened. Season to taste.
See also: Wild garlic bread (here).
## Burnt toast powder
Not so much a recipe as a slightly crazed idea, burnt toast powder is madly popular in the States. I find it unpleasantly bitter in a savoury context, but surprisingly delicious on very sweet things, like ice cream, chocolate mousse or caramel tart, especially with a pinch of coarse salt.
To make it, burn a piece of stale bread until it's completely black all over (it's wise to open the windows and stick the extractor fan on), then leave to dry out overnight. Crumble into pieces, then grind into a powder and use as above.
## White beans on toast
##### serves 4
225g dried cannellini or haricot beans, soaked overnight (or about 525g drained tinned white beans)
A knob of butter
4 rashers of dry-cured streaky bacon, finely chopped
2 shallots, finely chopped
2 sprigs of thyme, leaves picked
120ml dry white wine
150ml double cream
4 slices of sturdy bread
This recipe, for a richer, more savoury and yes, fancier version of the undisputed classic of quick lunches, doesn't mean I love the tinned sort any less, but a change is as good as a rest, as my granny used to say.
1. Drain the soaked beans if using, then put into a medium pan, cover with cold water and bring to the boil. Skim off the scum, turn down the heat and simmer for about 2 hours, until tender (the exact time will depend on the age of your beans, so check regularly). Drain.
2. Heat the butter in a medium pan over a medium-high heat and add the bacon. Fry for a couple of minutes, until the fat starts to melt, then add the shallots and thyme. Fry, stirring occasionally, until the bacon begins to brown, then stir in the wine, scraping the bottom of the pan clean. Simmer for about 7 minutes until most of the wine has evaporated, then stir in the cream. Season.
3. Stir in the beans. While they heat through, make the toast. Serve – well, you know how to serve beans on toast.
## Duck and sherry pâté with pickled figs and pistachios
##### serves about 6 (or more as a snack)
10 dried figs
120ml red wine vinegar
2 tablespoons white sugar
25g shelled pistachios
##### _For the pâté:_
350g duck or chicken livers
100g butter, diced
1 shallot, finely chopped
1 teaspoon thyme leaves
Zest of 1 unwaxed orange
75ml Pedro Ximénez sherry
75ml double cream
½ teaspoon salt
½ teaspoon Chinese five spice
I very nearly didn't include this recipe, because pâté seemed such an embarrassingly obvious choice in a chapter about toast, but just like when confronted with a plate of crisp warm bread with a thick coating of rich, sweet meat, I couldn't resist. It's a classic for a reason.
1. To make the pâté, trim the livers of any sinewy or green bits and roughly chop. Heat a knob of butter in a frying pan over a medium heat and add the shallot. Fry until soft, then stir in the thyme and orange zest and fry for another minute or so. Turn up the heat and add the livers. Sauté for a couple of minutes, until golden brown on the outside, then tip into a food processor.
2. Pour the sherry into the pan, still on the heat, and scrape to deglaze. Allow to bubble until syrupy and reduced, being careful it doesn't start to burn, then pour into the machine along with the remaining butter.
3. Whiz until smooth, then add the cream, salt and five spice and whiz to combine. Taste for seasoning, then spoon into a bowl and allow to cool. Cover and chill for a couple of hours until set.
4. Meanwhile, put the figs into a small pan along with the vinegar and sugar and bring to a simmer, stirring to dissolve the sugar. Turn down the heat and simmer gently for about 30 minutes, until the figs are sitting in a sticky syrup (I'd recommend using the extractor fan during this process; it's pungent stuff). Spoon out of the pan and set aside.
5. When you're ready to serve, slice the figs and roughly chop the pistachios. Serve the pâté on toasts or biscuits, with a slice of fig on top (or two or three, depending on the size of the vehicle) and an artful scattering of pistachios.
## Southern cheese on toast
##### serves 4
20 smallish tomatoes, or, if tomatoes are in season, enough ripe tomatoes of any size for 4
4 slices of robust bread
1 garlic clove, halved
2 burrata or buffalo mozzarella balls
##### _For the basil purée:_
25g basil
100ml extra virgin olive oil
By southern, I don't mean with zider and West Country Cheddar, or indeed Velveeta and cornbread, but soaked in the sunny flavours of the Mediterranean. Creamy mozzarella, sweet umami-rich tomatoes and a peppery green basil purée make this a treat indeed for a summer lunch, but I like it just as well with baked tomatoes when they're not quite up to eating raw, so feel free to make it with either.
1. If you're making this outside peak tomato season (or if your tomatoes turn out to disappoint), heat the oven to 210°C/fan 190°C/gas 7, then put the tomatoes on a greased baking tray (cut them in half if they're larger than a walnut) and bake for about 20 minutes, until they're starting to split.
2. Meanwhile, bring a small pan of salted water to the boil and put a large bowl of iced water next to it. Dunk the basil into the hot water for 15 seconds, then immediately scoop out with a slotted spoon and put into the iced water. Drain well and dry, then put in a small food processor, or use a stick blender or a pestle and mortar to blend with the oil, adding the latter gradually until you have a smoothish purée. Add salt to taste.
3. Toast the bread until golden, then rub with the cut garlic clove. Squish the tomatoes on top, drizzle with basil purée, add half a burrata or mozzarella (if you're using the former, do this on a plate so you catch any escaping cream), season and add a little more purée. Devour.
## Salmon and coriander tartare with avocado and wasabi cream on toasted rye
##### serves 2
1 ripe Hass avocado (the brown knobbly ones)
2 teaspoons wasabi paste (if you're making it up from the powder, use 2 teaspoons powder to 1 teaspoon warmish water)
Juice of 1 lime
1 teaspoon soy sauce (preferably Japanese)
1 salmon fillet
A small bunch of coriander, chopped
A handful of pea shoots
1 tablespoon pumpkin seeds
A dash of pumpkin seed, extra virgin olive or avocado oil
2 slices of dark rye bread
Unapologetically dense and healthy, rye bread nevertheless makes surprisingly good toast – especially those crisply curling edges. Packed full of what are somewhat defensively known as 'good fats', avocado, salmon and pumpkin seeds are an aptly nutritious topping for an outrageously quick lunch that's guaranteed to keep you feeling full up until, well, at least the four o'clock coffee and cake break, if not longer.
Pumpkin seed oil isn't widely available here, more's the pity, but if you see some, snap it up: vivid green and nutty-tasting, it makes an excellent salad dressing. As you'll be eating the salmon raw, make sure it's very fresh. If you're worried, freeze it for 24 hours, then defrost before use.
1. Cut the avocado in half, remove the stone, then scoop out the flesh into a small bowl or mini chopper. Add half the wasabi along with the lime juice and soy sauce. Whiz until smooth (or mash as best you can, if you don't have a stick blender or mini chopper), then taste and season accordingly. I like to add the rest of the wasabi, but you may not.
2. Skin the salmon if necessary, then cut into small dice. Put into a small bowl with the coriander, season well, and toss together with the pea shoots, pumpkin seeds and a dash of oil.
3. Toast the bread until crisp, then spread with the avocado and top with the salmon and pea shoots. Eat immediately.
## Mexican torta with black beans, chorizo, avocado and goat's cheese crema
##### makes 2
80g cooking chorizo
1 x 400g tin of black beans
60g soft goat's cheese
6 tablespoons crème fraîche
2 crusty rolls or chunky slices of a thick baguette
Olive oil, to cook
1 ripe Hass avocado (the brown knobbly ones)
Pickled jalapeños, to serve
A small bunch of coriander, roughly chopped, to serve
Although the burrito has won hearts internationally, Mexican sandwiches aren't all about tortilla wraps – indeed, in the north of the country tortas, or filled rolls, are also known as _lonches_ , due to their popularity at, you guessed it, lunchtime.
The bread used there is a local variation on the French baguette, so use any crusty rolls or wide baguette that looks good. The earthy flavour of the beans, enriched by the spicy orange fat of the chorizo, makes a lovely contrast to the creamy avocado and goat's cheese.
1. Slit the chorizo and scoop out the meat. Heat a frying pan over a medium heat and sizzle the meat until the fat has rendered, then stir in the drained beans. Cook for a couple of minutes until warmed through, then mash to a rough paste and season to taste.
2. Mash together the cheese and crème fraîche. Heat a griddle pan over a high heat and cut the rolls in half. Brush each cut half with a little oil, then, once the griddle is smoking hot, toast the cut sides until charred.
3. Divide the bean mixture between the bottoms of the rolls. Slice the avocado and arrange on top, followed by a scattering of jalapeños and coriander. Spread the tops with the goat's cheese crème fraîche and put the two halves together.
Famously known as the fifth taste, joining sweet, salty, bitter and sour at the flavour party rather belatedly at the beginning of the last century, umami is best described as savoury. Intensely savoury. Almost too savoury to bear, like soy sauce, or sun-dried tomatoes or a really mature Parmesan – in fact, just writing about it is making my tongue prickle slightly at the edges.
For centuries, we were at a loss as to how to describe this taste, which hovered on the edge of salty, but wasn't strictly that, was much richer, and deeper, and, well, less salty. It was the flavour of charred meat and old cheese, ripe tomatoes and fried mushrooms, meat stock and all sorts of good things, including, in Japan, dashi broth made from dried kelp and tuna flakes.
In a country where animal products were once considered taboo, this meat-free stock gave Japanese cuisine a much-needed injection of savoury richness, and it was this that intrigued chemist Kikunae Ikeda. There is, he wrote, 'a taste which is common to asparagus, tomatoes, cheese and meat but which is not one of the four well-known tastes'. After years of patiently distilling seaweed, veal stock and other likely candidates, in 1908 he found the secret ingredient that linked them all: glutamic acid.
But, as he discovered, glutamic acid doesn't taste of anything in its original form. It's not until it's broken down by heat, fermentation or time into something called an L-glutamate that our tongues can detect it – which is why raw meat doesn't have much in the way of umami, but a hamburger does (and explains why most of us add anchovy-rich Worcestershire sauce to steak tartare). Ikeda named this new flavour umami, or 'delicious taste' – often (nauseatingly) translated by modern sources as 'yummy'.
For many years, umami was thought to be something that simply enhanced the other flavours, but in fact, it doesn't seem to intensify sweetness or acidity, bitterness or even saltiness – it simply makes food taste 'more of itself'. The current thinking is that when we add umami to our food, whether by cooking it in a dashi or chicken stock, adding tomato ketchup or sprinkling over Parmesan, we're highlighting the small quantities of umami already present.
It wasn't until 2001, however, that umami taste receptors were identified on the human tongue, thus laying to rest, once and for all, the debate over its existence as a flavour in its own right.
Indeed, unlike the other flavours, which are sensed relative to each other (which is why we add a pinch of salt to sweet desserts, and serve cheese with fruit and sticky chutneys), umami alone can be detected on its own, probably because it's a protein – the human body makes up to 40g of glutamate a day, leaving it in constant search of a top-up of amino acids.
### The truth about MSG
The year after Ikeda's discovery, he began commercial production of umami in the form of a more stable sodium salt of glutamic acid, otherwise known as the much-maligned MSG and sold today under a variety of brand names around the world, including the deliciously coy 'Ac'cent seasoning'. (In much of Asia it is known, more brazenly, as Gourmet powder.)
Though it took off almost immediately in Asia, MSG didn't make the leap into the Western diet until after the Second World War, when American soldiers returned from the Far East with a taste for the stuff. Initially they weren't quite sure what exactly they were missing, but once food scientists put two and two together, MSG became a key ingredient in all sorts of industrially produced foodstuffs in need of a flavour boost, from TV dinners to chewing gum, and proved an especially helpful addition to diet foods when fat fell from favour in the 1960s.
Not that you'll often see MSG listed as such on labels; it goes by many names, including autolyzed yeast extract, E621, natural beef or chicken flavouring, seasonings and hydrolyzed milk protein, none of which sound much more appetizing. I have no cod medical objections to MSG, but I'd prefer to get my umami from foodstuffs that are actually delicious in their own right, like cheese, just as I'd prefer to get my vitamin C from an orange rather than a tablet.
But the fact that the big bad wolf is nothing but a glutamate also produced in quantity by our own bodies has done nothing to allay the fears of those who believe themselves to be a victim of 'Chinese restaurant syndrome', a mysterious condition which has gained remarkable credence in the Western world over the past forty years without anyone actually being able to find any evidence for it. Every single food testing authority in the world has deemed MSG completely safe for human consumption.
According to Jeffrey Steingarten's excellent essay 'Why Doesn't Everyone in China Have a Headache?', written in 1999, 90 million Americans claim to be affected by MSG. In China, where they consume 1.8 million tonnes of the stuff every year, they're too busy eating to worry about it.
### Umami injectors
— Strong stock: Preferably beef. Those little jelly pots which are far too intense for normal use come into their own in an underwhelming meat stew; add a little at a time to taste, or do the same with a crumbled stock cube. Miso or Korean doenjang are good vegetarian alternatives.
— Marmite: Like stock cubes, this is best done behind closed doors, but the effects are just as magical, as well as being helpfully vegetarian friendly. Bovril, which does the same job, is not, of course.
— Anchovies: Lots of people think they don't like anchovies, but most of them (vegetarians excepted) are wrong. They just don't like the oversalted, hairy little things that infested every pizza up until 1988. Melt them down into a sauce, and anchovies don't taste fishy at all, they taste intensely, wonderfully savoury. (The potato, black kale and anchovy pie recipe here is a good way to prove this.) A discreet squirt of anchovy paste or Gentleman's Relish has rescued many a dish in my household.
— Soy sauce: If your stir-fry lacks pep, or your pork belly stew needs a bit of poke, a dash of soy sauce should do the trick. Don't overdo it though, as it can overwhelm other flavours.
— Worcestershire sauce: The sweet, spicy western equivalent to soy sauce. The classic kind is not veggie friendly, so check the label.
— Tomato or mushroom ketchup: I'm not a big fan of ketchup generally (I'm a mustard girl) but I do keep it for sneakily stirring into boring sauces.
— Cheese: If in doubt, add cheese. If you take one thing away from this book, let it be that.
See also: B is for Blue Cheese, Canederli alla tirolese with Parmesan broth (here), Bacon refried beans (here), Potato, black kale and anchovy pie (here), Pissaladière (here).
## Shrimp and grits with bacon and Parmesan
##### serves 2
500ml chicken stock
250ml milk
100g stoneground grits (see intro)
1 tablespoon double cream
40g Parmesan or Grana Padano, grated
2 rashers of smoked streaky bacon, finely chopped
10 large raw prawns, peeled and deveined, but tails left on
A small bunch of chives
Seafood and cheese is one of those combinations which the cognoscenti all know to be deeply wrong, but is happily quite the done thing down in Mississippi. Here the nutty sweetness of the prawns bounces beautifully off the salty savoury flavour of the cheese and bacon, with the creamy corn as a soothing backdrop.
If you can't find grits (and they are available online), you can substitute cornmeal or polenta, although the flavour won't be quite the same.
1. Combine the stock and milk in a medium saucepan and bring to a simmer, then pour over the grits, whisking vigorously to combine.
2. Turn down the heat to low and simmer for about 20–30 minutes, until the grits are thick and creamy, stirring regularly to make sure they aren't sticking.
3. Once they're ready, take off the heat and stir in the cream and cheese, then season to taste. Keep warm while you cook the topping.
4. Heat a dry frying pan over a medium-high heat and fry the bacon until crisp and beginning to brown. Scoop out with a slotted spoon and add the prawns. Sauté until pink on both sides, then scoop out and add to the bacon (if you leave them in the hot pan while you assemble the dish they will continue cooking).
5. Divide the grits between two shallow bowls. Top with the prawns, then scatter the bacon around them. Finally snip over the chives to serve.
## Courgette fritters with bagna cauda hollandaise
##### serves 4 with extra sauce
450g courgettes (2 large-ish ones)
2 spring onions
50g plain flour
50g dried breadcrumbs, preferably panko
1 teaspoon chilli flakes
A whole nutmeg, to grate
1 egg
A small bunch of parsley, finely chopped
Oil, to fry
##### _For the sauce:_
3 fat garlic cloves
10 anchovies, rinsed if packed in salt
100ml olive oil
3 egg yolks
150g cold butter, cubed
Bagna cauda (rather wonderfully, 'warm bath') is an incredibly rich, salty, garlicky dip from Italy's Piedmont region, usually served with raw or boiled vegetables, but this thicker version makes a dangerous pairing with hot, crispy courgette fritters. The slightly sweet, almost creamy flavour of the squash proves the perfect foil for the anchovy umami bomb.
This is a lovely late summer lunch or light(ish) supper. Don't be shy with the oil; you need it for really crispy fritters, but you can negate that by serving them with an undressed green salad.
_NB: the sauce is also great with crudités, toasts, poached eggs; in fact, almost anything._
1. Coarsely grate the courgettes into a colander in the sink. Salt lightly, toss, and leave to weep while you make the sauce.
2. Roughly chop the garlic and anchovies and mash together into a smooth paste. Heat a splash of oil in a small frying pan over a lowish heat, and gently fry the mixture until the garlic just smells cooked. Scoop out of the hot pan so it doesn't continue cooking.
3. Heat the olive oil to warm (I put the jug into a saucepan of hot water) and boil a small kettle of water. Put the egg yolks into a pan with 1 tablespoon of cold water and the butter and set over a low heat. Stir continually until the butter has melted and emulsified into a smooth, thickish sauce, then gradually but vigorously whisk in the warm olive oil. Turn up the heat slightly and whisk until thickened. If it threatens to separate, whisk in a little of the boiling water from the kettle, which should bring it back together. Once thickened, stir in the anchovy and garlic and set aside somewhere warm while you make the fritters, whisking it occasionally (I sit the pan in the larger pan of warm water previously occupied by the jug of oil).
4. Squeeze out the courgettes well. Finely slice the spring onions, then put into a large bowl with the courgettes, flour, breadcrumbs, chilli flakes and a pinch of nutmeg. Briefly beat the egg and mix in along with the parsley.
5. Heat enough oil in a frying pan over a medium-high heat to shallow-fry – if you only grease the pan, your fritters will be soggy. Once the pan is hot enough that a courgette strand sizzles as it hits the oil, add the mixture in spoonfuls, flattening out as you do so, and fry in batches until golden brown on both sides. Drain on kitchen paper, then serve with the sauce.
## Ox cheeks braised in Marmite
##### serves 4
600g ox cheek, trimmed of any sinew and cut into large chunks
1 tablespoon seasoned flour
2 tablespoons oil or dripping
1 onion, thinly sliced
1 large carrot, diced
1 leek, diced
300ml porter or other dark beer
3 tablespoons Marmite, dissolved in 4 tablespoons hot water
1 tablespoon dark brown sugar
Although I'm convinced they've changed the recipe in recent years, I still love Marmite and (to others' embarrassment) have been known to take a jar on holiday to perk up hotel breakfasts. (There's no joy in cheap jam, wherever it was made.) But it's also a very useful ingredient to have in the kitchen to deliver a quick shot of savoury flavour to insipid stews and soups – in this recipe it's out and proud; gorgeously sticky and salty, and quite superb with some creamy mash and steamed greens.
1. Toss the meat in the seasoned flour and heat the fat in a large casserole pan over a medium-high heat. Brown the meat well in batches, then set aside.
2. Turn down the heat a little, then add the onion, carrot and leek to the pan and stir well. Cook for about 10 minutes, until softened, then pour in the beer and stir to deglaze the bottom of the pan. Heat the oven to 170°C/fan 150°C/gas 3.
3. Stir in the Marmite-y liquid, the sugar, the meat and 100ml of water, bring to a simmer, then put into the oven for about 2½–3 hours, until the meat is falling apart. Taste before serving – you can stir in more Marmite if you're a fanatic like me, or more sugar if you think it could do with toning down a bit.
## Chargrilled Caesar salad
##### serves 4
2 smallish garlic cloves
150ml olive oil
2 chicken breasts or 4 boneless skinless thighs
4 rashers of streaky bacon
4 slices of day-old white sourdough bread
8 little gem lettuces
A large handful of finely grated Parmesan
2 anchovy fillets, rinsed
1 egg yolk
Juice of ½ a lemon
I'm loath to mess with an undisputed classic of the genre, which, frankly, has suffered enough (there's a tofu kale Caesar online. Seriously), but I'd like to propose this as an adaptation: lettuce is vastly enhanced by a little charring, a process that delivers yet another dollop of umami on top of the anchovy-rich dressing. Add the fried bread and the bacon and it's basically a salad for a very sophisticated hangover – hell, there's even an egg yolk in there for good measure.
1. Crush the garlic and add to the oil. Leave to infuse for about an hour. Bash out the chicken until it's nice and thin. Heat a griddle pan on a high flame and cook the bacon until crisp and well charred.
2. Tear the bread into bite-sized chunks and dunk in the oil, then griddle until crisp.
3. Cut the lettuces in half through the core, and brush with oil. Griddle until charred, then sprinkle with Parmesan. Set aside.
4. Brush the chicken with garlic oil and griddle on both sides until chargrilled and cooked through.
5. Mash the anchovies to a paste in a jug, then beat in the yolk, and gradually the rest of the garlic-infused oil until you have a thickish dressing. Stir in the lemon juice and taste – season if necessary.
6. Snip the bacon into small shards and cut the chicken into slices. Arrange the lettuce halves on a platter and scatter over the croutons, chicken and bacon. Drizzle with dressing to serve.
## Crunchy soy-braised pig's tails
##### serves 4
4 pig's tails
225g plain flour
2 eggs, beaten
50g panko breadcrumbs
Oil, to cook
##### _For the marinade:_
3cm piece of ginger, roughly sliced
3 spring onions, roughly chopped
3 small whole dried chillies
½ a star anise
1 tablespoon dark brown sugar
3 tablespoons dark soy sauce
3 tablespoons Shaoxing wine
I'm aware that this recipe is quite a niche one – either you can stomach the idea of nibbling on a curly-wurly little tail or you can't – but rest assured, if you can get past the cutesy Pigling Bland associations, it will repay your bravery. The slim little tail gives surprisingly good value, particularly as butchers will often give them away for free. (You'll probably need to order them specially, though.)
Slow-cooked until the rich, gelatinous meat falls from the bone, then crumbed and baked until crisp, tails make a strangely good snack – the kind of thing that goes very well indeed with a cold beer with adventurous friends.
1. Heat the oven to 200°C/fan 180°C/gas 6. Put the tails into an oven dish just big enough to hold them, add the marinade ingredients, then barely cover with water. Cover and bake for 3 hours, checking the tails are still submerged in liquid every hour or so, and topping up as necessary. Allow to cool slightly in the stock.
2. Turn the oven up to 220°C/fan 200°C/gas 7. Put the flour, egg and breadcrumbs into separate bowls. Cover the base of a roasting tin with oil and put on the hob over a medium-high heat.
3. While it's heating, lift the tails out of their marinade and roll each in flour, egg and breadcrumbs, then egg and breadcrumbs again, until well coated. Brown all over in the roasting tin, then roast for 20–25 minutes, turning them over halfway through. Eat immediately, while you're still feeling brave.
## Broccoli and edamame salad with Korean dressing
##### serves 4–6
1 large head of broccoli
450g shelled edamame beans, defrosted
1 tablespoon toasted sesame seeds
##### _For the dressing:_
6 tablespoons groundnut or other neutral oil
3 tablespoons doenjang (see intro)
Juice of 2 limes
1 tablespoon rice vinegar
1 teaspoon sugar
1 red chilli, seeded and finely chopped
1 teaspoon freshly grated ginger
Broccoli salads are yet another great American idea – in fact, the many charms of this excellent vegetable seem better appreciated in general across the pond – but this salad, which makes much of its crunchy texture and slightly bitter flavour, has a distinctly Far Eastern fusion feel to it.
Doenjang, a salty rich paste made from fermented soy beans, is one of the cornerstones of Korean cooking, and packs as much umami as an anchovy and Parmesan fritter – you could substitute Japanese miso paste if that happens to be easier to come by, but it's available online, and lasts for ever, so it's a useful thing to have in the cupboard for adding a bit of umami oomph to soups, rice dishes, etc.
1. Cut the broccoli into bite-sized pieces. Bring a large pan of salted water to the boil and prepare a large bowl or sink full of iced water. Blanch the broccoli and edamame for about 45 seconds, then transfer to the iced water to cool.
2. To make the dressing whisk together the oil and doenjang until well combined. Whisk in the lime juice, vinegar and sugar until you have a smooth dressing, then stir in the chilli and ginger. Taste, and add a little more of any of the ingredients if you feel it needs it.
3. Combine the broccoli and edamame in a salad bowl and toss through the dressing. Sprinkle with sesame seeds to serve.
## Dashi pickles
##### makes a 1.5 litre jar
##### _For the dashi stock:_
20g kombu (dried seaweed)
3g bonito (dried tuna) flakes
1 tablespoon sugar
1 tablespoon salt
200ml rice vinegar
1 teaspoon soy sauce
1 teaspoon togarashi seasoning
##### _For the vegetables:_
2 tablespoons fine sea salt
10 small radishes, halved
2 carrots, cut into batons
½ a cucumber, seeds removed, cut into batons
¼ of a cauliflower, in florets
This slightly smoky, sweet and savoury pickle makes an addictive accompaniment to Japanese curries, but is also surprisingly good in a cheeseburger. The ingredients for the stock should be easy to find in an oriental supermarket or online, and will also come in useful for the noodle recipe here.
1. Put the kombu into a pan with 1 litre of water. Bring to the boil, then scoop out the kombu with a slotted spoon and discard or save to use again (when, like tea leaves, its flavour will be less strong). Add the bonito, bring up to the boil, then pass through a sieve. Allow to cool, then pass through a sieve back into the same pan. Add the sugar and salt and bring back to the boil, stirring to dissolve, then add the vinegar, soy sauce and togarashi and allow to cool completely.
2. Meanwhile, dissolve the fine sea salt in 900ml water. Put the vegetables into a large bowl and cover with the water, adding just as much as it takes to submerge them (add more water if necessary). Allow to soak overnight (or for 8 hours), then drain. Pack into jars and fill with the cool stock. Cover with a lid and chill for at least 3 days before serving.
## Green lamb kebabs
##### serves 4
4 anchovies, rinsed if packed in salt
4 fat garlic cloves
400g minced lamb (not the leanest kind)
2 tablespoons capers, roughly chopped
1 small green chilli, deseeded and roughly chopped
2 large handfuls of parsley, thick stalks removed, roughly chopped
A large handful of basil, roughly chopped
1 tablespoon lemon juice
1 tablespoon Dijon mustard
This mash-up of a classic sharp salsa verde and herb-flecked Turkish kofte is the product of my taste for pairing intensely savoury, aromatic flavours with the rich sweetness of lamb. Feel free to shape the meatballs into any form you like – they're good with rice and roasted peppers, but they'd make a pretty wonderful burger too, perhaps topped with a little yoghurt and folded into a warm flatbread with salad, or even with some simply dressed new potatoes.
1. Mash the anchovies and garlic into a paste in a pestle and mortar.
2. Put all the ingredients, anchovy and garlic paste included, into a food processor and pulse until you have a green mixture. Season. Heat a small frying pan over a high flame and cook a small blob to check the flavours, adding more garlic, lemon or mustard if desired.
3. Shape into six cylinders and refrigerate until you're ready to eat, then heat a frying pan, griddle or barbecue greased with a little oil. Cook the kebabs until golden brown on all sides, and cooked through to your liking.
We don't eat a lot of flowers these days, and when we do, they tend to be in consciously foreign preparations; delectable deep-fried _flor de zucchini_ and sticky Middle Eastern pastries are fashionable in a way that violet creams and lavender vinaigrettes are very definitely not. (I must admit this often works in my favour; I always get both rose creams in the chocolate selection.)
Yet again, however, I find myself impressed by the adventurous tastes of our forebears. The ancient Greeks and Romans were great lovers of flowers, but so were earlier inhabitants of these chilly islands, who not only distilled them for medicinal purposes but used them in salads, sauces and baking too.
A seventeenth-century collection of recipes said to have come from the household of the exiled Queen Henrietta Maria includes instructions for such treats as an almond and rosewater set cream much like a modern panna cotta, eel in a wine and saffron sauce and veal pastries crammed with dried fruit and spice, and seasoned with rosewater.
The gay abandon with which these long-dead cooks used flowers is quite staggering; Frances Bissell's excellent book _The Scented Kitchen_ mentions one rose syrup recipe that calls for 11 gallons of petals – or an entire acre of bushes.
Although floral flavours began to fall from favour in Victorian times, coltsfoot wine, dandelion salads and elderflower vinegars all feature in collections of recipes compiled by the redoubtable Women's Institute right up to the present day.
But the urban population has rediscovered the charming possibilities of flowers in the kitchen in recent years as well – our growing taste for, and familiarity with, the flavours of the Indian subcontinent and the Middle East has made flower waters and saffron far easier to come by.
Although many perfectly edible examples are so very subtle that it's hardly worth the trouble of chewing, the best flowers impart delicate but haunting fragrance to food, somewhat like the elusive scent of warm jasmine on a summer's evening, or an old-fashioned rose in full fig in June.
And, if you think you don't like eating flowers, if you're not keen on artichokes, or capers, or cauliflower, do bear in mind that they're a vital ingredient in many dishes which don't shout about their floral content; creamy korma, for example (see here), and fragrant biryani both betray the Mughal fondness for rosewater, while North African spice mixes often feature rose petals, and orange blossom water perfumes many a meaty Moroccan tagine. Not so sickly sweet after all then.
### Practicalities
The best flowers to eat will always be those you have grown yourself, for only then can you be absolutely sure that they haven't been sprayed with anything potentially unpalatable or, worse still, poisonous. However, if, like me, you're not lucky enough to have acres of flower beds at your disposal, it's possible to buy flowers grown for the purpose online; see here for stockists. (This also has the distinct advantage, in many cases, of ensuring that the flowers have been selected for scent, rather than beauty – even if you grow roses at home, there's no guarantee that they're the right sort, and you may be disappointed.)
I try and wash flowers as little as possible as they're so delicate, but use your own judgement as to whether they require wiping with a damp cloth – shaking them a little to release any tiny stowaways is always a good idea, however.
If you're foraging for flowers, bear in mind the guidelines here, especially when it comes to identification; no gorgeous cascade of sugared petals is worth a trip to A&E.
Many of the recipes in this chapter call for flower waters and essences, which can be found in Indian or Middle Eastern and baking specialists respectively. Essences will be much stronger than waters, and even within these categories the pungency varies widely between brands, so never add the quantity suggested all at once, but do so to taste.
See also: Chicken korma (here).
## Crab with ricotta and lemon zest and an elderflower and cucumber salad
##### serves 2 (with plenty of vinegar left over)
2 dressed crabs
6 tablespoons ricotta
1 unwaxed lemon
##### _For the elderflower vinegar:_
1 head of elderflower, plus 1 more after the first week
350ml white wine or cider vinegar
##### _For the cucumber salad:_
120ml elderflower vinegar
Juice of 1 lemon
2 tablespoons sugar
1 teaspoon salt
½ a cucumber, thinly sliced
1 teaspoon pink peppercorns (optional)
Brown crab and fragrant elderflower: two ingredients that sing the siren call of early summer for me, here combined in one vaguely Scandinavian lunch or light dinner. You can buy elderflower vinegar online, but it's very easy to make yourself – you'll just need to allow at least a week for it to infuse before you can reap the rewards, but in the meantime, the crabs will obligingly be growing bigger and fatter, ready for the pot, which can't be a bad thing.
1. To make the vinegar, shake the elderflowers to dislodge any tiny hitchhikers, then break into small flower sprigs. Tip a little of the vinegar out of the bottle into a mug, to use for some other purpose, then push the sprigs into the bottle. Seal and put somewhere nice and sunny for about a week.
2. Strain the vinegar into a jug and discard the flowers, then pour it back into the bottle and add the new flowers. You can now use it immediately or leave it in a dark place until you're ready to make the salad.
3. Whisk together the vinegar, lemon juice, sugar and salt until the last two have dissolved, then pack the cucumber into a clean jar with the peppercorns if using, and pour over this dressing. Leave to sit for at least 3 hours, but a few days is fine.
4. When you're ready to eat, scoop the brown meat out of the crabs and mix with the ricotta, the finely grated zest of the lemon and its juice. Season to taste, then replace in the shells. Serve with the cucumber salad and some crispbreads.
## Fig and goat's cheese olive oil flatbread with lavender honey
##### serves 6
A small jar of clear honey
7 sprigs of lavender, plus a few extra to finish
225g plain flour
½ teaspoon baking powder
1 teaspoon salt
75ml olive oil, plus extra to brush
2 tablespoons semolina, cornmeal or polenta
5 ripe figs
120g log of soft, rinded goat's cheese
Lavender, to me, is the smell of holidays past; arriving in the Friday evening dark, and breathing in a warm, scented lungful of Provence.
This recipe, which makes an excellent lunch with a green salad, is also a good thing to have on the table while you're working your way through a couple of bottles of well-chilled rosé, or indeed a sticky bottle of pastis on a summer evening.
1. Tip the honey into a small pan and stir in 6 sprigs of lavender. Bring to the boil, then turn off the heat and allow to cool. Remove the lavender and pour back into the jar, adding one of the remaining sprigs. Heat the oven to 220°C/fan 200°C/gas 7.
2. Put the flour, baking powder and salt into a mixing bowl and whisk to combine. Whisk the oil with 125ml of warm water, then stir this into the dry ingredients and bring together into a soft dough.
3. Grease a small baking tray and sprinkle with the semolina, then use your fingers to gently prod the dough out to cover the tray, making it thinner in the middle and leaving a slightly thicker crust around the edge. Brush with a little oil and bake for 15 minutes.
4. Meanwhile, thinly slice the figs and goat's cheese. Arrange on top of the dough, drizzle with honey and return to the oven until the cheese bubbles and browns a little and the edges of the dough are golden. Sprinkle with a few more lavender flowers, being careful not to overdo it, and serve warm.
## Geranium and apple snow
##### serves 4
50–75g white sugar
10 large, unsprayed geranium leaves, washed
500g cooking apples, peeled, cored and roughly chopped
3 egg whites
1 tablespoon icing sugar
This easy autumnal pudding is inspired by the subtly fragrant stewed apple with geranium often found on the Ballymaloe House breakfast table in season – it sounds an odd combination, but it works beautifully, and as even the worst gardener can keep a geranium alive, it's an easy way into cooking with your own flowers (make sure the leaves haven't been sprayed with anything noxious, though). If you don't happen to have any suitable geraniums, a splash of rosewater is a good substitute.
If you want, you can fold in some whipped double cream as well to make a more substantial, fool-like dessert, but I like the delicate texture of this fat-free version as it is (or with a little custard).
1. Put 50g of the sugar and 75ml of water into a small pan over a medium-high heat. Stir to dissolve, add the geranium leaves, bring to the boil, then simmer for a couple of minutes.
2. Remove the leaves, add the apple chunks, turn down the heat, cover and cook until they break down into a smoothish purée. Whisk to get rid of any remaining lumps, taste and add more sugar if needed, then allow to cool.
3. Whisk the egg whites until they hold soft peaks, then whisk in the icing sugar until glossy. Fold in the apple purée, a little at a time. Once it's all incorporated, taste and add more sugar if necessary.
## Marzipan violets
##### makes about 25
80g icing sugar, plus a little extra to dust
80g caster sugar
1 egg
A few drops of almond extract
A few drops of violet essence
175g ground almonds, whizzed in the food processor if coarse
100g dark chocolate
Crystallized violets or coloured sugar, to decorate
At the very real risk of sounding like Miss Marple, I can think of few better companions for a winter's afternoon than a box of Fortnum & Mason violet and rose creams and a good book. I'm a big fan of the delicate perfume of violets in particular, which marries beautifully with sweet almonds in this nutty take on the classic sweet – don't be tempted to leave out the salt, it really makes them.
1. Bring a pan a third full of water to a simmer and sift the icing sugar into a heatproof bowl large enough to sit above, but not touching, the water. Run a little cold water into the sink. Stir the caster sugar and egg into the icing sugar, then set over the pan and whisk for at least 10 minutes, until thick and puffy.
2. Put the bowl into the sink, add a couple of drops of both flavourings and a pinch of salt and whisk until cool, then taste; the violet should be the dominant flavour, so add more if necessary, plus more salt if you like.
3. Stir in the ground almonds until you have a smooth paste, using your hands if necessary, then dust a tray with a little icing sugar and roll the marzipan into roughly quail's egg sized balls and arrange on the tray. Put in a cool place, or the fridge, to firm up a little – an hour should do it but longer won't hurt.
4. Melt the chocolate in a bain-marie (as above) or in the microwave, then dunk each ball in it to coat and put back on the tray, topping each with a violet or a sprinkle of sugar before it sets. Use any extra chocolate to touch up any thin bits, then set aside for the chocolate to harden.
## Scandi saffron buns
##### makes 7 large buns
245ml milk
¼ teaspoon saffron threads
425g plain flour
10g active dried yeast
75g caster sugar, plus 1 tablespoon to glaze
½ teaspoon salt
120g cold butter
1 teaspoon rosewater
A whole nutmeg, to grate
50g mixed peel
150g currants
##### _For the filling:_
¼ teaspoon saffron threads
A splash of milk
75g butter, softened
50g soft light brown sugar
A word of warning to any true-bred Cornwallahs out there; though I've taken the lead on ingredients from my beloved 1923 copy of _Cornish Recipes Ancient and Modern_ from the WI, which includes saffron cake inspiration from as far back as 1805, the form of these is borrowed from the distinctly Scandinavian cardamom version – because a ring of sticky buns puts a smile on everyone's face. Fluffy, but undeniably substantial, they make a generous afternoon tea in themselves.
1. Heat the milk for the buns to hand-hot. Crush the saffron to a powder in a pestle and mortar and then tip this into the milk and leave to infuse for a few minutes.
2. Put the flour into a large mixing bowl with the yeast, 75g of sugar and the salt. Stir together with a whisk to break up any lumps, then grate in the butter and rub in with your fingertips until it resembles coarse crumbs. Add the rosewater and a grating of nutmeg to the milk, then pour it into the flour and stir until it comes together into a soft dough.
3. Lightly grease a work surface and knead the dough for about 10 minutes (it will be very sticky) until it starts to feel more like a coherent ball than a mess. Knead in the peel and currants. Wipe out the mixing bowl, grease and then return the dough to it. Cover and put in a draught-free place until doubled in size (this will probably take a couple of hours at least).
4. Meanwhile, to make the filling, grind the remaining saffron to a powder and pour in a splash of milk. Beat the butter and sugar together with a pinch of salt, then stir in the saffron-infused milk. Lightly grease a 23cm springform cake tin.
5. Knock back the dough by punching the air out of it, then pull or roll out on a lightly floured work surface into a rough 35 x 25cm rectangle, long edges parallel to you. Spread the filling over the top, stopping about 1cm short of the bottom of the rectangle, then roll up the dough, starting at the long edge closest to you, into a tight sausage.
6. Cut the sausage into seven pieces and arrange evenly around the tin, with the smallest in the middle. (Don't worry that they look a bit lonely; they'll expand.) Cover and leave to prove for about 30 minutes, until the dough springs back when prodded. Heat the oven to 220°C/fan 200°C/gas 7.
7. Bake for about 25–30 minutes, until golden brown. Meanwhile, stir the remaining tablespoon of sugar into a tablespoon of boiling water, then brush this on to the buns when they come out of the oven. Leave to cool slightly before tearing into them.
## Shrikhand, or spiced saffron and pistachio yoghurt
##### serves 4
500g whole Greek yoghurt
½ teaspoon saffron threads
2 tablespoons milk
50–75g soft light brown sugar
1 tablespoon rosewater
4 tablespoons shelled pistachios, roughly chopped
A _Guardian_ reader introduced me to this rich, fragrant Indian dessert, and I'll be forever grateful to them – it's very simple to make, but tastes incredibly special. You don't have to hang the yoghurt if you don't have time, because with Greek yoghurt most of the work has already been done for you, but it will make it even thicker and more luxurious.
1. If time permits, scoop the yoghurt into some muslin, secure with string, and hang to drain for a couple of hours (I suspend it from the arm of my food mixer so the whey drips into the bowl).
2. Pound the saffron in a pestle and mortar until powdered. Warm the milk up and pour into the mortar (this will make it much easier to get the saffron out too – it tends to stick to the base of mine). Leave to infuse for 5 minutes.
3. Mix the sugar into the yoghurt to taste, then add the rosewater a little at a time – rosewaters vary greatly in strength, so you may need more or less depending on your brand. Stir in the saffron-infused milk and serve with the nuts sprinkled on top.
## Rose petal vodka
##### makes 700ml
4 fragrant, unsprayed roses (2 needed for each stage of the process)
1 medium bottle of vodka (about 700ml)
This is simplicity itself to do, and makes a lovely blush-pink long drink with tonic, as well as being gorgeous over ice.
1. Shake the roses to dislodge any insects, then carefully pick off the petals. Decant the vodka into a large jar or wide jug.
2. Put the petals of 2 roses into the jug and stir once, then seal with a lid or clingfilm and leave it somewhere sunny for 3 or 4 days; the vodka will turn a beautiful pink colour (unless your roses are yellow or white, of course).
3. Strain to remove the petals, then repeat the process with the second lot of roses (if you prefer a subtler infusion, you can skip this second step). Strain and pour back into the bottle. You can add fresh petals just before serving if you'd like it to look even prettier.
I won't claim I'm one of those people who's out every weekend combing the hedgerows for free food – for one, the hedgerows near me are likely to yield more kebab boxes than berries, and for another, frankly, much of the stuff on offer in my many foraging books doesn't seem worth the effort. (No one will convince me, for example, that nettles aren't considered weeds for a reason.)
That said, there are treats out there, for free, that are far superior to anything you can buy in the shops. Aromatic, grassy wild garlic has a quite different flavour to the bulb, while damsons are far tarter and richer than the sweet-natured common or garden plum.
There's also the very real benefit of feeling inordinately smug when serving up something you've harvested yourself, for nothing; not all of us have the space, or (admit it) the patience, to coax potatoes from the ground, or spend hours on the riverbank toiling for one tiny trout, but anyone can go down to the park with a plastic bag and spend a happy hour picking, and eating, blackberries.
There's a certain quiet satisfaction in dipping into the hunter-gatherer lifestyle for an afternoon, then going home and making the fruits of your labour even more delicious with some stuff you've foraged from more conventional sources. It's one of the luxuries of modern life.
### The rules
It may be obvious that you shouldn't gather food from the side of busy roads, and should treat anything at cocked-leg height with caution in parks frequented by dogs (i.e. all parks), but look out as well for signs of chemical use in both town and country; if the leaves are yellowing, or clumps of plants are dead, then the area may well have been treated with insecticide, and you should leave well alone. Wash anything you gather well in fresh water before eating.
In terms of your rights and responsibilities, the law allows you to pick food on public land, or along a right of way, but you must have the landowner's permission to uproot any whole plants (usually a bad idea anyway, given you'll kill them), or indeed to cross from the footpath to that fruity-looking damson tree at the edge of the wood. Few landowners would mind you taking a handful for your own use, but nevertheless it's polite to ask if possible, and to avoid stripping the tree completely.
In fact, that's a good maxim in general when foraging; if you're too greedy, then others will be disappointed, and indeed with some species you risk killing the plant off completely for next year. Only take what you need, and be careful to cause as little damage as possible to the surrounding area while doing so.
### The tools
A good identification guide (Richard Mabey's classic _Food for Free_ is still the best to my mind) is the only must if you're getting any more adventurous than the bramble, but a supply of clean carrier bags, and a small pair of stout scissors or secateurs, will come in handy.
### Wild garlic – March to June
Woodlands and hedgerows
Probably the easiest of all wild foods to identify, because you'll likely smell a clump of wild garlic before you see it, these broad green spears, with their star-shaped white flowers, can be found from March (in southern areas) to June in damp woodland and other shady places. Although there are other plants which look similar, only ramsons, as they're also known, have that distinct alliaceous whiff.
Ideally use scissors to harvest them to avoid pulling the plants up by the roots, and go for younger, smaller leaves rather than tougher, darker old-timers. The flowers, which are also edible, make a pretty garnish, though the leaves will be more tender before the plant has flowered.
It's a very easy plant to make use of; it'll work pretty much anywhere you might use garlic or chives, and has a milder flavour than you might expect from its pungent smell. Fold it into mayonnaise, crème fraîche or scrambled eggs, mash it into butter, or snip it on top of salads, fish, and so on – you can even make a punchy pesto with it.
### Samphire – June to September
Coastal marshes and salt flats
Properly known as marsh samphire, and found on tidal mudflats and salt marshes, this has a minerally, salty flavour that leaves you in no doubt it is a sea vegetable.
Young stalks are great tossed raw into salads, but any you've picked yourself will need washing thoroughly first to rid them of any grit. Steamed until al dente, samphire can be eaten like asparagus, dipped in melted butter, and pulled from its woody stems with the teeth, but if you're going to serve it as a vegetable, remove these fibrous lower portions before cooking.
### Cobnuts – August to October
Woods, hedgerows and wasteland
On my way to the greengrocer to find cobnuts (a type of hazelnut) to test the meringue recipe in this chapter, I was amazed to find myself stepping over piles of the things on an unlovely London pavement – they look like spiky little corn husks with pale green or brown shells inside, depending on their age, and the hazel tree or bush that bears them has broad, serrated green leaves with a slightly furry underside.
People rave about the milky joys of raw green cobnuts, but though they're nice enough, the flavour really comes out when you toast them.
Of course, they're great simply roasted and salted, but they're also delicious wherever you might use hazelnuts otherwise: in a seasonal apple and blue cheese salad, popped into a chocolate brownie, or stirred through sautéd cabbage.
### Blackberries – August to October
Woodland, wasteland, parks, hedgerows and heaths
These need little introduction, but I have discovered a couple of new things about this familiar fruit in recent years – the ripest berries will always be the lowest on the stem, and these are said to be the best of all, while those towards the top will be ready to eat last, and may be a bit sour (and thus, perhaps, better for cooking than eating raw).
### Damsons – September to October
Hedgerows, parks, woods and edges of gardens
These diminutive blue-black plums, about the size and shape of a large olive, with a slightly chalky-looking skin, aren't common in our hedgerows, but I've happened upon them more than once by accident, and if you find a bullace, or wild plum (similar in size, but rounder), you can use the fruit in much the same way.
Like their relative the sloe, they're too bitter to eat raw, but when cooked with sugar they have a far fuller, richer flavour than you'll get from a dessert plum – and they're also excellent candidates for steeping in booze. (Freeze, use to half fill an empty bottle, then top up with vodka, gin or whisky and leave to mature for a few months before straining and sweetening to taste.)
See also: German plum bread with almond cream (here).
## Roast new potatoes with wild garlic dressing
##### serves 2
500g small waxy new potatoes
130ml olive oil
50g wild garlic leaves, well washed (reserve any flowers as garnish)
1 egg
2 tablespoons lemon juice
100ml sunflower oil
Inspired by patatas bravas (or at least my version of it, which includes a garlicky aïoli-like sauce as well as the fiery tomato sort), this makes a great tapas dish, but would also be a very welcome accompaniment to grilled fish or roast chicken. Helpfully, wild garlic pops on to the menu about the same time as the first new potatoes from Jersey – spring on a plate.
1. Heat the oven to 220°C/fan 200°C/gas 7. Cut the potatoes into rough 2cm chunks. Put a roasting tray with 2 tablespoons of the olive oil into the oven and leave to heat for 5 minutes, then take out, toss the potatoes in the hot oil, and bake for about 45 minutes, until crisp and golden.
2. Bring a pan of water large enough to hold the wild garlic to the boil, and prepare a large bowl or sink full of iced water. Blanch the wild garlic for 30 seconds, then dunk in the iced water to cool. Squeeze out well, then use a stick blender to blitz with the remaining olive oil to make a purée.
3. Put the egg and lemon juice into a food processor and whiz to combine. With the motor still running, drizzle in the sunflower oil until it comes together into a loose emulsion. Stir in the wild garlic purée and season to taste.
4. To serve, transfer the potatoes on to a serving plate and drizzle with the green mayonnaise. Sprinkle with sea salt and top with any flowers you may have trimmed from the garlic.
## Scrambled eggs with crab and samphire
##### serves 2
2 thick slices of white sourdough bread
20g butter
A large handful of washed and trimmed samphire
4 eggs, lightly beaten
1 tablespoon brown crab meat
2 tablespoons crème fraîche
A generous 2 tablespoons white crab meat
This is a happily indulgent breakfast (or lunch, or supper), especially if you're on holiday by the seaside and want to eat local seafood three meals a day. And, even if you aren't, this will take you there, minus the shrieking seagulls and refreshing sea breezes. It would also be lovely with black pudding instead of the crab (you could even use both if you're really in the holiday mood).
1. Stick the bread in the toaster, or under the grill, as you wish.
2. Melt half the butter in a small frying pan and sauté the samphire briefly until well coated. Set aside. Butter the toast with a little of the remaining butter.
3. Pour the eggs into a medium, heavy-based saucepan off the heat. Add the remaining butter and a generous pinch of salt and place over a medium-high heat. Stir briefly, then leave alone for 10 seconds, and repeat until they're beginning to set, when you can start stirring continuously until they're nearly done to your liking (nearly!).
4. Whip off the heat and stir in the brown crab meat and crème fraîche, followed by the sautéd samphire. Season well with black pepper and dollop on to the buttered toast. Top with the white crab meat and serve immediately.
## Wild garlic bread
##### makes 2 slices but very easily scaled up
20g butter, at room temperature
2 tablespoons chopped wild garlic
A squeeze of lemon juice
1 tablespoon grated pecorino or hard goat's cheese
2 thick slices of robustly textured bread
I love garlic bread. I love wild garlic. This recipe is the only sane conclusion to draw from both of these facts. If you want to make an entire loaf of baguette or ciabatta, you'll need about five times as much of the wild garlic butter, but you won't regret it.
1. Mash the butter and garlic together with the lemon juice and cheese, and season to taste.
2. Toast the bread, spread with the butter, allow it 30 seconds to melt deliciously into the holes, then devour.
## Michaelmas mess
##### serves 4–6
400ml double cream
##### _For the cobnut meringues:_
80g shelled cobnuts (about 250g unshelled)
100g caster sugar
2 egg whites
A pinch of salt
##### _For the damson compote:_
300g damsons
75g caster sugar
An autumnal take on a classic summer pudding starring two of my favourite seasonal fruits, the cobnut and the damson – I think they have a more interesting flavour than their cultivated counterparts, but if you can only find ready-shelled hazelnuts and ordinary plums, worry not, it'll still be pretty delicious. (If you have more damsons left, then the plum and almond cake here is another possibility.)
1. Start with the meringues. Heat the oven to 170°C/fan 150°C/gas 3 and roast the cobnuts for about an hour, until hard, dry and brown. They will go scarily soft during cooking, but don't worry, they'll firm up again. Allow to cool slightly, then roughly chop.
2. Tip the sugar on to a baking tray and heat for about 5 minutes, then turn the oven down to 130°C/fan 110°C/gas ½. Meanwhile, whisk the egg whites with a pinch of salt in a large bowl to soft peaks. Tip the hot sugar, still whisking, into the egg whites and continue to whisk until the mixture is glossy and thick. Gently fold in the chopped cobnuts, being careful not to knock all the air out as you do so.
3. Dollop spoonfuls of the mixture on to a lined baking tray and bake for 2 hours, until firm. Leave to cool in the oven, then break into large shards.
4. Meanwhile, put the damsons into a pan with the sugar and heat, gently, covered, until the fruit has broken down. Push through a sieve until all you're left with in the sieve are the stones and skins.
5. When you're ready to serve, whip the cream to soft peaks, then gently fold through the compote and meringue.
## Almond rice pudding with blackberry and apple compote
##### serves 6
50g butter
50g soft light brown sugar
100g pudding rice
600ml whole milk
A dash of almond extract
Flaked almonds, to serve
##### _For the compote:_
1 unwaxed lemon
550g apples, cooking or tart eating, peeled, cored and diced
1 tablespoon caster sugar, plus extra to taste
1 cinnamon stick
150g blackberries
A whole nutmeg, to grate
Even rice pudding haters will love this: inspired by a medieval favourite made with almond milk, and served with a fragrant seasonal compote that stretches those hard-won brambles a little further, it's the perfect autumnal dessert. (If you'd like to make it dairy free, substitute unsweetened almond milk here.)
1. Melt the butter in a medium pan and stir in the sugar and a pinch of salt, then, a minute later, the rice. Stir to coat, and cook, still stirring, for a couple of minutes.
2. Stir in the milk, add a few drops of almond extract, then bring to a simmer. Cover, leaving the lid slightly ajar. Stir regularly for about 45–55 minutes, until the rice is cooked. Taste and add more almond extract if necessary. Allow to cool to warm before serving.
3. Meanwhile, peel two long strips of zest from the lemon and put them into a pan with the apples, sugar, cinnamon stick and 1 tablespoon of water. Cook over a medium-low heat, covered, shaking the pan regularly so the apples don't stick.
4. Once they've begun to break down, add the blackberries, replace the lid, and continue to cook, shaking occasionally, until you have a smoothish purple purée. Taste and add more sugar if necessary, plus a good grating of nutmeg. Remove the cinnamon stick and lemon zest if you can find them.
5. Serve the rice pudding warm, rather than hot, with a generous dollop of compote and a sprinkle of flaked almonds on top.
## Bramble old-fashioned
##### makes 1
5 large blackberries or 10 small ones
2 teaspoons soft brown sugar
2 tablespoons lemon juice
60ml peated whisky
Ice cubes, to fill
A fruity, slightly saline and very British take on the American classic. Note that the exact amounts of sugar and lemon juice will depend on your blackberries, so use this as a rough guide only.
1. Mash the fruit in the bottom of a rocks glass with the sugar, lemon juice and the merest splash of water to help dissolve the sugar.
2. Add the whisky and a generous amount of ice and stir well to combine and chill. Serve immediately. Drink slowly.
Like Wizzard, sometimes I wish it could be Christmas every day – in the kitchen at least. I adore everything about the food at this time of year, right up to the hundredth crumbly, buttery mince pie. I love the overload of sweetly spiced, faintly medieval dried fruit, the juxtaposition of thick, creamy bread sauce and salty little pigs in blankets, the vivid tangerines and glossy piles of nuts, and perhaps best of all, the excuse to drink sherry in your pyjamas and fill the house with the scent of mulled wine before the sun is over the yardarm. Even that cheap Advent calendar chocolate has its charms.
Bleak midwinter or not, the shops are bursting with lovely things in December – the zingy citrus season is well under way, inspiring fresh fruity salads and sticky, sugary candies; pomegranates, those lethally juicy garnet grenades, add a touch of pink sparkly glamour to everything they touch (including your new Christmas jumper), and cheeses made with rich summer milk are coming up from the cellar and into their own.
There are neat little sprouts and sweet fluffy parsnips, toasted chestnuts and salty oysters, plus a whole ark full of animals which have been fattening all autumn ready for this very moment. (Christmas isn't the jolliest of times for them, I'll admit.)
### Planning
If you're in charge of the cooking this Christmas, don't panic; remember, it's just a Sunday roast with silly paper crowns. In fact, it's probably easier than your average roast, because most people will already be stuffed to the gills, and half pickled with booze by the time they sit down to eat, making them blessedly easy to please. But this isn't to say you shouldn't make an effort; it is Christmas, after all.
Having already dealt with the best way to cook a roast potato and a turkey, I won't repeat myself here, though I have sneaked in a modest spin on the classic bread sauce, just because I love it too much to leave out. Instead, the recipes in this chapter are a non-conformist celebration of the flavours of the season – an alternative Christmas message for cooks, if you like.
## Bread and walnut sauce
##### serves 4 bread sauce lovers, 6 ordinary people
1 garlic clove, unpeeled
5 black peppercorns
1 bay leaf
550ml whole milk
100g shelled walnuts
A couple of stems of sage, leaves only
100g fresh white breadcrumbs
50g butter
I am a complete sucker for bread sauce. I love the stuff, and every Christmas I wonder why I don't eat it all year round (answer: because not all clothes are as forgiving as woolly jumpers). It's impossible to improve upon perfection, but this recipe, inspired by a Ligurian walnut sauce, gives it a jolly good go. The walnuts add a toasty, slightly bitter depth of flavour which works beautifully with turkey.
1. Squash the garlic with the back of your knife, skin and all, and put it into a medium saucepan with the peppercorns, bay leaf and milk over a medium heat. Bring to a bare simmer, then take off the heat and leave to infuse for 30 minutes.
2. Meanwhile, toast the walnuts in a dry frying pan until fragrant, then allow to cool slightly before whizzing to fine crumbs in a food processor; be careful not to overdo this last bit or you'll get walnut butter.
3. Scoop the garlic, bay leaf and peppercorns from the milk and discard. Finely chop the sage leaves. Put the milk back on a low heat and stir in the breadcrumbs, then cook until the sauce begins to thicken, stirring occasionally. Stir in the walnuts, butter and most of the sage, and season to taste. Top with the remaining sage and serve.
## Georgian aubergine rolls with walnut sauce and pomegranates
##### makes about 15
100g walnuts
¼ teaspoon dried fenugreek seeds
½ teaspoon coriander seeds
10g coriander leaves, roughly chopped
1 tablespoon dill, roughly chopped
1–2 small garlic cloves (depending on your tolerance for raw garlic)
½ teaspoon paprika
1 tablespoon red wine vinegar
4 tablespoons olive oil
1 tablespoon pomegranate molasses
½ teaspoon fine salt, or to taste
Seeds of ½ a pomegranate
1 large-ish aubergine, preferably fairly long rather than wide
Oil, to grease
Nutty, sweet and sour, with subtle spicing courtesy of the characteristic Georgian combo of fenugreek and coriander, these simple rolls make very handsome little vegetarian canapés, especially as they have the great benefit of sitting happily at room temperature for a few hours. They're also very good with the cheesebread here.
1. Toast the walnuts in a dry frying pan until fragrant, then allow to cool slightly. Repeat with the fenugreek and coriander seeds, then grind these to a powder.
2. Grind the walnuts to a coarse rubble in a food processor, then add the herbs, garlic and spices and whiz, adding the vinegar, olive oil, molasses and salt as you do so. Stir in the pomegranate seeds, keeping a handful back for decoration, then taste and adjust the seasoning if necessary.
3. Very thinly slice the aubergine lengthways – a mandoline is ideal for this if you have one. Heat a griddle pan on a high heat and brush the aubergine slices with oil, then cook in batches until lightly charred on both sides, and soft and floppy.
4. Put a scant teaspoon of walnut mixture on to the narrow end of an aubergine slice and roll the slice up around it, then put on a plate, seam down. Repeat with the rest of the slices, then serve scattered with the remaining pomegranate seeds.
## Brussels sprout, hazelnut and lemon zest salad with goat's cheese
##### serves 4–6 as an accompaniment
500g Brussels sprouts
75g hazelnuts
1 unwaxed lemon
4 tablespoons neutral oil
2 tablespoons hazelnut oil
1 small log of fresh, soft goat's cheese (optional)
Sprouts still get a bad rap in this country, though given the remarkable renaissance of kale, I have high hopes they'll rise again in time, because, done properly, they really are lovely little things. Nutty, sweet and wonderfully crunchy, they're far nicer in a fresh salad than boiled into submission with the turkey – just make sure you cut them really thinly so they aren't too chewy.
1. Pick the outer leaves from the sprouts, discarding any discoloured ones, and put into a serving bowl, then finely shred the inner leaves by holding each at the stalk end and slicing thinly until you reach the hard core. Alternatively, use a food processor if you can be bothered to find all the bits.
2. Toast the hazelnuts in a hot dry pan, being careful not to burn them, then roughly chop. Zest the lemon into a small bowl, then add its juice and both oils. Season well and whisk to combine.
3. Toss the sprouts, hazelnuts and dressing together in a salad bowl and season to taste, paying particular attention to plenty of black pepper.
4. Top with chunks of crumbled goat's cheese, if using.
## Spiced pumpkin and Parmesan pie with chestnuts
##### serves 6
##### _For the pastry:_
170g spelt flour
A pinch of salt
100g cold butter
1 tablespoon finely chopped sage leaves
1 egg yolk
##### _For the pie filling:_
1 small pumpkin (preferably a variety designed for cooking, not decoration, e.g. Crown Prince) or medium butternut squash
2 tablespoons maple syrup
¼ teaspoon ground cinnamon
¼ teaspoon ground ginger
½ teaspoon ground nutmeg
2 eggs, beaten
100ml double cream
50g Parmesan (or vegetarian alternative), finely grated
A knob of butter
150g chestnuts
100ml Madeira or port
This would make a good meat-free main course for Christmas dinner, or indeed a centrepiece for any of the big meals over the Christmas period – the rich sweetness of the roasted pumpkin and chestnuts is well balanced by the salty umami notes of the cheese, and both work fantastically well with the nutty spelt pastry. Serve with a green salad.
1. Heat the oven to 220°C/fan 200°C/gas 7. Cut the pumpkin or squash in half or quarters depending on the size, and scoop out the seeds and fibres inside. Place skin-side up in a roasting dish with a couple of tablespoons of water. Roast for about half an hour, until tender.
2. Allow to cool slightly, then peel off the skin and scoop the flesh into a food processor. Whiz until smooth, then put into a fine sieve or piece of muslin suspended over a bowl and drain for at least an hour, squeezing out the last of its liquid towards the end.
3. Meanwhile, make your pastry. Sift the flour into a mixing bowl, stir in the salt, then grate in the butter. Rub in using your fingertips until it resembles breadcrumbs, then stir through the chopped sage. Mix the egg yolk with 2 tablespoons of iced water, sprinkle half over the mixture, then stir with a knife until it comes together into a paste, adding a little more liquid if necessary.
4. Bring the mixture together with your fingertips, then roll out on a floured surface to the thickness of a £1 coin. Use it to line a 20–21cm tart tin and prick with a fork in several places. Cover with clingfilm and chill for 30 minutes. Heat the oven back up to 220°C/fan 200°C/gas 7.
5. Line the pastry case with greaseproof paper and fill with baking beans. Bake for 15 minutes, then remove the paper and beans and bake for another 5 minutes, until the base is pale golden. Remove from the oven and turn it down to 200°C/fan 180°C/gas 6.
6. Meanwhile, put 320g of pumpkin purée into a large bowl, discarding the excess liquid, and stir in 1 tablespoon of maple syrup and the spices, followed by the eggs. Gradually stir in the cream and cheese until you have a thick, creamy consistency, and season to taste (unless you prefer to steer clear of raw eggs), adding the remaining syrup if you like. Pour into the pastry case.
7. Bake for about 30 minutes, until the filling is set, but still slightly wobbly in the centre. Meanwhile, heat the butter in a small pan and add the chestnuts. Fry over a medium-hot heat until slightly coloured, then add the Madeira or port and reduce until sticky and glossy.
8. Ten minutes before the end of cooking, take the pie out of the oven, arrange the chestnuts on the top and return to the oven. Allow to cool on a wire rack for at least an hour before serving.
## Turkey mole poblano
##### serves 4–10 depending on the size of your turkey
##### _For the turkey (or use cooked leftovers and about 1.4 litres of chicken or turkey stock):_
2 litres chicken stock
45g butter
1 carrot
2 celery sticks
1 bay leaf
A handful of peppercorns
1 turkey crown (you can use a whole turkey if you have a pot large enough)
##### _For the mole sauce:_
200g lard
10 mulato chillies
6 ancho chillies
6 pasilla chillies
½ teaspoon cloves
1 cinnamon stick
1 teaspoon black peppercorns
¼ teaspoon aniseed
¼ teaspoon coriander seeds
½ teaspoon Mexican oregano
4 tablespoons sesame seeds
5 garlic cloves, roughly chopped
3 tinned tomatillos, roughly chopped
2 tinned plum tomatoes, roughly chopped
50g raisins
50g almonds
40g pumpkin seeds
1 stale corn tortilla
1 slice of stale white bread
150g dark chocolate, finely chopped
5 tablespoons soft dark brown sugar
This is a corker of a turkey recipe – not only an excellent way to use up the Christmas leftovers, but a very fitting dish for the big day itself, especially as you can make the sauce and poach the turkey a day or so ahead and combine the two before reheating.
The mole sauce, which comes from the mountainous Mexican state of Puebla, is fairly labour-intensive, but it's not difficult, and its richly spicy and bittersweet flavour repays the blood, sweat and tears. (While we're on the subject of sweat and tears, though tongue-tingling, it's not as fiercely fiery as the number of chillies in the recipe suggests, I promise.) The chillies and tomatillos are easily sourced online.
I like it with toasted corn tortillas, a crunchy salad (the Brussels sprout one in this chapter does nicely) and a dollop of soured cream to take the edge off the heat, but it would also be good with rice – and, when turkey isn't available, it will of course work with chicken too.
1. Put all the ingredients for cooking the turkey, apart from the turkey itself, into a large pan. Bring to a shiver, so the water trembles, but no bubbles break the surface, then add the turkey and poach gently for 15 minutes per 450g, until a thermometer poked into the thickest part of the breast reads 65°C. Allow to cool in the liquid.
2. Meanwhile, heat 75g of lard in a frying pan and fry the whole chillies, in batches if necessary, until fragrant. Tip them into a bowl, cover with warm water and leave to soak for an hour, then drain, saving the soaking water. Tip 225ml of this liquid back into the bowl with the chillies, and purée the chillies until smooth. Heat 75g more lard in the frying pan and fry the chilli purée for 10 minutes, then set aside (in the frying pan if you have another one for steps 3 to 5 – if not, scoop out the chillies and wash the pan before continuing).
3. Toast the cloves, cinnamon, peppercorns, aniseed and coriander in a dry frying pan until fragrant, then tip them into a pestle and mortar and add the oregano. Toast the sesame seeds until slightly golden, then add 3 tablespoons of them to the spices and set the remainder aside. Grind the contents of the pestle to a fine powder (alternatively you can use a spice grinder, if you have one).
4. Heat a little more lard in the pan, and fry the garlic, tomatillos and tomatoes until soft and pulpy, then set aside.
5. Clean the pan, then grease with lard and fry, in turn, the raisins, almonds, pumpkin seeds, tortilla and bread, greasing as necessary and tipping each out into the tomato mixture as soon as it's golden and toasted (raisins, which will not of course go golden, should be fried for about 20 seconds), and breaking the toasted tortilla and bread into small pieces once toasted, but before adding to the rest. Add the ground spices to the mixture and purée it all together until smoothish.
6. Put the purée into the frying pan with the chilli purée and stir in. Heat gently, then add the chocolate, and enough of the turkey cooking liquid to make it into a sauce – about 8 ladlefuls should do it. Stir in half the sugar, and simmer for an hour.
7. Meanwhile, strip the meat from the turkey. Once the sauce has been cooking for an hour, taste and season, adding the rest of the sugar if you think it needs it, and fold in the turkey. Cook for another half an hour before serving.
## Tangerine and pomegranate salad with spiced Pedro Ximénez syrup and Marcona almonds
##### serves 4–6
100ml Pedro Ximénez sherry
5 tablespoons honey
1 cinnamon stick
8 tangerines
A whole nutmeg, to grate
½ a pomegranate
25g blanched Marcona almonds, roughly chopped
Though I love Christmas pudding, there comes a point when even I crave something lighter. This vaguely Moorish number celebrates two fine seasonal fruits, the tangy tangerine and the sweet, crunchy pomegranate, doused in a sticky, spicy syrup – because it's still Christmas, after all. It can be made well ahead of time, and kept in the fridge until its big moment.
_NB: do try and get tangerines if possible; this won't work as well with bland, puffy satsumas, but clementines, or indeed larger oranges, will do at a pinch._
1. Put the sherry and honey into a small pan and heat gently, stirring to dissolve the honey. Add the cinnamon stick, bring to a simmer, and reduce for about 5 minutes until slightly thickened and syrupy. Take off the heat.
2. Peel the tangerines and slice horizontally. Remove any seeds you come across and tip any juices on to the serving plate on which you are arranging the tangerine slices.
3. Pour the syrup over the tangerines and grate over a little nutmeg. Cover and chill for at least an hour. Scatter over the pomegranate seeds and almonds just before serving.
The most alchemic of all ingredients, before science identified it as a fungus, yeast was naturally assumed to be a blessing from above. Only with the advent of the microscope in the seventeenth century did we discover that we had something physical to thank for all that bread and booze, but it wasn't until the nineteenth century, and the pioneering work of Louis Pasteur, that its role in the fermentation process was fully understood.
So, for much of human history, yeast remained a happy mystery, which is certainly how it still appears to many of us less scientifically minded bakers today. It would be a jaded soul indeed who failed to feel a little flutter when, on peeping underneath the tea towel, they found a swelling mass of living dough in place of the tight little ball of an hour before. Simple, everyday magic perhaps, but magic nonetheless.
There's an element of jeopardy to working with yeast that you just don't get with other leavening agents: chemicals, like baking powder, may be more reliable, but they're a lot less fun (and the results just don't compare). Yeast is amazing, and that's that. Without it we wouldn't have bread beyond the dense soda and cornbread kind (and, come to think of it, we wouldn't have any alcohol or chocolate either. It's frightening to think that human happiness depends so heavily on one tiny microorganism) or pizza or hot cross buns, doughnuts or bara brith, croissants, rum babas or even (shock horror) a decent crumpet. Life without yeast is a dull prospect indeed.
### The chemistry
Though it's always nice to have a bit of magic in your life, it's often easier to cook with ingredients if you understand at least the basics of how they work. Here follows a very simplified explanation.
Yeasts are single-celled, microscopic fungi. In order to reproduce, they need energy. To get that energy they feed on the sugars and starches in bread dough, producing carbon dioxide and alcohol as by-products. As long as you've kneaded your dough to develop the gluten in it, it will be strong enough to trap this carbon dioxide in tiny pockets of gas, which will gradually cause the dough to rise.
But where does this sugar come from in the first place? Well, the magic starts the minute you begin mixing. As soon as flour and water come together, the broken starch cells in the flour begin to absorb the water, prompting enzymes to digest their starch and turn them into sugar – food for the yeast.
Note that yeast is very fussy about temperature; it's at its most active at about 35°C, but a slower fermentation gives more interesting results. Roughly speaking, 27°C is about as warm as you should go, and those of us with chilly kitchens will be pleased to know that many professionals prefer a long, slow fermentation, even leaving the dough overnight in the fridge to develop its flavour. It's best not to have too many plans when you're working with yeast, because, like animals and children, it can be unpredictable. (Unlike animals and children, it doesn't like too much salt or sugar, so don't overdo it.)
Once the dough has been stretched to its elastic limit by the air inside, it's time to bake. As it heats up in the oven, those air pockets expand, and the alcohol and water within evaporates, producing yet more gas, and causing the loaf to rise. Once a stiff crust has formed around the loaf, it stops rising, and when the interior temperature gets up near 100°C, it's baked.
### Types of yeast
There's understandable confusion about the different types of yeast available, and how they work. See here for information about conversion between different sorts.
— Fresh yeast: Comes in small brown putty-like cubes which need refrigeration. It's more powerful than dried forms, but has two distinct disadvantages: first, it only lasts a couple of weeks, which means a lot of wastage unless you eat a lot of bread, and second, it can be difficult to find. Look in wholefood shops and bakeries or ask at the bakery counter of supermarkets.
— Active dried yeast: Small brown granules of dormant yeast cells which are reactivated by soaking in warm water, often with some sugar for extra encouragement, before they are added to the dough. Many recipes are a bit vague about how you know if your yeast is ready to use – a few stray bubbles are not enough. Wait until the surface of the water is covered with a mass of beige froth. If that doesn't happen, either your water was too cold or your yeast is old – check the use-by date on the packet.
— Quick yeast: As the name suggests, more modern processing methods mean that this is more lively than active dried yeast, and can be added to dough without the need for any reactivation. Probably the easiest kind to use.
— Wild yeasts: If you're feeling brave, you can catch your own yeasts from the atmosphere in your home by making a sourdough starter, or mother. It will need more regular care and attention than a pot of dry stuff though.
Bear in mind that, though it might be tempting to add more yeast than the recipe suggests to speed up the rising time, in fact, the less you can get away with, and the slower your fermentation, the better the flavour of the end result. Yeast has a strong taste of its own which you don't want to be the dominant one in your bread.
## Georgian cheesebread (khachapuri)
##### serves 8
10g active dried yeast
130ml warmish water
A pinch of sugar
300g plain flour, plus extra to dust
1 teaspoon salt
70ml plain whole milk yoghurt
1 tablespoon olive oil, plus extra to brush
200g firm mozzarella (of the kind used for pizza)
200g feta
1 teaspoon black onion seeds (optional)
My favourite culinary discovery of recent years is the food of Georgia, the former Soviet state whose cooking combines southern ingredients with a certain northern heartiness. Khachapuri is the love-child of a cheese pie and a deep-dish pizza – molten cheese barely encased in an ever-so-slightly fluffy crust, and best devoured as soon as it's cool enough to handle.
If you ever find any suluguni cheese, the authentic choice, snap it up – but the mix below makes a decent substitute.
1. Mix the yeast, a little of the water and the sugar into a loose paste. Leave until the surface is covered with tiny bubbles, indicating the yeast has begun its work.
2. Meanwhile, put the flour and salt into a mixing bowl and whisk together to combine. Once the yeast is ready, mix it with the remaining water, the yoghurt and oil and stir into the flour to make a soft dough.
3. Knead the dough on a clean work surface or in a mixer fitted with a dough hook until smooth and bouncy; this should take 5–10 minutes. Cover and put in a draught-free, warmish place for about an hour and a half, or until roughly doubled in volume.
4. Scoop the risen dough on to a clean work surface, punch the air out of it, then cover and leave until doubled again, which should take about 40 minutes.
5. Meanwhile, grate the cheeses and heat the oven to 200°C/fan 180°C/gas 6.
6. Roll out the dough on a lightly floured surface into a rough 30cm circle. Put the cheese in the centre, then bring one edge of the dough into the middle, fold the next section of the edge into the middle, and continue like this all the way round, pinching it together to seal. Slide on to a lightly greased baking tray. Brush with olive oil and scatter with onion seeds if using, then bake for about 35–40 minutes, until golden.
## Buckwheat pikelets
##### makes 6 small or 3 large
180ml milk
1 teaspoon quick yeast
70g buckwheat flour
40g plain flour
½ teaspoon soft light brown or caster sugar
1 egg, beaten
½ teaspoon salt
Butter, to grease
Pikelets are a free-form Welsh variety of crumpet – made without the customary rings, they tend to be thinner and more flexible than the ordinary sort. This recipe, inspired by the buckwheat pancakes popular in Brittany and Belgium, has a richer, earthier, more savoury flavour than the white kind – I love them for breakfast (you can make the batter the night before), often with a fried egg and some spinach.
If you have buttermilk left over from the butter recipe here, you can use it to replace some of the milk. Of course, the butter itself would be very welcome on top.
1. Warm the milk. Meanwhile, whisk together the yeast, flours and sugar in a large bowl, then whisk in the milk followed by the egg; the mixture should be a very loose dough or thickish batter. Cover and leave in a warm place for a couple of hours, until the surface is covered in tiny bubbles.
2. Whisk in the salt. Grease a small frying pan on a medium-high heat with butter, tipping off any excess for re-use. Test the temperature with a little of the batter; it should sizzle as it hits the pan.
3. Add ¾ of a ladleful to the pan for smaller pikelets, or a generous ladleful for larger ones, and cook until the base is dark and the top dry and covered with small holes; this will take between 3 and 5 minutes. Flip and toast the top for a couple of minutes until golden, then serve or keep warm while you repeat with the rest of the batter.
## Pissaladière
##### serves 8–10
1.5kg onions
4 tablespoons olive oil
A pinch of sugar
1 teaspoon herbes de Provence
100–200g anchovies in oil (depending on the strength of your love for anchovies)
2 small garlic cloves
A handful of stone-in black olives
1 red chilli, sliced
##### _For the base:_
450g strong white flour
5g quick yeast
1 teaspoon sugar
1½ teaspoons fennel seeds
1 teaspoon salt
60ml olive oil, plus extra to grease
About 240ml warm water
Polenta/cornmeal, to sprinkle
A Niçoise take on a pizza (it's only 30km from the Italian border, after all), this is an irresistible combination of crunchy flatbread base, salty anchovies and sweet onions. And make no mistake: those onions should be piled on top like a Chicago pizza pie, rather than scattered delicately in the Neapolitan manner – this is one time you'll be thankful you invested in a mandoline.
1. To make the base, whisk the flour, yeast, sugar, fennel seeds and salt together in a large mixing bowl. Make a dip in the middle, pour in the olive oil and enough of the water to make a dough, then turn out on to a lightly greased work surface and knead for about 10 minutes, until it feels smooth and elastic. Rub the bowl with oil and turn the dough in it to coat, then cover and leave in a draught-free place until doubled in size – about 1½–2 hours.
2. Meanwhile, get to work on the onions. Finely slice, which will take some time (I use a mandoline); then heat the oil in a large frying pan over a medium heat and add the onions and a generous pinch of sugar. Cook, stirring often, until very soft and golden, but not at all brown. Season lightly and stir in the herbs.
3. Put the anchovies and garlic into a pestle and mortar and mash until fairly smooth. Pound in the oil from the anchovies, a little at a time, to make a paste. Grease a large baking tray and sprinkle with cornmeal.
4. Once the dough has doubled in size, tip out of the bowl, knock back, then stretch or roll into a rectangle to fit the baking tray, pressing out any air. Heat the oven to 240°C/fan 220°C/gas 9.
5. Spread the anchovy paste over the dough, then top with the onions. Arrange the olives and chilli rings on top and bake for about 20 minutes, until the edges are golden.
## Marmite and cheese mini doughnuts
##### makes about 18
225g strong white flour, plus extra to dust
7g quick yeast
5g caster sugar, plus extra to dust
20g unsalted butter, at room temperature, chopped, plus extra to grease
2 tablespoons Marmite
65ml milk
1 egg, beaten
40g Parmesan or other very hard cheese
100g mature Cheddar, grated
40g Gouda, grated
1 tablespoon cornflour
2 tablespoons whole milk
2 litres vegetable or sunflower oil, to cook
Killer party food – if the idea of savoury doughnuts troubles you, just think of them as slightly more substantial versions of that classic French nibble, the gougère. Best served warm, while the cheese is still gorgeously gooey.
1. Combine the flour, yeast and sugar in a large bowl and mix well. Put the butter and Marmite into a small pan with the milk and 45ml of water, and heat gently, stirring until they have melted. Pour this into the mixing bowl, along with the egg, and stir until it comes together into a dough: it should be soft and slightly sticky.
2. Tip on to a lightly floured surface, or (better still as the mixture is soft) into a mixer fitted with a dough hook, and knead until smooth and elastic (about 10 minutes in a mixer, more by hand). Put into a lightly greased bowl, cover with a damp tea towel, and leave in a warm place until doubled in size (about an hour). Meanwhile, finely grate the Parmesan and spread out on a plate to dry out slightly.
3. Shape the dough into balls of about 20g each, folding each side tightly into the centre in turn, rotating as you go, then turn each ball over and put it on a lightly floured baking tray or board, spacing them well apart. Cover and leave to rise again for 45 minutes.
4. Meanwhile, toss the grated Cheddar and Gouda with the cornflour, and put in a medium pan over a low heat. Add the whole milk and allow the cheeses to melt, stirring regularly, until smooth. Keep warm.
5. Heat the oil in a large pan or deep-fat fryer to 160°C. Cook the doughnuts in batches for about 2 minutes on each side, until golden, then blot with kitchen paper and sprinkle with Parmesan. Make a small hole in the side of each, and use a piping bag to inject a splodge of cheese. Eat immediately, while they're still warm.
## German plum bread with almond cream
##### makes a 23cm cake
115ml whole milk
7g active dried yeast
25g caster sugar
250g plain flour
¼ teaspoon ground nutmeg
5g salt
30g butter, at room temperature
1 egg, beaten
25g mixed peel
Oil, to grease
##### _For the almond cream:_
125g butter, at room temperature
65g soft light brown sugar
60g caster sugar
100g ground almonds
25g plain flour
2 eggs, beaten
25g flaked almonds
2 tablespoons whisky, brandy or rum
##### _To top:_
6 plums
1 tablespoon demerara sugar
Cake doesn't quite do this defiantly bready, almost savoury fruit number justice. Based on the German _pflaumenkuchen_ , but with added almonds and a touch of very British peel and spice, this is hearty afternoon tea territory, rather than any delicate dessert. I'd describe it as the love-child of a hot cross bun and a fruit tart.
If you happen to come across any yellow plums, a mixture of them and the purple sort looks very pretty here – traditionally it would be made with damsons, in which case increase the amount of demerara sugar at the end.
1. Heat the milk to blood temperature, then whisk some of it into a loose paste with the yeast and ½ teaspoon of the sugar. Leave until the surface of the paste is covered with a froth of tiny bubbles.
2. Meanwhile, whisk together the remaining sugar with the flour, nutmeg and salt and cut the butter into it in small pieces. Rub the butter into the dry ingredients. When the yeast is ready, stir in the remaining milk, then stir the whole lot into the flour, followed by the egg, to make a softish dough.
3. Once it comes together, tip on to a clean surface and knead until smooth and springy (roughly 5–8 minutes). Scatter the peel over the work surface and knead into the dough until evenly distributed. Oil the bowl lightly, turn the dough in the oil, then cover and leave for about an hour and a half, until doubled in size.
4. Meanwhile, beat the butter for the almond cream until softened, then beat in the sugars and a pinch of salt until the mix is light and fluffy. Mix in the ground almonds and flour until well combined, followed by the eggs, a little at a time. Finally stir in the flaked almonds and the booze.
5. Cut the plums in half, remove the stones, then cut each half into wedges. Grease a 23cm springform tin. Knock the air out of the risen dough, then put the dough into the tin and use your fingers to spread it out evenly across the base.
6. Top the dough with the almond cream, followed by the plum wedges (I favour two concentric circles), then cover with a clean tea towel and leave for 40 minutes. Meanwhile, heat the oven to 200°C/fan 180°C/gas 6.
7. Sprinkle the demerara sugar on top of the plums and bake for about 40–45 minutes, until golden, then allow to cool slightly before serving.
## Wholesome loaf
##### makes 1 loaf
2 tablespoons treacle (about 55g)
325ml tepid water
7g active dried yeast
300g very strong wholemeal flour
140g rye flour
10g salt
100g seeds of your choice
Oil or butter, to grease
Though I love the robust chewiness of sourdough, sometimes I crave the workaday, brown nuttiness of the seed-strewn breads that are ordinary fare in Germany and Scandinavia, but frustratingly hard to find here. This one is lovely with soup, and makes outstanding cheese on toast, but, most importantly of all, it's fabulous with a generous wodge of butter.
1. Dissolve a teaspoon of the treacle in the water, then whisk in the yeast. Leave in a warmish place until the top is copiously frothy. Meanwhile, whisk together the flours and salt in a large bowl. Stir the remaining treacle into the yeasty water to dissolve and then stir this into the flours to make a soft, sticky dough – if it feels at all dry, add a little more water.
2. Tip on to a clean work surface (no need to flour) and knead until the dough starts to feel silky and elastic (and springs back when prodded), then scatter the seeds on the work surface and work into the dough until evenly distributed. Form the dough into a ball and put into a lightly greased bowl. Cover and leave in a draught-free place (I use a cold oven) until doubled in size.
3. Grease a roughly 20 x 10cm (1lb) loaf tin. Flatten the dough into a rough rectangle, short side parallel to you, then fold the bottom third into the middle, the top third down on top of it, then fold the whole thing over once more. Pinch the fold closed with your fingers around the bottom and the sides, then put into the loaf tin, seam-side down. Cover and leave to rest in the same place for about an hour to an hour and a half, until slightly risen (it won't double, being rye).
4. Heat your oven as high as it will go. Put a baking tray or pizza stone in there if you have one, and a roasting tin in the bottom. Boil a little water in a kettle. Put the loaf into the hot oven and immediately pour the hot water into the roasting tin underneath. Bake for 5 minutes, then turn the temperature down to 220°C/fan 200°C/gas 7 and bake for 45 minutes, rotating once. Allow to cool in the tin.
You might guess citrus zest would taste bitter, like the pith on a sloppily peeled orange, but the reality is intensely aromatic, even zingy, at once somehow fresher, and more complex – often almost floral – than the fruit itself, which can be one-dimensional in its acidity.
Each of the citrus zests has its own unique flavour, as different as the fruits themselves. Lime is fruitier and slightly more bitter than the fragrant lemon, orange is rich and marmaladey, while grapefruit has a peppery freshness and the pomelo is almost too perfumed for its own good – the connoisseur's choice, perhaps. The elusive bergamot, available for only a few weeks every year, is very much like Earl Grey tea (unsurprisingly, given that fine brew gets its flavour from the oil of this sour sort of orange).
Once you discover citrus zest, there's no limit to its uses, both sweet and savoury, whether that's adding some lightness and sunshine to a rich meat stew, like the lemon zest and parsley gremolata traditionally served with osso bucco; a touch of bitter depth to a sugary icing; or as a highly perfumed, sticky piece of confectionery in its own right in the form of candied peel.
### The anatomy of citrus peel
It may have been the smell of citrus zest that first attracted us to the fruit, long before selective breeding led to the introduction of sweet eating varieties. The flesh, nowadays largely considered the important bit, is protected by a thick spongy layer called the albedo, but better known as pith, which cushions the delicate collection of tiny juice sacs within.
Importantly for our purposes, surrounding the pith is a thin colourful layer of glands that secrete and store volatile oils, which are responsible for the distinctive scent and taste of the zest. If you've ever squeezed a twist of lemon peel over a martini, and seen beads of fragrant oil collect on the surface, you'll know this already – and if you haven't, then you should probably stop reading and go and do so at once.
Most citrus these days is waxed after picking, to stop the evaporation of moisture and keep it fresh for longer. To avoid ingesting this (harmless, but not particularly pleasant-tasting) substance, go for the unwaxed sort, which are available in most greengrocers and supermarkets, or choose organic fruit, which isn't treated in this way. If you can only get the ordinary stuff, a scrub with wire wool under hot water helps.
### Tools
When removing zest, avoid taking any of the bitter pith with it unless you're making candied peel or marmalade. This means using a fine grater (for small pieces that will blend easily with other ingredients), a citrus zester (for long thin strips of the kind you might want to drape over a salad, or some pasta), or a vegetable peeler (for long strips you might use to garnish a drink with, or for a marinade). At a pinch, you can use a sharp knife, but this requires a very steady hand.
See also: Spotted dick (here), Crab with ricotta and lemon zest and an elderflower and cucumber salad (here), Brussels sprout, hazelnut and lemon zest salad with goat's cheese (here), German plum bread with almond cream (here).
## Slow-roast tomato pasta with lemon salt, ricotta and basil
##### serves 4
10 medium tomatoes
Extra virgin olive oil, to drizzle
320g squid ink spaghetti, linguine or other long pasta
200g ricotta
A small bunch of basil
##### _For the lemon zest salt:_
60g flaky sea salt
Zest of 2 unwaxed lemons
OK, so this takes a while for what is, essentially, a very simple pasta, so feel free to substitute semi-dried tomatoes packed in oil if you prefer – I like to do a big batch of tomatoes and use them over several days on toast, in salads and in dishes like this, which makes it worthwhile. The lemon salt is well worth the effort, though; you'll be finding uses for it for ages, from fish to ice cream (try it).
Essentially, however, even if you buy in the tomatoes and mix salt with fresh lemon zest, this dish is still worth a go; it's so much more than the sum of its few parts. You can use ordinary pasta, but the colour of the ricotta, tomatoes and herbs is far more striking against the black of the squid ink variety – don't be tempted to toss it all together or you'll spoil the effect; it's for the diner to mess up.
1. Start by making the salt. Heat the oven to 120°C/fan 100°C/gas ½ and mix the salt and zest together well. Spread out on a lined baking sheet and bake for an hour. Remove, but leave the oven on.
2. Cut the tomatoes in half and brush with olive oil. Sprinkle with some of the lemon salt and bake for 4 hours.
3. Cook the pasta in plenty of boiling salted water until al dente, then drain and toss with olive oil and a good pinch of the lemon salt. Divide between bowls and arrange the tomatoes, a few spoonfuls of ricotta and some basil leaves artistically on top, together with a final pinch of lemon salt. Serve immediately.
## Mediterranean ceviche
##### serves 2
½ a small red onion, finely sliced
300g skinless sea bass or sea bream
Juice of 4 limes
4 tangerines or clementines
1 small red chilli, deseeded and finely chopped
1 small head of fennel, finely sliced
3 tablespoons finely chopped candied peel
A handful of mint leaves, torn
Candied peel in ceviche is an idea I shamelessly nabbed from my friend Hen, who once had a sideline running a pisco and ceviche bar in a Turkish-Cypriot social club in Hackney. As you do. The bittersweet flavour works surprisingly well with the creamy fish and peppery mint – and once I'd added orange, my mind began to run along more Mediterranean-type flavours, like aniseedy fennel. They might not recognize it in Lima, but I bet the Turks would dig it.
1. Cover the onion with cold water and leave to soak for 5 minutes, then drain well. Meanwhile, cut the fish into large bite-sized pieces and sprinkle with ½ teaspoon of salt.
2. Stir the lime juice, the juice of 2 of the tangerines and the chilli into the fish, then add the drained onion and leave for 10 minutes.
3. Meanwhile, peel the remaining tangerines and cut into thin slices. Add to the fish along with the fennel, candied peel and mint leaves, reserving a little of these as garnish, and toss together well. Divide between plates, garnish and serve immediately.
## Peach and mozzarella salad with crispy lemon zest and basil
##### serves 2 (easily scaled up)
1 large unwaxed lemon
6 tablespoons olive oil
2–3 fairly ripe peaches or nectarines
1 ball of buffalo mozzarella
4 sprigs of basil
Frying something as fresh and aromatic as lemon zest may sound counter-intuitive, but in fact it only enhances the flavour, releasing all sorts of lovely volatile oils and rendering it deliciously crisp in the process. I love the combination of creamy, lactic mozzarella with sweet, slightly acidic fruit – for a more robust dish, swap the peaches for ripe tomatoes and serve the lot on sourdough toast.
1. Peel the zest from the lemon in strips, keeping them as thin as possible to avoid the bitter white pith. Scrape any pith off the peel with a sharp knife, then cut the strips into long thin lengths. Put a plate lined with kitchen paper by the hob.
2. Heat the oil in a small frying pan and, when hot, fry the zest for about 30 seconds, until just beginning to crisp and colour. Use a slotted spoon to scoop on to the paper to drain, and allow the oil in the pan to cool.
3. Juice the lemon and whisk the cooled oil into 2 tablespoons of the juice. Season to taste.
4. Slice the peaches and divide between two small plates in a circle. Sprinkle with a little dressing, then tear the mozzarella over the top. Spoon over a little more dressing, season, and sprinkle with the lemon zest strips and torn basil leaves to serve.
## Candied peel
##### makes enough to fill a 2 litre jar
1 pomelo (or add an extra grapefruit and orange)
1 grapefruit
1 orange
1 lemon
1 lime
500g sugar
I'm on a one-woman crusade to rescue candied peel from its seasonal niche. Though it's pretty great in Christmas cakes and hot cross buns, its aromatic bittersweet flavour is also welcome in everything from workaday British dishes like porridge and spotted dick (see here) to more exotic fare like ceviche (see here). Or you can just serve it as a little something after dinner with coffee.
1. Cut the base off the pomelo and stand it upright, then take off the peel in long wide strips, getting a good amount of pith with it (this means the finished peel will be tender, rather than hard and chewy). Cut into slimmer strips (unless you'd prefer to keep them in larger pieces), then repeat the process with the other fruit.
2. Put the peel into a large pan and cover with cold water. Bring to the boil and simmer until soft – this should take about an hour. Drain, cover with fresh water, bring to the boil and simmer for a further 15 minutes, then drain again.
3. Put the sugar into the pan with 250ml of cold water and bring to a simmer, stirring to dissolve the sugar. Add the peel, then turn down the heat and simmer until the peel has absorbed most of the syrup – this should take about an hour and a half. Be careful it doesn't stick towards the end.
4. Scoop out on to a rack set above something easy to clean (to catch any drips), and leave somewhere dry and fairly warm to harden overnight.
## Pistachio and pink grapefruit cake
##### makes a 20cm cake
150g shelled unsalted pistachios
200g golden caster sugar
1½ pink grapefruits
50g polenta
2 teaspoons baking powder
½ teaspoon ground cardamom
A pinch of salt
4 eggs
200g extra virgin olive oil
3 tablespoons honey
10g shelled salted pistachios (about 20), roughly chopped
Fresh and bittersweet, this gluten-free cake is a very spring-like affair, and the syrup means it keeps well for a week.
1. Grease a 20cm loose-bottomed cake tin. Whiz the unsalted pistachios in a food processor until fairly finely ground. Add the sugar and the finely grated zest of the grapefruits and whiz briefly to combine, then stir in the polenta, baking powder, cardamom and salt.
2. Whisk together the eggs and oil and stir these into the dry ingredients. Scrape into the tin. Put into the oven and turn it to 200°C/fan 180°C/gas 6 (trust me, you don't need to preheat it for this one). Bake for 40–50 minutes until set on top.
3. Towards the end of the cooking time, heat the honey in a small pan with the juice of the half grapefruit. Bring to a simmer, then take off the heat.
4. Put the cake, still in its tin, on to a plate with a rim and pour over the syrup, a little at a time, adding more once each lot has been absorbed. Leave to cool before turning out and sprinkling with the chopped salted pistachios.
## Chocolate orange cheesecake
##### serves 8
60g butter
250g dark chocolate digestives
3 tablespoons cocoa powder
75g dark chocolate chips or chunks
500g ricotta
200g cream cheese
5 tablespoons honey
2 blood oranges (see intro)
1 tablespoon marmalade
A fresher, lighter take on the traditional baked cheesecake, which, if I'm honest, has never really won me round, this uncooked ricotta-based version balances creamy cheese with a bittersweet dark chocolate base and juicy, ice-cold orange slices. Blood oranges are the ideal when in season, but if not the ordinary variety will do just fine.
1. Melt the butter, and bash the biscuits up roughly. Put them into a food processor and whiz into coarse crumbs, then pour in the melted butter and whiz until smooth. Mix in the cocoa powder and a good pinch of salt, then stir through the chocolate chips.
2. Grease a 20cm loose-bottomed cake tin, and press the base mixture into it, pushing down firmly. Refrigerate while you make the topping.
3. Drain the ricotta well in a sieve, then put into a bowl, beat in the cream cheese, and stir in honey to taste. Zest one of the oranges into the mixture, then spoon on top of the biscuit base, cover and chill for an hour or so.
4. Heat the marmalade gently until liquid. Peel the oranges and remove as much pith as possible. Thinly slice into rings and arrange on top of the cheesecake, then brush with the marmalade. Return to the fridge and chill for at least 2 hours, until ready to serve.
## Pomelo sour
##### makes 1
50ml gin
75ml pomelo juice (freshly squeezed)
2 teaspoons sugar syrup
10ml egg white
A healthy shake of bitters
½ teaspoon finely grated pomelo zest, plus 1 thin strip of zest to garnish
Since belatedly discovering this massive fruit in south-east Asia, I've fallen in love with its perfumed flavour, which works beautifully in this lighter take on the classic whisky cocktail. If you can't find pomelo, you can substitute grapefruit, though you may need to add more sugar depending on the colour.
Sugar syrup is easy to make: dissolve sugar in an equal weight of water, then bring to the boil and simmer for about 5 minutes, until reduced and syrupy. Allow to cool before using.
1. Put all the ingredients apart from the strip of zest into a cocktail shaker with plenty of ice. Shake vigorously, then strain into a cold glass and top with a twist of the zest.
## Acknowledgements
Thanks to: as ever, my fabulous editor Juliet Annan for her tireless enthusiasm and great patience, and Anna Steadman for her infinite calm. Much gratitude is also due to Helen Cathcart and River Thompson for making my food look so very beautiful, with more than a little help from Sophie Missing, an immensely talented cook and food stylist (and an invaluable source of trashy coffee and _Archers_ conspiracy theories) – thank you all for making it so much fun (and for the canine first aid).
Sarah Ballard and Zoe Ross, for all the important stuff, and for being generally kind and wonderful in the face of the odd meltdown.
Annie Lee for her eagle eyes, Giulia Garbin for the lovely illustrations and design, James Blackman and Ellie Smith for managing everything with such aplomb.
All at the _Guardian_ , especially Susan Smillie for putting up with my excuses, and for always making me laugh, Bob Granleese for the _vieille prune_ (well, I did promise), and Mina Holland for her wise advice.
Caroline Stafford and Phill Price for their very patient help with my muddled chemistry, Hen Clancy for her ceviche generosity, Phillip Souta for his expertise in the matter of Eastern European dumplings, Jot Davis for help with Italian translation, Vanessa Maurice-Williams and her hens for finishing the first triple chocolate malt cake before I'd even fetched a knife, and Alex Matts and Claire Cohen for feeding me wine until the alphabet made sense.
Richard and Gemma for keeping a roof over my head (and more importantly, a kitchen under my feet) while writing this, and Pam and John for their immense generosity and generous dogsitting. Also, of course, to all my other friends who have kindly allowed themselves to be guinea pigs, and provided such wonderful support and sympathetic ears throughout the process, including Aimee, Alastair and Swanley, Alice and Emily, Anna and Jot, Ali and Iain, Ed, Emma and Tim, Greg, Helen and Jonathan, Ian and Beth, Jacqueline, James, Jemma and Curtis, Julia and Jay, Kelda, Lily and Marcus, Lorna, Lucinda, Matt, Olivia and Nick, Pia, Phillip and Anna and Tiffany and Sam, and everyone I've no doubt forgotten, plus all the nice people on social media for cheering me up on a daily basis.
My whole family, especially my parents, for everything. And, lastly, thanks to Wilf, who has simultaneously driven me mad and kept me just about sane: no more kale chips, I promise.
## Stockists and links
Yeast conversions: dovesfarm.co.uk
Cool Chile Company (for Mexican ingredients like tomatillos): coolchile.co.uk
Greens of Devon (for edible flowers): greensofdevon.com
Sous Chef (for all sorts of specialist ingredients): souschef.co.uk
South Devon Chilli Farm: southdevonchillifarm.co.uk
South West Garlic Farm (for black garlic): southwestgarlicfarm.co.uk
## THE BEGINNING
Let the conversation begin...
Follow the Penguin: Twitter.com/@penguinUKbooks
Keep up-to-date with all our stories YouTube.com/penguinbooks
Pin 'Penguin Books' to your Pinterest
Like 'Penguin Books' on Facebook.com/penguinbooks
Listen to Penguin at SoundCloud.com/penguin-books
Find out more about the author and discover more stories like this at Penguin.co.uk
FIG TREE
UK | USA | Canada | Ireland | Australia
India | New Zealand | South Africa
Fig Tree is part of the Penguin Random House group of companies whose addresses can be found at global.penguinrandomhouse.com.
First published 2016
Text copyright © Felicity Cloake, 2016
Photographs copyright © Helen Cathcart, 2016
Illustrations copyright © Giulia Garbin, 2016
The moral right of the copyright holders has been asserted
Design by Giulia Garbin
A CIP catalogue record for this book is available from the British Library
ISBN: 978-0-241-27876-5
##### N is for Noodles
##### *
Interestingly, Italians always seem to have had a taste for al dente pasta; a collection of recipes from 1475 directs the reader to cook pasta for just as long as it takes to say three paternosters (AKA the Lord's Prayer), which isn't very long at all, even for the fresh variety.
# Contents
1. Cover
2. Title Page
3. Dedication
4. Introduction
5. A is for **Almond**
1. Spicy almond butter dressing
2. Chilled almond soup with mojo rojo
3. Sicilian almond and tomato pesto
4. Chicken korma
5. Salted almond toffee
6. Almond, honey and fig cake
6. B is for **Blue Cheese**
1. Polenta with Gorgonzola and honeyed hazelnuts
2. Leek and Stilton steamed pudding
3. Roquefort and honey cheesecake with walnut and pear
4. Wedge salad with quick pickled onions and buttermilk blue cheese dressing
5. Blue cheese creamed spinach
6. Poached plum crumble with blue cheese ice cream
7. C is for **Caramel**
1. Roast duck with miso caramel
2. Vietnamese caramel and pork hotpot
3. Banoffee split
4. Pecan, bourbon and salted caramel cookies
5. Salted peanut caramel crispy cakes
6. Walnut caramel cream pie
8. D is for **Dumplings**
1. Canederli alla tirolese with Parmesan broth
2. Venison and port casserole with Stilton dumplings
3. Queenie and samphire crystal dumplings
4. Chickpea and spinach dumplings in a tomato and yoghurt sauce
5. Southern chicken and jalapeño dumplings
6. Spotted dick
9. E is for **Eggs**
1. Bacon devilled eggs
2. Deep-fried quail's eggs with celery salt mayonnaise
3. Baked eggs, creamed corn and spinach
4. Omelette farcie
5. Rum flip
6. Pandan and coconut burnt creams
10. F is for **Fat**
1. Cultured butter
2. Bacon refried beans
3. Red-braised pork
4. Lamb 'porchetta' with salsa verde
5. Bourbon and bacon butter
6. Coconut ice magic
11. G is for **Garlic**
1. Confit garlic, thyme and Parmesan tart
2. Hot and sour seafood soup with black garlic aïoli
3. Brined and slow-cooked lamb with flageolet beans, white wine and garlic
4. Duck fat garlic bread
5. Georgian griddled chicken on toast
6. Grand aïoli for heretics
12. H is for **Hot**
1. Blackened jalapeño and avocado slaw
2. Sweet sriracha cakes
3. Red lentil and tomato soup with harissa
4. Green chilli, New Mexico style
5. Lemongrass and chilli tofu
6. Meatball curry
7. Mexican chilli chocolate mousse
13. I is for **Ice**
1. Simple banana and peanut butter ice
2. Salted brown butter and buttermilk ice cream
3. Avocado and double lime sorbet
4. Rum punch ice cream
5. Simple persimmon, lime and ginger sorbet
6. Frangelico and espresso granita shots
7. Ricotta ice cream terrine with fig molasses
14. J is for **Junk**
1. Sweet paprika cheesy chips
2. Buttermilk onion rings
3. Vietnamese crispy pork and prawn pancakes (bánh xèo)
4. Texan queso dip
5. Homemade butterscotch 'Angel Delight'
6. Marathon pie
15. K is for **Kale and other greens**
1. Spinach soup with spiced anchovy butter toasts
2. Spicy cashew kale crisps
3. Fava e cavolo nero
4. Spinach, ricotta and feta tart with hard-boiled eggs
5. Homemade orecchiette with sausage and kale
6. Chard gratin with a Gruyère crumb
16. L is for **Leaves**
1. Nice salad
2. Green herb cauliflower 'tabbouleh'
3. Three pea salad with lemon butter dressing
4. Black kale salad with anchovy dressing
5. Chicory with beetroot, goat's cheese and walnuts
6. Mustard leaves and little gem with bacon vinaigrette and toasted walnuts
17. M is for **Malt**
1. Moules marinières écossaises
2. Single malt loaf
3. Rye and porter porridge with bacon, leeks and cheese
4. Malted milk creams
5. Triple chocolate malt cake
6. Black and white shake
18. N is for **Noodles**
1. Japanese carbonara
2. Baked ziti with sausage and kale
3. Spicy peanut butter noodles with sprouting broccoli
4. Beetroot noodles with goat's cheese, toasted walnuts and baby kale
5. Spätzle with cheese and onion
6. Spaghetti with courgette noodles and Parmesan
7. Vietnamese bún cha
19. O is for **Octopus and other cephalopods**
1. Cambodian stuffed frog-style squid
2. Coconut squid
3. Black risotto with eggs
4. Braised octopus with chickpeas and coriander
5. Maryland-style octopus sandwich
20. P is for **Potatoes**
1. Baked potato soup
2. Chorizo baked potatoes with avocado crema
3. Aloo tikki Scotch eggs
4. Northern potato salad
5. Potato, black kale and anchovy pie
6. Aligot
7. Tattie scones à la Arnold Bennett
8. Potato and cauliflower curry with coconut and cashew cream
21. Q is for **Quiver**
1. Tricolore jellies
2. Goat's cheese custards with honey-glazed hazelnuts and black olive toasts
3. Jelly cherry jubilee
4. Gooseberry and buttermilk pots
5. Caribbean milk punch jelly
6. Almond and rosewater blancmange
22. R is for **Rhubarb**
1. Mackerel and samphire tartare with pickled rhubarb
2. Pork rillettes with rhubarb chutney
3. Persian lamb and rhubarb stew
4. Rhubarb Bircher muesli
5. Rhubarb and marmalade sticky pudding
6. Rhubarb and custard trifle with an amaretto syllabub
7. Rhubarb gin granita
23. S is for **Smoke**
1. Charred squash soup with zhoug and toasted pumpkin seeds
2. Muhammara
3. Smoked cod's roe and beetroot dip
4. Kentucky pulled lamb
5. Kichri-kedgeree
6. Smoky black dal with eggs
7. Smoked mackerel and charred cauliflower gratin with smoked chilli breadcrumbs
8. Bacon and split peas with a quick mustard pickle
24. T is for **Toast**
1. Burnt toast powder
2. White beans on toast
3. Duck and sherry pâté with pickled figs and pistachios
4. Southern cheese on toast
5. Salmon and coriander tartare with avocado and wasabi cream on toasted rye
6. Mexican torta with black beans, chorizo, avocado and goat's cheese crema
25. U is for **Umami**
1. Shrimp and grits with bacon and Parmesan
2. Courgette fritters with bagna cauda hollandaise
3. Ox cheeks braised in Marmite
4. Chargrilled Caesar salad
5. Crunchy soy-braised pig's tails
6. Broccoli and edamame salad with Korean dressing
7. Dashi pickles
8. Green lamb kebabs
26. V is for **Violets and other edible flowers**
1. Crab with ricotta and lemon zest and an elderflower and cucumber salad
2. Fig and goat's cheese olive oil flatbread with lavender honey
3. Geranium and apple snow
4. Marzipan violets
5. Scandi saffron buns
6. Shrikhand, or spiced saffron and pistachio yoghurt
7. Rose petal vodka
27. W is for **Wild**
1. Roast new potatoes with wild garlic dressing
2. Scrambled eggs with crab and samphire
3. Wild garlic bread
4. Michaelmas mess
5. Almond rice pudding with blackberry and apple compote
6. Bramble old-fashioned
28. X is for **Xmas**
1. Bread and walnut sauce
2. Georgian aubergine rolls with walnut sauce and pomegranates
3. Brussels sprout, hazelnut and lemon zest salad with goat's cheese
4. Spiced pumpkin and Parmesan pie with chestnuts
5. Turkey mole poblano
6. Tangerine and pomegranate salad with spiced Pedro Ximénez syrup and Marcona almonds
29. Y is for **Yeast**
1. Georgian cheesebread (khachapuri)
2. Buckwheat pikelets
3. Pissaladière
4. Marmite and cheese mini doughnuts
5. German plum bread with almond cream
6. Wholesome loaf
30. Z is for **Zest**
1. Slow-roast tomato pasta with lemon salt, ricotta and basil
2. Mediterranean ceviche
3. Peach and mozzarella salad with crispy lemon zest and basil
4. Candied peel
5. Pistachio and pink grapefruit cake
6. Chocolate orange cheesecake
7. Pomelo sour
31. Acknowledgements
32. Stockists and links
33. Follow Penguin
34. Copyright
All posts by Will Hansen
Franklin Research Center, From Our Collections, New at the Rubenstein Library
New Acquisition: Langston Hughes Revises His Text
May 21, 2014 Will Hansen
In the Rubenstein Library, sometimes we primarily judge books by their covers, be they bejeweled, finely bound, or otherwise interestingly decorated. And sometimes we certainly do not. Case in point: the book below.
The Library wouldn't acquire most copies of the third edition of Langston Hughes's Shakespeare in Harlem, especially not one that lacks its original dust jacket and is rather heavily worn. But this was no ordinary copy. This appears to be Hughes's own copy of the last edition of this book issued during his lifetime.
Inscription on front endpaper of this copy of Shakespeare in Harlem.
Not only that, Hughes made changes to fifteen of its poems, some of them dramatic shifts in the tone, rhythm, length, or meaning of the text.
Hughes's poem "Down and Out," with repetitive lines crossed out and some representations of dialect changed.
The copy recently turned up in a sorority house at Lincoln University, from which it was sold at auction and entered the rare book trade. Much about the volume remains to be discovered. The changes that Hughes made in this volume have not been published or incorporated into any of the later editions of Hughes's collected works or poems.
African American, Langston Hughes, literature, marginalia, poetry
Exhibits, From Our Collections, News and Features
Tramps Like Us: Springsteen and Whitman
May 8, 2014 Will Hansen
You may have heard the news: a working draft of one of the iconic songs in American music, Bruce Springsteen's "Born to Run," will be displayed in Perkins Library on May 8-11, and then here in the Rubenstein Library from May 12-June 27. While at the Rubenstein, Springsteen's draft, owned by Floyd Bradley, will be in the very good company of one of the largest collections of manuscripts by another favorite son of New Jersey, Walt Whitman, in the Trent Collection of Whitmaniana.
Walt Whitman, 1869, from the Trent Collection of Whitmaniana, box III-6C (Saunders 29), by M. P. Rice; Bruce Springsteen, on the cover of the album Born to Run, 1975, by Eric Meola.
Both Whitman and Springsteen felt and expressed a deep connection with working-class Americans. After a transient childhood, Whitman worked as a journeyman printer before becoming the "Good Gray Poet"; Springsteen's mother famously took out a loan to buy him a guitar when he turned sixteen, and years of honing his musical craft at small venues for low pay preceded the breakthrough of "The Boss."
The working draft of "Born to Run" includes many passages that were changed or excised from the final lyrics, but the chorus "tramps like us, baby we were born to run" is already in place.
The chorus of "Born to Run" in the working draft. Image courtesy of Sotheby's.
"Tramps," or homeless itinerants looking for steady work and a place to live, became a particular concern in the United States (and for Whitman) during and after the "long depression" of the 1870s. Whitman wrote about this phenomenon in many different contexts, perhaps most memorably in a fragment entitled "The Tramp and Strike Questions." In a sentence that gets to the core of an element of "Born to Run" and other Springsteen songs, Whitman writes there: "Curious as it may seem, it is in what are call'd the poorest, lowest characters you will sometimes, nay generally, find glints of the most sublime virtues, eligibilities, heroisms." A volume in the Trent Collection, given by Whitman the title "Excerpts &c Strike & Tramp Question," contains manuscripts and newspaper stories annotated by Whitman in preparation for a lecture on the topic, which was never delivered.
Two prose fragments from "Excerpts &c Strike & Tramp Question," Trent Collection of Whitmaniana Box II-7B.
We're excited to host the "Born to Run" draft, and please contact us if you'd like to take the chance to see this treasure of American culture alongside items in the Trent Collection of Whitmaniana.
Post contributed by Will Hansen, Assistant Curator of Collections.
bruce springsteen, literature, music, walt whitman
Audiovisual Materials, Technical Stuff
"The Guardians of History," a Documentary
April 30, 2014 Will Hansen
Mary Samouelian, the Heschel Processing Archivist here at the Rubenstein Library, has created a short documentary. "The Guardians of History" features seven archivists working in our Technical Services Department and explores why archivists do what we do. In Mary's words, the documentary "reveals our intimate relationship with the historical materials we work with, why we are drawn to the mission of preserving history, and how our work makes it possible for researchers, historians, writers, and the general public to discover and experience intimate connections between their lives and historical materials."
Mary enrolled as a student at Duke's Center for Documentary Studies (CDS) in 2011, and this documentary piece is her final project for the Certificate in Documentary Arts. The photographs associated with the documentary will be exhibited on the Student Wall in Perkins Library this coming Friday.
Congratulations to Mary on this wonderful work!
archives, archivists, documentary, Mary Samouelian, rubenstein, staff
From Our Collections, Hartman Center
Mad Men Monday — Season 7, Episode 3 "Field Trip"
April 28, 2014 Will Hansen
Last night's episode of Mad Men features several characters whose elevated hopes for connections with others get dashed. Don flies out to Los Angeles after Megan's agent calls him to say that she was desperate and demanding with a director after an audition. She is happy to see him, but then gets upset when she realizes why he came. He is forced to admit that SC&P put him on leave and she asks him to go for being dishonest. Peggy is upset that her St. Joseph's commercial wasn't nominated for a Clio, and later finds out that Lou only submitted work that he could claim as his own.

Betty meets Francine for lunch and Francine brags about her new career as a travel agent. She tells Betty that working in an office is her reward for raising kids. Later Betty tells Bobby that she will chaperone his field trip the next day and he is thrilled to spend time with her. Harry exaggerates SC&P's media capability to the clients from Koss, and later tells Jim that they need a computer to compete. Don meets with two men from Wells Rich Greene and gets an offer to work for them. Don takes that offer to Roger, who agrees to let Don come back the following Monday.

Betty and Bobby have a good time on the field trip until Bobby gives away Betty's sandwich to a friend. Don arrives at SC&P on Monday morning, and awkwardly greets the staff until Roger comes in around lunchtime. The partners are upset that Don is back, but realize it will cost them too much to fire him officially. Instead they agree to take him back only if he can adhere to several restrictive rules and reports to Lou. He agrees.
Last night's episode featured references to typewriters, Kahlua, plaid jackets, and bras, among other things. Enjoy our selection of highlighted ads that reflect the brands and themes that Mad Men characters interacted with last night.
A gallery of our selected images may also be found on Flickr.
advertisements, advertising, Mad Men, madmen, madmenmondays
From Our Collections
The Tweetable Letters of "Mother Whitman"
April 2, 2014 Will Hansen
Louisa Van Velsor Whitman. From the Trent Collection of Whitmaniana.
Wesley Raabe, Assistant Professor in the Department of English at Kent State University, has published an edition of the letters of Walt Whitman's mother, Louisa Van Velsor Whitman, on the Walt Whitman Archive website. Entitled "walter dear": The Letters from Louisa Van Velsor Whitman to Her Son Walt, the edition includes a critical introduction, images of the original letters, transcriptions, and extensive explanatory annotations on each letter. 144 of the 170 letters in the edition are held in the Trent Collection of Whitmaniana here at the Rubenstein Library.
In addition, Prof. Raabe shares thoughts on editing these fragile letters, and on the treatment they have received from our Conservation Services Department here at Duke, in a thought-provoking blog post entitled "Restoring Fragile Remains: Two Louisa Van Velsor Whitman Letters." If it strikes a chord with you, please consider adopting an item from the Trent Collection of Whitmaniana or another selection in the Libraries' new Adopt-a-Book program to support the conservation of our materials.
Finally, Prof. Raabe has created a Twitter account to share excerpts from Louisa's eminently quotable letters. Follow @MotherWhitman for such gems as:
"so your writin[g?] again leaves of grass well if it dont hurt you i am glad" (To Walt Whitman, February 1868) http://t.co/ntwo3BOn9e
— walter dear (@MotherWhitman) March 20, 2014
"O walt how many mornings i think of you when we have buckwheat cakes how i wish you had some)" (Feb. 1869) http://t.co/m7aTbS4hU2)
"farewell i have lived beyond all comfort in this world..farewel my dear beloved walter" (May 1873 deathbed note, http://t.co/vKsmEvOu2k)
Audiovisual Materials, Events, Human Rights Archive, New at the Rubenstein Library, Readings and Talks
Rubenstein Library Acquires Radio Haiti Archives
March 31, 2014 Will Hansen
Jean Dominique and Michèle Montas celebrating the anniversary of the station in the Radio Haiti newsroom, 1990. From the Radio Haiti Records.
The Human Rights Archive at Duke University's Rubenstein Library and the estate of broadcaster Jean Dominique have announced a partnership to preserve the broadcast archives of the journalist's iconic Radio Haiti station. From the 1960s to 2002, Radio Haiti was that country's first independent radio station, promoting democratic freedoms, speaking out against human rights abuses, and celebrating Haitian life and culture. The station's archive includes approximately 2,500 audio recordings of programs, as well as 28 boxes of paper records. Recordings include daily coverage of events, cultural programs, interviews on public affairs, political analysis, and roundtable discussions on different aspects of Haiti's recent history.
"The Radio Haiti collection is an incredibly important resource for understanding the recent history of Haiti," said Laurent Dubois, Marcello Lotti Professor of Romance Studies and History at Duke. "Because the station broadcast news and reportage largely in Creole and extensively covered events both in Port-au-Prince and the rural areas of Haiti, the collection gives us unequalled access to an understanding of one of the most important grassroots democratic movements in recent history: the movement that overthrew the Duvalier dictatorship in 1986."
The Radio Haiti archives were donated to the Rubenstein Library by Michèle Montas, station co-anchor and widow of Jean Dominique. Dominique had an unquenchable passion for Haiti and its people, and his quest for truth and justice may have led to his assassination in 2000.
According to Montas, the archives "capture a time and place in which journalists and broadcast journalism played a major role in redefining a country and reaching a people. Beyond Haiti, they bear witness to the turbulent transition from a dictatorship to a functioning democracy."
Montas stressed that the archives matter today because they touch on and track issues that remain of paramount importance in Haitian society. "By saving these archives and making them once more accessible to large audiences, Duke and the Rubenstein Library are playing a crucial role in advancing the dialogue about Haiti and its future."
On April 3, Montas will be at Duke to discuss the history of Radio Haiti and its archive. Archivists from the Rubenstein Library will also share some of the challenges of preserving such a large audio collection and discuss the importance this archive has for the broader Haitian community and the human rights movement. Those interested in learning more about preserving Radio Haiti can visit Duke Library's YouTube channel. The event is free and open to the public and will be held at 12 p.m. in the Forum for Scholars and Publics, Old Chemistry Building Room 011, on Duke's West Campus. Lunch will be provided.
The Radio Haiti archives join other recent acquisitions by the Rubenstein Library documenting the history of Haiti, including the records of the National Coalition for Haitian Rights, the Mark Danner Papers, and a scribal copy of the Haitian Declaration of Independence dating from 1804.
The Radio Haiti archives will open for research after conservation review and archival processing are complete. For more information, contact Patrick Stawski, Human Rights Archivist.
Haiti, Jean Dominique, Michele Montas, radio
March 8, 2014 Will Hansen
Flyer for an International Women's Day event in Atlanta, 1984. From the Atlanta Lesbian Feminist Alliance Records.
Archive of Documentary Arts, Bingham Center, New at the Rubenstein Library
The Curious Case of Frances Benjamin Johnston
March 6, 2014 Will Hansen
The Library recently acquired a small album of photographs taken in Virginia's Tidewater region. It contains six cyanotypes depicting work at the freight docks of Newport News and other subjects. Of particular interest is a laid-in cyanotype which appears to be a portrait of Frances Benjamin Johnston, a pioneering female American photographer.
Johnston was a remarkable photographer. She took portraits of American presidents and the high society of the turn of the twentieth century from her Washington, D.C. studio, but also participated in ambitious documentary projects, such as her architectural photographs of Southern states. For one of her best-known commissions, she traveled to Virginia to document the students of the Hampton Normal and Agricultural Institute in 1899-1900. Her photographs of this important education institution for African Americans and Native Americans are preserved in her collection at the Library of Congress.
Based on the probable identification of the woman in the photograph as Johnston and the photographs of the area around Hampton in the album, these photographs have been dated to the first decade of the 1900s. However, no information about the photographer is yet known. Were they a student or colleague of Johnston? Is it possible that the photographs (or some of the photographs) are by Johnston herself?
African American women aboard a steamboat, from the Tidewater album, ca. 1900.
The album is also accompanied by handwritten directions for making "Pyro Developer" and a "fixing bath for platinum prints," which may provide further evidence that the creator may have been a student or novice photographer. (The large initial "B" on the "Pyro Developer" formula bears some resemblance to Johnston's handwriting, but the handwriting of the rest of the formula does not appear to be similar to hers.)
If anyone has clues or guesses to contribute to the mystery of the photographer's identity, please share them in the comments section below!
African American, cyanotypes, photography, Virginia, women's history
Digital Collections, News and Features, University Archives
New Digital Collection: Duke Chapel Recordings
Undated photograph of a service in Duke Chapel, from the University Archives Photograph Collection.
We are pleased to announce a new digital collection, The Duke Chapel Recordings. This collection of 168 recordings features inspiring sermons from a variety of theologians and preachers, including a number of notable African American and female preachers. The collection includes both audio, and where available, video of the services.
The project was a collaboration of the University Archives, the Libraries' Digital Collections Department, and the Duke University Chapel. The original recordings are part of a large collection held in the University Archives. We hope the recordings are used for a variety of purposes: the study of homiletics, research into the spiritual response to social changes, musical study, and simple inspiration.
Dr. Luke A. Powery, Dean of Duke Chapel, says of the collection, "Duke University Chapel is distinguished in both its faithful preaching and its sacred music. The Sunday morning 'Protestant hour' captured within this archive has been the public face and voice of the Chapel for decades; this digital collection makes Duke Chapel's liturgical history accessible for both those interested in scholarly research in the area of preaching, music, and worship, and those who desire spiritual inspiration. This collection is an interdisciplinary educational resource for teaching and learning, and demonstrates that eruditio et religio is still alive and well at Duke; may it be so for years to come."
Learn more about how the video player feature was added to this collection on Bitstreams, the Digital Projects blog.
audiovisual records, chapel, religion, sermons
From Our Collections, In the Conservation Lab, News and Features
In the Lab: Boxing Challenges
Some of our recent interesting conservation projects have involved housing. Not only do we repair damaged books and paper items in the conservation lab, but we also make many boxes and enclosures to house them, and occasionally our box-making expertise is called upon for rather unusual items.
For example, from the Abraham Joshua Heschel Papers: a rock. Little is known about this small piece of rock except that it is a souvenir of a trip that Heschel made to Israel. The rock was originally wrapped in a newspaper. Tedd Anderson made a four-flap enclosure for the newspaper and a box to house both the rock and the newspaper enclosure.
Rachel Penniman has been working on a set of Charles Dickens's publications, the original short segments of his novels that came out in serial form. These serials had been housed in custom boxes that someone must have made for their personal collection. Although the boxes were attractive with leather spines and stamped titles, they were not safe for the serials. The boxes caused creases and abrasions each time one of the pamphlets was removed or reinserted. Rachel made individual enclosures for each serial issue, and the enclosures were housed together in larger boxes, one for each title. Access to the serials is now much easier and safer.
The Digital Production Center (DPC) is in the process of scanning glass lantern slides of scenes of daily life in China made by Sidney Gamble in the early 20th century. Many of the slides are hand-colored, some have existing cracks, and all are very fragile because of the glass support. Erin Hammeke has been working to stabilize their housings. Each slide is housed in a labeled four-flap paper wrapper, and in the case of cracked slides, she adds a piece of mat board as an extra stiffener.
The conservation department creates housings for circulating collections as well. Mary Yordy has an upcoming housing project for the fascinating new book S by J.J. Abrams and Doug Dorst. The book is beautifully made to look old and well used with notes in the margins and numerous loose paper inserts. Mary is planning to make a box for the book that will prevent loose materials from falling out and getting lost, and the book will be kept in the locked stacks. While we chose to leave the inserts untreated and as published, the Preservation Lab at the Public Library of Cincinnati/University of Cincinnati decided on another route with this title.
More images of these and other housing projects can be seen on Flickr.
Post contributed by Grace White, Conservator for Special Collections, as part of our ongoing "In the Conservation Lab" series.
conservation, Dickens, Heschel, lantern slides, Sidney Gamble
\section{Introduction}
The KLOE detector operates at DA$\Phi$NE, an $e^+e^-$ collider working at the center of mass energy $W\sim m_{\phi} \sim 1.02$ GeV. The $\phi$ mesons are produced essentially at rest and decay to $\ensuremath{K_S}\ensuremath{K_L}$ ($K^+K^-$) $\sim 34\%$ ($\sim 49\%$) of the time. The $K$ mesons are produced in a pure $J^{PC}=1^{--}$ coherent quantum state, so that observation of a \ensuremath{K_S}\ in an event signals (tags) the presence of a \ensuremath{K_L}\ and vice versa: highly pure, almost monochromatic, back-to-back \ensuremath{K_S}\ and \ensuremath{K_L}\ beams can be obtained. Moreover, \ensuremath{K_S}\ and \ensuremath{K_L}\ are distinguishable on the basis of their decay lengths: $\lambda_S \sim 0.6$ cm and $\lambda_L \sim 340$ cm. \\
The KLOE experiment is designed to exploit the unique features of a $\phi$-factory environment for the measurement of CP and CPT violation in the $K^0$--$\bar{K}^0$ system and, more generally, for the study of kaon decays and interference.
The KLOE detector consists essentially of a drift chamber (DCH), surrounded by an electromagnetic calorimeter (EMC). A superconducting coil surrounding the barrel provides a 0.52 T magnetic field. Descriptions of the EMC and DCH can be found in \Ref{kloe:dc,kloe:emc}.
\section{Quantum interference in the channel $\ensuremath{K_L}\ensuremath{K_S} \to \ensuremath{\pi^+}\ensuremath{\pi^-}\ensuremath{\pi^+}\ensuremath{\pi^-}$.}
Tests of quantum mechanics (QM) can be performed by studying the time evolution of the quantum correlated kaon system,
in particular by studying the interference pattern of the decay $\ensuremath{K_L}\ensuremath{K_S} \to \ensuremath{\pi^+}\ensuremath{\pi^-}\ensuremath{\pi^+}\ensuremath{\pi^-}$. According to QM,
the distribution of the difference of the decay times $I(\Delta t)$ of the two kaons shows a characteristic destructive interference which prevents the two kaons from decaying into the same final state at the same time. As suggested in ref.~\cite{bib:bertlmann,bib:eberhard}, a simple way to parametrize a possible deviation from QM is to introduce a decoherence parameter $\zeta_{S,L}$ ($\zeta_{S,L}= 0$ in QM) as follows:
\begin{equation}I(|\Delta t|)\propto e^{-\Gamma_L|\Delta t|} +e^{-\Gamma_S|\Delta t|} -2\underbrace{(1-\zeta_{S,L})}_{{\mbox{\small decoherence}}} \cos(\Delta m |\Delta t|)\, e^{-\frac{\Gamma_S+\Gamma_L}{2}|\Delta t| }
\label{eq:deltat}
\end{equation}
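As a quick numerical sanity check (our own sketch, not part of the KLOE analysis), eq.~\ref{eq:deltat} can be evaluated directly. Time is measured in units of $\tau_S$, and the values of $\tau_L/\tau_S$ and $\Delta m\,\tau_S$ below are approximate:

```python
import math

# Sketch of eq. (eq:deltat), up to overall normalization.
# Units: time in K_S lifetimes; numerical inputs are approximate.
TAU_L = 568.0               # tau_L / tau_S, approximate
GAMMA_S = 1.0               # 1 / tau_S in these units
GAMMA_L = 1.0 / TAU_L
DELTA_M = 0.474             # Delta m * tau_S, approximate

def intensity(dt, zeta=0.0):
    """I(|dt|) with decoherence parameter zeta (zeta = 0 recovers QM)."""
    dt = abs(dt)
    return (math.exp(-GAMMA_L * dt) + math.exp(-GAMMA_S * dt)
            - 2.0 * (1.0 - zeta) * math.cos(DELTA_M * dt)
            * math.exp(-0.5 * (GAMMA_S + GAMMA_L) * dt))
```

With $\zeta_{S,L}=0$ the interference is fully destructive at $\Delta t=0$, i.e.\ $I(0)=0$, while any decoherence $\zeta_{S,L}>0$ lifts it to $I(0)=2\zeta_{S,L}$ (up to normalization).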
\begin{figure}[h]
\begin{center}
\includegraphics[width=7.0cm]{./fit_15_kskl_verynice.eps}
\put(-125,155){\small Data = 7366 evts}
\put(-125,145) {\color{red}{\small Fit: $ \chi^2/$dof = 15/22}}
\caption{Fit of the difference $\Delta t$ of the decay times of $\ensuremath{K_S}\to \ensuremath{\pi^+}\ensuremath{\pi^-}$ and $\ensuremath{K_L}\to \ensuremath{\pi^+}\ensuremath{\pi^-}$. The black points are the data and the red ones are the results of the fit. The peak at $\Delta t \sim 17 \tau_S$ is due to the regeneration on the beam pipe.}
\label{fig:deltat}
\end{center}
\end{figure}
Selecting a pure sample of $\ensuremath{K_L}\ensuremath{K_S} \to \ensuremath{\pi^+}\ensuremath{\pi^-}\ensuremath{\pi^+}\ensuremath{\pi^-}$ and fitting eq.~\ref{eq:deltat} to data, KLOE has obtained the following preliminary result:
$$\zeta_{S,L}= 0.043\,^{+0.038}_{-0.035}\,\mathrm{(stat)}\pm 0.008\,\mathrm{(syst)},$$ consistent with QM predictions. The result of the fit is shown in fig.~\ref{fig:deltat}.
\section{BR($\ensuremath{K_L}\to \ensuremath{\pi^+}\ensuremath{\pi^-}$)}
KLOE has measured the BR($\ensuremath{K_L}\to \ensuremath{\pi^+}\ensuremath{\pi^-}$) using a \ensuremath{K_L}\ beam tagged by $\ensuremath{K_S}\to \ensuremath{\pi^+}\ensuremath{\pi^-}$ decays.
The number of $\ensuremath{K_L}\to \ensuremath{\pi^+}\ensuremath{\pi^-}$
is obtained from a fit to the $\sqrt{E^2_{miss}+|\ensuremath{\mathbf{p}}_{miss}|^2}$
distribution, where $E_{miss}$ is the missing energy in the hypothesis of $\ensuremath{K_L}\to \ensuremath{\pi^+}\ensuremath{\pi^-}$ decay and $\ensuremath{\mathbf{p}}_{miss}$ is the missing momentum,
with a linear combination of Monte Carlo distributions for $\ensuremath{K_L}\to \ensuremath{\pi^+}\ensuremath{\pi^-}$, $\ensuremath{K_L}\to \pi^{\pm} e^{\mp} \nu$, $\ensuremath{K_L}\to \pi^{\pm} \mu^{\mp} \nu$,
and $\ensuremath{K_L}\to \ensuremath{\pi^+}\ensuremath{\pi^-} \ensuremath{\pi^0}$ events, inclusive with respect to final-state radiation.
The number of signal events has been normalized to the number of $\ensuremath{K_L}\to \pi\mu\nu$, in order to minimize systematic uncertainties on the tagging and tracking efficiency evaluation (exploiting the similar topology of the decays as well as the momentum overlap).
Correcting for the tagging and tracking efficiency and using the BR$(\ensuremath{K_L}\to\pi^{\pm}\mu^{\mp}\nu)$ from ref.~\cite{KLOE:brl}, we obtain: BR$(\ensuremath{K_L}\to \ensuremath{\pi^+}\ensuremath{\pi^-}) = (1.963 \pm 0.012_{\rm stat}\pm 0.017_{\rm syst})\times 10^{-3}$. The result is in good agreement with the measurement of KTeV~\cite{ktev}, $(1.975 \pm 0.012)\times10^{-3}$, and in strong disagreement with that reported by the PDG~\cite{PDG2004}, $(2.090\pm0.025)\times10^{-3}$.
This result can be used to determine $|\eta_{+-}|$
and $|\varepsilon|$ correcting for the small contribution of $\varepsilon'$.
Using the measurements of BR($\ensuremath{K_S}\to \ensuremath{\pi^+}\ensuremath{\pi^-}$) and $\tau_{\ensuremath{K_L}}$ from KLOE
\cite{KLOE:Rs,KLOE:brl,KLOE:KLlife}, and the value of $\tau_{\ensuremath{K_S}}$ from
PDG~\cite{PDG2004}, and subtracting the contribution of direct photon emission~\cite{Dire} from the value of BR($\ensuremath{K_L}\to \ensuremath{\pi^+}\ensuremath{\pi^-}$), we obtain:
$|\eta_{+-}| = (2.219 \pm 0.013)\times 10^{-3}$.
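As a rough numerical cross-check (ours, with approximate inputs rather than the exact fit values), $|\eta_{+-}|$ follows from $|\eta_{+-}| \simeq \sqrt{{\rm BR}(\ensuremath{K_L}\to\ensuremath{\pi^+}\ensuremath{\pi^-})\,\tau_{\ensuremath{K_S}}/({\rm BR}(\ensuremath{K_S}\to\ensuremath{\pi^+}\ensuremath{\pi^-})\,\tau_{\ensuremath{K_L}})}$, neglecting the small direct-emission correction:

```python
import math

# Approximate inputs (illustrative, not the paper's exact fit inputs).
BR_KL_PIPI = 1.963e-3    # this measurement
BR_KS_PIPI = 0.692       # approximate BR(K_S -> pi+ pi-)
TAU_S = 0.08953e-9       # s, approximate
TAU_L = 50.9e-9          # s, approximate

eta_pm = math.sqrt(BR_KL_PIPI * TAU_S / (BR_KS_PIPI * TAU_L))
print(f"|eta_+-| ~ {eta_pm:.3e}")
```

This reproduces the quoted $|\eta_{+-}| = (2.219 \pm 0.013)\times 10^{-3}$ to better than $1\%$; the residual difference comes from the approximate inputs and the neglected radiative correction.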
Finally, using the world average measurement of
$\rm{Re}(\varepsilon'/\varepsilon)=(1.67 \pm 0.26)\times 10^{-3}$,
and assuming equal phases between $\varepsilon'$ and $\varepsilon$ we obtain
$|\varepsilon| = (2.216 \pm 0.013)\times 10^{-3}$,
in disagreement with the value $|\varepsilon| = (2.284 \pm 0.014)\times 10^{-3}$
reported in ref.~\cite{PDG2004}.
The value of $|\varepsilon|$ can be
compared with the prediction~\cite{UTfit} $|\varepsilon| = (2.875 \pm 0.455)\times 10^{-3}$,
where, to test the mechanism of CP violation in the Standard Model,
the value of $|\varepsilon|$ has been
computed from the measurements of the CP-conserving observables:
$\Delta {\rm m_d}$, $\Delta {\rm m_s}$,
$V_{\rm{ub}}$, and $V_{\rm{cb}}$.
No significant deviation from the Standard Model prediction has been
observed.
\begin{figure}\label{fig:fit_klpp}
\begin{center}
\includegraphics[width=6.0cm]{./pull_epskfit1.eps}
\caption{$|\varepsilon|$ constraints from the measurements
of $|V_{ub}|/|V_{cb}|$ and $\Delta m_d$
and from the limit on $\Delta m_s$, in the $(\bar{\rho},\bar{\eta})$ plane of ref.~\protect\cite{UTfit}, compared with the value of $|\varepsilon|$ obtained from the measurement of BR($\ensuremath{K_L}\to\ensuremath{\pi^+}\ensuremath{\pi^-}$).}
\end{center}
\end{figure}
\section{Bell-Steinberger Relation}
The most powerful test of CPT invariance in the neutral kaon system is presently obtained by means of the Bell-Steinberger relation \cite{bib:bellsteinberger}, which relates the CPT- and CP-violating parameters, ${\rm Im}(\delta)$ and ${\rm Re}(\epsilon)$, to the decay amplitudes of \ensuremath{K_L}\ and \ensuremath{K_S}\ into the same final state:
\begin{equation}
\begin{array}{rcl}
(1 + i \tan{\phi_{SW}}) [ {\rm Re}(\epsilon) - i \: {\rm Im}(\delta) ]
= {\displaystyle{\sum_{{\subrm{final}} \atop {\subrm{states}} \: f}}} A(\ensuremath{K_L} \to f)^\star A(\ensuremath{K_S} \to f) / \Gamma_S
= {\displaystyle{\sum_{{\subrm{final}} \atop {\subrm{states}} \: f}}} \alpha_f
\end{array}
\label{eqn:cpt1}
\end{equation}
where $\phi_{SW}$ is the superweak phase, defined by $\tan{\phi_{SW}} = 2\Delta M/(\Gamma_S-\Gamma_L)$.
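For concreteness (our own numerical illustration, using approximate world values rather than the exact fit inputs), the superweak phase can be evaluated directly from its definition:

```python
import math

# tan(phi_SW) = 2 * DeltaM / (Gamma_S - Gamma_L), with Gamma = 1/tau.
# Inputs are approximate world values.
TAU_S = 0.8953e-10       # s
TAU_L = 5.09e-8          # s
DELTA_M = 0.5292e10      # hbar s^-1

gamma_s, gamma_l = 1.0 / TAU_S, 1.0 / TAU_L
phi_sw = math.degrees(math.atan(2.0 * DELTA_M / (gamma_s - gamma_l)))
print(f"phi_SW ~ {phi_sw:.1f} deg")
```

This gives $\phi_{SW}\approx 43.5^\circ$, the familiar superweak phase.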
For the determination of the $\alpha_f$ parameters,
experimental inputs are \ensuremath{K_L}\ and \ensuremath{K_S}\ branching
ratios, the relative phases between the amplitudes, and the \ensuremath{K_L}\ and \ensuremath{K_S}\
lifetimes, $\tau_{\ensuremath{K_S}}$ and $\tau_{\ensuremath{K_L}}$.
We use the value of
$\tau_{\ensuremath{K_S}}$ reported by the PDG~\cite{PDG2004} and $\tau_{\ensuremath{K_L}}$ from KLOE average~\cite{KLOE:brl,KLOE:KLlife}
and the following measurements:
\begin{itemize}
\item the new KLOE measurement of
BR(\ensuremath{K_S}\to\ensuremath{\pi^+}\ensuremath{\pi^-}) and BR(\ensuremath{K_S}\to\ensuremath{\pi^0}\ensuremath{\pi^0}) from ref.~\cite{KLOE:Rs},
which enter in the evaluation of $|\alpha_{+-}|$ and $|\alpha_{00}|$,
\item the average between the BR(\ensuremath{K_L}\to\ensuremath{\pi^+}\ensuremath{\pi^-})
here presented and that measured by KTeV\cite{ktev}, used to determine $|\alpha_{+-}|$,
\item the measurement of BR(\ensuremath{K_L}\to\ensuremath{\pi^0}\ensuremath{\pi^0}) from KTeV \cite{ktev},
used to determine $|\alpha_{00}|$,
\item the values, $\phi_{+-}$ and $\phi_{00}$, of the phases of $\alpha_{+-}$
and $\alpha_{00}$, taken from the PDG\cite{PDG2004} fit without assuming CPT symmetry,
\item the measurement of the CP conserving
direct component contribution
to the process $\ensuremath{K_L}\to\ensuremath{\pi^+}\ensuremath{\pi^-}\gamma$ from ref.~\cite{Dire} and the
upper limit on the direct component contribution to the process
$\ensuremath{K_S}\to\ensuremath{\pi^+}\ensuremath{\pi^-}\gamma$ \cite{PDG2004}, both entering in the evaluation of $\alpha_{+-\gamma}$,
\item the recent KLOE upper limit on BR($\ensuremath{K_S}\to \ensuremath{\pi^0}\ensuremath{\pi^0}\ensuremath{\pi^0}$)\cite{KLOE:ks3pi0}, which constrains the value of $|\alpha_{000}|$,
\item the measurement of the BR(\ensuremath{K_S}\to \ensuremath{\pi^+}\ensuremath{\pi^-}\ensuremath{\pi^0}) reported in the
PDG~\cite{PDG2004},
\item the recent KLOE measurement of the semileptonic $K_S$ charge asymmetry $A_S$~\cite{KLOE:Rs},
which allows one to calculate the semileptonic contribution $\alpha_{K\ell 3}= 2\tau_{\ensuremath{K_S}}/\tau_{\ensuremath{K_L}}\, {\rm BR}(K\ell 3)\left((A_S+A_L)/4-i\,{\rm Im}(\delta)+{\rm Im}(x_+)\right)$, where $x_+$ is the parameter describing $\Delta S=\Delta Q$ violation in semileptonic decays. ${\rm Im}(x_+)$ has been determined from a combined fit of $A_S$ with the semileptonic time-dependent decay-rate asymmetry measured by CPLEAR~\cite{cplear2}. The semileptonic $K_L$ charge asymmetry $A_L$ has been taken from the PDG~\cite{PDG2004}.
\end{itemize}
As $\phi_{+-\gamma}$, the phase of $\alpha_{+-\gamma}$, has not been measured yet, no constraints have been assumed on its value. Using these experimental inputs, together with the differences between the \ensuremath{K_S}\ and \ensuremath{K_L}\ masses and widths, $\Delta M$ and $\Delta \Gamma$, reported in the PDG\cite{PDG2004} (for the determination of $\phi_{SW}$), we obtain:
${\rm Re}(\epsilon) = (160.2\pm 1.3)\times 10^{-5}$ and ${\rm Im}(\delta) = (1.2\pm 1.9)\times 10^{-5}$, representing a considerable improvement over the CPLEAR measurement~\cite{cplear}: ${\rm Re}(\epsilon) = (164.9\pm 2.5)\times 10^{-5}$ and ${\rm Im}(\delta) = (2.4\pm 5.0)\times 10^{-5}$.
\section*{References}
\section{Introduction}
There is growing concern that improperly designed data-driven approaches to decision-making may display biased or discriminatory behavior. In fact, such concerns are justified by numerous examples of unfair algorithms that have been deployed in the real world \cite{angwin2016machine,barocas2016big,nature2016more,executive2016big}. In response, researchers have started to develop a number of approaches to encourage fairness in various statistical or machine learning problems \cite{calders2009building,chouldechova2017fair,dwork2012fairness,hardt2016equality,olfat2018spectral,zafar2017,zliobaite2015relation}. The problem of classification has received particular attention due to the ease of mapping class labels to positive and negative outcomes with which to characterize fairness, but recent work has also begun to explore fair statistical methods in the context of unsupervised learning \cite{chierichetti2017fair,olfat2018convex} and in more general decision-analytic frameworks \cite{ensign2017runaway,liu2018delayed}.
\subsection{Existing Approaches to Fairness}
The literature on fair statistics and learning can be classified into three categories: pre-processing steps, post-processing steps, and training regularization. The general setup of these approaches is that they seek to estimate a model that predicts a {response} variable using a vector of {predictor} variables, while trying to ensure that the model predictions are fair (we discuss quantitative measures of fairness in the next subsection) with respect to some {(potentially multiple)} variables that indicate a protected attribute (e.g., gender or race). Here we briefly review some of the existing approaches that have been developed for fairness.
Pre-processing approaches transform the data before estimation, to remove any protected information that could cause unfairness. For instance, \cite{calmon2017optimized,zemel2013learning} take a nonparametric approach: They optimize over distributions to variationally transform the feature space. {However, the underlying optimization problem quickly becomes intractable because its computation scales exponentially with dimension.} Alternatively, \cite{olfat2018convex} take an adversarial outlook on pre-processing for fairness, and propose a semidefinite programming (SDP) formulation to calculate a ``fair principal component analysis (FPCA)'' that can then be used to this end. Several groups have designed autoencoders, with a similar inspiration, oriented around deep classifiers \cite{beutel2017,edwards2015,madras2018,zhang2018}.
In comparison, there is a smaller literature on post-processing for fairness. These methods take the output of a statistical technique, and process the output in order to improve fairness. A canonical example of this approach is \cite{hardt2016equality}, which designs a method for post-processing an arbitrary classifier in order to ensure fairness. While this method is flexible with regards to the type of classifier used, it achieves fairness by requiring different score function thresholds for different groups of protected classes. This violates a general principle called \textit{individual fairness} \cite{dwork2012fairness}, which says that similar individuals should be treated similarly. More significantly, \cite{woodworth2017learning} show that this method achieves suboptimal tradeoffs between accuracy and fairness.
Notably, both pre-processing and post-processing approaches are necessarily greedy since they unlink the process of estimation from ensuring fairness. This has motivated work on regularization approaches to fairness, which generally achieve lower generalization error while improving fairness. The regularization approaches most related to this paper include \cite{berk2017fairness,olfat2018spectral,woodworth2017learning,zafar2017,agarwal2018reductions,agarwal2019fair,oneto2019general,donini2018empirical}. In particular, \cite{zafar2017} control the correlation of a classifier score function and the protected attribute, which can be formulated as a linear constraint in the estimation problem. The method in \cite{olfat2018spectral} implements non-convex optimization techniques to further consider second-order deviations. However, a limitation of both is they are applicable only when protected attributes are binary. The approach of \cite{kamishima2012fairness,zemel2013learning} works for more general types of protected attributes, but it requires a heuristic to approximate a \emph{mutual information} (MI) measure of fairness as a constraint. Alternatively, \cite{goh2016satisfying} designs an iterative cutting-plane algorithm for fair support vector machine (SVM) that requires solving an SVM instance in each iteration. Moving away from classification, \cite{calders2013controlling,johnson2016impartial,agarwal2019fair} develop concepts of fairness in the case of regression, and \cite{berk2017convex} extends this to regularization techniques for ensuring different qualitative types of fairness in regression. {Empirical risk minimization formulations for classification \cite{agarwal2018reductions,donini2018empirical}, regression \cite{agarwal2019fair}, and general problems \cite{oneto2019general} have also been proposed.} Finally, recent work has sought to generalize these ideas towards fair decision-making \cite{ensign2017runaway,liu2018delayed}.
\subsection{Quantitative Measures of Fairness}
\label{sec:robcondind}
We have casually used the terms fairness and bias without formally defining them. Part of the difficulty is a considerable lack of clarity in the existing literature as to their meaning, with different works defining different quantitative measures of fairness. We believe the underlying (and unifying) idea behind all these measures is that they approximate in some way a measure of independence between the output of the statistical procedure and the variable of protected attributes. In fact, this way of thinking about fairness was first noticed by \cite{kamishima2012fairness}.
To make our discussion concrete, we first discuss notions of fairness for classification. Let $(X,Y,Z) \in \mathbb{R}^p\times\{\pm 1\}\times\{\pm 1\}$ be a jointly distributed random variable consisting of a vector of predictors, a binary class label, and a binary protected attribute. Let $\delta(x)$ be a score for a classifier { that operates on $X$}, and suppose the classifier makes binary predictions $\widehat{Y}(x,t) = \mathrm{sign}(t-\delta(x))$ for a given threshold $t$ of the score. Since binary classifiers output a $\pm 1$ that can be mapped to desirable/undesirable decisions, one measure of fairness is
\begin{equation}
\label{eq:dispimpact}
KS = \max_{t\in\mathbb{R}}\big|\mathbb{P}[\widehat{Y}(X,t)=+1|Z=+1]-\mathbb{P}[\widehat{Y}(X,t)=+1|Z=-1]\big|.
\end{equation}
{This measures how similar the probability of making a prediction of a given binary class is between the two groups specified by the protected attribute}, and it is often called \emph{disparate impact} \cite{hardt2016equality,olfat2018spectral}. Effectively, disparate impact measures the total disparity in outcomes between protected classes.
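As a concrete illustration (our own sketch with synthetic data; the variable names are not from any particular implementation), the measure (\ref{eq:dispimpact}) can be estimated from a finite sample as the two-sample Kolmogorov-Smirnov distance between the empirical score distributions conditional on $Z$:

```python
import numpy as np

def ks_disparity(scores, z):
    """Empirical estimate of the disparate-impact measure: max over
    thresholds t of |P(score <= t | z = +1) - P(score <= t | z = -1)|."""
    s_pos = np.sort(scores[z == +1])
    s_neg = np.sort(scores[z == -1])
    grid = np.concatenate([s_pos, s_neg])
    cdf_pos = np.searchsorted(s_pos, grid, side="right") / len(s_pos)
    cdf_neg = np.searchsorted(s_neg, grid, side="right") / len(s_neg)
    return float(np.max(np.abs(cdf_pos - cdf_neg)))

rng = np.random.default_rng(0)
z = rng.choice([-1, +1], size=4000)
fair_scores = rng.standard_normal(4000)              # independent of z
biased_scores = rng.standard_normal(4000) + 0.8 * z  # shifted by group
```

When $\delta(X)$ is independent of $Z$ the estimate concentrates near zero, while a group-dependent shift in the scores produces a large value.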
This above measure of fairness can be too strict in some applications, as there may be unavoidable correlation between the classifier output and the protected label. For such cases, \cite{hardt2016equality} proposes \textit{equalized odds} as an alternative measure of fairness that instead constrains disparity in outcomes conditional on some informative variable. In the setting of binary classification, one possible informative variable is $Y\in\{\pm 1\}$ itself. This choice leads to the following quantitative measure of equalized odds fairness:
\begin{multline}
\label{eq:equaloddsbin}
EO = \max_{y\in\{\pm 1\}}\max_{t\in\mathbb{R}}\big|\mathbb{P}[\widehat{Y}(X,t)=+1|Z=+1,Y=y]-\\
\mathbb{P}[\widehat{Y}(X,t)=+1|Z=-1,Y=y]\big|.
\end{multline}
Restated, the quantity (\ref{eq:equaloddsbin}) measures the disparity in \emph{error rates} between the protected classes. An additional benefit is that a classifier with zero training error will also be fair with respect to this measure of fairness \cite{hardt2016equality}.
At an initial glance, the above measures of fairness do not look like manifestations of independence. Yet note the event $\{\widehat{Y}(X,t)=+1\}$ is equivalent to the event $\{\delta(X) \leq t\}$ since $\widehat{Y}(x,t) = \mathrm{sign}(t-\delta(x))$. This means that (\ref{eq:dispimpact}) is the Kolmogorov-Smirnov (KS) distance between the distributions of $\delta(X)|Z=+1$ and $\delta(X)|Z=-1$. Since (\ref{eq:equaloddsbin}) has a very similar interpretation, we will focus our discussion on (\ref{eq:dispimpact}). Thus when $KS = 0$ in (\ref{eq:dispimpact}), we have that
\begin{equation}
G(t) := \mathbb{P}[\delta(X)\leq t|Z=+1] = \mathbb{P}[\delta(X)\leq t|Z=-1].
\end{equation}
This means that the joint distribution factorizes as
\begin{equation}
\mathbb{P}(\delta(X) \leq t, Z=z) = \mathbb{P}[\delta(X)\leq t|Z=z]\cdot\mathbb{P}(Z=z) = G(t)\cdot\mathbb{P}(Z=z),
\end{equation}
which means the two random variables are independent. Summarizing, we have $KS = 0$ in (\ref{eq:dispimpact}) if and only if $\delta(X)$ is independent of $Z$. The importance of such independence in relation to fairness was first noticed by \cite{kamishima2012fairness}.
\subsection{Technical Challenges with Independence}
The above discussion suggests that a promising direction for generalizing fairness to a broader class of problems is to ensure independence (or rather some approximate notion of independence) between the output of a statistical technique and a random variable that measures attributes for which fairness is desired. In fact, the broader idea of quantifying independence using an empirical estimate has a long history in statistics \cite{breiman1985estimating,chen2005consistent,feuerverger1977empirical,pal2010estimation,szekely2007measuring,szekely2009brownian,MAL-060,gretton2005measuring}. One approach is to compute some generalized notion of correlation such as Renyi correlation, distance correlation, or the {Hilbert Schmidt Independence Criterion (HSIC).} Another approach is to use some distance like the KS distance, total variation distance, or mutual information between the empirical probability measures of the joint and product distributions.
However, incorporating empirical independence measures into statistical procedures is not straightforward. Many statistical procedures are computed by solving an optimization problem, and so such measures must be added as constraints. However, measures like Renyi correlation, { HSIC}, KS distance, total variation distance, and mutual information are all themselves the solutions of an optimization problem. (Mutual information is traditionally defined using a hard-to-compute integral, but a well-known variational characterization \cite{boucheron2013concentration} shows that it should more properly be thought of as the solution to an optimization problem for our discussion.) This means the resulting optimization problem for a fair statistical procedure defined in this way would have another optimization problem as a constraint; these types of problems are known as bilevel programs and are very difficult to numerically solve \cite{dempe2002,ouattara2018duality}. The numerical difficulties are compounded for those measures defined using an empirical c.d.f., which is always discontinuous. { HSIC and distance correlation are an exception to the above statement in that these quantities can be estimated by an explicit formula, and so an optimization problem with HSIC or distance correlation as a constraint is simply an optimization problem with a nonlinear constraint corresponding to the empirical estimate of the HSIC or distance correlation. There is in fact a history of using HSIC as a component of optimization problems for tasks such as feature selection \cite{song2007supervised} and clustering \cite{song2007dependence}.}
\subsection{Contributions and Outline}
This paper develops an optimization hierarchy for fair statistical decision problems. We first generalize in Section \ref{sec:fsdp} the framework of statistical decision problems \cite{lehmann2006testing} to include fairness. This provides a systematic approach for developing and studying fair versions of hypothesis testing, decision-making, estimation, regression, and classification. We use the above discussed insight relating fairness to statistical independence in order to propose in Section \ref{sec:foh} an optimization hierarchy that lends itself to numerical computation. Tools from variational analysis and random set theory are used to prove in Section \ref{sec:scfoh} that higher levels of this hierarchy lead to consistency in the sense that it asymptotically imposes independence as a constraint in corresponding statistical decision problems {for bounded random variables. Section \ref{sec:ubrv} generalizes these results to unbounded random variables, namely sub-Gaussian random variables and random variables with finite moments.} In Section \ref{sec:er}, we demonstrate numerical effectiveness of our hierarchy using several data sets, and we conclude by using our hierarchy to fairly perform automated dosing of morphine.
The distinguishing feature of our approach to ensuring independence is to use a moment-based characterization of independence that generalizes Kac's theorem \cite{bisgaard2006does,kac1936fonctions} to multivariate random variables. This has the key practical benefit over other approaches to measuring independence (such as \cite{kamishima2012fairness,zemel2013learning}) that all the resulting constraints in the corresponding optimization problems are smooth polynomials. This means we avoid the bilevel programming structure that arises from the use of other independence measures \cite{kamishima2012fairness,zemel2013learning}, and which makes numerical optimization very difficult. Because the moment constraints are smooth polynomials, this further allows us to leverage advances in convex optimization \cite{lasserre2010moments} and related heuristics such as the constrained convex-concave procedure \cite{smola2005kernel,tuy1995dc,yuille2002concave} for the purpose of numerically solving the resulting optimization problem. The tradeoff is that we have to include multiple (but a finite number of) constraints, one for each possible combination of moments between joint and product distributions.
Our framework also builds on preliminary work on the use of moment-based constraints for fair statistical methods \cite{olfat2018spectral,olfat2018convex,zafar2017}. These approaches were restricted to binary classification with binary protected classes, made use of only first- or second-order moments of only the classifier, were based on ad-hoc arguments and justifications, and lacked theoretical analysis of the resulting statistical methods. The past papers \cite{olfat2018spectral,olfat2018convex,zafar2017} leave open the larger question of how moment-based approaches to fairness can be generalized to continuous protected classes, multivariate protected classes, multivariate statistical decisions, and other classes of statistical problems beyond classification. Our work in this paper unifies these past approaches into a broader theoretical framework, {proves this framework provides asymptotic and finite-sample guarantees on fairness}, and successfully achieves a generalization of moment-based methods in order to handle continuous protected classes, multivariate protected classes, multivariate statistical decisions, and multiple classes of statistical decision problems, including fair versions of hypothesis testing, decision-making, estimation, regression, and classification.
{Empirical risk minimization formulations for fair statistics have been recently proposed \cite{agarwal2018reductions,agarwal2019fair,donini2018empirical,oneto2019general}. These papers are similar to our framework, but differ in several important ways. Fairness is defined in \cite{agarwal2018reductions,donini2018empirical} using conditional probabilities, which because of the classification setup considered can be exactly rewritten as a conditional expectation. This allows the fairness constraints to be represented by a finite number of inequalities using sample averages in place of the conditional expectations. In contrast, our framework applies to problems such as regression where fairness as defined by statistical independence cannot be exactly rewritten as a conditional expectation. The work in \cite{agarwal2019fair} extends these ideas to regression by performing a discretization that approximates regression by a classification problem. The fairness constraints in this approach require discrete protected classes, whereas our framework is also able to handle continuous and vector-valued (consisting of both discrete and continuous) protected attributes. The formulation in \cite{oneto2019general} applies to general risk minimization problems, defines fairness in terms of conditional expectation, and proposes an approximation to ensure convexity of the resulting optimization problem. Our framework uses a different definition of fairness in terms of statistical independence.}
Because we have to include multiple constraints, this significantly complicates the theoretical analysis of our optimization hierarchy. The limiting behavior of our framework requires a statistical analysis on the solution to an optimization problem in the limit of a countably-infinite number of random constraints involving empirical moments. Traditional results in statistics do not apply to set-valued functions \cite{aswani2019statistics}, which are one way to interpret constraints in an optimization problem \cite{rockafellar2009variational}. In fact, most attention in statistics on sets has been focused on estimating a single set under different measurement models \cite{devroye1980,guntuboyina2012,korostelev1995,patschkowski2016,scholkopf2001}. The traditional theoretical argument is to use the Pompeiu–Hausdorff distance to metricize the set of sets, but this approach is too difficult for use in our setting which has random sets defined using (in the limit) an infinite number of non-convex constraints. Instead, we build on our past work on statistics with set-valued functions \cite{aswani2019statistics}: We develop new theoretical arguments for statistics with random sets and set-valued functions, using variational analysis \cite{rockafellar2009variational,royset2019} and random sets \cite{matheron1975,molchanov2006}. {These techniques are of potential interest to other set-based statistical problems where empirically-successful approaches without theoretical guarantees have been used \cite{zaheer2017deep}. Examples of such statistical problems include estimation tasks where the predictor variables are a set and the response variable is a scalar, such as galaxy red-shift estimation in cosmology \cite{rozo2014redmapper} and point-cloud classification in computer vision \cite{wu20153d}.}
\section{Preliminaries}
\label{sec:notation}
This section presents our notation. We also describe some useful (and needed) notation and definitions from variational analysis and random sets. Most of the variational analysis definitions are from \cite{rockafellar2009variational}, and the stochastic set convergence notation is originally from \cite{aswani2019statistics}.
\subsection{Notation}
Let $M : \mathbb{R}^{dp} \rightarrow \mathbb{R}^{d\times p}$ be the function that reshapes a vector into a matrix by placing elements into the matrix columnwise from the vector. Similarly, we define $W := M^{-1}: \mathbb{R}^{d\times p}\rightarrow\mathbb{R}^{dp}$ to be its inverse.
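As a quick illustration of this convention (a plain-Python sketch added for concreteness; the function names mirror the notation above but are otherwise our own), the columnwise reshape and its inverse can be written as:

```python
def M(v, d, p):
    """Reshape a length-(d*p) vector into a d-by-p matrix, filling columnwise."""
    assert len(v) == d * p
    return [[v[j * d + i] for j in range(p)] for i in range(d)]

def W(mat):
    """Inverse of M: flatten a d-by-p matrix back into a vector, columnwise."""
    d, p = len(mat), len(mat[0])
    return [mat[i][j] for j in range(p) for i in range(d)]

v = [1, 2, 3, 4, 5, 6]   # d = 2, p = 3
A = M(v, 2, 3)           # columns are (1,2), (3,4), (5,6)
assert A == [[1, 3, 5], [2, 4, 6]]
assert W(A) == v         # W recovers the original vector
```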
We use $\mathbb{E}_n(\cdot)$ to denote expectation with respect to the empirical distribution. Recall that this is the sample average of the random variable inside the parentheses. As examples, $\mathbb{E}_n(Z) = \frac{1}{n}\sum_{i=1}^nZ_i$ and $\mathbb{E}_n(ZX) = \frac{1}{n}\sum_{i=1}^nZ_iX_i$.
Consider a tensor $\varphi\in\mathbb{R}^{r_1\times\cdots\times r_q}$, and let $[r] = \{1,\ldots,r\}$. The norm $\|\varphi\|$ is the $\ell_\infty$ vector norm for the tensor considered as a vector. For two tensors $\varphi,\nu\in\mathbb{R}^{r_1\times\cdots\times r_q}$, we define their inner product $\langle \varphi,\nu\rangle$ to be the usual dot product for the tensors interpreted as vectors.
For a tensor interpreted as a multilinear operator $\varphi(u_1,\ldots,u_q)$, we define the two subordinate norms
\begin{equation}
\begin{aligned}
\|\varphi\|_\circ &= \max\big\{\|\varphi(u,\ldots,u)\|\ \big|\ \|u\|_2 = 1\big\}\\
\|\varphi\|_* &= \max\big\{\|\varphi(u_1,\ldots,u_q)\|\ \big|\ \|u_k\|_2 = 1 \text{ for } k\in[q]\big\}
\end{aligned}
\end{equation}
where $\|\cdot\|_2$ is the Euclidean norm for vectors. These are subordinate norms since $\|\varphi(u,\ldots,u)\| \leq \|\varphi\|_\circ\big(\|u\|_2\big)^q$ and $\|\varphi(u_1,\ldots,u_q)\| \leq \|\varphi\|_*\prod_{k=1}^q\|u_k\|_2$. When $\varphi(\cdot,\ldots,\cdot)$ is symmetric in its arguments, then $\|\varphi\|_\circ = \|\varphi\|_*$ \cite{banach1938homogene,bochnak1971polynomials}.
\subsection{Variational Analysis}
\label{sec:vaprelim}
Let $\overline{\mathbb{R}} = [-\infty,\infty]$ denote the extended real line. We define $\Gamma(\cdot, \mathcal{S}) : E \rightarrow\overline{\mathbb{R}}$ to be the indicator function
\begin{equation}
\Gamma(u,\mathcal{S}) = \begin{cases} 0, &\text{if } u \in \mathcal{S}\\
+\infty, &\text{otherwise}\end{cases}
\end{equation}
where $E$ is some Euclidean space that will be clear from the context.
The outer limit of the sequence of sets $C_n$ is defined as
\begin{equation}
\textstyle\limsup_n C_n = \{x : \exists n_k \text{ s.t. } x_{n_k} \rightarrow x \text{ with } x_{n_k}\in C_{n_k}\},
\end{equation}
and the inner limit of the sequence of sets $C_n$ is defined as
\begin{equation}
\textstyle\liminf_n C_n = \{x : \exists x_n \rightarrow x \text{ with } x_n\in C_n\}.
\end{equation}
The outer limit consists of all the cluster points of $C_n$, whereas the inner limit consists of all limit points of $C_n$. The limit of the sequence of sets $C_n$ exists if the outer and inner limits are equal, and when it exists we use the notation that $\textstyle\lim_n C_n := \limsup_n C_n = \liminf_n C_n$.
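As a simple illustration of how these limits can differ (a standard example added here for intuition), consider the alternating sequence of sets
\begin{equation}
C_n = \begin{cases} \{0,1\}, &\text{if } n \text{ is odd}\\
\{0\}, &\text{if } n \text{ is even}\end{cases}.
\end{equation}
Here $\limsup_n C_n = \{0,1\}$ because $1$ is a cluster point along the odd subsequence, while $\liminf_n C_n = \{0\}$ because no sequence $x_n \in C_n$ can converge to $1$; since the two limits differ, $\lim_n C_n$ does not exist.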
A sequence of extended-real-valued functions $f_n : X\rightarrow\overline{\mathbb{R}}$ is said to epi-converge to $f$ if at each $x\in X$ we have
\begin{equation}
\begin{aligned}
\begin{cases}
\liminf_n f_n(x_n)\geq f(x) & \text{for every sequence } x_n\rightarrow x\\
\limsup_n f_n(x_n)\leq f(x) &\text{for some sequence }x_n\rightarrow x
\end{cases}
\end{aligned}
\end{equation}
Epi-convergence is so-named because it is equivalent to set convergence of the epigraphs of $f_n$, meaning that epi-convergence is equivalent to the condition $\lim_n \{(x,\alpha)\in X\times\mathbb{R} : f_n(x) \leq \alpha\} = \{(x,\alpha)\in X\times\mathbb{R} : f(x) \leq \alpha\}$. We use the notation $\elim_n f_n = f$ to denote epi-convergence relative to $X$.
A sequence of extended-real-valued functions $f_n : X\rightarrow\overline{\mathbb{R}}$ is said to converge pointwise to $f$ if at each $x\in X$ we have that $\lim_n f_n(x) = f(x)$. We abbreviate pointwise convergence relative to $X$ using the notation $\lim_n f_n = f$.
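These two notions of convergence do not coincide in general. As a standard illustration (added here for intuition), consider $f_n : \mathbb{R}\rightarrow\overline{\mathbb{R}}$ given by
\begin{equation}
f_n(x) = \begin{cases} -1, &\text{if } x = 1/n\\
0, &\text{otherwise}\end{cases}.
\end{equation}
The pointwise limit is the zero function, since at each fixed $x$ we have $f_n(x) = -1$ for at most one value of $n$. However, the sequence $x_n = 1/n\rightarrow 0$ gives $f_n(x_n) = -1$, so the epi-limit equals $-1$ at $x = 0$ and $0$ elsewhere.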
\subsection{Specific Distributions}
{We define a multivariate random variable $U\in\mathbb{R}^p$ to be sub-Gaussian with variance parameter $\sigma^2$ if we have that $\mathbb{E}\exp(s\cdot\langle t, U - \mathbb{E}(U)\rangle) \leq \exp(\sigma^2 s^2/2)$ for all $t \in \mathbb{S}^{p-1}$, which is the unit sphere in $p$-dimensions. Thus a sub-Gaussian random variable also satisfies
\begin{equation}
\label{eqn:subgaualt}
\mathbb{E}\exp\big(s\cdot\langle t, U\rangle\big) \leq M\exp\big(\sigma^2 s^2\big)
\end{equation}
for all $t \in \mathbb{S}^{p-1}$, where $M\geq 1$ and $\sigma^2\geq 0$ are constants. We will use (\ref{eqn:subgaualt}) as our primary characterization of a sub-Gaussian distribution. An important implication of this characterization is that
\begin{equation}
\label{eqn:subgaumom}
\mathbb{E}\big(\langle t, U\rangle^{2k}\big) \leq M\sigma^{2k}\cdot(2k)!/k!
\end{equation}
for all $t \in \mathbb{S}^{p-1}$, which can be shown using the bound in (\ref{eqn:subgaualt}).
Sub-Gaussian distributions are ubiquitous. A Gaussian distribution $X$ with mean $\mu$ and variance $\sigma^2$ is denoted $X \sim \mathcal{N}(\mu,\sigma^2)$, a Bernoulli random variable $X$ with success probability $x\in[0,1]$ is denoted $X \sim \mathrm{Ber}(x)$, and a uniform random variable $X$ with support $[a,b]$ is denoted $X \sim \mathrm{Uni}(a,b)$. These are all elementary examples of sub-Gaussian random variables.}
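As a numerical sanity check (a Python sketch, not part of the formal development), we can verify the moment bound (\ref{eqn:subgaumom}) for a standard Gaussian, for which $M = 1$ and $\sigma = 1$ suffice in (\ref{eqn:subgaualt}) since $\mathbb{E}\exp(sU) = \exp(s^2/2) \leq \exp(s^2)$:

```python
from math import factorial

def gaussian_even_moment(k):
    """E[U^{2k}] for U ~ N(0,1): the double factorial (2k-1)!! = (2k)!/(2^k k!)."""
    return factorial(2 * k) / (2 ** k * factorial(k))

def subgaussian_moment_bound(k, M=1.0, sigma=1.0):
    """Right-hand side of the moment bound: M * sigma^(2k) * (2k)!/k!."""
    return M * sigma ** (2 * k) * factorial(2 * k) / factorial(k)

# For N(0,1) we may take M = 1 and sigma = 1 in the characterization.
for k in range(1, 8):
    assert gaussian_even_moment(k) <= subgaussian_moment_bound(k)
```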
\subsection{Random Sets}
Let $(\mathcal{U}, \mathfrak{F}, \mathbb{P})$ be a complete probability space, where $\mathcal{U}$ is the sample space, $\mathfrak{F}$ is the set of events, and $\mathbb{P}$ is the probability measure. A map $S : \mathcal{U}\rightarrow\mathcal{F}$ is a random set if $\{u : S(u) \in\mathcal{X}\}\in\mathfrak{F}$ for each $\mathcal{X}$ in the Borel $\sigma$-algebra on $\mathcal{F}$ \cite{molchanov2006}. Like the usual convention for random variables, we notationally drop the argument for a random set.
When discussing stochastic convergence of random sets, we denote that a type of limit occurs almost surely by appending ``$\as$'' to the limit notation. For instance, notation $\aslimsup_n C_n \subseteq C$ denotes $\mathbb{P}(\limsup_n C_n\subseteq C) = 1$, and notation $\asliminf_n C_n \supseteq C$ denotes $\mathbb{P}(\liminf_n C_n\supseteq C) = 1$.
\section{Fair Statistical Decision Problems}
\label{sec:fsdp}
We use the setting of statistical decision problems: Consider the random variables $(X,Y,Z)$ that have a joint distribution $\mathcal{P} \in \mathcal{D}$ where $\mathcal{D}$ is some fixed family of distributions. The interpretation is that $X$ gives descriptive information, $Y$ has information about some target, and $Z$ encodes protected information which we would like to be fair with respect to. We will not explicitly use $Y$ in this paper, but we note that it is implicitly included within other terms that we discuss.
The goal is to construct a function $\delta(\cdot,\cdot)$ called a \emph{decision rule}, which provides a decision $d = \delta(x,z)$. To evaluate the quality of a decision rule $\delta$, we define a \emph{risk function} $R(\delta)$. (Though it is conventional to define the risk as $R(\mathcal{P}, \delta)$, we assume without loss of generality that the risk is of the form $R(\delta)$ because when the risk is $R(\mathcal{P}, \delta)$ then the proper choice of $R(\delta)$ recovers the Bayes $R(\delta) = \mathbb{E}_\mathcal{P} R(\mathcal{P},\delta)$ and minimax $R(\delta) = \max_{\mathcal{P}\in\mathcal{D}}R(\mathcal{P},\delta)$ procedures.) In this setup, an optimal decision rule is taken to be any function from $\arg\min_{\delta(\cdot,\cdot)} R(\delta)$. However, we can define a related optimization problem that chooses an optimal fair decision rule by solving
\begin{equation}
\label{eqn:ofdr}
\textstyle\delta^*(x,z) \in \arg\min_{\delta(\cdot,\cdot)}\big\{R(\delta)\ \big|\ \delta(X,Z) \perp \!\!\! \perp Z\big\},
\end{equation}
where the notation $\delta(X,Z) \perp \!\!\! \perp Z$ indicates independence of $\delta(X,Z)$ and $Z$.
The above abstract setup is useful because it allows us to reason about fairness for a wide class of problems using a single theoretical framework. This is demonstrated by the following example, which to our knowledge is the first procedure for performing fair hypothesis testing:
\begin{example}
Consider a hypothesis testing setup where the null hypothesis is $H_0: \mathbb{E}(\Xi) = 0$ for the underlying distribution
\begin{equation}
\begin{bmatrix}\Xi \\ \Psi\end{bmatrix} \sim \mathcal{N}\Bigg(\begin{bmatrix}0 \\ 0\end{bmatrix}, \begin{bmatrix}1 & \rho \\ \rho & 1\end{bmatrix}\Bigg).
\end{equation}
Suppose $X = (\Xi_1,\ldots,\Xi_n)$ and $Z = (\Psi_1,\ldots,\Psi_n)$ consist of i.i.d. samples. Let $d_0$ be the decision to accept the null, and let $d_1$ be the decision to reject the null. The traditional hypothesis test with a significance level of $a$ corresponds to a decision rule $\delta$ that minimizes the risk function
\begin{equation}
\label{eqn:htrisk}
R(\delta) = \mathbb{P}_{H_1}(\delta = d_0) + \Gamma(\mathbb{P}_{H_0}(\delta = d_1) - a, \mathbb{R}_{\leq 0}),
\end{equation}
where $H_1 = \{\mathcal{P} \in\mathcal{D} : \mathcal{P} \neq H_0\}$ \cite{lehmann2006testing} {and $\Gamma(\cdot,\cdot)$ is the indicator function that was defined in Section \ref{sec:vaprelim}}. An optimal decision rule for this risk is
\begin{equation}
\label{eqn:drht}
\delta^* = \begin{cases}d_0, & \text{if } p \geq a\\
d_1, &\text{if } p < a\end{cases}
\end{equation}
where $p$ is a $p$-value \cite{lehmann2006testing}. An optimal decision rule that depends only upon $X$ corresponds to the use of a traditional $p$-value
\begin{equation}
\label{eqn:oldp}
\textstyle p = 2\Phi\Big(-\sqrt{n}\big|\frac{1}{n}\sum_{i=1}^n\Xi_i\big|\Big),
\end{equation}
with $\Phi(\cdot)$ being the standard normal c.d.f. Using the above framework, we can compute an optimal \emph{fair} decision rule for this risk. This corresponds to
\begin{equation}
\label{eqn:fairp}
\textstyle p = 2\Phi\Big(-\sqrt{\frac{n}{1-\rho^2}}\big|\frac{1}{n}\sum_{i=1}^n\big(\Xi_i - \rho\Psi_i\big)\big|\Big),
\end{equation}
which we can interpret as a \emph{fair} $p$-value. An interesting observation about this setup is that using (\ref{eqn:fairp}) results in a test with greater \emph{power} than using (\ref{eqn:oldp}). {This means that the risk as measured by (\ref{eqn:htrisk}) of the decision rule (\ref{eqn:drht}) with (\ref{eqn:oldp}) is higher than the risk of the decision rule (\ref{eqn:drht}) with (\ref{eqn:fairp}). This example is interesting because it shows that using more variables, even protected ones, can improve the resulting decision rule by reducing its risk.}
\end{example}
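The power comparison in this example can be checked by simulation. The following Python sketch (with arbitrarily chosen values of $n$, $\rho$, the significance level, and the alternative mean) computes both $p$-values from synthetic data and estimates the rejection rates by Monte Carlo:

```python
import math
import random

def phi(x):
    """Standard normal c.d.f. expressed via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def p_values(xi, psi, rho):
    """Traditional p-value (using Xi only) and fair p-value (using Xi and Psi)."""
    n = len(xi)
    p_trad = 2.0 * phi(-math.sqrt(n) * abs(sum(xi) / n))
    resid = [x - rho * z for x, z in zip(xi, psi)]
    p_fair = 2.0 * phi(-math.sqrt(n / (1.0 - rho ** 2)) * abs(sum(resid) / n))
    return p_trad, p_fair

def rejection_rates(mu, rho, n=50, trials=2000, a=0.05, seed=0):
    """Monte Carlo rejection rates of the two tests when E(Xi) = mu."""
    rng = random.Random(seed)
    rej_trad = rej_fair = 0
    for _ in range(trials):
        psi = [rng.gauss(0.0, 1.0) for _ in range(n)]
        xi = [mu + rho * z + math.sqrt(1.0 - rho ** 2) * rng.gauss(0.0, 1.0)
              for z in psi]
        p_t, p_f = p_values(xi, psi, rho)
        rej_trad += p_t < a
        rej_fair += p_f < a
    return rej_trad / trials, rej_fair / trials

# Under the alternative E(Xi) = 0.3 with rho = 0.8, the fair test rejects more
# often, reflecting the smaller variance (1 - rho^2)/n of the fair statistic.
pow_trad, pow_fair = rejection_rates(mu=0.3, rho=0.8)
```

Under the null ($\mu = 0$) both tests reject at roughly the nominal rate $a$, while under the alternative the fair test has visibly greater power.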
In many statistical contexts, $\mathcal{D}$ is a singleton but unknown. We then instead choose the decision rule using a sample $(X_i,Y_i,Z_i)$ for $i=1,\ldots,n$, drawn i.i.d. from the distribution $\mathcal{P}$. Towards this aim, we approximate the risk function $R(\delta)$ using a (random) approximate risk function $R_n(\delta)$ that depends upon the sample. However, computing a sample-based fair decision rule is not obvious because a statistically well-behaved, sample-based analog of the constraint $\delta(X,Z) \perp \!\!\! \perp Z$ from (\ref{eqn:ofdr}) has not been studied previously.
\section{Fair Optimization Hierarchy}
\label{sec:foh}
We next propose a framework for computing a fair decision rule by solving a sample-based analog of (\ref{eqn:ofdr}). We first describe our assumptions about the statistical and numerical properties of the problem. Next we present our framework and provide some intuition to justify the structure of our formulation. We conclude by discussing some of the favorable computational properties of our framework.
\subsection{Assumptions}
We first make some assumptions about our decision rule and random variables:
\begin{assumption}
\label{ass:drule}
The decision rule belongs to a parametric polynomial family and can be written as
\begin{equation}
\delta(x,z) = B\cdot\omega(x,z),
\end{equation}
where $B\in\mathcal{B}$ is a matrix, $\mathcal{B}\subset\mathbb{R}^{d\times p}$ is a compact set, and $\omega(x,z) \in\mathbb{R}^p$ is a vector of monomials of the entries of the vectors $x,z$. More precisely, $B$ parametrizes the decision rule $\delta(x,z)$, and the function $\omega(x,z)$ is assumed to be known and fixed by our design such as through feature engineering. We define the random variable $\Omega = \omega(X,Z)$, so that $\delta(X,Z) = B\Omega$.
\end{assumption}
\begin{remark}
In some settings, it may be desirable to have the fair decision rule depend upon only $X$ and not $Z$. The above includes this case by noting $\omega(x,z)$ is free to be chosen to include only monomials of the entries of $x$.
\end{remark}
\begin{remark}
{This assumption says the decision rules are linear with respect to some polynomial transformation of $X$ and $Z$. Such a linear decision rule may not be competitive in terms of risk minimization as compared to more sophisticated models, but linear decision rules are commonly used in many application domains, such as health care and economics, and as such are important to study theoretically in the setting of fairness.}
\end{remark}
\begin{assumption}
\label{ass:2norm}
Assume $\mathcal{B} \subseteq \{B \in \mathbb{R}^{d\times p} : \|W(B)\|_2 \leq \sqrt{\lambda}\}$ for $\lambda \geq 1$.
\end{assumption}
Our next assumption is about statistical properties of the approximate risk function. Since our primary interest in this paper is studying independence constraints, we directly make assumptions about the convergence of the approximate risk function. Showing that such convergence holds typically involves a separate statistical analysis specific to the problem at hand.
\begin{assumption}
\label{ass:convergence}
Note the function $R_n(B\cdot\omega(x,z))$ is the approximate risk function composed with the parametric decision rule in Assumption \ref{ass:drule}. We assume that this function can be written in the form
\begin{equation}
\label{eqn:objn}
h_n(B) := R_n(B\cdot\omega(x,z)) = f_n(B) + \Gamma(g_n(B), \{\mathbb{R}_{\leq 0}\}^\eta),
\end{equation}
where $f_n : \mathbb{R}^{d\times p}\rightarrow\mathbb{R}$ and $g_n : \mathbb{R}^{d\times p}\rightarrow\mathbb{R}^\eta$. Moreover, define the notation $h(B) = R(B\cdot\omega(x,z))$. We assume $\aselim h_n = \aslim h_n = h$ relative to $\mathcal{B}$.
\end{assumption}
\begin{remark}
We should interpret the notation of (\ref{eqn:objn}) as simultaneously specifying an objective function $f_n(B)$ and a set of constraints $g_n(B) \leq 0$.
\end{remark}
\begin{remark}
This convergence assumption may look unfamiliar, but we note that it is weaker than the convergence results that are usually shown when proving consistency of estimators. In particular, almost sure uniform convergence of $h_n$ to $h$ implies the above assumption.
\end{remark}
The first three assumptions are primarily related to statistical properties. {It is instructive to consider examples that show how linear regression and linear classification problems match the assumptions above.}
\begin{example}
{Linear regression with $(X_i, Y_i) \in \mathbb{R}^p\times\mathbb{R}$ in our setup would mean we choose a linear decision rule $\delta(x) = Bx$ with $B \in \mathbb{R}^{1\times p}$. We could use a squared loss $R_n(B\cdot x) = \frac{1}{n}\sum_{i=1}^n(Y_i - BX_i)^2$ or the least absolute deviation loss $R_n(B\cdot x) = \frac{1}{n}\sum_{i=1}^n|Y_i-BX_i|$ for our regression. The nondifferentiability of the latter can be managed by introducing the variables $s_i$ and noting $R_n(B\cdot x) = \frac{1}{n}\sum_{i=1}^ns_i$ subject to the constraints $-s_i \leq Y_i-BX_i \leq s_i$. This matches the decomposition (\ref{eqn:objn}) of $R_n(\delta)$ into an objective with constraints. These loss functions can hence be minimized by many algorithms.}
\end{example}
\begin{example}
{Linear classification with $(X_i, Y_i) \in \mathbb{R}^p\times\{-1,+1\}$ in our setup would mean we choose a linear decision rule $\delta(x) = Bx$ with $B \in \mathbb{R}^{1\times p}$. We could use any classification-calibrated loss: Logistic regression uses $R_n(B\cdot x) = \frac{1}{n}\sum_{i=1}^n\log(1 + \exp(-Y_i\cdot BX_i))$. Because this logistic loss is convex and differentiable, it can be easily optimized. Support vector machine uses the hinge loss $R_n(B\cdot x) = \frac{1}{n}\sum_{i=1}^n\max\{0, 1-Y_i\cdot BX_i\}$. Its nondifferentiability is handled by introducing the variables $s_i$ and noting $R_n(B\cdot x) = \frac{1}{n}\sum_{i=1}^ns_i$ subject to constraints $s_i \geq 0$ and $s_i \geq 1-Y_i\cdot BX_i$. This matches the decomposition (\ref{eqn:objn}) of $R_n(\delta)$ into an objective with constraints, and this formulation can be easily minimized by many algorithms.}
{In each case, the linear classifier makes binary predictions $\widehat{Y}(x,t) = \mathrm{sign}(t-Bx)$ by applying a threshold $t$ to the decision rule $\delta(x) = Bx$. This interpretation of using the score function $Bx$ as the decision rule is theoretically justified because classification-calibrated losses (like the logistic loss or the hinge loss) composed with the score function are statistically consistent with respect to the 0-1 classification loss composed with the thresholded binary predictions $\widehat{Y}(x,0)$ \cite{bartlett2006convexity}, and because statistical independence of $\delta(x) = Bx$ and $Z$ implies independence between $\widehat{Y}(X,t)$ and $Z$.}
\end{example}
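To make the slack reformulation concrete, the following Python sketch (with made-up data) verifies that, at any fixed $B$, minimizing over the slack variables reproduces the hinge loss, since each $s_i$ tightens to $\max\{0, 1-Y_i\cdot BX_i\}$:

```python
def hinge_loss(B, X, Y):
    """Average hinge loss (1/n) * sum_i max(0, 1 - y_i * <B, x_i>)."""
    n = len(Y)
    return sum(max(0.0, 1.0 - y * sum(b * xi for b, xi in zip(B, x)))
               for x, y in zip(X, Y)) / n

def hinge_via_slacks(B, X, Y):
    """At the optimum of the slack reformulation, s_i = max(0, 1 - y_i * <B, x_i>)
    is the smallest s_i satisfying s_i >= 0 and s_i >= 1 - y_i * <B, x_i>."""
    s = [max(0.0, 1.0 - y * sum(b * xi for b, xi in zip(B, x)))
         for x, y in zip(X, Y)]
    # The optimal slacks are feasible for both constraints.
    assert all(si >= 0.0 and si >= 1.0 - y * sum(b * xi for b, xi in zip(B, x))
               for si, x, y in zip(s, X, Y))
    return sum(s) / len(Y)

X = [(1.0, 2.0), (-1.0, 0.5), (0.0, -1.0)]
Y = [1, -1, 1]
B = (0.4, -0.2)
assert hinge_loss(B, X, Y) == hinge_via_slacks(B, X, Y)
```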
\begin{example}
\label{ex:01loss}
{We could consider the above linear classification setup using the 0-1 classification loss $R_n(B\cdot x) = \frac{1}{n}\sum_{i=1}^n H(-Y_i\cdot BX_i)$, where $H(\cdot) : \mathbb{R} \rightarrow\{0, 1\}$ is the step function defined as
\begin{equation}
H(u) = \begin{cases} 0, &\text{if } u \leq 0\\
1, &\text{otherwise}\end{cases}.
\end{equation}
This loss is supported by our setup because Assumption \ref{ass:convergence} follows by applying standard uniform convergence results \cite{wainwright2017high}. (Uniform convergence is technically stronger than the type of convergence required in Assumption \ref{ass:convergence}.) However, the resulting optimization problem is an integer program \cite{Liittschwager1978}. The idea behind the integer programming formulation is that it uses binary variables to keep track of whether or not each $Y_i\cdot BX_i$ is nonnegative.}
\end{example}
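One standard way to make this idea concrete (a sketch added for completeness; here $M_0$ denotes a sufficiently large positive constant, not the reshape map from Section \ref{sec:notation}) is the big-$M$ formulation
\begin{equation}
\min_{B\in\mathcal{B},\ e\in\{0,1\}^n}\ \textstyle\frac{1}{n}\sum_{i=1}^n e_i \quad\text{s.t.}\quad Y_i\cdot BX_i \geq -M_0\cdot e_i, \text{ for } i\in[n],
\end{equation}
where minimization drives $e_i = 0$ whenever $Y_i\cdot BX_i \geq 0$, and the constraint forces $e_i = 1$ whenever $Y_i\cdot BX_i < 0$.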
\subsection{Formulation}
We are now ready to present our framework. Given the above assumptions, we study the use of the following sample-based optimal fair decision rule: The level-$(\mathfrak{g},\mathfrak{h})$ fair optimization (FO) problem is
\begin{equation}\label{eq:fo}
\begin{aligned}
\min_{B\in\mathcal{B}}\ &R_n(B\cdot\omega(x,z))\\
\text{s.t. }&\textstyle\big\|\mathbb{E}_n\big(Z^{\otimes m}(B\Omega)^{\otimes q}\big)-\mathbb{E}_n\big(Z^{\otimes m}\big)\otimes\mathbb{E}_n\big((B\Omega)^{\otimes q}\big)\big\|\le\Delta_{m,q},\\
&\qquad\text{for } (m,q)\in[\mathfrak{g}]\times[\mathfrak{h}]
\end{aligned}
\end{equation}
{where $\mathfrak{g},\mathfrak{h}\geq 1$ are integers and $\Delta_{m,q} \geq 0$ are nonnegative real numbers. We note that $\mathfrak{g}$, $\mathfrak{h}$, and $\Delta_{m,q}$ will generally be chosen to depend on $n$, but for simplicity we will not make this $n$-dependence explicit in our notation. Our optimization hierarchy for fair statistical decision problems is defined by the above formulation given in (\ref{eq:fo}), with the increasing number of constraints in the hierarchy parametrized by increasing values of $\mathfrak{g},\mathfrak{h}$.} We will study the constraints of the above problem and show that they are statistically well-behaved analogs of the independence constraint in (\ref{eqn:ofdr}).
\begin{remark}
{
The above formulation considers fairness in the sense of disparate impact. When the protected attributes are categorical, meaning $Z \in \mathcal{Z}$ for some finite-cardinality set $\mathcal{Z}$, then our formulation can be modified to consider fairness in the sense of equalized odds by replacing the constraints in the above formulation with the constraints
\begin{multline}
\textstyle\big\|\mathbb{E}_n\big[Z^{\otimes m}(B\Omega)^{\otimes q}|Z = z\big]-\mathbb{E}_n\big[Z^{\otimes m}|Z=z\big]\otimes\mathbb{E}_n\big[(B\Omega)^{\otimes q}|Z=z\big]\big\|\\
\le\Delta_{m,q}, \text{ for } (m,q)\in[\mathfrak{g}]\times[\mathfrak{h}] \text{ and } z \in \mathcal{Z}.
\end{multline}
Compared to the above formulation, here we take expectations with respect to the empirical distribution conditioned on each possible value in $\mathcal{Z}$.}
\end{remark}
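To make these constraints concrete, the left-hand side of a single $(m,q)$ constraint can be evaluated directly from samples. The following plain-Python sketch (our own illustration; here `D[i]` plays the role of $B\Omega_i$, and tensor entries are enumerated as index tuples) computes this quantity and checks two simple cases:

```python
from itertools import product

def prod_at(x, idx):
    """Product of the coordinates of x selected (with repetition) by idx."""
    out = 1.0
    for k in idx:
        out *= x[k]
    return out

def empirical_moment_gap(Z, D, m, q):
    """Infinity norm of E_n[Z^(x)m (x) D^(x)q] - E_n[Z^(x)m] (x) E_n[D^(x)q],
    where (x) denotes the tensor (outer) product and Z[i], D[i] are samples."""
    n, pz, pd = len(Z), len(Z[0]), len(D[0])
    gap = 0.0
    for sigma in product(range(pz), repeat=m):    # entry of Z^(x)m
        for tau in product(range(pd), repeat=q):  # entry of D^(x)q
            z_vals = [prod_at(Z[i], sigma) for i in range(n)]
            d_vals = [prod_at(D[i], tau) for i in range(n)]
            joint = sum(a * b for a, b in zip(z_vals, d_vals)) / n
            split = (sum(z_vals) / n) * (sum(d_vals) / n)
            gap = max(gap, abs(joint - split))
    return gap

Z = [(1.0, 2.0), (3.0, -1.0), (0.5, 0.0)]
# A constant decision output is trivially independent of Z: the gap is zero.
D_const = [(2.0,), (2.0,), (2.0,)]
assert empirical_moment_gap(Z, D_const, 2, 1) == 0.0
# An output perfectly coupled to Z violates the (1,1) constraint (covariance).
D_dep = [(z[0],) for z in Z]
assert empirical_moment_gap(Z, D_dep, 1, 1) > 0.0
```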
Our first result provides intuition about the constraints in the FO optimization problem (\ref{eq:fo}). This result generalizes Kac's theorem \cite{bisgaard2006does,kac1936fonctions}, which characterizes independence of random variables using moment conditions, to the setting of random vectors. This generalization is novel to the best of our knowledge, and so we include its proof below for the sake of completeness.
\begin{theorem}
\label{thm:kac}
{Let $M_{(U,V)}(s,t) = \mathbb{E}\exp(\langle s,U\rangle + \langle t,V\rangle)$ be the moment generating function for the multivariate random variable $(U,V)$ where we have $U\in\mathbb{R}^p$ and $V\in\mathbb{R}^d$. If $M_{(U,V)}(s,t)$ is finite in a neighborhood of the origin, then} $U$ and $V$ are independent if and only if
\begin{equation}
\label{eqn:mc}
\mathbb{E}\big(U^{\otimes m}V^{\otimes q}\big) = \mathbb{E}\big(U^{\otimes m}\big)\otimes\mathbb{E}\big(V^{\otimes q}\big)\ \mathrm{for}\ m,q\geq 1.
\end{equation}
\end{theorem}
\begin{proof}
{Let $M_U(s) = \mathbb{E}\exp(\langle s, U\rangle)$ and $M_V(t) = \mathbb{E}\exp(\langle t,V\rangle)$ be the moment generating functions for $U$ and $V$, respectively. Observe that these are defined for $s$ and $t$ in a neighborhood of the origin by the assumption in the hypothesis on $M_{(U,V)}(s,t)$.} Our proof begins with the well-known characterization of independence using moment generating functions, that is $U$ and $V$ are independent if and only if $M_{(U,V)}(s,t) = M_U(s)M_V(t)$. In particular, if (\ref{eqn:mc}) holds then we have
\begin{equation}
\label{eqn:pfkacth}
\begin{aligned}
M_{(U,V)}(s,t) &= \textstyle\sum_{m=0}^\infty\sum_{q=0}^\infty\frac{1}{m!\cdot q!}\cdot\mathbb{E}\big(\langle s,U\rangle^m\langle t,V\rangle^q\big)\\
&\textstyle= \sum_{m=0}^\infty\sum_{q=0}^\infty\frac{1}{m!\cdot q!}\cdot\langle\mathbb{E}\big(U^{\otimes m}V^{\otimes q}\big), s^{\otimes m}t^{\otimes q}\rangle\\
&\textstyle= \sum_{m=0}^\infty\sum_{q=0}^\infty\frac{1}{m!\cdot q!}\cdot\langle\mathbb{E}\big(U^{\otimes m}\big)\otimes\mathbb{E}\big(V^{\otimes q}\big), s^{\otimes m}t^{\otimes q}\rangle\\
&\textstyle= \sum_{m=0}^\infty\sum_{q=0}^\infty\frac{1}{m!\cdot q!}\cdot\langle\mathbb{E}\big(U^{\otimes m}\big), s^{\otimes m}\rangle\cdot\langle\mathbb{E}\big(V^{\otimes q}\big), t^{\otimes q}\rangle\\
&\textstyle= \sum_{m=0}^\infty\sum_{q=0}^\infty\frac{1}{m!\cdot q!}\cdot\mathbb{E}\big(\langle s,U\rangle^m\big)\cdot\mathbb{E}\big(\langle t,V\rangle^q\big)\\
&\textstyle= \sum_{m=0}^\infty\frac{1}{m!}\cdot\mathbb{E}\big(\langle s,U\rangle^m\big)\cdot\sum_{q=0}^\infty\frac{1}{q!}\cdot\big(\mathbb{E}\langle t,V\rangle^q\big) \\
&= M_U(s)M_V(t)
\end{aligned}
\end{equation}
This proves the reverse direction. {To prove the forward direction, we note it follows by applying componentwise for all $\sigma \in [p]^m$ and $\tau\in[d]^q$ the standard result that if $U$ and $V$ are independent, then $\mathbb{E}(\prod_{k=1}^m U_{\sigma_k}\cdot\prod_{k=1}^q V_{\tau_k}) = \mathbb{E}(\prod_{k=1}^mU_{\sigma_k})\cdot\mathbb{E}(\prod_{k=1}^q V_{\tau_k})$ when these expectations exist. Indeed, these expectations exist because of the hypothesis assumption on $M_{(U,V)}(s,t)$.}
\end{proof}
\begin{remark}
{This result requires that $M_{(U,V)}(s,t)$ exists in a neighborhood of the origin. Examples of distributions that satisfy this condition are those with a bounded support (almost surely), as well as those belonging to the sub-Gaussian, sub-exponential, or sub-gamma families of distributions. This encompasses a large number of the most common distributions.}
\end{remark}
{Next, we show a similar result that characterizes approximate independence of random variables using moment conditions. The benefits of this next result are that it holds for (possibly unbounded) distributions that have finite moments, and that it does not require the existence of $M_{(U,V)}(s,t)$ in a neighborhood of the origin. This means it applies to a larger class of distributions. Our characterization relating moment conditions to approximate independence is the first result of its kind, to our knowledge.}
{However, we have to specify how independence is quantified. A natural idea is to consider a distance between the joint distribution of $(Z,\widehat{B}_n\Omega)$ and the product distribution of $Z$ and $\widehat{B}_n\Omega$. This idea is natural because independence means that the joint distribution equals the product distribution. Thus the pertinent detail is choosing a distance between distributions to use. Our next example shows a subtle issue in making this choice.}
\begin{example}
{Consider a setting where $B\in\mathbb{R}$, where $\omega(x,z) = x$, and the distributions are $X \sim\mathrm{Uni}(-1,1)$ and $Z = X$. Then $\Omega = X$. Next let
\begin{equation}
{\ooalign{$d$\cr $\mkern6.8mul$}}(B) = \sup_{s,t} \big|\mathbb{P}_{(Z,B\Omega)}(Z \leq s, B\Omega \leq t) - \mathbb{P}_{\vphantom{(Z,B\Omega)}Z}(Z \leq s) \cdot \mathbb{P}_{\vphantom{(Z,B\Omega)}B\Omega}(B\Omega \leq t)\big|
\end{equation}
be the multivariate Kolmogorov-Smirnov distance between the joint and product distributions of $Z$ and $B\Omega$. Now note ${\ooalign{$d$\cr $\mkern6.8mul$}}(0) = 0$ because $Z$ is trivially independent of the constant $0\cdot\Omega \equiv 0$. Next observe that for any $B \neq 0$ we have ${\ooalign{$d$\cr $\mkern6.8mul$}}(B) = {\ooalign{$d$\cr $\mkern6.8mul$}}(1)$, but ${\ooalign{$d$\cr $\mkern6.8mul$}}(1) > 0$ since $Z = \Omega$. Hence for the sequence $B_n = n^{-1}$, we have that $B_n\Omega$ is asymptotically independent of $Z$ but that ${\ooalign{$d$\cr $\mkern6.8mul$}}(\lim_n B_n) = {\ooalign{$d$\cr $\mkern6.8mul$}}(0) = 0 \neq \lim_n {\ooalign{$d$\cr $\mkern6.8mul$}}(B_n) = {\ooalign{$d$\cr $\mkern6.8mul$}}(1) > 0$ . This means the multivariate Kolmogorov-Smirnov distance cannot quantify independence here.}
\end{example}
\begin{remark}
{Because the total variation distance is greater than or equal to the value of the multivariate Kolmogorov-Smirnov distance, the above example also applies to the total variation distance. Thus Pinsker's inequality implies the above example applies to the Kullback–Leibler (KL) divergence. This means the above example also applies to mutual information, which is defined as the KL divergence between the joint and product distributions.}
\end{remark}
\begin{remark}
{A multivariate version of this example can be constructed where the same issue occurs for a $B \neq 0$, where the example is constructed such that the issue occurs because $B$ does not have full column rank.}
\end{remark}
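The scale invariance at the heart of this example is easy to check numerically. The following Python sketch (our own illustration) evaluates the empirical analog of ${\ooalign{$d$\cr $\mkern6.8mul$}}(B)$ on the sample grid for a small sample:

```python
def ks_gap(Z, D):
    """Max over sample points of |joint empirical c.d.f. minus the product of
    the empirical marginal c.d.f.s| for the paired samples (Z[i], D[i])."""
    n = len(Z)
    gap = 0.0
    for s in Z:
        for t in D:
            joint = sum(1 for z, d in zip(Z, D) if z <= s and d <= t) / n
            marg_z = sum(1 for z in Z if z <= s) / n
            marg_d = sum(1 for d in D if d <= t) / n
            gap = max(gap, abs(joint - marg_z * marg_d))
    return gap

Z = [-0.8, -0.3, 0.1, 0.4, 0.9]      # samples of Z, with Omega = Z

def dist(B):
    """Empirical distance between the joint and product laws of (Z, B*Omega)."""
    return ks_gap(Z, [B * z for z in Z])

assert dist(0.0) == 0.0              # B = 0: trivially independent
assert dist(1.0) > 0.0               # B = 1: Z and B*Omega coincide
assert dist(0.01) == dist(1.0)       # any B > 0 gives exactly the same distance
```

So the sequence $B_n = n^{-1}$ produces asymptotically independent pairs whose measured distance never shrinks, exactly as the example describes.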
{The above examples show that several popular distances between distributions cannot be used for quantifying the degree of independence in our setting of fair optimization. This is perhaps not surprising, given that convergence in distribution is weaker than convergence in many popular distances. Consequently, we need to consider topologically-weaker metrics on probability distributions that are able to metricize convergence in distribution.
One such distance is the Zolotarev metric defined using characteristic functions \cite{Zolotarev_1976,klebanov1984estimate,rachev2013methods}, and we will use this distance to quantify the degree of independence between two random variables. Let $U\in\mathbb{R}^p$ and $V\in\mathbb{R}^d$ be random vectors, and define $\mathfrak{i} = \sqrt{-1}$. Then for $s \in \mathbb{R}^{p}$, $t\in\mathbb{R}^{d}$, and $\zeta\in\mathbb{R}$; let $J(s,t,\zeta) = \mathbb{E}\exp(\mathfrak{i}\zeta\langle s, U\rangle + \mathfrak{i}\zeta\langle t, V\rangle)$ and $P(s,t,\zeta) = \mathbb{E}\exp(\mathfrak{i}\zeta\langle s, U\rangle)\cdot\mathbb{E}\exp(\mathfrak{i}\zeta\langle t, V\rangle)$ be the characteristic functions corresponding to the joint and product distributions, respectively, of $U$ and $V$. The Zolotarev metric between the joint and product distributions is given by
\begin{equation}
\label{eqn:mdef}
\mathbb{H}(U;V) = \sup_{(s,t)\in\mathbb{S}^{p+d-1}}\Bigg[\mathop{\mathrm{inf}\vphantom{\mathrm{sup}}}_{\vphantom{|\zeta|}T > 0}\max\Big\{\frac{1}{2}\sup_{|\zeta| \leq T}\big|J(s,t,\zeta) - P(s,t,\zeta)\big|, \frac{1}{T}\Big\}\Bigg].
\end{equation}
We call the quantity $\mathbb{H}(U;V)$ the \emph{mutual characteristic} of $U$ and $V$, and the choice of this name is meant to draw a direct analogy to mutual information.}
\begin{theorem}
\label{thm:kac2}
{Consider the random variable $(U,V)$ where $U\in\mathbb{R}^p$ and $V\in\mathbb{R}^d$. If $J_{\mathfrak{g},\mathfrak{h}} = \sup_{(s,t)\in\mathbb{S}^{p+d-1}}\mathbb{E}(\langle s, U\rangle^{\mathfrak{g}+1}\langle t,V\rangle^{\mathfrak{h}+1})$ is finite and
\begin{equation}
\label{eqn:mc2}
\mathbb{E}\big(U^{\otimes m}V^{\otimes q}\big) = \mathbb{E}\big(U^{\otimes m}\big)\otimes\mathbb{E}\big(V^{\otimes q}\big)\ \mathrm{for}\ (m,q)\in[\mathfrak{g}]\times[\mathfrak{h}],
\end{equation}
then we have that
\begin{equation}
\label{eqn:kacapdi}
\mathbb{H}(U;V) \leq \textstyle\Big[\frac{J_{\mathfrak{g},\mathfrak{h}} + P_{\mathfrak{g},\mathfrak{h}}}{(\mathfrak{g}+1)!\cdot(\mathfrak{h}+1)!}\Big]^{1/(\mathfrak{g}+\mathfrak{h}+3)}
\end{equation}
where $P_{\mathfrak{g},\mathfrak{h}} = \sup_{s\in\mathbb{S}^{p-1}}\mathbb{E}(\langle s, U\rangle^{\mathfrak{g}+1})\cdot \sup_{t\in\mathbb{S}^{d-1}}\mathbb{E}(\langle t, V\rangle^{\mathfrak{h}+1})$.}
\end{theorem}
\begin{proof}
{We need to bound the modulus of $J(s,t,\zeta) - P(s,t,\zeta)$. As a first step, note that the difference of their Taylor polynomials satisfies
\begin{multline}
\textstyle\sum_{m=0}^\mathfrak{g}\sum_{q=0}^\mathfrak{h}\frac{1}{m!\cdot q!}\cdot\mathbb{E}\big(\langle s,U\rangle^m\langle t,V\rangle^q\big)\\
\textstyle- \sum_{m=0}^\mathfrak{g}\frac{1}{m!}\cdot\mathbb{E}\big(\langle s,U\rangle^m\big)\cdot\sum_{q=0}^\mathfrak{h}\frac{1}{q!}\cdot\big(\mathbb{E}\langle t,V\rangle^q\big) = 0
\end{multline}
by the same reasoning used to show (\ref{eqn:pfkacth}). We note that the above summation is well-defined because of the finiteness assumption on $J_{\mathfrak{g},\mathfrak{h}}$ in the hypothesis of this theorem. Next we apply a standard argument (see for instance Section 26 of \cite{billingsley1995probability}) that first uses Jensen's inequality and then uses the elementary inequality $|\exp(\mathfrak{i}\zeta) - \sum_{m=0}^\mathfrak{g}(\mathfrak{i}\zeta)^m/m!| \leq |\zeta|^{\mathfrak{g}+1}/(\mathfrak{g}+1)!$ for the complex exponential. This argument implies that for $|\zeta| \leq T$ we have
\begin{equation}
\big|J(s,t,\zeta) - P(s,t,\zeta)\big| \leq\textstyle\frac{J_{\mathfrak{g},\mathfrak{h}} + P_{\mathfrak{g},\mathfrak{h}}}{(\mathfrak{g}+1)!\cdot(\mathfrak{h}+1)!}\cdot T^{\mathfrak{g}+\mathfrak{h}+2}.
\end{equation}
If we choose $T^{\mathfrak{g}+\mathfrak{h}+3}=(\mathfrak{g}+1)!\cdot(\mathfrak{h}+1)!/(J_{\mathfrak{g},\mathfrak{h}} + P_{\mathfrak{g},\mathfrak{h}})$, then the result follows by applying this bound to the definition (\ref{eqn:mdef}).}
\end{proof}
These two generalizations of Kac's theorem allow us to interpret the constraints of the FO problem (\ref{eq:fo}). {Using Theorem \ref{thm:kac}, we can interpret the constraints as a finite number ($\mathfrak{g}\cdot\mathfrak{h}$ many, for a level-$(\mathfrak{g},\mathfrak{h})$ FO problem) of sample-based analogs of the corresponding moment conditions (\ref{eqn:mc}) for independence. Using Theorem \ref{thm:kac2}, we can also interpret the constraints as sample-based analogs of the corresponding finite number of moment conditions that achieve approximate independence in the sense of (\ref{eqn:kacapdi}).}
\subsection{Computational Properties}
We next discuss some favorable computational properties of the FO problem (\ref{eq:fo}). {A key advantage of our framework is that the moment constraints are polynomials. This leads to three general approaches that can be used to numerically solve the FO problem.
The first approach applies when the relevant functions are polynomials, which allows us to draw upon powerful tools for polynomial optimization \cite{lasserre2010moments}:}
\begin{theorem}[Theorems 5.6, 5.7 of \cite{lasserre2010moments}]
Suppose Assumptions \ref{ass:drule}--\ref{ass:convergence} hold. {If, in the notation of Assumption \ref{ass:convergence}, we assume that the functions $f_n : \mathbb{R}^{d\times p}\rightarrow\mathbb{R}$ and $g_n : \mathbb{R}^{d\times p}\rightarrow\mathbb{R}^\eta$ are polynomials on the set $\mathcal{B}$}, then the level-$(\mathfrak{g},\mathfrak{h})$ FO problem (\ref{eq:fo}) can be solved to any desired accuracy by solving a convex optimization problem that can be explicitly constructed.
\end{theorem}
\begin{remark}
{The polynomial assumption is not restrictive: because the domain of the optimization problem lies within a compact set $\mathcal{B}$, the celebrated Stone-Weierstrass theorem shows that any continuous $f_n$ and $g_n$ can be approximated to arbitrary accuracy by polynomials. This means that this approach can be used in principle for the squared loss, logistic loss, hinge loss (after the earlier reformulation), least absolute deviation loss (after the earlier reformulation), and many other functions.}
\end{remark}
{Though the convex optimization problems resulting from the explicit construction of \cite{lasserre2010moments} are often large, these resulting optimization problems can be numerically solved for many interesting instances \cite{majumdar2014convex,zhao2019optimal}. We briefly discuss the intuition behind this approach. The first insight is that any polynomial optimization problem $\min \big\{f_n(B)\ \big|\ g_n(B) \leq 0, B\in\mathcal{B}\big\}$ can be written as maximizing a scalar subject to nonnegative polynomial constraints
\begin{equation}
\max \big\{s\ \big|\ f_n(B) - s \geq 0, -g_n(B) \geq 0, s\in\mathbb{R}, B\in\mathcal{B}\big\}.
\end{equation}
The second insight is that nonnegative polynomials can be approximated on a bounded domain to arbitrary accuracy using sum-of-squares (SOS) polynomials \cite{berg1987multidimensional,lasserre2010moments}. Since our problems involve optimizing over a vector in Euclidean space, the SOS polynomials are simply those polynomials obtained by squaring arbitrary polynomials and summing the results. Specifically, the nonnegative polynomial constraints can be approximated by instead requiring the polynomials to equal a linear combination of a finite number of SOS polynomials. This is a tractable approximation because the resulting optimization problem is a convex semidefinite program, and the resulting solution can be made arbitrarily accurate by increasing the finite number of SOS polynomials used in the approximation.}
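As an illustrative aside, an SOS certificate is just a positive semidefinite Gram matrix over a monomial basis. The following minimal sketch (a toy univariate example, not part of the formal development) verifies that $p(x)=x^4-2x^2+1=(x^2-1)^2$ is SOS by exhibiting such a matrix:

```python
import numpy as np

# Toy SOS certificate: p(x) = x^4 - 2x^2 + 1 = (x^2 - 1)^2.
# We exhibit a PSD Gram matrix Q with p(x) = v(x)^T Q v(x),
# where v(x) = (1, x, x^2) is the monomial basis up to degree 2.
Q = np.array([[ 1.0, 0.0, -1.0],
              [ 0.0, 0.0,  0.0],
              [-1.0, 0.0,  1.0]])

# Positive semidefiniteness of Q certifies that p is a sum of squares.
assert np.linalg.eigvalsh(Q).min() >= -1e-9

# Check the Gram representation against p at several points.
for x in np.linspace(-2.0, 2.0, 9):
    v = np.array([1.0, x, x**2])
    assert abs(v @ Q @ v - (x**4 - 2 * x**2 + 1)) < 1e-9
```

In the Lasserre hierarchy, such Gram matrices become the decision variables of a semidefinite program.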
{The second approach applies to cases where the relevant functions are differentiable (but not necessarily polynomial), which allows us to use standard optimization algorithms. Specifically, the moment constraints for low levels of our FO hierarchy have structures that enable numerical solution using algorithms like the constrained convex-concave procedure \cite{smola2005kernel,tuy1995dc,yuille2002concave}.} We can say more about the FO problem for specific levels of the hierarchy, and we omit the proofs since they follow from the definition of the constraint:
\begin{proposition}
The constraints in the FO problem (\ref{eq:fo}) for $q = 1$ can be written as the following linear inequality constraints:
\begin{equation}
\begin{aligned}
\textstyle B\Big(\frac{1}{n}\sum_{i=1}^n\Omega_i\otimes(Z_i)^{\otimes m} - \frac{1}{n}\sum_{i=1}^n\Omega_i\otimes \frac{1}{n}\sum_{i=1}^n(Z_i)^{\otimes m}\Big)\leq &\Delta_{m,1}\\
\textstyle-B\Big(\frac{1}{n}\sum_{i=1}^n\Omega_i\otimes(Z_i)^{\otimes m} - \frac{1}{n}\sum_{i=1}^n\Omega_i\otimes \frac{1}{n}\sum_{i=1}^n(Z_i)^{\otimes m}\Big)\leq &\Delta_{m,1}\\
\end{aligned}
\end{equation}
where the inequality should be interpreted elementwise, comparing each entry of the tensor on the left with the scalar $\Delta_{m,1}$ on the right.
\end{proposition}
This result says that constraints with $q=1$ are always convex. This means that the FO problem (\ref{eq:fo}) with $\mathfrak{h} = 1$ is a convex optimization problem whenever $R_n$ is convex in $B$. Such convexity of $R_n$ occurs in many interesting problems, including linear regression and support vector machines.
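To illustrate, the following sketch (with hypothetical dimensions and synthetic data, using $\Omega_i = \omega(X_i,Z_i)$) builds the level-$(1,1)$ constraint matrix from samples and checks that the constraint's left-hand side is linear in $B$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, r, d = 200, 4, 2, 3          # samples, feature dim, Z dim, output dim (illustrative)
Omega = rng.normal(size=(n, p))    # rows are the features omega(X_i, Z_i)
Z = rng.normal(size=(n, r))        # rows are the protected attributes Z_i

# Level-(1,1) constraint tensor: centered empirical cross-moment of Omega and Z.
M = Omega.T @ Z / n - np.outer(Omega.mean(axis=0), Z.mean(axis=0))   # p x r

def constraint_lhs(B):
    # Left-hand side of the level-(1,1) constraint, bounded elementwise
    # by Delta_{1,1}; note it is linear in B.
    return B @ M                    # d x r

B1, B2 = rng.normal(size=(d, p)), rng.normal(size=(d, p))
# Linearity in B, so each elementwise bound is a pair of linear inequalities.
assert np.allclose(constraint_lhs(B1 + B2), constraint_lhs(B1) + constraint_lhs(B2))
```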
\begin{proposition}
The constraints in the FO problem (\ref{eq:fo}) for $q = 2$ are inequalities that each involve a difference of two convex quadratic functions.
\end{proposition}
This result says that constraints with $q=2$ are always a difference of convex functions. This means that stationary points of the FO problem (\ref{eq:fo}) with $\mathfrak{h} = 2$ can be found using the effective constrained convex-concave procedure \cite{smola2005kernel,tuy1995dc,yuille2002concave} whenever $R_n$ is convex in $B$. Recall that $R_n$ is convex in many interesting problems like linear regression and support vector machines.
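The mechanics of the convex-concave procedure can be sketched on a toy one-dimensional problem (not one of our FO instances) where each convex subproblem admits a closed-form solution:

```python
# Constrained convex-concave procedure (CCP) on a toy 1-D problem:
#   minimize (B - 0.2)^2  subject to  1 - B^2 <= 0,
# whose constraint is a difference of convex functions: (1) - (B^2).
# At iterate B_k the concave part -B^2 is linearized, giving the convex
# subproblem: minimize (B - 0.2)^2  s.t.  1 + B_k^2 - 2 B_k B <= 0,
# i.e. B >= (1 + B_k^2) / (2 B_k) for B_k > 0.
B = 3.0                                  # feasible starting point (B^2 >= 1)
for _ in range(25):
    lower = (1 + B**2) / (2 * B)         # linearized constraint: B >= lower
    B = max(0.2, lower)                  # closed-form solution of the subproblem
# The iterates converge to the nearest feasible point B = 1.
assert abs(B - 1.0) < 1e-8
```

In our setting the subproblems are instead solved with a convex solver, but the linearize-and-resolve structure is the same.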
\begin{proposition}
\label{prop:zbinary}
If $Z$ is a binary random variable, which is coded as either $Z\in\{0,1\}$ or $Z\in\{\pm 1\}$, then the constraints in the FO problem (\ref{eq:fo}) for $m \geq 2$ are redundant with the corresponding constraint for $m =1$.
\end{proposition}
This result says that when $Z$ is binary, then the hierarchy simplifies and we only need to consider applying the level-$(1,\mathfrak{h})$ FO problems. We will use this simplification when conducting numerical experiments in Section \ref{sec:er}.
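The redundancy is easy to confirm numerically; the following sanity check (synthetic data, $\{0,1\}$ coding) verifies that the empirical constraint terms for $m\geq 2$ coincide with those for $m=1$:

```python
import numpy as np

# With Z in {0,1} we have Z^m = Z for all m >= 1, so the empirical
# moment constraints for m >= 2 duplicate the corresponding m = 1 constraint.
rng = np.random.default_rng(1)
n = 500
Z = rng.integers(0, 2, size=n).astype(float)   # binary protected attribute
Yhat = rng.normal(size=n)                      # stand-in for decision-rule outputs

def lhs(m, q):
    # Empirical constraint term E_n(Z^m Yhat^q) - E_n(Z^m) E_n(Yhat^q).
    return np.mean(Z**m * Yhat**q) - np.mean(Z**m) * np.mean(Yhat**q)

for q in (1, 2, 3):
    for m in (2, 3, 4):
        assert abs(lhs(m, q) - lhs(1, q)) < 1e-12
```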
{The third approach applies when the relevant functions are mixed-integer non-convex quadratic-representable, which means the objective and constraints can be represented by non-convex quadratic functions with some variables constrained to be integer-valued. As described in Example \ref{ex:01loss}, this case holds for linear classification using the 0-1 classification loss.}
\begin{proposition}
{Suppose Assumptions \ref{ass:drule}--\ref{ass:convergence} hold. If, in the notation of Assumption \ref{ass:convergence}, we assume that the functions $f_n : \mathbb{R}^{d\times p}\rightarrow\mathbb{R}$ and $g_n : \mathbb{R}^{d\times p}\rightarrow\mathbb{R}^\eta$ are mixed-integer non-convex quadratic-representable, then the level-$(\mathfrak{g},\mathfrak{h})$ FO problem (\ref{eq:fo}) can be solved using a non-convex mixed-integer quadratically constrained program (non-convex MIQCP).}
\end{proposition}
{The development of numerical algorithms to solve non-convex MIQCP problems is an active research area \cite{burer2012non,kilincc2015two,chen2017spatial}, and a number of software packages \cite{sahinidis1996baron,adjiman2000global,burer2009copositive,lin2009global,vigerske2018scip,gurobi} are already available for solving such problems. The proof of the above result is omitted because it follows immediately from the facts that the moment constraints are polynomials and that any polynomial inequality constraint can be represented by quadratic constraints and a set of new variables. To understand the intuition behind this second fact, consider as an example the constraint $B_1^{\ 3} \leq 0$. We can represent this by two constraints $B_2^{\vphantom{2}} = B_1^{\ 2}$ and $B_1\cdot B_2 \leq 0$, where we have introduced a new variable $B_2$. These two constraints are non-convex quadratic constraints.}
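The lifting from the example above is straightforward to check: for any value of $B_1$, feasibility of the quadratic system coincides with feasibility of the original cubic constraint.

```python
import numpy as np

# B1^3 <= 0 is equivalent to the quadratic system { B2 = B1^2, B1 * B2 <= 0 }
# in the lifted pair (B1, B2); here B2 is substituted directly.
grid = np.linspace(-2.0, 2.0, 81)
equivalent = all(
    (B1 * B1**2 <= 0) == (B1**3 <= 0)   # lifted constraint vs original cubic
    for B1 in grid
)
assert equivalent
```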
\section{Statistical Consistency of FO Hierarchy}
\label{sec:scfoh}
We prove in this section that the sample-based constraints of the FO problem (\ref{eq:fo}) are in fact statistically well-behaved analogs of the independence constraint in (\ref{eqn:ofdr}). We consider the case of bounded random variables in this section:
\begin{assumption}
\label{ass:zdim}
The entries of the random variables $X,Z$ are almost surely bounded by $\alpha \geq 1$. Moreover, the maximal monomial degree of entries in $\omega(x,z)$ is $\rho \geq 1$, and the random variable $Z$ has dimensions $Z\in\mathbb{R}^r$.
\end{assumption}
\subsection{Concentration of Tensor Moment Estimates}
We begin by defining several multilinear operators. We define the empirical operators
\begin{equation}
\begin{aligned}
\widehat{\varphi}_{m,q}(B_1,\ldots,B_q) &= \textstyle\mathbb{E}_n\big(Z^{\otimes m}\bigotimes_{k=1}^{q}(B_k\Omega)\big)\\
\rlap{$\hspace{0.09em}\widehat{\nu}$}\hphantom{\widehat{\varphi}}_{m,q}(B_1,\ldots,B_q) &= \textstyle\mathbb{E}_n\big(Z^{\otimes m}\big)\otimes\mathbb{E}_n\big(\bigotimes_{k=1}^{q}(B_k\Omega)\big)\\
\end{aligned}
\end{equation}
and the expected operators
\begin{equation}
\begin{aligned}
\varphi_{m,q}(B_1,\ldots,B_q) &= \textstyle\mathbb{E}\big(Z^{\otimes m}\bigotimes_{k=1}^{q}(B_k\Omega)\big)\\
\rlap{$\hspace{0.09em}\nu$}\hphantom{\varphi}_{m,q}(B_1,\ldots,B_q) &= \textstyle\mathbb{E}\big(Z^{\otimes m}\big)\otimes\mathbb{E}\big(\bigotimes_{k=1}^{q}(B_k\Omega)\big)
\end{aligned}
\end{equation}
As a slight simplification of notation, when the argument of these multilinear operators is $(B)$ we take that to mean the argument is $(B,\ldots,B)$. We can thus identify these operators with terms in the FO problem (\ref{eq:fo}): The $\widehat{\varphi}_{m,q}(B)$ and $\widehat{\nu}_{m,q}(B)$ are precisely the terms appearing in the constraints.
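For concreteness, the following minimal sketch (hypothetical dimensions, synthetic Gaussian data) computes $\widehat{\varphi}_{1,1}(B)$ and $\widehat{\nu}_{1,1}(B)$ and confirms that their difference is the empirical covariance between $Z$ and the decision-rule output $B\Omega$:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, r, d = 100, 3, 2, 2
Omega = rng.normal(size=(n, p))        # rows are omega(X_i, Z_i)
Z = rng.normal(size=(n, r))
B = rng.normal(size=(d, p))

BOmega = Omega @ B.T                                    # rows are B * Omega_i
phi_hat = np.einsum('ir,id->rd', Z, BOmega) / n         # E_n(Z (x) B Omega)
nu_hat = np.outer(Z.mean(axis=0), BOmega.mean(axis=0))  # E_n(Z) (x) E_n(B Omega)

# The FO constraint bounds || phi_hat - nu_hat ||, which for (m, q) = (1, 1)
# is the empirical covariance between Z and the decision-rule output.
cov = (Z - Z.mean(axis=0)).T @ (BOmega - BOmega.mean(axis=0)) / n
assert np.allclose(phi_hat - nu_hat, cov)
```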
\begin{proposition}
\label{prop:cphi}
If Assumptions \ref{ass:drule}, \ref{ass:zdim} hold, then we have
\begin{equation}
\label{eqn:cphi}
\textstyle\mathbb{P}\big(\|\widehat{\varphi}_{m,q}-\varphi_{m,q}\|_\circ > \mathcal{R}_{m,q}[n] + \gamma\big) \leq 2\exp\big(-\frac{n\gamma^2}{64p^q\alpha^{2m+2\rho q}}\big)
\end{equation}
for $\mathcal{R}_{m,q}[n] = 8\alpha^{m+\rho q}p^{q/2}\sqrt{\frac{dp\log(1+4q)+m\log r+q\log d}{n}}$.
\end{proposition}
\begin{proof}
We use a chaining argument. Suppose $\{t_i\}_{i=1}^N$ is a $\frac{1}{2q}$ covering of $\mathbb{S}^{dp-1}$, and note $N \leq (1+4q)^{dp}$ by the volume ratio bound \cite{wainwright2017high}. Define $T_i = M(t_i) \in\mathbb{R}^{d\times p}$. Let $P_q$ be the set of all permutations of $[q]$, and let
\begin{equation}
\label{eqn:tensym}
\textstyle\Phi(B_1,\ldots,B_q) = \frac{1}{q!}\sum_{\pi\in P_q}\big(\widehat{\varphi}_{m,q}(B_{\pi_1},\ldots,B_{\pi_q}) - \varphi_{m,q}(B_{\pi_1},\ldots,B_{\pi_q})\big).
\end{equation}
Observe that by construction: $\Phi(\cdot,\ldots,\cdot)$ is symmetric, and it satisfies the identity $\Phi(B) = \widehat{\varphi}_{m,q}(B) - \varphi_{m,q}(B)$. Now consider the telescoping sum
\begin{equation}
\label{eqn:tele}
\textstyle\Phi(B) = \Phi(T_i) + \sum_{k=1}^q \Phi(\stackrel{q-k}{\overbrace{B,\ldots,B}}, B - T_i, \stackrel{k-1}{\overbrace{T_i,\ldots,T_i}}).
\end{equation}
Recall $\|W(T_i)\|_2 = 1$ and $\|W(B - T_i)\|_2 \leq \frac{1}{2q}$ for $W(B) \in \mathbb{S}^{dp-1}$. Since $\|\cdot\|_*$ is a subordinate norm, we have $\|\Phi\|_\circ \leq \|\Phi(T_i)\| + \sum_{k=1}^q\frac{1}{2q}\|\Phi\|_*$. But note that $\Phi(\cdot,\ldots,\cdot)$ is symmetric, and so $\|\Phi\|_\circ = \|\Phi\|_*$ \cite{banach1938homogene,bochnak1971polynomials}. Thus we have $\|\Phi\|_\circ \leq 2\|\Phi(T_i)\|$. But by definition of the tensor norm $\|\cdot\|$ we have
\begin{equation}
\|\Phi(T_i)\| = \max_{u_k, v_k}\textstyle \big|\big\langle \Phi(T_i), \bigotimes_{k=1}^m u_k \bigotimes_{k=1}^q v_k\big\rangle\big|
\end{equation}
for $u_k\in E_r, v_k\in E_d$; where $E_d = \{x \in \{0,1\}^d : \|x\|_1 = 1\}$. So it holds that
\begin{equation}
\label{eqn:pfref}
\|\Phi\|_\circ \leq 2\max_{i,u_k,v_k}\textstyle\big|\big\langle \Phi(T_i), \bigotimes_{k=1}^m u_k \bigotimes_{k=1}^q v_k\big\rangle\big|
\end{equation}
for $i\in [N], u_k\in E_r, v_k\in E_d$. Next consider any $s\in\mathbb{R}$, and observe that
\begin{equation}
\label{eqn:expbnd}
\begin{aligned}
\mathbb{E}\exp\big(s\|\Phi\|_\circ\big) & \leq \mathbb{E}\exp\big(2s\max_{i,u_k,v_k}\textstyle\big|\big\langle \Phi(T_i), \bigotimes_{k=1}^m u_k \bigotimes_{k=1}^q v_k\big\rangle\big|\big)\\
& \textstyle\leq \sum_{\sigma\in\pm 1,i,u_k,v_k}\textstyle\mathbb{E}\exp\big(2s\sigma\big\langle \Phi(T_i), \bigotimes_{k=1}^m u_k \bigotimes_{k=1}^q v_k\big\rangle\big)
\end{aligned}
\end{equation}
We seek to bound the term on the right-hand side. Towards this end, note $\|B\Omega_i\| \leq \sqrt{p}\|W(B)\|_2\|\Omega_i\| \leq \sqrt{p}\alpha^\rho$ by the Cauchy-Schwarz inequality and Assumption \ref{ass:zdim}. This means that for $S_i = \sigma\big\langle Z^{\otimes m}(T_i\Omega)^{\otimes q}, \bigotimes_{k=1}^m u_k \bigotimes_{k=1}^q v_k\big\rangle$ we have $\big|S_i\big| \leq \alpha^{m+\rho q}p^{q/2}$. Next observe that
\begin{equation}
\label{eqn:bigone}
\begin{aligned}
\textstyle\mathbb{E}\exp\big(2s\sigma\big\langle \Phi(T_i), \bigotimes_{k=1}^m u_k \bigotimes_{k=1}^q v_k\big\rangle\big) &\leq \textstyle\big(\mathbb{E}\exp\big(\frac{4\epsilon s S_i}{n}\big)\big)^n\\
&=\textstyle\big(\mathbb{E}\sum_{k=0}^\infty\frac{1}{k!}\big(\frac{4\epsilon s S_i}{n}\big)^k\big)^n\\
&=\textstyle \big(\mathbb{E}\sum_{k=0}^\infty\frac{1}{(2k)!}\big(\frac{4s S_i}{n}\big)^{2k}\big)^n\\
&\textstyle\leq \big(\sum_{k=0}^\infty\frac{1}{k!}\big(\frac{16s^2p^q\alpha^{2m+2\rho q}}{n^2}\big)^{k}\big)^n\\
&\textstyle = \exp\big(\frac{16s^2p^q\alpha^{2m+2\rho q}}{n}\big)
\end{aligned}
\end{equation}
where the first line follows by a stochastic symmetrization step (i.e., Jensen's inequality, multiplication with i.i.d. Rademacher random variables $\epsilon$ having distribution $\mathbb{P}(\epsilon = \pm 1) = \frac{1}{2}$, using the triangle inequality, and concluded by Jensen's inequality), the third line follows since $\epsilon$ is a symmetric random variable, and the fourth line follows by replacing $(2k)!$ with $k!$ and substituting the absolute bound on $|S_i|$. Combining the above with (\ref{eqn:expbnd}) gives
\begin{equation}
\textstyle\mathbb{E}\exp\big(s\|\Phi\|_\circ\big) \leq 2(1+4q)^{dp}r^md^q\exp\big(\frac{16s^2p^q\alpha^{2m+2\rho q}}{n}\big).
\end{equation}
Using the Chernoff bound gives
\begin{equation}
\begin{aligned}
\mathbb{P}\big(\|\Phi\|_\circ > t\big) &\leq 2(1+4q)^{dp}r^md^q\inf_{s\in\mathbb{R}}\textstyle\exp\big(\frac{16s^2p^q\alpha^{2m+2\rho q}}{n}-s t\big)\\
&\textstyle=2(1+4q)^{dp}r^md^q\exp\big(-\frac{nt^2}{64p^q\alpha^{2m+2\rho q}}\big)
\end{aligned}
\end{equation}
The result now follows by choosing
\begin{equation}
\textstyle t = \sqrt{\frac{64p^q\alpha^{2m+2\rho q}}{n}\big(dp \log(1+4q) + m\log r + q \log d\big) + \gamma^2}
\end{equation}
and accordingly simplifying the resulting expression.
\end{proof}
\begin{remark}
Though a similar proof was used in \cite{wainwright2017high} for random matrices and in \cite{tomioka2014spectral} for random tensors, we use a stronger argument adapted to our setup, which yields a faster convergence rate: some terms that would be polynomial under a weaker argument are instead logarithmic. We use a stronger chaining argument than \cite{tomioka2014spectral,wainwright2017high} by using a telescoping sum (\ref{eqn:tele}) that reduces cross terms. We use a tensor symmetrization construction (\ref{eqn:tensym}) that allows us to exploit Banach's theorem \cite{banach1938homogene,bochnak1971polynomials}. We achieve better constants than \cite{wainwright2017high} by more carefully bounding our moment series expansion.
\end{remark}
\begin{proposition}
\label{prop:cpsi}
If Assumptions \ref{ass:drule}, \ref{ass:zdim} hold, then we have
\begin{equation}
\label{eqn:cpsi}
\textstyle\mathbb{P}\big(\|\widehat{\nu}_{m,q}-\nu_{m,q}\|_\circ > 2\mathcal{R}_{m,q}[n] + 2\gamma\big) \leq 4\exp\big(-\frac{n\gamma^2}{64p^q\alpha^{2m+2\rho q}}\big)
\end{equation}
for $\mathcal{R}_{m,q}[n] = 8\alpha^{m+\rho q}p^{q/2}\sqrt{\frac{dp\log(1+4q)+m\log r+q\log d}{n}}$.
\end{proposition}
\begin{proof}
We cannot prove the result directly as in Proposition \ref{prop:cphi} because $\mathbb{E}\widehat{\nu}_{m,q}(B) \neq \nu_{m,q}(B)$, whereas the proof of Proposition \ref{prop:cphi} used the fact that $\mathbb{E}\widehat{\varphi}_{m,q}(B) = \varphi_{m,q}(B)$ in the symmetrization step of (\ref{eqn:bigone}). We instead have to use an indirect approach to prove this result. We begin by noting $\widehat{\varphi}_{m,0}(B) = \mathbb{E}_n(Z^{\otimes m})$, $\varphi_{m,0}(B) = \mathbb{E}(Z^{\otimes m})$, $\widehat{\varphi}_{0,q}(B) = \mathbb{E}_n((B\Omega)^{\otimes q})$, and $\varphi_{0,q}(B) = \mathbb{E}((B\Omega)^{\otimes q})$. For any $W(B) \in \mathbb{S}^{dp-1}$ we have that $\|B\Omega_i\| \leq \sqrt{p}\|W(B)\|_2\|\Omega_i\| \leq \sqrt{p}\alpha^\rho$ by the Cauchy-Schwarz inequality and Assumption \ref{ass:zdim}. This means that $\|\widehat{\varphi}_{m,0}\|_\circ \leq \alpha^m$ and $\|\varphi_{0,q}\|_\circ \leq \alpha^{\rho q}p^{q/2}$. Now consider
\begin{equation}
\begin{aligned}
\|\widehat{\nu}_{m,q} - \nu_{m,q}\|_\circ &= \|\widehat{\varphi}_{m,0}\otimes\widehat{\varphi}_{0,q} - \varphi_{m,0}\otimes\varphi_{0,q}\|_\circ\\
&\leq \|\widehat{\varphi}_{m,0}\|_\circ\cdot\|\widehat{\varphi}_{0,q} - \varphi_{0,q}\|_\circ + \|\varphi_{0,q}\|_\circ\cdot\|\widehat{\varphi}_{m,0}-\varphi_{m,0}\|_\circ\\
&\leq \alpha^m\|\widehat{\varphi}_{0,q} - \varphi_{0,q}\|_\circ + \alpha^{\rho q}p^{q/2}\|\widehat{\varphi}_{m,0}-\varphi_{m,0}\|_\circ
\end{aligned}
\end{equation}
Then the union bound implies
\begin{multline}
\textstyle\mathbb{P}\big(\|\widehat{\nu}_{m,q} - \nu_{m,q}\|_\circ \leq 2\mathcal{R}_{m,q}[n]+ 2\gamma\big) \geq \\
\textstyle 1 - \mathbb{P}\big(\alpha^m\|\widehat{\varphi}_{0,q} - \varphi_{0,q}\|_\circ > \mathcal{R}_{m,q}[n] +\gamma\big) +\\
\textstyle- \mathbb{P}\big(\alpha^{\rho q}p^{q/2}\|\widehat{\varphi}_{m,0} - \varphi_{m,0}\|_\circ > \mathcal{R}_{m,q}[n] +\gamma\big)
\end{multline}
for $\mathcal{R}_{m,q}[n] = 8\alpha^{m+\rho q}p^{q/2}\sqrt{\frac{dp\log(1+4q)+m\log r+q\log d}{n}}$, which upon using (\ref{eqn:cphi}) from Proposition \ref{prop:cphi} gives (\ref{eqn:cpsi}), which is the desired result.
\end{proof}
\subsection{Feasible Set Consistency}
We are now in a position to study the constraints of the FO problem (\ref{eq:fo}). Towards this goal, we first define
\begin{equation}
\mathcal{S} = \big\{B \in\mathcal{B} : B\Omega \perp \!\!\! \perp Z\big\}.
\end{equation}
This is the feasible set of (\ref{eqn:ofdr}), which chooses an optimal fair decision rule when the underlying distributions are exactly known, for a decision rule that satisfies Assumption \ref{ass:drule}. We next define the family of random sets
\begin{multline}
\widehat{\mathcal{S}}_{\mathfrak{g},\mathfrak{h}} = \big\{B \in \mathcal{B} : \textstyle\big\|\widehat{\varphi}_{m,q}(B)-\widehat{\nu}_{m,q}(B)\big\|\le\Delta_{m,q},\text{for } (m,q)\in[\mathfrak{g}]\times[\mathfrak{h}]\big\}.
\end{multline}
This is simply the feasible set of the level-$(\mathfrak{g},\mathfrak{h})$ FO problem (\ref{eq:fo}).
\begin{proposition}
\label{prop:closed}
$\mathcal{S}$ and $\widehat{\mathcal{S}}_{\mathfrak{g},\mathfrak{h}}$ are closed under {Assumptions \ref{ass:drule} and \ref{ass:zdim}.}
\end{proposition}
\begin{proof}
We first prove the result for $\mathcal{S}$. Consider any convergent sequence $B_k\in\mathbb{R}^{d\times p}$ with $B_k \in \mathcal{S}$ and $\lim_kB_k=B_0$. {Because of our assumptions, the hypothesis of Theorem \ref{thm:kac} is satisfied. This theorem says for all $k$ we have}
\begin{equation}
\varphi_{m,q}(B_k) = \nu_{m,q}(B_k), \text{for } m,q\geq 1.
\end{equation}
But the $\varphi$ and $\nu$ are continuous since they are multilinear operators on Euclidean space. This means $\lim_k \varphi_{m,q}(B_k) = \varphi_{m,q}(B_0)$ and $\lim_k \nu_{m,q}(B_k) = \nu_{m,q}(B_0)$ for $m,q\geq 1$. As a result we have
\begin{equation}
\varphi_{m,q}(B_0) = \nu_{m,q}(B_0), \text{for } m,q\geq 1,
\end{equation}
which by Theorem \ref{thm:kac} implies $B_0 \in \mathcal{S}$. This proves that $\mathcal{S}$ is closed.
The proof for $\widehat{\mathcal{S}}_{\mathfrak{g},\mathfrak{h}}$ is a simple modification of the above argument. Consider any convergent sequence $B_k\in\mathbb{R}^{d\times p}$ with $B_k \in \widehat{\mathcal{S}}_{\mathfrak{g},\mathfrak{h}}$ and $\lim_kB_k=B_0$. By definition of $\widehat{\mathcal{S}}_{\mathfrak{g},\mathfrak{h}}$ we have for all $k$ that
\begin{equation}
\textstyle\big\|\widehat{\varphi}_{m,q}(B_k)-\widehat{\nu}_{m,q}(B_k)\big\|\le\Delta_{m,q},\text{for } (m,q)\in[\mathfrak{g}]\times[\mathfrak{h}].
\end{equation}
But the $\widehat{\varphi}$ and $\widehat{\nu}$ are continuous since they are multilinear operators on Euclidean space, and so the normed function $\big\|\widehat{\varphi}_{m,q}(B)-\widehat{\nu}_{m,q}(B)\big\|$ is also continuous. As a result we have
\begin{multline}
\textstyle\big\|\widehat{\varphi}_{m,q}(B_0)-\widehat{\nu}_{m,q}(B_0)\big\| = \lim_k \big\|\widehat{\varphi}_{m,q}(B_k)-\widehat{\nu}_{m,q}(B_k)\big\|\leq\Delta_{m,q},\\ \text{for } m,q\geq 1.
\end{multline}
This means $B_0 \in \widehat{\mathcal{S}}_{\mathfrak{g},\mathfrak{h}}$ by definition. This proves that $\widehat{\mathcal{S}}_{\mathfrak{g},\mathfrak{h}}$ is closed.
\end{proof}
\begin{figure*}[t]
\begin{center}
\begin{subfigure}[t]{0.47\linewidth}
\includegraphics[width=\linewidth]{setex_1}
\caption{Unregularized Set Intersections}
\end{subfigure}\qquad
\begin{subfigure}[t]{0.47\linewidth}
\includegraphics[width=\linewidth]{setex_2}
\caption{Regularized Set Intersections}
\end{subfigure}
\end{center}
\caption{\label{fig:setex} The left shows how the intersection of a sequence of sets may not converge to the intersection of the limiting sets. The right shows how regularization of the sequence of sets can help to ensure that the intersection of the regularized sets converges to the intersection of the limiting sets.}
\end{figure*}
The sequence of random sets $\widehat{\mathcal{S}}_{\mathfrak{g},\mathfrak{h}}$ is technically difficult to study because each random set is defined by the intersection of many random constraint inequalities, with the number of these random constraints increasing towards infinity. There is a more subtle technical difficulty that needs to be addressed. The issue is that when intersecting a sequence of sets, the intersection of the sequence terms generally does not converge to the intersection of the limiting sets \cite{aswani2019statistics,matheron1975}. The next example demonstrates this phenomenon in a deterministic setting, and it provides some insight into how the situation can be addressed through a carefully designed regularization approach.
\begin{example}
\label{exa:detset}
Fig. \ref{fig:setex} provides a visualization of this example. Let us first define $C_n = [-1,-\frac{1}{n}]$ and $D_n = [\frac{1}{n},1]$, which each specify a deterministic sequence of compact sets. Then we have that $\lim_n C_n = [-1,0] =: C_0$ and that $\lim_n D_n = [0,1] =: D_0$. However, note that $C_n\bigcap D_n = \emptyset$. This means $\lim_n C_n\bigcap D_n = \emptyset \neq C_0\bigcap D_0 = \{0\}$. Now suppose we carefully regularize these sequences of sets. Specifically consider the regularized sequence of deterministic, compact sets $C_n' = [-1,-\frac{1}{n} + \Delta_n]$ and $D_n' = [\frac{1}{n}-\Delta_n, 1]$ for $\Delta_n = \frac{2}{n}$, where we think of the $\Delta_n$ as regularizing by inflating the sets. Clearly this choice of regularization goes to zero since $\lim_n\Delta_n = 0$. More importantly, we now have $C_n'\bigcap D_n' = [-\frac{1}{n}, \frac{1}{n}]$. This means we have $\lim_n C_n' = C_0$ and $\lim_n D_n' = D_0$ with $\lim_n C_n'\bigcap D_n' = \{0\} = C_0\bigcap D_0$.
\end{example}
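The interval computations in this example can be carried out exactly with rational arithmetic; the following sketch mirrors the example:

```python
from fractions import Fraction as F

# C_n = [-1, -1/n] and D_n = [1/n, 1] have empty intersection for every n,
# while the inflated sets C_n' and D_n' (with Delta_n = 2/n) intersect in
# [-1/n, 1/n], which shrinks to the limiting intersection {0}.
def intersect(a, b):
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo <= hi else None

for n in (2, 10, 100, 1000):
    C, D = (F(-1), F(-1, n)), (F(1, n), F(1))
    assert intersect(C, D) is None                     # unregularized: empty
    delta = F(2, n)
    Cp, Dp = (F(-1), F(-1, n) + delta), (F(1, n) - delta, F(1))
    assert intersect(Cp, Dp) == (F(-1, n), F(1, n))    # regularized: [-1/n, 1/n]
```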
The above example was deterministic, and it may not initially be clear whether such behavior is an issue for our random setting. The next example demonstrates a situation where this non-convergence occurs for $\widehat{\mathcal{S}}_{\mathfrak{g},\mathfrak{h}}$.
\begin{example}
\label{exa:noncon}
Consider a setting where $B\in\mathbb{R}$ and the distributions are $X \sim \mathrm{Ber}(x)$ and $Z\sim\mathrm{Ber}(z)$ with $X\perp \!\!\! \perp Z$. We assume that $x\in(0,1)$ and $z\in(0,1)$ to prevent degeneracies in this example. In this setup $\mathcal{S} = \mathcal{B}$. Now observe that $(Z_i)^m = Z_i$ and $(X_i)^q = X_i$ for $(m,q)\geq 1$ since $X_i,Z_i\in\{0,1\}$. This means the $(m,q) \geq 1$ constraints in $\widehat{\mathcal{S}}_{\mathfrak{g},\mathfrak{h}}$ for $\Delta_{m,q} = 0$ are
\begin{multline}
\textstyle\big|\big(\frac{1}{n}\sum_{i=1}^n(Z_i)^m(X_i)^q - \frac{1}{n}\sum_{i=1}^n(Z_i)^m\cdot\frac{1}{n}\sum_{i=1}^n(X_i)^q\big)B^q\big| = \\\textstyle\big|\big(\frac{1}{n}\sum_{i=1}^nZ_iX_i - \frac{1}{n}\sum_{i=1}^nZ_i\cdot\frac{1}{n}\sum_{i=1}^nX_i\big)B^q\big| = 0.
\end{multline}
This means $\widehat{\mathcal{S}}_{\mathfrak{g},\mathfrak{h}} = \mathcal{B}$ whenever $\mathcal{E}_n = \{\frac{1}{n}\sum_{i=1}^nZ_iX_i = \frac{1}{n}\sum_{i=1}^nZ_i\cdot\frac{1}{n}\sum_{i=1}^nX_i\}$ occurs, and that $\widehat{\mathcal{S}}_{\mathfrak{g},\mathfrak{h}} = \{0\}$ otherwise. And so trivially by the definition of $\widehat{\mathcal{S}}_{\mathfrak{g},\mathfrak{h}}$ we have $\aslimsup_n \widehat{\mathcal{S}}_{\mathfrak{g},\mathfrak{h}} \subseteq\mathcal{B}$. If we recall the classical setting of a $2\times 2$ contingency table, this event $\mathcal{E}_n$ is equivalent to having exact equality between a marginal and cross-term in the contingency table. As a result, we consider a test statistic inspired by the Pearson test for independence
\begin{equation}
T_n = n\cdot\big(\mathbb{E}_n(ZX) - \mathbb{E}_n(Z)\mathbb{E}_n(X)\big)^2.
\end{equation}
Clearly by its definition, we have that $T_n = 0$ if and only if $\mathcal{E}_n$ holds. Also, a straightforward calculation gives
\begin{equation}
\textstyle\mathbb{E}(T_n) = (\frac{n-1}{n})(zx)(1-z-x+zx).
\end{equation}
Note that $\mathbb{E}(T_n) > 0$ since we assumed $x,z\in(0,1)$, and note that $\mathbb{E}(T_n)$ is monotonically increasing towards $\lim_n \mathbb{E}(T_n) = (zx)(1-z-x+zx) > 0$. Now using McDiarmid's inequality we get for any $t > 0$ that
\begin{equation}
\mathbb{P}(\mathcal{E}_n) \leq \mathbb{P}(T_n \leq \mathbb{E}(T_n) - t) \leq \exp(-nt^2/8).
\end{equation}
Choosing $t = (zx)(1-z-x+zx)/2$, the Borel-Cantelli lemma implies $\mathcal{E}_n$ cannot occur infinitely often. Hence we must have $\asliminf_n \widehat{\mathcal{S}}_{\mathfrak{g},\mathfrak{h}} = \{0\} \nsupseteq \mathcal{S}$.
\end{example}
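The expectation of $T_n$ in this example can be verified by exact enumeration over all $2^{2n}$ outcomes for a small $n$; the following sketch uses the hypothetical values $n=3$, $z=0.3$, $x=0.6$ and checks the equivalent factored form $\frac{n-1}{n}\,z(1-z)\,x(1-x)$:

```python
from itertools import product

# Exact enumeration of E(T_n) for independent Bernoulli Z ~ Ber(z), X ~ Ber(x),
# where T_n = n * (E_n(ZX) - E_n(Z) E_n(X))^2.
n, z, x = 3, 0.3, 0.6
ET = 0.0
for Zs in product((0, 1), repeat=n):
    for Xs in product((0, 1), repeat=n):
        pr = 1.0
        for zi in Zs:
            pr *= z if zi else 1 - z
        for xi in Xs:
            pr *= x if xi else 1 - x
        cov = sum(a * b for a, b in zip(Zs, Xs)) / n - sum(Zs) / n * sum(Xs) / n
        ET += pr * n * cov**2
# Closed form: E(T_n) = ((n-1)/n) * z(1-z) * x(1-x).
assert abs(ET - (n - 1) / n * z * (1 - z) * x * (1 - x)) < 1e-10
```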
Example \ref{exa:detset} provides the key intuition for how potential non-convergence of $\widehat{\mathcal{S}}_{\mathfrak{g},\mathfrak{h}}$, as demonstrated in Example \ref{exa:noncon}, can be resolved. If we can regularize the sets $\widehat{\mathcal{S}}_{\mathfrak{g},\mathfrak{h}}$ by sufficiently inflating them in such a way that the amount of inflation decreases with $n$, then we may be able to ensure the almost sure stochastic convergence of $\widehat{\mathcal{S}}_{\mathfrak{g},\mathfrak{h}}$ to $\mathcal{S}$. In fact, the notation of Example \ref{exa:detset} was chosen to be suggestive of how we will perform this regularization: We will purposefully keep the $\Delta_{m,q} > 0$ while allowing them to shrink towards zero.
More broadly, the FO problem (\ref{eq:fo}) has two types of tuning parameters, namely the $(\mathfrak{g},\mathfrak{h})$ that controls the number of moment constraints and the $\Delta_{m,q}$ that controls the strictness of the moment constraint. This gives us considerable flexibility when studying asymptotic properties. In the following results, we will have to make choices for both of these tuning parameters.
\begin{theorem}
\label{thm:setcon}
{Suppose $\Delta_{m,q} = 3(1+\log n)\cdot\mathcal{R}_{m,q}[n]$ and $\mathfrak{g} = \mathfrak{h} = O(\log n)$, such that $\Delta_{\mathfrak{g},\mathfrak{h}} = o(1)$.} If Assumptions \ref{ass:drule}, \ref{ass:2norm}, \ref{ass:zdim} hold, then $\aslim_n \widehat{\mathcal{S}}_{\mathfrak{g},\mathfrak{h}} = \mathcal{S}$.
\end{theorem}
\begin{proof}
For the first part of the proof we will show $\asliminf_n \widehat{\mathcal{S}}_{\mathfrak{g},\mathfrak{h}} \supseteq \mathcal{S}$. Indeed, suppose this is not true. Then there exists $B_0 \in \mathcal{S}$ and an open neighborhood $\mathcal{N}\subseteq\mathcal{B}$ of $B_0$ such that $\mathcal{N}\bigcap\widehat{\mathcal{S}}_{\mathfrak{g},\mathfrak{h}} = \emptyset$ infinitely often (Theorem 4.5 of \cite{rockafellar2009variational}). We can rewrite one of these events as
\begin{equation}
\label{eqn:event}
\textstyle\big\{\mathcal{N}\bigcap\widehat{\mathcal{S}}_{\mathfrak{g},\mathfrak{h}} = \emptyset\big\} = \bigcup_{m\in[\mathfrak{g}]}\bigcup_{q\in[\mathfrak{h}]}\big\{\displaystyle\inf_{B\in\mathcal{N}}\|\widehat{\Xi}_{m,q}(B)\| > \Delta_{m,q}\big\},
\end{equation}
where for convenience we define the multilinear operators ${\Xi}_{m,q} = {\varphi}_{m,q}-{\nu}_{m,q}$, $\widehat{\Xi}_{m,q} = \widehat{\varphi}_{m,q}-\widehat{\nu}_{m,q}$, $\Phi_{m,q} = \widehat{\varphi}_{m,q}-\varphi_{m,q}$, and $\Psi_{m,q} = \widehat{\nu}_{m,q} - \nu_{m,q}$. Because Theorem \ref{thm:kac} can be rewritten under the assumptions of this theorem as
\begin{equation}
\label{eqn:kacres}
\sup_{B\in\mathcal{S}} \|\varphi_{m,q}(B)-\nu_{m,q}(B)\| = 0 \text{ for } m,q\geq 1,
\end{equation}
application of the triangle inequality yields
\begin{equation}
\begin{aligned}
\|\widehat{\Xi}_{m,q}(B_0)\| &\leq \|\Xi_{m,q}(B_0)\| + \|\Phi_{m,q}(B_0)\| + \|\Psi_{m,q}(B_0)\| \\
&\leq\lambda^{q/2}\|\Phi_{m,q}\|_\circ + \lambda^{q/2}\|\Psi_{m,q}\|_\circ
\end{aligned}
\end{equation}
Let $\mathcal{G}_{m,q}[n] = (1+\log n)\lambda^{q/2}\mathcal{R}_{m,q}[n]$. Note that for all $n$ sufficiently large, the union bound gives us that
\begin{equation}
\begin{aligned}
\textstyle\mathbb{P}\big(\mathcal{N}\bigcap\widehat{\mathcal{S}}_{\mathfrak{g},\mathfrak{h}} = \emptyset\big) &\textstyle\leq \sum_{m\in [\mathfrak{g}]}\sum_{q\in[\mathfrak{h}]}\mathbb{P}\big(\lambda^{q/2}\|\Phi_{m,q}\|_\circ>\mathcal{G}_{m,q}[n]\big) + \\
&\textstyle\qquad \sum_{m\in [\mathfrak{g}]}\sum_{q\in[\mathfrak{h}]}\mathbb{P}\big(\lambda^{q/2}\|\Psi_{m,q}\|_\circ>2\mathcal{G}_{m,q}[n]\big)\\
&\leq O((\log n/n)^2)
\end{aligned}
\end{equation}
where the last line used Propositions \ref{prop:cphi} and \ref{prop:cpsi}, along with the relation that $\exp(-\frac{n\gamma^2}{64p^q\alpha^{2m+2\rho q}}) = O(1/n^2)$ for $\gamma = \log n\cdot\mathcal{R}_{m,q}[n]$. Thus the Borel-Cantelli lemma says $\mathcal{N}\bigcap\widehat{\mathcal{S}}_{\mathfrak{g},\mathfrak{h}} = \emptyset$ only finitely many times, which is a contradiction. This proves $\asliminf_n \widehat{\mathcal{S}}_{\mathfrak{g},\mathfrak{h}} \supseteq \mathcal{S}$.
For the second part of the proof we will show $\aslimsup_n \widehat{\mathcal{S}}_{\mathfrak{g},\mathfrak{h}} \subseteq \mathcal{S}$. Indeed, suppose this is not true. Then there exists $B_0 \in \limsup_n \widehat{\mathcal{S}}_{\mathfrak{g},\mathfrak{h}}$ and a closed neighborhood $\mathcal{N}\subseteq\mathcal{B}$ of $B_0$ such that $\mathcal{N}\bigcap\mathcal{S} = \emptyset$ and $\mathcal{N}\bigcap\widehat{\mathcal{S}}_{\mathfrak{g},\mathfrak{h}} \neq \emptyset$ infinitely often (Theorem 4.5 of \cite{rockafellar2009variational}). But Theorem \ref{thm:kac} implies there exists some $m,q\geq 1$ such that we have
\begin{equation}
\zeta := \inf_{B\in\mathcal{N}}\|\varphi_{m,q}(B)-\nu_{m,q}(B)\| > 0.
\end{equation}
We will keep $m,q$ fixed at these values for the remainder of the proof. Now note that for one of the events $\mathcal{N}\bigcap\widehat{\mathcal{S}}_{\mathfrak{g},\mathfrak{h}} \neq \emptyset$ we have
\begin{equation}
\textstyle\big\{\mathcal{N}\bigcap\widehat{\mathcal{S}}_{\mathfrak{g},\mathfrak{h}} \neq \emptyset\big\} \subseteq \displaystyle\big\{\inf_{B\in\mathcal{N}}\|\widehat{\Xi}_{m,q}(B)\| \leq \Delta_{m,q}\big\}.
\end{equation}
Application of the triangle inequality yields
\begin{multline}
\zeta = \inf_{B\in\mathcal{N}}\|{\Xi}_{m,q}(B)\| \leq \\
\inf_{B\in\mathcal{N}}\|\widehat{\Xi}_{m,q}(B)\| + \sup_{B\in\mathcal{N}}\|\Phi_{m,q}(B)\| + \sup_{B\in\mathcal{N}}\|\Psi_{m,q}(B)\| \leq\\
\inf_{B\in\mathcal{N}}\|\widehat{\Xi}_{m,q}(B)\| + \lambda^{q/2}\|\Phi_{m,q}\|_\circ + \lambda^{q/2}\|\Psi_{m,q}\|_\circ.
\end{multline}
Let $\mathcal{G}_{m,q}[n] = (1+\log n)\lambda^{q/2}\mathcal{R}_{m,q}[n]$. Note that for all $n$ sufficiently large, we have $\zeta-\Delta_{m,q} \geq \zeta/2 \geq 3\mathcal{G}_{m,q}[n]$. Hence the union bound gives
\begin{equation}
\begin{aligned}
\textstyle\mathbb{P}\big(\mathcal{N}\bigcap\widehat{\mathcal{S}}_{\mathfrak{g},\mathfrak{h}} \neq \emptyset\big) &\textstyle\leq \mathbb{P}\big(\lambda^{q/2}\|\Phi_{m,q}\|_\circ>\mathcal{G}_{m,q}[n]\big) + \\
&\textstyle\qquad \mathbb{P}\big(\lambda^{q/2}\|\Psi_{m,q}\|_\circ>2\mathcal{G}_{m,q}[n]\big)\\
&\leq O(1/n^2)
\end{aligned}
\end{equation}
where the last line used Propositions \ref{prop:cphi} and \ref{prop:cpsi}, along with the relation that $\exp(-\frac{n\gamma^2}{64p^q\alpha^{2m+2\rho q}}) = O(1/n^2)$ for $\gamma = \log n\cdot\mathcal{R}_{m,q}[n]$. Thus the Borel-Cantelli lemma says $\mathcal{N}\bigcap\widehat{\mathcal{S}}_{\mathfrak{g},\mathfrak{h}} \neq \emptyset$ only finitely many times, which is a contradiction. This proves $\aslimsup_n \widehat{\mathcal{S}}_{\mathfrak{g},\mathfrak{h}} \subseteq \mathcal{S}$.
\end{proof}
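The Borel--Cantelli step used in both parts of the above proof rests on summability of the per-$n$ failure probabilities. As a brief sketch of this step, let $p_n$ denote the failure probability at sample size $n$, so that $p_n = O((\log n/n)^2)$ with the implied constant absorbed into $c$ below. Since $(\log n)^2 \leq \sqrt{n}$ for all $n$ sufficiently large,
\begin{equation}
\sum_{n\geq 2} p_n \;\leq\; c\sum_{n\geq 2} \frac{(\log n)^2}{n^2} \;\leq\; c' + c\sum_{n\geq 2} n^{-3/2} \;<\; \infty,
\end{equation}
and so the Borel--Cantelli lemma implies the corresponding events occur for only finitely many $n$, almost surely.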
\subsection{Solution Set Consistency}
Next consider the solution set
\begin{equation}
\widehat{\mathcal{O}}_{\mathfrak{g},\mathfrak{h}} = \arg\min_{B}\big\{R_n(B\cdot\omega(x,z))\ \big|\ B \in \widehat{\mathcal{S}}_{\mathfrak{g},\mathfrak{h}}\big\}
\end{equation}
for the level-$(\mathfrak{g},\mathfrak{h})$ FO problem (\ref{eq:fo}). Similarly, consider the solution set
\begin{equation}
\mathcal{O} = \arg\min_{B}\big\{R(B\cdot\omega(x,z))\ \big|\ B \in \mathcal{S}\big\}
\end{equation}
for the optimization problem (\ref{eqn:ofdr}), which chooses an optimal fair decision rule when the underlying distributions are exactly known.
Our next result shows that solving the FO problem (\ref{eq:fo}) provides a statistically consistent approximation to solving the optimization problem (\ref{eqn:ofdr}), and we state the result using the solution sets $\widehat{\mathcal{O}}_{\mathfrak{g},\mathfrak{h}}$ and $\mathcal{O}$ defined above.
\begin{theorem}
\label{thm:opcon}
{Suppose $\Delta_{m,q} = 3(1+\log n)\cdot\mathcal{R}_{m,q}[n]$ and $\mathfrak{g} = \mathfrak{h} = O(\log n)$, so that $\Delta_{\mathfrak{g},\mathfrak{h}} = o(1)$.} If Assumptions \ref{ass:drule}--\ref{ass:convergence}, \ref{ass:zdim} hold, then $\aslimsup_n \widehat{\mathcal{O}}_{\mathfrak{g},\mathfrak{h}} \subseteq \mathcal{O}$.
\end{theorem}
\begin{proof}
First consider the indicator function $\Gamma(B, \widehat{\mathcal{S}}_{\mathfrak{g},\mathfrak{h}})$. Combining our Theorem \ref{thm:setcon} with Proposition 7.4 of \cite{rockafellar2009variational} gives $\aselim \Gamma(\cdot,\widehat{\mathcal{S}}_{\mathfrak{g},\mathfrak{h}}) = \Gamma(\cdot,\mathcal{S})$ relative to $\mathbb{R}^{d\times p}$. Next we claim $\aslim \Gamma(\cdot,\widehat{\mathcal{S}}_{\mathfrak{g},\mathfrak{h}}) = \Gamma(\cdot,\mathcal{S})$ relative to $\mathbb{R}^{d\times p}$. Since Proposition \ref{prop:closed} says the $\widehat{\mathcal{S}}_{\mathfrak{g},\mathfrak{h}}$ are closed, the remark after Theorem 7.10 of \cite{rockafellar2009variational} implies it is sufficient to show that for every $B_0 \in\mathcal{S}$ we have $B_0 \notin \widehat{\mathcal{S}}_{\mathfrak{g},\mathfrak{h}}$ only a finite number of times. A similar argument to the first part of the proof for Theorem \ref{thm:setcon} can be used to show this, and so we omit the details.
Next we note that the level-$(\mathfrak{g},\mathfrak{h})$ FO problem (\ref{eq:fo}) can be written as $\min_B h_n(B) + \Gamma(B, \widehat{\mathcal{S}}_{\mathfrak{g},\mathfrak{h}})$, and the optimization problem (\ref{eqn:ofdr}) can be written as $\min_B h(B) + \Gamma(B, \mathcal{S})$. Now using Theorem 7.46 of \cite{rockafellar2009variational} gives us that
\begin{equation}
\aselim \big(h_n(\cdot) + \Gamma(\cdot, \widehat{\mathcal{S}}_{\mathfrak{g},\mathfrak{h}})\big) = h(\cdot) + \Gamma(\cdot, \mathcal{S}).
\end{equation}
The result now follows by direct application of Proposition 7.30 of \cite{rockafellar2009variational}.
\end{proof}
\begin{remark}
If the optimization problem (\ref{eqn:ofdr}) is infeasible, then we will have $\mathcal{O} = \emptyset$ and $\aslimsup_n \widehat{\mathcal{O}}_{\mathfrak{g},\mathfrak{h}} = \emptyset$, with $\widehat{\mathcal{O}}_{\mathfrak{g},\mathfrak{h}} \neq \emptyset$ only finitely many times.
\end{remark}
\begin{remark}
Under additional assumptions, we can guarantee that $\aslimsup_n \widehat{\mathcal{O}}_{\mathfrak{g},\mathfrak{h}} \neq \emptyset$, with $\widehat{\mathcal{O}}_{\mathfrak{g},\mathfrak{h}} = \emptyset$ only finitely many times. In particular, it can be shown this occurs when the underlying problem satisfies some regularity conditions (see Theorem 7.33 of \cite{rockafellar2009variational}) and $\mathcal{O} \neq \emptyset$. If $\mathcal{O}$ consists of a single point, then it can also be shown that $\aslim_n\widehat{\mathcal{O}}_{\mathfrak{g},\mathfrak{h}}=\mathcal{O}$.
\end{remark}
The conclusion ``$\aslimsup_n \widehat{\mathcal{O}}_{\mathfrak{g},\mathfrak{h}} \subseteq \mathcal{O}$'' of the above theorem says that every cluster point (i.e., limit of a convergent subsequence), as $n$ increases, of optimal solutions to the sample-based FO problem (\ref{eq:fo}) belongs to the set of optimal solutions to the problem (\ref{eqn:ofdr}) that we originally set out to solve using a sample-based approach. A stronger result is generally not true \cite{rockafellar2009variational}; however, as mentioned above, if $\mathcal{O}$ is a singleton then it can be shown that $\aslim_n\widehat{\mathcal{O}}_{\mathfrak{g},\mathfrak{h}}=\mathcal{O}$.
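As a hypothetical one-dimensional illustration (outside the fairness setting) of why the reverse inclusion can fail, take $h_n(x) = x^2/n$ with feasible set $[-1,1]$ for every $n$. Then
\begin{equation}
\arg\min_{x\in[-1,1]} h_n(x) = \{0\} \ \text{ for every } n, \qquad\text{while}\qquad \arg\min_{x\in[-1,1]} h(x) = [-1,1]
\end{equation}
for the (epi-)limit $h \equiv 0$. Here $\limsup_n \arg\min h_n = \{0\}$ is a proper subset of the limiting solution set, so in general not every optimal solution of the limiting problem is attained by sample-based solutions.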
\subsection{Finite Sample Bounds}
{The solution set consistency results of the previous subsection are asymptotic, and here we provide finite sample bounds that characterize this consistency more precisely. For our FO problem (\ref{eq:fo}), there are two kinds of consistency to discuss. The first is the usual notion of how well the sample-based optimal fair decision rule $\widehat{\delta}_n(x,z) = \widehat{B}_n\cdot\omega(x,z)$, for any $\widehat{B}_n \in \widehat{\mathcal{O}}_{\mathfrak{g},\mathfrak{h}}$, minimizes the risk $R(\cdot)$. The second is to quantify how close $\widehat{\delta}_n(X,Z) = \widehat{B}_n\Omega$ is to being independent of $Z$.}
{To study the first kind of consistency, we have to strengthen Assumption \ref{ass:convergence}. Recall this assumption says the approximate risk function composed with the parametric decision rule epi-converges almost surely. We will replace this assumption with a finite sample analog that specifies uniform convergence:}
\begin{assumption}\label{ass:convergence_prime}
{
Let $h_n(B)$ and $h(B)$ be the functions that are defined in Assumption \ref{ass:convergence}. We assume that $\sup_{B\in\mathcal{B}} |h_n(B) - h(B)| \leq r_n$ holds with probability at least $1 - c_n$, where we have that $\lim_n r_n = 0$ and $\lim_n c_n = 0$.}
\end{assumption}
{With the modified assumption and the distance definition (\ref{eqn:mdef}), we can prove finite sample bounds for the FO problem (\ref{eq:fo}). Recall that $\widehat{\delta}_n(x,z) = \widehat{B}_n\cdot\omega(x,z)$ for any $\widehat{B}_n \in \widehat{\mathcal{O}}_{\mathfrak{g},\mathfrak{h}}$ is a sample-based optimal fair decision rule, and $\delta^*(x,z) = B^*\cdot\omega(x,z)$ for any $B^*\in\mathcal{O}$ is an optimal fair decision rule. }
\begin{theorem}
\label{thm:fsbnd}
{Suppose $\Delta_{m,q} = 3(1+\log n)\cdot\mathcal{R}_{m,q}[n]$ and $\mathfrak{g} = \mathfrak{h} = \kappa_1\log n$ (rounded down when non-integer), where $\kappa_1 = (20p\log \alpha + 5\log p + 1)^{-1}$. If Assumptions \ref{ass:drule}, \ref{ass:2norm}, \ref{ass:zdim}, \ref{ass:convergence_prime} hold, then we have: $R(\widehat{\delta}_n) \leq R(\delta^*) + 2r_n$, with probability at least $1 - 6(\kappa_1\log n/n)^2 - 2c_n$; and that
\begin{equation}
\mathbb{H}(\widehat{\delta}_n(X,Z), Z) \leq e^{1/\kappa_2}n^{\kappa_1/\kappa_2}\Delta_{\mathfrak{g},\mathfrak{h}} + \textstyle\frac{\kappa_2(r+d)}{\kappa_1\log n + 1}
\end{equation}
with probability at least $1 - 6(\kappa_1\log n/n)^2$, where $\kappa_2 = e\alpha^\rho\lambda p$.}
\end{theorem}
\begin{proof}
{We begin by bounding the probability that $\widehat{\mathcal{S}}_{\mathfrak{g},\mathfrak{h}} \supseteq \mathcal{S}$. Observe that we can rewrite the complement of this event as
\begin{equation}
\label{eqn:event_fs}
\textstyle\big\{\widehat{\mathcal{S}}_{\mathfrak{g},\mathfrak{h}} \nsupseteq \mathcal{S}\big\} = \bigcup_{m\in[\mathfrak{g}]}\bigcup_{q\in[\mathfrak{h}]}\big\{\displaystyle\sup_{B\in\mathcal{S}}\|\widehat{\Xi}_{m,q}(B)\| > \Delta_{m,q}\big\},
\end{equation}
where for convenience we define the multilinear operators ${\Xi}_{m,q} = {\varphi}_{m,q}-{\nu}_{m,q}$, $\widehat{\Xi}_{m,q} = \widehat{\varphi}_{m,q}-\widehat{\nu}_{m,q}$, $\Phi_{m,q} = \widehat{\varphi}_{m,q}-\varphi_{m,q}$, and $\Psi_{m,q} = \widehat{\nu}_{m,q} - \nu_{m,q}$. Because Theorem \ref{thm:kac} can be rewritten under the assumptions of this theorem as
\begin{equation}
\sup_{B\in\mathcal{S}} \|\varphi_{m,q}(B)-\nu_{m,q}(B)\| = 0 \text{ for } m,q\geq 1,
\end{equation}
for any $B\in\mathcal{S}$ an application of the triangle inequality yields
\begin{equation}
\begin{aligned}
\|\widehat{\Xi}_{m,q}(B)\| &\leq \|\Xi_{m,q}(B)\| + \|\Phi_{m,q}(B)\| + \|\Psi_{m,q}(B)\| \\
&\leq\lambda^{q/2}\|\Phi_{m,q}\|_\circ + \lambda^{q/2}\|\Psi_{m,q}\|_\circ.
\end{aligned}
\end{equation}
Let $\mathcal{G}_{m,q}[n] = (1+\log n)\lambda^{q/2}\mathcal{R}_{m,q}[n]$, and note that the union bound gives
\begin{equation}
\begin{aligned}
\label{eqn:psnfs}
\textstyle\mathbb{P}\big(\widehat{\mathcal{S}}_{\mathfrak{g},\mathfrak{h}} \nsupseteq \mathcal{S}\big) &\textstyle\leq \sum_{m\in [\mathfrak{g}]}\sum_{q\in[\mathfrak{h}]}\mathbb{P}\big(\lambda^{q/2}\|\Phi_{m,q}\|_\circ>\mathcal{G}_{m,q}[n]\big) + \\
&\textstyle\qquad \sum_{m\in [\mathfrak{g}]}\sum_{q\in[\mathfrak{h}]}\mathbb{P}\big(\lambda^{q/2}\|\Psi_{m,q}\|_\circ>2\mathcal{G}_{m,q}[n]\big)\\
&\leq 6(\kappa_1\log n/n)^2
\end{aligned}
\end{equation}
where the last line used Propositions \ref{prop:cphi} and \ref{prop:cpsi}. This implies $\mathbb{P}(\widehat{\mathcal{S}}_{\mathfrak{g},\mathfrak{h}} \supseteq \mathcal{S}) \geq 1-6(\kappa_1\log n/n)^2$, which means that $\mathbb{P}(h_n(\widehat{B}_n) \leq h_n(B^*)$ for all $B^*\in\mathcal{O}) \geq \mathbb{P}(\mathcal{O}\subseteq \widehat{\mathcal{S}}_{\mathfrak{g},\mathfrak{h}}) \geq \mathbb{P}(\widehat{\mathcal{S}}_{\mathfrak{g},\mathfrak{h}} \supseteq \mathcal{S}) \geq 1-6(\kappa_1\log n/n)^2$. Combining this with Assumption \ref{ass:convergence_prime} implies $R(\widehat{\delta}_n) \leq R(\delta^*) + 2r_n$, with probability at least $1 - 6(\kappa_1\log n/n)^2 - 2c_n$. This proves the first part of the result.}
{We prove the second part of the result in two steps. As the first step, we consider the event
\begin{equation}
\mathcal{E} = \textstyle \bigcup_{m\in[\mathfrak{g}]}\bigcup_{q\in[\mathfrak{h}]}\big\{\sup_{\widehat{B}_n\in\widehat{\mathcal{O}}_{\mathfrak{g},\mathfrak{h}}}\|\Xi_{m,q}(\widehat{B}_n)\| > 2\Delta_{m,q}\big\},
\end{equation}
and note that for $\widehat{B}_n \in \widehat{\mathcal{O}}_{\mathfrak{g},\mathfrak{h}}$ an application of the triangle inequality yields
\begin{equation}
\begin{aligned}
\|\Xi_{m,q}(\widehat{B}_n)\| &\leq \|\widehat{\Xi}_{m,q}(\widehat{B}_n)\| + \|\Phi_{m,q}(\widehat{B}_n)\| + \|\Psi_{m,q}(\widehat{B}_n)\| \\
&\leq\Delta_{m,q} + \lambda^{q/2}\|\Phi_{m,q}\|_\circ + \lambda^{q/2}\|\Psi_{m,q}\|_\circ
\end{aligned}
\end{equation}
since $\|\widehat{\Xi}_{m,q}(\widehat{B}_n)\| \leq \Delta_{m,q}$ because $\widehat{B}_n \in \widehat{\mathcal{O}}_{\mathfrak{g},\mathfrak{h}} \subseteq \widehat{\mathcal{S}}_{\mathfrak{g},\mathfrak{h}}$. Thus the union bound gives
\begin{equation}
\begin{aligned}
\textstyle\mathbb{P}\big(\mathcal{E}\big) &\textstyle\leq \sum_{m\in [\mathfrak{g}]}\sum_{q\in[\mathfrak{h}]}\mathbb{P}\big(\lambda^{q/2}\|\Phi_{m,q}\|_\circ>\mathcal{G}_{m,q}[n]\big) + \\
&\textstyle\qquad \sum_{m\in [\mathfrak{g}]}\sum_{q\in[\mathfrak{h}]}\mathbb{P}\big(\lambda^{q/2}\|\Psi_{m,q}\|_\circ>2\mathcal{G}_{m,q}[n]\big)\\
&\leq 6(\kappa_1\log n/n)^2
\end{aligned}
\end{equation}
where the last line used Propositions \ref{prop:cphi} and \ref{prop:cpsi}. This implies $\mathbb{P}(\|\Xi_{m,q}(\widehat{B}_n)\| \leq 2\Delta_{m,q}\ \text{for } (m,q)\in[\mathfrak{g}]\times[\mathfrak{h}]) \geq 1-6(\kappa_1\log n/n)^2$.}
{We now turn to the second step of the proof of the second part of the result. Because our random variables are bounded, we can use series expansions to express the characteristic functions in the definition (\ref{eqn:mdef}) of $\mathbb{H}(\widehat{B}_n\Omega; Z)$. In particular, we have that
\begin{equation}
J(s,t,\zeta) - P(s,t,\zeta) = \textstyle \sum_{m=1}^\infty\sum_{q=1}^\infty\frac{(\mathfrak{i}\zeta)^{m+q}}{m!\cdot q!}\cdot\langle\Xi_{m,q}(\widehat{B}_n), s^{\otimes m}t^{\otimes q}\rangle.
\end{equation}
We need to bound the modulus of the above. H\"{o}lder's inequality gives us that $|\langle\Xi_{m,q}(\widehat{B}_n), s^{\otimes m}t^{\otimes q}\rangle| \leq (r^m + d^q)^{1/2}\big\|\Xi_{m,q}(\widehat{B}_n)\big\| \leq (r+d)^{(m+q)}\|\Xi_{m,q}(\widehat{B}_n)\|$. In the proof of Propositions \ref{prop:cphi} and \ref{prop:cpsi} we showed $\|\varphi_{m,q}(\widehat{B}_n)\|_\circ \leq \alpha^{m+\rho q}p^{q/2}$ and $\|\nu_{m,q}(\widehat{B}_n)\|_\circ \leq \alpha^{m+\rho q}p^{q/2}$. Thus $\|\Xi_{m,q}(\widehat{B}_n)\| \leq 2\alpha^{m+\rho q}(\lambda p)^{q/2}$, which we will use for $m = \mathfrak{g}+1$ and $q = \mathfrak{h}+1$. We next use these bounds with a standard argument (see for instance Section 26 of \cite{billingsley1995probability}) that first uses Jensen's inequality and then uses the elementary inequality for the complex exponential that $|\exp(\mathfrak{i}\zeta) - \sum_{m=0}^\mathfrak{g}(\mathfrak{i}\zeta)^m/m!| \leq |\zeta|^{\mathfrak{g}+1}/(\mathfrak{g}+1)!$. This two-step argument implies that for $|\zeta| \leq T$ we have
\begin{multline}
\big|J(s,t,\zeta) - P(s,t,\zeta) - \textstyle \sum_{m=1}^\mathfrak{g}\sum_{q=1}^\mathfrak{h}\frac{(\mathfrak{i}\zeta)^{m+q}}{m!\cdot q!}\cdot\langle\Xi_{m,q}(\widehat{B}_n), s^{\otimes m}t^{\otimes q}\rangle\big| \\\textstyle \leq \frac{2}{(\mathfrak{g}+1)!\cdot(\mathfrak{h}+1)!}\cdot\alpha^{\mathfrak{g}+1+\rho (\mathfrak{h}+1)}\cdot(\lambda p)^{(\mathfrak{h}+1)/2}\cdot((r+d)T)^{\mathfrak{g}+\mathfrak{h}+2}.
\end{multline}
Using the reverse triangle inequality implies the modulus is bounded by
\begin{multline}
\label{eqn:mdbndbdn}
\big|J(s,t,\zeta) - P(s,t,\zeta)\big| \leq \textstyle \sum_{m=1}^\mathfrak{g}\sum_{q=1}^\mathfrak{h}\frac{((r+d)\zeta)^{m+q}}{m!\cdot q!}\cdot\big\|\Xi_{m,q}(\widehat{B}_n)\big\| + \\
\textstyle\frac{2}{(\mathfrak{g}+1)!\cdot(\mathfrak{h}+1)!}\cdot(\alpha^\rho\lambda p(r+d)T)^{\mathfrak{g}+\mathfrak{h}+2}
\end{multline}
for all $|\zeta|\leq T$. Combining this with the first step of the proof for the second part of the result implies that with probability at least $1-6(\kappa_1\log n/n)^2$ we have for $|\zeta|\leq T$ that
\begin{equation}
\big|J(s,t,\zeta) - P(s,t,\zeta)\big| \leq \textstyle 2\exp((r+d)T)\cdot\Delta_{\mathfrak{g},\mathfrak{h}} + \textstyle\frac{2(\alpha^\rho\lambda p(r+d)T)^{\mathfrak{g}+\mathfrak{h}+2}}{(\mathfrak{g}+1)!\cdot(\mathfrak{h}+1)!}
\end{equation}
where the first term follows from the exponential series. If we choose that $T = (\kappa_1\log n + 1)/(\kappa_2(r+d))$, then using the standard error bound $(\mathfrak{g}+1)! \geq (2\pi(\mathfrak{g}+1))^{1/2}((\mathfrak{g}+1)/e)^{\mathfrak{g}+1}$ for Stirling's approximation leads to
\begin{equation}
\big|J(s,t,\zeta) - P(s,t,\zeta)\big| \leq \textstyle 2e^{1/\kappa_2}n^{\kappa_1/\kappa_2}\cdot\Delta_{\mathfrak{g},\mathfrak{h}} + \textstyle\frac{1}{\pi(\kappa_1\log n + 1)},
\end{equation}
which holds with probability at least $1-6(\kappa_1\log n/n)^2$. The second result follows by applying this bound and choice of $T$ to the definition (\ref{eqn:mdef}).}
\end{proof}
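To make the two final simplifications in the above proof explicit, write $G = \kappa_1\log n + 1$, so that $T = G/(\kappa_2(r+d))$ and, ignoring the rounding of $\mathfrak{g}$ and $\mathfrak{h}$, $\mathfrak{g}+1 = \mathfrak{h}+1 = G$. Then
\begin{equation}
2\exp((r+d)T)\cdot\Delta_{\mathfrak{g},\mathfrak{h}} = 2e^{G/\kappa_2}\cdot\Delta_{\mathfrak{g},\mathfrak{h}} = 2e^{1/\kappa_2}n^{\kappa_1/\kappa_2}\cdot\Delta_{\mathfrak{g},\mathfrak{h}},
\qquad
\alpha^\rho\lambda p(r+d)T = \frac{\kappa_2}{e}\cdot\frac{G}{\kappa_2} = \frac{G}{e},
\end{equation}
and therefore Stirling's bound $(G!)^2 \geq 2\pi G(G/e)^{2G}$ gives
\begin{equation}
\frac{2\big(\alpha^\rho\lambda p(r+d)T\big)^{\mathfrak{g}+\mathfrak{h}+2}}{(\mathfrak{g}+1)!\cdot(\mathfrak{h}+1)!} \leq \frac{2(G/e)^{2G}}{2\pi G(G/e)^{2G}} = \frac{1}{\pi(\kappa_1\log n + 1)}.
\end{equation}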
\begin{remark}
{The result of the above theorem can be interpreted as implying that $|R(\widehat{\delta}_n)-R(\delta^*)| = O(r_n)$ and that $\mathbb{H}(\widehat{\delta}_n(X,Z); Z) = O(1/\log n)$, with high probability. This is because we have that $n^{\kappa_1/\kappa_2}\Delta_{\mathfrak{g},\mathfrak{h}} = o(1/\log n)$ under the conditions specified in the above theorem.}
\end{remark}
\subsection{Approximate Independence}
\label{sec:ai}
Let $U\in\mathbb{R}^p$ and $V\in\mathbb{R}^d$ be random vectors, and consider the quantity
{
\begin{equation}
\begin{aligned}
\mathbb{M}(U;V) = \inf\ &\epsilon\\
\text{s.t. }&\big\|\mathbb{E}\big(U^{\otimes m}V^{\otimes q}\big) - \mathbb{E}\big(U^{\otimes m}\big)\otimes\mathbb{E}\big(V^{\otimes q}\big)\big\| \leq \epsilon^{m+q}\cdot m!\cdot q!,\\
&\qquad\text{for } m,q\geq 1.
\end{aligned}
\end{equation}}
We call the quantity $\mathbb{M}(U;V)$ the \emph{mutual majorization} of $U$ and $V$, a name chosen to draw a direct analogy to mutual information. By definition, the mutual majorization is nonnegative ($\mathbb{M}(U;V) \geq 0$) and symmetric ($\mathbb{M}(U;V) = \mathbb{M}(V;U)$). One utility of this definition is that the mutual majorization bounds approximate independence.
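Two minimal illustrations of the definition: if $U$ and $V$ are independent, then $\mathbb{E}(U^{\otimes m}V^{\otimes q}) = \mathbb{E}(U^{\otimes m})\otimes\mathbb{E}(V^{\otimes q})$ for all $m,q$, so every constraint holds with $\epsilon = 0$ and hence $\mathbb{M}(U;V) = 0$. In the other direction, for scalar $U$ and $V$ the $m=q=1$ constraint alone already yields a covariance lower bound:
\begin{equation}
\big|\operatorname{Cov}(U,V)\big| = \big\|\mathbb{E}(UV) - \mathbb{E}(U)\mathbb{E}(V)\big\| \leq \epsilon^2\cdot 1!\cdot 1! \quad\Longrightarrow\quad \mathbb{M}(U;V) \geq \sqrt{\big|\operatorname{Cov}(U,V)\big|}.
\end{equation}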
\begin{proposition}
\label{prop:mm}
{Let $M_{(U,V)}(s,t) = \mathbb{E}\exp(\langle s,U\rangle + \langle t,V\rangle)$ be the moment generating function for the multivariate random variable $(U,V)$ where $U\in\mathbb{R}^p$ and $V\in\mathbb{R}^d$. Suppose that $M_{(U,V)}(s,t)$ is finite in a neighborhood of the origin. If $\mathbb{M}(U; V) \leq \epsilon$, then $\mathbb{H}(U;V) \leq 2(\epsilon\cdot(p+d))^{2/3}$ when $\epsilon\cdot(p+d)\leq 1$.}
\end{proposition}
\begin{proof}
{We need to bound the modulus of $J(s,t,\zeta) - P(s,t,\zeta)$. Because $M_{(U,V)}(s,t)$ exists in a neighborhood of the origin, this means the characteristic functions can be represented as infinite series. Thus we have
\begin{multline}
\label{eqn:geobnd}
\big|J(s,t,\zeta) - P(s,t,\zeta)\big| = \\
\textstyle \big|\sum_{m=1}^\infty\sum_{q=1}^\infty\frac{(\mathfrak{i}\zeta)^{m+q}}{m!\cdot q!}\cdot\langle\mathbb{E}\big(U^{\otimes m}V^{\otimes q}\big) - \mathbb{E}\big(U^{\otimes m}\big)\otimes\mathbb{E}\big(V^{\otimes q}\big), s^{\otimes m}t^{\otimes q}\rangle\big| \leq \\
\textstyle\sum_{m=1}^\infty\sum_{q=1}^\infty(\epsilon(p+d)|\zeta|)^{m+q} = (\tau/(1-\tau))^2
\end{multline}
for $\tau = \epsilon(p+d)|\zeta| \in [0,1)$. If we choose $T^{-1} = \epsilon(p+d) + (\epsilon(p+d))^{2/3}$, then the result follows by applying this bound to the definition (\ref{eqn:mdef}).}
\end{proof}
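The double geometric series evaluation in (\ref{eqn:geobnd}) is the crux of the proof. As an informal numerical sanity check (illustrative only, with hypothetical values of $\tau$, and outside the formal development), the identity $\sum_{m\geq 1}\sum_{q\geq 1}\tau^{m+q} = (\tau/(1-\tau))^2$ for $\tau\in[0,1)$ can be verified by truncation:

```python
# Verify sum_{m>=1} sum_{q>=1} tau^(m+q) == (tau / (1 - tau))^2 numerically.
# The double series factorizes as the square of a single geometric series,
# so a truncation at a few hundred terms is accurate to near machine precision.
def double_geometric_sum(tau: float, terms: int = 200) -> float:
    """Truncated value of the double series sum_{m,q >= 1} tau^(m+q)."""
    return sum(tau ** (m + q) for m in range(1, terms) for q in range(1, terms))

for tau in (0.1, 0.3, 0.5):
    closed_form = (tau / (1.0 - tau)) ** 2
    assert abs(double_geometric_sum(tau) - closed_form) < 1e-9
```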
The implication of this result is that we can use mutual majorization {as a surrogate for} approximate independence. We thus define an optimization problem that chooses an optimal $\epsilon$-approximately-fair decision rule by solving
\begin{equation}
\label{eqn:Lfdr}
\textstyle\delta^*(x,z) \in \arg\min_{\delta(\cdot,\cdot)}\big\{R(\delta)\ \big|\ \mathbb{M}(\delta(X,Z); Z) \leq \epsilon\big\}.
\end{equation}
The level-$(\mathfrak{g},\mathfrak{h})$ FO problem (\ref{eq:fo}) with appropriate choice of $\Delta_{m,q}$ is a statistically well-behaved, sample-based approximation of the above problem.
In order to be able to discuss this, we first define the set
\begin{equation}
\mathcal{S}(\epsilon) = \big\{B \in\mathcal{B} : \mathbb{M}(B\Omega; Z) \leq \epsilon\big\}
\end{equation}
and the solution set
\begin{equation}
\mathcal{O}(\epsilon) = \arg\min_{B}\big\{R(B\cdot\omega(x,z))\ \big|\ B \in \mathcal{S}(\epsilon)\big\}.
\end{equation}
These are respectively the feasible set and solution set of the optimization problem (\ref{eqn:Lfdr}), which chooses an optimal $\epsilon$-approximately-fair decision rule when the underlying distributions are exactly known.
\begin{theorem}
\label{thm:apphi}
{Let $\Delta_{m,q} = \epsilon^{m+q}\cdot m!\cdot q! + 3(1+\log n)\cdot\mathcal{R}_{m,q}[n]$ and suppose $\mathfrak{g} = \mathfrak{h} = O(\log n)$, such that $\log n\cdot\mathcal{R}_{\mathfrak{g},\mathfrak{h}}[n] = o(1)$.} If Assumption \ref{ass:drule} holds, then $\mathcal{S}(\epsilon)$ is closed. If Assumptions \ref{ass:2norm}, \ref{ass:zdim} also hold, then $\aslim_n \widehat{\mathcal{S}}_{\mathfrak{g},\mathfrak{h}} = \mathcal{S}(\epsilon)$. If Assumption \ref{ass:convergence} also holds, then $\aslimsup_n \widehat{\mathcal{O}}_{\mathfrak{g},\mathfrak{h}} \subseteq \mathcal{O}(\epsilon)$.
\end{theorem}
\begin{remark}
The proof is omitted because it is a straightforward modification of the proofs for Proposition \ref{prop:closed} and Theorems \ref{thm:setcon} and \ref{thm:opcon}.
\end{remark}
\begin{remark}
Recall we already proved $\widehat{\mathcal{S}}_{\mathfrak{g},\mathfrak{h}}$ is closed in Proposition \ref{prop:closed}.
\end{remark}
{We can also prove a finite sample version of the above result, which shows that consistency holds for sample-based analogs of (\ref{eqn:Lfdr}).}
\begin{theorem}
{Suppose $\Delta_{m,q} = \epsilon^{m+q}\cdot m!\cdot q! + 3(1+\log n)\cdot\mathcal{R}_{m,q}[n]$ and $\mathfrak{g} = \mathfrak{h} = \kappa_1\log n$ (rounded down when non-integer), where $\kappa_1 = (20p\log \alpha + 5\log p + 1)^{-1}$. If Assumptions \ref{ass:drule}, \ref{ass:2norm}, \ref{ass:zdim}, \ref{ass:convergence_prime} hold, then: $R(\widehat{\delta}_n) \leq R(\delta^*) + 2r_n$, with probability at least $1 - 6(\kappa_1\log n/n)^2 - 2c_n$; and when $\epsilon\cdot(r+d)\leq 1$ then we also have that
\begin{multline}
\label{eqn:mmhbnd}
\mathbb{H}(\widehat{\delta}_n(X,Z), Z) \leq 2(\epsilon\cdot(r+d))^{2/3} + \\ \kappa_3\cdot(1+\log n)\cdot\mathcal{R}_{\mathfrak{g},\mathfrak{h}}[n] + \textstyle\frac{1}{(\mathfrak{g}+1)!\cdot(\mathfrak{h}+1)!}\cdot\kappa_4^{\ \mathfrak{g}+\mathfrak{h}+2}
\end{multline}
with probability at least $1 - 6(\kappa_1\log n/n)^2$, where the constants used above are $\kappa_3 = 3\exp(1/\epsilon)$ and $\kappa_4 = \alpha^\rho\lambda p/\epsilon$.}
\end{theorem}
\begin{proof}
{The proof is identical to that of Theorem \ref{thm:fsbnd}, up to (\ref{eqn:mdbndbdn}). (This means the first part of the current result is proved the same way as in Theorem \ref{thm:fsbnd}.) To complete the proof we first bound (\ref{eqn:mdbndbdn}) using the $\Delta_{m,q}$ in the hypothesis of this theorem. Comparing to (\ref{eqn:geobnd}), we find that with probability at least $1-6(\kappa_1\log n/n)^2$ we have
\begin{multline}
\big|J(s,t,\zeta) - P(s,t,\zeta)\big| \leq \textstyle 2(\tau/(1-\tau))^2 + \\6\exp((r+d)T)\cdot(1+\log n)\cdot\mathcal{R}_{\mathfrak{g},\mathfrak{h}}[n] + \textstyle\frac{2(\alpha^\rho\lambda p(r+d)T)^{\mathfrak{g}+\mathfrak{h}+2}}{(\mathfrak{g}+1)!\cdot(\mathfrak{h}+1)!}
\end{multline}
when $\tau=\epsilon(r+d)T \in [0,1)$ and for all $|\zeta|\leq T$. If we choose $T^{-1} = \epsilon(r+d) + (\epsilon(r+d))^{2/3}$, then the second result follows by applying this bound and choice of $T$ to the definition (\ref{eqn:mdef}).}
\end{proof}
\begin{remark}
{The result of the above theorem implies that we have $\limsup_n \mathbb{H}(\widehat{\delta}_n(X,Z), Z) \leq 2(\epsilon\cdot(r+d))^{2/3}$ because the second and third terms in (\ref{eqn:mmhbnd}) converge to zero under the conditions of the above theorem.}
\end{remark}
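For completeness, the third term in (\ref{eqn:mmhbnd}) vanishes because factorials dominate exponentials: using $n! \geq (n/e)^n$ and $\mathfrak{g}+\mathfrak{h}+2 = (\mathfrak{g}+1)+(\mathfrak{h}+1)$, we have
\begin{equation}
\frac{\kappa_4^{\ \mathfrak{g}+\mathfrak{h}+2}}{(\mathfrak{g}+1)!\cdot(\mathfrak{h}+1)!} \leq \Big(\frac{e\kappa_4}{\mathfrak{g}+1}\Big)^{\mathfrak{g}+1}\Big(\frac{e\kappa_4}{\mathfrak{h}+1}\Big)^{\mathfrak{h}+1} \longrightarrow 0
\end{equation}
as $n$ (and hence $\mathfrak{g} = \mathfrak{h} = \lfloor\kappa_1\log n\rfloor$) grows.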
\section{Hierarchy Consistency for Unbounded Random Variables}
\label{sec:ubrv}
{In the previous section, we proved consistency of the FO problem (\ref{eq:fo}) when the involved random variables are bounded. However, the underlying generalizations of Kac's Theorem, which relate moment conditions to independence, also apply to unbounded random variables whose moment generating function is finite about the origin (Theorem \ref{thm:kac}) and to unbounded random variables with some number of finite moments but not necessarily with a moment generating function that exists near the origin (Theorem \ref{thm:kac2}).}
{In this section we show that the sample-based constraints of the FO problem (\ref{eq:fo}) are statistically well-behaved analogs of the independence constraint in (\ref{eqn:ofdr}) when the involved random variables are unbounded. We will consider two cases. The first is when the involved random variables are sub-Gaussian, and the second is for random variables with finite moments.}
\subsection{Sub-Gaussian Case}
{Our first task is to relax Assumption \ref{ass:zdim}, which assumed the involved random variables are bounded. There is a subtlety in relaxing this assumption for sub-Gaussian random variables.}
\begin{example}
{Let $X \sim \mathcal{N}(0,1)$ be a standard normal and define $U = X^k$ for some $k \in \mathbb{Z}_+$. Then $U$ is sub-Gaussian for $k=1$, but $U$ is not sub-Gaussian for $k \geq 2$. Furthermore, the distribution of $U$ is determined by its moments only for $k \in \{1,2,4\}$; restated, the moment problem for $U$ is indeterminate for $k = 3$ and for $k \geq 5$ \cite{berg1988cube}.}
\end{example}
{The consequence of this example is that if we want to consider a sub-Gaussian case, then we need to specify that the joint distribution of $(Z,\Omega)$ is sub-Gaussian rather than assuming that $(X,Z)$ is sub-Gaussian. Thus, in lieu of Assumption \ref{ass:zdim} we make the following assumption:}
\begin{assumption}
\label{ass:zdim_subgau}
{The (joint) random variable $(Z,\Omega)$ is sub-Gaussian (\ref{eqn:subgaualt}) with $M \geq 1$ and $\sigma^2 \geq 0$, and the random variable $Z$ has dimensions $Z\in\mathbb{R}^r$.}
\end{assumption}
{With this assumption, we can now study consistency of the FO problem (\ref{eq:fo}) when the involved random variables are sub-Gaussian. We first prove a result on the convergence of the tensor moment estimates.}
\begin{proposition}
\label{prop:cphi_subgau}
{If Assumptions \ref{ass:drule}, \ref{ass:zdim_subgau} hold, then we have
\begin{equation}
\begin{aligned}
&\textstyle\mathbb{P}\big(\|\widehat{\varphi}_{m,q}-\varphi_{m,q}\|_\circ > \hphantom{2}\mathcal{C}_{m,q}[n]\cdot\gamma\big) &\leq \hphantom{4}\big(\gamma^6\cdot n^2\big)^{-1}\\
&\textstyle\mathbb{P}\big(\|\rlap{$\hspace{0.09em}\widehat{\nu}$}\hphantom{\widehat{\varphi}}_{m,q}-\rlap{$\hspace{0.09em}\nu$}\hphantom{\varphi}_{m,q}\|_\circ > 2\mathcal{C}_{m,q}[n]\cdot\gamma + \mathcal{C}_{m,q}[n]^2\cdot\gamma^2\big) &\leq 4\big(\gamma^6\cdot n^2\big)^{-1}\end{aligned}
\end{equation}
for $\mathcal{C}_{m,q}[n] = [\frac{e^2M^22^{7}5^3}{\pi n}\cdot (1+4q)^{dp}(rm^3)^m(dq^3)^q(24\sigma^2/e)^{3m+3q}]^{1/6}$.}
\end{proposition}
\begin{proof}
{The proof for the first part of this result follows the same steps as the proof of Proposition \ref{prop:cphi} up to and including (\ref{eqn:pfref}). Next observe that
\begin{equation}
\label{eqn:varbnd_subgau}
\begin{aligned}
\mathbb{E}\big(\|\Phi\|_\circ^{\ 6}\big) &\leq \mathbb{E}\big(2^6\max_{i,u_k,v_k}\textstyle|\langle \Phi(T_i), \bigotimes_{k=1}^m u_k \bigotimes_{k=1}^q v_k\rangle|^6\big)\\
&\leq \textstyle 2^6\cdot\sum_{i,u_k,v_k}\mathbb{E}\big(\langle \Phi(T_i), \bigotimes_{k=1}^m u_k \bigotimes_{k=1}^q v_k\rangle^6\big).
\end{aligned}
\end{equation}
We seek to bound the term on the right-hand side. For convenience, define $S_i = \langle Z^{\otimes m}(T_i\Omega)^{\otimes q}, \bigotimes_{k=1}^m u_k \bigotimes_{k=1}^q v_k\rangle$ and $V_i = S_i - \mathbb{E}(S_i)$. Next observe that the Marcinkiewicz-Zygmund inequality \cite{rio2009moment} implies that
\begin{equation}
\textstyle\mathbb{E}\big(\langle \Phi(T_i), \bigotimes_{k=1}^m u_k \bigotimes_{k=1}^q v_k\rangle^6\big) \leq 5^3\cdot\mathbb{E}(V_i^{\ 6})/n^3.
\end{equation}
We next have to bound the expectation on the right. Consider
\begin{equation}
\begin{aligned}
\textstyle \mathbb{E}\big(V_i^{\ 6}\big) &\leq 2\mathbb{E}\big(\epsilon^6S_i^{\ 6}\big)\\
&\leq \textstyle 2\cdot\big[\mathbb{E}\big(\langle u_k, Z\rangle^{12m}\big)\cdot\mathbb{E}\big(\langle v_k,\hspace{0.4em}T_i\Omega\rangle^{12q}\big)\big]^{1/2}\\
&\leq \textstyle 2\cdot\big[\mathbb{E}\big(\langle u_k, Z\rangle^{12m}\big)\cdot\mathbb{E}\big(\langle \rlap{$T_i$}\hphantom{T}^\mathsf{T}v_k, \Omega\rangle^{12q}\big)\big]^{1/2}\\
&\leq\textstyle 2M\sigma^{6m+6q}\cdot\big[\frac{(12m)!\cdot(12q)!}{(6m)!\cdot(6q)!}\big]^{1/2}\\
&\leq\textstyle 2eM\cdot (24\sigma^2/e)^{3m+3q}\cdot m^{3m}\cdot q^{3q}/\sqrt{\pi}\\
\end{aligned}
\end{equation}
where the first line follows by a stochastic symmetrization step (i.e., Jensen's inequality, multiplication with i.i.d. Rademacher random variables $\epsilon$ having distribution $\mathbb{P}(\epsilon = \pm 1) = \frac{1}{2}$, using the triangle inequality, and concluded by Jensen's inequality), the second line follows by the Cauchy-Schwarz inequality, the third line uses a matrix transpose $\rlap{$T_i$}\hphantom{T}^\mathsf{T}$, the fourth line follows by (\ref{eqn:subgaumom}) because $\|\rlap{$T_i$}\hphantom{T}^\mathsf{T}v_k\|_2 \leq \|v_k\|_2$ since $T_i = M(t_i)$ for $t_i\in \mathbb{S}^{dp-1}$, and the fifth line uses Stirling's approximation. Combining the above with (\ref{eqn:varbnd_subgau}) gives
\begin{equation}
\textstyle\mathbb{E}\big(\|\Phi\|_\circ^{\ 6}\big) \leq \frac{eM2^{7}5^3}{\sqrt{\pi}n^3}\cdot (1+4q)^{dp}(rm^3)^m(dq^3)^q(24\sigma^2/e)^{3m+3q}.
\end{equation}
Let $\kappa = (eM/\sqrt{\pi})^{1/6}$ and note that Markov's inequality implies
\begin{equation}
\label{eqn:msg}
\mathbb{P}\big(\|\widehat{\varphi}_{m,q}-\varphi_{m,q}\|_\circ > \mathcal{C}_{m,q}[n]\cdot\gamma/\kappa\big) \leq \big(\gamma^6\cdot n^2\big)^{-1}.
\end{equation}
The first result now follows by noting that $\kappa > 1$.}
{The proof for the second part of this result proceeds slightly differently than the proof of Proposition \ref{prop:cpsi}. Recall that we have $\widehat{\varphi}_{m,0}(B) = \mathbb{E}_n(Z^{\otimes m})$, $\varphi_{m,0}(B) = \mathbb{E}(Z^{\otimes m})$, $\widehat{\varphi}_{0,q}(B) = \mathbb{E}_n((B\Omega)^{\otimes q})$, and $\varphi_{0,q}(B) = \mathbb{E}((B\Omega)^{\otimes q})$. Let $\kappa = eM/\sqrt{\pi}$, and observe that Jensen's inequality implies
\begin{equation}
\|\varphi_{m,0}\|_\circ^{\ 6} \leq \mathbb{E}\big(\langle u_k, Z\rangle^{6m}\big) \leq eM\cdot(12\sigma^2/e)^{3m}m^{3m}/\sqrt{\pi} \leq\mathcal{C}_{m,0}[n]^6/\kappa.
\end{equation}
A similar calculation shows that for some $T = M(t)$ with $t\in\mathbb{S}^{dp-1}$ we have
\begin{equation}
\|\varphi_{0,q}\|_\circ^{\ 6} \leq \mathbb{E}\big(\langle T^\mathsf{T}v_k, \Omega\rangle^{6q}\big) \leq eM\cdot(12\sigma^2/e)^{3q}q^{3q}/\sqrt{\pi} \leq \mathcal{C}_{0,q}[n]^6/\kappa.
\end{equation}
Next note that two applications of the triangle inequality imply
\begin{multline}
\label{eqn:tisug}
\|\widehat{\nu}_{m,q} - \nu_{m,q}\|_\circ \leq \|\varphi_{m,0}\|_\circ\cdot\|\widehat{\varphi}_{0,q} - \varphi_{0,q}\|_\circ + \\
\|\varphi_{0,q}\|_\circ\cdot\|\widehat{\varphi}_{m,0}-\varphi_{m,0}\|_\circ + \|\widehat{\varphi}_{m,0}-\varphi_{m,0}\|_\circ\cdot\|\widehat{\varphi}_{0,q} - \varphi_{0,q}\|_\circ.
\end{multline}
Hence the union bound implies
\begin{equation}
\textstyle\mathbb{P}\big(\|\widehat{\nu}_{m,q} - \nu_{m,q}\|_\circ > 2\mathcal{C}_{m,q}[n]\cdot\gamma + \mathcal{C}_{m,q}[n]^2\cdot\gamma^2\big) \leq \mathsf{I} + \mathsf{II} + \mathsf{III} + \mathsf{IV}
\end{equation}
for terms we define next. To bound these terms, we use (\ref{eqn:msg}). Observe that $\mathsf{I} = \mathbb{P}\big(\|\widehat{\varphi}_{0,q} - \varphi_{0,q}\|_\circ > \mathcal{C}_{m,q}[n]\cdot\gamma\big)\leq\big(\gamma^6\cdot n^2\big)^{-1}$, that $\mathsf{II} = \mathbb{P}\big(\|\widehat{\varphi}_{m,0} - \varphi_{m,0}\|_\circ > \mathcal{C}_{m,q}[n]\cdot\gamma\big)\leq\big(\gamma^6\cdot n^2\big)^{-1}$, that
\begin{equation}
\begin{aligned}
\mathsf{III} &= \textstyle \mathbb{P}\big(\mathcal{C}_{m,0}[n]\cdot\|\widehat{\varphi}_{0,q} - \varphi_{0,q}\|_\circ > \kappa\cdot\mathcal{C}_{m,q}[n]\cdot\gamma\big) \\
&\leq \mathbb{P}\big(\|\widehat{\varphi}_{0,q} - \varphi_{0,q}\|_\circ > \mathcal{C}_{0,q}[n]\cdot\gamma/\kappa\big)\\
&\leq\big(\gamma^6\cdot n^2\big)^{-1}
\end{aligned}
\end{equation}
and that
\begin{equation}
\begin{aligned}
\mathsf{IV} &= \textstyle \mathbb{P}\big(\mathcal{C}_{0,q}[n]\cdot\|\widehat{\varphi}_{m,0} - \varphi_{m,0}\|_\circ > \kappa\cdot\mathcal{C}_{m,q}[n]\cdot\gamma\big)\\
&\leq \mathbb{P}\big(\|\widehat{\varphi}_{m,0} - \varphi_{m,0}\|_\circ > \mathcal{C}_{m,0}[n]\cdot\gamma/\kappa\big)\\
&\leq\big(\gamma^6\cdot n^2\big)^{-1}.
\end{aligned}
\end{equation}
Combining the above with (\ref{eqn:tisug}) gives the second result.}
\end{proof}
{With the above result on concentration of the moment tensors in the sub-Gaussian case, we can now state our results about consistency of the FO problem (\ref{eq:fo}). We start with a result on asymptotic consistency.}
\begin{theorem}
\label{thm:apphi_subgau}
{Suppose $\Delta_{m,q} = 3\cdot\mathcal{C}_{m,q}[n] + \mathcal{C}_{m,q}[n]^2$, and suppose we have $\mathfrak{g} = \mathfrak{h} = O(\sqrt{\log n})$, such that $\Delta_{\mathfrak{g},\mathfrak{h}} = o(1)$. If Assumption \ref{ass:drule} holds, then $\mathcal{S}(\epsilon)$ is closed. If Assumptions \ref{ass:2norm}, \ref{ass:zdim_subgau} also hold, then $\aslim_n \widehat{\mathcal{S}}_{\mathfrak{g},\mathfrak{h}} = \mathcal{S}(\epsilon)$. If Assumption \ref{ass:convergence} also holds, then $\aslimsup_n \widehat{\mathcal{O}}_{\mathfrak{g},\mathfrak{h}} \subseteq \mathcal{O}(\epsilon)$.}
\end{theorem}
\begin{remark}
{The proof is omitted because it is a straightforward modification of the proofs for Proposition \ref{prop:closed} and Theorems \ref{thm:setcon} and \ref{thm:opcon}.}
\end{remark}
{Our next result provides a finite sample characterization of the consistency of solutions to the FO problem (\ref{eq:fo}) in this sub-Gaussian case.}
\begin{theorem}
\label{thm:fsbnd_subgau}
{Suppose $\Delta_{m,q} = 3\cdot\mathcal{C}_{m,q}[n] + \mathcal{C}_{m,q}[n]^2$, and suppose that $\mathfrak{g} = \mathfrak{h} = \sqrt{\kappa_5\log n}$ (rounded down when non-integer), where we have $\kappa_5 = (\max\{5, 20dp + 5\log(rd) + 30\log(24\sigma^2)\})^{-1}$. If Assumptions \ref{ass:drule}, \ref{ass:2norm}, \ref{ass:convergence_prime}, \ref{ass:zdim_subgau} hold, then we have: $R(\widehat{\delta}_n) \leq R(\delta^*) + 2r_n$, with probability at least $1 - 6\kappa_5\log n/n^2 - 2c_n$; and for $n \geq 3 > e$ we have
\begin{equation}
\mathbb{H}(\widehat{\delta}_n(X,Z), Z) \leq e^{1/\kappa_6}n^{\kappa_5/\kappa_6}\Delta_{\mathfrak{g},\mathfrak{h}} + \kappa_6(r+d)\cdot[\sqrt{\kappa_5\log n} + 1]^{-1/2}
\end{equation}
with probability at least $1 - 6\kappa_5\log n/n^2$, where the constant in the above is $\kappa_6 = \max\{4,2\sqrt{e}\sigma M\}$.}
\end{theorem}
\begin{remark}
{ The proof is omitted because it is a straightforward modification of the proof for Theorem \ref{thm:fsbnd} after noting that Cauchy-Schwarz and Jensen's inequalities imply $\|\Xi_{m,q}(\widehat{B}_n)\| \leq 2eM\cdot(\sqrt{4\sigma^2/e})^{m+q}m^{m/2}q^{q/2}/\sqrt{\pi}$.}
\end{remark}
\begin{remark}
{The result of the above theorem can be interpreted as implying that $|R(\widehat{\delta}_n)-R(\delta^*)| = O(r_n)$ and $\mathbb{H}(\widehat{\delta}_n(X,Z), Z) = O((\log n)^{-1/4})$, with high probability. This is because we have $n^{\kappa_5/\kappa_6}\Delta_{\mathfrak{g},\mathfrak{h}} = o((\log n)^{-1/4})$ under the conditions specified in the above theorem.}
\end{remark}
\subsection{Finite Moments Case}
{Our last set of results concern relaxing Assumption \ref{ass:zdim} to the case of unbounded random variables with finite moments. Instead of Assumptions \ref{ass:zdim} or \ref{ass:zdim_subgau}, we make the following assumption:}
\begin{assumption}
\label{ass:zdim_fm}
{Consider the (joint) random variable $(Z,\Omega)$, and define $M_{m,q} = \sup_{(s,t)\in\mathbb{S}^{p+d-1}}\mathbb{E}(\langle s, Z\rangle^{m}\langle t,\Omega\rangle^{q})$. Assume that any moments used in the results are finite, and that the random variable $Z$ takes values in $\mathbb{R}^r$.}
\end{assumption}
{With this assumption, we can now study approximate consistency of the FO problem (\ref{eq:fo}) when the involved random variables have finite moments. We first prove a result on the convergence of the tensor moment estimates.}
\begin{proposition}
\label{prop:cphi_fm}
{If Assumptions \ref{ass:drule}, \ref{ass:zdim_fm} hold, then we have
\begin{equation}
\begin{aligned}
&\textstyle\mathbb{P}\big(\|\widehat{\varphi}_{m,q}-\varphi_{m,q}\|_\circ > \hphantom{2}\mathcal{Y}_{m,q}[n]\cdot\gamma\big) &\leq \hphantom{4}\big(\gamma^2\cdot n\big)^{-1}\\
&\textstyle\mathbb{P}\big(\|\rlap{$\hspace{0.09em}\widehat{\nu}$}\hphantom{\widehat{\varphi}}_{m,q}-\rlap{$\hspace{0.09em}\nu$}\hphantom{\varphi}_{m,q}\|_\circ > 2\mathcal{Y}_{m,q}[n]\cdot\gamma + \mathcal{Y}_{m,q}[n]^2\cdot\gamma^2\big) &\leq 4\big(\gamma^2\cdot n\big)^{-1}\end{aligned}
\end{equation}
for $\mathcal{Y}_{m,q}[n] = (8/n)^{1/2}\cdot(M_{4m,0}\cdot M_{0,4q})^{1/4}$.}
\end{proposition}
\begin{remark}
{ The proof is omitted because it is a straightforward modification of the proof for Proposition \ref{prop:cphi_subgau}.}
\end{remark}
{We conclude with a result about the finite sample behavior of solutions to the FO problem (\ref{eq:fo}) when the involved random variables have finite moments. The difference in the hypothesis of this result, relative to the results for the cases of bounded or sub-Gaussian random variables, is that here we will characterize solutions when $\mathfrak{g}$ and $\mathfrak{h}$ are held as fixed constants. In the previous results, we assumed $\mathfrak{g}$ and $\mathfrak{h}$ were increasing with $n$.}
\begin{theorem}
\label{thm:fsbnd_gencase}
{Suppose $\Delta_{m,q} = 3\cdot\mathcal{Y}_{m,q}[n] + \mathcal{Y}_{m,q}[n]^2$, and that $\mathfrak{g}$ and $\mathfrak{h}$ are constants. If Assumptions \ref{ass:drule}, \ref{ass:2norm}, \ref{ass:convergence_prime}, \ref{ass:zdim_fm} hold, then we have: $R(\widehat{\delta}_n) \leq R(\delta^*) + 2r_n$, with probability at least $1 - 6\cdot\mathfrak{g}\cdot\mathfrak{h}/n - 2c_n$; and we have that
\begin{equation}
\mathbb{H}(\widehat{\delta}_n(X,Z), Z) \leq \exp((r+d)T)\cdot\Delta_{\mathfrak{g},\mathfrak{h}} + \textstyle\frac{1}{T}
\end{equation}
with probability at least $1 - 6\cdot\mathfrak{g}\cdot\mathfrak{h}/n$, where $T$ is the constant such that $T^{\mathfrak{g}+\mathfrak{h}+3}=(\mathfrak{g}+1)!\cdot(\mathfrak{h}+1)!/(\lambda^{(\mathfrak{h}+1)/2}\cdot(M_{\mathfrak{g}+1,\mathfrak{h}+1} + M_{\mathfrak{g}+1,0}\cdot M_{0,\mathfrak{h}+1}))$.}
\end{theorem}
\begin{remark}
{ The proof is omitted because it is a straightforward modification of the proof for Theorems \ref{thm:kac2} and \ref{thm:fsbnd}.}
\end{remark}
\begin{remark}
{The above theorem implies $\limsup_n \mathbb{H}(\widehat{\delta}_n(X,Z), Z) \leq 1/T$ because $\Delta_{\mathfrak{g},\mathfrak{h}} = o(1)$ under the conditions of the above theorem.}
\end{remark}
\section{Numerical Experiments}
\label{sec:er}
\begin{table}[t]
\caption{\label{tab:datasets} List of Datasets Used in Numerical Experiments}
\begin{center}
\begin{scriptsize}
\begin{tabular}{l|rrlll}
\toprule
Dataset & $p$ & $n$ & $Z$ Type & Task & Source \\
\midrule
Arrhythmia & 10 & 453 & Binary & Classification & \cite{guvenir1997supervised} \\
Biodeg & 40 & 1055 & Categorical & Classification & \cite{mansouri2013quantitative} \\
Communities & 96 & 1994 & Continuous & Regression & \cite{department1992census,department1992law,statistics2004department,redmond2002data} \\
EEG & 12 & 4000 & Binary & Regression & \cite{fernandez2018feature} \\
Energy & 8 & 768 & Continuous & Regression & \cite{tsanas2012accurate} \\
German Credit & 49 & 1000 & Continuous & Classification & \cite{Lichman:2013} \\
Letter & 15 & 20000 & Continuous & Classification & \cite{frey1991letter} \\
Music & 68 & 1034 & Continuous & Regression & \cite{zhou2014predicting} \\
Parkinson's & 18 & 5875 & Binary & Both & \cite{Lichman:2013} \\
Pima Diabetes & 7 & 768 & Continuous & Classification & \cite{smith1988using} \\
Recidivism & 6 & 5278 & Binary & Classification & \cite{angwin2016machine} \\
SkillCraft & 17 & 3338 & Continuous & Classification & \cite{thompson2013video} \\
Statlog & 35 & 3486 & Binary & Classification & \cite{Lichman:2013} \\
Steel & 25 & 1941 & Categorical & Classification & \cite{Lichman:2013} \\
Taiwan Credit & 22 & 29623 & Binary & Classification & \cite{yeh2009comparisons} \\
Wine Quality & 11 & 6497 & Binary & Both & \cite{cortez2009modeling} \\
\bottomrule
\end{tabular}
\end{scriptsize}
\end{center}
\end{table}
In this section, we implement various levels of the FO problem (\ref{eq:fo}) for classification, regression, and decision-making. In all cases, fairness is measured using disparate impact. Unless otherwise noted, all experiments were carried out using the Mosek 9 optimization package \cite{mosek2002mosek}. We first discuss the issue of hyperparameter selection for the FO problem, and then we describe the benchmark fairness methods that we compare our approach to. Next, we present classification and regression implementations of FO on a series of datasets from the UC Irvine Machine Learning Repository \cite{Lichman:2013}, the full list of which is given in Table \ref{tab:datasets}. Finally, we present a case study on the use of FO to perform fair morphine dosing.
\subsection{Hyperparameter Selection}
{Applying the FO problem (\ref{eq:fo}) to particular datasets requires choosing several hyperparameters, namely: $\lambda$, $(\mathfrak{g},\mathfrak{h})$, and $\Delta_{m,q}$. The parameter $\lambda$ bounds the Euclidean norm of the model coefficients $B$, and it can be shown using standard duality arguments that varying $\lambda$ is equivalent to controlling the amount of $\ell_2$ regularization of the model coefficients. The parameters $(\mathfrak{g},\mathfrak{h})$ control the level of the FO problem, and the theory developed in previous sections says that consistency is achieved when $\mathfrak{g}$ and $\mathfrak{h}$ grow at a logarithmic or square-root-logarithmic rate. This implies that in practice small values of $(\mathfrak{g},\mathfrak{h})$ should be used. Last, the formulation in Section \ref{sec:ai} suggests an approach that makes the parameter choices $\Delta_{m,q} = \epsilon^{m+q}\cdot m!\cdot q!$. This is beneficial because it replaces multiple parameters $\Delta_{m,q}$ for $(m,q) \in [\mathfrak{g}]\times[\mathfrak{h}]$ with a single parameter $\epsilon$.}
{Consequently, applying the FO problem (\ref{eq:fo}) to a particular dataset requires choosing four hyperparameters, which is feasible using cross-validation. However, there is a subtlety: we have two criteria for evaluating the quality of a particular model, and these criteria generally (but not always) oppose each other. The first criterion is model accuracy, and the second is model fairness. Because the criteria are generally in tension, cross-validation can only generate a Pareto frontier: a curve that, for each quantitative level of fairness, specifies the most accurate model achievable at that level. Choosing a particular model from the Pareto frontier then requires a subjective judgment about how much reduction in model accuracy is tolerable for a given increase in model fairness. To make this discussion concrete, we consider examples of cross-validation for fair linear classification and fair linear regression.}
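Concretely, cross-validation produces a cloud of (fairness, accuracy) pairs, one per hyperparameter setting, and the Pareto frontier is extracted by discarding dominated points. A minimal sketch of that filtering step (function and variable names are ours, not part of the formulation):

```python
def pareto_frontier(points):
    """Return the non-dominated subset of (unfairness, accuracy) pairs.

    A point dominates another if it has unfairness no larger and accuracy
    no smaller, with at least one strict inequality.  Each input pair is
    one cross-validated hyperparameter setting.
    """
    frontier = []
    for unfair, acc in points:
        dominated = any(
            u <= unfair and a >= acc and (u, a) != (unfair, acc)
            for u, a in points
        )
        if not dominated:
            frontier.append((unfair, acc))
    # Sort by unfairness so that adjacent frontier points can be mixed by
    # a randomized predictor, achieving the tradeoffs between markers.
    return sorted(frontier)
```

The sorting step matters because intermediate tradeoffs are realized by randomizing between the two deterministic models adjacent on the frontier.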
\begin{example}{\label{ex:fsvm} Consider a classification setup with $(X_i, Y_i) \in \mathbb{R}^p\times\{-1,+1\}$ and $Z_i \in \mathbb{R}$, and suppose we choose a linear decision rule $\delta(x) = Bx$ with $B \in \mathbb{R}^{1\times p}$. Then fair SVM using the level-(2,2) FO problem (\ref{eq:fo}) is given by
\begin{equation}
\label{eqn:fsvm_22}
\begin{aligned}
\min_{B \in \mathbb{R}^{1\times p}}\ & \textstyle\frac{1}{n}\sum_{i=1}^ns_i\\
\text{s.t. }& s_i \geq 0, &\text{for } i \in [n]\\
&s_i \geq 1-Y_i\cdot BX_i, &\text{for } i \in [n]\\
&-\hphantom{2}\epsilon^2 \leq BM_{(1,1)}\hphantom{B^\mathsf{T}} \leq \hphantom{2}\epsilon^2\\
&-2\epsilon^3 \leq BM_{(2,1)}\hphantom{B^\mathsf{T}} \leq 2\epsilon^3\\
&-2\epsilon^3 \leq BM_{(1,2)}B^\mathsf{T} \leq 2\epsilon^3\\
&-4\epsilon^4 \leq BM_{(2,2)}B^\mathsf{T} \leq 4\epsilon^4\\
&\|B\|_2 \leq \sqrt{\lambda}
\end{aligned}
\end{equation}
where we have the matrices
\begin{equation}
\label{eqn:exmat}
\begin{aligned}
M_{(1,1)} &= \textstyle\frac{1}{n}\sum_{i=1}^n Z_i\cdot X_i - \frac{1}{n}\sum_{i=1}^n Z_i\cdot \frac{1}{n}\sum_{i=1}^n X_i\\
M_{(2,1)} & = \textstyle\frac{1}{n}\sum_{i=1}^n Z_i^{\ 2}\cdot X_i - \frac{1}{n}\sum_{i=1}^n Z_i^{\ 2}\cdot \frac{1}{n}\sum_{i=1}^n X_i\\
M_{(1,2)} & = \textstyle\frac{1}{n}\sum_{i=1}^n Z_i\cdot X_i^{\vphantom{\mathsf{T}}}X_i^\mathsf{T} - \frac{1}{n}\sum_{i=1}^n Z_i\cdot \frac{1}{n}\sum_{i=1}^n X_i^{\vphantom{\mathsf{T}}}X_i^\mathsf{T}\\
M_{(2,2)} & = \textstyle\frac{1}{n}\sum_{i=1}^n Z_i^{\ 2}\cdot X_i^{\vphantom{\mathsf{T}}}X_i^\mathsf{T} - \frac{1}{n}\sum_{i=1}^n Z_i^{\ 2}\cdot \frac{1}{n}\sum_{i=1}^n X_i^{\vphantom{\mathsf{T}}}X_i^\mathsf{T}
\end{aligned}
\end{equation}
Observe that the constraint in (\ref{eqn:fsvm_22}) involving the matrix $M_{(m,q)}$ for any value of $(m,q) \in [2]\times[2]$ is precisely the specific form of the $(m,q)$ constraint in (\ref{eq:fo}) for this particular setup. Fair SVM using the level-$(\mathfrak{g},\mathfrak{h})$ FO problem for $1 \leq \mathfrak{g},\mathfrak{h} \leq 2$ is given by (\ref{eqn:fsvm_22}) with the appropriate constraints involving $M_{(m,q)}$ removed. An example of using five-fold cross-validation to construct a Pareto frontier for fair SVM is shown in Fig \ref{fig:cv_fsvm}. For each possible value of the hyperparameters, cross-validation generates a quantitative value for model accuracy and for model fairness. These pairs of values describe points that are plotted in Fig \ref{fig:fsvm_cv}. Fig \ref{fig:fsvm_pf} shows the Pareto frontier. Locations on the Pareto frontier with a point marker can be directly achieved by a model with a given set of hyperparameters, while locations on the Pareto frontier in between two point markers can be achieved by a randomized prediction that randomly chooses from one of two deterministic predictions that arise from the two models corresponding to the two point markers.}
\end{example}
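The matrices $M_{(m,q)}$ in (\ref{eqn:exmat}) are empirical centered cross-moments and can be assembled directly from the data. A minimal NumPy sketch for the level-(2,2) case (function and variable names are ours):

```python
import numpy as np

def moment_matrices(X, Z, g=2, h=2):
    """Empirical centered cross-moment matrices M_(m,q) from (eqn:exmat).

    X : (n, p) array of features; Z : (n,) array of scalar protected
    attributes.  Returns a dict keyed by (m, q), where m is the power on
    Z and q indexes the feature term: q = 1 uses X_i (vector result),
    q = 2 uses the outer products X_i X_i^T (p-by-p matrix result).
    """
    n = X.shape[0]
    M = {}
    for m in range(1, g + 1):
        zm = Z ** m
        for q in range(1, h + 1):
            if q == 1:
                M[(m, q)] = zm @ X / n - zm.mean() * X.mean(axis=0)
            else:  # q == 2: centered cross-moment with X_i X_i^T
                outer = np.einsum('ni,nj->nij', X, X)
                M[(m, q)] = (np.einsum('n,nij->ij', zm, outer) / n
                             - zm.mean() * outer.mean(axis=0))
    return M
```

These matrices are all that the level-(2,2) constraints in (\ref{eqn:fsvm_22}) require; the constraints themselves are then linear or quadratic in $B$.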
\begin{figure*}[!t]
\begin{center}
\begin{subfigure}[t]{0.33\linewidth}
\includegraphics[width=\linewidth]{Letter_Rec_DataSVM.csv_pts_1r.pdf}
\caption{\label{fig:fsvm_cv} Cross-Validation}
\end{subfigure}\hfill
\begin{subfigure}[t]{0.33\linewidth}
\includegraphics[width=\linewidth]{Letter_Rec_DataSVM.csv_epc_1r.pdf}
\caption{\label{fig:fsvm_lpf} Level Pareto Frontiers }
\end{subfigure}\hfill
\begin{subfigure}[t]{0.33\linewidth}
\includegraphics[width=\linewidth]{Letter_Rec_DataSVM.csv_apc_1r.pdf}
\caption{\label{fig:fsvm_pf} Pareto Frontier}
\end{subfigure}
\caption{\label{fig:cv_fsvm} Pareto frontier for fair SVM on the Letter dataset. Cross-validation is used to identify points of possible tradeoff between model accuracy (measured by area under the curve) and fairness (measured by Kolmogorov-Smirnov distance between the joint and product distributions of the model prediction and the protected information) using the FO problem (left), Pareto frontiers can be constructed for each individual level of the FO problem (middle), and a single Pareto frontier can be constructed for all the levels of the FO problem (right). For the points, circles are level-(1,1), pluses are level-(1,2), exes are level-(2,1), and triangles are level-(2,2).}
\end{center}
\end{figure*}
\begin{figure*}[!t]
\begin{center}
\begin{subfigure}[t]{0.33\linewidth}
\includegraphics[width=\linewidth]{Communities_DataReg.csv_pts_1r.pdf}
\caption{\label{fig:freg_cv} Cross-Validation}
\end{subfigure}\hfill
\begin{subfigure}[t]{0.33\linewidth}
\includegraphics[width=\linewidth]{Communities_DataReg.csv_epc_1r.pdf}
\caption{\label{fig:freg_lpf} Level Pareto Frontiers }
\end{subfigure}\hfill
\begin{subfigure}[t]{0.33\linewidth}
\includegraphics[width=\linewidth]{Communities_DataReg.csv_apc_1r.pdf}
\caption{\label{fig:freg_pf} Pareto Frontier}
\end{subfigure}
\caption{\label{fig:cv_freg} Pareto frontier for fair regression on the Communities dataset. Cross-validation is used to identify points of possible tradeoff between model accuracy (measured by out-of-sample $R^2$) and fairness (measured by Kolmogorov-Smirnov distance between the joint and product distributions of the model prediction and the protected information) using the FO problem (left), Pareto frontiers can be constructed for each individual level of the FO problem (middle), and a single Pareto frontier can be constructed for all the levels of the FO problem (right). For the points, circles are level-(1,1), pluses are level-(1,2), exes are level-(2,1), and triangles are level-(2,2).}
\end{center}
\end{figure*}
\begin{example}{\label{ex:freg} Consider a regression setup with $(X_i, Y_i) \in \mathbb{R}^p\times\mathbb{R}$ and $Z_i \in \mathbb{R}$, and suppose we choose a linear decision rule $\delta(x) = Bx$ with $B \in \mathbb{R}^{1\times p}$. Then fair regression using the level-(2,2) FO problem (\ref{eq:fo}) is
\begin{equation}
\label{eqn:freg_22}
\begin{aligned}
\min_{B \in \mathbb{R}^{1\times p}}\ & \textstyle\frac{1}{n}\sum_{i=1}^n(Y_i - BX_i)^2\\
\text{s.t. }&-\hphantom{2}\epsilon^2 \leq BM_{(1,1)}\hphantom{B^\mathsf{T}} \leq \hphantom{2}\epsilon^2\\
&-2\epsilon^3 \leq BM_{(2,1)}\hphantom{B^\mathsf{T}} \leq 2\epsilon^3\\
&-2\epsilon^3 \leq BM_{(1,2)}B^\mathsf{T} \leq 2\epsilon^3\\
&-4\epsilon^4 \leq BM_{(2,2)}B^\mathsf{T} \leq 4\epsilon^4\\
&\|B\|_2 \leq \sqrt{\lambda}
\end{aligned}
\end{equation}
where the matrices are as in (\ref{eqn:exmat}). Observe that the constraint in (\ref{eqn:freg_22}) involving the matrix $M_{(m,q)}$ for any value of $(m,q) \in [2]\times[2]$ is precisely the specific form of the $(m,q)$ constraint in (\ref{eq:fo}) for this particular setup. Fair regression using the level-$(\mathfrak{g},\mathfrak{h})$ FO problem for $1 \leq \mathfrak{g},\mathfrak{h} \leq 2$ is given by (\ref{eqn:freg_22}) with the appropriate constraints involving $M_{(m,q)}$ removed. An example of using five-fold cross-validation to construct a Pareto frontier for fair regression is shown in Fig \ref{fig:cv_freg}. For each possible value of the hyperparameters, cross-validation generates a quantitative value for model accuracy and for model fairness. These pairs of values describe points that are plotted in Fig \ref{fig:freg_cv}. Fig \ref{fig:freg_pf} shows the Pareto frontier. Locations on the Pareto frontier with a point marker can be directly achieved by a model with a given set of hyperparameters, while locations on the Pareto frontier in between two point markers can be achieved by a randomized prediction that randomly chooses from one of two deterministic predictions that arise from the two models corresponding to the two point markers.}
\end{example}
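To build intuition for how the fairness constraints in (\ref{eqn:freg_22}) act on the coefficients, consider the extreme case $\epsilon = 0$ at level (1,1): the constraint $BM_{(1,1)} = 0$ forces $B$ into the orthogonal complement of $M_{(1,1)}$, and the least-squares problem can be solved after projecting onto that subspace. A minimal NumPy sketch of this special case only (it omits the higher-order and norm constraints of the full formulation; names are ours):

```python
import numpy as np

def fair_ls_level11(X, Y, Z):
    """Least squares subject to B @ m11 = 0 (level-(1,1) FO with eps = 0).

    m11 is the empirical centered cross-moment between Z and X, i.e. the
    sample covariance vector; restricting B to its orthogonal complement
    removes the linear correlation between the prediction B X and Z.
    """
    n = X.shape[0]
    m11 = Z @ X / n - Z.mean() * X.mean(axis=0)        # (p,)
    # Orthonormal basis N for the null space {b : b @ m11 = 0}.
    _, _, Vt = np.linalg.svd(m11.reshape(1, -1))
    N = Vt[1:].T                                       # (p, p-1)
    # Solve the reduced least-squares problem in that subspace.
    coef, *_ = np.linalg.lstsq(X @ N, Y, rcond=None)
    return N @ coef                                    # (p,) coefficients
```

With $\epsilon > 0$ and the remaining constraints restored, the feasible set is a convex region rather than a subspace, and the problem is solved with a conic solver as in our experiments.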
\subsection{Comparison Methods}
In the following subsections, we compare FO to three other methods. The methods of \cite{berk2017convex} and \cite{kamishima2012fairness} are designed for fair classification and fair regression, respectively, and are similar to our method in that they enforce fairness at training time. We also compare FO to the method of \cite{calmon2017optimized}, although this takes a pre-processing approach. { In all comparison methods, we include an $\ell_2$ regularization on the model coefficients $B$. This is done to ensure an equitable comparison to the FO problem (\ref{eq:fo}), which includes a constraint on the Euclidean norm of the model coefficients.}
\paragraph{Berk et al. \cite{berk2017convex}}
The method of \cite{berk2017convex} is one of the few comparable methods for fair regression. They also take an in-training approach, defining two regularization terms that enforce fairness. Let $P_z=\{i\in[n]:Z_i=z\}$, and note $\#P_z$ refers to the cardinality of these sets. Given a binary protected attribute $Z$, they define a regularizer for group fairness
\begin{equation}
\textstyle\big((\# P_{-1}\cdot\# P_{+1})^{-1}\sum_{i \in P_{-1}}\sum_{j\in P_{+1}}d(Y_i,Y_j)\cdot(X_i^\mathsf{T}\beta-X_j^\mathsf{T}\beta)\big)^2,
\end{equation}
for some distance measure $d(\cdot,\cdot)$. Note that this is similar to the term constrained in FO for $(m,q)=(1,1)$. They also define the following regularizer for individual fairness:
\begin{equation}
\textstyle(\# P_{-1}\cdot\# P_{+1})^{-1}\sum_{i \in P_{-1}}\sum_{j\in P_{+1}}d(Y_i,Y_j)\cdot(X_i^\mathsf{T}\beta-X_j^\mathsf{T}\beta)^2.
\end{equation}
\noindent This term is similar to a term in FO for $(m,q)=(1,2)$, although not equivalent. It has the benefit of being convex, although the double-summation term can be computationally prohibitive for large datasets. In our implementation, we estimate this term from a sub-sample (10\%) of the data when this issue arises. We note that this method can only accommodate binary-valued protected attributes, and so we cannot provide comparisons on several of the datasets for fair regression. For this method, the group fairness and individual fairness terms are implemented as penalties in the objective.
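For concreteness, the two regularizers above can be computed directly from their double-sum definitions. A minimal NumPy sketch for a binary attribute in $\{-1,+1\}$ (the Gaussian-kernel choice for $d(\cdot,\cdot)$ and all names are illustrative assumptions, not the definitive implementation):

```python
import numpy as np

def berk_regularizers(X, Y, Z, beta,
                      d=lambda yi, yj: np.exp(-(yi - yj) ** 2)):
    """Group and individual fairness regularizers of Berk et al.

    Z must be binary with values in {-1, +1}; d is an illustrative
    closeness weight between labels (assumed form, not prescribed here).
    """
    neg, pos = X[Z == -1], X[Z == +1]
    yn, yp = Y[Z == -1], Y[Z == +1]
    # Pairwise label weights and prediction gaps across the two groups.
    W = d(yn[:, None], yp[None, :])                    # (n-, n+)
    G = (neg @ beta)[:, None] - (pos @ beta)[None, :]  # (n-, n+)
    scale = 1.0 / (len(yn) * len(yp))
    group = (scale * np.sum(W * G)) ** 2       # squared mean of gaps
    individual = scale * np.sum(W * G ** 2)    # mean of squared gaps
    return group, individual
```

The $O(n^2)$ pairwise sums here are exactly why a sub-sample is used for large datasets in our implementation.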
\paragraph{Calmon et al. \cite{calmon2017optimized}}
This work is comparable to that of \cite{zemel2013learning}. Both of these works formulate nonparametric optimization problems whose solution yields a conditional distribution $f_{\widehat{X},\widehat{Y}|X,Y,Z}$ that then probabilistically transforms the data. We only compare our method to the approach introduced in \cite{calmon2017optimized}, since their formulation directly builds on that of \cite{zemel2013learning}.
Given a predefined notion of deviation amongst distributions, this method minimizes the overall deviation of $f_{\widehat{X},\widehat{Y}}$ from $f_{X,Y\vphantom{\widehat{Y}}}$. In the original work, the authors chose to minimize $\frac{1}{2}\sum_{x,y}|f_{\widehat{X},\widehat{Y}}(x,y)-f_{X,Y\vphantom{\widehat{Y}}}(x,y)|$. They also include constraints on the pointwise distortion $\mathbb{E}_{\widehat{X},\widehat{Y}|X,Y}[\theta((X,Y),(\widehat{X},\widehat{Y}))]$ for some user-defined function $\theta:\left\lbrace\mathbb{R}^p\times\{\pm1\}\right\rbrace^2\rightarrow\mathbb{R}_{\geq 0}$, as well as bounds on the dependence of the new main label $\widehat{Y}$ on the original protected label $Z$, measured by $J(f_{\widehat{Y}|Z}[y|z],f_{\vphantom{\widehat{Y}}Y}(y))$, where $J(a,b) = |\frac{a}{b}-1|$ is the probability ratio measure. Thus, the final formulation is
\begin{equation}
\label{eq:calmon}
\begin{aligned}
\min\ &\textstyle\frac{1}{2}\sum_{x,y}|f_{\widehat{X},\widehat{Y}}(x,y)-f_{X,Y\vphantom{\widehat{Y}}}(x,y)|\\
\text{s.t. }&\mathbb{E}_{\widehat{X},\widehat{Y}|X,Y}[\theta((X,Y),(\widehat{X},\widehat{Y}))\,|\,x,y]\le c, & \text{for all } x,y\\
&|f_{\vphantom{\widehat{Y}}Y}(y)^{-1}f_{\widehat{Y}|Z}[y|z]-1|\le d, & \text{for all } y,z\\
&f_{\widehat{X},\widehat{Y}|X,Y,Z}\textrm{ are all distributions.}
\end{aligned}
\end{equation}
Following the procedure used by the authors, we approximate $f_{X,Y,Z}$ with the empirical distribution of the original data, separated into a pre-selected number of bins. The resulting optimization problem has $8(\#\textrm{bins})^{2p}$ parameters, which can quickly become computationally intractable when the dataset is high-dimensional. To account for this, we follow the original work and choose the 3 features most correlated with the main label $Y$. Each dimension is split into 8 bins. We choose $\theta((x',y'),(x,y))$ to be $0$ if $y=y'$ and $x=x'$, $0.5$ if $y=y'$ and $x,x'$ differ by at most one in any dimension, and $1$ otherwise; this is similar to the $\theta$ chosen in the original paper.
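The distortion function $\theta$ we just described can be written out directly over binned feature vectors. A minimal sketch (names are ours):

```python
def distortion(x_old, y_old, x_new, y_new):
    """Distortion theta((x', y'), (x, y)) over binned feature tuples.

    0   if the label and every binned feature are unchanged,
    0.5 if the label is unchanged and each feature moved by at most
        one bin,
    1   otherwise (including any label flip).
    """
    if y_old != y_new:
        return 1.0
    if tuple(x_old) == tuple(x_new):
        return 0.0
    if all(abs(a - b) <= 1 for a, b in zip(x_old, x_new)):
        return 0.5
    return 1.0
```

In the pre-processing optimization, this function populates the expected-distortion constraint for every pair of original and transformed bins.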
\paragraph{Kamishima et al. \cite{kamishima2012fairness}}
Another comparable method is that of \cite{kamishima2012fairness}, which also aims to enforce fairness at training time. As opposed to our approach of bounding interaction moments, they instead regularize with a mutual information term. This method also differs notably from our framework in that it imposes different treatments for different protected classes, violating the principle of individual fairness; as a result, it is also unable to handle continuous protected attributes. The authors implement their regularizer in the context of logistic regression. Let $\sigma$ be the sigmoid function and $g_{\beta}[y|x,z]=y\sigma(x^{\textsf{T}}\beta)+(1-y)(1-\sigma(x^{\textsf{T}}\beta))$, and note that the notation $\beta_z$ indicates that this approach has a different set of coefficients for each possible value of $Z$. The authors approximate the mutual information as
\begin{equation}
\textstyle n^{-1}\sum_{i=1}^{n}\sum_{y\in\{0,1\}}g_{\beta_{Z_i}}[y|X_i,Z_i]\log\frac{\widehat{P}[y|Z_i]}{\widehat{P}(y)},
\end{equation}
with $\widehat{P}[y|z] = (\# P_{z})^{-1}\sum_{i\in P_{z}}g_{\beta_{z}}[y|X_i,z]$ and $\widehat{P}(y) = \frac{1}{n}\sum_{i=1}^{n}g_{\beta_{Z_i}}[y|X_i,Z_i]$. This is then weighted and added to the objective as a regularizer. We include this method as a comparison to our fair SVM, while noting the core differences mentioned above. All experiments for this method were done using the sequential least squares programming approach of \cite{kraft1988software}.
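The mutual-information approximation above can be computed in a few lines. A minimal NumPy sketch for a binary protected attribute, with labels coded in $\{0,1\}$ (consistent with the form of $g_\beta$) and one coefficient vector per protected class as the method requires; names are ours:

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def kamishima_mi(X, Z, betas):
    """Approximate mutual-information regularizer of Kamishima et al.

    X : (n, p) features; Z : (n,) protected labels in {0, 1}; betas[z]
    is the coefficient vector for class z.  Labels y are in {0, 1} with
    g[y = 1 | x, z] = sigmoid(x @ beta_z).
    """
    n = X.shape[0]
    p1 = np.array([sigmoid(X[i] @ betas[Z[i]]) for i in range(n)])
    g = np.stack([1 - p1, p1], axis=1)                        # (n, 2)
    p_y = g.mean(axis=0)                                      # P-hat(y)
    p_yz = np.array([g[Z == z].mean(axis=0) for z in (0, 1)]) # P-hat(y|z)
    ratio = p_yz[Z] / p_y                                     # (n, 2)
    return np.sum(g * np.log(ratio)) / n
```

When the per-class coefficients coincide and the groups behave identically, the conditional and marginal label distributions match and the term vanishes, as expected of a mutual-information estimate.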
\begin{figure*}[!t]
\begin{center}
\begin{subfigure}[t]{0.33\linewidth}
\includegraphics[width=\linewidth]{Arrhythmia_DataSVM.csv_cmp.pdf}
\caption{Arrhythmia}
\end{subfigure}\hfill
\begin{subfigure}[t]{0.33\linewidth}
\includegraphics[width=\linewidth]{Parkinsons_DataSVM.csv_cmp.pdf}
\caption{Parkinson's}
\end{subfigure}\hfill
\begin{subfigure}[t]{0.33\linewidth}
\includegraphics[width=\linewidth]{Recidivism_DataSVM.csv_cmp.pdf}
\caption{Recidivism}
\end{subfigure}\hfill\\
\begin{subfigure}[t]{0.33\linewidth}
\includegraphics[width=\linewidth]{Statlog_DataSVM.csv_cmp.pdf}
\caption{Statlog}
\end{subfigure}\hfill
\begin{subfigure}[t]{0.33\linewidth}
\includegraphics[width=\linewidth]{Taiwanese_Credit_DataSVM.csv_cmp.pdf}
\caption{Taiwan Credit}
\end{subfigure}\hfill
\begin{subfigure}[t]{0.33\linewidth}
\includegraphics[width=\linewidth]{Wine_Quality_DataSVM.csv_cmp.pdf}
\caption{Wine Quality}
\end{subfigure}\hfill
\caption{\label{fig:fsvm} Pareto frontiers for fair SVM on datasets with binary protected attribute. The approaches compared are the FO formulation (solid line), Kamishima et al. \cite{kamishima2012fairness} (dotted line), and Calmon et al. \cite{calmon2017optimized} (dashed line). The square mark denotes linear SVM without any fairness modifications.}
\end{center}
\end{figure*}
\begin{figure*}[!t]
\begin{center}
\begin{subfigure}[t]{0.33\linewidth}
\includegraphics[width=\linewidth]{Biodeg_DataSVM.csv_cmp.pdf}
\caption{Biodeg}
\end{subfigure}\hfill
\begin{subfigure}[t]{0.33\linewidth}
\includegraphics[width=\linewidth]{German_Credit_DataSVM.csv_cmp.pdf}
\caption{German Credit}
\end{subfigure}\hfill
\begin{subfigure}[t]{0.33\linewidth}
\includegraphics[width=\linewidth]{Letter_Rec_DataSVM.csv_cmp.pdf}
\caption{Letter}
\end{subfigure}\hfill\\
\begin{subfigure}[t]{0.33\linewidth}
\includegraphics[width=\linewidth]{Pima_Diabetes_DataSVM.csv_cmp.pdf}
\caption{Pima Diabetes}
\end{subfigure}\hfill
\begin{subfigure}[t]{0.33\linewidth}
\includegraphics[width=\linewidth]{Skillcraft_DataSVM.csv_cmp.pdf}
\caption{SkillCraft}
\end{subfigure}\hfill
\begin{subfigure}[t]{0.33\linewidth}
\includegraphics[width=\linewidth]{Steel_DataSVM.csv_cmp.pdf}
\caption{Steel}
\end{subfigure}\hfill
\caption{\label{fig:fsvm2} Pareto frontiers for fair SVM on datasets with continuous or categorical protected attributes. The approaches compared are the FO formulation (solid line) and Calmon et al. \cite{calmon2017optimized} (dashed line). The square mark denotes linear SVM without any fairness modifications.}
\end{center}
\end{figure*}
\subsection{Fair SVM}
We consider classification problems using a series of datasets. {For the FO approach, we consider the formulation in Example \ref{ex:fsvm}. We perform five-fold cross-validation repeated five times. The Pareto frontiers of the different approaches are shown in Fig \ref{fig:fsvm} and Fig \ref{fig:fsvm2}. Accuracy is measured by the area under the curve (AUC), since classifier models are often used as scores that are then subject to different thresholds. Fairness is measured by the Kolmogorov-Smirnov distance between the joint and product distributions of the model prediction and the protected information. The variance of the results over the five repetitions is low, and so it is not plotted, to make the results easier to visualize.} Since the mutual-information-based method of \cite{kamishima2012fairness} cannot accommodate continuous protected variables, results are not reported for this method on the associated datasets. We note that our method often improves fairness at less cost (in terms of accuracy) than the method of \cite{calmon2017optimized}. This is to be expected, because such pre-processing approaches do not take into account the downstream task that the transformed data will be used for. Our method is also able to match or improve upon the fairness results of the mutual information approach. Recall that this method maintains explicitly different treatments for different protected classes, while ours adheres to the principle of individual fairness. Given this, it is unsurprising that the method of \cite{kamishima2012fairness} can sometimes achieve fairness at a lower cost to accuracy, although our method outperforms even on this metric for a number of datasets. Further, this feature of disparate treatments can yield fairness values notably worse than even a standard SVM. {Interestingly, for the Taiwan Credit, Letter, and Steel datasets our method can do strictly better in terms of both accuracy and fairness than linear SVM without fairness modifications.}
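The fairness metric used throughout these experiments, the Kolmogorov-Smirnov distance between the joint and product distributions of the prediction and the protected attribute, simplifies for a binary attribute: a short computation (ours) shows it equals $\pi_0\pi_1\sup_s|F_0(s)-F_1(s)|$, where $F_0,F_1$ are the per-group prediction CDFs and $\pi_0,\pi_1$ the group proportions. A minimal empirical sketch under that reduction (names are ours):

```python
import numpy as np

def ks_fairness(scores, z):
    """Empirical KS distance between the joint CDF of (score, Z) and the
    product of the marginal CDFs, for binary Z.

    Equals pi0 * (1 - pi0) * sup_s |F0(s) - F1(s)| over the sample grid.
    """
    z0 = np.min(z)
    s0 = np.sort(scores[z == z0])
    s1 = np.sort(scores[z != z0])
    pi0 = len(s0) / len(scores)
    grid = np.sort(scores)
    # Right-continuous empirical CDFs of each group on the pooled grid.
    F0 = np.searchsorted(s0, grid, side='right') / len(s0)
    F1 = np.searchsorted(s1, grid, side='right') / len(s1)
    return pi0 * (1 - pi0) * np.max(np.abs(F0 - F1))
```

A value of zero indicates the two groups receive identically distributed predictions; the metric is computed on held-out predictions in our experiments.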
\begin{figure*}[!t]
\begin{center}
\begin{subfigure}[t]{0.33\linewidth}
\includegraphics[width=\linewidth]{EEG_DataReg.csv_cmp.pdf}
\caption{EEG}
\end{subfigure}\hfill
\begin{subfigure}[t]{0.33\linewidth}
\includegraphics[width=\linewidth]{Parkinsons_DataReg.csv_cmp.pdf}
\caption{Parkinson's}
\end{subfigure}\hfill
\begin{subfigure}[t]{0.33\linewidth}
\includegraphics[width=\linewidth]{Wine_Quality_DataReg.csv_cmp.pdf}
\caption{Wine Quality}
\end{subfigure}\hfill\\
\begin{subfigure}[t]{0.33\linewidth}
\includegraphics[width=\linewidth]{Communities_DataReg.csv_cmp.pdf}
\caption{Communities}
\end{subfigure}\hfill
\begin{subfigure}[t]{0.33\linewidth}
\includegraphics[width=\linewidth]{Energy_DataReg.csv_cmp.pdf}
\caption{Energy}
\end{subfigure}\hfill
\begin{subfigure}[t]{0.33\linewidth}
\includegraphics[width=\linewidth]{Music_DataReg.csv_cmp.pdf}
\caption{Music}
\end{subfigure}\hfill
\caption{\label{fig:freg} Pareto frontiers for fair regression. The approaches compared are the FO formulation (solid line) and Berk et al. \cite{berk2017convex} (dashed line). The square mark denotes ridge regression without any fairness modifications.}
\end{center}
\end{figure*}
\subsection{Fair Regression}
We next consider regression problems using another series of datasets. {For the FO approach, we consider the formulation in Example \ref{ex:freg}. We perform five-fold cross-validation repeated five times. The Pareto frontiers of the different approaches are shown in Fig \ref{fig:freg}. Accuracy is measured by the out-of-sample $R^2$ (OR2), so higher values of OR2 imply better accuracy. Fairness is measured by the Kolmogorov-Smirnov distance between the joint and product distributions of the model prediction and the protected information. The variance of the results over the five repetitions is low, and so it is not plotted, to make the results easier to visualize.} As the method of \cite{berk2017convex} is unable to accommodate non-binary protected attributes, we only provide results for the appropriate datasets. Again, we note that our method can reduce the bias of a typical linear regression problem. {Our method generally does better than \cite{berk2017convex} on the datasets where \cite{berk2017convex} can be applied.}
\subsection{Case Study: Morphine Dosing}
Opioid overdoses, including from illicit heroin and synthetic fentanyl, have become the leading cause of death in Americans under 50 \cite{salam2017opioid}. Today, Americans comprise 4.6\% of the global population but account for 51.2\% of global morphine usage. Hence there has been much recent interest in regulated and disciplined methods for dosing \cite{manchikanti2017responsible}. At the same time, recent reports have indicated that women and low-income patients are more likely to be under-diagnosed for pain or made to wait longer for a diagnosis \cite{billock2018pain,dusenbury2018everybody}. Thus, we seek to employ FO to train an individualized dosing policy that adapts to each patient's measurements and status, but can be made certifiably fair with regards to protected labels.
We extracted data for 7156 morphine prescriptions made to 4612 unique patients extracted from the publicly-available Multiparameter Intelligent Monitoring in Intensive Care (MIMIC III) database \cite{saeed2011multiparameter}. For each patient, we collected age (at the time of prescription), heart rate, breath rate, blood pressure (both systolic and diastolic), weight and temperature. In all cases, measurements are the latest possible within 48 hours of prescription. We also collect, as categorical variables, admission type (ER, urgent care or other), service type (surgery or medical), ethnicity (black, white or other), gender (male or female) and insurance type (private or governmental). We also note the presence of embolism or obesity amongst the diagnoses of the patients at admission. We exclude all patients who are not prescribed Morphine Sulfate to be taken intravenously, and all patients for whom the appropriate measurements were not available. Since there are medical justifications for the consideration of gender and ethnicity in opioid dosing, we decide to instead consider insurance type as our protected variable in this analysis. To begin, we conduct a standard linear regression to determine if insurance type does currently play a role in, or is at least highly correlated with, morphine dosage, conditional on all other variables considered. The regression found that insurance type had a large-magnitude coefficient with $p < 0.001$, which provides some statistical evidence that insurance type is correlated with dosing even after adjusting for the other predictor variables.
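The kind of conditional check described above can be sketched with ordinary least squares. The data, variable names, and coefficients below are entirely synthetic stand-ins (not MIMIC-III values); the point is only the mechanics of testing whether the protected flag carries signal after adjusting for other covariates:

```python
import numpy as np

# Synthetic stand-in: regress dosage on covariates plus a binary insurance
# flag and inspect the flag's coefficient and t-statistic. All names and
# coefficients are illustrative, not values from MIMIC-III.
rng = np.random.default_rng(1)
n = 4000
heart_rate = rng.normal(80.0, 12.0, n)
weight = rng.normal(75.0, 15.0, n)
insurance = rng.integers(0, 2, n)       # 0 = private, 1 = governmental
dose = (0.05 * weight + 0.02 * heart_rate
        - 0.8 * insurance + rng.normal(0.0, 1.0, n))

X = np.column_stack([np.ones(n), heart_rate, weight, insurance])
beta, *_ = np.linalg.lstsq(X, dose, rcond=None)

# Classical OLS standard error and t-statistic for the insurance coefficient
resid = dose - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])
cov = sigma2 * np.linalg.inv(X.T @ X)
t_insurance = beta[3] / np.sqrt(cov[3, 3])
print(beta[3], t_insurance)   # a large |t| corresponds to p << 0.001
```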
\begin{figure*}[!t]
\begin{center}
\begin{subfigure}[t]{0.33\linewidth}
\includegraphics[width=\linewidth]{mor_pts.pdf}
\caption{\label{fig:fqua_cv} Cross-Validation}
\end{subfigure}\hfill
\begin{subfigure}[t]{0.33\linewidth}
\includegraphics[width=\linewidth]{mor_epc.pdf}
\caption{\label{fig:fqua_lpf} Level Pareto Frontiers }
\end{subfigure}\hfill
\begin{subfigure}[t]{0.33\linewidth}
\includegraphics[width=\linewidth]{mor_apc.pdf}
\caption{\label{fig:fqua_pf} Pareto Frontier}
\end{subfigure}
\caption{\label{fig:cv_fqua} Pareto frontier of learned morphine dosage rules. Five-fold cross-validation repeated five times identifies points of possible tradeoff between model accuracy (measured by risk) and fairness (measured by Kolmogorov-Smirnov distance between the joint and product distributions of the learned dosage and the insurance type) using the FO problem (left), Pareto frontiers can be constructed for each individual level of the FO problem (middle), and a single Pareto frontier can be constructed for all the levels of the FO problem (right). For the points, circles are level-(1,1), pluses are level-(1,2), and the square is quantile regression without fairness modifications.}
\end{center}
\end{figure*}
One possible risk function for dosing is analogous to the newsvendor problem from the operations research community, where supply must be chosen beforehand to meet random demand and undersupply/oversupply are penalized differently. Recent work formulated a data-driven newsvendor model, where demand is predicted via a quantile regression problem \cite{sachs2015data}. Similarly, we can treat dosage as a matter of supply, with demand being the amount of medication that a specific patient needs. In our case, we impose a linearly increasing cost to both under-prescription and over-prescription, with the cost to over-prescription increasing half as quickly as that of under-prescription. {This means we use the loss $R_n(\delta) = \frac{1}{n}\sum_{i=1}^n \big[\max\{0,\delta(X_i) - Y_i\} + 2\max\{0,-(\delta(X_i) - Y_i)\}\big]$ and consider decision rules of the form $\delta(x) = Bx$. The nondifferentiability of this loss is easily handled by introducing the slack variables $s_i,t_i$ and noting that $R_n(\delta) = \frac{1}{n}\sum_{i=1}^n (s_i + 2\cdot t_i)$ subject to the constraints $s_i \geq 0$, $s_i \geq \delta(X_i) - Y_i$, $t_i \geq 0$, and $t_i \geq -(\delta(X_i) - Y_i)$.} This reflects the short-term nature of the risks of under-prescription, and the long-term nature of the risks of over-prescription. Given the features described above (excluding insurance payer), we then formulate varying levels of our FO to solve the quantile regression problem that specifies dosing.
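As a concrete check of the asymmetry, the empirical risk above can be evaluated directly; the helper below (our own naming) penalizes a unit of under-prescription twice as heavily as a unit of over-prescription:

```python
import numpy as np

def dosing_risk(pred, actual):
    """Empirical risk R_n with linearly increasing costs: over-prescription
    (pred > actual) is penalized at unit rate, under-prescription at twice
    that rate, matching the loss in the text (helper name is ours)."""
    over = np.maximum(0.0, pred - actual)
    under = np.maximum(0.0, actual - pred)
    return np.mean(over + 2.0 * under)

actual = np.array([10.0, 10.0, 10.0])
print(dosing_risk(np.array([12.0, 10.0, 10.0]), actual))  # 2/3: over by 2 units
print(dosing_risk(np.array([8.0, 10.0, 10.0]), actual))   # 4/3: under by 2 units
```

The same absolute error costs twice as much on the under-prescription side, reflecting the relative severity of the two mistakes.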
\begin{figure*}[t]
\begin{center}
\begin{subfigure}[t]{0.3\linewidth}
\includegraphics[width=\linewidth]{mor_qr.pdf}
\caption{\label{fig:morphqr} \scriptsize Quantile regression}
\end{subfigure}\hfill
\begin{subfigure}[t]{0.3\linewidth}
\includegraphics[width=\linewidth]{mor_11.pdf}
\caption{\label{fig:morphfo1} \scriptsize level-(1,1) FO}
\end{subfigure}\hfill
\begin{subfigure}[t]{0.3\linewidth}
\includegraphics[width=\linewidth]{mor_12.pdf}
\caption{\label{fig:morphfo2} \scriptsize level-(1,2) FO}
\end{subfigure}
\end{center}
\caption{\label{fig:morphdist} Distributions of morphine dosage, conditional on insurance type, for varying levels of FO. Histograms are shown of morphine dosages recommended by rules generated using quantile regression with no fairness modifications (left), the level-(1,1) FO problem (center), and the level-(1,2) FO problem (right). These histograms are generated by combining the recommended dosages for the hold-out data when doing five-fold cross-validation repeated five times. These results show how using increasing levels of the FO problem can yield more similar distributions. Note that all negative dosage recommendations from the respective models are replaced with zero.}
\end{figure*}
The results of our analysis are displayed in Fig \ref{fig:cv_fqua} and Fig \ref{fig:morphdist}. In Fig \ref{fig:cv_fqua}, the tradeoff between risk and fairness is displayed, as well as the range of best possible dosage rules. Visual evidence of the reduction in disparate impact is shown in Fig \ref{fig:morphdist}, which presents the difference in the distribution of dosage levels across insurance types for standard Quantile Regression (QR), the level-(1,1) FO { with hyperparameters that provide an intermediate tradeoff between risk and fairness}, and the level-(1,2) FO { with hyperparameters that provide the maximum level of fairness achievable}. There is a clear disparity between the distributions in Fig \ref{fig:morphqr}, but this difference is significantly reduced in Fig \ref{fig:morphfo1} and even more so in Fig \ref{fig:morphfo2}. {In fact, Fig \ref{fig:cv_fqua} shows that an intermediate tradeoff between risk and fairness using the level-(1,1) FO problem increases risk by 0.5\% while improving fairness by 45\%, whereas the maximum fairness achievable by the level-(1,2) FO problem increases the risk by only 1.5\% while improving fairness by 70\%.}
\section{Conclusion}
We proposed an optimization hierarchy for fair statistical decision problems, which provides a systematic approach to fair versions of hypothesis testing, decision-making, estimation, regression, and classification. We proved that higher levels of this hierarchy asymptotically impose independence between the output of the decision rule and the protected variable as a constraint in corresponding statistical decision problems. We demonstrated numerical effectiveness of our hierarchy using several data sets. An important question that remains to be answered is how to tune the hyperparameters in our hierarchy. Our theoretical results provide some guidance on how to choose the level of the hierarchy and how to reduce the number of tuning parameters to just one. However, further theoretical and empirical study is needed to better understand the tuning process.
\bibliographystyle{imsart-number}
\section{Introduction}
\label{intro}
In real scenarios, images are often corrupted by different types of noise, e.g., of additive, multiplicative, or mixed nature. Hence noise removal is a crucial initial stage for high-level image
analysis. In this work, we focus our interest only on the multiplicative speckle noise removal process. The purity of the edge/texture information in synthetic aperture radar (SAR) images, ultrasound images, and
laser images is usually diminished by speckle noise \cite{burckhardt1978speckle,loizou2005comparative,prager2001speckle}. Due to the contamination by speckle noise, it is challenging to distinguish
the hidden details in the images. Therefore the development of an advanced speckle noise removal algorithm is always an essential task for the image processing community. A mathematical representation
is required to develop an efficient noise removal algorithm, so that we can express each pixel of an image as a function of the speckle noise. The popularly used model for the noisy image can be expressed
as a product of the original signal and the speckle noise \cite{dutt1995statistical}
\begin{equation*}
J=I\eta,
\end{equation*}
where $J$ indicates the noisy image, $I$ is the noise-free image, and $\eta$ signifies the speckle-noise process.
In general, the probability density function of the multiplicative speckle noise process $\eta$ follows the Gamma Law as,
\begin{equation*}
g(\eta) =
\begin{cases}
\frac{L^L}{\Gamma \left( L \right)}\eta^{L-1}\text{exp}\left(-L\eta\right), \hspace{0.5cm} \text{for}\hspace{0.1cm} \eta >0, \\
\hspace{1.5cm} 0, \hspace{1.9cm} \text{for}\hspace{0.1cm} \eta = 0,
\end{cases}
\end{equation*}
where $L \in {\rm I\!N} $ signifies the number of looks which correspond to the noise level in the corrupted images \cite{argenti2013tutorial,hao2015variational,liu2016modified} and $\Gamma \left( \cdot \right)$ denotes the Gamma function.
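To make the noise model concrete, fully developed speckle with $L$ looks can be simulated by sampling $\eta$ from a Gamma distribution with shape $L$ and scale $1/L$, so that $\mathbb{E}[\eta]=1$ and $\mathrm{Var}[\eta]=1/L$. The following short sketch (our own illustrative code, not part of the experiments) demonstrates how the noise level decreases as $L$ grows:

```python
import numpy as np

def add_speckle(image, L, rng):
    """Multiply a clean image by Gamma-distributed speckle with L looks:
    shape=L and scale=1/L give E[eta] = 1 and Var[eta] = 1/L, so a larger
    number of looks L means weaker noise, in line with the Gamma law above."""
    eta = rng.gamma(shape=L, scale=1.0 / L, size=image.shape)
    return image * eta

rng = np.random.default_rng(42)
clean = np.full((256, 256), 100.0)
for L in (1, 4, 16):
    noisy = add_speckle(clean, L, rng)
    # Sample variance of noisy/clean approaches Var[eta] = 1/L
    print(L, noisy.mean(), noisy.var() / clean.mean() ** 2)
```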
A large number of studies report the fundamentals and the statistical attributes of speckle noise \cite{achim2001novel,argenti2013tutorial,frost1982model,kuan1985adaptive,lee1980digital,prager2001speckle}.
Existing despeckling approaches include Bayesian methods in the spatial domain \cite{frost1982model,kuan1985adaptive,lee1980digital},
Bayesian methods in transformed domain \cite{aiazzi1998multiresolution,hao1999novel,meer1994multiresolution}, order-statistics and
morphological filters \cite{alparone1998decimated,alparone1995two,crimmins1985geometric,prager2001speckle},
simulated annealing despeckling \cite{white1994simulated}, nonlocal
filtering \cite{buades2005non,coupe2008bayesian,deledalle2009iterative,teuber2012new},
wavelet-based approaches \cite{achim2001novel,sudha2009speckle},
nonlinear diffusion in Laplacian pyramid domain \cite{zhang2007nonlinear},
anisotropic diffusion based methods \cite{jain2019non,jain2018nonlinear,jin2000adaptive,shan2019multiplicative,yu2002speckle,zhou2015doubly,zhou2018nonlinear},
and variational methods \cite{ aubert2008variational,dong2013convex,huang2010multiplicative,jidesh2013complex, jin2010analysis, jin2011variational, liu2013nondivergence,rudin2003multiplicative,shi2008nonlinear}.
Since the introduction of the PM model \cite{perona1990scale}, partial differential equations (PDEs) have been extensively used to develop noise removal algorithms; among the different types of PDE based models, the total variation (TV) based algorithms have achieved remarkable results. The first variational strategy to deal with multiplicative noise was proposed by Rudin et al. \cite{rudin2003multiplicative}, with the principles,
\begin{equation*}
\int_{\Omega} \frac{J}{I}dx=1, \quad {\rm and}\quad \int_{\Omega} {\left( \frac{J}{I}-1\right) }^2dx=\sigma^2\,,
\end{equation*}
where $\sigma^2$ represents the variance of the noise $\eta$.
Due to the non-convexity of their proposed energy functional, the model may not give a globally unique solution. To overcome this shortcoming, several authors suggested various convex functionals with different data fidelity terms \cite{aubert2008variational,huang2010multiplicative, jin2011variational, liu2013nondivergence}.
Recently, Dong et al. \cite{dong2013convex} suggested a convex total variation model for multiplicative speckle-noise reduction of the following form:
\begin{equation}\label{eq:Dong_energy}
I=\min_{I\in \text{BV} \left(\Omega\right)}\left\lbrace \int_{\Omega} \alpha(x)|\nabla I|dx+\lambda \int_{\Omega}\left( I+J \log\frac{1}{I}\right) dx\right\rbrace. \nonumber
\end{equation}
They choose the gray level indicator function $\alpha$, as
\begin{equation}\label{eq:gray_indicator}
\left( 1-\frac{1}{1+k|G_\xi \ast J|^2}\right) \frac{1+kM^2}{kM^2},\,\,\,\, \text{or}\,\,\,\,\, \dfrac{G_{\xi}\ast J}{M}, \nonumber
\end{equation}
with $M=\underset{x \in \Omega}{\text{sup}}(G_\xi \ast J)(x)$, where $\xi>0$, $k>0$, ``$\ast$" is the convolution operator, $G_\xi$ is the two dimensional Gaussian kernel and $\lambda$ is a given parameter, see \cite{dong2013convex}. Later, based on a gray level indicator function, Zhou et al. proposed a diffusion model~(DDD model)\cite{zhou2015doubly} for multiplicative noise removal problem. Their model takes the form:
\begin{align*}
&I_t = \text{div}(g(I,|\nabla I|)\nabla I), \hspace{0.3cm} \text{in} \hspace{0.2cm} \Omega_T:= \Omega \times (0,T), \\
&\partial_n I=0, \hspace{2.6cm} \text{in} \hspace{0.2cm} \partial \Omega_T:= \partial \Omega \times (0,T),\\
&I(x,0)=I_0(x), \hspace{1.5cm} \text{in} \hspace{0.2cm}\Omega,
\end{align*}
where $\Omega$ is the domain of the original image $I$ and the observed noisy image $I_0$, and $\text{div}$ and $\nabla$ represent the divergence and gradient operators, respectively.
They choose the diffusion coefficient as
\begin{align*}
g\left(I,\vert \nabla I \vert \right)=\dfrac{2\vert I \vert^\nu}{M^\nu+\vert I \vert^\nu}\cdot \dfrac{1}{\left(1+ |\nabla I|^2\right)^{(1-\beta)/2} },
\end{align*}
where $\nu>0,$ $0<\beta<1,$ and $M=\underset{x \in \Omega}{\text{sup}} I$. In this case, the gray level indicator and edge detector function are $a(I):=\dfrac{2\vert I \vert^\nu}{M^\nu+\vert I \vert^\nu}$ and $b(I):= \dfrac{1}{\left(1+ |\nabla I|^2\right)^{(1-\beta)/2} }$ respectively.
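The behaviour of the DDD coefficient can be illustrated numerically. The sketch below (our own illustration; `np.gradient`'s finite differences stand in for the continuous gradient) shows that $g$ stays near $1$ in bright flat regions while being strongly damped across a sharp edge:

```python
import numpy as np

def ddd_coefficient(I, nu=2.0, beta=0.5):
    """Diffusion coefficient g(I, |grad I|) of the DDD model: the gray level
    indicator a(I) slows diffusion in dark regions, and the edge detector
    b(|grad I|) slows it across strong gradients. Finite differences via
    np.gradient stand in for the continuous gradient (illustrative choice)."""
    M = np.abs(I).max()
    a = 2.0 * np.abs(I) ** nu / (M ** nu + np.abs(I) ** nu)
    gy, gx = np.gradient(I)
    b = 1.0 / (1.0 + gx ** 2 + gy ** 2) ** ((1.0 - beta) / 2.0)
    return a * b

# A bright flat region diffuses freely; a sharp edge strongly damps diffusion.
I = np.zeros((8, 8))
I[:, 4:] = 200.0
g = ddd_coefficient(I)
print(g[0, 6], g[0, 4])   # ~1.0 in the flat bright area, ~0.1 at the edge
```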
However, because of the degeneracy of the edge detector function, i.e., $b(|\nabla I |) \rightarrow 0$ as $|\nabla I | \rightarrow \infty$, it is challenging to establish the well-posedness of their model.
Recently, Shan et al. \cite{shan2019multiplicative} proposed a regularized version of the above-discussed model \cite{zhou2015doubly}.
In \cite{shan2019multiplicative}, the model takes the form
\begin{align*}
&I_t = \text{div}(g(I_\xi,|\nabla I_\xi|)\nabla I), \hspace{0.2cm} \text{in} \hspace{0.2cm} \Omega_T, \\
&\partial_n I=0, \hspace{2.6cm} \text{in} \hspace{0.2cm} \partial \Omega_T,\\
&I(x,0)=I_0(x), \hspace{1.5cm} \text{in} \hspace{0.2cm}\Omega\,.
\end{align*}
They choose the diffusion coefficient as
\begin{align*}
g\left(I,\vert \nabla I \vert \right)= \left( \dfrac{I_\xi}{M_\xi^I} \right)^\nu \cdot \dfrac{1}{1+ |\nabla I_\xi|^\beta },
\end{align*}
where $I_\xi=G_\xi\ast I$, $M_\xi^I= \underset{x\in \Omega}{\text{max}}\vert I_\xi(x,t) \vert$, $I_0$ is the initial image, and $\nu, \beta$ and $\xi$ are positive constants. Due to the introduction of the Gaussian kernel in the diffusion coefficient, which avoids the degeneracy in the model, the authors are able to establish the well-posedness of the underlying problem.
To the best of our knowledge, most researchers have concentrated their interest only on parabolic PDE based models, developed either from the variational approach or the diffusion based approach, for the speckle-noise removal process. Hyperbolic PDEs can upgrade the quality of the detected edges and restore the image better than parabolic PDEs \cite{averbuch2006edge}. In the existing literature, the first
hyperbolic model for image denoising is the telegraph-diffusion model \cite{ratner2007image}, where the image was viewed as an elastic sheet placed in a damping environment, which interpolates between the diffusion
equation and the wave equation. The telegraph-diffusion model takes the form,
\begin{align*}
&I_{tt}+\gamma I_t =\text{div}(g(|\nabla I|)\nabla I), \hspace{0.6cm} \text{in} \hspace{0.2cm} \Omega_T, \\
&\partial_n I=0, \hspace{3.5cm} \text{in}
\hspace{0.2cm} \partial \Omega_T,\\
&I(x,0)=I_0(x), \hspace{0.2cm} I_t(x,0)=0, \hspace{0.3cm} \text{in} \hspace{0.2cm}\Omega,
\end{align*}
where $g(|\nabla I|)=1/(1+({|\nabla I|^2}/{k^2}))$ is an edge-controlled diffusion function which preserves the important features and smooths the unwanted signals, and $\gamma$ is the damping parameter.
It is quite interesting to note that for very large values of $g$ and $\gamma$, this telegraph-diffusion equation (TDE model) converges
to the original PM model \cite{perona1990scale} in the long-time scenario. Although the TDE model performs better, it is challenging to confirm the well-posedness of the model. To overcome the ill-posedness issue in the TDE model \cite{ratner2007image}, Cao et al. suggested a regularized TDE model \cite{cao2010class}. They replaced the gradient $|\nabla I|$ by $|\nabla G_\xi \ast I| $ in the edge-controlled function $g$ of the TDE model \cite{ratner2007image} and established the well-posedness of their proposed model.
Even though the TDE model can effectively preserve sharp edges, it fails to produce satisfactory smoothing in the presence of a large amount of noise.
To overcome this issue, several non-linear telegraph diffusion-based methods have been proposed \cite{cao2010class,jain2016edge,sun2016class,yang2014kernel,zhang2015spatial}. However, in spite of their impressive applications in the additive noise removal process, hyperbolic PDE based approaches have not been successfully used for the speckle noise removal process.
Recently, Sudeb et al. suggested a fuzzy edge detector based telegraph total variation model \cite{fuzzy2019ttvmodel} for the speckle noise removal problem. To the best of our knowledge, this is the first hyperbolic PDE based model in the existing literature applied to the speckle noise removal process. The model \cite{fuzzy2019ttvmodel} takes the form,
\begin{align*}
&I_{tt}+\gamma I_t = \text{div}\left(\theta(I)\frac{\nabla I}{|\nabla I|}\right)-\lambda \left( 1-\frac{I_0}{I}\right),
\hspace{0.6cm} \text{in} \hspace{0.2cm} \Omega_{T}, \\
&\partial_n I =0, \hspace{6.1cm} \text{in}
\hspace{0.2cm} \partial \Omega_{T},\\
&I(x,0)=I_0(x), \hspace{0.1cm} I_t(x,0)=0, \hspace{3.0cm} \text{in} \hspace{0.2cm}\Omega,
\end{align*}
where $\theta$ is the fuzzy edge detector function \cite{chaira2008new}, $\gamma$ is a positive parameter and $\lambda$ is the weight parameter.
Further demonstrating the importance of hyperbolic PDE based models for image despeckling, the present work suggests a gray level indicator based telegraph diffusion model for multiplicative speckle noise removal. In this model, we choose a diffusivity function different from that of our previous model \cite{fuzzy2019ttvmodel}.
Also, instead of the total variation framework \cite{fuzzy2019ttvmodel}, we design the present model in an anisotropic diffusion-based fashion, as discussed in \cite{zhou2015doubly}. Furthermore, we study the well-posedness of the suggested model in an appropriate function space. We adopt an explicit numerical method to solve the present model. Our numerical implementation allows us to compute despeckled results on some standard test images. The quality of the despeckled images produced by the suggested model is compared with that of the recently developed model \cite{shan2019multiplicative}. We compare the quantitative and qualitative results at different noise levels. The experimental results confirm that the proposed model performs better compared to the model considered for the comparison.
The rest of the paper is organized as follows. Section \ref{sec:Proposed Model} describes the proposed telegraph diffusion method for image despeckling. In Section \ref{sec:analysis}, we study the well-posedness of the weak solution of the proposed model. Section \ref{sec:numerical} describes the numerical discretization of the present model. The simulated despeckling results obtained by the proposed approach are compared with the other discussed diffusion methods in Section \ref{sec:Results}. We conclude the paper in Section \ref{sec:Conclusion} with a scope for future work.
\section{Telegraph Diffusion Model for Speckle Noise Removal}
\label{sec:Proposed Model}
Inspired by the ideas of \cite{fuzzy2019ttvmodel} and \cite{zhou2015doubly}, we initially develop the model
\begin{align}
&I_{tt} +\gamma I_{t}- \text{div} \left( g\left(I,\vert \nabla I \vert \right) \nabla I\right)=-\lambda h(I_0,I)\,, \hspace{0.2cm} \text{in}\,\,\, \Omega_T\,, \label{maina1} \\
\label{mainb1}
&\partial_n I=0\,, \hspace{5.7cm} \text{on}\,\,\, \partial\Omega_T\,,\\
&I(x,0)=I_0(x)\,, \hspace{0.2cm} I_t(x,0)=0\,, \hspace{2.5cm} \text{in}\,\,\, \Omega\,.\label{mainc1}
\end{align}
The function $g$ is defined as
\begin{align}\label{g_ddd}
g\left(I,\vert \nabla I \vert \right)=\dfrac{2\vert I \vert^\nu}{\big( M^{I}\big)^\nu+\vert I \vert^\nu}. \dfrac{1}{1+ \left(\frac{|\nabla I|}{K} \right)^2 },
\end{align}
where, $\nu \geq 1,$ $\gamma, K>0$ are constants, $M^{I}= \underset{x \in \Omega}{\text{max}}\vert I(x,t) \vert$, and
$h(I_0, I)$ is the source term which arises from the fidelity control term in the energy functional, as discussed in \cite{fuzzy2019ttvmodel}. Although the presence of the fidelity term in the equation keeps the restored image close to the original image, the noise may not be removed sufficiently. Therefore we choose $h(I_0, I)=0.$ Also, because of the degeneracy in the diffusion coefficient \eqref{g_ddd}, the suggested model \eqref{maina1}-\eqref{mainc1} may not be a well-posed problem \cite{shan2019multiplicative}. To overcome these issues, we invoke the ideas of \cite{cao2010class} and \cite{shan2019multiplicative}, and finally design the following model in the anisotropic diffusion-based framework:
\begin{align}
&I_{tt} +\gamma I_{t}- \text{div} \left( g\left(I_\xi,\vert \nabla I_\xi \vert \right) \nabla I\right)=0\,, \hspace{1.1cm} \text{in}\,\,\, \Omega_T\,, \label{maina} \\
\label{mainb}
&\partial_n I=0\,, \hspace{5.4cm} \text{on}\,\,\, \partial\Omega_T\,,\\
&I(x,0)=I_0(x)\,, \hspace{0.2cm} I_t(x,0)=0\,, \hspace{2.2cm} \text{in}\,\,\, \Omega\,,\label{mainc}
\end{align}
where the diffusion function $g$ as given by
\begin{align*}
g\left(I_\xi,\vert \nabla I_\xi \vert \right)=\dfrac{ 2\vert I_\xi \vert^\nu}{\big(M^{I}_{\xi}\big)^\nu+\vert I_\xi \vert^\nu}\cdot \dfrac{1}{1+ \left(\frac{|\nabla I_{\xi}|}{K} \right)^2 }\,.
\end{align*}
In the above, $I_\xi=G_\xi\ast I$, $M^{I}_{\xi}= \underset{ x \in \Omega}{\text{max}}\vert I_\xi(x,t) \vert.$ Moreover the gray level indicator function
\begin{align*}
b(I)=\dfrac{2\vert I_\xi \vert^\nu}{\big(M^{I}_{\xi}\big)^\nu+\vert I_\xi \vert^\nu}
\end{align*}
can be transformed into $ b(s)=\dfrac{2s^\nu}{1+s^\nu} $, where $s=\dfrac{|I_{\xi}|}{M_\xi^I} \in [0, 1].$
The use of the Gaussian convolution in the proposed model has many advantages, not only for robustness from the denoising viewpoint but also for well-posedness from the theoretical perspective. There are two key advantages of the proposed approach:
\begin{itemize}
\item[i)] it provides sharper and truer edges during the noise removal process than other non-telegraph based algorithms, as the model \eqref{maina}-\eqref{mainc} builds on the telegraph diffusion model \cite{ratner2007image};
\item[ii)] it controls the diffusion process very well, together with the gradient based edge detector coefficient, especially for the speckle noise removal process \cite{dong2013convex}, as the gray level indicator function in the proposed model is incorporated into the telegraph diffusion framework.
\end{itemize}
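For intuition, the proposed model \eqref{maina}-\eqref{mainc} can be advanced in time with a simple explicit scheme: central differences in time, $I_\xi$ computed by Gaussian filtering, and the divergence assembled from finite differences. The sketch below is only illustrative, with step sizes and parameter values of our own choosing; the actual discretization used in our experiments is described in Section \ref{sec:numerical}:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def telegraph_despeckle(I0, steps=200, dt=0.1, gamma=1.0, nu=2.0, K=5.0, xi=1.0):
    """Explicit central-difference stepping for
    I_tt + gamma*I_t = div( g(I_xi, |grad I_xi|) grad I ),
    with I(x,0) = I0 and I_t(x,0) = 0. np.gradient's one-sided boundary
    differences roughly mimic the Neumann condition. A rough sketch only;
    the paper's own discretization may differ in details."""
    I_prev = I0.astype(float).copy()
    I = I0.astype(float).copy()
    for _ in range(steps):
        Ix = gaussian_filter(I, xi)                  # I_xi = G_xi * I
        gy, gx = np.gradient(Ix)
        M = np.abs(Ix).max()
        g = (2.0 * np.abs(Ix) ** nu / (M ** nu + np.abs(Ix) ** nu)) \
            / (1.0 + (gx ** 2 + gy ** 2) / K ** 2)
        Jy, Jx = np.gradient(I)
        div = np.gradient(g * Jy, axis=0) + np.gradient(g * Jx, axis=1)
        # (I_new - 2I + I_prev)/dt^2 + gamma*(I_new - I_prev)/(2 dt) = div
        I_new = (2.0 * I - I_prev + dt ** 2 * div + gamma * dt / 2.0 * I_prev) \
                / (1.0 + gamma * dt / 2.0)
        I_prev, I = I, np.maximum(I_new, 0.0)
    return I

rng = np.random.default_rng(0)
clean = np.full((64, 64), 100.0)
noisy = clean * rng.gamma(4.0, 0.25, size=clean.shape)
out = telegraph_despeckle(noisy, steps=50)
print(noisy.std(), out.std())   # the despeckled image is smoother
```

Note the small time step: the damped wave structure imposes a CFL-type restriction, so $dt$ must be kept below the grid spacing divided by $\sqrt{\sup g}$.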
\section{Wellposedness of weak solution}
\label{sec:analysis}
In this section, we prove the existence and uniqueness of weak solution of the proposed model \eqref{maina}-\eqref{mainc}. Since
the problem \eqref{maina}-\eqref{mainc} is nonlinear, we first consider the linearized problem, and then use Schauder's
fixed-point theorem \cite{LCEvans1998} to show the existence of a weak solution. Without loss of generality, we assume $\gamma=1$ in \eqref{maina}.
\subsection{Technical framework $\&$ statement of the main result}
Throughout this section, $C$ denotes a generic positive constant. For $1\le p\le \infty$, we denote by $(L^p, \|\cdot\|_{L^p})$ the standard spaces of $p$-th order integrable
functions on $\Omega$. For $r\in \mathbb{N}$,
we write $(H^r, \|\cdot\|_{H^r})$ for usual Sobolev spaces on $\Omega$, and $(H^{1})^\prime$ for the dual space of $H^1$.
We introduce the solution space $W(0,T)$ for the
problem \eqref{maina}-\eqref{mainc}, where
\begin{align*}
W(0,T)&=\Big\{w\in L^\infty(0,T; H^1)\,, w_t \in L^\infty(0,T; L^2); \,w_{tt} \in L^2(0,T; (H^1)') \Big\}\,.
\end{align*}
Note that the space $W(0,T)$ is a Hilbert space for the graph norm, see \cite{jllions1968}.
\begin{defi}[Weak solution]\label{defi:weak}
A function $I$ is called a weak solution of \eqref{maina}-\eqref{mainc} if
\begin{itemize}
\item[a)] $I \in W(0,T) $ and \eqref{mainc} holds.
\item[b)] For all $\phi \in H^1$ and a.e.\ $t\in (0,T)$, there holds
\begin{align*}
\left \langle I_{tt}, \phi \right \rangle + {\displaystyle \int_{\Omega}}
\Big( I_t \phi + g\left(I_\xi,\vert \nabla I_\xi \vert \right) \nabla I\cdot \nabla \phi \Big)\,dx =0.
\end{align*}
\end{itemize}
\end{defi}
As we mentioned, our aim is to establish wellposedness of weak solutions of the underlying problem \eqref{maina}-\eqref{mainc}, and we will do so under the following assumption:
\begin{Assumptions}
\item \label{A1} The initial data $I_0$ is an $H^2$-valued function such that
\begin{align*}
0< \alpha:=\inf_{x\in \Omega} I_0(x)\,.
\end{align*}
\end{Assumptions}
\begin{thm}\label{thm:existence-uniqueness}
Let the assumption \ref{A1} be true. Then the problem \eqref{maina}-\eqref{mainc} admits a unique weak solution in the sense of Definition \ref{defi:weak}.
\end{thm}
\subsection{Linearized problem $\&$ existence of weak solution:} For any positive constant $M_1>0$, define
\begin{align*}
W_{M_1}= \big\{ & \bar{I}\in W(0,T):~ \|\bar{I}\|_{L^\infty(0,T;H^1)} + \|\bar{I}_t\|_{L^\infty(0,T; L^2)} \le M_1\|I_0\|_{H^1},\,\\
& \hspace{0.5cm} 0<\alpha \le \bar{I}(x,t)~~{\rm for~a.e.}~~(x,t)\in \Omega_T\,
\big\}.
\end{align*}
For any $\bar{I}\in W_{M_1}$, consider the linearized problem:
\begin{align}
I_{tt} + I_{t}- {\rm div}\big( \bar{g}(x,t) \nabla I\big)=0 \hspace{1cm} {\rm in}~~~ \Omega_T\,, \label{linmaina}
\end{align}
with the initial condition \eqref{mainc},
where the function $\bar{g}$ is given by
\begin{align*}
\bar{g}(x,t) \equiv g_{\bar{I}}(x,t):= \frac{|\bar{I}_\xi|^\nu}{ \big(M_\xi^{\bar{I}}\big)^\nu + |\bar{I}_\xi|^\nu}\cdot \dfrac{1}{1+ \left(\frac{|\nabla \bar{I}_{\xi}|}{K} \right)^2 }\,.
\end{align*}
\begin{claim}\label{claim:1}
There exist positive constants $\kappa, C >0$, depending only on $G_\xi, I_0, M_1, K, \alpha$ and $\nu$, such that
\begin{equation}\label{bound:g_w}
\begin{aligned}
&{\rm i)}~ 0< \kappa \le \bar{g}\le 1\,, \\
& {\rm ii)}~ |\bar{g}_t| \le C\,.
\end{aligned}
\end{equation}
\end{claim}
\textbf{Proof:}
\noindent{Proof of ${\rm i)}$:} Since $\bar{I} \in W_{M_1}$, by convolution property, we have
\begin{align*}
\alpha \|G_\xi\|_{L^1} \le | G_\xi \ast \alpha| \le |\bar{I}_\xi| \le M_1 C_\xi \|I_0\|_{H^1}\,; \quad
\big(\alpha\|G_\xi\|_{L^1}\big)^\nu \le \big(M_\xi^{\bar{I}}\big)^\nu \le \big( M_1 C_\xi \|I_0\|_{H^1}\big)^\nu\,,
\end{align*}
and hence
\begin{align}
\frac{ \big(\alpha \|G_\xi\|_{L^1}\big)^\nu}{2\,\big( M_1 C_\xi \|I_0\|_{H^1}\big)^\nu} \le \frac{|\bar{I}_\xi|^\nu}{ \big(M_\xi^{\bar{I}}\big)^\nu + |\bar{I}_\xi|^\nu} \le 1\,. \label{esti:1-bar-g}
\end{align}
Again by Young's convolution inequality, we observe that
\begin{align}
\dfrac{1}{1+\left( \frac{C_{\xi} M_1\left\Vert I_0 \right\Vert_{H^1}}{K} \right)^2 } \leq \frac{1}{1+ \left(\frac{|\nabla \bar{I}_{\xi}|}{K} \right)^2 }\leq 1. \label{esti:2-bar-g}
\end{align}
Now ${\rm i)}$ follows from \eqref{esti:1-bar-g}-\eqref{esti:2-bar-g} for $\kappa=\frac{ \big(\alpha \|G_\xi\|_{L^1}\big)^\nu}{2\,\big( M_1 C_\xi \|I_0\|_{H^1}\big)^\nu}
\cdot \dfrac{1}{1+\left( \frac{C_{\xi} M_1\left\Vert I_0 \right\Vert_{H^1}}{K} \right)^2 } $.
\vspace{.1cm}
\noindent{Proof of ${\rm ii)}:$} Observe that, since $0< \alpha \|G_\xi\|_{L^1} < M_\xi^{\bar{I}}$, we have
\begin{align*}
|\bar{g}_t| & \le C(\nu,\alpha,\xi, M_1,\Vert I_0 \Vert_{H^1}) + C(\xi, K, M_1)\Vert I_0 \Vert^2_{H^1} \,.
\end{align*}
Thus ${\rm ii)}$ holds. This finishes the proof of the claim.
Thanks to Claim \ref{claim:1}, one can apply classical Galerkin method \cite{LCEvans1998} to show that there exists a
unique weak solution $ I \in W(0,T)$ of the linearized problem \eqref{linmaina} with the initial condition \eqref{mainc}.
\begin{lem}\label{lem:a-priori}
The unique solution $ I \in W(0,T)$ of the linearized problem \eqref{linmaina} with the initial condition \eqref{mainc} satisfies the following: there exists a constant $C>0$, depending only on
$G_\xi, I_0, M_1, \nu, \alpha, K$ such that
\begin{itemize}
\item[a)] $ \|I\|_{L^\infty(0,T; H^1)} + \|I_t\|_{L^\infty(0,T; L^2)} \le C \|I_0\|_{H^1}$, \\
\item[b)] $\int_0^T \|I_{tt}\|_{(H^1)^\prime}^2\,dt \le C T \|I_0\|_{H^1}^2$. \\
\end{itemize}
\end{lem}
\textbf{Proof:}
\noindent{Proof of ${\rm a)}$:}
Note that $I_t \in L^\infty(0,T; H^1)$. Taking $\phi=I_t$ in \eqref{linmaina}, integrating by parts and using the
inequality $\int_{\Omega} \bar{g}\nabla I \cdot \nabla I_t\, dx
\geq \frac{1}{2}\dfrac{d}{dt}\int_{\Omega} \bar{g}|\nabla I|^2 \,dx- \frac{C}{2}\|\nabla I\|_{L^2}^2$,
which follows from
integration by parts formula and \eqref{bound:g_w}, and the fact
\begin{align}
\|\nabla I\|_{L^2}^2 \le \frac{1}{\kappa} \int_{\Omega} \bar{g} |\nabla I|^2\, dx\,, \label{esti:gradient-inters-gw}
\end{align}
we obtain
\begin{align*}
\frac{d}{dt} \Big[\| I_t\|_{L^2}^2 + \int_{\Omega} \bar{g} |\nabla I |^2\, dx\Big]
\le C\,\Big(\|I_t\|_{L^2}^2 + \int_{\Omega} \bar{g} |\nabla I |^2\, dx\Big)\,.
\end{align*}
An application of Gronwall's lemma along with \eqref{esti:gradient-inters-gw} gives: for a.e. $t\in (0,T]$
\begin{align}\label{bound_I_t_nabla_I}
\| I_t(t)\|_{L^2}^2 + \|\nabla I(t)\|_{L^2}^2 \leq C e^{C\,t}\, \|I_0\|_{H^1}^2 \,.
\end{align}
Since $I(x,t)=I_0(x)+ \displaystyle \int_{0}^{t} I_t(x,s)\,ds$, thanks to Young's inequality and \eqref{bound_I_t_nabla_I}, we have $\|I(t)\|_{L^2}^2 \le C_T \|I_0\|_{H^1}^2$ and hence
\begin{align*}
\| I\|_{L^{\infty}(0,T;H^{1})} + \| I_t\|_{L^{\infty}(0,T;L^2)} \leq C \| I_0\|_{H^1}\,.
\end{align*}
\noindent{Proof of ${\rm b)}$:} Choose $\phi \in H^1$ with $\|\phi\|_{H^1}\leq 1$ in \eqref{linmaina}, and use the Cauchy-Schwarz inequality along with part ${\rm a)}$ of Lemma \ref{lem:a-priori} to obtain $\big| \langle I_{tt}, \phi \rangle \big|
\leq C\,\|I_0\|_{H^1} \|\phi\|_{H^1}$ and hence
\begin{align*}
\| I_{tt}\|_{(H^1)^\prime} \leq C \|I_0\|_{H^1}\,.
\end{align*}
Therefore ${\rm b)}$ follows once we square both sides of the above inequality and then integrate over $(0,T)$.
\subsection{Proof of Theorem \ref{thm:existence-uniqueness}}
In this section, we prove wellposedness of weak solution of the underlying problem via Schauder's fixed-point theorem. To proceed further, we introduce the subspace $W_0$ of $W(0,T)$ defined by
\begin{align*}
W_0=\Big\{ & w \in W(0,T):\, \|w\|_{L^\infty(0,T; H^1)} + \|w_t\|_{L^\infty(0,T; L^2)} \leq C\|I_0\|_{H^1}\,;\\
& \hspace{2cm} ~~ 0<\alpha \le w(x,t)~{\rm for ~a.e.}~(x,t)\in \Omega_T\,,~~\text{and}~~w~{\rm satisfies}~\eqref{mainc}\Big\}\,.
\end{align*}
Moreover, one can prove that $W_0$ is a non-empty, convex and weakly compact subset of $W(0,T)$. Consider the mapping
\begin{align*}
\mathcal{P}:~ & W_0 \ensuremath{\rightarrow} W_0 \\
& w\mapsto I_w\,.
\end{align*}
In order to use Schauder's fixed-point theorem on $\mathcal{P}$, we only need to prove that the mapping $\mathcal{P}: w \mapsto I_w $ is weakly continuous from $W_0$ into $W_0$. Let
$w_k$ be a sequence that converges weakly to some $w$ in $W_0$ and let $I_k = I_{w_k}$. We have to show that $\mathcal{P}(w_k):= I_k$ converges weakly
to $\mathcal{P}(w): = I_w$.
Thanks to Lemma \ref{lem:a-priori}, one can use classical results on compact inclusions in Sobolev spaces \cite{raadams1975} to extract subsequences of $\{w_k\}$ and $\{I_k\}$, still denoted $\{w_k\}$ and $\{I_k\}$, such that
for some $I\in W_0$, the following hold as $k\ensuremath{\rightarrow} \infty:$
\begin{align*}
\begin{cases}
w_{k} \longrightarrow w \hspace{0.2cm} \text{in} \hspace{0.2cm} L^2(0,T;L^2) \hspace{0.2cm} \text{ and a.e. on } \hspace{0.2cm} \Omega_T,\\[0.5em]
G_{\xi}\ast w_k \longrightarrow G_{\xi}\ast w \hspace{0.2cm} \text{in} \hspace{0.2cm} L^2(0,T;L^2) \hspace{0.2cm} \text{and a.e. on} \hspace{0.2cm} \Omega_T,\\[1em]
| G_{\xi}\ast w_k |^\nu \longrightarrow | G_{\xi}\ast w |^\nu \hspace{0.2cm} \text{in} \hspace{0.2cm} L^2(0,T;L^2) \hspace{0.2cm} \text{and a.e. on} \hspace{0.2cm} \Omega_T,\\[1em]
\dfrac{| G_{\xi}\ast w_k |^\nu}{ \big(M_\xi^{w_k}\big)^\nu + |G_{\xi}\ast w_k |^\nu} \rightarrow \dfrac{|G_{\xi}\ast w |^\nu}{ \big(M_\xi^{w}\big)^\nu + |G_{\xi}\ast w |^\nu} \hspace{0.2cm} {\rm in}~~L^2(0,T;L^2) \hspace{0.2cm} {\rm and~a.e.~on}~~\Omega_T\,,\\[1em]
\partial_{x_i} G_{\xi}\ast w_k \rightarrow \partial_{x_i} G_{\xi}\ast w ~(i=1,2) \hspace{0.2cm} {\rm in}~~L^2(0,T;L^2) \hspace{0.2cm} {\rm and~a.e.~on}~~\Omega_T\,,\\[1em]
\dfrac{1}{1 + \left(\frac{|\nabla G_{\xi}\ast w_k|}{K}\right)^2} \longrightarrow \dfrac{1}{1 + \left(\frac{|\nabla G_{\xi}\ast w|}{K}\right)^2} \hspace{0.2cm} \text{in} \hspace{0.2cm} L^2(0,T;L^2) \hspace{0.2cm} \text{and a.e. on} \hspace{0.2cm} \Omega_T,\\[1.5em]
\displaystyle I_{k} \rightarrow I\, \hspace{0.2cm} \text{weakly} *~ \text{in}~~L^{\infty}(0,T;H^1)\,,\\[1em]
\displaystyle I_{k} \rightarrow I\, \hspace{0.2cm} \text{in}~~L^{2}(0,T; L^2)\,,\\[1em]
\partial_t I_k \rightarrow \partial_t I\, \hspace{0.2cm} \text{weakly} * ~\text{in}~~L^{\infty}(0,T;L^2)\,,\\[1em]
\partial_{tt} I_k \rightarrow \partial_{tt}I \hspace{0.2cm} \text{weakly} *~ \text{in}~~L^{2}(0,T;(H^1)^\prime)\,.
\end{cases}
\end{align*}
The above convergences allow us to pass to the limit in the problem \eqref{linmaina} and obtain $I=\mathcal{P}(w)$. Moreover, since the solution of \eqref{linmaina} is unique, the whole
sequence $I_k=\mathcal{P}(w_k)$ converges weakly in $W_0$ to $I=\mathcal{P}(w)$. Hence $\mathcal{P}$ is weakly continuous. Consequently, thanks to the Schauder fixed
point theorem, there exists $w \in W_0$ such that $w=\mathcal{P}(w)=I_w$. Thus, the function $I_w$ solves the problem \eqref{maina}-\eqref{mainc}.
\vspace{.1cm}
\noindent{\bf Uniqueness of weak solution:}
Following the idea as in \cite{LCEvans1998}, we prove the uniqueness of weak solutions of the underlying problem \eqref{maina}-\eqref{mainc}. Let $I_{1}$ and $I_{2}$ be two weak solutions of \eqref{maina}-\eqref{mainc}.
Then, we have
\begin{align}
&I_{tt}+ I_t-\text{div} \big(g_{I_1} \nabla I\big) = {\rm div}\big( \big(g_{I_1}-g_{I_2}\big) \nabla I_2 \big)\hspace{1.0cm}\text{in}~~\Omega_T\,, \label{eq:maina}\\
& \begin{cases} \label{eq:mainc}
I(x,0)= 0\,,~ I_t(x,0)=0\, \hspace{3.8cm} {\rm in}~~~\Omega\,, \\
\partial_n I =0 \hspace{6.3cm} {\rm on}~~~\partial \Omega_T\,,
\end{cases}
\end{align}
where $I=I_1-I_2$.
It suffices to show that $I \equiv 0$. To verify this, fix $0 < s < T$ and set, for $i=1,2$,
\begin{align}\label{relationvi}
v_{i}(\cdot,t)= \begin{cases}
\displaystyle \int_{t}^{s} I_{i}(\cdot, \tau)d\tau, \hspace{0.5cm} 0<t\leq s\,, \\
0 \hspace{2.5cm} s \leq t < T\,.
\end{cases}
\end{align}
Note that, for $t\in (0,T)$,
\begin{align}\label{eq:fact-1}
\begin{cases}
\partial_t v_i(x,t)=-I_i(x,t) \quad i=1,2\,, \\
v_{i}(\cdot,t) \in H^1\,,~~~\partial_n v_{i}=0~~~\text{on}\,\, \partial \Omega\,\,\text{in the sense of distributions}.
\end{cases}
\end{align}
Set $v=v_1-v_2$. Then $v(\cdot,s)=0$.
Multiplying \eqref{eq:maina} by $v$, integrating over $\Omega \times (0,s)$, and using the integration by parts formula, \eqref{eq:fact-1}, the Cauchy-Schwarz inequality and the identity
\begin{align*}
g_{I_1}\partial_t \nabla v\cdot \nabla v = \frac{1}{2} \partial_t\big(g_{I_1} |\nabla v|^2\big)-\frac{1}{2} \partial_t g_{I_1}|\nabla v|^2\,,~~{\rm and}~~
\nabla v(x,s)=0\,,
\end{align*}
we obtain
\begin{align}\label{unique9}
&\frac{1}{2}\|I(s)\|_{L^2}^2+\int_{0}^{s}\|I(t)\|_{L^2}^2\,dt + \frac{1}{2}\int_{\Omega} g_{I_1}(x,0) |\nabla v(x,0)|^2\, dx \nonumber \\
&\leq \frac{1}{2} \Big|\int_{0}^{s}\int_{\Omega} |\nabla v|^2 \partial_t g_{I_1}\, dx\,dt\Big| + \int_0^s \|(g_1-g_2)(t)\|_{L^\infty} \|\nabla I_2(t)\|_{L^2}\|\nabla v(t)\|_{L^2}\,dt\,.
\end{align}
As seen in the proof of Claim \ref{claim:1}, there exist positive constants $\kappa_1, C_1>0$ such that
\begin{align*}
\kappa_1 \leq g_{I_1}\leq 1\,,\quad |\partial_t g_{I_1}|\le C_1\,.
\end{align*}
Moreover, one can use properties of the convolution along with the fact that the solutions $I_i$ admit a positive lower bound to show that
\begin{align*}
\|(g_{I_1}-g_{I_2})(t)\|_{L^{\infty}} \leq C(\xi, \nu, \alpha, K, I_0)\|I(t)\|_{L^{2}}^\nu\,.
\end{align*}
Thus, using the above estimates in \eqref{unique9}, we have for $\nu\ge 1$
\begin{align*}
\frac{1}{2}\|I(s)\|_{L^2}^2+\int_{0}^{s}\| I(t)\|_{L^2}^2\,dt + C \|\nabla v(0)\|_{L^2}^2
&\le C\, \int_0^s \big( \|\nabla v(t)\|_{L^2}^2 + \|I(t)\|_{L^2}^2\big)\,dt\,.
\end{align*}
Since $ \|v(0)\|_{L^2}^2 \le T \int_0^s \| I(t)\|_{L^2}^2\,dt $, we have
\begin{align}\label{unique15_1}
\frac{1}{2}\|I(s)\|_{L^2}^2+\int_{0}^{s}\| I(t)\|_{L^2}^2\,dt + C \|v(0)\|_{H^1}^2
\le C\,\int_0^s \big( \|v(t)\|_{H^1}^2 + \|I(t)\|_{L^2}^{2} \big)\,dt\,.
\end{align}
Set
\begin{align*}
w_{i}(\cdot,t)=& \int_{0}^{t} I_{i}(\cdot,\tau)d\tau\, ; \quad w(\cdot,t)=(w_1-w_2)(\cdot,t)\,, \hspace{0.5cm} 0<t\leq T.
\end{align*}
Observe that
\begin{align*}
v(x,0)= w(x,s) \quad {\rm and}~~
v(x,t)= w(x,s)-w(x,t)~~{\rm for}~~ 0<t\le s\,.
\end{align*}
Hence \eqref{unique15_1} reduces to
\begin{align}\label{unique16}
&\frac{1}{2}\|I(s)\|_{L^2}^2+\int_{0}^{s}\| I(t)\|_{L^2}^2\,dt + C \|w(s)\|_{H^1}^2 \notag \\
& \le \tilde{C} s\,\|w(s)\|_{H^1}^2 + C\,\int_0^s \Big( \|w(t)\|_{H^1}^2 + \|I(t)\|_{L^2}^{2} \Big)\,dt\,.
\end{align}
Choose $T_1$ sufficiently small such that $C-\tilde{C} T_1 >0$.
Then, for $0<s\leq T_1,$ we have, from \eqref{unique16}
\begin{align}
\| I(s)\|_{L^2}^2 + \|w(s)\|_{H^1}^2 \le C \int_0^s\Big( \|w(t)\|_{H^1}^2 + \|I(t)\|_{L^2}^{2}\Big)\,dt\,. \label{unique17}
\end{align}
Consequently, an application of Gronwall's lemma implies $ I \equiv 0$ on $[0,T_1]$.
Finally, we apply the same argument on the intervals $(T_1, 2T_1]$, $(2T_1,3T_1],\ldots$, step by step, and eventually deduce that
$I_{1} = I_{2}$ on $(0,T)$. This finishes the proof of Theorem \ref{thm:existence-uniqueness}.
\begin{lem}
Let $I$ be a weak solution of the problem \eqref{maina}-\eqref{mainc}, and $\beta_1:= \underset{x\in \Omega}\sup I_0(x) < \infty$. Then
\begin{align}
0<\alpha \le I(x,t)\le \beta_1 \quad {\rm for ~a.e.}~(x,t)\in \Omega_T\,. \label{boundedness-weak-solution}
\end{align}
\end{lem}
\textbf{Proof:}
Integrating the equation \eqref{maina} with respect to the time variable and using \eqref{mainc}, we get
\begin{align}
I_t + (I-I_0) -\int_0^t {\rm div}\big( g_{I}(x,s) \nabla I\big)\,ds=0 \quad \forall~(x,t)\in \Omega_T\,. \label{maind}
\end{align}
Note that $(I-\beta_1)_{+} \in H^1$, where $(\cdot)_{+}$ is the truncation operator defined by $(\theta)_{+}=\max\{ 0,\theta\}$. Multiplying the PDE \eqref{maind} by $(I-\beta_1)_{+}$
and integrating over $\Omega$, we obtain
\begin{align*}
\frac{1}{2} \frac{d}{dt} \int_{\Omega} |(I-\beta_1)_{+}|^2\,dx + \int_{\Omega} (I-I_0)(I-\beta_1)_{+}\,dx + \int_0^t \int_{\{I\ge \beta_1\}} g_{I}(x,s)|\nabla I|^2\,dx\,ds=0\,.
\end{align*}
Observe that $g_{I} \ge 0$ and $(I-I_0)(I-\beta_1)_{+} \ge 0$. Thus, we have $\frac{d}{dt} \int_{\Omega} |(I-\beta_1)_{+}|^2\,dx\le 0$. Again, since $I_0 \le \beta_1$, we obtain
$\int_{\Omega} |(I-\beta_1)_{+}|^2\,dx \le 0$ for a.e. $t\in [0,T]$. Therefore, $I(x,t)\le \beta_1$ for a.e. $(x,t)\in \Omega_T$.
\vspace{.1cm}
Similarly, multiplying the equation \eqref{maind} by $(I-\alpha)_{-}\in H^1$, where $(\cdot)_{-}$ is the truncation operator defined by $(\theta)_{-}=\min\{ 0,\theta\}$, and integrating over $\Omega$, we get
\begin{align*}
\frac{1}{2} \frac{d}{dt} \int_{\Omega} |(I-\alpha)_{-}|^2\,dx + \int_{\Omega} (I-I_0)(I-\alpha)_{-}\,dx + \int_0^t \int_{\{I\le \alpha\}} g_{I}(x,s)|\nabla I|^2\,dx\,ds=0\,.
\end{align*}
Since $\alpha \le I_0$, we have $(I-I_0)(I-\alpha)_{-} \ge 0$, and the same argument as above yields $0<\alpha \le I(x,t)$ for a.e. $(x,t)\in \Omega_T$. Hence \eqref{boundedness-weak-solution} holds true. This completes the proof.
\section{Numerical Implementation}
\label{sec:numerical}
To solve the present model numerically, we choose an explicit finite difference scheme, which is the most straightforward option for solving a hyperbolic PDE.\\
(a). Discretize the time domain using a step $\tau$ and the space domain using a step $h$. Denote $I^n_{i,j}=I(x_i,y_j,t_n)$, where $x_i=ih, \hspace{0.2cm} i=0,1,2,\ldots,N;$
$y_j=jh, \hspace{0.2cm} j=0,1,2,\ldots,M;$ $t_n=n\tau,\hspace{0.2cm} n=0,1,2,\ldots,$ where $n$ is the iteration index and $M \times N$ is the size of the image.\\
(b). Boundary conditions are given as:
$I_{-1,j}^n=I_{0,j}^n,I_{N+1,j}^n=I_{N,j}^n, \hspace{0.2cm} I_{i,-1}^n=I_{i,0}^n,I_{i,M+1}^n=I_{i,M}^n.$ \\
(c). The approximation of derivative terms are given as follows:
\begin{center}
$
\begin{array}{lll}
\displaystyle\frac{\partial I_{i,j}^n}{\partial t} & \approx & \displaystyle\frac{I_{i,j}^{n+1}-I_{i,j}^n}{\tau},
\displaystyle\frac{\partial^2 I_{i,j}^n}{\partial t^2} \approx \displaystyle\frac{I_{i,j}^{n+1}-2I_{i,j}^n+I_{i,j}^{n-1}}{\tau ^2},\\[0.8cm]
\nabla_x I_{i,j}^n &\approx& \displaystyle\frac{I_{i+1,j}^n-I_{i-1,j}^n}{2h},
\nabla_y I_{i,j}^n \approx \displaystyle\frac{I_{i,j+1}^n-I_{i,j-1}^n}{2h},\\[0.8cm]
|\nabla I_{i,j}^n| & \approx & \sqrt{(\nabla_x I_{i,j}^n)^2 + (\nabla_y I_{i,j}^n)^2}.
\end{array}
$
\end{center}
(d). The discrete form of the proposed model \eqref{maina} can be written as follows:
\begin{align*}
&(1+\gamma \tau)I_{i,j}^{n+1}=(2+\gamma \tau)I_{i,j}^n -I_{i,j}^{n-1} + {\tau ^2} \big\{ \nabla_x \left( g_{i,j}^n \nabla_x I_{i,j}^n \right)
+ \nabla_y \left( g_{i,j}^n \nabla_y I_{i,j}^n \right) \big\} ,
\end{align*}
where
\begin{align*}
g_{i,j}^n=b(s^{n}_{i,j})\cdot \dfrac{1}{1+ \left(\frac{|\nabla G_{\xi} \ast I^n_{i,j} |}{K} \right)^2 },
\end{align*}
with the conditions,
\begin{align*}
\begin{split}
I_{i,j}^0 &=I_0(ih,jh), \hspace{1.5cm} 0 \leq i \leq N, 0 \leq j \leq M,
\\
I_{i,j}^1 &=I_{i,j}^0, \hspace{2.5cm} 0 \leq i \leq N, 0 \leq j \leq M.
\end{split}
\end{align*}
Apart from the discretization of \eqref{maina}-\eqref{mainc}, we need to specify a stopping criterion for the numerical simulation process. We start the simulation with the initial value $I_0$ and evolve the system \eqref{maina} repeatedly, producing a family of smoother images $\{I(x,t)\},\,t>0$, which represent filtered versions of $I_0$. The noise elimination process is stopped once the restored image attains its best PSNR value.
\section{Experiment Results and Discussion}
\label{sec:Results}
This section presents the performance of the present model in terms of visual quality and quantitative results. We compare the despeckling results of the proposed model on three standard test images corrupted by multiplicative speckle noise with different numbers of looks~($L$). We have artificially added multiplicative speckle
noise with levels $ L = \{ 1,3,5,10,33 \} $ using our MATLAB program. All the numerical tests are performed under Windows 7 and MATLAB version R2018b running on a desktop with an Intel Core i5 dual-core CPU at $2.53$ GHz with $4$ GB of memory. Image denoising using the present model has been compared with the Shan model \cite{shan2019multiplicative}.
In this process, the existing model is discretized using the same explicit numerical scheme as the proposed model. We choose a uniform time step size $\tau = 0.2$ and $\xi=1$ for each model. Details of the other parameter values for the numerical computation are given in the right-hand side of Table \ref{tab:psnr_ssim_parameter}.
\subsection{Image quality measurement}
\label{sec:quality}
Since the proposed model is claimed to be an improvement over the existing diffusion models, our main aim is to compare the edge detection and denoising results in terms of both qualitative and quantitative
measures. For each experiment, we compute the values of two standard metrics, the peak signal-to-noise ratio (PSNR) \cite{gonzalez2002digital} and the structural similarity index (SSIM) \cite{wang2004image}, for the quantitative comparison with the other existing model. Higher values of PSNR and SSIM indicate that the reconstructed image is closer to the noise-free image. The considered metrics are defined as follows: \\
(a). PSNR measures the match between the clean and denoised data,
\begin{align*}
\text{PSNR} = 10\, \log_{10} \left(\frac{\max(I)^2}{\frac{1}{\text{MN}} \sum\limits_{i=1}^\text{M} \sum\limits_{j=1}^\text{N} (I(i,j)-I_t(i,j))^2 }\right).
\end{align*}
Here $I$ denotes the clean image of size $\text{M}\times \text{N}$ and max($I$) is the maximum possible pixel value of $I,$ and $I_t$ denotes the denoised image at a certain time $t.$ \newline
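The PSNR definition above translates directly into code (a sketch in plain Python; the function name and the list-of-lists image representation are our choices):

```python
import math

def psnr(clean, denoised):
    """PSNR between a clean image and a denoised image, both given as
    M x N lists of lists of pixel values; the peak is the maximum pixel
    value of the clean image, as in the definition above."""
    M, N = len(clean), len(clean[0])
    peak = max(max(row) for row in clean)
    mse = sum((clean[i][j] - denoised[i][j]) ** 2
              for i in range(M) for j in range(N)) / (M * N)
    return 10.0 * math.log10(peak * peak / mse)
```

For instance, two 8-bit images differing by one gray level at every pixel (peak 255, MSE 1) give $\text{PSNR} = 20\log_{10} 255 \approx 48.13$ dB.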
(b). SSIM is used to calculate the similarity between the structures of the clean and reconstructed images and is given by
\begin{align*}
\text{SSIM}(x,y)=\frac{(2\mu_x \mu_y +k_1)(2\sigma_{xy} +k_2)}{({\mu_x}^2 +{\mu_y}^2 +k_1)({\sigma_x}^2 + {\sigma_y}^2 +k_2)}.
\end{align*}
Here $\mu_x, \mu_y, {\sigma_x}^2, {\sigma_y}^2, \sigma_{xy} $ are the averages, variances and covariance of $x$ and $y$, respectively; $k_1$ and $k_2$ are variables that stabilize the division when the denominator is weak.\\
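The SSIM formula in (b) can likewise be evaluated directly. The sketch below computes a single global SSIM over two flattened pixel lists (an illustration only: in practice SSIM is usually computed over local sliding windows and then averaged, and the stabilizing constants are typically tied to the dynamic range; the default values of $k_1, k_2$ here are placeholders, not the paper's):

```python
def ssim(x, y, k1=1e-4, k2=9e-4):
    """Global SSIM between two equal-length pixel sequences."""
    n = len(x)
    mu_x = sum(x) / n
    mu_y = sum(y) / n
    var_x = sum((v - mu_x) ** 2 for v in x) / n
    var_y = sum((v - mu_y) ** 2 for v in y) / n
    cov = sum((a - mu_x) * (b - mu_y) for a, b in zip(x, y)) / n
    num = (2.0 * mu_x * mu_y + k1) * (2.0 * cov + k2)
    den = (mu_x ** 2 + mu_y ** 2 + k1) * (var_x + var_y + k2)
    return num / den
```

An image compared with itself has SSIM equal to 1, since numerator and denominator then coincide term by term.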
(c). Other typical qualitative measures have also been computed in terms of the ratio image, which can be defined as the point-by-point ratio between the degraded and the despeckled image \cite{argenti2013tutorial}. Apart from the ratio image, we also compute the 2D contour plot, 3D surface plot for the better visualization of the computational result for the proposed model as well as for the other discussed models.
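The point-by-point ratio image mentioned in (c) is straightforward to form (a sketch; the small `eps` guard against division by zero is our addition, not part of the cited definition):

```python
def ratio_image(noisy, despeckled, eps=1e-12):
    """Point-by-point ratio between the degraded image and the despeckled
    image. For a good restoration the ratio should contain pure speckle
    and as little residual structure (background information) as possible."""
    rows, cols = len(noisy), len(noisy[0])
    return [[noisy[i][j] / (despeckled[i][j] + eps) for j in range(cols)]
            for i in range(rows)]
```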
\iffalse
\begin{figure}[]
\centering
\begin{subfigure}[b]{0.21\textwidth}
\includegraphics[scale=0.18]{boat}
\caption{Boat}
\label{fig:boat_clear}
\end{subfigure}%
\begin{subfigure}[b]{0.21\textwidth}
\includegraphics[scale=0.36]{brick}
\caption{Brick}
\label{fig:brick_clear}
\end{subfigure}%
\begin{subfigure}[b]{0.21\textwidth}
\includegraphics[scale=0.309]{circle}
\caption{Circle}
\label{fig:circle_clear}
\end{subfigure}%
\caption{Test Images: (a) Natural Image, (b) Texture Image, (c) Synthetic Image.}\label{fig:all_clear_images}
\end{figure}
\fi
\subsection{Computational Results \& Discussion}
\label{sec:natural}
In figure \ref{boat_1}, we present the restored results for a Boat image (natural image) contaminated by multiplicative speckle noise with $L=1$. From the visual quality of the restored images, it is easy to perceive that the Shan model leaves some spikes in the restored images, whereas the results computed by the present model are clearer.
In figures \ref{brick_1}-\ref{brick_3}, we present the reconstructed results for a Brick image (texture image) corrupted by speckle noise with $L=\lbrace1,3 \rbrace$. From the figures, it is easy to see that the results computed by the present model are clearer as well as less blurry than those of the Shan model.
To further examine the reconstruction capability of the present model, figures \ref{circle_1}-\ref{fig:circle_10_3d} illustrate the qualitative results for a Circle image (synthetic image) corrupted by speckle noise with $L = \lbrace 1,3,5,10 \rbrace$. In figures \ref{circle_1}-\ref{circle_5} we show the despeckled images produced by the present model and the Shan model when the image is corrupted by noise levels $L=\lbrace1,3,5\rbrace$. These figures make the performance of the present model easy to visualize.
In figure \ref{fig:circle_10_ratio}, we present the restored images along with their ratio images for a better comparison of the qualitative results. From figures \ref{fig:circle_10_zzdb}-\ref{fig:circle_10_tdm} it can easily be concluded that the present model gives more promising despeckling results than the Shan model. Figure \ref{fig:circle_ratio} shows the ratio image for the clean circle image \ref{fig4:circle}; it contains no background information. From figures \ref{fig:circle_10_zzdb_ratio}-\ref{fig:circle_10_tdm_ratio} we can see that the ratio image corresponding to the present model has very little background information, which confirms that the present model preserves edges better than the Shan model.
To further visualize the noise removal ability, figures \ref{fig:circle_10_cont}-\ref{fig:circle_10_3d} illustrate the contour maps and 3D surface plots corresponding to the images \ref{fig4:circle}-\ref{fig:circle_10_tdm}. From the contour maps and 3D surface plots one can easily observe that the Shan model leaves some speckles in the homogeneous regions, whereas the present model produces fewer artifacts with better edge preservation.
Along with the qualitative comparison, the quantitative results in terms of PSNR and SSIM values are displayed in Table \ref{tab:psnr_ssim_parameter}. The highest values of PSNR and SSIM at each noise level clearly show that the suggested model outperforms the Shan model.
\iffalse
Apart from the results for artificially noisy images, in the figure \ref{fig:real_images} we display the restored results for real SAR images. Apply the present model on four different real images : $(i)$ \ref{fig:real2_noisy}: Space Radar Image of Kilauea, Hawaii \cite{dataset_kilauea}, $(ii)$ \ref{fig:real3_noisy}: SAR image of KOMPSAT/Arirang-5 of a portion of the Himalayan Arc \cite{dataset_eoportal}, $(iii)$ \ref{fig:real4_noisy}: High-resolution SAR image of Prague, Czech Republic \cite{dataset_eoportal}, $(iv)$ \ref{fig:real1_noisy}: One look radar image \cite{dataset_european}.
\fi
\begin{figure}
\centering
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[scale=0.2]{boat}
\caption{Original}
\label{fig1:boat}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[scale=0.2]{boat_speckle_look_1}
\caption{Noisy}
\label{fig:boat_1}
\end{subfigure}%
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[scale=0.2]{boat_speckle_look_1_shan}
\caption{Shan}
\label{fig:boat_1_zzdb}
\end{subfigure}%
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[scale=0.2]{boat_speckle_look_1_tdm_rev}
\caption{Proposed}
\label{fig:boat_1_tdm}
\end{subfigure}
\caption{Image corrupted with speckle look L=1 and restored by different models.}\label{boat_1}
\end{figure}
\iffalse
\begin{figure}
\centering
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[scale=0.2]{boat}
\caption{Original}
\label{fig2:boat}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[scale=0.2]{boat_speckle_look_3}
\caption{Noisy}
\label{fig:boat_3}
\end{subfigure}%
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[scale=0.2]{boat_speckle_look_3_shan}
\caption{Shan}
\label{fig:boat_3_zzdb}
\end{subfigure}%
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[scale=0.2]{boat_speckle_look_3_tdm_rev}
\caption{Proposed}
\label{fig:boat_3_tdm_rev}
\end{subfigure}
\caption{Image corrupted with speckle look L=3 and restored by different models.}\label{boat_3}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[scale=0.2]{boat}
\caption{Original}
\label{fig3:boat}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[scale=0.2]{boat_speckle_look_5}
\caption{Noisy}
\label{fig:boat_5}
\end{subfigure}%
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[scale=0.2]{boat_speckle_look_5_shan}
\caption{Shan}
\label{fig:boat_5_zzdb}
\end{subfigure}%
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[scale=0.2]{boat_speckle_look_5_tdm_rev}
\caption{Proposed}
\label{fig:boat_5_tdm}
\end{subfigure}
\caption{Image corrupted with speckle look L=5 and restored by different models.}\label{boat_5}
\end{figure}
\fi
\begin{figure}
\centering
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[scale=0.4]{brick}
\caption{Original}
\label{fig1:brick}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[scale=0.4]{brick_speckle_look_1}
\caption{Noisy}
\label{fig:brick_1}
\end{subfigure}%
%
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[scale=0.4]{brick_speckle_look_1_shan}
\caption{Shan}
\label{fig:brick_1_zzdb}
\end{subfigure}%
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[scale=0.4]{brick_speckle_look_1_tdm_rev}
\caption{Proposed}
\label{fig:brick_1_tdm}
\end{subfigure}
\caption{Image corrupted with speckle look L=1 and restored by different models.}\label{brick_1}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[scale=0.4]{brick}
\caption{Original}
\label{fig2:brick}
\end{subfigure}
%
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[scale=0.4]{brick_speckle_look_3}
\caption{Noisy}
\label{fig:brick_3}
\end{subfigure}%
%
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[scale=0.4]{brick_speckle_look_3_shan}
\caption{Shan}
\label{fig:brick_3_zzdb}
\end{subfigure}%
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[scale=0.4]{brick_speckle_look_3_tdm_rev}
\caption{Proposed}
\label{fig:brick_3_tdm}
\end{subfigure}
\caption{Image corrupted with speckle look L=3 and restored by different models.}\label{brick_3}
\end{figure}
\iffalse
\begin{figure}
\centering
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[scale=0.4]{brick}
\caption{Original}
\label{fig:brick}
\end{subfigure}
%
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[scale=0.4]{brick_speckle_look_5}
\caption{Noisy}
\label{fig:brick_5}
\end{subfigure}%
%
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[scale=0.4]{brick_speckle_look_5_shan}
\caption{Shan}
\label{fig:brick_5_zzdb}
\end{subfigure}%
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[scale=0.4]{brick_speckle_look_5_tdm_rev}
\caption{Proposed}
\label{fig:brick_5_tdm}
\end{subfigure}
\caption{Image corrupted with speckle look L=5 and restored by different models.}\label{brick_5}
\end{figure}
\fi
\begin{figure}
\centering
\begin{subfigure}[b]{0.25\textwidth}
\includegraphics[scale=0.37]{circle}
\caption{Original}
\label{fig1:circle}
\end{subfigure}%
\begin{subfigure}[b]{0.25\textwidth}
\includegraphics[scale=0.37]{circle_speckle_look_1}
\caption{Noisy}
\label{fig:circle_1}
\end{subfigure}%
%
\begin{subfigure}[b]{0.25\textwidth}
\includegraphics[scale=0.37]{circle_speckle_look_1_shan}
\caption{Shan}
\label{fig:circle_1_zzdb}
\end{subfigure}%
\begin{subfigure}[b]{0.25\textwidth}
\includegraphics[scale=0.37]{circle_speckle_look_1_tdm}
\caption{Proposed}
\label{fig:circle_1_tdm}
\end{subfigure}
\caption{Image corrupted with speckle look L=1 and restored by different models. }\label{circle_1}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[b]{0.25\textwidth}
\includegraphics[scale=0.37]{circle}
\caption{Original}
\label{fig2:circle}
\end{subfigure}%
%
\begin{subfigure}[b]{0.25\textwidth}
\includegraphics[scale=0.37]{circle_speckle_look_3}
\caption{Noisy}
\label{fig:circle_3}
\end{subfigure}%
%
\begin{subfigure}[b]{0.25\textwidth}
\includegraphics[scale=0.37]{circle_speckle_look_3_shan}
\caption{Shan}
\label{fig:circle_3_zzdb}
\end{subfigure}%
\begin{subfigure}[b]{0.25\textwidth}
\includegraphics[scale=0.37]{circle_speckle_look_3_tdm}
\caption{Proposed}
\label{fig:circle_3_tdm}
\end{subfigure}
\caption{Image corrupted with speckle look L=3 and restored by different models. }\label{circle_3}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[b]{0.25\textwidth}
\includegraphics[scale=0.37]{circle}
\caption{Original}
\label{fig3:circle}
\end{subfigure}%
\begin{subfigure}[b]{0.25\textwidth}
\includegraphics[scale=0.37]{circle_speckle_look_5}
\caption{Noisy}
\label{fig:circle_5}
\end{subfigure}%
%
\begin{subfigure}[b]{0.25\textwidth}
\includegraphics[scale=0.37]{circle_speckle_look_5_shan}
\caption{Shan}
\label{fig:circle_5_zzdb}
\end{subfigure}%
\begin{subfigure}[b]{0.25\textwidth}
\includegraphics[scale=0.37]{circle_speckle_look_5_tdm}
\caption{Proposed}
\label{fig:circle_5_tdm}
\end{subfigure}
\caption{Image corrupted with speckle look L=5 and restored by different models. }\label{circle_5}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[b]{0.25\textwidth}
\includegraphics[scale=0.37]{circle}
\caption{Original}
\label{fig4:circle}
\end{subfigure}%
\begin{subfigure}[b]{0.25\textwidth}
\includegraphics[scale=0.37]{circle_speckle_look_10}
\caption{Noisy}
\label{fig:circle_10}
\end{subfigure}%
%
\begin{subfigure}[b]{0.25\textwidth}
\includegraphics[scale=0.37]{circle_speckle_look_10_shan}
\caption{Shan}
\label{fig:circle_10_zzdb}
\end{subfigure}%
\begin{subfigure}[b]{0.25\textwidth}
\includegraphics[scale=0.37]{circle_speckle_look_10_tdm}
\caption{Proposed}
\label{fig:circle_10_tdm}
\end{subfigure}
\begin{subfigure}[b]{0.35\textwidth}
\includegraphics[scale=0.45]{circle_ratio}
\caption{Original}
\label{fig:circle_ratio}
\end{subfigure}%
%
\begin{subfigure}[b]{0.35\textwidth}
\includegraphics[scale=0.45]{circle_speckle_look_10_shan_ratio}
\caption{Shan}
\label{fig:circle_10_zzdb_ratio}
\end{subfigure}%
\begin{subfigure}[b]{0.35\textwidth}
\includegraphics[scale=0.45]{circle_speckle_look_10_tdm_ratio}
\caption{Proposed}
\label{fig:circle_10_tdm_ratio}
\end{subfigure}
\caption{Upper row: Image corrupted with speckle look L=10 and restored images. Lower row: Ratio images. }\label{fig:circle_10_ratio}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics[scale=0.45]{circle_cont}
\caption{Original}
\label{fig:3a_cont}
\end{subfigure}%
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics[scale=0.45]{circle_10_cont}
\caption{Noisy}
\label{fig:3b_cont}
\end{subfigure}%
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics[scale=0.45]{circle_10_shan_cont}
\caption{Shan}
\label{fig:3d_cont}
\end{subfigure}%
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics[scale=0.45]{circle_10_tdm_cont}
\caption{Proposed}
\label{fig:3e_cont}
\end{subfigure}%
\caption{Contour maps of the restored images in figure \ref{fig:circle_10_ratio}.}\label{fig:circle_10_cont}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics[scale=0.5]{circle_original_3d}
\caption{Original}
\label{fig:2a_3d}
\end{subfigure}%
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics[scale=0.5]{circle_10_3d}
\caption{Noisy}
\label{fig:2b_3d}
\end{subfigure}%
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics[scale=0.5]{circle_10_shan_3d}
\caption{Shan}
\label{fig:2d_3d}
\end{subfigure}%
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics[scale=0.5]{circle_10_tdm_3d}
\caption{Proposed}
\label{fig:2e_3d}
\end{subfigure}%
\caption{3D surface plots of the restored images in figure \ref{fig:circle_10_ratio}.}\label{fig:circle_10_3d}
\end{figure}
\begin{center}
\begin{table}
\caption{Left table: Comparison of SSIM and PSNR values of despeckled images. Right table: Parameter values for the numerical experiments.}
\label{tab:psnr_ssim_parameter}
\begin{tabular}{ll}
\scalebox{0.8}{
\begin{tabular}[t]{llccccccccc}
\toprule
\multirow{2}[8]{*}{Image} & \multirow{2}[8]{*}{$L$} & \multicolumn{2}{c}{Shan Model\cite{shan2019multiplicative}} & \multicolumn{2}{c}{Proposed Model} \\
\cmidrule(r){3-4}
\cmidrule(r){5-6}
& & \multicolumn{1}{c}{SSIM} & \multicolumn{1}{c}{PSNR} & \multicolumn{1}{c}{SSIM} & \multicolumn{1}{c}{PSNR} \\
\midrule
Boat & 1 & 0.5975 & 17.10 & \textbf{0.6096} & \textbf{17.12} \\
& 3 & 0.7087 & 22.71 & \textbf{0.7347} & \textbf{22.85} \\
& 5 & 0.7508 & 24.73 & \textbf{0.7905} & \textbf{24.93} \\
& 10 & 0.8325 & 26.98 & \textbf{0.8422} & \textbf{27.12} \\
& 33 & 0.8941 & 29.57 & \textbf{0.9057} & \textbf{29.72} \\
& & & & & & & \\
Brick & 1 & 0.2930 & 12.17 & \textbf{0.2954} & \textbf{12.19} \\
& 3 & 0.3837 & 17.08 & \textbf{0.3861} & \textbf{17.11} \\
& 5 & 0.4291 & 19.34 & \textbf{0.4355} & \textbf{19.40} \\
& 10 & 0.4947 & 22.06 & \textbf{0.4960} & \textbf{22.18} \\
& 33 & 0.5943 & 25.40 & \textbf{0.5961} & \textbf{25.53} \\
& & & & & & & \\
Circle & 1 & 0.9582 & 34.30 & \textbf{0.9644} & \textbf{34.70} \\
& 3 & 0.9735 & 38.10 & \textbf{0.9772} & \textbf{39.53} \\
& 5 & 0.9765 & 39.36 & \textbf{0.9806} & \textbf{40.73} \\
& 10 & 0.9817 & 41.26 & \textbf{0.9865} & \textbf{42.85} \\
& 33 & 0.9870 & 43.64 & \textbf{0.9889} & \textbf{44.62} \\
& & & & & \\
\midrule
\end{tabular}
}
&
\scalebox{0.87}{
\begin{tabular}[t]{llcccccc}
\toprule
\multirow{2}[4]{*}{Image} & \multirow{2}[4]{*}{ $L$ } & \multicolumn{2}{c}{Shan\cite{shan2019multiplicative}} & \multicolumn{3}{c}{Proposed} \\
\cmidrule(r){3-4}
\cmidrule(r){5-7}
& & \multicolumn{1}{c}{$\alpha$} & \multicolumn{1}{c}{$\beta$} &\multicolumn{1}{c}{$\gamma$} & \multicolumn{1}{c}{$\nu$} & \multicolumn{1}{c}{$K$} \\
\midrule
Boat & 1 & 1 & 1 & 5 & 1 & 2 \\
& 3 & 1.2 & 1 & 4 & 1.5 & 2 \\
& 5 & 1.3 & 1 & 2 & 1.5 & 1 \\
& 10 & 1.4 & 1.2 & 2 & 2 & 1 \\
& 33 & 1.5 & 1.5 & 2 & 3 & 1 \\
\midrule
Brick & 1 & 1 & 1 & 5 & 1 & 4 \\
& 3 & 1.2 & 1 & 4 & 1.3 & 3 \\
& 5 & 1.4 & 1 & 2 & 1.5 & 2 \\
& 10 & 1.6 & 1 & 2 & 2 & 1 \\
& 33 & 1.7 & 1 & 2 & 3 & 1 \\
\midrule
Circle & 1 & 1.5 & 2 & 10 & 1 & 1 \\
& 3 & 1.5 & 2 & 10 & 1 & 1 \\
& 5 & 2 & 2.25 & 5 & 1 & 1 \\
& 10 & 2 & 2.25 & 2 & 1 & 1 \\
& 33 & 2 & 2.5 & 2 & 1 & 1 \\
\bottomrule
\end{tabular}
}
\end{tabular}
\end{table}
\end{center}
\section{Conclusion}
\label{sec:Conclusion}
This work suggests an efficient telegraph diffusion-based multiplicative speckle noise removal model. The new method aims to preserve the image edges during the noise removal process. To overcome the limitations of gradient-based despeckling models as well as parabolic PDE based models, we considered a hybrid approach: we combine a gray level indicator function with gradient-based diffusion in a telegraph diffusion framework for image restoration. To the best of our knowledge, a gray level indicator based telegraph diffusion model has not been used before for speckle noise suppression.
We also established the existence and uniqueness of a weak solution to the suggested model using Schauder's fixed point theorem, and proved the boundedness of the weak solution.
Numerical experiments have been conducted to highlight the efficiency of the proposed model for despeckling using different types of test images. The computational results of the present model are compared with a recently developed model. From the experimental results, we conclude that the images are suitably recovered without introducing undesired artifacts. A potential direction is to extend the telegraph diffusion model to handle texture preservation issues in various real-life images degraded by mixed noise. Another significant step might be the study of advanced numerical solvers to enhance the convergence speed of the proposed model.
\thispagestyle{empty}
\bibliographystyle{unsrt}
This culture knew only uses, not meanings.
— Seth Price, Fuck Seth Price
Franklin had reached an age when he no longer fretted about squandering his time.
— Walter Isaacson, Benjamin Franklin: An American Life
Go Speed Racer!
— Person handing out pamphlets outside subway turnstiles as I ran by to catch train
Among Franklin's cards was his fame.
You waiting for a cab?
— Guy paid to keep homeless people out of 42nd and 9th Citi ATM lobby from midnight to 9am
Never tell the machine it's for me.
— Sid on duplicator machine
Absolute time.
— Misreading of Lily from China's misspelling
You're not a real bookbinder?
— Nice lady on last day of her bookbinding I class after I described Uriel as "a real bookbinder." After her question I said no and Uriel laughed and said no.
Franklin became an apostle of being—and, just as important, of appearing to be—industrious. Even after he became successful, he made a show of personally carting the rolls of paper he bought in a wheelbarrow down the street to his shop, rather than having a hired hand do it.
You want to hear a lesson of bad parenting? My parents took me to the Sistine Chapel and no one told me to look up.
I can. I can do that. I like doing that. I mean not every artist wants to do that. Some do that. I mean. Um. I do. I, I've always liked that.
— David Hockney making hand gestures for big and small painting, "The Art of Seeing: David Hockney," YouTube
People aren't buying knick knacks anymore.
— Darren
As Charlie Warzel wrote on BuzzFeed, "For Wolff's book, the truth seems almost a secondary concern to what really matters: engagement."
— David Brooks, "The Decline of Anti-Trumpism," The New York Times
I wanted to build a resort, but I didn't want to copy others and make just another theme park. I wanted to build one that has cultural depth to it. I came up with the idea at 3 in the morning.
— Shaojun Su, "Local Chinese Government Backs Titanic Replica," NPR Morning Edition
I think I want to open a zoo. Zoo futures are up.
Finish one.
— Uriel
You live around here?
— Patricio
He announced to aides that he was going to write "a trashy piece of pulp"… a reportedly sex-filled novel titled From Palms to Pines.
— Robert Caro, The Power Broker: Robert Moses and the Fall of New York
Kushner, going concave, retreated from the discussion.
— Michael Wolff, excerpt from Fire and Fury in NYMag
Why exactly?
— Sean Astin as Bob the Brain, Stranger Things 2 | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 5,715 |
package sparsed

import "syscall"

// FileSize reports the logical size of a file and the space it actually
// takes on disk. Helpful for detecting sparse files, where taken < size.
func FileSize(path string) (size int64, taken int64, err error) {
	s := syscall.Stat_t{}
	err = syscall.Stat(path, &s)
	if err != nil {
		return 0, 0, err
	}
	// st_blocks is counted in 512-byte units (POSIX), independently of the
	// file system block size reported by Statfs.
	return s.Size, s.Blocks * 512, nil
}

// Owner returns the uid and gid of a file.
func Owner(path string) (uid uint32, gid uint32, err error) {
	s := syscall.Stat_t{}
	err = syscall.Stat(path, &s)
	if err != nil {
		return 0, 0, err
	}
	return s.Uid, s.Gid, nil
}

// FSBlockSize reports the block size of the file system holding path.
func FSBlockSize(path string) (blockSize int, err error) {
	s := syscall.Statfs_t{}
	err = syscall.Statfs(path, &s)
	if err != nil {
		return 0, err
	}
	return int(s.Bsize), nil
}
| {
"redpajama_set_name": "RedPajamaGithub"
} | 8,682 |
using System;
using System.Collections.Generic;
using System.Reflection;
namespace FluentSharp.CoreLib.API
{
public class O2CmdApi
{
public static List<Type> typesWithCommands = new List<Type>();
public static MethodInfo getMethod(string methodName, string[] methodParameter)
{
foreach (var type in typesWithCommands)
{
// ReSharper disable CoVariantArrayConversion
var methodToExecute = PublicDI.reflection.getMethod(type, methodName, methodParameter);
// ReSharper restore CoVariantArrayConversion
if (methodToExecute != null)
return methodToExecute;
}
return null;
}
}
}
| {
"redpajama_set_name": "RedPajamaGithub"
} | 3,933 |
World Music Award – a music award presented since 1989 to music artists from all over the world for the highest number of records sold. The awards are presented by the International Federation of the Phonographic Industry (IFPI).
2006
Winners:
Michael Jackson- Diamond Award
Beyoncé – World's Best-Selling R&B Artist Award
Nelly Furtado – World's Best-Selling Pop/Rock Artist Award
Madonna – World's Best-Selling Pop Artist Award
Nickelback – World's Best-Selling Rock Artist Award
Kanye West – World's Best-Selling Rap/Hip Hop Artist Award
Bob Sinclar – World's Best DJ Award
Andrea Bocelli – World's Best-Selling Classical Artist Award
Shakira – World's Best-Selling Latin Artist Award
James Blunt – World's Best-Selling New Artist Award
Andrea Bocelli – Best-Selling Italian Artist Award
Dima Bilan – Best-Selling Russian Artist Award
James Blunt – Best Selling U.K. Artist Award
Madonna – Best Selling U.S. Artist Award
Elissa – Best-Selling Arabic Artist Award
Enya – Best-Selling Irish Artist Award
Tokio Hotel – Best-Selling German Artist Award
Katie Melua – Best-Selling British Artist Award
Jay Chou – Best-Selling Chinese Artist Award
Rihanna – Best-Selling Barbadian Artist Award
2007
Winners:
Céline Dion – Legend Award for Outstanding Contribution to Music
Patti LaBelle – Legend Award for Outstanding Contribution to R&B
Rihanna – World's Best-Selling Pop Female Artist
Shaggy – Entertainer of the Year
Shaggy – World's Best-Selling R&B Male Artist
Avril Lavigne – World's Best-Selling Pop/Rock Female Artist
Mika – World's Best-Selling Pop/Rock Male Artist
Linkin Park – World's Best-Selling Rock Group
Akon – World's Best-Selling R&B/Soul Artist
50 Cent – World's Best-Selling Rap/Hip Hop Artist
Maná – World's Best-Selling Latin Artist
David Guetta – World's Best-Selling DJ
Mika – World's Best-Selling New Artist
Justin Timberlake – Best Selling U.S. Artist Award
Mika – Best Selling U.K. Artist Award
Cascada – Best-Selling German Artist Award
Avril Lavigne – Best Selling Canadian Artist Award
Silverchair – Best Selling Australian Artist Award
Akon – Best Selling Africa Artist Award
Silver – Best Selling Russian Artist Award
Laura Pausini – Best Selling Italian Artist Award
U2 – Best Selling Irish Artist Award
Nightwish – Best Selling Scandinavian Artist Award
Miguel Bose – Best Selling Spanish Award
Jay Chou – Best Selling Chinese Artist Award
Within Temptation – Best Selling Dutch Artist Award
Amr Diab – Best Selling Middle Eastern Artist Award
2008
Winners:
Basshunter – World's Best Selling Swedish Artist
Special awards
Legend Awards – an award presented each year to an artist for special merit. Recipients include:
Tina Turner, Michael Jackson, Whitney Houston, Mariah Carey, Elton John, Stevie Wonder, Modern Talking, Ace of Base, Diana Ross, Julio Iglesias, Tina Cousins, Rod Stewart, Lionel Richie, Ray Charles, Cher, Plácido Domingo, Luciano Pavarotti, Destiny's Child, Prince, Janet Jackson, Carlos Santana, Chaka Khan, Cliff Richard, Bee Gees, Deep Purple, Gloria Gaynor, Tony Bennett, Patti LaBelle and Céline Dion.
Diamond Awards – an award presented to an artist for the number of records sold over many years:
2001: Rod Stewart
2003: Mariah Carey
2004: Céline Dion
2005: Bon Jovi
2006: Michael Jackson
2013: Justin Bieber
2017: Justin Bieber
Millennium Award – an award presented in 2000 to the artist of the millennium:
2000: Michael Jackson
2000: Mariah Carey
References
External links
Official World Music Award website
World Music Awards
Music awards | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 1,453 |
{"url":"https:\/\/socratic.org\/questions\/how-do-you-determine-whether-the-situation-is-an-example-of-an-inverse-or-direct","text":"# How do you determine whether the situation is an example of an inverse or direct variation: the drama club can afford to purchase 10 wigs at $2 each or 5 wigs at$4 each?\n\nJul 20, 2018\n\nInverse variation : Inverse variation equation is $n \\cdot p = 20$\n\n#### Explanation:\n\nInverse variation : Here price per wig (p) increases , quantity of\n\nwigs (n) decreases as the total amount (k=20) remains constant.\n\n n prop 1\/p or n=k\/p or n * p = k ; k=20 or n * p=20# , when\n\n$n = 10 , p = 2 \\therefore 10 \\cdot 2 = 20 \\mathmr{and} n = 5 , p = 4 \\therefore 5 \\cdot 4 = 20$\n\nInverse variation equation is $n \\cdot p = 20$ [Ans]","date":"2019-12-10 14:00:05","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 4, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.4624903202056885, \"perplexity\": 7853.507323329971}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-51\/segments\/1575540527620.19\/warc\/CC-MAIN-20191210123634-20191210151634-00271.warc.gz\"}"} 
| null | null |
Leonard Bisaku (born 22 October 1974) is a Croatian retired football midfielder who last played for the Columbus Crew in Major League Soccer.
Club career
He spent most of his professional career playing in Croatia with clubs including Hajduk Split and NK Rijeka.
At age 31, he signed with the Crew on April 4, 2006, and was released late in the season.
Bisaku comes from a Kosovar family of jewellers.
He is now a football agent.
References
External links
Profile at 1hnl.net
Bisaku appointed to Columbus Crew
1974 births
Living people
Footballers from Zagreb
Croatian people of Kosovan descent
Croatian people of Albanian descent
Association football midfielders
Croatian footballers
HNK Hajduk Split players
HNK Cibalia players
NK Slaven Belupo players
NK Hrvatski Dragovoljac players
Pohang Steelers players
Seongnam FC players
HNK Rijeka players
HŠK Posušje players
Columbus Crew players
Croatian Football League players
Premier League of Bosnia and Herzegovina players
Major League Soccer players
Croatian expatriate footballers
Expatriate footballers in South Korea
Croatian expatriate sportspeople in South Korea
Expatriate footballers in Bosnia and Herzegovina
Croatian expatriate sportspeople in Bosnia and Herzegovina
Expatriate soccer players in the United States
Croatian expatriate sportspeople in the United States | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 3,795 |
Piano Concerto No. 4 in G major, KV 41, is a piano concerto by Wolfgang Amadeus Mozart. The concerto, together with Piano Concertos Nos. 1, 2 and 3, was long thought to have been composed by Mozart himself. It is now clear that it is an orchestration of sonatas by various German composers. Mozart completed the piece in July 1767.
Orchestration
The piano concerto is scored for:
Two flutes
Two horns
Pianoforte
Strings
Movements
The piano concerto consists of three movements:
Allegro
Andante
Molto allegro
External link
Sheet music at the International Music Score Library Project
04
Composition completed in 1767 | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 6,073 |
Triton Air uses and supports products Made in the USA. We are an EPA-certified reclaimer & recycler and a fully Licensed, Bonded & Insured company.
- Have an old air conditioning or heating system?
Get a detailed tune-up on your system before the cold or heat hits!
Click for a video demonstrating the importance of picking the right Contractor and quality Equipment.
Taming the Earth's environment while preserving it. | {
"redpajama_set_name": "RedPajamaC4"
} | 3,723 |
\section*{Acknowledgment}
The work of M. Conti was supported by a Marie Curie Fellowship funded by the European Commission under the agreement PCIG11-GA-2012-321980. Ankit Gangwal is pursuing his Ph.D. with a fellowship for international students funded by Fondazione Cassa di Risparmio di Padova e Rovigo~(CARIPARO). This work is partially supported by EU LOCARD Project under Grant H2020-SU-SEC-2018-832735.
\section{Background}
\label{background}
The purpose of this section is to provide an overview of the \gls{ICN} paradigm (Section~\ref{ICN_intro}), a comparison of the main features of \gls{IP} and \gls{ICN} architectures (Section~\ref{features}), the benefits of \gls{ICN} (Section~\ref{icn_benefits}) and, finally, the emerging technologies (Section~\ref{emerging_technologies}).
\subsection{\review{\gls{ICN} Overview}}
\label{ICN_intro}
\review{The \gls{ICN} concept was first implemented in 2001 in the TRIAD project~\cite{Cheriton00triad:a}, by introducing a new \textit{content layer} in the \gls{IP} communication model. This layer provided several content-based features, among which: hierarchical content caching, content replication and content discovery, multicast-based content distribution, and name-based routing. Moreover, the layer supported end-to-end communication based on content name and \gls{URL} by relying on \gls{IP} addresses only to reduce the role of transient routing tags. Although TRIAD routing mechanism used content names instead of \gls{IP} addresses, the \gls{TCP} and the \gls{IP} protocols were still the backbone of the proposed architecture. In 2006, UC Berkeley and ICSI proposed the \gls{DONA}~\cite{Koponen:2007:DNA:1282380.1282402}, which improved TRIAD by incorporating data authenticity and persistence as key objectives of the architecture, but still having a strong dependency on the underlying \gls{TCP}/\gls{IP}. In 2009, the \gls{PARC} revealed the \gls{CCN}~\cite{Jacobson} project. Soon after, the \gls{NSF} introduced its ``Future Internet Architecture'' program, which paved the way for \gls{NDN}~\cite{Zhang} - a branch of the \gls{CCN} project. Both \gls{CCN} and \gls{NDN} significantly moved the TRIAD and \gls{DONA} projects forward, by introducing a new network layer to definitely replace the existing \gls{TCP} and \gls{IP} ones. Thus, \gls{CCN} and \gls{NDN} are considered two key projects due to the considerable attention they brought to the \gls{ICN} paradigm from both Academia and Industry, influencing also the design of the \gls{ICN} architecture~\cite{ICNRG}.
}
\subsection{\review{Comparison Between \gls{IP}-based and \gls{ICN}-based Internet Architectures}}
\label{features}
Originally developed as part of the ARPANET project~\cite{mcquillan1980new} during the 1960s, the current Internet is now often referred as \gls{TCP}/\gls{IP} architecture due to its most well-known protocols (i.e., \gls{TCP} and \gls{IP}). On the contrary, the \gls{ICN} paradigm was first introduced in the TRIAD project~\cite{Cheriton00triad:a} in 2001 and, then, followed by several architectures adhering to its new communication model. \review{Since \gls{ICN} is a paradigm, we will consider here the five main architectures to describe the technical features of the future Internet, while we will provide a comprehensive description of all the architectures addressing the \gls{ICN}-\gls{IP} coexistence in Section~\ref{coexistence_architectures}}: (i)~the \gls{DONA} architecture~\cite{Koponen:2007:DNA:1282380.1282402}, (ii)~the \gls{CCN} architecture~\cite{Jacobson}, (iii)~the \gls{NDN} architecture~\cite{Zhang}, (iv)~the \gls{PURSUIT} architecture~\cite{Dimitrov:2010:PPP:1839379.1839409}, and (v)~the \gls{NetInf} architecture~\cite{Dannewitz:2013:NII:2459510.2459643}.
\textbf{Protocol Stack.} Both \gls{TCP}/\gls{IP} and \gls{ICN} rely on a layered protocol stack, which is comparable to the \gls{OSI} Reference Model~\cite{zimmermann1980osi}, as shown in \review{Fig.}~\ref{fig:stacks}.
\begin{figure}[h]
\centering
\includegraphics[width=0.4\textwidth]{images/TCP_IP_ICN_Stack.pdf}
\caption{Adaptation of the \gls{OSI} seven layer model in the \gls{TCP}/\gls{IP} and \gls{ICN} protocol stacks.}
\label{fig:stacks}
\end{figure}
The \gls{TCP}/\gls{IP} stack includes the following four layers\review{~\cite{TCPstack}}:
\begin{itemize}
\item \emph{Application} - it combines the functionality of the \emph{Application}, \emph{Presentation} and \emph{Session} layers of the \gls{OSI} model. It is responsible for sending and receiving data and it is specific for a particular type of application (e.g., \gls{DNS}, \gls{HTTP}).
\item \emph{Transport} - it targets the \emph{Transport} layer of the \gls{OSI} model and it is responsible for the end-to-end data transfer and data streams. Its most important protocols are \gls{TCP}, which provides a reliable and connection-oriented service, and \gls{UDP}, which offers an unreliable and connection-less service.
\item \emph{Internet} - equivalent to the \emph{Network} layer of the \gls{OSI} model, it provides addressing and routing functionalities to ensure the delivery of messages to their destination. \gls{IP} is the most important protocol, but it does not provide flow control or error handling.
\item \emph{Link} - equivalent to the \emph{Data} and \emph{Physical} layers of the \gls{OSI} model, it manages the interaction among physical network components and it works as an interface with the network hardware.
\end{itemize}
Since the \gls{ICN} stack is an evolution of the \gls{TCP}/\gls{IP} one\review{~\cite{icnstack1,icnstack2, point}}, each layer is described with respect to the corresponding one in the Internet stack. More specifically, the layers of the \gls{ICN} stack are the following ones:
\begin{itemize}
\item \emph{\gls{ICN} Application} - the protocols of this layer address content names instead of hosts locations. For example, the \gls{URL} inside an \gls{HTTP} request is replaced with the complete name of a content.
\item \emph{\gls{ICN} Forwarding} - for any \gls{ICN}-compliant architecture this layer offers \review{routing functionalities for \gls{ICN} interest and data packets equivalent to} the \gls{TCP}/\gls{IP} \emph{Network} layer in such a way that source and destination \gls{IP} addresses are removed from the network packets and only the addressed content name is declared. According to the specific architecture, this layer can also provide the features of the \gls{TCP}/\gls{IP} \emph{Transport} layer. In that case, the Interest/Data messages replace the \gls{TCP}/\gls{IP} segment/\gls{ACK} messages and the content requester becomes responsible for the message sending rate in place of the content source (producer or intermediate router).
\item \emph{Link} - to be \gls{ICN}-compliant, this layer introduces a mapping between \gls{MAC} addresses and content names.
\end{itemize}
\textbf{Routing.} The purpose of the routing functionality is to route network packets from the source node to the destination node in one direction and, then, from the destination back to the source in the other.
Each \gls{TCP}/\gls{IP} packet specifies both source and destination nodes by including their \gls{IP} addresses. An \gls{IP} address is the unique identifier of each network component and it contains both the address of the network and the address of the specific component within that network. In the current Internet, routers are the main responsible for the routing functionality. Equipped with at least two \gls{IP} interfaces (i.e., an incoming and an outgoing one), each router receives \gls{IP} packets in the incoming interface and checks whether there is a match, based on the longest prefix, in its \gls{FIB} internal data structure. The \gls{FIB} contains a mapping between a network prefix and a router's outgoing interface, together with the next-hop \gls{IP} address. If there is a match in the \gls{FIB} for the incoming packet, this is forwarded through the outgoing interface towards the next node in the network.
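To make the lookup concrete, the following Python sketch emulates the \gls{FIB}-based forwarding decision just described; the prefixes, interfaces and next-hop addresses are invented for illustration and do not come from any real router.

```python
import ipaddress

# Hypothetical FIB: network prefix -> (outgoing interface, next-hop IP).
FIB = {
    "10.0.0.0/8":  ("eth0", "10.0.0.1"),
    "10.1.0.0/16": ("eth1", "10.1.0.1"),
    "10.1.2.0/24": ("eth2", "10.1.2.1"),
}

def lookup(dst):
    """Return the FIB entry whose prefix matches dst with the longest length."""
    dst = ipaddress.ip_address(dst)
    best = None
    for prefix, entry in FIB.items():
        net = ipaddress.ip_network(prefix)
        if dst in net and (best is None or net.prefixlen > best[0]):
            best = (net.prefixlen, entry)
    return best[1] if best else None  # None -> no route, the packet is dropped

print(lookup("10.1.2.42"))  # -> ('eth2', '10.1.2.1'): the most specific /24 wins
print(lookup("10.9.9.9"))   # -> ('eth0', '10.0.0.1'): only the /8 matches
```

Real routers implement this lookup with specialised data structures (tries, TCAMs) rather than a linear scan, but the longest-prefix semantics are the same.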
In \review{\gls{ICN}}, the routing functionality differs according to the specific design of each architecture, but all architectures share a common design choice: the packets sent by a requester contain only the full name of the content and no \gls{IP} addresses, neither the content requester's one nor the content source's one. In \gls{NDN} and \gls{CCN} architectures, contents are expressed through hierarchical names and routers use a longest-prefix match approach to find a possible entry in their \gls{FIB}, which returns the name-prefix/prefixes of the next node/nodes in the network. On the contrary, \gls{DONA} exploits a flat naming scheme to point to the contents available in the network and a name-based routing to redirect the packets until they reach the content source. A different approach is used by \gls{PURSUIT}, which relies on a publish/subscribe model. Publishers publish their contents in the network and subscribers ask for a specific content by using a flat name scheme, made of two components: the \gls{RI} and the \gls{SI}. The first element addresses the component responsible for finding the match between publisher and subscriber for a specific content, while the second is used to identify the sub-network where the rendezvous is. Once the subscriber obtains the location of the publisher from the rendezvous node, it sends its packet to the \gls{TM} of the network where the content publisher is. The \gls{TM}, then, identifies the path from the publisher to the subscriber and adds a series of \gls{FIs} to the header of the packets. After that, the \gls{FNs} forward the packets only by using the \gls{FIs}, without any routing table. Finally, the \gls{NetInf} architecture adheres to both the approaches: name resolution, based on the publish/subscribe paradigm, and name-based routing.
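In \gls{NDN}/\gls{CCN} the longest-prefix idea operates on whole name components rather than on address bits. A minimal sketch follows; the name prefixes and face identifiers are invented for the example.

```python
# Hypothetical name-based FIB: name prefix (tuple of components) -> outgoing face(s).
NAME_FIB = {
    ("edu",): ["face1"],
    ("edu", "unipd"): ["face2"],
    ("edu", "unipd", "videos"): ["face3", "face4"],  # several faces: multipath forwarding
}

def forward(interest_name):
    """Match an Interest name against the FIB on whole name components;
    the longest matching prefix wins."""
    components = tuple(interest_name.strip("/").split("/"))
    for n in range(len(components), 0, -1):
        faces = NAME_FIB.get(components[:n])
        if faces:
            return faces
    return []  # no route for this name

print(forward("/edu/unipd/videos/lecture1.mp4"))  # -> ['face3', 'face4']
print(forward("/edu/unipd/papers/icn.pdf"))       # -> ['face2']
```

Note that, unlike the \gls{IP} case, a match may return several faces, which is what enables the native multipath forwarding of these architectures.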
\textbf{Name Resolution.} In the \gls{TCP}/\gls{IP} architecture there is a dedicated network component responsible for the name resolution, which is the \gls{DNS}. This is a distributed service, which translates domain names, expressed as hierarchical \gls{URL}s, into the corresponding \gls{IP} addresses. The Internet is organized into separate \gls{DNS} zones, each one under the direct control of an authoritative \gls{DNS} server, and every time a network device sends a request to its local \gls{DNS} server, this might reply with a value saved in its cache or, otherwise, forward the same request to a remote server.
In \gls{ICN}, the name resolution differs according to the chosen forwarding approach. In case of name-based routing, the requester specifies a content by providing its full name, which is the same analyzed by the routers to find the next hop in the network. \review{On the other hand}, in the name resolution approach, used by \gls{PURSUIT} or \gls{NetInf}, there is always a dedicated node in the network, which is responsible for the mapping between publishers and subscribers.
\textbf{Storing.} In the \gls{TCP}/\gls{IP} architecture, \review{routers do not have caching features, while in \gls{ICN}, caching is fundamental and almost any node is able to cache contents and to serve the corresponding requests.}
\textbf{Traffic Management.} \review{In the current Internet, the traffic management, in terms of connection management, flow control and congestion control, is guaranteed by the \gls{TCP} protocol. The establishment of a connection is regulated by the three-way handshake mechanism, through which the \gls{TCP} protocol checks for the availability of the remote server, before exchanging any data with it. Only at the end of the handshake does the real communication start, together with the data exchange, and it is regulated by the introduction of sequence numbers in the message blocks that enable the destination node to properly order all the received messages. The flow control is provided by the \gls{ACK} messages received by the sender from the receiver every time a packet has been properly delivered. Thus, a sender never overflows the receiving host because the re-transmission of a packet is performed only after a timeout, which corresponds either to an \gls{ACK} not received by the sender or to three duplicate \gls{ACK}s received. Finally, the congestion control refers to the prevention of the routers from becoming overflowed.}
In \gls{ICN}, some architectures, such as \gls{DONA}, still rely on the existing transport protocols so that all the forwarding mechanisms and transport functionalities are guaranteed. However, other \gls{ICN} solutions, such as \gls{NDN}, do not provide the \emph{Transport} layer functionalities and, instead, delegate them to the application itself or to the network packets. After a certain timeout, an application can transmit again a packet, which by design has a limited lifetime to prevent network congestion. Moreover, the availability of distributed caches, which means contents, all over the network should prevent losses due to congestion.
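This application-level retransmission can be sketched as a consumer loop that re-expresses an Interest after each lifetime expiry. The send_interest/recv_data callbacks below stand in for whatever face API the application uses and are assumptions of the example, not part of any concrete \gls{NDN} library.

```python
import time

def fetch(name, send_interest, recv_data, lifetime=0.5, max_retries=3):
    """Consumer-driven retrieval: the application re-expresses the Interest
    after each lifetime expiry, as NDN delegates retransmission to it."""
    for _ in range(max_retries):
        send_interest(name, lifetime)          # (re-)express the Interest
        deadline = time.monotonic() + lifetime
        while time.monotonic() < deadline:     # Interests have a limited lifetime
            data = recv_data(timeout=deadline - time.monotonic())
            if data is not None and data.get("name") == name:
                return data                    # matching Data satisfies the Interest
    raise TimeoutError("no Data received for " + name)
```

The lifetime argument emulates the limited packet lifetime mentioned above: in real deployments it is carried in the Interest itself and also bounds the corresponding \gls{PIT} entry in the routers along the path.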
\review{\subsection{Benefits of ICN-based architectures}
\label{icn_benefits}
The following ones are the key \gls{ICN} benefits, which better motivate why this architecture is a potential candidate for the future Internet.
\subsubsection{Scalable and Cost-Efficient Content Distribution}
In a future world where the mobile video traffic will be dominant (e.g., video data will consume more than 80\% of the \gls{IP} traffic, wireless mobile devices will generate two-thirds of the Internet traffic~\cite{cisco}, Netflix and YouTube together account for nearly 50\% of Internet traffic), the current network operators will face challenges in meeting the bandwidth requirements from end users. Thus, the inherent \gls{ICN} support for caching at the network layer~\cite{Jacobson}, together with the receiver-driven mechanism, the inherent support for mobility and the multi-cast routing, make \gls{ICN} well suited to these new network usage patterns in a multimedia streaming context~\cite{dashovericn,6649319,dashoverccn,7169859,ndnavs,Carofiglio}.
}
\review{\subsubsection{Mobility and Multihoming}
\gls{ICN} also meets the requirements of the 5G network, such as global Internet access and user mobility over dense and heterogeneous networks by adapting to multiple radio access technologies (e.g., Wi-Fi and \gls{LTE}). As a matter of fact, \gls{ICN} supports the mobility at the network layer by decoupling time and space between request resolution and content transfer~\cite{8303694}. In particular, two fundamental \gls{ICN} features encourage seamless consumer mobility~\cite{Anastasiades2014,8303694}. The first is the receiver-driven communication model, where it is up to the consumer to request location-independent contents. The second is the connection-less request/response communication model between consumer and producer. Therefore, when a mobile consumer connects to a new \gls{PoA}, the above two features allow the consumer to re-issue interests for the data that he has not received from the previous \gls{PoA}. On the contrary, producer mobility is more challenging in \gls{ICN} because of no distinction between routing locator and content identifier. Previous work have already proposed new solutions for an efficient management of producer mobility in \gls{ICN}~\cite{7562050,Anastasiades2014}.}
\review{\subsubsection{Disruption Tolerance}
Achieving an end-to-end communication through \gls{TCP}/\gls{IP} transport sessions in challenged networks is often difficult due to the sparse connectivity, high-speed mobility, and disruptions of such networks. Since the application protocol sessions are bound to transport sessions, the communication fails as soon as the transport session fails. In the current Internet, several applications do not require seamless communication with end-to-end paths~\cite{Ott2004WhyST}. As the primary objective is to access data objects, \gls{ICN} is the perfect approach for \gls{DTN} architectures~\cite{Fall:2003:DNA:863955.863960,rfc4838} due to the in-network caching with hop-by-hop transport functionality, which provides a store-and-forward mechanism and enables a better performance and reliability.
}
\review{
\subsubsection{Security}
Unlike the \gls{TCP}/\gls{IP} architecture, the \gls{ICN} design comes with the security in mind. In particular, in \gls{ICN} the security follows a data-centric model, which focuses on the importance of guaranteeing content integrity and source authentication. For a content-centric architecture, where contents can be located and provided in any point of the network, and not only by the original content producer, the above-mentioned features are particularly significant. To achieve this aim, \gls{ICN} contents are always signed by the producer, thus allowing consumers to always verify content integrity and data-origin authentication~\cite{Compagno2018}.}
\subsection{Emerging Technologies}
\label{emerging_technologies}
Before thinking of redesigning the whole Internet architecture, researchers and companies have provided several solutions, which work on top of the current Internet, to overcome some of its limitations. Among those, the most successful attempts are the following emerging architectures: \gls{SDN}, \gls{NFV}, \gls{CDN} and \gls{DTN}.
\subsubsection{Software-Defined Networking}
\gls{SDN}~\cite{farhady2015software} is an emerging networking paradigm that separates network control logic (i.e., the control plane) from the underlying switches and routers that forward the traffic (i.e., the data plane). By separating the control and data planes, the network switching/routing devices become simple forwarding devices and the control logic is incorporated in a logically centralized controller. This separation primarily helps in simplifying network (re)configuration, policy enforcement, and evolution~\cite{kreutz2015software}. The control plane and the data plane communicate via a well-defined programming interface, i.e., the forwarding elements of the data plane request instructions from the controller, and the controller has direct control over the data plane elements using \gls{APIs}. The most popular flavor of such \gls{APIs} is OpenFlow~\cite{mckeown2008openflow}. An OpenFlow switch has one or more flow tables for handling packet-rules. When a rule matches the incoming traffic, the OpenFlow switch performs certain actions (forwarding, modifying, dropping, etc.) on the traffic flow. The rules installed by the controller decide the role of an OpenFlow switch, i.e., it can behave as a switch, router, firewall, or middlebox (such as traffic-shaper, load-balancer).
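The match-action behaviour of such a flow table can be illustrated with the toy sketch below; the match fields, priorities and actions are simplified illustrations and not the actual OpenFlow wire format, which matches on many more header fields.

```python
# Hypothetical flow table: each rule has a priority, a match (field -> required
# value; an absent field is a wildcard) and an action.
FLOW_TABLE = [
    {"priority": 200, "match": {"ip_dst": "10.0.0.5", "tcp_dst": 80},
     "action": "forward:port2"},
    {"priority": 100, "match": {"ip_dst": "10.0.0.5"},
     "action": "forward:port1"},
    {"priority": 0, "match": {},                    # table-miss rule: ask the
     "action": "send_to_controller"},               # controller what to do
]

def match_action(packet):
    """Return the action of the highest-priority rule matching the packet."""
    for rule in sorted(FLOW_TABLE, key=lambda r: -r["priority"]):
        if all(packet.get(f) == v for f, v in rule["match"].items()):
            return rule["action"]

print(match_action({"ip_dst": "10.0.0.5", "tcp_dst": 80}))  # -> forward:port2
print(match_action({"ip_dst": "10.0.0.9"}))                 # -> send_to_controller
```

The table-miss rule is what gives the controller its central role: traffic with no installed rule is punted to it, and the controller then installs new rules so that subsequent packets of the flow are handled in the data plane.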
\subsubsection{Network Functions Virtualization} Diversity and dominance of proprietary appliances made service deployment, as well as testing, complex. \gls{NFV}~\cite{li2015software} was designed as a technology to leverage \gls{IT} virtualization by exporting network functions from the underlying dedicated hardware equipment to general software running on \gls{COTS} devices. Using \gls{NFV}, the key network functions can be performed at various network locations, e.g., network nodes, data-centers, network edge, as required. \gls{NFV} is different from \gls{SDN}, and it only deals with the virtualization of network functions.
\subsubsection{Content Delivery Network}
The initial implementation of the Internet was designed to manage the traffic in a passive, end-to-end, and ``best effort'' approach~\cite{1250586}. With the explosion of user data and commercial content over the Internet, the ``best effort'' approach for traffic management became inefficient and unscalable. To handle this situation, \gls{CDN}~\cite{1250586, 6674399, 8046000} was designed~\cite{cisco,6688724,7948965}. Nowadays, \gls{CDN} appears as an integral and essential overlay network for the Internet~\cite{STOCKER20171003, Clark2005TheGO, Medagliani2017OverlayRF} since it primarily aims to improve bandwidth availability, accessibility, and precise content delivery through content replication.
\par
\gls{CDN} architecture consists of several cache servers that are strategically located across the Internet. Typically, \gls{CDN} holds a hierarchy of servers with multiple \gls{PoP}s that store copies of identical content to satisfy users' demands from the most appropriate/closest site~\cite{NivenJenkins2012ContentDN}. It also has back-end servers for intra-\gls{CDN} content distribution. \gls{CDN} categorically distributes web contents to the cache servers, which are positioned close to the users. As a result, \gls{CDN} offers fast, efficient, and reliable web services to the users.
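The redirection of a request to the best replica can be sketched as follows; the \gls{PoP} names, RTT figures and content placement are invented for illustration, and real \gls{CDN}s combine latency with load, cost and \gls{DNS}-based geolocation.

```python
# Hypothetical PoP map: site -> measured RTT (ms) from the requesting user,
# plus which contents each site currently replicates.
POPS = {"frankfurt": 18.0, "milan": 9.5, "virginia": 95.0}
CONTENT = {"frankfurt": {"/v/clip"}, "virginia": {"/v/clip", "/v/rare"}}

def select_pop(name):
    """Redirect the request to the lowest-latency PoP holding a replica."""
    candidates = [pop for pop, items in CONTENT.items() if name in items]
    return min(candidates, key=POPS.get) if candidates else None

print(select_pop("/v/clip"))  # -> frankfurt: the closest site with a replica
print(select_pop("/v/rare"))  # -> virginia: the only site holding it
```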
\par
There are two fundamental approaches for the deployment of \gls{CDN}: (i)~overlay model, where content is replicated to thousand of servers worldwide, and (ii)~network model, where routing configurations recognize the application services and forward them based on the predefined policies.
\par
Even though \gls{CDN}s improve content delivery, their performance is limited by the underlying \gls{ISPs}. Usually, \gls{CDN}s do not manage independent packet data services, rather they rely on the \gls{ISPs} to make packet routing decisions. Moreover, both \gls{ISPs} and \gls{CDN}s collectively provide end-to-end \gls{QoE}\footnote{QoE is an all-inclusive model, which defines the quality perceived by a user when retrieving content or applications over the Internet.} for content delivery. Thus, coordination between \gls{ISPs} and \gls{CDN} providers has a massive impact on the overall \gls{QoE}~\cite{STOCKER20171003}.
\subsubsection{\glsentrylong{DTN}} \review{In the late 1990s, the widespread use of wireless protocols, together with an increasing interest in vehicular communication, encouraged researchers to design the \gls{IPN} architecture. This was the first attempt to address the need for long-distance communications, which are inevitably affected by packet loss/corruption and delays. \gls{DTN}~\cite{5770277} was first introduced as an adaptation of the \gls{IPN} for terrestrial networks~\cite{DTNstory}: it is an overlay architecture that operates above the protocol stack of \textit{ad-hoc} wireless networks and enables gateway functionality to interconnect them. To provide communication among networks experiencing excessive delays due to highly repetitive link disruptions, \gls{DTN} adopts the ``store-carry-forward'' routing scheme~\cite{storecarryforward}: the main idea of this scheme is to have multiple nodes distributed over the network, each one able to store a copy of the same message and carry it until it can be forwarded towards the destination node. This way, the delivery performance is improved and the destination node can receive the message from any location inside the network.}
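The ``store-carry-forward'' scheme can be illustrated with a minimal, hypothetical simulation (the node model and function names are ours, not part of any \gls{DTN} specification): nodes buffer message copies and exchange them on opportunistic contacts until a copy reaches the destination.

```python
import random

class Node:
    """A DTN node that stores messages and exchanges copies on contact."""
    def __init__(self, name):
        self.name = name
        self.buffer = set()  # message IDs currently carried by this node

    def contact(self, other):
        # On an opportunistic encounter, both nodes end up holding a copy
        # of every buffered message (epidemic variant of store-carry-forward).
        shared = self.buffer | other.buffer
        self.buffer = set(shared)
        other.buffer = set(shared)

def deliver(source, destination, relays, msg_id, rounds=100):
    """Simulate random pairwise contacts; return the contact count at delivery."""
    source.buffer.add(msg_id)
    nodes = [source, destination] + relays
    for r in range(rounds):
        a, b = random.sample(nodes, 2)   # one opportunistic encounter
        a.contact(b)
        if msg_id in destination.buffer:
            return r + 1                  # delivered after this many contacts
    return None                           # not delivered within the budget
```

With no relays, the first contact necessarily involves source and destination, so delivery happens immediately; adding relays models the multi-copy dissemination described above.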
\section{Coexistence Architectures: Features and Evaluation Parameters}
\label{classification_criteria}
In order to classify the existing architectures, we identified the necessary features and evaluation parameters to have a complete overview of each coexistence solution. The former come with the design of a coexistence architecture, while the latter refer to the challenges introduced during its deployment in a real scenario. The features are as follows: \emph{deployment approaches}, \emph{deployment scenarios}, \emph{addressed coexistence requirements}, and \emph{\review{additional architecture or technology used}}. \review{On the other hand}, the evaluation parameters are: \emph{traffic management}, \emph{access control}, \emph{scalability}, \emph{dynamic network management}, and \emph{latency}.
In the remaining part of this section, we will describe features (Section~\ref{c_features}) and evaluation parameters (Section~\ref{evaluation_parameters}) used for analyzing each coexistence architecture.
\subsection{Features}
\label{c_features}
\subsubsection{Deployment Approaches}
The deployment of \gls{ICN} into the \gls{TCP}/\gls{IP} architecture inevitably \review{raises} the following question: \emph{How to introduce the \gls{ICN} protocol into the \gls{TCP}/\gls{IP} protocol stack?} To achieve this aim, researchers identified three possible approaches, shown in \review{Fig.}~\ref{fig:Category}: \emph{overlay}, in case of \gls{ICN} running on top of the \gls{IP} protocol; \emph{underlay}, in case of \gls{ICN} running under the \gls{IP} protocol; and \emph{hybrid}, in case of a coexistence of both \gls{IP} and \gls{ICN} protocols\review{~\cite{RFC}}. In the \emph{overlay} deployment approach, the aim is to enable the communication among several \gls{ICN} ``islands'' in an \gls{IP} ``ocean'', which is achieved through a tunnel over the Internet protocol. On the contrary, the \emph{underlay} solution involves the introduction of proxies and protocol conversion gateways near either \gls{ICN} or \gls{IP} ``islands'' to properly deliver and receive outgoing and incoming requests. As an example, an \gls{HTTP} request sent to an \gls{ICN} ``island'' is intercepted by a gateway, which is responsible for translating it into an \gls{ICN} Interest. Then, the resulting \gls{ICN} data packet is translated again into an \gls{HTTP} reply sent back to the requester. Finally, the \emph{hybrid} approach supports the coexistence of both \gls{ICN} and \gls{IP} by adopting dual-stack nodes able to handle the semantics of both \gls{IP} and \gls{ICN} packets. Given the diversity of the two protocols, from a semantic and format point of view, a dual-stack node can use various options to infer content names from an \gls{IP} packet, such as performing deep packet inspection on the payload or looking up the content name in the \gls{IP} option header.
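As a sketch of the gateway translation step of the underlay approach, the following hypothetical Python fragment maps an \gls{HTTP} GET request onto a hierarchical \gls{ICN} name and wraps the returned data back into an \gls{HTTP} reply; the name mapping and packet representation are illustrative assumptions, not part of any standardized gateway.

```python
from urllib.parse import urlparse

def http_to_interest(http_request_line):
    """Map 'GET http://host/path HTTP/1.1' to a hierarchical ICN name."""
    method, url, _version = http_request_line.split()
    if method != "GET":
        raise ValueError("only GET maps cleanly to an Interest")
    u = urlparse(url)
    # e.g. http://example.org/videos/a.mp4 -> /example.org/videos/a.mp4
    return {"type": "Interest", "name": f"/{u.netloc}{u.path}"}

def data_to_http(data_packet):
    """Wrap an ICN Data packet payload into a minimal HTTP reply."""
    body = data_packet["content"]
    return f"HTTP/1.1 200 OK\r\nContent-Length: {len(body)}\r\n\r\n{body}"
```

A real gateway must additionally handle non-GET methods, chunked transfers, and the reverse (Interest-to-HTTP) direction, which is where most of the protocol-conversion complexity lies.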
\subsubsection{Deployment Scenarios}
The purpose of this feature is to analyze all the possible scenarios in which a coexistence architecture can be deployed among the others we identified and that are illustrated in \review{Fig.}~\ref{fig:coexistence_overview}.
\begin{figure}[h]
\centering
\includegraphics[width=0.55\textwidth]{images/deployment_scenarios.pdf}
\caption{Deployment scenarios for a coexistence architecture.}
\label{fig:coexistence_overview}
\end{figure}
Each deployment scenario involves two ``islands'', which run either the same networking architecture or two separate ones, surrounded by an \gls{ICN} or an \gls{IP} ``ocean''.
The possible different deployment scenarios are as follows:
\begin{itemize}
\item \emph{ICN-ICN communication in IP ``ocean''}.
\item \emph{ICN-IP communication in IP ``ocean''}.
\item \emph{ICN-IP communication in ICN ``ocean''}.
\item \emph{IP-IP communication in ICN ``ocean''}.
\item \emph{Border Island} - communication between different ``islands'' in separate ``oceans''.
\end{itemize}
\subsubsection{Addressed Coexistence Requirements}
In a coexistence scenario, the heterogeneity of the different networks might generate conflicts that prevent each individual architecture from guaranteeing its main features and properties. For example, since most of the \gls{ICN} architectures do not preserve the native transport functionalities provided by the \gls{TCP} protocol of the current Internet, one of their most significant limitations is traffic management. In a coexistence scenario, there would be a conflict between an \gls{IP} ``island'', which implements its own logic for managing the network traffic, and an \gls{ICN} ``island'', which does not support the same features.
Examining previous works~\cite{ren2015deployment}, we consider the following requirements as the necessary ones to be supported in a coexistence scenario:
\begin{itemize}
\item \emph{Forwarding} - the network forwarding devices should be able to handle packets with diverse routing identifiers (e.g., the variable-lengths of content names lead to dissimilar size of prefix-set and thus, different forwarding table look-ups).
\item \emph{Storage} - the network devices should support in-network caching to serve the content request and reduce bandwidth consumption. Nevertheless, the storage capacity of network devices also affects the size of the index table for the cached content and the time required to match the content name in the index table.
\item \emph{Security} - the network devices should preserve the security policies enforced in one (source) network to another (destination) network such as authenticating the digital signatures of content objects for content-based security or privacy policies.
\item \emph{Management} - the network devices should support management-related operations such as traffic-shaping/engineering, load-balancing, and explicit path steering.
\end{itemize}
\subsubsection{\review{Additional architecture or Technology Used}}
\gls{ICN} and \gls{IP} are not the only architectures that can coexist, and even the coexistence could be improved using other technologies. More specifically, \gls{ICN} well fits with several different technologies that are already deployed in the current Internet infrastructure. Among those, there are \gls{SDN}, \gls{NFV} or \gls{CDN}. The purpose of this feature is to collect all the architectures that the coexistence solutions involve.
\subsection{Evaluation Parameters}
\label{evaluation_parameters}
As evaluation parameters, we considered the following challenges arising during the deployment of a coexistence architecture in a real scenario:
\begin{itemize}
\item \emph{Access control} - in a networking context, access control uses a set of protocols to define, implement, and maintain policies that describe how the network nodes can be accessed by users/devices. Typically, it includes:
\begin{itemize}
\item Authorization, authentication, and accounting of network connections.
\item Identity and access management.
\item Mitigation of non-zero-day attacks.
\item Policy lifecycle management.
\item Role-based controls of user, device, application.
\item Security posture check.
\end{itemize}
\item \emph{Scalability} - it ensures that the overall performance of a network will not be affected by the size of the network. In other words, scalability describes the ability of a network to grow and manage increasing demand.
\item \emph{Dynamic network management} - it is the process of administering and managing dynamic changes in computer networks, such as topology changes and handovers for seamless host mobility.
\item \emph{Latency} - it is defined as the amount of time a message takes to traverse a system. In a computer network, it is typically measured as the time required for a packet to be returned to its sender. The major contributors to network latency include propagation delays, as well as delays introduced by routers and storage devices.
\item \emph{Traffic management} - \review{for a detailed description of the traffic management, we refer to Section~\ref{features}}.
\end{itemize}
\section{Classification of the Coexistence Architectures}
\label{coexistence_architectures}
The purpose of this section is to illustrate the classification of the coexistence architectures according to the features and the evaluation parameters described in Section~\ref{classification_criteria}. The summary of our findings is listed in Table~\ref{table:comparison}.
\begin{sidewaystable*}[!htbp]
\captionsetup{font=Large}
\centering
\resizebox{1.02\columnwidth}{!}
{
\begin{threeparttable}
\centering
\caption{\fontsize{14}{14}\selectfont Classification of the coexistence architectures (\ding{51}~Addressed ~ \ding{55}~Not addressed).}
\label{table:comparison}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
\multicolumn{3}{|c|}{\begin{tabular}[c]{@{}c@{}}\textbf{Parameter}\end{tabular}} & \rotatebox{90}{\textbf{PURSUIT~\cite{6231280}}} & \rotatebox{90}{\textbf{NetInf~\cite{Dannewitz:2013:NII:2459510.2459643}}} & \textbf{\rotatebox{90}{NDN~\cite{NDNProject} \& CCN~\cite{Jacobson}~}} & \rotatebox{90}{\textbf{O-ICN~\cite{7084921}}} & \rotatebox{90}{\textbf{CONET~\cite{detti2011conet}}} & \textbf{\rotatebox{90}{GreenICN~\cite{vahlenkamp2013enabling}}} & \textbf{\rotatebox{90}{coCONET~\cite{veltri2012supporting}}} & \rotatebox{90}{\textbf{DOCTOR~\cite{doctor}}} & \rotatebox{90}{\textbf{POINT~\cite{point}}} & \rotatebox{90}{\textbf{RIFE~\cite{rife}}} & \rotatebox{90}{\textbf{CableLabs~\cite{cableLabs}}} & \rotatebox{90}{\textbf{NDN-LAN~\cite{NDNLAN}}} & \rotatebox{90}{\textbf{\review{hICN}~\cite{hICN}}} & \textbf{\rotatebox{90}{OFELIA~\cite{melazzi2012openflow}}} \\ \hline
\multicolumn{3}{|c|}{\begin{tabular}[c]{@{}c@{}}Duration of the project/\\ Year of publication\end{tabular}} & \begin{tabular}[c]{@{}c@{}}2010\\ to\\ 2013\end{tabular} & \begin{tabular}[c]{@{}c@{}}2010\\ to\\ 2013\end{tabular} & \begin{tabular}[c]{@{}c@{}}2010\\ till\\ today\end{tabular} & 2015 & \begin{tabular}[c]{@{}c@{}}2010\\ to\\ 2013\end{tabular} & 2013 & 2012 & \begin{tabular}[c]{@{}c@{}}2014\\ to\\ 2017\end{tabular} & \begin{tabular}[c]{@{}c@{}}2015\\ to\\ 2017\end{tabular} & \begin{tabular}[c]{@{}c@{}}2015\\ to\\ 2018\end{tabular} & 2016 & 2017 & 2018 & 2012 \\ \hline
\multirow{13}{*}{Features} & \multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}Deployment\\ approaches\end{tabular}} & Overlay & \ding{51} & \ding{51} & \ding{51} & \ding{51} & \ding{51} & \ding{51} & \ding{51} & & & & & & & \\ \cline{3-17}
& & Underlay & & & & & & & & \ding{51} & \ding{51} & \ding{51} & \ding{51} & & & \\ \cline{3-17}
& & Hybrid & & & & & \ding{51} & & & & & & & \ding{51} & \ding{51} & \ding{51} \\ \cline{2-17}
& \multirow{5}{*}{\begin{tabular}[c]{@{}c@{}}Deployment\\ scenarios\end{tabular}} & \begin{tabular}[c]{@{}c@{}}ICN-ICN\\ communication\\ in IP ``ocean''\end{tabular} & \ding{51} & \ding{51} & \ding{51} & \ding{51} & \ding{51} & \ding{51} & \ding{51} & \ding{51} & & & \ding{51} & \ding{51} & \ding{51} & \\ \cline{3-17}
& & \begin{tabular}[c]{@{}c@{}}ICN-IP\\ communication\\ in IP ``ocean''\end{tabular} & & & & & & \ding{51} & \ding{51} & \ding{51} & & & \ding{51} & \ding{51} & \ding{51} & \\ \cline{3-17}
& & \begin{tabular}[c]{@{}c@{}}ICN-IP\\ communication\\ in ICN ``ocean''\end{tabular} & & & & & & & & \ding{51} & & & \ding{51} & \ding{51} & \ding{51} & \\ \cline{3-17}
& & \begin{tabular}[c]{@{}c@{}}IP-IP\\ communication\\ in ICN ``ocean''\end{tabular} & & & & & & & & \ding{51} & & & \ding{51} & \ding{51} & \ding{51} & \\ \cline{3-17}
& & Border Island & & & & \ding{51} & \ding{51} & & & \ding{51} & \ding{51} & \ding{51} & & & \ding{51} & \ding{51} \\ \cline{2-17}
& \multirow{4}{*}{\begin{tabular}[c]{@{}c@{}}Addressed\\ coexistence\\ requirements\end{tabular}} & Forwarding & \ding{51} & \ding{51} & \ding{51} & \ding{51} & \ding{51} & \ding{51} & \ding{51} & \ding{51} & \ding{51} & \ding{51} & \ding{51} & \ding{51} & \ding{51} & \ding{51} \\ \cline{3-17}
& & Storage & \ding{51} & \ding{51} & \ding{51} & \ding{51} & \ding{51} & & \ding{51} & \ding{51} & \ding{51} & \ding{51} & \ding{51} & \ding{51} & \ding{51} & \ding{51} \\ \cline{3-17}
& & Security & \ding{51} & \ding{51} & \ding{51} & & & & \ding{51} & \ding{51} & \ding{51} & \ding{51} & & & \ding{51} & \ding{51} \\ \cline{3-17}
& & Management & & & & & \ding{51} & \ding{51} & \ding{51} & \ding{51} & & & & & \ding{51} & \ding{51} \\ \cline{2-17}
& \multicolumn{2}{c|}{\review{Additional architecture or technology used}} & \begin{tabular}[c]{@{}c@{}}PSIRP\\ LAN\end{tabular} & & & \begin{tabular}[c]{@{}c@{}}SAIL\\ SDN\end{tabular} & & SDN & SDN & \begin{tabular}[c]{@{}c@{}}NFV\\ SDN\end{tabular} & \begin{tabular}[c]{@{}c@{}}PURSUIT\\ SDN\end{tabular} & \begin{tabular}[c]{@{}c@{}}PURSUIT\\ DTN\end{tabular} & CDN & LAN & DNS & \begin{tabular}[c]{@{}c@{}}CONET\\ SDN\end{tabular} \\ \hline
\multicolumn{2}{|c|}{\multirow{6}{*}{\begin{tabular}[c]{@{}c@{}}Evaluation\\ parameters\end{tabular}}} & \begin{tabular}[c]{@{}c@{}}Traffic\\ management\end{tabular} & \ding{55} & \ding{55} & \ding{55} & \ding{55} & & & & & & & \ding{55} & \ding{55} & & \\ \cline{3-17}
\multicolumn{2}{|c|}{} & \begin{tabular}[c]{@{}c@{}}Access\\ control\end{tabular} & & \ding{55} & & & & & & & & & & & & \\ \cline{3-17}
\multicolumn{2}{|c|}{} & Scalability & & & & \ding{55} & & \ding{55} & & & \ding{55} & & & \ding{55} & \ding{55} & \\ \cline{3-17}
\multicolumn{2}{|c|}{} & \begin{tabular}[c]{@{}c@{}}Dynamic\\ network\\ management\end{tabular} & & & & \ding{55} & & & & & \ding{55} & & & \ding{55} & & \\ \cline{3-17}
\multicolumn{2}{|c|}{} & Latency & & & & & & & & \ding{55} & \ding{55} & & & \ding{55} & \ding{55} & \\ \cline{3-17}
\multicolumn{2}{|c|}{} & Other & & \begin{tabular}[c]{@{}c@{}}NetInf\\ transport\\ functions\\ Interaction.\end{tabular} & & & \begin{tabular}[c]{@{}c@{}}New IP\\ option\\ overhead.\end{tabular} & \begin{tabular}[c]{@{}c@{}}SDN controller\\ must manage\\ every \gls{ICN}\\ request and\\ rewrite several\\ headers fields\\ for every\\ response packet.\end{tabular} & \begin{tabular}[c]{@{}c@{}}ICN capable\\ OpenFlow-\\ compliant\\ network.\end{tabular} & & & & \begin{tabular}[c]{@{}c@{}}Optimization\\ of CCN router,\\ cache/content\\ implementation,\\ protocol\\ translation\\ between CCN\\ and HTTP.\end{tabular} & & & \begin{tabular}[c]{@{}c@{}}OpenFlow-\\ compliant\\ networking\\ elements.\end{tabular} \\ \hline
\end{tabular}
\end{threeparttable}
}
\end{sidewaystable*}
\subsection{\gls{PURSUIT}}
PURSUIT~\cite{6231280} \review{was} a European project financed by the \gls{FP7}, which started in September 2010 and ended in February 2013. \gls{PURSUIT} is an evolution of the \gls{FP7} project \gls{PSIRP}~\cite{Dimitrov:2010:PPP:1839379.1839409}, proposing an \gls{ICN} model based on a source node, which publishes information, and a client node, which subscribes to the content it desires. If the information is available, it is delivered to the client. \gls{PURSUIT} aims at improving \gls{PSIRP}, while evaluating its performance, scalability, and coexistence with the current Internet. \review{Fig.}~\ref{fig:pursuit} shows a simplified form of the architecture proposed in the \gls{PURSUIT} project.
\begin{figure}[H]
\centering
\includegraphics[width=0.9\columnwidth]{./images/pursuit.pdf}
\caption{Simplified view of the \gls{PURSUIT} architecture.}
\label{fig:pursuit}
\end{figure}
The \gls{PURSUIT} architecture relies on the definition of a new data format and on the introduction of three new network components. \gls{PURSUIT} addresses data as information items, which consist of a pair of identifiers, i.e., \gls{RI} and \gls{SI}. The former represents the real piece of information, while the latter specifies the group to which the information belongs. The three additional network functions addressed by \gls{PURSUIT} are: \gls{RF}, \gls{TF}, and \gls{FF}. The \gls{RF} plays a fundamental role in \gls{PURSUIT}, since it maps subscribers to publishers and supports name resolution. Moreover, it also initializes the delivery of the information item to the client. The \emph{\gls{RP}} performs the \gls{RF} and relies on a hierarchical distributed hash table as internal data structure. The \emph{\gls{TM}} implements the \gls{TF} by deploying a routing protocol to collect the topology of its domain and by exchanging routing information with other domains for global routing. The \emph{\gls{FN}} implements the \gls{FF} and is also responsible for redirecting the information item to the client. In particular, the forwarding mechanism is label-based and uses a Bloom filter~\cite{Jokela2009LIPSINLS} to speed up the information delivery. In addition, the \emph{FN} also offers a caching facility.
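The label-based forwarding with in-packet Bloom filters (as in LIPSIN~\cite{Jokela2009LIPSINLS}) can be sketched as follows; the filter size, hash construction, and link identifiers are illustrative assumptions rather than the actual \gls{PURSUIT} parameters.

```python
import hashlib

K = 3    # number of hash functions (assumed)
M = 256  # Bloom filter size in bits (assumed)

def _positions(link_id):
    # Derive K deterministic bit positions for a link identifier.
    return [int(hashlib.sha256(f"{link_id}:{i}".encode()).hexdigest(), 16) % M
            for i in range(K)]

def encode_path(link_ids):
    """Topology manager side: OR the delivery path's link IDs into one filter."""
    bf = 0
    for lid in link_ids:
        for p in _positions(lid):
            bf |= 1 << p
    return bf

def forward(bf, outgoing_links):
    """Forwarding node side: send on every outgoing link whose ID matches."""
    return [lid for lid in outgoing_links
            if all(bf >> p & 1 for p in _positions(lid))]
```

Membership tests never miss a link that was encoded, but (as in any Bloom filter) unrelated links may occasionally match, causing the spurious extra transmissions that LIPSIN bounds by tuning $K$ and $M$.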
As shown in \review{Fig.}~\ref{fig:pursuit-node}, the \gls{PURSUIT} node internal architecture encompasses several components, enabling the publish/subscribe communication model among the different stack layers. The \emph{IPC Elements} implement a non-blocking inter-process mechanism, allowing user-space applications to issue publish/subscribe requests and communicate through the proposed prototype. The functionality of the \emph{Local Proxy} element is to maintain a local record of all the pending subscriptions and, after receiving a request, dispatch it to the appropriate functions (i.e., \emph{\gls{RF}}, \emph{\gls{FF}}, \emph{\gls{TF}}). Finally, the \emph{Communication Elements} are responsible for transmitting publications to the network.
The implementation of \gls{PURSUIT} is based on Click elements~\cite{Kohler:2000:CMR:354871.354874}: it creates Ethernet frames and forwards them to the appropriate network interface. In addition, it provides the ability to utilize raw \gls{IP} data packets as an alternative mechanism, which enables the prototype to be tested in Internet-wide scenarios.
\begin{figure}[H]
\centering
\includegraphics[width=0.4\textwidth]{./images/pursuit-stack-1.pdf}
\caption{Internal architecture of a \gls{PURSUIT} node.}
\label{fig:pursuit-node}
\end{figure}
\textbf{Deployment Approach.}
Trossen et al.~\cite{6231280} implemented a Layer-2 \gls{VPN}-based \emph{overlay} solution of \gls{PURSUIT} among multiple nodes located in Europe, the US, and Asia. The prototype was established and verified on three different testbeds for experimental purposes, functioning as an overlay on a \gls{LAN} environment.
To showcase a native \gls{ICN} application, multimedia streaming services were hosted as a demonstration, showing lossless transmission and comparable performance.
\textbf{Deployment Scenarios.}
The \emph{ICN-ICN communication in IP ``ocean''} is the most suitable scenario for deploying PURSUIT, as it is also confirmed by the \emph{overlay} approach adopted in the testbed.
\textbf{Addressed Coexistence Requirements.}
\gls{PURSUIT} guarantees the following three coexistence requirements:
\begin{itemize}
\item Forwarding - this is specifically provided by the \emph{FN}, a software-based forwarder used for \gls{ICN} messages exchange.
\item Storage - the \emph{FN}, which has the responsibility of redirecting information to the client, provides caching facility to furnish storage of information.
\item Security - the security measures provided by \gls{PURSUIT} refer to the access of information. Besides gathering information into groups, \gls{PURSUIT} supports the information categorization into scopes, used for the definition of access privileges and policy implementations.
\end{itemize}
\textbf{\review{Additional architecture or Technology Used.}}
\gls{PURSUIT} is an evolution of the \gls{PSIRP} project and its testbed has been realized as an \emph{overlay} solution over a \gls{LAN} environment.
\textbf{Evaluation Parameters.}
The main issue introduced by the \emph{overlay} deployment of the PURSUIT architecture is traffic management. This is mainly due to the existing Internet applications and protocols, which are not completely compatible with the techniques implementing \gls{ICN} over \gls{TCP}/\gls{IP} or \gls{UDP}~\cite{ccnxudp,Zhang,NDNLP,ccnx-1.0} for traffic transport. Thus, many applications and protocols, such as \gls{HTTP}-based multimedia streaming protocols, might face false throughput estimations~\cite{wowmom2018}. This is due to the \gls{TCP} aggressiveness in the presence of variations in content source location (e.g., dynamic caching and interest aggregation)~\cite{CONTI2018209}.
\vspace{0.4cm}
\subsection{\gls{NetInf}}
The \gls{NetInf} architecture~\cite{Dannewitz:2013:NII:2459510.2459643} is the approach proposed by the European \gls{FP7} project SAIL~\cite{sail}, which started in January 2010 and ended in February 2013. The key component of the \gls{NetInf} architecture is the \gls{CL}, which is able to map the information, expressed through any protocol (e.g., \gls{HTTP}, \gls{TCP}, \gls{IP}, Ethernet), into specific messages compliant with a general communication paradigm. In particular, when two nodes communicate with each other, the functionality of a \gls{CL} is to provide framing and message integrity to \gls{NetInf} requests and responses.
\review{Fig.}~\ref{fig:NetInf-node} depicts the different \gls{CL}s designed within the \gls{NetInf} stack. In particular, the \gls{CL}s encompass an additional function (i.e., \emph{Request Scheduling}) between the \emph{NetInf Application} and the \emph{NetInf Protocol}. The \emph{CL1} operates over Ethernet, while \emph{CL2} makes \gls{NetInf} able to operate over a variety of network links and protocols, such as \gls{HTTP}, \gls{TCP}/\gls{IP}, and \gls{WLAN}. The \gls{CL}s also provide transport layer functions across different nodes, such as flow control, congestion control, and reliability.
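A minimal sketch of the \gls{CL} idea, under the assumption that a \gls{CL} only needs to frame a \gls{NetInf} message for its underlying transport and recover it intact on the receiving side (the class and method names are hypothetical, not part of the SAIL prototype):

```python
import json
import struct

class TcpConvergenceLayer:
    """Frame NetInf request/response messages for a byte-stream transport."""

    def frame(self, msg: dict) -> bytes:
        payload = json.dumps(msg, sort_keys=True).encode()
        # A length prefix restores message boundaries on a TCP byte stream,
        # providing the framing function attributed to the CLs above.
        return struct.pack("!I", len(payload)) + payload

    def deframe(self, data: bytes) -> dict:
        (length,) = struct.unpack("!I", data[:4])
        return json.loads(data[4:4 + length].decode())
```

Per-transport variants (e.g., an Ethernet \gls{CL}) would keep the same interface while changing only how the framed bytes are carried, which is precisely what lets \gls{NetInf} run unchanged over heterogeneous links.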
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.3\textwidth]{./images/Netinf-stack-3.pdf}
\caption{Internal architecture of a \gls{NetInf} node.}
\label{fig:NetInf-node}
\end{figure}
\textbf{Deployment Approach.}
\gls{NetInf} adheres to the \emph{overlay} deployment approach, as confirmed by its first prototypes, which were deployed following an \emph{overlay} strategy over \gls{TCP}/\gls{UDP}.
\textbf{Deployment Scenarios.}
The \gls{NetInf} architecture supports the \emph{ICN-ICN communication in IP ``ocean''} scenario.
\textbf{Addressed Coexistence Requirements.}
The coexistence requirements provided by \gls{NetInf} are as follows:
\begin{itemize}
\item Forwarding - \gls{NetInf} guarantees both name-based forwarding and name resolution; \gls{NetInf} message forwarding protocol relies on the lower-layer networking technology (e.g., \gls{TCP} connection between two Internet hosts) and this communication is provided by the \gls{CL}s.
\item Storage - \gls{NetInf} nodes support both on-path and off-path caching.
\item Security - the \gls{CL}s are responsible for the integrity of the messages exchanged in the architecture.
\end{itemize}
\textbf{\review{Additional architecture or Technology Used.}}
Besides the standard \gls{TCP}/\gls{UDP}/\gls{IP} tunneling, which is part of the \emph{overlay} approach, \gls{NetInf} does not rely on additional architectures.
\textbf{Evaluation Parameters.}
The deployment of the \gls{NetInf} architecture in a coexistence scenario introduces the following challenges: traffic management, due to the absence of interaction between the \gls{CL}s' transport functions and the \gls{NetInf} transport functions, and access control. The first issue refers to the \gls{CL}s, which are responsible for the interconnection of different types of networks into a single \gls{ICN} network. For example, the interaction among underlying protocols that provide very different communication services creates new challenges (e.g., from uni-directional, opportunistic message forwarding to flow- and congestion-controlled higher-layer communication services; from delay-challenged to high-speed optical backbone networks). Concerning the access control limitation, \gls{NetInf} does not allow applying controls over the accessibility levels of the information. Thus, anyone can access the published data without any restriction.
\vspace{0.4cm}
\subsection{\gls{NDN} and \gls{CCN}}
Among the existing implementations of the \gls{CCN} paradigm~\cite{Jacobson}, funded by the \gls{NSF}~\cite{NSF} as part of the Future Internet Architectures program, there is the \gls{NDN} research project~\cite{NDNProject}. Since its first design in late 2010, the \gls{NDN} main idea has been to shift the existing \gls{IP} host-to-host communication into a data-oriented one by leveraging an increased responsibility of the routers. Upon receiving a request for a content, the routers first check whether the content is already present in their cache (i.e., the Content Store). If this is the case, they immediately return the content; otherwise, they check the \gls{PIT}, searching for a pending request issued for the same content. If the PIT already contains an entry for the specific content, the routers just collapse the current request into the PIT. If none of the previous cases applies, the routers forward the request to the next node in the network using the FIB, and wait for the associated data to return. Once the data packet arrives, all the pending interests for that content are satisfied by sending a copy of the data to all the hosts that requested it.
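The Interest processing order described above (Content Store, then \gls{PIT}, then FIB with longest-prefix match) can be sketched as follows; the data structures are deliberately simplified and the API is ours, not the actual \gls{NDN} forwarder implementation.

```python
class NdnRouter:
    """Sketch of the Interest pipeline: Content Store -> PIT -> FIB."""

    def __init__(self, fib):
        self.cs = {}    # Content Store: name -> data
        self.pit = {}   # PIT: name -> set of requesting faces
        self.fib = fib  # FIB: name prefix -> next-hop face

    def on_interest(self, name, in_face):
        if name in self.cs:                      # 1. cache hit: answer locally
            return ("data", self.cs[name], [in_face])
        if name in self.pit:                     # 2. pending: collapse request
            self.pit[name].add(in_face)
            return ("aggregated", None, [])
        self.pit[name] = {in_face}               # 3. forward via FIB
        prefix = max((p for p in self.fib if name.startswith(p)),
                     key=len, default=None)      # longest-prefix match
        return ("forward", self.fib.get(prefix), [])

    def on_data(self, name, data):
        self.cs[name] = data                     # cache on the return path
        faces = self.pit.pop(name, set())        # satisfy all pending faces
        return ("data", data, sorted(faces))
```

Note how a single returning Data packet fans out to every face recorded in the \gls{PIT} entry, which is the mechanism behind the implicit multicast delivery mentioned below.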
As shown in \review{Fig.}~\ref{fig:NDN-node}, \gls{NDN} introduces some changes into the \gls{IP} stack by adding the novel \emph{Security} and \emph{Strategy} layers: the first refers to the \gls{NDN} design choice of securing the content itself instead of the communication channel between two nodes (which is how \gls{IP} works); the second substitutes the network layer and provides the forwarding plane to forward \emph{Content chunks}, selecting the best options to maintain multiple connectivities under varying conditions. In addition, the \emph{Strategy} layer also supports security, scalability, efficiency, and resiliency. Finally, \gls{NDN} modifies the \emph{Transport Layer}, making it consumer-driven instead of producer-driven~\cite{6193510, CAROFIGLIO2016104}, and moving it into the \gls{NDN} forwarding plane.
\begin{figure}[H]
\centering
\includegraphics[width=0.33\textwidth]{./images/ndn-stack-3.pdf}
\caption{\gls{NDN} network stack~\cite{Zhang}.}
\label{fig:NDN-node}
\end{figure}
\textbf{Deployment Approach.}
The common implementations of \gls{NDN} and \gls{CCN} include \emph{overlay} protocols, such as CCNx~\cite{ccnx-1.0} and NDNLP~\cite{NDNLP}, which are deployed over the existing \gls{IP} infrastructure. For instance, CCNx~\cite{ccnxudp} provides an explicit example of overlay by implementing \gls{CCN}-over-\gls{UDP}. In particular, it provides a method to transport CCNx messages between two nodes over \gls{UDP}. Moreover, a concrete example of an \gls{NDN} overlay architecture is provided by the ndn-testbed\footnote{https://named-data.net/ndn-testbed/}, which connects multiple \gls{NDN} nodes located in several continents over the existing \gls{TCP}/\gls{IP} infrastructure. The services provided in the trials of \gls{CCN}/\gls{NDN} include various projects, such as real-time video-conferencing~\cite{Gusev2015NDNRTCRV}, adaptive bit-rate streaming (not limited to end-to-end)~\cite{dashoverccn,dashovericn,ndnavs}, and ndnSIM (an \gls{NDN} simulator module for NS-3)~\cite{399}.
\textbf{Deployment Scenarios.}
\gls{NDN} supports the \emph{ICN-ICN communication in IP ``ocean''} scenario, as it is confirmed by the ndn-testbed.
\textbf{Addressed Coexistence Requirements.}
\gls{NDN} guarantees the following three coexistence requirements:
\begin{itemize}
\item Forwarding - the router's FIB is responsible for forwarding interests towards the content provider via one or more network interfaces based on the routes to the origin node(s). The requested data packet is then forwarded towards the requester by simply traversing, in reverse, the path of the preceding interest~\cite{Zhang}. \gls{NDN} supports also the multicast data routing, which improves receiver-driven multimedia delivery.
\item Storage - \gls{NDN} routers are enabled to cache contents.
\item Security - \gls{NDN} provides a data-centric security model where each data unit is uniquely signed by the data producer \cite{8539023}.
\end{itemize}
\textbf{\review{Additional architecture or Technology Used.}}
Besides the standard \gls{TCP}/\gls{UDP}/\gls{IP} tunneling, which is part of the \emph{overlay} approach, the \gls{NDN} project does not rely on additional architectures.
\textbf{Evaluation Parameters.}
The tunneling approach, where \gls{NDN}/\gls{CCN} endpoints communicate over \gls{IP}~\cite{TCP/ICN,367}, forgoes the fundamental advantages of content-oriented networking (i.e., in-network caching and multicast forwarding), and the architectures implementing hop-by-hop connection-less (or connection-oriented) connectivity (i.e., over \gls{TCP}/\gls{UDP}) suffer from a lack of traffic management~\cite{CONTI2018209}.
In \gls{NDN}/\gls{CCN} networks, \gls{CA} is operated by the consumer rather than by the producer (server). This means that the Interest transmission rate is adapted in order to ensure that the delivery of a requested resource makes maximum fair use of the network. Existing \gls{NDN}/\gls{CCN} \gls{CA} algorithms are largely based on the \gls{TCP} \gls{CA} algorithms, which assume that the bandwidth-delay product of the network fluctuates relatively slowly, as all the data packets traverse the same path from server to client. However, in an \gls{NDN}/\gls{CCN} network, content objects may be retrieved from various locations and may reach the consumer through different paths. Thus, the concept of a bandwidth-delay product related to a single path and the use of \gls{TCP} \gls{CA} algorithms do not fit \gls{NDN}/\gls{CCN} networks. In the \gls{NDN}/\gls{CCN} community, this is an active research area~\cite{Schneider:2016:PCC:2984356.2984369}.
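A consumer-driven \gls{CA} loop of the kind discussed above can be sketched as a simple AIMD window on outstanding Interests; the parameters and update rules below are illustrative and do not correspond to any specific published algorithm.

```python
class InterestWindow:
    """AIMD-style consumer congestion control sketch (assumed parameters)."""

    def __init__(self, init=1.0, max_window=64.0):
        self.cwnd = init              # max Interests allowed in flight
        self.max_window = max_window

    def on_data(self):
        # Additive increase: grow by ~one Interest per window of Data,
        # mirroring TCP congestion avoidance on the consumer side.
        self.cwnd = min(self.cwnd + 1.0 / self.cwnd, self.max_window)

    def on_timeout(self):
        # Multiplicative decrease on a presumed congestion signal
        # (an unanswered Interest).
        self.cwnd = max(1.0, self.cwnd / 2.0)
```

The multipath problem described above shows up exactly here: a timeout on one retrieval path halves the window for all paths, even those that are not congested, which is one motivation for the per-path schemes surveyed in~\cite{Schneider:2016:PCC:2984356.2984369}.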
\vspace{0.4cm}
\subsection{O-ICN}
\gls{O-ICN}~\cite{7084921} is a novel architecture, which leverages the \gls{SDN} technology to separate data plane activities (i.e., forwarding and storing/caching of \gls{ICN} contents) from control plane activities (i.e., naming, name resolution, and routing). In particular, O-ICN introduces the \gls{ICN} Manager as an extended version of a \gls{DNS} server, which performs name resolution for both \gls{ICN} and non-\gls{ICN} requests. In case of an \gls{ICN} request, the \gls{ICN} Manager identifies the source of the content and sends the user's address to it, so that the source can route the requested content back to the user. In case of a non-\gls{ICN} request, the standard \gls{TCP}/\gls{IP} routing mechanism is followed. The naming scheme adopted by O-ICN is hybrid, i.e., both human-readable and self-certifying, as in the SAIL architecture~\cite{sail}. Finally, the existing routers are modified to cache contents and communicate with the \gls{ICN} Manager.
\review{Fig.}~\ref{fig:oicn_1} depicts the position of the novel \emph{ICN-sublayer} proposed by O-ICN, which lies between the \gls{TCP}/\gls{IP} \emph{Application Layer} and \emph{Transport Layer}. More specifically, \review{Fig.}~\ref{fig:oicn_2} describes the fields used by the new layer: the \gls{ICN} flag bit (\emph{F}), equal to 0 for an \gls{ICN} request or to 1 for an \gls{ICN} content; the three subsequent bits (1--3), reserved for additional purposes; and the remaining 28 bits for the total \gls{ICN} header~\cite{Agrawal2018}.
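Under this layout (1 flag bit, 3 reserved bits, 28 header bits in a 32-bit word), the sublayer word could be parsed as follows; placing the flag in the most significant bit is our assumption for illustration, not a detail fixed by the O-ICN specification.

```python
def parse_icn_sublayer(word):
    """Parse a 32-bit O-ICN sublayer word (bit layout assumed above)."""
    flag = word >> 31 & 0x1        # F: 0 = ICN request, 1 = ICN content
    reserved = word >> 28 & 0x7    # three reserved bits
    header = word & 0x0FFFFFFF     # remaining 28 bits of the ICN header
    return {"F": flag, "reserved": reserved, "header": header}
```

The three fields add up to exactly 32 bits, so the sublayer costs one word per packet on top of the standard \gls{TCP}/\gls{IP} headers.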
\begin{figure}[b]
\centering
\begin{subfigure}[t]{\columnwidth}
\centering
\includegraphics[width=0.70\textwidth]{images/O-ICN-node1.pdf}
\caption{Position of the \gls{ICN} sublayer.}
\label{fig:oicn_1}
\end{subfigure}
~
\begin{subfigure}[t]{\columnwidth}
\centering
\includegraphics[width=0.70\textwidth]{images/O-ICN-node2.pdf}
\caption{Detail of the \gls{ICN} sublayer header format.}
\label{fig:oicn_2}
\end{subfigure}
\caption{Internal architecture of an O-ICN node.}
\label{fig:OICN-node}
\end{figure}
\textbf{Deployment Approach.}
O-ICN relies on an \emph{overlay} deployment solution by leveraging the \gls{ICN} Manager, which performs dual tasks: name resolution, along with routing functionalities for \gls{ICN} requests, and standard \gls{DNS} resolution for the existing Internet requests. \review{To evaluate the O-ICN architecture, the authors in~\cite{Agrawal:2018:OSN:3265997.3266000} present the Overlay \gls{ICN} simulator (OICNSIM)\footnote{https://www.nsnam.org/wiki/Contributed\_Code}, an ns-3 based simulator in which each O-ICN component is provided with helper classes and which supports a wide variety of deployment scenarios. As an example, in~\cite{Agrawal:2018:OSN:3265997.3266000}, the authors studied the performance of OICNSIM under different \gls{ICN} caching policies.}
\textbf{Deployment Scenarios.}
O-ICN supports the \emph{ICN-ICN communication in IP ``ocean''} scenario. Moreover, thanks to the \gls{ICN} Manager's capability of handling both \gls{ICN} and non-\gls{ICN} requests, O-ICN can also support the \emph{Border Island} deployment scenario.
\textbf{Addressed Coexistence Requirements.}
The coexistence requirements addressed by O-ICN are as follows:
\begin{itemize}
\item Forwarding - the \gls{ICN} Manager is responsible for the forwarding strategy.
\item Storage - the data plane activities involve tactical storing/caching of \gls{ICN} contents at different locations/routers/gateways and are managed by \gls{ICN} routers.
\end{itemize}
\textbf{\review{Additional architecture or Technology Used.}}
O-ICN exploits the SAIL solution for the naming scheme and the \gls{SDN} technology for a separate management of data plane and control plane activities.
\textbf{Evaluation Parameters.}
Like the previous \emph{overlay} approaches, O-ICN suffers from a lack of traffic management. In addition, the overall solution suffers from scalability problems, and the \gls{ICN} Manager is not able to guarantee its \gls{DNS} functionalities under dynamic network conditions.
\vspace{0.4cm}
\subsection{CONET}
CONET~\cite{detti2011conet} is an architecture designed for connecting several \emph{\gls{CSS}}, which could be the whole Internet network, an \gls{IP} autonomous system, or a pair of connected network components. The main components of the CONET design, shown in \review{Fig.}~\ref{fig:conet}, are as follows: \emph{\gls{EN}}, \emph{\gls{SN}}, \emph{\gls{BN}}, \emph{\gls{IN}}, and \emph{\gls{NSN}}. An \emph{EN} requests some named-data by issuing an interest routed by the \emph{\gls{BN}s}, which are located at the border of \emph{\gls{CSS}s}. The route-by-name process identifies the \emph{\gls{CSS}} address of the next \emph{\gls{BN}} that is closest to the \emph{\gls{SN}}, until the appropriate \emph{\gls{CSS}} is reached. Then, the \emph{\gls{IN}s} forward the packet using the under-CONET routing engine. The \emph{\gls{CSS}} address of the \emph{\gls{EN}} and the \emph{\gls{CSS}} addresses of the traversed nodes are appended to the packet. As soon as a CONET node is found that is able to provide the requested named-data, the content is sent back on the reverse path to serve the requesting \emph{\gls{EN}}. All \emph{\gls{BN}s} and \emph{\gls{IN}s} along the traversed path may cache the content.
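The address-appending mechanism above can be modelled as a tiny source-route accumulation: each traversed node records its CSS address in the interest, and the data later walks the recorded list backwards. This is a toy model of our own, for illustration only.

```python
# Toy model (ours, not CONET code) of CONET's reverse-path delivery:
# each traversed node's CSS address is appended to the interest packet,
# and the returned data follows the recorded route in reverse.

def forward_interest(path_to_source, requester_addr):
    """Simulate an interest crossing nodes toward the serving node,
    collecting traversed CSS addresses along the way."""
    packet = {"name": "content/x", "route": [requester_addr]}
    for css_addr in path_to_source:      # route-by-name, hop by hop
        packet["route"].append(css_addr)
    return packet

def return_data(packet):
    """The named-data retraces the recorded route back to the EN."""
    return list(reversed(packet["route"]))
```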
\begin{figure}[H]
\centering
\includegraphics[width=0.48\textwidth]{./images/conet.pdf}
\caption{Simplified view of the CONET architecture.}
\label{fig:conet}
\end{figure}
\textbf{Deployment Approach.}
The CONET architecture can follow either an \emph{overlay} or a \emph{hybrid} deployment approach. In the first case, CONET works on top of the \gls{IP} layer and the \emph{\gls{CSS}s} are nodes connected by overlay links (e.g., \gls{UDP}/\gls{IP} tunnels). In the second approach, the purpose is to make \gls{IP} content-aware by introducing a novel IPv4 option or an IPv6 extension header. The network components will then have hybrid routing tables containing both \gls{IP} network addresses and names.
\textbf{Deployment Scenarios.}
Considering the \emph{overlay} solution, CONET supports the \emph{ICN-ICN communication in IP ``ocean''} scenario. On the contrary, the \emph{hybrid} approach allows it to be deployed in the \emph{Border Island} scenario as well.
\textbf{Addressed Coexistence Requirements.}
CONET guarantees the following three coexistence requirements.
\begin{itemize}
\item Forwarding and Management - these are guaranteed by \emph{\gls{BN}s} and \emph{\gls{NSN}s}. In addition, \emph{\gls{EN}s} provide transport-level functionalities such as reliability and flow control. Since content is requested by sending separate interests, each covering a small part of the named-data, controlling the interest sending rate provides a \gls{TCP}-like flow control mechanism.
\item Storage - \emph{\gls{BN}s} are able to store contents.
\end{itemize}
\textbf{\review{Additional architecture or Technology Used.}}
Besides the standard \gls{TCP}/\gls{UDP}/\gls{IP} tunneling, which is part of the \emph{overlay} approach, the CONET project does not rely on additional architectures.
\textbf{Evaluation Parameters.}
The \emph{hybrid} deployment solution is hard to introduce, since it requires a new \gls{IP} option. However, compared to the \emph{clean-slate} approach, the \emph{hybrid} one is less disruptive, and it allows the architecture to be deployed in different scenarios.
\vspace{0.4cm}
\subsection{GreenICN}
The \gls{SDN} technology decouples the control plane from the data plane, and it provides a programmable, centrally managed network control that improves network performance and monitoring. \gls{SDN}-based implementations of \gls{ICN} exploit the centralized view available to the \gls{SDN} controller, which enables it to install appropriate forwarding rules for \gls{ICN} requests/responses in such a manner that the network elements only have to support \gls{IP} forwarding. Vahlenkamp~et~al. in~\cite{vahlenkamp2013enabling} proposed an implementation of \gls{ICN} using \gls{SDN} under their GreenICN project. The proposal leverages the \gls{ICN} protocol's Message IDs and features of \gls{SDN} instantiations such as OpenFlow to rewrite packet header information. \review{Fig.}~\ref{fig:Vahlenkamp13} presents a simplified view of this solution. Here, both the \emph{Content requester} and the \emph{Content source} are connected to \emph{OpenFlow-enabled switches} that are managed by the \emph{\gls{SDN} controller}. Routing information for content requests and responses, upon arriving at OpenFlow switches, is handled/rewritten according to the instructions from the controller.
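The controller-driven rewriting just described can be condensed into a minimal sketch: a controller table keyed by message ID, and a switch that applies the matching rewrite so that ordinary IP forwarding delivers the packet. This is our own simplification, not the GreenICN code; the class and field names are hypothetical.

```python
# Minimal sketch (ours) of controller-installed header rewriting for
# ICN-over-SDN: the controller maps ICN message IDs to flow actions,
# and the switch rewrites the destination so plain IP forwarding works.

class Controller:
    def __init__(self):
        self.flows = {}  # message ID -> rewritten destination IP

    def install_flow(self, msg_id, new_dst_ip):
        self.flows[msg_id] = new_dst_ip

class Switch:
    def __init__(self, controller):
        self.controller = controller

    def handle(self, pkt):
        # Apply the controller's rewrite rule if one matches this
        # message ID; otherwise forward the packet unchanged.
        new_dst = self.controller.flows.get(pkt["msg_id"])
        if new_dst is not None:
            pkt = dict(pkt, dst=new_dst)
        return pkt
```

The scalability concern raised under Evaluation Parameters follows directly from this design: every ICN request passes through the controller's table.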
\begin{figure}[H]
\centering
\includegraphics[width=0.45\textwidth]{./images/Vahlenkamp13.pdf}
\caption{Simplified view of the GreenICN architecture.}
\label{fig:Vahlenkamp13}
\end{figure}
\textbf{Deployment Approach.} The proposed solution is an \emph{overlay} \gls{ICN} implementation as \gls{ICN} data is sent over the \gls{SDN}-managed \gls{IP} packets.
\textbf{Deployment Scenarios.}
Essentially, the authors in~\cite{vahlenkamp2013enabling} propose \gls{ICN} deployment over \gls{IP} network, where an \gls{ICN}-aware content source delivers the content to an \gls{ICN}-aware requester over \gls{IP} network. Hence, this solution supports both the \emph{ICN-ICN communication in IP ``ocean''} and the \emph{ICN-IP communication in IP ``ocean''} scenarios.
\textbf{Addressed Coexistence Requirements.}
The architecture addresses the following coexistence requirements:
\begin{itemize}
\item Forwarding - network programmability offered by \gls{SDN} enables forwarding and routing for \gls{ICN}.
\item Management - \gls{SDN} centrally managed network control supports load-balancing, traffic engineering, and explicit path steering (e.g., through \gls{ICN} caches).
\end{itemize}
\textbf{\review{Additional architecture or Technology Used.}} The authors argue that an ideal or native deployment of \gls{ICN}, in which user devices, content sources, and intermediary network elements are \gls{ICN} aware, may not be viable. Hence, the authors proposed to implement \gls{ICN}-awareness in the \gls{SDN}-enabled switches, where \gls{ICN} packets are carried over the \gls{IP} transport protocol. By using \gls{SDN}, the authors target all the services/applications of the \gls{TCP}/\gls{IP} protocol stack.
\textbf{Evaluation Parameters.} In the proposed \gls{ICN} implementation, the \gls{SDN} controller must manage every \gls{ICN} request and rewrite several header fields for every response packet, which might not scale with increased network size. Given that this solution is based on the widely accepted \gls{SDN} technology, which supports agile deployment and rapid alteration in networking, the hardware modifications required for its deployment are low in those scenarios where an \gls{SDN} infrastructure already exists. Consequently, the time required for its deployment is also low. Nevertheless, the time and the hardware modifications required for its deployment would be higher if the \gls{SDN} infrastructure does not already exist.
\vspace{0.4cm}
\subsection{coCONET}
Similar to the work~\cite{vahlenkamp2013enabling}, Veltri~et~al.~\cite{veltri2012supporting} proposed a CONET~\cite{detti2011conet} inspired \gls{SDN}-based implementation of \gls{ICN}, called coCONET. \review{Fig.}~\ref{fig:Veltri12} presents a simplified view of this solution. In this architecture, \gls{ICN} nodes and user-terminals form the data plane and \emph{Name Resolution Service (NRS)} nodes are placed in the control plane. Moreover, \emph{\gls{ICN} node} works as an OpenFlow switch, while \emph{NRS node} works as an OpenFlow controller. To this end, the authors proposed to extend the OpenFlow protocol~\cite{mckeown2008openflow}.
\begin{figure}[h]
\centering
\includegraphics[width=0.45\textwidth]{./images/Veltri12.pdf}
\caption{Simplified view of the coCONET architecture.}
\label{fig:Veltri12}
\end{figure}
\textbf{Deployment Approach.} Similar to the work~\cite{vahlenkamp2013enabling}, the proposed solution is an \emph{overlay} \gls{ICN} implementation as \gls{ICN} data is encapsulated inside the \gls{SDN}-based \gls{IP} packets.
\textbf{Deployment Scenarios.}
The proposed solution enables the \emph{ICN-ICN communication in IP ``ocean''} and the \emph{ICN-IP communication in IP ``ocean''} scenarios, where the underlying \gls{IP} network is managed by OpenFlow-based \gls{SDN} network.
\textbf{Addressed Coexistence Requirements.}
The present architecture provides the following coexistence requirements:
\begin{itemize}
\item Forwarding and Management - \gls{SDN}-based operations of the proposed approach support both forwarding and management of \gls{ICN} traffic.
\item Storage - \gls{ICN} capable nodes cache the contents.
\item Security - contents are cryptographically protected to ensure content (and content generator) authentication and data integrity. This security service is provided through a digital signature and can be verified through the public key associated with the private key of the content (or of the content generator). The proposed system requires every \gls{ICN} node to verify such a signature before forwarding the content toward the interested end-nodes, in order to protect the network against \gls{DoS} and other attacks.
\end{itemize}
\textbf{\review{Additional architecture or Technology Used.}} Here, the authors focus specifically on OpenFlow-based \gls{SDN} implementations (OpenFlow being a flavor of \gls{SDN}) and target all the services/applications of the \gls{TCP}/\gls{IP} protocol stack.
\textbf{Evaluation Parameters.} The proposed solution requires \gls{ICN} capable OpenFlow network devices for \gls{ICN} operations. Due to such specific requirements, the hardware modifications and the time required for its deployment are high.
\subsection{DOCTOR}
\gls{DOCTOR}~\cite{doctor} is an ongoing project funded by the French National Research Agency. The project provides support towards the adoption of new standards by developing a secure use of virtualized network equipment. This eases the deployment of novel networking architectures, thus enabling the coexistence of \gls{IP} and emerging stacks, such as \gls{NDN}, as well as the progressive migration of traffic from one stack to the other. DOCTOR proposes the use of an \gls{NFV} infrastructure to achieve the incremental deployment of \gls{NDN} at a low cost. The project proposes an HTTP/\gls{NDN} gateway to interconnect \gls{ICN} ``islands'' with the \gls{IP} world, and an experimental architecture able to process the web traffic passing through a virtualized \gls{NDN} network.
\par In particular, DOCTOR first deploys a virtual network based on OpenvSwitch to provide an end-to-end network connectivity between the virtualized network services and to enable a software control of the networking infrastructure. Then, it selects \gls{NDN} as an \gls{ICN} protocol stack. More specifically, the NDNx software is \textit{dockerized} to become a \gls{VNF}, deployable in DOCTOR architecture. In DOCTOR, \gls{NDN} is used both over \gls{IP} and over Ethernet since most \gls{NFV} tools are still \gls{IP}-dependent. To test the functionality of the coexistence, the web is considered as an application layer service due to its high popularity and predominance in the global network shares. However, since the current web clients and servers do not yet implement \gls{NDN}, dedicated gateways are used to perform an HTTP/\gls{NDN} conversion. Since these gateways are conceived as \gls{VNF}s, they can be deployed where and when required. In particular, two types of gateways are defined: (1) an \gls{iGW}, aimed at converting HTTP requests into \gls{NDN} Interest messages and \gls{NDN} Data messages into HTTP replies; (2) an \gls{eGW}, aimed at converting \gls{NDN} messages into HTTP requests, if the content is not available in the \gls{ICN} network, and HTTP replies into \gls{NDN} Data messages. \review{Fig.}~\ref{Fig:doctor} shows the high level architecture of a virtualized node in DOCTOR. The virtualized node is implemented on a single Linux server and it provides the required hardware resources for the \gls{VNF}s, which can act as various components (e.g., \gls{NDN} stack, \gls{IP} stack, and HTTP/\gls{NDN} gateway).
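The iGW translation outlined above — HTTP requests into Interest names and Data payloads back into HTTP replies — can be sketched in a few lines. The function names and the name-prefix convention (\texttt{/http/\ldots}) below are hypothetical illustrations of ours, not the DOCTOR gateway's actual mapping.

```python
# Rough sketch (names and name-prefix convention are our assumptions)
# of an iGW-style HTTP-to-NDN translation: an HTTP GET becomes an
# NDN-style Interest name, and an NDN Data payload is wrapped back
# into a minimal HTTP response.

def http_to_interest(host: str, path: str) -> str:
    """Build an NDN-style hierarchical name from an HTTP request line."""
    components = [host] + [c for c in path.split("/") if c]
    return "/http/" + "/".join(components)

def data_to_http_reply(payload: bytes) -> bytes:
    """Wrap an NDN Data payload into a minimal HTTP/1.1 response."""
    head = "HTTP/1.1 200 OK\r\nContent-Length: %d\r\n\r\n" % len(payload)
    return head.encode() + payload
```

An eGW would perform the inverse conversion when the content is not available inside the ICN network.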
\begin{figure}[ht!]
\centering
\includegraphics[width=.9\columnwidth]{images/DOCTOR.pdf}
\caption{Internal architecture of a DOCTOR virtualized node.}
\label{Fig:doctor}
\end{figure}
\textbf{Deployment Approach.} DOCTOR uses an \textit{underlay} approach with the help of HTTP/\gls{NDN} gateways, which map the HTTP protocol onto \gls{NDN} messages and properly deliver the web content.
\textbf{Deployment Scenarios.} The \gls{iGW} and \gls{eGW} allow DOCTOR to support all the different deployment scenarios.
\textbf{Addressed Coexistence Requirements.}
The DOCTOR architecture addresses the following coexistence requirements:
\begin{itemize}
\item Forwarding - explicit name based routing of \gls{NDN} is performed at each router through the use of virtualized \gls{NDN} stack.
\item Storage - content stores perform the content caching.
\item Security - DOCTOR supports the same content oriented security as \gls{NDN}.
\item Management - the control and management plane of \gls{VNF}s in DOCTOR has been designed with respect to the recommendations of the ETSI \gls{NFV} group, concerning the \gls{NFV} \gls{MANO}~\cite{docmang}.
\end{itemize}
\textbf{\review{Additional architecture or Technology Used.}} The architecture of DOCTOR is flexible, as it is based on \gls{NFV} and \gls{SDN} principles. Its main component is the \gls{NFV} infrastructure, which enables the resource virtualization to deploy the \gls{ICN} protocol stack over the data plane and the \gls{MANO} aspects over the control plane. As a computing virtualization framework, the architecture uses Docker, which relies on a lightweight virtualization principle.
\textbf{Evaluation Parameters.}
A key limitation of DOCTOR is latency, which arises from the repeated sending of requests to the \gls{ICN} servers acting as gateways and attached to the content source. Since content names differ from each other, each new content name represents a new routing identifier to be given to the gateways. This results in a continuous interaction between the content publisher and the gateways for each HTTP request.
\subsection{POINT}
The H2020 project \gls{POINT}~\cite{point} started in January 2015 and ended in December 2017. Its main purpose is to evaluate both quantitatively and qualitatively the improvements introduced by running \gls{ICN} over an \gls{IP} network. To achieve this aim, POINT designs an evolution of the PURSUIT architecture, which leverages both the \gls{SDN} technology and additional network components that enable \gls{IP}-based applications to run in the new setup without any modification. These new elements are the \gls{NAP} and the \gls{ICN BGW}. The former directly interacts with the end user devices and is responsible for the translation of all the \gls{IP} protocol abstraction layers (e.g., HTTP, TCP and \gls{IP}) into the \gls{ICN} paradigm, while the latter controls the communication between \gls{ICN} and \gls{IP} networks. Furthermore, the \gls{NAP} provides standard gateway functions such as \gls{NAT}, firewall, and dynamic \gls{IP} address assignment. The core \gls{ICN} functionalities are provided by the PURSUIT components (i.e., \gls{TM}, \gls{FN}, and \gls{RP}). Content items are assigned a \gls{RID} and are stored on the publisher, which advertises the content availability in the network. Then, a user device sends a request for a content item and the \gls{NAP} transforms the interest into a subscription for a specific \gls{RID}. The subscription is then sent to the RP, which triggers the TM to identify a path between publisher and subscriber. The TM identifies all the nodes that need to be traversed and it calculates the associated \gls{FIs}, which are placed in the packet header. At this point, the \gls{SDN} switches are responsible for forwarding the packets by using only the \gls{FIs} and not the routing tables. The \gls{SDN} switches are not aware of the POINT architecture and are, instead, coordinated by an \gls{SDN} controller, which communicates directly with the TM.
This communication is bidirectional since the \gls{SDN} controller informs the TM about any topology modification, and the TM notifies the \gls{SDN} controller about the configuration to be placed on the \gls{SDN} switches.
\review{Fig.}~\ref{fig:point} shows the internal architecture of a POINT node. In the upper layer of the node, there are generic applications (i.e., \emph{App1}, \emph{App2}, \emph{App3}, \emph{App4}) which interact with a set of abstractions provided by POINT (i.e., \emph{\gls{IP} Abstraction}, \emph{TCP Abstraction}, \emph{HTTP Abstraction}, \emph{CoAP Abstraction}). Those are aimed at enabling the communication between applications and \gls{ICN} networks without requiring any modification from the application interface side. Each abstraction, then, cooperates with the \emph{Pub/Sub (Information-centric) Service Abstraction} to adhere to a publish/subscribe paradigm, where information is delivered according to specific strategies (i.e., \emph{LIPSIN}, \emph{MSBF}, \emph{POINT Alternative3}). Finally, POINT exploits also the \gls{SDN} technology by introducing two new layers (i.e., \emph{ICN-over-\gls{SDN} shim layer} and \emph{\gls{SDN}}) just above the \emph{L2 Transport Network} layer.
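The FI-based forwarding mentioned above, in the LIPSIN strategy, uses in-packet Bloom filters: the TM ORs the link identifiers of the computed path into the FI, and each switch forwards on every local link whose identifier is fully contained in the FI. The sketch below is our own illustration of that membership test, with made-up link IDs.

```python
# Sketch (ours) of LIPSIN-style in-packet Bloom-filter forwarding.
# The forwarding identifier (FI) is the bitwise OR of the link IDs on
# the computed path; a switch forwards on every local link whose ID is
# entirely contained in the FI. False positives are possible by design.

def make_fi(link_ids):
    """Forwarding identifier: OR of the path's link ID bit patterns."""
    fi = 0
    for lid in link_ids:
        fi |= lid
    return fi

def outgoing_links(fi, local_links):
    """Links on which a switch forwards the packet (containment test)."""
    return [name for name, lid in local_links.items() if fi & lid == lid]
```

This is why the SDN switches need no routing tables: the forwarding decision is a per-link bitwise test on the packet header.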
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.4\textwidth]{./images/Point-1.pdf}
\caption{Internal architecture of a POINT node.}
\label{fig:point}
\end{figure}
\textbf{Deployment Approach.}
The POINT project falls under the \emph{underlay} deployment approach due to the gateway components, which are responsible for the translation from the \gls{IP} semantics into the \gls{ICN} semantics.
\textbf{Deployment Scenarios.}
The main purpose of the POINT architecture is to enable different subnetworks to communicate between each other. Thus, POINT supports the \emph{Border Island} scenario.
\textbf{Addressed Coexistence Requirements.}
Given that POINT is an evolution of PURSUIT, they both share the same coexistence requirements, i.e., forwarding, storage, and security.
\textbf{\review{Additional architecture or Technology Used.}}
The POINT solution relies on both the PURSUIT architecture and the \gls{SDN} technology.
\textbf{Evaluation Parameters.}
The challenges introduced by the POINT project involve scalability, dynamic network management, and latency of data transmission. The first two challenges refer to the appropriate configuration of \gls{SDN} switches to handle automatic updates of the network topology (e.g., a new host being attached). The third challenge, instead, might be due to the high frequency of interaction between \gls{NAP}s and \gls{RP}s.
\subsection{RIFE}
The \gls{RIFE}~\cite{rife} architecture is a Horizon 2020 funded project, which started in February 2015 and ended in January 2018. Its aim is to develop a new network infrastructure that brings connectivity to communities living in remote locations or unable to afford the communication network costs. To achieve this purpose, the RIFE project focuses on three different challenges regarding the current end-to-end communication paradigm: the reduction of the required capacity, of the energy consumption, and of the redundant content available in the network. The first can be achieved through a time-shifted access to network services and applications. The energy consumed by connected devices can be reduced by introducing a tolerance delay in the communication, so that devices can stay in an idle mode during periods of no network activity. Finally, the third goal is achievable by serving the same content to all the clients that require it, instead of releasing a new copy each time. The architecture addressing those objectives is a combination of the \gls{IP}, \gls{ICN}, and \gls{DTN} paradigms.
\textbf{Deployment Approach.}
The RIFE architecture follows the \emph{underlay} approach because of the gateway components, which are responsible for the translation from the \gls{IP} semantics into the \gls{ICN} semantics.
\textbf{Deployment Scenarios.}
RIFE supports the \emph{Border Island} scenario.
\textbf{Addressed Coexistence Requirements.}
RIFE is an evolution of the PURSUIT architecture. Thus, the coexistence requirements addressed are the same, i.e. forwarding, storage, and security.
\textbf{\review{Additional architecture or Technology Used.}}
The architecture proposed in the RIFE project is a modification of the PURSUIT architecture and it relies on the coexistence of \gls{IP}, \gls{ICN} and \gls{DTN}. This last architecture is responsible for introducing the delay and disruption tolerance required to enable the time-shift requirement.
\textbf{Evaluation Parameters.}
No specific challenges have been identified for the RIFE project.
\vspace{0.4cm}
\subsection{CableLabs}
Among the different \emph{underlay} approaches, there is a solution designed by CableLabs, a non-profit innovation and R\&D lab focused on the fast and secure delivery of data, video, voice, and services to end users. CableLabs proposes an incremental introduction of \gls{CCN}/\gls{NDN} in the existing \gls{CDN}s to improve the overall content distribution without modifying \gls{IP} routers~\cite{cableLabs}. The architecture designed by CableLabs first requires a migration of some services/applications to the \gls{ICN} paradigm, and then the introduction of proxies, which are able to manage the translation between HTTP and \gls{CCN}. Once several \gls{ICN} ``islands'' are deployed in the network, the communication among them is provided through \gls{IP} tunneling.
\textbf{Deployment Approach.}
The solution proposed by CableLabs adopts the \emph{underlay} approach because of the gateway components, which are responsible for the translation from the \gls{IP} semantics into the \gls{ICN} semantics.
\textbf{Deployment Scenarios.}
Except for the \emph{Border Island}, the CableLabs architecture supports all the deployment scenarios.
\textbf{Addressed Coexistence Requirements.}
The CableLabs architecture addresses the following coexistence requirements:
\begin{itemize}
\item Forwarding - the additional proxies introduced in the network to support the translations (i.e., HTTP to \gls{CCN} and \gls{CCN} to HTTP) also work as \gls{CCN} forwarders.
\item Storage - as the architecture is an evolution of a \gls{CDN}, by design the network nodes can cache contents.
\end{itemize}
\textbf{\review{Additional architecture or Technology Used.}}
Throughout this project, CableLabs investigates how a \gls{CCN} infrastructure supports a content-oriented network better than current solutions, such as \gls{CDN}s. Thus, CableLabs illustrates an incremental deployment of a \gls{CCN} network over an existing \gls{CDN}.
\textbf{Evaluation Parameters.}
The challenges identified by CableLabs with respect to their own architecture are as follows: traffic management, optimization of the \gls{CCN} router implementation (e.g., FIB/PIT sizing and memory bandwidth), optimization of the \gls{CCN} cache implementation, content object size and fragmentation (i.e., definition of the maximum content object size transmissible inside a network), and \gls{CCN}-to-HTTP and HTTP-to-\gls{CCN} conversions (e.g., the computational complexity of the translation function).
\vspace{0.4cm}
\subsection{NDN-LAN}
The authors in~\cite{NDNLAN} propose a \emph{hybrid} \gls{ICN} architecture in which content names are mapped to the MAC addresses. In particular, the authors present the design of a \gls{D-switch}, which provides name-based forwarding for \gls{NDN} traffic and address-based forwarding for conventional traffic such as \gls{IP}. It can be seen from \review{Fig.}~\ref{Fig:D-switch} that the key component of D-switch architecture is the \textit{Dispatcher}, which checks the \emph{EtherType} field in the header of a received frame. When an \gls{IP} frame is detected, the D-switch works like a traditional Ethernet switch and it forwards the frame using the MAC address. If an \gls{NDN} frame (i.e., Interest or Data packet) is detected, the D-switch processes/forwards the frame based on the content name carried in the \gls{NDN} header (i.e., Layer 3). In particular, the dispatcher either selects the \textit{Process \gls{IP} Traffic} or \textit{Process \gls{NDN} Traffic} module in the D-switch based on the value of \emph{EtherType} field. In the \textit{Process \gls{NDN} Traffic} module, the PIT and FIB tables are modified to store the mapping between the content names and MAC addresses. For instance, when an Interest packet is received, the D-switch will forward it by searching the content name and its corresponding MAC in the FIB, and then fill the destination MAC address field in Ethernet header with the recorded MAC address.
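The dispatcher's branch on the \emph{EtherType} field and the name-to-MAC FIB lookup can be condensed as follows. This is a simplified sketch of ours, not the D-switch implementation; longest-prefix matching and PIT handling are omitted for brevity.

```python
# Simplified dispatcher logic (our sketch) for a D-switch: the EtherType
# selects between conventional address-based forwarding and name-based
# forwarding, where the FIB maps content names to recorded MAC addresses.

ETHERTYPE_IPV4 = 0x0800
ETHERTYPE_NDN = 0x8624  # EtherType commonly used for NDN over Ethernet

def dispatch(frame, mac_table, name_fib):
    if frame["ethertype"] == ETHERTYPE_NDN:
        # Name-based path: fill the destination MAC from the FIB entry
        # recorded for this content name (longest match omitted here).
        dst_mac = name_fib.get(frame["name"])
        return ("ndn", dst_mac)
    # Conventional Ethernet behaviour for IP and all other traffic.
    return ("eth", mac_table.get(frame["dst_mac"]))
```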
\begin{figure}[ht!]
\centering
\includegraphics[width=.9\columnwidth]{images/Dual_Stack.pdf}
\caption{Dual-stack switch internal architecture.}
\label{Fig:D-switch}
\end{figure}
\textbf{Deployment Approach.} This coexistence approach falls under the \emph{hybrid} approach because the D-switches are able to process both types of traffic (i.e., \gls{IP} and \gls{NDN}). In particular, a LAN consists (fully or partially) of D-switches that can process the data traffic received from \gls{NDN}-enabled hosts, as well as from \gls{IP} hosts. However, a fully \emph{hybrid} scenario requires D-switches only; otherwise, additional techniques or policies/rules are required to perform the data forwarding.
\textbf{Deployment Scenarios.}
Since the D-switches allow \gls{NDN} traffic to run within the \gls{IP} network, NDN-LAN supports all the deployment scenarios except for the \emph{Border Island}. As a matter of fact, due to the use of MAC-layer encapsulation only, inter-network communication is not possible and the \emph{Border Island} scenario cannot be supported.
\textbf{Addressed Coexistence Requirements.}
The present architecture provides the following coexistence requirements:
\begin{itemize}
\item Forwarding - full advantage of \gls{ICN} features, such as in-network caching and native multicast, is supported when the underlying LAN consists of D-switches only. However, when the LAN has both D-switch and conventional Ethernet switches, it has to be carefully designed to avoid conflict between name-based forwarding and address-based forwarding.
\item Storage - in-network caching is only supported at D-switches, and it is the responsibility of the network manager to prevent the conventional Ethernet switches from receiving \gls{ICN} packets.
\item Management - management of such a deployment is challenging due to limitations of topology creation and forwarding rules installation.
\end{itemize}
\textbf{\review{Additional architecture or Technology Used.}}
NDN-LAN is mainly suitable for \gls{NDN} applications that run in small and private networks, such as a university campus or an organization's internal network. However, the proposed coexistence solution aims to support a variety of applications, which includes \gls{NDN} as well as \gls{IP} applications. This is achieved through the following design goals: (i) coexistence with \gls{IP} traffic, which ensures that the common mechanisms run without any change or performance penalty; (ii) native \gls{NDN} support, by not relying on tunnels or overlays; and (iii) incremental deployment and general applicability. The proposed solution does not make use of any specific technology to implement the D-switch logic. Minor hardware and software changes in the D-switches allow them to process the \gls{IP} and \gls{NDN} traffic in a controlled environment (i.e., a LAN).
\textbf{Evaluation Parameters.}
To implement the required logic and functionalities at the D-switches so that they can support \gls{NDN}-enabled traffic processing, some changes are required in the switch hardware as well as in the software. Additional forwarding policies need to be installed in scenarios where D-switches coexist with conventional Ethernet switches. Without any standardization of these new software/hardware components, the applicability of the proposed solution in real-world coexistence applications is limited. Designing mechanisms that support name-based forwarding while coexisting with address-based forwarding within the same \gls{LAN} is a challenging task. Additionally, the process for D-switches to learn the forwarding table at Layer-2 and build the name-based FIB at Layer-3 is an open problem that needs to be addressed. In a \gls{LAN}, the implementation of the proposed solution is simple and straightforward. However, as the LAN size increases and communication between different \gls{LAN}s is needed, the deployment cost will increase significantly, and the current solution needs to be extended to deal with new issues such as interoperability and scalability.
\vspace{0.4cm}
\subsection{hICN}
The authors in~\cite{hICN} propose methods and systems to facilitate the integration of \gls{ICN} into \gls{IP} networks. The \gls{hICN} communication system claims the ability to preserve \gls{ICN} features and advantages while, at the same time, benefiting from an existing \gls{IP} infrastructure. The major components of the hICN communication system are as follows: (i) hICN-enabled \gls{IP} router(s), capable of processing and forwarding both regular \gls{IP} packets and \gls{IP} packets enhanced with \gls{ICN} semantics; (ii) \gls{IP} router(s), capable of handling \gls{IP} packets; and (iii) hICN router(s), provisioned with a consumer or producer application. The traditional \gls{IP} packet headers have been modified to add the \gls{ICN} semantics. As shown in \review{Fig.}~\ref{Fig:hICN_node}, when a router receives an \gls{IP} packet, it can identify from the \gls{IP} header content how to process it, i.e., using the \gls{ICN} or the \gls{IP} stack. The authors suggest two possible schemes for mapping hICN content names to \gls{IP}: (i) pure \gls{IP} mapping, in which content name components can be directly encoded in the \gls{IP} header, and (ii) optimized mapping, in which a subset of the content name components is encoded in the network header, while the remainder is encoded in the transport header.
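The optimized-mapping idea above — name prefix in the network header, name suffix in the transport header — can be illustrated with a toy encoding. The hashing scheme and the fixed prefix below are our own assumptions for the sketch, not the hICN wire format.

```python
# Hedged illustration of hICN's "optimized mapping": the content-name
# prefix is derived into an IPv6 destination address (network header),
# while the name suffix travels in the transport header. The hash-based
# encoding and the fd00::/16 prefix are our assumptions, not hICN's.
import hashlib
import ipaddress

def name_to_ipv6(prefix: str) -> ipaddress.IPv6Address:
    """Derive a stable IPv6 address from a content-name prefix."""
    digest = hashlib.sha256(prefix.encode()).digest()
    # Keep a fixed leading /16 so the traffic stays routable as IPv6.
    return ipaddress.IPv6Address(b"\xfd\x00" + digest[:14])

def split_name(name: str):
    """Prefix for the network header, suffix for the transport header."""
    head, _, tail = name.rstrip("/").rpartition("/")
    return head, tail
```

Because the result is an ordinary RFC-compliant IPv6 packet, plain IP routers forward it unchanged, which is exactly the coexistence property discussed below.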
\begin{figure}[h!]
\centering
\includegraphics[width=0.45\textwidth]{images/hICN_Node.pdf}
\caption{Internal architecture of an hICN node.}
\label{Fig:hICN_node}
\end{figure}
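As a rough illustration of the pure \gls{IP} mapping scheme described above, the following Python sketch encodes a content name into an IPv6 address. The /64 producer prefix and the use of a hash digest for the low-order bits are our own assumptions for the example, not the exact encoding defined by hICN.

```python
# Conceptual sketch of mapping a content name into an IPv6 address, in the
# spirit of hICN's "pure IP mapping". The /64 routable prefix and the use
# of a SHA-256 digest for the name suffix are illustrative assumptions.
import hashlib
import ipaddress

# Assumed producer prefix announced in regular IPv6 routing.
ROUTABLE_PREFIX = ipaddress.IPv6Network("2001:db8:abcd:1::/64")

def name_to_ipv6(name: str) -> ipaddress.IPv6Address:
    """Pack a 64-bit digest of the content name into the low 64 bits."""
    digest = hashlib.sha256(name.encode()).digest()
    suffix = int.from_bytes(digest[:8], "big")
    return ipaddress.IPv6Address(int(ROUTABLE_PREFIX.network_address) | suffix)

addr = name_to_ipv6("/video/lecture1/segment3")
# A plain IPv6 router forwards on the routable /64 prefix, while an
# hICN-enabled router can additionally interpret the low-order bits
# as ICN name semantics.
assert addr in ROUTABLE_PREFIX
print(addr)
```

The key property this sketch illustrates is the one hICN relies on: the resulting packet remains an RFC-compliant IPv6 packet, so routers without \gls{ICN} support can still forward it on the prefix alone.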
\textbf{Deployment Approach.}
As the hICN-enabled \gls{IP} routers are able to process \gls{IP} as well as \gls{ICN} traffic, hICN falls under the \emph{hybrid} deployment approach. However, unlike NDN-LAN, which maps MAC addresses to content names (and vice versa), hICN maps \gls{IP} addresses to content names (and vice versa).
\textbf{Deployment Scenarios.}
Due to the presence of dual stack routers, the proposed architecture supports all the deployment scenarios.
\textbf{Addressed Coexistence Requirements.}
hICN is among the best proposals supporting coexistence because it retains most of the basic \gls{ICN} features (e.g., Layer-3 name-based routing, partial symmetric routing, object-based security, anchorless mobility, and in-network reactive caching). This is because hICN exploits the semantics of the IPv4 and IPv6 header fields to identify whether the received packet is an \gls{IP} Data packet or an \gls{IP} Interest packet. The use of IPv4 or IPv6 RFC-compliant packet formats guarantees the communication between an IPv4/IPv6 router and an hICN router. More specifically, the hICN router processes and forwards both the regular \gls{IP} packets and the \gls{ICN}-semantic-based packets. Hence, it preserves pure \gls{ICN} behavior at Layer-3 and above by guaranteeing end-to-end service delivery between data producers and data consumers using \gls{ICN} communication principles. The present architecture provides the following coexistence requirements:
\begin{itemize}
\item Forwarding - the hICN-enabled \gls{IP} routers as well as \gls{IP} routers use the same forwarding module.
\item Storage - the cache stores are available on hICN-enabled \gls{IP} routers, and Interest packets can be satisfied by these routers if the requested content is available in the router cache.
\item Management - for large-scale usage of this architecture, the consumer and producer applications must maintain the mapping between content names and the corresponding \gls{IP} addresses, so that the \gls{ICN} packets can be processed seamlessly by the non-\gls{ICN}-enabled routers as well.
\item Security - the architecture provides the same security features that are provided by \gls{ICN}. However, the \gls{IP}-only routers are not able to verify the integrity and authenticity of received Data packets; hence, at least one hICN-enabled \gls{IP} router must be present on the route between the consumer and the producer.
\end{itemize}
\textbf{\review{Additional architecture or Technology Used.}}
The hICN proposal uses the \gls{IP} packet header semantics to differentiate the \gls{ICN} and \gls{IP} packets, and the mapping table at an hICN-enabled router or the \gls{DNS} is used for performing the mapping task. To support interoperability among different networks, the edge router could translate the incoming packets into hICN-compliant packets using a proxy. Therefore, hICN does not use any specific architecture (e.g., \gls{SDN}) or technology (e.g., virtualization or tunnelling) to perform the coexistence.
\textbf{Evaluation Parameters.}
The major challenges of hICN are similar to those of the other \emph{hybrid} approaches and include a lack of support for heterogeneity, scalability, and standardization of the proposed changes in the traditional Internet protocols and network components. Moreover, the communication delay caused by the additional time used by hICN routers for the mapping could be an issue for delay-sensitive applications. The hardware modifications are minimal because hICN routers can be created by installing a software bundle on existing \gls{IP} routers. However, the memory requirements will increase due to the need for a storage cache. The deployment effort will be considerable due to the required modifications in consumer and producer applications.
\subsection{OFELIA}
Blefari Melazzi~et~al.~\cite{melazzi2012openflow} proposed an \gls{SDN}-based \emph{hybrid} implementation of \gls{ICN} under the OFELIA project. The proposed approach is an extension of the CONET architecture~\cite{detti2011conet} for OpenFlow networks, where dedicated \gls{BN}s perform name-to-location resolution, using an external system, for any requested \gls{NDO}. \review{Fig.}~\ref{fig:melazzi} presents a simplified view of this solution. The authors propose to include two different forwarding strategies in an \emph{\gls{ICN} node}: (1) forwarding content requests; and (2) delivering the data. The \emph{forward-by-name} feature of an \emph{\gls{ICN} node} applies to Interest packets, while \emph{data forwarding} is the mechanism that allows the content to be sent back to the device that issued a content request. \emph{Content routing} is used to disseminate information about the location of contents, and \emph{caching} is the ability of \gls{ICN} nodes to cache data and directly reply to incoming content requests. \review{The OFELIA testbed was used in the IRATI~\cite{rina} project for experimental activities.}
\begin{figure}[ht!]
\centering
\includegraphics[trim = 2mm 10mm 2mm 25mm, clip, width=0.4\textwidth]{./images/Melazzi12.pdf}
\caption{Simplified view of the solution proposed by OFELIA.}
\label{fig:melazzi}
\end{figure}
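The two forwarding strategies above can be illustrated with a minimal, hypothetical Python sketch of an \emph{\gls{ICN} node}: Interests are forwarded by name while leaving PIT state behind, and Data retraces the reverse path recorded in the PIT. The node state, face identifiers, and contents below are illustrative assumptions, not drawn from the OFELIA implementation.

```python
# Minimal sketch of the two forwarding strategies: Interests are forwarded
# by name (leaving PIT state), and Data retraces the reverse path recorded
# in the PIT. Names, faces, and contents are illustrative only.

class IcnNode:
    def __init__(self):
        self.pit = {}       # content name -> set of faces that requested it
        self.cache = {}     # content name -> data (in-network caching)

    def on_interest(self, name, in_face):
        if name in self.cache:                 # satisfied locally from cache
            return ("data", self.cache[name])
        self.pit.setdefault(name, set()).add(in_face)
        return ("forward", name)               # forward-by-name upstream

    def on_data(self, name, data):
        self.cache[name] = data                # cache for future requests
        faces = self.pit.pop(name, set())
        return sorted(faces)                   # deliver on requesting faces

node = IcnNode()
print(node.on_interest("/news/today", in_face=1))   # ('forward', '/news/today')
print(node.on_data("/news/today", b"story"))        # [1]
print(node.on_interest("/news/today", in_face=2))   # ('data', b'story')
```

In the OpenFlow setting described above, this per-name state would have to be realized through flow-table entries installed by the controller, which is precisely what makes the \gls{SDN} extension of CONET non-trivial.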
\textbf{Deployment Approach.} The proposed architecture adheres to a \emph{hybrid} approach.
\textbf{Deployment Scenarios.}
The proposed implementation of \gls{ICN} is an extension of the CONET framework, in which \gls{BN}s interconnect different \gls{CSS}s. Hence, this solution supports the \emph{Border Island} scenario.
\textbf{Addressed Coexistence Requirements.}
The proposed system is based on CONET framework. Extending the primary goals of CONET framework, this architecture aims to support forwarding, storage, security and management for \gls{ICN} deployment.
\textbf{\review{Additional architecture or Technology Used.}} The present solution strongly relies on the architecture proposed in the CONET project and, through \gls{SDN}/OpenFlow, it targets all the services/applications of the \gls{TCP}/\gls{IP} protocol stack.
\textbf{Evaluation Parameters.} The architecture of the solution requires the networking elements to be OpenFlow compliant. Given that OpenFlow (\gls{SDN}) has been widely adopted in the networking domain, the hardware modifications and the time required for deployment are low in scenarios where an OpenFlow-based network is already present. Conversely, the hardware modifications and the deployment time would be higher if an OpenFlow-based network is not already present.
\section{\review{Discussion}}
\label{discussion_conclusion}
\review{The purpose of this section is to summarize the findings achieved through our systematic analysis of all the existing coexistence architectures (Section~\ref{Summary}) and discuss the open challenges (Section~\ref{challenges}), along with some future directions concerning the coexistence between the current and the future Internet architectures (Section~\ref{future_directions}).}
\subsection{\review{Summary of the survey}}
\label{Summary}
The main aim of this survey is to provide the necessary overview of the available solutions that already address the coexistence. We believe that it will help to move the research community towards the design of the most appropriate architecture for the future Internet. Thus, to guide the reader towards the interpretation of Table~\ref{table:comparison}, we add here two new tables, which summarize Table~\ref{table:comparison}. In particular, among all the features and evaluation parameters considered in this survey, the only ones that can be chosen by a network designer are the deployment approach and the possible \review{additional architecture or technology used} in the design of their solution. Thus, Table~\ref{LABEL1} and Table~\ref{LABEL2} are aimed at comparing each deployment approach and each \review{additional architecture or technology used} with respect to all the other features and evaluation parameters, respectively. As a matter of fact, the deployment scenarios, as well as the addressed coexistence requirements, directly depend on the deployment approach or on the \review{additional architecture or technology}, while the evaluation parameters are dynamic properties evaluated during the runtime deployment of an architecture.
\par The content of the cells, as well as their meaning, is shared between Table~\ref{LABEL1} and Table~\ref{LABEL2}. More specifically, the content of each cell corresponds to the number of coexistence architectures addressing both the properties specified in the corresponding row and column (e.g., in the first cell of Table~\ref{LABEL1} the value of 7 means that there are 7 coexistence architectures adhering to the \emph{overlay} approach and supporting the \emph{forwarding} functionality). The meaning of the values in the cells differs throughout the table. \review{In the upper part (i.e., rows referring to addressed coexistence requirements and deployment scenarios), the value in the cell refers to the number of architectures that guarantee a specific addressed coexistence requirement or a deployment scenario by adopting a deployment approach (listed in the columns). On the contrary, in the lower part of the table (i.e., rows referring to the evaluation parameters), the value in the cells refers to the number of architectures (adopting the corresponding deployment approach) that are affected by the limitation listed in the row.}
\par Table~\ref{LABEL1} shows on the columns the three different deployment approaches (i.e., \emph{overlay}, \emph{underlay} and \emph{hybrid}), while on the rows there are all the other features, except for the architectures or technologies used, considered in Table~\ref{LABEL2}. Considering the deployment approaches, we found six architectures adopting the \emph{overlay} solution, four the \emph{underlay}, three the \emph{hybrid}, and one architecture (i.e., CONET) adhering to both \emph{overlay} and \emph{hybrid}. As shown in the table, a plausible reason for the greater adoption of the \emph{overlay} approach might be the higher number of addressed coexistence requirements it provides. As a matter of fact, almost all the \emph{overlay} architectures guarantee the forwarding and storage features, and the number of architectures supporting security and management is higher than in the \emph{underlay} and \emph{hybrid} cases. However, adopting an \emph{overlay} approach prevents architectures from being deployed in all the deployment scenarios: none of the \emph{overlay} architectures covers either the \emph{ICN-IP communication in ICN ``ocean''} or the \emph{IP-IP communication in ICN ``ocean''} scenario. Finally, considering the evaluation parameters, most \emph{overlay} architectures are not able to properly manage the network traffic, but the other limitations are comparable with the ones affecting the \emph{underlay} and \emph{hybrid} solutions. Moreover, even if the number of challenges under the last class (i.e., \emph{Other}) might be significant, we \review{note} that those limitations strongly depend on the design of each coexistence architecture.
\begin{table}[ht!]
\centering
\resizebox{.95\columnwidth}{!}{
\begin{threeparttable}
\centering
\caption{Comparison of all the deployment approaches for coexistence architectures - The value of each cell refers to the number of coexistence architectures addressing both the properties specified in the corresponding row and column.}
\label{LABEL1}
\begin{tabular}{cc|c|c|c|}
\cline{3-5}
\textbf{} & & \multicolumn{3}{c|}{\textbf{Deployment Approach}} \\ \cline{3-5}
\textbf{} & & Overlay & Underlay & Hybrid \\ \hline
\multicolumn{1}{|c|}{\multirow{4}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}Addressed\\ coexistence\\ requirements\end{tabular}}}} & Forwarding & 7 & 4 & 4 \\ \cline{2-5}
\multicolumn{1}{|c|}{} & Storage & 6 & 4 & 4 \\ \cline{2-5}
\multicolumn{1}{|c|}{} & Security & 4 & 3 & 2 \\ \cline{2-5}
\multicolumn{1}{|c|}{} & Management & 3 & 1 & 3 \\ \hline
\multicolumn{1}{|c|}{\multirow{5}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}Deployment\\ scenarios\end{tabular}}}} & \begin{tabular}[c]{@{}c@{}}ICN-ICN\\ communication\\ in IP ``ocean''\end{tabular} & 7 & 2 & 3 \\ \cline{2-5}
\multicolumn{1}{|c|}{} & \begin{tabular}[c]{@{}c@{}}ICN-IP\\ communication\\ in IP ``ocean''\end{tabular} & 2 & 2 & 2 \\ \cline{2-5}
\multicolumn{1}{|c|}{} & \begin{tabular}[c]{@{}c@{}}ICN-IP\\ communication\\ in ICN ``ocean''\end{tabular} & 0 & 2 & 2 \\ \cline{2-5}
\multicolumn{1}{|c|}{} & \begin{tabular}[c]{@{}c@{}}IP-IP\\ communication\\ in ICN ``ocean''\end{tabular} & 0 & 2 & 2 \\ \cline{2-5}
\multicolumn{1}{|c|}{} & Border Island & 2 & 3 & 3 \\ \hline
\multicolumn{1}{|c|}{\multirow{6}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}Evaluation\\ parameter\end{tabular}}}} & Traffic management & 4 & 1 & 1 \\ \cline{2-5}
\multicolumn{1}{|c|}{} & Access control & 1 & 0 & 0 \\ \cline{2-5}
\multicolumn{1}{|c|}{} & Scalability & 2 & 1 & 2 \\ \cline{2-5}
\multicolumn{1}{|c|}{} & \begin{tabular}[c]{@{}c@{}}Dynamic network\\ management\end{tabular} & 1 & 1 & 1 \\ \cline{2-5}
\multicolumn{1}{|c|}{} & Latency & 0 & 2 & 2 \\ \cline{2-5}
\multicolumn{1}{|c|}{} & Other & 4 & 4 & 2 \\ \hline
\end{tabular}
\begin{tablenotes}
\end{tablenotes}
\end{threeparttable}
}
\end{table}
Table~\ref{LABEL2} contains the same rows as Table~\ref{LABEL1}, while on the columns it shows all \review{the additional architectures or technologies used} in the analyzed coexistence solutions. Throughout this survey, we found the following results: one coexistence solution relying on the \gls{PSIRP} architecture, two on LAN, one on SAIL, six on \gls{SDN}, two on \gls{PURSUIT}, one on \gls{CDN}, one on \gls{DTN}, one on CONET, and one on \gls{DNS}. As it is clearly visible from the table, the reason for adopting the \gls{SDN} technology in a coexistence scenario is given by its numerous benefits in terms of both features and evaluation parameters with respect to the other possible solutions.
\begin{table*}[!htbp]
\centering
\scalebox{1}{
\begin{threeparttable}
\centering
\caption{Comparison of all \review{the additional architectures or technologies used} in coexistence architectures - The value of each cell refers to the number of coexistence architectures addressing both the properties specified in the corresponding row and column.}
\label{LABEL2}
\begin{tabular}{cc|c|c|c|c|c|c|c|c|c|}
\cline{3-11}
& & \multicolumn{9}{c|}{\textbf{\review{Additional architecture or technology used}}} \\ \cline{3-11}
& & PSIRP & LAN & SAIL & SDN & PURSUIT & CDN & DTN & CONET & DNS \\ \hline
\multicolumn{1}{|c|}{\multirow{4}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}Addressed\\ coexistence\\ requirements\end{tabular}}}} & Forwarding & 1 & 2 & 1 & 6 & 2 & 1 & 1 & 1 & 1 \\ \cline{2-11}
\multicolumn{1}{|c|}{} & Storage & 1 & 2 & 1 & 5 & 2 & 1 & 1 & 1 & 1 \\ \cline{2-11}
\multicolumn{1}{|c|}{} & Security & 1 & 1 & 0 & 4 & 2 & 0 & 1 & 1 & 1 \\ \cline{2-11}
\multicolumn{1}{|c|}{} & Management & 0 & 0 & 0 & 4 & 0 & 0 & 0 & 1 & 1 \\ \hline
\multicolumn{1}{|c|}{\multirow{5}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}Deployment\\ scenarios\end{tabular}}}} & \begin{tabular}[c]{@{}c@{}}ICN-ICN\\ communication\\ in IP ``ocean''\end{tabular} & 1 & 2 & 1 & 4 & 0 & 1 & 0 & 0 & 1 \\ \cline{2-11}
\multicolumn{1}{|c|}{} & \begin{tabular}[c]{@{}c@{}}ICN-IP\\ communication\\ in IP ``ocean''\end{tabular} & 0 & 1 & 0 & 3 & 0 & 1 & 0 & 0 & 1 \\ \cline{2-11}
\multicolumn{1}{|c|}{} & \begin{tabular}[c]{@{}c@{}}ICN-IP\\ communication\\ in ICN ``ocean''\end{tabular} & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 0 & 1 \\ \cline{2-11}
\multicolumn{1}{|c|}{} & \begin{tabular}[c]{@{}c@{}}IP-IP\\ communication\\ in ICN ``ocean''\end{tabular} & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 0 & 1 \\ \cline{2-11}
\multicolumn{1}{|c|}{} & Border Island & 0 & 0 & 1 & 4 & 2 & 0 & 1 & 1 & 1 \\ \hline
\multicolumn{1}{|c|}{\multirow{6}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}Evaluation\\ parameter\end{tabular}}}} & Traffic management & 1 & 2 & 1 & 1 & 0 & 1 & 0 & 0 & 0 \\ \cline{2-11}
\multicolumn{1}{|c|}{} & Access control & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \cline{2-11}
\multicolumn{1}{|c|}{} & Scalability & 0 & 1 & 1 & 3 & 1 & 0 & 0 & 0 & 1 \\ \cline{2-11}
\multicolumn{1}{|c|}{} & \begin{tabular}[c]{@{}c@{}}Dynamic network\\ management\end{tabular} & 0 & 1 & 1 & 2 & 1 & 0 & 0 & 0 & 0 \\ \cline{2-11}
\multicolumn{1}{|c|}{} & Latency & 0 & 1 & 0 & 2 & 1 & 0 & 0 & 0 & 1 \\ \cline{2-11}
\multicolumn{1}{|c|}{} & Other & 0 & 0 & 0 & 3 & 0 & 4 & 0 & 1 & 0 \\ \hline
\end{tabular}
\begin{tablenotes}
\end{tablenotes}
\end{threeparttable}
}
\end{table*}
\subsection{\review{Open Challenges}}
\label{challenges}
\review{According to our findings, the following challenges need to be addressed while designing an efficient and secure coexistence architecture.}
\begin{itemize}
\item \textbf{\review{Traffic management:}} \review{the existing Internet applications are not completely compatible with architectures implementing the \emph{overlay} approach~\cite{ccnxudp,Zhang,NDNLP,ccnx-1.0} due to the issues that these applications introduce at the transport layer. Changing the addressing scheme from host-based to content-based, as well as changing the network model from push to pull, are indeed the two obstacles in adapting the existing transport layer protocols to the \gls{NDN} and \gls{CCN} architectures. A vast number of existing applications and protocols, such as HTTP-based multimedia streaming protocols, might face incorrect throughput estimations due to the aggressiveness of the underlying \gls{TCP} in case of content source location variations~\cite{wowmom2018,CONTI2018209}.}
\item \textbf{\review{Latency:}} \review{one fundamental issue introduced by the solutions supporting the translation of \gls{IP} and HTTP-level semantics into \gls{ICN}~\cite{point,rife} is latency. This occurs due to the frequent requests sent to the \gls{NAP} attached to the source (also referred to as the sNAP). Assuming a meaningful interaction between consumer and producer, the URIs are likely different for each content, and for each new content published at the sNAP, a new \gls{RID} has to be added to the \gls{cNAP} through the \gls{RF}. Thus, for each HTTP GET request, the sNAP and the RF have to interact, increasing the network latency.}
\item \textbf{\review{Topological limitations:}} \review{in \emph{underlay} approaches, there might be several publishers for the same content that belong to the same network. In this case, whenever a consumer asks for a content released by different publishers, the RF should identify the best publisher and suggest the best content route. However, in the current architectures, the RF only announces which is the most appropriate publisher, leaving the other ones in a \textit{silent} phase. This might lead to the generation of multi-point forwarding identifiers, which create unnecessarily long routing tables.}
\item \textbf{\review{Routing and scalability:}} \review{the number of content objects, and its continuous growth in the current Internet, introduces a limitation in \gls{ICN} solutions, which have to handle content names of a possibly indefinite length. Thus, the existing networking devices might not support content-based routing and might have to face special requirements and optimizations.}
\item \textbf{\review{Security issues in coexistence architectures:}}
\review{below, we illustrate the security risks affecting the coexistence architectures.}
\begin{itemize}
\item \review{\textbf{Attacks against NAP nodes:} in \emph{underlay} approaches, an attack performed against a NAP node can cause much more damage than one performed against the rendezvous system. This is because a NAP is a node in an \gls{ICN} network, which can be used by an attacker to launch prefix hijacking, replay attacks and many more attacks against the \gls{ICN} core network.}
\item \review{\textbf{\gls{DoS} attacks:} an external user sending a new \gls{IP} address causes a state to be installed in a NAP. The same action can cause the introduction of states in centralized functions, such as the TF or the RF. Thus, if arbitrary users have direct access to the centralized TF/RF, as was the case in pure \gls{PURSUIT}/\gls{PSIRP} architectures~\cite{6231280}, they could also easily launch a \gls{DoS} attack.}
\item \review{\textbf{Lack of authorization and access control:} for every new node added to a network, the entire topology needs to be updated to guarantee the proper links between the new and the old network nodes. Thus, an enhanced access control policy is required in \gls{ICN} networks.}
\item \review{\textbf{Attacks against the SDN controller:} there have been increasing concerns about the security of \gls{SDN}-based networks. Many of these concerns are related to the fact that an \gls{SDN} controller may parse an arbitrary part of a packet's content and use this information to set up states in the flow tables (and possibly in the controller). Moreover, systems that parse user-generated packet input (e.g., the Wireshark packet analyzer and the Snort intrusion detection system) have been a frequent source of security vulnerabilities due to the large number of potential input cases. Since numerous \gls{ICN} coexistence solutions propose to use \gls{SDN}, they are potentially open to the inherent vulnerabilities of an \gls{SDN} controller. Moreover, considering that an \gls{SDN} controller is the logically centralized entity that affects the entire network, the risk is even higher.}
\end{itemize}
\end{itemize}
\subsection{Future Research Directions}
\label{future_directions}
As confirmed by the large number of coexistence projects (e.g., POINT, DOCTOR, and hICN) that we surveyed in this paper, Industry and Government are pushing towards the definition of a new Internet architecture (i.e., \gls{ICN}) and its coexistence with the current one (i.e., \gls{IP}). Over the years, the research community has significantly grown around \gls{ICN}, following different coexistence design approaches. As mentioned before, a clean-slate deployment of \gls{ICN} requires overhauling the entire Internet infrastructure and changing all the host and producer applications; thus, it is difficult to simply transition from research testbeds to operational networks. Based on the experience gained from the initial \gls{ICN} architecture efforts (e.g., \gls{NDN}), researchers have realized that it is difficult, if not infeasible, to replace a greatly successful incumbent architecture with a clean-slate approach. A plausible reason for this is that \gls{ICN} remains unproven due to the lack of large-scale testbeds, and the consequently limited number of users in a trial, and that it has been tested on a limited number of applications so far.
\par In the past few years, the significant effort made by Governments, Industry, and Academia to assess the feasibility and effectiveness of \gls{ICN} indicates that the \gls{ICN} paradigm is being considered as a possible replacement for the current \gls{IP}-based host-centric Internet infrastructure. Hence, we now present a few research directions that need to be explored in this field.
\begin{itemize}
\item \textbf{Secure transition phase:} from its start, \gls{ICN} was purposefully designed to have certain inherent security properties, such as authentication of delivered content and (optional) encryption of the content. Additionally, relevant advances in the \gls{ICN} research community have occurred, promising to address each of the identified security gaps~\cite{8725179,7009958}. However, due to the lack of real deployments, several security aspects of \gls{ICN} networks are still under-investigated, including access control~\cite{7447763}, security of in-network caches, protection against various network attacks (e.g., DDoS), and consumer privacy~\cite{8027034}. For instance, due to the distributed nature of content availability in \gls{ICN}, securing the content itself is much more important than securing the infrastructure or the end points. This lack of addressing security goals in the final \gls{ICN} paradigm is even more critical when considering the coexistence of \gls{TCP}/\gls{IP} and \gls{ICN}, which could lead to the introduction of new attacks and security issues. One of the main limitations of existing projects is that all of them address only the existence of a transition phase, without investigating the impact of coexistence on the security and privacy of the system. We believe that not only is passing through this intermediate step unavoidable, but also that it is important to assess the security and privacy vulnerabilities that might come up under the coexistence of both architectures.
\item \textbf{Selection of an efficient coexistence approach:} in the literature, three main approaches (i.e., \emph{underlay}~\cite{8442635}, \emph{overlay}~\cite{detti2011conet}, and \emph{hybrid}~\cite{hICN}) have been used to deploy coexistence architectures. The \emph{underlay} approach introduces communication latency due to the required mapping between \gls{IP} and name addresses, which limits its usability for real-time and delay-sensitive applications. On the other hand, the \emph{underlay} approach maintains an unaltered quality of service under both normal and exceptional conditions, such as failures and server and link congestion, which are common in operator networks. Considering the \emph{overlay} approach, a major drawback is that it requires the definition and standardization of a new packet format, together with protocols that manage the mapping between \gls{ICN} faces and \gls{IP} addresses in the FIB of \gls{ICN} routers. Thus, \emph{overlay} poses a significant challenge to network operators and developers. Additionally, upon each new deployment, the tunnel configurations in \emph{overlay} need to be manually changed to include the newly deployed \gls{ICN} nodes, and these point-to-point tunnels limit the \gls{ICN} capability of utilizing the underlying broadcast media. Finally, the \emph{hybrid} approach offers an interesting alternative, as it allows \gls{ICN} semantics to be embedded in standard IPv4 and IPv6 packets so that the packets can be routed through either \gls{IP} routers or hybrid \gls{ICN} routers. However, detailed performance results for \emph{hybrid} solutions are still incomplete, which limits their usage in real deployment scenarios.
\item \textbf{Coexistence solutions that preserve inherent \gls{ICN} advantages:} due to its inherent features, such as in-network caching, interest aggregation, and content-oriented security, \gls{ICN} provides an improved communication system and security by design. Therefore, these essential features of \gls{ICN} should be protected while designing a coexistence architecture.
\item \textbf{Optimized \gls{ICN}-\gls{IP} name-space mapping:} an important issue in the state-of-the-art solutions that provide translation of \gls{IP}/HTTP-level services into \gls{ICN} (or vice versa) is to ensure that the communication latency is comparable to that of the current network. In most of the coexistence solutions that use some sort of translation at any networking layer (e.g., transport or network), the main problem is the repeated sending of newly published content information towards the translation server, which generates delay in the response path of the requester and congestion in the network. The problem lies in the fact that the URL is likely different for every request (assuming some form of meaningful service interaction between the \gls{IP} client and the \gls{ICN} producer). Additionally, the existing channel semantics cannot be applied directly, because the corresponding routing identifier at the \gls{ICN} level is different for each publication, from the translation server to the \gls{IP} client. Also, realizing the rendezvous function approach, which is responsible for responding to new publications, requires continuous interaction between the server and the content publisher. This causes additional latency for client requests, which must wait for a fresh \gls{ICN}-\gls{IP} mapping at each published event.
\item \textbf{Data protection and confidentiality:} ensuring privacy for network entities (e.g., consumer and producer) in a coexistence architecture is not a trivial task, mainly due to the poor privacy support provided in \gls{ICN}~\cite{BERNARDINI201913}. Hence, it is important to investigate how privacy issues are dealt with in the current coexistence architectures. Ideally, names should reveal no more than what is currently revealed by an \gls{IP} address and port. However, in \gls{ICN} the name prefix reveals some information about the content, and the in-network caching and the data in the PIT might expose the consumer identity~\cite{7874168}. Therefore, researchers should focus on the specific issues concerning privacy and data protection in coexistence scenarios. For instance, in a coexistence architecture, \gls{IP}-to-name-prefix mapping is performed when an \gls{IP} packet travels from the \gls{IP} to the \gls{ICN} network. In this scenario, the \gls{IP} header does not reveal any information about the payload, but the prefix name does; thus, data confidentiality is threatened when these data packets travel through the \gls{ICN} ``island''. In particular, since the use of a name prefix for addressing the data in \gls{ICN} reveals sufficient information to a passive eavesdropper, ensuring privacy means that names and payloads cannot be correlated. However, such a privacy requirement would need an upper-layer service similar to the one that would resolve non-topological identifiers (e.g., \gls{ICN} name prefixes) to topological names (e.g., \gls{IP} network addresses).
\item \textbf{\gls{SDN}/\gls{NFV} for efficient coexistence:} as mentioned earlier, the \gls{SDN} technology separates the control plane from the data plane. The decoupled control plane is programmable and has a global view of the network, which provides easier network management and monitoring. \gls{SDN}-based implementations of \gls{ICN} exploit the centralized view available to the \gls{SDN} controller, which enables the controller to install appropriate rules in the data plane to process \gls{ICN} requests/responses. In the state-of-the-art, both \textit{overlay} and \textit{hybrid} \gls{ICN} deployments have leveraged \gls{SDN} to address different coexistence requirements, e.g., forwarding, storage, management, security, and interoperability. \gls{SDN} has already been successfully adopted for network deployment, which makes it an appropriate choice for quick deployment of \gls{ICN} with low hardware modifications. On the other hand, \gls{NFV} can help to virtualize several network functions that were previously implemented via physical devices.
\end{itemize}
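The repeated round-trips to the translation server discussed in the name-space mapping item above can be mitigated by caching resolved mappings at the gateway. The following is a minimal sketch under stated assumptions: `toy_translator` is a hypothetical function deriving a hierarchical ICN name from a URL, standing in for a real translation-server query.

```python
class MappingGateway:
    """Sketch of an IP-to-ICN translation gateway with a local mapping cache.

    `resolve_remote` stands in for the (expensive) query to the translation
    server; caching its result avoids repeated round-trips per request.
    """

    def __init__(self, resolve_remote):
        self._resolve_remote = resolve_remote   # callable: url -> ICN name
        self._cache = {}                        # url -> ICN name prefix
        self.remote_lookups = 0

    def to_icn_name(self, url):
        # Only consult the translation server on a cache miss.
        if url not in self._cache:
            self._cache[url] = self._resolve_remote(url)
            self.remote_lookups += 1
        return self._cache[url]


def toy_translator(url):
    # Hypothetical translation: reverse the host components into a
    # hierarchical ICN prefix, then append the path.
    host, _, path = url.partition("/")
    return "/" + "/".join(reversed(host.split("."))) + "/" + path


gw = MappingGateway(toy_translator)
name1 = gw.to_icn_name("example.com/video/seg1")
name2 = gw.to_icn_name("example.com/video/seg1")  # served from the cache
```

The cache removes the per-request translation latency for repeated names, though it does not solve the underlying problem that each new publication still requires one fresh mapping.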
\section{Conclusion}
\label{conclude}
In this paper, we survey various efforts made by researchers and industry in recent years to propose a design for an \gls{ICN}-\gls{IP} coexistence architecture. All these architectures differ from each other according to their specific design, but they all adhere to the \gls{ICN} paradigm, i.e., a content-oriented communication model replacing the current host-centric one. In our survey, we identify that all these architectures have important limitations: none of them has been designed through a comprehensive approach that considers all the new challenges introduced by a coexistence scenario. Instead, the main aim of most of them is to improve the current Internet by exploiting some of the core \gls{ICN} features (i.e., forwarding, storage, management, and security). Even though security also belongs to that list of features, none of the existing architectures has considered it as the main purpose. In the future, we believe that appropriate coexistence architecture designs are needed to build a secure path towards the future Internet. This can be done by considering the limitations and necessary improvements of the existing coexistence solutions we have analyzed in this survey. With the set of future research directions and open questions that we have raised, our work will motivate researchers towards designing a complete solution for \gls{ICN}-\gls{IP} coexistence while tackling the key security and privacy issues.
\section{Internet of Today}
The existing Internet architecture was designed three decades ago to interconnect various heterogeneous networks. The core of the existing Internet is the TCP/IP protocol suite, which is used to interconnect network devices by putting protocols, applications, and network channels together and arranging them in four fundamental abstraction layers: Application, Transport, Link, and Internet. This design leads to an hourglass shape, with IP as the network layer forming its ``thin waist''~\cite{Akhshabi:2011:ELP:2018436.2018460}.
We consider IP, which enables the core Internet functionality, as the generic reference point when discussing the possible coexistence of ICN and existing IP-based architectures. In particular, IP is in charge of routing packets from a source IP interface to the corresponding destination. A host in IP may have one or more IP interfaces, whereas a router has at least two. Moreover, each IP interface is identified by at least one distinct fixed-length IP address. The essential components of the existing Internet architecture which are subject to our analysis are IP-based applications and services, transport protocols, the DNS, and CDNs.
\subsection{Applications and services}
The existing Internet is full of IP-based applications and services, utilizing the capabilities of several protocols defined over the packet-level IP service. The application layer is where applications produce user data and transfer it to other applications on the same or other hosts. Applications and services utilize the services provided by the underlying layers, particularly the Transport layer, which delivers reliable or unreliable message transfer to the processes.
Among many higher-level protocols, HTTP provides a conjunction point for these services, with numerous web development frameworks based on the semantics provided by the hypertext transfer protocol. In recent years, even services providing multimedia data delivery have been migrating from the traditional RTP-over-UDP delivery to various HTTP-level streaming solutions, e.g., DASH and others. To this end, HTTP-based (and even non-HTTP-based) services require consideration when migrating from the existing IP-based architecture to an ICN-based one.
\subsection{Transport}
Applications use the Transport layer, which establishes basic data channels and utilizes them for task-specific data exchange. In particular, this layer maintains host-to-host connectivity by providing end-to-end message transfer that is independent of the data structure, the underlying network, and the logistics of exchanging information for any particular purpose. The protocols in this layer are responsible for providing application addressing, segmentation, and error, congestion, and flow control. The fundamental categorization of end-to-end delivery is connection-oriented, as in TCP, or connection-less, as in UDP.
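The connection-less model can be illustrated with a tiny loopback exchange: a UDP socket sends a self-contained datagram without any prior handshake, whereas TCP would first require `connect()`/`accept()`. This is a minimal, self-contained sketch (port 0 asks the OS for any free port; the echo server is hypothetical).

```python
import socket
import threading

# Connection-less (UDP) message transfer over loopback: no handshake, each
# datagram carries its own addressing.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
addr = server.getsockname()

def echo_once():
    # Receive one datagram and echo it back upper-cased to the sender.
    data, peer = server.recvfrom(1024)
    server.sendto(data.upper(), peer)

t = threading.Thread(target=echo_once)
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"ping", addr)         # no connect(): datagrams are standalone
reply, _ = client.recvfrom(1024)
t.join()
client.close()
server.close()
```

Note that UDP offers no delivery guarantee; the reliability, ordering, and congestion control mentioned above are exactly what TCP adds on top of this model.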
\subsection{Domain Name System}
The functionality of the DNS is to translate domain names (e.g., URL prefixes) into IP addresses. In particular, these names follow a hierarchical style where an upper-level domain, e.g., \textit{.com}, is followed by various sub-level domains such as \textit{project.com} and \textit{spritz.project.com}. An authoritative server is responsible for storing, and replying to queries for, an explicit contiguous portion of the domain name space, called a DNS zone. In addition, to increase DNS scalability, authoritative name servers can also delegate authority over sub-domains to further name servers.
A user initially issues a DNS query to a local resolver, i.e., the process running on the user's device, which forwards the query to the suitable name server(s). For this purpose, the resolver sends a UDP packet towards the DNS server, which is generally located in the resolver's local network. The server first checks its local cache and, if it is not able to satisfy the query, it obtains the appropriate reply from other, possibly remote, DNS servers.
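The cache-then-upstream behaviour described above can be sketched in a few lines. The zone data and addresses below are invented for illustration (reusing the example names from this section); a real resolver would issue UDP queries instead of reading a dictionary.

```python
# Stand-in for remote authoritative name servers (hypothetical zone data).
ZONE = {
    "project.com": "203.0.113.10",
    "spritz.project.com": "203.0.113.20",
}

class StubResolver:
    """Check the local cache first; fall back to the (stub) server on a miss."""

    def __init__(self, zone):
        self._zone = zone
        self._cache = {}
        self.upstream_queries = 0

    def resolve(self, name):
        if name in self._cache:
            return self._cache[name]       # answered locally
        self.upstream_queries += 1         # would be a UDP query in practice
        ip = self._zone[name]
        self._cache[name] = ip
        return ip

r = StubResolver(ZONE)
ip_first = r.resolve("spritz.project.com")
ip_second = r.resolve("spritz.project.com")  # answered from the local cache
```

The second lookup never leaves the resolver, which is precisely why DNS caching reduces both latency and load on authoritative servers.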
Even though the DNS was originally designed as a static distributed database, nowadays it also permits dynamic record updates and zone handovers. Similarly, prior work proposed using the DNS as a distributed database which also stores IP-related information. For instance, [] proposed to store IPsec key-related data in DNS records and map it to IP addresses.
\subsection{Content Delivery Networks}
Content Delivery Networks (CDNs) are one of the most vital components of today's Internet. They were designed to maximize bandwidth, improve accessibility, and maintain correctness through content replication, bringing the content as close as possible to the hosts~\cite{1250586}. The commercial success of the Internet and e-services, together with the exploding use of complex media content online, has paved the way for the birth of and growing interest in these networks. With CDNs, web content is distributed to cache servers located close to users, resulting in fast, reliable applications and Web services. CDNs maintain multiple Points of Presence (PoPs) with clusters of (so-called surrogate) servers that store copies of identical content, such that users' requests are satisfied by the most appropriate site. There are two general approaches to building CDNs. The first is the overlay model, which replicates content to thousands of servers worldwide. The second is the network model, which deploys code to routers and switches so that they can recognize specific application types and make forwarding decisions on the basis of predefined policies. Conventional CDNs can be mainly classified into two sub-categories: commercial CDNs and academic CDNs.
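The request-routing decision at the heart of a CDN — picking the most appropriate surrogate among the PoPs holding a copy — can be sketched as a simple filter-and-minimize. PoP names and latencies below are invented; real CDNs combine latency with load, cost, and geography.

```python
# Hypothetical PoPs: measured latency to the user and locally cached content.
POPS = {
    "pop-eu": {"latency_ms": 12, "contents": {"/video/a"}},
    "pop-us": {"latency_ms": 85, "contents": {"/video/a", "/video/b"}},
    "pop-as": {"latency_ms": 140, "contents": {"/video/b"}},
}

def route_request(content, pops):
    """Among PoPs holding the content, pick the lowest-latency one."""
    candidates = [(p["latency_ms"], name)
                  for name, p in pops.items() if content in p["contents"]]
    if not candidates:
        return None                  # would fall back to the origin server
    return min(candidates)[1]

best = route_request("/video/a", POPS)
```

This surrogate-selection step is conceptually close to ICN's location-independent retrieval, which is one reason CDNs are often cited as evidence for the content-centric shift.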
\section{ICN Overview}
The ICN concept was initially proposed in TRIAD \cite{Cheriton00triad:a},
which proposed name-based information communication. Since then, researchers
have proposed multiple architectures. In 2006, the data-oriented network
architecture (DONA) project \cite{Koponen:2007:DNA:1282380.1282402} at UC Berkeley proposed an ICN architecture,
which improved the security and architecture of TRIAD. The Publish Subscribe
Internet Technology (PURSUIT) \cite{6231280} project, a continuation of the Publish
Subscribe Internet Routing Paradigm (PSIRP) \cite{Dimitrov:2010:PPP:1839379.1839409} project, both funded by the EU
Framework 7 Program (FP7), have proposed a publish/subscribe protocol stack that
replaces the IP protocol stack. In another approach, the Network of Information
(NetInf) project \cite{Dannewitz:2013:NII:2459510.2459643} was initially proposed by the European FP7 4WARD~\cite{4ward} project, and it was further developed by the Scalable and Adaptive
Internet Solutions (SAIL) \cite{sail} project. Similarly, Van Jacobson, a Research Fellow
at PARC, proposed the Content Centric Networking (CCN) project \cite{Jacobson} in 2007.
Work is currently being performed to enhance the CCN architecture under the
name ``Named Data Networking'' (NDN) \cite{Zhang}.
The core idea behind information-centric networking (ICN) architectures is that
who is communicating is less significant than what data are required. This paradigm
shift has occurred due to end-users' use of today's Internet, which is more
content-centric than location-centric, e.g., file sharing, social networking, or
retrieval of aggregated data.
The ICN approach fundamentally decouples information from its sources, by means of a clear location-identity split. The basic assumption behind this is that information is named, addressed, and matched independently of its location, therefore it may be located anywhere in the network \cite{ Arianfar:2010:CRD:1921233.1921240, Diallo2011LeveragingCF}. In ICN, instead of specifying a source-destination host pair for communication, a piece of information itself is named. An indirect implication (and benefit) of moving from the host naming model to the information naming model, is that information retrieval becomes receiver-driven. In contrast to the current Internet where senders have absolute control over the data exchanged, in ICN no data can be received unless it is explicitly requested by the receiver. In ICN, after a request is sent, the network is responsible for locating the best source that can provide the desired information. Routing of information requests thus seeks to find the best source for the information, based on a location-independent name.
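The receiver-driven, location-independent retrieval described above can be sketched as an "express Interest by name" operation against a set of candidate sources. The sources, names, and payloads below are hypothetical; in a real network, the routing plane performs this search.

```python
# Candidate sources as seen by the "network": an in-network cache and the
# origin. The consumer names the content, not a host.
SOURCES = [
    {"id": "cache-7", "store": {"/news/today": b"headlines"}},
    {"id": "origin", "store": {"/news/today": b"headlines", "/news/old": b"x"}},
]

def express_interest(name, sources):
    """Return the first source holding `name` — location-independent lookup."""
    for src in sources:
        if name in src["store"]:
            return src["id"], src["store"][name]
    return None, None

who, data = express_interest("/news/today", SOURCES)
```

Note that the consumer receives the data from the nearer cache rather than the origin, without ever naming either of them — the essence of the location-identity split.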
Although ICN architectures are still under active development, all these
architectures address a set of key functionalities, albeit with
different approaches. Below we illustrate key functionalities \cite{6563278}, which are the basis for presenting and analyzing
the various ICN initiatives.
\subsection{Naming}
The structure of the name assigned to a piece of information (or service) that can be communicated over the network is one of the main characteristics of each ICN architectural proposal. In all ICN architectures information names are location-independent. On the other hand, depending on the approach, names may range from flat to hierarchical and may or may not be human-readable.
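Hierarchical, human-readable names can be forwarded by component-wise longest-prefix matching, analogous to IP longest-prefix routing but over name components. The FIB entries and face names below are illustrative only.

```python
# Hypothetical FIB: name-prefix components mapped to outgoing faces.
FIB = {
    ("com", "example"): "face-1",
    ("com", "example", "video"): "face-2",
}

def longest_prefix_face(name, fib):
    """Match the longest registered name prefix, most-specific first."""
    parts = tuple(name.strip("/").split("/"))
    for cut in range(len(parts), 0, -1):
        face = fib.get(parts[:cut])
        if face:
            return face
    return None

face = longest_prefix_face("/com/example/video/seg3", FIB)
```

Flat (e.g., hash-based) names lose this aggregation property, which is one of the central trade-offs between the two naming schools mentioned above.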
\subsection{Name resolution and Data routing}
Name resolution involves matching an information name to a provider or source that can supply that information, while data routing involves constructing a path for transferring the information from that provider to the requesting host. These two functions can either be integrated (coupled) or independent (decoupled). In the coupled approach, the information request is routed to an information provider, which subsequently sends the information to the requesting host by following the reverse path over which the request was forwarded. In the decoupled approach, the name resolution function does not determine or restrict the path that the data will use from the provider to the subscriber. For example, an independent data routing module may send the provider a source route to the requesting host.
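The reverse-path behaviour of the coupled approach relies on each node remembering which face a request arrived on (a PIT entry in CCN/NDN terms), so the data can retrace the request path. A toy sketch, with a list of named hops standing in for the network:

```python
def send_interest(path, content):
    """Walk the path towards the producer, leaving one PIT entry per node."""
    pits = {}
    prev = "consumer"
    for node in path:
        pits[node] = {content: prev}   # remember where the Interest came from
        prev = node
    return pits

def send_data(path, content, pits):
    """Follow the PIT entries back from the producer towards the consumer."""
    trail = []
    for node in reversed(path):
        trail.append(pits[node].pop(content))  # consume the PIT entry
    return trail

path = ["r1", "r2", "r3"]                 # consumer -> r1 -> r2 -> r3 -> producer
pits = send_interest(path, "/a/b")
reverse_trail = send_data(path, "/a/b", pits)
```

The PIT entry is consumed as the data passes, which is also what makes interest aggregation and loop-freedom possible in the coupled model.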
\subsection{Caching}
ICN architectures provide two variants of caching, i.e., on-path and off-path caching. In on-path caching, the network exploits information cached along the path taken by a name resolution request, while in off-path caching, the network exploits information cached outside that path. In ICN architectures with decoupled name resolution and data routing, off-path caching is supported by the name resolution system, which handles caches as regular information publishers. When name resolution and data transfer are coupled, off-path caching is supported by the routing system used to forward the requests for information.
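An on-path cache is typically a small content store with a replacement policy; a minimal LRU sketch (the capacity and names are arbitrary) shows the mechanics:

```python
from collections import OrderedDict

class ContentStore:
    """On-path content store: cache forwarded Data, evict least-recently-used."""

    def __init__(self, capacity=2):
        self.capacity = capacity
        self._store = OrderedDict()

    def insert(self, name, data):
        self._store[name] = data
        self._store.move_to_end(name)
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)   # evict the LRU entry

    def lookup(self, name):
        if name in self._store:
            self._store.move_to_end(name)     # refresh recency on a hit
            return self._store[name]
        return None

cs = ContentStore(capacity=2)
cs.insert("/a", b"1")
cs.insert("/b", b"2")
cs.lookup("/a")          # touch /a, so /b becomes least recently used
cs.insert("/c", b"3")    # evicts /b
```

Real ICN routers experiment with richer policies (LFU, probabilistic caching), but the lookup-before-forward pattern is the same.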
\subsection{Mobility}
Subscriber mobility is intrinsically supported in ICN architectures, since mobile subscribers can just send new subscriptions for information after a handoff. Publisher mobility is more difficult to support, since the name resolution system (in the coupled approach) or the routing tables (in the decoupled approach) need to be updated. Several proposals for handling producer mobility exist in ICN literature~\cite{7562050,Anastasiades2014}.
\subsection{Security}
In contrast to IP, where security is provided by the upper layers to secure host-to-host communication, ICN is designed with security in mind. In ICN, the content is explicitly shared along with the signature of the producer, which is used to verify the integrity of the received content. In particular, security in ICN follows a data-centric model: the content is signed by the content provider, allowing interest senders to verify its integrity and data-origin authentication~\cite{Compagno2018}.
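The data-centric model binds name and payload together under the producer's key, so a consumer can verify content fetched from any cache. In this sketch, a shared-key HMAC stands in for the public-key signatures used by real ICN architectures (the key and names are hypothetical):

```python
import hashlib
import hmac

PRODUCER_KEY = b"producer-secret"   # stand-in for the producer's signing key

def sign_content(name, payload, key=PRODUCER_KEY):
    # Bind name and payload together under the producer's key.
    tag = hmac.new(key, name.encode() + payload, hashlib.sha256).hexdigest()
    return {"name": name, "payload": payload, "sig": tag}

def verify_content(packet, key=PRODUCER_KEY):
    # Recompute the tag and compare in constant time.
    expected = hmac.new(key, packet["name"].encode() + packet["payload"],
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, packet["sig"])

data = sign_content("/video/seg1", b"frame-bytes")
ok = verify_content(data)            # intact content verifies
data["payload"] = b"tampered"
tampered_ok = verify_content(data)   # any modification is detected
```

Because verification needs only the packet and the producer's key material, integrity holds regardless of which cache or path delivered the content — the property that makes securing the content more important than securing the channel.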
\section{Introduction}
\label{introduction}
The current Internet architecture was designed for a small research community over three decades ago with the purpose of interconnecting multiple heterogeneous networks. At that time, nobody foresaw the popularity and longevity that the Internet architecture started gaining in the late '80s and early '90s, and that led to the connection of over 3 billion mobile and desktop devices. Today, people exploit networking devices for a variety of purposes that range from simple web browsing to video conferencing and content distribution, with the expectation of being always connected, regardless of time and place. \review{The misalignment between the original design and the current usage highlighted the limitations of the \gls{IP}-based architecture and motivated researchers to explore new solutions to overcome them.} Among those limitations, the primary concern is the performance of the current Internet, which has to cope with the huge number of connected devices all over the world and with the new pattern of use of the network. According to the study in~\cite{numberConnDevices}, there are currently around 23 billion connected devices in the world, each one identified by a unique \gls{IP} address and consuming network bandwidth. With such a huge number of devices, the first issue is the availability of unique \gls{IP} addresses to be assigned. Even though researchers originally chose to allocate 32 bits to compose an \gls{IP} address through the IPv4 protocol, they had to introduce the IPv6 protocol to extend the number of allocated bits from 32 to 128. \review{\gls{NAT}~\cite{RFC3022} is another solution addressing the same problem: it allows assigning the same public address to a set of devices belonging to the same private network.
Thus, within the private network each device has its own \gls{IP} address, chosen within a range of private \gls{IP} addresses; however, to an entity external to the network, all the devices share the same public \gls{IP} address. To enable the communication between the private network and the Internet, a firewall is responsible for intercepting a request, forwarding it to the Internet with the public \gls{IP}, and redirecting the incoming response to the appropriate device.}
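The NAT behaviour just described boils down to a translation table keyed by a freshly allocated public port. A minimal sketch (the public address, port range, and private addresses are hypothetical):

```python
import itertools

PUBLIC_IP = "198.51.100.1"             # hypothetical single public address

class Nat:
    """Rewrite outbound flows to the public address; map replies back."""

    def __init__(self):
        self._ports = itertools.count(40000)   # fresh public port per flow
        self._table = {}                       # public port -> (priv ip, port)

    def outbound(self, private_ip, private_port):
        pub_port = next(self._ports)
        self._table[pub_port] = (private_ip, private_port)
        return PUBLIC_IP, pub_port             # what the outside world sees

    def inbound(self, public_port):
        # Deliver a reply to the device that opened the flow.
        return self._table[public_port]

nat = Nat()
src_a = nat.outbound("192.168.0.2", 5000)
src_b = nat.outbound("192.168.0.3", 5000)   # same private port, distinct mapping
reply_target = nat.inbound(src_b[1])
```

Note how two devices may even use the same private port: the per-flow public port keeps the mappings distinct, which is what lets one public address serve an entire private network.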
Another problem is given by the type of network traffic: most of it consists of \gls{HTTP} requests, which means that users have changed the way they use the Internet, from a low-bandwidth interactive and store-and-forward approach towards web- and content-dominated traffic. \review{To support this, the Cisco Visual Networking Index~\cite{cisco2021} shows that in recent years video traffic delivery has become very popular on the Internet, with Internet traffic expected to reach 194 exabytes per month by 2021, and multimedia traffic up to 82\%, from 70\% in 2015. Furthermore, due to the technological advancements in hardware devices and the increasing deployment of pervasive computing applications, it is indicated that the number of communicating devices (including smart devices) will be three times larger than the world's population~\cite{zetta}. Moreover, it has also been reported~\cite{what2016} that 86\% of worldwide user traffic consists of video data alone, which includes \gls{VoD}, video streaming, \gls{P2P}, and \gls{TV}.}
Finally, from a security and privacy point of view the current Internet is not even able to guarantee some essential requirements, such as origin authentication, data integrity or data confidentiality, because of its lack of security by design. This is the motivation for the introduction of solutions, such as \review{\gls{IPsec} suite~\cite{RFC4301} or \gls{TLS}~\cite{RFC8446}}, that work on top of the current Internet and are aimed at overcoming its limitations.
For the above-mentioned reasons, researchers started designing new Internet architectures \review{(e.g., \gls{RINA}~\cite{rina}, \gls{ICN}~\cite{ccnx-1.0})} that might replace the current one in the future. Among those, the most promising \review{architectures} adhere to the \gls{ICN} paradigm: a new network communication model in which the traditional host-centric paradigm is replaced by information-centric networking. While in the current Internet two endpoints can start communicating only if they know the respective \gls{IP} addresses, either explicitly or through a \gls{DNS}, in \gls{ICN} they can send requests specifying only content names, without being aware of the content's location in the network. This decoupling between request sending and content transferring introduces several benefits: reduction of latency and network load due to in-network caching \cite{Diallo2011LeveragingCF,TANG2019590,7467400,8057300}, inherent content integrity \cite{8539022}, and better support for mobility due to name-based routing \cite{Anastasiades2014,8303694}.
\review{The ongoing research shows that the inherent benefits of \gls{ICN} (e.g., fast, efficient, and secure data delivery, improved reliability) make \gls{ICN} a suitable networking model for various emerging technologies, such as \gls{IoT}~\cite{8478349, NOUR201995} and 5G~\cite{8303694, 8263145}. In the first scenario, \gls{ICN} can help with establishing the connectivity among smart devices in an \gls{IoT} environment, as well as in smart city, smart e-health, and smart grid contexts. Also, the management of the huge amount of data generated by \gls{IoT} devices (i.e., the \gls{IoT} big data) is challenging in the existing \gls{IP} architecture, while it is mitigated by the in-network caching feature of \gls{ICN}. This feature allows reducing the traffic load on data producers by caching data on intermediate routers. Additionally, the receiver-driven communication in \gls{ICN} allows \gls{IoT} receivers to ask for data without revealing their location information, thus supporting privacy. Similarly, there are various advantages arising from an \gls{ICN}-based 5G architecture (i.e., 5G-\gls{ICN}): (i) 5G-\gls{ICN} provides a single protocol able to handle mobility and security, instead of using a diverse set of \gls{IP}-based \gls{3GPP} protocols (as in the case of existing mobile networks, e.g., \gls{LTE}, 3G, 4G); (ii) it provides a unifying platform with the same layer-3 \gls{APIs} to integrate heterogeneous radios (e.g., WiFi, \gls{LTE}, 3G) and wired interfaces in the same network; (iii) it converges services like computing, storage, and networking over a single platform, which enhances the flexibility of enabling virtualized service logic and caching functions anywhere in the network.}
\review{Due to its several advantages and various potential next-generation applications, \gls{ICN} is gaining significant attention from both industry and academia~\cite{Kumar2019, 8624408}: the authors in~\cite{8027034} provide an in-depth study of the state-of-the-art techniques by focusing on security, privacy, and access control aspects of \gls{ICN} architectures; in~\cite{8240926}, the authors present a survey on \gls{ICN} cache management strategies, along with their benefits and limitations; the authors in~\cite{8303694} focus on the state-of-the-art techniques proposed to achieve mobile \gls{ICN}. However, none of those survey articles discusses the research issues and challenges affecting an \gls{ICN}-\gls{IP} coexistence scenario, as we aim to do in this paper. Only in~\cite{RFC}, researchers from InterDigital Inc. and Huawei provided a comparison among the existing coexistence architectures, but they focused specifically on the different deployment approaches chosen by each solution.}
\textbf{Motivation.} The benefits of \gls{ICN} can occur only in a full-\gls{ICN} scenario, which implies a complete replacement of the current Internet. Despite its obvious need, this is a long and complex process that requires coordination among the different parties (i.e., \gls{ISPs}), time, costs for updating the hardware and software of the network components, and the ability to face all the possible new challenges. Previous attempts to replace a widely used technology, protocol, or architecture (e.g., IPv4/IPv6 protocol, 3G/4G technology, 4G/5G technology) have always faced a long period of coexistence between the old and the new solution. In the same way, the replacement of the current Internet will involve a transition phase during which the \gls{IP} and \gls{ICN} architectures will coexist. More specifically, we envision that in a coexistence scenario there will be \gls{ICN} and \gls{IP} ``islands'' surrounded by an \gls{IP} or an \gls{ICN} ``ocean'', where an ``island'' will be a single device, a computer, an application, or a server running either the \gls{ICN} or the \gls{IP} protocol, while an ``ocean'' will be a network containing components that run different architectures.
Researchers working in this field have already addressed the coexistence of \gls{IP} and \gls{ICN} following two separate approaches. In the first, the research groups designed future Internet architectures facing the coexistence only during the deployment of their testbeds and without considering it as part of the initial design. On the contrary, in the second case, the design of the future Internet architectures specifically addressed the coexistence of \gls{IP} and \gls{ICN}.
All the existing networking solutions that consider the coexistence are affected by a strong limitation: the lack of a comprehensive approach in addressing it. The purpose of those solutions is to improve a network performance indicator, without considering all the issues that arise in coexistence scenarios, especially those regarding the security and privacy of the end users. To design the first complete coexistence architecture, it is first necessary to have a comprehensive overview of the strengths and weaknesses of the existing solutions.
\textbf{Contribution.} The purpose of this paper is to provide the first complete survey and classification of the existing coexistence solutions. Details of \gls{ICN} and of its working methodology are out of scope for this paper, since there are already several surveys addressing this aim~\cite{8303694, 8027034, 8240926}. Overall, the contributions of this paper are as follows:
\begin{enumerate}
\item We define a set of relevant features necessary to comprehensively analyze a coexistence architecture.
\item We provide the first comprehensive classification of all the main coexistence solutions.
\item We discuss the open issues and challenges affecting the existing coexistence architectures, by providing possible insights to design a more reliable future Internet architecture.
\end{enumerate}
\textbf{Organization.} The paper is organized as follows: in Section~\ref{background}, we introduce the ICN concept, comparing it with the current \gls{IP} architecture and illustrating its main benefits; Section~\ref{classification_criteria} describes all the criteria we identified and used for the analysis and classification of the coexistence architectures; in Section~\ref{coexistence_architectures}, we illustrate each coexistence architecture in detail and provide the motivation for our classification; in Section~\ref{discussion_conclusion}, we discuss the main strengths and limitations of the current coexistence architectures, providing insights for improving the design of the future Internet; finally, in Section~\ref{conclude}, we conclude the paper.
Philippines
Union Theological Seminary – university in Dasmariñas
United States of America
Union Theological Seminary – university in New York
Union Theological Seminary – original name of the Union Presbyterian Seminary, a university in Richmond
using System;
using System.Collections.Generic;
using System.Text;
using System.ComponentModel;
using System.Globalization;
namespace Revit.SDK.Samples.NewRebar.CS
{
/// <summary>
/// Type converter between RebarShapeParameter and string is provided for property grid.
/// </summary>
class TypeConverterRebarShapeParameter : TypeConverter
{
/// <summary>
/// RebarShape parameters list.
/// </summary>
public static List<RebarShapeParameter> RebarShapeParameters;
/// <summary>
/// Returns whether this converter can convert the object to the specified type.
/// </summary>
/// <param name="context">An System.ComponentModel.ITypeDescriptorContext that
/// provides a format context.</param>
/// <param name="destinationType">A System.Type that represents the type you want
/// to convert to.</param>
/// <returns></returns>
public override bool CanConvertTo(ITypeDescriptorContext context, Type destinationType)
{
return destinationType == typeof(string);
}
/// <summary>
/// Converts the given value object to the specified type, using the specified
/// context and culture information.
/// </summary>
/// <param name="context">An System.ComponentModel.ITypeDescriptorContext that
/// provides a format context.</param>
/// <param name="culture">A System.Globalization.CultureInfo. If null is passed,
/// the current culture is assumed.</param>
/// <param name="value">The System.Object to convert.</param>
/// <param name="destinationType">The System.Type to convert the value parameter
/// to.</param>
/// <returns>An System.Object that represents the converted value.</returns>
public override object ConvertTo(ITypeDescriptorContext context, CultureInfo culture,
object value, Type destinationType)
{
if (destinationType == typeof(String) && value is RebarShapeParameter)
{
RebarShapeParameter param = value as RebarShapeParameter;
if (null != param)
{
return param.Name;
}
}
throw new Exception("Can't be converted to other types except string.");
}
/// <summary>
/// Returns whether this converter can convert an object of the given type to
/// the type of this converter, using the specified context.
/// </summary>
/// <param name="context">An System.ComponentModel.ITypeDescriptorContext that
/// provides a format context.</param>
/// <param name="sourceType">A System.Type that represents the type you want to
/// convert from.</param>
/// <returns>true if this converter can perform the conversion; otherwise,
/// false.</returns>
public override bool CanConvertFrom(ITypeDescriptorContext context, Type sourceType)
{
return sourceType == typeof(string);
}
/// <summary>
/// Converts the given object to the type of this converter, using the specified
/// context and culture information.
/// </summary>
/// <param name="context">An System.ComponentModel.ITypeDescriptorContext that
/// provides a format context.</param>
/// <param name="culture">The System.Globalization.CultureInfo to use as the
/// current culture.</param>
/// <param name="value">The System.Object to convert.</param>
/// <returns>An System.Object that represents the converted value.</returns>
public override object ConvertFrom(ITypeDescriptorContext context, CultureInfo culture,
object value)
{
if (value is string)
{
foreach (RebarShapeParameter param in
TypeConverterRebarShapeParameter.RebarShapeParameters)
{
if (param.Name.Equals(value))
{
return param;
}
}
}
throw new NotSupportedException("Cannot convert from types other than string.");
}
/// <summary>
/// Returns whether this object supports a standard set of values that can be
/// picked from a list.
/// </summary>
/// <param name="context">An System.ComponentModel.ITypeDescriptorContext that
/// provides a format context.</param>
/// <returns>true if System.ComponentModel.TypeConverter.GetStandardValues() should be
/// called to find a common set of values the object supports; otherwise, false.</returns>
public override bool GetStandardValuesSupported(ITypeDescriptorContext context)
{
return true;
}
/// <summary>
/// Returns a collection of standard values for the data type this type converter
/// is designed for when provided with a format context.
/// </summary>
/// <param name="context">An System.ComponentModel.ITypeDescriptorContext that
/// provides a format context
/// that can be used to extract additional information about the environment
/// from which this converter is invoked. This parameter or properties of this
/// parameter can be null.</param>
/// <returns>A System.ComponentModel.TypeConverter.StandardValuesCollection that holds
/// a standard set of valid values, or null if the data type does not support
/// a standard set of values.</returns>
public override StandardValuesCollection GetStandardValues(ITypeDescriptorContext context)
{
return new StandardValuesCollection(RebarShapeParameters);
}
/// <summary>
/// Returns whether the collection of standard values returned from
/// System.ComponentModel.TypeConverter.GetStandardValues()
/// is an exclusive list of possible values, using the specified context.
/// </summary>
/// <param name="context">An System.ComponentModel.ITypeDescriptorContext that
/// provides a format context.</param>
/// <returns>true if the System.ComponentModel.TypeConverter.StandardValuesCollection
/// returned from System.ComponentModel.TypeConverter.GetStandardValues() is
/// an exhaustive list of possible values; false if other values are possible.</returns>
public override bool GetStandardValuesExclusive(ITypeDescriptorContext context)
{
return true;
}
}
}
| {
"redpajama_set_name": "RedPajamaGithub"
} | 5,552 |
{"url":"https:\/\/stats.stackexchange.com\/questions\/551232\/anova-model-assumptions","text":"# ANOVA model assumptions\n\nSuppose the following:\n\n$$X_{lj} \\sim N(\\mu_{l} , \\sigma^{2})$$ where $$l = 1, ...,g$$ and $$j = 1,...,n_{l}$$\n\nFurthermore assume $$\\sigma^{2}_{1} = \\sigma^{2}_{2} = ... = \\sigma^{2}_{g} = \\sigma^{2}$$\n\nThen we can define the one-way ANOVA as follows:\n\n$$X_{lj} = \\mu + \\tau_{l} + e_{lj}$$\n\nI'm trying to understand the ANOVA model.\n\nIs the ANOVA model based on any sort of distribution? I know that the response variable $$X$$ is assumed to follow a normal distribution but I'm not sure why this assumption is even necessary to create this model. Also why is it assumed that the variances of the groups are equal?\n\n\u2022 In a standard one-way ANOVA, for the F-ratio used to test whether all $g$ levels of the factor correspond to normal populations with the same mean to be reliable, it is necessary that the population variances are the same $(\\sigma_\\ell^2 = \\sigma^2).$ Then the error mean square can be used to estimate $\\sigma^2.$ However, the procedure oneway.test in R can test for equal level means without assuming equal level variances. \/\/ If you doubt that data are nearly normal, then you might consider using the nonparametric Kruskal-Wallis test. Nov 6 '21 at 20:48\n\u2022 You can find good resources in the answers to this question: Checking ANOVA assumptions Nov 7 '21 at 10:18\n\nWith ANOVA you compare the variance between the group means to the variance within the groups.\n\nFor the comparison of variance it doesn't in principle matter what distribution you have. If the distributions are the same then the distribution of the means has the variance of the distribution of the individuals divided by $$n$$, the size of the groups*.\n\nBut... 
the problem is that variances are estimated based on the residuals (the difference between the observations and the mean).\n\n\u2022 When you have normally distributed data then these estimates will be $$\chi^2$$ distributed and that's what is being used to test the hypothesis. (The ratio of the two, a ratio of $$\chi^2$$ distributed variables, is an F-distribution, which is the final measure.)\n\n\u2022 When the data is not normally distributed then the estimates will only be approximately chi-square distributed. But how bad that is will depend on the situation.\n\nIn the example below we see that there can be some discrepancies. The computed p-value will not match the actual p-value. Low p-values might occur more\/less often than what they indicate.\n\nWhether this is a problem depends on the practical situation. If you compute a p-value of 0.02 and it is actually 0.04, did you make a big mistake?\n\nExample:\n\nSay the actual distribution of the data is\n\n\u2022 a beta distribution (with $$\alpha=1$$ and $$\beta=0.5$$)\n\u2022 a t-distribution with $$\nu = 1$$\n\nand we have 5 groups with 5 members each. Let's simulate how often we get which p-values.\n\nIn the image below we see how much the p-values deviate from the expectation for one million simulations of a null hypothesis test. For the beta distribution, it does not matter much. 
For the t-distribution there is a big difference (but we chose an extreme example, the t-distribution with $$\\nu = 1$$ has infinite variance).\n\n*There will be some details like using $$n-1$$ when you express the distribution of residuals","date":"2022-01-23 16:51:31","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 15, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.7958935499191284, \"perplexity\": 335.74426726810253}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.3, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2022-05\/segments\/1642320304287.0\/warc\/CC-MAIN-20220123141754-20220123171754-00055.warc.gz\"}"} | null | null |
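The simulation described in the answer above can be sketched in a few lines of Python. This is a hedged illustration, not the answerer's original code: the number of simulations, the group layout, and the tabulated critical value $$F_{0.05}(4, 20) \approx 2.87$$ are assumptions.

```python
import numpy as np

def f_statistic(groups):
    # One-way ANOVA F-ratio: between-group vs within-group mean squares.
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = np.concatenate(groups).mean()
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

def rejection_rate(sampler, n_sims=4000, k=5, n_per_group=5, crit=2.87):
    # Fraction of null-hypothesis simulations whose F exceeds the nominal
    # 5% critical value; F_{0.05}(4, 20) ~ 2.87 from standard tables.
    rng = np.random.default_rng(0)
    rejections = 0
    for _ in range(n_sims):
        groups = [sampler(rng, n_per_group) for _ in range(k)]
        if f_statistic(groups) > crit:
            rejections += 1
    return rejections / n_sims

# Same location and scale for every group, i.e. the null hypothesis holds.
normal_rate = rejection_rate(lambda rng, n: rng.standard_normal(n))
heavy_rate = rejection_rate(lambda rng, n: rng.standard_t(df=1, size=n))
```

With normal errors the empirical rejection rate should land near the nominal 5%; with $$t(\nu = 1)$$ errors it can deviate substantially, which is exactly the sense in which the computed p-values stop matching the actual ones.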
Man accused of shooting two people during Elijah McClain protest charged with attempted first-degree murder
Elijah McClain protesters shut down I-225 on July 25, 2020.
By Kieran Nicholson | knicholson@denverpost.com | The Denver Post
PUBLISHED: August 3, 2020 at 5:31 p.m. | UPDATED: August 3, 2020 at 5:35 p.m.
The Arapahoe County District Attorney charged a 23-year-old with four counts of attempted first-degree murder and other felonies for firing a revolver into a crowd of people at a protest last month on Interstate 225 in Aurora.
Samuel Alvin Young, of Wheat Ridge, also faces two counts of first-degree assault with a deadly weapon causing serious bodily injury and two counts of first-degree assault extreme indifference, according to an 18th Judicial District Attorney's Office news release.
Young allegedly fired a revolver at a Jeep that drove through a crowd blocking the interstate during a protest over the death of Elijah McClain on July 25. No one was hit by the Jeep.
Two protesters, however, were hit by the gunfire. One man was shot in the leg and was taken by ambulance to a local hospital. Another man suffered a graze wound to his head and was taken by a private vehicle to a hospital.
Police have impounded the Jeep involved in the incident, as evidence in the case. Aurora police on Monday said an investigation is ongoing although they have not made an arrest.
Young called Aurora police the day after the incident and identified himself as the person of interest in the case after police released pictures on social media. Young is free on a $75,000 bond. His next court appearance is scheduled for Aug. 14.
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 7,163 |
Q: ufraw fatal internal error on ubuntu 16.04 I am trying to use ufraw to convert cr2 images to jpeg on Ubuntu 16.04. When I run the following, from the folder in which my images are located:
ufraw-batch --out-type jpg *.CR2
I get the following:
ufraw-batch: Fatal internal error
Segmentation fault (core dumped)
The same thing happens in ufraw GUI. Can anyone help?
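One way to narrow this down is to run the converter on one file at a time and see whether a single input triggers the crash. The sketch below is a hypothetical Python helper, not part of ufraw: it reuses the `ufraw-batch` arguments from the command above and collects any file whose conversion exits abnormally.

```python
import glob
import subprocess

def find_failing_files(cmd, pattern="*.CR2"):
    # Run `cmd` once per matching file; return the files for which the
    # process exits with a nonzero status (on Linux a segfault shows up
    # as a negative return code).
    failing = []
    for path in sorted(glob.glob(pattern)):
        result = subprocess.run(cmd + [path])
        if result.returncode != 0:
            failing.append(path)
    return failing

# Example: bad = find_failing_files(["ufraw-batch", "--out-type", "jpg"])
```

If exactly one file crashes the converter, the problem is likely a corrupt or unsupported CR2 rather than the installation itself.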
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 5,798 |
The 4th Missouri Infantry Regiment was formed on April 28, 1862, and served in the Confederate States Army during the American Civil War. The infantry regiment did not see action at the Battle of Farmington on May 9, and the Battle of Iuka on September 19 despite being part of the Confederate force present at those battles. As part of Brigadier General Martin E. Green's brigade, the regiment participated in three charges against Union lines on October 3, 1862, during the Second Battle of Corinth. The following day, the regiment, along with the rest of Green's brigade, attacked the new Union lines. Despite initial success, the attack was repulsed by a Union counterattack. The regiment ceased to exist as a separate unit when it was combined with the 1st Missouri Infantry Regiment on November 7, 1862, to form the 1st and 4th Missouri Infantry Regiment (Consolidated).
The combined unit served in the Vicksburg campaign in 1863, before surrendering at the end of the siege of Vicksburg. After undergoing a prisoner exchange, the men rejoined the Confederate Army and served in the Atlanta Campaign and the Battle of Franklin in 1864, still as part of the 1st and 4th Missouri Infantry Regiment (Consolidated). On May 9, 1865, near the end of the war, the consolidated regiment surrendered during the Battle of Fort Blakely, ending the unit's existence. The 4th Missouri Infantry's battle flag is displayed at the American Civil War Museum.
Background and organization
When the American Civil War began in 1861, the state of Missouri was politically divided between those supporting secession and those wishing to remain in the Union. The Governor of Missouri, Claiborne Fox Jackson, was a secessionist and supported the Confederate States of America; he created a pro-secession militia unit known as the Missouri State Guard (MSG) in May. The MSG, under the command of Major General Sterling Price, had initial success, including a victory against the Union Army in the Battle of Wilson's Creek, but were confined to southwestern Missouri by the end of the year. In the Battle of Pea Ridge, fought on March 7 and 8, 1862, in northwestern Arkansas, Price and the MSG suffered another defeat while serving under Major General Earl Van Dorn. After Pea Ridge, Van Dorn's army was transferred east of the Mississippi River. Eventually, many of the men of the MSG joined Confederate Army units.
The 4th Missouri Infantry Regiment was formed on April 28, 1862, in Memphis, Tennessee. Two previously existing battalions, commanded by Archibald A. MacFarlane and Waldo P. Johnson, were combined with a small element of the MSG; many of MacFarlane and Johnson's men were MSG veterans. MacFarlane was appointed the regiment's first colonel, Johnson was the first lieutenant colonel, and Stephen W. Wood was the regiment's first major. On April 28, the regiment contained ten companies, all Missouri-raised; they were designated with the letters A–I and K. Almost all of the regiment's soldiers were of Anglo-Saxon descent.
Service history
After formation, the regiment was transferred by railroad to Corinth, Mississippi, as part of the Army of the West. An accounting of the regiment's troops during a May 5, 1862 muster listed 547 men in the regiment. On May 9, the 4th Missouri Infantry was near the action at the Battle of Farmington and deployed, but did not enter the fray. After the Confederates evacuated Corinth because of Union pressure, the regiment trained in several locations in northern Mississippi. Price was in command of the Army of the West, which he had stationed at Iuka, Mississippi; Van Dorn had troops further to the south. The Confederates were conducting an offensive into Kentucky, and Price and Van Dorn were expected to move into Tennessee to support it. Major General Ulysses S. Grant, who was the Union commander in the region, attempted to trap Price before he could join Van Dorn, but the Confederates were able to escape after fighting the Battle of Iuka. At this time, the 4th Missouri Infantry was in Brigadier General Martin E. Green's brigade, which was held in reserve and did not fight at Iuka.
After escaping, Price joined Van Dorn, who commanded the combined force. Together, the Confederates moved against Corinth, which was strategically important to Union plans in the region. On October 2, Union Major General William S. Rosecrans occupied Corinth with 23,000 men; that same day, he learned of Van Dorn's approach. After arriving near the city, the Confederates deployed in an arc northwest of the Union defenses with 22,000 men. At 10:00a.m. on October 3, Van Dorn attacked, beginning the Second Battle of Corinth. At Corinth, the 4th Missouri Infantry was still part of Green's brigade, which was in Brigadier General Louis Hébert's division; Hébert's formation was, in turn, part of Price's corps within the Army of West Tennessee. The 4th Missouri Infantry and the rest of Green's brigade (except for the artillery) attacked an outer Union position held by Brigadier General Thomas A. Davies's division. The initial attack was repulsed, but Green ordered a second charge, which was again repulsed, this time by a Union counterattack led by the 2nd Iowa Infantry. Later in the afternoon, Green's brigade made another charge against Davies's line; this attack was supported by elements of Colonel Elijah Gates's and Brigadier General Charles W. Phifer's brigades. After heavy fighting, the Union line was broken. Despite an opportunity to attack the inner Union line, Price decided not to press the attack as only 30 minutes of daylight remained; instead, he waited for the morning of the 4th to resume the battle.
After Hébert fell ill, Green was promoted to divisional command on October 4. Command of Green's brigade then fell to Colonel William H. Moore, who led a charge against the inner Union line, to capture a fortification known as Battery Powell. The Union line was defended by men of Davies's division, who were quickly routed by the Confederate charge. After breaking through Davies's line, Moore's brigade aimed for the town of Corinth itself. Along with elements of Phifer's brigade and the brigade of Brigadier General John C. Moore, it entered Corinth and penetrated as far as the Tishomingo Hotel. A Union counterattack drove the Confederates out of Corinth. At Second Corinth, the 4th Missouri lost 129 men: 15 killed, 87 wounded, and 27 missing. MacFarlane suffered a serious head wound during the battle.
Legacy
On November 7, in the vicinity of Wyatt, Mississippi, the regiment consolidated with the 1st Missouri Infantry, due to losses in both units. The combination of the two regiments formed the 1st and 4th Missouri Infantry Regiment (Consolidated). Companies B, C, E, H, and I of the new regiment were composed of men from the 4th Missouri Infantry; Companies A, D, F, G, and K were composed of men from the 1st Missouri Infantry. MacFarlane and Colonel Amos C. Riley of the 1st Missouri Infantry came to an agreement whereby McFarlane became colonel of the unit and Riley lieutenant colonel; the latter commanded the unit while MacFarlane recovered from his wounds. As a result of the consolidation, about 40 officers were deemed superfluous and were sent back across the Mississippi River to recruit new soldiers.
In 1863, the new regiment fought at the Battle of Grand Gulf, the Battle of Champion Hill, the Battle of Big Black River Bridge, and the siege of Vicksburg, where the regiment was captured as part of a Confederate surrender. The men of the regiment then underwent a prisoner exchange and rejoined the Confederate army, still under the designation of the 1st and 4th Missouri Infantry Regiment (Consolidated). In 1864, the regiment was engaged at the Battle of New Hope Church, the Battle of Kennesaw Mountain, the siege of Atlanta, the Battle of Allatoona, and the Battle of Franklin. On May 9, 1865, near the end of the war, the 1st and 4th Missouri Infantry (Consolidated) surrendered at the Battle of Fort Blakely, ending the unit's existence.
As of January 2021, the flag of the 4th Missouri Infantry, a Van Dorn battle flag, is held by the American Civil War Museum in Richmond, Virginia.
See also
List of Missouri Confederate Civil War units
"redpajama_set_name": "RedPajamaWikipedia"
} | 7,658 |
\section*{Contents}
\begin{enumerate}
\item Text S1 to Sx
\item Figures S1 to Sx
\item Tables S1 to Sx
\end{enumerate}
\section*{Additional Supporting Information (Files uploaded separately)}
\begin{enumerate}
\item Captions for Datasets S1 to Sx
\item Captions for large Tables S1 to Sx (if larger than 1 page, upload as separate excel file)
\item Captions for Movies S1 to Sx
\item Captions for Audio S1 to Sx
\end{enumerate}
\section*{Introduction}
\section*{Text S1.}
Type or paste text here. This should be additional explanatory text,
such as: extended descriptions of results, full details of models,
extended lists of acknowledgements etc. It should not be
additional discussion, analysis, interpretation or critique. It
should not be an additional scientific experiment or paper.
Repeat for any additional Supporting Text
\section*{Data Set S1.}
Upload your dataset(s) to AGU's journal submission site and select
"Supporting Information (SI)" as the file type. Following naming
convention: ds01.
Repeat for any additional Supporting data sets
\section*{Movie S1.}
Type or paste caption here.
Upload your movie(s) to AGU's journal submission site and select
"Supporting Information (SI)" as the file type. Following naming convention: ms01.
Repeat for any additional Supporting movies
\section*{Audio S1.}
Type or paste caption here.
Upload your audio file(s) to AGU's journal submission site and select
"Supporting Information (SI)" as the file type. Following naming
convention: auds01.
Repeat for any additional Supporting audio files
\end{document}
\section*{Plain Language Summary}
Advances in computing have allowed computer models to simulate tropical weather systems spanning a few dozen kilometers at the same time as moist and dry regions spanning several thousand kilometers. To improve and validate computer models, we need to compare computer simulations to real-world observations, but we lack a compact way of simultaneously comparing them at scales close to 10km, 100km, 1,000km, and 10,000km. By breaking down water vapor variability near the Equator into contributions from these different length scales, we can identify the scales at which computer models agree with real-world observations and explain why. Surprisingly, even computer models that are run in a highly idealized configuration \textcolor{black}{compare} well against observations of the real world, despite the fact that nature never attains this idealized limit. We find that atmospheric radiation tends to intensify moist and dry regions of several thousand kilometers near the Equator, while lateral transport of energy and surface-atmosphere exchanges tend to smooth out these moist and dry regions.
\section{Introduction}
Tropical weather and climate are strongly shaped by the variability of column water vapor, which dominates column-integrated moist static energy (MSE) variability due to weak horizontal variations in tropical atmospheric temperature. On meteorological timescales, the intensity of extreme precipitation events depends on the humidity and temperature of the surrounding environment, e.g. for isolated convective cells, mesoscale convective systems \citep{LeMone1998} and tropical cyclones \citep{Hill2009}. On climatic timescales, the zonal variability of MSE is linked to the equator-to-pole energy transport \citep{Trenberth2002} and to climate sensitivity through the link between the hydrological cycle and cloud and water vapor feedbacks \citep{Feldl2014}. Persistent regions of high and low MSE occur due to surface heterogeneities, including ocean currents, continents, and mountain ranges (Figure \ref{fig:Snapshots}a), while transient anomalies in MSE near the Equator (e.g., Figure \ref{fig:Snapshots}c) relate to a rich spectrum of tropical weather across a range of temporal and spatial scales. This includes isolated convective activity ($\sim$1 hour, $\sim10$ km), mesoscale convective complexes ($\sim$10 hours, $\sim100$ km) \citep[e.g. review by][]{Houze2004}, tropical depressions ($\sim$ 10 days, $\sim1000$ km) \citep[e.g. review by][]{Montgomery2017}, \textcolor{black}{the Madden-Julian Oscillation \citep[e.g. review by][]{Zhang2005}, and the Asian monsoon ($\sim$ 60 days, $\sim10000$ km) \citep[e.g. review by][]{Webster1998}}.
\begin{figure}[H]
\begin{centering}
\includegraphics[width=15cm]{GRL_Fig01}
\par\end{centering}
\caption{(a) Instantaneous, (b) time-averaged, and (c) transient MSE in ERA from $10\text{\textdegree}$S to $10\text{\textdegree}$N. (d-i) Instantaneous transient MSE in each model of section \ref{sec:Data}. Transient MSE is normalized by the latent heat of vaporization of water $L_{v}\ $to yield units $\textnormal{kg}\ \textnormal{m}^{-2}\ $: the length of the bottom colorbar corresponds to $\sim60\textnormal{MJ\ m}^{-2}$. Panels (a-f) respect the original aspect ratio of the horizontal domain. While the length of the long-channel equals a third of the Equator's length, its width has been multiplied by a factor $5\ $in panels (g-i) to facilitate visualization. \label{fig:Snapshots}}
\end{figure}
Advanced computing now allows simulation of planetary-scale domains ($\sim10^4$ km) with \textcolor{black}{convection-permitting} models (CPM) of horizontal resolution $\sim 1$ km, which can resolve this entire spectrum of tropical weather. \textcolor{black}{Tropical weather systems have been extensively compared in field-campaign observations and regional CPM to evaluate CPMs' ability to adequately represent convective processes \citep[e.g., ][]{Beucher2014,Laing2012,Woodhams2018}. However, explicit comparisons of physical processes regulating the spatio-temporal spectrum of MSE} in observations and CPM are rare. The goal of this paper is to use a spectral budget for sources and sinks of transient MSE variance as a step towards comparing \textcolor{black}{these physical processes} across observations and models of varying complexity.
We use the column-integrated frozen moist static energy $H$ (units J m$^{-2}$)\textcolor{black}{:
\begin{linenomath*}
\begin{equation}
H\left(x,y,t\right)\overset{\mathrm{def}}{=}\int_{0}^{p_{s}}\frac{dp}{g}\left(L_{v}q-L_{f}q_{i}+\underbrace{c_{p}T+gz}_{s}\right),
\end{equation}
\end{linenomath*}
}as a diagnostic because it is \textcolor{black}{approximately conserved during convection}, and because previous studies in idealized CPM have successfully used its variance budget to assess processes that favor or disfavor convective aggregation \textcolor{black}{ \citep[e.g., ][]{Wing2014,Wing2016a}}. Here, \textcolor{black}{$ p_{s}$ is surface pressure and $ p$ the mean pressure profile}, $L_{v}$ and $L_f$ are the latent heat of vaporization and fusion of water, respectively, $q$ and $q_i$ are water vapor and ice mixing ratios, respectively, $c_{p}$ is the specific heat capacity of dry air at constant pressure, $T$ is the absolute temperature, \textcolor{black}{$g $ is the gravity constant, $z $ is the geopotential height, }and $s$ is the dry static energy. The total MSE field $H$ has spatial variability in its temporal mean $\overline{H}$, as well as spatiotemporal variability in the transient MSE anomaly $H^{\prime}$, here defined by:
\begin{linenomath*}
\begin{equation}
H\left(x,y,t\right)=\overline{H}\left(x,y\right)+H^{\prime}\left(x,y,t\right)\label{eq:Transient_definition}
\end{equation}
\end{linenomath*}
Note that transient MSE variability may be modulated nonlinearly by the stationary MSE features, adding another level of complexity to MSE transients --- but in this paper we will focus primarily on comparing transient MSE variability across models and observations without directly assessing this role of nonlinear modulation.
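Both definitions above map directly onto array operations. The sketch below is a hedged numpy illustration, not the analysis code used for this study: the constants, the level ordering, and the spectral normalization are assumptions, and an odd zonal grid length is assumed to avoid special-casing the Nyquist wavenumber.

```python
import numpy as np

G, CP = 9.81, 1004.0       # gravity (m s-2), heat capacity (J kg-1 K-1)
LV, LF = 2.501e6, 3.337e5  # latent heats of vaporization, fusion (J kg-1)

def column_fmse(p, T, q, qi, z):
    # Eq. (1): column-integrated frozen MSE (J m-2), trapezoidal rule.
    # p: pressure levels (Pa), increasing from model top to surface;
    # T, q, qi, z: profiles on those levels (level axis first).
    h = LV * q - LF * qi + CP * T + G * z
    dp = np.diff(p, axis=0)
    return (0.5 * (h[1:] + h[:-1]) * dp).sum(axis=0) / G

def transient_anomaly(H):
    # Eq. (2): split H(t, x) into a time mean and a transient anomaly.
    H_bar = H.mean(axis=0)
    return H_bar, H - H_bar

def zonal_power_spectrum(field):
    # Time-averaged zonal power spectrum of field(t, x); with this
    # normalization the spectrum sums to the field's mean square (Parseval).
    n = field.shape[-1]
    coeffs = np.fft.rfft(field, axis=-1) / n
    power = np.abs(coeffs) ** 2
    power[..., 1:] *= 2.0  # fold in conjugate (negative) wavenumbers
    return power.mean(axis=0)
```

Summing the spectrum of $H^{\prime}$ over wavenumbers recovers the mean-square transient MSE, which is what allows the budget of section \ref{sec:Zonal-Spectral-Budget-MSE} to be read scale by scale.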
Previous work \citep[e.g.,][]{Held1993,Muller2015a} has consistently found that when CPMs are run on large enough domains, MSE self-organizes into moist and dry regions even in the absence of external forcing \textcolor{black}{(such as planetary rotation, surface inhomogeneities or large-scale wind shear)}. This emergent property of moist convection, referred to as ``convective self-aggregation'' \citep[e.g., review by][]{Wing2017,Holloway2017}, suggests that a significant fraction of transient MSE variability near the Equator might arise from internal self-organization rather than external processes such as surface characteristics, teleconnections with the mid-latitudes, or ocean coupling. The problem is that physical mechanisms of convective self-aggregation have been extensively studied in the context of idealized CPM with fixed surface temperatures, which ignore external processes, and thus provide an uncertain analogy to real-world settings.
This motivates the aim of our paper --- quantitatively comparing convective-aggregation processes in idealized CPM and observations\textcolor{black}{. This comparison} may deepen our understanding of (1) how transient MSE anomalies grow and decay and (2) how valid \textcolor{black}{idealized CPM simulations are as an analogy to the real world}. Idealized CPM have been compared to observations in the past, but mostly at coarse granularity by looking for similar correlations or distributions of variables.
Using satellite data, \citet{Tobin2012a} showed that $\left(10^{\circ}\times10^{\circ}\right)\ $longitude-latitude boxes with more convective organization also exhibited lower values of MSE and larger outgoing longwave radiation, consistent with idealized CPM experiments \citep{Wing2014}. \citet{Holloway2017} used data from the Nauru meteorological station and showed that the long-channel configuration of \citet{Wing2016a} had more realistic distributions of MSE and vertical velocity than traditional square-domain CPM. Additionally, \citet{Stein2017} showed that for a given large-scale precipitation rate and vertical motion, anvil clouds decreased with the degree of aggregation in satellite data, while low clouds and precipitation efficiency increased with aggregation, consistent with CPM simulations \citep[e.g., Figure 8 of][]{Wing2016a}. Recently, \citet{Holloway2017a} used the MSE spatial variance budget to show that interactive radiation maintained aggregation while MSE advection disaggregated convection in \textcolor{black}{CPM experiments forced using satellite data, as found in idealized CPM simulations of RCE}.
\textcolor{black}{While Holloway's simulations support the validity of the RCE analogy, their small domain size $\left(10^{\circ}\times10^{\circ}\right)\ $ may underestimate convective-aggregation feedbacks because of the effect of MSE advection from the prescribed boundary conditions.} Motivated by the recent availability of planetary-domain CPM and high-resolution reanalysis products, we proceed by comparing the observed transient MSE field (Figure \ref{fig:Snapshots}c) to the transient MSE field from several idealized\textcolor{black}{, large-domain} CPM experiments\textcolor{black}{ \citep[][ Figures \ref{fig:Snapshots}d-i]{Wing2017,Khairoutdinov2018}} and ask:
How do the physical \textcolor{black}{processes} that \textcolor{black}{regulate} observed moist static energy variance compare to the convective-aggregation \textcolor{black}{processes} from idealized models \textit{at each horizontal scale}?
The work below is organized as follows. After introducing the observational and model datasets in section \ref{sec:Data}, we investigate the zonal power spectra of transient MSE and how they evolve under the influence of radiation, surface enthalpy fluxes, and advection in section \ref{sec:Zonal-Spectral-Budget-MSE}, before
concluding in section \ref{sec:Conclusion}.
\section{Data\label{sec:Data}}
We use four datasets to compare convective aggregation in observations and idealized CPM: Meteorological reanalysis (ERA), satellite observations (CERES), a rotating near-global simulation (NG) and a non-rotating long-channel simulation (LC). A snapshot of the transient MSE field from each is shown in Figure \ref{fig:Snapshots}, and each is described in more detail below.
\subsection{Reanalysis observations: ERA}
The European Centre for Medium-Range Weather Forecasts Re-Analysis (ERA) version 5 \citep{Hersbach2016} was produced by assimilating observational data in version CY41R2 of the Integrated Forecast System. The new reanalysis dataset has a better hydrological cycle and sea surface temperatures in the Tropics and is calibrated for climate applications. Zonal and temporal resolutions are $27.5\textnormal{km}\times1\textnormal{hour}$.
\subsection{Satellite observations: CERES}
The Clouds \& Earth's Radiant Energy Systems \citep[CERES, ][]{Wielicki1996} ``CERES SYN1deg Ed4A'' dataset provides diurnally-complete top-of-atmosphere and surface radiative fluxes by using sixteen geostationary satellites as well as the National Aeronautics and Space Administration's Moderate Resolution Imaging Spectroradiometer. Zonal and temporal resolutions are \textcolor{black}{$1^{\circ}\times1\textnormal{hour}$}.
\subsection{\textcolor{black}{Convection-permitting} Model (CPM) Simulations}
The following experiments were conducted using the System for Atmospheric Modeling (SAM), a \textcolor{black}{convection-permitting} model that is widely used for idealized studies, solves the anelastic equations of motion, and includes cloud microphysics and subgrid turbulence parameterizations \citep{Khairoutdinov2003}.
\begin{itemize}
\item \textbf{LC: } A suite of three idealized long-channel (LC) experiments with \textcolor{black}{ a doubly-periodic} horizontal domain \textcolor{black}{of }size $12,288\times192\textnormal{km}^{2}$ over a uniform ocean surface with temperature 300K, using SAM v6.8.2. These consist of a control simulation (LC CTRL, Figure \ref{fig:Snapshots}g), described in \citet{Wing2016a}, \textcolor{black}{and simulations that horizontally homogenize either radiation (LC UNI-RAD, Figure \ref{fig:Snapshots}h), or surface enthalpy fluxes (LC UNI-SEF, Figure \ref{fig:Snapshots}i), described in \citet{Beucler2018d}}. Each experiment was run for 80 days to a statistically steady state and outputs were saved with zonal and temporal resolutions of $3\textnormal{km}\times1\textnormal{hour}$.
\item \textbf{NG: } Similar to LC, but for a much larger (near-global; NG, $40,360\times10,000\textnormal{km}^{2}$) ocean-only\textcolor{black}{, zonally-periodic }domain with prescribed ocean surface temperatures that decrease away from the equator and a Coriolis parameter that increases away from the equator, allowing for formation of extratropical eddies which intrude into the tropics. As with LC, three runs were conducted at 300K, consisting of NG CTRL (control experiment, Figure \ref{fig:Snapshots}d), NG UNI-RAD (horizontally-uniform radiative heating, Figure \ref{fig:Snapshots}e) and NG UNI-SEF (horizontally-uniform surface enthalpy fluxes, Figure \ref{fig:Snapshots}f) in \citet{Khairoutdinov2018}. Each experiment was run for a year, using SAM v6.10.6. Outputs were saved with zonal and temporal resolutions of $156.25\textnormal{km}\times1\textnormal{day}$.
\end{itemize}
At spatial scales below $\mathrm{O\left(100km\right)}$, most of the spatial variance comes from sub-diurnal variability from isolated convective events (not shown). Hence, to make meaningful comparisons across datasets, we time-average the fields of ERA, CERES and LC over one-day blocks before calculating spatial co-spectra using the Fast Fourier Transform algorithm \citep{Frigo2005}.
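The one-day block averaging described above can be sketched numerically. This is an illustrative reconstruction, not the authors' code; the function name and the assumption of 24 hourly samples per day are ours.

```python
import numpy as np

# Illustrative reconstruction (not the authors' code): average hourly
# fields over one-day blocks before computing spatial co-spectra, which
# suppresses the sub-diurnal variance from isolated convective events.
# The assumption of 24 samples per day is ours.
def daily_block_mean(field, samples_per_day=24):
    n_days = field.shape[0] // samples_per_day
    trimmed = field[: n_days * samples_per_day]       # drop any partial day
    return trimmed.reshape(n_days, samples_per_day, *field.shape[1:]).mean(axis=1)

hourly = np.arange(48.0).reshape(48, 1)  # two days of hourly samples, one grid point
daily = daily_block_mean(hourly)         # shape (2, 1)
```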
\section{Zonal Spectral Budget of Transient Column Moist Static Energy\label{sec:Zonal-Spectral-Budget-MSE}}
The following spectral method will allow us to (1) \textcolor{black}{separate the zonal variability of MSE in each dataset into contributions from different scales} and (2) quantify the amount of variance created by radiation, surface enthalpy fluxes, and advection at each zonal scale. Specifically, we measure zonal variability of transient MSE $H^{\prime}\ $at a given zonal wavelength $\lambda\ $using the zonal power spectrum $\varphi_{H}\ $of transient MSE, defined as:
\begin{linenomath*}
\begin{equation}
\varphi_{H}\left(\lambda,y,t\right)\overset{\mathrm{def}}{=}\widehat{H^{\prime}}^{*}\widehat{H^{\prime}},
\end{equation}
\end{linenomath*}
where $t\ $is time and $\widehat{H^{\prime}}\ $ is the zonal Fourier transform of the transient MSE field $H^{\prime}$:
\begin{linenomath*}
\begin{equation}
\widehat{H^{\prime}}\left(\lambda,y,t\right)\overset{\mathrm{def}}{=}\frac{1}{\sqrt{2\pi}}\int_{0}^{L\left(y\right)}\exp\left(-\frac{2\pi\imath x}{\lambda}\right)H^{\prime}\left(x,y,t\right)dx,\label{eq:Fourier_transform}
\end{equation}
\end{linenomath*}
where $\imath\ $is the unit imaginary number and $L\left(y\right)\ $ is the length of the latitude circles of ordinate $y$ \textcolor{black}{in all cases but LC, for which $L\left(y\right)\ $ is the periodic domain's length}. From $\varphi_{H}$, one can calculate various aspects of the transient MSE zonal variability, including its spectral-mean wavelength (equation 2 of \citet{Beucler2018d}) and its total (i.e. wavenumber-integrated) zonal variance.
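The discrete analogue of the zonal power spectrum defined above can be computed with a real FFT. The following is a minimal sketch, not taken from the paper; the normalization convention and function names are our assumptions.

```python
import numpy as np

# Minimal sketch (our reconstruction, not the paper's code) of the zonal
# power spectrum phi_H = conj(FFT(H')) * FFT(H') on a periodic latitude
# circle of length L; wavelengths are lambda = L / k for wavenumber k.
def zonal_power_spectrum(H_prime, L):
    N = H_prime.shape[-1]
    H_hat = np.fft.rfft(H_prime, axis=-1) / N   # one-sided discrete coefficients
    phi = (H_hat.conj() * H_hat).real           # power at each wavenumber
    k = np.arange(phi.shape[-1])
    wavelengths = np.where(k > 0, L / np.maximum(k, 1), np.inf)
    return wavelengths, phi

# A single sinusoidal mode at wavenumber 3 puts all its variance there:
x = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)
H = np.sin(3.0 * x)
lam, phi = zonal_power_spectrum(H, L=2.0 * np.pi)
```

With this normalization, summing the one-sided spectrum (doubling the interior wavenumbers) recovers the total zonal variance of the field, consistent with interpreting the wavenumber-integral of the spectrum as the total variance.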
\subsection{Zonal Power Spectra}
Figure \ref{fig:MSE_spectrum}a shows $\varphi_{H}\ $for two LC experiments:
\begin{enumerate}
\item The control experiment CTRL (full lines): This experiment is initialized with a horizontally uniform sounding taken from small-domain RCE, but moist and dry regions of finite size ($\sim2,000\textnormal{km}$) and MSE anomalies ($\sim7\textnormal{kg\ m}^{-2} \times L_{v}$) spontaneously form after $\sim1$month despite homogeneous boundary conditions \footnote{\textcolor{black}{See Figure \ref{fig:Snapshots}g for a snapshot at t=1month and Figure 1d of \citet{Beucler2018d} for a Hovmoller plot of the full time-evolution}}. Although the temporal variations of the transient MSE field appear complicated in physical space, Figure \ref{fig:MSE_spectrum}a reveals a simpler picture in spectral space: As convection self-aggregates (i.e. progressing from solid purple to yellow lines \footnote{\textcolor{black}{See Figure 4b of \citet{Beucler2018d} for the time-evolution of the total MSE variance}}), MSE variance increases at wavelengths above $\lambda\sim100\textnormal{km}$ before equilibrating with a variance peak at $\lambda\sim2,000\textnormal{km}$, explaining why anomalies of this scale are most visible in Figure \ref{fig:Snapshots}g. Note that the y-axis of Figure \ref{fig:MSE_spectrum}a is logarithmic so the total variance in the aggregated state (solid yellow line) is dominated by the $\lambda\sim2,000\textnormal{km}\ $variance peak.
\item The UNI-RAD experiment (dotted lines): Horizontally homogenizing radiative heating greatly weakens aggregation, as evidenced by reduced MSE perturbations ($\sim2\textnormal{kg\ m}^{-2}\times L_{v}$). Unlike the CTRL experiment, MSE variance only grows at the longest wavelengths for the first $10 $ days before stabilizing around $1-2\textnormal{kg}^{2}\ \textnormal{m}^{-4}\times L_{v}^{2}$. Using UNI-RAD as our reference ``non-aggregated'' experiment\footnote{This experiment is very similar to the experiment in which both radiative heating and surface enthalpy fluxes are horizontally uniform (see Figures 1a and 1c of \citet{Beucler2018d})}, the effect of self-aggregation can then be quantified as the difference between the full and dotted yellow lines, and is only significant on scales larger than $\lambda\sim1,000\textnormal{km}$.
\end{enumerate}
Moving to the more realistic NG simulations in Figure \ref{fig:MSE_spectrum}b, the effect of self-aggregation is qualitatively similar to the LC case if measured by the difference between the full and dotted green lines. That is, removing the spatial variability of radiation (green dotted line) prevents the transient MSE field from developing variance at long wavelengths relative to the control (green solid line). In contrast, removing the spatial variability of surface enthalpy fluxes adds variance at long wavelengths (dashed lines) in both the LC and NG setups, because surface enthalpy fluxes damp developing MSE anomalies after the initial stages of aggregation, opposite to radiation \citep[see ][ for an extensive discussion on this topic]{Beucler2018d}.
We now turn to our main goal of comparing idealized simulations against observational data. The black line in Figure \ref{fig:MSE_spectrum} depicts the observational ERA spectrum, averaged from 10\textdegree S to 10\textdegree N and over 5 full years (January 1st 2010 -- December 31st 2014). The zonal MSE variance is slightly larger in ERA than in the NG CTRL case, except over the range of wavelengths where the NG spectrum peaks $\left(\lambda\sim500\textnormal{km}-2,500\textnormal{km}\right)$. To assess the robustness of our observed spectrum, we recalculate the zonal MSE spectrum over the same latitude range and time period using satellite data (CERES, light blue line). The CERES and ERA spectra agree very well at all wavelengths, although this agreement breaks down at short wavelengths ($\lambda<1,000\textnormal{km}$) if we do not average the data over one-day blocks (not shown). Hourly ERA data exhibit more spatial variability at short wavelengths, suggesting that CERES data and one-day time-averaging in the NG case may smooth out the MSE variability at short wavelengths.
Finally, \textcolor{black}{although the observational MSE spectrum flattens progressively at long wavelengths with no clear maximum}, spectra from idealized CPM exhibit a local maximum in the MSE variance at wavelengths of $\sim1,000-5,000\textnormal{km}$. \textcolor{black}{This peak is consistent with the strong $\sim5,000\textnormal{km}$ Madden-Julian Oscillation-like signal in NG CTRL and NG UNI-SEF \citep[][]{Khairoutdinov2018}, and the $\sim2,000\textnormal{km}$-long moist and dry regions of LC CTRL \citep[][]{Wing2016,Beucler2018d}. In observations, self-aggregation may not appear as a distinct peak in the MSE power spectrum, but simply as an enhancement of MSE variance over a broad range of scales. A peak might not form in the real Tropics for several reasons, including the larger amount of external forcing, lateral mixing, and amplifying diabatic feedbacks operating across a broader range of wavelengths. This motivates a quantitative framework to compare MSE tendencies across scales in models and observations: if the same processes enhance variance at large-scales, then self-aggregation processes likely play an important role in regulating observed MSE spectra.}
\begin{figure}[H]
\begin{centering}
\includegraphics[width=13cm]{GRL_Fig02}
\par\end{centering}
\caption{(a) Zonal power spectrum of transient MSE in the LC CTRL experiment (full lines) and the LC UNI-RAD experiment (dotted lines), time-averaged over different stages of the simulation. (b) Zonal power spectrum of transient MSE of all datasets, time-averaged over \textcolor{black}{40d}-80d for the LC experiments and over the entire time period for all other experiments. In both panels, spectra have been averaged over the $y-$dimension, and divided by $\lambda$ so that they integrate to the total variance in logarithmic $\lambda-$space. \textcolor{black}{To facilitate interpretation, spectra are divided by $L_{v}^{2} $ to yield units $\textnormal{kg}^2\ \textnormal{m}^{-4}$. Note that the observed spectra (light-blue and black lines) resemble the aggregated idealized spectra (yellow and green solid lines), but differ from the non-aggregated spectra (yellow and green dotted lines) by more than an order of magnitude at long wavelengths}. \label{fig:MSE_spectrum}}
\end{figure}
\subsection{Adapting a Spectral Budget for Model-Observations Intercomparison}
We now derive a formal spectral decomposition of the transient MSE budget terms to quantitatively assess the respective roles of separate processes in maintaining the spectrum at each wavelength.
This begins with the transient MSE budget, building on standard approaches, but modified so as to help fairly compare our simulations with observations, thus forging new ground. The transient MSE field $H^{\prime}\ $evolves in response to the net MSE flux at the atmospheric column's boundaries, with contributions from the net longwave flux $\dot{H}_{\mathrm{lw}}$, the net shortwave flux $\dot{H}_{\mathrm{sw}}$, the surface enthalpy fluxes $\dot{H}_{\mathrm{sf}}\ $and the advection of MSE through the column's boundaries $\dot{H}_{\mathrm{adv}}$. Separating the four MSE tendencies $\dot{H}_{i}\ $into their temporal mean $\overline{\dot{H}_{i}}\ $and their transient component $\dot{H}_{i}^{\prime}\ $ in the same spirit as equation \ref{eq:Transient_definition}, we can write the transient MSE budget as:
\begin{linenomath*}
\begin{equation}
\frac{\partial H^{\prime}}{\partial t}=\sum_{i=\mathrm{lw,sw,sf,adv}}\dot{H}_{i}^{\prime}.\label{eq:MSE_transient_budget}
\end{equation}
\end{linenomath*}
Following Section 2.2 of \citet{Beucler2018d}, we take the Fourier transform of Equation \ref{eq:MSE_transient_budget} and multiply it by the complex conjugate $\widehat{H^{\prime}}^{*}\ $of Equation \ref{eq:Fourier_transform} to derive a budget for the zonal spectrum $\varphi_{H}\ $of transient MSE:
\begin{linenomath*}
\begin{equation}
\frac{1}{2}\frac{\partial\varphi_{H}}{\partial t}=\sum_{i=\mathrm{lw,sw,sf,adv}}\Re\left(\widehat{H^{\prime}}^{*}\widehat{\dot{H}_{i}^{\prime}}\right),\label{eq:Spectral_MSE_budget}
\end{equation}
\end{linenomath*}
where $\Re\ $is the real part of a complex number. At each wavelength $\lambda$, MSE variance is created if a MSE tendency $\dot{H}_{i}\ $increases MSE where the transient MSE anomaly $H^{\prime}\ $is positive, corresponding to a positive co-spectrum $\Re\left(\widehat{H^{\prime}}^{*}\widehat{\dot{H}_{i}^{\prime}}\right)$.
This framework generalizes the MSE variance framework of \citet{Bretherton2005} and \citet{Wing2014}, a particular case of Equation \ref{eq:Spectral_MSE_budget} that can be derived by integrating equation \ref{eq:Spectral_MSE_budget} across wavelengths before dividing it by the wavelength-integral of $\varphi_{H}\ $\citep[see Appendix B of ][]{Beucler2018d}.
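The co-spectral source terms on the right-hand side of the spectral budget can be diagnosed numerically. The sketch below is a hedged illustration with assumed names; the idealized tendency Hdot' = H'/tau is our own choice, used only to make the sign convention concrete.

```python
import numpy as np

# Hedged sketch (assumed names, not the authors' code): the co-spectrum
# Re(conj(FFT(H')) * FFT(Hdot')) measures the rate at which a tendency
# Hdot' creates MSE variance at each zonal wavenumber.
def cospectrum(H_prime, Hdot_prime):
    N = H_prime.shape[-1]
    H_hat = np.fft.rfft(H_prime, axis=-1) / N
    Hdot_hat = np.fft.rfft(Hdot_prime, axis=-1) / N
    return (H_hat.conj() * Hdot_hat).real

# Illustrative tendency (our choice): Hdot' = H'/tau amplifies existing
# anomalies, so variance is injected at rate phi_H/tau at every scale
# where the field has power, and is zero elsewhere.
x = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
H = np.cos(2.0 * x) + 0.5 * np.sin(5.0 * x)
tau = 10.0
C = cospectrum(H, H / tau)
```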
To yield an equation for the rate at which MSE tendencies maintain the MSE spectrum at each wavelength (in units $\textnormal{s}^{-1}$), we average equation \ref{eq:Spectral_MSE_budget} in time over a time-period $t_{\overline{H}}$ and divide it by the time-mean MSE spectrum $\overline{\varphi_{H}}\ $ at each wavelength:
\begin{linenomath*}
\begin{equation}
\frac{1}{t_{\overline{H}}}\frac{\Delta\varphi_{H}}{\overline{\varphi_{H}}}=\sum_{i=\mathrm{lw,sw,sf,adv}}\frac{2\overline{\Re\left(\widehat{H^{\prime}}^{*}\widehat{\dot{H}_{i}^{\prime}}\right)}}{\overline{\varphi_{H}}},\label{eq:Time-averaged-budget}
\end{equation}
\end{linenomath*}
where $\Delta\varphi_{H}\ $is the MSE spectrum difference between the beginning and the end of the time-average. We refer to the terms on the right-hand side of equation \ref{eq:Time-averaged-budget} as components of the spectral MSE variance tendency or, for brevity, as "variance rates".
Since Equation \ref{eq:Time-averaged-budget} \textcolor{black}{does not explicitly depend on} the time-mean zonal structure of the MSE tendencies, we can make direct analogies between observations and zonally-symmetric RCE, which is a key theoretical result of this paper. \textcolor{black}{Note that transient MSE tendencies themselves may be nonlinearly modulated by stationary features of low-level winds, MSE, clouds, among others. Therefore, equation \ref{eq:Time-averaged-budget} is not a closed theory for MSE transient variability; instead it provides a convenient diagnostic tool to compare the amount of variance injected by each MSE tendency $ \dot{H}_{i}$ across different base states}. The left-hand side of equation \ref{eq:Time-averaged-budget} is small \footnote{\textcolor{black}{The left-hand side of equation \ref{eq:Time-averaged-budget} equals 0.1\% of the longwave variance rate when the ERA dataset
is time-averaged over the Jan1,2010--Dec31,2014 period, 1.1\% of the
longwave variance rate when NG CTRL is time-averaged over 1 year,
and 5.1\% of the longwave variance rate when LC CTRL is time-averaged
over 40-80d.}} when the initial and final spectra $\varphi_{H}\ $are similar, or when the time-average is taken over a long time-period $t_{\overline{H}}$.
\textcolor{black}{In both cases}, the four components of spectral MSE variance tendency on the right-hand side of equation \ref{eq:Time-averaged-budget} approximately balance. \textcolor{black}{Therefore, we can quantitatively compare the four rates of variance injection \textit{across scales} in models and observations.}
\subsection{Zonal Spectral Budget Intercomparison}
The spectral rates of variance injection\textcolor{black}{,} depicted in Figure \ref{fig:MSE-spectral-rates}\textcolor{black}{,} have similar signs and amplitude across models \textcolor{black}{(green and yellow lines)} and observations \textcolor{black}{(black and light-blue lines)}. Surprisingly, even the LC \textcolor{black}{variance} rates (yellow lines) have similar signs and amplitudes to the \textcolor{black}{variance} rates from planetary-domain experiments, and are simply shifted to shorter wavelengths, despite the smaller zonal extent and 64:1 aspect ratio of the LC configuration. Therefore, we see the LC configuration as an idealized, reduced-size model to study the interaction between convection and the large-scale circulation, which makes LC a promising yet relatively inexpensive framework to study the processes maintaining convective aggregation across climates \citep{Wing2018}.
\begin{figure}[H]
\begin{centering}
\includegraphics[width=13cm]{GRL_Fig03}
\par\end{centering}
\caption{Rate at which (a) longwave radiation (b) shortwave radiation (c) surface enthalpy fluxes and (d) MSE advection maintain the MSE power spectrum at each wavelength (in units $\textnormal{day}^{-1}$) and for all datasets. The UNI-SEF rates of variance injection (dashed lines) have been divided by a factor of 5 because the denominator of equation \ref{eq:Time-averaged-budget}, which is the time-averaged spectrum $\overline{\varphi_{H}}\ $, is smaller for non-aggregated simulations. \textcolor{black}{Note the similar signs and shapes of the observed variance rates (light-blue and black lines) and the variance rates from \textit{aggregated} idealized simulations (yellow and green solid lines)}. \label{fig:MSE-spectral-rates}}
\end{figure}
First, since longwave cooling to space is systematically lower in moist regions of high MSE \citep[e.g., ][]{Beucler2016b},
longwave radiation injects MSE variance at all wavelengths (Figure \ref{fig:MSE-spectral-rates}a), with rates as high at $1/\left(2\ \textnormal{weeks}\right)\ $at the planetary scale. Shortwave heating is larger in moister regions, mostly because of water vapor absorption \citep[e.g., sub-section 3.3 of ][]{Wing2017}, resulting in a shortwave injection of MSE variance at all wavelengths (Figure \ref{fig:MSE-spectral-rates}b).
Surface enthalpy fluxes remove variance in observations and for idealized cases where convection has aggregated, while they unrealistically inject variance for the sensitivity tests (UNI-RAD) in which aggregation is artificially \textcolor{black}{prevented} (Figure \ref{fig:MSE-spectral-rates}c) or in the early phases of convective self-aggregation \citep[see Appendix D of ][]{Beucler2018d}. The difference between the rate at which surface fluxes remove variance in the aggregated and non-aggregated cases can be explained by decomposing the surface enthalpy fluxes into a wind-driven component and a component driven by the near-surface enthalpy disequilibrium. Section 3.4 of \citet{Beucler2018d} shows that while the wind-driven component favors convective aggregation (variance injection) because convective gustiness is higher in convectively-active regions, surface enthalpy disequilibrium is largest in dry regions, damping MSE variance (variance removal). As convection aggregates, MSE variance increases at long wavelengths and so does the surface enthalpy disequilibrium. In the real-world atmosphere, additional factors \textcolor{black}{such as higher near-surface wind speeds \citep[][]{Maloney2010}, ocean heat transport \citep[][]{Benedict2011}, and dry air intrusions \citep[][]{Bretherton2015a} can increase the smoothing effect of the disequilibrium-driven variability of surface enthalpy fluxes, while further decreasing the aggregating effect of the wind-driven variability}. This leads to larger surface flux damping at scales where radiation injects the most variance (Figure \ref{fig:MSE-spectral-rates}c), and \textcolor{black}{might contribute to} the absence of a peak in the ERA MSE spectrum (Figure \ref{fig:MSE_spectrum}b). MSE advection removes variance at all wavelengths with a maximum removal rate at the planetary scale (Figure \ref{fig:MSE-spectral-rates}d).
\textcolor{black}{Since total advection is calculated as a residual of equation \ref{eq:Time-averaged-budget}, the fine variability of its variance rate may not be resolved, especially in ERA which does not close the MSE budget; explicitly calculating the horizontal and vertical components of MSE advection from three-dimensional data will be needed to clarify its scale-selectivity in observations and models.}
\section{Conclusion\label{sec:Conclusion}}
The multi-scale patterns of convective aggregation are directly connected to the hydrologic cycle in the Tropics \citep[e.g., ][]{Kiranmayi2011}. While \textcolor{black}{convection-permitting} models have provided insight into the physical processes controlling convective aggregation, it has been hard to meaningfully \textcolor{black}{compare} idealized simulations against observations. We have addressed this issue by applying a spectral technique that reveals scale-selective aggregation processes in meteorological reanalyses, satellite retrievals, and idealized \textcolor{black}{convection-permitting} simulations of varying complexity.
The budget for the transient MSE spectrum exhibits scale-selective tendencies that hold across models and observations: longwave radiation injects variance at the longest wavelengths, shortwave radiation injects variance at long wavelengths, MSE advection removes variance across scales\textcolor{black}{, and} surface enthalpy fluxes mostly remove variance between $\lambda\approx1,000\textnormal{km}\ $and $\lambda\approx10,000\textnormal{km}$. We find a stronger damping effect of surface enthalpy fluxes in ERA reanalysis data relative to simulations that neglect ocean interaction and horizontal sea surface gradients. This finding is consistent with recent RCE simulations that have made surface flux feedbacks on aggregation more realistic by adding a meridional surface temperature gradient \citep[e.g., ][]{Bretherton2015a} or increasing surface temperature variability by adding a slab ocean \citep[e.g., ][]{Coppin2017,Hohenegger2016} or soil \citep[e.g., ][]{Hohenegger2018}, resulting in a damping of self-aggregation patterns.
Removing the interaction between radiation and water vapor in the simulations prevents convective self-aggregation, resulting in a loss of MSE variance at long wavelengths ($\lambda>1,000\textnormal{km}$), and corresponding disagreement with the observed MSE variance. This adds to the growing body of evidence that radiatively-driven self-aggregation is key to generating realistic \textcolor{black}{moisture variability from homogeneous boundary conditions} \citep[e.g., ][]{Arnold2015}.
Undoubtedly, aspects of the causality are still murky since vertically-resolved, lower-tropospheric specific humidity, whose variance dominates the column MSE variance \textcolor{black}{\citep[][]{Holloway2009}}, may not directly respond to the thermodynamical constraints governing column MSE. For instance, is the longwave variance production peak too high for LC in Figure \ref{fig:MSE-spectral-rates}a because cloud-radiation processes are represented incorrectly, or because vertical advection of water vapor amplifies variance too much at a specific length scale?
\textcolor{black}{The} framework introduced here generalizes to three-dimensional tracer variance budgets, and could be used to investigate the processes injecting zonal variance in the lower-tropospheric water vapor spectrum at long wavelengths.
Ultimately, we hope the tool summarized here can be deployed across the emerging hierarchy of global cloud resolving models \citep{Satoh2019} to help clarify their intrinsic thermodynamics. \textcolor{black}{Our spectral framework can be generalized to spatially-limited domains by choosing a transform insensitive to non-periodic boundaries, such as the discrete cosine transform \citep[e.g., ][]{Denis2002,Selz2018}. It can also be generalized to arbitrary subsets of the domain by choosing a transform retaining localization information, such as the wavelet transform \citep[e.g., ][]{Torrence1998}}. While spatio-temporal spectra are familiar to tropical dynamicists \citep[e.g., ][]{Wheeler1999,Yasunaga2019}, formal spectral decomposition of underlying process budgets are not yet in widespread use. In this context, traditional diagnostic tools may fail to compactly analyze the underlying causes of multi-scale discrepancies across models. By quantifying the preferential scales of zonal thermodynamic variability, our spectral framework allows comparison between models and observational datasets across configurations, resolutions, and scales.
\acknowledgments
Tom Beucler is supported by NSF grants AGS-1520683 and OAC-1835769, Tristan Abbott and Timothy Cronin are supported by NSF grants AGS-1740533 and AGS-1623218, and Mike Pritchard is supported by NSF grant AGS-1734164 and DOE grant DE-SC0012152. We thank Kerry Emanuel, Paul O'Gorman, Zhiming Kuang and Chris Bretherton
for review and guidance on an early version of this manuscript, \textcolor{black}{and two anonymous reviewers who helped improve the quality of the present manuscript}. The source code and data used to produce the figures can be found
at \url{https://github.com/tbeucler/2019\_Convective\_SA\_MSE\_Transients}. The ERA reanalysis data was downloaded from the Copernicus Data Store,
the CERES data was downloaded from the CERES NASA website, the NG
data is stored on the Cheyenne computing cluster provided by NCAR,
and the LC data is stored on the Engaging computing cluster provided
by MIT.
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 6,128 |
RMXCRW, pronounced RemixCrew, is an urban DJ/producer team centred on DJ Chuckie and DJ Naffie.
Both DJ Chuckie and DJ Naffie were born in Paramaribo but live in the Netherlands, where RMXCRW was also founded.
Biography
In 2003 the duo decided to support R&B and hip-hop talents by remixing old hits.
Their first success came with Turn me on, featuring R&B singer Ebon-E and rapper Ambush, which also charted highly internationally. Their second single Fresh also reached the charts at home and abroad.
Their first album was Da Soundtrack, which features productions by various urban producers and was released with the collaboration of several artists, including Ambush, Mega D, I.V.A. and Ebon-E.
In 2008 a completely new remix of the album Da Soundtrack was released, in collaboration with, among others, DJ 4tezian, Ebon-e, Soundflow and Mega D. The album became a hit mainly thanks to the track Rocking the night riderz by DJ 4tezian.
Discography
Albums
|- align=center
|align=left|da Soundtrack||2004||||||||
|- align=center
|align=left|da Soundtrack Remix||2008||||||||
|}
Singles
|- align=center
|align=left|Turn me on||||5-7-2003||17||7||with Ebon-E Plus and Ambush
|- align=center
|align=left|Fresh||||7-2-2004||36||2||with Ambush and I.V.A.
|- align=center
|align=left|Je doet!||||15-1-2005||35||3||with La Rouge and I.V.A.
|- align=center
|align=left|Als je weet wat je doet||||26-3-2005||tip||||with La Rouge
|- align=center
|align=left|Reggaeton style||||30-7-2005||tip||||ft. Immorales
|- align=center
|align=left|I'm sorry||||4-2-2006||22||6||vs. The Partysquad
|- align=center
|align=left|Maxine||||19-10-2006||tip||||--
|}
Dutch band | {
Nederlandse band | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 576 |
Joseph Joubert (1754-1824), French moralist and essayist;
Joseph Joubert (1878-1963), French organist. | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 5,703 |
The Volunteer Combatant's Cross 1914-1918 (French: croix du combattant volontaire 1914-1918) is a French decoration that distinguished those who volunteered to serve at the front in a combat unit during the First World War.
Design
A four-armed bronze cross, 36 mm module.
On the obverse: a round central medallion, with the legend "REPUBLIQUE FRANCAISE" surrounding
the effigy of a helmeted poilu, resting on a sword erected vertically over the arms
of the cross, which are charged with laurel and oak leaves in relief.
On the reverse: inside the central medallion, a laurel branch is surrounded by the inscription "COMBATTANT VOLONTAIRE 1914-1918".
The arms of the cross are charged with laurel and oak leaves in relief.
A special model was produced for volunteer combatants of the war of 1870-1871, with the dates "1870-1871" replacing "1914-1918" on the reverse.
Award
The conditions required to obtain the cross were defined by the decree of 28 November 1935.
The candidates' claims were examined by a commission composed, from 1951, of twelve members distributed as follows:
Ministry of National Defence: the chairman;
Secretariat of State for War: two members;
Secretariat of State for the Navy: two members;
Secretariat of State for Air: two members;
National Office for Disabled Veterans and Combatants: two members;
Association of volunteers: three members.
The decree of 10 April 1936 extended its award to the rare volunteer combatants who had survived the Franco-Prussian War of 1870.
The Volunteer Combatant's Cross of the 1914-1918 war is considered a war title when applications for a rank in the Legion of Honour, the Military Medal or the National Order of Merit are examined.
External links
A very comprehensive page on French civil and military decorations
Volunteer Combatant's Cross 1914-1918 | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 2,886 |
The 2014–15 Liga III season is the 59th season of the Liga III, the third tier of the Romanian football league system. The season began on 30 August.
There is a new system, with five series of 13 or 14 teams that will play a regular season as a round-robin tournament. At the end of the regular season, the first team in each series will be promoted to Liga II. The last two teams in each series with 14 teams and the last team in the series with 13 teams will be relegated to Liga IV. Among the 12th-placed teams, another three are relegated. To determine these teams, separate standings are computed, using only the games played against clubs ranked 1st through 11th.
Teams
At the end of the 2013–14 season, FCM Dorohoi from Seria I, FC Voluntari from Seria II, CS Balotești from Seria III, FC Caransebeș from Seria IV, Șoimii Pâncota from Seria V and Fortuna Poiana Câmpina from Seria VI were promoted to Liga II. Sixteen teams were relegated to Liga IV: CSM Moinești, Sporting Suceava, FCM Bacău (Seria I), Conpet Cireșu, Progresul Cernica and Rapid Fetești (Seria II), FC Balș (Seria III), Munictorul, FCM Reșița, Jiul Rovinari, Minerul Mătăsari, FC Avrig (Seria IV), FC Maramureș (Seria V), CSM Câmpina, Conpet Ploiești and Civitas Făgăraș (Seria VI). The winners of the 21 play-off matches of the 2013–14 Liga IV series were promoted to Liga III.
League tables
Seria I
Seria II
Seria III
Seria IV
Seria V
References
2014
3
Romania | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 1,347 |
from __future__ import absolute_import

from atom.views import DeleteMessageMixin
from atom.ext.crispy_forms.forms import BaseTableFormSet
from braces.views import (FormValidMessageMixin, LoginRequiredMixin,
                          SelectRelatedMixin, UserFormKwargsMixin)
from dal import autocomplete
from django.core.urlresolvers import reverse_lazy
from django.utils.translation import ugettext_lazy as _
from django.views.generic import DeleteView, DetailView, UpdateView
from django_filters.views import FilterView
from extra_views import CreateWithInlinesView, InlineFormSet, NamedFormsetsMixin

from foundation.letters.models import Letter
from foundation.offices.emails.models import Email
from foundation.offices.emails.forms import EmailForm

from .filters import OfficeFilter
from .forms import OfficeForm
from .models import Office


class OfficeListView(SelectRelatedMixin, FilterView):
    filterset_class = OfficeFilter
    model = Office
    select_related = ['jst', ]
    paginate_by = 25

    def get_queryset(self, *args, **kwargs):
        # Annotate each office with the number of related cases.
        qs = super(OfficeListView, self).get_queryset(*args, **kwargs)
        return qs.with_case_count()


class OfficeDetailView(SelectRelatedMixin, DetailView):
    model = Office
    select_related = ['jst', ]

    def get_context_data(self, **kwargs):
        context = super(OfficeDetailView, self).get_context_data(**kwargs)
        # Last 20 milestone letters from cases handled by this office.
        context['inbox'] = (Letter.objects.filter(case__office=self.object).
                            for_milestone().
                            order_by('-created').all()[:20])
        context['email_set'] = Email.objects.filter(office=self.object).all()
        return context


class EmailInline(InlineFormSet):
    model = Email
    form_class = EmailForm
    formset_class = BaseTableFormSet
    fields = ['email', 'default']

    def get_extra_form_kwargs(self):
        return {'user': self.request.user}


class OfficeCreateView(LoginRequiredMixin, NamedFormsetsMixin, UserFormKwargsMixin,
                       CreateWithInlinesView):
    model = Office
    form_class = OfficeForm
    inlines = [EmailInline]
    inlines_names = ['emails']


class OfficeUpdateView(LoginRequiredMixin, UserFormKwargsMixin, FormValidMessageMixin,
                       UpdateView):
    model = Office
    form_class = OfficeForm

    def get_form_valid_message(self):
        return _("{0} updated!").format(self.object)


class OfficeDeleteView(LoginRequiredMixin, DeleteMessageMixin, DeleteView):
    model = Office
    success_url = reverse_lazy('offices:list')

    def get_success_message(self):
        return _("{0} deleted!").format(self.object)


class OfficeAutocomplete(autocomplete.Select2QuerySetView):
    def get_queryset(self):
        qs = Office.objects.all()
        if self.q:
            # Case-insensitive prefix match on the office name.
            qs = qs.filter(name__istartswith=self.q)
        return qs
| {
"redpajama_set_name": "RedPajamaGithub"
} | 4,350 |
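The views in the chunk above only make sense once routed. Since the file uses the pre-2.0 `django.core.urlresolvers` import, a matching URLconf would use old-style `url()` patterns. Below is a hypothetical sketch, not from the original repository: the route names and regexes are assumptions, except that `OfficeDeleteView.success_url` reverses `offices:list`, so an `offices` namespace with a `list` route would have to exist.

```python
# urls.py (hypothetical) -- wires up the Office views sketched above.
# Assumes this module is included with app_name/namespace "offices".
from django.conf.urls import url

from . import views

urlpatterns = [
    url(r'^$', views.OfficeListView.as_view(), name='list'),
    url(r'^create/$', views.OfficeCreateView.as_view(), name='create'),
    # Endpoint queried by django-autocomplete-light's Select2 widget.
    url(r'^autocomplete/$', views.OfficeAutocomplete.as_view(),
        name='autocomplete'),
    url(r'^(?P<pk>\d+)/$', views.OfficeDetailView.as_view(), name='details'),
    url(r'^(?P<pk>\d+)/update/$', views.OfficeUpdateView.as_view(),
        name='update'),
    url(r'^(?P<pk>\d+)/delete/$', views.OfficeDeleteView.as_view(),
        name='delete'),
]
```

This is a configuration fragment only; in a real project it would be included from the root URLconf, e.g. `url(r'^offices/', include('foundation.offices.urls', namespace='offices'))`.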
Tag: Collusion
#Trump Taking Down the #Cabal and Securing our #Children: It's Happening. #SB2 decodes #QAnon
Mueller report out. Check. Rosenstein, who was involved in FISAGATE, kept around to validate the report exonerating Trump. Check. Trump vindicated. Check. The Left and its Fake News in disarray. Check. Next? We get rid of those we don't need anymore:
Q1131 "Drop after testimony." R U learning yet? Q
and we go after the traitors:
Q2914 "Time to start looking at the other side where real crimes were committed." – POTUS
Now that we know what's coming, we can adjust our Pop Corn budget and talk about what's going on behind the scenes.
Let's take a close look at the Maestro's tweets on 3/22/2019. You caught it was Skull and Bones 322 day. He gave us very important information from the battlefield. As usual, we first identify the pieces of the puzzle, then, we assemble them.
[image] With the numbers 3.1 and 14, the first tweet of the day hints at Pi=3.14. This is confirmed by the capital letters adding up to 358, value for THE MEASUREMENT OF THE GREAT PYRAMID. We know the great pyramid of Giza proves Egyptians knew the number Pi since the ratio of the pyramid's perimeter to its height is 2Pi. The question is now what is the Maestro saying with this number.
[image] Since Pi is connected to the circle and the Pyramid of Giza to the past (and the present), the riddle becomes Circle+Time. Can you solve? Yes: clock. Here is the confirmation: we are told in Q2467 to follow the watch and the time on the watch is between 3:14 (Pi) and 3:15, which Q rounded up to 3:15 in Q3093. Coincidence? This means we should start considering circling the Q board with the Maestro's tweet timestamps and go around the clock by adding 2400 to the first drop we hit. For example, if the timestamp is 6:45 AM, we should consider Q645 but also Q3045. This makes sense because otherwise we would only hit eligible drops between Q1 and Q2400 and never access the newest drops. Pretty cool right? Applying this to the tweet, we hit Q552 where the Maestro is warning about the coming storm and we also hit Q2952 where Maria Bartiromo is interviewing Devin Nunes on Sunday Morning Futures. Superb, Bartiromo added the future to the tenses Giza brought earlier. What I really like about the Maestro's riddles is he rewards your effort as you progress. Look at how he confirms we are on the right track with the next tweet:
[image] It's his interview with… Maria Bartiromo! Coincidence? Hahaha! Thank you Mr. President! Now listen to his first answer in the interview link. Within 22 seconds he uses the verb "happen" 4 times! Later, at 2:04, he uses it twice in 15 seconds. You got it, he's referring to the BIG BIG BIG HAPPENINGS in Q2903 and the domino effect I told you about in my previous post.
Now what could be the BIGGEST happening? What do you think? Why are we here? Yes: we are here to save the children. I said it in the past, I'll say it again: we are here to save the children. Politics is a way through which we achieve this. But our primary objective is to save the children. This is what really drives Trump. It's right here, in Q153. Read very carefully: [image]
As you can see, the images were very disturbing but there is more and I have to warn you. We, the Q community, all suspected the reality of the information I am about to share with you here. It is violent and extremely painful and this time, it comes from the Maestro himself, with strong links to validate the decoding method. It's coded in the same Bartiromo tweet, pointing through its timestamp to Q748 and Q3148 where we are still in child abuse and human trafficking territory with Rachel Chandler. The tweet is abruptly cut as POTUS says about the Fake News: "93% negative news and I think it's worse than th..". Who in the team would dare cut POTUS in the middle of a sentence? Is someone getting fired? You got it: done on purpose. That's where the message is. The Maestro wants us to focus on the number 93. [image] If you notice 93=3X(14+17), you see Pi=3.14 and Q=17 appear and the hidden information is therefore the number 3. The length of the video is 7:51=471 seconds and this is the value for THEY EAT CHILDREN + THEY EAT CHILDREN + THEY EAT CHILDREN. As you can see, the number 3 is confirmed. But since this is very disturbing, the Maestro gave another confirmation. If you read the duration of the video 7:51 backwards, it becomes 15:7. Now what is the value for THEY EAT CHILDREN? Yes: 157. There you have it. Now that this horrific information is strongly confirmed, the Maestro is revealing to us how he punishes them when they try to escape using their money and power: the video length 471 is also the value for THEY FAKE THEIR DEATHS AND THEN I KILL THEM ANYWAYS. Let that sink in and happy hunting! You thought helicopters randomly crashing near billionaires' mansions had no meaning?..
Q176 Everything has meaning – EVERYTHING. Q
Do you see this improper S in anyways? This is code for this is done without any written trace… I can hear from here demon possessed mono neuronal creatures screaming: "SB2, how about the law and due process? They should be judged and have a fair trial!" Answer: "they were already dead, what are you talking about?"
For those who still need more confirmation, what do you think the Maestro is really saying here (YouTube) when he repeats 4 times "we killed them all" after saying about the "plant" in Lima: "that's why you check and re-check (..) every little thing (..)". You understood the plant referred to the children right? Q gave you the hint here:
Q747 What does a 'Flower' represent? What does 'Deflower' represent? Q
Since Q revealed in Q3152 that Prince Andrew is deeply connected, it's relevant to notice that 471 is also the value for PRINCE WILLIAM HAS NO CLUE WHAT I GOT COMING TO HIM. And the final conclusion is here: THEY ABSOLUTELY NEED TO BE DELETED FROM EXISTENCE=471.
As you can see, gloves are off and they are now being clearly exposed. [image] It's confirmed in the next tweet with capital letters adding up to 218, value for OPENING THE CURTAINS and PERSON OF INTEREST. This person is Michel Chelbin, connected to Chandler and pulled in by the timestamp through Q3157. Notice that, had we not had the +2400 clue, the whole decode would not have made sense because we would have missed all the references to child abuse and human trafficking.
Q757 is also brought in and warns us about another possible false flag. As you know, that's what they do when they are threatened:
Q3113 What occurred the last time a countdown was presented? [FF]
By the way, you caught the NZ false flag was a cover to exchange intel in plain sight about Q through the manifesto right? The whole manifesto is coded. Everything you need to know about this false flag is in the Q board. Everything.
The next tweet is the most impressive one. The Maestro posted a 77-second video about the Golan Heights. Watch very carefully link. Did you catch it? He says about recognizing Israel's sovereignty over the Golan Heights: "it's like Jerusalem". What does he really mean? This is a good opportunity to answer those claiming Trump is a Zionist. First, we have to remember Hussein the Contortionist hated Netanyahu who totally returned the favor. The enemy of your enemy?… [image] Second, on May 20, 2017, POTUS signed a $110 billion arms deal with Saudi Arabia. This was a way to weaken the Deep State's positions in Iran and balance sovereign military powers in the region while decimating the rebellious ones controlled by the Cabal. This equidistant stance towards the powers in the region is confirmed in the Golan Heights tweet he made the day before. Capital letters add up to 149, value for NO COLLUSION. This means when he sells weapons to SA, he does not care what Israel thinks and when he gives the Golan Heights to Israel, he does not care what SA thinks: NO COLLUSION. This tweet points to Q1150 where we read 'The Plan'. The video tweet points to Q813 where we read "They want you weak". Do you see it? The Maestro's plan in the Middle East is to empower militarily and economically the sovereign powers in the region to undo the Cabal's division and disorder strategy: when you have something to lose, you cherish peace and when everybody is powerful, nobody wants to fight. And who sells the weapons/training to SA and the technology/know-how to Israel to extract gas and oil from the Golan Heights? You got it: JOBS JOBS JOBS! For all this to work, rebellious forces like ISIS have to be neutralized and Assad (if he's smart) will be glad to stay in power in exchange for giving up the Golan Heights he never controlled anyway. Now you know why the Maestro showed the ISIS map in Lima and why Q1150 says: "think timing". But then, Q challenges the autists: "where are the autists?".
Do you see it? How many days between the arms deal with SA and the Proclamation about the Golan Heights? Yes: 675 days. [image] How many days between the start of the Mueller probe investigation and its end? Yes: 675 days! Coincidence? Our foreign policy in the Middle East and our domestic affairs mirror each other! Welcome to the twilight zone! What is the Maestro saying here? What about his message relayed by Kudlow in the next tweet? How is it all connected to the children?
I'm running out of space. Next post.
An Anon: How do you know the future?
Q2606 Control. Q
Intro video to Q
Author enki74Posted on March 27, 2019 March 30, 2019 Categories News of the DayTags children, Collusion, fisagate, Mueller, POTUS, qanon, Russian, serialbrain2.sb2.trump, timing, tweetLeave a comment on #Trump Taking Down the #Cabal and Securing our #Children: It's Happening. #SB2 decodes #QAnon
The Great Awakening. Q….. Q !!mG7VJxZNCI 08/21/18 (Tue) 01:09:08 No.172
Q posted today on Patriots Fight: a small post with a "Big Bang". I have included the text from the Reddit link that Q posted.
PatriotsFight QPost Small Post Big Bang
!!mG7VJxZNCI 08/21/18 (Tue) 01:09:08 No.172
https://www.reddit.com/r/greatawakening/comments/98yduw/connnecting_some_dots/
The Great Awakening.
I am passing this on from someone who's connecting some dots with input from sources he cannot reveal.
Here's what it looks like when all the pieces are sewn together
It smells like conspiracy and treason. Everyone needs to read this. Slowly, and patiently, because it's very important……
From 2001 to 2005 there was an ongoing investigation into the Clinton Foundation.
A Grand Jury had been impaneled.
Governments from around the world had donated to the "Charity".
Yet, from 2001 to 2003 none of those "Donations" to the Clinton Foundation were declared. Now you would think that an honest investigator would be able to figure this out.
Look who took over this investigation in 2005: none other than James Comey. Coincidence? Guess who was transferred into the Internal Revenue Service to run the Tax Exemption Branch of the IRS? None other than Lois "Be on The Look Out" (BOLO) Lerner. Isn't that interesting?
But this is all just a series of strange coincidences, right?
Guess who ran the Tax Division inside the Department of Injustice from 2001 to 2005?
None other than the Assistant Attorney General of the United States,
Rod Rosenstein.
Guess who was the Director of the Federal Bureau of Investigation during this time frame?
Another coincidence (just an anomaly in statistics and chances), but it was Robert Mueller.
What do all four casting characters have in common?
They all were briefed and/or were front-line investigators into the Clinton Foundation Investigation.
Another coincidence, right?
Fast forward to 2009….
James Comey leaves the Justice Department to go and cash-in at Lockheed Martin.
Hillary Clinton is running the State Department, official government business, on her own personal email server.
The Uranium One "issue" comes to the attention of the Hillary.
Like all good public servants do, supposedly looking out for America's best interest, she decides to support the decision and approve the sale of 20% of US Uranium to none other than the Russians.
Now you would think that this is a fairly straight up deal, except it wasn't, America got absolutely nothing out of it.
However, prior to the sale's approval, none other than Bill Clinton goes to Moscow, gets paid 500K for a one-hour speech, then meets with Vladimir Putin at his home for a few hours.
Ok, no big deal right? Well, not so fast, the FBI had a mole inside the money laundering and bribery scheme.
Robert Mueller was the FBI Director during this time frame? Yep, He even delivered a Uranium Sample to Moscow in 2009.
Who was handling that case within the Justice Department out of the US Attorney's Office in Maryland?
None other than, Rod Rosenstein. And what happened to the informant?
The Department of Justice placed a GAG order on him and threatened to lock him up if he spoke out about it.
How does 20% of the most strategic asset of the United States of America end up in Russian hands when the FBI has an informant, a mole providing inside information to the FBI on the criminal enterprise?
Very soon after, the sale was approved! ~145 million dollars in "donations" made their way into the Clinton Foundation from entities directly connected to the Uranium One deal.
Guess who was still at the Internal Revenue Service working the Charitable Division? None other than Lois Lerner.
Ok, that's all just another series of coincidences, nothing to see here, right?
Let's fast forward to 2015.
Due to a series of tragic events in Benghazi and after the 9 "investigations" by the House, Senate and State Department, Trey Gowdy, who was running the 10th investigation as Chairman of the Select Committee on Benghazi, discovers that the Hillary ran the State Department on an unclassified, unauthorized, outlaw personal email server. He also discovered that none of those emails had been turned over when she departed her "Public Service" as Secretary of State, which was required by law. He also discovered that there was Top Secret information contained within her personally archived email.
Sparing you the State Department's cover-up, the nostrums they floated, the delay tactics that were employed and the outright lies that were spewed forth from the necks of the Kerry State Department, we shall leave it with this…… they did everything humanly possible to cover for Hillary.
Now this is amazing, guess who became FBI Director in 2013? None other than James Comey, who secured 17 no-bid contracts for his employer (Lockheed Martin) with the State Department and was rewarded with a six million dollar thank-you present when he departed his employer. Amazing how all those no-bids just went right through at State, huh?
Now he is the FBI Director in charge of the "Clinton Email Investigation" after of course his FBI Investigates the Lois Lerner "Matter" at the Internal Revenue Service and he exonerates her. Nope…. couldn't find any crimes there.
In April 2016, James Comey drafts an exoneration letter of Hillary Rodham Clinton; meanwhile the DOJ is handing out immunity deals like candy. They didn't even convene a Grand Jury!
Like a lightning bolt of statistical impossibility, like a miracle from God himself, like the true "Gangsta" Comey is, James steps out into the cameras of an awaiting press conference on July the 8th of 2016, and exonerates the Hillary from any wrongdoing.
Do you see the pattern?
It goes on and on: Rosenstein becomes Asst. Attorney General, Comey gets fired based upon a letter by Rosenstein, Comey leaks government information to the press, Mueller is assigned to the Russian Investigation sham by Rosenstein to provide cover for decades of malfeasance within the FBI and DOJ, and the story continues.
FISA Abuse, political espionage….. pick a crime, any crime, chances are…… this group and a few others did it:
All the same players.
All compromised and conflicted.
All working fervently to NOT go to jail themselves
All connected in one way or another to the Clinton's.
They are like battery acid; they corrode and corrupt everything they touch. How many lives have these two destroyed?
As of this writing, the Clinton Foundation, in its 20+ years of operation of being the largest International Charity Fraud in the history of mankind, has never been audited by the Internal Revenue Service.
Let us not forget that Comey's brother works for DLA Piper, the law firm that does the Clinton Foundation's taxes.
The person that is the common denominator to all the crimes above and still doing her evil escape legal maneuvers at the top of the 3 Letter USA Agencies?
Yep, that would be Hillary R. Clinton.
Now who is LISA BARSOOMIAN? Let's learn a little about Mrs. Lisa H. Barsoomian's background.
Lisa H. Barsoomian, an Attorney that graduated from Georgetown Law, is a protégé of James Comey and Robert Mueller.
Barsoomian, with her boss R. Craig Lawrence, represented Bill Clinton in 1998.
Lawrence also represented:
Robert Mueller three times;
James Comey five times;
Barack Obama 45 times;
Kathleen Sebelius 56 times;
Bill Clinton 40 times; and
Hillary Clinton 17 times.
Between 1998 and 2017, Barsoomian herself represented the FBI at least five times.
You may be saying to yourself, OK, who cares? Who cares about the work history of this Barsoomian woman?
Apparently, someone does, because someone out there cares so much that they've "purged" all Barsoomian court documents for her Clinton representation in Hamburg vs. Clinton in 1998 and its appeal in 1999 from the DC District and Appeals Court dockets (?). Someone out there cares so much that even the internet has been "purged" of all information pertaining to Barsoomian.
Historically, this indicates that the individual is a protected CIA operative. Additionally, Lisa Barsoomian has specialized in opposing Freedom of Information Act requests on behalf of the intelligence community. Although Barsoomian has been involved in hundreds of cases representing the DC Office of the US Attorney, her email address is Lisa Barsoomian at NIH.gov. The NIH stands for National Institutes of Health. This is a tactic routinely used by the CIA to protect an operative by using another government organization to shield their activities.
It's a cover, so big deal right? What does one more attorney with ties to the US intelligence community really matter?
It deals with Trump and his recent tariffs on Chinese steel and aluminum imports, the border wall, DACA, everything coming out of California, the Uni-party unrelenting opposition to President Trump, the Clapper leaks, the Comey leaks, Attorney General Jeff Sessions recusal and subsequent 14 month nap with occasional forays into the marijuana legalization mix …. and last but not least Mueller's never-ending investigation into collusion between the Trump team and-the Russians.
Why does Barsoomian, CIA operative, merit any mention?
She is Assistant Attorney General Rod Rosenstein's WIFE!
Author enki74Posted on August 21, 2018 Categories 8chan, FlashMob, FREE night at TinyHouse BnB, Hacking your Social Network, News of the Day, POTUS, President Trump, Q-Anon, QProofs, SmartMob, Soctt Isbell, Steemit, Tiny Homes, Tiny Texas Houses, Trump2016, Trumpified, TrumpifiedRadioTags !!mG7VJxZNCI, (Tue), 01:09:08, 08/21/18, 172, 8chan, attorney general, barack obama, bengazi, Bill Clinton, Chinese, cia, clapper, Clinton, Clinton Foundation, Collusion, comey, DOJ, FBI, gowdy, HCF, Hillary, HRC>, James, jeff sessions, kathleen sebelius, links, Lisa barsoomian, lockheed martin, muellerFISA, no, Piper, President Trump, Q, qanon, robert mueller, rosenstein, russians, steel, tarrifs, the great awakening, thegreatawakening, trey, uranium oneLeave a comment on The Great Awakening. Q….. Q !!mG7VJxZNCI 08/21/18 (Tue) 01:09:08 No.172
QAnon of 8chan says "Real Russian Collusion" Hillary Clinton
Hillary Clinton at a "Get Out the Vote" rally in Concord, N.H., February 6, 2016.
From the perspective of the voters, Clinton's twin email travails — the hack of the DNC and the investigation into her server — were two faces of a single problem. Call it "Clinton, Inc."
more of the story….
https://www.nationalreview.com/2018/03/russia-collusion-real-story-hillary-clinton-dnc-fbi-media/
My Qanon Analysis
Author enki74Posted on March 29, 2018 March 29, 2018 Categories infowars, POTUS, President Trump, Q-AnonTags !xowAT4Z3VQ, 8chan, Clinton, Collusion, Fake, find, greatawakening, Hillary, how to, MAGA, news, Obama, of, post, qanon, REAL., Russia, Russian, says, star, trekLeave a comment on QAnon of 8chan says "Real Russian Collusion" Hillary Clinton | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 9,686 |
Press and presidents are usually adversaries
Florida Times-Union
Of all the biases known to us, the most common and least understood is recency bias.
We give recent events far more importance than they deserve.
This comes to mind with the relationship of President Donald Trump and the press. If one listens to either Fox News or MSNBC, one would think that the relationship is worse than ever — for different reasons, of course.
But ask historian Harold Holzer and he will quickly say that Trump's press relations are not the worst of any president. He describes it in a new book, "The Presidents vs. The Press." His original title was going to be "and" the press, but a little research showed that an adversarial relationship was the norm.
Even presidents who were writers themselves like John Kennedy, had public spats with the press.
In a two-hour interview on CSPAN, Holzer said, "As much as I thought I would confirm my own suspicions … it's part of a long tradition."
Presidents have often vilified or attacked the press, many have bypassed the press, many have sought to limit negative news and many have courted the press that criticized them.
So Trump sat for multiple interviews with author Bob Woodward even though he had to suspect a critical book would result from them.
The press surely helped bring down John Adams and Richard Nixon. It helped elevate Abraham Lincoln and Barack Obama, to name two, by emphasizing their inspiring personal stories. Yet the press has done little to inhibit Donald Trump, either because of his own mastery of nontraditional media, the establishment media's declining influence, a public inured to presidential misbehavior, or all of the above.
Newspapers at the time of the founding were often supported by political factions. The notion of objectivity was not in the air. Opponents could get nasty.
In fact, one reason President George Washington did not run for a third term was he was tired of criticism in the press. For instance, a cartoon depicted Washington with his head on a guillotine as if he were disgraced royalty.
Cartoons, which today infuriate readers, have a long history of brutal caricatures of presidents, depicting Andrew Jackson as a "violent brawler," Abraham Lincoln as a "satanic despot," Teddy Roosevelt as a "bully," Woodrow Wilson as a "foggy professor" and Lyndon Johnson as a "boor displaying a surgical scar resembling an outline map of Vietnam."
The second president, John Adams, pushed for the Alien and Sedition Acts that made it a federal offense to ridicule the government or its leaders.
Several presidents said almost identical things about fake news or false news as Trump does. The press and any president have different roles. Presidents looking for support are often displeased.
What's different about Trump is the technology of today.
"We just have more access to the complaints than ever," Holzer said. Trump uses Twitter continuously.
Thomas Jefferson, who famously praised newspapers, became disillusioned late in his presidency.
"Nothing can now be believed which is seen in a newspaper. Truth itself becomes suspicious by being put into that polluted vehicle," Jefferson wrote. "I will add that the man who never looks into a newspaper is better informed than he who reads them; inasmuch as he who knows nothing is nearer to truth than he whose mind is filled with falsehoods and errors."
Yet when Jefferson donated his huge personal library to the government, which became the Library of Congress, it included about 60 years worth of newspapers.
Newspapers around the time of the founding were largely dominated by one political party or another, much like the partisan cable news programs today.
Abraham Lincoln's respect for the press was so strong that he actually purchased a German language newspaper in Illinois as a campaign asset.
Lincoln was viciously attacked in the press from three directions: Democratic Party opponents (Copperheads) in the North, Europeans angered at Northern blockades of Southern ports and the Confederacy.
And he used his war powers to crack down. He saw the censorship as necessary to save the Union.
"His army and administration arrested and imprisoned scores of editors, banned disobliging news from the telegraph wires, stymied war correspondents embedded on the battlefront, seized and confiscated printing presses, tossed newspapers from trains before they could reach subscribers and barred them from post offices so they could not be mailed," Holzer wrote.
However, Lincoln did not crack down on political criticism during the 1864 presidential campaign.
"In border states still teetering between loyalty and secession, including Maryland, Missouri, and Kentucky, Lincoln's armies moved aggressively against anti-Union newspapers, confiscating printing presses and imprisoning editors without trial in an effort to choke off Confederate sentiment and maintain loyalty to the Union," Holzer wrote.
The flip side of cracking down on critical press was in courting a positive press. Andrew Jackson took advantage of the mass production of newspapers. Teddy Roosevelt introduced regular press conferences. John Kennedy's presidency was compared to Camelot and a complicit press rarely reported on his personal extramarital affairs and his extensive illnesses.
Presidents have long sought to bypass the media. Trump's use of Twitter is part of this long tradition, dating to Lincoln's use of the telegraph, Franklin Roosevelt's use of radio and Kennedy's use of television.
Holzer wrote that no president since Kennedy has had a constructive relationship with the press. Even Barack Obama?
"Obama may deserve a place alongside John Adams, Abraham Lincoln (his political hero), and Woodrow Wilson as the most aggressive presidents in blocking press scrutiny and making professional life difficult for his critics," Holzer wrote.
Former Washington Post executive editor Leonard Downie Jr. called the Obama administration's war on leaks and other efforts to control information as "the most aggressive I've seen since the Nixon administration, when I was one of the editors involved in the Washington Post's investigation of Watergate."
In one ironic episode, Obama received an award for transparency but excluded the press from the Oval Office ceremony. | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 315 |
package me.tomassetti.turin.compiler;

import me.tomassetti.turin.parser.ast.typeusage.BasicTypeUsageNode;
import me.tomassetti.turin.parser.ast.*;
import me.tomassetti.turin.parser.ast.properties.PropertyDefinition;
import me.tomassetti.turin.parser.ast.typeusage.ReferenceTypeUsageNode;

import java.util.Collections;
import java.util.Optional;

public class ExamplesAst {

    public static TurinFile registryAst() {
        // define AST
        TurinFile turinFile = new TurinFile();

        NamespaceDefinition namespaceDefinition = new NamespaceDefinition("registry");
        turinFile.setNameSpace(namespaceDefinition);

        TurinTypeDefinition person = new TurinTypeDefinition("Person");
        person.setPosition(Position.create(0, 0, 0, 0));
        PropertyDefinition firstNameProperty = new PropertyDefinition("firstName", new ReferenceTypeUsageNode("String"), Optional.empty(), Optional.empty(), Collections.emptyList());
        person.add(firstNameProperty);
        PropertyDefinition lastNameProperty = new PropertyDefinition("lastName", new ReferenceTypeUsageNode("String"), Optional.empty(), Optional.empty(), Collections.emptyList());
        person.add(lastNameProperty);

        TurinTypeDefinition address = new TurinTypeDefinition("Address");
        address.setPosition(Position.create(0, 0, 0, 0));
        PropertyDefinition streetProperty = new PropertyDefinition("street", new ReferenceTypeUsageNode("String"), Optional.empty(), Optional.empty(), Collections.emptyList());
        address.add(streetProperty);
        PropertyDefinition numberProperty = new PropertyDefinition("number", new BasicTypeUsageNode("uint"), Optional.empty(), Optional.empty(), Collections.emptyList());
        address.add(numberProperty);
        PropertyDefinition cityProperty = new PropertyDefinition("city", new ReferenceTypeUsageNode("String"), Optional.empty(), Optional.empty(), Collections.emptyList());
        address.add(cityProperty);
        PropertyDefinition zipProperty = new PropertyDefinition("zip", new BasicTypeUsageNode("uint"), Optional.empty(), Optional.empty(), Collections.emptyList());
        address.add(zipProperty);

        turinFile.add(person);
        turinFile.add(address);
        return turinFile;
    }
}
| {
"redpajama_set_name": "RedPajamaGithub"
} | 7,535 |
The Youth of the European People's Party (YEPP; Croatian: Mladež Europske pučke stranke, German: Jugend der Europäischen Volkspartei, French: Jeunes du Parti populaire européen) is the political youth organization of the European People's Party.
YEPP has 48 member organizations from 35 European countries; its Croatian member is the Youth of the Croatian Democratic Union (MHDZ). Through its member organizations YEPP has more than one million members, making it the largest European political youth organization.
Political profile
YEPP was founded in 1997 as the umbrella organization of conservative and Christian-democratic political youth wings, not only within the EU member states but in almost all European countries.
The organization's basic principles are freedom, the rule of law, the social market economy, a united Europe and the principle of subsidiarity.
Organizational structure
The organization's headquarters are in Brussels. It has three bodies: the Congress, the Council and the Board. The highest body is the Congress, the organization's assembly, which convenes every two years. The Congress elects the Board and adopts the organization's political principles and work program. The Council defines political positions and decides on the admission of new members; it convenes three times a year. The Board is responsible for day-to-day political work and consists of the president, the first vice-president, nine further vice-presidents, the secretary general and his deputy, and the treasurer.
See also
European People's Party
Political youth organization
External links
Youth of the European People's Party
European People's Party
EPP in the European Parliament
European political parties
Political youth organizations
"redpajama_set_name": "RedPajamaWikipedia"
} | 9,351 |
March 25, 2019 Charli M.
It's time for another vacation estimate! Our first vacation of 2019 is to Phoenix/Gilbert, Arizona to go to our friend's wedding! We leave out on a Friday and will head out late Wednesday or early Thursday. We don't have any specific plans of what we want to see other than the wedding and probably visit A's uncle in Surprise.
I'm feeling a little better about this vacation because I've been saving up for it for five months at this point, so I have $1500 put away. I will probably put a little more towards it because we plan on renting a car again. Between food, flight and rental car, I expect it to cost more than $1500.
Right now, I expect the flight to be around $700 for both A and I. I looked at flights today but I'm going to wait until Tuesday to book since I've heard it's cheapest to book on a Tuesday.
I always underestimate on food. This time I'm going to guesstimate about $100 per day on food. So I'll say $600 for that.
Fun: ?? Friday night is the bachelorette party and I got invited (!!) Not sure if that goes in food or fun budget but it's kind of fluid at this point. The Grand Canyon looks like it's about four hours away from where we're staying so I don't think we'll do that this time. We mostly just want to explore and see what Arizona is like since it could be home one day. Arizona is so much smaller than Texas so we can explore Gilbert (where the wedding is), Phoenix, Tempe, and Surprise pretty easily because they are all in a one-hour radius.
Hotel: $0. As per usual, I am hoping to not put anything toward hotel cost. I have 157,000 points with Holiday Inn which should be more than enough to book this trip and still have some left over for my next trip. Hotels typically range from 15,000 points to 25,000. Hopefully the trip will cost around 120,000. I will officially book on Tuesday and give an update eventually.
Without putting anything towards the fun budget that puts the trip estimate at about $1600. It's a little over what I have saved but I can put April's vacation savings towards the trip too which will be ok. That will give us a little wiggle room in case something exciting comes up. I'm looking forward to the trip. I was built for Arizona temperatures. I will keep you all posted on a final trip cost and what we ended up doing once we get back.
If you have time, Sedona is incredible. I think it was once in the top 10 for scenic drives. Can be quick or all day. Many lookouts, and Native Americans set up with jewelry and trinkets. We didn't do any hiking since we were with grandma and Lee. | {
"redpajama_set_name": "RedPajamaC4"
} | 5,382 |
{"url":"http:\/\/clay6.com\/qa\/24150\/if-a-g-and-h-are-the-am-gm-and-hm-respectively-of-two-positive-numbers-x-an","text":"# If a, g and h are the AM, GM and HM respectively of two positive numbers x and y, then identify the correct statement.\n\n$(a)\\;h\\;is\\;the\\;HM\\;between\\;a\\;and\\;g\\qquad(b)\\;a\\;is\\;the\\;AM\\;between\\;h\\;and\\;g\\qquad(c)\\;g\\;is\\;the\\;GM\\;between\\;a\\;and\\;h\\qquad(d)\\;No\\;relationships\\;exits\\;between\\;a,g\\;and\\;h$\n\nAnswer : (c) $g\\;is\\;the\\;GM\\;between\\;a\\;and\\;h$\nExplanation : By definition , $a=\\frac{x+y}{2}\\;,g=\\sqrt{xy}\\;and\\;h=\\frac{2xy}{x+y}$\n$Multiplying\\;a*h\\;=\\frac{x+y}{2}\\;*\\frac{2xy}{x+y}=xy=g^2$\n$g^2=ah=\\frac{a}{g}=\\frac{g}{h}$\nTherefore a,g,h are in GP and g is the GM between a and h.","date":"2018-04-21 09:35:10","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.9472973346710205, \"perplexity\": 730.0767034926663}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": 
\"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2018-17\/segments\/1524125945111.79\/warc\/CC-MAIN-20180421090739-20180421110739-00490.warc.gz\"}"} | null | null |
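The identity derived in the extract above, $g^2 = ah$ (so that $a/g = g/h$ and $a$, $g$, $h$ form a geometric progression), is easy to sanity-check numerically. A minimal sketch, with illustrative values of our own choosing:

```python
from math import sqrt, isclose

# illustrative positive numbers (ours, not from the extract)
x, y = 3.0, 12.0

a = (x + y) / 2          # arithmetic mean
g = sqrt(x * y)          # geometric mean
h = 2 * x * y / (x + y)  # harmonic mean

# g^2 = a*h, hence a/g = g/h: a, g, h are in geometric progression
assert isclose(g * g, a * h)
assert isclose(a / g, g / h)
print(a, g, h)  # 7.5 6.0 4.8
```

Any other pair of positive numbers works the same way, since the derivation above is fully general.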
\section{Introduction}
\qquad Giuli and Salbany \cite{GS} studied the category $\textbf{BTOP}$ of bitopological spaces and identified two Sierpinski objects, namely the `quad' and the `triad', in $\textbf{BTOP}$. Subsequently, Khastgir and Srivastava \cite{KS} studied the category $\textbf{BFTS}$ of fuzzy bitopological spaces, identified two Sierpinski objects, `$I^{2}$' and `$2I$', in $\textbf{BFTS}$, and showed that they behave in the same way in $\textbf{BFTS}$ as do the `quad' and the `triad' in $\textbf{BTOP}$ respectively.
In the present work, we obtain another Sierpinski object in $\textbf{BFTS}$ which is different from `$I^{2}$' and `$2I$'.
\section{Preliminaries}
\qquad For the category theoretic notions used here, \cite{AHS} may be referred. All subcategories are assumed to be full and replete.\\
Let $L$ be a frame with $0$ and $1$ being its least and largest elements respectively. For a given set $X$, let $\bar{0}$ and $\bar{1}$ denote the constant maps from $X$ to $L$ with values $0$ and $1$ respectively.
Given a set $X$, $L^{X}$ will denote the family of all maps $\mu:X\rightarrow L$ (called \textit{$L$-sets} or \textit{$L$-fuzzy sets}). $L^{X}$ is also a frame under the frame structure induced by that on $L$.
We recall some definitions which are used in this paper.
\begin{def1}{\rm \cite{Roda}}
A family $\tau\subseteq L^{X}$ is called an {\rm $L$-topology} on a set $X$, and the pair $(X,\tau)$ an {\rm $L$-topological space}, if $\tau$ is closed under arbitrary suprema and finite infima. Furthermore, a map $f:(X,\tau)\rightarrow (Y,\delta)$ between two $L$-topological spaces is called {\rm continuous} if $f^{\leftarrow}(\nu)\in\tau$, for every $\nu\in\delta$.
\end{def1}
By $L\textbf{-TOP}$, we shall denote the category of all $L$-topological spaces and their continuous maps.
If we take $L=I(=[0,1])$, then an $L$-topological space is known as \textit{fuzzy topological space} (cf. \cite{Chang}).
Let $\textbf{FTS}$ denote the category of all fuzzy topological spaces and their continuous maps.
\begin{def1}{\rm \cite{KS}}
A {\rm fuzzy bitopological space} is a triple $(X,\tau_{1},\tau_{2})$, where $X$ is a set and $\tau_{1}$, $\tau_{2}$ are fuzzy topologies on $X$. Furthermore, if for every distinct pair $x,y\in X$, there exists $\mu\in \tau_{1}\cup\tau_{2}$ such that $\mu(x)\neq \mu(y)$, then $(X,\tau_{1},\tau_{2})$ is called $T_{0}$ $($or $2T_{0})$.
\end{def1}
\begin{def1}{\rm \cite{KS}}
A map $f:(X,\tau_{1},\tau_{2})\rightarrow (Y,\delta_{1},\delta_{2})$ between fuzzy bitopological spaces is called {\rm bicontinuous} $($resp. {\rm biopen}$)$ if $f^{\leftarrow}(\nu)\in \tau_{i}$, for every $\nu\in\delta_{i}$ $($resp. if $f^{\rightarrow}(\mu)\in\delta_{i}$, for every $\mu\in\tau_{i})$, for $i=1,2$.
\end{def1}
Let $\textbf{BFTS}$ denote the category of all fuzzy bitopological spaces and their bicontinuous maps and $\textbf{BFTS}_{0}$ denote the subcategory of $\textbf{BFTS}$ whose objects are $T_{0}$-fuzzy bitopological spaces.\\
The notions of \textit{subspace}, \textit{homeomorphism} and \textit{embedding}, for a fuzzy bitopological space, are on expected lines.\\
The category $\textbf{BFTS}$ has initial structures (cf. \cite{Alka}).
\begin{rem}
Let $\mathscr F=\{f_{i}:X\rightarrow (Y_{i},\delta_{i},\delta_{i}')$ $|$ $i\in I\}$ be a family of maps, where $X$ is a set and $\{(Y_{i},\delta_{i},\delta_{i}')$ $|$ $i\in I\}$ is a family of fuzzy bitopological spaces. Then the fuzzy bitopology $(\Delta, \Delta')$ on $X$, which is initial with respect to the family $\mathscr F$, is the one for which $\Delta$ $($resp. $\Delta')$ is the fuzzy topology on $X$ having the subbase $\{f_{i}^{\leftarrow}(\mu)$ $|$ $\mu\in\delta_{i}, i\in I\}$ $($resp. $\{f_{i}^{\leftarrow}(\mu')$ $|$ $\mu'\in\delta_{i}', i\in I\})$.
\end{rem}
\begin{def1}{\rm \cite{KS}}
Given a family $\{(X_{i},\tau_{i}, \tau_{i}')$ $|$ $i\in I\}$ of fuzzy bitopological spaces, the initial fuzzy bitopology on $X$ $(=\displaystyle\prod_{i\in I}X_{i})$ with respect to the family of all projection maps $\{p_{i}:X\rightarrow (X_{i},\tau_{i}, \tau_{i}')$ $|$ $i\in I\}$ is called the {\rm product fuzzy bitopology}.
\end{def1}
Let $\mathscr C$ be a category and $\mathscr H$ some class of $\mathscr C$-morphisms.
\begin{def1}{\rm \cite{KS}}
A $\mathscr C$-object $X$ is called:
\begin{itemize}
\item $\mathscr H$-{\rm injective} if for every $e:Y\rightarrow Z$ in $\mathscr H$ and every $\mathscr C$-morphism $f:Y\rightarrow X$, there exists a $\mathscr C$-morphism $g:Z\rightarrow X$ such that $g\circ e=f$.
\item a {\rm cogenerator} in $\mathscr C$ if for every pair of distinct $f,g\in \mathscr C (Y,Z)$, there exists $h\in \mathscr C (Z,X)$ such that $h\circ f\neq h\circ g$.
\end{itemize}
\end{def1}
\textbf{Note}: In many familiar categories, $\mathscr H$ is usually taken to be the class of all embeddings in these categories, in which case the term `$\mathscr H$-injective' is shortened to just `injective'.
\begin{def1}{\rm \cite{KS}}
Let $\mathscr A$ be a class of $\mathscr C$-objects. We say that $\mathscr C$ is {\rm $\mathscr H$-cogenerated by $\mathscr A$} if every $\mathscr C$-object is an $\mathscr H$-subobject $(X$ is an $\mathscr H$-subobject of $Y$ if there is some $h:X\rightarrow Y$, with $h\in\mathscr H)$ of a product of objects in $\mathscr A$.
\end{def1}
\begin{def1}{\rm \cite{Mane}}
Given a category $\mathscr C$ of sets with structures, an object $S$ of $\mathscr C$ is called a {\rm Sierpinski object} if for every $X\in ob\mathscr C$, the family of all $\mathscr C$-morphisms from $X$ to $S$ is initial.
\end{def1}
The object $(L, \Delta)$ of $L\textbf{-TOP}$, where $\Delta=\{\bar{0},id_{L},\bar{1}\}$, is easily verified to be $T_{0}$ and a Sierpinski object in $L\textbf{-TOP}$ (as has been shown in \cite{Arun} for the case $L=[0,1]$).
\section{Sierpinski objects in BFTS}
\qquad The set $L=\{(a,b)\in I\times I$ $|$ $a+b\leq 1\}$ is a frame under the partial order $\leq$, defined as $(a,b)\leq (a',b')$ iff $a\leq a'$ and $b\geq b'$; the \textit{supremum} and the \textit{infimum} of $\{(a_{i},b_{i})\in L$ $|$ $i\in I\}$ in $(L,\leq)$ being $(\displaystyle\bigvee_{i\in I}a_{i}, \displaystyle\bigwedge_{i\in I}b_{i})$ and $(\displaystyle\bigwedge_{i\in I}a_{i}, \displaystyle\bigvee_{i\in I}b_{i})$ respectively and $(0,1)$ and $(1,0)$ being its least and largest elements.
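For instance, for the incomparable pair $(0.3,0.2)$ and $(0.5,0.4)$ (values chosen purely for illustration), the formulas above give

```latex
\[
  (0.3,0.2)\vee(0.5,0.4)=(0.5,0.2), \qquad
  (0.3,0.2)\wedge(0.5,0.4)=(0.3,0.4),
\]
```

and both again lie in $L$, since $0.5+0.2\leq 1$ and $0.3+0.4\leq 1$.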
\textbf{In this section, $L$ will denote this particular frame $\{(a,b)\in I\times I$ $|$ $a+b\leq 1\}$}.
\begin{rem}
Note that an $L$-set $\mu:X\rightarrow L$ can be identified with two maps $\mu_{1}:X\rightarrow [0,1]$ and $\mu_{2}:X\rightarrow [0,1]$ such that $\mu_{1}=p_{1}\circ\mu$ and $\mu_{2}=p_{2}\circ\mu$, where $p_{1}, p_{2}:L\rightarrow [0,1]$ are the two projection maps $($to the first and second `coordinates' respectively$)$. Thus an $L$-set and an $L$-topology are what some authors call an {\rm intuitionistic fuzzy set (cf. \cite{Ata})} and an {\rm intuitionistic fuzzy topology (cf. \cite{Coker})}, respectively.\\
\end{rem}
Given any $L$-topological space $(X,\tau)$, it turns out that $\tau_{1}=\{p_{1}\circ\mu$ $|$ $\mu\in\tau\}$ and $\tau_{2}=\{\bar{1}-(p_{2}\circ\mu)$ $|$ $\mu\in\tau\}$ are fuzzy topologies on $X$ (cf. \cite{Chang}) and so $(X,\tau_{1},\tau_{2})\in ob\textbf{BFTS}$. Accordingly, for the $L$-Sierpinski space $(L,\Delta)$, also we get $(L,\Delta_{1},\Delta_{2})\in ob\textbf{BFTS}$, where $\Delta_{1}=\{\bar{0},p_{1},\bar{1}\}$ and $\Delta_{2}=\{\bar{0},\bar{1}-p_{2},\bar{1}\}$.
We show in this section, that this object turns out to be a Sierpinski object in\textbf{ BFTS}.
\begin{rem}
We note that the two projection maps $p_{1}, p_{2}:L\rightarrow [0,1]$ satisfy $p_{1}\leq \bar{1}-p_{2}$. For if $(a,b)\in L$, then $a+b\leq 1$, whereby $a\leq 1-b$. Hence $p_{1}(a,b)\leq (\bar{1}-p_{2})(a,b)$.
\end{rem}
The following result is easy to verify.
\begin{pro}
For every $(X,\tau_{1},\tau_{2})\in ob \mathbf{BFTS}$ and for every $\mu\in \tau_{1}$ $($resp. $\mu\in \tau_{2})$, the map $h_{\mu}:(X,\tau_{1},\tau_{2})\rightarrow(L,\Delta_{1},\Delta_{2})$ defined as $h_{\mu}(x)=(\mu(x),0)$ $($resp. $h_{\mu}(x)=(0,1-\mu(x)))$ is a morphism in $\mathbf{BFTS}$, with $h_{\mu}^{\leftarrow}(p_{1})=\mu$ $($resp. $h_{\mu}^{\leftarrow}(\bar{1}-p_{2})=\mu)$.
\end{pro}
\begin{thm}
$(L,\Delta_{1},\Delta_{2})$ is a Sierpinski object in $\mathbf{BFTS}$.
\end{thm}
\textbf{Proof}: Let $(X,\tau_{1},\tau_{2})\in ob\textbf{BFTS}$ and $\mathscr{F}=\textbf{BFTS}((X,\tau_{1},\tau_{2}),(L,\Delta_{1},\Delta_{2}))$. Let $(Y,\delta_{1},\delta_{2})\in ob\textbf{BFTS}$ and $g:Y\rightarrow X$ be a map such that $f\circ g:(Y,\delta_{1},\delta_{2})\rightarrow (L,\Delta_{1},\Delta_{2})$ is bicontinuous for every $f\in\mathscr{F}$. We wish to show that $g$ is bicontinuous. Let $\mu\in \tau_{1}$. The bicontinuous map $h_{\mu}:(X,\tau_{1},\tau_{2})\rightarrow(L,\Delta_{1},\Delta_{2})$, described in Proposition $3.1$, is already in $\mathscr F$. Now, $g^{\leftarrow}(\mu)=g^{\leftarrow}(h_{\mu}^{\leftarrow}(p_{1}))=(h_{\mu}\circ g)^{\leftarrow}(p_{1})$. Hence $g^{\leftarrow}(\mu)\in\delta_{1}$ (as $h_{\mu}\circ g$ is bicontinuous).
Similarly, for every $\mu\in \tau_{2}$, $g^{\leftarrow}(\mu)\in\delta_{2}$. So $g$ is bicontinuous. Thus $(L,\Delta_{1},\Delta_{2})$ is a Sierpinski object in $\textbf{BFTS}$. $\Box$
\begin{pro}
$(L,\Delta_{1},\Delta_{2})$ is $T_{0}$.\\
\end{pro}
We point out that, earlier, two interesting Sierpinski objects in $\textbf{BFTS}$ have been found by Khastgir and Srivastava in \cite{KS}, viz., (i) $(I^{2},\Pi_{1},\Pi_{2})$, where $I^{2}=I\times I$, $\Pi_{i}=\{\bar{0},\pi_{i},\bar{1}\}$, $i=1, 2$, and $\pi_{1},\pi_{2}:I^{2}\rightarrow I$ are the two projection maps and (ii) $(2I,\Omega_{1},\Omega_{2})$, where $2I=(I\times \{0\}) \cup (\{0\}\times I)$, $\Omega_{i}=\{\bar{0},q_{i},\bar{1}\}$, $i=1, 2$, and $q_{1},q_{2}:2I \rightarrow I$ are two maps defined as
\begin{equation*}
q_{1}(x)=
\begin{cases}
\alpha, & \text {if $x=(\alpha,0)\in I\times \{0\}$} \\
0, & \text{otherwise}
\end{cases}
\end{equation*}
and
\begin{equation*}
q_{2}(x)=
\begin{cases}
\alpha, & \text {if $x=(0,\alpha)\in \{0\}\times I$} \\
0, & \text{otherwise.}
\end{cases}
\end{equation*}
Furthermore, while both of these turned out to be cogenerators in $\textbf{BFTS}_{0}$, only $(I^{2},\Pi_{1},\Pi_{2})$ turned to be injective also. Thus it is natural to ask: in what respect(s) the `new found' Sierpinski object $(L,\Delta_{1},\Delta_{2})$ is similar to the Sierpinski objects of \cite{KS} in $\textbf{BFTS}$?\\
\begin{pro}
$(L,\Delta_{1},\Delta_{2})$ is a cogenerator in $\mathbf{BFTS}_{0}$.
\end{pro}
\textbf{Proof}: Consider any distinct pair $f,g:(X,\tau_{1},\tau_{2})\rightarrow (Y,\delta_{1},\delta_{2})$ of morphisms in $\textbf{BFTS}_{0}$. Then for some $x\in X$, $f(x)\neq g(x)$. As $(Y,\delta_{1},\delta_{2})$ is $T_{0}$, $\mu(f(x))\neq \mu(g(x))$ for some $\mu \in \delta_{1}\cup\delta_{2}$. If $\mu \in \delta_{1}$ (resp. $\mu \in \delta_{2}$), then by Proposition $3.1$, there exists a bicontinuous map $h_{\mu}:(Y,\delta_{1},\delta_{2})\rightarrow(L,\Delta_{1},\Delta_{2})$ defined as $h_{\mu}(y)=(\mu(y),0)$ (resp. $h_{\mu}(y)=(0,1-\mu(y))$). Clearly $h_{\mu}\circ f\neq h_{\mu}\circ g$. Thus $(L,\Delta_{1},\Delta_{2})$ is a cogenerator in $\textbf{BFTS}_{0}$. $\Box$
\begin{pro}
$(X,\tau_{1},\tau_{2})\in ob\mathbf{BFTS}_{0}$ iff $\mathscr F =\mathbf{BFTS}((X,\tau_{1},\tau_{2}), \\(L,\Delta_{1},\Delta_{2}))$ separates points of $X$.
\end{pro}
\textbf{Proof}: Let $(X,\tau_{1},\tau_{2})\in ob\textbf{BFTS}_{0}$ and $x,y\in X$ with $x\neq y$. Then $\mu(x)\neq\mu(y)$, for some $\mu\in\tau_{1}\cup\tau_{2}$. If $\mu\in\tau_{1}$ (resp. $\mu\in\tau_{2}$), then the bicontinuous map $h_{\mu}:(X,\tau_{1},\tau_{2})\rightarrow(L,\Delta_{1},\Delta_{2})$, described in Proposition $3.1$, is already in $\mathscr F$. Clearly $h_{\mu}(x)\neq h_{\mu}(y)$. Thus $\mathscr F$ separates points of $X$.
Conversely, let $\mathscr F$ separate points of $X$ and let $x,y\in X$ with $x\neq y$. Then $f(x)\neq f(y)$, for some $f\in \mathscr F$ and hence $f^{\leftarrow}(p_{1})\in\tau_{1}$ and $f^{\leftarrow}(\bar{1}-p_{2})\in\tau_{2}$. As $f(x)\neq f(y)$, either $p_{1}(f(x))\neq p_{1}(f(y))$ or $(\bar{1}-p_{2})(f(x))\neq (\bar{1}-p_{2})(f(y))$, showing that $(X,\tau_{1},\tau_{2})$ is $T_{0}$. $\Box$
\begin{pro}
$(L,\Delta_{1},\Delta_{2})$ $\mathscr H$-cogenerates $\mathbf{BFTS}_{0}$, where $\mathscr H$ is the class of all embeddings in $\mathbf{BFTS}_{0}$.
\end{pro}
\textbf{Proof}: Let $(X,\tau_{1},\tau_{2})\in ob \textbf{BFTS}_{0}$ and $\mathscr F= \textbf{BFTS}((X,\tau_{1},\tau_{2}), (L,\Delta_{1},\Delta_{2}))$. Define $e:(X,\tau_{1},\tau_{2})\rightarrow (L,\Delta_{1},\Delta_{2})^{\mathscr F}$ as $e(x)f=f(x)$, for every $x\in X$ and for every $f\in\mathscr F$. Let, for $f\in\mathscr F$, $\pi_{f}$ denote the $f$-th projection map. Then for every $x\in X$, $(\pi_{f}\circ e)(x)=\pi_{f}(e(x))=e(x)f=f(x)$ implying that $\pi_{f}\circ e=f$. Thus $e$ is bicontinuous. Let $x,y\in X$ with $x\neq y$. Then $\mu(x)\neq\mu(y)$, for some $\mu\in\tau_{1}\cup\tau_{2}$. If $\mu\in\tau_{1}$ (resp. $\mu\in\tau_{2}$), then the bicontinuous map $h_{\mu}:(X,\tau_{1},\tau_{2})\rightarrow(L,\Delta_{1},\Delta_{2})$, described in Proposition $3.1$, is already in $\mathscr F$. Clearly $h_{\mu}(x)\neq h_{\mu}(y)$, showing that $e(x)\neq e(y)$. Thus $e$ is injective. Let $\mu\in\tau_{1}$. Then for every $x\in X$, $e^{\rightarrow}(\mu)(e(x))=\vee\{\mu(x')$ $|$ $e(x')=e(x)\}=\mu(x)=p_{1}(\mu(x),0)=p_{1}(h_{\mu}(x))=p_{1}(e(x)h_{\mu})=p_{1}(\pi_{h_{\mu}}(e(x)))=(p_{1}\circ\pi_{h_{\mu}})(e(x))$, implying that $e^{\rightarrow}(\mu)=(p_{1}\circ\pi_{h_{\mu}})|_{e(X)}$. Similarly, if $\mu\in\tau_{2}$ then $e^{\rightarrow}(\mu)=((\bar{1}-p_{2})\circ\pi_{h_{\mu}})|_{e(X)}$. Thus $e:X\rightarrow e(X)$ is biopen, i.e., $e$ is an embedding. Hence $(L,\Delta_{1},\Delta_{2})$ $\mathscr H$-cogenerates $\mathbf{BFTS}_{0}$. $\Box$\\
For a fuzzy bitopological space $(X,\tau_{1},\tau_{2})$, $(X,\tau_{1}\vee\tau_{2})$ is a fuzzy topological space, where $\tau_{1}\vee\tau_{2}$ is the coarsest fuzzy topology on $X$ finer than $\tau_{1}$ and $\tau_{2}$.
Let $(X,\tau_{1},\tau_{2})$ be a fuzzy bitopological space. Put $pt(\tau_{1}\vee \tau_{2})=\{p:\tau_{1}\vee \tau_{2}\rightarrow I$ $|$ $p$ is a frame map$\}$. For $\mu\in \tau_{1}\vee \tau_{2}$, define $\mu^{s}:pt(\tau_{1}\vee \tau_{2})\rightarrow I$ as $\mu^{s}(p)=p(\mu)$, for every $p\in pt(\tau_{1}\vee \tau_{2})$. Then $\tau_{1}^{s}=\{\mu^{s}$ $|$ $\mu\in\tau_{1}\}$ and $\tau_{2}^{s}=\{\mu^{s}$ $|$ $\mu\in\tau_{2}\}$ are fuzzy topologies on $pt(\tau_{1}\vee \tau_{2})$ (cf. \cite{KS}).
\begin{def1}{\rm \cite{KS}}
A fuzzy bitopological space $(X,\tau_{1},\tau_{2})$ is called bisober if $\eta_{X}:(X,\tau_{1},\tau_{2})\rightarrow (pt(\tau_{1}\vee \tau_{2}),\tau_{1}^{s},\tau_{2}^{s})$, defined as $\eta_{X}(x)(\mu)=\mu(x)$, for every $x\in X$ and for every $\mu\in \tau_{1}\vee \tau_{2}$, is bijective.\\
\end{def1}
In \cite{KS}, both $(I^{2},\Pi_{1},\Pi_{2})$ and $(2I,\Omega_{1},\Omega_{2})$ are shown to be bisober.
\begin{pro}
$(L,\Delta_{1},\Delta_{2})$ is bisober.
\end{pro}
\textbf{Proof}: We show that $\eta_{L}: (L,\Delta_{1}, \Delta_{2})\rightarrow (pt(\Delta_{1}\vee \Delta_{2}), \Delta_{1}^{s},\Delta_{2}^{s})$ is bijective. The injectivity of $\eta_{L}$ easily follows from the fact that $(L,\Delta_{1},\Delta_{2})$ is $T_{0}$. Now we show that $\eta_{L}$ is surjective. Let $p\in pt(\Delta_{1}\vee \Delta_{2})$. Then $p:\Delta_{1}\vee \Delta_{2}\rightarrow I$, being a frame map, is order preserving. So if $p(p_{1})=\alpha$ and $p(\bar{1}-p_{2})=\beta$, then $\alpha\leq\beta$. Hence $\alpha+1-\beta \leq 1$, implying that $(\alpha, 1-\beta)\in L$. Clearly $\eta_{L}(\alpha, 1-\beta)=p$. Thus $\eta_{L}$ is surjective and hence $(L,\Delta_{1},\Delta_{2})$ is bisober. $\Box$
\begin{pro}
$(L,\Delta_{1},\Delta_{2})$ is not injective in $\mathbf{BFTS}_{0}$.
\end{pro}
\textbf{Proof}: Consider the identity map $id:(L,\Delta_{1},\Delta_{2})\rightarrow (L,\Delta_{1},\Delta_{2})$, which is clearly an extremal monomorphism. Define $e:(L,\Delta_{1},\Delta_{2})\rightarrow (I^{2},\Pi_{1},\Pi_{2})$ as $e(a,b)=(a,1-b)$. It is easy to see that $e$ is bicontinuous and injective. Also, for $(a,b)\in L$, $(e^\rightarrow (p_{1}))e(a,b)=\bigvee \{p_{1}(x,y)$ $|$ $e(x,y)=e(a,b)\}=p_{1}(a,b)=a$ and $\pi_{1}(e(a,b))=\pi_{1}(a,1-b)=a$. Thus, $e^\rightarrow (p_{1})=\pi_{1}|_{e(L)}$, whereby $e^\rightarrow (p_{1})\in \Pi_{1}|_{e(L)}$. Similarly, as $e^\rightarrow (\bar{1} -p_{2})=\pi_{2}|_{e(L)}$, $e^\rightarrow (\bar{1} -p_{2})\in \Pi_{2}|_{e(L)}$. Thus, $e: L\rightarrow e(L)$ is biopen. Hence, $e$ is an embedding. We show that $e$ is also an epimorphism in $\textbf{BFTS}_{0}$. Consider any distinct pair $f,g:(I^{2},\Pi_{1},\Pi_{2})\rightarrow (Y,\delta_{1},\delta_{2})$ of morphisms in $\textbf{BFTS}_{0}$. Then for some $x\in I^{2}$, $f(x)\neq g(x)$. As $Y$ is $T_{0}$, there exists $\mu\in \delta_{1} \cup\delta_{2}$ such that $\mu(f(x))\neq \mu (g(x))$, i.e. $f^{\leftarrow}(\mu)\neq g^{\leftarrow}(\mu)$. If $\mu\in \delta_{1}$ (resp. $\mu\in \delta_{2}$), then $f^{\leftarrow}(\mu), g^{\leftarrow}(\mu) \in \Pi_{1}$ (resp. $f^{\leftarrow}(\mu), g^{\leftarrow}(\mu) \in \Pi_{2}$). Now $(1/2,1/2)\in L$ and $f^{\leftarrow}(\mu)(1/2,1/2)\neq g^{\leftarrow}(\mu)(1/2,1/2)$. This implies that $(f\circ e)(1/2,1/2)\neq (g\circ e)(1/2,1/2)$, whereby $f\circ e\neq g\circ e$. Thus $e$ is an epimorphism.
Now if there exists a morphism $h:(I^{2},\Pi_{1},\Pi_{2})\rightarrow (L,\Delta_{1},\Delta_{2})$ in $\textbf{BFTS}_{0}$ such that $h\circ e=id$, then, as $id$ is an extremal monomorphism, $e$ will have to be an isomorphism, which clearly is not possible. Thus $(L,\Delta_{1},\Delta_{2})$ cannot be injective in $\textbf{BFTS}_{0}$. $\Box$\\
\begin{rem}
The above result shows that $(L,\Delta_{1},\Delta_{2})$ and $(I^{2},\Pi_{1},\Pi_{2})$ are different.
\end{rem}
In our last result, we shall use the following easy-to-verify result.
\begin{pro}
\begin{enumerate}
\item If $f:(X,\tau_{1},\tau_{2})\rightarrow (Y,\delta_{1},\delta_{2})$ is a homeomorphism in $\mathbf{BFTS}$ then $f:(X,\tau_{1}\vee\tau_{2})\rightarrow (Y,\delta_{1}\vee\delta_{2})$ is a homeomorphism in $\mathbf{FTS}$.
\item If $f:(X,\tau)\rightarrow (Y,\delta)$ is a homeomorphism in $\mathbf{FTS}$ then $f^{\leftarrow}:\delta\rightarrow\tau$ is bijective.
\end{enumerate}
\end{pro}
\begin{thm}
The fuzzy bitopological spaces $(2I,\Omega_{1},\Omega_{2})$ and $(L,\Delta_{1},\Delta_{2})$ are not homeomorphic.
\end{thm}
\textbf{Proof}: Consider the fuzzy topological spaces $(2I,\Omega_{1}\vee\Omega_{2})$ and $(L,\Delta_{1}\vee\Delta_{2})$. It is clear that $q_{1}\wedge q_{2}=\bar{0}$, so $\Omega_{1}\vee\Omega_{2}=\{\bar{0}, q_{1}, q_{2}, q_{1}\vee q_{2},\bar{1}\}$. As $p_{1}\leq (\bar{1}-p_{2})$, $\Delta_{1}\vee\Delta_{2}=\{\bar{0}, p_{1}, \bar{1}-p_{2}, \bar{1}\}$. This shows that $\Omega_{1}\vee\Omega_{2}$ and $\Delta_{1}\vee\Delta_{2}$ do not have the same number of elements. Hence there cannot exist any bijection between $\Omega_{1}\vee\Omega_{2}$ and $\Delta_{1}\vee\Delta_{2}$. Thus $(2I,\Omega_{1}\vee\Omega_{2})$ and $(L,\Delta_{1}\vee\Delta_{2})$ are not homeomorphic and hence $(2I,\Omega_{1},\Omega_{2})$ and $(L,\Delta_{1},\Delta_{2})$ are also not homeomorphic. $\Box$\\\\
\textbf{Acknowledgement}: The authors would like to thank Prof. A.K. Srivastava for providing counsel in the preparation of this paper. The authors (RN and SKS) would also like to thank the {\it University Grants Commission} (New Delhi, India) and the {\it Council of Scientific \& Industrial Research} (New Delhi, India), respectively, for financial support through their Senior Research Fellowships.
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 8,035 |
Q: how to divide a string without any index in java I have a String like this "+91 2895675148 / +91 123456789 / +91 987654321". I want to split the above string into
String str1 = +91 2895675148
String str2 = +91 123456789
String str3 = +91 987654321
How can I separate the above numbers from the string in Java without using an index as a parameter?
Thanks
A: String string = "+91 2895675148 / +91 123456789 / +91 987654321";
String[] parts = string.split(" / ");
String part1 = parts[0]; // +91 2895675148
String part2 = parts[1]; // +91 123456789
String part3 = parts[2]; // +91 987654321
Please view for more methods: link
A: You can use split("/"), which returns an array of strings.
Eg.
String str = "+91 2895675148 / +91 123456789 / +91 987654321";
System.out.println(Arrays.toString(str.split("/")));
out put :-
[+91 2895675148 , +91 123456789 , +91 987654321]
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 8,946 |
An outbreak of serous meningitis in Novosibirsk occurred in the summer of 2004.
In the winter and spring of 2004, Berdsk was being switched over to a new sewage line. During this period, sewage was discharged into the Berdsk Bay of the Ob Reservoir, along whose shores many children's camps are located.
In the summer a mass outbreak of serous meningitis began: according to official data, more than 500 people fell ill, the overwhelming majority of them children. At the start of the outbreak the authorities tried to conceal information about what was happening, but the flow of patients kept growing, and only when the number of cases reached many dozens did the mass media report on the mass illness. Swimming in the Ob Sea was later banned.
At the end of September, 582 cases were reported, but the outbreak continued after that, and no official statement of the total number of cases ever followed.
It was officially announced that the source of the contamination was being investigated and that the prosecutor's office had opened three criminal cases over the pollution; however, no reports on how those cases ended appeared in the press.
Sources
Events in Novosibirsk
History of Berdsk
Healthcare
2004 in Novosibirsk Oblast | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 4,657 |
{"url":"https:\/\/www-users.cse.umn.edu\/~ovenh001\/seminar\/03-18.html","text":"Date: 03\/18\/2022\n\nSpeaker: Swee Hong Chan\n\nCombinatorial Atlas for Log-Concave Inequalities\nThe study of log-concave inequalities for combinatorial objects have seen much progress in recent years. One such progress is the solution to the strongest form of Mason's conjecture (independently by Anari et. al. and Br\u00e1nd\u00ebn-Huh). In the case of graphs, this says that the sequence $f_k$ of the number of forests of the graph with $k$ edges, form an ultra log-concave sequence. In this talk, we discuss an improved version of all these results, proved by using a new tool called the combinatorial atlas method. This is a joint work with Igor Pak. This talk is aimed at a general audience.","date":"2022-07-04 20:48:30","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.6272070407867432, \"perplexity\": 522.5945140988342}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": 
\"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2022-27\/segments\/1656104496688.78\/warc\/CC-MAIN-20220704202455-20220704232455-00724.warc.gz\"}"} | null | null |
\section{Introduction}
When we want to describe a dynamical system in physics, we typically identify the effective field theory governing its dynamics. In some cases, however, there are multiple effective field theories describing the same system. This phenomenon is referred to as duality. Dualities are a very powerful tool in fundamental physics, ubiquitously used in dynamical systems involving gauge theories, and extensively explored and utilised in the context of string theory (cf.~\cite{Quevedo:1997jb,Polchinski:2014mva} for an overview). Such dualities provide two descriptions -- often two Lagrangians with distinct sets of fields and associated couplings -- of the same dynamical system. These effective field theories differ in which properties of the system, e.g.~which correlation functions, they describe most efficiently.
The efficient calculation of correlation functions or estimates of them based on sample data is also relevant in typical data science applications such as classification. Here, we present examples of data questions where dualities prove to be useful (cf.~Section~\ref{sec:examples}). For simplicity we restrict ourselves at this stage to data questions in physical systems where we know a useful dual description. This has the added benefit that the results can be compared to interpretable solutions. We show that the classification with `simple' standard network architectures works much better for data in the dual representation. Better accuracy is achieved in the dual frame with less training effort.
We then show that a similar level of classification accuracy is not easily achieved otherwise, by examining several standard changes to the architectures such as wider and deeper networks. In particular, this includes architectures which in principle have the capability to perform the duality transformation; we find that the network generically does not find this beneficial configuration. As a next step, we explore opportunities to enforce such dual representations, beyond a `trivial' enforcing of dual variables when the duality transformation is known (cf.~Section~\ref{sec:enforcing}). In particular, we find positive results when we demand feature separation in the latent space. We also identify good representations with a modified autoencoder structure where we put an additional constraint (good performance on simple classification tasks) on the latent dimension. Finally, we provide and exemplify a method to enforce certain distributional properties of the dual representation. These representations found by the networks are the first examples where dual representations are obtained without the network ``knowing'' them a priori.
Before concluding, we comment on the connection to other dualities in physics (cf.~Section~\ref{sec:connections}).
\section{Benefits of Dual Representations}
\label{sec:examples}
Here we present several examples where dualities prove useful to address supervised classification tasks.
\subsection{Discrete Fourier Transformation}\label{DiscreteFourierTransformation}
The Fourier transformation captures the essence of many dualities relating strongly-coupled and weakly-coupled field theories (cf. also Section \ref{sec:connections}). Strongly coupled theories feature non-vanishing correlations over large distances, whereas weakly coupled theories only feature sizeable correlations at short distances. This is mirrored by the Fourier transformation, where a delta-peak in momentum space is spread out over all of position space. {\it When is it useful to use the position or the momentum space representation?} A simple example is given by identifying whether there is a signal hiding under Gaussian noise. For concreteness we consider a signal which is a single peak in momentum space. An example of the data for each class in this binary classification problem is shown in Figure~\ref{fig:Fourier_example}, and the details of the construction and of our neural networks and numerical experiments can be found in Appendix~\ref{app:fourier}.
\begin{figure}
\includegraphics[width=0.245\textwidth]{Fourier-NoSignal_Re_X_col.pdf}
\includegraphics[width=0.245\textwidth]{Fourier-NoSignal_Im_X_col.pdf}
\includegraphics[width=0.245\textwidth]{Fourier-NoSignal_Re_P_col.pdf}
\includegraphics[width=0.245\textwidth]{Fourier-NoSignal_Im_P_col.pdf}\\
\includegraphics[width=0.245\textwidth]{Fourier-Signal_Re_X_col.pdf}
\includegraphics[width=0.245\textwidth]{Fourier-Signal_Im_X_col.pdf}
\includegraphics[width=0.245\textwidth]{Fourier-Signal_Re_P_col.pdf}
\includegraphics[width=0.245\textwidth]{Fourier-Signal_Im_P_col.pdf}
\vspace{-1.cm}\caption{Comparison of noisy signals and pure noise in position and Fourier space.}\label{fig:Fourier_example}
\end{figure}
When performing classification with a simple neural network\footnote{Here we perform a classification with a single Conv1D layer with 4 filters and ReLU activation followed by a Dense layer with a single neuron and sigmoid activation. Details on the experiment can be found in Appendix~\ref{app:fourier}.}, we find that a classification is possible for the data in the momentum representation (test accuracy $0.9835$) but not for the position representation (test accuracy at pure guessing $\sim 0.5$).
When adding a single or several hidden dense layers to the position space network, we find only a marginal improvement (again, details can be found in Appendix~\ref{app:fourier}). As the performance reached does not even come close to the perfect score of the momentum space representation, it is clear that our deeper neural networks are not learning to transform the position space representation.
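To illustrate why the momentum representation makes the task so much easier, the following sketch (a minimal numpy reconstruction; the noise amplitude and signal strength are illustrative choices, not the values of Appendix~\ref{app:fourier}) compares a one-number peak-height statistic for noisy single-frequency signals and pure noise:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000  # length of each series (illustrative choice)

def sample(has_signal):
    """Gaussian noise in position space, optionally with a single-frequency signal."""
    x = rng.normal(0.0, 0.5, N)
    if has_signal:
        k = rng.integers(1, N // 2)                      # random frequency bin
        x += np.sin(2 * np.pi * k * np.arange(N) / N)    # unit-amplitude signal
    return x

def peak_stat(series):
    """Largest Fourier-bin magnitude relative to the typical bin magnitude."""
    p = np.abs(np.fft.rfft(series))
    return p.max() / np.median(p)

# In momentum space the signal concentrates in one bin, so even this crude
# statistic separates the two classes cleanly; in position space no single
# bin carries the signal.
sig = [peak_stat(sample(True)) for _ in range(20)]
bkg = [peak_stat(sample(False)) for _ in range(20)]
```

The separation `min(sig) > max(bkg)` holds by a wide margin here, which is what the momentum-space classifier exploits.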
\subsection{2D Ising Model}
\label{sec:2DIsing}
A very well-known example of duality in physics is that of the high-low temperature duality in the 2D Ising model~\cite{Kramers:1941kn,Kramers:1941zz,Onsager:1943jn} (cf. also ~\cite{Savit:1979ny} for a review).
\begin{figure}[t]
\includegraphics[width=0.49\textwidth]{Ising2D_OriginalDual_Performance_Fit_intlegend.pdf}
\includegraphics[width=0.49\textwidth]{Ising2D_OriginalDual_Overlap_Fit_intlegend.pdf}
\vspace{-0.5cm}\caption{Classification of states according to their temperature in the square-lattice Ising model. Solid lines and dots indicate data for the original temperatures, dashed lines and crosses for the dual temperatures. Pairs of temperatures $T_{0},T=T_{0}+\Delta T$ were chosen by fixing a reference point $T_{0}$ and gradually increasing $\Delta T$ by increments of $0.05$.
\label{fig:Ising2D_OriginalDual_Performance}}
\end{figure}
This Ising model lives on a $N\times N$ square lattice with periodic boundary conditions.
On each lattice site there is a spin degree of freedom $s_i,$ which can take values $\pm 1$. The Hamiltonian of a given state $s$ in the original description is given by
\begin{equation}
H\left( s \right)=-J \sum_{\langle i,j\rangle}s_i s_j~,
\end{equation}
where we take the interaction to be ferromagnetic, $J>0$, and from now on set $J=1,$ $k_B=1.$ The partition function of this system at finite temperature $T$ is given by
\begin{equation}\label{2D-SL-Ising_PartitionFunction}
Z\left( \beta \right) = \sum_{s}e^{-\beta H\left( s \right)}\,,
\end{equation}
where $\beta=1/T.$ The duality in this Ising model is as follows. The partition function $Z\left( \beta \right)$ of the above system is related to that of another system at a dual temperature $\tilde{\beta}=-\frac{1}{2}\ln \tanh \beta$ via the relation
\begin{equation}
Z\left( \beta \right) = \frac{1}{2}\left( \sinh (2 \tilde{\beta}) \right)^{-N}\sum_{\sigma}e^{-\tilde{\beta} H\left( \sigma \right)}\,,
\end{equation}
where the dual spins $\sigma_i$ also take values $\pm 1$ on a lattice with the same geometry, and the dual system has the same coupling strength $J$. This is known as the Kramers-Wannier duality~\cite{Kramers:1941kn,Kramers:1941zz}, which relates a description at low temperature with long-range correlations (strong coupling) to one at high temperature with short-range correlations (weak coupling).\footnote{The fact that both partition functions describe the same
type of Ising model implies the existence of a critical temperature $\beta_{\mathrm{crit}}\approx 0.4407$ at which a transition between ordered and disordered phases occurs.}
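The dual-temperature map and its self-dual point can be checked directly. The following sketch verifies that $\tilde\beta(\beta)=-\frac{1}{2}\ln\tanh\beta$ is an involution and that $\beta_{\mathrm{crit}}=\frac{1}{2}\ln(1+\sqrt{2})\approx 0.4407$ is its fixed point:

```python
import numpy as np

def dual_beta(beta):
    """Kramers-Wannier dual inverse temperature."""
    return -0.5 * np.log(np.tanh(beta))

beta_crit = 0.5 * np.log(1.0 + np.sqrt(2.0))   # ~0.4407, the self-dual point

# The map is an involution: applying it twice returns the original beta,
# and the critical point is left invariant.
assert np.isclose(dual_beta(dual_beta(0.25)), 0.25)
assert np.isclose(dual_beta(beta_crit), beta_crit)
# Low beta (high temperature) maps to high beta (low temperature) and vice versa.
assert dual_beta(0.1) > beta_crit > dual_beta(2.0)
```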
\begin{figure}[t]
\includegraphics[width=0.49\textwidth]{Ising2D_FeatureSpace_Original2.pdf}
\includegraphics[width=0.49\textwidth]{Ising2D_FeatureSpace_Dual2.pdf}
\vspace{-0.5cm}\caption{Distribution of energies and magnetizations of a square-lattice Ising model for various temperatures and their duals. \label{fig:overlap1}}
\end{figure}
\begin{figure}[t]
\includegraphics[width=0.49\textwidth]{Ising2D_Overlap_125-15_Original.pdf}
\includegraphics[width=0.49\textwidth]{Ising2D_Overlap_125-15_Dual.pdf}
\vspace{-0.4cm}\caption{Energy distributions of the square-lattice Ising model for $T=1.25, 1.5$ and their
respective dual temperatures. The energies of the original representation concentrate on a very small
region and show a significant overlap. Both diagrams use bins of the same width respectively. \label{fig:overlap2}}
\end{figure}
\subsubsection*{Classification of Temperature}
{\it When is it useful to use the high temperature and when is it useful to use the low temperature phase?}
Similar in spirit to the Fourier case, we start with a classification task. In particular, we are interested in predicting which temperature a sample is drawn from. Our experimental setup is as follows: We considered a square-lattice Ising model on a $40\times 40$ lattice at temperatures $T=0.25, 0.5, \dots , 2.25$ and their corresponding dual temperatures. The dataset for each temperature was split into 16000 training samples and 4000 test samples. Networks were then trained to classify states drawn from two datasets according to the respective temperature of the set they were drawn from (binary classification). We chose as architecture a simple convolutional neural network consisting of one $2\times 2$-convolutional layer with 8 filters and ReLU activation followed by a linear layer with sigmoid activation. The overall performance did not change significantly when increasing the number of layers to up to five and varying the number of filters between $8,~12,~16,$ and $32.$ Weights were initialised randomly; training was performed using the standard Nesterov Adam optimiser with initial learning rate $0.002$ and learning rate decay. No significant changes in performance were observed after a maximum of $200$ training epochs.
Dataset generation and training was performed for ten different seeds to prevent outliers in performance from distorting the results. The best test set accuracies reached after 200 epochs were then averaged over the ten test-runs. The average best test set accuracies for various pairs of temperatures are shown in Figure~\ref{fig:Ising2D_OriginalDual_Performance}.
As can be seen, the classification performance improves substantially when the task is performed at the dual temperatures. This becomes apparent when visualising the energy and magnetisation for both representations, cf.~Figure~\ref{fig:overlap1}. An example of the overlap in the energy distributions for temperatures $T_1=1.0$ and $T_2=1.25$ is shown in Figure~\ref{fig:overlap2}. The correlation between the classification performance and the overlap of the energy distributions is shown in Figure~\ref{fig:Ising2D_OriginalDual_Performance}.
Further uses could be sought in determining other correlation functions. In particular, we investigated several disorder correlation functions, e.g.~correlators of the type $\langle \sigma_i \sigma_j\rangle.$ However, as the performance differences between the two representations are not as dramatic as in the temperature classification, we leave a detailed discussion of these correlators to future work.
\subsection{1D Ising Models}
\label{sec:1DIsing}
Other lattice systems offer different types of dualities, and here we present an example where the dual representation features a different Hamiltonian, i.e.~the system is not self-dual. Simple examples of this type of duality are given in the context of one-dimensional Ising models on a finite spin chain with $N$ spins, $n$-spin interactions and free boundary conditions. A discussion of such systems can be found, for instance, in~\cite{2016JPhA...49I5002T}; here we summarise the system properties that are important for our subsequent analysis.
For $n$-spin interaction models, the Hamiltonian $H(s_1,\ldots,s_N)$ takes the form
\begin{equation}
H\left( s \right) =- J\sum_{k=1}^{N-n+1}\prod_{l=0}^{n-1}s_{k+l} - B\sum_{k=1}^{N}s_k\,.
\end{equation}
The free boundary conditions are to be understood in the sense that one considers only interactions of $n$-spin chains which can be fully embedded into the system $(s_1, s_2,\dots s_N)$, and there are no identifications or interactions connecting both ends of the chain. Furthermore, there do not exist any relations which fix the values of boundary (or other) spins to specific values.
Let us now consider the special case of a purely interacting theory with $B=0$. The Hamiltonian
then reduces to
\begin{equation}
\label{Ising1D_Hamiltonian}
H\left( s \right) =- J\sum_{k=1}^{N-n+1}\prod_{l=0}^{n-1}s_{k+l}\,.
\end{equation}
This can be bijectively mapped to a non-interacting theory with external field $J$ and Hamiltonian
\begin{equation}
\label{Ising1D_DualHamiltonian}
H\left( \sigma \right) = -J\sum_{k=1}^{N-n+1}\sigma_k\,.
\end{equation}
The corresponding duality transformation exchanges the roles of the spins and their interaction terms,
\begin{equation}
\label{1DIsing_DualityTransformation}
\sigma_k = \prod_{l=0}^{n-1}s_{k+l}, \hspace{50pt} k=1,\dots N\,,
\end{equation}
where spins $s_{l}$ with $l>N$ are to be understood as ghost spins taking the fixed value $1$. The inverse
transformation is given by
\begin{equation}
\label{1DIsing_DualityTransformation_inverse}
s_{k}= \prod_{r=0}^{q}\sigma_{k+rn}\sigma_{k+rn+1}\,,
\end{equation}
where $q$ is to be chosen as the maximum value such that $k+qn \leq N$ and one again introduces a ghost
spin $\sigma_{N+1}=1$ (further ghost spins can be introduced to generate representations of the same dimension, but they do not play any role in the inverse transformation).
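The transformation~\eqref{1DIsing_DualityTransformation} and its inverse~\eqref{1DIsing_DualityTransformation_inverse} can be implemented in a few lines. The sketch below is our own 0-indexed transcription; it also checks that the interacting energy~\eqref{Ising1D_Hamiltonian} equals the dual non-interacting energy~\eqref{Ising1D_DualHamiltonian} (with $J=1$):

```python
import numpy as np

def dual_map(s, n):
    """sigma_k = prod_{l=0}^{n-1} s_{k+l}, with ghost spins s_l = +1 for l > N."""
    N = len(s)
    padded = np.concatenate([s, np.ones(n - 1, dtype=int)])
    return np.array([padded[k:k + n].prod() for k in range(N)])

def inverse_map(sigma, n):
    """s_k = prod_{r=0}^{q} sigma_{k+rn} sigma_{k+rn+1}, ghost spin sigma_{N+1} = +1."""
    N = len(sigma)
    padded = np.concatenate([sigma, [1]])
    s = np.empty(N, dtype=int)
    for k in range(1, N + 1):                  # 1-indexed, as in the text
        q = (N - k) // n                       # largest q with k + q*n <= N
        s[k - 1] = np.prod([padded[k + r * n - 1] * padded[k + r * n]
                            for r in range(q + 1)])
    return s

# Round trip and energy equality for a random configuration
rng = np.random.default_rng(1)
N, n = 18, 3
s = rng.choice([-1, 1], size=N)
sigma = dual_map(s, n)
assert np.array_equal(inverse_map(sigma, n), s)           # bijectivity
E_orig = -sum(s[k:k + n].prod() for k in range(N - n + 1))
assert E_orig == -sigma[:N - n + 1].sum()                 # H(s) = H_dual(sigma)
```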
\begin{figure}[t]
\begin{center}
\hspace{-10pt}
\includegraphics[width=0.39\textwidth]{KinkVariables_normal6.pdf}
\hspace{40pt}
\includegraphics[width=0.39\textwidth]{KinkVariables_dual6.pdf}
\end{center}
\vspace{-0.8cm}\caption{Comparison of spin configurations in a two-spin interaction model and a scalar field kink. Dual spins
located on the interaction links represent the energy distribution of a ``kink'' in the spin model.}
\label{fig:1DIsing_KinkVariables}
\end{figure}
In the inverse transformation~\eqref{1DIsing_DualityTransformation_inverse}, the product runs over pairs of adjacent dual spins,
starting from the position $k$ and skipping $n-2$ spins between the individual pairs. The involvement of spins in the duality
transformation~\eqref{1DIsing_DualityTransformation} and its inverse~\eqref{1DIsing_DualityTransformation_inverse}
is exemplified in Figure~\ref{fig:Ising1D_Dualities_n3} for the case $N=10$ and $n=3$. Notice that this can be considered a direct generalisation of the special case $n=2$, for which the duality transformation
corresponds to an exchange of roles between the original spins and their kink variables (cf.~Figure~\ref{fig:1DIsing_KinkVariables}).
\subsubsection*{Identifying (Meta-)Stable States}
{\it Which task is more easily addressed in the dual representation?}
A simple example for this would be to compute the total energy of a given spin configuration $(s_1,\ldots,s_N)$, which can involve high-order
products in the original frame and simplifies to summing over the first $N-n+1$ spins in the dual frame. Of course,
this is more of an ad-hoc example since the duality transformation by construction computes the local energy
contributions.
Generally speaking, there exist more sophisticated tasks for which no such hand-crafted frame can be constructed. These tasks can also be drastically simplified by applying duality transformations known from, or learned in, a different context.
One such instance is the detection of states $s$ which are
(meta-)stable with respect to single-flip spin dynamics. Such single-flip stable states are defined as
configurations for which flipping any single spin causes the energy of the system to increase.\footnote{Such metastable states can cause standard MCMC algorithms to become trapped in a local minimum as the temperature approaches zero, and they are a major reason why the performance of common simulation algorithms tends to deteriorate at low temperatures.}
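For small systems, the labels of this classification task can be generated by brute force. A minimal sketch of this definition (our own reconstruction; the actual dataset generation is described in Appendix~\ref{app:1DIsing}):

```python
import numpy as np

def energy(s, n, J=1.0):
    """Hamiltonian of the n-spin interaction chain with free boundaries, B = 0."""
    N = len(s)
    return -J * sum(s[k:k + n].prod() for k in range(N - n + 1))

def is_single_flip_stable(s, n):
    """True iff flipping any single spin strictly increases the energy."""
    e0 = energy(s, n)
    for i in range(len(s)):
        t = s.copy()
        t[i] = -t[i]
        if energy(t, n) <= e0:
            return False
    return True

# The fully aligned chain is trivially single-flip stable; a chain with one
# flipped interior spin is not, since flipping it back lowers the energy.
aligned = np.ones(10, dtype=int)
kinked = aligned.copy(); kinked[4] = -1
```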
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.8\textwidth]{1DIsing_Dualities_n3_3.pdf}
\end{center}
\vspace*{-12pt}
\caption{Structure of the duality mappings \eqref{1DIsing_DualityTransformation} (red)
and the inverse duality mappings \eqref{1DIsing_DualityTransformation_inverse} (blue)
for $N=10$ and $n=3$. White spins with ``$+$''-sign inside indicate ghost spins
with fixed value 1.}
\label{fig:Ising1D_Dualities_n3}
\end{figure}
\subsubsection*{Effect on Simple Networks}
In order to get an idea whether the duality \eqref{1DIsing_DualityTransformation} is a viable tool to improve the classification of metastable states, we first benchmark how ``simple'' neural network architectures
handle this classification problem and whether transforming our variables to the dual frame
improves their performance. While, in practice, any improvement from utilising the dual frame might also be achieved by using more sophisticated
architectures, this setting nevertheless serves as an important first step. A positive result justifies further scrutiny of whether the same principles also hold for tasks which state-of-the-art models fail to solve.
Since the duality transformations \eqref{1DIsing_DualityTransformation} are themselves highly nontrivial from
the perspective of computational complexity, some caution is needed here to prevent distorting our results by
limitations arising from a mere lack of capacity. Taking, for instance, our toy example of energy regression,
it is clear that the task cannot be solved by a linear network in the normal frame, while even a simple perceptron
with a sufficiently high number of neurons can do so with ease. In this case, the only benefit of using the
dual frame thus lies in a lower network complexity, which is, however, partly nullified by the computational
complexity of the duality transformation itself.
Taking this into account, we chose as a suitable benchmark for our tests a single-layer perceptron with 128 hidden neurons,
ReLU activation for the hidden layer and sigmoid activation for the output layer. This architecture has a sufficiently high
capacity to easily learn the transformation \eqref{1DIsing_DualityTransformation} directly, while at the same time keeping
a relatively simple structure.
We generated all $2^{18}$ states for the 1D Ising chain
with $N=18$ spins and tested different networks for varying $n$. We
split the data into states labeled as ``not (meta-)stable'' (0) or ``(meta-)stable''~(1) and normalised the training and test
sets to contain an equal number of samples for each class. We furthermore checked the performance for varying amounts
of training data in order to properly analyse effects on generalisation errors and data efficiency.
The average best test accuracies and losses achieved in 10 training runs of 500 epochs are listed in Table~\ref{table:val_acc_1DSimpleNets_original}.
Average training curves for the case $n=8$ and varying amounts of training data can be found in
Figure~\ref{fig:Ising1DN18n8_TrainingCurves}. Further details on the training and
testing modalities are discussed in Appendix \ref{app:1DIsing}.
\begin{figure}[t]
\includegraphics[width=0.33\textwidth]{Ising1DN18n8e500s600_TrainingCurve_Original_shaded.pdf}
\includegraphics[width=0.33\textwidth]{Ising1DN18n8e500s3000_TrainingCurve_Original_shaded.pdf}
\includegraphics[width=0.33\textwidth]{Ising1DN18n8e500s9500_TrainingCurve_Original_shaded.pdf}\\
\includegraphics[width=0.33\textwidth]{Ising1DN18n8e500s600_TrainingCurve_Dual_shaded.pdf}
\includegraphics[width=0.33\textwidth]{Ising1DN18n8e500s3000_TrainingCurve_Dual_shaded.pdf}
\includegraphics[width=0.33\textwidth]{Ising1DN18n8e500s9500_TrainingCurve_Dual_shaded.pdf}
\vspace*{-20pt}
\caption{
Example histories of training loss (blue) and test loss (orange) over the course of 500 epochs for $n=8$ and various
numbers of training samples. The plots show averaged curves computed over ten test-runs; standard deviations are indicated
with shaded colours.
}
\label{fig:Ising1DN18n8_TrainingCurves}
\end{figure}
\begin{table}[t]
\begin{footnotesize}
\begin{center}
\begin{tabular}{c||ccccc}
normal & n=4 & n=5 & n=8 & n=9 & n=12 \\
\hline\hline
$6\cdot 10^2$ & 0.9113 & 0.8688 & 0.8788 & 0.8813 & 0.8803
\\
$3\cdot 10^3$ & - & 0.9243 & 0.9215 & 0.9223 & 0.9295
\\
$9.5\cdot 10^3$ & - & - & 0.9424 & 0.9475 & 0.9739
\end{tabular}\qquad
\begin{tabular}{c||ccccc}
dual & n=4 & n=5 & n=8 & n=9 & n=12 \\
\hline\hline
$6\cdot 10^2$ & 0.9911 & 0.9783 & 0.9819 & 0.9855 & 0.9909
\\
$3\cdot 10^3$ & - & 0.9958 & 0.9977 & 0.9994 & 1.0000
\\
$9.5\cdot 10^3$ & - & - & 1.0000 & 1.0000 & 1.0000
\end{tabular}
\end{center}
\end{footnotesize}
\vspace*{-10pt}
\caption{Detection of (meta-)stable states in the 1D Ising chain for different interactions and amounts of training data.
The listed numbers describe the average best test accuracy over 10 training runs of 500 epochs each.
Missing values indicate that the number of required samples exceeds the total number of metastable states for the considered setting. On the left are the results for the normal variables, and the right side shows the results for the dual variables.}
\label{table:val_acc_1DSimpleNets_original}
\end{table}
\subsubsection*{Results}
The results show that there is indeed a major improvement of performance in the dual representation. While all networks
are able to detect at least some patterns in either frame, we find several advantages from using the dual representations:
\begin{itemize}
\item{The best performance achieved for low numbers of training samples is notably higher in the dual representation,
implying that the duality transformation \eqref{1DIsing_DualityTransformation} can be useful to prevent overfitting
and improve data efficiency.}
\item While increasing the amount of training data gradually tightens the performance gap between the original and dual
representations, the learning curves in the latter remain much steeper in all cases, leading to shorter and more stable training.
\item Even in cases for which the best test accuracies are high in both representations, there remains a significant
difference in the actual binary cross-entropy,
\begin{equation}
\begin{split}
\mathcal{L}=-\left[y_{\textrm{true}}\log(y_{\textrm{pred}})+(1-y_{\textrm{true}})\log(1-y_{\textrm{pred}})\right]\,,
\end{split}
\end{equation}
implying that networks trained on the dual representation perform classifications
with a considerably higher degree of certainty. This is also reflected in the model outputs, which are commonly closer
to 0 or 1 in the dual representation than in the original variables, even in settings with high test accuracies in both representations (cf. Figure~\ref{fig:Ising1D_NetworkOutput}).
\begin{SCfigure}
\includegraphics[width=0.55\textwidth]{Ising1D_NetworkOutput.pdf}
\caption{Output distribution of simple neural networks for states classified as \mbox{(meta-)stable} for $N=18$ and $n=8$. Both networks were trained on 3000 samples. Only values for the dual representation accumulate very close to one, implying a higher degree of certainty in this frame. \label{fig:Ising1D_NetworkOutput}}
\end{SCfigure}
\item{While overfitting is prevalent in the original representation, the loss curves additionally show signs of underfitting.
This can be remedied by increasing the capacity of the network, which, however, leads to even stronger overfitting. We found
that regularization techniques can slightly improve performance in this case; however, there remained a significant difference
between both representations for all tested methods. Details on this are discussed in Appendix~\ref{app:1DIsing}.}
\end{itemize}
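The point about certainty can be made concrete with a tiny example: two sets of predictions with identical accuracy but different confidence yield very different cross-entropies (the numbers below are illustrative, not taken from our experiments):

```python
import numpy as np

def binary_cross_entropy(y_true, y_pred):
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

y_true    = np.array([1.0, 1.0, 0.0, 0.0])
confident = np.array([0.99, 0.98, 0.02, 0.01])  # outputs close to 0 or 1
hesitant  = np.array([0.70, 0.65, 0.35, 0.30])  # correct, but less certain

# Both prediction sets have accuracy 1.0 at threshold 0.5,
# yet the confident one incurs a much smaller loss.
assert binary_cross_entropy(y_true, confident) < binary_cross_entropy(y_true, hesitant)
```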
\subsubsection*{Interpretation}
Some sense can be made out of this result when addressing the problem from a naive analytical viewpoint. In the original representation,
checking whether flipping a particular spin $s_{i}$ increases the total energy of the system requires taking into account
$n$ interaction terms containing $s_{i}$, some of whose contributions might cancel each other. On the other hand,
these $n$ interaction terms are represented by a cluster of $n$ spins $\sigma_{j}, \, j=i-n+1,\dots , i$ in the dual frame,
and flipping $s_{i}$ causes all of those $n$ dual spins to change sign. Since the total energy of the system can
be computed by simply adding up the first $N-n+1$ dual spins of the complete system,
an overall increase in energy then occurs precisely iff more than half of the flipped dual spins take the value $1$ (not counting those spins
$\sigma_{j}$ with $j> N-n+1$).
In other words, the transformation~\eqref{1DIsing_DualityTransformation} maps the single-flip dynamics of the original
system to $n$-spin-cluster dynamics in the system governed by the Hamiltonian~\eqref{Ising1D_DualHamiltonian}, thus creating
a ``dual task" which is considerably easier to learn for neural networks. An illustrative example for the case
$N=10$ and $n=3$ is given in Figure~\ref{fig:1DIsing_Dualities_n3_Metastability}.
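This dual-frame picture can be checked numerically: flipping $s_i$ changes the energy by $2J$ times the sum of the affected dual spins that enter the Hamiltonian. A self-contained sketch of our own verification (0-indexed, $J=1$):

```python
import numpy as np

def energy(s, n):
    """Interacting Hamiltonian (J = 1, B = 0) on a free-boundary chain."""
    N = len(s)
    return -sum(s[k:k + n].prod() for k in range(N - n + 1))

rng = np.random.default_rng(2)
N, n = 18, 3
s = rng.choice([-1, 1], size=N)

# Dual spins: sigma_k = prod of n consecutive spins, with ghost spins +1.
padded = np.concatenate([s, np.ones(n - 1, dtype=int)])
sigma = np.array([padded[k:k + n].prod() for k in range(N)])

for i in range(N):                              # flip spin s_i (0-indexed)
    t = s.copy()
    t[i] = -t[i]
    # Dual spins sigma_j with j = i-n+1, ..., i flip; only those with
    # 0 <= j <= N-n enter the energy sum.
    lo, hi = max(0, i - n + 1), min(i, N - n)
    delta_dual = 2 * sigma[lo:hi + 1].sum()
    assert energy(t, n) - energy(s, n) == delta_dual
```

The energy thus increases exactly when more of the counted flipped dual spins are $+1$ than $-1$, as stated above.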
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.9\textwidth]{1DIsing_Dualities_n3_Metastability_Extended_Full.pdf}
\end{center}
\vspace{-0.8cm}
\caption{
Single-spin flip dynamics and metastability in the normal and dual representation.
{\bf Top:}~Flipping a single spin $s_i$ in the normal representation (left) causes $n$ dual spins $\sigma _j$ with $j=i-n+1,\dots, i$
to change sign (right), as indicated by red color for the case $n=3$. The overall energy increases iff more than half of the involved
dual spins have positive sign (counting only values $j$ with $1\leq j \leq N-n+1$). {\bf Bottom:} Example of a metastable state for $n=3$ in the normal (left) and dual (right) representation. Flipping any of the spins in the original representation causes the overall energy
of the state to increase.}
\label{fig:1DIsing_Dualities_n3_Metastability}
\end{figure}
\subsubsection*{Discussion and Limitations}
There are several important aspects as well as limitations of the considered experimental settings, which shall be briefly commented on in this subsection.
\begin{itemize}
{\item{\bf Modifications to Setup}
A first important point to note is that the discussed setting describes a very low number of spins and is therefore to be understood as a toy model. While a large-scale simulation of realistic systems is beyond the scope of this work, it is worth mentioning that we found
a more drastic difference in performance as more complex settings such as $N=100$ and $n=50$ were considered.
This commonly led to pure guessing on the original data, whereas accuracies higher than
0.95 could be reached with as few as 1500 training samples in the dual representation. The benefits of dualities might thus extend beyond simple
toy settings; however, further testing is required to confirm this.}
{\item{\bf Sensitivity to Architecture}
While the above tests were performed for a rather large number of different systems and training set sizes, defining a clear benchmark naturally required the use of a fixed model to test performance. In light of this, a natural question is to which degree the improvement is owed to the choice of architecture, and whether the results remain valid if a wider class of architectures is considered. We therefore checked the effect of various modifications on our results, as described in more detail in Appendix \ref{app:1DIsing}.
We found that, except for strong results of convolutional neural networks on very simple systems with $n\leq 4$,
none of these modifications led to a significant change in the overall results. It cannot be excluded that a similar improvement in performance
can alternatively be obtained by more sophisticated network architectures. However, our tests clearly demonstrate that the benefits of the dual representation are not isolated to our experimental setting, but extend to a wider class of architectures.}
{\item{\bf Avoiding Shortcutting Predictors}
Since the dual representation by definition describes the spin system in terms of its local energy contributions,
there is one particular pitfall here which has to be treated with caution: (Meta-)stable states commonly accumulate at low energies, and relatively high accuracies in our classification task can be obtained by simply choosing a fixed energy cutoff to label states
as ``(meta-)stable" (see Table~\ref{table:val_acc_1D_energetic} and Figure~\ref{fig:Metastability_Energy_schematic2}). In such situations, a neural network can be prone to adopting shallow heuristics which perform well in many cases (in this case the total energy) instead of learning the actual task it is supposed to solve.
We found, however, that
the networks trained for the settings listed in Table~\ref{table:val_acc_1DSimpleNets_original} do
not rely purely on the lower energy of metastable states, and the difference in performance remains the same if tested
in low-energy regions where the ratio of each class is roughly the same. Going one step further, the element of energy can be eliminated completely
by additionally training only on states with fixed energies. While this drastically tightens the performance gap in simple settings,
a similar difference as before remains at more complex settings like $N=100$ and $n=50$. \label{Discussion:Mestability-Energy}}
\end{itemize}
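The cutoff baseline of Table~\ref{table:val_acc_1D_energetic} corresponds to a one-parameter classifier. A sketch of such a baseline on synthetic energies (illustrative numbers, not the paper's data):

```python
import numpy as np

def best_cutoff_accuracy(energies, labels):
    """Accuracy of the best classifier of the form: predict 1 iff E <= cutoff."""
    best = max(np.mean(labels), 1 - np.mean(labels))   # trivial constant baselines
    for cutoff in np.sort(energies):
        pred = (energies <= cutoff).astype(int)
        best = max(best, np.mean(pred == labels))
    return best

# Synthetic example: "metastable" states accumulate at low energies,
# so a single cutoff already separates the classes well.
rng = np.random.default_rng(3)
e_meta    = rng.normal(-5.0, 1.0, 200)   # low-energy class (label 1)
e_generic = rng.normal(0.0, 1.0, 200)    # generic states (label 0)
E = np.concatenate([e_meta, e_generic])
Y = np.concatenate([np.ones(200, dtype=int), np.zeros(200, dtype=int)])
acc = best_cutoff_accuracy(E, Y)
```

For well-separated energy distributions this shallow heuristic alone already reaches high accuracy, which is exactly the shortcut the networks must be shown not to rely on.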
\begin{table}[b]
\centering
\begin{equation*}
\begin{split}
\renewcommand{\arraystretch}{1.1}
\arraycolsep12pt
\begin{array}{c||cccccc}
& n=4 & n=5 & n=8 & n=9 & n=12 & \\
\hline\hline
\textrm{Energy cutoff} & 0.9925 & 0.9605 & 0.9535 & 0.9269& 0.8985
\end{array}
\end{split}
\end{equation*}
\vspace{-0.8cm}\caption{Classification accuracy for (meta-)stable states using only a fixed energy cutoff (cf. Figure~\ref{fig:Metastability_Energy_schematic2}).}
\label{table:val_acc_1D_energetic}
\end{table}
\begin{SCfigure}
\resizebox{0.55\textwidth}{!}{%
\includegraphics[width=300pt]{Metastability_Energy_N100n50_2.pdf}
}
\caption{Energy distributions of normal and (meta-)stable states for $N=100$ and $n=50$ (chosen for illustrative reasons). Relatively
high accuracies can be obtained by choosing a fixed energy cutoff for classification (dashed line).}
\label{fig:Metastability_Energy_schematic2}
\end{SCfigure}
\section{Enforcing Good Representations}
\label{sec:enforcing}
Having established that dual representations can be `beneficial' for classification tasks, we now turn to the question of how such representations can be found by the network dynamically. When a duality map is known explicitly, it can easily be learned by a neural network via standard regression. Although this can be of interest in principle, we here focus on unsupervised learning techniques for finding dual representations.
To do this, we discuss three different training strategies which we find to lead to `dual-like' representations:
\begin{enumerate}
\item Feature separation in the latent space.
\item An autoencoder setup with an additional latent loss. In this case, the output of the encoder is the dual-like representation.
\item Demanding properties of the dual representation, for instance that it resembles the correct energy distribution.
\end{enumerate}
\subsection{Feature Separation}\label{FeatureSeparation}
For the discrete Fourier transform described in Section~\ref{DiscreteFourierTransformation} and Appendix~\ref{app:fourier}, momentum space defines a valuable data representation in which the previously infeasible task of detecting
signals in noisy data becomes easy to solve. Based on our finding that deeper networks do not adapt this representation (cf.~Appendix~\ref{app:fourier}), we now pursue the question of how one can assist the neural network in finding such a beneficial representation without knowledge of its explicit form.
\subsubsection*{Basic Idea and Motivation}
Heuristically, the benefit likely comes from the information of a non-localised signal in the space domain being collected in one single (complex) bin of the momentum space domain. This causes the signal in the momentum space domain to be clearly separated from the background noise, which takes the same non-local form in both frames (cf. again Figure~\ref{fig:Fourier_example}).
\begin{figure}[t]
\centering
\vspace*{20pt}
\centering
\resizebox{0.95\textwidth}{!}{%
\begin{minipage}[b]{.95\linewidth}
\includegraphics[width=\linewidth]{Fourier-NoSignal_LearnedRep_Noise0075.pdf}
\end{minipage}
\hspace{.04\linewidth}
\begin{minipage}[b]{.95\linewidth}
\includegraphics[width=\linewidth]{Fourier-Signal_LearnedRep_Noise0075.pdf}
\end{minipage}
}
\vspace{-0.4cm}
\caption{
Output of the feature-separation network for pure noise and noisy signal.
\label{fig:LearnedRep_Output}
}
\end{figure}
{\it Can this ``feature separation" be exploited to automatically learn such favourable representations without analytic knowledge about the structure of the signal?} Assuming for the moment that there exists only one non-vanishing frequency, we would like to train a neural network to find a representation in which the outputs for pure signals and pure noise satisfy
\begin{equation}
|y_{\rm signal}|^2-|y_{\rm noise}|^2\geq\alpha\,.
\label{eq:loss1}
\end{equation}
Here, $\alpha >0$ denotes a margin to which we want to push the latent representation. Formulated as a loss function, this expression takes the value $0$ whenever the difference exceeds $\alpha,$ which avoids a runaway of the signal (vanishing gradients).
Notice that this task resembles the minimisation of a triplet loss~\cite{Chechik2009LargeSO,2015arXiv150303832S}, with the location of the noise fixed at zero. To apply this strategy to a setup with $N=1000$ different frequencies, two aspects have to be taken into account:
\begin{enumerate}
\item{The relation \eqref{eq:loss1} should be satisfied for any frequency. }
\item{The information of different frequencies should be collected at different locations. Otherwise, the mapping might not be able to distinguish between clear signals and ``noisy'' inputs with small components in many different frequencies (as is the case for the background noise in our setting).}
\end{enumerate}
A viable ansatz to achieve this is by defining a loss function
\begin{equation}
\label{eq:loss2}
\mathcal{L} = \textrm{max}(0,\alpha - (\xi _1 ^2 + \xi _2 ^2 ))\,,
\end{equation}
where $\xi _1 ^2$ and $\xi _2 ^2$ are defined as the two largest squared values of the $2N$ outputs for a given input sample. When using pure single-frequency signals as training data, this loss effectively pushes the sum of squares of the two output components with largest absolute value away from zero until the margin $\alpha$ is reached. The aim is to enforce a data representation similar to the actual Fourier transform, in which all information of a single-frequency signal is concentrated in the real and imaginary parts of one coefficient $p_k$.
At the same time, we keep the complexity of the network as low as possible (in this case linear). This is necessary because the loss~\eqref{eq:loss2} alone does not prevent the occurrence of representations in which an arbitrarily large number of bins is maximised for any frequency. As a consequence, enforcement of sparse and local representations of signals would not take place. In practice, such cases of ``overfitting" are possible for any network architecture; however, we observe that they commonly occur at higher degrees of complexity, whereas the constrained parameter space of low-capacity networks seems to act as an efficient preventive measure. Somewhat remarkably, this heuristic approach clearly outperformed more elaborate methods such as forcing sparse outputs via an L1 penalty or penalising correlations between latent variables.
Note that the network has no further knowledge on the structure of Fourier transformation or the structure of background noise.
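A minimal numerical sketch of the margin loss~\eqref{eq:loss2} in NumPy (function and variable names are illustrative, not taken from the actual implementation):

```python
import numpy as np

def feature_separation_loss(outputs, alpha=5.0):
    """Margin loss of the feature-separation setup: for each sample, push
    the sum of the two largest squared output components beyond alpha."""
    sq = np.asarray(outputs) ** 2            # squared outputs, shape (batch, 2N)
    top2 = np.sort(sq, axis=-1)[:, -2:]      # two largest squared values per sample
    return float(np.maximum(0.0, alpha - top2.sum(axis=-1)).mean())
```

A sample whose two dominant components already exceed the margin contributes zero loss, which realises the clipping at $\alpha$ described above.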
\subsubsection*{Performance and Structure of Representation}
\begin{figure}[t]
\centering
\vspace*{10pt}
\centering
\resizebox{0.99\textwidth}{!}{%
\begin{minipage}[b]{.95\linewidth}
\includegraphics[width=\linewidth]{Fourier_FeatureSep_B1000_Noise0075_Proj1.pdf}
\end{minipage}
\hspace{.04\linewidth}
\begin{minipage}[b]{.95\linewidth}
\includegraphics[width=\linewidth]{Fourier_FeatureSep_B1000_Noise0075_Fourier1.pdf}
\end{minipage}
\hspace*{35pt}
\begin{minipage}[b]{.95\linewidth}
\includegraphics[width=\linewidth]{Fourier_FeatureSep_B1000_Noise0075_Proj2.pdf}
\end{minipage}
\hspace{.04\linewidth}
\begin{minipage}[b]{.95\linewidth}
\includegraphics[width=\linewidth]{Fourier_FeatureSep_B1000_Noise0075_Fourier2.pdf}
\end{minipage}
}
\caption{
Comparison of representations learned via feature separation and
embedding into true momentum space domain. The above plots show examples of learned
representations and Fourier transforms of single-frequency signals
at different frequencies without noise. Signals with non-vanishing component in
the respective frequency arrange in similar shapes, while the rest accumulates
close to or at the origin.}
\label{fig:Comparison_Learned-Fourier}
\end{figure}
Training a linear network with $2N$ output nodes using the Nesterov Adam optimiser, learning rate $1\cdot 10^{-3}$ and $\alpha = 5$ commonly led to close-to-zero losses after less than five epochs. As can be seen in Figure~\ref{fig:LearnedRep_Output}, the learned representation shows characteristic properties of the actual Fourier transform, even though we trained only on noisy signals as input.
Using this representation for our previous task of signal detection in noisy data, the mean best test accuracy of the same simple one-layer convolutional neural network
as described in Section~\ref{DiscreteFourierTransformation} (cf.~also Appendix~\ref{app:fourier} for more details) indeed improved to 0.7717.
Interestingly, the
learned data representations often take the form of transformations such as rescalings, reflections or rotations of the actual
Fourier transform in the $2N$-dimensional space. Projecting the output of the network for a large number of samples onto
particular pairs of components, the distribution
of values then corresponds to that of the real and imaginary parts of a certain value $p_k$ in the Fourier domain.
This is exemplified for two instances in Figure~\ref{fig:Comparison_Learned-Fourier}.
\subsubsection*{Response to Single-Frequency Signals}
Some more insight into the structure of the feature-separation network can be gained by analysing its outputs $f_j(x)$. Here,
we consider the $2N$ response values $f_j(x)|_{p_{i}\neq 0}$ obtained for pure signals with a single non-vanishing frequency
$p_i$. These can be stored in an $N\times 2N$ response matrix
\begin{equation}\label{Fourier-FeatureSep-ActivationMatrix}
M_{ij} = \left.\left\langle \left| f_{j}(x) \right|^{2}\right\rangle\right|_{p_{i}\neq 0} \,,
\end{equation}
where the mean is taken over all samples satisfying the condition $p_{i}\neq 0$. The matrix generally shows a high
degree of sparsity, and we find that a fraction of more than 0.8 of all
rows contains at least one large value, implying that the network makes efficient use of the $2N$ dimensions to embed
the signals into the latent space. An example plot of the matrix $M$ for the case $N=100$ can be found in Figure~\ref{fig:FeatureSep_Activation}. It can be observed that each row of the matrix commonly contains between 2 and 4 large
activations, with the remaining entries being close to zero. Visualising the corresponding latent dimensions, one finds that this
behaviour reflects precisely the way in which the Fourier-transform is embedded into the latent space. This is exemplified for various
cases in Figure~\ref{fig:FeatureSep_Activation2}.
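The response matrix~\eqref{Fourier-FeatureSep-ActivationMatrix} can be estimated numerically along the following lines (a NumPy sketch; `f` stands for the trained map and the signal sampling is illustrative):

```python
import numpy as np

def response_matrix(f, N, samples_per_freq=50, seed=0):
    """Estimate M_ij = <|f_j(x)|^2> over pure signals in which only the
    i-th frequency is non-vanishing, for any map f: R^N -> R^{2N}."""
    rng = np.random.default_rng(seed)
    t = np.arange(N)
    M = np.zeros((N, 2 * N))
    for i in range(N):
        for _ in range(samples_per_freq):
            amp = rng.uniform(0.5, 1.5)
            phase = rng.uniform(0.0, 2.0 * np.pi)
            x = amp * np.cos(2.0 * np.pi * i * t / N + phase)  # single-frequency signal
            M[i] += f(x) ** 2
    return M / samples_per_freq
```

Applying this estimator to the exact discrete Fourier transform reproduces the expected sparsity pattern: each row is supported only on the bins carrying the real and imaginary parts of the corresponding (conjugate pair of) frequencies.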
\begin{figure}[t]
\centering
\vspace*{20pt}
\centering
\resizebox{0.85\textwidth}{!}{%
\begin{minipage}[b]{.95\linewidth}
\includegraphics[width=\linewidth]{Fourier_FeatureSep_B100_Activation_reordered_Noise01.pdf}
\end{minipage}
}
\vspace*{-7pt}
\caption{
Example plot of an activation matrix \eqref{Fourier-FeatureSep-ActivationMatrix} for the
case $N=100$. The columns have been reordered according to the indices of their respective
largest entries. The number of non-vanishing values in a given row matches with the dimension of the subspace
of the $2N$-dimensional latent space into which the representation of signals with a corresponding
non-vanishing frequency $p_i$ is nontrivially embedded (cf. Figure~\ref{fig:FeatureSep_Activation2}).
\label{fig:FeatureSep_Activation}
}
\end{figure}
\begin{figure}[h]
\centering
\centering
\resizebox{0.95\textwidth}{!}{%
\begin{minipage}[b]{1.00\linewidth}
\includegraphics[width=\linewidth]{Fourier_Embedding_2Dims_Noise01_42.pdf}
\end{minipage}
\begin{minipage}[b]{1.00\linewidth}
\includegraphics[width=\linewidth]{Fourier_Embedding_2Dims_Noise01_66.pdf}
\end{minipage}
}
\vspace{6pt}
\centering
\resizebox{0.83\textwidth}{!}{%
\hspace*{10pt}
\begin{minipage}[b]{1.00\linewidth}
\includegraphics[width=\linewidth]{Fourier_Embedding_2Dims_Noise01_42-Activation.pdf}
\end{minipage}
\hspace*{250pt}
\begin{minipage}[b]{1.00\linewidth}
\includegraphics[width=\linewidth]{Fourier_Embedding_2Dims_Noise01_66-Activation.pdf}
\end{minipage}
}
\vspace*{15pt}
\centering
\resizebox{0.95\textwidth}{!}{%
\begin{minipage}[b]{1.00\linewidth}
\includegraphics[width=\linewidth]{Fourier_Embedding_3Dims_Noise01_63.pdf}
\end{minipage}
\begin{minipage}[b]{1.00\linewidth}
\includegraphics[width=\linewidth]{Fourier_Embedding_3Dims_Noise01_56.pdf}
\end{minipage}
}
\vspace{6pt}
\centering
\resizebox{0.83\textwidth}{!}{%
\hspace*{10pt}
\begin{minipage}[b]{1.00\linewidth}
\includegraphics[width=\linewidth]{Fourier_Embedding_3Dims_Noise01_63-Activation.pdf}
\end{minipage}
\hspace*{250pt}
\begin{minipage}[b]{1.00\linewidth}
\includegraphics[width=\linewidth]{Fourier_Embedding_3Dims_Noise01_56-Activation.pdf}
\end{minipage}
}
\vspace{-10pt}
\caption{
Interpretation of the activation matrix illustrated in Figure~\ref{fig:FeatureSep_Activation}. The plotted
latent dimensions correspond to the three largest entries of a given row. {\bf (Top)} Two non-vanishing entries in one row.
The Fourier transform is completely embedded into two latent dimensions. {\bf (Bottom)} Three non-vanishing entries in one row. The Fourier transform is nontrivially embedded
into three latent dimensions.
\label{fig:FeatureSep_Activation2}
}
\end{figure}
\subsection{Autoencoder with Latent Loss}
\label{sec:CAE}
We now turn to the second example of dynamically adapting an appropriate latent dimension, which is based on the 1D Ising setup already described in Section~\ref{sec:1DIsing}.
\subsubsection*{Motivation and Architecture}
We have seen that by exchanging the roles of individual spins and their interaction
terms, the task of detecting (meta-)stable states becomes more accessible due to the relevant information being easier to extract from a lower number of spins in the dual frame.
To find such a suitable representation, we here employ the following strategy: We use the fact that a simple task can be performed very efficiently in the dual representation. In this case this is the (trivial) task of energy classification.
By itself, this is not sufficient and we need to ensure that no information is lost in the latent representation.
A viable method to achieve this goal is to use an autoencoder-like architecture whose `bottleneck' has (at least) the same
dimension as the original input and is required to represent the data in a way that the total energy can be
extracted by a simple linear model. This way, the model is guaranteed to learn a representation which encodes
the energetic properties of a state in a manner similar to the dual frame (cf. Equation~\eqref{1DIsing_DualityTransformation}),
while at the same time the presence of an additional reconstruction loss forces the mapping to be information conserving.
In practice, this can be implemented by training a neural network to map an input state $s_1,\ldots,s_N$ to an intermediate
output of (at least) the same dimension, which in turn serves as input for a linear model extracting the total energy of
the input state and another network reconstructing the initial input configuration. Figure~\ref{fig:TaskConstrainedAE} illustrates this architecture
schematically.
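A minimal NumPy sketch of the forward pass and combined loss of such a task-constrained autoencoder (the weights, layer sizes, and function names below are illustrative placeholders, not the trained configuration):

```python
import numpy as np

rng = np.random.default_rng(0)
N, latent = 10, 18      # illustrative sizes

# Randomly initialised weights: encoder, linear energy head, decoder.
W_enc = rng.normal(scale=0.3, size=(N, latent))
w_energy = rng.normal(scale=0.3, size=(latent,))
W_dec = rng.normal(scale=0.3, size=(latent, N))

def forward(s):
    z = np.tanh(s @ W_enc)       # intermediate ("dual") representation
    energy = z @ w_energy        # linear model extracting the total energy
    recon = np.tanh(z @ W_dec)   # reconstruction of the input configuration
    return z, energy, recon

def combined_loss(s, true_energy, lam=1.0):
    """Energy loss plus reconstruction loss of the constrained autoencoder."""
    _, e, r = forward(s)
    return float(np.mean((e - true_energy) ** 2) + lam * np.mean((r - s) ** 2))
```

The reconstruction term keeps the mapping information-conserving, while the linear energy head forces the intermediate output to encode the energetics in a simply extractable form.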
\subsubsection*{Results and Discussion}
We tested the performance in classifying (meta-)stable states using the same setting as before,
with the duality transformation~\eqref{1DIsing_DualityTransformation} replaced by the intermediate output
of a constrained autoencoder with latent dimension 18 and 50. Details on the experimental conditions are provided in Appendix \ref{app:1DIsing}; results are shown in Table~\ref{table:val_acc_1DSimpleNets_learned}.
One again observes a significant improvement compared to the original representation (cf. Table~\ref{table:val_acc_1DSimpleNets_original}, left), albeit not as drastic as in the actual dual representation. Autoencoders with latent dimension 18 often suffered from underfitting problems, and further benefits were possible when increasing the latent dimension to 50. Networks trained on the learned representation mostly outperformed the accuracies reachable by pure energy cutoffs, in particular at latent dimension 50, but showed a slight tendency to misclassify samples located in energy regions dominated by the respective other class. While part of the improvement might therefore be attributed to the correlation between overall energy and (meta-)stability, the learned representation still allows the classification task to be solved significantly better than by
training on the original representation directly, and the networks do not resort entirely to superficial energetic arguments.
\subsubsection*{Further Applications}
Let us conclude this discussion by stressing that the main purpose of the above architecture is to realise transfer learning between different
physically related problems. This can be beneficial when training data is limited or expensive to
generate for one task but can be efficiently acquired for a simpler task. In such cases, it might not be a reduction of required overall training data, but rather a change in the type of data that eventually leads to an improvement in overall performance.
In our considered setting,
we indeed found that benefits in performance are only possible when the constrained autoencoder is trained
on relatively large datasets. While this obviously nullifies the improvement in overall data efficiency offered by analytical dualities,
it can simplify the training process, since large datasets
of metastable states (which might not even exist for some settings) can be replaced by corresponding pairs of random states and their energies.
Generally, finding such physically related tasks commonly requires domain knowledge or heuristic arguments,
but it nevertheless opens up a wide range of new possibilities going beyond known analytical dualities.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.75\textwidth]{TaskConstrainedAE_3_Rotated5.pdf}
\end{center}
\caption{Schematic illustration of a task-constrained autoencoder used to learn suitable representations
for difficult tasks. The intermediate output takes the role of the ``dual" representation.}
\label{fig:TaskConstrainedAE}
\end{figure}
\hspace*{5pt}
\subsubsection*{Interpretation of Intermediate Output}
Before we delve into the interpretation of the intermediate output, it is important to remark that we did not impose any further constraints regarding the structure
of the intermediate output as performance commonly suffered from reduced network capacity in such cases. As a consequence, the intermediate output has no obvious physical interpretation and relations to the true dual representation are a priori not obvious.
\begin{table}[t]
\begin{footnotesize}
\begin{center}
\begin{tabular}{c||ccccc}
lat (18) & n=4 & n=5 & n=8 & n=9 & n=12 \\
\hline\hline
$6\cdot 10^2$ & 0.9880 & 0.9540 & 0.9180 & 0.9072 & 0.9228
\\$
3\cdot 10^3$ & - & 0.9677 & 0.9527 & 0.9353 & 0.9476
\\
$ 9.5\cdot 10^3$ & - & - & 0.9607 & 0.9500 & 0.9597
\end{tabular}\qquad
\begin{tabular}{c||cccccc}
lat (50) & n=4 & n=5 & n=8 & n=9 & n=12 \\
\hline\hline
$ 6\cdot 10^2$ & 0.9887 & 0.9526 & 0.9300 & 0.9304 & 0.9500
\\
$3\cdot 10^3 $& - & 0.9718 & 0.9787 & 0.9637 & 0.9829
\\
$9.5\cdot 10^3$ & - & - & 0.9910 & 0.9885 & 0.9968
\end{tabular}
\end{center}
\end{footnotesize}
\caption{Detection of (meta-)stable states in the 1D Ising chain for different interactions and amounts of training data.
The listed numbers describe the average best test accuracy over 10 training runs of 500 epochs each when trained
on the intermediate output of a constrained autoencoder with latent dimension 18 {\bf(Left)} and 50 {\bf(Right)}.
Missing values indicate that the number of required samples exceeds the total number of metastable states for the
considered setting.}
\label{table:val_acc_1DSimpleNets_learned}
\end{table}
An interesting question in this context is whether there is some way to make sense of how the relevant information is encoded in our learned representation. A viable way to study dependencies between the input and latent variables is to analyse
the sensitivity of the latent variables with respect to flips of a particular spin $s_j$ while keeping all other spins fixed.
This information can be stored in the matrix
\begin{equation}
\label{Ising1D_Sensitivity}
M_{ij} = \frac{\langle\left( f_{i}( s_{1}, \dots ,s_{j} ,\dots s_{N}) - f_{i}( s_{1}, \dots ,-s_{j} ,\dots s_{N})\right)^{2}\rangle}
{\frac{1}{N}\sum _{k=1}^{N}\langle\left( f_{i}( s_{1}, \dots ,s_{k} ,\dots s_{N}) - f_{i}( s_{1}, \dots ,-s_{k} ,\dots s_{N})\right)^{2}\rangle}\,,
\end{equation}
where the expectation values are to be computed for the complete (test) dataset. Heuristically, this matrix encodes the average sensitivity
of the components $f_{i}$ of the transformed representation with respect to flips of a particular spin $s_j$, normalised by
the average sensitivity of $f_{i}$ to flips of any spin. For the actual duality transformation~\eqref{1DIsing_DualityTransformation},
the numerator takes precisely the values 0 or 4, leading to a staircase-like structure as depicted on the left hand side in Figure~\ref{fig:Ising1DN10n2_Sensitivity}.
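The sensitivity matrix~\eqref{Ising1D_Sensitivity} can be computed for any map along the following lines (a NumPy sketch with illustrative names; `f` acts on batches of spin states):

```python
import numpy as np

def sensitivity_matrix(f, states):
    """Spin-flip sensitivity matrix M_ij for a map f acting on batches
    of spin states (shape (batch, N)), normalised by the row mean."""
    base = f(states)
    num = []
    for j in range(states.shape[1]):
        flipped = states.copy()
        flipped[:, j] *= -1                       # flip spin j, keep the rest
        num.append(np.mean((base - f(flipped)) ** 2, axis=0))
    M = np.stack(num, axis=1)                     # rows: components i, columns: spins j
    return M / (M.mean(axis=1, keepdims=True) + 1e-12)
```

For the exact duality transformation this reproduces the staircase structure: each nearest-neighbour component is sensitive to exactly two adjacent spins, and the sign-storing component to a single one.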
We trained 25 constrained autoencoders for the simple setting $N=10$ and $n=2$ and compared the transformation
behaviour of the learned variables to that of the true duality transformation \eqref{1DIsing_DualityTransformation}.
Interestingly, there exist many instances of networks with structurally similar dependencies as the proper duality transformation. These commonly include
components $f_{i}$ depending strongly on neighbouring pairs of spins and a distinguished value $f_{N}$ which is highly sensitive
to one particular spin; the matrix $M_{ij}$ for one such
example is presented on the right hand side in Figure~\ref{fig:Ising1DN10n2_Sensitivity}.
Notice that this basically represents the way the duality transformations \eqref{1DIsing_DualityTransformation}
encode the information of the original system in that there exist $N-1$ terms $\sigma _{i}, i=1,\dots, N-1$ describing the
nearest-neighbour interactions and one value $\sigma _{N}$ which does not interact with the external field and stores the
overall sign of the system.
\subsection{Distributional properties}
The next question we analysed is to which degree neural networks are capable of learning the relation between dual Ising models on the square lattice. A minimal requirement for this is that the duality map between the two systems
can be learned if samples from both data representations are provided explicitly.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.7\textwidth]{Ising1DN10n2_Sensitivity.pdf}
\end{center}
\vspace*{-15pt}
\caption{Plots of the sensitivity matrix~\eqref{Ising1D_Sensitivity} for the actual duality transformation (left) and a learned
representation of a constrained autoencoder (right) for $N=10$ and $n=2$. Both matrices show characteristic nearest neighbour
interactions; the latter contains additional nonlocal components.}
\label{fig:Ising1DN10n2_Sensitivity}
\end{figure}
Here we do not start from a one-to-one mapping between states of a system at temperature $T$ and
those of a system at dual temperature $\widetilde{T}.$ Instead, we match features of the dual representation on the level of the probability
distributions, i.e.~we require that the learned representation shares features with the target dual distribution. For this purpose, we consider the following architecture: States $s$ sampled at temperature
$T$ are used as input for a deep convolutional network
and mapped onto a lattice of the same shape whose entries are interpreted as probabilities of the
respective spins to take the value 1.
Binary states are then sampled by utilising the Gumbel trick
to preserve differentiability of the network. In the discussed setting, this can be realised by sampling for each
lattice site $i$ a value $\varepsilon _i \sim U(0,1)$ uniformly and mapping the input state $s$ to an output
state $f(s)$ with
\begin{equation}\label{GumbelTrick}
f_{i}(s)= 2\cdot\textrm{sig}\left[\gamma\left(\log (\varepsilon _i) - \log (1-\varepsilon _i)
+ \log (p_i) - \log (1-p_i)\right)\right]-1\,,
\end{equation}
where $\textrm{sig}$ denotes the sigmoid function $\textrm{sig}(x)=\frac{1}{1+e^{-x}}$ and $\gamma$
is a scale parameter which can be used to force the output values closer to the extremal values $\pm 1$.\footnote{Some caution is needed when choosing $\gamma$, as high values can lead to vanishing or exploding gradients,
resulting in poor training.}
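The relaxed sampling step~\eqref{GumbelTrick} can be sketched in NumPy as follows (function names are illustrative; the sigmoid matches the definition above):

```python
import numpy as np

def sig(x):
    return 1.0 / (1.0 + np.exp(-x))

def gumbel_sample(p, gamma=50.0, seed=0):
    """Relaxed binary sampling as in the Gumbel trick: maps site-wise
    probabilities p in (0, 1) to near-binary spin values in (-1, 1)."""
    rng = np.random.default_rng(seed)
    eps = rng.uniform(1e-6, 1.0 - 1e-6, size=np.shape(p))
    logits = np.log(eps) - np.log(1.0 - eps) + np.log(p) - np.log(1.0 - p)
    return 2.0 * sig(gamma * logits) - 1.0
```

For large $\gamma$ the outputs saturate towards $\pm 1$ while the expression remains differentiable in $p$, which is what allows gradients to flow through the sampling step.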
The output states $f(s)$ are then fed into a hard-coded layer to compute their total energy, and the loss
function is defined as the Kullback-Leibler divergence
\begin{equation}
D_{\textrm{KL}}(P_{f}\lVert P_{\sigma})=-\sum _{E} P_{f}(E) \log \left(\frac{P_{\sigma}(E)}{P_{f}(E)} \right)
\end{equation}
between the energy distributions $P_{f}(E) $ and $P_{\sigma}(E)$ of
states sampled from the network and the true dual temperature, respectively.
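On binned energy histograms, the Kullback-Leibler loss above amounts to the following computation (a NumPy sketch; histograms are normalised to probability vectors first, and the small `eps` is a numerical regulator):

```python
import numpy as np

def kl_divergence(p_f, p_sigma, eps=1e-12):
    """D_KL(P_f || P_sigma) between two binned energy distributions."""
    p = np.array(p_f, dtype=float)
    q = np.array(p_sigma, dtype=float)
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))
```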
\begin{figure}[t!]
\begin{center}
\includegraphics[width=0.85\textwidth]{U-Net_Schematic_formatted5.pdf}
\end{center}
\vspace*{-10pt}
\caption{Schematic illustration of a U-Net architecture. }
\label{fig:U-Net_Schematic}
\end{figure}
\subsubsection*{Results}
We used a U-Net architecture as depicted in Figure~\ref{fig:U-Net_Schematic} with three levels consisting of two layers
of 15, 30, and 60 $2\times 2$ filters, respectively, with ReLU activations. The scale parameter in \eqref{GumbelTrick} was set to 50.
Tests were conducted for a $40\times 40$ lattice at temperatures $T=0.25, 0.5, \dots , 2.25$ using the standard Nesterov Adam
optimiser with initial learning rate 0.002 and learning rate decay. The dataset for each temperature was again split into 16000 training samples and 4000 test samples.
Training equilibrium was commonly reached within 50 epochs; no significant changes were noticed after 500 epochs. Tests were again performed for
10 random seeds per temperature and showed consistent overall performance; however, there were rare instances in which poor local minima
required reinitialisation of the network, in particular when mapping to lower temperatures.
The network produces
binary outputs as desired, with the energy distributions closely resembling those of the actual dual system.
This is depicted for two examples in Figure~\ref{fig:KWDuality_Unet_40x40}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.4\textwidth]{Ising2D_Unet40x40_T28_probabilities.pdf}
\includegraphics[width=0.4\textwidth]{Ising2D_Unet40x40_T35_probabilities.pdf}
\includegraphics[width=0.4\textwidth]{Ising2D_Unet40x40_T28inverse_prob.pdf}
\includegraphics[width=0.4\textwidth]{Ising2D_Unet40x40_T35inverse_prob.pdf}
\end{center}
\vspace*{-17pt}
\caption{Energy distributions of U-Net outputs and true dual temperatures. {\bf Top:} Mapping from low- to high-temperature regions. {\bf Bottom:} Mapping from high- to low-temperature regions.}
\label{fig:KWDuality_Unet_40x40}
\end{figure}
We next checked the output of U-Nets trained on a single temperature for input states sampled from other temperatures. For networks trained at higher original
temperatures, the output energy distribution shows some resemblance to that at the true dual temperature, albeit with wrong numerical values. This behaviour is shown in Figure~\ref{fig:Unet_CrossCheck} for temperature $T=1.80.$
For lower training set temperatures,
the networks gradually lose their ability to distinguish between input states.
When we trained the network with data from multiple temperatures, we did not (yet) find a significant improvement compared to Figure~\ref{fig:Unet_CrossCheck}.
Generally speaking, one can think of extending this method by incorporating further properties, i.e.~matching additional correlators. This would lead to an increasingly precise map satisfying more and more properties of the respective dynamical system.
\section{Connection to Other Dualities in Physics}
\label{sec:connections}
We have seen in previous sections that dualities amount to a change of the basis which describes the system. Although we have already used this in the case of physical systems, such as the 2D Ising model (cf.~Section~\ref{sec:2DIsing}), we would like to highlight how such a change of basis appears analytically in physical systems and how it is connected to the Fourier transformation. To do this, we repeat the key steps of arguments presented for instance in~\cite{Polchinski:2014mva}.
As a concrete example, one can consider electromagnetism in four dimensions without sources. The path integral is described by
\begin{equation}
\int {\cal D}A~e^{iS(A)/\hbar}~,\qquad S(A)=-\frac{1}{4g^2}\int d^4 x~(\partial^\mu A^\nu-\partial^\nu A^\mu)(\partial_\mu A_\nu-\partial_\nu A_\mu)
\end{equation}
This can be re-formulated as a path integral over the antisymmetric tensor field $F_{\mu\nu}$ subject to the constraint that the Bianchi identity $\partial_\mu \tilde{F}^{\mu\nu}=0$ is satisfied at each point $x$
\begin{equation}
\int {\cal D }F\prod_x\delta(\partial_\mu \tilde{F}^{\mu\nu}(x))e^{-\frac{i}{4\hbar g^2}\int d^4 x F_{\mu\nu}F^{\mu\nu}}~,
\end{equation}
where a potential Jacobian is ignored. By using an integral representation for the $\delta$ function and some integration by parts, this action can be rewritten as
\begin{equation}
\int {\cal D }F{\cal D}V~e^{-\frac{i}{\hbar}\int d^4 x \frac{1}{4g^2}F_{\mu\nu}F^{\mu\nu}-\frac{1}{4\pi}(\partial_\mu V_\nu-\partial_\nu V_\mu)\tilde{F}^{\mu\nu}}~.
\label{eq:full}
\end{equation}
In this formulation one can now also integrate out $F_{\mu\nu}$ as the integral is essentially Gaussian. This leads to
\begin{equation}
\int {\cal D}V e^{-\frac{i g^2}{16\pi^2}\int d^4 x (\partial_\mu V_\nu-\partial_\nu V_\mu)(\partial^\mu V^\nu-\partial^\nu V^\mu)}
\label{eq:piv}
\end{equation}
This path integral is now over a different field $V$ which was introduced merely as an auxiliary field. The relation between both representations can be seen from the equations of motion from the action involving both fields $A$ and $V:$
\begin{equation}
\tilde{F}_{\mu\nu}=-\frac{g^2}{2\pi}(\partial_\mu V_\nu-\partial_\nu V_\mu)\equiv -G_{\mu\nu}
\label{eq:locrelation}
\end{equation}
Electric and magnetic field components are exchanged between these two descriptions, and in addition the coupling constant is inverted, $g\to 1/g.$\footnote{Note that this becomes a real strong-weak duality once charged fields are introduced.} Despite the local relation~\eqref{eq:locrelation}, the map relating both representations is non-local as it involves an integration over space-time.
Note that the integration of a Gaussian from~\eqref{eq:full} to~\eqref{eq:piv} corresponds precisely to the transformation of a Gaussian from position space to momentum space in the Fourier transformation. This highlights the connection between Fourier transformation and mapping fields under duality.
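Schematically, dropping the index structure, signs, and precise normalisations (a heuristic sketch rather than the full derivation), the relevant step is the one-dimensional Gaussian identity
\begin{equation*}
\int dF\; e^{\,i\left(a\,F^{2}+b\,F G\right)}\;\propto\; e^{-i\,\frac{b^{2}}{4a}\,G^{2}}\,,
\end{equation*}
which is also the statement that the Fourier transform of a Gaussian is a Gaussian of inverted width. With $a\sim 1/g^{2}$ and $b$ of order one, the quadratic action in $F$ turns into a quadratic action in $G\sim\partial V-\partial V$ with coefficient $\sim g^{2}$, realising the inversion of the coupling.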
This analysis for electromagnetism in four dimensions can be extended to the discussion of massive $p-$form fields in $D$ dimensions (cf.~\cite{Quevedo:1997jb} for a review). Again a relation between the variables in terms of Fourier transformation can be established.
\begin{figure}
\begin{center}
\includegraphics[width=0.49\textwidth]{Ising2D_Unet40x40_T32-1-35_Dual3.pdf}
\includegraphics[width=0.49\textwidth]{Ising2D_Unet40x40_T32-1-35_Learned3.pdf}
\end{center}
\vspace*{-20pt}
\caption{
Output of U-networks trained on a single-temperature dataset for various temperatures. The ability
to distinguish between inputs depends strongly on the original temperature. Here we show results for $T=1.80$ where the network is able to distinguish between different inputs.
\label{fig:Unet_CrossCheck}
}
\end{figure}
\subsubsection*{Applications in Physics}
In the previous sections we have focused on solving classification tasks with the help of dual variables. In physics, dualities are generally used to determine correlation functions more accurately. These correlators can in turn be seen as properties of the data and hence be connected with our classification tasks. To highlight the strength of these techniques, we mention two major applications where methods based on dualities outperform other techniques:
\begin{enumerate}
\item {\bf Hydrodynamic transport coefficients for quark gluon plasma:} In the context of holography, strongly coupled conformal field theories are related with weakly coupled gravitational systems\footnote{See~\cite{Hashimoto:2019bih} for the connection between holography and deep Boltzmann machines.} in one higher dimension. Field theory correlators can be calculated by performing the appropriate perturbation analysis in the gravitational system~\cite{Maldacena:1997re, Witten:1998qj, Aharony:1999ti}. One of the prime examples includes the calculation of the shear viscosity $\eta/s$ of ${\cal N}=4$ super Yang-Mills theory which effectively is a two-point correlation function of the stress energy tensor~\cite{Policastro:2001yc,Kovtun:2004de}. It has been argued that these calculations can be used to understand properties of the quark-gluon plasma and provide, at reasonably low calculational effort, quantitatively more accurate results than lattice predictions (cf.~\cite{CasalderreySolana:2011us} for a review and further interesting applications).
\item {\bf Yukawa couplings in the standard embedding for the heterotic string:} Here the duality in use is referred to as mirror symmetry, a generalisation of T-duality. In the heterotic string it facilitates the calculation of Yukawa couplings in the standard embedding. Concretely, in the dual frame the ${\bf 27}^3$ couplings are purely topological, whereas in the original frame the couplings ($\overline{\bf 27}^3$) depend on the K\"ahler moduli. The topological couplings can be computed with standard methods by finding solutions to the Picard-Fuchs equations. Both couplings have to be identical due to mirror symmetry, and utilising the mirror map between the dual moduli spaces allows a calculation of the K\"ahler moduli dependence of the $\overline{\bf 27}^3$ couplings. The direct calculation of these corrections requires counting appropriate rational curves on the background Calabi-Yau manifold, which is known to be a hard problem in mathematics. Using mirror symmetry this hard calculation can be avoided. For a physicist, the Yukawa couplings in the original frame capture a tree-level part and non-perturbative corrections; it is these non-perturbative corrections which can be calculated using mirror symmetry. For explicit constructions of these dualities and more details see for instance~\cite{Candelas,Candelas:1993dm,Hosono:1993qy,Hosono:1994av}. Note that the reduced calculational complexity required to calculate the Yukawa couplings in the dual frame was mentioned in~\cite{Halverson:2018cio}.
\end{enumerate}
Both examples highlight the capability of calculating far beyond the realm of standard perturbation theory. As a final example, showcasing the connection to the dualities in the 1D Ising case, we discuss Seiberg duality. Here we identify a starting point for correlators which can serve as candidate replacements for metastability in the 1D Ising case.
\subsection{Seiberg duality}
Let us comment on the connection to the classical example of Seiberg duality in the context of SQCD~\cite{Seiberg:1994bz,Seiberg:1994pq,Intriligator:1995au}. Here two gauge theories share the same infrared physics but differ in the UV. These are referred to as the electric and magnetic phase. The electric phase ($3/2 N_c<N_f<3N_c$) is described by the field content presented in Table~\ref{tab:fieldcontentelectric} and the magnetic one in Table~\ref{tab:fieldcontentmagnetic}. The electric theory has no superpotential whereas the magnetic theory has a superpotential of the form $W=\tilde{M}q\tilde{q}$ where $\tilde{M}$ is related to the meson $M$ built out of quarks in the electric phase.
\begin{table}
\begin{center}
\begin{tabular}{c | c | c | c | c | c | c}
Field & $SU(N_c)$ & $SU(N_f)_{L}$ & $SU(N_f)_{R}$ & $U(1)_A$& $U(1)_B$& $U(1)_R$\\ \hline
$Q$ &$\bf{N_{c}}$ & $\bf{N_{f}}$ & $1$ & $1$ & $1$ & $1-\frac{N_{c}}{N_{f}}$\\
$\tilde{Q}$ & $ \mathbf{\overline{N}_{c}}$ &$ 1$ &$ \mathbf{\overline{N}_{f}}$ &$ 1$ &$ -1$ &$ 1-\frac{N_{c}}{N_{f}}$\\
\end{tabular}
\end{center}
\vspace*{-10pt}
\caption{Field content of the electric phase.}\label{tab:fieldcontentelectric}
\end{table}
{
\subsubsection*{Electric Phase}
As a supersymmetric theory with zero
tree-level superpotential, the classical Lagrangian of the electric phase involves a D-term potential whose flat directions at vanishing value parameterise the moduli space of the theory. More precisely, the corresponding quark expectation values can be determined by imposing the D-flatness condition $D^{A}=0$ with
\begin{equation}
D^{A} = \sum _{i} {Q}_{i}^{\dagger}T_{i}^{A} Q_{i}+\tilde{Q}_{i}^{\dagger}T_{i}^{A} \tilde{Q}_{i}\,,
\end{equation}
where the $T^{A}$ denote the generators of the respective gauge group $SU(N_c)$. The classical moduli space is then defined as the space of quark vacuum expectation values modulo gauge equivalence. As argued in~\cite{Intriligator:1995au,Luty:1995sd}, this allows for an equivalent description in terms of expectation values of gauge-invariant polynomials in the fields subject to any classical relations. For the theories considered here, such combinations are given by the $2{N_f\choose N_c}$ baryon and $N_{f}^2$ meson operators
\begin{eqnarray}
\label{BaryonsMesons_Electric}
\nonumber B^{i_1\ldots i_{N_c}}&=& Q^{i_1}_{a_1}\cdots Q^{i_{N_c}}_{a_{N_c}}\epsilon^{a_1\ldots a_{N_c}}\,,\\
\tilde{B}_{i_1\ldots i_{N_c}}&=& \tilde{Q}_{i_1}^{a_1}\cdots \tilde{Q}_{i_{N_c}}^{a_{N_c}}\epsilon_{a_1\ldots a_{N_c}}\,,\\
\nonumber M^{i}_{j}&=&Q^{i}_{a}\tilde{Q}_{j}^{a}\,.
\end{eqnarray}
Due to the identity
\begin{equation}
\epsilon _{a_{1}\dots a_{N_c}}\epsilon ^{b_{1}\dots b_{N_c}} = \delta _{a_1}{}^{[\underline{b_1}}\cdots \delta _{a_{N_c}}{}^{\underline{b_{N_c}}]}\,,
\end{equation}
these are subject to additional constraints
\begin{equation}
B^{i_1\ldots i_{N_c}}\tilde{B}_{j_1\ldots j_{N_c}}=M _{j_1}^{[\underline{i_1}}\cdots M _{j_{N_c}}^{\underline{i_{N_c}}]}\,,
\end{equation}
leaving a total of $2N_{f}N_{c}-(N_c^2-1)$ light D-flat directions (cf.~\cite{WechtLec}). The physical interpretation of this is that the gauge group $SU(N_c)$ is completely broken, which is reflected in the number $N_c^2-1$ of broken generators~\cite{Intriligator:1995au}.
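The index contractions above lend themselves to a quick numerical cross-check. The following Python sketch (our own illustration; the helper `levi_civita` and all variable names are not part of the paper) verifies the epsilon-delta identity for $N_c=2$ and reproduces the operator and flat-direction counts quoted above:

```python
from itertools import permutations
import math
import numpy as np

def levi_civita(n):
    """Totally antisymmetric epsilon symbol with n indices."""
    eps = np.zeros((n,) * n)
    for perm in permutations(range(n)):
        # sign of the permutation via its inversion count
        inversions = sum(perm[i] > perm[j]
                         for i in range(n) for j in range(i + 1, n))
        eps[perm] = (-1) ** inversions
    return eps

# Check epsilon_{ab} epsilon^{cd} = delta_a^c delta_b^d - delta_a^d delta_b^c
Nc = 2
eps, delta = levi_civita(Nc), np.eye(Nc)
lhs = np.einsum('ab,cd->abcd', eps, eps)
rhs = (np.einsum('ac,bd->abcd', delta, delta)
       - np.einsum('ad,bc->abcd', delta, delta))
assert np.allclose(lhs, rhs)

# Counting of gauge-invariant operators and light D-flat directions
Nf, Nc = 5, 3
n_ops = 2 * math.comb(Nf, Nc) + Nf ** 2      # baryons + mesons
n_flat = 2 * Nf * Nc - (Nc ** 2 - 1)         # light D-flat directions
```

For $N_f=5$, $N_c=3$ this gives 45 gauge-invariant operators and 22 light D-flat directions, matching the counting formulas in the text.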
\subsubsection*{Magnetic Phase}
In the infrared, the above theory is dual to a magnetic description based on the gauge group $SU(\tilde{N}_c=N_f - N_c)$. The corresponding field content is listed in Table~\ref{tab:fieldcontentmagnetic}. Unlike the electric phase, the magnetic phase involves an additional superpotential
\begin{equation}
\label{SeibergDuality_Superpotential}
W=\tilde{M}^i_j q_i \tilde{q}^j\,,
\end{equation}
where the magnetic meson $\tilde{M}$ defines a fundamental degree of freedom and is related to its electric counterpart defined in~\eqref{BaryonsMesons_Electric} by a characteristic scale $\mu$,
\begin{equation}
\label{SeibergDuality:MagneticMeson}
\tilde{M}=\frac{1}{\mu}M\,.
\end{equation}
Often both mesons are identified and the notation $M$ is used in either phase, which is indeed valid at the infrared fixed point. The dimensionful parameter $\mu$ in \eqref{SeibergDuality:MagneticMeson} is only required to relate the two meson operators in the ultraviolet limit: there, the electric meson is a composite state with canonical dimension 2, picking up an anomalous dimension $3\frac{\tilde{N}_c}{N_f}$ during the renormalisation group flow to the infrared fixed point, while the magnetic meson is a fundamental field of dimension one flowing to the same fixed point. It is therefore common to define a separate operator as in \eqref{SeibergDuality:MagneticMeson} to correctly describe the magnetic meson in the ultraviolet limit.
The characteristic scale $\mu$ also appears in the matching condition
\begin{equation}
\Lambda^{3N_c - N_f}\tilde{\Lambda}^{3\tilde{N}_c-N_f}=(-1)^{\tilde{N}_c}\mu ^{N_f}
\end{equation}
for the scales $\Lambda$ and $\tilde \Lambda$ of the electric and magnetic theory, respectively.
From this, it can be seen that the duality relates different theories at strong and weak coupling, thus resembling the characteristic structure of a strong-weak duality.
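The strong-weak character of this matching condition can be made explicit with a few lines of Python (a sketch of our own; the function name and parameter choices are not from the text). Dropping the phase $(-1)^{\tilde N_c}$ and working with magnitudes only, one can solve for the magnetic scale:

```python
def magnetic_scale(lam, mu, n_c, n_f):
    """|Lambda_tilde| from |Lambda|^(3Nc-Nf) * |Lambda_tilde|^(3Ntc-Nf) = mu^Nf
    (magnitudes only; the phase (-1)^Ntc is dropped)."""
    n_tc = n_f - n_c
    return (mu ** n_f / lam ** (3 * n_c - n_f)) ** (1.0 / (3 * n_tc - n_f))

# Nc = 3, Nf = 5 lies inside the duality range 3/2 Nc < Nf < 3 Nc
mu = 1.0
scales = [magnetic_scale(lam, mu, 3, 5) for lam in (0.5, 1.0, 2.0, 4.0)]

# As the electric scale grows, the magnetic scale shrinks:
# the hallmark of a strong-weak duality.
assert all(a > b for a, b in zip(scales, scales[1:]))
```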
\begin{table}[t]
\begin{center}
\begin{tabular}{c | c | c | c | c | c | c}
Field & $SU(\tilde{N}_c=N_f-N_c)$ & $SU(N_f)_{L}$ & $SU(N_f)_{R}$ & $U(1)_A$& $U(1)_B$& $U(1)_R$\\ \hline
$q$ & $\bf{\tilde{N}_c}$ & $\bf{\overline{N}_{f}}$ & $ 1$ & $ 1$ & $ \frac{N_{c}}{\tilde{N}_c}$ & $1-\frac{\tilde{N}_c}{N_{f}}$\\
$\tilde{q}$ & $\bf{\overline{\tilde{N}}_c}$ & $1$ & $\bf{N_{f}} $ & $ 1$ &$ -\frac{N_{c}}{\tilde{N}_c}$ & $1-\frac{\tilde{N}_c}{N_{f}}$ \\
$\tilde{M}$ & 1 & $ \bf{N_{f}}$ & $ \bf{\overline{N}_{f}}$ & $ -2 $ & $ 0 $ & $2\frac{\tilde{N}_c}{N_{f}}$
\end{tabular}
\end{center}
\vspace*{-10pt}
\caption{Field content of the magnetic phase.}\label{tab:fieldcontentmagnetic}
\end{table}
Analogously to the electric phase, one can define $2{N_f\choose \tilde{N}_c}$ magnetic baryon operators as
\begin{eqnarray}
\label{Baryons_Magnetic}
\nonumber b_{i_1\ldots i_{\tilde{N}_c}}&=& q_{i_1}^{a_1}\cdots q_{i_{\tilde{N}_c}}^{a_{\tilde{N}_c}}\epsilon_{a_1\ldots a_{\tilde{N}_c}}\,,\\
\tilde{b}^{i_1\ldots i_{\tilde{N}_c}}&=& \tilde{q}^{i_1}_{a_1}\cdots \tilde{q}^{i_{\tilde{N}_c}}_{a_{\tilde{N}_c}}\epsilon^{a_1\ldots a_{\tilde{N}_c}}\,,
\end{eqnarray}
which, due to the identity ${N_f\choose N_c}={N_f\choose N_f-N_c}$, carry the same number of degrees of freedom as their electric counterparts.
Formally, further mesons could be defined as $\tilde{m}=q\tilde{q}$; however, these do not lead to new degrees of freedom in the moduli space due to the additional equations of motion $\langle q\tilde{q}\rangle =0 $ arising from the presence of the superpotential \eqref{SeibergDuality_Superpotential}, thus avoiding an inconsistency of the duality~\cite{WechtLec}. A more in-depth analysis of the moduli spaces as well as further consistency checks of the duality can be found e.g. in~\cite{Intriligator:1995au}, and we refer the interested reader to the original works for more details.
\subsubsection*{Application to Neural Networks}
At the infrared fixed point, there exists a direct relation between both types of baryon operators,
\begin{eqnarray}
\nonumber B^{i_1\ldots i_{N_c}} &=& \sqrt{-(-\mu)^{N_c - N_f}\Lambda^{3N_c - N_f}}\;\epsilon^{i_1\ldots i_{N_c}j_1\ldots j_{\tilde{N}_c}}\, b_{j_1\ldots j_{\tilde{N}_c}}\,,\\
\tilde{B}_{i_1\ldots i_{N_c}} &=& \sqrt{-(-\mu)^{N_c - N_f}\Lambda^{3N_c - N_f}}\;\epsilon_{i_1\ldots i_{N_c}j_1\ldots j_{\tilde{N}_c}}\, \tilde{b}^{j_1\ldots j_{\tilde{N}_c}}\,.
\end{eqnarray}
As can be seen, the baryons in the electric and magnetic phase involve products of $N_c$ and $\tilde{N}_c=N_f - N_c$ quarks, respectively.
This is similar to our discussion of the 1D Ising chain, in which determining the total energy required the computation
of $n$-spin products in the original representation, while the dependency was linear in the dual frame and therefore significantly easier for neural networks to learn. As the degree $n$ of the interactions was raised, the value of the total energy became increasingly sensitive to flips of single spins, since each spin enters a growing number of local interaction terms (cf. Figure~\ref{fig:1DIsing_Dualities_n3_Metastability}), which eventually led to a complete deterioration of performance at very high $n$.
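This trade-off can be made concrete with a toy computation (our own illustrative conventions, not the exact setup of the Ising section): on a periodic chain, the energy built from degree-$n$ spin products becomes a linear sum once expressed in the dual link variables:

```python
import numpy as np

rng = np.random.default_rng(0)
N, n = 12, 3                      # chain length, interaction degree
s = rng.choice([-1, 1], size=N)   # spin configuration (periodic chain)

# Original frame: energy is a sum of n-spin products
E_orig = -sum(np.prod([s[(i + k) % N] for k in range(n)]) for i in range(N))

# Dual frame: t_i = s_i * s_{i+1} * ... * s_{i+n-1}; energy is linear in t.
# Note that flipping a single spin s_j changes n of the dual variables,
# reflecting the sensitivity discussed in the text.
t = np.array([np.prod([s[(i + k) % N] for k in range(n)]) for i in range(N)])
E_dual = -t.sum()

assert E_orig == E_dual  # same physics, but E is linear in the dual variables
```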
In the above setting, the baryon operators in \eqref{BaryonsMesons_Electric} and \eqref{Baryons_Magnetic} take the form of sums over products of $N_c$ or $\tilde{N}_c$ quarks, with each particular component appearing in $(N_c-1)!$ or $(\tilde{N}_c-1)!$ non-vanishing products (taking the role of the ``local interaction terms''). As in the 1D Ising chain, such dependencies are likely to be learned more easily in the phase for which the number of factors is lower. In the setting discussed here, there exists a range $3/2\, N_c < N_f < 2N_c$ for which $\tilde{N}_c<N_c$, implying that baryon relations might be more accessible in the magnetic theory. Conversely, the electric phase might be preferable in the region $2 N_c < N_f < 3N_c$, where generically $\tilde{N}_c>N_c$.
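The combinatorial statement that each fixed quark factor appears in $(N_c-1)!$ non-vanishing terms of the baryon sum can be checked directly for small $N_c$; the sketch below (our own variable names) enumerates the colour permutations:

```python
from itertools import permutations
from math import factorial

Nc = 4
# Terms of B^{i_1...i_Nc}: one per permutation of the colour indices a_1..a_Nc
terms = list(permutations(range(Nc)))

# Count the terms in which the first quark factor Q^{i_1}_a carries colour a = 0;
# the remaining Nc - 1 colours can be permuted freely.
count = sum(1 for colours in terms if colours[0] == 0)
assert count == factorial(Nc - 1)
```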
It is a natural question to explore whether this fact can be used to re-discover Seiberg-like dualities following the strategy successfully applied for the 1D Ising case in Section~\ref{sec:1DIsing}. As this analysis promises to be too lengthy for this proof of concept paper, we leave this issue for the future.
\section{Conclusion}
\label{sec:conclusions}
Dualities offer a more efficient way of calculating correlation functions in physics. In particular, in strongly coupled regimes they provide, in several examples, the best available technique for calculating properties of the corresponding dynamical systems. We have presented several examples where this improved way of calculating correlation functions via dual representations can be related to improved performance on classification tasks.
Such different and more efficient data descriptions are clearly desirable, but how can one obtain them without knowing the explicit map between the representations? We have shown in this work how such beneficial representations can be obtained in an unsupervised fashion, i.e.~without telling the network about their existence. By reproducing several human-made dualities automatically, we provide a proof of concept that machines can be programmed to find dualities. Clearly, further and more involved types of dualities need to be addressed with these kinds of techniques, which will then enable the search for new dualities.
Undoubtedly our tasks are relatively simple and could be achieved, for instance in the case of the 1D Ising model and Fourier analysis, by more sophisticated architectures. However, we want to stress that these settings serve as an important first step towards addressing tasks which are not accessible with state-of-the-art techniques, using the same strategies employed here.
The dual representations obtained by our networks can be analysed, and we have found representations which are interpretable: e.g.~we could recognise a Fourier-like transformation, or transformations similar to the duality transformation in the 1D Ising example. This is encouraging, as the neural network provides us with the explicit map to the interpretable representation.
Where will further steps in this new field of exploring dualities between different descriptions of dynamical systems with the help of machine learning take us?
\section*{Acknowledgments}
We would like to thank Jim Halverson, Fernando Quevedo, and Fabian Ruehle for discussions.
SK thanks the Aspen Center for Physics, which is supported by National Science Foundation grant PHY-1607611, and the Simons Center for Geometry and Physics during the Neural Networks and the Data Science Revolution program for providing a very stimulating work environment to develop some of this work. Parts of these results have been presented already at the following conferences and workshops: String Phenomenology 2019, QTS 2019 (Montreal), Corfu Summer Institute, DLAP in Kyoto, 1st French-German Meeting in Physics, Mathematics and Artificial Intelligence Theory, and XAIENCE in Seoul.
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 2,392 |
2020 - iPhone year of new energy vehicles
Column:Industry News Time:2020-05-12
On March 10th, the new energy industry lit up social media once again: "Iron Man" Elon Musk posted on Twitter that Tesla's one millionth vehicle had officially rolled off the line, a remarkable milestone.
Ningde Times (CATL, Contemporary Amperex Technology), the domestic battery giant, has not been idle either, issuing a hero's call: "In the electric age, will you come or not?" Twenty-three domestic giants answered, among them FAW Bestune, Chang'an Automobile, SAIC Maxus, Roewe Marvel X, GAC New Energy, Weilai (NIO), Weima (WM Motor), Jinlong (King Long) buses and Aichi (Aiways), covering state-owned giants, new car-making forces, commercial vehicles and passenger vehicles alike.
The start of 2020 was inauspicious, and the stirring spring of new energy struggled to arrive.
In 2011, Steve Jobs passed away, and Tim Cook went on to quadruple Apple's market value over the next eight years.
In June 2019, Tesla's share price fell as low as $180, yet Musk doubled Tesla's market value in just eight months. After all, a car is far more expensive than a mobile phone.
The soaring stock price triggered an epic war of words between bulls and bears, but the emergence of a super track has become an industry consensus. The "Apple of the automotive industry" is accelerating, and the curtain has risen on the new energy era.
Part 1. China's car-making forces
At the end of 2010, while consumers queued outside Apple stores in the cold wind for an iPhone 4, Chinese manufacturers began rushing toward the strongest tailwind of the era.
The following year, in Beijing, Xiaomi launched its first phone "for enthusiasts". In the south, OPPO, which had made its name in DVD players, launched its first Android smartphone, the Find X903, and invited international star Leonardo DiCaprio to shoot an advertisement worthy of a movie trailer. That same year, Yu Chengdong was handed a mission: get Huawei's mobile phone business off the ground.
This year, the streets are debating whether Tesla's Model 3 price can drop a little further. But a scene much like the one ten years ago is playing out among Chinese manufacturers.
After all, now that the "Apple of the automotive industry" has arrived, can its Xiaomi, Huawei and OPPO/vivo be far behind?
BYD plans to launch its new flagship electric vehicle, the Han series, in 2020; its new Blade Battery technology has also attracted plenty of attention.
On February 25, Hefei announced that the China headquarters of Weilai (NIO), a new car-making force, would be located in the city, with the project planned to raise 14.5 billion yuan. Close behind, Weima (WM Motor) vowed to become the first profitable new force, while Xiaopeng (Xpeng) released three new models in quick succession to compete for the market.
Just as domestic Xiaomi, OPPO/vivo, Huawei and the old giant Samsung once learned from Apple, both the followers of Tesla-style electrification and the traditional behemoths are now starting their transformations one after another.
Judging from the new energy plans of the world's top ten mainstream automakers, 2020 is also a turning point, with the focus shifting to mass production. The fuel-vehicle giants have raised their bets and stepped up their transformations.
BMW's pure electric ix3 will be put into operation in Shenyang, China in 2020. It is required to provide 25 electric models to the market by 2023.
Toyota and FAW will invest US $1.2 billion to build a plant in Tianjin, with a production capacity of 200000 electric vehicles per year.
The first phase of Volkswagen's battery plant with a capacity of 16 gigawatt hours in Europe will start construction in 2020.
With the battle joined, the policy tailwind fanned the flames even more vigorously.
On March 4, 2020, the Standing Committee of the Political Bureau of the CPC Central Committee held a meeting which called for accelerating the construction of new infrastructure. "New infrastructure" is the topic of the moment, and new energy features prominently on its core list.
At the same time, rumors of good news keep circulating in the market. Some media reported that the relevant state departments are stepping up research and discussion of supporting policies for the new energy industry.
The global shift to electrification is the general trend. Keep your eye on the trend, and you will not get tangled up in the short-term black swan of the epidemic.
Part 2. Industry chain leader: we are ready
When the tide of a new industry rises, the manufacturers who stand at the head of the tide and attract attention are often those who face consumers at the end, but what surges below is the power of the supply chain.
Apple has ushered in a new era of smart phones, but it is not enough to rely on the designers in Silicon Valley alone. It also needs the efforts of hundreds of millions of industrial workers and thousands of resident engineers in the supply chain to make the products landing.
Only when the glass cover from Lens Technology, the high-definition camera modules assembled by OFILM, and the stereo loudspeakers supplied by GoerTek are all ready can an Apple phone be smoothly handed to consumers.
Now, behind the endless new energy vehicle terminals, China's industrial chain is also rising rapidly.
They are ready for this moment.
Suppliers have long courted Tesla. Xusheng Co., Ltd. began cooperating with Tesla in 2013, and Junsheng (Joyson) Electronics had supplied Tesla with more than 200,000 battery and circuit protection systems by 2020.
With the domestically built Tesla rolling off the line, Tesla's circle of friends in China has been expanding rapidly.
In 2019, Ningde Times ranked first in installed capacity for both ternary and lithium iron phosphate batteries.
On February 3, Ningde Times announced that it plans to sign a contract with Tesla to supply power battery products, officially becoming its third battery supplier after Panasonic and LG Chem, with a supply period from July 1, 2020 to June 30, 2022. Ningde Times shares subsequently hit the daily limit-up several times.
Source: Tianfeng securities, Yuanchuan Research Institute
As of February this year, Tianfeng Securities had identified 30 major domestic suppliers (A-share listed companies), while Dongxing Securities listed more than 130 Tesla suppliers in total.
Song Gang, manufacturing director of Tesla's Shanghai factory, revealed that the localization rate of domestic Model 3 parts was 30% at the end of 2019 and would rise to 70% within the year and 100% by the end of 2020.
It is fair to say that the Tesla industrial chain will be a hot investment theme throughout 2020. As the industry grows rapidly, the profits of leading enterprises will expand with the boom.
However, unlike the era of the Apple industrial chain, when "foreigners ate the meat and we drank the soup," this time we seized the commanding heights from the start!
Part 3. Seizing the heart
Although in the Apple era China established a huge supply chain that drew the world's attention and produced more than 70% of all mobile phones, it regrettably failed to capture the core chips.
Xiaomi and OPPO/vivo could easily assemble a powerful smartphone, but they had to wait in line for supplies of each new generation of Qualcomm Snapdragon chips. Even though Huawei caught up in the field of processors, Samsung still skimmed large profits from memory chips.
The heart of consumer electronics has always looked to Europe and America, dominated by Britain and the United States (led by ARM, Qualcomm and others), but on the new energy track China, Japan and South Korea have completely replaced the Anglo-American system and become the first echelon.
The first step is to move towards the core.
The core of an electric vehicle is the "three electrics" (battery, motor and electronic control), of which the battery is the most important. At this point we have to mention Ningde Times (CATL), the standard-bearer of domestic batteries. The weight this rising power carries fully confirms the saying:
Opportunities are reserved for those who are ready.
In fact, Ningde Times is not only the king of the supply chain in the new energy era but was also a hidden champion of the Apple era. Its founding team previously founded the industry-famous ATL (Amperex Technology Limited), the core supplier of Apple's lithium batteries, which not only solved the battery-life problem of the iPhone but helped make it a genuinely wireless smart mobile terminal.
From ATL to Ningde Times, from Apple to new energy, the kings of the two eras solved two core problems and pried open the starting points of two great industrial cycles.
China, the main force in the global power battery market, has grown the fastest, overtaking Japan in 2015 to become the world's largest power battery producer.
According to data from South Korea's SNE Research, of the top ten companies by power battery shipments in 2019, five were Chinese, two Japanese and three Korean. Ningde Times topped global power battery shipments for the third consecutive year.
Not only large volumes, but quality products too.
In terms of technical path, Ningde Times's prismatic cells and LG Chem's pouch cells are the industry's mainstream directions, while Panasonic's cylindrical route is comparatively niche. Even Tesla, long partial to cylindrical batteries, has now joined hands with Ningde Times.
The second step: the big brother must not only be early, but steady.
The global new energy industry is on the eve of great change, and the hard-won leading position of China's independent brands must not be taken away again. The new leader has once again set a good example, with technology hard enough and provisions ample enough.
According to industry observers, Ningde Times has a number of "black technologies" in reserve. On battery life, for example, the company has developed an advanced zero-degradation battery that shows no capacity fade within 1,500 cycles, a milestone for long-life batteries.
In addition, on the evening of February 26, Ningde Times announced a 20-billion-yuan private placement plan, adding a total of 52 GWh of battery capacity, while accelerating R&D on energy storage projects, striding into the energy storage field and competing head-on with "Iron Man" Musk.
| {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 4,716 |
{"url":"https://wikimili.com/en/Free_category","text":"# Free category

In mathematics, the free category or path category generated by a directed graph or quiver is the category that results from freely concatenating arrows together, whenever the target of one arrow is the source of the next.

More precisely, the objects of the category are the vertices of the quiver, and the morphisms are paths between objects. Here, a path is defined as a finite sequence

${\displaystyle V_{0}{\xrightarrow {\;\;E_{0}\;\;}}V_{1}{\xrightarrow {\;\;E_{1}\;\;}}\cdots {\xrightarrow {E_{n-1}}}V_{n}}$

where ${\displaystyle V_{k}}$ is a vertex of the quiver, ${\displaystyle E_{k}}$ is an edge of the quiver, and n ranges over the non-negative integers. For every vertex ${\displaystyle V}$ of the quiver, there is an "empty path" which constitutes the identity morphism at that vertex.

The composition operation is concatenation of paths.
Given paths

${\displaystyle V_{0}{\xrightarrow {E_{0}}}\cdots {\xrightarrow {E_{n-1}}}V_{n},\quad V_{n}{\xrightarrow {F_{0}}}W_{0}{\xrightarrow {F_{1}}}\cdots {\xrightarrow {F_{m}}}W_{m},}$

their composition is

${\displaystyle \left(V_{n}{\xrightarrow {F_{0}}}W_{0}{\xrightarrow {F_{1}}}\cdots {\xrightarrow {F_{m}}}W_{m}\right)\circ \left(V_{0}{\xrightarrow {E_{0}}}\cdots {\xrightarrow {E_{n-1}}}V_{n}\right):=V_{0}{\xrightarrow {E_{0}}}\cdots {\xrightarrow {E_{n-1}}}V_{n}{\xrightarrow {F_{0}}}W_{0}{\xrightarrow {F_{1}}}\cdots {\xrightarrow {F_{m}}}W_{m}}$

Note that the result of the composition starts with the right operand of the composition, and ends with its left operand.

## Examples

• If Q is the quiver with one vertex and one edge f from that object to itself, then the free category on Q has as arrows 1, f, ff, fff, etc. [2]
• Let Q be the quiver with two vertices a, b and two edges e, f from a to b and b to a, respectively. Then the free category on Q has two identity arrows and an arrow for every finite sequence of alternating es and fs, including: e, f, ef, fe, fef, efe, etc. [1]
• If Q is the quiver ${\displaystyle a{\xrightarrow {f}}b{\xrightarrow {g}}c}$, then the free category on Q has (in addition to three identity arrows) the arrows f, g, and gf.
• If a quiver Q has only one vertex, then the free category on Q has only one object, and corresponds to the free monoid on the edges of Q. [1]
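The path-composition rule above can be modelled directly in code. The following Python sketch (our own illustration; the function `compose` is not from the article) represents a morphism of the free category as a tuple of composable edges, with the empty tuple playing the role of an identity path:

```python
# Edges of a quiver: (source, label, target). A morphism of the free
# category is a tuple of composable edges; () stands in for an identity path.

def compose(g, f):
    """Composition g∘f: first follow f, then g (the right operand acts first)."""
    if f and g and f[-1][2] != g[0][0]:
        raise ValueError("target of f must equal source of g")
    return f + g  # concatenation of paths

# The quiver a --f--> b --g--> c from the examples
f = (("a", "f", "b"),)
g = (("b", "g", "c"),)

gf = compose(g, f)                  # the composite arrow gf : a -> c
assert gf == (("a", "f", "b"), ("b", "g", "c"))
assert compose(f, ()) == f          # the empty path acts as an identity
```

A fuller model would carry one identity path per vertex rather than a single `()`, but the endpoint check and concatenation already capture how composition in the free category works.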
It is usually denoted A+.\n\n## Properties\n\nThe category of small categories Cat has a forgetful functor U into the quiver category Quiv:\n\nIn mathematics, specifically in category theory, the category of small categories, denoted by Cat, is the category whose objects are all small categories and whose morphisms are functors between categories. Cat may actually be regarded as a 2-category with natural transformations serving as 2-morphisms.\n\nIn mathematics, in the area of category theory, a forgetful functor 'forgets' or drops some or all of the input's structure or properties 'before' mapping to the output. For an algebraic structure of a given signature, this may be expressed by curtailing the signature: the new signature is an edited form of the old one. If the signature is left as an empty list, the functor is simply to take the underlying set of a structure. Because many structures in mathematics consist of a set with an additional added structure, a forgetful functor that maps to the underlying set is the most common case.\n\nU\u00a0: CatQuiv\n\nwhich takes objects to vertices and morphisms to arrows. Intuitively, U \"[forgets] which arrows are composites and which are identities\". [2] This forgetful functor is right adjoint to the functor sending a quiver to the corresponding free category.\n\n### Universal property\n\nThe free category on a quiver can be described up to isomorphism by a universal property. Let C\u00a0: QuivCat be the functor that takes a quiver to the free category on that quiver (as described above), let U be the forgetful functor defined above, and let G be any quiver. Then there is a graph homomorphism I\u00a0: GU(C(G)) and given any category D and any graph homomorphism F\u00a0: GU(B), there is a unique functor F'\u00a0: C(G) \u2192 D such that U(F')\u2218I=F, i.e. 
the following diagram commutes:\n\nIn mathematics, the phrase up to appears in discussions about the elements of a set, and the conditions under which subsets of those elements may be considered equivalent. The statement \"elements a and b of set S are equivalent up to X\" means that a and b are equivalent if criterion X is ignored. That is, a and b can be transformed into one another if a transform corresponding to X is applied.\n\nIn category theory, an abstract branch of mathematics, an equivalence of categories is a relation between two categories that establishes that these categories are \"essentially the same\". There are numerous examples of categorical equivalences from many areas of mathematics. Establishing an equivalence involves demonstrating strong similarities between the mathematical structures concerned. In some cases, these structures may appear to be unrelated at a superficial or intuitive level, making the notion fairly powerful: it creates the opportunity to \"translate\" theorems between different kinds of mathematical structures, knowing that the essential meaning of those theorems is preserved under the translation.\n\nIn various branches of mathematics, a useful construction is often viewed as the \u201cmost efficient solution\u201d to a certain problem. The definition of a universal property uses the language of category theory to make this notion precise and to study it abstractly.\n\nThe functor C is left adjoint to the forgetful functor U. [1] [2] [3]\n\n## Related Research Articles\n\nIn category theory, a branch of mathematics, the abstract notion of a limit captures the essential properties of universal constructions such as products, pullbacks and inverse limits. The dual notion of a colimit generalizes constructions such as disjoint unions, direct sums, coproducts, pushouts and direct limits.\n\nIn mathematics, specifically category theory, adjunction is a relationship that two functors may have. 
Two functors that stand in this relationship are known as adjoint functors, one being the left adjoint and the other the right adjoint. Pairs of adjoint functors are ubiquitous in mathematics and often arise from constructions of \"optimal solutions\" to certain problems, such as the construction of a free group on a set in algebra, or the construction of the Stone-\u010cech compactification of a topological space in topology.\n\nAn exact sequence is a concept in mathematics, especially in group theory, ring and module theory, homological algebra, as well as in differential geometry. An exact sequence is a sequence, either finite or infinite, of objects and morphisms between them such that the image of one morphism equals the kernel of the next.\n\nIn category theory, a category is considered Cartesian closed if, roughly speaking, any morphism defined on a product of two objects can be naturally identified with a morphism defined on one of the factors. These categories are particularly important in mathematical logic and the theory of programming, in that their internal language is the simply typed lambda calculus. They are generalized by closed monoidal categories, whose internal language, linear type systems, are suitable for both quantum and classical computation.\n\nIn mathematics, the idea of a free object is one of the basic concepts of abstract algebra. It is a part of universal algebra, in the sense that it relates to all types of algebraic structure. It also has a formulation in terms of category theory, although this is in yet more abstract terms. Examples include free groups, tensor algebras, or free lattices. 
Informally, a free object over a set A can be thought of as being a \"generic\" algebraic structure over A: the only equations that hold between elements of the free object are those that follow from the defining axioms of the algebraic structure.\n\nIn category theory, a branch of mathematics, the functors between two given categories form a category, where the objects are the functors and the morphisms are natural transformations between the functors. Functor categories are of interest for two main reasons:\n\nIn mathematics, particularly category theory, a representable functor is a functor of a special form from an arbitrary category into the category of sets. Such functors give representations of an abstract category in terms of known structures allowing one to utilize, as much as possible, knowledge about the category of sets in other settings.\n\nIn mathematics, the derived categoryD(A) of an abelian category A is a construction of homological algebra introduced to refine and in a certain sense to simplify the theory of derived functors defined on A. The construction proceeds on the basis that the objects of D(A) should be chain complexes in A, with two such chain complexes considered isomorphic when there is a chain map that induces an isomorphism on the level of homology of the chain complexes. Derived functors can then be defined for chain complexes, refining the concept of hypercohomology. The definitions lead to a significant simplification of formulas otherwise described by complicated spectral sequences.\n\nIn mathematics, a triangulated category is a category together with the additional structure of a \"translation functor\" and a class of \"distinguished triangles\". Prominent examples are the derived category of an abelian category and the stable homotopy category of spectra, both of which carry the structure of a triangulated category in a natural fashion. 
The distinguished triangles generate the long exact sequences of homology; they play a role akin to that of short exact sequences in abelian categories.\n\nFibred categories are abstract entities in mathematics used to provide a general framework for descent theory. They formalise the various situations in geometry and algebra in which inverse images of objects such as vector bundles can be defined. As an example, for each topological space there is the category of vector bundles on the space, and for every continuous map from a topological space X to another topological space Y is associated the pullback functor taking bundles on Y to bundles on X. Fibred categories formalise the system consisting of these categories and inverse image functors. Similar setups appear in various guises in mathematics, in particular in algebraic geometry, which is the context in which fibred categories originally appeared. Fibered categories are used to define stacks, which are fibered categories with \"descent\". Fibrations also play an important role in categorical semantics of type theory, and in particular that of dependent type theories.\n\nThis is a glossary of properties and concepts in category theory in mathematics.\n\nIn category theory, a discipline within mathematics, the nerve N(C) of a small category C is a simplicial set constructed from the objects and morphisms of C. The geometric realization of this simplicial set is a topological space, called the classifying space of the category C. These closely related objects can provide information about some familiar and useful categories using algebraic topology, most often homotopy theory.\n\nIn category theory, a branch of mathematics, a diagram is the categorical analogue of an indexed family in set theory. The primary difference is that in the categorical setting one has morphisms that also need indexing. 
An indexed family of sets is a collection of sets, indexed by a fixed set; equivalently, a function from a fixed index set to the class of sets. A diagram is a collection of objects and morphisms, indexed by a fixed category; equivalently, a functor from a fixed index category to some category.\n\nIn mathematics, the category of rings, denoted by Ring, is the category whose objects are rings and whose morphisms are ring homomorphisms. Like many categories in mathematics, the category of rings is large, meaning that the class of all rings is proper.\n\nIn mathematics, a topos is a category that behaves like the category of sheaves of sets on a topological space. Topoi behave much like the category of sets and possess a notion of localization; they are in a sense a generalization of point-set topology. The Grothendieck topoi find applications in algebraic geometry; the more general elementary topoi are used in logic.\n\n## References\n\n1. Awodey, Steve (2010). Category theory (2nd ed.). Oxford: Oxford University Press. pp.\u00a020\u201324. ISBN \u00a0 0199237182. OCLC \u00a0 740446073.\n2. Mac Lane, Saunders (1978). Categories for the Working Mathematician (Second ed.). New York, NY: Springer New York. pp.\u00a049\u201351. ISBN \u00a0 1441931236. OCLC \u00a0 851741862.\n3. \"free category in nLab\". ncatlab.org. 
Retrieved 2017-09-12.","date":"2021-09-22 12:16:47","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 7, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.9323025345802307, \"perplexity\": 271.6201286502057}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 20, \"end_threshold\": 5, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-39\/segments\/1631780057347.80\/warc\/CC-MAIN-20210922102402-20210922132402-00334.warc.gz\"}"} | null | null |
Biography
The son of Herbert Quandt, who in 1959 became the controlling shareholder of BMW, he earned a degree in commerce in 1985. In 1991 he also began to devote himself to motor racing, competing in rally raids as an off-road driver and co-driver. Over the years he combined the experience gained in managing companies with his passion for motorsport. In 1998 he won the Marathon Cup with a Mitsubishi Pajero of GECO Raid Sport, the team that took 1st, 2nd and 3rd place in class at the Paris-Dakar. From November 2002 until the end of 2004 he was head of the sporting division of Mitsubishi Motors Motor Sport GmbH.
Since 2002 Sven Quandt has put his passion for motorsport into practice by founding the X-Raid team, the racing arm of the Munich-based carmaker, dedicated specifically to rally raids. This privately run team, managed by him, has the support of the carmaker BMW.
Notes
See also
BMW
X-raid
External links
Profile of Sven Quandt from the X.raid.de website | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 2,335 |
class CreateAccounts < ActiveRecord::Migration
def change
create_table :accounts do |t|
t.integer :user_id
t.integer :balance, :default => 0
t.timestamps :null => false
end
add_foreign_key "accounts", "users", :column => "user_id"
add_index :accounts, :user_id
end
end
| {
"redpajama_set_name": "RedPajamaGithub"
} | 1,741 |
Q: How to use Bootstrap tooltip? I have used Bootstrap but the tooltip is not working
<a href="#" data-original-tittle="test"
data-placement="right"
rel="tooltip"
target=" _blank"> hover me
</a>
Or do I need to use jQuery?
A: Try to add tooltip like
<script type="text/javascript">
$(function(){
$('[rel="tooltip"]').tooltip();
});
</script>
or you can directly select as
$('a').tooltip();
or try with it
<a href="#" data-toggle="tooltip"
data-original-title="Tooltip on right">hover me</a>
and your script like
$(function() {
$('a').tooltip({placement: 'right'});
});
MY ULTIMATE FIDDLE finally HERE
A: Use jQuery selectors as follows:
<script type="text/javascript">
$(function () {
$("[rel='tooltip']").tooltip();
});
</script>
Hope it's helpful.
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 8,003 |
var weiPerEther = 1000000000000000000;
// Make alias of document.getElementById -> $
function makeAlias(object, name) {
var fn = object ? object[name] : null;
if (typeof fn == 'undefined') return function () {}
return function () {
return fn.apply(object, arguments)
}
}
var passphrase = 'youvegoteth';
var gasPrice = 1;
var gas = 100000 * 2;
var gasLimit = gas * 2;
var maxGas = 4468057;
// Make document.getElementById aliased by $
$ = makeAlias(document, 'getElementById');
// Create Accounts Object
if(Accounts){
var Accounts = new Accounts();
// Set web3 provider
var host = ''
if(network_id==9){
host = "http://localhost:8545"; //testrpc
}
else if(network_id==3){
host = 'https://ropsten.infura.io/'; //ropsten
} else {
host = 'https://mainnet.infura.io/'; //mainnet
}
var provider = new HookedWeb3Provider({
host: host,
transaction_signer: Accounts
});
web3.setProvider(provider);
// Extend the web3 object
Accounts.log = function(msg){console.log(msg);};
}
| {
"redpajama_set_name": "RedPajamaGithub"
} | 108 |
package com.twitter.algebird.immutable
import java.io.{ByteArrayOutputStream, ObjectOutputStream}
import org.scalacheck.{Arbitrary, Gen}
import org.scalacheck.Prop._
import org.scalatest.matchers.should.Matchers
import org.scalatest.wordspec.AnyWordSpec
import com.twitter.algebird.{
ApproximateProperties,
ApproximateProperty,
Bytes,
CheckProperties,
Hash128,
Monoid,
MurmurHash128
}
object BloomFilterTestUtils {
def toDense[A](bloomFilter: BloomFilter[A])(bf: bloomFilter.Hash): bloomFilter.Hash = bf match {
case bloomFilter.Item(item) =>
val bs = bloomFilter.hashToArray(item).foldLeft(BitSet.empty)(_ + _)
bloomFilter.Instance(bs)
case bfi => bfi
}
}
class ImmutableBloomFilterLaws extends CheckProperties {
import com.twitter.algebird.BaseProperties._
import BloomFilterTestUtils._
val bf: BloomFilter[String] = BloomFilter[String](6, 12)
import bf._
implicit val bfGen: Arbitrary[bf.Hash] =
Arbitrary {
val item = Gen.choose(0, 10000).map(v => bf.create(v.toString))
val zero = Gen.const(Monoid.zero)
val sparse = Gen.listOf(item).map { its =>
Monoid.sum(its)
}
val dense = Gen.listOf(item).map { its =>
toDense(bf)(Monoid.sum(its))
}
Gen.frequency((1, zero), (5, item), (10, sparse), (10, dense))
}
property("BloomFilter is a Monoid") {
commutativeMonoidLaws[bf.Hash]
}
property("++ is the same as plus") {
forAll((a: bf.Hash, b: bf.Hash) => Equiv[bf.Hash].equiv(a ++ b, Monoid.plus(a, b)))
}
property("the distance between a filter and itself should be 0") {
forAll((a: bf.Hash) => a.hammingDistance(a) == 0)
}
property(
"the distance between a filter and an empty filter should be the number of bits" +
"set in the existing filter"
) {
forAll((a: bf.Hash) => a.hammingDistance(Monoid.zero) == a.numBits)
}
property("all equivalent filters should have 0 Hamming distance") {
forAll { (a: bf.Hash, b: bf.Hash) =>
if (Equiv[bf.Hash].equiv(a, b))
a.hammingDistance(b) == 0
else {
val dist = a.hammingDistance(b)
(dist > 0) && (dist <= a.width)
}
}
}
property("distance between filters should be symmetrical") {
forAll((a: bf.Hash, b: bf.Hash) => a.hammingDistance(b) == b.hammingDistance(a))
}
property("+ is the same as adding with create") {
forAll { (a: bf.Hash, b: String) =>
Equiv[bf.Hash].equiv(a + b, Monoid.plus(a, bf.create(b)))
}
}
property("maybeContains is consistent with contains") {
forAll((a: bf.Hash, b: String) => a.maybeContains(b) == a.contains(b).isTrue)
}
property("after + maybeContains is true") {
forAll((a: bf.Hash, b: String) => (a + b).maybeContains(b))
}
property("checkAndAdd works like check the add") {
forAll { (a: bf.Hash, b: String) =>
val (next, check) = a.checkAndAdd(b)
val next1 = a + b
Equiv[bf.Hash].equiv(next, next1) &&
(check == a.contains(b))
}
}
property("a ++ a = a for BF") {
forAll((a: bf.Hash) => Equiv[bf.Hash].equiv(a ++ a, a))
}
property("BF Instance has 1 or more BitSet") {
forAll { (a: bf.Hash) =>
a match {
case bf.Instance(bs) => bs.size >= 1
case _ => true
}
}
}
}
class ImmutableBloomFilterHashIndices extends CheckProperties {
implicit val bf: Arbitrary[BloomFilter[String]] =
Arbitrary {
for {
hashes <- Gen.choose(1, 10)
width <- Gen.choose(100, 5000000)
} yield BloomFilter[String](hashes, width)
}
property("Indices are non negative") {
forAll((bf: BloomFilter[String], v: Long) => bf.hashToArray(v.toString).forall(e => e >= 0))
}
/**
* This is the version of the Hash as of before the "negative values fix"
*/
case class NegativeHash(numHashes: Int, width: Int) {
val size = numHashes
def apply(s: String): Stream[Int] = nextHash(s.getBytes, numHashes)
private def splitLong(x: Long) = {
val upper = math.abs(x >> 32).toInt
val lower = math.abs((x << 32) >> 32).toInt
(upper, lower)
}
private def nextHash(bytes: Array[Byte], k: Int, digested: Seq[Int] = Seq.empty): Stream[Int] =
if (k == 0)
Stream.empty
else {
val d = if (digested.isEmpty) {
val (a, b) = MurmurHash128(k)(bytes)
val (x1, x2) = splitLong(a)
val (x3, x4) = splitLong(b)
Seq(x1, x2, x3, x4)
} else
digested
Stream.cons(d(0) % width, nextHash(bytes, k - 1, d.drop(1)))
}
}
implicit val pairOfHashes: Arbitrary[(BloomFilter[String], NegativeHash)] =
Arbitrary {
for {
hashes <- Gen.choose(1, 10)
width <- Gen.choose(100, 5000000)
} yield (BloomFilter[String](hashes, width), NegativeHash(hashes, width))
}
property(
"Indices of the two versions of Hashes are the same, unless the first one contains negative index"
) {
forAll { (pair: (BloomFilter[String], NegativeHash), v: Long) =>
val s = v.toString
val (bf, negativeHash) = pair
val indices = negativeHash.apply(s)
(indices == (bf.hashToArray(s).toStream)) || indices.exists(_ < 0)
}
}
}
class BloomFilterFalsePositives[T: Gen: Hash128](falsePositiveRate: Double) extends ApproximateProperty {
type Exact = Set[T]
type Approx = BloomFilter[T]#Hash
type Input = T
type Result = Boolean
val maxNumEntries = 1000
def exactGenerator =
for {
numEntries <- Gen.choose(1, maxNumEntries)
set <- Gen.containerOfN[Set, T](numEntries, implicitly[Gen[T]])
} yield set
def makeApproximate(set: Set[T]) = {
val bfMonoid = BloomFilter[T](set.size, falsePositiveRate)
val values = set.toSeq
bfMonoid.create(values: _*)
}
def inputGenerator(set: Set[T]) =
for {
randomValues <- Gen.listOfN[T](set.size, implicitly[Gen[T]])
x <- Gen.oneOf((set ++ randomValues).toSeq)
} yield x
def exactResult(s: Set[T], t: T) = s.contains(t)
def approximateResult(bf: BloomFilter[T]#Hash, t: T) = bf.contains(t)
}
class BloomFilterCardinality[T: Gen: Hash128] extends ApproximateProperty {
type Exact = Set[T]
type Approx = BloomFilter[T]#Hash
type Input = Unit
type Result = Long
val maxNumEntries = 10000
val falsePositiveRate = 0.01
def exactGenerator =
for {
numEntries <- Gen.choose(1, maxNumEntries)
set <- Gen.containerOfN[Set, T](numEntries, implicitly[Gen[T]])
} yield set
def makeApproximate(set: Set[T]) = {
val bfMonoid = BloomFilter[T](set.size, falsePositiveRate)
val values = set.toSeq
bfMonoid.create(values: _*)
}
def inputGenerator(set: Set[T]) = Gen.const(())
def exactResult(s: Set[T], u: Unit) = s.size
def approximateResult(bf: BloomFilter[T]#Hash, u: Unit) = bf.size
}
class ImmutableBloomFilterProperties extends ApproximateProperties("BloomFilter") {
import ApproximateProperty.toProp
for (falsePositiveRate <- List(0.1, 0.01, 0.001)) {
property(s"has small false positive rate with false positive rate = $falsePositiveRate") = {
implicit val intGen = Gen.choose(1, 1000)
toProp(new BloomFilterFalsePositives[Int](falsePositiveRate), 50, 50, 0.01)
}
}
property("approximate cardinality") = {
implicit val intGen = Gen.choose(1, 1000)
toProp(new BloomFilterCardinality[Int], 50, 1, 0.01)
}
}
class ImmutableBloomFilterTest extends AnyWordSpec with Matchers {
val RAND = new scala.util.Random
"BloomFilter" should {
"be possible to create from an iterator" in {
val bloomFilter = BloomFilter[String](RAND.nextInt(5) + 1, RAND.nextInt(64) + 32)
val entries = (0 until 100).map(_ => RAND.nextInt.toString)
val bf = bloomFilter.create(entries.iterator)
assert(bf.isInstanceOf[bloomFilter.Hash])
}
"be possible to create from a sequence" in {
val bloomFilter = BloomFilter[String](RAND.nextInt(5) + 1, RAND.nextInt(64) + 32)
val entries = (0 until 100).map(_ => RAND.nextInt.toString)
val bf = bloomFilter.create(entries: _*)
assert(bf.isInstanceOf[bloomFilter.Hash])
}
"be possible to create from a BitSet" in {
val bloomFilter = BloomFilter[String](RAND.nextInt(5) + 1, RAND.nextInt(64) + 32)
val entries = (0 until 100).map(_ => RAND.nextInt.toString)
val bf = bloomFilter.create(entries: _*)
val instance = bloomFilter.fromBitSet(bf.toBitSet)
assert(instance.isSuccess)
}
"be possible to create from a empty BitSet" in {
val bloomFilter = BloomFilter[String](RAND.nextInt(5) + 1, RAND.nextInt(64) + 32)
val instance = bloomFilter.fromBitSet(BitSet.empty)
assert(instance.isSuccess)
}
"fail to create from a larger BitSet" in {
val bloomFilter = BloomFilter[String](6, 0.01)
val entries = (0 until 6).map(_ => RAND.nextInt.toString)
val bf = bloomFilter.create(entries: _*)
val instance = BloomFilter[String](6, 0.1).fromBitSet(bf.toBitSet)
assert(instance.isFailure)
}
"identify all true positives" in {
(0 to 100).foreach { _ =>
val bloomFilter = BloomFilter[String](RAND.nextInt(5) + 1, RAND.nextInt(64) + 32)
val numEntries = 5
val entries = (0 until numEntries).map(_ => RAND.nextInt.toString)
val bf = bloomFilter.create(entries: _*)
entries.foreach { i =>
assert(bf.contains(i).isTrue)
}
}
}
"have small false positive rate" in {
val iter = 10000
Seq(0.1, 0.01, 0.001).foreach { fpProb =>
val fps = (0 until iter).map { _ =>
val numEntries = RAND.nextInt(10) + 1
val bfMonoid = BloomFilter[String](numEntries, fpProb)
val entries = RAND
.shuffle((0 until 1000).toList)
.take(numEntries + 1)
.map(_.toString)
val bf = bfMonoid.create(entries.drop(1): _*)
if (bf.contains(entries(0)).isTrue) 1.0 else 0.0
}
val observedFpProb = fps.sum / fps.size
// the 2.5 is a fudge factor to make the probability of it low
// in tests
assert(observedFpProb <= 2.5 * fpProb)
}
}
"approximate cardinality" in {
val bloomFilter = BloomFilter[String](10, 100000)
Seq(10, 100, 1000, 10000).foreach { exactCardinality =>
val items = (1 until exactCardinality).map(_.toString)
val bf = bloomFilter.create(items: _*)
val size = bf.size
assert(size ~ exactCardinality)
assert(size.min <= size.estimate)
assert(size.max >= size.estimate)
}
}
"work as an Aggregator" in {
(0 to 10).foreach { _ =>
val bloomFilter = BloomFilter[String](RAND.nextInt(5) + 1, RAND.nextInt(64) + 32)
import bloomFilter.aggregator
val numEntries = 5
val entries = (0 until numEntries).map(_ => RAND.nextInt.toString)
val bf = aggregator(entries)
entries.foreach(i => assert(bf.contains(i.toString).isTrue))
}
}
"not serialize @transient dense Instance" in {
val bloomFilter = BloomFilter[String](10, 0.1)
def serialize(bf: bloomFilter.Hash): Array[Byte] = {
val stream = new ByteArrayOutputStream()
val out = new ObjectOutputStream(stream)
out.writeObject(bf)
out.close()
stream.close()
stream.toByteArray
}
val bf = bloomFilter.create((1 until 10).map(_.toString): _*)
val bytesBeforeSizeCalled = Bytes(serialize(bf))
val beforeSize = bf.size
assert(bf.contains("1").isTrue)
val bytesAfterSizeCalled = Bytes(serialize(bf))
assert(bytesBeforeSizeCalled.size == bytesAfterSizeCalled.size)
assert(beforeSize == bf.size)
}
/**
* this test failed before the fix for https://github.com/twitter/algebird/issues/229
*/
"not have negative hash values" in {
val bf = BloomFilter[String](2, 4752800)
val s = "7024497610539761509"
val index = bf.hashToArray(s).head
assert(index >= 0)
}
}
"BloomFilter method `checkAndAdd`" should {
"be identical to method `+`" in {
(0 to 100).foreach { _ =>
val bloomFilter = BloomFilter[String](RAND.nextInt(5) + 1, RAND.nextInt(64) + 32)
import bloomFilter._
val numEntries = 5
val entries = (0 until numEntries).map(_ => RAND.nextInt.toString)
val bf = bloomFilter.create(entries: _*)
entries
.map(entry => (entry, bloomFilter.create(entry)))
.foldLeft((Monoid.zero, Monoid.zero)) { case ((left, leftAlt), (entry, _)) =>
val (newLeftAlt, contained) = leftAlt.checkAndAdd(entry)
left.contains(entry) shouldBe contained
(left + entry, newLeftAlt)
}
entries.foreach(i => assert(bf.contains(i.toString).isTrue))
}
}
}
"BloomFilters" should {
"be able to compute Hamming distance to each other" in {
import BloomFilterTestUtils._
val bf = BloomFilter[String](3, 64)
val firstBloomFilter = bf.create(Seq("A").iterator)
val secondBloomFilter = bf.create(Seq("C").iterator)
val distance1 = firstBloomFilter.hammingDistance(secondBloomFilter)
assert(distance1 === 4)
val thirdBloomFilter = bf.create(Seq("A", "B", "C").iterator)
// Make it dense to make sure that that case is also covered
// even though these examples are small and thus sparse.
      val fourthBloomFilter = toDense(bf)(bf.create(Seq("C", "D", "E").iterator))
      val distance2 = thirdBloomFilter.hammingDistance(fourthBloomFilter)
assert(distance2 === 8)
val emptyBloomFilter = bf.create(Iterator.empty)
val distanceToEmpty = thirdBloomFilter.hammingDistance(emptyBloomFilter)
assert(distanceToEmpty === thirdBloomFilter.numBits)
}
}
}
| {
"redpajama_set_name": "RedPajamaGithub"
} | 4,771 |
\section{Security}
The security check employing conjugate bases purely in frequency and time relies on
the mutually unbiased nature of these bases; a measurement in the wrong basis reveals no information about the state in the other basis and by their conjugate nature introduces errors. We therefore seek a mutually unbiased basis to that employed in our resource-efficient scheme. Alice and Bob's coarse measurements in spectrum are described by an operator that projects subsets of spectral states onto degenerate eigenvalues $\Omega_l$. The degeneracy within these subsets is lifted by also performing measurements in time, which are described by an operator with coarse timing resolution $T_m$ that correspond to degenerate sets of eigenstates. Simultaneous eigenstates for time and spectral measurements $\ket{\Omega_l T_m}$ describe one basis set. A conjugate basis can be found that forms the second basis for measurements by Alice and Bob.
Under certain conditions, a security check can be performed using simple instrumentation. For now, we assume that Eve chooses to attack by using either a Gaussian envelope in time or one in frequency, i.e.,
$\hat{E}_t=\int_{-\infty}^\infty e^{-t^2/4(\sigma_{coh}^{E})^2}\ket{t}\bra{t}dt$ \cite{PhysRevLett.98.060503} or $\hat{E}_\omega=\int_{-\infty}^\infty e^{-(\sigma_{cor}^{E})^2(\omega-\omega_p/2)^2}\ket{\omega}\bra{\omega}d\omega$, respectively.
Eve's temporal measurement leads to a decrease in $\sigma_{coh}$, and her frequency measurement creates an increase in the biphoton correlation time, $\sigma_{cor}$. Alice and Bob can detect both of these attacks with the `extended Franson interferometer' (eFI) shown in Fig. \ref{secure}. The eFI is composed of two unbalanced Mach-Zehnder interferometers (MZI) in the possession of Alice and Bob, where the long path on one arm can be actively modulated.
The probability for Alice and Bob to detect a photon coincidence in their eFI is \cite{PhysRevLett.62.2205}
\begin{equation}
P_C\propto \frac{1}{2}+\frac{1}{2}\cos[\omega(2\Delta t-\delta t)]e^{-\delta t^{2}/8\sigma_{cor}^{2}}e^{-\Delta t^{2}/8\sigma_{coh}^{2}},
\end{equation}
\noindent where $\omega = \omega_p/2$ is the center frequency of the SPDC signal and idler photons, $\Delta t$ is the path-length difference between the long and short arm of Alice's MZI, and $\delta t$ is the path-length difference between Alice's and Bob's long arm. $\Delta t$ is large enough to avoid single photon interference between long and short paths of a single arm of the eFI, and $\delta t$ is varied on the order of $1/\omega$ about zero. The interference is plotted in Fig. \ref{secure} as a function of an additional delay in Alice's MZI. This interference curve shows the oscillations in $P_C$ that is typical of the Franson interferometer near $\delta t = 0$. In addition, this oscillation has a Gaussian envelope whose width is given by $\sigma_{cor}$.
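To make the shape of this interference curve concrete, the sketch below evaluates the coincidence probability above for illustrative, dimensionless parameters (the values of $\omega$, $\sigma_{cor}$, $\sigma_{coh}$ and $\Delta t$ are assumptions for illustration, not experimental numbers): the fast oscillation in $\delta t$ sits under a Gaussian envelope whose width is set by $\sigma_{cor}$.

```python
import math

def coincidence(dt, DT, omega, s_cor, s_coh):
    """Unnormalized eFI coincidence probability:
    P_C = 1/2 + 1/2 cos(omega*(2*DT - dt)) * Gaussian envelopes."""
    env = math.exp(-dt**2 / (8 * s_cor**2)) * math.exp(-DT**2 / (8 * s_coh**2))
    return 0.5 + 0.5 * math.cos(omega * (2 * DT - dt)) * env

def visibility(dt, DT, s_cor, s_coh):
    """Fringe visibility, i.e. the Gaussian envelope of the oscillation."""
    return math.exp(-dt**2 / (8 * s_cor**2)) * math.exp(-DT**2 / (8 * s_coh**2))

# Illustrative, dimensionless numbers: times in units of sigma_cor.
omega, s_cor, s_coh, DT = 2 * math.pi, 1.0, 1e3, 10.0

v_center = visibility(0.0, DT, s_cor, s_coh)  # fringe contrast near delta t = 0
v_wing = visibility(3.0, DT, s_cor, s_coh)    # three correlation times away
```

Scanning `dt` traces out fringes whose contrast decays on the scale of $\sigma_{cor}$; a temporal attack by Eve (smaller $\sigma_{coh}$) suppresses `v_center`, while a spectral attack (larger effective $\sigma_{cor}$) widens the envelope and raises `v_wing`.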
\begin{figure}
\centering\includegraphics[scale=0.5]{\string"T_and_tau\string".pdf}
\centering\caption{\small{The eFI used for security checks. Alice switches the long arm of her Franson between delays $\delta t_1$ and $\delta t_2$. This allows determination of both $\sigma_{coh}$ and $\sigma_{cor}$ so that weak spectral and temporal measurements on the photon pair can be detected. Alice and Bob do security checks by varying $\delta t$ as shown in the insets. We show one possible switching scheme that makes use of Pockels cells (PC) to rotate the polarization of the photon by $\pi/2$ so that it is either transmitted or reflected at the polarizing beam splitter (PBS) and sent to the extended or standard delay line, respectively. The eFI also includes non-polarizing beam splitters (BS) and single photon detectors (D).}}
\label{secure}
\end{figure}
The visibility of the eFI interference is $V=e^{-\delta t^{2}/8\sigma_{cor}^{2}}e^{-\Delta t^{2}/8\sigma_{coh}^{2}}$. If Eve measures in the temporal domain with a resolution better than $\Delta t$, then Alice and Bob can detect a drop in $V$ near $\delta t =0$; this is the security check used by Kahn \emph{et al.} in Ref. \cite{PhysRevLett.98.060503}. On the other hand, if Eve measures in the spectral domain with a resolution better than $\Delta\Omega$, then Alice and Bob can detect an increase in $V$ near $\delta t =\sigma_{cor}$. To guard against temporal and spectral measurements by Eve simultaneously, Alice and Bob measure $V$ while Alice switches randomly between delays of 0 and $\sigma_{cor}$ (see Fig. \ref{secure}).
Alice and Bob can deduce the correlation time and coherence time from two visibility measurements $V_1$ and $V_2$ using two delays, $\delta t_1$ and $\delta t_2$, respectively. We label these extrapolated values $\sigma_{coh}^{E'}$ and $\sigma_{cor}^{E'}$, which are given by
\begin{equation}
(\sigma_{cor}^{E'})^2=\frac{1}{8}\frac{\delta t^2_1-\delta t^2_2}{\ln V_2 - \ln V_1}
\end{equation}
\begin{equation}
(\sigma_{coh}^{E'})^2=\frac{1}{8}\frac{\Delta t^2(\delta t^2_1-\delta t^2_2)}{\delta t^2_2\ln V_1 - \delta t^2_1\ln V_2}.
\end{equation}
Using $(\sigma_{coh}^E)^2=1/[(\sigma_{coh}^{E'})^{-2}-\sigma_{coh}^{-2}]$ and $(\sigma_{cor}^E)^2=(\sigma_{cor}^{E'})^{2}-\sigma_{cor}^{2}$ derived from this measurement, the bound on Eve's information per photon is $I_E\le\log_2 (\sigma_{coh}/\sigma_{coh}^E)+\log_2 (\sigma_{cor}^E/\sigma_{cor})$, which is the sum of her information obtained from temporal and spectral measurements. Our assumption of a Gaussian form of Eve's POVM will be generalized in future work.
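As a numerical sanity check of this extraction, the sketch below forward-models two visibility measurements, inverts them for the widths, and evaluates the information bound. All numbers are illustrative assumptions. The coherence-time denominator is written as $\delta t^2_2\ln V_1 - \delta t^2_1\ln V_2$ so the extracted variance comes out positive, and Eve's spectral resolution is recovered from the quadrature broadening of the correlation time, consistent with the frequency-domain result in the supplementary analysis.

```python
import math

def vis(dt, DT, s_cor, s_coh):
    """Forward model: eFI fringe visibility at interferometer delays dt, DT."""
    return math.exp(-dt**2 / (8 * s_cor**2)) * math.exp(-DT**2 / (8 * s_coh**2))

def extract(dt1, V1, dt2, V2, DT):
    """Invert two visibility readings for (sigma_cor', sigma_coh')."""
    s_cor2 = (dt1**2 - dt2**2) / (8 * (math.log(V2) - math.log(V1)))
    s_coh2 = DT**2 * (dt1**2 - dt2**2) / (
        8 * (dt2**2 * math.log(V1) - dt1**2 * math.log(V2)))
    return math.sqrt(s_cor2), math.sqrt(s_coh2)

def eve_bits(s_cor_meas, s_coh_meas, s_cor0, s_coh0):
    """Bound on Eve's information per photon from measured vs. source widths:
    a temporal attack shortens sigma_coh, a spectral attack lengthens sigma_cor."""
    s_coh_E = 1.0 / math.sqrt(s_coh_meas**-2 - s_coh0**-2)
    s_cor_E = math.sqrt(s_cor_meas**2 - s_cor0**2)
    return math.log2(s_coh0 / s_coh_E) + math.log2(s_cor_E / s_cor0)

# Round trip with assumed widths (arbitrary time units).
s_cor, s_coh, DT = 2.0, 100.0, 30.0
dt1, dt2 = 0.5, 3.0
s_cor_m, s_coh_m = extract(dt1, vis(dt1, DT, s_cor, s_coh),
                           dt2, vis(dt2, DT, s_cor, s_coh), DT)

# Example attack: measured widths differ from the source values.
I_E = eve_bits(4.0, 50.0, s_cor, s_coh)
```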
\section{Conclusion}
The often limited photon budget for quantum key distribution makes high-dimensional encoding desirable. However, achieving the limit on this dimensionality in the temporal domain using time-frequency entangled photon pairs requires detectors with sub-ps timing jitter and resolution. By invoking conjugate spectral correlations, we present a protocol to approach this fundamental limit using current detectors and existing telecom networks. The conjugate nature of temporal and spectral encoding means that one can trade spectral for temporal bits (and vice versa) to minimize the effect of channel distortion such as nonlinear frequency conversion and dispersion, in addition to optimizing over transmission rate and channel bandwidth.
\\\\
This work was supported by the DARPA Information in a Photon program, through grant W911NF-10-1-0416 from the Army Research Office.
\section{Methods}
\subsection{Mutual information}
\label{sec:MI}
Alice and Bob ideally communicate information by discretizing the wave function into agreed-upon time-bin $\ket{\sigma_{bin}^i}$ and frequency-bin $\ket{\nu^i}$ macrostates by
\begin{equation}
\ket{\bar{\Psi}}=\sum_{i,j,k,l}G^{i,j,k,l}\ket{\sigma_{bin,A}^{i},\sigma_{bin,B}^{j},\nu_A^k,\nu_B^l},
\end{equation}
where
\begin{equation}
G^{i,j,k,l}=\int_{i\sigma_{bin}}^{(i+1)\sigma_{bin}}\!\int_{j\sigma_{bin}}^{(j+1)\sigma_{bin}}\textnormal{FT}_2\left[\int_{k\delta\nu}^{(k+1)\delta\nu}\!\int_{l\delta\nu}^{(l+1)\delta\nu}\psi(\omega_A,\omega_B)d\omega_Ad\omega_B\right] dt_A dt_B.
\end{equation}
The probability of Alice and Bob projecting into time bins $\ket{\sigma_{bin,A}^i}$ and $\ket{\sigma_{bin,B}^j}$ and frequency bins $\ket{\nu_A^k}$ and $\ket{\nu_B^l}$ is $p^{i,j,k,l}=|\bra{\sigma_{bin}^i,\sigma_{bin}^j,\nu_A^k,\nu_B^l}\bar{\Psi}\rangle|^2=|G^{i,j,k,l}|^2$. We label the frequency bins so that for $k=l$, the center frequencies of these bins add to the pump frequency. We plot the mutual information in Fig. \ref{jitter}b as a function of the number of spectral channels added. The wave function is a two-dimensional Gaussian. As we increase the number of spectral channels, the mutual information (MI) increases; however, the timing correlations eventually start to decrease, as the filtered photons extend into neighboring time bins. Jitter is also very important to the MI calculation. We include this in the inset to Fig. \ref{jitter}b.
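The binning described above can be reproduced with a small numerical sketch. As a simplified, purely temporal illustration (all parameter values are assumptions), the joint density is taken as $|\psi(t_A,t_B)|^2\propto\exp[-(t_A-t_B)^2/2\sigma_{cor}^2 - t_A^2/2\sigma_{coh}^2]$, matching the amplitude used in the supplementary material; bin probabilities are accumulated on an oversampled grid and the mutual information between Alice's and Bob's time-bin indices is computed directly.

```python
import math

def time_bin_mi(s_cor, s_coh, s_bin, n_bins, oversample=4):
    """Mutual information (bits) between Alice's and Bob's time-bin indices
    for |psi|^2 ~ exp(-(tA-tB)^2/(2 s_cor^2) - tA^2/(2 s_coh^2))."""
    n = n_bins * oversample
    half = n_bins * s_bin / 2.0
    step = s_bin / oversample
    p = [[0.0] * n_bins for _ in range(n_bins)]  # joint bin probabilities
    total = 0.0
    for i in range(n):
        tA = -half + (i + 0.5) * step
        for j in range(n):
            tB = -half + (j + 0.5) * step
            w = math.exp(-(tA - tB)**2 / (2 * s_cor**2) - tA**2 / (2 * s_coh**2))
            p[i // oversample][j // oversample] += w
            total += w
    pa = [sum(row) for row in p]          # Alice's marginal (unnormalized)
    pb = [sum(col) for col in zip(*p)]    # Bob's marginal (unnormalized)
    mi = 0.0
    for a in range(n_bins):
        for b in range(n_bins):
            if p[a][b] > 0.0:
                mi += (p[a][b] / total) * math.log2(p[a][b] * total / (pa[a] * pb[b]))
    return mi

# Sharp correlations (s_cor << bin width) vs. blurred correlations.
mi_sharp = time_bin_mi(s_cor=0.1, s_coh=8.0, s_bin=1.0, n_bins=16)
mi_blurred = time_bin_mi(s_cor=2.0, s_coh=8.0, s_bin=1.0, n_bins=16)
```

As expected, the MI drops as the correlation time grows past the bin width, mirroring the behavior described above for filtered photons spilling into neighboring bins.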
\subsection{Detector timing jitter}
Detector timing jitter refers to the added uncertainty in the photon detection time of some stimulus, purely a result of detector electronics. Superconducting nanowire single photon detectors and InGaAs APDs both exhibit jitter of roughly 30 to 40 ps \cite{Hadfield_single_photon}. We model timing jitter as a Gaussian projection, $\hat{\sigma}_{det}=\int e^{-t_x^{2}/2\sigma_{det}^{2}}\ket{t}\bra{t+t_x}dt_x$. The jitter profile of a real photodetector is not truly Gaussian and can be quite asymmetric; however, (1) this model allows for first-order analysis and (2) certain single photon detectors do have approximately Gaussian timing jitter \cite{4277352}. If we apply $\hat{\sigma}_{det}$ to both Alice's and Bob's photons, assuming the two-dimensional Gaussian given earlier, we get
\begin{equation}
\hat{\sigma}_{det,A}\hat{\sigma}_{det,B}\ket{\Psi} \propto \int^\infty_{-\infty}\int^\infty_{-\infty}\exp\left[\frac{-(t_A+t_B)^2}{4\sigma_{det}^2+16\sigma_{coh}^2}\right]\exp\left[\frac{-(t_A-t_B)^2}{4\sigma_{det}^2+4\sigma_{cor}^2}\right]e^{i\omega_p(t_A+t_B)/2}\ket{t_{A},t_{B}}dt_{A}dt_{B}
\end{equation}
Since $\sigma_{coh} \gg \sigma_{det}$, the most important effect of jitter is to increase the observed correlation time roughly from $\sigma_{cor}$ to $\sigma_{det}$. This can have a significant effect on the mutual information between Alice and Bob if $\sigma_{det}$ is on the order of $\sigma_{bin}$, as shown in Fig. \ref{jitter}b.
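Reading the widths off the jittered wave packet above gives $\sigma_{cor}\to\sqrt{\sigma_{cor}^2+\sigma_{det}^2}$ and $\sigma_{coh}\to\sqrt{\sigma_{coh}^2+\sigma_{det}^2/4}$. The sketch below plugs in illustrative numbers (a 2 ps correlation time, a 10 ns coherence time, and 30 ps of jitter; these are assumptions, not measured values) to show that the observed correlation time is dominated by the detector while the coherence time is essentially untouched.

```python
import math

def jittered_widths(s_cor, s_coh, s_det):
    """Effective biphoton widths after Gaussian jitter on both detectors."""
    return math.sqrt(s_cor**2 + s_det**2), math.sqrt(s_coh**2 + s_det**2 / 4)

# Illustrative values in picoseconds (assumptions, not measured numbers).
s_cor, s_coh, s_det = 2.0, 10_000.0, 30.0
s_cor_eff, s_coh_eff = jittered_widths(s_cor, s_coh, s_det)

# Roughly how many temporal bits the jitter costs per photon pair.
lost_bits = math.log2(s_cor_eff / s_cor)
```

With these numbers the jitter inflates the correlation time by a factor of about 15, i.e. roughly four temporal bits per pair, which is the motivation for trading temporal for spectral encoding.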
\section{Supplementary Information}
\subsection{Lossy Franson interferometry}
The Franson interference derived in the text assumes lossless propagation through the interferometer. This assumption is not valid in photonic integrated chips or fiber networks. We can account for loss in our analysis by adding a virtual beam splitter in the long path of the otherwise-lossless Franson, which couples the waveguide mode with a vacuum mode (see Fig. \ref{lossy}). We work in the Heisenberg picture, evolving the annihilation operator through the virtual-loss beam splitter and the two Franson beam splitters. The matrix for beam splitters 1 and 2, which leave the third mode undisturbed, is given by
\begin{equation}
\hat{U}_i = \left(\begin{matrix} \sqrt{r_i}&\sqrt{1-r_i}&0\\\sqrt{1-r_i}&-\sqrt{r_i}&0\\0&0&1 \end{matrix}\right)
\end{equation}
\noindent where $i\in\{1,2\}$. The virtual-loss beam splitter is given by
\begin{equation}
\hat{U}_L = \left(\begin{matrix} 1&0&0\\ 0&\sqrt{t_L}&\sqrt{1-t_L}\\0&\sqrt{1-t_L}&-\sqrt{t_L}\\ \end{matrix}\right)
\end{equation}
\begin{figure}
\begin{centering}
\includegraphics[scale=0.5]{\string"lossy_franson\string".pdf}
\par\end{centering}
\caption{\small{The eFI with an additional virtual beam splitter for loss in the long arm.}}
\label{lossy}
\end{figure}
The resulting annihilation operators are then $\hat{a}_A(t_A)=C_1\hat{a}(t)+C_2\hat{a}(t-\Delta t)$ and $\hat{a}_B(t_B)=C_1\hat{a}(t)+C_2\hat{a}(t-\Delta t - \delta t)$, disregarding the vacuum term, which will not affect coincidence counting. $C_1=\sqrt{r_1}\sqrt{r_2}$ and $C_2=\sqrt{1-r_1} \sqrt{1-r_2} \sqrt{t_L}$. For $r_1=r_2=1/2$, and $t_L=e^{-2t/\tau_\alpha}$ where $\tau_\alpha$ is the lifetime of the photon in the interferometer arm, the visibility simplifies to
\begin{equation}
V_{PIC}=\frac{2e^{-2\Delta t/\tau_{\alpha}}}{1+e^{-4\Delta t/\tau_{\alpha}}}e^{-\delta t^{2}/8\sigma_{cor}^{2}}e^{-\Delta t^{2}/8\sigma_{coh}^{2}}.
\end{equation}
However, for maximum visibility, $C_1=C_2$, so
\begin{equation}
\frac{\sqrt{r_1}\sqrt{r_2}}{\sqrt{1-r_1} \sqrt{1-r_2}}=\sqrt{t_L}.
\end{equation}
The Franson beam splitters can therefore be tuned to account for loss in the interferometer.
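A quick numerical check of this tuning condition, with the assumed mode ordering (short arm, long arm, vacuum) and an assumed long-arm transmission $t_L$: composing $\hat{U}_2\hat{U}_L\hat{U}_1$ reproduces the path amplitudes $C_1=\sqrt{r_1 r_2}$ and $C_2=\sqrt{(1-r_1)(1-r_2)t_L}$, and choosing $r_1=r_2=\sqrt{t_L}/(1+\sqrt{t_L})$ balances them.

```python
import math

def matmul(A, B):
    """3x3 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def splitter(r):
    """Franson beam splitter acting on modes (0, 1); mode 2 untouched."""
    return [[math.sqrt(r), math.sqrt(1 - r), 0.0],
            [math.sqrt(1 - r), -math.sqrt(r), 0.0],
            [0.0, 0.0, 1.0]]

def long_arm_loss(tL):
    """Virtual splitter coupling the long arm (mode 1) to vacuum (mode 2)."""
    return [[1.0, 0.0, 0.0],
            [0.0, math.sqrt(tL), math.sqrt(1 - tL)],
            [0.0, math.sqrt(1 - tL), -math.sqrt(tL)]]

tL = 0.25                                   # assumed long-arm transmission
r = math.sqrt(tL) / (1 + math.sqrt(tL))     # balance condition r/(1-r) = sqrt(tL)
U = matmul(splitter(r), matmul(long_arm_loss(tL), splitter(r)))

C1 = math.sqrt(r) * math.sqrt(r)                            # short-path amplitude
C2 = math.sqrt(1 - r) * math.sqrt(1 - r) * math.sqrt(tL)    # lossy long-path amplitude
```

With the time delay suppressed, both paths feed the same output entry, so `U[0][0] = C1 + C2`; the balanced choice of `r` makes the two interfering amplitudes equal, restoring full fringe contrast at the cost of overall count rate.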
\subsection{Eve and the wave function}
We focus on the case of a single eavesdropper measuring a single photon of the photon pair. Eve's temporal measurement is a Gaussian filtering function
\begin{equation}
\hat{E}_t=\int_{-\infty}^\infty e^{-t^2/4(\sigma_{coh}^{E})^2}\ket{t}\bra{t}dt
\end{equation}
Following \cite{PhysRevLett.98.060503}, the amplitude function can be written as
\begin{equation}
\psi(t_A,t_B)\propto \exp[-(t_A-t_B)^2/4\sigma_{cor}^2]\exp[-t_A^2/4\sigma_{coh}^2]e^{i\omega_p(t_A+t_B)/2},
\end{equation}
for $\sigma_{coh}\gg \sigma_{cor}$.
Therefore
\begin{eqnarray}
\ket{\Psi_E} &=& \hat{E}_t\ket{\Psi} \\
&\propto& \int^\infty_{-\infty}\int^\infty_{-\infty}\exp\left[-t_A^2\left(\frac{1}{4\sigma_{coh}^2}+\frac{1}{4(\sigma_{coh}^{E})^2}\right)\right]\exp\left[\frac{-(t_A-t_B)^2}{4\sigma_{cor}^2}\right]e^{i\omega_p(t_A+t_B)/2}\ket{t_{A},t_{B}}\,dt_A\,dt_B \nonumber
\end{eqnarray}
so the coherence time of the biphoton packet is strongly influenced by Eve's timing resolution when $\sigma_{coh}^E\ll\sigma_{coh}$. Similarly, we define a weak spectral POVM,
\begin{equation}
\hat{E}_\omega=\int_{-\infty}^\infty e^{-(\sigma_{cor}^{E})^2(\omega-\omega_p/2)^2}\ket{\omega}\bra{\omega}\,d\omega.
\end{equation}
For $1/\sigma_{cor}\gg1/\sigma_{coh}$, $\ket{\Psi}$ can be written in the spectral-domain representation as follows
\begin{equation}
\ket{\Psi}\propto\int\int\exp[-(\sigma_{cor}^{2}/4)(2\omega_{A}-\omega_{p})^{2}]\exp[-\sigma_{coh}^{2}(\omega_{A}+\omega_{B}-\omega_{p})^{2}]\ket{\omega_{A},\omega_{B}} d\omega_{A}d\omega_{B},
\end{equation}
from which we find that
\begin{eqnarray}
\hat{E}_\omega\ket{\Psi}\propto\int\int\exp[-(\sigma_{cor}^{2}/4+(\sigma_{cor}^{E})^{2}/4)(2\omega_{A}-\omega_{p})^{2}]\\\nonumber
\times\exp[-\sigma_{coh}^{2}(\omega_{A}+\omega_{B}-\omega_{p})^{2}]\ket{\omega_{A},\omega_{B}} d\omega_{A}d\omega_{B}.
\end{eqnarray}
Thus, Eve projects the biphoton pair onto a narrower frequency distribution.
Reverting to the time-domain representation we get
\begin{equation}
\hat{E}_\omega\ket{\Psi}\propto \int\int\exp(-t_A^2/4\sigma^2_{coh})\exp[-(t_A-t_B)^2/4(\sigma_{cor}^E)^2]e^{i\omega_p(t_A+t_B)/2}\ket{t_A,t_B}dt_Adt_B,
\end{equation}
for $\sigma_{cor} \ll \sigma^E_{cor} \ll \sigma_{coh}$.
\bibliographystyle{apsrev_no_links.bst}
# ATI TEAS 6

## Free ATI TEAS 6 Practice Test

### Used by over 30,000 nursing students

Take our free ATI TEAS 6 Practice Test with 169 questions, answer explanations, and a diagnostic report.

## FAQ: Have questions? We have answers

Check out some of the most frequently asked questions from our students.

100% yes, which can't be said for most TEAS test prep. We only work with highly qualified subject matter experts who have actually taken the TEAS test to develop the content, and our editorial team is second to none, ensuring you get the highest quality questions. We don't mess around when it comes to preparing for the TEAS.

You bet, we help over 10,000 students per year prepare for and dominate the TEAS to the best of their ability. Our reviews speak for themselves.

## What Is A Passing Score For The ATI TEAS?

Each individual program sets its own passing scores. However, most programs require at least 70%–80% on each test section.

## Can I share my TEAS test score with more than one school?

Yes, you can share your TEAS score with as many schools as you would like, but there is a $27 charge for each additional school that you would like to receive results.

## What is on the ATI TEAS 6 test?

Click to watch detailed video explanations of every single topic for each subject.

- 53 Questions, 64 minutes
- 36 Questions, 35 minutes
- 55 Questions, 35 minutes
- 55 Questions, 35 minutes

## Reading Section: 53 questions/64 minutes

The Reading section of the ATI TEAS (TEAS 6) test comprises about 31% of the entire test. The test has 53 questions that you have to complete in 64 minutes. To do well, you need to be able to read for comprehension of key ideas, as well as details. The test also has questions regarding the author's purpose, style, and point of view. Finally, you will be asked to take the knowledge you gain from reading and extend it through strategies such as prediction and analysis.

This section covers key ideas and details in reading passages, the craft and structure of sentences, and the integration of knowledge and ideas.

- If the question doesn't reference something in one of the answers, that answer is probably incorrect. Check to see what is/isn't referenced and choose the best answer from there.
- Do not assume facts about questions. Often, if information is not provided in the question, it will not be relevant. Stick to the facts that are provided.
- Some questions will focus on your ability to determine the difference between opinion and fact. Practice recognizing the difference between fact (the grass is green) and opinion (the grass smells nice).
- Read carefully and slowly. Questions may be confusing if you read too quickly.
- If you think that two answers could be correct, ask yourself, "What is it REALLY asking?"
- Study and know the different types of writing styles you may be asked to identify, e.g. narrative, expository, entertaining, analytical, persuasive, etc.
- Know how to identify first person (I), second person (you), and third person (narration).
- Use only the information you are given; if it is not stated in the text, then don't assume it to be relevant.
- Use process of elimination. Eliminate answers you know are wrong and work your way to one final answer.
- Know how to use an index, dictionary, almanac, encyclopedia, and glossary.
- Try to improve your reading speed and comprehension in advance. You want to ensure that you can finish the section before the time is up.
- Pay attention to the wording in questions. The wording in the question itself will usually provide helpful hints that can lead you toward the correct answer.
Q: What are the `0.0.0.0` addresses returned by `ntpdc`?

This is a freshly installed Ubuntu 16.04.2 server, and I installed the ntp package there. It currently runs with the default config, with only a single line added:
enable mode7
since otherwise ntpdc and collectd cannot fetch data from it.
And what I cannot interpret is this output:
# ntpdc -c peers localhost
remote local st poll reach delay offset disp
=======================================================================
=ntp2.ntp.net.nz 10.50.200.3 1 256 377 0.01117 0.000599 0.10858
*timeball1.its.w 10.50.200.3 1 256 377 0.01054 -0.000771 0.10974
=timeball3.its.w 10.50.200.3 1 256 377 0.01007 -0.001039 0.11723
=ns1.tdc.akl.tel 10.50.200.3 2 512 377 0.00882 0.000451 0.12932
=ntp1.ntp.net.nz 10.50.200.3 1 256 377 0.01041 0.000254 0.13625
=0.0.0.0 0.0.0.0 16 64 0 0.00000 0.000000 4.00000
=0.0.0.0 0.0.0.0 16 64 0 0.00000 0.000000 4.00000
=0.0.0.0 0.0.0.0 16 64 0 0.00000 0.000000 4.00000
=0.0.0.0 0.0.0.0 16 64 0 0.00000 0.000000 4.00000
=0.0.0.0 0.0.0.0 16 64 0 0.00000 0.000000 4.00000
What are those 5 lines with 0.0.0.0 remote?
Additionally, here is the output from ntpq:
# ntpq -p
remote refid st t when poll reach delay offset jitter
==============================================================================
0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 0.000 0.000
1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 0.000 0.000
2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 0.000 0.000
3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 0.000 0.000
ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 0.000 0.000
-ntp1.ntp.net.nz .GPS. 1 u 43 256 377 10.418 0.254 0.443
-ns1.tdc.akl.tel 74.189.58.78 2 u 351 512 377 8.831 0.451 0.270
+timeball3.its.w .GPS. 1 u 148 256 377 10.080 -1.039 1.199
*timeball1.its.w .GPS. 1 u 73 256 377 10.551 -0.771 0.431
+ntp2.ntp.net.nz .GPS. 1 u 79 256 377 11.183 0.599 0.270
Why this question at all:
collectd causes these annoying syslog messages and I think it's relevant:
Jul 12 01:59:45 server collectd[2773]: uc_update: Value too old: name = server.domain.tld/ntpd/time_dispersion-0.0.0.0; value time = 1499824785.998; last cache update = 1499824785.998;
Jul 12 01:59:45 server collectd[2773]: uc_update: Value too old: name = server.domain.tld/ntpd/time_offset-0.0.0.0; value time = 1499824785.998; last cache update = 1499824785.998;
Jul 12 01:59:45 server collectd[2773]: uc_update: Value too old: name = server.domain.tld/ntpd/delay-0.0.0.0; value time = 1499824785.998; last cache update = 1499824785.998;
A: Those are placeholders for the pool associations. See this bug:
Bug 2014: strange interaction between pool directive and maxclock
Notice the `p` in the type (`t`) column of the `ntpq -p` output? That indicates it's a placeholder entry for the pool directives.
As for "why this question at all": the problem lies in whatever you are using to parse the output before sending it to collectd. It should ignore any line whose type is `p`.
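For example, a minimal Python sketch of such a filter (`real_peers` is a hypothetical helper name; the field layout is assumed to match the ten-column `ntpq -p` billboard shown above):

```python
def real_peers(ntpq_output):
    """Return the ntpq -p data rows, skipping headers and pool placeholders."""
    peers = []
    for line in ntpq_output.splitlines():
        fields = line.split()
        # data rows have 10 fields; field 4 is the type column
        # ('u' = unicast peer, 'p' = pool placeholder, ...)
        if len(fields) != 10 or fields[0] == "remote" or fields[3] == "p":
            continue
        peers.append(fields)
    return peers
```

Feeding only these rows to collectd avoids the `0.0.0.0` placeholder entries entirely.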
Q: Define a generator which updates global variables before the first `__next__()` call

Given the following function in Python, I would like to update the global variable before calling `next()`. Let me show it to you with an example.
# some script happening before
# a (global) variable is created
L = 0
def generate(batch=128, ID=0, YIND=0):
    # just a call to a global dictionary
    x_, y_ = global_dict[id_to_keys[ID]]
    # update global variable please
    global L
    L = len(x_)
    X, Y = [], []
    counter = 0
    for i, x in enumerate(x_):
        counter += 1
        X.append(x)
        Y.append(y_[i])
        if counter == batch:
            counter = 0
            yield np.asarray(X[-batch:]), np.asarray(Y[-batch:])
Then you can run:
print(L)
g = generate()
print(f'I would like this to output some number but it outputs ',L)
_ = g.__next__()
print(f'I would like this to output some number and it outputs ',L)
Which outputs the following:
I would like this to output some number but it outputs 0
I would like this to output some number and it outputs 12312
Finally, please note that there is a way of doing this via a class definition that has a class variable, but I'm currently looking for a fully "functional" implementation.
Thank you for your time
A: I'm not entirely sure if I understand correctly what you are trying to do. But the thing is that a generator function in Python only starts to execute once the generator is being enumerated:
def gen():
    print('before 1')
    yield 1
    print('before 2')
    yield 2
    print('after 2')
>>> g = gen()
>>> next(g)
before 1
1
>>> next(g)
before 2
2
>>> next(g)
after 2
Traceback (most recent call last):
File "<pyshell#9>", line 1, in <module>
next(g)
StopIteration
So if you want to run code before the generator is being enumerated, you cannot have that code be part of the generator function itself.
What you can do instead is to have a normal function that does something and then returns a created generator:
def gen_internal():
    yield 1
    print('before 2')
    yield 2
    print('after 2')

def gen():
    print('before 1')
    return gen_internal()
>>> g = gen()
before 1
>>> next(g)
1
>>> next(g)
before 2
2
>>> next(g)
after 2
Since you apparently only set L once, before the first yield, this might be what you want to do.
Ford Theatre was an American psychedelic rock band from Boston, Massachusetts, that was active between 1966 and 1971. Their sound was similar to that of other Boston-based psychedelic rock bands of the era, but more genuine.
History
The band formed from the members of The Continentals (Jimmy Altieri, John Mazzarelli, Robert Tamagni, and Butch Webster), who then recruited Harry Palmer and Joe Scott. Although active during the period, the group dissociated itself from the Bosstown Sound.
Ford Theatre was one of the most promising bands of the 1960s, influenced by bands such as the Kingsmen, the Beatles and the Byrds, although they recorded only two albums, both under the ABC Records label. The band's first album, Trilogy for the Masses, was produced by Bob Thiele in 1968. The album's band tracks were recorded at Fleetwood Studios in Revere, Massachusetts, and the vocals at Capitol Studios in New York City. A year later their second album, Time Changes, was produced by Bill Szymczyk, who later went on to produce the Eagles. The second album was recorded at the Hit Factory in New York City.
After 1969, the band disappeared from records and their memory was overshadowed by the more successful bands of the 1970s. In a recent interview Jimmy Altieri stated that after the release of Time Changes, the band didn't manage to get a new deal for a third album that was already partially recorded and the members decided to disband Ford Theatre in 1971.
Band members
Harry Palmer - guitar
John Mazzarelli - keyboards, vocals
Butch Webster - lead guitar
Joey Scott - lead vocals
Jimmy Altieri - bass, vocals
Robert Tamagni - drums, vocals
Wally Magee
Discography
Singles
"From a Back Door Window" b/w "Theme for the Masses" (ABC 11118) 1968
"I've Got the Fever" b/w "Jefferson Airplane" (ABC 11227) 1969
"Time Changes" b/w "Wake Up in the Morning" (Columbia(EMI) 1C006-90288) 1969
"At the Station" b/w "Wake Up in the Morning" (Stateside 5C 006-90 589) 1969
Albums
Trilogy for the Masses (ABC ABCS-658) 1968
Time Changes (ABC ABCS 681) 1969
References
External links
"Exciting sound of Ford Theater". Beaver County Times. August 27, 1968.
Ford Theatre in R. Stevie Moore HomePage
Musical groups established in 1966
Musical groups disestablished in 1971
American progressive rock groups
Psychedelic rock music groups from Massachusetts
AEZS
Aeterna Zentaris Inc. Performance, Technical Analysis and Valuation
Aeterna Zentaris Inc.
(NASDAQ:AEZS)
Aeterna Zentaris Inc. 1 Year - Performance
What is AEZS's current price-to-earnings ratio?
AEZS has a current P/E ratio of N/A.
What are AEZS's price-to-sales and price-to-book ratios?
The AEZS stock has a trailing-twelve-month price-to-sales (P/S) ratio of 2.71, while its price-to-book (P/B) value is 0.37.
Is Aeterna Zentaris Inc.'s price-to-cash ratio high or low?
AEZS's price-to-cash (P/C) ratio is 0.33, an indicator that is currently trending upward.
What's current simple moving average (SMA) for AEZS?
Based on AEZS's recent bid, its distance from the 20-day simple moving average is 4.58%. It is 3.82% away from the 50-day simple moving average, and -20.32% away from the 200-day simple moving average.
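These distances are straightforward to reproduce; here is a hypothetical sketch in Python (`sma_distance` is not a standard library function):

```python
def sma_distance(prices, window, current):
    """Percent distance of the current price from the simple moving average."""
    sma = sum(prices[-window:]) / window
    return 100.0 * (current - sma) / sma
```

For example, a price of 11.0 against a flat 20-day history at 10.0 is 10% above its 20-day SMA.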
What is the 52-week price range for AEZS?
The 52-week high for AEZS is $11, which is 0.66% away from the current price level, whereas the distance from the 52-week low of $2.89 stands at 0.29%.
An overview of AEZS's stock performance: how did it fare?
The performance of AEZS stock was not encouraging, with most of its significant price indicators in the red. Aeterna Zentaris Inc. (NASDAQ: AEZS) shares moved upward 1.36%, or $0.05, in the latest trading session and have gained 17.30% year-to-date (YTD). The stock has lost nearly 57.97% over the past year. After a -14.06% move over the trailing 3-month period, the stock is 16.18% lower over the 6-month period. Looking at its performance over the shorter term, it is down 3.24% over a week and up 21.10% over a month.
What is AEZS's 14-day relative strength index (RSI)?
The current 14-day RSI for AEZS stock is 55.99. Investors and traders alike rely on the relative strength index, or RSI, as an oscillating indicator. In terms of values, the RSI operates within a range ranging from 0 to 100. A rising RSI line indicates strength in the shares. As the RSI line falls, the opposite occurs. It is possible to examine different time periods when using the RSI indicator. Shorter time frames can cause the RSI to be more volatile. Most traders pay close attention to the marks between 30 and 70 on the RSI scale. When the stock price moves over 70, it is often considered to be an indicator of overbought conditions. Dropping below 30 indicates oversold territory. These levels are often used by traders to forecast stock price reversals.
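For reference, here is a minimal sketch of a simple (unsmoothed) RSI computation in Python; note that charting packages usually apply Wilder's exponential smoothing, so their values can differ slightly:

```python
def rsi(prices, period=14):
    """Simple RSI over the last `period` price changes (assumes len(prices) > period)."""
    gains, losses = [], []
    for prev, cur in zip(prices, prices[1:]):
        change = cur - prev
        gains.append(max(change, 0.0))
        losses.append(max(-change, 0.0))
    avg_gain = sum(gains[-period:]) / period
    avg_loss = sum(losses[-period:]) / period
    if avg_loss == 0:
        return 100.0  # no losses in the window: fully overbought reading
    rs = avg_gain / avg_loss
    return 100.0 - 100.0 / (1.0 + rs)
```

A steadily rising series pins the RSI at 100, a steadily falling one at 0, and balanced gains and losses give 50.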
For AEZS, what is the average true range (ATR) for the past two weeks?
ATR stands for Average True Range, a volatility measure that traders and investors may find useful when assessing a stock technically. Currently, Aeterna Zentaris Inc. (AEZS) has a 14-day ATR of 0.17.
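The indicator itself is easy to sketch (a hypothetical helper; real data feeds typically apply Wilder smoothing here as well):

```python
def atr(highs, lows, closes, period=14):
    """Average true range over the last `period` bars."""
    true_ranges = []
    for i in range(1, len(closes)):
        tr = max(
            highs[i] - lows[i],             # intraday range
            abs(highs[i] - closes[i - 1]),  # gap up from prior close
            abs(lows[i] - closes[i - 1]),   # gap down from prior close
        )
        true_ranges.append(tr)
    return sum(true_ranges[-period:]) / period
```

For a bar series with a constant high-low range of 1.0 and no overnight gaps, the ATR is simply 1.0.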
What is AEZS's current enterprise value?
After closing at $3.73 with 4.86 million shares outstanding, the current enterprise value of AEZS is roughly N/A.
Over the past year, how has the stock price of AEZS changed?
Aeterna Zentaris Inc. (NASDAQ: AEZS) is up N/A in percentage price change over the past 52 weeks. The change came in above that of the broader S&P 500 index, which rose N/A over the same period. AEZS stock is also up 17.30% in year-to-date trading.
What is the average trading volume for AEZS shares?
The number of outstanding shares of AEZS is 4.86 million, of which 4.84 million are freely available for trading. On average, AEZS stock has traded 15.84 thousand shares per day over the past 10 days. A total of 9.74 thousand shares changed hands during the last trading session, against an average session volume of 15.41 thousand shares.
## Integrate wrt x^2

$\int\frac{1}{x^{2}+25}\,d(x^{2})$

They were substituting $x^{2}$ for $y$ ($x^{2}=y$), and thus the answer would come to be $\log(y+25)$, that is, $\log(x^{2}+25)$.

I don't think this is the case; I guess that we would be differentiating with respect to a second-degree curve, like a parabola, in the case of this problem.

Would you people point out what the real thing is?

---

If the integral is really as you wrote it,
$$\int d(x^2) \frac{1}{x^2+25},$$
then you are free to set $y = x^2$: in this case your differential is $d(x^2)$, and it already contains the $x^2$, so you are free to make the substitution. If the integral were
$$\int dx \frac{1}{x^2+25},$$
then you can of course still substitute $y = x^2$, but then the differential term is different: $dx \rightarrow dy/(2x) = dy/(\pm 2\sqrt{y})$. Note that if this were a definite integral whose limits went from some negative value to a positive one, you would have to split the integral into two pieces, one from the negative value to 0 and one from 0 to the positive value, and then make the substitution, as you need to choose a sign for the square root and it is different for $x > 0$ and $x < 0$.

---

In general, if $g(x)$ is a differentiable function, then $dg(x) = g'(x)\,dx$. Since $d(x^2) = 2x\,dx$,
$$\int\frac{1}{x^2+25}\,d(x^2) = \int\frac{2x}{x^2+25}\,dx.$$
Now, if you let $u = x^2+25$, then $du = 2x\,dx$ and the integral becomes
$$\int\frac{du}{u} = \ln(u) + C = \ln(x^2+25) + C,$$
just as before.

---

That really is the case. I was wondering if we could carry out these operations with respect to curves as well, like $x^{2}$.
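A quick numerical sanity check of the result $\ln(x^2+25)+C$, sketched in Python with a central-difference derivative:

```python
import math

def antiderivative(x):
    # candidate antiderivative: ln(x^2 + 25)
    return math.log(x * x + 25)

def integrand(x):
    # 1/(x^2+25) d(x^2) expressed w.r.t. x: 2x/(x^2+25)
    return 2 * x / (x * x + 25)

# the derivative of the antiderivative should match the integrand
h = 1e-6
for x in (0.5, 1.0, 3.0):
    numeric = (antiderivative(x + h) - antiderivative(x - h)) / (2 * h)
    assert abs(numeric - integrand(x)) < 1e-6
```

The asserts pass at every test point, confirming the substitution argument numerically.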
\section{Introduction}
The magnetorotational instability (MRI) is of great importance in astrophysics. First discovered by Velikhov~\cite{velikhov1959stability} in 1959, it remained unnoticed until 1991 when Balbus and Hawley \cite{balbus1991} realised its application to accretion disc theory. Accretion discs are astrophysical systems that consist of ionised gas and dust orbiting a massive body. Planets and stars are formed from this initially dispersed matter. The physical mechanism of accretion is straightforward: a parcel of viscous fluid in the differentially rotating disc loses its angular momentum over time and falls onto the central object. To explain the astrophysically observed rates of accretion, however, one must assume a turbulent transport of angular momentum in the outward direction \cite{shakura1973}. In so-called Keplerian discs the angular velocity profile of gas follows the law
\begin{equation}
\label{eq:keplerian}
\Omega\sim r^{-3/2},
\end{equation}
which is hydrodynamically stable according to the Rayleigh criterion for rotating fluids \cite{rayleigh1917dynamics}:
\begin{equation}
\label{eq:rayleigh}
\frac{d(r^2\Omega)^2}{dr}>0 \quad\text{for stability}.
\end{equation}
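For the Keplerian profile (\ref{eq:keplerian}) this criterion is satisfied immediately: the specific angular momentum scales as $r^2\Omega\sim r^{1/2}$, so
\begin{equation}
\frac{d(r^2\Omega)^2}{dr}\sim\frac{d}{dr}\,r=1>0.
\end{equation}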
Ionized accretion discs, however, are necessarily magnetized and the MRI may still act in rotating flows, provided the angular velocity decreases with radius, which is true of Keplerian flows (\ref{eq:keplerian}).
The growth rates of the MRI and the parameter ranges in which it acts were determined in several linear analyses~\cite{balbus1991,ogilvie1996non,balbus1998instability,hollerbach2005new,hollerbach2010nonaxisymmetric}, but these do not provide information about the flow structure and scaling of angular momentum transport after nonlinear saturation. In the last two decades there has been a great deal of numerical work concerned with the nonlinear properties of the MRI. \AG{Simulations are usually performed with the shearing sheet approximation, which is a local model of an accretion disc with shear-periodic boundary conditions in the radial direction~\cite{balbus1998instability}. The main disadvantage of this model is the influence of boundary conditions on the geometry of the observed modes and transport scaling. In particular, the length of the computational box fixes the modes that appear and determines their nonlinear saturation. As it is not clear how the length should be selected, the interpretation of the transport scaling becomes quite involved~\cite{umurhan2007weakly}.}
These theoretical results and numerical simulations inspired physicists to
realise the MRI in laboratory experiments. In 2001 Ji et al.~\cite{ji2001magnetorotational} and R{\"u}diger and Zhang~\cite{rudiger2001mhd} independently suggested the possibility of directly observing magnetorotational instabilities in a cylindrical vessel made of two co-axial and independently rotating cylinders containing a liquid metal alloy (see Fig. \ref{fig:TC_scheme}a). The standard form of the MRI (SMRI), which they proposed, emerges when a purely axial magnetic field is imposed, but this has not yet been achieved in experiments. The difficulty is that liquid metals have very small magnetic Prandtl numbers (e.g.\ $Pm \sim 10^{-6}$ for gallium alloys), leading in this case to very large Reynolds numbers ($Re\gtrsim 10^7$) necessary to observe the SMRI \AG{(see Table~\ref{tab:param} for the definitions of $Re$ and $Pm$)}. In fact, such high Reynolds numbers have never been achieved even for non-magnetic Taylor-Couette flows. \AG{A further difficulty of Taylor--Couette experiments in the quasi-Keplerian regime arises because of Ekman vortices that arise adjacent to the endplates. Unless a very specific endplate arrangement is used, the Ekman vortices extend deep into the flow and even at moderate Reynolds number the basic Couette flow cannot be obtained experimentally~\cite{edlund2014nonlinear}. The resulting velocity profiles are no longer quasi-Keplerian and hydrodynamic instabilities render the flow turbulent even in the absence of magnetic fields~\cite{avila2012stability,nordsiek2014azimuthal}.}
Hollerbach and R\"udiger \cite{hollerbach2005new} proposed instead a combination of axial and azimuthal magnetic fields, giving rise to the helical MRI (HMRI) at much lower $Re\sim 10^3$ for Hartmann numbers $Ha \sim 10$ \AG{(see Table \ref{tab:param} for the definition of Ha)}. This was successfully observed \cite{stefani2006experimental,stefani2009helical} in the PROMISE facility (Potsdam-ROssendorf Magnetic Instability Experiment). Both the SMRI and HMRI consist of axisymmetric toroidal vortices, which are stationary for the former but travel axially for the latter. Hollerbach \emph{et al}.~\cite{hollerbach2010nonaxisymmetric} realised that although a purely azimuthal magnetic field does not yield any axisymmetric instabilities, non-axisymmetric modes can be destabilised. The resulting azimuthal MRI (AMRI) arises in Taylor-Couette flow at $Re\sim 10^3$ for $Ha \sim 10^2$ \cite{hollerbach2010nonaxisymmetric}, and a recent upgrade of the PROMISE power supply made its experimental observation possible \cite{seilmayer2014experimental}. \AG{In PROMISE the endplates are split into two parts and the inner (outer) one is attached to the inner (outer) cylinder. Although this configuration is acceptable at $Re \leq 3000$, as studied experimentally, endplate effects become dominant at larger $Re$. Because of this, and of the practical impossibility of generating a purely azimuthal magnetic field experimentally, it is challenging to actually identify the AMRI modes in the experimental data unambiguously (see \cite{seilmayer2014experimental}).}
Despite this recent experimental progress in realising magnetorotational instabilities in the laboratory, little is known about their bifurcation scenario, transition to turbulence and transport properties as $Re$ increases. In this work we address these points for the AMRI. We perform direct numerical simulations of the coupled induction and Navier-Stokes equations using axially periodic boundary conditions, thereby avoiding undesired endplate effects \AG{and focusing on the features intrinsic to the AMRI}. We find that the laminar quasi-Keplerian flow becomes unstable to a wave rotating in the azimuthal direction and standing in the axial direction. Subsequently, we identify a new bifurcation scenario giving rise to spatio-temporal defects via a subcritical subharmonic Hopf bifurcation. As the Reynolds number is further increased, the flow becomes turbulent and outward momentum transport is enhanced, albeit at a weak rate. The results are in good qualitative agreement with the PROMISE observations~\cite{seilmayer2014experimental} and substantially extend the parameter range
explored experimentally.
\section{Governing equations}
\begin{figure}
\begin{center}
\begin{tabular}{cc}
(a) \hspace{2cm} & (b) \\ \includegraphics[width=0.28\linewidth]{TCsystem.eps} \hspace{1cm} &
\includegraphics[width=0.6\linewidth]{lin_stab_new_data.eps}\\
\end{tabular}
\end{center}
\caption{\small ($a$) \textbf{Schematic of the Taylor-Couette geometry with azimuthal magnetic field}. A liquid metal is confined between two coaxial cylinders of radii $r_i$ and $r_o$, which can rotate independently at angular velocities $\Omega_i$ and $\Omega_o$. In this work the rotation rate is fixed at $\mu=\Omega_o/\Omega_i=0.26$ and an azimuthal magnetic field of the form $(r_i/r)B_0$ is imposed, where $r$ is the radial coordinate. ($b$) \textbf{Instability region of the AMRI}. \AG{Blue circles correspond to the points at which our simulations were conducted, the red dashed line to the curve of maximum growth rate and the green solid line is a fit to the latter of the form $Ha=a Re^b$, where $a=0.71$ and $b=1.55$.}}
\label{fig:TC_scheme}
\end{figure}
We consider an incompressible viscous liquid metal that is sheared between two independently rotating cylinders of radii $r_i$ (inner) and $r_o$ (outer). The angular velocity of the cylinders are $\Omega_i$ and $\Omega_o$, respectively, and an external azimuthal magnetic field $(r_i/r)B_0$, where $r$ is the radial coordinate, is imposed. The relevant fluid properties are the electrical conductivity $\sigma$, the kinematic viscosity $\nu$ , the density $\rho$ and the magnetic diffusivity $\eta$. The velocity field $\mathbf{u}$ is determined by the Navier-Stokes equations (\ref{eq:NSeq}), whereas the magnetic field $\mathbf{B}$ is determined by the induction equation (\ref{eq:Ieq}), which represents a combination of the laws of Ampere, Faraday, and Ohm. The equations were rendered dimensionless by using the gap between cylinders $d=r_o-r_i$ for length, $d^2/\nu$ for time, and $B_0$ for the magnetic field. In dimensionless form they read
\begin{equation}\label{eq:NSeq}
(\pd{t} + \mathbf{v}\cdot\nabla) \mathbf{v} = -\nabla p + \nabla^2 \mathbf{v} + \frac{Ha^2}{Pm} (\nabla \times \mathbf{B}) \times \mathbf{B},
\end{equation}
\begin{equation}\label{eq:Ieq}
( \pd{t} - \frac{1}{Pm} \nabla^2 ) \mathbf{B} = \nabla \times (\mathbf{v} \times \mathbf{B}),
\end{equation}
together with $\nabla\cdot{\mathbf{v}}=\nabla\cdot{\mathbf{B}}=0$. Here $p$ is the pressure, $Ha$ the Hartmann number, and $Pm$ the magnetic Prandtl number. The dimensionless parameters of the system are specified in Table~\ref{tab:param}. Following the PROMISE experiment~\cite{seilmayer2014experimental}, we use a magnetic Prandtl number of $Pm=1.4\cdot 10^{-6}$ (corresponding to the alloy $Ga^{67}In^{20.5}Sn^{12.5}$), a radius-ratio of $\delta=0.5$ and a rotation-ratio of $\mu=0.26$. This places the velocity profile in the quasi-Keplerian regime, for which the angular velocity decreases radially, whereas the angular momentum increases, i.e.~$\delta^2<\mu<1$ (here $0.25<0.26<1$).
\begin{table}[!h]
\renewcommand{\arraystretch}{1.3}
\centering
\caption{\small \textbf{Dimensionless parameters of the magnetohydrodynamic
Taylor-Couette problem}.}
\begin{tabular}{|l|l|l|l|}
\hline
Abbrev. & Parameter &Definition & Range \\
\hline
$\delta$ & Radius ratio &$r_i/r_o$ & 0.5\\
$\alpha$ & Axial wavenumber (geometrical parameter) & $2\pi / L_z$ & $0.5$---$4.5$ \\
$\mu$ & Angular velocity ratio &$\Omega_o/\Omega_i $ & 0.26 \\
$Pm$ & Magnetic Prandtl number &$\nu/\eta$ & $1.4 \cdot 10^{-6}$ \\
$Re$ & Reynolds numbers of inner cylinder &$\Omega_i r_i d/\nu$ & $1480$---$9333$\\
$Ha$ & Hartmann number &$B_0 d/(\frac{\sigma}{\rho\nu})^{1/2}$ & $90$---$457$\\
\hline
\end{tabular} \label{tab:param}
\end{table}
\subsection{Boundary conditions}
We employ cylindrical coordinates $(r,\phi,z) \in [r_i,r_o] \times [0,2\pi]\times [0,L_z]$, for which the no-slip velocity boundary conditions at the cylinders read
\begin{equation}
v_\phi(r_i,\phi,z)=Re, \qquad
v_\phi(r_o,\phi,z)=\frac{\mu}{\delta}\,Re.
\end{equation}
Periodicity in the axial direction is imposed with basic length $L_z$.
The background circular Couette flow $\mathbf{V} = V(r)\mathbf{e_\phi}$ is a solution to the equations and boundary conditions given by
\begin{equation}\label{eq:couette}
V(r) = \frac1{1+\delta}
\left[
( \frac{\mu}{\delta} Re - \delta \, Re) r + \frac{\delta}{(1-\delta)^2}
( Re - \mu \, Re) \frac1{r}
\right] .
\end{equation}
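As a quick sanity check (a small illustrative script, not part of the paper's solver), one can verify numerically that \eqref{eq:couette} satisfies the steady Couette equation $V''+V'/r-V/r^2=0$ and takes the values $Re$ and $(\mu/\delta)Re$ at the cylinders $r_i=\delta/(1-\delta)$ and $r_o=1/(1-\delta)$:

```python
# Numerical sanity check of the circular Couette profile V(r) = A r + B / r
# for delta = 0.5, mu = 0.26, Re = 1480 (parameters of the PROMISE setup).
delta, mu, Re = 0.5, 0.26, 1480.0
r_i, r_o = delta / (1 - delta), 1.0 / (1 - delta)   # gap width d = 1

def V(r):
    A = (mu / delta * Re - delta * Re) / (1 + delta)
    B = delta * (Re - mu * Re) / ((1 - delta) ** 2 * (1 + delta))
    return A * r + B / r

# boundary values: should equal Re and (mu/delta) Re
V_in, V_out = V(r_i), V(r_o)

# residual of the Couette ODE V'' + V'/r - V/r^2 = 0 (central differences)
h, r = 1e-4, 1.5
d1 = (V(r + h) - V(r - h)) / (2 * h)
d2 = (V(r + h) - 2 * V(r) + V(r - h)) / h ** 2
residual = d2 + d1 / r - V(r) / r ** 2
```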
The magnetic field is also assumed to be periodic in the axial direction. In the radial direction the boundary condition depends on the material of the cylinders. Typically two idealized cases are considered in the MRI problem: insulating and conducting cylinders. These lead to slightly different results, as theoretically demonstrated by Chandrasekhar \cite{chandrasekhar1961}. However, the difference is not great, and here we will consider only the case of insulating boundaries.
\AG{Assuming the Fourier expansion for each variable
\begin{equation}\label{eq:fourier}
A=\sum_{|k|<K} \sum_{|m|<M} A_{k,m}(r)\exp[\mathrm{i}(\alpha kz + m\phi)],
\end{equation}
one obtains the following boundary conditions for $\vec{B}$:\\
For the case $k=m=0$:
\begin{equation}
B_\phi = B_z = 0 .
\end{equation}
For the case $k=0$, $m\ne 0$:
\begin{equation}
B_r\pm\mathrm{i} B_\phi=0,\quad B_z=0
\qquad \mbox{($+$ on $r_i$, $-$ on $r_o$)}.
\end{equation}
For the case $k\ne0$:
\begin{equation}
B_r + \mathrm{i}\,\frac{\mathcal{B}'_m(\alpha k R)}
{\mathcal{B}_m(\alpha k R)} \, B_z = 0,
\quad
\alpha k B_\phi - \frac{m}{r} B_z = 0,
\end{equation}
where $\mathcal{B}_m(x)$ denotes the modified Bessel function
$I_m(x)$ at the inner cylinder ($R=r_i$) and $K_m(x)$ at the outer cylinder ($R=r_o$), and
$\mathcal{B}'_m=\pd{x}\mathcal{B}_m$. See Willis and Barenghi \cite{willis2002hydromagnetic} for a detailed derivation. }
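The Bessel-function coefficient entering the insulating condition is straightforward to evaluate. The following sketch (illustrative only, with assumed parameter values) computes the ratio $I'_m(x)/I_m(x)$ at the inner cylinder from the power series of $I_m$, without external libraries:

```python
# Evaluate the ratio I'_m(x)/I_m(x) appearing in the insulating boundary
# condition at the inner cylinder, using the power series of the modified
# Bessel function I_m (adequate for moderate arguments).
import math

def bessel_I(m, x, terms=30):
    """I_m(x) = sum_k (x/2)^(2k+m) / (k! (k+m)!)"""
    return sum((x / 2.0) ** (2 * k + m)
               / (math.factorial(k) * math.factorial(k + m))
               for k in range(terms))

def bessel_I_prime(m, x):
    """Recurrence I'_m = (I_{m-1} + I_{m+1}) / 2, valid for m >= 1."""
    return 0.5 * (bessel_I(m - 1, x) + bessel_I(m + 1, x))

m, alpha, k, r_i = 1, 3.0, 1, 1.0   # illustrative values, not from the paper
x = alpha * k * r_i
ratio = bessel_I_prime(m, x) / bessel_I(m, x)
```

At the outer cylinder the decaying solution $K_m$ must be used instead, as stated above.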
\subsection{Brief remarks on symmetries of rotating magnetohydrodynamic flows}\label{sec:sym}
The basic circular Couette flow \eqref{eq:couette} has SO(2) $\times$ O(2) symmetry, where SO(2) represents the rotational symmetry in the azimuthal direction. In the axial direction the group O(2) may be written as O(2)$=Z_2 \rtimes$SO(2), where $Z_2$ is a reflection (up-down symmetry) and SO(2) the translational symmetry in the $z$ direction. The presence of a purely axial or a purely azimuthal imposed magnetic field does not change the symmetry group of the system. Hence, if the primary instability is a Hopf bifurcation, the resulting states can be either standing or traveling waves in the axial direction \cite{crawford1991symmetry}. By contrast, a combined helical magnetic field breaks the reflection symmetry and only traveling waves (TW) can be observed \cite{knobloch1996symmetry}. Finally, if the bifurcating solution is non-axisymmetric, as in the AMRI, it is generically a rotating wave in the azimuthal direction.
\section{Numerical method}
In the numerical simulations only the deviation from the basic flow $\mathbf{u} = \mathbf{v} - \mathbf{V}$ is computed. Its governing equations read
\begin{equation}
(\pd{t} - \nabla^2)\, \mathbf{u}
\,=\, \mathbf{N} - \nabla p , \qquad
\nabla \cdot \mathbf{u} = 0,
\end{equation}
which are supplemented with homogeneous boundary conditions $\mathbf{u} = \mathbf{0}$.
Here $\mathbf{N}$ stands for the nonlinear term in the Navier-Stokes equations
(\ref{eq:NSeq}), which contains the advective terms and the Lorentz force:
\begin{eqnarray}
\mathbf{N} & = & \mathbf{u} \times (\nabla \times \mathbf{u})
- (\mathbf{V}\cdot\nabla) \mathbf{u} - (\mathbf{u}\cdot\nabla) \mathbf{V} + \frac{Ha^2}{Pm} (\nabla \times \mathbf{B}) \times \mathbf{B} \\
& = & \mathbf{u} \times (\nabla \times \mathbf{u})
- (V/r) \pd{\phi} \mathbf{u}
+ (2V/r) u_\phi \mathbf{e_r}
- u_r ( 1+\pd{r}) V \mathbf{e_\phi}
+ \frac{Ha^2}{Pm} (\nabla \times \mathbf{B}) \times \mathbf{B}. \nonumber
\label{eq:nonlinvel}
\end{eqnarray}
Spatial discretisation is accomplished via the Fourier expansion in the azimuthal and axial directions (\ref{eq:fourier}), and
as the variables are real, their Fourier coefficients satisfy the property $A_{k,m}=A^*_{-k,-m}$,
where $A^*$ denotes the complex conjugate.
\APW{The pseudospectral Fourier method is the most efficient choice for periodic boundary conditions. Because of their great accuracy at low resolutions, spectral methods have also been used to discretise the hydromagnetic Taylor--Couette problem in the radial direction~\cite{hollerbach2008spectral,willis2002hydromagnetic}. In these works the Chebyshev collocation method was chosen because of its simplicity. Nevertheless, its computational and storage costs scale as $\mathcal{O}(N^2)$, where $N$ is the number of radial points, making computations at large Reynolds numbers impractical. Petrov--Galerkin formulations reduce the cost to $\mathcal{O}(N)$ and have also been used in hydrodynamic Taylor--Couette flows~\cite{Moser_jcp1983,Meseguer_EPJ2007}. However, the treatment of the radial boundary conditions becomes very cumbersome for the magnetic field. We here use the finite-difference method in the radial direction. Radial derivatives are calculated using 9-point stencils to $(9-j)^\mathrm{th}$ order, where $j$ is the order of the derivative. This results in banded matrices with associated $\mathcal{O}(N)$ cost, while providing excellent accuracy. Ref.~\cite{LiRaHoAv14} provides a more thorough discussion of these computational issues and of the accuracy of the finite-difference method applied to hydrodynamic Taylor--Couette flow.
}
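To illustrate the accuracy attainable with such high-order stencils (a standalone sketch, not the paper's Fortran implementation, which also requires one-sided stencils near the walls), the classical 9-point central weights for the first derivative on a uniform grid achieve eighth-order accuracy:

```python
# Eighth-order accurate first derivative using the standard 9-point central
# finite-difference stencil on a uniform grid (illustrative sketch).
import math

# classical central weights: f'(x0) ~ (1/h) * sum_j w[j] * f(x0 + j*h)
w = {-4: 1/280, -3: -4/105, -2: 1/5, -1: -4/5, 0: 0.0,
      1: 4/5,   2: -1/5,    3: 4/105, 4: -1/280}

def deriv(f, x0, h):
    return sum(wj * f(x0 + j * h) for j, wj in w.items()) / h

h = 0.1
approx = deriv(math.sin, 0.3, h)
error = abs(approx - math.cos(0.3))   # truncation error scales as h^8
```

Even at the coarse spacing $h=0.1$ the error is below $10^{-9}$, which is the behaviour exploited to keep the radial resolution moderate.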
\APW{
A second-order scheme is applied to the time discretisation
$t^q=q\, \Delta t$, based on the implicit Crank--Nicolson method.
Applying this method to the Navier-Stokes equations, we find
\begin{equation}\label{eq:NS-CN}
\left(1/\Delta t - \nabla^2\right)\mathbf{u}^{q+1}=
\left(1/\Delta t + \nabla^2\right)\mathbf{u}^q + \mathbf{N}^{q+\frac{1}{2}}
- \nabla p,
\qquad
\nabla^2 p = \nabla \cdot \mathbf{N}^{q+\frac{1}{2}},
\end{equation}
where $\mathbf{N}^{q+\frac{1}{2}}$ is an estimate for the
nonlinear terms (Euler predictor, Crank-Nicolson corrector).
In cylindrical coordinates the Laplacian operator couples the
$r$- and $\phi$-components of the velocity, and
for a Fourier decomposition the resulting operators are complex.
The programming complexity and computational cost
of inverting for $u_r^{q+1}$ and $u_\phi^{q+1}$ can be reduced,
}
however, by considering
\begin{equation}
u_\pm = u_r \pm \mathrm{i} \, u_\phi,
\qquad \mbox{i.e.}~~
u_r = \frac{1}{2} ( u_+ + u_-),
\quad
u_\phi = -\,\frac{\mathrm{i}}{2}(u_+ - u_- ) ,
\end{equation}
where the upper and lower signs correspond to $u_+$ and $u_-$, respectively.
\APW{The equations governing these components separate
and the Laplacian operator is now real ($\partial_\phi\to\mathrm{i}m$),}
\begin{equation}
(\pd{t} - \nabla^2_\pm)\, u_\pm
= N_\pm - (\nabla p)_\pm , \qquad
\nabla^2_\pm = \nabla^2 - \frac{1}{r^2}
\pm \frac{2\,\mathrm{i}}{r^2}\pd{\phi}.
\end{equation}
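The decoupling follows from the cylindrical vector-Laplacian identities $(\nabla^2\mathbf{u})_r = \nabla^2 u_r - u_r/r^2 - (2/r^2)\pd{\phi}u_\phi$ and $(\nabla^2\mathbf{u})_\phi = \nabla^2 u_\phi - u_\phi/r^2 + (2/r^2)\pd{\phi}u_r$. The following sketch checks this for a single Fourier mode $\mathrm{e}^{\mathrm{i}m\phi}$, with illustrative radial profiles chosen here purely for the test:

```python
# Check that for u_+ = u_r + i u_phi the r- and phi-components of the
# cylindrical vector Laplacian decouple (single Fourier mode exp(i m phi);
# the radial profiles f, g are arbitrary test functions).
import math

m, r = 2, 1.3
f0, f1, f2 = math.sin(r), math.cos(r), -math.sin(r)     # u_r and derivatives
g0, g1, g2 = math.cos(r), -math.sin(r), -math.cos(r)    # u_phi and derivatives

# vector-Laplacian components (with exp(i m phi) factored out)
lap_r   = f2 + f1 / r - (m * m + 1) * f0 / r**2 - 2j * m * g0 / r**2
lap_phi = g2 + g1 / r - (m * m + 1) * g0 / r**2 + 2j * m * f0 / r**2

# decoupled operator acting on u_+ = u_r + i u_phi  (d/dphi -> i m)
u0, u1, u2 = f0 + 1j * g0, f1 + 1j * g1, f2 + 1j * g2
lap_plus = u2 + u1 / r - (m * m + 1) * u0 / r**2 - 2 * m * u0 / r**2

mismatch = abs((lap_r + 1j * lap_phi) - lap_plus)
```

The mismatch is zero to machine precision, confirming that each $u_\pm$ equation can be inverted independently with a real operator.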
\subsection{Influence-matrix method}
\APW{
The natural approach to solving (\ref{eq:NS-CN}) is to invert
first for $p$ and then for $\mathbf{u}^{q+1}$. All the boundary
conditions are on $\mathbf{u}^{q+1}$, however; there are none on $p$.
It is well known that primitive-variable formulations are subject
to loss of temporal order if inappropriate boundary conditions
are enforced on $p$ \cite{rempfer2006}.
}
For the magnetic
field, there appear at first sight to be too few boundary conditions,
and further, the components of $\vec{B}$ are coupled
in the boundary condition.
It is shown here that the influence-matrix method resolves these
issues, the appropriate boundary conditions can be satisfied to
machine precision, and temporal order is retained. We show first
how the method is applied for time integration of the velocity field,
similar to \cite{marcus1984simulation}.
An analogous approach is applied to the magnetic field.
\subsubsection{Method for the velocity field}
We write the time-discretised Navier-Stokes equations
(\ref{eq:NS-CN})
in the form
\begin{equation}
\label{eq:NSdisc}
\left\{\begin{array}{rcl}
X \mathbf{u}^{q+1} & = & Y \mathbf{u}^q
+ \mathbf{N}^{q+\frac{1}{2}} - \mbox{\boldmath $\nabla$} p \, , \\
\nabla^2 p & = & \mbox{\boldmath $\nabla$}\cdot(Y \mathbf{u}^q
+ \mathbf{N}^{q+\frac{1}{2}}) ,
\end{array}\right.
\end{equation}
where $q$ denotes time $t^q$. This form is sixth order in $r$ for
$\mathbf{u}^{q+1}$ and second order for $p$, without the
solenoidal condition explicitly imposed.
In principle this system should be inverted simultaneously for $p$ and $\mathbf{u}^{q+1}$ with boundary conditions
$\mathbf{u}^{q+1}=\mathbf{0}$ and $\mbox{\boldmath $\nabla$}\cdot\mathbf{u}^{q+1}=0$.
In practice it
would be preferable to invert for $p$ first then for $\mathbf{u}^{q+1}$,
but the boundary conditions do not involve $p$ directly.
Note that the $Y \mathbf{u}^q$ term has been included in the
right-hand side of the pressure-Poisson equation, so that
it corresponds precisely to the divergence of the equation
for $\mathbf{u}^{q+1}$. This ensures that any non-zero
divergence in the initial condition is projected out
after a single time-step. We split the system (\ref{eq:NSdisc})
into the `bulk' solution, $\{\bar{\mathbf{u}},\bar{p}\}$,
\begin{equation}
\label{eq:NSbulk}
\left\{\begin{array}{rcl}
X \bar{\mathbf{u}} & = & Y \mathbf{u}^q
+ \mathbf{N}^{q+\frac{1}{2}} - \mbox{\boldmath $\nabla$} \bar{p} \, , \\
\nabla^2 \bar{p} & = & \mbox{\boldmath $\nabla$}\cdot(Y \mathbf{u}^q
+ \mathbf{N}^{q+\frac{1}{2}}) ,
\end{array}\right.
\end{equation}
with boundary conditions $\bar{\mathbf{u}}=\mathbf{0}$ and $\pd{r}\bar{p}=0$,
and the auxiliary systems
\begin{equation}
\label{eq:NSp0}
\left\{\begin{array}{rcl}
X \mathbf{u}' & = & -\nabla p' \, , \\
\nabla^2 p' & = & 0,
\end{array}\right.
\end{equation}
with two sets of boundary conditions, $\mathbf{u}'=\mathbf{0}$ with $\pd{r}p'=\{0,1\}$ and $\{1,0\}$ on $r=\{r_i,r_o\}$, and
\begin{equation}
\label{eq:NSu0}
\left\{\begin{array}{rcl}
X \mathbf{u}' & = & \mathbf{0},
\end{array}\right.
\end{equation}
with boundary conditions $u'_+=\{0,1\}$, $u'_-=\{0,1\}$,
$u'_z=\{0,\mathrm{i}\}$ and similarly their reversed versions
on $\{r_i,r_o\}$.
\APW{
These primed functions may be precomputed, and are used to
correct for the approximate boundary conditions used to calculate
the bulk solution $\{\bar{\mathbf{u}},\bar{p}\}$.}
The system (\ref{eq:NSp0}) provides,
with the two boundary conditions, two linearly independent functions
$\mathbf{u}'_j$ that may be added to $\bar{\mathbf{u}}$ without altering the
right-hand side in (\ref{eq:NSbulk}). Similarly the system (\ref{eq:NSu0})
provides a further six functions. The superposition
\begin{equation}
\label{eq:usuperpos}
\mathbf{u}^{q+1} = \bar{\mathbf{u}} + \sum_{j=1}^8 a_j\, \mathbf{u}'_j \,
\end{equation}
may be formed in order to satisfy the eight original
boundary conditions, $\mathbf{u}^{q+1}=\mathbf{0}$ and
$\mbox{\boldmath $\nabla$}\cdot\mathbf{u}^{q+1}=0$ on $r=\{r_i,r_o\}$.
Substituting (\ref{eq:usuperpos}) into the
boundary conditions, they may be written
\begin{equation}
A \mathbf{a} = -\mathbf{g}(\bar{\mathbf{u}}) ,
\end{equation}
where $A=A(\mathbf{g}(\mathbf{u}'))$
is an 8$\times$8 matrix. The appropriate coefficients required
to satisfy the boundary conditions are thereby recovered from the
solution of this small system for $\mathbf{a}$.
The error in the boundary conditions $g_j(\mathbf{u}^{q+1})$
using the influence-matrix technique
is at the level of the machine precision.
The auxiliary functions $u'_j(r)$, the matrix
$A$ and its inverse may all be precomputed,
and the boundary conditions for $\mathbf{u}'$ have been chosen
so that $u'_\pm$ are purely real,
$u'_z$ is purely imaginary, and $A$ is real.
For each timestep, this application of the influence matrix
technique requires only evaluation of the deviation from
the boundary condition,
multiplication by an 8$\times$8 real matrix,
and the addition of four functions to each component of $\mathbf{u}$,
each either purely real or purely imaginary.
Compared to the evaluation of nonlinear terms,
the computational overhead is negligible.
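The essence of the procedure can be conveyed with a one-dimensional toy problem (an illustrative sketch only; the actual solver handles the coupled conditions described above). A Helmholtz solve with deliberately wrong boundary values is corrected a posteriori by superposing precomputed homogeneous solutions:

```python
# Toy 1D influence-matrix correction: solve (1/dt - d^2/dx^2) u = f
# on [0, 1] with target boundary conditions u(0) = u(1) = 0.
def solve_tridiag(a, b, c, d):
    """Thomas algorithm: a sub-, b main-, c super-diagonal, d right-hand side."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def helmholtz_solve(N, dt, f, left, right):
    """Solve (1/dt - D2) u = f with Dirichlet values left/right."""
    h = 1.0 / N
    a = [0.0] + [-1.0 / h**2] * (N - 1) + [0.0]
    b = [1.0] + [1.0 / dt + 2.0 / h**2] * (N - 1) + [1.0]
    c = [0.0] + [-1.0 / h**2] * (N - 1) + [0.0]
    d = [left] + f[1:N] + [right]
    return solve_tridiag(a, b, c, d)

N, dt = 64, 0.01
f = [1.0] * (N + 1)                                     # arbitrary right-hand side
u_bulk = helmholtz_solve(N, dt, f, 0.3, -0.2)           # deliberately wrong BCs
u1 = helmholtz_solve(N, dt, [0.0] * (N + 1), 1.0, 0.0)  # homogeneous solutions,
u2 = helmholtz_solve(N, dt, [0.0] * (N + 1), 0.0, 1.0)  # precomputable once
# The 2x2 influence matrix maps coefficients to boundary errors; it is the
# identity in this toy, but dense in the coupled problem of the paper.
a1, a2 = -u_bulk[0], -u_bulk[-1]
u = [ub + a1 * v1 + a2 * v2 for ub, v1, v2 in zip(u_bulk, u1, u2)]
```

Because the homogeneous solutions satisfy the same operator with zero forcing, the superposition fixes the boundary values to machine precision without disturbing the interior equation, which is exactly the mechanism exploited above.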
\subsubsection{Method for the magnetic field}
Consider the induction equation (\ref{eq:Ieq})
time-stepped without $\nabla\cdot{\mathbf{B}}=0$
enforced. Evolution of $\psi=\nabla \cdot \mathbf{B}$ is
then governed by the divergence of (\ref{eq:Ieq}),
\begin{equation}
\label{eq:diveqn}
\pd{t}\psi = \frac{1}{Pm}\nabla^2\psi\, .
\end{equation}
In addition to the boundary conditions derived in
\cite{willis2002hydromagnetic},
the condition $\psi=0$ must be satisfied on the boundary
to avoid introduction of divergence into the domain.
Then (\ref{eq:Ieq}) has the appropriate number of
boundary conditions,
\APW{and $\psi=\nabla\cdot{\mathbf{B}}$ should remain zero
for a solenoidal initial condition.}
To prevent accumulation of divergence from artificial internal sources,
i.e.\ discretisation error, it is commonplace to introduce
an artificial pressure $\Pi$ \cite{Brackbill1980effect}.
The discretised system is then as in
(\ref{eq:NSdisc}) where one reads $\mathbf{B}$ for $\mathbf{u}$ and
$\Pi$ for $p$.
The boundary condition
for $\Pi$ is any choice such that,
when one computes $\Pi$ for a given $\mathbf{B}^q$,
it is found to be constant when $\mbox{\boldmath $\nabla$}\cdot\mathbf{B}^q=0$
\cite{ramshaw1983method}. The choice $\pd{r}\Pi=0$ is applied here.
When comparing with the problem for the velocity, here
the difficulty is not the coupling of the boundary
condition for $\mathbf{B}$ with $\Pi$,
but between the components of $\mathbf{B}$ at an insulating boundary.
Here the system is split as in (\ref{eq:NSbulk}) for the `bulk'
solution, with approximate boundary condition
$\bar{\mathbf{B}}=\mathbf{B}^q$ on $\{r_i,r_o\}$. This is then
corrected precisely via the influence matrix
requiring only the simple auxiliary system
\begin{equation}
\label{eq:inflpert}
X \mathbf{B'} = \mathbf{0},
\end{equation}
with boundary conditions
$B'_\pm=\{0,1\}$ or $\{1,0\}$ and
$B'_z=\{0,\mathrm{i}\}$ or $\{\mathrm{i},0\}$
on $\{r_i,r_o\}$.
Problem (\ref{eq:inflpert}) separates for the
three components, which, with the
two boundary condition options for each,
provides six functions
$B'_j(r)$.
The correction is then
\begin{equation}
\label{eq:superpos}
\mathbf{B}^{q+1} = \bar{\mathbf{B}} + \sum_{j=1}^6 a_j\, \mathbf{B}'_j \, .
\end{equation}
Let $g_j(\mathbf{B})=0$ denote the insulating boundary conditions
and solenoidal condition evaluated at $r_i$ and $r_o$.
Substituting (\ref{eq:superpos}) into the
boundary conditions, they may be written
$
A \mathbf{a} = -\mathbf{g}(\bar{\mathbf{B}}) ,
$
where $A=A(\mathbf{g}(\mathbf{B}'))$
is a 6$\times$6 matrix. The appropriate coefficients required
to satisfy the boundary conditions are recovered from
solution of this small system for $\mathbf{a}$.
Again, the auxiliary functions $B'_j(r)$, the matrix
$A$ and its inverse may be precomputed, and
the boundary conditions for $\mathbf{B}'$ have been chosen
so that $B'_\pm$ are purely real,
$B'_z$ is purely imaginary, and $A$ is real.
At the end of the timestep, the solution is solenoidal and
satisfies the boundary conditions to machine precision.
\subsection{Implementation notes and parallelization}
The Taylor-Couette flow code was written in Fortran90. Nonlinear terms are evaluated using the pseudo-spectral method and are de-aliased using the 3/2 rule. The Fourier transforms are performed with the FFTW3 library \cite{frigo2005design} and matrix and vector operations are performed with BLAS \cite{lawson1979basic}. Each predictor-corrector iteration involves the solution of banded linear systems with forward-backward substitution using banded LU-factorizations that are precomputed prior to time-stepping. These operations are performed with LAPACK \cite{laug}. The code was parallelized so that data is split over the Fourier harmonics for the linear parts of the code: evaluating curls, gradients and matrix inversions for the time-stepping (these linear operations do not couple modes). Here all radial points for a particular mode are located on the same processor; separate modes may be located on separate processors. Data is split radially when calculating Fourier transforms and when evaluating products in real space (nonlinear term of the equations). The bulk of communication between processors occurs during the data transposes.
\subsection{Numerical validation}
The code was validated against several published linear stability results, as well as three-dimensional nonlinear simulations of the coupled induction and Navier-Stokes equations.
We tested the inductionless limit $Pm=0$ and finite $Pm$, obtaining excellent agreement in all cases.
\subsubsection{Linear stability of Couette flow subject to magnetic fields}
Linear instabilities were detected in the calculations
by monitoring the kinetic energy of the deviation from
circular Couette flow after introduction of a small disturbance.
In the linear regime we write
$$
u' \sim \exp(\lambda t + \mathrm{i}[k z +m \phi]),
\qquad E \sim |u'|^2 \sim \exp(2\sigma t),
$$
where $\lambda = \sigma +\mathrm{i}\omega$ is a complex number; the imaginary part $\omega$ is the oscillation frequency and the real part $\sigma$ the growth rate of the dominant perturbation. The latter is readily extracted from the relationship $\log(E) \sim 2 \sigma t$.
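In practice $\sigma$ is obtained from a least-squares fit of $\log E$ against $t$. A minimal sketch (with synthetic data, assuming a clean exponential regime):

```python
# Extract the growth rate sigma from a kinetic-energy time series,
# using E(t) ~ exp(2 sigma t) in the linear regime. The signal here is
# synthetic; in a simulation E would come from the time-stepper output.
import math

sigma_true = 0.35
ts = [0.1 * n for n in range(200)]
E = [1e-8 * math.exp(2.0 * sigma_true * t) for t in ts]

# least-squares slope of log(E) versus t, divided by two
ys = [math.log(e) for e in E]
t_mean = sum(ts) / len(ts)
y_mean = sum(ys) / len(ys)
slope = (sum((t - t_mean) * (y - y_mean) for t, y in zip(ts, ys))
         / sum((t - t_mean) ** 2 for t in ts))
sigma = 0.5 * slope
```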
We first reproduced the classical results of Roberts \cite{roberts1964stability}, who considered the inductionless limit $Pm=0$ for narrow gap geometry $\delta=0.95$ and stationary outer cylinder. For a Hartmann number of $Ha=5.477$ he obtained a critical Reynolds number of $Re_c=281.05$ with associated critical axial wavenumber of $\alpha_c=2.69$ and $m=0$. In our simulations we fixed $\alpha=\alpha_c$ and obtained $Re_c=281.055$ using $N=33$ radial points. For finite magnetic Prandtl number $Pm=1$ we reproduced the results of Willis and Barenghi \cite{willis2002} for wide gap $\delta=0.5$ and stationary outer cylinder. For $Ha=39$, and $\alpha=2.4$ and $m=0$ they found $Re_{c}=60.5$, which is in good agreement with our result ($Re_{c}=60.3$). In order to test the azimuthal magnetic field we reproduced recent results of Hollerbach \emph{et al}.~\cite{hollerbach2010nonaxisymmetric} for the AMRI. For example at $Pm=0$, $Ha=316$, $Re=1000$, $\delta=0.5$ and $\mu=0.26$, they obtained $\sigma=-78.6$ for wavenumbers $\alpha=7.17$ and $m=1$, which is in very good agreement with our value $\sigma=-78.7$.
\subsubsection{Nonlinear simulations}
Willis and Barenghi \cite{willis2002taylor} explored dynamo action in Taylor-Couette flow. They first solved the Navier-Stokes equations in the absence of magnetic field and subsequently applied a small magnetic disturbance to test whether it grew into a dynamo. In the axisymmetric Taylor-vortex regime
axisymmetric magnetic fields were found to decay, in accordance with Cowling's anti-dynamo theorem. Non-axisymmetric magnetic fields
may be excited, however, and for $Re =136.4$, $\delta=0.5$, $Pm=2$, $\alpha =1.57$ and stationary outer cylinder they observed that the magnetic disturbance grows for $m=1$ ($\sigma_{B,m=1}\approx 0.2$, leading to dynamo action), whereas it decays for $m=2$ ($\sigma_{B,m=2}\approx-1.4$). We reproduced this setting using $N=41$, $K=16$ and $M=12$ and obtained $\sigma_{B,m=1}\approx 0.16$ and
$\sigma_{B,m=2}\approx -1.42$, in good agreement with \cite{willis2002taylor}.
Finally, we compared results of the axisymmetric HMRI (helical field $\mathbf{B}= B_0 (\mathbf{e_z}+\gamma \mathbf{e_\phi})$) obtained with the spectral code of Hollerbach \cite{hollerbach2008spectral}. A typical diagnostic quantity is the torque at the cylinders
\begin{equation}
\label{eq:torque}
G \sim - 2 \pi r^3 \frac{\partial}{\partial r} \Big [\frac{u_\phi}{r} \Big ] \sim 2 \pi r^2 \Big [ \frac{u_\phi}{r} - \partial_r u_\phi \Big ].
\end{equation}
The laminar flow torque will be used as a scale, so that the
dimensionless ratio
$G/G_\text{laminar}$
measures the intensity of angular momentum transfer relative to
laminar flow.
We choose the parameters $Re=300$, $Ha=10$, $\delta = 0.5$, $\gamma=2$, $\alpha=0.314$, which are well into the nonlinear regime. After nonlinear saturation the dimensionless torque on the cylinders obtained with our code for $N=81$ and $K=192$ was $G/G_\text{laminar}=1.4122$, which is in excellent agreement with the code of Hollerbach
($G/G_\text{laminar}=1.4123$).
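For the laminar profile \eqref{eq:couette} the torque \eqref{eq:torque} is independent of $r$ (it reduces to $G=4\pi B$ for $V=Ar+B/r$), which provides a convenient consistency check. A brief sketch, evaluating \eqref{eq:torque} by finite differences at two radial positions:

```python
# The torque G ~ -2 pi r^3 d/dr (V/r) is r-independent for the laminar
# Couette profile V = A r + B / r, since d/dr(V/r) = -2B/r^3, so G = 4 pi B.
import math

delta, mu, Re = 0.5, 0.26, 1480.0
A = (mu / delta * Re - delta * Re) / (1 + delta)
B = delta * (Re - mu * Re) / ((1 - delta) ** 2 * (1 + delta))

def V(r):
    return A * r + B / r

def torque(r, h=1e-5):
    dVr = (V(r + h) / (r + h) - V(r - h) / (r - h)) / (2 * h)  # d/dr (V/r)
    return -2.0 * math.pi * r ** 3 * dVr

G1, G2 = torque(1.2), torque(1.8)   # should agree, both equal to 4 pi B
```

In the simulations the same invariance serves as a check of the radial discretisation, and the ratio $G/G_\text{laminar}$ then quantifies the turbulent enhancement.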
\section{Primary instability: standing waves}
In the experiments of Seilmayer \emph{et al.} \cite{seilmayer2014experimental} the AMRI was explored near the onset of instability for two different Reynolds numbers, $Re = 1480$ and $2960$, and Hartmann numbers in the range $Ha\in[0,160]$. The experiments have an aspect ratio of $10$. Here we selected a periodic domain of length $L_z=12.6$ and initialised the simulations by disturbing all Fourier modes with the same amplitude, thus allowing the axial wavenumber to be selected naturally.
Because of the symmetries (see \S\ref{sec:sym}), two different Hopf-bifurcation scenarios are possible \cite{knobloch1996symmetry}. In the first one, the $z$-reflection symmetry is broken and depending on the initial conditions either upward traveling waves (with $k>0$ modes) or downward traveling waves (with $k<0$ modes) may be observed. In the second scenario, the $z$-reflection symmetry is preserved and a standing wave emerges. This is a combination of upward and downward traveling waves for which positive and negative $k$ modes are in phase and have exactly the same amplitude. In both scenarios waves rotate in the azimuthal direction.
\begin{figure}
\begin{center}
\begin{tabular}{cc}
(a) \hspace{1cm} & (b)\\ \includegraphics[width=0.35\linewidth]{SW1480_dense.eps} \hspace{1cm}&
\includegraphics[width=0.45\linewidth]{coef_new.eps}\\
\end{tabular}
\end{center}
\caption{\small \textbf{Primary instability: a standing wave arises through a supercritical Hopf bifurcation}. ($a$) \textbf{Standing wave} with $k=9$ (corresponding to 9 vortex-pairs in the axial direction) at $Re=1480$, $Ha=150$ and $L_z=12.6$ (long domain). From left to right: isosurfaces of axial velocity \AG{$v_z= \pm 0.005$ (normalized with the velocity of the inner cylinder $\Omega_i r_i$)}, contours of axial and radial velocity. \AG{The aspect ratio of the colormaps has been stretched by a factor of $0.6$.} ($b$)
\textbf{Onset of instability} at $Re=1480$. The critical Hartmann number is $Ha_{c} \approx 107$ with critical axial mode $k=8$. The square of the amplitude of the Fourier coefficient $A_{8,1}^2$ depends linearly on $Ha - Ha_{c}$ close to the critical point as expected in a Hopf bifurcation. The coefficient $A_{-8,1}$ has the same amplitude as $A_{8,1}$, confirming that the axial reflection symmetry is preserved (standing wave).}
\label{fig:sup_Hopf}
\end{figure}
We found that at $Re=1480$ the circular Couette flow (\ref{eq:couette}) becomes unstable at $Ha_c = 107$. The emerging pattern is a standing wave (SW) with dominant mode $(k,m)=(\pm 8,1)$, so that 8 pairs of vortices fit in the domain. Figure~\ref{fig:sup_Hopf}b shows the square of the amplitude of the complex Fourier coefficient $A_{8,1}$ for increasing $Ha$. As expected in a Hopf bifurcation, $A_{k,m}^2 \propto Ha-Ha_c$ near the onset of instability, and this relationship holds up to $Ha \approx 112$. The vortex arrangement of the standing wave at $Ha=150$ is shown in the flow snapshot of figure~\ref{fig:sup_Hopf}a. In this case the mode $k=9$ was naturally selected. Thus the dominant axial wavenumber depends on $Ha$ because of the Eckhaus instability, as also observed in hydrodynamic Taylor-Couette flow \cite{riecke1986stability}. The torque changes correspondingly with the axial wavenumber (black curve in Fig.~\ref{fig:subcr_Hopf}a), so that at the same parameter values states with different wavenumber and torque can be realised depending on the initial conditions. As $Ha$ is increased further, the instability is gradually damped until it disappears at $Ha \approx 175$. Over the whole Hartmann range the additional torque due to the SW never exceeds 1\% of the laminar value (see figure~\ref{fig:subcr_Hopf}a), indicating very weak transport of angular momentum. \AG{The maximum in torque correlates well with the maximum growth rate from the linear stability analysis shown in Fig.~\ref{fig:subcr_Hopf}b}.
\begin{figure}
\begin{center}
\begin{tabular}{cc}
(a) & (b) \\ \includegraphics[width=0.475\linewidth]{tor_good_edge_new.eps}&
\includegraphics[width=0.495\linewidth]{eig_new.eps}\\
\end{tabular}
\end{center}
\caption{\small \textbf{Onset of spatio-temporal chaos}. ($a$) \textbf{Dimensionless torque for AMRI} versus $Ha$ for $Re = 1480$ and $Re = 2960$. Eckhaus instability at $Re=1480$: the branches of the black curve belong to different axial wavenumbers ($k=8$, $9$ and $10$) of the standing wave. Bistability at $Re = 2960$: in the yellow-shaded region standing waves (green) and defects (red) coexist; between them there is an unstable branch or edge state (blue). \AG{($b$) \textbf{Perturbation growth rates} $\sigma$ (normalised with $\Omega_i$) as a function of $Ha$ for $Re = 1480$ and $Re = 2960$. Positive values of $\sigma$ correspond to instability.}}
\label{fig:subcr_Hopf}
\end{figure}
\begin{figure}
\begin{center}
\begin{tabular}{cc}
(a) \hspace{2cm} & (b) \\ \includegraphics[width=0.35\linewidth]{Def2960_dense.eps} \hspace{2cm} &
\includegraphics[width=0.35\linewidth]{Def4000_dense.eps}\\
\end{tabular}
\end{center}
\caption{\small ($a$) \textbf{Defects at} $Re=2960, Ha=190$ and $L_z=12.6$ (long domain). From left to right: isosurfaces of axial velocity \AG{($v_z= \pm 0.01$ [$\Omega_i r_i$])}, contours of axial and radial velocity. ($b$) \textbf{ Onset of turbulence}. Isosurfaces of axial velocity \AG{($v_z= \pm 0.0125$ [$\Omega_i r_i$])}, contours of axial and radial velocity. $Re=4000$, $Ha=264$ and $L_z=12.6$. \AG{The aspect ratio of the colormaps has been stretched by a factor of $0.6$.}}
\label{fig:defects}
\end{figure}
\begin{figure}
\begin{center}
\begin{tabular}{cc}
(a) & (b) \\ \includegraphics[width=0.48\linewidth]{edge_track_Ha140_new.eps}&
\includegraphics[width=0.49\linewidth]{edge_track_Ha155_new.eps}\\
\end{tabular}
\end{center}
\caption{\small \textbf{Edge tracking procedure} at $Re=2960$. \AG{(a) $Ha=140$ and (b) $Ha=155$}. Green curves evolve toward the standing wave state and red curves toward defects, although all of them start very close to the edge state. \AG{Oscillations at $Ha=155$, which is close to the destabilisation point of the standing wave, appear to decay, while at $Ha=140$ they saturate. Time is normalised using the inner cylinder rotation frequency $1/\Omega_i$, i.e. $t=t \cdot Re$.}}
\label{fig:edge_track}
\end{figure}
\section{Onset of spatio-temporal chaos}
At $Re=2960$ a Hopf bifurcation occurs at $Ha_c = 120$ and the emerging SW remains stable until $Ha=160$. Increasing $Ha$ beyond this point, a catastrophic transition to spatio-temporal chaos is observed: the vortex structure is damaged and the up-down symmetry is broken (Fig.~\ref{fig:defects}). Between $Ha=130$ and $160$ there is a hysteresis region in which both the SW and spatio-temporal chaos (defects) are locally stable (see Fig.~\ref{fig:subcr_Hopf}a). In this $Ha$-range, if the initial condition is a SW from another run at slightly different $Ha$, it remains stable. However, when starting for example from a randomly disturbed Couette profile, the flow evolves directly to defects.
This catastrophic transition \AG{suggests} a subcritical bifurcation. We investigated this hypothesis by computing the unstable branch separating defects and SW. For this purpose we combined time-stepping with a bisection strategy as follows. If the SW is slightly disturbed, the flow rapidly converges back to the SW because it is locally stable. The same applies to defects. For intermediate initial conditions the flow takes a long time before asymptotically reaching either the SW or the defects. Such initial conditions were generated here by linearly combining two selected flow snapshots of SW and defects. The combination was parametrised with a variable $\beta$, for which $\beta=0$ corresponds to the SW and $\beta=1$ to defects. With the bisection procedure, refining $\beta$ results in initial conditions successively closer to the manifold (or edge) separating the two basins of attraction. The edge consists of those initial conditions that tend neither to defects nor to the SW, and the attractor within this manifold is referred to as an edge state \cite{skufca2006edge}.
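The bisection idea can be illustrated on a toy bistable system (a schematic sketch only, with no dynamical resemblance to the AMRI): $\dot{x}=x-x^3$ has attractors $x=\pm1$ separated by the edge state $x=0$, and bisecting in the mixing parameter $\beta$ brackets the basin boundary.

```python
# Toy edge-tracking bisection for dx/dt = x - x^3: attractors at x = +-1,
# edge state at x = 0. Initial conditions interpolate between the two
# attractors, x0 = (1 - beta) * (-1) + beta * (+1), and beta is bisected.
def reaches_upper(x0, dt=0.01, steps=5000):
    x = x0
    for _ in range(steps):
        x += dt * (x - x ** 3)      # explicit Euler step
        if abs(x) > 0.9:            # close enough to an attractor
            break
    return x > 0.0

lo, hi = 0.0, 1.0                   # beta = 0 -> lower, beta = 1 -> upper
for _ in range(25):
    beta = 0.5 * (lo + hi)
    x0 = (1.0 - beta) * (-1.0) + beta * 1.0
    if reaches_upper(x0):
        hi = beta
    else:
        lo = beta
beta_edge = 0.5 * (lo + hi)         # converges to the basin boundary 0.5
```

In the full problem each "classification" is a costly three-dimensional simulation, and trajectories started close to the edge shadow the edge state for a long time before escaping, which is how the dynamics in Fig.~\ref{fig:edge_track} were obtained.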
Figure \ref{fig:edge_track} shows that as initial conditions are taken closer to the edge, the temporal dynamics become simple as the edge state is approached. \AG{At $Ha=155$, which is very close to the destabilisation of the SW, the dynamics appear to exhibit a damped oscillation (see Fig. \ref{fig:edge_track}b). Unfortunately, it is difficult to establish whether the oscillation finally decays or saturates at a tiny amplitude, as expected close to the bifurcation point. At $Ha=140$, however, which is further from the bifurcation point, the oscillation saturates at non-zero amplitude (see Fig. \ref{fig:edge_track}a). This suggests that the edge state is a relative periodic orbit (or modulated wave) emerging at a subcritical Hopf bifurcation of the SW. Despite this simple temporal behaviour, the spatial structure of the edge state is complicated (see Fig.~\ref{fig:edge_iso}a--c). It consists of a long-wave (subharmonic) modulation of the axially periodic pattern of the SW, which can be seen as a precursor to defects (compare Fig.~\ref{fig:edge_iso}a--b to Fig.~\ref{fig:defects}a). We expect that as $Ha$ is further reduced the edge state suffers a bifurcation cascade and becomes chaotic. This should continuously connect to defects and stabilise at a turning point for $Ha\gtrsim130$, which is the lowest $Ha$ for which defects remain stable.}
\begin{figure}
\begin{center}
\begin{tabular}{ccc}
(a) \hspace{2cm} & (b) & \hspace{2cm} (c) \\
\includegraphics[width=0.22\linewidth]{155edge_iso.eps} \hspace{2cm} &
\includegraphics[width=0.22\linewidth]{140edge_iso.eps} & \hspace{2cm}
\includegraphics[width=0.17\linewidth]{140edge_cont_crop.eps}\\
\end{tabular}
\end{center}
\caption{\small \textbf{Edge state at $Re=2960$ and $L_z=12.6$. (a) Close to the bifurcation point}, $Ha=155$: isosurfaces of axial velocity \AG{$v_z= \pm 0.0135$ [$\Omega_i r_i$]}. \textbf{(b) Far from the bifurcation point}, $Ha=140$: isosurfaces \AG{$v_z= \pm 0.012$ [$\Omega_i r_i$]}. \textbf{(c) Contours of axial and radial velocity}, $Ha=140$. \AG{The aspect ratio of the colormaps has been stretched by a factor of $0.6$.} The edge state consists of a long-wave modulation of the standing wave. }
\label{fig:edge_iso}
\end{figure}
\section{Turbulent transport of momentum}
As the Reynolds number is further increased, defects are expected to grow gradually into turbulence. \AG{Although it would be very interesting to perform a two-parameter study of the dynamics in $Ha$ and $Re$, this is computationally expensive and beyond the scope of the current work. We here chose to follow a parameter path of the form
\begin{equation}\label{eq:path}
Ha=a \,Re^b,
\end{equation}
with $a=0.71$ and $b=1.55$. This path is shown as a solid green line in Fig.~\ref{fig:TC_scheme}b and provides a very good approximation to the curve of maximum growth rate of the linear stability analysis (red dashed line). It goes deep into the instability region and so we expect the instability to fully develop as $Re$ increases with $Ha$ subject to \eqref{eq:path}.}
At $Re=4000$ the vortices are small at the inner cylinder and remain quite large at the outer cylinder (Fig.~\ref{fig:defects}b), and at $Re=9333$ this tendency develops into rapidly drifting small vortices at the inner cylinder and slow large vortices at the outer cylinder (Fig. \ref{fig:turbulence}a). There is no preferred direction in the system; vortices can travel up or down, both at the inner and outer cylinders.
\begin{figure}
\begin{center}
\begin{tabular}{cc}
(a) & (b) \\ \includegraphics[width=0.5\linewidth]{Def9333_changed.eps} &
\includegraphics[width=0.45\linewidth]{tor_turb.eps}\\
\end{tabular}
\end{center}
\caption{\small ($a$) \textbf{Turbulent flow in a short domain} at $Re=9333$, $Ha=456.7$ and $L_z=1.4$. Axial velocity isosurfaces \AG{$v_z= \pm 0.011$ [$\Omega_i r_i$]}, contours of axial and radial velocity. ($b$) \textbf{Dimensionless torque} as a function of $Re$ \AG{along the parameter path $Ha=a \,Re^b$, with $a=0.71$ and $b=1.55$. This path is shown as a green line in Fig.~\ref{fig:TC_scheme}b.}}
\label{fig:turbulence}
\end{figure}
\begin{figure}
\begin{center}
\begin{tabular}{cc}
(a) & (b) \\ \includegraphics[width=0.47\linewidth]{vel_r_1480_Omega_R.eps} \hspace{1cm}&
\includegraphics[width=0.47\linewidth]{vel_r_edge_140_Omega_R.eps}\\
(c) & (d) \\ \includegraphics[width=0.47\linewidth]{vel_r_def_190_2960_Omega_R.eps} \hspace{1cm}&
\includegraphics[width=0.47\linewidth]{vel_r_9333_Omega_R.eps}\\
\end{tabular}
\end{center}
\caption{\small \textbf{Transition to turbulence.} Evolution of radial velocity perturbation $u_r$ at the point $(r, \phi, z) = (1.5, 0, 0)$ with time ($a$) at $Re = 1480$ and $Ha=150$ (standing wave); ($b$) - the same at $Re = 2960$ and \AG{$Ha =140$} (edge state); ($c$) - at $Re = 2960$ and $Ha=190$ (defects); ($d$) - at $Re = 9333$ and $Ha=456.7$ (turbulence). Time is scaled using the inner cylinder rotation frequency $1/\Omega_i$, i.e. $t=t \cdot Re$; \AG{velocities are normalised with the velocity of the inner cylinder $\Omega_i r_i$}.}
\label{fig:tran_turb}
\end{figure}
The qualitative difference between standing wave, defects and turbulent flow is apparent in time series of the radial velocity perturbation $u_r$ taken at the mid-gap between the cylinders $(r, \phi, z) = (1.5, 0, 0)$.
\AG{Figure~\ref{fig:tran_turb}a shows that the radial velocity of the standing wave oscillates periodically around zero. The edge state features a slow temporal frequency modulating the oscillation of the SW (Fig. \ref{fig:tran_turb}b). For defects at $Re=2960$ the time series is mildly chaotic. As $Re$ increases toward turbulence the velocity pulsates in a very chaotic manner (Fig. \ref{fig:tran_turb}d). However, the main frequency associated with the AMRI can still be discerned. By comparing all panels it becomes apparent that this frequency scales with the rotation-rate of the inner cylinder. This is consistent with the linear stability analysis of \cite{hollerbach2010nonaxisymmetric}, and with the studies \cite{kirillov2010,kirillov2012}, where it is shown that in the low $Pm$ limit the AMRI is an inertial wave.}
The transfer rate of angular momentum between the cylinders is important for accretion-disc modelling. We checked the torque scaling for increasing Reynolds numbers (see Fig. \ref{fig:turbulence}b) along the parameter path~\eqref{eq:path}. The dimensionless torque increases with $Re$ according to the scaling law
\begin{equation}\label{eq:torq}
G \sim Re^{1.15},
\end{equation}
which is surprisingly low compared to hydrodynamic experiments in the Rayleigh unstable regime \cite{lathrop1992turbulent}. \AG{We believe that this torque scaling comes close to being an upper bound for the torque scaling of the AMRI in the $Re$ range studied here because the maximum in torque correlates well with maximum growth rates at low $Re$ (see Fig. \ref{fig:subcr_Hopf}). However, we must caution that in hydrodynamic Taylor--Couette flow different maxima of the torque have been observed as a function of the relative rotation of the cylinders~\cite{brauckmann2015momentum}. At large $Re$ these are not correlated to the maximum growth rate of the primary instability. Similar phenomena may occur for the AMRI.}
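The exponent in a power law such as \eqref{eq:torq} is typically extracted by a least-squares fit in log--log coordinates. The following is a minimal sketch with synthetic data (the values are illustrative only, not the simulation output reported here):

```python
import numpy as np

# Synthetic torque data following G = C * Re^1.15 (illustrative values,
# not the simulation output).
Re = np.array([1480.0, 2960.0, 4000.0, 9333.0])
G = 3.0 * Re**1.15

# Least-squares fit of log G = log C + b * log Re.
b, logC = np.polyfit(np.log(Re), np.log(G), 1)
print(round(b, 2))  # recovered exponent: 1.15
```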
\section{Discussion}
\begin{figure}
\begin{center}
\begin{tabular}{cc}
(a) & (b) \\ \includegraphics[width=0.48\linewidth]{1480_freq.eps} &
\includegraphics[width=0.48\linewidth]{2960_freq.eps}\\
\end{tabular}
\end{center}
\caption{\small \textbf{Comparison to PROMISE experiment:} angular drift frequencies of the waves at ($a$) Re 1480 and ($b$) Re 2960. Blue and red lines correspond to experimental results, black to our nonlinear simulations; the green line denotes outer cylinder rotation $\Omega_o/\Omega_i=0.26$. The waves rotate at approximately the outer cylinder frequency and slow down with increasing $Ha$.}
\label{fig:freq}
\end{figure}
We showed that the AMRI in Taylor-Couette flow manifests itself as a wave rotating in the azimuthal direction and standing in the axial direction, thereby preserving the reflection symmetry in the latter. In order to compare to experimental observations~\cite{seilmayer2014experimental} we computed the angular drift frequency of the wave. This is shown in figure~\ref{fig:freq} after being normalised with the rotation frequency of the inner cylinder. The wave rotates at approximately the outer cylinder frequency (dashed line) and slows down as the Hartmann number increases, which is in qualitative agreement with the experimental data. Note, however, that in the experiment two frequencies are simultaneously measured, corresponding to the up- and down-traveling spiral waves, respectively. Although in the standing wave the two frequencies are identical, in the experiment the asymmetric wiring creates $B_r$ and $B_z$ components of magnetic field which break the reflectional symmetry. As a result, up and down spirals travel with different frequencies, similar to
co-rotating Taylor-Couette flow in which the reflection symmetry is broken by an imposed axial flow \cite{avila2006double}. Another difference is that in the experiment the flow becomes unstable at lower $Ha$, which may be explained by the boundary conditions in the experiment differing from those in our simulations. In the experiment copper cylinders are used, so perfectly conducting walls would be a closer boundary condition for the magnetic field.
More significantly, in the experiments the cylinders are of finite length, so to reproduce their results exactly a no-slip condition on end-plates should be used. We have applied periodic boundary conditions in the axial direction, which more accurately model the accretion disc problem and allow us to compute high Reynolds number flows more efficiently.
As Re increases, a catastrophic transition to spatio-temporal chaos occurs directly from the SW. In a range of parameters SW and chaos are both locally stable and can be realised depending on the initial conditions. We have shown that the first step in this transition process is a subcritical Hopf bifurcation giving rise to an unstable relative periodic orbit, which
has been computed using an analogue of the edge-tracking algorithm introduced by Skufca \emph{et al}.~\cite{skufca2006edge} in shear flows. This unstable relative periodic orbit consists of a long-wave modulation of the axially periodic pattern of the standing wave and destroys the homogeneity of the vortical pattern. It can thus be seen as a temporally simple defect precursor of the ensuing spatio-temporal chaos. Because of the computational cost we could not track further instabilities on the unstable branch, which we speculate result in chaotic flow before the dynamics stabilize at a turning point ($Ha=130$ at $Re=2960$). After the turning point defects are stable and can be computed simply by time-stepping.
We believe that such long-wave instabilities are ubiquitous in fluid flows. In linearly stable shear flows, such instabilities of traveling waves were found to be responsible for spatial localisation \cite{melnikov2014long,chantry2014genesis}. In fact, in pipe flow the ensuing localised solutions, which are also relative periodic orbits, suffer a bifurcation cascade leading to chaos \cite{avila2013streamwise}. One difference is that in pipe flow the traveling waves are disconnected from laminar flow, whereas the standing wave of the AMRI is connected to circular Couette flow.
Our simulations were performed with a powerful spectral DNS method, which we have developed and validated against published results with excellent agreement. The method allowed us to compute flows up to $Re=10^4$. As $Re$ increases, defects accumulate and the flow becomes gradually turbulent. Although we found that the AMRI exhibits a weak scaling of angular momentum transport, with $G \propto Re^{1.15}$, we expect that larger magnetic Prandtl numbers $Pm$ (realistic for accretion discs) may result in a stronger scaling. Astrophysically important issues such as the precise angular momentum transport scalings obtained for different values of $Pm$, and also for different choices of imposed field (e.g.\ SMRI, HMRI, AMRI) will be the subject of future investigations.
\begin{acknowledgements}
Support from the Deutsche Forschungsgemeinschaft (grant number AV 120/1-1) and computing time from the J{\"u}lich Supercomputing Centre (grant number HER22) and Regionales Rechenzentrum Erlangen (RRZE) are acknowledged.
\end{acknowledgements}
\section*{References}
\bibliographystyle{ieeetr}
package x7c1.linen.modern.init.settings.my
import android.os.Bundle
import android.support.v4.app.FragmentActivity
import android.support.v7.app.{AlertDialog, AppCompatDialogFragment}
import android.widget.Button
import x7c1.linen.database.control.DatabaseHelper
import x7c1.linen.database.struct.HasAccountId
import x7c1.linen.glue.res.layout.SettingMyChannelCreate
import x7c1.linen.modern.init.settings.my.CreateChannelDialog.Arguments
import x7c1.linen.repository.channel.my.ChannelCreator.InputToCreate
import x7c1.linen.repository.channel.my.{ChannelCreator, ChannelWriterError, EmptyName, UserInputError}
import x7c1.wheat.ancient.context.ContextualFactory
import x7c1.wheat.ancient.resource.ViewHolderProviderFactory
import x7c1.wheat.lore.dialog.DelayedDialog
import x7c1.wheat.macros.fragment.TypedFragment
import x7c1.wheat.macros.intent.LocalBroadcaster
import x7c1.wheat.macros.logger.Log
import x7c1.wheat.modern.callback.either.EitherTask
import x7c1.wheat.modern.decorator.Imports._
import x7c1.wheat.modern.dialog.tasks.KeyboardControl
object CreateChannelDialog {
class Arguments(
val clientAccountId: Long,
val dialogFactory: ContextualFactory[AlertDialog.Builder],
val inputLayoutFactory: ViewHolderProviderFactory[SettingMyChannelCreate]
)
}
class CreateChannelDialog extends AppCompatDialogFragment
with DelayedDialog
with TypedFragment[Arguments] {
lazy val args = getTypedArguments
private val provide = EitherTask.hold[ChannelWriterError]
private lazy val helper = new DatabaseHelper(getActivity)
private lazy val keyboard = {
KeyboardControl[ChannelWriterError](this, layout.channelName)
}
def showIn(activity: FragmentActivity) = {
show(activity.getSupportFragmentManager, "channel-dialog")
}
override def onCreateDialog(savedInstanceState: Bundle) = {
args.dialogFactory.createAlertDialog(
title = "Create my channel",
positiveText = "Create",
negativeText = "Cancel",
layoutView = layout.itemView
)
}
override def onStart(): Unit = {
super.onStart()
initializeButtons(
positive = onClickPositive,
negative = onClickNegative
)
}
override def onStop(): Unit = {
super.onStop()
helper.close()
}
private def onClickPositive(button: Button) = {
val tasks = for {
input <- validateInput
channelId <- createChannel(input)
_ <- keyboard.taskToHide()
_ <- notifyCreated(channelId)
} yield {
input
}
tasks run {
case Right(input) =>
Log info s"channel created: $input"
case Left(error: UserInputError) =>
showError(error).execute()
case Left(error) =>
Log error error.dump
}
}
private def onClickNegative(button: Button) = {
Log info s"[init]"
keyboard.taskToHide().execute()
}
private def validateInput = provide {
def parseName = {
val name = layout.channelName.text.toString
if (name.isEmpty) {
Left(EmptyName())
} else {
Right(name)
}
}
def parseDescription = {
val description = layout.channelDescription.text.toString
Right(Option(description))
}
for {
name <- parseName.right
description <- parseDescription.right
} yield InputToCreate(
channelName = name,
description = description
)
}
private def createChannel(input: InputToCreate) = provide async {
val factory = ChannelCreator(helper, args.clientAccountId)
factory createChannel input
}
private def showError(error: UserInputError) = provide ui {
error match {
case EmptyName() => layout.channelNameLayout setError error.message
}
}
private def notifyCreated(channelId: Long) = provide {
val event = new ChannelCreated(
accountId = args.clientAccountId,
channelId = channelId
)
LocalBroadcaster(event) dispatchFrom getActivity
Right(event)
}
private lazy val layout = {
args.inputLayoutFactory.create(getActivity).inflate()
}
}
class ChannelCreated(
val accountId: Long,
val channelId: Long
)
object ChannelCreated {
implicit object account extends HasAccountId[ChannelCreated] {
override def toId = _.accountId
}
}
Q: Internet connection fails on 1 machine, the other is OK I have 2 Windows XP computers connected on a wireless network via a router. Both machines can see each other's shared folders and on machine 1 I can connect to the Internet without a problem. The connection on the other machine OTOH is so sssllooowww that Firefox (or IE8 for that matter) never manages to load a page ("The connection was reset" after 15 minutes or so).
PC 2 can ping the router and the other machine successfully, and also load the router's setup page.
The software (OS + apps) was reinstalled today, and Firefox did work properly after installation.
Any ideas?
TIA
Steven
edit:
Fixed! Installed an audio(!) driver and everything is peachy again (for the time being). Question can be closed AFAIC.
A: I would unplug all connections to the router and turn it off and unplug it. Then, turn off both XP machines. Plug the router in and then boot XP machine number 1. Go to the setup network wizard and create a network that replaces the existing network. Call the work-group something different than it was named before.
Then boot XP machine number 2, and go through the same network setup wizard. Name the work-group the same. Reboot both XP machines and see what happens.
What you might find is that some config was messed up and doing a fresh configuration may help this.
PS - At some point early on, perhaps before configuring the network for the two XP machines, plug one directly into the router and be sure it is configured correctly, i.e., IP range set (192.168.10.10 through 100, for example), DHCP enabled, etc.
A: I'm afraid this will be another "not an answer". I don't know how else to approach this other than by trying to narrow down the scope of the problem via trial and error.
The first thing I suggest you do is determine whether the problem is related to your wireless connection or not. Turn off wireless on the PC experiencing problems and connect it via wired ethernet. Then use it long enough to determine whether the problem is still there.
If the problem goes away then I think that would suggest a problem with your wireless connection. If the problem persists when using a wired connection to the router, then I would guess ... and it's only a WAG ... that you have some sort of malware installed on that system.
When you "reinstalled" the software, did you do an XP repair install or did you format the partition and do a "clean" XP install?
A: Fixed! Installed an audio(!) driver and everything is peachy again.
edit:
the driver was the SIGMATEL driver which you can download from the audio section in the XPS M1710 downloads on the Dell support page. Filename is R171789.exe. Note that I'm not quite sure that the lacking driver was the culprit, but the installation was the only change I made.
You might be surprised to learn there is actually a link. The same principle that allows objects to fly through space – theoretically forever – is used in Tetra Pak Separators to create financial and environmental benefits for our customers.
Watch the short animation to find out more.
Have you ever wondered how the Summer season affects your website visitors or customer engagement? Find out in our first newsletter article detailing our brand new web analytics platform, free to all our existing clients! In our second newsletter article we've detailed the logo design and branding process, outlining a typical Studio Whitby project and the work involved to create a new brand identity.
Our redesigned and improved website analytics dashboard helps our hosting clients gauge the effectiveness of online advertising and promotions, as well as website performance over time.
If you're thinking of a new logo or fresh look, rest assured at Studio Whitby we believe in making our logo design projects as easy and as simple as possible for our clients. While every project is different, I've outlined a typical logo design and branding project - from first contact to final design.
Studio Whitby is based at the Green Lane Business Centre, Whitby and open 9am - 5pm every week day. If you'd like to visit us to discuss any upcoming graphic design or web development work please call us on 01947 899120 to book a meeting.
Tegafur is a prodrug that, after being metabolized, is converted into 5-fluorouracil and used as an antineoplastic agent, especially in cases of breast, liver and gastrointestinal cancer. It should, however, be avoided in women who are pregnant or breastfeeding. It is administered orally or intravenously.
Mechanism of action
The active metabolite interferes with the S phase of the cell cycle.
Usual doses
The usual dose for oral administration is 1 g/m² per day for 6 weeks. The intravenous dose is 2 g/m² for 5 days, or a single dose of 3-4 g/m² per day, repeated for two or three weeks.
A bounding volume is, in computational geometry, a simple geometric body that encloses a complex three-dimensional object or shape.
Applications and variants
Bounding volumes are used above all to accelerate algorithms in computational geometry and computer graphics, for instance in ray tracing. They are also often structured hierarchically (bounding volumes enclosing other bounding volumes) to increase efficiency further. In computer games they are used as hitboxes to simplify collision detection.
The following bounding volumes are common:
Spheres (bounding spheres). This kind of bounding volume is especially widespread in collision detection, since collisions with spheres are very easy to compute.
Boxes or cubes (bounding boxes). Box-shaped bounding volumes often enclose objects more tightly than spheres and are therefore advantageous in some applications such as ray tracing. Ray tracing can be accelerated via bounding volume hierarchies (BVH). Arbitrarily oriented boxes are called oriented bounding boxes (OBB), while boxes aligned with the coordinate axes are called axis-aligned bounding boxes (AABB). AABBs are usually defined by two points giving the positions of the corners at either end of a box diagonal. A two-dimensional bounding box is called a minimum bounding rectangle.
k-DOPs, short for k-discretely oriented polytopes. In contrast to OBBs, k-DOPs allow several bounding face orientations, which lets them enclose objects more tightly. These bounding faces must always be pairwise parallel, so a k-DOP can also be viewed as the intersection of k slabs. The overlap (intersection) test of two k-DOPs can be performed in O(k) time.
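The appeal of spheres and AABBs is that their overlap tests reduce to a handful of comparisons. A minimal sketch of both tests in 3D (illustrative only; boxes are given by their min/max corners):

```python
def spheres_overlap(c1, r1, c2, r2):
    """Two spheres intersect iff the center distance is at most r1 + r2."""
    d2 = sum((a - b) ** 2 for a, b in zip(c1, c2))
    return d2 <= (r1 + r2) ** 2

def aabbs_overlap(min1, max1, min2, max2):
    """Two AABBs (min/max corners) intersect iff they overlap on every axis."""
    return all(lo1 <= hi2 and lo2 <= hi1
               for lo1, hi1, lo2, hi2 in zip(min1, max1, min2, max2))

print(spheres_overlap((0, 0, 0), 1.0, (1.5, 0, 0), 1.0))          # True
print(aabbs_overlap((0, 0, 0), (1, 1, 1), (2, 0, 0), (3, 1, 1)))  # False
```

Comparing the squared distance against the squared radius sum avoids a square root, which is why sphere tests are the cheapest of all.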
\section{Introduction}
\label{sec:intro}
Statistical model selection in the high-dimensional regime arises in a number of applications. In many data analysis problems in geophysics, radiology, genetics, climate studies, and image processing, the number of samples available is comparable to or even smaller than the number of variables. However, it is well-known that empirical statistics such as sample covariance matrices are not well-behaved when both the number of samples and the number of variables are large and comparable to each other (see \cite{MarP1967}). Model selection in such a setting is therefore both challenging and of great interest. In order for model selection to be well-posed given limited information, a key assumption that is often made is that the underlying model to be estimated only has \emph{a few degrees of freedom}. Common assumptions are that the data are generated according to a graphical model, or a stationary time-series model, or a simple factor model with a few latent variables. Sometimes geometric assumptions are also made in which the data are viewed as samples drawn according to a distribution supported on a low-dimensional manifold.
A model selection problem that has received considerable attention recently is the estimation of covariance matrices in the high-dimensional setting. As the sample covariance matrix is poorly behaved in such a regime \cite{MarP1967,Joh2001}, some form of \emph{regularization} of the sample covariance is adopted based on assumptions about the true underlying covariance matrix. For example approaches based on banding the sample covariance matrix \cite{BicL2008a} have been proposed for problems in which the variables have a natural ordering (e.g., times series), while ``permutation-invariant'' methods that use thresholding are useful when there is no natural variable ordering \cite{Elk2008,BicL2008b}. These approaches provide consistency guarantees under various sparsity assumptions on the true covariance matrix. Other techniques that have been studied include methods based on shrinkage \cite{LedW2003,WuP2003} and factor analysis \cite{FanFL2008}. A number of papers have studied covariance estimation in the context of \emph{Gaussian graphical model selection}. In a Gaussian graphical model the \emph{inverse} of the covariance matrix, also called the concentration matrix, is assumed to be sparse, and the sparsity pattern reveals the conditional independence relations satisfied by the variables. The model selection method usually studied in such a setting is $\ell_1$-regularized maximum-likelihood, with the $\ell_1$ penalty applied to the entries of the inverse covariance matrix to induce sparsity. The consistency properties of such an estimator have been studied \cite{RotBLZ2008,RavWRY2008,LamF2009}, and under suitable conditions \cite{LamF2009,RavWRY2008} this estimator is also ``sparsistent'', i.e., the estimated concentration matrix has the same sparsity pattern as the true model from which the samples are generated. 
An alternative approach to $\ell_1$-regularized maximum-likelihood is to estimate the sparsity pattern of the concentration matrix by performing regression separately on each variable \cite{MeiB2006}; while such a method consistently estimates the sparsity pattern, it does not directly provide estimates of the covariance or concentration matrix.
In many applications throughout science and engineering, a challenge is that one may not have access to observations of all the relevant phenomena, i.e., some of the relevant variables may be hidden or unobserved. Such a scenario arises in data analysis tasks in psychology, computational biology, and economics. In general latent variables pose a significant difficulty for model selection because one may not know the number of relevant latent variables, nor the relationship between these variables and the observed variables. Typical algorithmic methods that try to get around this difficulty usually fix the number of latent variables as well as the structural relationship between latent and observed variables (e.g., the graphical model structure between latent and observed variables), and use the EM algorithm to fit parameters \cite{DemLR1977}. This approach suffers from the problem that one optimizes non-convex functions, and thus one may get stuck in sub-optimal local minima. An alternative method that has been suggested is based on a greedy, local, combinatorial heuristic that assigns latent variables to groups of observed variables, based on some form of clustering of the observed variables \cite{EliNN2007}; however, this approach has no consistency guarantees.
In this paper we study the problem of latent-variable graphical model selection in the setting where all the variables, both observed and hidden, are jointly Gaussian. More concretely let the covariance matrix of a finite collection of jointly Gaussian random variables $X_O \cup X_H$ be denoted by $\Sigma_{(O ~ H)}$, where $X_O$ are the observed variables and $X_H$ are the unobserved, hidden variables. The marginal statistics corresponding to the observed variables $X_O$ are given by the marginal covariance matrix $\Sigma_O$, which is simply a submatrix of the full covariance matrix $\Sigma_{(O ~ H)}$. However suppose that we parameterize our model by the concentration matrix $K_{(O ~ H)} = \Sigma_{(O ~ H)}^{-1}$, which as discussed above reveals the connection to graphical models. In such a parametrization, the \emph{marginal concentration matrix} $\Sigma_O^{-1}$ corresponding to the observed variables $X_O$ is given by the Schur complement \cite{HorJ1990} with respect to the block $K_H$:
\begin{equation*}
\tilde{K}_{O} = \Sigma_O^{-1} = K_O - K_{O,H} K_H^{-1} K_{H,O}.
\end{equation*}
Thus if we only observe the variables $X_O$,
we only have access to $\Sigma_O$ (or $\tilde{K}_O$). The two terms that compose $\tilde{K}_O$ above have interesting properties. The matrix $K_O$ specifies the concentration matrix of the \emph{conditional statistics} of the observed variables given the latent variables. If these conditional statistics are given by a sparse graphical model then $K_O$ is \emph{sparse}. On the other hand the matrix $K_{O,H} K_H^{-1} K_{H,O}$ serves as a \emph{summary} of the effect of marginalization over the hidden variables $H$. This matrix has small rank if the number of latent, unobserved variables $H$ is small relative to the number of observed variables $O$ (the rank is equal to $|H|$). Therefore the marginal concentration matrix $\tilde{K}_O$ of the observed variables $X_O$ is generally \emph{not sparse} due to the additional low-rank term $K_{O,H} K_H^{-1} K_{H,O}$. Hence standard graphical model selection techniques applied directly to the observed variables $X_O$ are not useful.
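The Schur-complement relation above is easy to verify numerically: build a random joint concentration matrix, invert it to obtain the joint covariance, and compare the inverse of the observed block $\Sigma_O$ with $K_O - K_{O,H} K_H^{-1} K_{H,O}$. A small self-contained sketch (random model, illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
p, h = 5, 2                      # observed and hidden dimensions

# Random positive-definite joint concentration matrix K_{(O H)}.
A = rng.standard_normal((p + h, p + h))
K = A @ A.T + (p + h) * np.eye(p + h)

K_O, K_OH = K[:p, :p], K[:p, p:]
K_H, K_HO = K[p:, p:], K[p:, :p]

Sigma = np.linalg.inv(K)          # joint covariance
Sigma_O = Sigma[:p, :p]           # marginal covariance of observed variables

# Marginal concentration = Schur complement with respect to the hidden block.
schur = K_O - K_OH @ np.linalg.inv(K_H) @ K_HO
print(np.allclose(np.linalg.inv(Sigma_O), schur))  # True
```

Note that the low-rank term `K_OH @ inv(K_H) @ K_HO` has rank equal to the number of hidden variables `h`, matching the discussion above.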
A modeling paradigm that infers the effect of the latent variables $X_H$ would be more suitable in order to provide a simple explanation of the underlying statistical structure. Hence we \emph{decompose} $\tilde{K}_O$ into the sparse and low-rank components, which reveals the conditional graphical model structure in the observed variables as well as the \emph{number} of and effect due to the unobserved latent variables. Such a method can be viewed as a blend of principal component analysis and graphical modeling. In standard graphical modeling one would directly approximate a concentration matrix by a sparse matrix in order to learn a sparse graphical model. On the other hand in principal component analysis the goal is to explain the statistical structure underlying a set of observations using a small number of latent variables (i.e., approximate a covariance matrix as a low-rank matrix). In our framework based on decomposing a concentration matrix, we learn a graphical model among the observed variables \emph{conditioned} on a few (additional) latent variables. Notice that in our setting these latent variables are \emph{not} principal components, as the conditional statistics (conditioned on these latent variables) are given by a graphical model. Therefore we refer to these latent variables informally as \emph{hidden components}.
Our first contribution in Section~\ref{sec:iden} is to address the fundamental question of \emph{identifiability} of such latent-variable graphical models given the marginal statistics of only the observed variables. The critical point is that we need to tease apart the correlations induced due to marginalization over the latent variables from the conditional graphical model structure among the observed variables. As the identifiability problem is one of \emph{uniquely} decomposing the sum of a sparse matrix and a low-rank matrix into the individual components, we study the algebraic varieties of sparse matrices and low-rank matrices. An important theme in this paper is the connection between the tangent spaces to these algebraic varieties and the question of identifiability. Specifically let $\Omega(K_O)$ denote the tangent space at $K_O$ to the algebraic variety of sparse matrices, and let $T(K_{O,H} K_H^{-1} K_{H,O})$ denote the tangent space at $K_{O,H} K_H^{-1} K_{H,O}$ to the algebraic variety of low-rank matrices. Then the \emph{statistical} question of identifiability of $K_O$ and $K_{O,H} K_H^{-1} K_{H,O}$ given $\tilde{K}_O$ is determined by the \emph{geometric} notion of \emph{transversality} of the tangent spaces $\Omega(K_O)$ and $T(K_{O,H} K_H^{-1} K_{H,O})$. The study of the transversality of these tangent spaces leads us to natural conditions for identifiability. In particular we show that latent-variable models in which $(1)$ the sparse matrix $K_O$ has a small number of nonzeros per row/column, and $(2)$ the low-rank matrix $K_{O,H} K_H^{-1} K_{H,O}$ has row/column spaces that are not closely aligned with the coordinate axes, are identifiable. These two conditions have natural statistical interpretations. 
The first condition ensures that there are no densely-connected subgraphs in the conditional graphical model structure among the observed variables $X_O$ given the hidden components, i.e., that these conditional statistics are indeed specified by a sparse graphical model. Such statistical relationships may otherwise be mistakenly attributed to the effect of marginalization over some latent variable. The second condition ensures that the effect of marginalization over the latent variables is ``spread out'' over many observed variables; thus, the effect of marginalization over a latent variable is not confused with the conditional graphical model structure among the observed variables. In fact the first condition is assumed in some papers on standard graphical model selection without latent variables (see for example \cite{RavWRY2008}). We note here that the question of parameter identifiability was recently studied for models with discrete-valued latent variables (i.e., mixture models, hidden Markov models) \cite{AllMR2009}. However, this work is not applicable to our setting in which both the latent and observed variables are assumed to be jointly Gaussian.
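Trivial intersection of the two tangent spaces, $\Omega \cap T = \{0\}$, can be probed numerically in a toy instance by stacking vectorized bases of the two spaces and checking that their dimensions add up. The sketch below uses a diagonal support pattern and a rank-one low-rank term with dense factors; this simple instance is our choice for illustration, not the general setting analyzed in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
p = 6

# Sparse component: support restricted to the diagonal (few entries per row).
support = [(i, i) for i in range(p)]

# Low-rank component L = u v^T with dense (incoherent) factors.
u, v = rng.standard_normal(p), rng.standard_normal(p)

# Basis of Omega(S): one indicator matrix per support entry.
omega_basis = []
for (i, j) in support:
    E = np.zeros((p, p)); E[i, j] = 1.0
    omega_basis.append(E.ravel())

# Spanning set of T(L) = { u x^T + y v^T }.
t_basis = []
for i in range(p):
    e = np.zeros(p); e[i] = 1.0
    t_basis.append(np.outer(u, e).ravel())
    t_basis.append(np.outer(e, v).ravel())

O = np.array(omega_basis).T
T = np.array(t_basis).T
dim_omega = np.linalg.matrix_rank(O)
dim_t = np.linalg.matrix_rank(T)
joint = np.linalg.matrix_rank(np.hstack([O, T]))

# The intersection is {0} iff the dimensions add up.
print(joint == dim_omega + dim_t)  # True for generic dense u, v
```

Here the diagonal support keeps the sparse tangent space far from any low-rank tangent space with dense factors, mirroring the two identifiability conditions stated above.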
As our next contribution we propose a \emph{regularized maximum-likelihood decomposition} framework to approximate a given sample covariance matrix by a model in which the concentration matrix decomposes into a sparse matrix and a low-rank matrix. A number of papers over the last several years have suggested that heuristics based on using the $\ell_1$ norm are very effective for recovering sparse models \cite{Don2006a,Don2006b,CanRT2006}. Indeed such heuristics have been effectively used, as described above, for model selection when the goal is to estimate sparse concentration matrices. In her thesis \cite{Faz2002} Fazel suggested a convex heuristic based on the nuclear norm for rank-minimization problems in order to recover low-rank matrices. This method generalized the previously studied trace heuristic for recovering low-rank positive semidefinite matrices. Recently several conditions have been given under which these heuristics provably recover low-rank matrices in various settings \cite{RecFP2009,CanR2009}. Motivated by the success of these heuristics, we propose the following penalized likelihood method given a sample covariance matrix $\Sigma^n_O$ formed from $n$ samples of the observed variables:
\begin{equation}
\begin{aligned}
(\hat{S}_n,\hat{L}_n) = \arg \min_{S,L} & ~~~ -\ell(S-L; \Sigma^n_O) ~~ + ~~ \lambda_n ~ (\gamma \|S\|_{1} +
\mathrm{tr}(L)) \\ \mbox{s.t.} & ~~~ S-L \succ 0, ~~ L \succeq 0.
\end{aligned}
\label{eq:sdp}
\end{equation}
Here $\ell$ represents the Gaussian log-likelihood function and is given by $\ell(K;\Sigma) = \log\det(K) - \mathrm{tr}(K \Sigma)$ for $K \succ 0$, where $\mathrm{tr}$ is the trace of a matrix and $\det$ is the determinant. The matrix $\hat{S}_n$ provides an estimate of $K_O$, which represents the conditional concentration matrix of the observed variables; the matrix $\hat{L}_n$ provides an estimate of $K_{O,H} K_H^{-1} K_{H,O}$, which represents the effect of marginalization over the latent variables. Notice that the regularization function is a combination of the $\ell_1$ norm applied to $S$ and the nuclear norm applied to $L$ (the nuclear norm reduces to the trace over the cone of symmetric, positive-semidefinite matrices), with $\gamma$ providing a tradeoff between the two terms. This variational formulation is a \emph{convex optimization} problem. In particular it is a regularized max-det problem and can be solved in polynomial time using standard off-the-shelf solvers \cite{WanST2009}.
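The objective in \eqref{eq:sdp} is straightforward to evaluate for a candidate pair $(S,L)$; the sketch below (Python/numpy, with toy inputs chosen purely for illustration) computes the penalized negative log-likelihood. Solving the program itself would be delegated to an off-the-shelf regularized max-det solver such as the one described in \cite{WanST2009}.

```python
import numpy as np

def sdp_objective(S, L, Sigma_O, lam, gamma):
    """Value of the objective in (eq:sdp):
    -ell(S - L; Sigma_O) + lam * (gamma * ||S||_1 + tr(L)),
    where ell(K; Sigma) = log det(K) - tr(K Sigma).
    Assumes S - L is positive definite and L is positive semidefinite."""
    K = S - L
    sign, logdet = np.linalg.slogdet(K)
    assert sign > 0, "S - L must be positive definite"
    neg_loglik = -logdet + np.trace(K @ Sigma_O)
    penalty = lam * (gamma * np.abs(S).sum() + np.trace(L))
    return neg_loglik + penalty

# Toy check: with Sigma_O = I, S = 2I, L = I we have K = I, so the
# negative log-likelihood is 0 + tr(I) = 2, and the penalty is
# lam * (gamma * 4 + 2).
Sigma_O = np.eye(2)
val = sdp_objective(2 * np.eye(2), np.eye(2), Sigma_O, lam=0.1, gamma=0.5)
print(val)  # ~ 2.4
```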
Our main result in Section~\ref{sec:main} is a proof of the consistency of the estimator \eqref{eq:sdp} in the high-dimensional regime in which both the number of observed variables and the number of hidden components are allowed to grow with the number of samples (of the observed variables). We show that for a suitable choice of the regularization parameter $\lambda_n$, there exists a range of values of $\gamma$ for which the estimates $(\hat{S}_n,\hat{L}_n)$ have the same sparsity (and sign) pattern and rank as $(K_O,K_{O,H} K_H^{-1} K_{H,O})$ with high probability (see Theorem~\ref{theo:main}). The key technical requirement is an identifiability condition for the two components of the marginal concentration matrix $\tilde{K}_O$ with respect to the Fisher information (see Section~\ref{subsec:fi}). We make connections between our condition and the irrepresentability conditions required for support/graphical-model recovery using $\ell_1$ regularization \cite{ZhaY2006,RavWRY2008}. Our results provide numerous scaling regimes under which consistency holds in latent-variable graphical model selection. For example, we show that under suitable identifiability conditions, consistent model selection is possible even when the number of samples and the number of latent variables are on the same order as the number of observed variables (see Section~\ref{subsec:scal}).
\paragraph{Related previous work} The problem of decomposing the sum of a sparse matrix and a low-rank matrix, with no additional noise, into the individual components was initially studied in \cite{ChaSPW2009} by a superset of the authors of the present paper. Specifically this work proposed a convex program using a combination of the $\ell_1$ norm and the nuclear norm to recover the sparse and low-rank components, and derived conditions under which the convex program exactly recovers these components. In subsequent work Cand\`es et al. \cite{CanLMW2009} also studied this noise-free sparse-plus-low-rank decomposition problem, and provided guarantees for exact recovery using the convex program proposed in \cite{ChaSPW2009}. The problem setup considered in the present paper is quite different and is more challenging because we are only given access to an inexact sample covariance matrix, and we are interested in recovering components that preserve both the sparsity pattern and the rank of the components in the true underlying model. In addition to proving such a consistency result for the estimator \eqref{eq:sdp}, we also provide a statistical interpretation of our identifiability conditions and describe natural classes of latent-variable Gaussian graphical models that satisfy these conditions. As such our paper is closer in spirit to the many recent papers on covariance selection, but with the important difference that some of the variables are not observed.
\paragraph{Outline} Section~\ref{sec:bg} gives some background on graphical models as well as the algebraic varieties of sparse and low-rank matrices. It also provides a formal statement of the problem. Section~\ref{sec:iden} discusses conditions under which latent-variable models are identifiable, and Section~\ref{sec:main} states the main results of this paper. We provide experimental demonstration of the effectiveness of our estimator on synthetic and real data in Section~\ref{sec:sims}. Section~\ref{sec:conc} concludes the paper with a brief discussion. The appendices include additional details and proofs of all of our technical results.
\section{Background and Problem Statement}
\label{sec:bg}
We briefly discuss concepts from graphical modeling and give a formal statement of the latent-variable model selection problem. We also describe various properties of the algebraic varieties of sparse matrices and of low-rank matrices. The following matrix norms are employed throughout this paper:
\begin{itemize}
\item $\|M\|_2$: denotes the spectral norm, which is the largest singular value of $M$.
\item $\|M\|_\infty$: denotes the largest entry in magnitude of $M$.
\item $\|M\|_F$: denotes the Frobenius norm, which is the square-root of the sum of the squares of the entries of $M$.
\item $\|M\|_\ast$: denotes the nuclear norm, which is the sum of the singular values of $M$. This reduces to the trace for positive-semidefinite matrices.
\item $\|M\|_1$: denotes the sum of the absolute values of the entries of $M$.
\end{itemize}
A number of \emph{matrix operator} norms are also used. For example, let $\mathcal{Z}: \mathbb{R}^{p \times p} \rightarrow \mathbb{R}^{p \times p}$ be a linear operator acting on matrices. Then the induced operator norm $\|\mathcal{Z}\|_{q \rightarrow q}$ is defined as:
\begin{equation}
\|\mathcal{Z}\|_{q \rightarrow q} \triangleq \max_{N \in \mathbb{R}^{p \times p}, ~ \|N\|_q \leq 1} ~~~ \|\mathcal{Z}(N)\|_q.
\end{equation}
Therefore, $\|\mathcal{Z}\|_{F \rightarrow F}$ denotes the spectral norm of the matrix operator $\mathcal{Z}$. The only vector norm used is the Euclidean norm, which is denoted by $\| \cdot \|$.
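Note that the entrywise norms above differ from numpy's induced-norm conventions (\texttt{np.linalg.norm(M, 1)} and \texttt{np.linalg.norm(M, np.inf)} return the maximum column and row sums, respectively), so a short sketch computing the norms exactly as defined here may be helpful:

```python
import numpy as np

M = np.array([[1.0, -2.0],
              [3.0,  4.0]])

spectral  = np.linalg.norm(M, 2)      # ||M||_2: largest singular value
entry_inf = np.abs(M).max()           # ||M||_inf as defined here (largest entry);
                                      # NB: np.linalg.norm(M, np.inf) is the max row sum
frobenius = np.linalg.norm(M, 'fro')  # ||M||_F: sqrt of sum of squared entries
nuclear   = np.linalg.norm(M, 'nuc')  # ||M||_*: sum of singular values
entry_l1  = np.abs(M).sum()           # ||M||_1 as defined here (sum of |entries|);
                                      # NB: np.linalg.norm(M, 1) is the max column sum

# For this M: sigma1 * sigma2 = |det M| = 10 and sigma1^2 + sigma2^2 = 30,
# so (sigma1 + sigma2)^2 = 50 and the nuclear norm equals sqrt(50).
print(entry_inf, entry_l1)  # 4.0 10.0
```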
\subsection{Gaussian graphical models with latent variables}
\label{subsec:gm}
A graphical model \cite{Lau1996} is a statistical model defined with respect to a graph $(V,\mathcal{E})$ in which the nodes index a collection of random variables $\{X_v\}_{v \in V}$, and the edges represent the conditional independence relations (Markov structure) among the variables. The absence of an edge between nodes $i,j \in V$ implies that the variables $X_i,X_j$ are independent conditioned on all the other variables. A \emph{Gaussian graphical model} (also commonly referred to as a Gauss-Markov random field) is one in which all the variables are jointly Gaussian \cite{SpeK1986}. In such models the sparsity pattern of the inverse of the covariance matrix, or the \emph{concentration} matrix, directly corresponds to the graphical model structure. Specifically, consider a Gaussian graphical model in which the covariance matrix is given by $\Sigma \succ 0$ and the concentration matrix is given by $K = \Sigma^{-1}$. Then an edge $\{i,j\} \in \mathcal{E}$ is present in the underlying graphical model if and only if $K_{i,j} \neq 0$.
Our focus in this paper is on Gaussian models in which some of the variables may not be observed. Suppose $O$ represents the set of nodes corresponding to observed variables $X_O$, and $H$ the set of nodes corresponding to unobserved, hidden variables $X_H$ with $O \cup H = V$ and $O \cap H = \emptyset$. The joint covariance is denoted by $\Sigma_{(O~H)}$, and the joint concentration matrix by $K_{(O~H)} = \Sigma_{(O~H)}^{-1}$. The submatrix $\Sigma_O$ represents the marginal covariance of the observed variables $X_O$, and the corresponding marginal concentration matrix is given by the Schur complement with respect to the block $K_H$:
\begin{equation}
\tilde{K}_{O} = \Sigma_O^{-1} = K_O - K_{O,H} K_H^{-1} K_{H,O}. \label{eq:schur}
\end{equation}
The submatrix $K_O$ specifies the concentration matrix of the conditional statistics of the observed variables conditioned on the hidden components. If these conditional statistics are given by a sparse graphical model then $K_O$ is sparse. On the other hand the marginal concentration matrix $\tilde{K}_O$ of the marginal distribution of $X_O$ is \emph{not} sparse in general due to the extra correlations induced from marginalization over the latent variables $X_H$, i.e., due to the presence of the additional term $K_{O,H} K_H^{-1} K_{H,O}$. Hence, standard graphical model selection techniques in which the goal is to approximate a sample covariance by a sparse graphical model are not well-suited for problems in which some of the variables are hidden. However, the matrix $K_{O,H} K_H^{-1} K_{H,O}$ is a low-rank matrix if the number of hidden variables is much smaller than the number of observed variables (i.e., $|H| \ll |O|$). Therefore, a more appropriate model selection method is to approximate the sample covariance by a model in which the concentration matrix decomposes into the sum of a sparse matrix and a low-rank matrix. The objective here is to learn a sparse graphical model among the observed variables \emph{conditioned} on some latent variables, as such a model explicitly accounts for the extra correlations induced due to unobserved, hidden components.
\subsection{Problem statement}
\label{subsec:ps}
In order to analyze latent-variable model selection methods, we need to define an appropriate notion of model selection consistency for latent-variable graphical models. Notice that given the two components $K_O$ and $K_{O,H} K_H^{-1} K_{H,O}$ of the concentration matrix of the marginal distribution \eqref{eq:schur}, there are \emph{infinitely} many configurations of the latent variables (i.e., matrices $K_H \succ 0, K_{O,H} = K_{H,O}^T$) that give rise to the \emph{same} low-rank matrix $K_{O,H} K_H^{-1} K_{H,O}$. Specifically for any non-singular matrix $B \in \mathbb{R}^{|H| \times |H|}$, one can apply the transformations $K_H \rightarrow B K_H B^T, K_{O,H} \rightarrow K_{O,H} B^T$ and still preserve the low-rank matrix $K_{O,H} K_H^{-1} K_{H,O}$. In \emph{all} of these models the marginal statistics of the observed variables $X_O$ remain the same upon marginalization over the latent variables $X_H$. The key \emph{invariant} is the low-rank matrix $K_{O,H} K_H^{-1} K_{H,O}$, which \emph{summarizes} the effect of marginalization over the latent variables. These observations give rise to the following notion of consistency:
\begin{DEF}
A pair of (symmetric) matrices $(S,L)$ with $S,L \in \mathbb{R}^{|O| \times |O|}$ is an \emph{algebraically consistent} estimate of a latent-variable Gaussian graphical model given by the concentration matrix $K_{(O~H)}$ if the following conditions hold:
\begin{enumerate}
\item The sign-pattern of $S$ is the same as that of $K_O$:
\begin{equation*}
\mathrm{sign}(S_{i,j}) = \mathrm{sign}((K_O)_{i,j}), ~~~ \forall i,j.
\end{equation*}
Here we assume that $\mathrm{sign}(0) = 0$.
\item The rank of $L$ is the same as the rank of $K_{O,H} K_H^{-1} K_{H,O}$:
\begin{equation*}
\mathrm{rank}(L) = \mathrm{rank}(K_{O,H} K_H^{-1} K_{H,O}).
\end{equation*}
\item The concentration matrix $S-L$ can be realized as the marginal concentration matrix of an appropriate latent-variable model:
\begin{equation*}
S - L \succ 0, ~~~~~ L \succeq 0.
\end{equation*}
\end{enumerate}
\end{DEF}
The first condition ensures that $S$ provides the correct structural estimate of the conditional graphical model (given by $K_O$) of the observed variables conditioned on the hidden components. This property is the same as the ``sparsistency'' property studied in standard graphical model selection \cite{LamF2009,RavWRY2008}. The second condition ensures that the number of hidden components is correctly estimated. Finally, the third condition ensures that the pair of matrices $(S,L)$ leads to a realizable latent-variable model. In particular this condition implies that there exists a valid latent-variable model on $|O \cup H|$ variables in which $(a)$ the conditional graphical model structure among the observed variables is given by $S$, $(b)$ the number of latent variables $|H|$ is equal to the rank of $L$, and $(c)$ the extra correlations induced due to marginalization over the latent variables are equal to $L$. Any method for matrix factorization (see, for example, \cite{WitTH2009}) can be used to factorize the low-rank matrix $L$, depending on the properties that one desires in the factors (e.g., sparsity).
We also study parametric consistency in the usual sense, i.e., we show that one can produce estimates $(S,L)$ that converge in various norms to the matrices $(K_O,K_{O,H} K_H^{-1} K_{H,O})$. Notice that proving $(S,L)$ is close to $(K_O,K_{O,H} K_H^{-1} K_{H,O})$ in some norm does not in general imply that the support/sign-pattern and rank of $(S,L)$ are the same as those of $(K_O,K_{O,H} K_H^{-1} K_{H,O})$. Therefore parametric consistency is different from algebraic consistency, which requires that $(S,L)$ have the same support/sign-pattern and rank as $(K_O,K_{O,H} K_H^{-1} K_{H,O})$.
\paragraph{Goal} Let $K^\ast_{(O ~ H)}$ denote the concentration matrix of a Gaussian model. Suppose that we have $n$ samples $\{X^i_O\}_{i=1}^n$ of the observed variables $X_O$. We would like to produce estimates $(\hat{S}_n,\hat{L}_n)$ that, with high-probability, are both algebraically consistent and parametrically consistent (in some norm).
\subsection{Likelihood function and Fisher information}
\label{subsec:ll}
Given $n$ samples $\{X^i\}_{i=1}^n$ of a finite collection of jointly Gaussian zero-mean random variables with concentration matrix $K^\ast$, we define the sample covariance as follows:
\begin{equation}
\Sigma^n \triangleq \frac{1}{n}\sum_{i=1}^n X^i (X^i)^T.
\end{equation}
It is then easily seen that the log-likelihood function is given by:
\begin{equation}
\ell(K; \Sigma^n) = \log\det(K) - \mathrm{tr}(K \Sigma^n),
\end{equation}
where $\ell(K;\Sigma^n)$ is viewed as a function of $K$. Notice that this function is strictly concave for $K \succ 0$. Now consider the latent-variable modeling problem in which we wish to model a collection of random variables $X_O$ (with sample covariance $\Sigma_O^n$) by adding some extra variables $X_H$. With respect to the parametrization $(S,L)$ (with $S$ representing the conditional statistics of $X_O$ given $X_H$, and $L$ summarizing the effect of marginalization over the additional variables $X_H$), the likelihood function is given by:
\begin{equation*}
\bar{\ell}(S,L;\Sigma^n_O) = \ell(S~-~L; \Sigma_O^n).
\end{equation*}
The function $\bar{\ell}$ is \emph{jointly concave} with respect to the parameters $(S,L)$ whenever $S-L \succ 0$, and it is this function that we use in our variational formulation \eqref{eq:sdp} to learn a latent-variable model.
In the analysis of a convex program involving the likelihood function, the Fisher information plays an important role as it is the negative of the Hessian of the likelihood function and thus controls the curvature. As the first term in the likelihood function is linear, we need only study higher-order derivatives of the log-determinant function in order to compute the Hessian. Letting $\mathcal{I}$ denote the Fisher information matrix, we have that \cite{BoyV2004}
\begin{equation*}
\mathcal{I}(K^\ast) \triangleq - \nabla^2_{K} \log\det(K) |_{K = K^\ast} = (K^\ast)^{-1} \otimes (K^\ast)^{-1},
\end{equation*}
for $K^\ast \succ 0$. If $K^\ast$ is a $p \times p$ concentration matrix, then the Fisher information matrix $\mathcal{I}(K^\ast)$ has dimensions $p^2 \times p^2$. Next consider the latent-variable situation with the variables indexed by $O$ being observed and the variables indexed by $H$ being hidden. The concentration matrix $\tilde{K}^\ast_O = (\Sigma^\ast_O)^{-1}$ of the marginal distribution of the observed variables $O$ is given by the Schur complement \eqref{eq:schur}, and the corresponding Fisher information matrix is given by
\begin{equation*}
\mathcal{I}(\tilde{K}^\ast_O) = (\tilde{K}^\ast_O)^{-1} \otimes (\tilde{K}^\ast_O)^{-1} = \Sigma^\ast_O \otimes \Sigma^\ast_O.
\end{equation*}
Notice that this is precisely the $|O|^2 \times |O|^2$ submatrix of the full Fisher information matrix $\mathcal{I}(K^\ast_{(O~H)}) = \Sigma_{(O~H)}^\ast \otimes \Sigma_{(O~H)}^\ast$ with respect to all the parameters $K^\ast_{(O~H)} = (\Sigma_{(O~H)}^\ast)^{-1}$ (corresponding to the situation in which \emph{all} the variables $X_{O \cup H}$ are observed). The matrix $\mathcal{I}(K^\ast_{(O~H)})$ has dimensions $|O\cup H|^2 \times |O\cup H|^2$, while $\mathcal{I}(\tilde{K}^\ast_O)$ is an $|O|^2 \times |O|^2$ matrix. To summarize, we have for all $i,j,k,l \in O$ that:
\begin{equation*}
\mathcal{I}(\tilde{K}^\ast_O)_{(i,j),(k,l)} = [\Sigma_{(O~H)}^\ast \otimes \Sigma_{(O~H)}^\ast]_{(i,j),(k,l)} = \mathcal{I}(K^\ast_{(O~H)})_{(i,j),(k,l)}.
\end{equation*}
In Section~\ref{subsec:fi} we impose various conditions on the Fisher information matrix $\mathcal{I}(\tilde{K}^\ast_O)$ under which our regularized maximum-likelihood formulation provides consistent estimates with high probability.
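The submatrix relation between the two Fisher information matrices can be checked directly with \texttt{np.kron}; the index convention of the Kronecker product matches the $(i,j),(k,l)$ indexing above when $O$ indexes the leading block (sizes below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
p, m = 5, 3                      # 5 variables total, first m = |O| observed

A = rng.standard_normal((p, p))
Sigma = A @ A.T + p * np.eye(p)  # joint covariance Sigma_(O H)
Sigma_O = Sigma[:m, :m]          # marginal covariance of the observed block

fisher_full = np.kron(Sigma, Sigma)      # I(K_(O H)), a p^2 x p^2 matrix
fisher_marg = np.kron(Sigma_O, Sigma_O)  # I(K~_O),    an m^2 x m^2 matrix

# I(K~_O)_{(i,j),(k,l)} equals the corresponding entry of the full Fisher
# information for all i,j,k,l in O (here O = {0,...,m-1}).
for i in range(m):
    for j in range(m):
        for k in range(m):
            for l in range(m):
                assert np.isclose(fisher_marg[i * m + j, k * m + l],
                                  fisher_full[i * p + j, k * p + l])
print("submatrix identity verified")
```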
\subsection{Algebraic varieties of sparse and low-rank matrices}
\label{subsec:av}
An algebraic variety is the solution set of a system of polynomial equations. The set of sparse matrices and the set of low-rank matrices can be naturally viewed as algebraic varieties. Here we describe these varieties, and discuss some of their properties. Of particular interest in this paper are geometric properties of these varieties such as the tangent space and local curvature at a (smooth) point.
Let $\mathcal{S}(k)$ denote the set of matrices with at most $k$ nonzeros:
\begin{equation}
\mathcal{S}(k) \triangleq \{M \in \mathbb{R}^{p \times p} ~ | ~ |\mathrm{support}(M)| \leq k \}.
\end{equation}
The set $\mathcal{S}(k)$ is an algebraic variety, and can in fact be viewed as a union of ${p^2 \choose k}$ subspaces in $\mathbb{R}^{p \times p}$. This variety has dimension $k$, and it is smooth everywhere except at those matrices that have support size strictly smaller than $k$. For any matrix $M \in \mathbb{R}^{p \times p}$, consider the variety $\mathcal{S}(|\mathrm{support}(M)|)$; $M$ is a smooth point of this variety, and the tangent space at $M$ is given by
\begin{equation}
\Omega(M) = \{N \in \mathbb{R}^{p \times p} ~ | ~ \mathrm{support}(N) \subseteq \mathrm{support}(M) \}.
\end{equation}
In words the tangent space $\Omega(M)$ at a smooth point $M$ is given by the set of all matrices that have support contained within the support of $M$. We view $\Omega(M)$ as a subspace in $\mathbb{R}^{p \times p}$.
Next let $\mathcal{L}(r)$ denote the algebraic variety of matrices with rank at most $r$:
\begin{equation}
\mathcal{L}(r) \triangleq \{M \in \mathbb{R}^{p \times p} ~ | ~ \mathrm{rank}(M) \leq r \}.
\end{equation}
It is easily seen that $\mathcal{L}(r)$ is an algebraic variety because it can be defined through the vanishing of all $(r+1) \times (r+1)$ minors. This variety has dimension equal to $r (2p - r)$, and it is smooth everywhere except at those matrices that have rank strictly smaller than $r$. Consider a rank-$r$ matrix $M$ with singular value decomposition (SVD) given by $M = U D V^T$, where $U,V \in \mathbb{R}^{p \times r}$ and $D \in \mathbb{R}^{r \times r}$. The matrix $M$ is a smooth point of the variety $\mathcal{L}(\mathrm{rank}(M))$, and the tangent space at $M$ with respect to this variety is given by
\begin{equation}
T(M) = \{U Y_1^T + Y_2 V^T ~ | ~ Y_1,Y_2 \in \mathbb{R}^{p \times r}\}.
\end{equation}
In words the tangent space $T(M)$ at a smooth point $M$ is the span of all matrices that have either the same row-space as $M$ or the same column-space as $M$. As with $\Omega(M)$ we view $T(M)$ as a subspace in $\mathbb{R}^{p \times p}$.
In Section~\ref{sec:iden} we explore the connection between geometric properties of these tangent spaces and the identifiability problem in latent-variable graphical models.
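Both tangent spaces admit simple closed-form projections, which the following sketch implements (assuming orthonormal factors $U,V$): $\mathcal{P}_{\Omega(M)}$ restricts the support, and $\mathcal{P}_{T(M)}(N) = P_U N + N P_V - P_U N P_V$.

```python
import numpy as np

def proj_sparse_tangent(N, M):
    """Project N onto Omega(M): keep only entries where M is nonzero."""
    return np.where(M != 0, N, 0.0)

def proj_lowrank_tangent(N, U, V):
    """Project N onto T(M) for M = U D V^T with orthonormal U, V:
    P_T(N) = P_U N + N P_V - P_U N P_V."""
    PU, PV = U @ U.T, V @ V.T
    return PU @ N + N @ PV - PU @ N @ PV

rng = np.random.default_rng(2)
p, r = 6, 2
U, _ = np.linalg.qr(rng.standard_normal((p, r)))
V, _ = np.linalg.qr(rng.standard_normal((p, r)))
M = U @ np.diag([3.0, 2.0]) @ V.T
N = rng.standard_normal((p, p))

PT_N = proj_lowrank_tangent(N, U, V)
# Sanity checks: P_T is idempotent, and M lies in its own tangent space.
assert np.allclose(proj_lowrank_tangent(PT_N, U, V), PT_N)
assert np.allclose(proj_lowrank_tangent(M, U, V), M)
```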
\subsection{Curvature of rank variety}
\label{subsec:crv}
The sparse matrix variety $\mathcal{S}(k)$ has the property that it has \emph{zero} curvature at any smooth point. Consequently the tangent space at a smooth point $M$ is the \emph{same} as the tangent space at any point in a neighborhood of $M$. This property is implicitly used in the analysis of $\ell_1$ regularized methods for recovering sparse models. The situation is more complicated for the low-rank matrix variety, because the curvature at any smooth point is nonzero. Therefore we need to study how the tangent space changes from one point to a neighboring point by analyzing how this variety curves locally. Indeed the amount of curvature at a point is directly related to the ``angle'' between the tangent space at that point and the tangent space at a neighboring point. For any subspace $T$ of matrices, let $\mathcal{P}_{T}$ denote the projection onto $T$. Given two subspaces $T_1,T_2$ of the same dimension, we measure the ``twisting'' between these subspaces by considering the following quantity.
\begin{equation}
\rho(T_1,T_2) \triangleq \|\mathcal{P}_{T_1} - \mathcal{P}_{T_2}\|_{2 \rightarrow 2} = \max_{\|N\|_2 \leq 1} ~ \|[\mathcal{P}_{T_1} - \mathcal{P}_{T_2}] (N)\|_2. \label{eq:rho}
\end{equation}
In Appendix~\ref{app:matper} we briefly review relevant results from matrix perturbation theory; the key tool used to derive these results is the resolvent of a matrix \cite{Kat1995}. Based on these tools we prove the following two results in Appendix~\ref{app:rankcurv}, which bound the twisting between the tangent spaces at nearby points. The first result provides a bound on the quantity $\rho$ between the tangent spaces at a point and at its neighbor.
\begin{PROP}\label{theo:tspace}
Let $M \in \mathbb{R}^{p \times p}$ be a rank-$r$ matrix with smallest nonzero singular value equal to $\sigma$, and let $\Delta$ be a perturbation to $M$ such that $\|\Delta\|_2 \leq \frac{\sigma}{8}$. Further, let $M+\Delta$ be a rank-$r$ matrix. Then we have that
\begin{equation*}
\rho(T(M+\Delta),T(M)) \leq \frac{2}{\sigma} ~ \|\Delta\|_2.
\end{equation*}
\end{PROP}
The next result bounds the error between a point and its neighbor in the normal direction.
\begin{PROP}\label{theo:nspace}
Let $M \in \mathbb{R}^{p \times p}$ be a rank-$r$ matrix with smallest nonzero singular value equal to $\sigma$, and let $\Delta$ be a perturbation to $M$ such that $\|\Delta\|_2 \leq \frac{\sigma}{8}$. Further, let $M+\Delta$ be a rank-$r$ matrix. Then we have that
\begin{equation*}
\|\mathcal{P}_{T(M)^\bot} (\Delta) \|_2 \leq \frac{\|\Delta\|_2^2}{\sigma}.
\end{equation*}
\end{PROP}
These results suggest that the closer the smallest singular value is to zero, the more curved the variety is locally. Therefore we control the twisting between tangent spaces at nearby points by bounding the smallest nonzero singular value away from zero.
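Proposition~\ref{theo:nspace} states that the component of a small rank-preserving perturbation in the normal direction is second order in $\|\Delta\|_2$; the following numerical spot-check uses illustrative sizes and a fixed seed, using the identity $\mathcal{P}_{T(M)^\bot}(\Delta) = (I - P_U)\Delta(I - P_V)$:

```python
import numpy as np

rng = np.random.default_rng(3)
p, r = 8, 2
U, _ = np.linalg.qr(rng.standard_normal((p, r)))
V, _ = np.linalg.qr(rng.standard_normal((p, r)))
sigma = 1.0                       # smallest nonzero singular value of M
M = U @ np.diag([2.0, sigma]) @ V.T

# A nearby rank-r point: perturb the factors slightly so M + Delta stays rank r.
eps = 1e-3
M2 = (U + eps * rng.standard_normal((p, r))) @ np.diag([2.0, sigma]) \
     @ (V + eps * rng.standard_normal((p, r))).T
Delta = M2 - M

# Normal component at M: P_{T(M)^perp}(Delta) = (I - P_U) Delta (I - P_V).
PU, PV = U @ U.T, V @ V.T
normal = (np.eye(p) - PU) @ Delta @ (np.eye(p) - PV)

d2 = np.linalg.norm(Delta, 2)
assert d2 <= sigma / 8            # within the proposition's regime
assert np.linalg.norm(normal, 2) <= d2 ** 2 / sigma
print("second-order normal component confirmed")
```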
\section{Identifiability}
\label{sec:iden}
In the absence of additional conditions, the latent-variable model selection problem is ill-posed. In this section we discuss a set of conditions on latent-variable models that ensure that these models are identifiable given marginal statistics for a subset of the variables.
\subsection{Structure between latent and observed variables}
\label{subsec:lv}
Suppose that the low-rank matrix that summarizes the effect of the hidden components is itself sparse. This leads to identifiability issues in the sparse-plus-low-rank decomposition problem. Statistically the additional correlations induced due to marginalization over the latent variables could be mistaken for the conditional graphical model structure of the observed variables. In order to avoid such identifiability problems the effect of the latent variables must be ``diffuse'' across the observed variables. To address this point the following quantity was introduced in \cite{ChaSPW2009} for any matrix $M$, defined with respect to the tangent space $T(M)$:
\begin{equation}
\xi(T(M)) \triangleq \max_{N \in T(M), ~ \|N\|_2 \leq 1} ~~~ \|N\|_\infty. \label{eq:xi}
\end{equation}
Thus $\xi(T(M))$ being small implies that elements of the tangent
space $T(M)$ cannot have their support concentrated
in a few locations; as a result $M$ cannot be too sparse. This idea is formalized in \cite{ChaSPW2009} by relating $\xi(T(M))$ to a notion of ``incoherence'' of the row/column spaces, where the row/column spaces are said to be incoherent with respect to the standard basis if these spaces are not aligned closely with any of the coordinate axes. Letting $M = U D V^T$ be the singular value decomposition of $M$, the incoherence of the row/column spaces of $M$ (initially proposed and studied by Cand\`es and Recht \cite{CanR2009}) is defined as:
\begin{equation}
\mathrm{inc}(M) \triangleq \max\{\max_i \|P_U(e_i)\|, \max_i \|P_V(e_i)\|\}. \label{eq:inc}
\end{equation}
Here $P_V, P_U$ denote projections\footnote{We denote projections onto vector subspaces (defined by a matrix) by $P$, and projections onto matrix subspaces (defined by a general linear operator) by the calligraphic $\mathcal{P}$.} onto the row/column spaces of $M$, and $e_i$ is the $i$'th standard basis vector. Hence $\mathrm{inc}(M)$ measures the projection of the most ``closely aligned'' coordinate axis with the row/column spaces. For any rank-$r$ matrix $M$ we have that
\begin{equation}
\sqrt{\frac{r}{p}} \leq \mathrm{inc}(M) \leq 1, \label{eq:incineq}
\end{equation}
where the lower bound is achieved (for example) if the row/column spaces span any $r$ columns of a $p \times p$ orthonormal Hadamard matrix, while the upper bound is achieved if the row or column space contains a standard basis vector. Typically a matrix $M$ with incoherent row/column spaces would have $\mathrm{inc}(M) \ll 1$.
The following result (proved in \cite{ChaSPW2009}) shows that the more incoherent the row/column spaces of $M$, the smaller is $\xi(M)$.
\begin{PROP} \label{theo:xiinc}
For any $M \in \mathbb{R}^{p \times p}$, we have that
\begin{equation*}
\mathrm{inc}(M) \leq \xi(T(M)) \leq 2 ~ \mathrm{inc}(M),
\end{equation*}
where $\xi(T(M))$ and $\mathrm{inc}(M)$ are defined in \eqref{eq:xi} and \eqref{eq:inc}.
\end{PROP}
Based on these concepts we roughly require that the low-rank matrix that summarizes the effect of the latent variables be \emph{incoherent}, thereby ensuring that the extra correlations due to marginalization over the hidden components cannot be confused with the conditional graphical model structure of the observed variables. Notice that the quantity $\mathrm{inc}$ is not just a measure of the number of latent variables, but also of the overall effect of the correlations induced by marginalization over these variables.
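Since $\|P_U(e_i)\| = \|U^T e_i\|$ for orthonormal $U$, the incoherence \eqref{eq:inc} reduces to the largest row norm of the singular-vector factors. The sketch below computes it and exhibits matrices achieving the two extremes in \eqref{eq:incineq}:

```python
import numpy as np

def incoherence(M):
    """inc(M) = max projection of a standard basis vector onto the
    row/column spaces of M (eq:inc)."""
    U, s, Vt = np.linalg.svd(M)
    r = (s > 1e-10).sum()
    U, V = U[:, :r], Vt[:r, :].T
    col = np.linalg.norm(U, axis=1).max()  # max_i ||P_U(e_i)||, since
    row = np.linalg.norm(V, axis=1).max()  # ||P_U(e_i)|| = ||U^T e_i|| = ||U[i,:]||
    return max(col, row)

# Rank-1 example achieving the lower bound sqrt(r/p): a column of the
# normalized 4x4 Hadamard matrix (all entries +-1/2).
H = np.array([[1,  1,  1,  1],
              [1, -1,  1, -1],
              [1,  1, -1, -1],
              [1, -1, -1,  1]]) / 2.0
M_flat = np.outer(H[:, 0], H[:, 0])
print(incoherence(M_flat))   # ~ 0.5 = sqrt(1/4)

# Example achieving the upper bound: column space aligned with e_0.
M_spiky = np.zeros((4, 4)); M_spiky[0, 0] = 1.0
print(incoherence(M_spiky))  # ~ 1.0
```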
\textbf{Curvature and change in $\xi$}: As noted previously an important technical point is that the algebraic variety of low-rank matrices is locally curved at any smooth point. Consequently the quantity $\xi$ changes as we move along the low-rank matrix variety smoothly. The quantity $\rho(T_1,T_2)$ introduced in \eqref{eq:rho} also allows us to bound the variation in $\xi$ as follows.
\begin{LEMM}\label{theo:rhotspace}
Let $T_1,T_2$ be two matrix subspaces of the same dimension with the property that $\rho(T_1,T_2) < 1$, where $\rho$ is defined in \eqref{eq:rho}. Then we have that
\begin{equation*}
\xi(T_2) \leq \frac{1}{1-\rho(T_1,T_2)} ~ [\xi(T_1) + \rho(T_1,T_2)].
\end{equation*}
\end{LEMM}
This lemma is proved in Appendix~\ref{app:rankcurv}.
\subsection{Structure among observed variables}
\label{subsec:ov}
An identifiability problem also arises if the conditional graphical model among the observed variables contains a densely connected subgraph. These statistical relationships might be mistaken as correlations induced by marginalization over latent variables. Therefore we need to ensure that the conditional graphical model among the observed variables is sparse. We impose the condition that this conditional graphical model must have small ``degree'', i.e., no observed variable is directly connected to too many other observed variables conditioned on the hidden components. Notice that bounding the degree is a more refined condition than simply bounding the total number of nonzeros as the \emph{sparsity pattern} also plays a role. In \cite{ChaSPW2009} the authors introduced the following quantity in order to provide an appropriate measure of the sparsity pattern of a matrix:
\begin{equation}
\mu(\Omega(M)) \triangleq \max_{N \in \Omega(M), ~ \|N\|_\infty \leq 1} ~~~ \|N\|_2. \label{eq:mu}
\end{equation}
The quantity $\mu(\Omega(M))$ being small for a matrix implies that
the spectrum of any element of the tangent space $\Omega(M)$
is not too ``concentrated'', i.e., the singular values of the elements of the tangent space are not too large. In \cite{ChaSPW2009} it is shown that a sparse matrix $M$ with ``bounded degree'' (a small number of nonzeros per row/column) has small $\mu(\Omega(M))$.
\begin{PROP} \label{theo:mudeg}
Let $M \in \mathbb{R}^{p \times p}$ be any matrix with at most $\mathrm{deg}_{\max}(M)$ nonzero entries per row/column,
and with at least $\mathrm{deg}_{\min}(M)$ nonzero entries per
row/column. With $\mu(\Omega(M))$ as defined in \eqref{eq:mu}, we have that
\begin{equation*}
\mathrm{deg}_{\min}(M) \leq \mu(\Omega(M)) \leq \mathrm{deg}_{\max}(M).
\end{equation*}
\end{PROP}
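As a concrete instance of Proposition~\ref{theo:mudeg}, consider a circulant support with exactly $d$ nonzeros per row and per column. The all-ones matrix on this support lies in $\Omega$ and has unit $\|\cdot\|_\infty$ norm, and its spectral norm is exactly $d$ (it is nonnegative with constant row/column sums, so the Perron eigenvector is the all-ones vector); combined with the upper bound in the proposition this witnesses $\mu(\Omega) = d = \mathrm{deg}_{\min} = \mathrm{deg}_{\max}$. A small numerical check:

```python
import numpy as np

p, d = 8, 3
# Circulant 0/1 support with exactly d nonzeros per row and per column.
support = np.zeros((p, p))
for i in range(p):
    for k in range(d):
        support[i, (i + k) % p] = 1.0

# The all-ones matrix on this support has unit entrywise-infinity norm,
# so its spectral norm lower-bounds mu(Omega); here it equals d exactly.
N = support.copy()
print(np.linalg.norm(N, 2))  # ~ 3.0, i.e., mu(Omega) = 3 = deg_min = deg_max
```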
\subsection{Transversality of tangent spaces}
\label{subsec:tts}
Suppose that we have the sum of two vectors, each from two known subspaces. It is possible to uniquely recover the individual vectors from the sum if and only if the subspaces have a transverse intersection, i.e., they only intersect at the origin. This simple observation leads to an appealing algebraic notion of identifiability. Consider the situation in which we have the sum of a sparse matrix and a low-rank matrix. In addition to this sum, suppose that we are also given the tangent spaces at these matrices with respect to the algebraic varieties of sparse and low-rank matrices respectively. Then a necessary and sufficient condition for \emph{local} identifiability is that these tangent spaces have a transverse intersection. It turns out that these transversality conditions on the tangent spaces are also sufficient for the regularized maximum-likelihood convex program \eqref{eq:sdp} to provide consistent estimates of the number of hidden components and the conditional graphical model structure of the observed variables conditioned on the latent variables (without any side information about the tangent spaces).
In order to quantify the level of transversality between the tangent spaces $\Omega$ and $T$ we study the \emph{minimum gain} with respect to some norm of the addition operator restricted to the Cartesian product $\mathcal{Y} = \Omega \times T$. More concretely, let $\mathcal{A}: \mathbb{R}^{p \times p} \times \mathbb{R}^{p \times p} \rightarrow \mathbb{R}^{p \times p}$ represent the addition operator, i.e., the operator that adds two matrices. Then given any matrix norm $\|\cdot\|_q$ on $\mathbb{R}^{p \times p} \times \mathbb{R}^{p \times p}$, the minimum gain of $\mathcal{A}$ restricted to $\mathcal{Y}$ is defined as follows:
\begin{equation*}
\epsilon(\Omega,T,\|\cdot\|_q) \triangleq \min_{(S,L) \in \Omega \times T, ~ \|(S,L)\|_q = 1} ~~~ \|\mathcal{P}_\mathcal{Y} \mathcal{A}^\dag \mathcal{A} \mathcal{P}_\mathcal{Y}(S,L)\|_q,
\end{equation*}
where $\mathcal{P}_\mathcal{Y}$ denotes the projection onto the space $\mathcal{Y}$, and $\mathcal{A}^\dag$ denotes the adjoint of the addition operator (with respect to the standard Euclidean inner-product). The tangent spaces $\Omega$ and $T$ have a \emph{transverse} intersection if and only if $\epsilon(\Omega,T,\|\cdot\|_q) > 0$. The ``level'' of transversality is measured by the magnitude of $\epsilon(\Omega,T,\|\cdot\|_q)$. Note that if the norm $\|\cdot\|_q$ used is the Frobenius norm, then $\epsilon(\Omega,T,\|\cdot\|_F)$ is the square of the \emph{minimum singular value} of the addition operator $\mathcal{A}$ restricted to $\Omega \times T$.
A natural norm with which to measure transversality is the dual norm of the regularization function in \eqref{eq:sdp}, as the subdifferential of the regularization function is specified in terms of its dual. The reasons for this will become clearer as we proceed through this paper. Recall that the regularization function used in the variational formulation \eqref{eq:sdp} is given by:
\begin{equation*}
f_\gamma(S,L) = \gamma \|S\|_1 + \|L\|_\ast,
\end{equation*}
where the nuclear norm $\|\cdot\|_\ast$ reduces to the trace function over the cone of positive-semidefinite matrices. This function is a norm for all $\gamma > 0$. The dual norm of $f_\gamma$ is given by
\begin{equation*}
g_\gamma(S,L) = \max\left\{\frac{\|S\|_\infty}{\gamma}, \|L\|_2 \right\}.
\end{equation*}
The following simple lemma records a useful property of the $g_\gamma$ norm that is used several times throughout this paper.
\begin{LEMM}\label{theo:gg}
Let $\Omega$ and $T$ be tangent spaces at any points with respect to the algebraic varieties of sparse and low-rank matrices. Then for any matrix $M$, we have that $\|\mathcal{P}_\Omega(M)\|_\infty \leq \|M\|_\infty$ and that $\|\mathcal{P}_T(M)\|_2 \leq 2\|M\|_2$. Further we also have that $\|\mathcal{P}_{\Omega^\bot}(M)\|_\infty \leq \|M\|_\infty$ and that $\|\mathcal{P}_{T^\bot}(M)\|_2 \leq \|M\|_2$. Thus for any matrices $M,N$ and for $\mathcal{Y} = \Omega \times T$, one can check that $g_\gamma(\mathcal{P}_\mathcal{Y}(M,N)) \leq 2 g_\gamma(M,N)$ and that $g_\gamma(\mathcal{P}_{\mathcal{Y}^\bot}(M,N)) \leq g_\gamma(M,N)$.
\end{LEMM}
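The bounds in Lemma~\ref{theo:gg} are easy to spot-check numerically. The following sketch (our own illustration; the support pattern and column space are arbitrary) verifies the four projection-norm inequalities on random matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
p = 8

# Projection onto Omega: restriction to a fixed symmetric support pattern.
support = rng.random((p, p)) < 0.2
support = support | support.T
P_Omega = lambda M: np.where(support, M, 0.0)

# Projection onto T: tangent space at a rank-2 matrix with column space U.
U = np.linalg.qr(rng.standard_normal((p, 2)))[0]
PU = U @ U.T
P_T = lambda M: PU @ M + M @ PU - PU @ M @ PU

spec = lambda M: np.linalg.norm(M, 2)   # spectral norm
linf = lambda M: np.abs(M).max()        # entrywise l_infinity norm

for _ in range(100):
    M = rng.standard_normal((p, p))
    assert linf(P_Omega(M)) <= linf(M) + 1e-12          # ||P_Omega(M)||_inf <= ||M||_inf
    assert linf(M - P_Omega(M)) <= linf(M) + 1e-12      # same for P_{Omega^perp}
    assert spec(P_T(M)) <= 2 * spec(M) + 1e-12          # ||P_T(M)||_2 <= 2 ||M||_2
    assert spec(M - P_T(M)) <= spec(M) + 1e-12          # ||P_{T^perp}(M)||_2 <= ||M||_2
print("all bounds of the lemma verified on random matrices")
```

Note that $\mathcal{P}_{T^\bot}(M) = (I - P_U) M (I - P_U)$, which is why the last bound holds with constant $1$.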
Next we define the quantity $\chi(\Omega,T,\gamma)$ as follows in order to study the transversality of the spaces $\Omega$ and $T$ with respect to the $g_\gamma$ norm:
\begin{equation}
\chi(\Omega,T,\gamma) \triangleq \max\left\{\frac{\xi(T)}{\gamma}, 2 \mu(\Omega) \gamma \right\} \label{eq:chi}
\end{equation}
Here $\mu$ and $\xi$ are defined in \eqref{eq:mu} and \eqref{eq:xi}. We then have the following result (proved in Appendix~\ref{app:rg}):
\begin{LEMM}\label{theo:rg1}
Let $S \in \Omega$ and $L \in T$ be matrices such that $\|S\|_{\infty} = \gamma$ and $\|L\|_2 = 1$. Then we have that $g_\gamma(\mathcal{P}_{\mathcal{Y}} \mathcal{A}^\dag \mathcal{A} \mathcal{P}_\mathcal{Y}(S,L)) \in [1-\chi(\Omega,T,\gamma),1+\chi(\Omega,T,\gamma)]$, where $\mathcal{Y} = \Omega \times T$ and $\chi(\Omega,T,\gamma)$ is defined in \eqref{eq:chi}. In particular we have that $1-\chi(\Omega,T,\gamma) \leq \epsilon(\Omega,T,g_\gamma)$.
\end{LEMM}
The quantity $\chi(\Omega,T,\gamma)$ being small implies that the addition operator is essentially isometric when restricted to $\mathcal{Y} = \Omega \times T$. Stated differently, the magnitude of $\chi(\Omega,T,\gamma)$ is a measure of the level of transversality of the spaces $\Omega$ and $T$. If $\mu(\Omega) \xi(T) < \frac{1}{2}$, then any $\gamma \in (\xi(T), \frac{1}{2 \mu(\Omega)})$ ensures that $\chi(\Omega,T,\gamma) < 1$, which in turn implies that the tangent spaces $\Omega$ and $T$ have a transverse intersection.
\textbf{Observation}: The smaller the quantities $\mu(\Omega)$ and $\xi(T)$, the more transverse the intersection of the spaces $\Omega$ and $T$.
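A short numerical check of the range of valid tradeoff parameters (the values of $\mu$ and $\xi$ below are hypothetical, chosen only to illustrate the regime $\mu(\Omega)\xi(T) < \tfrac{1}{2}$):

```python
import numpy as np

# Hypothetical values for illustration only: mu(Omega) and xi(T).
mu, xi = 0.3, 0.5
assert mu * xi < 0.5   # the identifiability regime of the text

# Any gamma strictly inside (xi, 1/(2 mu)) should give chi < 1.
gamma_lo, gamma_hi = xi, 1.0 / (2 * mu)
gammas = np.linspace(gamma_lo * 1.01, gamma_hi * 0.99, 50)
chi = np.maximum(xi / gammas, 2 * mu * gammas)   # eq. (chi)
print("max chi over the interior of the range:", chi.max())
```

As expected, $\chi$ stays strictly below $1$ throughout the interior of the interval, certifying transversality for every such $\gamma$.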
\subsection{Conditions on Fisher information}
\label{subsec:fi}
The main focus of Section~\ref{sec:main} is to analyze the regularized maximum-likelihood convex program \eqref{eq:sdp} by studying its optimality conditions. The log-likelihood function is well-approximated in a neighborhood by a quadratic form given by the Fisher information (which measures the curvature, as discussed in Section~\ref{subsec:ll}). Let $\mathcal{I}^\ast = \mathcal{I}(\tilde{K}^\ast_O)$ denote the Fisher information evaluated at the true marginal concentration matrix $\tilde{K}^\ast_O = K^\ast_O - K^\ast_{O,H} (K^\ast_{H})^{-1} K^\ast_{H,O}$, where $K^\ast_{(O~H)}$ represents the concentration matrix of the full model (see equation \eqref{eq:schur}). Transversality between the tangent spaces\footnote{We implicitly assume that these tangent spaces are subspaces of the space of \emph{symmetric} matrices.} $\Omega = \Omega(K_O^\ast)$ and $T = T(K^\ast_{O,H} (K^\ast_{H})^{-1} K^\ast_{H,O})$ must then be measured with respect to the inner-product induced by $\mathcal{I}^\ast$. Specifically, we need to analyze the minimum gain of the operator $\mathcal{P}_{\mathcal{Y}} \mathcal{A}^\dag \mathcal{I}^\ast \mathcal{A} \mathcal{P}_{\mathcal{Y}}$ restricted to the space $\mathcal{Y} = \Omega \times T$. Therefore we impose several conditions on the Fisher information $\mathcal{I}^\ast$. We define quantities that control the gains of $\mathcal{I}^\ast$ restricted to $\Omega$ and $T$ separately; these ensure that elements of $\Omega$ and elements of $T$ are individually identifiable under the map $\mathcal{I}^\ast$. In addition we define quantities that, in conjunction with bounds on $\mu(\Omega)$ and $\xi(T)$, allow us to control the gain of $\mathcal{I}^\ast$ restricted to the direct sum $\Omega \oplus T$.
\textbf{$\mathcal{I}^\ast$ restricted to $\Omega$}: The minimum gain of the operator $\mathcal{P}_{\Omega} \mathcal{I}^\ast \mathcal{P}_{\Omega}$ restricted to $\Omega$ is given by
\begin{equation*}
\alpha_{\Omega} \triangleq \min_{M \in \Omega, \|M\|_\infty = 1} ~~ \|\mathcal{P}_{\Omega} \mathcal{I}^\ast \mathcal{P}_{\Omega}(M)\|_\infty.
\end{equation*}
The maximum effect of elements in $\Omega$ in the orthogonal direction $\Omega^\bot$ is given by
\begin{equation*}
\delta_{\Omega} \triangleq \max_{M \in \Omega, \|M\|_\infty = 1} ~~ \|\mathcal{P}_{\Omega^\bot} \mathcal{I}^\ast \mathcal{P}_{\Omega}(M)\|_\infty.
\end{equation*}
The operator $\mathcal{I}^\ast$ is injective on $\Omega$ if $\alpha_\Omega > 0$. A bound of the form $\frac{\delta_\Omega}{\alpha_\Omega} \leq 1 - \nu$ for some $\nu \in (0,1)$ corresponds to the irrepresentability condition imposed in \cite{RavWRY2008}, which is sufficient for consistent recovery of graphical model structure using $\ell_1$-regularized maximum-likelihood. Notice that this condition is a generalization of the usual Lasso irrepresentability conditions \cite{ZhaY2006}, which are typically imposed on the covariance matrix. Finally, we also consider the following quantity, which controls the behavior of $\mathcal{I}^\ast$ restricted to $\Omega$ in the spectral norm:
\begin{equation*}
\beta_\Omega \triangleq \max_{M \in \Omega, \|M\|_2 = 1} ~~ \|\mathcal{I}^\ast (M)\|_2.
\end{equation*}
\textbf{$\mathcal{I}^\ast$ restricted to $T$}: Analogous to the case of $\Omega$, one could control the gains of the operators $\mathcal{P}_{T^\bot} \mathcal{I}^\ast \mathcal{P}_T$ and $\mathcal{P}_T \mathcal{I}^\ast \mathcal{P}_T$. However, as discussed previously, one complication is that the tangent spaces at nearby smooth points on the rank variety are in general different, and the amount of twisting between these spaces is governed by the local curvature. Therefore we control the gains of the operators $\mathcal{P}_{T'^\bot} \mathcal{I}^\ast \mathcal{P}_{T'}$ and $\mathcal{P}_{T'} \mathcal{I}^\ast \mathcal{P}_{T'}$ for all tangent spaces $T'$ that are ``close to'' the nominal $T$ (at the true underlying low-rank matrix), as measured by $\rho(T,T')$ \eqref{eq:rho} being small. The minimum gain of the operator $\mathcal{P}_{T'} \mathcal{I}^\ast \mathcal{P}_{T'}$ restricted to $T'$ (close to $T$) is given by
\begin{equation*}
\alpha_{T} \triangleq \min_{\rho(T',T) \leq \frac{\xi(T)}{2}} ~ \min_{M \in T', \|M\|_2 = 1} ~~ \|\mathcal{P}_{T'} \mathcal{I}^\ast \mathcal{P}_{T'}(M)\|_2.
\end{equation*}
Similarly the maximum effect of elements in $T'$ in the orthogonal direction $T'^\bot$ (for $T'$ close to $T$) is given by
\begin{equation*}
\delta_{T} \triangleq \max_{\rho(T',T) \leq \frac{\xi(T)}{2}} ~ \max_{M \in T', \|M\|_2 = 1} ~~ \|\mathcal{P}_{T'^\bot} \mathcal{I}^\ast \mathcal{P}_{T'}(M)\|_2.
\end{equation*}
Implicit in the definition of $\alpha_T$ and $\delta_T$ is the fact that the outer minimum and maximum are only taken over spaces $T'$ that are tangent spaces to the rank-variety. The operator $\mathcal{I}^\ast$ is injective on all tangent spaces $T'$ such that $\rho(T',T) \leq \frac{\xi(T)}{2}$ if $\alpha_T > 0$. An irrepresentability condition (analogous to those developed for the sparse case) for tangent spaces near $T$ to the rank variety would be that $\frac{\delta_T}{\alpha_T} \leq 1 - \nu$. Finally we also control the behavior of $\mathcal{I}^\ast$ restricted to $T'$ close to $T$ in the $\ell_\infty$ norm:
\begin{equation*}
\beta_T \triangleq \max_{\rho(T',T) \leq \frac{\xi(T)}{2}} ~ \max_{M \in T', \|M\|_\infty = 1} ~~ \|\mathcal{I}^\ast (M)\|_\infty.
\end{equation*}
The two sets of quantities $(\alpha_\Omega,\delta_\Omega)$ and $(\alpha_T,\delta_T)$ essentially control how $\mathcal{I}^\ast$ behaves when restricted to the spaces $\Omega$ and $T$ \emph{separately} (in the natural norms). The quantities $\beta_\Omega$ and $\beta_T$ are useful in order to control the gains of the operator $\mathcal{I}^\ast$ restricted to the \emph{direct sum} $\Omega \oplus T$. Notice that although the magnitudes of elements in $\Omega$ are measured most naturally in the $\ell_\infty$ norm, the quantity $\beta_\Omega$ is specified with respect to the spectral norm. Similarly elements of the tangent spaces $T'$ to the rank variety are most naturally measured in the spectral norm, but $\beta_T$ provides control in the $\ell_\infty$ norm. These quantities, combined with $\mu(\Omega)$ and $\xi(T)$ (defined in \eqref{eq:mu} and \eqref{eq:xi}), provide the ``coupling'' necessary to control the behavior of $\mathcal{I}^\ast$ restricted to elements in the direct sum $\Omega \oplus T$. In order to keep track of fewer quantities, we summarize the six quantities as follows:
\begin{eqnarray*}
\alpha &\triangleq& \min(\alpha_\Omega, \alpha_T) \\ \delta &\triangleq& \max(\delta_\Omega, \delta_T) \\ \beta &\triangleq& \max(\beta_\Omega, \beta_T).
\end{eqnarray*}
\noindent \textbf{Main assumption}: There exists a $\nu \in (0, \frac{1}{2}]$ such that:
\begin{equation*}
\frac{\delta}{\alpha} \leq 1 - 2 \nu.
\end{equation*}
This assumption is to be viewed as a generalization of the irrepresentability conditions imposed on the covariance matrix \cite{ZhaY2006} or the Fisher information matrix \cite{RavWRY2008} in order to provide consistency guarantees for sparse model selection using the $\ell_1$ norm. With this assumption we have the following proposition, proved in Appendix~\ref{app:rg}, about the gains of the operator $\mathcal{I}^\ast$ restricted to $\Omega \oplus T$. This proposition plays a fundamental role in the analysis of the performance of the regularized maximum-likelihood procedure \eqref{eq:sdp}.
\begin{PROP}\label{theo:irr}
Let $\Omega$ and $T$ be the tangent spaces defined in this section, and let $\mathcal{I}^\ast$ be the Fisher information evaluated at the true marginal concentration matrix. Further let $\alpha,\beta,\nu$ be as defined above. Suppose that
\begin{equation*}
\mu(\Omega) \xi(T) \leq \frac{1}{6} \left(\frac{\nu \alpha}{\beta (2 - \nu)}\right)^2,
\end{equation*}
and that $\gamma$ is in the following range:
\begin{equation*}
\gamma \in \left[\frac{3 \beta (2-\nu) \xi(T)}{\nu \alpha}, \frac{\nu \alpha}{2 \beta (2-\nu) \mu(\Omega)} \right].
\end{equation*}
Then we have the following two conclusions for $\mathcal{Y} = \Omega \times T'$ with $\rho(T', T) \leq \frac{\xi(T)}{2}$:
\begin{enumerate}
\item The minimum gain of $\mathcal{I}^\ast$ restricted to $\Omega \oplus T'$ is bounded below:
\begin{eqnarray*}
\min_{(S,L) \in \mathcal{Y}, ~ \|S\|_\infty = \gamma, ~ \|L\|_2 = 1} ~ g_\gamma(\mathcal{P}_\mathcal{Y} \mathcal{A}^\dag \mathcal{I}^\ast \mathcal{A} \mathcal{P}_\mathcal{Y}(S,L)) &\geq& \frac{\alpha}{2}.
\end{eqnarray*}
Specifically this implies that for all $(S,L) \in \mathcal{Y}$
\begin{equation*}
g_\gamma(\mathcal{P}_\mathcal{Y} \mathcal{A}^\dag \mathcal{I}^\ast \mathcal{A} \mathcal{P}_\mathcal{Y}(S,L)) \geq \frac{\alpha}{2} g_\gamma(S,L).
\end{equation*}
\item The effect of elements in $\mathcal{Y} = \Omega \times T'$ on the orthogonal complement $\mathcal{Y}^\bot = \Omega^\bot \times T'^\bot$ is bounded above:
\begin{equation*}
\left\|\mathcal{P}_{\mathcal{Y}^\bot} \mathcal{A}^\dag \mathcal{I}^\ast \mathcal{A} \mathcal{P}_\mathcal{Y} \left(\mathcal{P}_\mathcal{Y} \mathcal{A}^\dag \mathcal{I}^\ast \mathcal{A} \mathcal{P}_\mathcal{Y} \right)^{-1} \right\|_{g_\gamma \rightarrow g_\gamma} \leq 1-\nu.
\end{equation*}
Specifically this implies that for all $(S,L) \in \mathcal{Y}$
\begin{equation*}
g_\gamma(\mathcal{P}_{\mathcal{Y}^\bot} \mathcal{A}^\dag \mathcal{I}^\ast \mathcal{A} \mathcal{P}_\mathcal{Y} (S,L)) \leq (1-\nu) g_\gamma(\mathcal{P}_\mathcal{Y} \mathcal{A}^\dag \mathcal{I}^\ast \mathcal{A} \mathcal{P}_\mathcal{Y}(S,L)).
\end{equation*}
\end{enumerate}
\end{PROP}
The last quantity we consider is the spectral norm of the marginal covariance matrix $\Sigma^\ast_O = (\tilde{K}^\ast_O)^{-1}$:
\begin{equation}
\psi \triangleq \|\Sigma^\ast_O\|_2 = \|(\tilde{K}^\ast_O)^{-1}\|_2. \label{eq:psi}
\end{equation}
A bound on $\psi$ is useful in the probabilistic component of our analysis, in order to derive convergence rates of the sample covariance matrix to the true covariance matrix. We also observe that
\begin{equation*}
\|\mathcal{I}^\ast\|_{2 \rightarrow 2} = \|(\tilde{K}^\ast_O)^{-1} \otimes (\tilde{K}^\ast_O)^{-1}\|_{2 \rightarrow 2} = \psi^2.
\end{equation*}
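The identity $\|\mathcal{I}^\ast\|_{2 \rightarrow 2} = \psi^2$ is straightforward to verify numerically. In the following sketch (our own toy example), `K` is an arbitrary positive definite matrix standing in for $\tilde{K}^\ast_O$, and the Fisher information is formed explicitly as the Kronecker product $(\tilde{K}^\ast_O)^{-1} \otimes (\tilde{K}^\ast_O)^{-1}$:

```python
import numpy as np

rng = np.random.default_rng(2)
p = 5

# A random positive definite stand-in for the marginal concentration matrix.
A = rng.standard_normal((p, p))
K = A @ A.T + p * np.eye(p)

Sigma = np.linalg.inv(K)             # marginal covariance Sigma^*_O
psi = np.linalg.norm(Sigma, 2)       # psi = ||Sigma^*_O||_2

# Fisher information at K: Sigma (x) Sigma; its spectral norm equals psi^2.
I_star = np.kron(Sigma, Sigma)
print(np.linalg.norm(I_star, 2), psi ** 2)
```

The two printed values agree, since the eigenvalues of a Kronecker product are products of the factors' eigenvalues.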
\section{Regularized Maximum-Likelihood Convex Program and Consistency}
\label{sec:main}
\subsection{Setup}
\label{subsec:setup}
Let $K^\ast_{(O~H)}$ denote the full concentration matrix of a collection of zero-mean jointly-Gaussian observed and latent variables, let $p = |O|$ denote the number of observed variables, and let $h = |H|$ denote the number of latent variables. We are given $n$ samples $\{X_O^i\}_{i=1}^n$ of the observed variables $X_O$. We consider the high-dimensional setting in which $(p,h,n)$ are all allowed to grow simultaneously. The quantities $\alpha,\beta,\nu,\psi$ defined in the previous section are accounted for in our analysis, although we suppress the dependence on these quantities in the statement of our main result. We explicitly keep track of the quantities $\mu(\Omega(K^\ast_O))$ and $\xi(T(K^\ast_{O,H} (K^\ast_{H})^{-1} K^\ast_{H,O}))$ as these control the complexity of the latent-variable model given by $K^\ast_{(O~H)}$. In particular $\mu$ controls the sparsity of the conditional graphical model among the observed variables, while $\xi$ controls the incoherence or ``diffusivity'' of the extra correlations induced due to marginalization over the hidden variables. Based on the tradeoff between these two quantities, we obtain a number of classes of latent-variable graphical models (and corresponding scalings of $(p,h,n)$) that can be consistently recovered using the regularized maximum-likelihood convex program \eqref{eq:sdp} (see Section~\ref{subsec:scal} for details). Specifically we show that consistent model selection is possible even when the number of samples and the number of latent variables are on the same order as the number of observed variables. We present our main result next demonstrating the consistency of the estimator \eqref{eq:sdp}, and then discuss classes of latent-variable graphical models and various scaling regimes in which our estimator is consistent.
\subsection{Main results}
\label{subsec:mainres}
Given $n$ samples $\{X_O^i\}_{i=1}^n$ of the observed variables $X_O$, the sample covariance is defined as:
\begin{equation*}
\Sigma^n_O = \frac{1}{n} \sum_{i=1}^n X^i_O (X^i_O)^T.
\end{equation*}
As discussed in Section~\ref{subsec:ps} the goal is to produce an estimate given by a pair of matrices $(S,L)$ of the latent-variable model represented by $K^\ast_{(O~H)}$. We study the consistency properties of the following regularized maximum-likelihood convex program:
\begin{equation}
\begin{aligned}
(\hat{S}_n,\hat{L}_n) = \arg \min_{S,L} & ~ \mathrm{tr}[(S-L)~\Sigma^n_O] - \log\det(S-L) ~ + ~ \lambda_n [\gamma \|S\|_{1} +
\mathrm{tr}(L)] \\ \mbox{s.t.} & ~~~ S-L \succ 0, ~~ L \succeq 0.
\end{aligned}
\label{eq:sdp1}
\end{equation}
Here $\lambda_n$ is a regularization parameter, and $\gamma$ is a tradeoff parameter between the rank and sparsity terms. Notice from Proposition~\ref{theo:irr} that the choice of $\gamma$ depends on the values of $\mu(\Omega(K^\ast_O))$ and $\xi(T(K^\ast_{O,H} (K^\ast_{H})^{-1} K^\ast_{H,O}))$; essentially these quantities correspond to the degree of the conditional graphical model structure of the observed variables and the incoherence of the low-rank matrix summarizing the effect of the latent variables (see Section~\ref{sec:iden}). While these quantities may not be known \emph{a priori}, we discuss a method to choose $\gamma$ numerically in our experimental results (see Section~\ref{sec:sims}). The following theorem shows that the estimates $(\hat{S}_n, \hat{L}_n)$ provided by the convex program \eqref{eq:sdp1} are consistent for a suitable choice of $\lambda_n$. In addition to the appropriate identifiability conditions (as specified by Proposition~\ref{theo:irr}), we also impose lower bounds on the minimum nonzero entry of the sparse conditional graphical model matrix $K_O^\ast$ and on the minimum nonzero singular value of the low-rank matrix $K^\ast_{O,H} (K^\ast_{H})^{-1} K^\ast_{H,O}$ summarizing the effect of the hidden variables. We suppress the dependence\footnote{We use the notation $a \gtrsim b$ if there exists a function $r(\alpha,\beta,\nu,\psi)$ such that $a \geq r(\alpha,\beta,\nu,\psi) b$. Similarly we use the notation $a \asymp b$ if there exists a function $r(\alpha,\beta,\nu,\psi)$ such that $a = r(\alpha,\beta,\nu,\psi) b$.} on $\alpha,\beta,\nu,\psi$ as we assume that these quantities remain bounded and do not scale with the other parameters. We emphasize the dependence on $\mu(\Omega(K^\ast_O))$ and $\xi(T(K^\ast_{O,H} (K^\ast_{H})^{-1} K^\ast_{H,O}))$ because these control the complexity of the underlying latent-variable graphical model as discussed above.
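To make the estimator \eqref{eq:sdp1} concrete, the following is a rough proximal-gradient sketch: gradient steps on the smooth term $\mathrm{tr}[(S-L)\Sigma^n_O] - \log\det(S-L)$ alternate with entrywise soft-thresholding (the proximal operator of the $\ell_1$ term) and eigenvalue shrinkage over the positive semidefinite cone (the proximal operator of the trace term). This is only a toy solver under assumed step sizes and is not the algorithm used in the paper; `latent_glasso` and all parameter values are our own:

```python
import numpy as np

def soft(M, t):
    """Entrywise soft-thresholding: prox of t * ||.||_1."""
    return np.sign(M) * np.maximum(np.abs(M) - t, 0.0)

def psd_shrink(M, t):
    """Prox of t * tr(L) + indicator{L >= 0}: shrink eigenvalues by t, clip at 0."""
    w, V = np.linalg.eigh((M + M.T) / 2)
    return (V * np.maximum(w - t, 0.0)) @ V.T

def latent_glasso(Sigma_n, lam, gamma, step=0.05, iters=500):
    """Toy proximal-gradient scheme for the regularized ML program."""
    p = Sigma_n.shape[0]
    S, L = 2.0 * np.eye(p), np.zeros((p, p))
    for _ in range(iters):
        G = Sigma_n - np.linalg.inv(S - L)   # gradient of smooth term w.r.t. S
        S_new = soft(S - step * G, step * lam * gamma)
        L_new = psd_shrink(L + step * G, step * lam)   # gradient w.r.t. L is -G
        if np.linalg.eigvalsh(S_new - L_new).min() > 1e-8:
            S, L = S_new, L_new              # accept only iterates with S - L > 0
        else:
            step *= 0.5
    return S, L

# Synthetic data from a simple sparse concentration matrix (no latent part).
rng = np.random.default_rng(3)
p, n = 10, 2000
K = 2.0 * np.eye(p)
K[0, 1] = K[1, 0] = 0.5
K[2, 3] = K[3, 2] = 0.5
X = rng.multivariate_normal(np.zeros(p), np.linalg.inv(K), size=n)
Sigma_n = X.T @ X / n
S_hat, L_hat = latent_glasso(Sigma_n, lam=0.1, gamma=1.0)
print("S - L positive definite:", np.linalg.eigvalsh(S_hat - L_hat).min() > 0)
```

By construction the iterates respect both constraints of \eqref{eq:sdp1}: $\hat{L}_n \succeq 0$ because `psd_shrink` projects onto the cone, and $\hat{S}_n - \hat{L}_n \succ 0$ because only feasible updates are accepted.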
\begin{THEO} \label{theo:main}
Let $K^\ast_{(O ~ H)}$ denote the concentration matrix of a Gaussian model. We have $n$ samples $\{X^i_O\}_{i=1}^n$ of the $p$ observed variables denoted by $O$. Let $\Omega = \Omega(K^\ast_O)$ and $T = T(K^\ast_{O,H} (K^\ast_H)^{-1} K^\ast_{H,O})$ denote the tangent spaces at $K^\ast_O$ and at $K^\ast_{O,H} (K^\ast_H)^{-1} K^\ast_{H,O}$ with respect to the sparse and low-rank matrix varieties respectively.
\textbf{Assumptions}: Suppose that the following conditions hold:
\begin{enumerate}
\item The quantities $\mu(\Omega)$ and $\xi(T)$ satisfy the assumption of Proposition~\ref{theo:irr} for identifiability, and $\gamma$ is chosen in the range specified by Proposition~\ref{theo:irr}.
\item The number of samples $n$ available is such that
\begin{equation*}
n \gtrsim \frac{p}{\xi(T)^4}.
\end{equation*}
\item The regularization parameter $\lambda_n$ is chosen as
\begin{equation*}
\lambda_n \asymp \frac{1}{\xi(T)} \sqrt{\frac{p}{n}}.
\end{equation*}
\item The minimum nonzero singular value $\sigma$ of $K^\ast_{O,H} (K^\ast_H)^{-1} K^\ast_{H,O}$ is bounded as
\begin{equation*}
\sigma \gtrsim \frac{1}{\xi(T)^3} \sqrt{\frac{p}{n}}.
\end{equation*}
\item The minimum magnitude nonzero entry $\theta$ of $K^\ast_O$ is bounded as
\begin{equation*}
\theta \gtrsim \frac{1}{\xi(T) \mu(\Omega)} \sqrt{\frac{p}{n}}.
\end{equation*}
\end{enumerate}
\textbf{Conclusions}: Then with probability greater than $1 - 2\exp\{-p\}$ we have:
\begin{enumerate}
\item \emph{Algebraic consistency}: The estimate $(\hat{S}_n,\hat{L}_n)$ given by the convex program \eqref{eq:sdp1} is algebraically consistent, i.e., the support and sign pattern of $\hat{S}_n$ are the same as those of $K^\ast_O$, and the rank of $\hat{L}_n$ is the same as that of $K^\ast_{O,H} (K^\ast_H)^{-1} K^\ast_{H,O}$.
\item \emph{Parametric consistency}: The estimate $(\hat{S}_n,\hat{L}_n)$ given by the convex program \eqref{eq:sdp1} is parametrically consistent:
\begin{equation*}
g_\gamma(\hat{S}_n-K^\ast_O, \hat{L}_n - K^\ast_{O,H} (K^\ast_H)^{-1} K^\ast_{H,O}) \lesssim \frac{1}{\xi(T)} \sqrt{\frac{p}{n}}.
\end{equation*}
\end{enumerate}
\end{THEO}
The proof of this theorem is given in Appendix~\ref{app:main}. The theorem essentially states that if the minimum nonzero singular value of the low-rank piece $K^\ast_{O,H} (K^\ast_{H})^{-1} K^\ast_{H,O}$ and minimum nonzero entry of the sparse piece $K^\ast_O$ are bounded away from zero, then the convex program \eqref{eq:sdp1} provides estimates that are both algebraically consistent and parametrically consistent (in the $\ell_\infty$ and spectral norms). In Section~\ref{subsec:cov} we also show that these results easily lead to parametric consistency rates for the corresponding estimate $(\hat{S}_n-\hat{L}_n)^{-1}$ of the marginal covariance $\Sigma_O^\ast$ of the observed variables.
Notice that the condition on the minimum singular value of $K^\ast_{O,H} (K^\ast_{H})^{-1} K^\ast_{H,O}$ is more stringent than on the minimum nonzero entry of $K^\ast_O$. One role played by these conditions is to ensure that the estimates $(\hat{S}_n,\hat{L}_n)$ do not have smaller support size/rank than $(K^\ast_O,K^\ast_{O,H} (K^\ast_{H})^{-1} K^\ast_{H,O})$. However the minimum singular value bound plays the additional role of bounding the curvature of the low-rank matrix variety around the point $K^\ast_{O,H} (K^\ast_{H})^{-1} K^\ast_{H,O}$, which is the reason for this condition being more stringent. Notice also that the number of hidden variables $h$ does not explicitly appear in the bounds in Theorem~\ref{theo:main}, which only depend on $p,\mu(\Omega(K^\ast_O)),\xi(T(K^\ast_{O,H} (K^\ast_{H})^{-1} K^\ast_{H,O}))$. However the dependence on $h$ is implicit in the dependence on $\xi(T(K^\ast_{O,H} (K^\ast_{H})^{-1} K^\ast_{H,O}))$, and we discuss this point in greater detail in the following section.
Finally we note that algebraic and parametric consistency hold under the assumptions of Theorem~\ref{theo:main} for a \emph{range} of values of $\gamma$:
\begin{equation*}
\gamma \in \left[\frac{3 \beta (2-\nu) \xi(T)}{\nu \alpha}, \frac{\nu \alpha}{2 \beta (2-\nu) \mu(\Omega)} \right].
\end{equation*}
In particular the assumptions on the sample complexity, the minimum nonzero singular value of $K^\ast_{O,H} (K^\ast_{H})^{-1} K^\ast_{H,O}$, and the minimum magnitude nonzero entry of $K^\ast_O$ are governed by the lower end of this range for $\gamma$. These assumptions can be weakened if we only require consistency for a smaller range of values of $\gamma$. The following corollary conveys this point with a specific example:
\begin{CORL} \label{theo:maincorl}
Consider the same setup and notation as in Theorem~\ref{theo:main}. Suppose that the quantities $\mu(\Omega)$ and $\xi(T)$ satisfy the assumption of Proposition~\ref{theo:irr} for identifiability. Suppose that we make the following assumptions:
\begin{enumerate}
\item Let $\gamma$ be chosen to be equal to $\frac{\nu \alpha}{2 \beta (2-\nu) \mu(\Omega)}$ (the upper end of the range specified in Proposition~\ref{theo:irr}), i.e., $\gamma \asymp \tfrac{1}{\mu(\Omega)}$.
\item $n \gtrsim \mu(\Omega)^4 ~ p$.
\item $\lambda_n \asymp \mu(\Omega) \sqrt{\frac{p}{n}}$.
\item $\sigma \gtrsim \frac{\mu(\Omega)^2}{\xi(T)} \sqrt{\frac{p}{n}}$.
\item $\theta \gtrsim \sqrt{\frac{p}{n}}$.
\end{enumerate}
Then with probability greater than $1 - 2\exp\{-p\}$ we have estimates $(\hat{S}_n,\hat{L}_n)$ that are algebraically consistent, and parametrically consistent with the error bounded as
\begin{equation*}
g_\gamma(\hat{S}_n-K^\ast_O, \hat{L}_n - K^\ast_{O,H} (K^\ast_H)^{-1} K^\ast_{H,O}) \lesssim \mu(\Omega) \sqrt{\frac{p}{n}}.
\end{equation*}
\end{CORL}
The proof of this corollary\footnote{By making stronger assumptions on the Fisher information matrix $\mathcal{I}^\ast$, one can further remove the factor of $\xi(T)$ in the lower bound for $\sigma$. Specifically the lower bound $\sigma \gtrsim \mu(\Omega)^3 \sqrt{\frac{p}{n}}$ suffices for consistent estimation if $\alpha_T,\beta_T$ bound the minimum/maximum gains of $\mathcal{I}^\ast$ for \emph{all} matrices (rather than just those near $T$), and $\delta_T$ bounds the $\mathcal{I}^\ast$-inner-product for \emph{all} pairs of orthogonal matrices (rather than just those near $T$ and $T^\bot$).} is analogous to that of Theorem~\ref{theo:main}. We emphasize that in practice it is often beneficial to have consistent estimates for a range of values of $\gamma$ (as in Theorem~\ref{theo:main}). Specifically the stability of the sparsity pattern and rank of the estimates $(\hat{S}_n,\hat{L}_n)$ for a range of tradeoff parameters is useful in order to choose a suitable value of $\gamma$, as prior information about the quantities $\mu(\Omega(K^\ast_O))$ and $\xi(T(K^\ast_{O,H} (K^\ast_{H})^{-1} K^\ast_{H,O}))$ is not typically available (see Section~\ref{sec:sims}).
\subsection{Scaling regimes}
\label{subsec:scal}
Next we consider classes of latent-variable models that satisfy the conditions of Theorem~\ref{theo:main}. Recall that $n$ denotes the number of samples, $p$ denotes the number of observed variables, and $h$ denotes the number of latent variables. Recall the assumption that the quantities $\alpha,\beta,\nu,\psi$ defined in Section~\ref{subsec:fi} remain bounded, and do not scale with the other parameters such as $(p,h,n)$ or $\xi(T(K^\ast_{O,H} (K^\ast_{H})^{-1} K^\ast_{H,O}))$ or $\mu(\Omega(K^\ast_O))$. In particular we focus on the tradeoff between $\xi(T(K^\ast_{O,H} (K^\ast_{H})^{-1} K^\ast_{H,O}))$ and $\mu(\Omega(K^\ast_O))$ (the quantities that control the complexity of a latent-variable graphical model), and the resulting scaling regimes for consistent estimation. Let $d = \mathrm{deg}(K^\ast_O)$ denote the degree of the conditional graphical model among the observed variables, and let $i = \mathrm{inc}(K^\ast_{O,H} (K^\ast_{H})^{-1} K^\ast_{H,O})$ denote the incoherence of the correlations induced due to marginalization over the latent variables (we suppress the dependence on $n$). These quantities are defined in Section~\ref{sec:iden}, and we have from Propositions~\ref{theo:xiinc} and \ref{theo:mudeg} that
\begin{equation*}
\mu(\Omega(K^\ast_O)) \leq d, ~~~ \xi(T(K^\ast_{O,H} (K^\ast_{H})^{-1} K^\ast_{H,O})) \leq 2 i.
\end{equation*}
Since $\alpha,\beta,\nu,\psi$ are assumed to be bounded, we also have from Proposition~\ref{theo:irr} that the product of $\mu$ and $\xi$ must be bounded by a constant. Thus, we study latent-variable models in which
\begin{equation*}
d ~ i = \mathcal{O}(1).
\end{equation*}
As we describe next, there are non-trivial classes of latent-variable graphical models in which this condition holds.
\textbf{Bounded degree and incoherence}: The first class of latent-variable models that we consider are those in which the conditional graphical model among the observed variables (given by $K^\ast_O$) has constant degree $d$. Recall from equation \eqref{eq:incineq} that the incoherence $i$ of the effect of the latent variables (given by $K^\ast_{O,H} (K^\ast_{H})^{-1} K^\ast_{H,O}$) can be as small as $\sqrt{\tfrac{h}{p}}$. Consequently latent-variable models in which
\begin{equation*}
d = \mathcal{O}(1), ~~~ h \sim p,
\end{equation*}
can be estimated consistently from $n \sim p$ samples as long as the low-rank matrix $K^\ast_{O,H} (K^\ast_H)^{-1} K^\ast_{H,O}$ is almost maximally incoherent, i.e., $i \sim \sqrt{\tfrac{h}{p}}$ so the effect of marginalization over the latent variables is diffuse across almost all the observed variables. Thus consistent latent-variable model selection is possible even when the number of samples and the number of latent variables are on the same order as the number of observed variables.
\textbf{Polylogarithmic degree}: The next class of models that we study consists of those in which the degree $d$ of the conditional graphical model of the observed variables grows polylogarithmically with $p$. Consequently, the incoherence $i$ of the matrix $K^\ast_{O,H} (K^\ast_H)^{-1} K^\ast_{H,O}$ must decay as the inverse of poly-$\log(p)$. Using the fact that maximally incoherent low-rank matrices $K^\ast_{O,H} (K^\ast_H)^{-1} K^\ast_{H,O}$ can have incoherence as small as $\sqrt{\tfrac{h}{p}}$, latent-variable models in which
\begin{equation*}
d \sim \log(p)^q, ~~~ h \sim \frac{p}{\log(p)^{2q}},
\end{equation*}
can be consistently estimated as long as $n \sim p~ $poly-$\log(p)$.
\subsection{Rates for covariance matrix estimation}
\label{subsec:cov}
The main result Theorem~\ref{theo:main} gives conditions under which we can consistently estimate the sparse and low-rank parts that compose the marginal concentration matrix $\tilde{K}^\ast_O$. Here we prove a corollary that gives rates for covariance matrix estimation, i.e., the quality of the estimate $(\hat{S}_n-\hat{L}_n)^{-1}$ with respect to the ``true'' marginal covariance matrix $\Sigma^\ast_O$.
\begin{CORL}
Under the same conditions as in Theorem~\ref{theo:main}, we have with probability greater than $1 - 2 \exp\{-p\}$ that
\begin{equation*}
g_\gamma(\mathcal{A}^\dag[(\hat{S}_n-\hat{L}_n)^{-1} - \Sigma^\ast_O]) \lesssim \frac{1}{\xi(T)} \sqrt{\frac{p}{n}}.
\end{equation*}
Specifically this implies that $\|(\hat{S}_n-\hat{L}_n)^{-1} - \Sigma^\ast_O\|_2 \lesssim \tfrac{1}{\xi(T)}\sqrt{\tfrac{p}{n}}$.
\end{CORL}
\textbf{Proof}: The proof of this corollary follows directly from duality. Based on the analysis in Appendix~\ref{app:main} (in particular using the optimality conditions of the modified convex program \eqref{eq:sdptsm}), we have that
\begin{equation*}
g_\gamma(\mathcal{A}^\dag[(\hat{S}_n-\hat{L}_n)^{-1} - \Sigma^n_O]) \leq \lambda_n.
\end{equation*}
We also have from the bound on the number of samples $n$ that with probability greater than $1 -2 \exp\{-p\}$ (see Appendix~\ref{app:final})
\begin{equation*}
g_\gamma(\mathcal{A}^\dag[\Sigma_O^\ast - \Sigma_O^n]) \lesssim \lambda_n.
\end{equation*}
Based on the choice of $\lambda_n$ in Theorem~\ref{theo:main}, we then have the desired bound. $\square$
\subsection{Proof strategy for Theorem~\ref{theo:main}}
\label{subsec:proof}
Standard results from convex analysis \cite{Roc1996} state that $(\hat{S}_n,\hat{L}_n)$ is a minimum of the convex program \eqref{eq:sdp1} if the zero matrix belongs to the subdifferential of the objective function evaluated at $(\hat{S}_n,\hat{L}_n)$ (in addition to $(\hat{S}_n,\hat{L}_n)$ satisfying the constraints). The subdifferential of the $\ell_1$ norm at a matrix $M$ is given by
\begin{equation*}
N \in \partial\|M\|_1 ~~ \Leftrightarrow ~~ \mathcal{P}_{\Omega(M)}(N) = \mathrm{sign}(M), ~ \|\mathcal{P}_{\Omega(M)^\bot}(N)\|_\infty \leq 1.
\end{equation*}
For a symmetric positive semidefinite matrix $M$ with SVD $M = U D U^T$, the subdifferential of the trace function restricted to the cone of positive semidefinite matrices (i.e., the nuclear norm over this set) is given by:
\begin{equation*}
N \in \partial[\mathrm{tr}(M) + \mathbb{I}_{M \succeq 0}] ~~ \Leftrightarrow ~~ \mathcal{P}_{T(M)}(N) = U U^T, ~ \mathcal{P}_{T(M)^\bot}(N) \preceq I,
\end{equation*}
where $\mathbb{I}_{M \succeq 0}$ denotes the characteristic function of the set of positive semidefinite matrices (i.e., the convex function that evaluates to $0$ over this set and $\infty$ outside). The key point is that elements of the subdifferential decompose with respect to the tangent spaces $\Omega(M)$ and $T(M)$. This decomposition property plays a critical role in our analysis. In particular it states that the optimality conditions consist of two parts, one part corresponding to the tangent spaces $\Omega$ and $T$ and another corresponding to the normal spaces $\Omega^\bot$ and $T^\bot$.
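Both subdifferential characterizations above can be verified numerically via the subgradient inequality $f(M') \geq f(M) + \langle N, M' - M\rangle$. In the following sketch (our own illustration; the matrices are arbitrary), `N1` is built as $\mathrm{sign}(M)$ on the support plus an entrywise-bounded term on $\Omega(M)^\bot$, and `N2` as $UU^T$ plus a spectrally bounded term on $T(L)^\bot$:

```python
import numpy as np

rng = np.random.default_rng(4)
p, r = 6, 2

# --- l1 subdifferential: P_Omega(N) = sign(M), |N| <= 1 off the support ---
M = np.zeros((p, p))
M[0, 1] = 1.5
M[2, 2] = -0.7
Z = rng.uniform(-1, 1, (p, p)) * (M == 0)       # the P_{Omega^perp} component
N1 = np.sign(M) + Z
for _ in range(200):
    Mp = rng.standard_normal((p, p))
    assert np.abs(Mp).sum() >= np.abs(M).sum() + np.sum(N1 * (Mp - M)) - 1e-10

# --- trace over the PSD cone: P_T(N) = U U^T, P_{T^perp}(N) <= I ---
U = np.linalg.qr(rng.standard_normal((p, r)))[0]
L = U @ np.diag([2.0, 1.0]) @ U.T               # PSD, rank r
Pperp = np.eye(p) - U @ U.T
C = rng.standard_normal((p, p))
C = C + C.T
C *= 0.9 / np.abs(np.linalg.eigvalsh(C)).max()  # ||C||_2 < 1
N2 = U @ U.T + Pperp @ C @ Pperp
for _ in range(200):
    A = rng.standard_normal((p, p))
    Lp = A @ A.T                                # an arbitrary PSD test point
    assert np.trace(Lp) >= np.trace(L) + np.sum(N2 * (Lp - L)) - 1e-10
print("both subgradient characterizations verified")
```

The second loop works because $\langle N_2, L\rangle = \mathrm{tr}(L)$ and $I - N_2 = P_{U^\bot}(I - C)P_{U^\bot} \succeq 0$, exhibiting the decomposition across $T(L)$ and $T(L)^\bot$ exploited in the analysis.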
Consider the optimization problem \eqref{eq:sdp1} with the additional (non-convex) constraints that the variable $S$ belongs to the algebraic variety of sparse matrices and the variable $L$ belongs to the algebraic variety of low-rank matrices. While this new optimization problem is non-convex, it has a very useful property. At a globally optimal solution (and indeed at any locally optimal solution) $(\tilde{S},\tilde{L})$ such that $\tilde{S}$ and $\tilde{L}$ are smooth points of the algebraic varieties of sparse and low-rank matrices, the first-order optimality conditions state that the Lagrange multipliers corresponding to the additional variety constraints must lie in the \emph{normal spaces} $\Omega(\tilde{S})^\bot$ and $T(\tilde{L})^\bot$. This fundamental observation, combined with the decomposition property of the subdifferentials of the $\ell_1$ and nuclear norms, suggests the following high-level proof strategy:
\begin{enumerate}
\item Let $(\tilde{S},\tilde{L})$ be the globally optimal solution of the optimization problem \eqref{eq:sdp1} with the additional constraints that $(S,L)$ belong to the algebraic varieties of sparse/low-rank matrices; specifically constrain $S$ to lie in $\mathcal{S}(|\mathrm{support}(K^\ast_O)|)$ and constrain $L$ to lie in $\mathcal{L}(\mathrm{rank}(K^\ast_{O,H} (K^\ast_{H})^{-1} K^\ast_{H,O}))$. Show first that $(\tilde{S},\tilde{L})$ are smooth points of these varieties.
\item The first part of the subgradient optimality conditions of the original convex program \eqref{eq:sdp1} corresponding to components \emph{on} the tangent spaces $\Omega(\tilde{S})$ and $T(\tilde{L})$ is satisfied. This conclusion can be reached because the additional Lagrange multipliers due to the variety constraints lie in the normal spaces $\Omega(\tilde{S})^\bot$ and $T(\tilde{L})^\bot$.
\item Finally show that the second part of the subgradient optimality conditions of \eqref{eq:sdp1} (without any variety constraints) corresponding to components in the normal spaces $\Omega(\tilde{S})^\bot$ and $T(\tilde{L})^\bot$ is also satisfied by $(\tilde{S},\tilde{L})$.
\end{enumerate}
Combining these steps together we show that $(\tilde{S},\tilde{L})$ satisfy the optimality conditions of the \emph{original convex program} \eqref{eq:sdp1}. Consequently $(\tilde{S},\tilde{L})$ is also the optimum of the convex program \eqref{eq:sdp1}. As this estimate is also the solution to the problem with the variety constraints, the algebraic consistency of $(\tilde{S},\tilde{L})$ can be directly concluded. We emphasize here that the variety-constrained optimization problem is used solely as an analysis tool in order to prove consistency of the estimates provided by the convex program \eqref{eq:sdp1}. These steps describe our broad strategy, and we refer the reader to Appendix~\ref{app:main} for details. The key technical complication is that the tangent spaces at $\tilde{L}$ and $K^\ast_{O,H} (K^\ast_{H})^{-1} K^\ast_{H,O}$ are in general different. We bound the twisting between these tangent spaces by using the fact that the minimum nonzero singular value of $K^\ast_{O,H} (K^\ast_{H})^{-1} K^\ast_{H,O}$ is bounded away from zero (as assumed in Theorem~\ref{theo:main} and using Proposition~\ref{theo:tspace}).
\section{Simulation Results}
\label{sec:sims}
In this section we give an experimental demonstration of the consistency of our estimator \eqref{eq:sdp1} on synthetic examples, and its effectiveness in modeling real-world stock return data. Our choices of $\lambda_n$ and $\gamma$ are guided by Theorem~\ref{theo:main}. Specifically, we choose $\lambda_n$ to be proportional to $\sqrt{\tfrac{p}{n}}$. For $\gamma$ we observe that the support/sign-pattern and the rank of the solution $(\hat{S}_n,\hat{L}_n)$ are the same for a \emph{range} of values of $\gamma$. Therefore one could solve the convex program \eqref{eq:sdp1} for several values of $\gamma$, and choose a solution in a suitable range in which the sign-pattern and rank of the solution are stable. In practical problems with real-world data these parameters may be chosen via cross-validation. For small problem instances we solve the convex program \eqref{eq:sdp1} using a combination of YALMIP \cite{Lof2004} and SDPT3 \cite{TohTT}, which are standard off-the-shelf packages for solving convex programs. For larger problem instances we use the special-purpose solver LogdetPPA \cite{WanST2009} developed for log-determinant semidefinite programs.
\begin{figure}
\begin{center}
\epsfig{file=composite_res.eps,width=8cm,height=6cm} \caption{Synthetic data: Plot showing probability of consistent estimation of the number of latent variables, and the conditional graphical model structure of the observed variables. The three models studied are $(a)$ a 36-node conditional graphical model given by a cycle with $h=2$ latent variables, $(b)$ a 36-node conditional graphical model given by a cycle with $h=3$ latent variables, and $(c)$ a 36-node conditional graphical model given by a $6 \times 6$ grid with $h = 1$ latent variable. For each plotted point, the probability of consistent estimation is obtained over $50$ random trials.} \label{fig:fig1}
\end{center}
\end{figure}
\subsection{Synthetic data}
In the first set of experiments we consider a setting in which we have access to samples of the observed variables of a latent-variable graphical model. We consider several latent-variable Gaussian graphical models. The first model consists of $p=36$ observed variables and $h = 2$ hidden variables. The conditional graphical model structure of the observed variables is a cycle with the edge partial correlation coefficients equal to $0.25$; thus, this conditional model is specified by a sparse graphical model with degree $2$. The second model is the same as the first one, but with $h = 3$ latent variables. The third model consists of $h = 1$ latent variable, and the conditional graphical model structure of the observed variables is given by a $6 \times 6$ nearest-neighbor grid (i.e., $p=36$ and degree $4$) with the partial correlation coefficients of the edges equal to $0.15$. In all three of these models each latent variable is connected to a random subset of $80\%$ of the observed variables (and the partial correlation coefficients corresponding to these edges are also random). Therefore the effect of the latent variables is ``spread out'' over most of the observed variables, i.e., the low-rank matrix summarizing the effect of the latent variables is incoherent.
For each model we generate $n$ samples of the observed variables, and use the resulting sample covariance matrix $\Sigma_O^n$ as input to our convex program \eqref{eq:sdp1}. Figure~\ref{fig:fig1} shows the probability of recovery of the support/sign-pattern of the conditional graphical model structure in the observed variables and the number of latent variables (i.e., probability of obtaining algebraically consistent estimates) as a function of $n$. This probability is evaluated over $50$ experiments for each value of $n$.
\begin{figure}
\begin{center}
\epsfig{file=stockHGM.eps,width=6.5cm,height=4cm} \hspace{-0.35in}
\epsfig{file=stockGM.eps,width=6.5cm,height=4cm} \caption{Stock returns: The figure on the left shows the sparsity pattern (black denotes an edge, and white denotes no edge) of the concentration matrix of the conditional graphical model (135 edges) of the stock returns, conditioned on $5$ latent variables, in a latent-variable graphical model (total number of parameters equals $639$). This model is learned using \eqref{eq:sdp1}, and the KL divergence with respect to a Gaussian distribution specified by the sample covariance is $17.7$. The figure on the right shows the concentration matrix of the graphical model (646 edges) of the stock returns, learned using standard sparse graphical model selection based on solving an $\ell_1$-regularized maximum-likelihood program (total number of parameters equals $730$). The KL divergence between this distribution and a Gaussian distribution specified by the sample covariance is $44.4$.} \label{fig:fig2}
\end{center}
\end{figure}
In all of these cases standard graphical model selection applied directly to the observed variables is not useful as the marginal concentration matrix of the observed variables is not well-approximated by a sparse matrix. These experiments agree with our theoretical results that the convex program \eqref{eq:sdp1} is an algebraically consistent estimator of a latent-variable model given (sufficiently many) samples of only the observed variables.
\subsection{Stock return data}
In the next experiment we model the statistical structure of monthly stock returns of 84 companies in the S\&P 100 index from 1990 to 2007; we disregard 16 companies that were listed after 1990. The number of samples $n$ is equal to $216$. We compute the sample covariance based on these returns and use this as input to \eqref{eq:sdp1}.
The model learned using \eqref{eq:sdp1} for suitable values of $\lambda_n,\gamma$ consists of $h = 5$ latent variables, and the conditional graphical model structure of the stock returns conditioned on these hidden components consists of $135$ edges. Therefore the number of parameters in the model is $84 + 135 + (5 \times 84) = 639$. The resulting KL divergence between the distribution specified by this model and a Gaussian distribution specified by the sample covariance is $17.7$. Figure~\ref{fig:fig2} (left) shows the \emph{conditional} graphical model structure. The strongest edges in this conditional graphical model, as measured by partial correlation, are between Baker Hughes - Schlumberger, A.T.\&T. - Verizon, Merrill Lynch - Morgan Stanley, Halliburton - Baker Hughes, Intel - Texas Instruments, Apple - Dell, and Microsoft - Dell. It is of interest to note that in the Standard Industrial Classification\footnote{See the United States Securities and Exchange Commission website at http://www.sec.gov/info/edgar/siccodes.htm} system for grouping these companies, several of these pairs are in different classes. As mentioned in Section~\ref{subsec:ps} our method estimates a low-rank matrix that summarizes the effect of the latent variables; in order to factorize this low-rank matrix, for example into sparse factors, one could use methods such as those described in \cite{WitTH2009}.
We compare these results to those obtained using a sparse graphical model learned using $\ell_1$-regularized maximum-likelihood (see for example \cite{RavWRY2008}), without introducing any latent variables. Figure~\ref{fig:fig2} (right) shows this graphical model structure. The number of edges in this model is $646$ (the total number of parameters is equal to $646 + 84 = 730$), and the resulting KL divergence between this distribution and a Gaussian distribution specified by the sample covariance is $44.4$. Indeed to obtain a comparable KL divergence to that of the latent-variable model described above, one would require a graphical model with over $3000$ edges.
These results suggest that a latent-variable graphical model is better suited than a standard sparse graphical model for modeling the statistical structure among stock returns. This is likely due to the presence of global, long-range correlations in stock return data that are better modeled via latent variables.
\section{Discussion}
\label{sec:conc}
We have studied the problem of modeling the statistical structure of a collection of random variables as a sparse graphical model conditioned on a few additional hidden components. As a first contribution we described conditions under which such latent-variable graphical models are identifiable given samples of only the observed variables. We also proposed a convex program based on regularized maximum-likelihood for latent-variable graphical model selection; the regularization function is a combination of the $\ell_1$ norm and the nuclear norm. Given samples of the observed variables of a latent-variable Gaussian model we proved that this convex program provides consistent estimates of the number of hidden components as well as the conditional graphical model structure among the observed variables conditioned on the hidden components. Our analysis holds in the high-dimensional regime in which the number of observed/latent variables are allowed to grow with the number of samples of the observed variables. In particular we discuss certain scaling regimes in which consistent model selection is possible even when the number of samples and the number of latent variables are on the same order as the number of observed variables. These theoretical predictions are verified via a set of experiments on synthetic data. We also demonstrate the effectiveness of our approach in modeling real-world stock return data.
Several research questions arise that are worthy of further investigation. While the convex program \eqref{eq:sdp1} can be solved in polynomial time using off-the-shelf solvers, it is preferable to develop more efficient special-purpose solvers that can scale to massive datasets by taking advantage of the structure of the formulation \eqref{eq:sdp1}. Finally it would be of interest to develop a similar convex optimization formulation with consistency guarantees for latent-variable models with non-Gaussian variables, e.g., for categorical data.
\section*{Acknowledgements}
We would like to thank James Saunderson and Myung Jin Choi for helpful discussions, and Kim-Chuan Toh for kindly providing us specialized code to solve larger instances of our convex program.
require 'drb_queue/version'
require 'drb_queue/store'
require 'drb/drb'
require 'drb/unix'
require 'fileutils'
require 'forwardable'
require 'timeout'
require 'json'
require 'monitor'

module DRbQueue
  extend self
  extend Forwardable

  autoload :Server, 'drb_queue/server'
  autoload :Configuration, 'drb_queue/configuration'

  ConfiguredAfterStarted = Class.new(StandardError)

  attr_reader :started
  alias_method :started?, :started

  # Enqueue a unit of work. +worker+ must be a module (or class) that
  # responds to +perform+.
  def enqueue(worker, *args)
    raise Server::NotStarted, "You must start the server first" unless started?
    raise ArgumentError, "#{worker} is not a module" unless worker.is_a?(Module)
    raise ArgumentError, "#{worker} does not respond to perform" unless worker.respond_to?(:perform)

    server.enqueue(worker, *args)
  end

  # Fork the queue server process and connect a DRb client to it.
  def start!
    raise Server::AlreadyStarted, "The server is already started" if started?

    synchronize do
      return if started?

      @pid = fork_server
      at_exit { shutdown! }
      @started = true

      begin
        connect_client!
      rescue => e
        shutdown!
        raise
      end
    end
  end

  def configure
    raise ConfiguredAfterStarted, "You must configure #{self.name} BEFORE starting the server" if started?
    synchronize { yield configuration }
  end

  # Stop the server process. Pass +true+ to kill it immediately instead of
  # allowing a graceful shutdown.
  def shutdown!(immediately = false)
    return unless started?

    synchronize do
      return unless started?

      Process.kill(immediately ? 'KILL' : 'TERM', pid)

      begin
        ::Timeout.timeout(20) { Process.wait }
      rescue Timeout::Error
        Process.kill('KILL', pid)
        Process.wait
        logger.error("#{self}: forced shutdown")
      ensure
        cleanup_socket
        @started = false
        @pid = nil
      end
    end
  end

  # Connect to the forked server over DRb, retrying with exponential
  # backoff while the server boots.
  def connect_client!
    synchronize do
      tries = 0

      begin
        @server = DRbObject.new_with_uri(server_uri)
        @server.ping
      rescue DRb::DRbConnError => e
        raise Server::UnableToStart.new("Couldn't start up the queue server", e) if tries > 4

        sleep 0.1 * (2 ** tries)
        tries += 1
        retry
      end
    end
  end

  private

  attr_reader :pid, :server

  def fork_server
    cleanup_socket
    execute_before_fork_callbacks

    fork do
      execute_after_fork_callbacks

      server = Server.new(configuration)
      DRb.start_service(server_uri, server)

      shutting_down = false
      trap('TERM') { shutting_down = true }
      sleep 0.1 until shutting_down

      server.shutdown!
      DRb.stop_service
    end
  end

  def execute_before_fork_callbacks
    before_fork_callbacks.each(&:call)
  end

  def execute_after_fork_callbacks
    after_fork_callbacks.each(&:call)
  end

  def configuration
    @configuration ||= Configuration.new
  end

  def_delegators :configuration, :server_uri, :socket_location, :before_fork_callbacks, :after_fork_callbacks, :num_workers, :logger

  def synchronize(&block)
    synchronization_monitor.synchronize(&block)
  end

  def synchronization_monitor
    @synchronization_monitor ||= Monitor.new
  end

  def cleanup_socket
    FileUtils.rm(socket_location) if File.exist?(socket_location)
  end
end
WKU Biology
Public invited to help scientists study bat behavior
Arizona BatWatch project created by education coordinator of Mammoth Cave International Center for Science and Learning
Bats play an important role in the ecosystem. Yet, scientists know relatively little about their behaviors in the wild. Arizona BatWatch is a new, National Science Foundation-funded, citizen science project designed to help scientists study bat behaviors around a roost.
Arizona BatWatch (www.ArizonaBatWatch.org) launched Wednesday morning (Oct. 19), just in time for National Bat Week on Oct. 24-31.
Lesser long-nosed bats are endangered species that pollinate agave and southwestern columnar cacti. Arizona BatWatch, a National Science Foundation-funded, citizen science project, launched Oct. 19. (Photo by U.S. Fish and Wildlife Services)
"It is hard to study bat behaviors in the wild because bats are small, nocturnal, and easily disturbed," said Shannon Trimboli, the creator of Arizona BatWatch and Education Coordinator of the Mammoth Cave International Center for Science and Learning, a partnership between WKU and Mammoth Cave National Park.
"Recent advances in technology have made it easier to study bats in the wild through the use of near-infrared video cameras that can record bat behaviors without a person being present. However, a single study can create many thousands of hours of video which someone has to watch and classify."
"Traditionally a single scientist and his or her team of one or two assistants would watch and classify all of the videos," said Trimboli. "With so few people working on the videos, it would take a very long time to go through all the videos. Later, if another scientist wanted to use the same videos to study a different set of behaviors, the new scientist would have to repeat the process to identify the new set of behaviors."
According to Trimboli, Arizona BatWatch uses citizen science to solve this problem. Participants in Arizona BatWatch watch archived videos of endangered, lesser long-nosed bats flying around a roost in Arizona. As they watch the videos, they identify the behaviors they see. With so many people watching the videos, the behaviors on the videos will be identified much faster than a single scientist and his or her team could do by themselves. Also, once the videos are classified anyone can use them without having to re-watch and re-classify the entire set of videos.
"Citizen science creates partnerships between the public and researchers working on large, scientific studies. These partnerships can lead to valuable scientific contributions," said Dr. Cathleen Webb, Associate Dean for Research in WKU's Ogden College of Science and Engineering.
"Citizen science can also profoundly influence the communication of science at the most fundamental levels by igniting a passion for discovery through hands-on, active, participation as a research partner," Dr. Webb said. "At WKU, we value these accomplishments and strive to actively engage people in research and educational opportunities."
In addition to the opportunity to participate in the research, the Arizona BatWatch website also includes educational opportunities. The website contains detailed information about the project, lesser long-nosed bats, bats in general, and the data being collected. It also includes discussion boards where participants can ask questions and share their findings.
"Everyone is invited to participate in Arizona BatWatch," Trimboli said. "It is a great way to celebrate National Bat Week and to help scientists learn more about bats."
The bats in the videos on the Arizona BatWatch website are lesser long-nosed bats (Leptonycteris curasoae). Lesser long-nosed bats are an endangered species of bat that lives in Mexico, Arizona and New Mexico. It is a pollinator and feeds on the nectar and pollen of agave and columnar cacti. Lesser long-nosed bats are endangered primarily due to habitat destruction and human disturbances at roost sites.
Arizona BatWatch was funded through a National Science Foundation grant (1223908). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
More: Arizona BatWatch on Facebook; Arizona BatWatch blog.
The Mammoth Cave International Center for Science and Learning is a partnership between WKU and Mammoth Cave National Park. At WKU, it is housed within the Dean's Office of Ogden College of Science and Engineering.
For information about the project, contact Shannon Trimboli, Education Coordinator of the Mammoth Cave International Center for Science and Learning, (270) 758-2422 or shannon.trimboli@wku.edu
The lethal injection chamber of the South Dakota State Penitentiary in Sioux Falls on Oct. 9, 2012.
Photo by Amber Hunt/AP
Scalia's perfect capital-punishment case falls apart
A little over two decades ago, Supreme Court Justice Antonin Scalia was dismissive of then-Justice Harry Blackmun's concerns about the death penalty. In fact, Scalia had a case study in mind that demonstrated exactly why the system of capital punishment has value.
As regular readers may recall, Scalia specifically pointed to a convicted killer named Henry Lee McCollum as an obvious example of a man who deserved to be put to death. "For example, the case of an 11-year-old girl raped by four men and then killed by stuffing her panties down her throat," Scalia wrote in a 1994 ruling. "How enviable a quiet death by lethal injection compared with that!"
For Scalia, McCollum was the perfect example – a murderer whose actions were so heinous that his crimes stood as a testament to the merit of capital punishment itself.
Yesterday, McCollum was pardoned. Scalia's perfect example of a man who deserved to be killed by the state was innocent. North Carolina's News & Observer reported:
Gov. Pat McCrory on Thursday pardoned two half-brothers who were exonerated of murder after spending three decades in prison.
The governor took nine months to make the decision, saying he thoroughly reviewed the pardons sought by Henry McCollum and Leon Brown. Both men are intellectually disabled.
If this story sounds at all familiar, it was last fall when a judge ordered the men released. The confessions appeared to have been coerced 30 years ago and new DNA evidence implicated another man whose possible involvement had been overlooked at the time.
As recently as 2010, the North Carolina Republican Party used a McCollum photo on campaign fliers to attack a Democratic candidate as "soft on crime."
McCollum hadn't done anything wrong.
The pardon is a welcome development, though the News & Observer added that the middle-aged men, after having spent most of their lives behind bars – and on death row – for a crime they didn't commit, are struggling.
[T]he men have been living with their sister, who has struggled to pay rent and utilities on her home in Fayetteville. The Center for Death Penalty Litigation established a fund to help them survive.
Each man now qualifies for $50,000 for each year they were imprisoned, up to a maximum of $750,000. They needed a gubernatorial pardon in order to collect the compensation.
As best as I can tell, Scalia has not yet commented.
The MaddowBlog, Antonin Scalia, Capital Punishment, Criminal Justice, Death Penalty, North Carolina, Pat McCrory and Society
You can rent a personal bank safe for any period from one day up to one year. Payment for the service is made on the date you sign the agreement for use of the personal bank safe.
The original taxpayer identification document (the document that confirms the registration of a physical entity (resident) in the State Register of physical entities – taxpayers, issued by the State Fiscal Service).
Valuable objects require real protection, especially while their owners are away on vacation or on business trips. To solve this problem, you can rent a personal bank safe in the vault at JSC Bank "ARCADA". In the safe you can keep valuables of any kind: money, precious metals, documents, works of art, collections, photo, video and audio materials, etc.
Only you will know what is in your personal bank safe! Each safe has two keys of different configuration: one is given to you, and the other (the locking key) is kept at the bank. The safe can be opened only by using both keys simultaneously, so no member of the bank staff can open the safe alone.
Closed on Saturdays, Sundays and public holidays.
When paying for the bank's services, the client also pays 2,200 hryvnias for the keys.
If the terms of use of the safe are not observed, the client must pay a penalty of 200% of the daily safe rental fee for each day past the expiry of the rental period.
Closed on Sundays and public holidays.
When paying for the bank's services, the client also pays 2,000 hryvnias for the keys.
\section{Introduction}
There is a great need to predict the distribution of the number of
hurricanes that might make landfall in the US in the next few
years. Such predictions are of use to all the entities that are
affected by hurricanes, ranging from local and national
governments to insurance and reinsurance companies. How, then,
should we make such predictions? There is no obvious best method.
For instance, one might consider making a prediction based on time-series analysis of the time-series of
historical landfalling hurricane numbers;
one might consider making a prediction of basin hurricane numbers using time-series analysis, and convert
that prediction to a prediction of landfalling hurricane numbers;
one might consider trying to predict SSTs first, and
convert that prediction to a prediction of landfalling numbers; or
one might try and use output from a numerical model of the climate
system. All of these are valid approaches, and each has their own
pros and cons.
In this article, we consider the idea of first predicting SST and then predicting
hurricane numbers given a prediction of SST. There are two obvious flavours of
this. The first is what we will call the `direct' (or `one-step') method, in which one regresses historical
numbers of landfalling hurricanes directly onto historical SSTs, and uses the
fitted regression relation to convert a prediction of future SSTs into a prediction
of future hurricane numbers. The second is what
we will call the `indirect' (or `two-step') method, in which one regresses \emph{basin} hurricane numbers onto
historical SSTs, predicts basin numbers, and then predicts landfalling numbers
from basin numbers. In the simplest version of the indirect method one might predict landfalling numbers
as a constant proportion of the number of basin hurricanes, where this proportion is
estimated using historical data.
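As a concrete illustration of the two flavours, the following sketch (added here for exposition; the data are invented, and an ordinary least-squares fit stands in for whatever regression model one might actually use) computes both predictions on a toy data set:

```python
# Toy comparison of the 'direct' (one-step) and 'indirect' (two-step)
# prediction methods described in the text, using invented data and a
# plain least-squares fit in place of a full regression model.

def ols(x, y):
    """Return (intercept, slope) of a least-squares fit of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return my - slope * mx, slope

# Hypothetical historical data: SST index, basin counts, landfalling counts.
sst      = [-0.4, -0.2, 0.0, 0.1, 0.3, 0.5]
basin    = [4, 5, 6, 7, 8, 9]
landfall = [1, 1, 2, 2, 2, 3]
sst_forecast = 0.2

# Direct method: regress landfalling numbers straight onto SST.
a, b = ols(sst, landfall)
direct_prediction = a + b * sst_forecast

# Indirect method: regress basin numbers onto SST, then scale the basin
# prediction by the historical landfalling proportion.
alpha, beta = ols(sst, basin)
p_hat = sum(landfall) / sum(basin)
indirect_prediction = p_hat * (alpha + beta * sst_forecast)

print(direct_prediction, indirect_prediction)
```

With this toy data both methods predict roughly two landfalling hurricanes; the question studied below is which method gives the better prediction on average.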
Consideration of the direct and indirect SST-based methods motivates the question: at a theoretical level, which
of these two methods is likely to work best? This is a statistical question about the properties
of regression and proportion models. We consider this abstract question in the
context of two simple models.
The first model is the more realistic of the two.
It uses observed SSTs, models the mean number of hurricanes in the basin as a linear function of SST,
and models each basin hurricane as having a constant probability of making landfall.
We run simulations that allow us to directly compare the performance of the direct and indirect methods
in the context of this model.
The second model is less realistic, but allows us to derive a general analytical result for the relative
performance of the direct and indirect methods. In this model we represent SST, basin and landfalling hurricane numbers
as being normally distributed and linearly related.
We don't think the answer as to which of the direct or indirect methods is better is \emph{a priori} obvious.
On the one hand, the direct method has fewer parameters to estimate, which might work in its favour.
On the other hand, the indirect method allows us to use more data by incorporating the basin hurricane numbers
into the analysis.
Section~\ref{methods1} describes the methods used in the simulation study, and
section~\ref{results1} describes the results from that study.
In section~\ref{linearnormalmodel} we derive general analytic results for the linear-normal model.
Finally in section~\ref{summary} we discuss our results.
\section{Simulation-based analysis: methods}\label{methods1}
For our simulation study, we compare the direct and indirect methods described above as follows.
\subsection{Generating artificial basin hurricane numbers}
First, we simulate 10,000 sets of artificial basin hurricane numbers for the period 1950-2005,
giving a total of 10,000 x 56 = 560,000 years of simulated hurricane numbers.
These numbers are created by sampling from poisson distributions with mean given by:
\begin{equation}
\lambda=\alpha+\beta S
\end{equation}
where $S$ is the observed MDR SST for each year in the period 1950-2005.
The values of $\alpha$ and $\beta$ are derived from model 4 in table 7 in~\citet{e04a}, in which
observed basin hurricane numbers were regressed onto observed SSTs using data
for 1950-2005. They have
values of 6.25 and 5, respectively.
The basin hurricane numbers we create by this method should contain roughly the same long-term
SST driven variability as the observed basin hurricane numbers, but different numbers of
hurricanes in the individual years. We say `roughly' the same, because (a) the linear model
we are using to relate SST to hurricane numbers is undoubtedly not exactly correct, although given
the analysis in~\citet{e04a} it certainly seems to be reasonable, and (b) the
parameters of the linear model are only estimated.
\subsection{Generating artificial landfalling hurricane numbers}
Given the 10,000 sets of simulated basin hurricane numbers described above, we then
create 10,000 sets of simulated \emph{landfalling} hurricane numbers by applying the rule
that each basin hurricane has a probability of 0.254 of making landfall (this value is taken
from observed data for 1950-2005).
The landfalling hurricane numbers we create by this method should contain roughly the
same long-term SST driven variability as the observed landfalling series, but different
numbers of hurricane in the individual years. They should also contain roughly the right
dependency structure between the number of hurricanes in the basin and the number at landfall
(e.g. that years with more hurricanes in the basin will tend to have more hurricanes at landfall).
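The two simulation steps described above can be written down compactly. The following stand-alone sketch is illustrative only: it uses an invented SST anomaly series, whereas the study uses the observed 1950-2005 MDR SSTs.

```python
# Sketch of the simulation scheme described above: basin hurricane counts are
# Poisson with mean alpha + beta * SST, and each basin hurricane then makes
# landfall independently with probability 0.254 (a binomial thinning).
import math
import random

ALPHA, BETA = 6.25, 5.0   # regression coefficients quoted in the text
P_LAND = 0.254            # landfalling probability quoted in the text

def poisson_sample(lam, rng):
    """Draw from a Poisson(lam) distribution via Knuth's method."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def simulate_realisation(ssts, rng):
    """One artificial 'history' of basin and landfalling hurricane counts."""
    basin = [poisson_sample(ALPHA + BETA * s, rng) for s in ssts]
    landfall = [sum(rng.random() < P_LAND for _ in range(b)) for b in basin]
    return basin, landfall

# Invented SST anomaly series standing in for the observed 1950-2005 values.
ssts = [0.3 * math.sin(2 * math.pi * t / 56) for t in range(56)]
basin, landfall = simulate_realisation(ssts, random.Random(0))
print(sum(basin), sum(landfall))
```

Because the landfalling count is a thinning of the basin count, the two series automatically have the desired dependency: years with more basin hurricanes tend to have more landfalling hurricanes.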
\subsection{Making predictions}
We now have 10,000 sets of 56 years of artificial data for basin and landfalling hurricanes.
This data contains a realistic representation of the SST-driven variability of hurricane numbers, and
of the dependency structure between the numbers of hurricanes in the basin and at landfall,
but different actual numbers of hurricanes from the observations. We can consider this data
as 10,000 realisations of what might have occurred over the last 56 years, had the SSTs
been the same, but the evolution of the atmosphere different. This data is a test-bed
that can help us understand aspects of the predictability of landfalling hurricanes given SST.
The observed and simulated data is illustrated in figures~\ref{f01} to \ref{f05}.
Figure~\ref{f01} shows the observed basin data (solid black line) and the observed
landfall data (solid grey line). The dashed black line shows the variability in the
observed basin data that is explained using SSTs. The dotted grey line shows the variability
in the observed landfall data that is explained using SSTs using the direct method, and
the dotted grey line shows the variability in the landfall data that is explained using
SSTs using the indirect method.
Figures~\ref{f02} to \ref{f05} show 4 realisations of the simulated data. In each figure
the dotted and dashed lines are the same as in figure~\ref{f01}, and show the SST driven
signal. The solid black line then shows the simulated basin hurricane numbers and the solid
grey line shows the simulated landfalling hurricane numbers.
We test predictions of landfalling hurricane numbers using the direct method as follows:
\begin{itemize}
\item we loop through the 10,000 sets of simulated landfalling hurricanes
\item for each set, we miss out one of the 56 years
\item using the other 55 years in that set, we build a linear regression model between
SST and landfalling hurricane numbers
\item we then use that fitted model to predict the number of landfalling hurricanes in
the missed year, given the SST for that year
\item we calculate the error for that prediction
\item we then repeat for all 10,000 sets (missing out a different year each time)
\item this gives us 10,000 prediction errors, from which we calculate the RMSE
\end{itemize}
We test the indirect method in almost exactly the same way, except that this time we
also fit a model for predicting landfalling numbers from basin numbers.
\subsection{Comparing the predictions}
We compare the direct and indirect predictions in two ways:
\begin{itemize}
\item First, we compare the two RMSE values
\item Second, we count what proportion of the time the errors from the direct method
are smaller than the errors from the indirect method
\end{itemize}
We also repeat the entire calculation a number of times as a rough way to evaluate the
convergence of our results.
\section{Simulation-based analysis: results}\label{results1}
We now present the results from our simulation study.
The RMSE for the direct method is 1.61 hurricanes, while the RMSE for the indirect method is 1.58 hurricanes.
This difference is small, but the sign of it does appear to be real: when we repeat the whole experiment
a number of times, we always find that the indirect method beats the direct method.
The indirect method beats the direct method 51.8\% of the time.
Given the design of the experiment, these results tell us how the two methods perform,
on average over the whole range of SST values. Next year's SST, however, is likely to be warm
relative to historical SSTs. We therefore also consider the more specific question of how the methods
are likely to perform for given warm SSTs. Based on~\citet{e20}, we fit a linear trend to the historical
SSTs, and extrapolate this trend out to 2011. This then gives SST values that are warmer than
anything experienced in history (27.987$^o$C to be precise). We then repeat the whole analysis
for predictions for this warm SST only. The results are more or less as before: the indirect
method still wins, only this time by a slightly larger margin. The ratio of RMSE scores
(direct divided by indirect) increases from 1.02 to 1.04.
\section{The Linear normal case}\label{linearnormalmodel}
We now study a slightly less realistic model, in which we take SSTs and hurricane numbers
in the basin and at landfall to be normally distributed. These changes allow us to derive
a very general result for the relative performance of the direct and indirect methods.
\subsection{The setup}
Here's how we set the problem up in this case.
Consider two simple regression models for centred random
variables $Y$ and $Z$,
\begin{eqnarray*}
Y &=& X \beta + \eps, \quad \eps \sim (0,\sigma_\eps^2 I_n), \\
Z &=& Y \gamma + \eta, \quad \eta \sim (0,\sigma_\eta^2 I_n),
\end{eqnarray*}
where $\eps$ and $\eta$ are independent. Here $X$, $Y$, $Z$,
$\eps$ and $\eta$ are $n \times 1$ column vectors, $\beta$ and
$\gamma$ are scalars, and $I_n$ is the $n \times n$ identity
matrix. We will assume $X$ is fixed.
In relation to the hurricane problem, $X$ is the time-series of $n$ years
of SST values, $Y$ is the time-series of $n$ years of basin hurricane numbers and $Z$ is
the time-series of $n$ years of landfalling hurricane numbers. Note that in our
notation $X$ is the \emph{whole time-series} of SST, written as a vector, and similarly
for $Y$ and $Z$. Using vector notation avoids the messy use of subscripts.
Two immediate comments about this setup: (a) we are assuming that basin and landfalling hurricane
numbers are normally distributed. This doesn't really make sense, since they are counts that
can only take integer values: using a poisson distribution would make more sense.
We are starting off by addressing this question for normally distributed data
because it's more tractable that way;
(b) we are assuming a linear relationship (with offset and slope) between basin hurricanes
and landfalling hurricanes. This is also a little odd, since there is no reason to have an offset
in this relationship: if there aren't any basin hurricanes, there can't be any landfalling hurricanes.
The most obvious model would be that each hurricane has a constant proportion of making landfall.
Again, we are starting off by addressing this question in a linear context because it's more
tractable that way.
We want to know about the accuracy of forecasts that we might make with the direct and
indirect methods. This translates mathematically into saying that we want to estimate
\begin{eqnarray}
E(z_{n+1}) &=& E(y_{n+1}) \gamma\\
&=& x_{n+1} \beta \gamma\\
&=& x_{n+1}\delta
\end{eqnarray}
where $\delta = \beta \gamma$.
The problem then boils down to measuring
the quality of the estimator of $\delta$ since, if $\hat{z}_{n+1}
= x_{n+1} \hat{\delta}$ is an estimator of $E(z_{n+1})$ then
\begin{eqnarray}
\mse(\hat{z}_{n+1}) &=& \mse(x_{n+1} \hat{\delta})\\
&=& E[(x_{n+1}\hat{\delta} - x_{n+1} \delta)(x_{n+1} \hat{\delta} - x_{n+1}\delta)']\\
&=& x_{n+1} \mse(\hat{\delta}) x_{n+1}'.\label{mse}
\end{eqnarray}
So we now consider the direct and indirect methods for estimating $\delta$.
\subsection{Direct estimator of $\delta$}
We start by considering the direct, or one-step, method.
This means we consider the relationship between $X$ and $Z$, ignoring $Y$. The
usual OLS estimator for $\delta$ is
\begin{eqnarray}
\delta^\dagger &=& (X'X)^{-1} X' Z\\
&=& (X'X)^{-1} X'(X \beta \gamma +\eps \gamma + \eta)\\
&=& \delta + (X'X)^{-1} X' (\eps \gamma + \eta).
\end{eqnarray}
What are the statistical properties of this estimator?
In terms of mean:
\begin{equation}
E(\delta^\dagger) = \delta
\end{equation}
i.e. the estimator is unbiased.
In terms of variance
\begin{eqnarray}
\var(\delta^\dagger) &=& (X'X)^{-1} X' \var(\eps \gamma + \eta) X(X'X)^{-1}.
\end{eqnarray}
We know that $\var(\eps \gamma + \eta) = \sigma_\eps^2 I_n
\gamma^2 + \sigma_\eta^2 I_n$, so
\begin{equation}\label{vard1}
\var(\delta^\dagger) = (X'X)^{-1} (\sigma_\eps^2 \gamma^2 + \sigma_\eta^2).
\end{equation}
By equation~\ref{mse} this then gives us an expression for the performance of the direct method.
\subsection{Indirect estimator of $\delta$}
We now consider the indirect, or two-step, method.
This means considering the relationships between $X$ and $Y$, and $Y$ and $Z$.
First, we consider estimating each regression separately. The OLS estimators for the slopes in each case are:
\begin{eqnarray}
\hat{\beta} &=& (X'X)^{-1} X' Y \\
&=& \beta + (X'X)^{-1} X' \eps \\
\hat{\gamma} &=& (Y'Y)^{-1} Y' Z\\
&=& \gamma + (Y'Y)^{-1} Y' \eta
\end{eqnarray}
We now put the two models together, to create a single regression model based on the separate estimates
for the two steps. We call the estimate of the slope of this combined model $\hat{\delta}$.
Combining the expressions above, we have that:
\begin{eqnarray}
\hat{\delta} &=& \hat{\beta} \hat{\gamma}\\
&=& \beta \gamma +(X'X)^{-1}X' \eps \gamma + \beta (Y'Y)^{-1} Y' \eta + (X'X)^{-1} X' \eps (Y'Y)^{-1} Y' \eta
\end{eqnarray}
What are the statistical properties of this estimator $\hat{\delta}$?
It is clear (by independence of $\eps$ and $\eta$) that
$\hat{\delta}$ is unbiased;
\begin{eqnarray}
E(\hat{\delta}) &=& \beta \gamma\\
&=&\delta
\end{eqnarray}
The variance is more awkward. Note that if $\eps$ were known then
$\hat{\beta}$ and $Y$ would be fixed constants. Thus,
\begin{eqnarray}
E(\hat{\delta}| \eps) &=&E(\hat{\beta} \hat{\gamma}| \eps)\\
&=& \hat{\beta} E(\hat{\gamma}|\eps)\\
&=& \hat{\beta} \gamma, \\
\var(\hat{\delta}| \eps) &=& \var(\hat{\beta} \hat{\gamma}|\eps)\\
&=& \hat{\beta} \var(\hat{\gamma}|\eps) \hat{\beta}'\\
&=& \hat{\beta} (Y'Y)^{-1} \hat{\beta}' \sigma_\eta^2.
\end{eqnarray}
and so
\begin{eqnarray}
\var(\hat{\delta}) &=& \var(\hat{\beta} \hat{\gamma})\\
&=& E[\var(\hat{\beta} \hat{\gamma}|\eps)] + \var[E(\hat{\beta} \hat{\gamma}| \eps)]\\
&=& E[\hat{\beta} (Y'Y)^{-1} \hat{\beta}'] \sigma_\eta^2 + \gamma \var(\hat{\beta}) \gamma'.
\end{eqnarray}
where we have used a standard relation for disaggregating the variance:
\begin{equation}
\mbox{var}(a)=E[\mbox{var}(a|b)]+\mbox{var}[E(a|b)]
\end{equation}
Using the facts that
\begin{eqnarray}
E(Y'Y) & = & \beta' X' X \beta + n \sigma_\eps^2 \\
E(\hat{\beta} \hat{\beta}') &=& \beta \beta' + (X'X)^{-1}
\sigma_\eps^2
\end{eqnarray}
and approximating to second order:
\begin{equation}\label{vard2}
\var(\hat{\delta}) =
\left[\frac{\beta^2 + q^2}{\beta^2 + n q^2}\right] (X'X)^{-1}
\sigma_\eta^2 + q^2 \gamma^2.
\end{equation}
where $q^2=(X'X)^{-1} \sigma_\eps^2$.
\subsection{Comparing the two estimators}
We are now in a position to compare the estimators for the direct and indirect methods.
Subtracting equation~\ref{vard2} from equation~\ref{vard1} gives:
\begin{eqnarray}
\var(\delta^\dagger)-\var(\hat{\delta})
&=&(X'X)^{-1} (\sigma_\eps^2 \gamma^2 + \sigma_\eta^2)
-\left[\frac{\beta^2 + q^2}{\beta^2 + n q^2}\right] (X'X)^{-1}\sigma_\eta^2 - (X'X)^{-1} \sigma_\eps^2 \gamma^2\\
&=&(X'X)^{-1} \sigma_\eta^2
-\left[\frac{\beta^2 + q^2}{\beta^2 + n q^2}\right] (X'X)^{-1}\sigma_\eta^2\\
&=&\left(1-\left[\frac{\beta^2 + q^2}{\beta^2 + n q^2}\right]\right) (X'X)^{-1}\sigma_\eta^2\\
&=&\left[\frac{(n-1) q^2}{\beta^2 + n q^2}\right] (X'X)^{-1} \sigma_\eta^2
\end{eqnarray}
The right hand side of this equation is clearly positive for $n>1$.
This indicates:
\begin{itemize}
\item that using the indirect method is an improvement on the direct method,
at least up to our second order approximations
\item that if $\frac{\beta^2}{q^2}$ is small
or $\sigma_\eta^2$ large then using the indirect method
provides a marked improvement over the direct approach
\end{itemize}
\section{Conclusions}\label{summary}
We have compared the likely performance of direct and indirect methods for predicting landfalling hurricane numbers
from SST. The direct method is based on building a linear regression model directly
from SST to landfalling hurricane numbers. The indirect method is based on building a regression model
from SST to basin numbers, and then predicting landfalling numbers from basin numbers using
a constant proportion.
First, we compare these two methods in the context of a reasonably realistic model, using simulations.
We find that the indirect method is better than
the direct method, but that the difference is small.
Secondly, we compare the two methods in the context of a less realistic model in which
all variables are normally distributed. For this model
we are able to derive the interesting general result that the indirect method should \emph{always} be better.
Which method should we then use in practice?
If we had to chose one method, our results seem to imply that we should choose the indirect method,
since it is more accurate.
The simulation results suggest, however, that the performance of the two methods is likely to be very close
for the values of the parameters appropriate for hurricanes in the real world.
Given the possibility to use two methods we would use both, as alterative points of view.
Ideally we would also be able to solve the more realistic model analytically, as we have done for the
linear-normal case. We are working on that.
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 8,033 |
Q: Why insert query does not work? my code returns success, but it does not insert the record to the table "bekuldottkerdesek". What is causing the problem?
Even select command fails, when I start to echo some records from the table.
This is how the table looks: imgur
<?php
$servername = "***";
$username = "***";
$password = "***";
$dbname = "***";
$conn = new mysqli($servername, $username, $password, $dbname);
date_default_timezone_set('Europe/Bucharest');
$current_date = date("Y-m-d H:i:s");
if ($conn->connect_error) {
die("Connection failed: " . $conn->connect_error);
}
$check="SELECT * FROM bekuldottkerdesek WHERE kerdes = '$_POST[kerdes]'";
$res = mysqli_query($conn,$check);
if($res->num_rows){
header("Location: /bekuld.php?hiba=1");
}
else
{
$var1 = isset($_POST['at']) ? 1 : 0;
$var2 = isset($_POST['bt']) ? 1 : 0;
$var3 = isset($_POST['ct']) ? 1 : 0;
$sql = "INSERT INTO bekuldottkerdesek (datum, kerdes, a, b, c, at, bt, ct)
VALUES ('$current_date', '$_POST[kerdes]', '$_POST[a]', '$_POST[b]', '$_POST[c]', $at, $bt, $ct)";
if ($conn->query($sql) === TRUE) {
echo "New record created successfully";
} else {
echo "Error: " . $sql . "<br>" . $conn->error;
}
header("Location: /bekuld.php?hiba=2");
}
$conn->close();
?>
A: change the insert statment to ,error passing data from $_POST
$sql = "INSERT INTO bekuldottkerdesek (datum, kerdes, a, b, c, at, bt, ct)
VALUES ('$current_date', '{$_POST['kerdes']}', '{$_POST['a']}', '{$_POST['b']}', '{$_POST['c']}', $at, $bt, $ct)";
and change this statment
$check="SELECT * FROM bekuldottkerdesek WHERE kerdes = '$_POST[kerdes]'";
to
$check="SELECT * FROM bekuldottkerdesek WHERE kerdes = '{$_POST['kerdes']}'";
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 3,508 |
Q: How to solve $L(u,v)=\int_{-1}^8\sqrt{u'^2+v'^2}dx$ I am trying to solve this functional:
$L(u,v)=\int_{-1}^8\sqrt{u'^2+v'^2}dx$
where the points $(u,v)$ on BC: $-1<x<8$
I use the Euler Lagrange formula and get two equations, considering $u$ and $v$ to represent two different dimensions. That is:
\begin{cases}
\frac{d}{dx}\frac{\partial F}{\partial u'}-\frac{\partial F}{\partial u}=C_1\\
\frac{d}{dx}\frac{\partial F}{\partial v'}-\frac{\partial F}{\partial v}=C_2
\end{cases}
obtaining with $F=\sqrt{u'^2+v'^2}$,
\begin{cases}
\frac{d}{dx}\frac{u'}{\sqrt{u'^2+v'^2}}=C_1\\
\frac{d}{dx}\frac{v'}{\sqrt{u'^2+v'^2}}=C_2
\end{cases}
Integrating on both sides I get:
\begin{cases}
\frac{u'}{\sqrt{u'^2+v'^2}}=C_1x\\
\frac{v'}{\sqrt{u'^2+v'^2}}=C_2x
\end{cases}
which leaves the relation:
\begin{cases}
\frac{1}{x\sqrt{u'^2+v'^2}}=\frac{C_1}{u'}\\
\frac{1}{x\sqrt{u'^2+v'^2}}=\frac{C_2}{v'}
\end{cases}
So now we can equate the RHS:
\begin{equation}
\frac{C_1}{u'}=\frac{C_2}{v'}\\
C_1u'=C_2v'
\end{equation}
The we get:
\begin{equation}
u'=\frac{C_2}{C_1}v' \rightarrow \\
u=\frac{C_2}{C_1}v+A
\end{equation}
But what is $v$ ? Is it simply the inverse of $u$?
Thanks
A: Something like
$$ u =Cv + D$$
for some constants $C, D$ is probably the best you can get. Your functional is the length functional (thinking of $\gamma (x) = (u(x), v(x))$ as a curve in $\mathbb R^2$) and in $\mathbb R^2$ only the straight lines minimizes the length.
You cannot get further (that is, you cannot find $u, v$ explicitly, even with specific boundary conditions) since the length functional is invariant under re-parametrization. That is, if $\sigma : [-1 ,8]\to [-1, 8]$ is any $C^1$ bijective maps with $\sigma ' >0$ for all $t\in [-1, 8]$, then
$$ L(\gamma) = L ( \gamma\circ \sigma).$$
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 4,248 |
Supreme Court Judges
Home / Hon. Justice (Prof) Jacton B. Ojwang
Hon. Justice (Prof) Jacton B. Ojwang
By July 9, 2018 Supreme Court Judges
The Hon. Justice (Prof.) Jackton B. Ojwang was appointed, on 16th June, 2011 to the inaugural Bench of the Supreme Court of Kenya. He previously served as a Full Professor of Law at the Faculty of Law (now School of Law), University of Nairobi where he worked for a total of 27 years, holding various academic ranks, and also served as the Chair of the Department of Private Law, Dean of Faculty, Director of the University Board of Postgraduate Studies and member of the University Senate. In early 1982 he had served as Visiting Associate Professor of Law at the J. Reuben Clark Law School, Brigham Young University, Provo, Utah, USA.
Justice (Prof.) Ojwang studied law at Nairobi and Cambridge Universities and has carried out research, taught and served as external examiner in reputable universities around the world. He is an accomplished legal scholar and academic with vast experience nationally, on the African continent, and globally.
Recently, Justice Ojwang, who holds the Ph.D. in law of the University of Cambridge (1981), has developed a Higher Doctoral Thesis from his continual research and publication, for which he earned the Doctor of Laws (LL.D.) degree of the University of Nairobi on 4th September, 2015. This original analysis depicts the thread in his legal works, and bears the title, The Unity of the Constitution and the Common Law, and has now been published by Lambert Academic Publishing, Norderstedt, Germany (2019).
Justice (Prof.) Ojwang has served the public in different capacities as a scholar and a judicial officer. He has also contributed to the development of law, as a member of Government Commissions, and as an advisor to Government bodies.
Justice (Prof.) Ojwang has also consulted for a wide range of agencies. He is a prolific academic and legal scholar who has authored many works. His research interests include: Legal theory/Jurisprudence; Constitutional law; Comparative Law; Environmental Law and Intellectual Property law.
Justice (Prof.) Ojwang's deeply cultivated foundations in advanced legal research and scholarship have been fully implanted in judicial dispute-settlement, a sphere in which he has derived distinct intellectual and professional fulfilment. (See detailed profile) | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 1,835 |
Porndemic is a 2018 documentary film about an HIV outbreak within the pornography industry in California's San Fernando Valley, during the 1990s. The film premiered on Showtime. It was directed by Brendan Spookie Daly and features interviews with several porn actors and actresses, including Tricia Devereaux, Ron Jeremy, and Marc Wallice.
See also
HIV/AIDS in the United States
STDs in the porn industry
References
External links
Porndemic at Showtime
2018 television films
Documentary films about HIV/AIDS
Documentary films about sexuality
HIV/AIDS in American films
San Fernando Valley
Showtime (TV network) films
STDs in the sex industry
Pornography in California
2018 films
American documentary films
2010s American films | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 4,030 |
Q: Submit form automatically if the URL has parameters I am trying to submit a form automatically if the url has a parameter. The form is prefilled with the value of the parameter.
<form name="search" method="post">
<div class="input-group">
<input tabindex="1" type="text" name="search" class="form-control" placeholder="Item" value="<?php echo (htmlspecialchars($_GET["item"]) != '') ? htmlspecialchars($_GET["item"]) : '';?>">
<input type = "hidden" name = "doSearch" value = "1">
<span class="input-group-btn">
<button class="btn btn-default" type="submit" name = "submit"><i class="material-icons icon-button">search</i></button>
</span>
</div>
</form>
How can I do that? I would only like to submit the form it if there is a paramter, otherwise not.
EXAMPLE:
www.test.com?item=example --> submit prefilled form automatically
www.test.com --> do nothing
A: I'm not sure how complex you want this to be, so here is a simple example.
JavaScript
function submitForm() {
if (window.location.href.indexOf("?") > -1) {
document.getElementsByName("search")[0].submit();
}
}
submitForm();
This will only check if there is a query string separator in the url (which typically means that query strings are to follow the ? mark).
So if you have www.test.com?item=example the form will be submitted. However, www.test.com? will also submit.
Let me know if that's a solution that works for you or not.
EDIT: You must rename the html elements with name submit and search to something else. You probably also need to add the action attribute to your form.
So you end up with something like this:
<form name="search" method="post" action="form_handle.php">
<div class="input-group">
<input tabindex="1" type="text" name="search_input" class="form-control" placeholder="Item" value="<?php echo (htmlspecialchars($_GET["item"]) != '') ? htmlspecialchars($_GET["item"]) : '';?>">
<input type = "hidden" name = "doSearch" value = "1">
<span class="input-group-btn">
<button class="btn btn-default" type="submit" name = "submit_btn"><i class="material-icons icon-button">search</i></button>
</span>
</div>
</form>
<script>
function submitCheck() {
if (window.location.href.indexOf("?") > -1) {
document.getElementsByName("search")[0].submit();
}
}
submitCheck();
</script>
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 1,509 |
Q: Can I use a single custom cell for multiple different cells? I have created a single prototype cell which has two labels (mainLabel and subLabel) and an uiimageview. In the uitableview I'd like to have several cells which reuse the prototype and when needed the subLabel is hidden and the uiimageview is changed with different one or with a uiswitch. The two labels have different text for each cell. Do you have any suggestions/hints in order to do it? possibly in a mvvm architecture?
I'll describe what I am doing:
I have a struct (the Model) with two properties: label and sublabel. This is then instantiate by a viewModel which provides text for each cell, done by a method called getModel(_ indexPath: IndexPath) -> cellModel { ... }. Finally in UIViewController, in tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) { ... } I am calling getModel(), using dequeueReusableCell and setting up each cell.
In getModel() there is a huuuuge switch which I use to know which cell is which
Then in uitableviewcell I have some method that hides sublabel and changes uiimageview.
It kind of works, however I have some issues with while scrolling. For example, sometimes a uiimageview is drawn in another cell, or a subLabel is hidden, even if it is not supposed to. I guess this is due because it is reusing the cell, and I am not resetting it.
Anyway, any suggestions or ideas?
I know this is overkilling...
A: No need for any pattern. Yes, you can use that single cell design for all cells. Just hide/empty label(s) and image view as you like per cell.
A: First of all you have to set default value to both the labels and imageview
i.e. (consider a title label, a sub label and a imageview)
lblTitle.isHidden = false
lblSubLabel.isHidden = false
imgViewIcon.image = nil
Then just show labels in specific condition that you want to match and set image in imageview
i.e. (consider your condition to hide sub label)
if needToHide == true {
lblSubLabel.isHidden = true
}
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 1,087 |
{"url":"https:\/\/prettymuchphysics.github.io\/quantum-mechanics\/index.html","text":"# Quantum Mechanics\n\nThe main topics here are the mathematical formulation of Hilbert spaces, understanding the Schr\u00f6dinger equation for bound states and investigating scattering processes.\n\n#### Mathematical Foundation\n\nHilbert spaces, operators, bras and kets.\n\n##### Hermitean Operators Have Real Eigenvalues\n\nA fundamental axiom of quantum mechanics states that observables are represented by hermitean operators acting in a Hilbert space. The reason behind this is that the result of a measurement should yield a real number instead of a complex one.\n\nShow that the eigenvalues of a hermitean operator $A$ are real!\n\nFor a hermitean operator, $A=A^\\dagger$.\n\n##### Operator Exponentials\n\nConsider a two-dimensional Hilbert space with basis vectors $|b_i\\rangle$ ($i=1,2$). A hermitean operator is given by $B = |b_1\\rangle\\!\\langle b_2| + |b_2\\rangle\\!\\langle b_1|$.\n\n1. Calculate the operator $\\exp(\\text i \\epsilon B)$ for $\\epsilon\\in\\mathbb R$!\n\nUse a Taylor series.\n\n2. Calculate $\\exp(\\text i \\epsilon B)|b_1\\rangle$!\n\n##### Operator Derivatives\n\nThe operators $A(\\lambda)$ and $B(\\lambda)$ depend on a real parameter $\\lambda$, whereas the operator $C$ is independent of $\\lambda$. Prove the following statements!\n\n1. $\\frac{\\text d}{\\text d\\lambda} (AB) = \\frac{\\text dA}{\\text d\\lambda}B + A\\frac{\\text dB}{\\text d\\lambda}$\n\n2. $\\frac{\\text d}{\\text d\\lambda}A^{-1} = -A^{-1}\\frac{\\text dA}{\\text d\\lambda}A^{-1}$\n\n3. $\\frac{\\text d}{\\text d\\lambda}\\text e^{\\lambda C} = C \\text e^{\\lambda C}$\n\n4. 
$\\frac{\\text d}{\\text d\\lambda}A^n = \\sum\\limits_{k=1}^n A^{k-1} \\frac{\\text dA}{\\text d\\lambda}A^{n-k}$\n\n##### Baker-Campbell-Haussdorf (Simplified)\n\nIf the commutator of $[A,B]$ with the two operators $A$ and $B$ vanishes, that is $[A,[A,B]]=0$ and $[B,[A,B]]=0$, a simplified version of the Baker-Campbell-Haussdorf formula holds: $$\\text e^A \\,\\text e^B = \\text e^{A+B+\\tfrac{1}{2}[A,B]}.$$\n\n1. Prove this statement!\n\nConstruct a differential equation for the operator $\\text e^{\\lambda A}\\text e^{\\lambda B}$ in $\\lambda$ and integrate it.\n\n2. Check that this equation can be applied for $A=\\hat x$ and $B=\\hat p$ and prove the following statement: $$\\text e^{-\\frac{\\text i}{\\hbar}ap}\\,\\text e^{\\text ibx}\\,\\text e^{\\frac{\\text i}{\\hbar}ap} = \\text e^{\\text i b(x-a)}, \\quad a,b\\in\\mathbb R.$$Note the connection to the translation operator $T(a)$!\n\n3. Check that this equation can be applied for the creation and annihilation operators of the harmonic oscillator, $A=\\hat a$ and $B=\\hat a^\\dagger$, and prove the following statement: $$\\text e^{ca^\\dagger-c^* a} = \\text e^{-\\tfrac{1}{2}|c|^2}\\,\\text e^{ca^\\dagger}\\,\\text e^{-c^* a} = \\text e^{\\tfrac{1}{2}|c|^2}\\,\\text e^{-ca^\\dagger}\\,\\text e^{c^* a}, \\quad c\\in\\mathbb C.$$\n\n##### Proving Bloch's Theorem\n\nBloch's theorem states that for particles in a perfect crystal of lattice size $a$, there is a basis of energy eigenstates that can be written as, $$\\psi(\\textbf r)=\\text e^{\\text i\\textbf {kr}}u(\\textbf r),$$where $u(\\textbf r)$ has the same periodicity as the lattice: $u(x+a)=u(x)$. 
This is called a Bloch wave.\n\nVerify Bloch's theorem via the eigenvalue equation of the translation operator $T(a)$\n\nThe translation operator is defined as $T(\\Delta\\textbf x)=\\text e^{-\\text i \\Delta\\textbf{x}\\cdot \\textbf{p}\/\\hbar}$.\n\n##### Parity Operator\n\nFor a one-dimensional system, the parity operator $\\Pi$ acts on the state $|x\\rangle$ as $$\\Pi|x\\rangle = |-\\!x\\rangle,$$where the state is defined via $\\hat x|x\\rangle = x|x\\rangle$.\n\nProve the following statements!\n\n1. $\\Pi = \\Pi ^\\dagger = \\Pi^{-1}$, therefore its eigenvalues are $\\pm 1$.\n\n2. If the potential obeys $V(-x)=V(x)$, the eigenfunctions of the Hamiltonian are either even or odd functions in $x$.\n\nShow that the eigenvalue of the parity operator (that is, whether a function is even or odd) is a conserved quantity.\n\n##### Commutator Relations\n\nThe commutator of the position operator $x$ and the momentum operator $p$ in one dimension is given by, $$[x,p]=\\text{i}\\hbar.$$\n\nProve the following equations!\n\n1. $[x,p^n] = \\text{i} \\hbar \\frac{\\partial}{\\partial p}(p^n)$.\n\n2. $[x,f(p)] = \\text{i}\\hbar \\frac{\\partial f(p)}{\\partial p}$\n\n#### Wave Functions\n\nNormalization, probability densities.\n\n##### One-dimensional Wave function\n\nThe wave function of a particle is given by, $$\\psi(x) = N\\, \\text e^{-a |x|} \\cos (x)\\,\\text e^{\\text i \\varphi x},\\quad -\\infty <x<\\infty,$$ where $a>0$ and $\\varphi\\in\\mathbb R$.\n\n1. For a given $a$ and $\\varphi$, calculate $N$ such that the wave function is normalized!\n\nNormalizaion condition: $\\int_{-\\infty}^\\infty \\psi^* \\psi \\stackrel{!}= 1$.\n\n2. 
What is the probability of finding the particle in the interval $[-\\tfrac{\\pi}{2},\\tfrac{\\pi}{2}]$?\n\nIntegrate the probability density over a suitable interval.\n\n#### Time-Independent Schr\u00f6dinger Equation\n\nBound states, eigenfunctions and eigenenergies.\n\n##### The Time-Independent Schr\u00f6dinger Equation\n\nThe time-dependent Schr\u00f6dinger equation in one dimension is given by $$H\\psi(x,t) = -\\text i\\hbar\\frac{\\text d}{\\text dt}\\psi(x,t),$$ where $H$ is the Hamiltonian of the system.\n\nUsing an ansatz for the wave function $\\psi(x,t)$, show that the time-independent Schr\u00f6dinger equation is given by $H\\psi(x) = E\\psi(x)$!\n\nUse a separating ansatz like $\\psi(x,t)=\\psi(x)f(t)$ with a suitable function $f(t)$.\n\n##### Free Particle (1D)\n\nThe Schr\u00f6dinger equation for a free particle in one dimension is given by $H\\psi = E\\psi$, where $H$ is the free Hamiltonian, $$H_\\text{free} = -\\frac{\\hbar^2}{2m}\\frac{\\text d^2}{\\text dx^2}.$$\n\nUsing an ansatz for the wave function $\\psi(x)$, solve the Schr\u00f6dinger equation and calculate the eigen energies!\n\nA suitable ansatz would be $\\psi(x) = A\\, \\text e^{\\text i kx}$ with $A,k\\in\\mathbb R$.\n\n##### Particle with a Constant Potential (1D)\n\nA particle of mass $m$ is travelling in a constant potential $V(x)=V_0$.\n\nSolve the one-dimensional Schr\u00f6dinger equation to get the wave function $\\psi(x)$ and the eigen energies $E$ ($E>V_0$)!\n\nThis case is very similar to the one without any potential.\n\n##### Infinite Square Well (1D)\n\nA particle of mass $m$ is trapped in an infinite potential well, $$V(x)=\\begin{cases}0 & |x|\\le a\\\\+\\infty & |x|>a\\end{cases}.$$\n\nFind the bound-state wave functions and their energies (Don't forget to normalize the wave functions)!\n\n...\n\n##### Finite Square Well (1D)\n\nA particle of mass $m$ is trapped in a finite potential well, $$V(x)=\\begin{cases}-V_0 & |x|\\le a\\\\0 & |x|>a\\end{cases}.$$\n\nFind the 
bound-state wave functions and their energies (Assume $-V_0 < E < 0$ and don't forget to normalize the wave functions)! What is the smallest value of $V_0$ that allows a bound state for any given $a$?\n\n##### Delta Potential (1D) - First look\n\nIn the neighborhood of a delta potential, the wave function is not smooth anymore.\n\nFor the potential $$V(x) = -\\Lambda\\, \\delta(x-x_0)$$ find the boundary condition for the first derivative of $\\psi(x)$ when it approaches $x_0$ from the right and from the left!\n\nIntegrate the Schr\u00f6dinger equation along a small region near $x_0$.\n\n##### Delta Potential (1D)\n\nA particle of mass $m$ encounters a delta potential: $$V(x) = -\\Lambda\\, \\delta(x), \\quad \\Lambda >0.$$\n\nFind the bound states of this system and the corresponding energies!\n\n##### Infinite Square Well (2D)\n\nA particle of mass $m$ is trapped in a two-dimensional infinite square well potential, $$V(x,y) = V_x(x)+V_y(y),$$$$V_x(x) = \\begin{cases}0 & |x|\\le a\\\\+\\infty & |x|>a\\end{cases}, \\quad V_y(y) = \\begin{cases}0 & |y|\\le b\\\\+\\infty & |y|>b\\end{cases}.$$\n\nFind the bound states of this system and the corresponding energies! Are there degenerate energies in this system?\n\nA suitable ansatz for the wavefunction could be $\\psi(x,y) = X(x)Y(y)$.\n\n##### Infinite Square Well (3D)\n\nA particle of mass $m$ is trapped in a three-dimensional infinite square well potential, $$V(x,y,z) = V_x(x)+V_y(y)+V_z(z),$$$$V_x(x) = \\begin{cases}0 & |x|\\le a\\\\+\\infty & |x|>a\\end{cases}, \\quad V_y(y) = \\begin{cases}0 & |y|\\le b\\\\+\\infty & |y|>b\\end{cases}.$$ $$V_z(z) = \\begin{cases}0 & |z|\\le c\\\\+\\infty & |z|>c\\end{cases}.$$\n\nFind the bound states of this system and the corresponding energies! 
Are there degenerate energies in this system?\n\nA suitable ansatz for the wavefunction could be $\\psi(x,y,z) = X(x)Y(y)Z(z)$.\n\n##### Dirac Comb (1D)\n\nA particle of mass $m$ is trapped in a one-dimensional, periodic potential, $$V(x) = -\\frac{\\hbar^2}{m} \\sum\\limits_{n=-\\infty}^\\infty \\delta(x+na),\\quad n\\in\\mathbb Z, a\\in\\mathbb R$$\n\nFind the eigenvalues of the system and show that there are allowed and forbidden values for the energy!\n\nA suitable ansatz for the wave function is a Bloch wave: $$\\psi_\\text{Bloch}(x) = \\text e^{\\text ikx}\\,u(x),$$ where $u(x)$ is a periodic function with respect to the potential: $u(x+a) = u(x)$, and $k$ is a parameter for which $-\\frac{\\pi}{a}\\le k\\le \\frac{\\pi}{a}$.\n\n#### Potential Barriers\n\nOne-dimensional scattering, Transmission, Reflection.\n\n##### Single Barrier\n\nA particle of mass $m$ is propagating along the $x$-axis towards a barrier. The potential is given by $$V(x) = \\begin{cases}0 & x< 0\\\\ V_0 & 0<x< a \\\\ 0 & x> a\\end{cases}.$$\n\n1. Make an ansatz for the wave function in the three regions $x<0$, $0<x<a$ and $x>a$!\n\n2. Use the boundary conditions at $x=0$ and $x=a$!\n\n3. Find the probability for transmission and reflection for this system and check that $R+T=1$!\n\n##### Delta Barrier\n\nA particle of mass $m$ is propagating along the $x$-axis towards a potential barrier. The potential is given by $$V(x) = -\\Lambda\\, \\delta(x),\\quad \\Lambda >0.$$\n\n1. Make an ansatz for the wave function in the two regions $x<0$ and $x>0$!\n\n2. Use the boundary conditions at $x=0$!\n\n3. Find the probability for transmission and reflection for this system and check that $R+T=1$!\n\n4. 
What changes for $\\Lambda < 0$?\n\n#### Measurement\n\nExpectation values, uncertainties, Heisenberg's uncertainty.\n\n##### Infinite Square Well - Energy\n\nA particle of mass $m$ is trapped in an infinite potential well, $$V(x)=\\begin{cases}0 & |x|\\le a\\\\+\\infty & |x|>a\\end{cases}.$$At time $t=0$, the particle is in the state $$\\psi(x,t=0)=N(a^2-x^2),\\quad x\\in[-a,a].$$\n\n1. For $t=0$, calculate the probability $P_n$ for finding the particle in the $n$-th eigenfunction of the infinite square well!\n\n2. Calculate the expectation value of the energy, i.e. the Hamiltonian $\\langle H\\rangle$ as well as the uncertainty in the energy $\\Delta E$!\n\nThe uncertainty of a quantity is given by $\\Delta A = \\sqrt{\\langle A^2\\rangle - \\langle A\\rangle^2}$.\n\n##### Infinite Square Well - Position and momentum\n\nA particle of mass $m$ is trapped in an infinite potential well, $$V(x)=\\begin{cases}0 & |x|\\le a\\\\+\\infty & |x|>a\\end{cases}.$$At time $t=0$, the particle is in the state $$\\psi(x,t=0)=N(a^2-x^2),\\quad x\\in[-a,a].$$\n\n1. Calculate the following expectation values: $\\langle x\\rangle$, $\\langle x^2\\rangle$, $\\langle p\\rangle$ and $\\langle p^2\\rangle$!\n\nUse the momentum operator $\\hat p = -\\text i\\hbar \\frac{\\text d}{\\text dx}$.\n\n2. 
Calculate the uncertainties $\\Delta x$ and $\\Delta p$ and compare their results to Heisenberg's uncertainty relation!\n\nThe uncertainty of a quantity is given by $\\Delta A = \\sqrt{\\langle A^2\\rangle - \\langle A\\rangle^2}$.\nHeisenberg's uncertainty relation is given by $\\Delta x \\cdot \\Delta p \\ge \\frac{\\hbar}{2}$.\n\n##### Momentum of an Eigenfunction\n\nShow that the expectation value of a particle's momentum, $\\langle p\\rangle$, is zero if that particle is in an eigenfunction of a general Hamiltonian $H=\\frac{p^2}{2m}+V(x)$!\n\nShow that the momentum operator can be written as $\\hat p = \\frac{\\text im}{\\hbar}[\\hat H, \\hat x]$ and use this expression in order to calculate the expectation value.\n\n##### Virial Theorem\n\nThe Hamiltonian of a system is given by $$H=T+V=\\frac{p^2}{2m}+V(x),$$ where the potential is a homogeneous function of degree $n$: $V(\\lambda x) = \\lambda^n V(x)$.\n\nAssume the system is in an eigenfunction $\\psi$ of the Hamiltonian. Show that the expectation values of kinetic energy and potential energy are related via $$\\langle T\\rangle = \\frac{n}{2}\\langle V\\rangle,$$where $\\langle T\\rangle = \\langle \\psi|T|\\psi\\rangle$!\n\nInvestigate the expectation value of the commutator of the Hamiltonian with the observable $A:=\\frac{1}{2}(xp+px)$.\n\n#### Harmonic Oscillator\n\n##### Parity Operator\n\nFor a one-dimensional system, the parity operator $\\Pi$ acts on the state $|x\\rangle$ as $$\\Pi|x\\rangle = |-\\!x\\rangle,$$where the state is defined via $\\hat x|x\\rangle = x|x\\rangle$.\n\nShow that the parity operator for the harmonic oscillator can be written as $\\Pi = \\text e^{\\text i\\pi \\, a^\\dagger a}$!\n\n##### Coherent States\n\nCoherent states in the quantum harmonic oscillator are defined via the eigenvalue equation of the annihilation operator: $a|\\alpha\\rangle = \\alpha |\\alpha\\rangle$, where $\\alpha\\in\\mathbb C$.\n\n1. 
Show that $|\\alpha\\rangle = \\text e^{-|\\alpha|^2\/2}\\,\\text e^{\\alpha a^\\dagger}|0\\rangle = \\text e^{-|\\alpha|^2\/2}\\, \\sum\\limits_{n=0}^{\\infty}\\frac{\\alpha^n}{\\sqrt{n!}}|n\\rangle$!\n\n2. Calculate the uncertainties of position $\\Delta x$ and momentum $\\Delta p$ and show that coherent states fulfil the equality in Heisenberg's uncertainty relation!\n\n3. Calculate the energy uncertainty $\\Delta E$ of a coherent state!\n\n##### 2D Harmonic Oscillator - Eigenvalues\n\nThe potential of an isotropic, two-dimensional harmonic oscillator is given by $V(x,y) = \\tfrac{1}{2}m\\omega^2(x^2+y^2)$.\n\n1. Calculate the energy eigenvalues via the creation and annihilation operators $a_x$, $a_x^\\dagger$, $a_y$ and $a_y^\\dagger$ that are constructed via $\\{x,y,p_x,p_y\\}$.\n\n2. Calculate the energy eigenvalues via the creation and annihilation operators for right- and left-circular quanta, defined by: $$a_R = \\frac{1}{\\sqrt 2}(a_x-\\text ia_y),\\quad a_L = \\frac{1}{\\sqrt 2}(a_x+\\text ia_y).$$\n\n3. Show that the eigenfunctions of b. 
are eigenfunctions of the $z$-component of the angular momentum $L_z = xp_y-yp_x$.\n\n...\n\n...\n\n...","date":"2019-10-23 22:02:21","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.9582112431526184, \"perplexity\": 315.62961675150126}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-43\/segments\/1570987836295.98\/warc\/CC-MAIN-20191023201520-20191023225020-00337.warc.gz\"}"} | null | null |
Q: Android expansion apk problems

We recently started making use of the new Google Expansion APK mechanism. Overall it works well, but it seems somewhat flaky for us. Some questions:
*
*Some users get the expansion file downloaded along with the app while others don't and our app has to download it itself. Does anyone know what governs when it works automatically and when not?
*Sometimes when we need to download the expansion file ourselves, Google Play returns -1 for the file size and null for the URL, indicating the expansion file doesn't exist. If I run the app again, the second time it will generally return a valid size and URL. Does anyone else see this flakiness?
Here are the basics of the code:
This is how we set up the call to verify licensing via a callback
policy = new APKExpansionPolicy( context, new AESObfuscator( SALT, context.getPackageName(), deviceId ) );
mChecker = new LicenseChecker( context, policy, BASE64_PUBLIC_KEY );
mLicenseCheckerCallback = new MyLicenseCheckerCallback();
mChecker.checkAccess( mLicenseCheckerCallback );
Then in the callback we have this for the allow() method (when the license is valid).
public void allow( int reason )
{
    String expansionFileName = policy.getExpansionFileName( APKExpansionPolicy.MAIN_FILE_URL_INDEX );
    String expansionURL = policy.getExpansionURL( APKExpansionPolicy.MAIN_FILE_URL_INDEX );
    long expansionFileSize = policy.getExpansionFileSize( APKExpansionPolicy.MAIN_FILE_URL_INDEX );
}
We just released the app with this new code, but a significant number of users are getting -1 back as the expansionFileSize and null as the url. This causes the user to not get the expansion file installed. Generally if they run the app again, it will work on the second (or third) time.
Anyone have any thoughts on what could be going on?
A: You are getting -1 because the APKExpansionPolicy responds with a locally cached result if you try to contact the licensing server again, but the URL, file size and file name are not part of that cache, so they are lost after the first real response. A comment from the APKExpansionPolicy source code explains it:
Expansion URL's are not committed to preferences, but are instead intended to be stored when the license response is processed by the front-end.
So you need to store these values in the preferences right after you get the first successful response (in the allow callback method).
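Here is a minimal sketch of that caching pattern. It is not the actual LVL code: a plain Map stands in for SharedPreferences so the logic is visible outside Android, and the key names (expansionUrl, expansionSize) are made up for illustration.

```java
import java.util.HashMap;
import java.util.Map;

// Caches the expansion-file details the first time the licensing server
// returns real values, so a later cached-only response (size -1, null URL)
// does not lose them. A plain Map stands in for SharedPreferences here.
class ExpansionCache {
    private final Map<String, String> prefs = new HashMap<>();

    // Call this from allow(): only overwrite the cache on a real response.
    void onLicenseResponse(String url, long size) {
        if (url != null && size > 0) {
            prefs.put("expansionUrl", url);
            prefs.put("expansionSize", Long.toString(size));
        }
    }

    String cachedUrl() {
        return prefs.get("expansionUrl");
    }

    long cachedSize() {
        String s = prefs.get("expansionSize");
        return (s == null) ? -1L : Long.parseLong(s);
    }
}
```

In a real app the same idea applies with SharedPreferences.edit().putString(...)/putLong(...) inside allow(), reading the stored values back before starting your own download.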
A: The blog post on Android Developers addresses #1:
On most newer devices, when users download your app from Android Market, the expansion files will be downloaded automatically, and the refund period won't start until the expansion files are downloaded. On older devices, your app will download the expansion files the first time it runs
A: To add to Daniel Novak's answer, if you reset the policy before the call to checkAccess(), this will force it to make a new license request, and therefore retrieve the URL:
policy.resetPolicy();
You probably only want to do this if you're sure you need the URL (ie, if you've already checked that the expansion file is missing).
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 2,973 |
{"url":"https:\/\/nforum.ncatlab.org\/discussion\/4082\/","text":"\u2022 CommentRowNumber1.\n\u2022 CommentAuthorTodd_Trimble\n\u2022 CommentTimeAug 25th 2012\n\nAdded some examples to allegory, including that of modular lattice as one-object allegory.\n\n\u2022 CommentRowNumber2.\n\u2022 CommentAuthorFinnLawler\n\u2022 CommentTimeOct 22nd 2012\n\nAdded a section on syntactic allegories to allegory, mostly to record a result about the interpretation of $\\exists$ in unitary pre-tabular allegories.\n\n\u2022 CommentRowNumber3.\n\u2022 CommentAuthorMike Shulman\n\u2022 CommentTimeOct 23rd 2012\n\nPresumably the syntactic allegory in turn arises by a standard construction from a syntactic hyperdoctrine?\n\n\u2022 CommentRowNumber4.\n\u2022 CommentAuthorTodd_Trimble\n\u2022 CommentTimeOct 23rd 2012\n\nRe #3: I was thinking the same thing.\n\nWe were discussing allegories and such a couple of months ago. Mike asked a question here which is still basically unanswered. Suffice it to say that of all the various categorical machines for discussing first-order theories (including hyperdoctrines, bicategories of relations, and allegories), allegories seem the least well tied-in to the matrix of higher category theory (or anyway the least well-grokked here at the nLab, if Mike\u2019s question and my lack of response are any indication).\n\n\u2022 CommentRowNumber5.\n\u2022 CommentAuthorFinnLawler\n\u2022 CommentTimeOct 24th 2012\n\nSome small edits to allegory. 
I\u2019ve also added redirects from pre-logos and logos to coherent category and Heyting category respectively.\n\nRe #3: I would expect so, definitely.\n\nCould there be analogous theorems like \u201cIf a locally posetal 2- (or perhaps F-) category has (some universally characterized objects), then it is an allegory if and only if it is a bicategory of relations\u201d and \u201cThe free completion of a locally posetal 2\/F-category under (some universally characterized objects) is a bicategory of relations if and only if the original category was an allegory\u201d?\n\nWe do know, though, that a locally posetal 2-category that is a cartesian bicategory is an allegory iff it is a bicategory of relations, don\u2019t we?\n\n\u2022 CommentRowNumber6.\n\u2022 CommentAuthorTodd_Trimble\n\u2022 CommentTimeOct 24th 2012\n\nWe do know, though, that a locally posetal 2-category that is a cartesian bicategory is an allegory iff it is a bicategory of relations, don\u2019t we?\n\nIt\u2019s never occurred to me to wonder until now: is an allegory an extra structure on a locally posetal 2-category, or is it really just a property? In other words, is the dagger structure uniquely determined?\n\n\u2022 CommentRowNumber7.\n\u2022 CommentAuthorTobyBartels\n\u2022 CommentTimeOct 25th 2012\n\nI\u2019ve also added redirects from pre-logos and logos to coherent category and Heyting category respectively.\n\nAre these synonyms? Can you add something to the target pages to say this (or to say whatever is true)?\n\n\u2022 CommentRowNumber8.\n\u2022 CommentAuthorUrs\n\u2022 CommentTimeOct 25th 2012\n\u2022 (edited Oct 25th 2012)\n\nYeah, there needs to be some mentioning of XYZ on a page to which XYZ redirects.\n\nFrom page 12 of\n\n\u2022 Casten Butz, Peter Johnstone, Classifying toposes for first order theories, BRICS Report Series RS-97-20\n\nwe have the following. Let $\\kappa$ be a cardinal. Then\n\n1. 
A $\\kappa$-geometric category is a regular category with unions for $\\kappa$-small families of subobjects, stable under pullback.\n\nMakkai-Reyes called these $\\kappa$-logical categories and Freyd-Scedrov called them pre-logoi.\n\n2. A $\\kappa$-Heyting category is a regular category with unions and intersections of $\\kappa$-small sets of subobjects and such that pullback of subobjects along any morphism $f$ has a right adjoint $\\forall_f$ (the universal quantifier).\n\nIn Freyd-Scedrov this is called a logos when $\\kappa = \\omega$.\n\nI am now moving this into the relevant entries.\n\n\u2022 CommentRowNumber9.\n\u2022 CommentAuthorUrs\n\u2022 CommentTimeOct 25th 2012\n\u2022 (edited Oct 25th 2012)\n\nBy the way, looking again at the entry allegory I find it is missing more of an indication of why we care about allegories. Right in the Idea-section there should be a sentence saying \u201cThe theory of allegories is useful for\u2026\u201d and then probably mention implications for exact completions etc.\n\n\u2022 CommentRowNumber10.\n\u2022 CommentAuthorTodd_Trimble\n\u2022 CommentTimeOct 25th 2012\n\nI suppose someone could write something, but:\n\nThere doesn\u2019t seem to be overwhelming enthusiasm for them around here in the first place; they are one way of doing categorical relational calculus, yes, but notions like hyperdoctrines or cartesian bicategories also serve that purpose and seem more flexible or adaptable to categorification. We keep asking ourselves: why this selection of axioms (which look ad hoc to some of us)? 
I personally would like to understand that better before trying to answer why we care.\n\nYou could say, in the manner of a ten-year-old writing up a desultory book report, \u201cthe theory of allegories is useful because Freyd and Scedrov (and others) proved a whole bunch of results about them that can now be referred to.\u201d The stuff about regular and exact completions in terms of splitting certain classes of idempotents in bicategories of relations doesn\u2019t particularly need allegories to say it.\n\n\u2022 CommentRowNumber11.\n\u2022 CommentAuthorUrs\n\u2022 CommentTimeOct 25th 2012\n\nAh, interesting. I didn\u2019t know this. I kept looking at the page \u201callegories\u201d and asking myself why I should care.\n\nBut so this is also a useful piece of information. Why not say it in the entry?\n\nI think such \u201cwhy-this-definition\u201d-answers are needed also for the general perception of the $n$Lab. It makes a bad impression to happen upon a page that indulges in definitions without telling the reader what the payoff is supposed to be. It makes the impression that somebody is just playing around with definitions instead of doing fruitful mathematics.\n\nRight this moment I cannot, but if you prefer I can later try to distill some remark into the entry from what you just said.\n\n\u2022 CommentRowNumber12.\n\u2022 CommentAuthorMike Shulman\n\u2022 CommentTimeOct 25th 2012\n\nThere\u2019s one important way in which the notion of bicategory of relations is less \u2019flexible\u2019 than that of allegory: a bicategory of relations must have a product. If you want to perform exact completion by adding kleisli objects (i.e. splitting some idempotents, in the locally posetal case) and your input data doesn\u2019t have products of objects yet, then allegories may work where bicategories of relations would fail. 
This was my situation in my exact completions paper, where after a long time of disparaging allegories I found myself forced to use them!\n\nI like the idea of seeing an allegory as \u2019a (1,2)-category that would be a bicategory of relations if it had products\u2019. I guess Finn is right that by \u2019having products\u2019 here we could mean \u2019being a cartesian bicategory\u2019. Even better would be if we could characterize the \u2019cartesian\u2019 (1,2)-categories with some universal property, such as being cartesian objects in some 2-category. Then we could ask the other half of my suggestion: is the free completion of an allegory under \u2019products\u2019 a bicategory of relations, and conversely?\n\n\u2022 CommentRowNumber13.\n\u2022 CommentAuthorMike Shulman\n\u2022 CommentTimeOct 25th 2012\n\n@Todd #6: if the allegory is tabular, or even \u2019weakly k-tabular\u2019, then the dagger-structure is uniquely determined, but in general I can\u2019t think of any reason why it would be.\n\n\u2022 CommentRowNumber14.\n\u2022 CommentAuthorMike Shulman\n\u2022 CommentTimeOct 25th 2012\n\n@Urs #8: Also, I called those k-geometric categories k-ary regular categories, wanting to emphasize that k is the \u2019arity\u2019 and not, say, the category dimension.\n\n\u2022 CommentRowNumber15.\n\u2022 CommentAuthorUrs\n\u2022 CommentTimeOct 25th 2012\n\nMike,\n\nwhatever these things are called, the entries need to say it. It\u2019s not sufficient that you tell me here or somewhere out there is some paper that says it. There should be a remark at geometric category saying what you just said, then.\n\n\u2022 CommentRowNumber16.\n\u2022 CommentAuthorTodd_Trimble\n\u2022 CommentTimeOct 25th 2012\n\nI wrote up something at allegory as per Urs\u2019s suggestion. See what you think.\n\n\u2022 CommentRowNumber17.\n\u2022 CommentAuthorTodd_Trimble\n\u2022 CommentTimeOct 25th 2012\n\n@Mike: I can see it for tabular categories. 
But I don\u2019t know what \u201cweakly k-tabular\u201d means (and I\u2019m too lazy or tired now to attempt a guess).\n\n\u2022 CommentRowNumber18.\n\u2022 CommentAuthorUrs\n\u2022 CommentTimeOct 25th 2012\n\u2022 (edited Oct 25th 2012)\n\nI wrote up something at allegory as per Urs\u2019s suggestion. See what you think.\n\nThanks, Todd! Very nice, yes, that\u2019s the kind of comment that I was hoping for.\n\nBy the way, since it keeps being mentioned, can we say something contentful at relational calculus, at least such as to give a broad orientation?\n\n\u2022 CommentRowNumber19.\n\u2022 CommentAuthorFinnLawler\n\u2022 CommentTimeOct 25th 2012\n\nRe Toby\u2019s #7, Urs\u2019s #8: Yes, sorry, I should have said something about (pre-)logoses on those pages. I\u2019ve added a reference to k-ary regular category and a link to Mike\u2019s paper at geometric category.\n\nRe Mike\u2019s #12: I\u2019m still working on this, so I can\u2019t give you a proof quite yet, but I\u2019m pretty sure that a cartesian bicategory will be the same thing as a \u2019cartesian equipment\u2019 that is \u2019functionally complete\u2019\/chordate, a cartesian equipment being a cartesian object in the 2-category of equipments, pseudo-functors and lax transformations that are valued in, and pseudo-natural with respect to, tight maps. That is certainly suggested by the material (due to Todd, I think) at cartesian bicategory.\n\n\u2022 CommentRowNumber20.\n\u2022 CommentAuthorUrs\n\u2022 CommentTimeOct 25th 2012\n\nThanks, Finn!\n\n\u2022 CommentRowNumber21.\n\u2022 CommentAuthorTodd_Trimble\n\u2022 CommentTimeOct 25th 2012\n\nRe relational calculus, I\u2019d be tempted to try to recall some history, or at least a mathematician\u2019s history, which would involve names like Peirce, Schr\u00f6der, Tarski, \u2026 In the early days there were lots of analogies made between relational calculus and linear algebra, explainable by the fact that $Rel$ is $CMon$-enriched and self-dual. 
Trouble is that I don\u2019t know the history, really.\n\n\u2022 CommentRowNumber22.\n\u2022 CommentAuthorUrs\n\u2022 CommentTimeOct 25th 2012\n\nI (only) now realize that I pretty much missed that story about \u201cfamilial regularity and exactness\u201d.\n\nThe entries on all the notions unified by this need to point back to that unification. So I have created now a floating TOC and am including it into all the relevant entries:\n\nPlease check out that TOC and edit\/modify as need be.\n\n\u2022 CommentRowNumber23.\n\u2022 CommentAuthorMike Shulman\n\u2022 CommentTimeOct 26th 2012\n\nVery quick reply: I\u2019m sorry (and surprised) that I didn\u2019t add enough links. I certainly intended to! But there were a lot of pages that needed editing at once, and I guess I missed a bunch. Thanks for the fixes.\n\n\u2022 CommentRowNumber24.\n\u2022 CommentAuthorMike Shulman\n\u2022 CommentTimeOct 26th 2012\n\n@Finn 19 : excellent! I look forward to it.\n\n@Todd 17: It\u2019s in my paper\u2026 sorry I don\u2019t have time to write more now, I\u2019m getting up early to go to Montreal tomorrow\u2026\n\n\u2022 CommentRowNumber25.\n\u2022 CommentAuthorFinnLawler\n\u2022 CommentTimeOct 31st 2012\n\nThere was a small mistake at the end of the proof I put at allegory, so I\u2019ve put a lemma on my personal web here and referred to that instead.\n\n\u2022 CommentRowNumber26.\n\u2022 CommentAuthorTodd_Trimble\n\u2022 CommentTimeMar 29th 2013\n\nHere is a basic question about allegories that I don\u2019t know the answer to right away: is \u201callegory\u201d a property or structure one can put on a locally posetal 2-category $B$? 
The issue is whether there is at most one \u201copposite\u201d operation $(-)^{op}: \\hom(a, b) \\to \\hom(b, a)$ that makes $B$ an allegory.\n\n\u2022 CommentRowNumber27.\n\u2022 CommentAuthorMike Shulman\n\u2022 CommentTimeMar 30th 2013\n\nYou probably do know that if B is tabular, or more generally if every morphism is a join of a composite of maps and their inverses, then its allegory structure is unique, since the opposite of a map in an allegory is its adjoint.\n\n\u2022 CommentRowNumber28.\n\u2022 CommentAuthorTodd_Trimble\n\u2022 CommentTimeMar 30th 2013\n\nYeah, I do know that! Strangely, BTW, Freyd-Scedrov define a map in an allegory to be a morphism $R: A \\to B$ such that $R^{op}: B \\to A$ is its right adjoint, instead of simply as a morphism that possesses a right adjoint (and then proving the right adjoint must be $R^{op}$). Maybe Johnstone proves this in the Elephant; I haven\u2019t checked.\n\n\u2022 CommentRowNumber29.\n\u2022 CommentAuthorMike Shulman\n\u2022 CommentTimeMar 30th 2013\n\nYeah, he does.\n\n\u2022 CommentRowNumber30.\n\u2022 CommentAuthorTodd_Trimble\n\u2022 CommentTimeMar 30th 2013\n\nThanks, Mike! The question of allegories being property-like, while a natural one to ask, is not urgent for me; I just wondered whether you or Finn or someone else happened to know. I can\u2019t tell whether a negative answer would make allegories even more or even less alluring to me, but I suspect \u201cless\u201d.\n\nFor what it\u2019s worth: I can show that a (locally posetal) cartesian bicategory carries at most one allegory structure, and this occurs precisely if it\u2019s a bicategory of relations. Meanwhile, cartesian bicategories are property-like with respect to 2-categories. I think these observations suffice for my immediate purpose.\n\n\u2022 CommentRowNumber31.\n\u2022 CommentAuthorEvan Patterson\n\u2022 CommentTimeMar 25th 2017\n\nHopefully the experts here can help a newbie trying to understand allegories. 
The current definition of distributive allegory says:\n\nA distributive allegory is an allegory whose hom-posets have finite joins that are preserved by composition. Thus a distributive allegory is locally a lattice.\n\nBased on Freyd-Scedrov I wonder whether it should say something like:\n\nA distributive allegory is an allegory whose hom-posets have finite joins that are preserved by composition and that satisfy the distributivity law. Thus a distributive allegory is locally a distributive lattice.\n\nIs this a mistake?\n\n\u2022 CommentRowNumber32.\n\u2022 CommentAuthorMike Shulman\n\u2022 CommentTimeMar 25th 2017\n\nI think you\u2019re right. I\u2019ve fixed it, mentioning also the weaker notion under the name \u201cunion allegory\u201d (which is used in the Elephant).\n\n\u2022 CommentRowNumber33.\n\u2022 CommentAuthorTodd_Trimble\n\u2022 CommentTimeMar 25th 2017\n\nThis reminds me that there are still some loose ends in the alternative account of power allegories (\u201coriginal research\u201d). I should get back to that.\n\n\u2022 CommentRowNumber34.\n\u2022 CommentAuthorEvan Patterson\n\u2022 CommentTimeMar 25th 2017\n\nThanks! I didn\u2019t know about the weaker notion of \u201cunion allegory.\u201d\n\n1. Added a reference to Michael Winter, Goguen Categories. Therein the author develops an application to the construction of fuzzy controllers.\n\n2. Added missing properties. Moreover, I changed \u201c(1,2)-category\u201d to \u201clocally posetal 2-category\u201d because the former is only stated as a notion depending on a notion of $\\infty$-category.\n\n3. Added proof of distributivity of composition over meets.\n\n4. Expanded on definition of map, entire morphism, and functional morphism.\n\n5. 
Corrected $g^o f$ to $g f^o$ in definition of tabulation.","date":"2022-06-29 09:42:28","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 22, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8490527868270874, \"perplexity\": 2945.261774340048}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2022-27\/segments\/1656103626162.35\/warc\/CC-MAIN-20220629084939-20220629114939-00625.warc.gz\"}"} | null | null |
{"url":"http:\/\/www.numdam.org\/item\/CM_1987__64_2_133_0\/","text":"Higher asymptotics of the complex Monge-Amp\u00e8re equation\nCompositio Mathematica, Tome 64 (1987) no. 2, pp. 133-155.\n@article{CM_1987__64_2_133_0,\nauthor = {Robin Graham, C.},\ntitle = {Higher asymptotics of the complex {Monge-Amp\\ere} equation},\njournal = {Compositio Mathematica},\npages = {133--155},\npublisher = {Martinus Nijhoff Publishers},\nvolume = {64},\nnumber = {2},\nyear = {1987},\nzbl = {0628.32033},\nmrnumber = {916479},\nlanguage = {en},\nurl = {http:\/\/www.numdam.org\/item\/CM_1987__64_2_133_0\/}\n}\nTY - JOUR\nAU - Robin Graham, C.\nTI - Higher asymptotics of the complex Monge-Amp\u00e8re equation\nJO - Compositio Mathematica\nPY - 1987\nDA - 1987\/\/\/\nSP - 133\nEP - 155\nVL - 64\nIS - 2\nPB - Martinus Nijhoff Publishers\nUR - http:\/\/www.numdam.org\/item\/CM_1987__64_2_133_0\/\nUR - https:\/\/zbmath.org\/?q=an%3A0628.32033\nUR - https:\/\/www.ams.org\/mathscinet-getitem?mr=916479\nLA - en\nID - CM_1987__64_2_133_0\nER - \nRobin Graham, C. Higher asymptotics of the complex Monge-Amp\u00e8re equation. Compositio Mathematica, Tome 64 (1987) no. 2, pp. 133-155. http:\/\/www.numdam.org\/item\/CM_1987__64_2_133_0\/`\n\n1 J. Bland, Local boundary behaviour of the canonical Einstein-K\u00e4hler metric on pseudo-convex domains, UCLA PhD. thesis, 1982.\n\n2 S.-Y. Cheng and S.-T. Yau, On the existence of a complete K\u00e4hler metric on noncompact complex manifolds and the regularity of Fefferman's equation, Comm. Pure Appl. Math 33 (1980) 507-544. | MR 575736 | Zbl 0506.53031\n\n3 S.S. Chern and J. Moser, Real hypersurfaces in complex manifolds, Acta Math. 133 (1974) 219-271. | MR 425155 | Zbl 0302.32015\n\n4 C. Fefferman, Monge-Amp\u00e8re equations, the Bergman kernel, and geometry of pseudo-convex domains, Ann. Math. 103 (1976) 395-416. | MR 407320 | Zbl 0322.32012\n\n5 C. Fefferman, Parabolic invariant theory in complex analysis, Adv. in Math. 31 (1979) 131-262. 
| MR 526424 | Zbl 0444.32013\n\n6 R. Graham, Scalar boundary invariants and the Bergman kernel, Proceedings of the Special Year in Complex Analysis, Univ. of Maryland, to appear. | Zbl 0626.32027\n\n7 J. Lee, Higher asymptotics of the complex Monge-Amp\u00e8re equation and geometry of CR-manifolds, MIT PhD. thesis, 1982.\n\n8 J. Lee and R. Melrose, Boundary behaviour of the complex Monge-Amp\u00e8re equation, Acta. Math. 148 (1982) 159-192. | MR 666109 | Zbl 0496.35042\n\n9 R. Melrose, Transformation of boundary problems, Acta. Math. 147 (1981) 149-236. | MR 639039 | Zbl 0492.58023","date":"2022-10-07 12:41:13","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.3141491711139679, \"perplexity\": 4740.964028989682}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2022-40\/segments\/1664030338073.68\/warc\/CC-MAIN-20221007112411-20221007142411-00340.warc.gz\"}"} | null | null |
Capable of reaching a height of nearly three feet, this gentle giant of summer is among the largest members of the Lily Family. With recurved petals colored yellow to orange and bearing reddish-brown spots, this flower can be identified from some distance away, growing on moist roadsides and in meadows. Up to 40 flowers have been counted on a single plant of L. superbum, while a similar but smaller species, L. michauxii, bears only 1 to 6 flowers per plant. Blooming time for both species begins in July and runs through September. This plant possesses no significant medicinal properties, although early American Indians used the bulbs in soups.
July 22, 2016 3:38pm PT by Kate Stanhope
Comic-Con: Kristin Chenoweth Joins 'American Gods'
Bryan Fuller also discussed the possibility of a 'Hannibal' reunion on the Starz drama.
Kristin Chenoweth and Bryan Fuller are reuniting.
The Pushing Daisies alum has joined American Gods as Easter, it was announced Friday at San Diego Comic-Con with a surprise appearance from the star.
"I'm so excited to be reunited with my Bryan Fuller," the actress said when she was brought up midway through the panel for the highly anticipated Starz adaptation of Neil Gaiman's 2001 novel. Chenoweth won an Emmy for her role on Fuller's short-lived ABC series Pushing Daisies.
American Gods centers on a war brewing between old and new gods. The traditional gods of biblical and mythological roots from around the world continue to lose believers to an upstart pantheon of gods reflecting society's modern love of money, technology, media, celebrity and drugs. Series protagonist Shadow Moon (Ricky Whittle) is an ex-con who becomes the bodyguard and traveling partner to Mr. Wednesday (Ian McShane). A con man who is secretly one of the older gods, Mr. Wednesday is on a cross-country mission to gather his troops in preparation for a battle with the new deities.
In the series, Chenoweth's character, Easter, is one of the old gods. Once known as Ostara, goddess of spring, Easter still embraces the jelly beans and chocolate bunnies associated with the holiday that bears her name in an effort to stay relevant.
The actress joins a cast that also includes Pablo Schreiber, Yetide Badaki, Bruce Langley, Crispin Glover, Jonathan Tucker, Gillian Anderson, Peter Stormare, Orlando Jones and Cloris Leachman.
In addition to Pushing Daisies, Chenoweth's other TV credits include Glee, The Good Wife and The West Wing.
In addition to Chenoweth, Fuller has previously worked with Anderson on NBC's Hannibal. Later in the panel, Fuller was asked about the likelihood of any other Hannibal stars coming to American Gods.
"The door is always open for those lovely folks. It was such an incredible experience working with that production for three years," he said. "As soon as schedules sync up, yes, we would love it."
Production on the 10-episode first season is currently underway in Toronto.
American Gods is set to debut in 2017 on Starz.
Probability – The Search For Miss Loi

E-Maths tuition question, posted at 12:17 am (Singapore time)

The probability of a Probability question appearing in your exams is probably 1, so Miss Loi probably thinks that it's probably a good idea to finally include a probability problem here. (Try saying this in 5 seconds!)

Anyway, students should be alert to the fact that probability questions tend to be a little long-winded, low on mathematical figures/notations but high on rhetoric.

Sometimes you feel like you're doing English comprehension right in the middle of your Maths paper, like this:

Lured by the promises of exotic virgins, endless rivers of wine, treasures of gold, meeting the Venerable Miss Loi and attaining Mathematical salvation, a gallant knight named John one day decides to embark on a quest to discover the whereabouts of Miss Loi's Temple once and for all.

As such, this brave knight shall journey once a week (in shining armour and all) into the deep recesses of Novena, and brave the dangers that lurk within, till he finds the elusive Temple.

The probability of Sir John finding Miss Loi's Temple in each journey to Novena is 1/5.

1. Expressing your answers as fractions, find the probability that Sir John
   1. fails to find it in the first journey but finds it in the second journey.
   2. finds it either in the first or second week.
   3. fails to find it in the first three journeys.
   4. finds it in one of the first five weeks.
2. Find the probability, in terms of n, that Sir John finds the Temple in one of the first n weeks.

Frankly this question is not difficult, but a curious number of students contrive to get it wrong. The real danger here is probably that big mess of words designed to confuse the knight out of you. So read carefully.

What will John do when he finds the Temple eventually? Will he still make another journey the following week?

Comments:

1. Cris: He will not do anything but have tuition with Miss Loi. Yes, he will come back the following week because he paid his tuition fees. :P

2. Miss Loi: Cris, that's some very good out-of-the-box stuff from you. But for the purpose of this question, shall we just assume that after Sir John finds the Temple, he fell so in love with the place that he applied for PR, sold off his horse, got a big car, bought some prime property and settled there happily ever after?
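The weekly journeys are independent trials with success probability 1/5, so each part follows from the multiplication and complement rules; in particular, the answer to part 2 is 1 - (4/5)^n. A quick check of the answers, sketched here in Python with exact fractions (the language and variable names are my choice, not the original page's):

```python
from fractions import Fraction

# Probability that Sir John finds the Temple on any single weekly journey.
p_find = Fraction(1, 5)
p_miss = 1 - p_find            # 4/5, probability of failing in a given week

# 1(i) Fails in the first journey but finds it in the second.
a = p_miss * p_find            # (4/5)(1/5) = 4/25

# 1(ii) Finds it in either the first or the second week:
#       find in week 1, or miss week 1 and then find in week 2.
b = p_find + p_miss * p_find   # 1/5 + 4/25 = 9/25

# 1(iii) Fails in all of the first three journeys.
c = p_miss ** 3                # (4/5)^3 = 64/125

# 1(iv) Finds it in one of the first five weeks:
#       the complement of missing all five times.
d = 1 - p_miss ** 5            # 1 - (4/5)^5 = 2101/3125

# 2. In general, P(finds it within the first n weeks) = 1 - (4/5)^n.
def p_within(n):
    return 1 - p_miss ** n

print(a, b, c, d, p_within(5))
```

Running it confirms the fractions 4/25, 9/25, 64/125 and 2101/3125 for parts 1(i) to 1(iv).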
A permanent Australian resident of Indian origin has been refused Australian citizenship.
The Administrative Appeals Tribunal of Australia has affirmed the decision of the Minister for Immigration and Border Protection to reject Mr Manishkumar Mishra's application for Australian citizenship.
The department had refused Mr Mishra's application on 5 July 2016 on character grounds. Mr Mishra had two convictions against him in a case of domestic violence. On 20 May 2013, the applicant appeared before the Queanbeyan Local Court where he was fined $200 for common assault (DV) and $500 for assault occasioning actual bodily harm (DV).
The Minister for Immigration and Border Protection refused Mr Mishra's application for citizenship on the basis of these convictions. On 13 February 2017, Mr Mishra appealed to this Tribunal to review the decision.
The tribunal reviewed his case and found that Mr Mishra had not been entirely honest about disclosing his convictions. Deputy President Dr P McDermott RFD of the Tribunal observed, "In my opinion, the applicant failed to disclose that he had been convicted of two assault offences which are serious offences, one of which involved the occasioning of bodily harm. His failure to disclose both of these serious offences when making his application for citizenship by conferral is certainly a matter of deception."
Mr Mishra had arrived in Australia on 27 August 2007 on a student visa. He was granted permanent residency on 31 January 2015. He lodged an application for Australian citizenship in February 2016. In May, the Department of Immigration and Border Protection invited him to comment on the findings of adverse information related to his conviction. Eventually, his application was refused.
The AAT heard that Mr Mishra separated from his wife after assaulting her on the night of May 1, 2013.
Mr Mishra disputed these facts, but the tribunal accepted that the applicant had committed the offences for which he was convicted when assessing whether he was of good character.
"Domestic violence is certainly quite contrary to the values of our society," said Dr P McDermott RFD.
Deputy President Dr P McDermott RFD of the Tribunal also pointed out that Mr Mishra failed to accept the responsibility for his actions.
Dr P McDermott RFD, however, said: "My decision has no bearing on his entitlement to remain in Australia".
## Edited by Kannan M., with Rebecca Whittington, David C. Buck and D. Senthil Babu
### Time Will Write a Song for You
#### Contemporary Tamil Writing from Sri Lanka
## Contents
About the Author
_Introduction_
1. The Temple Car and the Moon _Mahakavi_
2. Enlightenment _Dominic Jeeva_
3. To His Holiness Arumuga Navalar: An Appeal _Mu. Thalaiyasingam_
4. Oh Driver _Neelavanan_
5. Walk _Mu. Ponnampalam_
6. Yesterday Evening, This Morning _M.A. Nuhman_
7. Your Plight Also _A. Jesurasa_
8. Journey _S. Sivasegaram_
9. Hope _V.I.S. Jayapalan_
10. Seashore _V.I.S. Jayapalan_
11. Unsung Songs _Shanmugam Sivalingam_
12. Lankapuri Raja _Piramil_
13. In the Evenings _Sivaramani_
14. I Don't Have the Words _Sivaramani_
15. My Lineage and I _Sivaramani_
16. Place: Jaffna University Canteen _Sivaramani_
17. Summer Scorches Day after Day . . . _Su. Vilvarathinam_
18. Time Will Write a Song for You _S. Ranjakumar_
19. Woman Humiliated _Sivaramani_
20. Darkness _Aswagosh_
21. To Those Who Come with Sticks _Ilavalai Wijayendran_
22. Days in the Trenches _Pa. Ahilan_
23. War Journey: Diary of a Tamil Tiger _Malaravan_
24. A Space That No Longer Is _Su. Vilvarathinam_
25. Heroes Rest Here _Cheran_
26. One Night _Maalika_
27. I Am a Snail . . . _Shanmugam Sivalingam_
28. The Eighth Ghost _V.I.S. Jayapalan_
29. On the Surface of the Mind _Majeed_
30. The Sorrow within Me Has the Surface Area of a Straight Line _Majeed_
31. Lost Life _R. Muralisvaran_
32. Veena _Bose Nilhale_
33. On the Present _Bose Nilhale_
34. Pyre _Rashmy_
35. The Song of an International Refugee _Shanmugam Sivalingam_
36. The Echo of Moonlight _Su. Vilvarathinam_
37. Anxious Sermon _Selvam Arulanandam_
38. 'Questions' _Aruntati_
39. Earthen Towns _Nilanthan_
40. Hanifa and the Two Bulls _Kumarmurthy_
41. A Story Lost in Time, Lasting in Time _Iravi Arunasalam_
42. Questions for the One Who Is Coming _Karunakaran_
43. Appe Ratta _V. Gowribalan_
44. Iron Birds _V. Gowribalan_
45. Encounter _Ilaiya Abdullah_
46. Night _S. Vinodhine_
47. Midday _S. Vinodhine_
48. My Songs _S. Vinodhine_
49. After Catastrophe _Faheema Jahan_
50. Merciless Ones _S. Chelian_
51. Those Who Killed Them _S. Vinodhine_
52. Take the Child from Me _Faheema Jahan_
53. Barrel-toothed Ghost _T. Malar Chelvan_
54. Burning Nest _Karunakaran_
55. Black Dog _Karunakaran_
56. The Warrior Who Could Not Part from His Shadow _Karunakaran_
57. Let's Move on Again, to Yet Another Place _Deebachelvan_
58. A Boy's Father Dies _Tha. Agilan_
59. A Refugee's Motherland _Ki. Pi. Aravinthan_
60. Immense Land: An Introduction to Its Soil Strata _Pa. Ahilan_
61. Story of an Unwritten Letter _Na. Sathyabalan_
62. Little Brother _S. Chelian_
63. Restless Sea . . . Sleepless Land . . . Endless Dream _Karunakaran_
64. Yugapuranam: Myth of an Era _Nilanthan_
65. Keep All That to Yourself _Karunakaran_
66. The Sea and Dreams _Ki. Pi. Aravinthan_
67. Madakkombarai in Jaffna: A Memoir _Malliappu Santhi Thilakar_
68. Release _V. Gowribalan_
_Copyright Acknowledgements_
Footnotes
_Note on Authors_
_Further Reading_
The French Institute of Pondicherry
_Acknowledgements_
Follow Penguin
Copyright
PENGUIN BOOKS
##### TIME WILL WRITE A SONG FOR YOU
Kannan M. (b. 1968) heads the Programme on Contemporary Tamil Culture, Department of Indology, French Institute of Pondicherry.
Rebecca Whittington (b. 1987) is a PhD student in the Department of South and Southeast Asian Studies at the University of California, Berkeley. Her research interests include Tamil and Bengali modern literature, comparative literature, literary modernism, and translation studies.
D. Senthil Babu (b. 1972) is a historian of science affiliated to the Department of Indology, French Institute of Pondicherry.
David C. Buck (b. 1948) has been translating Tamil works into English since 1965. He has also studied Cittar and Saiva religion and philosophy, as well as Carnatic music on the veena. His publications include a number of collaborations with the late Dr K. Paramasivam, including a translation of _Iraiyanar Akapporul_ with Nakkirar's commentary, as well as some Sangam poetry. He has also published a translation, with comments, of _Thirukkurraalak Kuravanci_. More recently, he has published a number of translations from contemporary Tamil literature in collaboration with Kannan M. of the French Institute in Pondicherry. David C. Buck is an Associate Professor Emeritus at Elizabethtown Community and Technical College in Kentucky, USA.
## Introduction
> _Walking on my bare knees
> Through a field of broken glass /
> Walking on my naked soul
> Through a field of broken comrades /
> . . ._
> _Of death / killed /_
> _By bullets or cyanide / by
> Their own or another's hand / dead
> All the same / rotting
> . . ._
>
> Juan Gelman
#### A Landscape and Its People
The words 'Tamil Writing from Sri Lanka'—part of the subtitle of this anthology—may invite critical discussion. Some people may ask: why _Sri Lanka_, why not _Tamil Ealam_? To understand this question, we have to engage with the concerns of the diverse Tamils in Sri Lanka, their roots and journey, especially over the past century and a half.
A teardrop in the Indian Ocean, the island's geopolitical location (its proximity to India and particularly Tamil Nadu) has made it a playground for the 'superpowers' operating in the region—India, China and the US. Tamils in Sri Lanka, constituting 18 per cent of the population, form the main minority, and live primarily in the northern and eastern regions and in the hills of the central region. Most hill-country Tamils are descended from indentured labourers brought from South India to work on British-owned tea plantations in central Sri Lanka during the era of British rule. There is a sharp distinction made between the 'Sri Lankan Tamils' descended from a community of people who have lived on the island for centuries, and the hill-country labourers. The ancient name for the parts of Sri Lanka populated mainly by Tamil-speaking people is _Ealam._ The majority Sinhalese-speaking population (74 per cent) lives in the lower central, western and southern parts of the island. Though these communities historically have been living in distinct regions within the island, they share a subcontinental religious culture, rooted in both classical and folk traditions. Further, the Tamil population itself is anything but monochromatic: besides the Sri Lankan versus hill-country division noted above, there are religion-based divisions as well. While most are Hindu, there are also many Christian Tamils and many Tamil-speaking Muslims, who live in geographically and culturally distinct regions. The major areas populated mainly by the Tamils include the Jaffna peninsula, which is the northernmost region of the island, and just to its south the Vanni, comprising the districts of Killinocchi, Mannar, Vavuniya and Mullaittivu, and the eastern coastal region centred on Triconamalai, Batticaloa and Ambarai. These communities all speak different, but mutually intelligible, dialects of Tamil, and all share in the same Tamil literary tradition.
As in India, the Tamils here are mired in hierarchies of caste and region as seen in Mahakavi's iconic poem, 'The Temple Car and the Moon'. The caste system of the Tamils here is however distinct from that of the Tamil region in India in that there has been no Brahmin hegemony. There has, however, existed a Saiva Vellala hegemony that aped the Brahmins, and has constituted its own hierarchy of lesser castes and untouchables. Regionalism among the Tamils of Sri Lanka has been sustained by the animated dominance of Jaffna, often considered the cultural capital of the Tamils, and the corresponding resentment and opposition towards it from other regions. Every other regional and caste group among the Tamils in Sri Lanka has notoriously looked down on the hill-country Tamils and considered them as yet another population of untouchables. There are other minorities, including certain tribal communities, which live in different regions of Sri Lanka.
#### People and their Struggle
The root cause of the struggle between the minority and the majority in Sri Lanka is the majoritarian attitude of many Sinhalese Buddhists, who are practitioners of Theravada Buddhism. An Act passed in 1956 made Sinhalese the single official language of Sri Lanka (then Ceylon), thereby practically forcing the minorities into a second-order citizenship soon after independence in 1948. The two decades that followed the passing of this Act witnessed significant political turbulence that in more ways than one, sowed the seeds of the conflict to come later. Within the Tamil community, these years were characterized by the emergence of strong working-class movements led by the Communist party. The Communists, with their belief in organizing an all-Sri Lankan working class, found it important to address issues of caste oppression within the Tamil community. They led several agitations and propaganda campaigns against caste oppression—in particular, against untouchability and the denial of basic rights like access to education, water and places of worship, to the oppressed sections. The Tamil nationalist parties, with their belief in parliamentary democracy, chose to frame their politics within a language of political rights, regional autonomy and democratic federalism vis-à-vis the Sri Lankan state. In 1964, a pact signed by Sirimavo Bandaranaike, then Sri Lankan prime minister, and Lal Bahadur Shastri, her Indian counterpart, forced lakhs of disenfranchised Tamil indentured labourers living in the central hills to return to India, where they continue to live like refugees, isolated in a few hill stations, still finding it difficult to cope with the vagaries of resettlement. Those who managed to stay back faced severe hardship on the plantations, as they were harassed by their Sinhalese neighbours. As for those who resettled in other Tamil regions of the island, they faced even worse harassment at the hands of the dominant Tamil castes. 
Despite the serious injustice committed against lakhs of Tamil labourers, both the Tamil nationalists and the Communists staged merely token protests and continued with their set agenda. The political and social schisms within the Tamil community in the decades after 1956 prevented them from forging a unified political movement in the face of increasingly oppressive, majoritarian policies of the state of Ceylon.
In 1971, the Government of Ceylon implemented the so-called 'policy of standardization', which in effect curtailed the entry of Tamils into institutions of higher education and recruitment into government service. The proclamation of Sri Lanka as a republic in 1972 and the pride of place that Buddhism as a religion acquired in its Constitution further aggravated the tensions between the two communities. The Tamil people deeply resented such policies and perceived them as clear acts of discrimination against their culture and their legitimate status within the Sri Lankan nation. It was in such a situation that the fourth World Tamil Conference was held in Jaffna in 1974, much against the wishes of the Sri Lankan government, which preferred the national capital Colombo as the venue for the event. The rally on the last day of the conference was charged by the Sri Lankan police, who opened fire, causing the death of nine Tamils. (Iravi Arunasalam's 'A Story Lost in Time, Lasting in Time' has echoes of the event.)
These were the circumstances that animated the Tamil youth, disappointed with both the Tamil nationalist leadership and the Communist parties, whose politics did not seem to deliver hoped-for results. The events following the 1968 global uprisings and the strong emergence of the Naxalite movement in India inspired large sections of the Tamil youth to take a militant path to secure their rights: at best, a Tamil nation; or at least, an autonomous region within the Sri Lankan state. The struggle of the people of Palestine under the Palestine Liberation Organization (PLO) was an inspiration as well. To contend with these sentiments of such a significant section of the Tamil youth, the Tamil nationalist parties convened a conference at Vattukkottai in 1976, where they passed a resolution demanding a full-fledged _Tamil Ealam_ , nothing short of a Tamil homeland.
The 'self-appointed' President of Sri Lanka, J.R. Jayawardane, riding on a popular victory in the general elections of 1977, launched a direct attack against the Tamil nationalist leadership in the Sri Lankan Parliament, who were protesting against the police brutality in Jaffna. This provoked widespread attacks by Sinhalese gangs against the Tamil people all over Sri Lanka, resulting in riots, killing more than a hundred people and the destruction and loot of property of the Tamil people. Riots on this scale had not occurred since 1958.
Many militant, armed organizations, including the LTTE, emerged among the Tamils during this time, most of them in and around the Jaffna peninsula, with clear leanings towards the extreme left and with a strong socialist orientation. These organizations came to capture the political imagination of the Tamil people, pushing the conventional Tamil nationalists and the Communists to the margins. In June 1981, the Jaffna Public Library, previously the American Mission Library, was burnt to ashes by certain Sinhalese miscreants with the active support of the Sri Lankan police. This triggered retaliation by the Tamil armed groups against the Sri Lankan police, and led to the huge deployment of the Sri Lankan Army in the Tamil regions. This marked an end to the civilian administration of the Tamil people as we sense in the poems here of M.A. Nuhman ('Yesterday Evening, This Morning') and A. Jesurasa ('Your Plight Also').
Then in 1983, the LTTE attacked and killed thirteen soldiers of the Sri Lankan Army in Jaffna, which provoked an unprecedented all-out attack against the Tamil people all over the island. That July, when more than three thousand Tamils were killed and thousands more were forced to flee their homes (not even Tamil prisoners were spared), came to be known as Black July. Following the large-scale migration of the Tamil population from the Sinhalese-dominated areas into the Tamil regions in the North and the East, waves of refugees started arriving on the Tamil Nadu coast, while many other Tamils became internal refugees living in makeshift camps in various parts of Sri Lanka, and still others took refuge in a number of countries in Europe. The chain of events triggered by these riots would only reach a semblance of closure in 2009. Contemporary historians are still trying to unpack the complexities of these years.
The decade following 1983 saw the issue of Sri Lankan Tamils dominating the political scene in Tamil Nadu, India, resulting in widespread protests against the Sri Lankan government and mobilization of active support for the Tamil refugees and militant Tamil organizations. The Indian government trained the militant organizations, quite in the open, often playing one against the other, even though the professed aim was to bring the Sri Lankan government to the negotiating table. Thus, Tamil Nadu, by default, became the operating base for several militant organizations. Some of them published books and magazines from Tamil Nadu, not only to promote their cause but also to bring into circulation a literature that brought together writers, ideologues and readers with a growing commitment to the cause of the Tamil people beyond national borders. The Tamil public in India developed a reverence towards the Tamil militants of Sri Lanka, tracing their cultural and militant lineage from classical Tamil heroic literature. However, this did not necessarily mean a substantive engagement with Sri Lankan Tamil literature or an understanding of the particular geographical and historical context of its production, even though historically there had been continuous trade and cultural contact between the two cultures. The Tamil-reading public of this period and after made possible a cultural market and this became the main source of recognition and patronage for the Sri Lankan Tamil writers. Fearing that separatist politics might gain ground in its own territory, the Government of India organized several rounds of negotiations between the militants and the Sri Lankan government, even as the guerrilla warfare by the militants continued in Sri Lanka, as we see here in Ranjakumar's 'Time Will Write a Song For You'.
In 1987, amidst various pacts with militant groups in Punjab and Assam, the Indian government, led by the then prime minister Rajiv Gandhi, also signed an accord with the Sri Lankan government. This pact guaranteed an amendment to the Sri Lankan Constitution, providing the Tamils with an autonomous North Eastern Administrative Region, and the withdrawal of the Sri Lankan Army from this region. The militants were supposed to lay down arms and return to the political mainstream. The period of transition was to be monitored by an Indian Peace Keeping Force (IPKF). Soon after the arrival of the IPKF in the Tamil regions— initially much to the relief of the Tamil population—the situation deteriorated drastically. Unable to stop the Sri Lankan Army from harassing the Tamil people on the one hand, and unable to make the Tamil militants to surrender their arms on the other, the IPKF soon found itself cornered into an open conflict with the Tamil militants, in particular with the LTTE, on behalf of the Sri Lankan state. Unfamiliar with the terrain, unequipped to deal with the urban guerrilla war tactics of the LTTE, and unable to distinguish between the people and the militants, the IPKF rapidly turned into an 'Innocent People Killing Force' in the eyes of the Tamil people as seen in Tha. Agilan's story, 'A Boy's Father Dies', in this volume. Due to a curious collusion between the LTTE and the Sri Lankan government and the huge unpopularity that the IPKF gained among the public of Tamil Nadu, not to mention its own casualties, the IPKF was forced to retreat and withdraw from Sri Lanka in 1991.
After the IPKF withdrawal, the LTTE, in its effort to become the sole representative of the Tamils in Sri Lanka, embarked on a mission to eliminate all opposition, including all other Tamil militant organizations and the Tamil political leadership, with alarming ruthlessness and military discipline. It was during the same time that the LTTE decided to expel all Tamil Muslims from the Northern Province, accusing them of treachery to the Tamil cause. The Muslims were forced to leave their homes and all their belongings within two hours of the LTTE announcement. (Kumarmurthy's story, 'Hanifa and the Two Bulls', and V.I.S. Jayapalan's poem, 'The Eighth Ghost', in this volume echo this traumatic expulsion.) This eviction drove them to claim a separate ethnic identity for themselves as Muslims who speak Tamil. All this happened when the LTTE had already resumed its war with the Sri Lankan Army (see Malaravan's 'War Journey' in this volume).
In May 1991, during his electoral campaign in Tamil Nadu, Rajiv Gandhi was assassinated by the LTTE to avenge the atrocities of the IPKF on the Tamil people. The LTTE, though, never admitted its role in this assassination. This incident put an end to the LTTE's operations in Tamil Nadu and alienated the Sri Lankan Tamil people from the Indian middle class, hitherto sympathetic to the Sri Lankan Tamil struggle. The Indian state came down heavily on any form of activity supportive of the Tamil cause, and banned the LTTE, making it difficult for even human rights-based activities to raise genuine concerns. Such measures on the part of the Indian government made it difficult for refugees from Sri Lanka to enter Tamil Nadu, not to mention the severe harassment that the refugees already living in the camps in Tamil Nadu had to face. The desperate living conditions of the Sri Lankan Tamil refugees in these camps remains largely unaddressed by all sections concerned. Given this situation, the Tamils in Sri Lanka, caught between the LTTE and the Sri Lankan Army, began their perilous journeys seeking asylum in Europe, Canada and Australia, thus marking the beginning of a new Tamil Diaspora. By 1995, the Sri Lankan Army was able to chase the LTTE out of Jaffna, sparking another spate of internal migration of the Tamil people from the Jaffna region into Vanni and vice versa, as reflected in Nilanthan's poem 'Earthen Towns'. Confined to the Vanni region and the East, the LTTE gradually transformed itself from a guerrilla force into a conventional army, running a parallel administration, with Killinocchi as the capital of their de facto state. The Vanni region remained entirely cut off from the rest of the world, placed as it was under an economic embargo by the Sri Lankan state from 1995 to 2002. The LTTE consolidated itself during this period, within Vanni as well as among the Diaspora, strengthening its international networks of finance, weapons, and culture. 
It consciously built a heroic image for itself as the saviour of the Tamil people, very much modelled on the popular cultural means adopted by the Dravidian political parties in Tamil Nadu. It's a different story that it was never able to distance itself from its reputation as a terrorist outfit.
After 11 September 2001, in a changed global scenario, a ceasefire agreement was reached between the LTTE and the Sri Lankan government, mediated by Norway on behalf of the international community and covertly supported by India. The peace lasted from 2002 to 2004, making it possible for people living in the Vanni region to see the world and for the world to see the ravages of war. By 2005, the ceasefire had been violated several times on both sides and war was once again imminent. Mahinda Rajapaksa was elected President, probably helped by the abstention from the electoral process forced on the Tamil people by the LTTE. In 2006, the Sri Lankan Army launched an all-out, no-holds-barred war on the LTTE and the people in the Vanni region, boxing them in from all sides. Several thousands of people were killed in the continuous aerial bombardments and ground-based missile attacks, and the survivors were forced to flee from place to place, ending up on the Mullaittivu coast, as we see in Nilanthan's epic poem 'Yugapuranam' in this volume. In this war without respite, thousands of Tamil people hoped in vain for help from India or the USA. In May 2009, the Sri Lankan government proclaimed victory over the LTTE, and announced the death of the LTTE chief V. Prabhakaran.
The reasons cited for the tragic collapse of the LTTE in 2009 are various: the weariness of the war-afflicted people of Vanni, who had gone through two decades of war, with almost every family forced to send at least one member to war and most certainly losing them (Deebachelvan's poem 'Let's Move on Again, to Yet Another Place' has echoes of this trauma); the changed world order that choked off arms supply to the LTTE; an altered global political landscape which seemed to have exhausted any sympathy for movements fighting for self-determination anywhere in the world; the implosion of the LTTE's carefully constructed myth about itself as a supreme heroic force; and the failure of any international forum to come to the rescue of the war-ravaged Tamil population, even when it verged on genocide, compounded by the Tamil Diaspora's inability to mobilize any semblance of international pressure on the Sri Lankan state.
The end of the war has not brought any respite for the surviving Tamil population in the Vanni region. Even after five years, they continue to live in makeshift camps under the constant surveillance of the army; there has been absolute refusal to account for the thousands who went missing during the war. Tamils in other regions of Sri Lanka live in a forced peace under a symbolically elected provincial administration bereft of any real power. There are clear signs that the Sri Lankan government will execute its plan to resettle the Tamil regions with the Sinhalese people. The Tamil Muslim population, still without any means to return to their homes, are being targeted once again, this time by the Sinhalese majority. The working masses on the plantations, equally ravaged by the war, are yet to find a space for themselves. The Sri Lankan media is constantly under a very real threat of abductions and killings. The Tamil Diaspora, without a centre to hold it together, is reconciling itself to the impossibility of a Tamil homeland, notwithstanding the simulations of victimhood and glory projected by Dravidian sentiments soaked in Tamil nationalism. Another phenomenon on the rise is the growth of a Hindu fundamentalism with aspirations of being part of a subcontinental Hindu majority, consciously orchestrated by right-wing groups trying to destroy the distinct Sri Lankan Tamil identity, and with aims to subsume even the whole of Sri Lanka within its inherently imperialist project. In such a fragmented context, there are only islands of despair, perhaps looking to literature for hope.
#### People, Struggle and Their Literature
Tamil in India and in Sri Lanka was shaped by a common literary heritage. This is evident in the works of Arumuga Navalar (1822–1879) and C.W. Damodara Pillai (1832–1901) in nineteenth-century Jaffna, who reinvented the classical Tamil corpus in print. Despite this common heritage, Sri Lankan Tamil differs widely from Indian Tamil due to a number of factors. The island's distinct material culture and landscape (Jaffna has no rivers, for instance!) has produced unique dialects. Portuguese and Dutch colonial rule played a part in the codification of caste rules. The American Mission made substantial contributions in the fields of education and translation (the Jaffna Public Library's collection of manuscripts and books was testimony to this, before it was burnt down in 1981). The Mission and the Saivite response to it helped constitute a unique system of education in Tamil up to the level of the University which insulated their Tamil from the ever-pervasive influence of English. This helped Sri Lankan Tamil, in both its written and spoken forms, to retain a classical flavour, a source of pride for its speakers.
Despite its distinct character, Sri Lankan Tamil literature since the early twentieth century has remained under the shadow of its big brother, Indian Tamil literature. Due to the skewed nature of the market, Sri Lankan Tamil literature was not widely available to the Tamil-reading public in India, whereas the Sri Lankan market was flooded with Indian Tamil popular literature and cinema. As in India, Tamil literature in early twentieth-century Sri Lanka was divided into popular magazines of large circulation and small journals. The history of modern Sri Lankan Tamil literature, in fact, is conventionally divided into periods marked by the publication of important journals, such as the _Eala Kesari_ phase (1930–1958), the _Marumalarcchi_ phase (1946–1948) and the _Dinakaran_ phase (1932–). Literature of this early period was primarily reformist in orientation, inclined towards social criticism and romance, and written in a highly formal prose full of authorial commentary.
Two distinct literary trends dominated the period between 1956 and the 1970s: progressive literature (_murpokku ilakkiyam_), modelled on social realism; and another body of literature supposedly loyal to art and aesthetics. The progressive literature movement was spearheaded by two Marxist scholars, K. Kailasapathi (1933–1982) and K. Sivathambi (1932–2011), trained in England by the British Marxist scholar George Thomson. The activities of the Communist party and the Progressive Writers' Association in this period inspired many young writers of the working class. Their writings altered the middle-class ethos which had defined literary production until then. Grounding themselves in the realities of the Sri Lankan Tamil rural landscape and its caste hierarchy, they brought dialects into literature (See Dominic Jeeva in this volume). Importantly, writers from the plantations were brought into the Tamil literary mainstream by this movement. The other trend did not have a defined leadership, though Mu. Thalaiyasingam provided a critique of the dominant social realist literature and emphasized the self and aesthetics over material relations. By the beginning of the 1970s, in the context of the increased oppression by the Sri Lankan state in the Tamil-speaking regions, a neorealist literature was in the making, in which the author turned into a mute observer. The journal _Alai_ (1975–1984, a total of thirty-five issues), then published from Jaffna, is representative of this kind of writing.
After Black July, with the publication of the poetry anthology _Maranathul Vaazhvom_ (To Live in Death), the war literature of Sri Lankan Tamil was born. The violence of the army and the guerrilla attacks of the militants, along with a stark fear of living, preoccupied the literary imagination. This period also witnessed various militant organizations starting their own publications and journals, which involved translations of poetry and political theory from English, along with the creative process of producing a literature of advocacy and propaganda, not to mention certain dissident writings from within these organizations. Much of this was facilitated by the movement of writers and militants between Tamil Nadu and Sri Lanka. Later in 1990, with the expulsion of Tamil Muslims from the Northern Province by the LTTE, there was the emergence of a Tamil Muslim literature, differentiating itself from the rest, and seeking an identity for itself. After the LTTE retreated into the Vanni region, and established their de facto state, they formed their own publication department and produced 'official' propaganda literature and also translations of technical (medical, engineering, law, warfare) literature from all over the world into Tamil. They also published their own literary journals, providing a platform for writers from Vanni. Some writers living in Vanni also self-published their writings during this period. These publications were largely unknown to the world, with their circulation limited to Vanni and a section of the Diaspora. Unfortunately, none of this survived the final war in 2009, with the Archives and the Library of the LTTE wiped out.
The late 1990s witnessed the arrival of the Tamil Diaspora based in Europe, Canada and Australia, into the world of literature. Through their publications, journals, theatre and cinema, they were able to express in Tamil their newfound anxieties in different landscapes (for the first time in the history of Tamil literature, there was 'snow') and their experiences of a newly felt freedom (see Ki. Pi. Aravinthan and Aruntati in this volume), providing contemporary Tamil a place in the civil society and universities of the western world. The publication of literary anthologies from London, Paris and Toronto, which brought together writings from Sri Lanka and from the Diaspora, provided a much-needed avenue for these writers. But along with their deeply felt nostalgia for Tamil, the Diaspora also retained their religious culture and caste system. Mired in the polarized politics of pro- and anti-LTTE, they managed to sustain a vibrant cultural activism in foreign lands. By the end of the 1990s, the Sri Lankan Diaspora also turned into a market for the Indian Tamil publishing world, which used it to commercial advantage. For the Tamil middle class in India, Sri Lankan Tamil remained exotic, familial, but not intimate (there is not a single Sri Lankan Tamil restaurant of any kind, in the whole of Tamil Nadu!). The Diasporic literary world, on the other hand, started to gravitate towards the Indian Tamil literary world, mimicking current fads in Indian Tamil and anticipating recognition from it, but oblivious to the instrumentalities of the Indian Tamil publishing and literary networks. On the other hand, the Diaspora did not consciously engage with the literary traditions of their new homes either. This may change in the near future, with the next generation of the Diaspora increasingly turning to the languages of their adopted nations to express themselves, and producing a Sri Lankan Tamil literature in English or in French. 
It is very difficult to discuss in concrete terms the nature of circulation and reading practices of Sri Lankan Tamil literature, as no institutional spaces exist for it—not even a bookshop or a library. The traffic of printed material between India, Sri Lanka and other countries of the Tamil Diaspora continues in haphazard, unpredictable ways, making access to this literature difficult.
Now we have before us a Sri Lankan Tamil literature which has branched into several micro-literatures from different regions in Sri Lanka: Dalit literature, Muslim literature, hill-country literature, war literature, popular literature, feminist literature, Diaspora literature, and so on.
#### Time Will Write a Song for You
This anthology is an outcome of the research programme of Contemporary Tamil Culture at the French Institute of Pondicherry, which tries to collect, organize, preserve and study the sources of contemporary Tamil literature. The anthology has been translated and edited by four researchers affiliated to the French Institute of Pondicherry, none of Sri Lankan Tamil origin. The works included in this anthology have not been selected for the context or the situation in which they were written. They themselves are the context, for the simple reason that when read only for their context, or outside of their context, they are susceptible to voyeurism. We have chosen works which are singular and universal in their expression, which can stand alongside anything in the world republic of letters—such as the works of Juan Gelman, cited at the beginning of this Introduction, or Kafka, cited at the end. A major portion of this anthology is comprised of poetry, which remains without doubt the high point of achievement in Sri Lankan Tamil literature. Even in its free-verse form, this poetry, quite distinct from Indian Tamil poetry, sustains an element of lyricism derived from classical prosody, yet grounded in the registers of folk and spoken Tamil. (See the poems of Mahakavi, Neelavanan and Mu. Ponnampalam.) It is a poetry which can be sung and staged. It is not a coincidence that many Sri Lankan Tamil poets until the 1980s were also exemplary performers of their own and others' poetry, such as Vilvarathinam, V.I.S. Jayapalan, Cheran and Nilanthan. However, this performative lyricism often lent itself to moments of simple escapism and euphoria, away from reality rather than returning to it. 
The poetry of the 1990s' generation breaks away from this lyricism, probably due to the stark reality of the war that surrounded them (see, for example, the poems by Sivaramani, Vinodhine, Faheema Jahan, Bose Nilhale, Majeed, Deebachelvan and others) and the emergent influence of Indian Tamil poetry, which lost its lyrical quality a long time ago.
We have not been able to include the works of the Communist Dalit writer K. Daniel (1927–1986) because they are predominantly novels, infused with dialects to which a mere excerpt will not do any justice. We have not included plays despite their significant role in the social life of Sri Lankan Tamils, most importantly in the reformist struggles against untouchability and during the later stages of the militant struggle. These are strong texts of performance, embedded in folk music and theatre, but may not reverberate in translation. We have also omitted populist, sensational writings aimed at the Indian Tamil world which made the Sri Lankan Tamil world exotic. Certain iconic representatives of Sri Lankan Tamil literature do not figure in our collection, as they are already available in translation. However, we really regret our inability to include more works from upcountry Tamils, whose politics and continuing conditions of oppression have confined their literary endeavours to realism and mobilizational idioms. One might be surprised to find the name of Piramil (1939–1997) in this anthology as he is well known in Tamil Nadu, where he spent the major part of his life. However, he remained a Sri Lankan Tamil in letter and spirit as well as in his unique understanding of the crisis on the island as a 'collective suicide' of a nation.
The works collected here force us to think about unthinkable experiences of violence, displacement, dispossession and vulnerability. These experiences seem in stark contrast to our relatively comfortable lives, and at the same time, the writing brings them uncomfortably close, making us unsure whether to identify with the human in them or recoil at the inhuman—reminding us, perhaps, that what we call inhumanity is all too human. But this literature is not simply of interest as a reflection of or response to civil war and ethnic violence. It is incisive, analytical, abstract, detailed, dreamlike, lyrical, satirical, allusive, layered and multi-vocal. The writers represented in this collection are both individually and stylistically diverse. They hail from different regions and religions, and their writing ranges from the powerfully simple to the highly experimental. This anthology presents a literature that expresses hope and life amidst oppression, violence and death, with lucidity and rigour, a literature which is under grave threat of being silenced by the prevailing politics in the region. At this point the sirens are still singing; what is to be feared is their silence.
#### On Translation
The translation of Sri Lankan writing from Tamil presents some particular difficulties. The left-branching syntax of Tamil makes the order of images, so important in poetry and even in prose, difficult to replicate in English; we have gone case-by-case in determining whether it is possible to retain the order of images without compromising rhythm and the minimum of clarity. In experimental writing such as Gowribalan's, it is sometimes difficult at first even to sort out the referents of a long sentence. The internal diversity of Tamil as represented in the texts—its distinct 'written' and 'spoken' varieties, and the dialect spoken in Sri Lanka—and its interaction with Sinhalese present more problems. In this anthology, Sinhalese words have been footnoted. As 'spoken' Tamil appears mainly in the dialogues of the short stories, it has been rendered by all the translators in more or less colloquial English. The particularities of Sri Lankan Tamil that may appear in the stories or the poems have gone unremarked, as the markers of dialect are often subtle differences in word choice, usage and pronunciation which are not easy to represent even in the standardized language, much less in another language altogether.
Translations for this anthology were done by two native speakers of Tamil and two non-native Tamil speakers, with active inputs from the authors themselves, particularly in the case of Vilvarathinam and Gowribalan. Hence, to us, the process of translation was a delicate negotiation with varied sensibilities, but with a unified conviction that a translation is not an explanation. We feel that translation is an act of learning, of reading, and of writing, as a powerful piece of writing makes us want to write it again in a new language. We say this 'not with complacency, but feeling our shortcomings in every bone'.
Kannan M.
Rebecca Whittington
D. Senthil Babu
David C. Buck
## The Temple Car and the Moon
#### _Mahakavi_
'the whole town together is pulling a temple car,
come, let's take hold of the rope,'
someone came and said.
he too was a son born
to the worldmother, borne
in her suffering womb to live here for a hundred years.
one with broad shoulders
and big arms, with light in his eyes, with a heart that would like to
be delivered from worry;
he came; he was a young man;
a human.
a younger brother of the one who, the day before,
fluttering his wings of thought, had climbed into the sky
touched the full moon and returned
a very hard-working man!
'in this world we should all stand together
in harmony.' saying this with a sweet ardour,
he came to bow his head in prayer and take the rope.
'stand still!' said one man.
'stop!' said another.
'shit!' said one.
'scum!' said yet another.
'speak!' said one.
'burn!' said a different one.
a stone fell
a neck cut
lips split against teeth
red water streaming out
reddening the earth
there was a struggle
and a man killed.
the temple car the whole town was to pull together
jerked to a halt as if it had grown roots
the mother who once produced the whole world
sat down dumbstruck at the sight
of her own offspring's frenzy.
look, there rolls in dirt
kith and kin of the one
who touched the full moon
just the day before!
_Translated by Rebecca Whittington_
Mahakavi, _Therum Thingalum_ (early 1960s), in M.A. Nuhman
and Jesurasa (eds), _Patinoru Eelattu Kavingargal_ , CreA, Chennai, 1984,
pp. 27–28
## Enlightenment
#### _Dominic Jeeva_
Jaffna. Third Cross Street. A corner nuzzling the warehouse street. To its west, the Poun* Mark warehouse. About ten yards from the warehouse—like an impassioned, restless young woman—stood a municipal street light, in solitude. Perched on its top, the flickering bulb, like a lantern lit by fireflies, its feeble rays striking a faint light on the ground. Tramping on his own shadow, with one hand supporting the lamp post, shaking his head like a lizard, stood Big Brother 'Shokolo' ('how wonderful') Kandaiah.
One can easily tell that he is in his state of Enlightenment. Siddhartha experienced Enlightenment on a full moon day during the month of Vaishaki, but Kandaiah attains his Enlightenment at times like this. Since it is now palm toddy season it's easy to get a share of toddy and a cigar for a mere 20 catam. Kandaiah never misses this opportunity. After nine, he surrenders himself to the power of the fermented intoxicant and attains his enlightened state.
Today, now, it's nine.
Enlightenment has been born in Big Brother Kandaiah.
Big Brother Kandaiah is a loner with no home, no family ties. Hefting loads of roofing tiles in the Poun Mark warehouse is his occupation. Just south of the warehouse, in the all-expansive ocean, cargo boats from Kallikkottai lie at anchor in Aluppandi harbour. Some twenty men and women lug tiles on their heads from these boats to the warehouse. They carry their tile loads all day long, and collect their two-rupee wage each evening. Among the workers forced to run a 'pinch-belly family' on that measly income, Kandaiah the Loner is a prince, alright. The other workers all call him Big Brother Kandaiah—it doesn't matter if they are actually younger or older than he is. Because he often says 'shokolo' when he talks, that honorific title gets stuck to his name as well.
In his non-enlightened state, Kandaiah is a good man for sure. During those hours, he is pretty jovial with his colleagues Adaikkalamuthu, Rafael, Mary Pillai and Lourdamma. True enough, he usually keeps his distance from them, but there are moments when he gets really friendly.
Sometimes they all sit under the shade of the tree in front of the warehouse, drinking black tea brought in old fabric-bleach tins from a nearby tea shop; they chew their betel leaves, talk loud, and laugh a lot. Even though Kandaiah won't drink tea with the rest of the workers, he is particular about not missing those sessions. And when he tosses his discoloured red head-rest towel over his shoulders and joins in, there is no end to fun. When Lourdamma is also there, the banter gets really spicy. Folk songs and dances are staged with fine music.
Lourdamma twists up her betel-coloured lips and baits him, 'What's up, Big Brother Kandaiah? You're strutting around, acting silly like a bridegroom today!'
'Oh yes . . . yes . . . you're looking to get me to put a ring on your finger and marry you, ain't ya? That's what you want, isn't it? Hmm . . . Shokolo . . . here, gimme a pinch of tobacco,' he says.
'Ha . . . There's nothing wrong with borrowing a little tobacco . . . but just look at you, always borrowing tobacco like a besotted husband,' Lourdamma says, handing him a bit of tobacco from the little box hanging off her waist.
'How come Big Brother Kandaiah keeps getting his tobacco just from Lourdamma? Is there something going on here? I wonder what's at stake?' teases Mary Pillai, fiddling with her coarse, greying hair above her forehead.
'What about her? She is not a grandma like you . . . she's young . . . she's just a girl,' Kandaiah says.
These are hardly matters of love. Lourdamma is the youngest of the lot and happens to be pretty, but only in comparison to the rest of the women—like an _iluppai_ flower when you have no sugar. Everybody gets a kick out of making her the butt of their jokes. When Kandaiah participates in these 'games', he acts like he's just the equal of the other people who work there—the tile carriers from the _cheri_. However, when he forgets himself in a trance, Kandaiah never forgets his pride in his own caste.
But to articulate that he has to reach his enlightened state. Once he gets there, his song begins: 'Shokolo . . . Who do you take me for? Lifting tiles does not make me a Pariah . . . I am a Vellala, man, a Vellala of the Cankiliar lineage, who once ruled Nallur! I am not of a low caste . . . not of a rising caste, man . . . Hmmm . . . Shokolo . . . My caste is Shokolo Supreme.'
The next morning, when he wakes up, he is the same old tile-hefting Kandaiah. Then, to him, Adaikkalamuthu is still Adaikkalamuthu, Rafael is still Rafael, Mary Pillai is still Mary Pillai, and Lourdamma is still Lourdamma.
But, for the moment, the Kandaiah standing with the support of the street light is well into his state of Enlightenment!
Yesterday too, he arrived in this same state of Enlightenment. He held forth at the same spot and proclaimed his artful wisdom to the world. That time the clerk pulled him up and said, 'What's going on, Kandaiah? I hear you get yourself drunk every night and disturb the neighbourhood?' Kandaiah could not take that, and he drank with a vengeance.
Here he was, in his state of revelation, standing under the same lamp post, just looking around. In front of him was that thatched hut. Third Cross Street is where respectable, civilized people live. Just this one thatched hut stands in isolation like a curry leaf tree sapling, an eyesore. Adaikkalamuthu and his family live in it. He hefts tiles like everybody else, but somehow he earned a 'good boy' reputation from the warehouse managers, and got the nightwatchman's post as well. He was proud of his watchman's position, which earned him this thatched hut, without rent.
Enlightened Kandaiah's nightly habit was to stand under his favourite lamp post, next to this hut, and offer 'worship' through his enchanting songs of Enlightenment. Setting himself up to drink his palm toddy, he suspected that this Adaikkalamuthu might be the one who told the clerk about him. Then, through his blessed service to the Lord of the Toddy, that suspicion rooted in him as a certain truth.
'I know—I know for sure who grumbled to Mister Clerk about me being a drunkard. Shokolo . . . So, I drink. Hey man— Adaikkalamuthu! Do I take money out of your father's house to get drunk? Even if I do carry tiles around, I am a Vellala—damn it—of the great Cankiliar's legacy. You are a Pariah, the lowest of the low castes, man. You showed off your low-caste mentality there. Swear to God, I do drink. What's it to you, damn you? Shokolo . . . You son of a Pariah, come on outside. I'll show you—I'll smash all your joints.' He stood there and hurled these challenges at Adaikkalamuthu.
'Hey, Big Brother Kandaiah! It's dark and people are going to sleep. Go on home and eat, and go to bed.'
'Shaat-aaap-yur-movuth in the white man's language means shut your mouth tight. Am I the son of a damn Pariah like you? I am a single man, I'm on my own. I toss some money out, Shokolo, and there's my food. I sleep where I choose, on anyone's porch . . . but you, man, you have gone and shown the mentality of your caste. Let me take care of you first, get arrested and stuff myself up with food there.'
Adaikkalamuthu decided not to prolong the rant. He shut his gate.
'I am the King of the Cankilis, celebrated in all three worlds,' continued Kandaiah's rhymes. He acted them out in street-drama style, interspersed with the choicest of swear words that substituted for the absent drumbeats.
'Who's that, man?'
That was not Adaikkalamuthu's voice, Kandaiah realized despite his spiritual trance.
He looked deeper.
Two policemen on patrol down the street.
'Who's there? What's all this dancing in the street? You must be Kandaiah. We've received a lot of "complaints" about you. People around here are losing sleep because of you. What do you think you're doing, boy, abusing people like this? Does your body require some "attention"? Do you want our special "camel treatment"?' threatened one of the policemen.
'Who? Me? Abusing? Oh god! No way! Don't I know that my tongue will rot in hell if I swear at people? Where did you get this devil's tale from?'
'That's as may be, boy, but this is the last time. If there are any more complaints about you getting drunk and prancing around in the streets, that's it.'
'I swear on Muniyappan of the fort. Why would I drink? Why would I shout?' Kandaiah melted, squirmed, twisted, drooled, begged and somehow managed to get rid of the policemen yesterday.
However, today's flourish is way more than yesterday's.
With his arm extending to the municipal lamp post for support, Brother Shokolo Kandaiah vividly remembered what the policemen said yesterday. And it hurt. His suspicion, that Adaikkalamuthu must be behind their annoyance, assumed gigantic proportions. He started shouting, determined that his voice should pierce right through the sleeping little hut.
'Shokolo . . . Hey Adaikkalamuthu! You low-caste son of a Pariah! Was it you that told on me to the police, damn you? . . . That measly policeman that came around yesterday actually cursed me . . . He threatened to chew me up if I did any more prancing around in the street, didn't he? I would have smashed them both into little pieces, but I took pity on them and let them go. Who am I? Lifting tiles does not make me a son of a Pariah. I belong to the glorious lineage of the Cankiliars, man! Shokolo . . . Hey Adaikkalamuthu! . . . Bring on your policemen now. I'll take care of them with one hand,' Kandaiah screams.
As if destined to appear, the same two policemen are heading this way.
'What's up, Kandaiah? Your body feels "sour"? Needs some real treatment, huh?'
Come what may, I shall never flinch in front of these policemen again, swore Kandaiah to himself, and he took a swig of the toddy. So, unmindful of the consequences, he starts talking.
'Do you know who I am, Policeman, sir? I belong to the valiant and glorious lineage of the Cankiliars. I am not a Pariah,' says Kandaiah, squirming again.
The khaki shirts cannot tolerate that insult. Their rage rises up all the way, out of their noses, they leap on Kandaiah and maul him . . .
Kandaiah lies there, whimpering . . .
The two policemen disappear, leaving no trace that they had ever come. They talk about upholding the law, but they seem to have mastered the art of breaking it . . .
Everybody living in that street was well aware of the thrashing received by Kandaiah. But all the elites' doorways were shut tight.
Silence. Ripping right through it, a recurring groan . . .
'Who am I? An orphan with nobody . . . An orphan without a single soul to question this injustice . . . Who is here for me?' Shokolo Big Brother Kandaiah muttered into his own mouth, in agony.
Fearfully, fearfully, with a hurricane lantern in his hand, bringing his son along, Adaikkalamuthu comes out to look at Kandaiah.
Severe beating. Blood gushing from the head. Splashing some water on the wound and organizing first aid, Adaikkalamuthu asks his son to get a taxi.
The taxi arrives.
Adaikkalamuthu and his son together help Kandaiah into the car. Kandaiah regains a bit of his consciousness . . .
The taxi rushes to the hospital.
Still intoxicated and wavering in the realm of Enlightenment, Kandaiah's mouth rambles on and on.
'Hey, do you know who I am? Did you policemen fall from the sky, huh? . . . Adaikkalamuthu, you low-caste Pariah scum. Watch out! I might carry roof tiles, but I am not a Pariah, damn it . . . I belong to the lineage of the great King Cankili, man! The dual heritage, purebred Vellala lineage, damn it . . .'
_Translated by D. Senthil Babu_
Dominic Jeeva, ' _Gnanam_ ' (around early 1960s), in _Dominic Jeeva_
_Sirukataikal_ , Mallikaippantal, Jaffna, 1996, pp. 109–118
## To His Holiness Arumuga Navalar*: An Appeal
#### _Mu. Thalaiyasingam_
Dear Father, Greetings:
It's a hundred or so years since you passed away, and I am a son in this Tamil lineage that is finally organizing a memorial celebration for you. I don't know what you would make of our leaders organizing this sort of a festival such a long time after the fact, but to me it is a big deal. Just think: now, delayed by a hundred years, we really are putting this grand festival together. As far as our Tamil people are concerned, we can truthfully call this a huge achievement.
The situation in today's Jaffna—oh, now that is an altogether different story, Father, far worse than those old stories that you saw with your own eyes! Your passions and your goals are coming to fruition here and now, but totally upside down. It's true that there is a rush of devotees jostling for a place in the line at temples on every street corner, and it's true that spectacles of gods and goddesses appear on wall posters everywhere—but people these days just pay a bit of money to see things you tried to see in your searching, your singing, and your wanderings. They see it in songs and in screenplays. They dissect their viewings of it into 'First Class', 'Second Class', and 'Gallery', then stand on tiptoe while they worship it. (You must excuse me if these words no longer carry meanings that are familiar to you.)
Festivals in the morning, festivals in the middle of the day, festivals in the evening, festivals at night! Sometimes these spectacles even take place after midnight. You could never have imagined these temples! But that does not mean things are any better now than what you must have witnessed in your day and time. We have all these dramas: 'lockouts', 'picket lines', 'satyagraha', 'police security', 'confrontation', 'fights', and so on. Mostly—please note the emphasis on 'mostly'—these are the popular temples and festivals of today, our gods and our goddesses! It's not that we are bereft of leaders: during election times one of them sprouts up on every corner. Then there is education, all the way up to university. But all these different sorts of pompous robots manufactured by the University focus only on employability, to the point that they forget all about Tamil, about religion and about freedom. Father, this is the situation these days in Jaffna. It seems awful, does it not? Now tell me: in this state of affairs, is it not a huge achievement to stage a festival that neither ignores you, nor Tamil nor religion?
If you were here today, what would you do, Father? Actually, that is not my most important question, but I'll ask it anyway. Please, though, whatever you say, just don't tell me I have to write another castigation. After all, aren't all of these dramas taking place well after you wrote your own blistering castigations? What we need is something else. Dreams take form when you castigate and push things down—not just in an individual, but that's what happens in the wider society as well. When you hold things down, then push down on yet more things—even when those things suffocate more and more, even when their horizons are more and more restricted— what is suppressed will burst forth as factions, and as dreams. They set up sheds on every street corner and decorate them with wall posters. They wait in line. But what do they see in all these things if not just dreams? A collective dream! Will all your condemnations and mass meetings eradicate these things? I do not think so, Father. On the contrary it seems to me that they might even be functioning as long-distance aids in causing them.
I do not like your castigations. Now, do not think that this generation intimidates me, for after all, I am one of them. No, I am not scared. It's just that, as inadequate as their dreams and their factions are, your own critiques also seem to me incomplete, on an aimless terrain. What we need is a new path. Love, love! There—those dream-lovers whistling and clowning around inside their sheds: would you find it in yourself to embrace them, too? And those faction-lovers roaming from street to street, carrying their placards and shouting their slogans: would you accept them too? I mean, would you want to embrace and guide them too? Could you do that? Would you even consider doing that? Well, in any case whatever your reactions, to us it seems that this is not only possible, but particularly important. In fact, it is the first thing we want to do. People who have been marginalized, crushed and exploited are not embraced and included and nurtured in our religion and in our tradition; so this tradition and this religion do not strike us as worthy of our praise, nor of our support, Father. Actually, the Tamil and the Saivism that you fostered only crushes them and pushes them even farther to the margins. It crushes them, and it crushes them, and it causes them to dream. It makes them scream. You never realized it, but all your critiques were hurled against people who were just holding out a helping hand to them. Maybe in your day you thought it necessary to castigate people who stepped up to hold out a helping hand when you yourself would not lift a finger. Now, however, Father, we see through all that. So today we are asking for a different path. Love, love! Are you capable of showing that now?
I don't know if our situation and our needs would make much sense to you. We would like to explain them to you, patiently.
Father, I must admit something right at the outset. As thoughts about you have grown inside me, I have come to realize that I feel an immeasurable sense of connection to you. Everything I read about you and about your service to Tamil and to religion captivates me. I get restless, and I pace back and forth. It astonishes me how you were able to utilize whatever you needed to fulfil your objectives in those days. The printing press, magazines, schools, the publishing of books, prayers, sermons—they all flourished when touched by your hands. Why, even your condemnations served your ends extremely well. I do not deny that.
But your objectives were limited to Saivism and to Tamil, where they stopped. They never went beyond that to touch and reform the foundations of society. Enraged as you were by the proselytizing efforts of other religions, you failed to see the gangrenous cruelties of caste, and the narrow-mindedness that lies embedded in our own society, and which handed victory to those very proselytizers. Vallalar of Vadalur saw them right away. Not only did he see the dictatorship and the exploitation that went in the name of caste, but he also saw the rule of arrogance and the war cry of ignorance that went in the name of religion. Maybe it was the India he lived in that showed that to him. That is why he tried to develop and act upon visions that future worlds would find astonishing, and revere him for. But the times were not yet ripe for his plans to mature, nor did he try hard enough to nurture them. Fundamentally, he was an itinerant philosopher, going wherever his mind and his god (Siva) led him. I do not think he tried very hard to subject his wisdom to the power of knowledge and action. So, because his disciples were bereft of insights and did not really understand his message, and because of shallow sympathizers, and also because of his enemies, his vision went unfulfilled. (Ramakrishna, though he transcended all spheres, was better positioned to accept everything and to integrate wisdom, devotion and knowledge completely and harmoniously.) You were, basically, just an ardent devotee of Siva. Still, you were brave enough to subject your devotion to the full power of knowledge. A warrior general at work. You turned the very strategies of 'others' against themselves, and to that extent you enjoyed your triumphs. But if you and Vallalar had acted in concert, his efforts would not have gone in vain. Your efforts too would have been victorious; they would not have lost their lustre and stagnated, as they seem today. That's what we think these days, Father.
There was nobody at that point in time but Vallalar who would have helped you achieve your objectives, if only you had approached him with the attitude of accepting his abilities while not losing any of your own. But you were incapable of doing that. The Rev. Father Peter Percival showed his love for you, and because of that, he was able to make use of you; it is sad that you could not show Ramalingasamy your love in the same way. Oh Father, do not think we are simply finding fault with you. We accept your goals as our goals; it's just that that's how it seems to us from today's perspective. That is to say, if those things were to happen today and we were around, we would not have let them happen. That's the sense in which I feel pushed to point out these things. Although there were many other reasons as well that you and Vallalar did not join together, your narrow perspective was in large measure responsible: that is the thought that rises up in us. But whatever we feel, you cannot refute the truth of what we see today. Today Vallalar's efforts have failed, and your early victories, too, have lost their lustre and gone stagnant. Even today we remain pathetically amusing, with issues like 'temple entry', 'satyagraha', 'lockouts' and 'police brutality'. Now, if we claim that these dreamers in their sheds and these sloganeers roaming the streets are the fruits of those reforms that you refused to undertake, that you blocked, or that your narrow perspective simply would not permit you to undertake, would there not be at least an element of truth in that? That's how it strikes us in our state of affairs, Father. Those noisy sloganeers are trying to correct your mistakes—and the dreamers in their sheds are trying to forget your mistakes. 
Because you tried to envision all of literature and the arts within the Saivite fold you were unable to give your full support to other arts and literatures, and you let them go, and when you did that through your ignorance, you forgot all about true art and literature, Father. You forgot them and you denounced them, Father. You constricted the boundaries of art, Father. And that is why now, in this land where you tried to nurture Tamil and Saivism, instead of the arts it is only dreams that flourish. That is precisely why we feel that we have to remind ourselves over and over again, as though we were newly attempting to be civilized: 'Having a home is not enough to emancipate us—our lives must become a stage for wondrous plays and songs.' Moreover, while you wrote your condemnations of people from other religions, Father, you forgot all about the social reforms and the service to literature that the other religions had managed to accomplish. Now, to eradicate that mistake, political shops have opened their doors, one for every person, one on every street corner, hard-selling items of foreign production. This shouting and sloganeering in all the streets is just a part of that same sales campaign. However they, too, are unable to plumb the depths of our society. But look at us! We are organizing a Commemoration for you. In this context, organizing an event could easily become farcical—particularly so if we did not include a new agenda. It is precisely to establish this agenda that we are laying all these things out, in your august presence.
Father, here is our first appeal: You must bless us to instil henceforth, into everything and everywhere, the love that you showered upon Tamil, upon Saivism, and upon Siva. Siva is indeed everywhere. That love which we give to Siva must also become a love for everything that Siva creates, and which he himself becomes. Love is not attachment. It is an ecstasy that breaks attachments. True love will be born only after all attachments have been transcended. It must even transcend the attachment inherent in thinking that love is the only goal. When we say love alone is what is needed, if that turns into an attachment, it will turn itself into an impediment for that very love. Attachments are simply impediments to a great, expansive love. Thus with no special attachment to anything—even without loving the perspective called love—an ecstatic love that upholds wisdom must become our weapon. Love that upholds the wisdom of destroying our very own selves! (The all-transcending Buddha remains the embodiment of total love!) Through such a love, unattached to anything and accepting and nurturing everything, the love that upholds wisdom must be what Siva means to us. You must grant us that; it is our first appeal.
We bring up our agenda in the very fundamentals of that appeal. This is what it is. In Jaffna where you nurtured Saivism and Tamil, the religion of humanity and equality also must henceforth thrive. All efforts must be made so that it will thrive. That is next on our agenda, which we lay out in your august presence. We will also set out action plans for this in your august presence, but now first, we are standing here as we ask for your blessing.
Now, I wonder what you think of all this. But you must understand that certainly, in one stroke, we have brought you together with Vallalar Ramalingasamy of Vadalur. Yes, as far as we are concerned, you were not two separate people. Just two sides of the same trajectory. You didn't realize it back then. Maybe it was the compulsions of your time, or rather your perspective on it, that made it impossible for you to see that. But now we understand, and we understand your failure to realize it. However, henceforth you should understand it yourself. That also is our request. Further, if today you were to wear the 'Navalar' costume that you wore from 1822 to 1879, that would in truth be the kind of blasphemy against Siva that you yourself condemned. We would like to tear up the various costumes that you have worn. It is your august presence beyond these costumes which we seek and touch through these appeals. Even if other people do not understand this, we are sure you will. Some might think that you did what you did as a simple, human individual, but did you think that? Did you not live your life in the belief that in your state of complete surrender it was god alone who worked through you, that it was all god's doing? If not, would you have determined to die if you lost your court case in Chidambaram? He is the one who did it all, and he will also be the one who saves us all. If not, you were prepared to make your death a sacrifice, to actually annihilate yourself, were you not? Isn't that so, Father? Just as that very god goaded you into a life of service, did he not also melt Ramalingasamy's heart and cause him to overflow with grace, and to sing? Had you turned the love you held for Siva and for Saivism to Ramalingasamy as well, might you not have seen god in him, too? That, Father, is what we want to do. That same love, that grand love, that upholds wisdom, that is what we are going to turn for everywhere; we will go everywhere in search of it. 
In twisting and searching for that great light we will see the omnipresent lord himself.
Jaffna henceforth should become the birthplace of the search for that great light. It should become the abode of that inner light, its wellspring. In the cast of that inner light not only Ramalingasamy but also everyone he tried to seek out and embrace, like Buddha, and Krishna, and so many other people, need to be praised and embraced. And not only they but Marx, Lenin and Mao Zedong also need to be realigned from this proper perspective. That is our request and our agenda: we submit these things in your august presence. But what we are looking for in your august presence is not just the usual holy-man 'Navalar' part that you play. We are going way beyond that costume, searching for the one who is always everywhere and in everything, the one who causes all things to act—he is the one we are searching for. In fact it is really him to whom we submit our petitions and our agenda, through you. To a son of the true tradition of Jaffna you are but a means to help in the search for that almighty one.
Father, there is this image that arises in my heart along with my thoughts about you, but is it true?
I really don't know. It is that picture of you that we see in all the books—wearing your dhoti, adorned with a shawl, with stripes of sacred ash all over your body, sitting with an open book in your hand—that's how I see you as well. That may well be the appearance you presented for Tamil and for Saivism. In preparing for the festival, I thought about displaying a picture of you; so I searched all over town—Jaffna town. 'If it were not for Navalar from Nallai town, where would Tamil be? Where would music be?'—in the Jaffna that praises you this way and is organizing a festival in your honour, I could not find a single picture of that Navalar from Nallai town. I went from shop to shop but I could not find you. True, you are not for sale. Still, it seemed a real shame that in shops where I could see pictures of the god you worshipped, your picture was not there. The anger that rose in that disappointed heart flowed out onto a different picture, one which was hanging everywhere: Sri Sathya Sai Baba!
Well, it's unfair to think you might know this character. He is my contemporary. Tight, billowing curls of hair that stand up like steel wool, a long, flowing, red silk shirt, a glistening gold neck chain that hangs down to his chest, and a little smile—and with all this his fair skin. Holy man Sri Sathya Sai Baba! He hangs there in shop after shop. I guess he is the avatar for this kali yuga. Om Sri Sri Sri Sathya Sai Baba! Om Sri Sri Sri Sathya Sai Baba! Om Sri Sri Sri Sathya Sai Baba! If you pray and write it down on postcards this way and mail it to twenty people, you will get everything you wish for. Om Sri Sri Sri Sathya Sai Baba!
But really, I did not write this to make you smile, or for your merriment. I just want to show you how things felt to me when I got mad and disappointed because I could not find your picture. Actually, I neither love nor loathe Sathya Sai Baba. I don't have the mindset that says religious devotion flourishes only if you make sacred ash and bananas materialize out of thin air. So I have listened to the stories that circulate about him, but I have kept my distance. Only time will tell whether or not he really is an avatar—or some other avatar could tell. So, up to that point I neither accepted him nor denounced him. But at that moment, when I could not find your picture, I did, in my heart, denounce him.
Lies, utter lies! Not just him, but those shops that sold his picture, and today's consumerist culture that fosters such shops, all felt like lies and fakes. Lies and fakes!
That's when it struck me. I felt that I saw right then the meaning of that little bud of a smile on that face with flowing hair.
'Lies and fakes—that's me, Sai Baba, too, right? Who dares to have the gold and silks that I do not have? Who encompasses the lies and tricks that god does not encompass? Lies and fakes—that's me, too.'
That was the answer that my heart handed me, given my reading and my experience up to that point. In that state of mind, I felt that I really understood Sai Baba. In the same way it felt like I really understood that in this day and time it is not enough to show off a body adorned with holy ash.
Father, a while ago I asked if you could accept and embrace those dream-lovers clowning around in their sheds and those faction-lovers shouting slogans as they roam the streets—do you remember? Maybe that is something you cannot do. But there has to be someone who can do it. For the compulsions of our times we need someone like that. I am not saying it is Sathya Sai Baba. But through him I have discerned the characteristics of the guru I seek. What we need these days is a saint who can instil wisdom not only in all departments of society, not only in all the labours of society, but even into deceit, theft, prostitution, and all the other illusions, someone who will accept everything, and reform it. Only that kind of an expansive spirituality will be of help in this age. The end of kali yuga must come about in this way.
Father, why has your face darkened? It seems like you have not even read the last bits that I wrote. An avatar of god! Are those words so terrifying?
Dear Father, if the roles you played, and your attitudes, were appropriate to the demands of your time, then they will be appropriate for all time, and they will determine whether your loving face should be forgotten or darkened. Okay, I am not trying to make you accept anything I insist on. We just wish to spread our principles out and dedicate them to your august presence, and we seek your blessings. Your unbounded love will not pass us by without illuminating us. We have faith in that. So now I want to say this as well: We totally believe that god (Siva) will be born as a human avatar. In this sense we are not only Vaishnavas, but Christians as well. You see, Dear Father, could that be the only thing that god cannot do? If we were to define him as incapable of that, would we not be trying to constrict him within our own boundaries?
Ramalingasamy asked, 'How would a name make a difference to madmen, anyway?' Didn't he? We would like to raise this question in a different form: 'For a mad man, what physical appearance would be impossible, or not to his liking?' Father, that is precisely why, as a thoroughgoing Saivite myself, I accept this madman and I worship him. At the same time, as a Buddhist I reject him. As a Muslim, I see him only as the one who is utterly beyond all this crazy costumery. As a Vedantin I call him an illusion, and I call him yet another form within that illusion. And he is universal, I say. And the universe is Brahma, I say. For such a great man, who puts on the appearance of a madman, what could ever be impossible, Father? At the same time, Father, would he not be capable of standing as everything? Have you read the Gita and appreciated it? I like it a lot. In the Gita, god says, 'I will appear to you in any way you wish to see me.' That is the first thing a person who believes in equality must understand. A man who understands that will accept everything, and he will move beyond everything as well. That is exactly what we wish for, Father.
Father, we would like to emphasize one thing here, now, at the end. Please do not jump to the conclusion that we are insulting the principles that you cultivated, or that we are simply laying out before your holy presence those vulgarities you condemned. It could well be that people might come to you and complain about me along exactly those lines. Father, I am just a child of that Tamil tradition that you fostered. It is simply with that understanding, and at the urging of that understanding, that we express all these things. But with the passage of time, our objectives have matured as well. We are trying to spread this tradition as one that belongs to all nations, yet without doing damage to the Saivism and the Tamil that you cultivated. It is not enough, Father, for our tradition to stop with Jaffna. Our problems today are of a different sort. We must be held accountable for every bomb that explodes far away, for every war that is waged, and for every event that occurs. And we must demonstrate how to bring all that to an end. If we do not, our silence will destroy us. It will aid in the destruction of the whole world. All of that has become our own lives' problems today. Given that situation, if we chatter on about praising your tradition, simply apply some holy ash to our bodies and go on with our lives, we will in fact, Father, be killing both you and your tradition. Turning ourselves into trained teachers or graduates, dreaming of going into medicine or engineering, calling ourselves scholars and pundits, thinking mostly about our survival, living for ourselves and our own troubles—that is what the highest token of your tradition has come to, Father. We can't even resolve neighbourhood temple disputes and caste conflicts. We are crippled. With this helplessness and selfishness of ours, your tradition also is dying a crippled death. 
In such a state, leaving aside international problems, when some among us really need to call upon distant gods like Marx and Mao Zedong to address the problems of our very own society, is it their fault? Don't we have to admit that they, at least, do not shut their eyes and forget their own dreams—that they continue in their search? In this state of affairs, Father, we are beginning to search for a way to revitalize your tradition and through it—through it!—to solve not only our own problems but also international ones. This is something you should not lose sight of. Today there are still some oldsters who preserve the past as it was done in the past. They make the claim that they are your direct descendants. Father, do not think that we are competing with them. If we were compared to them, in their eyes, we would be tiny wisps of straw. We admit that. But even a tiny stick of straw, if it receives grace and strength in the presence of a mighty person, can turn to steel, can it not, Father? All that those oldsters preserved could become a background from which to surge forward, could it not, Father? This we surrender in your presence, and ask for your grace. You must give us your grace and your blessing.
We will write more at another time.
With love,
Nachiketan
(Written by Thalaiyasingam under the pen name Nachiketan, just before his death in 1973, having come to know about plans to organize a centenary for Navalar.)
_Translated by D. Senthil Babu_
Thalaiyasingam, ' _Sree La Sree Arumuga Naavalarku Ezhudhum Vinnappam_ ', in Mu. Ponnampalam (ed.) _, Thalayasinkam Padaippukal_ , Kalachuvadu, Nagerkovil, 2006, pp. 771–783
## Oh Driver
#### _Neelavanan_
oh . . . oh . . .
driver drive the cart drive
we're headed for a new town
before the sundown
oh . . . oh . . . driver . . .
in the flowers and gardens, all over the fields
a lovesoaked song
together until our journey is ended
we will keep walking along.
oh . . . oh . . . driver . . .
even before the path disappears
in the sorrowful teardrop sea of the fog
even before the moon's sickened shadow
behind us begins to follow along . . .
Oh . . . oh . . . driver . . .
_Translated by Rebecca Whittington_
Neelavanan, ' _O . . . O . . . Vandikkara_ ' (1970s), in M.A. Nuhman and A. Jesurasa (eds.), _Patinoru Eelattu Kavingargal_ , CreA, Chennai, 1984, p. 81
## Walk
#### _Mu. Ponnampalam_
I'm walking with a friend
an evening of threadbare sun
in front of us the darkness breathing
scattered by the wind's sneeze
the feathers of darkness spread all over the sky
I'm walking with a friend.
the bamboo thatched forest road
nevertheless
human
mouths' lost speech sounds—
a little village, towering
mountains all around
tunnelling through them
feet go floating, by
way of that ever open forest road.
a roaring sound
a bridge, below it drawn
out of the forest the river's little wail
branching off—
the arm-wearying thought of love?
stroking the full-bodied tender shoots
the disembodied wind rolling along!
is the road still growing?
the road buries itself in darkness
yet—
the song of the diverging branches
can still be heard.
_Translated by Rebecca Whittington_
Mu. Ponnampalam, ' _Natai_ ' (1970s), in Mu. Ponnampalam, _Kaalil_ _Leelai_ , Dhwani, Chennai, 1997, pp. 90–91
## Yesterday Evening, This Morning
#### _M.A. Nuhman_
yesterday evening
we were here.
through the crowded streets of the city of Yaazh
through the traffic jam
we went pushing our bicycles.
we stood in front
of the Bhupala Singam bookstand.
we flipped through the magazines.
we were looking
at the crowd of people at the bus stop.
various faces
various colours
coming going
getting on and off
we saw them leaving.
we went walking up to the market
past the statue of Tiruvallavar
crossing the post-office junction
we took a breath of air in Pannaiveli.
at the kiosk
right by the 'Regal'
we drank tea and smoked cigarettes.
we watched
Jack London's
'Call of the Wild'.
in a wind that ruffled our hair
climbing onto our bicycles
we went back home.
so this morning dawned.
in the city streets we had been walking
rifles were roaming in khaki uniforms
bullets were raining down.
boring into bodies
they were drinking souls.
even the bus stop had died
the city lost the smell of humans.
the shops lay burning and smoking
like buildings felled by bullets
the old market lay in ruins
in every street
lay burnt, charred tires.
this is how
we lost
life today.
this evening
we lost.
_Translated by Rebecca Whittington_
M.A. Nuhman, ' _Nerraiya Malaiyum Inraiya Kaalaiyum_ ' (1977), in _Alai_ Journal, Jaffna, December 1977, pp. 239–240
## Your Plight Also
#### _A. Jesurasa_
you may be returning from the beach
or you may be returning home
from the theatre
sudden sound of a gunshot
followed by the sound of hurrying boots.
you, having died,
will lie fallen in the street
a knife will sprout in your hand;
a gun will sprout too!
you will make your name
as a 'terrorist'
no one can question anything.
frozen silence;
but
in the hearts of the people
rage rises.
_Translated by Rebecca Whittington_
A. Jesurasa, ' _Unnudaiyavum Kathi_ ' (1979), in M.A. Nuhman and A. Jesurasa (eds), _Patinoru Eelattu Kavingargal_ , CreA, Chennai, 1984, p. 179
(The backdrop to this poem is the emergency that was declared in Jaffna, in the Northern Province, from July to December 1979.)
## Journey
#### _S. Sivasegaram_
the weakness of daylight the strength of the darkness
the night triumphs once again.
flourishing trees, night drying out in the leaves
scorched, turned into charcoal.
swaying crown of a tall coconut
the ghosts stand shrunk with fear.
you can hear the little beetles crying out
the trembling of frogs' bodies.
the moon stumbles in the sky
falls into the pool of clouds and drowns.
darkness still surrounds.
a long journey lies ahead—
eyes turned blind
struggling feet search for a path
dawn may still break tomorrow
and the feet may move faster
if it's possible to go
just two steps beyond the darkness, I will.
will time stand still and wait for the dawn?
_Translated by Rebecca Whittington_
S. Sivasegaram, ' _Payanam_ ', in M.A. Nuhman and A. Jesurasa (eds), _Patinoru Eelattu Kavingargal_ , CreA, Chennai, 1984, p. 170
## Hope
#### _V.I.S. Jayapalan_
like the sorrow
of a koel bereft of its lover
gently gently
the river seeps.
the _varaal_ fishes jump
gasping for breath
among the reeds set dancing in the wind.
a summer evening.
next to me
on the warm white sand
I see
lying drying
rinds of banyan fruit
and five or six little seeds,
even though
somewhere far off in the distance
in a sweet voice
a Vanni boy
is singing of rain.
_Translated by Rebecca Whittington_
V.I.S. Jayapalan, ' _Nambikkai_ ', in M.A. Nuhman and A. Jesurasa (eds), _Patinoru Eelattu Kavingargal_ , CreA, Chennai, 1984, p. 186
## Seashore
#### _V.I.S. Jayapalan_
the girl of time draws
with sand on the seashore a poor girl.
before her the sea stretches out
behind the ancient sea
the sky continues
beyond the sky
she stands still to follow it with her eyes.
a fence of screw pine trees
in the distance a little hut;
inside the hut
a little child sleeping
in the rowboat dancing
in the deep sea
the surface wind comes carrying
the scent of screw pine flowers
winds, big winds
winds with pitch-darkness
the pitch-darkness trembling
many miles of sea swelling.
that night
of hands joined in prayer
at every little stone shrine
cannot be so quickly forgotten.
even after conquering the rolling
seas and bringing riches
this little hut,
two fistfuls of rice,
by the grace of the boatman
a twisted thought
a sigh.
the morning star only shines
in the sky
in life, it's just dark.
_Translated by Rebecca Whittington_
V.I.S. Jayapalan, ' _Kadarpuram_ ', in M.A. Nuhman and A. Jesurasa (eds), _Patinoru Eelattu Kavingargal,_ CreA, Chennai, 1984, p. 190
## Unsung Songs
#### _Shanmugam Sivalingam_
the moment the flowers have budded
they wilt and fall.
the instant after conception
abortion.
even though young shoots
sometimes form stunted
all of a sudden
they are cast out
the stench of blood drifts in.
inside the egg a chick
dies with its just budding wings
but still
you tell me to sing.
in the street the corpses stink,
when the bullets broke the lock
the white doves fell head down
wings broken, and lie cruelly curled
the boys leave without telling us
they tell us to look for corpses on the shore
they tell us the corpses heaped on the shore
are the ones that were dumped in the sea
but still
you tell me to sing.
blood-curdling songs
songs of rotting corpses
songs of darkness
hanging overhead
like pitch-black smoke
without being aborted
without being stunted
without being cast out
a time will come
rending the heart
stumbling in the throat
exploding on the tongue
ask then—
not now.
_Translated by Rebecca Whittington_
Shanmugam Sivalingam, ' _Paadatha Padalkal_ ' (1984), in Shanmugam Sivalingam, _Neer Valaiyangal_ , Tamizhiyal, Chennai, 1988, pp. 112–113
## Lankapuri Raja
#### _Piramil_
It was that unearthly hour after midnight, and Gopalakrishnan found himself awake—12:49 a.m.
The electronic wall clock, burning in the gleam of the night-light, announced the death of 1984 and the birth of 1985. The two dots between the 12 and the 49, like drops of blood one above the other, seemed to quiver on the point of disappearing.
Padmini and Abhiraman. The child Abhiraman had not even made it past the age of five when Padmini passed away from complications due to the miscarriage of her second pregnancy.
Thirteen years had already gone by, and like Padmini, Abhiraman had also vanished—but not into the realm of death. If that had been the nature of his disappearance, his memory, like Padmini's, would have merely brimmed up and subsided. By now Gopalakrishnan, a retired government surveyor, would have lost himself deep in his heaps of private land records.
Abhiraman was now barely seventeen. Once, while he was still a little child, Gopal and Padmini had been forewarned of this recent disappearance.
The incident took place in the town of Lankapuri in the Sinhala forest region—the memory hit Gopal with as much force as if it had happened yesterday. That day, while he was playing with the Sinhalese village children in the entrance to the survey tent, Abhi had dubbed himself Raja, King. According to the villagers' traditional belief, the Lankapuri Raja was an extraordinary elephant that reigned over this forest region. Elephants often came to the village attracted by the sugar cane fields, but none of them could withstand the villagers' intimidation tactics. The Lankapuri Raja was another matter altogether. He was never perturbed. Rising up on the horizon, he would stand towering above the sugar cane fields like an unshakeable mountain. Even so, he never ate more than a lorry-load of sugar cane, and he didn't often put in an appearance.
When the Raja did appear, either out of time-honoured custom or to test his authenticity, the villagers would set off firecrackers. At the sound of these firecrackers, any other tusker would turn tail and lumber off at top speed, but if it was the Raja himself, he would turn his magnificently curved, long tusks towards the noise and send an inquisitive look in the direction of the explosion. The villagers, taking notice, would crowd around to behold the king with fearful reverence. For all that, people took any smoky form far off in the distance to be the Raja. But for the Raja, the fireworks were a kind of ritual binding him and the village.
Padmini had shown the Raja three times to the child Abhi, and in certain peculiar mental states, Abhi had to be addressed as Raja, or else he would pick up and fling about anything and everything that came to hand. It was more or less this mental state that had seized Abhi on the day in question, and the incident that followed stirred up the whole village.
Abhi, who had been playing with the village children, strutted off in his shorts and vanished from Gopalakrishnan's sight. When Gopalakrishnan asked, 'Where's Abhi?' the children said, 'Raja has gone into the forest.'
Gopal was struck by the solemn tone in which the children said this. He came out of the tent, and his child was nowhere to be seen. Gopal crossed the road in the direction the other children were pointing and called out, 'Abhi! Abhi!' On the other side of the road, the densely thicketed forest began abruptly. The crowd of men and women that had gathered at the sound of Gopal's voice plunged into the thicket and spread out in all four directions in search of the child. Padmini stood terror-stricken in the road. The old Sinhala village headman Charles Udavatta admonished the crowd: 'Quiet down, don't make a racket.' Once he had imposed enough silence so that only his own voice could be heard, he called out in a mild voice, 'Raja!' At once, Abhi's shrill voice sounded out like an elephant's trumpet. Abhi was sitting beside the path that led to the sugar cane fields, endeavouring to extract, without tears, a thorn that had pierced the sole of his foot.
Now, twelve years after this event, Abhi had disappeared again. He had not bothered to get in touch with either his high school or his friends to find out if he had been promoted to the next grade.
The Sri Lankan political problem, which had reared its head many a time between father and son, seemed to be a clue to Abhi's disappearance. Abhi's view that the Sri Lankan Tamils could attain liberation from Sinhala military atrocities only through armed violence seemed to Gopalakrishnan just that of an unruly child. Abhi seemed to forget all about Lankapuri, the Sinhala forest village where he had grown up. After his mother's death in Colombo and his father's transfer to his hometown Triconamalai, the Lankapuri Raja that had once loomed so large in his thought was no longer part of Abhi's consciousness.
The childhood memory of the extraordinary experience of beholding the Raja had remained imprinted in Abhi's heart until the age of thirteen. Then one day Abhi asked his father how his mother had died. He didn't believe the story of the miscarriage; he declared: 'Amma was killed in Lankapuri village.' Gopal, disconcerted by this statement of Abhi's, pulled out the pile of papers containing Padmini's death certificate and slapped it down in front of his son, shouting, 'Look here, see for yourself.' But how many mothers, how many women had been disgraced and killed—when Abhi launched this next weapon against him, Gopal had nothing to say in return. This tender thirteen-year-old heart had seized up and turned to stone before it even ripened.
Gopal looked up at his son. Even at thirteen, Abhiraman, on his way to becoming the star soccer player in the eastern province school system, sported the build of a seventeen-year-old. His brows often furrowed in a secret sorrow that lent maturity to his childish eyes . . . A very close friend of Abhi's, Chandirasekaran, was a member of a Tamil family in Rattinapuri ravaged by Sinhala fanatics . . . His was one such tender heart that hardened into stone before it ripened on seeing his mother torn apart before his very eyes. This stony quality must have spread from Chandirasekaran's heart to Abhi's.
Gopal began to sense that, even at the age of thirteen, Abhi must be working out in some secret physical exercise programme. He witnessed in anguish a distant gaze brewing in Abhi's eyes by the age of sixteen. The child Abhi had at some point abruptly become a young man, and still Gopal could not see through to the bottom of this distant gaze. Shooting up unusually fast, as if goaded by his physical traits, Abhi had already reached his full adult height at sixteen. Then one day he took off on his bicycle, saying he would only go as far as the stadium and back. He never came back.
Even police Inspector Jayatilake said to Gopal as he searched Abhi's belongings for evidence, 'Don't get angry, Surveyor—if your child just had an accident somewhere, we'll be glad to let you know.' With that, though, he pulled out a number of books from inside Abhi's pillow, and placed them in front of Gopal. Amilcar Cabral, Frantz Fanon, Nelson Mandela—who were these writers?
'I'm no bookworm,' said the inspector. 'But I do have information about certain matters. These people are not writers. They're armed revolutionaries. According to our information, terrorists. All three of them Africans.'
Gopal did not rise up, but his eyes flared. His teardrops fell onto Cabral's bright smile, dampening the cover of the book.
At this point, Inspector Jayatilake, who had come across so amicably at first, began to lose patience and revealed his true colours. This policeman, who was supposedly trying to help Gopal find his son, now seemed to be asking, indirectly, 'Where is your son hiding?'
During that evil hour, Jayatilake appeared twice in uniform, his revolver drawn, with a jeepful of armed police behind him.
'As far as we are concerned, Abhiraman is not a child, Surveyor. He's a terrorist of the first order who trains directly with a character named Charles Anthony, here in Triconamalai. He would only come to you if he were mortally wounded. Recently there was a gunfight between the police and a gang of terrorists, so we came to find out what's going on.'
During this speech, Gopal stood aloof. After the policeman left, it took him two days to put the house back in order.
Now, there came a knock on the door at this unearthly hour of the morning, and he was reminded of the inspector's visit. But this was a different knock, a stealthy sort of knock. At first, Gopal heard the knock in his sleep and it woke him up, but once awake, he forgot why he had woken up. When the knock came again, he understood that this was what was waking him up. 12:49 a.m.
It couldn't be Abhi, or it must be Abhi. He would come home only if he were mortally wounded. Gopal, his body and mind trembling, got up and opened the front door without putting on the light. The realization that it was not Abhi produced at the same time a deep pain and a deep relief. Who was it?
In the darkness, all Gopal could see were the sharp eyes of a shortish figure.
'Who is it? What do you want?' Gopal asked in Tamil. The figure laughed strangely.
'Same old Gopal Mahatmiya, same old voice. I wish you a happy new year.'
Gopal put on the light. There stood, bag in hand, the old Sinhalese village headman of Lankapuri, Charles Udavatta. For the first time in twelve or thirteen years, Gopal encountered a drop of Lankapuri. He said, 'Come inside.'
Udavatta looked Gopal up and down, and said, 'Where's Abhirama Raja?'
Gopal said, 'First sit and rest a little. What are you doing here? On your way to Ceruvavilai?'
'That too, but I don't have much faith in that place. All that is a Sinhala trick to take the Tamilians' land. I've been to Bodh Gaya,' said Udavatta, mixing Sinhala and Tamil. Then he said in Tamil, 'I came mostly to see you and Abhi. Where is he?'
Gopal, hospitably sitting his guest down, and steadying himself at the same time, said, 'I've sent Abhi to study in Madras.'
'You did well. This Sri Lanka of ours is disintegrating. In fact, you could have gone with him. I knew all along that all this would happen.' With that, he looked at Gopal significantly. 'You know about the Lankapuri Raja, don't you? All the signs were there.'
'I don't understand,' said Gopal. 'This New Year, and the one before that too, there's been a curfew. Did you get off the bus and come straight here? How did you get my address?'
'That boy Piyadas who used to work for you, he's my son-in-law now. He fell in love with Nalani and married her, and they have a seven-year-old daughter. Nalani named the girl Padmini. And you ask how I got your address!' Charles Udavatta bit into the biscuit Gopal had served him. 'The world can go to rot, but you and I have stood shoulder to shoulder before the Lankapuri Raja. Don't you know that the Raja has reached the great nirvana?'
In plain speech, this meant that the elephant called Lankapuri Raja had died. But, for the Lord Buddha and for all those who attain the status of _arhat_ , death means reaching the great nirvana.
Gopal set the drink in his hand down on the table. 'Was the Raja that old?'
'Old? That elephant could have lived another five hundred years, such an awesome strength he had! He uprooted and hurled down a couple of hundred trees, a whole jungle in fact, in the course of a few sleepless days, before he disappeared!'
'Why? Had he gone mad or something?'
Charles Udavatta glared angrily at Gopal for a moment; the next moment, he burst out laughing. 'Mahatmiya, I told you, this was no elephant. He was more of a man than any man. How could he go mad? Let me tell you what happened—then you will understand why Sri Lanka is falling apart like this today.'
After a pause, Charles Udavatta began: 'You know Lalit Adulat Mudali, who's unleashing the Sinhalese army on the Tamils nowadays? He has an older cousin-brother, Cyril Tissanayakka. This Cyril took out a government contract to catch elephants, so he came to Lankapuri and set up a tent. None of this came out in the news.
'We villagers tried so many times to tell Cyril—don't go trying to catch elephants here. We said everything we could think of to make him understand that the Raja wouldn't allow it. Half the people in the village started to leave.
'But Cyril Tissanayakka didn't budge. One time he took a big rifle from under the table, aimed it at me, and shouted rudely, "Clean this out, old man." " _Chi_ , what kind of a man are you?" I shouted back at him. I went straight to Nalani and Piyadas and arranged to send them to Colombo, but I didn't go myself—a few of us stayed put in Lankapuri.
'In the meantime, the elephant catchers spent thousands of rupees chopping down gigantic trees and planting them in curved rows in the forest, so that if the elephants went inside, they wouldn't be able to get out. What could we do? The elephant catchers crushed all opposition from us Lankapuri folks. When we told them about the Raja, they said, "He's just a beast like any other that stands there in the fields and eats the sugar cane, isn't he? We'll catch him too, and then you can grow your sugar cane in peace." They told us all kinds of barefaced lies about Cyril's rifle.
'Within a month, the drums began to sound. Catching the scent of the elephant herd, the catchers stationed kettledrum players here and there to beat their drums, and the rest of them stood in rows in the bushes and made a racket to drive the elephants out. Sure enough, according to Cyril's plan, the elephants went running into the cage and stood there trapped by the huge arches of that cage. The elephant catchers explained their method: they would trap the elephants, leave them there to starve for a while, and then tame them with food. But the Lankapuri Raja was not among the herd—only the villagers knew that.
'That night, the elephants started trumpeting. After half an hour or so, a sudden silence fell. The elephant catchers were stumped by this silence—according to their calculations, the elephants should have kept trumpeting for days on end until they wore themselves out. In the middle of the night, the men who were guarding the elephant cage started shouting and came running back to town like wounded dogs. We managed to make out from their babbling that a gigantic elephant, an elephant as tall as two elephants, was standing outside the cage, uprooting the tree-arches and hurling them away.
'Far off on the edge of the forest, you could hear the sound of a mountainous form angrily gnashing its teeth—the sound of trees snapping. Some of us who knew our way around the forest took an alternative path to see what was going on.
'Dark in the darkness, the Raja was breaking down the elephant cage. He had to reach the central part of the cage and break four or so of the arches, each of them a giant tree that had been felled and shifted by means of gigantic machines. When we came to look, the first arch had already been broken down to the breadth of two elephants. It was three o'clock in the morning and the Raja's anger was storming in the wind. We got scared and ran back home to the village.
'At first, Cyril Tissanayakka didn't bother his head about this. He figured that even if the elephant broke one of the arches and went inside, it would get trapped in the arches of the tree-fence that had been driven in on both sides and be forced into the centre of the trap. That was what his experience told him.
'A couple of days went by. The sound of snapping trees stopped short. Nobody dared to go into the forest to scout out what had happened. Now the elephants started trumpeting again. Cyril, assuming that the giant elephant must have gotten trapped, started up the machines again to rebuild the broken fence. It was time for one of the machines to start picking up the huge whole trees that had been flung haphazardly here and there. Eleven o'clock. Suddenly, a trumpet very close by. Cyril Tissanayakka got out of his jeep and stood there, watching. The Raja burst out from some hiding place in the thick of the trees and came running towards the machine. Cyril ran and climbed into his jeep. The Raja came charging with lowered head and thrust his tusks right into the track wheels of a German machine built like a battle tank and as big as a house. The Raja lifted it up and tossed it on its side with one kick. With a meaningless roar, the machine grotesquely toppled over onto its back. Then the Raja turned and headed straight for Cyril Tissanayakka. Cyril couldn't get the jeep to start, so he grabbed the rifle beside him, jumped out of the jeep, and took off running. At that moment, we understood why the Raja had stopped halfway in breaking down the trap: his real aim was to destroy those who had caused it.
'Cyril saw the Raja heading straight for him. Planting a knee on the ground like a soldier, crouching and aiming the rifle at the centre of the Raja's forehead, he pulled the trigger twice. The rifle sound burst out with a crack like stone splitting. Instantly, two red spots appeared on the Raja's forehead. Cyril straightened his rifle and stood up. Like the machines, his rifle had been specially ordered from Germany; its bullets could hit a target a quarter of a mile away, and it was fitted with a telescope—so Cyril Tissanayakka stood up, sure that the bullets had pierced into the elephant's brain. At this point the elephant should have folded his knees, slumped down, and fallen to the ground. But that idiot Cyril didn't have the sense to realize that this was no elephant. The bullets that had entered the Raja's brain didn't slow him down in the least; in two steps he reached Cyril's side. Cyril's arms and legs grew numb; only his mouth screamed out in fear. The next moment, destruction. A week later, Cyril Tissanayakka's relatives gathered him up in two plastic buckets and took him away.
'Once he had Cyril destroyed, the Raja never broke his stride, but headed straight for the trap. Our hair stood on end as we watched from our hiding place. The Raja began to break down the trap again, tree by tree, and hurl the trees aside. Each tree collapsed under the Raja's assault in the space of two or three minutes. Streaming blood from the spots in the middle of his forehead, the Raja continued his battle. We watched for a long time, and we began to get hungry and thirsty. The elephant catchers had long since taken to their heels, and only we Lankapuri folks were left. We turned back to the village as the evening darkened. The cruel sound of trees snapping could be heard all night long. In the morning, we got up and quickly gulped down some bread and bananas, and then went out again to watch the Raja's tireless struggle. By now, his entire forehead was one cloud of blood. The point of one tusk had cracked off. At the place where the tusks entered the mouth, the blood had blackened. I broke down in tears.
'That day I stuck it out there until evening without even drinking a drop of water—but I didn't even feel the time pass. Once the Raja had gone straight to the centre of the cage, where we could no longer see him, we went back to the village.
'That night I woke up to the sound of the Raja's trumpeting. A terrible pain like two tusks seemed to shatter my head from inside my skull. I thought the trumpeting was some sort of hallucination, so I put a wet cloth on my forehead and tried to go back to sleep. My body was fiery hot, and began to tremble. Fever. I didn't even have Nalani nearby—somehow or the other I got back up and made myself an herbal decoction, then lay down again. That day the year 1982 ended and the year 1983 began. I know for certain that at the time I heard the trumpet, it was exactly twelve o'clock. My wall clock was striking twelve at that very moment. It was the timeless interval dividing two days, two years, two epochs. The Raja's trumpet rang out and disappeared into that empty space.
'The next day I stumbled over to look at the elephant cage. The tuskers and the female elephants and the baby elephants were all standing scattered outside the cage. I wandered into the midst of the trees and stood at the entrance to the broken cage. All four fences were broken down in one uniform line. The herd of elephants trapped inside had come out and was standing in the open, but the Raja's body was sitting there in the shade of the fence with its tusks pointing up toward the sky, like a statue of an excellent god. His right tusk was shattered, and his tusks and his whole head were one cloud of blood. My own body too lost its strength all of a sudden and I collapsed. If the village boys had not carried away my unconscious body, I would have died peacefully then and there. After that, a new and terrible epoch began. July 1983. I was witness to the rowdy armies of both Cyril Mathew and Gemini Tissanayakka tearing the Colombo Tamils into tiny shreds. It was my fate to be in Colombo then, to see those demons face-to-face. With the same eyes I took to Bodh Gaya for _darshan_ , I was forced to see the full form of human cruelty. That's divine logic.'
Though Udavatta's voice had not once broken, from time to time his face crumpled. Tears went streaming from his eyes, ascending and descending the creases in the flesh of his face like the irregular surface of a craggy landscape. Gopal, too, however much he tried to contain his tears by straining his forehead, he could not. Looking reverently at the shattered face in front of him, he started to say, 'I didn't send Abhirama Raja to Madras. In reality he . . .'
Udavatta interrupted, 'I knew that even before I knocked on your front door.' He rubbed his face with his hands. 'I'm not crying only for the Lankapuri Raja. When I got off the bus, the police were stopping everyone and interrogating them before sending them on their way. As soon as I said your address, I alone got the royal treatment. They took me straight to the police station in a jeep and took down everything I said. One inspector told me about Abhirama Raja in a roundabout way. He asked me to spy on you and find out about him, and they brought me here in a jeep. This is why we call these Sinhalese morons. They've sent me to spy on you. I have only one thing to say to you: I've been on a pilgrim's journey to seek out everyone I know, pin them down, and tell them in person the story of the Lankapuri Raja.
'Abhiraman was born in Lankapuri, and his own journey is not at all far from the Raja's dharma.'
_Translated by Rebecca Whittington_
Piramil _,_ ' _Lankapuri Raja_ ' (23 June 1985), _Tinamani Katir_ , Chennai; in K. Subramaniam (ed.), _Piramil Pataippukal_ , Adaiyalam, Puthanatham, 2003, pp. 101–111
## In the Evenings
#### _Sivaramani_
in the evenings
all burdens grow heavier.
when light and heat
inescapably chafing
against each other
on the dead daytimes
disappear
like words scrawled on a slate
and wiped away without a trace
I count my breaths
as I let them out—
not only to pass the time.
beside the light
the winged mites were falling down dead, one by one.
what should I count—
the winged mites?
or the stars that give out
unelucidated meanings
like the eyes
of the fallen?
I don't know the truths;
to spot the lies
in this darkness is not an easy task—
but
I can't ask my younger sister
who is studying for a test:
look for meanings in the things you do—
on the whole
everyone is in some sort of hurry.
for me
only memories remain.
outside,
the shadows of trees
that stand in tensionless silence
are torn down.
when dogs bark
suffering and tense
in the street
at the time when everyone goes to sleep
after checking the locked doors
one more time
I
cannot think
about the sun that will appear tomorrow.
to me this night is significant.
this darkness
where yet another friend
might be lost
like yesterday
is worth much to me.
_Translated by Rebecca Whittington_
Sivaramani, ' _Maalai Nerangalil_ ' (1989), in _Sivaramani Kavithaigal_ , Women's Study Circle, Batticaloa, 1993, pp. 39–41
## I Don't Have the Words
#### _Sivaramani_
I
don't have the words
to voice beliefs and solutions
like a pamphlet.
night;
day commanded by the night;
to me who doubts even
that tomorrow morning
the sun will rise
dreams
have lost their meaning.
when guns are thrust
at society's birth cord
the dream of a butterfly
that might sit
on the soft edge of a flower
is nothing to me
but an irrelevant occurrence.
in my efforts to live as a human being
I would like to leave the flowers on the trees. to me
the beautiful night given form by day
is a dream.
_Translated by Rebecca Whittington_
Sivaramani, ' _Ennidam_ ' (1989), in _Sivaramani Kavithaigal_ , Women's Study Circle, Batticaloa, 1993, p. 38
## My Lineage and I
#### _Sivaramani_
in this darkness
that is searching for everything
it is now certain there is absolutely nothing.
in the space crossed
by all the lines of descent
behind me
even I am left behind.
in the expanse
where heaven and hell
have been effaced
my feet have sunk
in unfathomable mud.
everybody
bears their own coffin
but eats their meals too
every mealtime.
even the space, the time, and the teachings
of the gods' messenger, the preacher
and the prophet
have been effaced.
no one has
anything like the joy
that might uplift
our stooping
times.
in an extraordinary effort
to bring everything
back to normal
among the sleeping and the dying
with my beliefs
I
am failing.
_Translated by Rebecca Whittington_
Sivaramani, ' _Enathu Paramparaiyum Naanum_ ' (1989), in _Sivaramani Kavithaigal_ , Women's Study Circle, Batticaloa, 1993, pp. 42–43
## Place: Jaffna University Canteen; Time: 4.30 p.m.
#### _Sivaramani_
like a lonely
little railway station
ignored and without passengers,
in the midst of everyone
between the compound walls that
rise along with laughter
one evening . . .
I was talking
with my friends.
the times
we like to be happy
in the middle of many wounds
not worthy of mention
without words to speak with
tapping out a forgotten song,
a friend and his fingers,
drops of tea
scattered on the long table . . .
the flies get caught
in cobwebs up there . . .
shrugging her shoulders my friend
laughed inside herself
who cracked the joke?
I don't know
the clouds moving slowly
over the glass roof-tiles
along with them the hour and minute,
the table and chairs
left standing without a trace of scent—
leaving the empty cups . . .
coming in through the door
the rays of the western sun
were chasing us
we got up—
not to change the world,
just heading for another night.
_Translated by Rebecca Whittington_
Sivaramani, ' _Thanithu_ ' (1989), in _Sivaramani Kavithaigal_ , Women's Study Circle, Batticaloa, 1993, pp. 46–47
## Summer Scorches Day after Day . . .
#### _Su. Vilvarathinam_
summer scorches day after day
even the water in the deep well has dried up
the bucket goes in, comes up empty
summer scorches day after day
not a spot of cloud in the sky
the bald trees suffer, extending their fingers into arid space
like men of withered dreams
summer scorches day after day
look at men wandering
shadeless shadowless
their footprints lost
crossing the path of my eyelids
in all directions the eye throws out
there are only hands stretched out to offer mirages
not one hand bears them
water, life
summer scorches day after day
fires burn all around
self-immolations
raw flesh
raw feelings
dreams of rawness
burn; charred
the very earth burns to a corpse
in the mirage
the shadow of fire falls
on the future—
glimmering like a shed snakeskin . . .
summer scorches day after day
in the scorching summer sun
even birds' shadows suffer
beneath bald trees
humanity is hunched, crouched
in the monstrously extended desert
its life parched
by the long-unbreathing wave of wind
is there anyone
waiting
though the heart be wrung out
for a drop of life-water
though the back be broken
for vertebrae to be handed out as crutches
on this long-drawn-out desert road
is anyone there
anyone at all . . .
_Translated by Rebecca Whittington_
Su. Vilvarathinam, ' _Oru Paalaiyin Kural_ ' (1989), in _Uyirtthezhum Kaalathirkaga_ , Vitiyal, Coimbatore, 2001, pp. 157–158
## Time Will Write a Song for You
#### _S. Ranjakumar_
Today, the first bath in many days. Today's sun rose with a new look. Strange, without a hint of the overly warm dawns you expect in the month of Cittirai, but rather as if giving off just a touch of coolness. A coolness that creeps in, among the roots of our hairs, tingling.
In the early sunlight, unfurling their long, dew-strewn leaves, the tobacco plants emerged into view. Water pumps were spitting out a rapid, heavy flow of water. The sweet, heady pungency of tobacco mingled with the smoke of burning kerosene; they blended into a single fragrance.
The idea came to Arul first. He got up suddenly and went over and squatted down, without even taking off his shirt, and let the water pour over his head. Everyone turned and looked questioningly at Konamalai. Why were Konamalai's eyes so red all the time? Anger and pride always seemed to be gleaming between his tightly sealed, thin lips. His face had a strange, hard sheen, like black granite.
This morning Konamalai laughed with those red eyes of his. With fatherly tenderness, he watched Arul sticking his head into the cascading stream of water like a little kid.
'One gets to bathe, at last.'
_Appa_! What voice is that? Coming from someone who hardly spoke, there was great emphasis in every syllable.
Michael sat down.
Konamalai, Kedari, Periyannan, Perumal, Yosef, Anbarasan, then him . . . they all headed for the roaring water pumps.
Michael just sat there, waiting to collect the tactical information. People looked at them once. Then, shaking their heads, they continued on their way. A little boy, around ten or eleven, came running towards them, shaking a soap box. He stood there with his hand stretched out. They kept washing themselves without paying him any heed.
Konamalai looked up. The boy's eyes were pleading, 'Take it . . . Take it . . .'
'Arul, get the soap.'
Arul began to grin through the soap suds foaming all over his head, navel, thighs, the soles of his feet. Arul was something of a joker. He told him all his secrets. One time he and Arul doubled up on a bicycle to go somewhere on an urgent errand. As they rode past a house rising sombrely behind a stretch of high land bordered by balsam trees in bloom, Arul gave him a thump in the ribs.
'That's my girl's house!' he whispered into his ear.
'. . .'
'What's it like . . . ?'
'A house . . . It's not bad at all . . .'
'You haven't seen her . . . if you'd seen her you'd know . . .'
'. . .'
'Hm . . . If everything ends well . . . If I'm even alive . . .'
'If . . . ?'
'Hm.'
He turned and looked up at Arul's face. For a second he saw all the splendours of the world flowering in Arul's tense face. He let out a sigh for Arul.
Only a moment!
Then Arul changed. The old tension in his eyes came back. He started pedalling the bicycle at high speed. A real hard worker!
A long time later. Today, a rare feast. From what a gifted woman's hands! Fragrant cooking. Scooping and slurping up the food, Arul looked at him with a wink and a grin.
Every woman's cooking has its own taste. Even so, no one can match the taste of a mother's simple cooking. If Amma gives you plain hot water, it has a special taste. When the gentle darkness gathered, Amma would come home by way of the temple. It was all those gods she worshipped who forced her into her white widow's sari; then they watched her in spiteful silence. Every evening, with the back of her white sari wet with the water dripping from her hair after she doused her head and pulled up her damp hair, Amma would set out looking for Amman temples. When she came home, a whiff of camphor would come from Amma's body. As if Amman had come to infuse herself in Amma's persona, terrible and beautiful to see.
Amma didn't eat any meat or fish. After dark, she lit a stove of baked brick and cooked her food separately. How is it possible for Amma, who ate very little, and only once a day, to bustle about attending to so many tasks! Overcooked, starchy rice gruel and a thin curry with some lightly fried vegetables. Without fail, a crunchy _appalam_.
Amma could cook a meal in a second. He would wait with his mouth watering. Amma was about as tall as his shoulder. She had the colour and the coolness of a sliced cucumber. She must have been a great beauty back then. Is that why she had so many children? Even at this age, with her skin dry and her pace slackened, Mother's eyes give out a bright light.
He would get up and stand proudly, looking at his mother with a tender smile. Mother would throw back her head and daub sacred ash on his forehead.
' _Ammale_! . . .' Whenever he heard his mother's intimate, entreating voice, he would simply melt. And the lingering scent of burning camphor too, warm and moist—when mother touched his forehead, he would thrill at the scent that filled his nose. Mother's breath would touch his chest for a moment and move away. He would immediately be seized with hunger!
Sulochana Akka cooked in a hurry. She was in a hurry with everything. Whatever did she see in her dear husband? She would swoon like a snake at the sound of a snake-charmer's flute. Akka turned out so many dishes like so many children. If she put too much salt in one, another would have no salt at all. Only fish curry did she make properly—everything mixed together to give birth to a truly unprecedented taste.
He couldn't even remember the last time he went to Sulo Akka's house. Her husband, muttering to himself, would suddenly turn his face away and leave. He was a proper water buffalo. When he wasn't playing cards or slogging away at work, what did he do with his time except drink? He was good at making a mother out of Sulo Akka, though, every year, without fail. Sulo Akka had become like an eggplant, her eyelids cracked and her chest shrunk. Her husband was mumbling something or the other. Let him go! He only came to see her, anyway.
'Unnecessary problems for us . . .'
On his way out, her husband started to cackle. Akka bit her tongue and dragged him into the kitchen. She would not let go of his hands; she kept gripping them tight.
'Did you eat yet . . .' she asked, eager to nourish him. Not finding the strength to look up, he sat down silently. Akka briskly set out the food. She started to put morsels of food into his hands.
He could hear Akka sniffling.
'Amma quit going to the temple . . .'
'. . .'
'Amma has even given up going to the temple . . .' In the strength of her emotion, his sister's voice broke and started to squeak.
He shook off her hand and abruptly turned to leave.
'Wash your hands before you go . . .' Akka called out in a tearful voice. Unwilling to turn and look back, clenching his fists tightly over his palms, he went out as if he were ready to punch the wind.
He could hear the sound of Akka coming up behind him, calling out in a surge of emotion. He felt like plugging up his ears.
'From this moment on I won't go into any house,' he swore to the wind.
One time they broke down the big bridges on all four main roads.
Konamalai separated the men into groups. He got the north road. In a panic, people rushed to fill in the pits, using tractors to bring in dirt and gravel.
His watch! He stood there ready, looking all around him, an agile man with a big responsibility. The vans and buses reduced their loads and descended slowly into the pit.
Somebody gently took hold of his elbow.
He turned to look.
Sundari Akka!
Limping Sundari Akka! Sundari Akka, who caught the bus at sunrise to go to work. Would Amma be standing there still, staring anxiously at the doorway? If it weren't for this lame leg, wouldn't Indiran, Chandiran and all the other boys be waiting in line behind Sundari Akka? She had a set routine: she caught the bus at sunrise, went to work, and got home after dark.
Sundari Akka, standing there staring at him . . . He pretended to be distracted and looked off somewhere else. The bus was slowly making its way down into the lowland. Sundari Akka came and stood even closer to him. Opening her handbag, she took out a few notes and stuffed them into his pocket. He stuffed them back into her hands. She looked at him helplessly.
'Keep it, man . . .'
'I don't need it . . . I don't need anything . . .'
Sundari Akka looked him up and down. From his matted hair to his feet that had wandered in the sun and the rain, he was covered in red dust. He had tied his lungi so high his thighs were showing. The shoulder seam of his shirt had come apart.
'Couldn't you at least buy a shirt . . .'
'. . .'
'Keep it, man . . .'
'There . . . Look . . . The bus is leaving . . .'
She had big eyes, Sundari Akka. She looked at him intently. She looked at him, wishing she could take him captive in her eyes. Two diamond drops slid down, dampening her cheeks.
Turning back time and again to look at him, Sundari Akka left. Limping, the last person to board, she got on the bus. She stood looking at him through the back window. He looked that way as if by chance.
He saw Sundari Akka's big eyes filling the entire back window. He suddenly turned the other way.
'I won't give my feelings any room,' he swore to the wind.
Today, extraordinarily, everything looked new. The cool wind embraced his freshly bathed body. He looked up at the sky. The brilliant blue sky of the month of Cittirai, with bales of cotton floating in it, was not to be seen today. Rain was in the air and the wind felt dense. Dark clouds were spreading slowly from the east.
Michael gave out the information.
'They said the stuff is coming by the south road . . . We are to go and take it over midway . . .'
Konamalai got up abruptly. Excited, he gave several orders in succession.
'Right, the south road! Don't worry . . . We don't need heavy arms . . . One per person is enough . . .'
'Perumal, take the van . . . It looks like it's going to rain . . . We have to transfer the stuff without letting it get wet . . .'
'Yosef, you stay here . . .'
Perumal was a good driver; he'd studied every inch of the roads. Perumal could fly along roads with mines and potholes, turning sharply, without shifting gears.
Konamalai jumped up next to Perumal; next to him, Periyannan.
Him . . . Kedari . . . Arul . . . Anbarasan . . . Michael . . . they crowded into the back. He sat next to the door. Except for Perumal, everyone had 'the thing' clenched in their palms. They were ready.
Stirring up red dust, Perumal took off. The sky let out a roar.
A little ways ahead, they could drive up onto the main road. They had to make a sudden right-angle turn. Perumal got ready to shift gears. At the crossroads, a crowd of people were standing in the shadow of an old banyan tree. Perumal slowed down.
Their eyes were dazzled by a flood of light. A bolt of lightning came blazing down across the sky like a creeper. The sky roared again with a vengeance. Perumal hesitated. He stopped.
There was a crackling sound of something snapping and falling. People moved away, dispersed. One of the big branches of the banyan tree cried out 'Oh' and hit the ground.
Confusion spread over people's faces. Konamalai said, 'Go ahead.' Perumal planted his foot on the clutch.
A shrivelled old man, with his mouth dripping saliva, came in front of them, waving his hands.
'Sons . . .'
Konamalai looked at him as if to say, 'What?'
'Banyan trees aren't supposed to break and fall like that, sons . . . It's dangerous to travel now, sons . . .'
Arul let out a mocking laugh.
'We're in danger every second, Thatha!'
The old man looked at them with pity and regret.
Perumal easily made the turn up onto the main road. Once again lightning came crashing down on the horizon. The wind turned heavier. What is this? Today, everything was turning strange. From the south-east corner, forgetting that this was the month of Cittirai, the cold wind kept blowing, hard.
'Heavy rain's coming this way, Perumal . . . Quick . . .' Perumal pressed his foot down harder.
People were scurrying into their houses. An eager, astonished expression of welcome to the unseasonal rain showed on their faces. Standing in their entryways, they watched intently as their van went roaring off.
The rain began to fall in big, heavy drops. The windshield began to fog up. Like apparitions in a dream, the road and the trees took on a strange appearance. Perumal switched on the windshield wipers.
They didn't work . . .!
Perumal wasn't one to give up easily. They were used to seeing through pitch-darkness like cats. He went on, looking sharply ahead.
What rain! A rain such as he had never seen in his life. The scent of settling dust grew stronger. He took in that scent joyfully. He put his feet up comfortably on the front seat.
Suddenly the road seemed desolate. Why? There was nobody around. Because of the rain, or what? Not one other vehicle came from the other direction. Except for an old bullock getting tirelessly drenched in the pouring rain. A desolate road.
Now the rain was like fierce, frenzied arrows inundating the earth. The sky gave frequent warnings with loud claps of thunder. Then lightning would strike, splitting the sky from top to bottom like a sharp sword.
For some reason their hair began to stand on end. They were used to withstanding more blood-freezing cold than this. But what's happened today? A cold that pierced through their bones to the marrow.
Arul rubbed his hands to warm them up. He folded his hands close up against his chest. Looking at him, he laughed with his natural friendliness. Today, for some reason, Arul was laughing more than normal. A strange light had come over Arul's face.
Suddenly, for some reason, the thought of Amma, Sulo Akka, Sundari Akka, all of them, rose in his mind.
Along with the rain, the camphor fragrance of Amma's body blew in. Who mixed camphor into the rain?
Sundari Akka's big eyes appeared before him, with two diamond drops of tears. It was as if she were touching his hands gently. Sundari Akka probably went to work even in this rain.
Sulo Akka's squeaking voice came too, mingled with the ghostly wind. Sulo Akka seemed to be calling him in unbearable frustration and overflowing love.
This One closed his eyes for a moment. He let out a long sigh, as if a cobra had taken shape in his heart and burst out, splitting his throat.
He opened his eyes. Perumal was comfortably making a turn without changing his speed. An expert, all right! With the heavy rain coming down in sheets, making it impossible to see for even a hundred yards, and the windshield wipers not working, he was driving the van along at high speed! Who else could do that? A genuine expert!
The road stretched ahead in a straight line. Perumal kept tearing along.
Why . . .? Why . . .? What . . .?
Perumal slammed on the brake. Everyone began to sense some demonic thing coming straight towards them.
Perumal turned and looked at Konamalai. Konamalai, his face tense, shook his head as if to say, 'Go on ahead.' Everyone felt they had to breathe fast. Everyone closed their fingers tightly over the palms of their hands.
Perumal drove ahead calmly. He went along sticking his head out the window and looking around. All the entryways, their doors locked, were silently getting drenched in the rain.
Right in front of them a little road would join the main road. At that crossroads, some cunning trap seemed to be lying in wait for them. Perumal put his head out in the pouring rain.
A big vehicle. Standing there trying to hide its monstrous body! . . . Perumal clenched his molars together . . . his face suddenly darkened and flushed.
Konamalai understood. His eyes reddened with a rush of blood. Then all of them understood. The insides of their skulls started buzzing. Their breath came fast and hard. Their bodies seemed to burn.
'Ready . . . Ready . . . Ready . . .' their hearts urged. They were prepared to receive orders. Their every limb began to quiver.
Konamalai seemed impatient.
'Come on, turn into that alley! We have to break through and get out of here . . . Where are they all positioned? . . . Have they surrounded us? . . . Damned rain!'
Perumal quickly began to back up. He looked behind him. From behind too, there was a vehicle coming at them like a demon!
Nearby a stream appeared, like an alley. Perumal made a huge effort to turn into it. But They had already overtaken them. Spouting fire, They began to close in steadily, fearfully, from both sides. The rain was blessing Them. The sky was looking at us with thunderclaps of laughter. Lightning was flashing, winking, as if to mock us.
Every second felt precious to them. Defeat came at them rapidly with its mouth agape. Are they to be defeated easily? Arul was in a great hurry . . . He shook his head this way and that. He trembled in the violence of his feelings . . . Do something . . . Quickly . . . Immediately . . .
Arul bit and pulled the clip in his mouth. Ayyo! Left hand . . . Left hand . . . It won't come, won't let itself be pulled out.
He could see clearly the horrible mistake taking place.
His thighs and ankles were strong as a horse's. Gathering all his strength in his toes, he sprang up. Hurtling momentarily through the demonically howling wind and the rain bearing down like great arrows, he fell into slushy rain-soaked mud. Swiftly rolling over four or five times, he moved away.
He'd understood . . . That's all . . . Just one more second . . . Arul! Oh, you idiot! You were in too much of a hurry!
His thighs must have been hurt. He got up limping. He started running wherever his legs took him. Rising rapidly up to his ankles, coloured with red earth, the flood water was rushing down through the alleyways. The alleys were giving way, unable to bear the slapping of his heavy footfalls.
Like a big thunderclap, the first explosion. The van jolted again. Continuing, one . . . two . . . three . . . four . . . five . . . six . . .
They stood still in fear!
Still running, he turned and looked back once. A great ring of smoke was rising up like a black ghost, shoving the rain aside. The suffocating stench of sulphur suddenly spread everywhere.
He ran without using his brain, his eyes staring straight ahead. His legs were dragging him swiftly, somewhere, all on their own.
Konamalai! . . . Perumal! . . . Kedari! . . . Periyannan! . . . Michael! . . . Anbarasan! . . .
That's it! . . . Now what? . . . That's it . . . They've been blown to pieces.
Now?
What should he do? Somehow or the other he had to get to the southern road safely. The stuff would come. He had to turn them in another direction. Had the men who were coming already heard the news and turned back? . . . Who could have told them? This wind and rain?
He had to get to the southern road. Damn you! Rain! Won't you stop? . . .
The sky gave a great clap of thunder for the last time and wore itself out. The rain began to subside. It showered drops on him like a sprinkling of rosewater.
He'd come running a long way. Maybe there was nothing to fear any more. He started walking with big strides. He still had three miles or so to go. He would get there . . . Somehow!
The rain stopped completely. An eerie silence set in. Trees that had been trembling with fear in the rain stopped trembling and shed tears. Only the flood came with him.
The alleys lay as if they were smothered in woven nets. People were standing in the entryways of their houses. They stared at him. This strange, commanding young man, completely soaked in the rain, where was he off to in such a hurry!
He could rest a bit and then go on, stop to catch his breath and then go on! . . . Or what if he didn't go at all . . . What other strange things would they witness today!
Really, what if he didn't go at all?
People were looking at him strangely, eagerly, like a novelty. Was it some kind of fear that showed in their eyes . . . or devotion? Some people's eyes seemed to radiate unrestrained affection.
Faces appeared in the windows like moons. Pleading eyes bored into his face.
'Don't go . . . Don't go . . .' their gaze seemed to be begging.
If he liked, he could go into one of the houses and rest.
But he is not the one to go into any house! His task was the most important thing to him. He had to get to the southern road, and fast. He started walking on a road that split off from the alleys.
A few fields, eagerly lapping up the rainwater, watched him with silent gratitude as if he himself were the God of Rain.
He went on by.
A church appeared. The Virgin Mary, holding the baby Jesus in one hand, was looking at him tenderly. Lifting her other hand in the air, she sent him her blessings.
He kept going by.
A temple came along. Strange!
The temple was a little ways from the town. Chattering with palmyra tree fronds, with its doors wide open, it seemed like it was calling him. The temple tower, pointing to the sky, seemed to call out to him, 'Come.'
If he liked, he could go in the temple to lie down and rest for a while!
He would not rest. His task was the most important thing to him. He had to get to the southern road fast. Alone! Walking . . . or . . . running . . .
He started running again.
He felt a little exhausted. The shock, the running . . . He'd gotten a little exhausted, all right. If some van came along, he'd hitch a ride. He'd get there faster . . .
A little ways off he saw the path that joined the southern road, cutting across his path at a right angle. An old-time overloaded van was groaning along. If he clapped his hands, would they hear him? Would they stop? Would they take him along with them?
He clapped his hands. He clapped again and again.
'Can I come? . . . Can I come too? . . .'
He heard faintly as he ran:
'Come on . . . Come on . . . run! . . .' a strange voice called out. It had an odd sound to it.
He was getting closer. There were a few people with bare chests. All men. Are they coming back from a temple somewhere?
He got very close.
The van stopped. New and unfamiliar faces! They were looking indifferently, off in random directions. No . . . No . . . No! Suddenly, they all turned towards him, as if commanded. Their eyes shone with hatred.
They all pounced at the same time. In their hands bayoneted rifles flashed. His heart stood still for a moment.
He was lost! He hadn't expected it . . . Even in a dream . . . That They would go around in such a disguise!
The guns closed in on him hungrily. A blow came down on his chest. Then a flawlessly aimed kick between his thighs!
He reeled and fell. Dragging him by the hair, they threw him into the van.
Bits of flesh emitting the nauseating smell of blood . . . Konamalai! . . . Kedari! . . . Perumal! . . . Periyannan! . . . Michael! . . . Anbarasan! . . .
A borderless darkness surrounded him.
_Translated by Rebecca Whittington_
Ranjakumar, '_Kaalam Unakku Oru Paattu Ezhudum_' (1989), in Ranjakumar, _Mokavasal_, Yathartha, Paruthithurai, 1989, pp. 18–31
## Woman Humiliated
#### _Sivaramani_
you cannot push me
behind the latticed window
of your definitions.
like a little pebble
plucked from where it's been
all this time,
lying in the endless mud
I
have picked myself out.
you cannot snatch
my days
between your fingers
closed over your eyes
like a baby star
bringing itself down
my being
has attained assurance.
I am she who cannot be disregarded.
what now
like a question that can't be cast off
I
am present
you cover me with insults
and uncivilized words
but,
I will sully
your shiny shoes
like a heap of dirt
on top of all your
civilized dreams.
as long as you reject
all my just words
there will be dirt
in your every path.
_Translated by Rebecca Whittington_
Sivaramani, '_Avamaana Paduthappattaval_' (1990), in _Sivaramani Kavithaigal_, Women's Study Circle, Batticaloa, 1993, pp. 44–45
## Darkness
#### _Aswagosh_
I lived
in the midst of white oozing wounds
the heart-rending cries
of decaying sons disturbed me
I was pained
I know
the faces of sons lost
long in the distance
I do not bother to ask
if in those faces
there was knowledge, beauty
I cannot disparage
the thoughts of sons
who knew only sacrifices
young sprouts
who caught a bus
hearts racing
and were swept up in the wind
I cannot throw out
yet another question
just yesterday
two died
I didn't ask for details
oh merciful one, did you hear
crows are cawing
a cock is crowing
trees are waving in the wind
deaths are taking place
the demon born today
ate up tomorrow's dreams
epic darkness descended
only time passed
there was no one to be seen
neither those who went far to pick fruits
nor those who showed the way
there was no answer to my feelings
seeking light
when I was still destroying myself
my son set out to find
a meaning for himself
to make his fortune
he went to lend his ear
to the voice of the earth
where my dreams are fallen.
it's not possible
for me to bear your absence, it's not possible
no
he is no longer with me
he went with an answer
to the voice of the earth
I will talk about memories
that trouble me
I will talk about the pain and heaviness
of those aching days of mine
in the language
spoken by oozing wounds
let me speak.
in the end
he came back to me
his body had gone cold
mosquitoes did not come to suck his blood
I did not let
the flies get close
_Translated by Rebecca Whittington_
Aswagosh, '_Irul_' (1990), in _Vanathin Azhaippu_, Nigari, Kalkilai, 1997, pp. 16–18
## To Those Who Come with Sticks
#### _Ilavalai Wijayendran_
my
words alone have strength
not my body.
to scare the people
bring your concoctions of borrowed words
and pile them up.
shattering them
my words
stand upright
when you've lost to words
and come with sticks
to show your strength
what can I say?
_Translated by Rebecca Whittington_
Ilavalai Wijayendran, '_Thadi Kondu Tiribavargalukku_' (1990), in _Niramarru Pona Kanavugal_, Desiya Kalai Ilakkiya Peravai and South Vision, Colombo, Chennai, 1999, p. 56
## Days in the Trenches
#### _Pa. Ahilan_
good friday
the day you were crucified
that day the burning wind
sweeping across sea and land,
a seagull or two
soaring in the spotless sky.
the sound of wind
scraping against palmyra trees
aroused an unutterable panic
that day was our last day in town
we came to the seashore.
only the waves returned.
when the sun fell into the sea,
we knelt down and cried.
a black howl arose
and night fell.
in the distance
like a corpse in the cremation ground
our town was burning.
good friday
the day you were crucified.
_Translated by Rebecca Whittington_
Pa. Ahilan, '_Pathungu Kuzhi Natkal_' (1992), in _Pathungu Kuzhi Natkal_, Kuruthu, Erode, 2000, p. 15
## War Journey: Diary of a Tamil Tiger
#### _Malaravan_
**The Battle Is On**
**15 November 1990**
Around 5.10 a.m. the fog was still thick, and we parked the tractor, hidden in the tall bushes, and walked towards a hut to wash our faces. The darkness melted but the fog stayed on. On the left side of the road, set slightly farther back, there was another little hut.
'There's no point calling from outside the gate, let's go on in,' suggested Ranjan.
We removed the sticks blocking the gate's entrance and walked into the yard.
On both sides of the footpath, yellow and red flowers were in full bloom. The marigold flowers must have opened recently. The petals were moist and smooth. A large neem tree in the corner gave plenty of shade. A tiny shrine stood at the base of the tree.
'Amma, may we use your well to wash our faces?'
'Certainly, come on around here. There is some toothpaste out there too—go ahead and wash up,' the Amma said. Near the well the vegetable plot was lush and green. The vegetable plants struggled under the weight of their produce.
'How do you water these plants, Amma? Kerosene for the water pump is expensive, isn't it?' I asked.
'Who needs kerosene? We water with the _thula_. Our son helps. It takes maybe two hours.'
Her son, standing nearby, was only nine years old. I thought to myself that in Jaffna it is so very different. The children there would still be in bed or would have gone to tuition classes.
Today we depend so much on the food ships and trucks that bring foreign food to us. When will our own self-sufficient economy, destroyed by colonialism, sprout once again? Our economy will only grow when our people become aware of what has been done to us.
'Son, all of you, come on inside and have a cup of tea.'

'No, no, we'll drink it out here.'

'No, no, it's okay. Come on in and have your cup of tea.'
The cups were modest, but the tea itself, with fresh cow milk, tasted superb. We were all energized.
We said goodbye and went out to the road. We walked along the road and found a log to sit on. A monkey sat in a tree across the road and stared.
'Go stare at my grandpa,' said Master as he threw a stone at it. It jumped and ran off.
We ate the bread and plantains that Mani Annai brought, and discussed our need for sleep.
'Where will you be at noon?' Mani Annai asked.
'We'll eat and sleep here. We won't be going anywhere,' we said.
'I'm off, then. There are buns and biscuits in the tin.'
'Okay, Annai, see you.'
As soon as Mani Annai's head disappeared, everyone jumped for the bag.
'Leave it, boys. I'll distribute it,' said Master and he did.
The breeze was wonderful. We slept well. We tidied up the ground, removing the dead leaves, spread out some sacks and lay down. Shortly after noon we went for a walk through the paddy fields. We came to a mango orchard and a Lankan Government office with not a soul to be seen anywhere. We picked mangoes and sat down by a water canal to eat them. Two peacocks in the fields ran away when one of us stood up and shouted.
We started off at night and arrived at the Mankulam camp early in the morning. We kept quiet. From now on, it was of absolute importance that we maintain secrecy. Even if our own people saw us it was possible that the military could be alerted and our plans ruined. We strengthened the security lines around us. We stayed where we were until the next day, then we walked towards the camp to do some preparatory work.
**16 November 1990**
At 9.00 a.m. on the 16th morning, we arrived at our security posts. We sent Varman to his post on the other side of the camp.
'Master, look, let's stay in this house,' I said.
We went inside the house. I had seen this same house when I came here the first time, but now its outer walls were destroyed.
'Master, you're going to have to control the boys. They won't follow my orders,' I said.
'Just don't try and trick me again.'
'I promise,' I said and proceeded to put my hands on his head to gesture that this promise was real.
He stopped my hands, saying, 'Okay, okay, go on. I will control the boys. They'll never obey you because your face is too ugly.'
'Okay, I'll head on over to my post, then.'
I jumped over the barrier and stopped to chat with the other militants at their security posts, and returned to the house. Master and Alahu were tidying the house.
'Alahu, when did you get here?'
'This prince just arrived now. The idiot got onto the wrong tractor and went off to the other side of the camp.' Master laughed aloud. 'Hey! It was you who pushed me into the water, wasn't it?'
I read the situation quickly and realized I would be given a cold shower if I went up too close to him. I invited them into the house, but made sure to stand far away from them, and then I went and sat on the verandah.
Master called, 'Alahu, can you go inside and get the biscuits . . .'
Before he could finish, I rushed in to beat Alahu to the biscuits. When I opened the door, a bucket of water tied to the door tipped and drenched me.
Master's and Alahu's laughter was loud, and I heard it as I walked to the well with a defeated smile.
When we all lay down for a rest, Master asked me to tell him about Kannan, another Tamil Tiger warrior we knew—a militant—and how he died.
Kannan had been driving the tractor. He was sleepy and started dozing off. I kept tapping him to wake him up. He yelled at me for constantly tapping him. After a while, the tractor's course changed and it felt like we were going over a log. I couldn't see Kannan anywhere. I stopped the tractor, got off and saw that the log was actually Kannan. The tractor had climbed over his chest.
I fell asleep thinking about Kannan. He was a good weapon maker. He had joined the movement despite being the son of a big businessman and having the privilege that went with it.
**22 November 1990**
The 22nd arrived as we laughed, joked and worked. The morning was not that cold. The sun was up early too. Birds were busy singing and hopping around in the trees. Peacocks came down from the tree tops to the fields. They pecked at seeds on the ground.
Big flocks of parrots landed on the densely growing fruit creepers. Monkeys sat in the trees and watched us intently.
A gentle breeze embraced us all. Unwilling to leave the militants, the breeze banged its head on the walls of the security posts. Clouds rushed by after shedding a few tears and wiping their eyes. That sweet morning was bidding farewell to the militants. Birds sang mournfully and left. A rooster could be heard far away. We sat together for a cup of tea and began our preparations.
Master came and sat by me.
'What is it, Master? What is that look about?'
'I just wanted to have a good look at you before letting you go.'
'You bastard.' I started to chase him. We had lunch under the tree together.
The usual hot rays of the sun spread their calm heat on us. The _poovarasu_ trees that droop in the heat were still standing upright. The breeze surrounded the militants and brought flower petals down from the _poovarasu_. The hut was cool and cosy. Even the crows crowed more pleasantly than usual, inviting their kin.
'If a crow crows while flying, a letter will arrive. If it crows while walking, a visitor will come,' said Master.
'What if it crows while it's sitting down?' I asked.
'Artillery shells will arrive,' he said.
We hurried to our meeting.
A large group of militants were sitting in rows. Their hands were firmly holding various types of weapons. Their faces were bright, alert and determined.
The female militants stood together on one side. They are the burning lights sprouting from a male-dominated society. They are making history.
A lieutenant in the front described the Mankulam camp thus: 'This camp was set up in 1971 to suppress an uprising among the Sinhala youth. It was re-established in 1978 and it remains a huge hurdle for our movement, a source of disruption in normal people's lives. It is a cruel camp and a threat to this whole area. We attacked it once before, but without success. We must make up for the shortcomings in that attack.'
Our discussion continued on to many more topics and was completed at 4.30 p.m. We returned to our posts and immersed ourselves in getting ready. Around 5.00 p.m. two Y-12 planes flew in from the south.
'Buddy, do you think he has been alerted?' asked Master, grabbing his weapons.
'Let's wait and see. They may be coming to drop off food,' I consoled him.
The two planes separated and started flying in big circles.
'Master, get the boys to transfer the shells into the bunker,' I said.
I watched him rush off. Ithayan was nearby, with his walkie-talkie in hand. Darwin was walking away with Master. Ranjan was squatting under a tree.
'Did he drop something?' Ranjan queried. 'Oh, he did!'
As all three of us watched, it came down with a hissing noise and hit the ground with a thud far away from the camp. Perhaps near the school.
'It's the "thing". This is all going wrong,' Master said, standing beside a nearby tree.
The other plane cut in and made a smaller circle.
'It's a bomb, look, it's coming down like a shuttlecock. All take cover!' As Ithayan said this, we could see it going past us and we froze.
Following the loud noise of the exploding bombs there was non-stop gunfire from the army camp.
'Do you think they've been alerted?' asked someone in a state of shock. As he said this a bullet hit a nearby tree.
'Is there a problem?' the leader asked through the walkie-talkie.
'No problem,' said another voice in the walkie-talkie.
Another plane circled round and round and dropped four parcels. One fell behind me. The next one was dropped in front of us just opposite the previous one.
'Buddy, this one is close to us, all take cover!' I shouted and everyone scrambled to take cover.
It came down with a loud hissing noise and exploded with a terrifying blast. The sand on one side of the bunker fell in from the shock.
We all came out brushing sand from our heads. The small hut where we were staying had disappeared without a trace. Near where it was, there was a huge hole in the earth. The planes had disappeared. We could now hear a howl of pain. We all walked towards it. The dog that we had been feeding for the last four days, less one leg and with a big wound, was lying on the ground.
'Master, look at our dog,' said Alahu.
'The poor fool. Now, now, don't start playing dog owner, go on and get to work.'
It looked at us pathetically.
'Okay, come on and bring all the shells out.'
The time was 6.05 p.m. The sun was turning red in the western sky. The sky, the clouds, and everything else was brushed with patches of red. Perhaps they were mourning the blood that was about to flow. Birds were hurrying back to their nests. Even the flapping of their wings seemed to presage sadness. Animals howled. Among all the sounds the howl of our dog gradually pervaded everywhere and everything and then eventually it died down, too.
A lonely lost heron flew past. Flocks of bats flew in patches. My heart was shaken by the sad songs of two lapwing birds that circled us, then the enemy camp, and then flew away. The sun was now buried in the earth. The clouds too had run away.
The time was 6.35 p.m.
'Buddy, is everything set up okay? Come over here. Shells and rounds will soon be flying and you're standing there without any cover. Be careful,' I warned those standing around.
The mouth of the cannon, camouflaged behind tree branches, was aimed at the police station.
The time was 6.56.
'Who's that?' I said sharply.
'It is me—Master,' came the reply. He took a pair of pliers and ran back.
The time was 6.58.
'Ranjan, is everything okay?' I wiped my face with the back of my hand. The walkie-talkie started and shone a red light.
It was 7.00.
'Okay, go,' the leader ordered and the firing noise of the projectiles shook the enemy camp before the guns there started to operate. The entire earth appeared to be shaking.
Noise from the camp was loud. It was the dying cry of the enemy. Enemy fire began to arrive everywhere. We faced the heaviest fire because we were very close to the enemy camp. Bullets hit tree trunks and branches, and exploded.
All types of bullets, some of them very powerful, were coming our way. Some of them passed between our legs. Yet, in this life or death battle, we sent a constant barrage of bullets their way and the confused enemy often fired in the wrong direction. Bullets went over the trees and even towards the sky.
'Look, from that post in the shop he's firing .30 calibre.'
We watched shells leaving that army post in all directions. The shop had caught fire from our attack. The ammunition inside the shop had started to explode.
Every second, we received commands through the walkie-talkie about the modes and locations for our attack actions. We followed the orders. Gradually, the number of militants began to diminish as they were wounded and removed from the battleground. Those of us remaining continued with our attack. Suddenly, there were bullets coming straight down, hitting the trees. That was when we saw the two helicopters. They started firing into our positions.
Five-inch shells, RPG shells and many guns started firing towards us. We decided to take the plunge and attacked one target from three angles simultaneously. Our target, the police station, collapsed under our fire.
We would have suffered many more casualties if we had not adopted this strategy. Suddenly, there were two huge bangs and I felt my eardrums almost bursting. I felt blood oozing from my leg. It was a minor injury. Only later did I figure out that this was an aerial bombing raid by the enemy.
We continued with our fast attack when another bomber swooped down. Our .50 calibres were aimed at it and started to spit shells. The bomber rose up and backed away. Amidst the noise of the .50 calibre fire, we heard cries of pain. There was a problem at Cheran's post. I ordered the others to continue the attack and I ran to Cheran's post. Four young militants were lying in a pool of blood. Master also arrived from the next post.
'Take the boys away. He will hit again at that range.'
Carrying the young militants, we moved back. Soon the boom was heard again.
'Is there a problem, Master?'
'No, no . . . just a bruised hand. Walk fast.'
We had alerted the medics via walkie-talkie before we handed the four young militants over. The medical militants started first aid immediately. One of the four young militants, injured in the stomach, was lying in my arms. He grabbed my hand and tightened his grip. It felt as if he was using every bit of life left in him to tighten that grip. His voice too came out, using all the energy left in him.
'Buddy, invade the camp, kill them all, and grab every bit of equipment you can.'
His young life dissolved with the breeze and entered my heart and filled the space around.
His grip loosened. A star fell, leaving its silvery streak. The last words of the young militant echoed in my ears among all the outside noises. We hurried back to the battle ground. The bushes and the forest trees were burning, hit by shells and bombs. The shells and bombs continued to fall as we rushed back to our locations.
It was 10.15.
I briefed everyone about what had happened and readied the cannons. I began monitoring our target. Two shells from the enemy flew past us and exploded. Following an order from our leader, we aimed and our shell flew in the direction of the enemy.
'Buddy, it missed. Fire again before he comes out of the bunker.' This time our shell hit the target and we could see the building crashing down. We sent an occasional 'light bomb' to identify our targets, and continued with our attack.
The next shell from the enemy hit a tree and Darwin's thighs began to bleed. I took out the cotton from the field compress in my pocket and applied a pressure bandage. The blood oozed through the bandage.
'It's okay, buddy. Set up the next one,' said Darwin.
'Are you crazy? You have shrapnel in your thigh. You can't stay here with blood pouring out like this.' I got mad, but he was adamant that he would stay.
The warmongers who are occupying our land are mercenaries working for wages. During battles, if their life is threatened, they hide somewhere and indiscriminately open fire. They back out of battle with the slightest injury. Our militants are very different. They dedicate their life and body to the battlefield. They value a free homeland more than their life. They never back away from a battle. They refuse to leave the battleground even if they are badly injured. They leave only as lifeless bodies or they are removed by others when they can't function any longer. This is the quality of our militants. It is this quality that paves the way for victories against the arrogant confidence of the occupier.
We continued with our attack. We had damaged their long-distance communication system, so one of their helicopters fired into their own camp. We stayed where we were, facing stiff attacks from the air and the enemy camp.
A plane began to circle overhead. We took cover immediately. A bomb fell just fifty metres away and a huge _paalai_ tree (_Manilkara hexandra_) was broken to pieces; we were bruised by flying bits of the tree. Our attack came to an end when we took aim at the bank building.
11.59 p.m. The cannons boomed for the final time, shook the camp, and went quiet.
12 midnight. Their hands firmly holding their weapons, our militants started to crawl through the darkness. The night bid goodbye to them silently. Only the helicopters in the sky broke the darkness.
The grass gently stroked the crawling bodies. The grass bent over trying to make the path soft. There was no moonlight. The clouds hid the stars to keep the darkness as we crawled across the open paddy fields. With the enemy stunned by our fire, we crawled forward to destroy them completely and recover our land.
It was the enemy fire that started first, launching face-to-face fighting. The fire from the enemy camp intensified as our militants, sculptors of the future, crawled on with no cover. Finally, our guns began to show the enemy the truth. Enemy fire began falling in the open plain like raindrops. Helicopters, too, began their hunt for their prey in the paddy fields.
Helicopters were spitting .50 calibres non-stop. Our fire from the ground chased them back up to the clouds. Our brave militants began to float in a river of blood. Tiger soldiers jumped over enemy bodies to meet more of them. The enemy camp fell apart from our RPG fire and the mercenary force started to run away.
I turned the walkie-talkie on and our leaders' orders came non-stop. Jegan Annai and Vasthanan Annai were constantly being called over the walkie-talkie among the regular orders being issued. Then the walkie-talkie began calling for Thileepan Annai and Gopu. Their connection was broken. They were moving forward to capture the camp at the nun's hostel. I thought something must be wrong. It is very unusual for all the leaders of the four divisions moving towards a target to be cut off.
Yet, a little later, another voice said that they had captured some of the positions near that same hostel. Our leader congratulated them and promised to send reinforcements. I was able to work out that a battle was going on near the temple and our poralis were getting closer. Injured militants began to arrive from the division that approached the police station. I went behind them, along the railway track. Bullets were flying. I heard the groan of an injured militant. He was hit in the stomach and unable to move. I draped him over my shoulders and brought him with me. I could see bullets hitting the track in a line.
I slid my hands around the militant's neck looking for his cyanide capsule. It is the LTTE practice to remove the cyanide capsule from an injured militant's neck to prevent him from biting into it out of sheer pain. He grabbed my hand very tight.
'I am not going to die quickly without chasing away those bastards. If I can't today, I will come back tomorrow.'
He said this gritting his teeth in anger. He did not appear to be crying for the pain in his body. His thoughts were entirely on the battleground. 'Leave the rifle on safety,' he told me.
He fainted while he continued to grip my hands.
I changed the lever on the rifle and handed it to another militant, then I got out of there. Shells passed by me very close as I crouched and ran towards the gunfire of our division.
'Buddy, there is nobody who can carry those two over there, hurry up and take them in,' ordered another militant.
I slung the rifle back on my shoulder and moved, crouched towards the house that sheltered the enemy. Two militants lay there; they had lost a lot of blood. There was no bandaging material at hand. Where can one go to look for bandaging when shells are constantly whizzing past? I took off my Tiger striped shirt and cut it with a knife. I tied a bandage around the wounded stomach of one militant. His breathing was slowing down. The other had an apple-sized hole in his thigh. I wrapped that too.
I supported both militants on my shoulders and, hiding behind any available cover, I started moving back.
The one injured in the stomach kept trying to shout. 'Put one of us down and keep moving on inside.'
But his shouting came out soft in a pained voice. His hand folded over the rifle sling, making it impossible to remove it from him. His voice slowly got softer. I could now see a figure coming towards us.
'Hand one of them over to me, don't try to carry them both.'
'Master? I will carry both of them. You go to the front and bring someone else back.'
'Leo. Take them to the medics in the bunker near the tree.' I could see him running fast.
I handed over the two injured militants to the medics for treatment. As I turned to go, I could see medic-_poralis_ in large numbers.
'No problem now. Bring any injured militant here immediately,' said one in the front as he ran, and the others followed him.
I waited near the tree for Master. The battle was intense.
I could work out from the messages over the walkie-talkie that the enemy was being weakened and that our divisions were entering enemy posts.
'Master, come. If we go to the post at the roadside, we can go down with the standby group. If not, we can at least go to the supply group.'
Both of us moved fast. Shells were exploding near us.
'We can capture half the camp today. Guarding the captured area from aerial bombardments will be very difficult,' Master said as he came towards me.
_Translated by N. Malathy_
Captain Malaravan, _Por Ula_, Publication Division, LTTE, Kilinochchi, 1993; second edition, Vitiyal, Coimbatore, 2009
Extract from Malaravan, _War Journey: Diary of a Tamil Tiger_, translated by N. Malathy, Penguin Books India, New Delhi, 2013, pp. 59–73
## A Space That No Longer Is
#### _Su. Vilvarathinam_
in the dawn
after the night all the village folk left
tearing life up by the roots
as the outsiders came in
the village
lies drained of the flood called life.
the whole sun
the whole moon
the whole wind,
the whole of the life of the courtyards
took its leave
the outsiders cut down the fences.
they broke down the front doors.
they plundered everything inside
the houses were left wide open
the wind blowing straight in through the open entryways
shutting, opening, shoving the doors
running inside, trying this one and that one
searching in vain for someone to relate to
runs through and through a space
that no longer is.
_Translated by Rebecca Whittington_
Su. Vilvarathinam, '_Vetraagi Ninra Veli_' (1994), in _Uyirtthezhum Kaalathirkaga_, Vitiyal, Coimbatore, 2001, pp. 139–140
## Heroes Rest Here
#### _Cheran_
in the terrible din of dried palm leaves breaking,
falling, swept up in the wind,
a koel
cries out in fear
to the west, a palmyra grove.
a flood channel running
with sand in the summer
and water in the winter
the south stretches ahead.
on the eastern border, Ponnipulam
a colony of landless people,
a colony of coloured people,
a colony of oppressed people.
red earth and fields to the north
and in the centre
stretched over the farmlands' disquiet
a burial ground,
a field of memorial stones.
the heroes' place of rest.
on this ground there are hundreds
sleeping.
those who walked
with upright chests
with vigour, with faces
shining with smiles
heartfelt friends,
relatives
and young men I don't know
for a moment my chest burns
the thoughts that rise in my chest burn
the story of life stretched out in thought
burns.
my people sleep here in this field
those who went too far to win
those who went up in the wind, in the sea
in the smoke
with them gone
they bring the leftover bodies
to bury them beneath the hero stones
and say it's your souls
that lie in all eight directions
from these stones.
Tamils don't believe the old story
that those who die heroic deaths
reach heaven
once a year
relatives, friends, and the 'state' too
will come round
to remember you
after mothers' tears
wash away the dust lying thick on the grave
in the long land of this graveyard
that grows memorial stones
they will shower flowers,
they will light a lamp and grieve;
wringing their hearts
they will sob out their stories of heroism
what did we remember?
what did we forget?
do not trust words completely
for within these heroes
dwell those who crushed
their enemies' military might
and also those
who chopped the heads and breasts of innocent people
night after night, wiping their blood-drenched faces
changing course changing tongue changing views
coming back victorious
this is the other side of heroism, sacrifice
all of these truths that live
in speech borne from ear to ear
honour does not lie only
in the thirst for destiny
do not trust words completely do not trust words
empty words embellishing your memory,
all the battle ballads
sung in every street
wither away by the fourth day
like the banana plants
that mark death in houses
and memory recedes
in the blood of children
caked on the sword of Cankilian
my dream disappears
in the wretched eyes of expecting women
destroyed by Raja Raja Chola
history commits suicide.
my poetry is drenched
in the sorrow of skeletons
buried under the Big Temple of Thanjavur
in history there are no heroes.
do not trust words completely.
in the sanctum sanctorum of time
I am waiting to tell the story
of a valour which will wash away the stain
I will sing then
of life, loss and death.
_Translated by Rebecca Whittington_
Cheran, '_Veerargal Thuyilum Nilam_' (1995), in _Sarinigar_, No. 172 (27 May–9 June), 1995, Colombo; published in _Vetraagi Ninra Veli_, Vitiyal, Coimbatore, 2001, pp. 15–17
## One Night
#### _Maalika_
one night
I came out empty-handed
down into the street
with nothing
oh sun-god
roaming over my city
bring me the key I forgot to take
it must be under the thatch of the veranda
in the courtyard of the house I left behind
that is the key to my great-grandfather's strongbox,
ancestral land deeds
jewels for the body of Kaliamman,
my grandmother's
silver anklets,
bracelets, and the sword
with which she chased the devil away
in a trance
the brass lamp my mother lit
in her last days,
palm leaves inscribed with spells,
copper shields, images of cobras
and a few copper coins
I need these
to pass on to my
grandson tomorrow
so I can die saying, this is your land
these are your roots
oh sun-god, bring them to me.
I need my roots.
even though I will go tomorrow,
I need them today
bring them to me.
_Translated by Rebecca Whittington_
Maalika, '_Oriravil_' (1996), in _Erimalai_, September 1996; published in _Vetraagi Ninra Veli_, Vitiyal, Coimbatore, 2001, p. 47
## I Am a Snail . . .
#### _Shanmugam Sivalingam_
I am a snail
but can't freeze still
in the loneliness of this
disaster
among these ruins
how could I retreat
into my shell?
to the max I stretched and
stretched my feelers
I secrete slime
to slide across
these ruins
my slime freezes
in the rage of these ruins
I secrete even my colourless blood
and my flesh
I dig deeper and deeper
within myself to secrete
my heart
my intimacies
my feelings, even my
loneliness
within me
there is nothing
but the shell
an empty cave
I stretch my feelers
further out
to embrace my
ruined nation
_Translated by Rebecca Whittington_
Shanmugam Sivalingam, '_Sithaninthu Pona Desamum Thoornthu Pona Manakkugaiyum_' (1997), Kalachuvadu, Tamiliyal, Nagerkovil, 2010, pp. 195–196
## The Eighth Ghost
#### _V.I.S. Jayapalan_
oh my neighbour, oh my neighbour
oh suns glowing in skullcaps and veils
oh moons playing joyfully on the sands
oh stars smiling in every cradle
our children have lost their way
they tore you apart
they wounded our Eelam soil
plaited with various flowers
that day we were silent
in your streets stuck with the
happiness of earlier days
and the blood-tears of the last day.
the next day we slunk in like foxes
and stole your very house.
we gulped down your children's food
we kindled our cooking fires with _meesan_ wood
we tore apart your holy books
to wipe our spittle-covered hands.
that night that dawned without the prayer call
the angels disappeared waving their twelve arms
and seven ghosts followed.
among the seven ghosts that came following
the sixth pounced upon us
on those paths of your inconsolable
distress and tears
we too ran down
bearing harvested thorns
here comes the eighth ghost
before it rolls our heads
before it fills our princes'
coffins with earth
before it erases our poems
and writes lamentations in the wind
before it sweeps our poetic grandeur
into the dustbin of time . . .
we implore you . . .
oh my neighbour, oh my neighbour
come back and save us
with those six prayer calls
six times a day
_Translated by Rebecca Whittington_
V.I.S. Jayapalan, '_Ettavathu Pey_' (1997), in _Sarinigar_, No. 135 (20 November–3 December), 1997, Colombo; published in _Vetraagi Ninra Veli_, Vitiyal, Coimbatore, 2001, pp. 41–42
(The background to the poem is the eviction of Tamil Muslims from the Northern Province by the LTTE in 1990.)
## On the Surface of the Mind
#### _Majeed_
in the midst of the tender interstices of sorrows
that too came to pass—
by way of beautiful eyes
like a red-legged heron
from now on that too will settle on my shoulders.
there is nothing new to say
about what has happened
through the back door of its interior passage
through the seething force
of the lower surface of my mind
the potent force of my reality slipping
in the movements of fingers
in the strokes of eyelashes
in the rubbing of heels
spiralling up like a worm or
rising like the momentary smoke of burning trash
rainclouds of my trust and feelings crumbling,
truth and lies being, deep down, inseparably merged
of these
in the midst of the tender interstices of sorrows,
my mind will just go on singing to god.
_Translated by Rebecca Whittington_
Majeed, '_Ulmana Veli Parappinil_' (1998), in _Sarinigar_, No. 162 (24 December–14 January), 1998, Colombo; published in _Vetraagi Ninra Veli_, Vitiyal, Coimbatore, 2001, p. 25
## The Sorrow within Me Has the Surface Area of a Straight Line
#### _Majeed_
two crying eyes
like deep chasms
echoed all your sorrows
my mind denies the sight
of the times held tight in a fist
like a butterfly with its colours smudged out.
to the small question time poses
so many replies have been uttered
wrongly.
the poem within me has the surface area of a straight line
what can I recite
for everyone to hear
time pierced me like a worm on a fish hook
and flung me into life
so many times I fell like rain
on the empty expanse of a desert
so many times I was dissolved
in the rain leaking into my hut.
like the ants' enchantingly ordered progression
I arranged the language of my heart
word by word in a straight line
a poem formed colourless
from now on, my mind will always deny the sight
of the times held tight in a fist
like a butterfly with its colours smudged out.
_Translated by Rebecca Whittington_
Majeed, '_Ner Kottu Parappalave Enakkullum Thuyar_' (1998), in _Sarinigar_, No. 152 (6–10 August), 1998, Colombo; published in _Vetraagi Ninra Veli_, Vitiyal, Coimbatore, 2001, p. 26
## Lost Life
#### _R. Muralisvaran_
with an ascetic's
determination
the people
displaced in Vanni
again stood
asking for the boon of life
on this earth that had sprouted huts
a widowed mother
breathing darkness
lifted hands to jaws
and eyes to the sky.
the news—
they said
her son
lay buried in Semmani.
relatives—
someone said
lost money to the boatman
gave up life to the god of death
in the seas off Rameswaram.
why is she still
looking at the sky?
she could have looked at the earth
since that was where
she lost her son.
she could have looked at the sea
since that is where
relationships are lost.
so why did she look at the sky
was that where
she lost her life?
_Translated by Rebecca Whittington_
R. Muralisvaran, '_Tholaintha Vaazhvu_' (1998), in _Sarinigar_, No. 155 (17–30 September), 1998, Colombo; published in _Vetraagi Ninra Veli_, Vitiyal, Coimbatore, 2001, pp. 29–30
## Veena
#### _Bose Nilhale_
in my time it never travelled with me.
even so
I lived with it.
when the veena floated in its own alluring sound
when I did not yet feel like an old man
when my skull was not burned by fear of the dark
when I was yet unable to feel
the cruel stench of war wafting
through the poems everyone wrote . . .
oh god!
the veena
in my time
never travelled with me
I lived in its sound.
yesterday
I felt like an old man
my nerves trembling with fear of the dark
in the poems I wrote in war-filled days
you can smell bones and hear the sounds of men and even nerves
oh god!
I have lost
even the last note of the veena that faded away in the wind
there was nothing in it but the waning moon
afterwards, every day
dawned by the sun
a poem long and dark.
_Translated by Rebecca Whittington_
Bose Nilhale, '_Veenai_' (1999), in _Sarinigar_, No. 172 (27 May–9 June), 1999, Colombo; published in _Vetraagi Ninra Veli_, Vitiyal, Coimbatore, 2001, p. 23
## On the Present
#### _Bose Nilhale_
in the time that turns round
like a child crying pushing its eyes out
face shuddering
feet of dust climb on
at each time
self decays as time
tears and sighs and pining of men
wander as distinct faces
with their feet buried in dust deposits
the sky has lost its blue colour
and will no longer go on giving wings
to the angels
the beggar children stand
behind the smoke and dust and ashes
that remind one of death
mocking time.
heading towards them
time turns at an even faster pace than
dust and ashes and smoke.
life dissolves
in the depths of their eyes.
_Translated by Rebecca Whittington_
Bose Nilhale, '_Nigazh_' (1999), in _Sarinigar_, No. 172 (27 May–9 June), 1999, Colombo; published in _Vetraagi Ninra Veli_, Vitiyal, Coimbatore, 2001, p. 22
## Pyre
#### _Rashmy_
01.
black on red
or
red and yellow mixed half and half
a sprinkling of white
or
in the black border to a soft blue
designs and dots
dots and designs.
I was like a butterfly.
leaping and swimming in the wind
waving and waving my wings—the sky stretched out
without beginning in time
six legs smeared with pollen dust
impregnating the flowers
I wrote my poems in many colours
you eaters of boiled honey, what do you know
of the headiness of my honey and my languid song?
the moon
lost in the glow of my eyes
shards of a mirror shattered in anger
at being told it has lost its sheen
you yourselves call them stars
look closely at the mirror shards
I ask, do they not still glow
in the deep light of my eyes.
never mind, this is beyond your understanding.
I was like a butterfly.
02.
then this is what happened
that my wings should fade
and, turned to powder, should fly through the air
shedding their dots one by one—
so god cursed me.
he made a paste of neem leaves and smeared it on the flowers
not ambrosia but poison, he said.
he did an injustice.
time said,
two legs are enough for you
break the other four
and a pair of feelers
into kindling, to light the stove.
03.
what now?
the unappeased soul of a butterfly
now roams as a ghost, the townsfolk say.
the story spreads of a singing ghost
that comes out at night
children wet their beds in fear
with the swaying of banana and coconut leaves
and the fluttering of garments as they dry on the line
I come into being
mothers call on me to scare their children into eating.
04.
cruelty
it is cruel to turn from a butterfly
into a dead human.
even more cruel
to witness with human eyes
ants gnawing at the pyre
and dragging it away.
_Translated by Rebecca Whittington_
Rashmy, '_Eemam_' (1999), in _Kaavu Kollappatta Vaazhvu Mudalaaya Kavithaigal_, Exil, Courbevoie, 2002, pp. 49–51
## The Song of an International Refugee
#### _Shanmugam Sivalingam_
I am a speck
but
a speck floating in the ocean
I am a wanderer
but
I wander
in a turbulent sea, a roaring
storm
I am a person with no refuge,
no, due to war I lost it, my
refuge
I am one
with many losses
but
not yet
to lose myself
in the war to recover what I lost
I am the grass trampled upon
but
a grass lucky enough
to witness a generation
which fights on
I am the one who runs
for my life, escaping missiles
and bombs
but
I have no history of
surrender
ruined, I am
but
not one without
hope.
_Translated by Rebecca Whittington_
Shanmugam Sivalingam, '_Oru Sarvadesa Agatiyin Paadal_' (1990), in _Sithaninthu Pona Desamum Thoornthu Pona Manakkugaiyum_, Kalachuvadu, Tamiliyal, Nagerkovil, 2010, pp. 205–206
## The Echo of Moonlight
#### _Su. Vilvarathinam_
Parampu mountain.
Pari had died,
and the sun had vanished in the darkness.
Ankavai and Cankavai
were refugees.
They had fallen
to the 'royal drums beating victory'
and on their hill, on a narrow path
that seemed to be filled with the sorrow
of their downcast moonlike faces
Pari's daughters walked. Coming down
from the hill, the moonlight itself seemed to walk slowly
accompanying them like Kapilar, who had grown so old.
the moonlight,
Kapilar,
Pari's daughters
and the good life of Parampu.
They all walked
growing weary
weak and pale
With reverence Kapilar entrusted
Pari's daughters whose lives were broken
to Auvai and disappeared.
The journey continued.
Pari's daughters walked with Auvai
through all the villages
of the poor whose only food was gruel,
and the moon stood, hesitating, and went with them.
As if her long life
granted by Atiyaman's _nelli_ fruit
were approaching its end,
Auvai hurried.
Sealing the marriages by pouring water,
she gave Pari's daughters
to the men who had destroyed their lives on Parampu mountain,
Pari's daughters who, like her,
made Tamil.
If only she had given them
to the families of men so poor
that, late in giving taxes,
they have only gruel or porridge to pay,
Pari's soul would have rejoiced.
That day, in the white light of that moon,
there was Parampu mountain,
and the drums beating victory,
and Ankavai and Cankavai
who became slaves in the harem of kings
and cried in pain—and now
on this day, in the white light of this moon,
their echoes still resound.
_Translated by George L. Hart_
Su. Vilvarathinam, '_Nilavin Ethiroli_' (1999), in _Uyirtthezhum Kaalathirkaga_, Vitiyal, Coimbatore, 2001, pp. 324–325
Inspired by the classical Tamil poem from _Purananuru_:
On that day, under the white light of that moon,
We had our father and no enemies had taken the hill,
On this day, under the white light of this moon, the kings,
Royal drums beating out the victory,
Have taken the hill. And we! We have no father.
THE SONG OF PARI'S DAUGHTERS, _PURANANURU_ 112, TINAI: POTUVIYAL, TURAI: KAIYARUNILAI
George L. Hart and Hank Heifetz (eds), _The Purananuru_ : _Four_ _Hundred Songs of War and Wisdom, An Anthology of Poems from Classical_ _Tamil_ , Penguin Books India, New Delhi, 2002, p. 75
## Anxious Sermon
#### _Selvam Arulanantham_
I left
and arrived on the thirty-third day
exhausted
as if I had been walking for ages.
the immigration official
was reddened
as if he were staring into fierce sunshine
born as a Dalit
bowed down as a Tamil
I felt myself black.
why have you come? he said
sir, I am one born
in a time when love has weakened
and atrocity has reared its head
I said.
I am one who lived
when stone turned into wood
wood into iron
and iron into fifty calibre.
staring at me again,
he asked, why have you come?
when he heard of my tragedy
of brothers fighting against each other
neighbours driving each other out
he asked, perturbed,
and what did you bring?
I said
there's a cross dragged across three thousand years
and the nails made over thirty.
he sent me into Canada with a handshake,
saying, you go nail yourself in
here,
where the flag with the leaf flutters
I will nail myself right onto the cross.
_Translated by Rebecca Whittington_
Selvam Arulanantham, '_Vyakula Prasangam_' (1999), in _Thotruthaan Povoma_, Sabalingam Nanbargal Vattam, Garges-lès-Gonesse, France, 1999; published in _Vetraagi Ninra Veli_, Vitiyal, Coimbatore, 2001, p. 50
## 'Questions'
#### _Aruntati_
It was, indeed, an unexpected meeting today, and as the man relentlessly held on to me, I felt well and truly trapped. He looked like he was fifty or so, maybe even fifty-five or fifty-six. Evidently, he was a connoisseur of music. In a Tamil provision store. Now, I have this habit of singing to myself, just any old song that comes to mind. So I was standing in front of this pile of books, running my fingers across them, looking for this month's new issues, when he slowly came up behind me and asked, 'Little brother, you have a good voice. Do you ever sing on stage?'
I didn't know what to say. I wasn't even singing loud enough to be heard clearly. He could not have made out the lines of my song. Why praise me for singing well when I was only singing some random songs to myself?
Maybe he was teasing me. Who knows? That's probably why he asked me if I sang on stage. Or perhaps we share the same opinion about people who claim to be singers, but who really just pretend to sing.
What could I say? 'I wasn't singing anything,' I said.
'Come on, boy. What were you doing, then, just muttering to yourself?'
I'm done for! This man really means it. I don't even remember what I was singing, or if I sang it well. I just sing whatever comes into my head—a line here, a line there, whatever. It's a habit of mine.
It seemed to me that the man must be a good singer himself, or a connoisseur of music. His fingers drummed a beat on his thigh as he swayed his head and slowly hummed a tune.
I felt like laughing.
'Why are you laughing, little fella?' he asked.
I evaded the question. 'No, I think you have good taste in music,' I said.
'Well, why else would I drop what I was doing and talk about music with a complete stranger like you?'
Him saying he was giving up his work to talk to me was pretty amusing. In fact, here he was, talking to me, keeping me from getting my books so I couldn't leave and get on with my own life. How annoying!
'I rarely meet people like you, little brother. I'm not going to leave you.' He laughed as he said this, but I grew tense. I might as well forget my work for the day.
'Why, what do you see in me that makes you say something like that?' I asked. I was slightly elated, and rather eager to learn something nice about myself that I did not know.
'Tell me,' I asked.
'You really do have a good voice. You use it beautifully. And the books you choose to buy are not just any old books. You seem to be a good reader too. What else? I am, of course, happy to see people like this.'
That I read is true. But that I have a good voice or sing beautifully is doubtful. I felt like going home at once and really singing. In any case I could give it a try.
'Little brother, if you don't mind, I really want to talk to you. Why don't we go over there and have a cup of coffee?' he asked, almost dragging me along.
Now it all made sense to me. This was something I had done before, and now he was doing it to me.
When I was fifteen or sixteen, I and some other boys who were about my age or slightly older would go to festivals or folk performances. Whenever it happened to be folk theatre, it was really special. They would go on and on, until dawn broke. We mostly did not watch them from in front of the stage. We would much rather watch the 'other theatre' behind the stage. The wives of the performers would come and stand behind the stage with something heavy in the folds of their saris. It looked like it was really heavy. Initially we did not understand the secret of all this heaviness. The women would carry these weights and go behind the screen as though they had been instructed in what to do. At first we were not allowed in there, but then, little by little, we would help them carry this and that, and eventually we did get to backstage, close to where the screen was. That is how we learnt of the secret of the heavy things they were carrying. The performers in their costumes would make loud 'thom' 'thom' sounds as they jumped on stage, sweating as they sang. After a round of singing, they would come close to the screen and pull it so that the audience could not see their faces; then outstretched arms would give them one of the bottles that had been carefully guarded in the saris. They would gratefully gulp down the contents of the bottle, gently clear their throats, and enthusiastically return to conquer the stage, singing at a high pitch. After that it was sheer excitement! Ears would burst. The more of whatever it was their wives had kept hidden in their saris descended into them, the more joyous and exciting the performance got. Once we got used to all that, we regularly went back to watch the 'other theatre' behind the stage.
Performances in those days came with long stories. Like the Ramayana, the Mahabharata. The early King, the middle King and the late King—one King, played out by three actors. When the late King sang, the first two kings would be snoring backstage, asleep in their sparkling costumes. Half the people who came to watch the play would spread their mats, stretch themselves out, and dream about the climax. They would wake up with a jerk, rubbing their eyes as if someone poured water on their faces, when the late King was singing like a roaring lion. There were times when the performance would only end when the sun started shining on their faces, but the performers would mostly keep their costumes on the whole time. In their silk clothes and crowns they would light cigars or beedis, whichever they preferred, and go to a tea shop across the street. We would surround them, but the rest of the people in the town looked at them in amusement or just walked past them on their way to work. It was as if the actors were reluctant to step out of their characters. Maybe they were happy to be Kings and Ministers.
But their faces were horrible to watch. It was funny to see the pearl-white powder they had smeared on earlier in the evening dissolve in their sweat and peel off their faces. What was even funnier than this was to see the men who had put on female costumes. They would lift their saris and fold them up at their waists, remove their artificial hair buns and hold them in their hands while they lit their beedis or cigars. Their clothes would be crumpled, with one fake breast raised above the other sunken one. We tried hard to control our laughter. But we watched them seriously and enjoyed their performances. We kind of went crazy about them.
Sometimes this 'acting' was much better than their onstage acting. They would stand, sipping tea like kings. From the way they looked at us, they seemed to hope we would say something about the efforts they had put into their long night's performance.
I would go first. I would approach the one who looked the most anguished. 'Big brother, that was great! When you first raised your sceptre and pounded it on the ground and jumped up to sing . . . I just cannot forget that. It stays in my eyes!'
As I was saying this, the man would start loudly singing his song. The tea shop owner would clearly be trying to hold in his laughter. We would each get hold of one actor and work on him, which would always lead to a re-enactment of the entire play in front of the tea shop. That's how our trick worked. Then, all in unison, they would order the tea shop owner to bring in milk tea for everyone. As if he had to be told. He was just waiting for this moment.
'Can I get you something else to eat?' he would ask as he served us some hot snacks, like a smart businessman. I have wondered since then if the old Tamil saying about raking someone to put on a show was actually said after our activities.
But now, beyond all my experience, this man in front of me was flattering me, just for a song that I was mumbling to myself.
Now we are heading over for a cup of coffee. There are a lot of Tamil restaurants in Le Chapel. It was rather a problem of choice. Instead of going to my usual place, I decided to take him to a different one. 'Ah, come in and sit down, brother!' the owner of the shop said affectionately, as though he had known me for years.
'I haven't seen you in a long time. Is it a holiday today?' he started questioning me. I normally don't like any of this banter. We might have met each other just the other day, but he would talk as though we were long-time friends, nearly placing his hand on my shoulder. He asked after me. I found it disgusting.
'What would you like to eat?' he asked.
I looked at the man.
'Just bring something that we can munch on,' he told him.
'And to drink?'
He said, 'Just bring us some coffee with milk.'
I don't know whose face I must have looked at when I woke up to make me so unfortunate today. The shop owner brought some coffee and snacks, and asked me, 'Brother, aren't you the one who writes poetry?'
That's it! Now this man who praised me for mumbling a song is not going to spare me at all today.
'I write a little. Have you read any of my poems?'
'Where do I have the time? But people who have read them say they are good,' he replied as he moved on to attend to the next table.
Businessmen are all the same, I thought to myself.
'Little brother, why didn't you tell me about this? When I saw your face I was right in thinking that you are very talented.'
The man acted like a scientist who had just discovered something new. Ayyo! Ayyo! What a pain! I felt like banging my head against the wall.
'I don't seriously write anything. I just scribble something now and then. Not much to talk about.'
'I have a real interest in all of this,' he said as he gave me a piece of paper on which he wrote something. 'Brother, my phone number and my name is on this paper. You really must come to my house one day,' he said. The name on the paper was Nallur Somasundaram. It seemed the man also had a taste for literature.
I figured that, maybe, talking to him wouldn't be a complete waste of time, after all, so I asked, 'Do you read books?'
He said, 'Yes, little brother, I read all the magazines and books that the boys bring me.'
'OK, but how about if we move on, then? I have some things I need to do.' I got up to leave and placed some money on the table for the coffee and snacks.
'Brother, why are you leaving? Wait, I've been talking with you all this time, and I don't even know your name. What is your name, little brother?' he asked as he held my hand. I said my name was Kanakalingam. He frowned and looked at me.
'Why, is there something wrong with my name?' I asked.
'No . . .' He hesitated, then asked, 'Little brother, which part of Jaffna are you from?' I laughed.
'You laugh at everything. Just tell me where you come from,' he said.
'Why do you think I am from Jaffna? Not from Batticaloa or Trinko. Could even be Manar, right?'
'Can't you tell where a person comes from by the way he speaks?'
I guess he thought this was an important contribution to linguistics.
'If that's so, let's see if you can tell me where I'm from.'
'It could be guessed . . .' He hesitated, and after a moment he said, 'Here everyone's the same. This is all the work of those crazy Europeans. They give a card and a job to everybody who comes here, and now they assume we're all the same.'
What is he talking about? Oh, is he trying to tell me that there is so much violence and murder here because the Europeans have allowed the coexistence of both those who carry arms and those who work for peace? But what is the connection between this and that talk about where a person comes from?
'Little brother, go on. You said you write. Do you have a job?'
Perfect. Whether the man knows literature or not, he indeed knows about writers. But even then, I thought he should not have asked me that question after eating the snacks and drinking the coffee that I just paid for with my own money.
'I have an eight-hour cleaning job.'
'You see? One has to come all the way here to be a janitor. Everything is upside down here. What were you doing back in Sri Lanka?'
'I was studying. Where did I study? I spent all my time watching theatre.'
'I never miss listening to plays on the radio, but I don't watch plays performed on stage,' he said.
'Why not?' I asked.
'Don't you know about the people who stage these plays there?' he asked with a smirk.
I thought the man was morally outraged about the third-rate cinema-style plays of his own time. 'Wasn't there the great actor Vairamuthu in those days? There was no one then who could outperform him in singing and acting,' I said.
He laughed strangely. Then he said, 'It seems my little brother hasn't properly understood what I meant. Everyone should do what they are meant to do. Didn't they too go around with guns, claiming to fight for liberation? Did we allow them . . . ?'
I got it. I understood everything, just when I was about to make my getaway.
'Little brother, you look like one of our boys. You still haven't told me about your native place?'
I told him. He shot me the next question. 'Oh there! But which area there? By the side of the temple or behind the temple?'
I told him.
'Then is it on this side of the junction or the other side?'
I told him that too.
'Then is it by the side of the field or by the side of the pond?'
I told him.
'What's your father's name? What does he do?'
After a moment of silence, I said, 'Why are you pestering me? What you are thinking is true. I'm not one of your boys.'
He stood up in a hurry.
'Little brother, I just remembered something. I have to go.' He turned to look at me.
'Like I said, little brother, I just remembered something. We are moving tomorrow. The telephone number I gave you just now will not work. I'll look you up later and give you the new number,' he said as he left.
This is all the work of those crazy Europeans: I just repeated, inside myself, what he had said earlier.
_Translated by Kiran Keshavamurthy_
Aruntati, ' _Kelvigal_ ' (1999); first appeared in _Uyir Nizhal_ Journal, Paris, March–April 1999; published in Sugan (ed.), _Theendathagaathavan Muthalaana Eelathu Dalit Sirukathaigal 14_ , Maalika Books, Chennai, 2007, pp. 128–141
## Earthen Towns
#### _Nilanthan_
JAFFNA, OR THE CITY OF PEACE
(A few letters that came from Jaffna after people returned home in April 1996)
this year, a very long summer
nothing but sunshine,
wind moving mysteriously like a spy.
night
belongs to howling dogs
and growling trucks
daylight is
the time between
two curfews
the street stands broken up between one checkpost
and yet another checkpost
life is a barren dream
surrounded.
(24/6/1996)
SONG OF RETURN TO THE CITY
(to be sung to the Catholic folk theatre tune ' _Melinjimunai_ ')
ask the sun that burns dry
every one of our streets
ask the wind that speaks
with our stiff palm trees . . .
ask the sun . . .
our village is burned
our houses alone
our wind breathing death
our hearts have been charred . . .
ask the sun . . .
in our town's rotting mouth
we can see from far away
palm trees sprung from our land
are calling us home . . .
in our town's . . .
our sea our fields
our land our lake
our plains our woods
our river our people
are ours, are ours
are ours, are ours . . .
ask the sun . . .
Written on 21.11.2001, The Day of the Suran War, Tirunagar-Mallavi (Suran War refers to a popular festival that celebrates the killing of the demon Suran by Lord Murugan.)
EARTHEN TOWNS
yesterday
the day after Killinocchi fell
we went to Mullaittivu
instead of Yappu Pattuna
Mullaittivu
instead of Mullaittivu
Killinocchi
instead of one town
another town
towns on top of towns
big towns and little towns
all towns laid waste
unconquered people
are either killed or
flee to the forests
at times
they return victorious
and then
in place of the old demolished town
they build a new town
with earth
the whole of Mullaittivu laid waste
what men built
men demolished
men killed men
and burned men
but ever older and bigger than men
is the sea
unharmed by anything
beyond all lack of certainty
as a single certainty
it's like an angel
the beautiful sea,
like a sage
at peace,
coloured like all the blue of the sky
dissolved and turned into the sea
men come and men go
cities are built and ruined
but the sea
neither comes nor goes
in war or in peace
nothing can touch it
look
the men are coming again
now
they will build a town of earth
oh . . . sea,
old sea,
oh dear, great sea,
keep in touch with the earthen towns
they conquered
a great ocean
but they lost
another capital.
the other day, Killinocchi fell,
when they entered the town
to aim the gaping cannons
bursting open its little streets
full of jostling people,
only a dog was left behind
unconquered people ran
and hid in the forest
calling the birds and all the other
grateful animals
there they would build
a town of earth.
that earthen town
just like their
trenches
will be dark
and beyond time
just like their beliefs
about the future
it will be easily demolished
before the cannons' insatiable hunger
without protest
oh . . . forest
old forest
oh dear, great forest
be the consolation of these earthen cities
over the fields of screeching lapwings,
they wander without support,
these earthen cities
are drenched in rain,
the rain chases them
like a ghost
when on one accursed night
they left
their capital and ran
the rain
was chasing them
just like this
just like the enemy.
forest
oh good forest
don't let them down
sea
oh good sea
don't let them down
rain
cruel rain
oppresses my people
my innocent people
are delirious
with sorrow
like widows
who have lost their youth
on sloping roofs of earthen towns
endlessly getting soaked in the rain . . .
oh . . . capitals
with spacious grounds
oh marketplaces
full of valour and joy
oh grand
famous avenues
beloved palmyra trees
listen to me . . .
_Translated by Rebecca Whittington_
Extracts from two long poems ' _Vannimaanmiyam_ ' and ' _Yaazhppaaname, Enathu Yaazhppaaname_!' by Nilanthan; _Vannimaanmiyam_ first appeared in _Niyathi_ , Mallaavi, 2002. ' _Yaazhppaaname, Enathu Yaazhppaaname_!' first appeared in _Magizh_ , Puthu Kudiyuruppu, 2002. These poems are published in Nilanthan, _Ini Enathu Naatkale Varum_ , Vitiyal, Coimbatore, 2012.
## Hanifa and the Two Bulls
#### _Kumarmurthy_
Vellayan mustered all his energy and let out a high-pitched bellow, almost tearing his vocal cords in the process. Completely rattled by the sound, Hanifa ran over to look. Vellayan lay stretched out in the shed, his pained eyes rolling in and out, lit by the faint moonlight. His hind legs twitched in rapid convulsions. Startled, Hanifa circled around randomly. He had no clue. He ran back to the house and brought out a tiny lantern, burning like a firefly. He kindled it and it crackled and came to life, with a brightness that strained the eyes. He held the lantern up and checked again. Vellayan was in the same state. There was complete silence all around. Suttiyan was glaring right through the night, with his ears all erect. Hanifa sat down next to Vellayan and caressed his chin. Tried to lift his head up, in vain. He shouted towards the house, where he could see his wife coming out the front door.
Together they tried to get Vellayan to stand up. Somehow, Vellayan managed to squat like a 'Nandi', frothing around his mouth. Hanifa trembled looking at him. Tears welled up in his eyes. He tried his best to recount the day, in sequence. Nothing unusual had occurred. They had been working for the whole week in Maniyam's paddy fields. All the tilling and turning over was done by Vellayan and Suttiyan. It had been so for years. Once Hanifa stepped into a field, Maniyam would never even bother to look in that direction. All he had to do was relax, sending everything, from betel leaves to chew on to seeds for planting, through his workers. Or he would attend to other jobs in hand. Hanifa's work was impeccable. He never bothered about time, as if he was working for himself. He wouldn't want to go anywhere else until he completed Maniyam's job. He had finished it all up now, but for a single day's work.
While he worried about the sudden sickness of Vellayan, he was equally disturbed about leaving Maniyam's job unfinished. He asked for some salt to be mixed in warm water and sat next to Vellayan. Feeling a little sense of relief, he hugged Vellayan once, who nodded in appreciation despite being exhausted.
Vellayan had known Hanifa ever since he was born. Hanifa was everything. An unimaginable sense of gratitude hence stayed with him perpetually. His mother died soon after his birth. Orphaned and famished by the time Vellayan managed to reach Hanifa's hands, he didn't look like he would survive for two more days. Hanifa looked after him like a son. Even his wife complained a bit. But he nursed him back to life. Vellayan would run around the house, playing with the kids. His character never changed, even after growing into a bull. He would often just stand there with those longing eyes of his, waiting for somebody to bring in his fodder. If that someone's hands were empty, Vellayan would come after them, playfully. He would never wander very far, even when he was outside. The moment he heard the call, 'Vellayan', he would come running. His name, 'Vellayan', means milky white, which was his colour. Not a single spot on his body. When he was scrubbed clean with soap, he would gleam as white as a nettle flower.
But Suttiyan was found by chance. Hanifa immediately realized that he would make a good team with Vellayan. It was not easy for him to buy Suttiyan. Hanifa was not well-to-do, living as he did on his daily wages, but it was not a difficult life either. He got work for thirty days of the month and gained a reputation as a good farmhand. He was well built and tall. His hair was just about to turn grey. He chewed betel leaves all the time, filling his mouth. He never missed the morning prayer in the mosque. Moreover, he also participated in the administration of the mosque. Not just him, this had been the case for generations. It had become a fact of life, beyond any need for explanation. It had nothing to do with the concept that performing your duties would help fulfil the lives of your children. It just happened that way.
He gently put the soda bottle filled with salt water into Vellayan's mouth, holding his chin up. Vellayan gulped all of it quickly. This gave Hanifa a certain confidence and peace of heart. He asked his wife to make some tea, and leaned back against the pillar, next to Vellayan.
Hanifa was as affectionate towards Vellayan as he would be to any of his own siblings. This was reflected in every word he used. He would never hit him. If Hanifa simply raised his voice in anger, Vellayan would recognize it and act accordingly. Shaking the bell around his neck once, he would quicken his pace. But Suttiyan would, occasionally, get hit: he had a stubborn streak. When they finished work, Hanifa would take bath only after he washed them both up. Suttiyan would run to the house, but Vellayan would wait for Hanifa and walk home with him. Sometimes on the way home, Hanifa might chatter a bit longer than usual at the shop where he bought his betel, and Vellayan would gently nudge him on his backside to remind him. Hanifa would excuse himself, saying, 'He is hungry,' and leave.
Once, about five years ago, it was drizzling, a cold breeze was blowing and the sky was loaded with dark clouds and lightning. Hanifa tied the bulls in the shed, arranged hay for the night and went to bed. At midnight, awakened by Vellayan's cry, he came out to look. The wind was heavy, and the coconut trees were dancing around like devils. The rain got heavier. Vellayan jerked and cried out again, four or five times, heaving in anger. In this ruckus, Hanifa's wife and children also came out of the house. A few minutes later a coconut tree broke and fell headlong on top of the house. Amazed, everybody hugged Vellayan in gratitude.
Sitting there, Hanifa looked up at the sky. The moon was visible, hazily though, hidden among the clouds. Hanifa tried to remember that day's crescent, but his memory refused to help. He assumed that it would be just a little while before dawn broke. At that moment he heard, outside, the inauspicious crowing of the cock. A dog howled in the distance too, then it faded out. Hanifa was a bit rattled, wondering what evil was going to strike. He thought for a moment about his son, working in the city. He had written saying that the army would shoot people at random. He prayed, strongly, for nothing of that kind to happen. Muttering 'inshallah', he looked in the direction of the mosque but he couldn't see it right then.
His wife brought out the tea. He drank it down and then they both tried to lift Vellayan again. Exhausted, Vellayan squatted like a Nandi again, breathing noisily, struggling. Hanifa used his hands to wipe the froth from Vellayan's nose. His hands got sticky, and he wiped them on his shoulder towel.
His worries doubled when he thought about Kaja Moideen's absence. He was an expert cattle healer, and he always responded promptly to any call. The entire village had been shocked when they brought him home dead, killed by a bomb.
As daylight fell over the earth, Hanifa handed Vellayan over to his wife and hurried off to find Maraikkayar. Maraikkayar was shaken when he heard Hanifa speak. He had never heard Hanifa's voice trembling so much.
'What is it, Uncle?' asked Maraikkayar, rushing out of his house. Listening to Hanifa, Maraikkayar said, with raised eyebrows, 'He wandered by here last evening and seemed fine then.' He turned back and called for Hussein. Hussein came out immediately, as he was preparing for his namaz.
The three of them tried together to get Vellayan up on his feet. But he could not get up, and slid back down to the ground. Then somehow they succeeded in stretching out his legs and he stood up, trembling. They examined his body thoroughly, not leaving a spot unchecked. Nothing seemed to be wrong. Maraikkayar felt the chin, the front legs and eyes, for a second time. They seemed a bit swollen. Nodding his head in pride at having diagnosed the problem, he said it must be ' _mun adaippan_ *'. The other two, after second examinations, agreed with him.
'Doing " _naiyyam_ †" twice will make it vanish. Put the burden on Allah! All will be fine,' said Maraikkayar. Hanifa looked up to him in hope.
Hanifa took a deep breath and looked at the sky. They let Vellayan lie down and started discussing the logistics of 'naiyyam'. Suttiyan, standing next to Vellayan, was licking him.
The village seemed to be in a rush. People were hustling and talking, in worry and surprise. The afternoon sun was blinding.
Exhausted, Hanifa wiped his sweat off with his towel. He was carrying palm flowers and neem seeds, collected for 'naiyyam' over some four or five miles. His mind was preoccupied with Vellayan's recovery, plus he was constantly mulling over plans to finish Maniyam's job, with an alternative pair of cattle, if need be.
When he entered the fence gate at his house, his wife and children were standing at the front door. Their faces were shrunk and darkened in incredulous sadness. Hanifa's head reeled when he saw them. Thinking that something had gone wrong with Vellayan, he rushed to check and found him lying on his side, on the ground. He checked his breath, and found it coming irregularly. His youngest daughter came and stood behind him.
'Father, we all have to leave,' she blurted out.
'Listen to your mother, my child,' said Hanifa, opening up his sack, hurriedly laying out his bundle.
But his wife also hurried over to them, and wailed, 'The Movement has asked all of us to leave . . . Oh Allah, how unfair is this . . .' That's when it struck home to Hanifa, and he remembered Vellayan's cries. His mind finally absorbed the fact that something else, something thoroughly horrible, was really happening.
He walked up to the road in a daze, and saw faces shrunken, inexpressibly sad. He saw lots of people heading to the mosque, and children scrambling all around, raising dust on the street.
Two vehicles, one after the other, went past him, with guns protruding out of them. When he saw that Maniyam's younger son was in one of the vehicles, out of habit he started to call out. But for some reason, he could not. It seemed as if something came and blocked his throat.
When the dust settled, he sensed Maraikkayar's presence next to him.
'What's happening, Uncle?'
'We have to get out of the village.'
'Where to?'
'Only Allah knows,' said Maraikkayar, pointing to the mosque and walking away.
His brain was not so clear, but instinct told him that something terribly wrong was happening and he started walking back home. He went up close to Vellayan and looked at him. His eyes were closed, and his ears were quivering. Suttiyan was standing next to him, still busy licking.
He felt the urge to howl with all his strength. He just sat down, dizzy. Something heavy was rolling up from his stomach, grabbing his chest. Thoughts went numb. His ears could hear familiar voices, wailing and whining. He shut his ears too.
His youngest daughter pulled him, saying, 'Everybody is leaving, come on, Vaapaa.' Hanifa stood up and followed her in a daze. His wife and elder daughter were walking ahead, carrying a small sack of clothes. When something hit his backside, he turned around to look. There stood Suttiyan. Hanifa lost all control and burst out wailing non-stop, hugging his neck. People in the street watched as they walked past in single-file lines. His younger daughter pulled him away, to walk with her. Suttiyan came along as far as the gate. When they turned into the street, Suttiyan turned back to look at Vellayan. Then he looked again at them.
As long as Hanifa could see him, Suttiyan kept taking turns, looking at them, then at Vellayan, then back again.
_Translated by D. Senthil Babu_
Kumarmurthy, ' _Hanifavum Irandu Erudugalum_ ', in _Kumarmurthy_ _Kathaigal_ , Kaalam, Toronto, 2002, pp. 29–35
(This short story recollects events around the LTTE's chasing out of Muslims from the Northern Province of Sri Lanka in October 1990.)
## A Story Lost in Time, Lasting in Time
#### _Iravi Arunasalam_
I remember those days. Even though I was already thirteen years old, I still held my father's hand when we went anywhere.
It was 1974, in the month of _Thai_ *. Those were happy times. Before this story, which I am about to relate, there came these heavy monsoon rains. Fields flooded. Backyards were brimming with mud. Wary of stepping on snakes in the floodwaters, we went into nearby fields and collected tapioca tubers. We picked brinjals. Floods don't hurt bananas, but so what? Vasanthan went ahead and bagged them too.
Because of the floods, there was a bumper crop that season. I wish I could say that was a time of overflowing, exuberant rivers. But we had no river in our town. Just a canal that we called Vazhukkai River. It ran when the monsoon rains poured down. The rest of the time it was just sand that ran there. We ran there too. Wells all over town were overflowing. Our hearts, too, were overflowing with joy. Torrential monsoons meant cold, dewy winter mornings that would keep us wrapped up in blankets.
Then came the days when we could no longer stay wrapped up in our blankets. We were so happy! All the streets were festooned with banana trees, and shrines were built at every street corner. Banners beckoned in front of all the shops. Full pots sat waiting, along with strings of auspicious mango leaves, at every doorway, in every household.
What for? For a carnival! For whom? For us, for our mother tongue, a festival was happening for our Tamil language! We were ecstatic.
> _Sound the conch!_
>
> _Our life, our wealth,_
>
> _Our Tamil will never grow dim!_
>
> _If anything comes to threaten it_
>
> _Destruction is certain so_
>
> _Sound the conch!_
>
> _Hold your head high, and_
>
> _Call yourself a Tamil, man!_
Such were the words chiselled in our hearts, at that tender age. Naturally, we were happy about the festival. Cold in the morning, cold in the evening, but the days were sunny. Buds started to ripen on the branches of the jujube trees. We explored the whole town looking in vain for one single ripe berry. We didn't notice the time flying by; it was just filled with happiness. Why would we hold back when there is a festival for Tamil?
In 1968, Chennai, the capital of Tamil Nadu, had hosted a Tamil Research Conference. Scenes from that conference were shown as trailers along with the movie _Ooty Varai Uravu_ , starring Sivaji Ganesan. That's why we went to see _Ooty Varai Uravu_ at the Mani Mahal Theatre in Sangani. That's how avid we were. And now, how could we sit still when such a festival was taking place in our very own country? I dragged my father along. Mother said she wouldn't come. My sister, I and Siva held father's hand as we went. We watched the festival. We were ecstatic.
That's how I remember it. Not a single moment forgotten. Our hearts were overflowing with Tamil. Somehow, even now this is how it seems to me. That was the moment when Tamil touched our very hearts. Before that Tamil was a language. It was _just_ a language. After that, Tamil became our identity. It became our consciousness, and mixed, as one, with our lives. We all joined together, in determination, as Tamilians. Those were the days when I went to the Tamil Research Conference, holding on to my father's hand.
There was a parade. The floats featured all kinds of figurines, tableaux and skits, all celebrating the glories of Tamil. We watched it from the corner of Kasthuri Street, then we ran to the corner by the Windsor Theatre and watched it some more. But that was still not enough. Some people told us there was another parade over at the end of Paramesvara Street, so we ran over there and watched that too. We were in seventh heaven.
I got this feeling in my legs. That's how I remember it. They started to tremble. I tried to plant my feet firmly on the ground. Normally, I let my feelings show. I cry, not out of grief, not even in times of soul-wrenching misery. But at that moment I was about to cry. But nothing much happened. My legs started to tremble and I planted my feet squarely on the ground.
But no, my consciousness won out, or at least that's what I would like to say happened. Now my legs stopped trembling and I rooted my feet to the ground.
Oh, man, how can I express that feeling? I don't have the words. Father's hand, holding mine, was firm. Father sent me a message through his hand, and I, in turn, sent the message through my legs into this earth, my Motherland. But I cannot, now, seem to explain that feeling.
In the swirl of all those emotions, we arrived at our house, lit by a kerosene lantern. In the courtyard, with a cool breeze blowing, we told mother all about the day's events.
I pestered my father all through the next day. I pressured him. 'We have to go on the final day too!' I said. I knew the last day would be the pinnacle of the festival. I would not leave my father alone. 'Okay,' he said, and so father and I went.
That evening we were standing on the streets in Jaffna town. There were banana stalks, banners, shrines and figurines everywhere. Bamboo and casuarina poles were erected, and coloured lights were hanging from them, blinking lights. It was all just beautiful. A chilly breeze was blowing, but I was happy.
I repeat: Everyone was happy. Not just happy, though. Everyone was emotional.
I could see it clearly. Each person's face shining in the lamplight. We're gathering in front of the Regal Theatre, and walking from there to Veerasingam Hall. We're standing in the open square. A rally is taking place. I couldn't see. I was a kid. Short of height, I couldn't see anyone. My toe tips hurt, I was straining so hard to see. My father wasn't going to pick me up. The loudspeakers were the only things that helped me. There's a noise coming from the loudspeakers.
No. The more I think about it, the more it seems like the loudspeakers were switched off. No, I can't be sure. Were there loudspeakers or not? Whatever, there were some jumbled, confusing words coming at me. I can be sure of that much. My own emotions were rising, and I could feel the sweat on the palm of my father's hand.
There was a police station by the pannaikkadal. The sea breeze from pannaikkadal came spreading through the entire town, as if it were bringing some news.
This is all I can manage to say now. What happened next? Something that made us wonder what crime we had committed, besides speaking Tamil. We did put on a festival for Tamil, but that's all. But because of that came something crazy, unreasonable.
The lights went out. Wires fell from the electricity poles. That's all I could see. Sparks arced and fireballs flew. Father clutched my hand and hollered my name, 'Raasa! Raasa!' He dragged me along, running. I heard gunshots. Gunshot sounds that I heard once in a blue moon, when somebody went hunting in our village, or when they shot a mad dog, I now heard continuously. The hunters have come. Or maybe it's the people who shoot mad dogs who have arrived.
Bombs fall, exploding white. Eyes burn, but regular smoke doesn't make them burn like this. It was like the smoke when they burn rubbish piles in the village. It burnt when I was asked to spit on burning chillies and neem leaves to placate the evil eye, but it had never burned like this. This, this _really_ burned.
Eyes burning more and more, father keeps on running, yanking me along. I stumble along, running with his pulling. Where are we running to? No idea. I pulled on my father, and he pulls me, and he runs.
Suddenly, father falls into a ditch. And when he falls like that, how can I keep from falling too? I fell. Head first. That's my memory. Face smeared with mud.
Like I already said, it was a good monsoon that year, and it smeared its mud all over me, uninhibited. It probably smeared my father too. I wasn't sure, though, in the dark. We just stayed there, in the mud.
More and more people keep falling into the same ditch. From the way they were falling we can tell it's not really a ditch. It's a storm sewer. We are lying in a storm sewer. Father keeps whispering, 'Raasa! Raasa!' as he rubs my back. He doesn't say anything else. He didn't ask, 'Are you hungry?' If he had, I would not have said yes. He didn't ask, 'Are you scared?' If he had, I would not have said yes.
This is all my father did.
He rubs my back. In the dark, his hands move from my back to my throat and feel around my face for my eyes. My father's fingers.
'Father, I am not crying,' I whisper.
'Doesn't it hurt?' he asks.
'Yes, Father,' I said.
'Everything hurts,' I said.
I think my father then whimpered, 'Raasa!'
Dawn broke and light returned. Father arranged some stones and climbed out, then he pulled me out as well.
We went to the bus station, but by a circuitous route.
There was no bus. Father said, 'Let's walk.' We walked ten miles to get home. We passed ponds, fields, and temples. Reached home. When we landed on our doorstep, my mother shrieked when she saw how I looked.
That was the day our whole town, our whole nation, began to shriek.
_Translated by D. Senthil Babu_
Iravi Arunasalam, ' _Kaalam Aki Vanta Katai_ ', in _Kaalam Aki Vanta Katai_ , Vitiyal, Coimbatore, 2003, pp. 21–25
(This story is set against the background of the violent events at the World Tamil Research Conference held in Jaffna, in 1974. The massacre took place on the last day, 10th January.)
## Questions for the One Who Is Coming
#### _Karunakaran_
what is this constant obsession of yours
with this race
in which we will never meet
in those mysterious moments of your arrival
raking up dust choking the sky
sounds of horse hooves
through our courtyards
when you come
breaking down
in fear
we run away
carrying ourselves like corpses
in that last moment
the remnants of our life lay weeping
on the cornerstone of a house reduced to dust
the garden laid waste.
you have seen
our happiness littered
among the ruins of flowering trees.
all the times you came that way
only sorrow was in the making.
doesn't it bore you
being set to follow
love's yarn breaking
the invariable gap of impossible meeting?
don't you ever want
to stretch out a hand of friendship
under that ray of light?
_Translated by Rebecca Whittington_
Karunakaran, ' _Varugaialaridam Sila Kelvi_ ', in _Oru Payaniyin Nigazhkala_ _Kurippugal_ , Magizh, Putu Kudiyuruppu, 2003, p. 36
## Appe Ratta
#### _V. Gowribalan_
He isn't a mythical being, something you cannot easily see with the naked eye, yet you would never come across him in the normal, mechanical ruckus of your life and work. When he is not out picking up scrap iron, which he does in order to feed his belly, he'll stand rooted, intensely gazing at the sky above, with his stomach pressed hard against the wall of the Krishna temple in Marathadi Lane, painted in columns of red and white and daubed with saffron. To look at him, you surely don't have to be a writer shouldering the burden of observing the world and recording it, nor do you have to be a humanitarian, anxious about the welfare of your fellow beings. Actually, the Nadar shop owner who buys the scrap from him every day, and cheats him, yet provides him a livelihood—he knows him quite well. Then, too, the aged and bearded chairman of the Krishna Temple Maintenance Association, who sneaks up behind him while he's in his trance-like state with his stomach pressed hard against the temple wall, intensely gazing at the sky above, and whacks his behind with a stick, then cackles his cruel laugh as he enjoys the sight of the man skittering off in shock and pain—he, too, knows him quite well. The old beggar, too, who wraps dirty clothes around his leg to beg and shares his night's sleep in the 'Too Good' bus shed with him (for which he loots part of the day's scrap-metal earnings so that he can buy himself some beedis and ganja)—he knows him quite well, too.
If you consider yourself too civilized to meet him in person, you could always opt to go to the street leading from Marathadi Lane to the railway station, stand in the shade of the magnificent _sirissa_ ** tree, and just watch him. First you'd see a yellow bus, owned by the New Eastern Bus Company, heading off to Irakkakandi and raising a hell of a lot of dust and smoke on its way. Next you let the gas van pass by with its horn sounding like an infant's cry. Then (if you are a male), you shake off your sexual emotions or erotic fantasizing over the girl clad in the light-blue _churidhar_ riding her ladies' bicycle behind the van, and let her pass, or (if you are female) you stop pondering the quality of the kitchen larder offered for sale by the loud cries of head-loaders, or momentarily unload your own sincere concern in ferreting out the faults in the moral behaviour of other women, and you look towards the entrance to the Krishna temple. You spot the veiled woman selling peanuts at the gate. Behind her, then, you can see him, standing right up against the wall with his stomach pressed hard against it, intensely gazing at the sky above.
With his dirt-coloured trousers on, his navel protruding from his dark and naked tummy crushed up against the temple wall, he'll be standing there gazing at the sky above. Maybe he was trying to digest the old, rotten food that the shopkeeper palmed off on him, or even better, attempting to pleasure himself by standing there with his tummy crushed against the wall, his bloodshot eyes glowing in that ugly face, gazing hard at the sky. Maybe he stood there gazing at the sky just to keep from panicking at the sight of other people's faces, or maybe it was a well-meant accommodation on his part to keep other people from screaming and running away at the sight of his vicious face. This trance will continue until the chairman of the Krishna Temple Maintenance Association comes and rudely interrupts it.
You would do well to avoid looking directly at his face. Not that his reddish eyes staring blankly out of his dark face would scare you. Rather, it's that on the entire half of one cheek, lined up in a perfect semicircle, at regular, close intervals, eleven pairs of tiny holes would be starkly visible, clutching at his flesh. Those gnawing spots clutching at his tissues are a mix of red and pale yellow, with blood and pus oozing out, feeding a continuous stream of buzzing flies and mosquitoes. He will not look like he's taking in the aesthetic glory of a star-laden night sky—after all, it is still daytime—nor will his eyes suggest anything like that, with their demonic, spiteful look, staring hard at the sky. Shining red just like his eyes, he just stands there, watching a twisting and turning tiny little river in the sky, soaked in red, yet not sticky. It seems like any other river, with no trace of anything gooey or viscous, yet it is stained with a stream of red. To him, it smelt of blood. Worse than the red of the river, he could see a more garish mix of red and white in the bits and pieces of human flesh in it, interspersed with a steady stream of yellow brain tissues, soaking and bubbling in the red river. Rabid black dogs waited along the banks of the stream with their red, glowing eyes glaring into the river, their mouths wide open. Sometimes the dogs jumped into the river, biting and snatching at pieces of flesh and brains, with shreds of tissue dribbling from their jowls. This is how he sees the river running across the sky above.
One day, at dusk, when the scarlet-flushed sky reddened the palm leaves and the neem treetops, a missile fired from the 'Welcome Vihara' Camp fell into the 'Mill' refugee camp where they were staying, and exploded. When they discharged him from the hospital more than twenty days later, they had stitched eleven pairs of tiny holes in a perfect semicircle on the entire half of a cheek. He eventually made his way back to the 'Mill' camp, where he saw only red blood, dried and darkened on the wall and on the cement floor. Chunks of rotten flesh stuck to the walls, still oozing fluids and giving off a foul stench. Maggots wriggled beneath those globs of flesh. After that day he gave up his search for his parents and relatives. During the nights that followed, he tried out the 'Too Good' bus shed to sleep in. He found that it was already claimed by an old beggar who tied dirty clothes around his legs when he begged, as his place to earn his living and to sleep in. Some days, when the old beggar couldn't get his beedis or ganja, he wouldn't let him sleep there. The old beggar had to be sufficiently 'high' to let him share the sleeping space. Still, first thing in the morning when he sobered up, he would kick him out. This meant that he spent most of his nights in the small lane behind the bus shed, between it and the compound wall of the railway engineer's bungalow. He got used to the stench of urine (thanks to people waiting at the bus shed during the day) and to the heat, to the ruffling of polythene bags, and to the grinding of bits of broken glass, and he learnt to ignore the early-morning breeze with its determined chill, so he could get some sleep. But for the last few nights, even when the old beggar was not at the bus shed he stopped sleeping there or even behind the shed. He had taken to a narrow, unused, concrete culvert, under the sirissa trees, as his new night-time abode. 
A few days earlier, when he was trying to dodge a brick thrown at him by the respected chairman of the Krishna Temple Maintenance Association, which had nonetheless hit his elbow and brought the blood spurting out, and as he continued to flee from the barrage of bricks, he noticed the old beggar in a heated argument with a bunch of 'elder brothers', all sporting the same coloured T-shirts and new caps.
He could see that those 'elder brothers' were carrying political posters, coloured the same as their T-shirts, with the picture of an elderly man, and that they were trying to put them up at the bus shed, but that the old beggar was trying to stop them. The old beggar was throwing a huge tantrum, pacing up and down and shouting. He then saw that all over the bus shed there were other, similar political posters printed in the same colour as the T-shirts. But these pictured a different leader, with two pictures on each poster—one showing him wearing a white dhoti and shirt, and greeting people, and the other one displaying him all dressed up in a suit, and waving his hand.
'Hey, listen up! We got no quarrel with them. We use the same colour and the same symbol as the ones on their posters. It's only the number of pictures that's different. We'll paste ours up below theirs—we won't cover them up. Now, move over!'
'No way! Once honest, always honest! I gave my word to that Sir that I will not allow other posters here. You cannot put them up here. Go on . . . git!'
'You crooked old beggar dog, you . . .'
One of the 'brothers' took hold of the old beggar's neck and shoved him, but the rest of them stepped in and calmed him down. On the ground in front of the bus shed there developed a collage of random footmarks, shoe prints, and dust scuffed around on the dry surface, revealing wet sand raked up underneath. Then they left and walked towards a van parked in the road. That's when he noticed that van. It sported the same colour as their T-shirts. All around the van, there was that same leader, willingly smiling and greeting, with both hands raised above his head. He could see the men's vicious faces climbing into the van and sitting down behind black glass windowpanes when the van started with a jerk. The wheels spun against the ground spewing dust and smoke before the vehicle began to move, and then it suddenly leaped ahead and raced off. For a moment it looked almost certain that it was going to roll over when it bent around the corner by the church, then it disappeared after the turn. That evening he watched as the leader's face on the posters at the bus shed paled and faded into darkness. That night, he smelled an extra stink of beedi and ganja smoke, much more than usual. He also noticed that the old beggar had some new currency notes of different sizes and colours. The leader, whose face had been pervading the entire bus shed, and his colour, had disappeared completely into the dark night. He could not figure out the exact nature of the old beggar's new business enterprise, but he clearly realized that today he had laboured harder than usual.
Well, the old man was certainly smart.
'De, you loafer! Some of your scrap ain't iron, boy . . . just because you can lay your hands on it for free, dummy! You got to tell the steel from the iron. Steel means higher prices, boy! You got to tell the good from the bad, you loafer!'
Later that night, he dreamed of dogs swimming in the faded red river, chewing and snatching at completely whitened bones, floating, untainted by the colour of the river. When the dogs tried to snatch them, the bones slipped out like rubber, regaining their original shape. At just such a moment, he heard the quiet rumble of a vehicle approaching the bus shed and parking. The sounds of footsteps and a certain restrained commotion reached his ears. His thoroughly exhausted body, the stench from the urine, and concern about the bone-snatching dogs would not permit him to get up and look. He stayed where he was. Pretty soon the commotion increased. The clank of iron rods hitting the cement floor and walls of the bus shed could be heard distinctly, right inside his skull. The old beggar's ganja-infested grumbling turned into a growl, peaked, and turned threatening. Suddenly he yelped, howled, begged, wailed, whimpered, and receded into silence. The sound of the van peaked, and then it faded away. As the dogs resumed their smacking, he drifted off, back into sleep.
When the cold dew of the early-morning chill smeared all over his body like holy ash, with weariness, he woke up, shaking off the dogs and the bone pieces. He began to smell something different from the usual stink of ganja and beedi. What's more, it reminded him of the same stench when his 'Mill' camp was shelled. Frightened by bloody rivers and blood-stinking dogs, he peered out at the bus shed. The old beggar was lying there, like a ripped-apart bundle of clothes. He really woke up when he realized he couldn't see the old beggar's head. The old beggar's sarong lay off a ways, not tied around his waist. There were bloodstains and torn skin above the thread he wore around his hips. The old man lay there upside down, with his legs crossed and his head banged up against the wall. He noticed a long, thick, blood-smeared iron rod lying in front of the bus shed. He cleaned the blood stains from the iron rod by rubbing it in the sand. Then slowly, as the old shopkeeper had said, he used the cap of a soda bottle to scratch at it. It was steel. He was happy. He started walking towards the scrap metal shop, proudly carrying the find of his early morning's magnificent labour.
He chose the narrow, empty culvert to sleep in for the next few days, as the bus shed was barren and stinking of blood. It will only be a matter of time before the chairman of the Krishna Temple Maintenance Association will come and chase him away, but until that happens he will stand right there, his stomach pressed hard against the wall of the Krishna temple, painted in columns of red and white and daubed with saffron, and stare hard at the river of blood running in the sky, trying to digest rotten food, or to pleasure himself.
_Translated by D. Senthil Babu_
V. Gowribalan, ' _Appe Ratta_ ' (2003), in _Oppanai Nizhal_ (first edition 2003), Parisal, Chennai, 2010, pp. 89–96
## Iron Birds
#### _V. Gowribalan_
'Even if you turn a deaf ear to the prophecy of that fifteenth-century prophet—that something terrible would come from the sky, two iron birds would crash into two big buildings, the world would go to ruin, and people would die of starvation—you must surely have realized that, with its production of impotent seeds, this world has dared to deprive itself of chlorophyll. This incident, occurring not long before he picked up those pieces of lead, is recorded here.'
Cheeks puffed out past their ears, hairless, stomachs bloated, legless too, ghosts sunk in the sea come up and beat their heads on the ground, along with the tall waves. Oh, oh, oh . . . bellowed the seashore in the thoughts of a little boy who lived there. With the rumbling of monsoon rains, ocean waves entered the seaside home of a friend of the little boy. The friend's little sister, in a darkened room, clutching a window bar with one hand, her sleepy face covered by the dark so that only her white teeth gleam, is making a racket. Things generally unwanted by the friend's family and himself are banging against the coconut tree trunk in the courtyard, floating on the rumbling salty white foam of ocean waves. A little while ago the friend's father, with a smile, went down the rain-muddied street on his bicycle. So, this is a region where conditions are such that it's possible for this story to happen, a region by the sea that's been overlooked by moviemakers in favour of the city; or maybe conditions are such that it's possible for this story to happen in any region, where any of the world's languages is spoken.
Those objects drew his attention with a kind of perpetually tension-producing panic. Like ink spilled in a new notebook, giving birth to anxiety, like a map of the world soaking into the pages of a notebook, a panic spread right through him, unnerving him. But he felt incapable of diverting his attention, his concentration, from those objects. Yes, they did terrify him, but he took it that those objects were also capable of removing the unidentified weight, or pressure, that was bearing down on him. That's why, between nervousness and fear, in anxiety that someone's attention or gaze might fall on him, even though he sensed that the poor-quality cloth might give way and tear, he put those heavy objects into his pants pockets.
The most blows fell on him precisely on the days he went to school with broken pieces of coconut from the Ganesha temple hidden in his loose white shirt, praying that the maths teacher, who always went around with his yellow-white curls hanging down over his forehead, not come to school, and if he came, that he not hit him. He had lost faith in his own prayers, and in God, at a very young age. His eagerness that his maths teacher should know he had those dangerous objects with him was the reason for his keeping them. During the school tea break, he wanted to take advantage of his classmates' presence all around him to take the objects out, very nonchalantly. They would claim to know all about the weapons they were used in. One would say it was L.M.G. rounds. Another would say it was from an S.L.R. After each one had said a name, he wanted, very calmly, to name a gun they hadn't named, in a tone that suggested he had a lot of experience using it, and put the rounds back into the pockets of his pants. By this means, by means of the other students, he wanted to appear enigmatic to the maths teacher. Generally, the maths teacher expressed an unending hostility towards this class. When he punished them severely, he told them he had dreamt at daybreak that the students of that class, dressed like red Indians, were chasing him with sticks and canes in their hands, and his shoulder still ached with the interminable pain of slamming into a door as he got up and fled from them in fright.
So, at the thought that, if he came to know of the rounds in his possession, the teacher would dream that all the black gods under the neem trees and in the temples, red flames glowing in their eyes, were threatening him with their weapons, he smiled to himself uneasily.
On another occasion he learnt that his father, afraid of being arrested if the army did a round-up, had dug a pit in the narrow gap between the thatched palm fence and the cowshed, and buried the wires his older brother, who was always sneaking around, up to no good, had brought home to siphon off electric current. But he also knew that if he told his father upfront that he had buckshot in his possession, he himself, instead of the rounds, could end up buried just behind the cowshed. He would much rather have his father learn about them secretly, through some other person. If that is how it were to happen, he looked forward to confronting his father with pride, knowing that his father, while pretending to ignore him, would secretly keep a close watch on him.
So he collected his thoughts as he walked through the black-and-blue iron deposits in the gutters, made by the running rainwater.
As he approached the sirissa tree, the walkie-talkie on his hip flashed red and crackled. He immediately crouched against the sirissa-tree fence. His right hand was on his hip. It quickly drew his pistol. Lifting the pistol, he placed it close to his neck. Turning the walkie-talkie on with his left hand, he brought it close to his mouth . . . _over_ . . . _over_ . . . _yes_ . . . _over_ . . . _over_ . . . _receive_ . . . _enemy_ . . . _enemy_ . . . Tucking the walkie-talkie back in his waist, he brought his left hand as well up to the pistol, and he backed up even closer against the fence. Enemies out there. His back hurt. He felt the rusting barbs of the wire fence and the buds of the sirissa tree burrowing cruelly into his back. The fact that the area around him lacked the flat outer wall of a fort or a wooden blockade, like in Tamil or English films, caused him regret. He felt himself stoop like a question mark. The walkie-talkie flashed red and crackled again . . . _enemy_ . . . _enemy . . ._ _change your position . . . over . . . over . . ._ Yes, he sensed the necessity of changing his position. He heard something fall to the ground with a crack. He felt something warm splattering his feet. He looked and saw cow dung trickling down.
'I've been looking for that buffalo all morning, who knows where all it's wandered off to . . .' _enemy . . . enemy . . . dangerous enemy . . . change your position . . ._ his brain commanded him. Besides the trembling of those white flowers among the brown-and-green leaves of the rosebay, he couldn't see anything in front of him, two hops . . . one step . . . another jump . . . that's it . . . crossing that open space and landing behind the potted plants in one go, he could reach his _safety place,_ _one_ . . . _two_ . . . _three . . ._ Two steps, one hop, and another jump . . . He hadn't miscalculated. He felt himself land on something hot and bone-hard, yet also with the warm softness of wool. A feeling of helplessness only added to the load on his mind. He looked up at a blue sky with drifting patches of white, and between them two frightening black eyes, pale-red gums, and white-and-yellow sharp fangs. With an ear-splitting scream, he felt himself straining. A continuing feeble groan and a howl made him aware of the severity of his accident. His body writhed once and subsided . . . his lips were quivering. ' _Dangerous enemy . . . Keri Palla_ * _. . ._ thank god! My balls are still intact . . .'
The restless rumbling of the sea in the monsoon rains . . . the sound of laundry beaten on the rocks . . . squirrel babies chattering _titter-titt_ . . . as he heard all of this, the lane seemed silent, without a sound. After the rainfall at noon, finally the yellow sun began to shine . . . but still duskily, and the clouds had not dispersed. He felt damp earth clinging to the soles of his feet. The yellow sunshine, falling right at the centre of the open space bordered by the palm-leaf fence between a balsam tree and a guava tree, and reaching down to the grassy floor, lay there like a yellow crack . . . the blackish-green grass seemed to be growing _viru-viru,_ covering the entire path. He felt the cool breeze spreading very gently over his body. Suddenly a gust of wind came up, then it died down just as suddenly. Sap from the balsam tree was dripping down on his body, on the crown of his head. With the crown of his head numbed by the falling sap, his body trembling, his penis erect, his heart pounding violently, he cried out spontaneously:
> _Yellow sunshine's beating down_
> _A mango seed's shaking_
> _Machan's getting a hard-on_
'If they slap at me, I'll slip away and come to you. If they swipe at me and get me, take off your wedding chain and keep it off . . .' the male mosquito said to his wife . . . He cried out, overcome with the same fright he felt when he thought of the night his grandma died, the one who'd taught him how to kill a mosquito. But he quickly came to his senses. The door to the bathroom in Ragini's house standing open . . . the window of the goldsmith's house locked . . . on the distant, paved road some people walking alone . . . he was grateful that these various things in the narrow lane lying deserted nearby gave him some relief from the fear that had settled on him. Then he walked on, concentrating on the black-and-blue iron deposits in the gutters, made by the running rainwater.
'Hey . . . Sabesan? . . . could you come here for a minute . . .'
Because of the tenderness in the voice or maybe because he recognized its owner, he directed his attention to the scene with only minimal alarm and no deep inner torment. Vasanthi Auntie is washing clothes, squatting by the low wall of the well. He sees Sivaram and Sriri snickering and talking about Vasanthi Auntie. Vasanthi Auntie is a beauty, for sure.
' _Aiyacci_ . . . move the water from the banana tree to the coconut tree, won't you . . .'
Wanting to establish his manhood . . . to show the strength of his arms without actually showing off, he reached out a hand, naturally, with a farmer's keen eye, and changed the direction of the water channelling into the earth. He watched Vasanthi Auntie, unconcernedly washing clothes, parting her lips and singing: 'Who's got a hard-on? . . .'
As if there were a round-up, the cheroot-shop man hid behind his thatched fence, with lime steaming in the background, lifted both hands above his head, waved them about, and shouted. His heart numbed with alarm, he rolled off the top of the henhouse into a mud pit. There was no way he could escape either the mud or the round-up, so he floundered, distressed. His mother called out from inside the clouds or from behind the sooty kitchen chimney like a disembodied voice . . . 'Hey . . . take your book bag like you're going to school . . .' He ran towards the street from a place almost as familiar, but not quite, as the courtyard of his house. However, the street that came to meet him lay before him not like his familiar gravel road but like a path of sand full of palm trees . . . bushes . . . grass . . . There were a lot of people in uniform standing around with guns. He saw himself standing there too, in uniform, with a gun. Even when no one was looking, it made him scared, wandering around these puzzling, unfamiliar paths. Suddenly, a new path appeared before him. He decided to take the rounds out of his book bag and put them down. Suddenly, Mutthappa spoke hoarsely with a stiffened face, in a tense voice that didn't belong to him, as if giving a command . . . 'You shouldn't open your book bag in front of the checkpoint. They say someone pulled out a bag of betel leaves from his hip pocket when he was standing in front of a checkpoint. They thought he might be pulling a gun on them, and they shot him.' He decided to go past the sentry before he put down the rounds . . . Behind him, just like an elephant trumpeting or a plane roaring, there was this very frightening sound of mingled growling and blaring . . . two rounds in succession . . . like in English movies, two sharp edges stretching out in front . . . glitter . . . glittering . . . glimmering . . . A jeep came speeding by . . . he gave it a close look. 
The maths teacher sat in the front seat, wearing a _veshti_ . . . jumping . . . jumping . . . he shouted, 'That's him . . . that's him . . .' His legs grew unsteady, the roads intermittent. The jeep appeared . . . and suddenly disappeared . . . disappeared . . . appeared . . . But he could still hear that scary growling . . . His throat wouldn't stop burning . . . the class monitor and the maths teacher were crouched in the middle of the thorn bushes. Suddenly, in front of him appeared a tower rising so high it seemed to bang against the sky. Its walls swallowed him up and spit him inside. He fell on his side on a grassy floor. He felt water pouring down roughly on his back. With a betel leaf in his mouth, the Principal is watering the garden. He felt a gentle coolness spread over his body and he looked up in alarm. His whole body was wet. The water vessel placed near his head was lying on its side. He lifted himself up and looked at the table. Like a buffalo standing calmly up to his neck in a pond with only his head out of the water, the book bag was lying calmly on the table. He figured that first thing in the morning he should dig a hole behind the kitchen and bury the rounds. All over town the dogs kept barking. At the thought that they might do a round-up in the morning, he felt a cold tremor spread throughout his body.
_Translated by Rebecca Whittington_
V. Gowribalan, ' _Irumbu Paravaigal_ ' (2003), in _Oppanai Nizhal_ (first edition 2003), Parisal, Chennai, 2010, pp. 56–63
## Encounter
#### _Ilaiya Abdullah_
you knelt down
on a night of seeping rain
and cried, Amma,
at becoming a refugee.
on the morning after
and at sundown still
you were waiting, Amma,
with dampened eyes
your only comfort
was a palm-leaf hut
you were deprived even of that
camp life . . .
you saw even the weaver bird
and the crow live lovingly
by that makeshift camp
you began the second struggle
between your laboured breathing
and the salty sea breeze
straight through the weft of your thoughts
with hurting feet
I am able to keep walking, Amma
for you . . .
you looked at that moon
and smiled
will it still be the same moon
above the smoky clouds
to the north?
life is grown tangled.
in order to live without cursing your fate
you are driven
to love
Amma, for eight years
the life that ran on
drawing and effacing
furrows in the sand
and disappeared
oh Amma, what is it that you need?
lift my chin and tell me
I need that.
we too need
an ancestral land
we need to be blessed with never leaving.
_Translated by Rebecca Whittington_
Ilaiya Abdullah, ' _Ethirkollal_ ', in _Pinam Seyyum Desam_ , Uyirmai, Chennai, 2004
(The backdrop for the poem is the eviction of Tamil Muslims from the Northern Province by the LTTE in 1990.)
## Night
#### _S. Vinodhine_
the sea draws down the sun
obscuring the earth
the spreading blackness
gathers even in my room
in the chink of sky
that shows through the window
there is no moon
just a star or two glittering.
I long to feel this darkness
with my eyes and fingers
a time of neither
touching nor seeing
a night that answers none of my questions.
the footfalls of death-gods
parading by
fade into nothing
the song of a flute someone somewhere is playing
comes floating
and soaks my soul.
songlike
the night is speaking with me.
_Translated by Rebecca Whittington_
S. Vinodhine, ' _Iravu_ ' (2004), in _Mugamoodi Seibaval_ , Kalachuvadu, Nagerkovil, 2007, p. 28
## Midday
#### _S. Vinodhine_
the sun-drenched street lay desolate
everywhere, in everything, its intolerable heat
hidden among the noises of machines
the ear-splitting
sound of snatching souls
the first day
and several days before that
they could not catch him
today they put an end to his life
he said that on the very first day
the will to live burned in his heart
and he stopped drinking.
in the floating heat of the deserted street
with bullets
piercing the eyes tearing the face
his desire came to an end
midday.
_Translated by Rebecca Whittington_
S. Vinodhine, ' _Nedum Pagal_ ' (2006), in _Mugamoodi Seibaval_ , Kalachuvadu, Nagerkovil, 2007, p. 39
## My Songs
#### _S. Vinodhine_
I won't finish my songs
today
not even tomorrow
when will I finish them?
all my unwritten songs
are in the hands of that little girl
she says she won't give them to me just anytime I ask.
she says to take them
when she's not playing with them.
when I try
while she's asleep,
within the space of a word,
she wakes up and starts to struggle
defeated, my soul hides itself
I won't finish my songs
today
not even tomorrow
for unbeknownst to anyone
they are in the safekeeping
of that little girl.
_Translated by Rebecca Whittington_
S. Vinodhine, ' _Enadhu Paadalgalai Naan_ ' (2006), in _Mugamoodi Seibaval_ , Kalachuvadu, Nagerkovil, 2007, p. 76
## After Catastrophe
#### _Faheema Jahan_
a bird perched
on the stump
of a felled tree.
today it has
no flight
and no song.
before its eyes
a vast expanse
is stretched out, blazing in the sun.
is it cursing those men
or longing for its own nest?
_Translated by Rebecca Whittington_
Faheema Jahan, ' _Azhivin Pinnar_ ', in _Oru Katal Nirurril_ , Panikkudam, Chennai, 2007, p. 12
## Merciless Ones
#### _S. Chelian_
the sun was floating in the temple tank
they tied up the sun head-down
hit it with sticks
rolled and pushed it
into the tank
in the cloudless expanse of the sky
rain was falling from the leaves
the tank was overflowing with tears
the utterly fearless little ones
had no mercy at all
they had climbed onto the dead sun
and were swimming and playing
the sky had caught fire
and was burning.
_Translated by Rebecca Whittington_
S. Chelian, ' _Karunaiyum Illadavargal_ ', in _Kadalai Vittuppona Meen Kunjugal_ , Kaalam, Toronto, 2007, p. 16
## Those Who Killed Them
#### _S. Vinodhine_
in the hands of the night
the city lay hidden catlike
they were so fast asleep they didn't sense
the death in gunpowder
and the stench of their own blood
in the wind mingled with the scent of snakes
that night
did a star fall somewhere?
did a sparrow cry out?
did a barn-owl fly over that house?
they had no sense of it
even in dreams
that came in sleep
it must not have happened
the weapons of sorcerers
who blindfolded the night devoured them
tomorrow they will lie drying in the sun
before the next hunt.
_Translated by Rebecca Whittington_
S. Vinodhine, ' _Avargalai Konravargal_ ', in _Mugamoodi Seibaval_ , Kalachuvadu, Nagerkovil, 2007, p. 30
## Take the Child from Me
#### _Faheema Jahan_
I have taken the child of sorrow
you thrust on me
with the unbearable weight of betrayal
and lulled it to sleep on my shoulder
so the child will never wake up crying
I have raised around me
a deep silence
without words of consolation
on the stages I am called to
with all due respect
I am made to sit down on my seat
decorously
with the child on my shoulder
I deliver my prepared speeches
in a humble voice
and descend the stairs
a commended daughter of my city.
in the moment when I cross before you
—when the child slips from my shoulder and creeps
onto yours like a vine—
I lull it to sleep again on the other shoulder.
you have every right
to reach out and take from my arms
the crying child that keeps turning
to look at you
never once did your arms stretch out
nor would I want to give it into the arms of anyone else.
_Translated by Rebecca Whittington_
Faheema Jahan, ' _Enathu Kaimarri Yenthi Kol_ ' (2009), in _Abarathi_ , Vadali, Chennai, 2009, pp. 17–18
## Barrel-toothed Ghost
#### _T. Malar Chelvan_
my son strikes awake
the night sleeping silent
despite all our tested tricks
his voice alone cries out
to the edge of town.
the barrel-toothed ghost
sets the windowpanes of my house
trembling
but my son won't stop crying.
grandma tries to distract him
with grunts and grimaces
but he won't give up.
* * * * *
the night is cruel now
impossible to write in words
the ghost that's climbed onto my shoulder
is ready at any moment
to destroy my head
he's only an infant
what does he understand?
* * * * *
lie still my son!
lie still my son!
there goes a soldier, my son!
lie still my son!
lie still my son!
here comes old barrel-tooth, my son!
lie still my son! lie still!
in the wind* that comes creeping slowly
there goes her voice . . .
he was lying still.
_Translated by Rebecca Whittington_
T. Malar Chelvan, ' _Anji Maraikkal Pallan_ ' (2009), in _Uyir Nizhal_ , January–July 2009, Paris, p. 58
## Burning Nest
#### _Karunakaran_
the bird that flies up
out of the wound
takes along
its beautiful flower
its great fire
its sea
its space.
it has not even
the shadow of the thought
of returning home
to its nest.
along the way
it gives up even
its wings to the wind.
this journey beyond pain
fills the nest
with emptiness
in the bird's undampened heat
the nest burns, alone.
_Translated by Rebecca Whittington_
Karunakaran, ' _Thagikkum Koodu_ ' (2009), in _Pali Aadu_ , Vadali, Chennai, 2009, p. 30
## Black Dog
#### _Karunakaran_
a black dog
blacker than its shadow.
its shadow's silence
is stronger, bigger
even than its bark.
the shadow matches
the black dog's
anger, agitation.
does the shadow-dog have
a sense of scent
and memories of old directions?
the black dog is always leaning
on the shadow-dog.
in the shadow-dog's silence
is the black dog's
spirit.
_Translated by Rebecca Whittington_
Karunakaran, ' _Karuppu Nai_ ' (2009), in _Pali Aadu_ , Vadali, Chennai, 2009, p. 85
## The Warrior Who Could Not Part from His Shadow
#### _Karunakaran_
when he could not part from his shadow
the defeated war hero
felt abandoned
in unguarded territory.
the closed doors
shut him up in fear
and the open doors seemed
terribly dangerous.
knowing the night to be well guarded
he was startled when the very next moment
it turned into
deep trenches of terrors.
when he saw
the keys glow red-hot as they opened the doors
working themselves into the locks
the keyholes
looked at him and smiled.
that mocking smile that said
in any lock
in any key
there is always a way to open
sank into him
like a sense of guilt.
unable to unite
with the shadow from which he could not part
he severed his own head
sweating profusely
in haste
in fear.
_Translated by Rebecca Whittington_
Karunakaran, ' _Nizhalai Vilakka Mudiyaatha Por Veeran_ ' (2009), in _Pali Aadu_ , Vadali, Chennai, 2009, pp. 97–98
## Let's Move on Again, to Yet Another Place
#### _Deebachelvan_
the land that survived took you
away and stationed you somewhere.
the winds are gathering on the shore
where you left the sack you meant to take with you.
a war-pinched life
without courtyards to play in
without alleyways for wandering cycles
the ground has taken you away.
the gun that was thrust on you
is eating you up.
our older brother's tomb
was our only property
our brother's dream shattered,
his tomb disintegrated.
at this time none of us
have a house to live in.
like our older brother
and his dreams
we are wandering.
what are we to do with our fate
that goes dragging you
out from under the cover
of frightening nights
when we lost everything and,
worn out, went into hiding
how will you make him feel the heat,
the enemy who sends us on the run?
at an age when you knew
and could understand nothing
you were given a war
the gun given into your hand
is ripening your raw heart
the remaining ground gives itself up to the enemy.
why am I approached
by this kind of a poem
and this terrible night?
in the end after all
my words fall flat.
am I now supposed to write an ode
on the children's battleground?
heading into the midst of artillery shells
should you be shivering?
who dragged you away?
they were like an older brother to you.
your older brother loved you
just as much as he loved our country.
the children are hidden by the guns
mother says.
at this time we have no city either
we have no life.
we who have nothing
are ourselves absent.
still we need you
to eat with us the little bit of half-cooked rice
and boiled lentils.
come quickly
let's move on again, to yet another place
(Dedicated to my younger sister, Vengani, who was taken away by the LTTE towards the end of the war, for battle. She, however, survived.)
_Translated by Rebecca Whittington_
Deebachelvan, ' _Nilam Peyarnthalaiya Vandhu Vidu_ ' (April–May 2009), in _Aatkaltra Nagarathai Thinra Mirugam_ , Uyirmai, Chennai, 2009, pp. 91–92
## A Boy's Father Dies
#### _Tha. Agilan_
_A mother was howling and weeping._
_'Please save my son. I need him. Please save his life.'_
_The disciples were all waiting, proud smirks hovering on their faces in_
_anticipation of becoming witnesses to the impending miracle._
_Buddha replied, quietly, 'Lady, bring me a fistful of mustard seeds from_
_a house that has never encountered death.'_
_Buddha's smile remained unchanged._
_She ran through the streets. She ran hard, to save her child's life._
_But death won in the end._
_Death was diffused everywhere, like air._
_Not even one fistful of mustard seeds in this earth which hasn't_
_felt the scent of death._
***
Everyone cherishes the wish to conquer death. Death accompanies us through our entire life, like an athlete on a sports track. It follows us right till the end, when it runs ahead of us.
I want to share certain memories left by the footprints of death, which I encountered along my way. I watched my father dying, sitting right next to him. While the smell of death was lingering in his face, the last words he spoke were meant for me.
I was seven years old. I had had no prior information or experience of death before that day. All I knew about death was from watching funeral processions and my fear of the Chinese firecrackers used in those processions. 'Don't show your hands to the dead body. Your hands will rot'—I was scared of my sister's threats, too. To prevent my hands from rotting, I always, carefully, kept them behind my back whenever there was a funeral procession. Once, when I did show my hands, I also managed to create a ruckus, howling and crying, afraid that they might actually rot away.
Now, my father is dead. Everybody worried when they knew that a snake had bitten him. They circled around him. I didn't understand what was happening. I was stuck amidst the people surrounding him. They all wanted to save my father's life. They lifted him and carried him to the road. They laid him down at the Vairavar temple at the street corner, thinking He would save him.
In those days the Indian Army was camped at the corner of our street. Not just in our street, they were camped at several places. Initially, it was funny. It was funny to watch the Indian Army with their beards and turbans, their strange language and their guns, with long knives. That was the first time I had ever seen guns, and only they had them. I never would have believed that they would endanger our lives. Every day they marched up and down in formation, twice in our lane. That was all. My sister used to scare me, saying that they would take me away if I stood outside when they marched past. But I ignored my sister and watched them marching by. Some of them would call out to me, and I would hesitantly smile. I would be lying if I said I was not scared of the Gorkhas. Their beards and turbans would naturally incite fear in anybody. Despite that, I did watch them in fascination.
My sister used to narrate stories about the Gorkhas, tales of their bravery and the chapattis they ate. Those tales just made me fear them more, and hate that mysterious, unknown item of food called chapattis. That's how powerful my sister's tales were. But one day, despite all these things, I was lifted up by an Indian Army man, with a turban, at a totally unexpected moment. Ironically, I suppose, that event helped me to realize that I could actually look quite charming, even though I am dark—charming enough that this Indian Army man wanted to pick me up. But when he did, I started screaming so loud that all the mothers ran out, and he put me down. I blabbered hard, as if I had just escaped from a crocodile's mouth. He tried to pacify me. Afraid that he might lift me up again if I stopped crying, I screamed more. He suddenly took out a yellow balloon from his bag and gave it to me. I quickly grabbed it and gradually lessened the intensity of my howling. But I continued to cry till he left. He walked away, smiling. After that, I did not trust anybody's threats. After all, they give balloons. How could they harm anybody?
But just for a few more days. From marching down the lane, they started to move into the fields. They trampled our crops with their heavy boots. Our uncle, who used to curse us if we walked in the fields with sandals on our feet, stood mutely in silence, watching them. Then they started to cut down the fences along their way. Uncle used to fence his fields to keep the cattle out. They cut them down; but new fences kept springing up, each day at a new place. My uncle finally gave up fencing once and for all, exhausted from the effort of fencing over and over again. That was when I began to see the cruelty in the faces of the Gorkhas, matching with the tales of my sister. I stopped believing they carried balloons any more.
Suddenly, one day they brought their hatchets and destroyed all the fences in all the fields. They cut down entire tree branches, leaving the trees totally bald, every one of them. They installed lights in the bigger trees, painted them white. They said that this is how the trees should always be, and any leaf or branch sprouting out should be cut immediately. In total, they were not like before at all. Actually, it kind of helped me in a way. My mother could not find sticks from a fencerow to punish me with. But my aunt was more worried: there would not be a single stick for her to lean on during the next rains.
One time they stopped a cart carrying thatches for the house and dumped them all out on the street. We were asked to carry them, one by one, from the street to the house. Several restrictions of this kind continued. They started to decide everything—when to light the lamp at home, when to put it out; when people should go out and get back. One day, a jeep roared into the lane and opened fire at random. My mother and I hid under the table, while she prayed to the Ammalachi goddess for our lives, over and over again. Father ventured to look out from the veranda. The next morning I heard father and uncle talking about how eight people were shot dead at the Iranai Madu junction. After that incident, they started laying barbed wire barricades in the streets at six in the evening. Everybody had to be back in their houses before six. No one should be found on the streets after six. That was the first time that I had to light candles for St Anthony on my birthday at four o'clock. I had always done it at six-thirty, for my previous birthdays. That changed too, after the arrival of the Indian Army, and the barbed wire. I asked my mother why they would not allow me to light candles, even on my birthday. As usual, she told me to shut up and pulled me along. Nobody could be out in the streets except military vehicles.
Now, it was well after six-thirty. The Indian Army had already erected those barbed barricades. The only way to take my father to the hospital would be to remove them. Several people were trying to negotiate with the Army, to allow my father to get to the hospital. I could only hear their voices and listen to them begging and crying. The voice that cried the most must have been my elder uncle's. I could only see people's legs. 'Sir, Murthy sir, it's a snake bite, sir,' somebody cried. But the Indian Army refused, vehemently.
I was getting crushed, stuck among people's legs. I managed to reach my father, struggling along the way, tangled up in legs. Father was laid down at the Vairavar temple. I touched his moustache and sought his attention. He hugged me back and cuddled me. It was heavier than usual, his cuddling. Why is my father crying? 'Father is going to God. You must study earnestly,' said my father, his voice trembling. I could not understand the finality and permanence of those words then, but those were my father's last words, and they were spoken to me. I didn't realize, then, that it was that solitary kiss, and those words, that would substitute for every other memory of my father, extending their reach through the rest of my entire life. To tell you the truth, I was totally unaware that death would not return people. The Indian Army refused to permit my father to go to the hospital. Father died. Vairavar gave up too.
Father was brought and laid out at home. I didn't know how many people were there, hugging me and crying. I did feel a bit uncomfortable, but I felt no grief then. I didn't even know grief. I went close to my father and looked at him. For a moment, I felt as if his eyes opened and shut, just once. I definitely saw those pale eyes, and the unbearable pain in them of leaving us behind. I thought about telling someone about my father's eyes, but no one seemed to bother. There were lots of people, too. I wandered around, constantly bumping into people's knees. Finally, I reached my elder uncle and told him about what I saw. He bawled. He hugged me tight and cried out loud, standing in the hallway between the verandah and the kitchen. At that moment, I did feel a little grief in me. I ran, tearing myself away from my uncle. As far as I was concerned, I had seen my father open his eyes. I didn't try to tell anyone else about it after that. Now I regret that. These days I tend to think of it as an illusion. But the little Agilan sitting inside me refuses to consider it an illusion. This big Agilan tries to believe it was just an illusion, but how could the memories of an illusion get stuck in one's heart for twenty years?
People wailing tore my ears off. Then, it all happened quickly. Since father was a dead body then, or this dead body was called my father, I was cautious enough to keep my hands behind my back. I wondered, though, if my hands really would still rot since this was my own father. I remember well when several people handed me printed homage cards for my father. I sat on top of a sack and read them aloud with the tone of a public announcement. I was trying to imitate the voice of those people who would announce a death to the public, with loudspeakers tied to a car. I was also carrying the betel plate around for whoever asked for it. It still did not register with me then that there was no longer any such person as father. My elder sister took me away and fed me. I remember the food, bread and curry, and my sister feeding me and my little brother, sitting at our aunt's place. Someone came and asked my sister not to let us go inside, since mother's grief would only deepen if she saw us. My sister nodded in agreement.
I never cried, right? But sitting on someone's shoulders and carrying the funeral pot around my father's body, I suddenly realized that there was something extremely dangerous happening. I began crying. At last, when I saw the dancing, yellowish fire catching on the curly hair on my father's forehead, as he was lying in his silk dhoti and shawl on the pyre, and when I realized that he is never going to come back, I screamed out loud. I recognized at that moment the loss of such a huge support as that of a father. People hugged me, pacified me, gave me soda. Those cries are still lying inside me, dormant. As I write this my lips are trembling, and my heart is engulfed in a subtle tremor.
Now, eighteen years later, when I sat down to write this piece, I thought I was going to record my thoughts about death. But how could I ignore the death of my father? My father was a hazy image to me, not steeped in my memory. But his death did have a deep impact on me. Even more than me, it affected my little brother a great deal. Even worse was the impact it had on my little sister, who was just born then, and who never knew our father's face. While the free sympathy I receive for being a fatherless son accompanies me all along my way and causes me pain, it fosters his memory in me as well.
_Translated by D. Senthil Babu_
Tha. Agilan, ' _Oru Paiyanin Appa Irandu Ponar_ ', in _Maranathin Vaasanai_ , E. Pathippagam, Chennai, 2009, pp. 21–27
(This essay recollects events that took place during the presence of the Indian Peace Keeping Force in Sri Lanka, from 1987 to 1990.)
## A Refugee's Motherland
#### _Ki. Pi. Aravinthan_
**18-05-2003, Sunday**
Early morning telephone calls for people in the Tamil Diaspora in Europe or anywhere else in the world always meant picking up the receiver with a quivering anxiety. Mostly these calls at dawn came from India or Sri Lanka, often carrying unhappy news. It was through one such call waking me that I received word of my mother's death. Was it expected? It is difficult to say. I had just spoken to her for several long hours on Saturday, that is yesterday, a week after she was hospitalized due to a cardiac arrest. She was recovering when I hung up. She had been comforting me, saying that she was to go home the day after next, that she was expecting me and asking me to be sure to come and visit her. Actually, I had already started preparing for the visit from the moment I heard about her emergency admission to the hospital. I had requested the necessary certificate from her hospital's administration stating that my mother was being treated in their Intensive Care Unit. They had informed my mother about that, which made her certain about my visit, making her happy. I wondered if that happiness might have shown her the way to her death. She was overwhelmed by a desire to see me. She was always very affectionate towards me, not just because I was her eldest son, but also because I was the lost sheep in the flock. It had been thirteen years since I separated myself from her. Actually, it's been thirty years since I moved away from the life that she had imagined for me. After 1972, much of my life was spent in prisons and in hiding. She had had to wait for me at the prison gates of Jaffna and Colombo. I knew that this gave her unbearable pain. Even so, it soothes me to think that I had given her happiness on at least three different occasions. First, when I announced my willingness to get married. Worried as she was about my wayward ways, this must have been a great relief to her. 
Second, when in 1993, my poetry collection, _Mugamkol_ , received an award and I arranged for all of the award money to go to her. Her worry that she had never received financial help from her elder son was somewhat assuaged by that. And third, when she heard her son's voice on one of her favourite international radio stations. My mother had this habit of listening to the radio, propping it right next to her pillow at night. This was actually one of the reasons that I agreed to participate in a serial programme on BBC-Tamil, when they approached me in 2001, despite my being wary for several other reasons. In truth, this gave her a great deal of happiness and satisfaction. So I did manage to do those things for her, at least. After I started living here, and she in Triconamalai due to my sister's posting there, there were fewer letters and more conversations over the telephone. During such conversations, she repeatedly expressed her wish to spend her last days with me. Even during our last telephone conversation, yesterday, from the hospital, she asked me to stay with her for at least a month, and to take her from Triconamalai to Jaffna.
There is this warning included in the blue document issued to refugees: 'A person who has once taken refuge in another country cannot return until there is lasting peace at home.' But in France, they allow one visit, in dire circumstances. I had planned on using this opportunity to visit my mother, so I had already submitted my application, with all the required documents. They said it would take a week to receive a response to my application, which did not inspire confidence and left me sad and disappointed. Mr and Mrs Durai and Mr A. Murugaiyan had assisted me in submitting those documents. But now this Sunday morning message changed everything. I was shattered and my voice failed me. My brothers tried their best to pacify me over the telephone. But who dealt with their sadness? Sumathi managed to field all the concerned calls from friends. Had I betrayed my mother? Or had she deceived me? All these questions and more churned inside me, and my tears kept gushing out.
**19-05-2003, Monday**
I could not sleep at all that Sunday night, and I was the first person at the District Administration office on Monday morning. The moment the doors opened at nine, I met the immigration officer to whom I had earlier submitted the documents and, with tears in my eyes, I asked him to let me know his decision as soon as possible. I also submitted another paper informing them of my mother's death on the previous day. My hands were a bit shaky, probably with the sense of urgency that was in my heart. The officer was not harsh in his reply. He told me to go ahead and prepare for the trip, and that my permission would be granted on Wednesday. It now became certain that I could go home.
On 22-05-2003, Thursday, at nine-thirty in the morning, Sri Lankan time, I landed at Colombo airport, accompanied by my brother, who also lived in France. He travelled like me, since both of us were living there as refugees. They had given me my travel permit on Wednesday morning. My refugee card, the token issued to refugees, and other documents were taken by the authorities as guarantee, only after which did they issue my travel permit. According to that permit, my stay in Sri Lanka was limited to just fifteen days. This not only added to my sadness, but I also felt that I was being unfairly treated; my brother had been granted a month. Obviously, the rules vary with departments. I could not sleep through the entire night's flight. Mother's image persisted in my vision. My mother had served as a nurse until she took voluntary retirement. In addition to her burdens at her job, she took on additional service responsibilities, as I had urged her to do. Due to my activities, she also faced problems at work. As I think about all that, guilt washes through my heart. Further preparations to fly to Jaffna from Colombo on the same afternoon had already been finalized. The immigration officials struggled with our French travel permits, in the absence of regular passports. Finally, after inquiries and visits by higher officials, my brother and I were the last to get through the immigration formalities. Then it became certain that I would see my mother's face, in person. My childhood friend Jayamurugan was waiting outside, ready with the travel arrangements. We were to go straight to the domestic airport at Rathinamala. It was still difficult to believe that I was standing in Colombo. We started off to Rathinamala from Kattunayaka through Colombo city. As we were entering Colombo, I realized I was feeling giddy. The city started looking muddled, and in retreat. I began to wonder if I was going to die even before I got to see my mother. 
I told my brother to let my friend take care of me, and that he should proceed to Jaffna as planned. I was gradually losing control of myself. I slid over onto my brother, sitting next to me, while the driver drove the car around in search of a hospital.
**23-05-2003, Friday**
When the domestic flight touched down, running on the Palali runway, I shivered inside. My friend was next to me, but I did not let him know about my shivers. I closed my eyes. I am going to see my mother, in my hometown, in my motherland. In the midst of this swelling turmoil inside me, the doors of my subconscious burst open. My brain seemed to melt and ooze out. Aren't all these things real, right in front of me, sprawled out and soaked in the hot sun, as sand and stone, as plants and trees, as bushes and palms, keeping me yearning and active? I look around in awe, gazing at my mother, who nourished my life with her breast milk. I touch the earth and smear it on my eyelids.
The van started towards Jaffna town, rattling and leaving trails of dusty red sand behind. Looking out, I tried to identify places, but in vain. These were the places, the little villages, the tiny lanes on the route of bus number 764 from Jaffna town, where I used to wander. But I could not identify a thing till I reached my village. I could not even identify the north Punnalai Kattuvan junction leading to Kuppilan village, a place that I enjoyed and had grown up with ever since my school days. It was simply ruins and destruction, all around. The van hurried us towards Urumbirai junction. My eyes were searching for the statue of Sivakumaran. Wasn't he like the source of a turbulent river? Wasn't he the one who planted fire in the tree-hole in the midst of the forest?* But my eyes couldn't locate his statue. Memories of the times I spent with him between 1972 and 5 June 1974 rose up within me: Sivakumaran's mother had been the first to come forward, smothering my head with her hands, asking about my health. She fed rice to me and to Sivakumaran. How many mothers had fed us like that, with compassion? They were the first to realize their children's sense of justice. I saluted in the direction of their house. Eventually, the van stopped at the bus terminus near the farm. I got off. It was a place that had sprouted anew. Bushes sprawled in place of the Jaffna Fort. Hailing a taxi, I gave the driver my address. The taxi rolled along, past Veerasingam Hall behind the Jaffna Library. There stood the fort built by the Dutch, holding the Jaffna prison in its belly. The Jaffna police station that once stood at its entrance like a secure, fortified gate had disappeared, all traces of it perished into an empty, barren, deserted field. The breeze from the Pannaikkadal blew in through that space. The wretched final-day events of the Fourth International Tamil Studies Conference had been staged right here, in front of Veerasingam Hall. 
Pillars once erected in remembrance of those wretched memories lay there now, spread all over, in ruins. I was witness to those wretched final-day events, which unfolded on 10 January 1974. I was serving as a volunteer. Sivakumaran was in charge of our volunteer group. Sivakumaran and I were standing next to the dais. Our group had prevailed upon the conference coordinator to stage the meeting out in the open. Enraged by what had happened that day, Sivakumaran and I pledged to take revenge that same night, while we were clearing up the premises. My mother waited up for me at home, which I reached well after midnight. Several people from my street had gathered together. A teacher who lived close to our street had lost his life in the incident. It still feels like it's just happening all over again today, when I remember the relief shining on my mother's face when she saw me, and the compassionate inquiries from my neighbours. In that incident I somehow lost a wrist watch that my mother had bought for me, but the ring she had given me was safe. Mother was not worried about the lost watch. But she was glad about the ring. She had given me that blue-stone ring when I was released from prison in 1972, to banish my bad characteristics and to bring me well-being. One of her colleagues who believed in such things must have given her this idea. I am not sure if the ring helped in allaying my bad characteristics, but it was definitely useful for me later. Once, when we didn't have enough money to buy a revolver, my mother's ring and my friend Padmanaba's ring both had to be sold. When I returned home that dusky evening, my mother was startled not to see the ring. I managed to cook up some excuse to tell her. Now, I can't stop my tears; the springs of the well don't seem to dry up at all! Memories engulf me of the first day my mother came to visit me in this fort's prison, with my two-year-old youngest sister, after I was arrested on 18 May 1972.
My mother's pain, due to disappointment and shame, showed clearly on her face. I was a political prisoner then. I used to wonder why my mother was broken, instead of being proud. But when I went home after I was released six months later, I was struck by the practical reality of the social attitudes towards prisons and the police. I could relate to the ways in which my mother had been hurt. But there was no change at all in the way she showered love and compassion on me. The fact that she called me Manoharan, the eldest of the seven children she gave birth to, must certainly be because of a dream that she cherished within her. None of my relatives knew that my real name was Francis. They still call me Manoharan. My mother must have been influenced by the movie _Manohara,_ released in 1953. But did I live up to her expectations of a dream son? Or did I become a son who betrayed his mother and the motherland? Time will tell. When I was arrested for the second time in 1975, I was made out to be a dangerous terrorist. On the second day of my arrest, I was subjected to the inquiries of the Jaffna Crime Intelligence department. Two weeks later, I was sent to Velikkada for the Colombo interrogation. I was severely assaulted, stripped naked, and made to sit on a chair. I responded to the questions they shot at me. Inspector Padmanadan and his deputies—Shanmuganadan, Karunanidhi and Rodriguez, among others—stood surrounding me, angry and furious. A typist was recording my testimony. I could see my mother, worn out, running along the side of the street towards the office portion of the police station. Yes, I was looking at her. I was also aware that pretty soon, they would bring my mother in here. I knew the pain she would go through if she saw me in this state. I wished they would hide me. But it was their intention to make my mother see me that way. The entrance wasn't that far. Mother reached the entrance. I turned my face away. She was not permitted to talk to me. 
What must have been her state of mind? Later that evening, they locked me in police custody and gave me a parcel that my mother had brought. Food that mother had packed for me was wrapped in a newspaper published on the day of my arrest. I took a moment to acknowledge my mother's thoughtfulness. There have been so many other times when she acted thoughtfully like this, on her own. Memories of my mother keep gathering in me. How is it possible to pour them all out? The taxi was hurrying past Subramanya Park and the court complex, towards home. It was in this court complex that Sub-Inspector Chadrasekara picked me out in an identification parade. Near the end of 1977, arrest warrants were issued against me for several cases, including this one, but I kept coming home. I attended the trial of the first case, and bail was granted until the next hearing, but only on surety of land or cash. We had neither. I was sent back to jail. It took ten days for my mother to mobilize and pay the money. She sold all the jewels in the house, including her wedding chain. On the way back home, she said, 'Son, there are still two more warrants. We have no way to handle them. You better discuss it with your comrades, or go underground and continue your activities, as before.' After that I never attended any trial for any case. Those words still guide me today, I guess. I still avoid being in the light. I keep assuming new names. I still like being underground, being the dark horse.
When I finally arrived and could really look at her, I was completely taken over by the emotions of seeing my mother. I stood next to her head. All our relatives were sitting around her. Candles were burning. The room was filled with wailings. My brothers and my father stood next to me, holding each other. I could not cry. 'Cry out loud, my son,' said my father. The bright face of my mother was covered with a thin towel. Pushing it aside, I caressed her. What should I cry about and to whom? Or would she like me crying at all?
**24-05-2003, Saturday**
Today is my mother's birthday, the beginning of her 75th year. She would leave the house that afternoon. Kith and kin had gathered to bid her farewell. We did not consciously plan to cremate my mother on her birthday. When she died in the Triconamalai hospital on 18 May, my father had her brought to our Jaffna home that same day. My father was determined to keep my mother's body at home until all her children arrived. My brother living in Norway had left on Tuesday and the other brother living in Germany had left on Wednesday. Though my other brother and I had planned to reach Jaffna on Thursday, and he arrived as planned, I couldn't make it until Friday. This led to the decision to cremate her on Saturday. My mother had asked my father and her youngest daughter-in-law, Devadana, to take her to Jaffna in case something happened to her. My wife Sumathi's brother, Dananjayan, was astonished when he went to organize clothes for my mother after she died. Her suitcase was packed, all ready to go to Jaffna. She had made her preparations to leave for Jaffna even before she fell ill. She had even written to the tenants at our Jaffna house three months earlier, requesting them to be prepared to hand over the house to her in May. She had closed her bank account in Triconamalai and handed over the money to Kesavan, another of Sumathi's brothers. Such precautionary measures!
The time arrived. The final rites began. My father was the first to garland mother, followed by sons and relatives. Neighbours, people of the village my mother loved, walked around, paying their last respects. Whimpers could be heard. I started the farewell speech. 'Dear All! My mother has lived, thanks to all your love. You all are well aware of how our mother brought us up. We and our mother shared mutual affection. But the fact that none of us, her seven children, could be with her during her last days, will continue to haunt us all. My mother stayed committed to love, compassion and service to the cause of others. This is what she taught us. We will continue to live, trying to deserve all your love. Except for her certain grief that her children were not next to her, our mother died happily. All has gone well.
'Let us bid farewell to mother. Good bye, mother!'
Each of us caressed her for the last time. The cart bearing mother began to move. My mother continues to be around us. After the funeral, I was in the midst of relatives, neighbours and friends. Only then did I realize how precious the moment was. It was an opportunity to witness a cross section of my society. I took this as my mother's gift, even in her death. I began the second leg of my journey, in search of my motherland.
_Translated by D. Senthil Babu_
Ki. Pi. Aravinthan, ' _Oru Agathiyin Thaayum Thayagamum_ ' (2004), in _Iruppum Veruppum_ , Salaram, Chennai, 2009, pp. 100–109
## Immense Land: An Introduction to Its Soil Strata
#### _Pa. Ahilan_
beneath the big city
buried in weeds and myths
water and homes
the dayless, nightless, tireless streets
and the branches spreading thickly
the surface abuzz with hurrying people
and speeding vehicles
if you descend
striking steps
even beneath this
if you keep descending
leaving
the surface of a storm of ashes still warm
even beneath
the close-laying sound-strata of crying and screaming
even beneath
the liquid bed of unstaunched blood
even beneath
the stratum of thought already thickened, dense, full of thorns
if you keep descending
even further down, leaving
the great expanse of silence untouched even by the roots of trees
an ancient woman
an ascetic on a throne of skin, scattering the times.
_Translated by Rebecca Whittington_
Written in 2010
Pa. Ahilan, ' _Peru Nilam—Mannadukkugal Parriya Arimugam_ ' (2010), in _Saramakavigal,_ Peru, Jaffna, 2011, p. 45
## Story of an Unwritten Letter . . .
#### _Na. Sathyabalan_
the flame is flickering in the lamp
not knowing
it is struggling, doubting
every moment it faces
distress flickering in the prayers of the one who lit the lamp
who tells the flame to endure the pain of living
the wick resumes its austerities, composedly absorbing
the oil, which is running out
the heart of the lampstand that bears all this
is throbbing
a gentle light diffuses and fills the room,
overflowing, thronging prayers struggle for breath
striking and bouncing off the walls, doors and windows.
not knowing how to write to the wind
to submit their prayers
the flame and the wick suffocate
the moments roll along and dissolve
_Translated by Rebecca Whittington_
Na. Sathyabalan, ' _Ezhudhappadaadha Madalonrin Kathai_ ' (2010), in <http://marupaathy.blogspot.in/2010/09/blog-post_2958.html>
## Little Brother
#### _S. Chelian_
Right in front of Miss Pirahaspati and all the thirty-seven kids in my class, Principal Rajagopal brandished his cane whip and whacked me six times on my butt. What was so special about the number six? Not my classmate Ramachandran, glaring evilly at me, not Miss Pirahaspati in her wrath, not the rest of the class, subdued as they were, not even me myself, standing there stiff, with my brain curdled, none of us ever really figured that one out. Maybe it was just Principal Rajagopal's lucky number. But in the history of our school, the Navalapitti Kathiresan Kumara Maha Vittiyalayam, nobody else, even today, can possibly have matched that record—six whacks on the butt.
Of course it's true that teachers cane students on their butts in order to 'correct' them. We hear all kinds of stories about students squirming and quivering, or collapsing in a heap. It is even believable that scars from some of those canings are still there today, on some people's butts. There are stories making the rounds to the effect that some of these brave souls have secretly received the noble title of 'Great Man of Valour' when they related these brave sagas to their wives. Perhaps our United Nations General Assembly could be petitioned to look into these violations of human rights, since they occurred in Third World schools. The unemployed need something to do, don't they? However, we surely can believe that our government is not prepared to permit the United Nations to do any research into the condition of its people's bottoms. Besides, those brothers might well think they'd rather die than look at such stuff. Still, we shouldn't just blurt out whatever comes into our mouths about how the United Nations is a hapless, tainted organization. At least until it comes to the point where it decides by majority vote whether or not it is okay to rape women or to rape men, we can believe with some certainty, or at least have a bit of faith, in it as a 'democratic organization' and a Protector God for the Earth and for all the people who live upon it.
That day I was accused of the crime of writing disrespectfully about Members of Parliament—Arulambalam, Rajan Selvanayagam, Thyagaraja, Minister Kumara Suriyar, and Mayor Thuraippa—in my handwritten magazine. My handwritten magazine was confiscated and I was sent for an interrogation in the Principal's office by my classroom teacher Miss Pirahaspati, through the dutiful services of a student, by the name of Ramachandran.
The Principal called me in for the interrogation during our lunch break. With the Vice Principal right there, he thumbed through the magazine, and said, 'He has written all this stuff about these respected ministers.' He seemed astonished. I have no idea what he really thought, but after a couple of minutes, he let me go. One possibility was that the image of my father's face might have come to him at just the right moment.
This, however, was intolerable, not only to Ramachandran but also to our classroom teacher Miss Pirahaspati. They marched right back to the Principal. What they talked about remains a great mystery, but as soon as lunch break ended, here came the Principal with his long cane. He imagined his cane was born right along with him, like Karnan's famous armour. Our class teacher Miss Pirahaspati came along too.
'Who wrote these essays? Who did the drawings? Who authored the poems? Whose handwriting is in this magazine?' The Principal asked many kinds of questions, but I had no difficulty in replying, since they all had the same answer.
'I did all of it, under different names,' said I, and the people who actually did draw the drawings, write the poems, and write the short stories all heaved a sigh of relief. When the interrogation was over, the Principal decided on caning as my punishment. In truth, though, that punishment had been decided upon even before the inquiry.
Before executing his decision, Principal Rajagopal asked me to face the wall. Was that so the other students could more easily see my butt? Was it because he didn't have the guts to watch my face while he was caning me? I do not know. Not a single teardrop, through all six whacks. But after that, my heart just refused to identify with that college* any longer. I had already been planning to go to Jaffna for my studies, so the next year I enrolled at Hindu College, again in the eighth grade.
One morning as I was going to college, bands of young men were blocking the paths of the students and turning them back. 'Today is a day of mourning. Boycott school,' they said. They didn't bother with me, though. This was a complete novelty to me. I had never seen anything like it. The college was deserted. A few students were huddled together, talking.
'What's going on?' I asked quietly.
Clearly agitated, they replied, 'Diraviyam is dead.' Everybody's face overflowed with grief, as though they had just lost a close relative.
'They say Diraviyam robbed the Copay Bank and as he was making his getaway, the police caught him. So he took cyanide and died,' explained one of them.
'So, who is this Diraviyam?'—though my heart was aching to ask, I was reluctant.
'Diraviyam needed the money to buy a gun,' said another.
'All the guys who were with him got away—he's the only one who got caught,' somebody else said.
'It was those people in Neerveli who caught him and turned him over to the police. They just didn't know who he was,' said somebody in anger.
Who is this guy? Why are these students so upset, and the teachers confined to their rooms? And the young men standing in the streets?
'Diraviyam was none other than Sivakumaran,' Ranjith whispered discreetly into my ear. 'He rose up as a militant—he believed that only armed revolution would bring freedom from ethnic oppression. He bombed a police officer. He had been living underground somewhere around here, and the police had been going crazy trying to find him.'
Sivakumaran's body was brought to Urumbara for cremation. Usually whenever somebody died in the town the funeral procession would pass by my house. A hundred or two hundred people might walk past. But for Sivakumaran's cremation more than two thousand people marched through the street in front of our house. And not just people from my village, either. Young men and women from villages all around Jaffna rallied together. The cremation ground lay just past the end of our land, where palm, guava and thorn trees mushroomed in utter freedom, completely at their will. I climbed to the top of a tall guava tree and watched the last rites of this hero born in the trenches of our motherland. This Sivakumaran showed us new pathways when he was alive. But even after he died he scripted new ways to live. According to Tamil tradition, women are not supposed to come to a cremation ground. They just come up to the fence. But here at Sivakumaran's funeral, hundreds of women gathered inside the cremation ground and wailed out their grief. Not only that, but a joint cry went up from all the people who wanted to see Sivakumaran's face one last time. Traditionally, once the coffin was closed, it was not to be reopened for anyone to look at the body again. But here, bowing to the wishes of the people, his body was raised high into the sky by several notables, including P.U. Navaratthinam, and shown so that everyone could see. When they saw his innocent, childlike face, everyone fell into an agonized rapture. Tears flowed from every eye. From a few people's eyes not one tear dropped, though: they stared deep into him, and determination grew in their hearts.
Finally, about ten feet from the foot of the guava tree I was sitting in, that hero's sacrificial body was fed to the fiery flames. I watched the glowing fire for a long time, from my perch in the guava tree. I did not feel like going back home. Eventually, though, I found my way to the house of a relative who shared our well.
'With my own hands, I gave water from our well to some young people coming back from paying their respects at the deceased person's house,' said my cousin Lali's husband. 'They swore to take revenge.'
I was glad to hear that.
Contrary to tradition, a memorial was built in that Hindu cremation ground. Pon Sivakumaran was the name that was etched on it. Every year, Sivakumaran's mother lit lamps and showered flowers upon it. Sometimes our household donated water and a grass-cutting spade.
One of those days over a thousand young people rallied and walked past our house, heading for the cremation ground. I was standing in our doorway, and I joined them and went to the cremation ground. They paid homage to Pon Sivakumaran at his memorial. I learnt that these young people were from the Tamil Youth Assembly, and that their leader was Santhathiyaar. I also learnt, from Santhathiyaar's speech, that after paying homage to Sivakumaran, they were going to head out, on foot, to attend a final campaign rally for the Kankesanturai by-elections at Mutruveli, shouting Father Selva's slogan: 'Tamil Ealam is now our collective destiny!'
I followed them in a trance. Following Santhathiyaar's orders, we marched two-by-two along the sides of the streets. When I walked past my house, no one came out to stop me. Led by Santhathiyaar and walking along the sides of the streets we posed no problems for the traffic, and people emerged from all of the homes we passed. When they saw us they gave us their heartfelt best wishes. From some of the homes people served us water to quench our thirst. As we passed Kondavil corner in Palaali Street, suddenly there were police jeeps blocking our way. I was standing just two feet behind Santhathiyaar, and I got a bit scared.
The District Assistant Chief of Police in Jaffna climbed out of one of the jeeps and questioned Santhathiyaar. He could not speak Tamil, and Santhathiyaar could not speak Sinhalese, so the police officer spoke in English. Santhathiyaar said that he did not know English. Another police officer was given the task of interpretation.
'It is against the law to take out a procession without a permit. Disperse immediately,' said the Assistant Chief of Police.
'This is not a procession. We are walking to the election rally because we have no money for bus fare.'
'This is against the law. You are creating a traffic problem for ordinary people.'
'We are not causing anybody any trouble in the streets. We're walking along the sides of the roads.' That was Santhathiyaar's reply.
After a few more minutes of talking, with Santhathiyaar not backing down on anything, the police jeeps turned around and drove off. Our crowd roared in joy, and we continued our march. When we came to the Tirunelveli Agricultural Association building, a car blocked our way. Out came a visibly angered Thalapathy Amirthalingam Anna.
'What is the meaning of this, Santhathiyaar? What did you promise the police officials, in my presence? How could you break your promise and lead this procession?'
'Anna, this is not a procession. We did not have enough money for the bus fare, so we are walking,' said Santhathiyaar, unperturbed.
'Okay, if that's the way it is, I'll send you all to the rally right now,' said Amirthalingam.
He raised his hand and brought to a halt all the cars, buses, and other vehicles driving down Palaali Street. At his request, they dropped whatever had brought them to Palaali Street in the first place, and all the cars, buses and other vehicles took us in and delivered us to the rally. In ten minutes, we were all there.
'Little brother, get in my car,' said Amirthalingam, patting me on the shoulder, and one of the other passengers helped me in.
We all participated in Father Selva's final campaign rally. The thundering voice of the rights of Tamil people rose to the heavens that day. Some fifty thousand people took part in that rally. It was ten o'clock at night when it finally wound down. To this day, I cannot recall how I made my way home.
'What have you been up to till this odd hour of the night?' was not a question that anybody in my family asked me. Only our dog jumped up as soon as he saw me and wagged his tail, in silence. Even he knew how to behave, late that night.
_Translated by D. Senthil Babu_
S. Chelian, ' _Chinnathambi_ ' (2010), in _Kaalam_ Journal, January–March, 2010
## Restless Sea . . . Sleepless Land . . . Endless Dream
#### _Karunakaran_
A life surrounded
within me a restless sea
before me a sleepless land
and so this unsubsiding anger
everywhere an unending dream . . .
in my eyes a fire rises
a river flooding
people turned into stone slabs on the roadside
for others to rest their loads . . .
in the street filled with people ripped and flung
today a god is born
the stroke of midnight is muffled by
the unsubsiding fire . . .
the stray cows and the compounds
overgrown with jungle
and the streets thick with darkness
in the taken towns there are more soldiers than people
if you want to go home, ask a soldier for the address
get permission from him
and find out from him about me
he must be finding out about me every day, the soldier
never about my tears
about the burning kindled within me
about my growing into a jungle
about the aching wounds on my body
about my suffering days without sleep
he'll never know.
about the type and the location of my excrement
about my having yawned, he has found out.
what's more
the confusion and fear taking hold of my legs
and the surveillance over my eyes and head
even if you can spot it in the soldier's eye
you too will be silently placed inside
a circle of surveillance and sent to me
even in a time without war
my life and days are hemmed in by investigations
a restless sea within me
a sleepless land an unsubsiding anger
everywhere an unending dream . . .
in my eyes a fire rises
a flood rises and runs out as blood
the people turned into stone slabs . . .
in the distance the sound of a bell
announcing the birth of a child
echoes on the holy crosses
_Translated by Rebecca Whittington_
Karunakaran, ' _Oyaa Kadal . . . Urangaa Nilam . . . Theeraa Kanavu_ ' (2011), in Karunakaran, _Oru Payaniyin Porkala Kurippugal_ , Karuppu Pirathigal, Chennai, 2012, pp. 90–91
## Yugapuranam: Myth of an Era
#### _Nilanthan_
Part I
it was the end of an era
the rain fell out of season
people fucked with abandon
the earth's youth exhausted,
the wives of the sages
had gone to the forest for penance*
false prophets had cropped up everywhere
and were roaming around in every street
selling tall tales.
it was a lie,
all that talk of a little boat
coming to carry seven sages
across the ocean of milk.
it was a waste,
all that time spent waiting
for wonders and marvels
a nation
promiscuous in war
called on its firstborn children
death was waiting like a creditor
on the steps of a bunker
the arms of strong men
had withered with guilt
the false prophets and the charlatans
had already surrendered
and the grateful people, oh,
they'd become cannon fodder
only those who think with their blood**
stood alone, unscarred
a beautiful heroic era
with its puzzling heroism
and its unparalleled sacrifice
vanished, sunk in the mud of the seashore.
Part II
a nation that did not value upright men
dogged the heels
of blind believers
only those who think with their blood
amassed imperial pleasures
not a single soothsayer
lived there.
in a nation that asked for nothing else
but victories in battle
there was a famine
even of coffins
there was no one
even to dig graves
death seemed
even more certain than life
whenever the cannons
were seized with hunger
the people
were not hungry
were not thirsty
had no pleasure
did no penance
there was no one to eat
the discarded fruits
those were cruel days
weapons were blunted
or bounced back
all those who thought with their blood
went off to the heaven of heroes
and oh, the people who gave up their firstborn children
became prisoners or refugees
on a day given up
even by loving people
the unparalleled hero
his unparalleled sacrifice
expired
a rare heroic era
with dreams frozen in its eyes
and garlands of fading sirissa
vanished, sunk in the mud of the seashore.
Part III
In Nandikadal lagoon
man from Vanni once more became a refugee
from among long-gone corpses
from among
rejected prayers
he came fleeing.
the ashes and tears
of those who disappeared,
the hopelessness and curses
of the people whose trust was broken
the last dreams
of those who were betrayed
clung in his eyes.
between the big sea and the little lagoon
the nation shrank to three tiny villages,
between victory and a hero's heaven
the future pushed on, uncomprehending
people fleeing with nowhere to go
stumbled
on their own corpses and prayers.
the murdered are the lucky ones
the traitor's badge is not for them
for the imprisoned
and the wounded who surrendered
ayyo
for the man who swallowed defeat
and lost his limbs
ayyo
for the man who cooked seeds and young rice plants
and the man who lit the cooking fire
ayyo
the garland of withered sirissa
hung in the bald palmyra trees
could not break free
the big sea
wailed
beating its chest
the arecanut bird
sang with blood throbbing in its voice
caught on touch-me-not thorns
the dream of the man from Vanni quivered
on the dull walls
of roofless houses
the heroic era is beaten flat
but the shores of the Nandi lagoon
do not give in to the stench of blood
and the reign of wildflowers
sends out new shoots.
Part IV
enemies capturing herds of cows
women crying out for protection
Dvaraka sinking into the water
Krishna is missing
since that was the end of an era
the warlords were strongest
the warlords cropped up everywhere
and lightened the load of the earth
withering, drying up with grief for their sons
on the banks of the Yamuna
Yadavas clash with Yadavas
Sinhalas clash with Tamils
Sinhalas clash with Sinhalas
Tamils clash with Tamils
Muslims clash with Tamils
Sinhalas clash with Muslims
on Kudumbi mountain
in Kaththan Kuti
in the Verugal river
in Nandi lagoon
the pennant of victory wet with the blood
of its own brother
throbs without shame
the sound of the warlords' snores
is heard ripping through the nights.
a little boat
bearing seven sages
pushed off into the sea of milk.
hiding on the riverbank, Krishna
weary of playing
his epochal game,
must be in a yogic sleep
to ease his fatigue
the river of time
gulps down and digests
the subject-matter of a heroic era
the potter of time
dissolving the ashes
of a heroic era
on that very water bank
threw earth on his wheel and began
a new era.
the eternal music of the changing eras
comes oozing out
of the corpse-laden
banks of the Yamuna.
Part V
am I
a solitary heron suffering
in the dried-out pond by the seashore
for times that do not come?
am I not even more ancient
than the roots of the banyan tree
where the cobra lives
on the seashore?
I am
the granary of abandoned villages
I am
the biggest merchant
of this roofless capital
I came to sing an elegy
for an era old and dead
I came to recite the epic
of an era newly born
I am the jester,
the Shakti of the era
descended into my hymns
the Maya of the era
returns my years to me
where is my sacrificial hall?
where is my sacrificial horse?
now
the days to come are mine.
Krishna!
give me your flute!
_Translated by Rebecca Whittington_
Nilanthan, ' _Yugappuranam_ ' (2011), in _Ini Enathu Naatkale Varum_ , Vitiyal, Coimbatore, 2012, pp. 93–99
## Keep All That to Yourself
#### _Karunakaran_
there came a saying:
if you have faith
then all will be well.
there came an order
that said: fear nothing.
there came a call:
be patient,
there came a warning:
keep peace,
there came an appeal
to relinquish everything.
I was everything and with everything
even when nothing came of anything
even when I found out
where all these came from
whom they came for
what they came for
keep all this to yourself
and leave me
to go my way gently
as a snail
as an ant
and why not even as a human being.
_Translated by Rebecca Whittington_
Karunakaran, ' _Neeye Vaiththiru Avarraiyellam_ ', in _Oru Payaniyin Porkala Kurippugal_ , Karuppu Pirathigal, Chennai, 2012, p. 30
## The Sea and Dreams
#### _Ki. Pi. Aravinthan_
the sea is deep and beautiful
and primeval
and unlike ponds and lakes
and like a dream, limitless.
the sea spread with waves
and the dream yearning for freedom
there was a time when
they were fused together
in starless darkness
on the fathomless dream's sea-surface
many rowed in search of direction.
boats of those bearing dreams
fell into the hands of those
who refused to support
the rowers with tired arms
roaring raging tireless
waves
yet unwilling to move away from
the sea.
would you believe
that these very waves
issued by this very sea
have eaten my dream?
I have a story of escape
from the waves of the sea
with the dream's remainder brimming over
despite this struggle
and the scars of narrow escape
I love the sea deeply
even today.
all that rises must come down
this is no new law
and neither is drifting along
the wind's direction a surrender.
tales in all directions
of the rolling sea
with its tireless waves
of insatiable fury
little waves within me too
rise up foaming and overflow
suddenly one day
the dream of the sea
breaks and scatters
in the fixed staring eyes
lying curled in the Mullivaykkal*
it has gone stiff
in the waveless depthless
stagnant sea
of the Nandi lagoon†.
_Translated by Rebecca Whittington_
Ki. Pi. Aravinthan, ' _Kadalum Kanavum_ ' (2012), _Kakkai Cirakinile_ Journal, May 2012, Chennai, p. 3
## Madakkombarai in Jaffna: A Memoir
#### _Malliappu Santhi Thilakar_
I reserved two bus tickets for my friend, Lenin Mathivanan, and myself on the 16th, well ahead of our trip to Jaffna on 19 July 2013. Despite being a seasoned traveller, having been to many countries, this trip was constantly making me restless and anxious, like never before. The main purpose of my trip to Jaffna was to speak at a literary conference on 20th and 21st July. I was a teacher early in my career. I am currently a management consultant. Public speaking or presenting a paper usually does not make me anxious. The reasons for my anxiety about this particular trip, however, were different.
I was born on 29 September 1973 in the Lion Quarters of the 'Puthukkatu' division of a plantation called Madakkombarai near Vattakkotai town of Nuvarelia (Nuwara Eliya) district, as the fourth child of my parents. (The eldest child, Chandrasekaran, died before he was a year old; the other two were my elder sisters.) It was on the day the government provided half a measure of rice to each family as relief against a famine that haunted Sri Lanka.
A year or two before I was born, my father's father, _thatha_ , had moved, with his family, to the Killinocchi–Vattakkatchi region of Vanni. Time led part of his family of plantation workers to Vanni and made them farm labourers. My three aunts and an uncle (my father's younger brother) were among those family members who moved. Our own family, along with another aunt and two uncles, continued living in Madakkombarai, in the hills ('upcountry'). Later on my eldest uncle also moved, with his family, to Vanni. The rest of us visited them from time to time. My memory of the very first time I made such a trip with my uncle, in 1978, when I was five years old, is still fresh. It was the first time I had ever seen my grandfather and grandmother ( _appayi_ , as grandmothers are called in the hills, became _aacchi_ when I met her, probably because of the Vanni connection).
In 1979, because of my acquaintance with the Sinhalese families in the PWD quarters located by the side of the pathway leading into our plantation, I wrote the Sinhala alphabet before I even learnt to write my first Tamil alphabet ( _ayanna_ before _aanaa_ ), sitting on the mud floor of our Madakkombarai (Vatakkimalai) plantation school. Even before that, when I could barely remember my own age, I had become familiar with the Tamil primer of letters, the _Ariccuvati_ , in the 'night school' of the neighbouring house where Mr Megharaja, an uncle of mine, now based in Kunnur in Salem district, Tamil Nadu, used to live. Since then, I have always called my uncle Megharaja my guru.
In 1977, although the government changed, our starving didn't. A biscuit in the morning and steamed _chou-chou_ (a popular vegetable in the hills) for lunch and, if the budget permitted, a little rice for dinner; this was how our days passed. As poverty chased us, the only way for my father to fend it off was to move to Vanni himself.
Witnessing the ethnic violence of the times, internalizing someone's 'far-sighted vision' that 'if one had to live in this country, the Sinhalese medium of instruction was the only way', our father enrolled three of us in the Vattakkadai Sinhala school. This meant bidding farewell to the dhoti-clad Gopinath Master from Jaffna (he had a penchant for pinching hard the back of our thighs if we were found guilty; from hearsay I gather he lives in France these days), Master Arumainayakam (from Batticaloa, I think), the school supervisor ( _kankaani_ ; plantation schools also had a supervisor!), old man 'Vatthangi', several friends, and even the slates and chalks that marked the beginning of our tryst with letters. To teach us Sinhalese letters, we had Sinhala teachers like Menikke Teacher, Amarakkon Teacher, Sarat Sir and Vidana Sir, among others. In place of Matiaparanam and Gunaraja, friends from the mud-floor Madakkombarai plantation school, my friends now in Vattakkadai were Ravindre, Nandasene, Indike and Iyasene. Everything had changed.
Occasionally, a money order from father would ease our hunger. His letters inspired us, gave us solace. He would write interestingly about happenings in the country, with a certain 'pride'. But poverty dogged our family. Mother's daily wage helped a bit. Not just my father, my mother, too, received her share of 'far-sighted vision', whereupon we were sent packing to evening tuitions to learn Tamil after school. 'If we learn Sinhala, won't tomorrow's children need to know Tamil?' So, when the Sinhalese school day was over, she enrolled us in night tutorial sessions to learn Tamil. Thus, as soon as it got dark, we presented ourselves at the Vattakkadai Sri Krishna Social Welfare School. Mister Shanmugam, the teacher and director of that school, became another guiding force in my life. He writes in the _Suryakanthi_ magazine under the name of Vattakkodai-Subbaiah Rajasekaran. He helped relieve my hunger, gave me an education and enriched my life.
There is no need to write about July 1983. While Tamilians were burning, I was studying in the third grade, in a Sinhalese school. My friend Ravindra, during some spat or the other, called me a _Para Thamila_. The very next instant my hand flew up. Ravindra, bawling with a bloodied mouth and a broken tooth in his hand, and I stood before the Principal for interrogation. I was giving my statement in Sinhala. (Just as we all do now . . .) Vidana Sir spewed hate as he looked me over. Sarat Sir looked at me with compassion. And the bell rang for school to close. 'Tomorrow the inquiry will continue,' they said, and sent us away. As the three of us were filing out, Sarat Sir took us aside and spoke affectionately. 'You'd better not come back to this school ever again. I am telling you this for your own good,' he said. We took our leave of him in the customary Sinhalese way. He blessed us and saw us off. Even today, the image of him dressed in white shirts and trousers, his curly hair and sharp nose, and his smile, remains with me. Sarat Sir was large in appearance and also at heart.
When father's far-sightedness was reduced to bits in an instant, mother's far-sighted plan came in handy. Shanmugam Master took us to the Vattakodai Tamil school, introduced us to the headmaster Shanmuganathan, and explained what happened. The headmaster thought for a while, then sent for the third-grade Tamil textbooks, and asked us to read. Standing straight, legs taut, I read aloud, breathlessly. Patting me on my back with a smile, Shanmuganathan said, 'Here (in this Tamil school), I doubt if a fifth-grader would read like this,' and looked at Shanmugam Master, who explained how, at my mother's request, I attended 'night classes' at his private school. I looked at Shanmugam Master in gratitude. 'Didn't your father come with you?' asked the headmaster. 'Father's working at a rice mill in Jaffna. We are at home with our mother. Our uncle has come with us,' I said, pointing at my father's younger brother, Tharmakularaja. The headmaster asked, 'Which town in Jaffna?' Uncle replied, 'Kokkuvil.' So father after all went to Jaffna, not Vanni. I remembered getting excited about seeing the name 'Kokkuvil' on the envelope containing his letters.
'Oh . . . I'm from Inuvil myself. I don't see why you should run around here when you could pursue your education there. Till then, I will admit you here. Write to your father about this,' said the headmaster. Enrolling us in the Vattakodai Tamil school, he too had imposed his own 'far-sighted vision' on us.
I must have studied at the Vattakodai Tamil school for about a month. Then, just like our kin in the plantations, who would knock at every door to inform everyone about their returning to India ( _homeland_ ) under the auspices of the Srimavo–Shastri Agreement, we did the same and moved to Vanni. Along with my sisters, I was put in Killinocchi's St Theresa's School (at that time boys could also study there till the fifth grade—I don't know how it is now). I don't know if this involved yet another far-sighted plan of father's, or there was some Machiavellian game behind my not being admitted into the Kokkuvil Hindu School, though father lived very close to it. The talk by Professor A.C. George in a panel on casteism at the 41st Literary Meet, 2012, for which I went to Jaffna, prompted me to check with my father as to why I was not enrolled in the Kokkuvil Hindu School.
The days in Killinocchi and Karadippokku, at St Theresa's School, affected me a lot. When we lived in the hills, I had to live apart from my father, but now I was without my mother as well. We were made to stay with an aunt in Vattakkachi, to go to school, while my parents stayed in Jaffna, where they worked. We did frequently visit Jaffna. I remember the Tamil film songs of those years played on the bus trips to Jaffna, the voice of K.S. Raja hosting film-based programmes on the radio, in particular. I also remember the Tamil movies that our uncle would take us to in the different cinema halls (Shanthi, Windsor, Manohara, Raja, Rani).
We were given a tiny house at the edge of the huge concrete floor-slab, built to dry rice, at the mill. The floor was big enough for us to ride on cycles as we wished. The compound wall hid the narrow lane that led to our house. There was a tamarind tree behind the house. If we climbed up on the roof, we could eat as many of its fruits as we fancied. On the left-hand side of the house there was a drumstick tree. Mother's preparations from its drumsticks remain unforgettable memories of that Jaffna home. Going back to school in Killinocchi after holidays wasn't easy. Tears would well up as I trod reluctantly towards the classroom. It wasn't so difficult at the Sinhalese school at the plantations. I was a product of that Sinhalese elementary school and I did not know any of the Tamil 'technical terms' of the classroom. Still I would speak, read and write 'Tamil'. For example, when those students said ' _alirappar_ ' what came out of my mouth was ' _ma(k)kanee_ '. Both meant eraser. But the trouble is that the former was Tamil and the latter Sinhala.
I am a Tamilian. But I felt everyone looked at me as though I were Sinhalese. I would sit on the last possible bench in the classroom, almost always close to tears. I had, however, one dear friend. I still wish I could meet him somehow in this lifetime, my friend Nesakumar. He was from the Kandy Teldeniya region and had suffered due to violence there. Since he had come from a Tamil elementary school, he knew all these Tamil 'technical terms'. I was a lot more at ease when I was with him. Somehow the final year examinations came and I managed to score well, passing in first class. Not sure how I managed a seat at the front row in the next class. However, I was a lot more fluent with the Jaffna Tamil terms, so much so that I could easily say not just ' _alirubber_ ', but also ' _pendu'_ (then), ' _cycle ulakki_ ' (to ride a cycle) and ' _velikkittu_ ' (to go out). Even in those days, I remember going to a popular Rajinikanth film at the Eswara theatre along with my cousin, Senthooran.
As time was racing along like this, one day, because of heavy rain the Iranaimadu reservoir got breached and the water surged to destroy all the small bridges between Vattakkachi and Killinocchi. We went to school in the morning, but we could not return home later. The villagers struggled hard and, with a Herculean effort, built some catamarans that took us home, scared to death all along the way. Going to school was often interrupted. Father's illness made it impossible for him to return to the North from Madakkombarai, where he went visiting. Mother had to get to Killinocchi, making our stay with our aunt even more burdensome. Unhindered by anyone's 'far-sighted vision', this time around, we returned to the hills, for good.
Again, Shanmugam Master, the headmaster Shanmuganathan, and the Vattakodai Tamil school. I had left there as a third grader, and now returned in time for the fifth grade exams there. I could not do well in that exam and had become considerably 'weak' (as a student). By now my speech had a whiff of Jaffna in it. Bhagyalakshmi, Gomathi, Mutthulakshmi, Gnanambikai, Bhavani, and . . . some other female teachers whose names end in '–mani', and Indrarajan the maths teacher, whom we all called by the nickname 'Kotthurotti', they were all teachers from Jaffna working in that Vattakodai school. Bhagyalakshmi Teacher staged a play in Jaffna dialect in that Vattakodai school. I played the part of a government agent ( _vithaanaiyar)_ and received compliments for good acting.
Within a few days, my hill-country Tamil came back to stick to me, but not the school itself. Then Uncle Dharmakularaja enrolled me in the Puntuloya Tamil High School, where he had studied. I joined the school just when the headmaster Nataraja left, after being promoted. Later, after I had grown close to him, he'd laugh and say that I gave him his promotion. There, too, Gukeesvararaja (commerce), Irudayanathan (Tamil), Rajaratnam (mathematics), Muralidaran (science) and Vignesvaran (class teacher) were teachers from Jaffna. Teacher G. Muralidaran was a good artist. He adapted Professor Mounaguru's play _Rain_ as _Eyes Seeking the Dawn_ and directed it, featuring me. That play won national recognition in the Tamil Day celebrations. It was on the way to stage it at the national competition that we were compelled to turn back, because there was a bomb attack on the security minister, Ranjan Vijayaratne, near Nittamby on the Colombo–Kandy highway. Later, during my school days it was staged at the National Literary Festival, headed by P.P. Devaraj, when I was selected as the best actor.
The year 1989 was marked by the Janatha Vimukthi Peramuna (JVP)-led riots. It was a time when tea factories were burnt as they were seen as symbols of foreign investment, disrupting the foundations of the Sri Lankan economy. I also remember the JVP's arguments from those times, that the tea-plantation workers were leftovers of Indian expansionism. In the forests of Madakumbaram on the way to school from Madakkombarai to Punduloyaa, several times I have seen burnt corpses of people, with charred tyres around their necks. In fact, I was the one who ran to inform the village about the killing of the Madakkombarai camp manager Dharmaraja, a Tamilian, who was shot dead by the JVP, a bullet in his head. I found him on the roadside, on my way to school. His gravestone can still be found at the same spot where he was shot, near the entrance to the village, as a symbol. Just a kilometre away from there, on the roadside, is the tomb of the people's poet and leader C.V. Velu Pillai. The JVP problem was at its peak in the South and the Indian Army had just left the North under the regime of President Ranasinghe Premadasa. The Tigers were often visiting Colombo for talks. Vanni seemed secure, once again.
In 1990, I took the public examination and returned to Vanni to pursue higher studies. This time, it was just me. It was a time of many changes, thanks to the departure of the Indian Army and the Thirteenth Amendment. I reached Visvamadu, where another uncle and aunt were living. I was thinking of going to high school in Murasumottai or Kandavalai, and stay with an aunt. We had land at Visvamadu as well. While helping my uncle out in his farm, I have seen fighters of the movement walking around with guns.
One day as I was riding my bicycle to our farm along a narrow lane, I was trapped at the junction of three streets by three cyclists bearing guns, and I couldn't move in any direction. I was scared. 'What's your name? Where are you coming from? Why are you coming along here? Do you have any connection with E.P.?' Many questions were asked. I stood there in the midday sun. I had seen them going around in the streets before, so I figured they must be Tigers. A few minutes after I thought they had finished their interrogation, making way for me, they said, 'Okay. You can go.' I bore down on my bicycle pedals, trembling with fear and reached the farm, and I narrated the entire incident to my uncle. Listening to me quietly, he said, 'Is that so? . . . Let's go somewhere immediately.' He went out somewhere with a sense of determination. The radio was announcing the death of A. Aziz, the president of the plantation workers' union.
My uncle came back and I was ready. Both of us left on his bicycle, with him riding fast. I could see the schools in Murasumottai and Kandavalai passing me by on my bicycle ride. I had no clue where I was going. There I was imagining that my uncle was taking me to certain big shots of the movement to set my record straight, and was even enthusiastically pushing pedals to assist him. But he was in no mood to talk. It seemed as if he was thinking of reaching some place much before anyone else did. He spoke only after we reached the Parandan railway station.
'If they know for certain that you have no connection with anybody, you'll have no trouble. But that by itself will fuel trouble for your family. You're the only male child of the house. I know of only one way out. I have to send you home,' he said. That was when I realized that the reason he was in such a hurry was to catch the 'Yazh (Jaffna) Devi' train. I arrived at Vattakkodai by way of Polakavalai after I had been able to get a glimpse of St Theresa's School, where I had previously studied, from the train. How two police posts came up around our Madakkombarai house a month after I arrived is another story. But the Yazh Devi train which set me down in Polakavalai that day never went back to Jaffna again—to this day. It has resumed its operations after the war but has only managed to touch Vavuniya and Tandikulam. It is now contemplating going to Killinocchi. The fact that I was on this bus going to Jaffna even before the Yazh Devi was what caused me so much anxiety and restlessness.
What foresight on the part of my uncle too! Having left Jaffna on his advice, it has taken me twenty-three years to cross Vanni and now twenty-seven years to return to Jaffna. In between, my cousin-brother Devaraja has committed suicide in Vanni. Another cousin, Tiruchenturan, who took me to the movies, has been buried as 'Nithi'. Cousin Shanthini has been sowed as 'Poonkuyil'. Who would know that all of them were born in Madakkombarai and were carried to Vanni as infants? My railway man uncle Thangaiah fled with his sewing machine when people were asked to leave during the final phase of the war. When he could no longer run carrying the machine, he had to abandon it midway, which made him feel utterly ill. We rescued him from 'Arunachalam' prisoner's camp and tried to treat him, but he did not recover and died subsequently. We buried him in Madakkombarai . . . once again.
How could I be at peace with myself on this journey thinking about all this?
I continued my journey chatting with my friend Lenin. He fell asleep, but I couldn't sleep. Anxiety kept me awake. Determined not to get off at Vanni and to proceed straight to Jaffna, I continued on the bus, tracking the scars on the landscape, despite the driver's best attempts to frighten me to death. Though annoyed with his driving, I decided to remain calm, as I was going back as a new person. There were other journalists like Devagowri, Dushyanthini and Kesha on the same bus. But nothing much transpired by way of conversation between us. I hardly knew them then.
As soon as I got off the bus at Jaffna, I went to the place where the literary conference was going on and did not move an inch till it got over the next day. In the first session on the second day, I spoke on Poetic Literature and Nationalism of the 'Upcountry'. In the final session the same evening, my friend Lenin Mathivanan spoke on 'Upcountry Nationalism'. Our plans to leave the same night seemed to fall through. We postponed our trip by a day and shifted elsewhere for the night. Friends Asura and Devadoss embraced us warmly. We visited Raghavan and Nirmala on the way there. Devadoss sat on top of a table and started singing the hill songs popularized by EPRLF, while Nirmala and Sumathi started singing the songs of Meenatchi Ammal Natesayyar. A proper concert had begun. Cheered on by Kovai Nandan, Asura, Raghavan and Lenin, I started singing the songs of Vattakkodai Kabalichellan, the lesser-known, legendary 'Upcountry' folk singer. (It wasn't even a month since he had died.)
Songs with a beat made friends like Nirmala dance. She was pleasant and wished me well, like my mother would. It was a surprising coincidence that away from the literary conference, a musical evening was in progress, comprising only 'Upcountry' songs.
It was the morning of the third day. After twenty-three years, I bathed with water drawn from the well. The morning felt nice. Breakfast with friends. Lenin and I set out to go around Jaffna as I had told him I wanted to. 'No problem. We'll go wherever you want to go,' he agreed. We boarded the bus to Gangesanturai, and I told the conductor, 'Drop us off at Taavadi.' From memories I had carried inside me for the last twenty-seven years and inquiries at the roadside garage about 'that rice mill', we approached that lane in the hot sun. The lane ends at the mill's gate. As I reached the place, I was restless. Lenin seemed to understand my emotions.
The concrete floor where I used to cycle was overrun by bushes. I was searching for my house. I could spot the remains of its foundation. Holding back tears, I went to the miller who informed me: 'They sold it to us a long time ago.' The area where our home used to be had also been sold in bits and pieces. I asked about the water tank that I was so fond of. The stranger said, 'There it is,' pointing in a certain direction. I asked if I could take a picture. He said he would check with the owner. We decided to avoid the trouble and I just took a photograph of myself along with the remnants of what was our home. So many old memories flooded my mind at that moment that I could think of nothing else.
It must have been 1984. As a ten-year-old boy, I loved my holidays, when I could hang around my mother; carrying buckets of water from the garden, and the joy of collecting warm rice as it fell from the mill, not to mention the food made of that warm rice in those days. I used to love running to the shop often.
One day as I went to the shop, a few older boys on cycles in a group, gave me some handbills. I had no idea what was in my hand, as I was waiting my turn to buy some chilli powder from the grinding mill at the corner. Then a loud thundering noise broke out . . . thud . . thud . . . People started to run, screaming. I looked out into the street. The old man of the shop, Veerappa, asked me to run home to tell my family that the army was coming, hunting for those brothers distributing the handbills. As I was running in the lane, I could see a cow rolling over, dying from a gunshot wound. Bolting the compound gate, I ran to my mother and told her about what was coming. We gathered whatever we could and stepped out of the house. My father was working in the mill, a bit inside. The owner's mother came out of her house, which was next to the mill. She saw us panicking. As we were telling her what had happened, a family was banging on the gate that I had bolted tight.
'The army is coming . . . shooting . . . Please save us . . . Open the gate . . .' cried the family in terror. There were about six of them, including wife, husband and children. I could recognize them as people living in one of the mud huts by one side of the narrow lane leading to the rice mill, so I ran to open the gate. 'Don't open that gate, boy! Do not open that gate!' the owner's mother stopped me. The family was trembling. Crying. Begging. But the owner's mother kept on scolding me. With fear lingering in me, after seeing the cow get shot and roll over and die, her harsh words only scared me more. My mother pulled me close to her and hugged me. I was totally focused on opening the gate. The sound of gunshots started getting closer. . . .
One of the men in the fleeing family, shirtless, his lungi tied up to his thighs, had a small knife tucked in at his waist. That tall, dark robust man climbed up on the gate and jumped inside, pulled at the gate and, with just one yank, the lock broke. The gate opened. They entered the rice mill where we all stood, now in greater number. In their hands were ladles, knives and cooking utensils, and in the little ones' hands were bicycle tires they'd been playing with. They must have just fled instantly. Now the sound of the approaching army was fading. The narrow lane had two to three turns. The army did not venture beyond the second. They must have seen the desolate houses and left. But the fleeing families had spread over the entire mill. The owner's mother was cursing herself, beating her head.
It was common among the plantation workers to use abusive language, and the children were not immune to it. But my own mother had kept me away from all that. But the tall, dark man who climbed over the gate came close to the owner's mother and said, 'Talking about caste, what caste? #@*&*# caste . . .' and he hurled the choicest curses at her. My mother could not keep me away from those abuses. In fact, I realize now that I was rather enjoying them. Everyone who had left the mill, including my father, the workers and us were watching, as if it were a drama. I could not understand much about the events of the day then, until I could hear the speeches of writers like Theniyan, Senior Gunasingam, Akalya, Devadas, Rengan Devarajan, A.C. George, and the anxious and tense reactions of the conference organizer Vel Tanjan. Even now, the image of Sarat Sir from the Sinhalese school hovered before my eyes.
Rescuing myself from memories, Lenin Mathivanan and I walked towards the KKS Street. Visited the Jaffna library and the Nallur temple. But couldn't even go inside them. The library was closed on the full moon day. The temple was closed after the day's ritual. We went to the new home of our friends. The house where the musical evening had taken place the previous night apparently was maintained by them as a memorial for their sister Rajini Tiranagama. After spending some time there, we returned to the old house. Having come prepared to stay for just two days, we had run out of clothes. Our friend Asura pulled brand-new shirts out of his luggage from France. I felt a new bonding with Jaffna. Full of emotions, we bade goodbye. Asura and Devadoss, who had come to see us off at the bus station, took leave of us. We got into the bus after a mini-shopping trip to get _odiyal_ , dry fish, snacks from Paruthithurai, _idiappam_ trays, pickled chillies, palm jaggery, and other assorted snacks.
On the way back, it felt as if my friend Asura was jumping out of my shirt pocket, teasing me. As I reached home the next morning at six, the first question that my mother asked was: 'How is our home in Jaffna . . . ?'
_Translated by D. Senthil Babu_
Malliappu Santhi Thilagar, _Yaazhppanathil_ ' _Madakkombarai_ ' (2013), in _Jeevanathi_ , No. 63, December 2013, pp. 39–47, published from Nelliady, Jaffna.
## Release
#### _V. Gowribalan_
Stepping back a bit as she walked behind the bus moving ahead, its dense, dark smoke choking her, she stood there, clearing the smoke with her right hand. As she got off the bus, she had worried that her light, nylon churidhar clinging to her body with sweat, was disgustingly revealing it. She grew angry as if the stench of arrack from the stout lips and thick moustache of the man who fell on her, had settled on her body too. She felt her attention shattering, unable to focus her thoughts on the landmarks that layered her memories, as the intermittent heat and the sweltering wind slapped her face. She felt the throbbing pain rise on the left side of her forehead as her headache flared up, reminding her of its latent presence. She felt sad that the crumpled bag tucked tightly under her left arm, made out of a fertilizer sack, had left her and her community wandering in an ancient time. Wrapping one end of her white _dupatta_ around her left arm to hide it, she put the other end around her neck like a garland. She sensed an implacable anxiety as this was not her familiar narrow road, with its potholes and its sweeping white sand. She was distressed as she felt alienated, insecure and alone standing on that wide tarred road, with its clear white stripes rising from the white sandy surface. She realized that she had got off the bus, two stops earlier, because of the rush and the panic induced by the fact of her coming here after a long time.
She bowed her head down as the burning sun, right on her eyebrows, decayed and dissolved as colour bubbles in her watering eyes. She felt, just like her, the tall electric post's shadow lying humped and curled up inside the pit. She stood there feeling uncomfortable in that churidhar, clinging to her body with sweat, bought for someone else. She felt disgusted wearing that worn-out purple churidhar, with coloured black dots, that someone else had liked and bought. She remembered she was the last person to pick it up, thinking it would fit her, from the heap of clothes dumped on the cement floor of the rehabilitation camp, from bags made of fertilizer sacks. She saw the goat that came out on the side of the ' _eecham_ ' shrub, going back into it, as it saw her.
Disgusted with herself, she thought she would take the sandy track by the lime kiln and not the gravelled lane. The whiteness of the lime shells baking in the heat of the husk forming in her memory, she walked along the kiln, built like a well with hard clay and exposed bricks, laden with ash and charcoal. Thinking of the scattered particles flying out of its cracked-up, blackened chimney, she stood in front of the kiln. The scent of the smouldering shells settling heavily on her, she started running down from the tarred road towards the sandy track, along the fence of the lime kiln, made of dried palm leaves.
She felt she needed to walk under the cashew trees to avoid the sweltering heat. She walked thinking that the branches of the cashew trees curled up like creepers, were sprawled on the white sand like a caved-in green tent. She realized her exhaustion, with the burning sun hitting her straight in the face, walking on that aimless, meandering track, lined with cactus and scrubs, some with thick leaves and some full of thorns. Once again her headache began to show its presence. She felt the piece of shrapnel inside her head on the left side heating up and its heat spreading across her face. She walked faster, realizing that her feet burnt sharply as her flat slippers sank into the sand.
She felt an obscure hope looking at the green, budding leaves on top of the palm-like drumstick tree, planted along the ridges of the sand-bunds to firm them, shoring up the abandoned betel field, grown almost half as high as a coconut tree. Bending forward, she climbed up the slope of the crumbled sand-bund, as if looking for her father's hopes and drops of sweat. She felt irritated at the grains of sand caught between the slippers and her feet that were thrown up, hitting her neck, getting into her shirt, sticking to the sweat, and rubbing her.
Standing with one leg on the ridge of the sand-bund and the other on its slope, she turned to look back as if something that she lost a long time ago was lying there somewhere behind her. She felt as if the pond, thick with black algae and grass, with little water, had moved far from the sand-bund on which she was standing. She remembered when her father had his betel field on this sand-bund, the pond was bigger, with more water, closer to the field's fence; its memory sprouted afresh and dissolved in her mind. Her father's dark, emaciated body, just a piece of cloth around him, carrying water in a large earthen pot from the pond, climbing on to the sand-bund with his feet sinking into sand, appeared in her mind as an image on the water in the pond, only to dissolve like water bubbles, swept away by waves. She saw a hazy image taking shape in the mirage from the scorching sand, with sirissa trunks chopped for wood, of her brother and her, peeling and eating tapioca, roasted under the sirissa tree with its hanging, long, green pods, still feeling the tapioca's warmth in their hands. She stood staggering, as if the green betel vines—which were her food, which were her books—were spitting out fiery wind at her, unsettling her as they swayed. She felt as if her legs were buckling, losing strength, like her left arm, when she remembered that this is the field her father had toiled on, carrying earth in baskets, mixed with sand, dung and sweat, and planting vines.
She came to the centre of the scorching sand-bund, letting her gaze wander in the direction of the settlements in the distance. She saw the scattered, squat houses of stone and their red tiles, with the names of organizations engraved in white on them, through the thick faded leaves of the cashew branches, scrubs and white sand dunes. She saw that the cone-shaped huts made of tin sheets and palm leaves, which she had seen when she left to join the movement, had disappeared completely. She looked sharply at her own hut, forcing her gaze through the sweeping waves of the broiling wind. She ensured that the tiled house, built by organizations, standing mutely next to the tall, white tent, with UNICEF embossed with blue paint on it, was theirs. She sharpened her gaze further, despite the heat. She felt her eyes taking in people, appearing taller than the sloped roof of her house, clad in white clothes, walking around inside the tent and outside her house. She saw hazy faces of relatives amidst the vivid moments of hurried preparations for a cremation, swaying in her memory, inside her tearful eyes.
She thought about the moments when she was leaving, not sure whether to feel happy or sad that it was the news of her younger brother's death, conveyed by the Red Cross, which turned into the reason for her own release.
'He seems to have fallen in love with some education officer's daughter in the campus. That girl's father seems to have slapped him in front of the others in the campus . . . the same night he hanged himself from a tree . . . we will try to keep the body for three days . . . if they let you go, come and see his face for the last time.'
Uncle Govindan, who had come along with the Red Cross, informed her.
She remembered the betel-chewing mouth and the stained clothes of the old man, who served food at the camp, who like her own father had showered pure affection on her, had signed the bond and helped her board the bus.
Remembering the screaming sounds coming out of the loudspeaker, she felt her eardrums being slammed, as the leader spoke constantly at public events and school sports competitions and said, 'Boy or girl . . . one from each family must come to fight.'
'The boy is very smart . . . intelligent . . . He will join the campus somehow . . . if he makes it, he will carry the family ashore.'
When Uncle Govindan repeated this, she reminded herself of how her father kept listening to him, oblivious of its intent. She recalled the moments of that night, and the bright sandy tracks shining in the moonlight, as she left to join the movement in the middle of the night, leaving a letter for the sake of her brother and her family.
In the sweltering heat, as the piece of the 'shell', still inside her head, gained heat, she felt her head and body simmering along with her nerves and veins. She got down from the sand dune, and gathering her senses, walked in the direction of her house. She could feel herself beginning to digest the sounds of wailing, lamenting and the commands of rituals, well before she neared her house.
When she started to listen to the rustling conversations and lamentations closely, she felt the thorny bushes and the eecham shrubs were the only ones screening her from revealing herself. She suddenly felt the stench of sulphur in the air. She felt fear as a single Chinese cracker that burst nearby vibrated and subsided inside her head. She heard the subsiding echo of the cracker against the sky, like clothes beaten and washed on stone.
In the rush to present herself, her legs fumbled and, losing all control, she saw the thorny scrubs and the hot sand, closing in fast, straight on her face. Realizing that she could not use her lame left arm, she jerked her body rightwards, and her right shoulder hitting the ground hard, she fell with her head striking the hot sand. Once again the stench of sulphur hit her; she heard the bursting sounds of bundles of crackers very close by. Scattered shards of paper and sparks of fire kept appearing suddenly.
She felt the stiffness and pain, as if countless needles were piercing inside her head, along with sounds and echoes of constant bursts. As the pain grew, she felt helpless, unable to control her legs, shivering, then trembling on the hot sand. She heard her teeth grinding as her jaws stiffened. Her eyes darkened when her entire body shrank, shivered and began to tremble.
She felt the continuous sounds and echoes of the Chinese crackers inside her ears; they were turning gradually into sounds of bursting gunshots, without echoes. She felt a shapeless memory rising in her mind, of her shooting incessantly, despite feeling the searing heat of not just the steel parts of her gun but also its wooden parts. What stayed in her mind was the memory of the pain she felt when heavily wounded by the flying splinters of stone, bursting out of the concrete bunker, as a bullet from the other side smashed into it. She sensed the discontinuous, intermittent, fierce stench of the charred concrete bunker, smashed by the bullet, assailing her nose hard, and unsettling her. She tried to bring to her mind that scene of the torrent of light that appeared like a lightning strike, as she was shooting continuously. She only recollected how she felt when the strong, massive, silent heap of sand moved towards the bunker. After a thundering sound and a sandstorm had passed, she saw the bunker shrouded in heaps of sand and ruined branches of trees, which appeared in her memory like images of violent scenes in a movie, shown in negative. As she felt the fresh hot blood spreading from the left side of the head down to her cheeks and from the place where her two fingers appeared to have been sliced off by a sharp blade, the roaring noise inside her ears stopped and she completely lost her consciousness.
She felt her senses coming back, along with the left-side headache. She felt her body lying soaked in sweat, as if drenched in rain. She felt her sweat making the sand and the churidhar she was wearing, bought by someone else, stick to her oppressively, and it disgusted her. She could feel the persistent dizziness inside her head as she opened her eyes. She was shocked when she realized that the sky had turned ashen and dusk was setting in. She felt sad as her senses made her aware of the complete silence of the dead house. She could feel her body absorbing the latent heat of the sand, still hot from the day. Clamping her teeth, she could feel the pain in her jaws and the salt on her lips.
Getting up, stepping across the thorny scrubs and the eecham shrubs that were like a screen blocking her, she saw her father stooped, innocently, in front of some relatives. She stood there drained, with a rush of implacable emotions egging her on to go back and sit behind the screen of the thorny scrubs and eecham shrubs until dark, hiding her left arm with the white dupatta. Salvaging the memories of her and her brother watering the greens and eating smoked cashew nuts together, she went back, sat behind the screen of the thorny scrubs and the eecham shrubs, and wilted into tears.
_Translated by D. Senthil Babu_
V. Gowribalan, ' _Thirumputal_ ' (2014). Unpublished in Tamil.
## Copyright Acknowledgements
Grateful acknowledgement is made to the following for permission to reprint copyright material:
A. Jesurasa: A. Jesurasa for ' _Unnudaiyavum Kathi_ ', in M.A. Nuhman and A. Jesurasa (eds), _Patinoru Eelattu Kavingargal_ , CreA, Chennai, 1984, p. 179.
Aruntati: Aruntati for ' _Kelvigal_ ' (1999); first appeared _in Uyir Nizhal Journal_ , Paris, March–April 1999; published in Sugan (ed.), _Theendathagaathavan Muthalaana Eelathu Dalit Sirukathaigal 14_ , Maalika Books, Chennai, 2007, pp. 128–141.
Aswagosh: Aswagosh for ' _Irul_ ' (1990), in _Vanathin Azhaippu_ , Nigari, Kalkilai, 1997, pp. 16–18.
Bose Nihale: Son of Bose Nihale for ' _Veenai_ ' (1999), in _Sarinigar_ , No. 172 (27 May–9 June 1999, Colombo; published in _Vetraaki Ninra Veli_ , Vitiyal, Coimbatore, 2001, p. 23; for ' _Nigazh_ ', in _Sarinigar_ , No. 172, 27 May–9 June 1999, Colombo; published in _Vetraagi Ninra Veli_ , Vitiyal, Coimbatore, 2001, p. 22.
Chelian: Chelian for ' _Chinnathambi_ ' (2010), _Kaalam_ Journal, January–March 2010; for ' _Karunaiyum Illadavargal_ ', in _Kadalai Vittuppona Meen Kunjugal_ , Kaalam, Toronto, 2007, p. 16.
Cheran: Cheran for ' _Veerargal Thuyilum Nilam_ ', in _Sarinigar_ , No. 172, 27 May–9 June 1995, Colombo; published in _Vetraagi Ninra Veli_ , Vitiyal, Coimbatore, 2001, pp. 15–17.
Deebachelvan: Deebachelvan for ' _Nilam Peyarnthalaiya Vandhu Vidu_ ' (April–May 2009), in _Aatkaltra Nagarathai Thinra Mirugam_ , Uyirmai, Chennai, 2009, pp. 91–92.
Dominic Jeeva: Dominic Jeeva for ' _Gnanam_ ' (around the early 1960s), in _Dominic Jeeva Sirukataikal_ , Mallikaippantal, Jaffna, 1996, pp. 109–118
Faheema Jahan: Faheema Jahan for ' _Azhivin Pinnar_ ', in _Oru Katal Nirurril,_ Panikkudam, Chennai, 2007, p. 12; for ' _Enathu Kaimarri Yenthi Kol_ ', in _Abarathi_ , Vadali, Chennai, 2009, pp. 17–18.
Ilaiya Abdullah: Ilaiya Abdullah for ' _Ethirkollal_ ', in _Pinam Seyyum Desam_ , Uyirmai, Chennai, 2004.
Ilavalai Wijayendran: Ilavalai Wijayendran for ' _Thadi Kondu Tiribavargalukku_ ' (1990), in _Niramarru Pona Kanavugal_ , Desiya Kalai Ilakkiya Peravai and South Vision, Colombo, Chennai, 1999, p. 56.
Iravi Arunasalam: Iravi Arunasalam for ' _Kaalam Aki Vanta Katai_ ', in _Kaalam Aki Vanta Katai,_ Vitiyal, Coimbatore, 2003, pp. 21–25.
Ki. Pi. Aravinthan: Ki. Pi. Aravinthan for ' _Oru Agathiyin Thaayum Thayagamum_ ' (2004), in _Iruppum Veruppum_ , Salaram, Chennai, 2009, pp. 100–109; for ' _Kadalum Kanavum_ ' (2012), _Kakkai Cirakinile_ Journal, May 2012, Chennai, p. 3.
Karunakaran: Karunakaran for ' _Varugaialaridam Sila Kelvi_ ', in _Oru Payaniyin Nigazhkala Kurippugal_ , Magizh, Putu Kudiyuruppu, 2003, p. 36; for ' _Thagikkum Koodu_ ' (2009), in _Pali Aadu_ , Vadali, Chennai, 2009, p. 30; for ' _Karuppu Nai_ ' (2009), in ibid., p. 85; for ' _Nizhalai Vilakka Mudiyaatha Por Veeran_ ' (2009), in ibid _.,_ pp. 97–98; for ' _Oyaa Kadal . . . Urangaa Nilam . . . Theeraa Kanavu_ ' (2011), in Karunakaran, _Oru Payaniyin Porkala Kurippugal_ , Karrupu Pirathigal, Chennai, 2012, pp. 90–91; for ' _Neeye Vaiththiru Avarraiyellam_ ', in ibid., p. 30.
Kumaramurthy: Son of Kumaramurthy for ' _Hanifavum Irandu Erudugalum_ ', in _Kumaramurthy Kathaigal_ , Kaalam, Toronto, 2002, pp. 29–35.
M.A. Nuhman: M.A. Nuhman for ' _Nerraiya Malaiyum Inraiya Kaalaiyum_ ', in _Alai_ Journal, Jaffna, December 1977, pp. 239–240.
Maalika _*_ : Maalika for ' _Oriravil_ ', in _Erimalai_ , September 1996; published in _Vetraagi Ninra Veli_ , Vitiyal, Coimbatore, 2001, p. 47.
Mahakavi: The estate of Mahakavi and Cheran for _Therum Thingalum_ , in M.A. Nuhman and A. Jesurasa (eds), _Patinoru Eelattu Kavingargal_ , CreA, Chennai, 1984, pp. 27–28.
Majeed: Majeed for ' _Ulmana Veli Parappinil_ ', in _Sarinigar_ , No. 162, 24 December–14 January 1998, Colombo; published in _Vetraaki Ninra Veli_ , Vitiyal, Coimbatore, 2001, p. 25; for ' _Ner Kottu Parappalave Enakkullum Thuyar_ ' (1998), in _Sarinigar_ , No. 152, 6–10 August 1998, Colombo; published in _Vetraaki Ninra Veli_ , Vitiyal, Coimbatore, 2001, p. 26.
Malaravan: N. Malathy for extract from _War Journey:_ _Diary of a Tamil Tiger_ , translated by N. Malathy, Penguin Books India, New Delhi, 2013, pp. 59–73; _Por Ula_ , Publication Division, LTTE, Killinocchi, 1993; second edition, Vitiyal, Coimbatore, 2009.
Malliappu Santhi Thilagar: Malliappu Santhi Thilagar for ' _Yaazhppanathil Madakkombarai'_ , in _Jeevanathi_ , No. 63, December 2013, pp. 39–47. Published from Nelliady, Jaffna.
Mu. Ponnampalam: Mu. Ponnampalam for ' _Natai_ ', in Mu. Ponnampalam, _Kaalil Leelai_ , Dhwani, Chennai, 1997, pp. 90–91.
Mu. Thalaiyasingam: The estate of Thalaiyasingam for ' _Sree La Sree Arumuga Naavalarku Ezhudhum Vinnappam_ ', in Mu. Ponnampalam (ed.) _, Thalayasinkam Padaippukal_ , Kalachuvadu, Nagerkovil, 2006, pp. 771–783.
Na. Sathyabalan: Na. Sathyabalan for ' _Ezhudhappadaadha Madalonrin Kathai_ ' (2010), in <http://marupaathy.blogspot.in/2010/09/blog-post_2958.html>
Neelavanan: Son of Neelavanan for ' _O . . . O . . . Vandikkara_ ', in M.A. Nuhman and A. Jesurasa (eds), _Patinoru Eelattu Kavingargal_ , CreA, Chennai, 1984, p. 81.
Nilanthan: Nilanthan for extracts from two long poems, ' _Vannimaanmiyam_ ' and ' _Yaazhppaaname, Enathu Yaazhppaaname!_ ' by Nilanthan; ' _Vannimaanmiyam_ ' first appeared in _Niyathi_ , Mallaavi, 2002; ' _Yaazhppaaname, Enathu Yaazhppaaname_!' first appeared in _Magizh_ , Puthu Kudiyuruppu, 2002. These poems are published in Nilanthan, _Ini Enathu Naatkale Varum_ , Vitiyal, Coimbatore, 2012; for ' _Yugappuranam_ ' (2011), in _Ini Enathu Naatkale Varum_ , Vitiyal, Coimbatore, 2012, pp. 93–99.
Pa. Ahilan: Pa. Ahilan for ' _Pathungu Kuzhi Natkal_ ' (1992), in _Pathungu Kuzhi Natkal_ , Kuruthu, Erode, 2000, p. 15; Pa. Ahilan, ' _Peru Nilam— Mannadukkugal Parriya Arimugam_ ' (2010), in _Saramakavigal,_ Peru, Jaffna, 2011, p. 45.
Piramil: Estate of Piramil for ' _Lankapuri Raja_ ' (23 June 1985), in _Tinamani Katir_ , Chennai; in K. Subramaniam (ed.), _Piramil Pataippukal_ , Adaiyalam, Puthanatham, 2003, pp. 101–111.
R. Muralisvaran: R. Muralisvaran for ' _Tholaintha Vaazhvu_ ', in _Sarinigar_ , No. 155, 17–30 September 1998, Colombo; published in _Vetraaki Ninra Veli_ , Vitiyal, Coimbatore, 2001, pp. 29–30.
Ranjakumar: Ranjakumar for ' _Kaalam Unakku Oru Paattu Ezhudum_ ', in Ranjakumar _, Mokavasal_ , Yathartha, Paruthithurai, 1989, pp. 18–31.
Rashmy: Rashmy for ' _Eemam_ ' (1999), in _Kaavu Kollappatta Vaazhvu Mudalaaya Kavithaigal_ , Exil, Coubevoie, 2002, pp. 49–51.
S. Sivasegaram: S. Sivasegaram for ' _Payanam_ ', in M.A. Nuhman and A. Jesurasa (eds), _Patinoru Eelattu Kavingargal_ , CreA, Chennai, 1984, p. 170.
Selvam Arulanantham: Selvam Arulanantham for ' _Vyakula Prasangam_ ', in _Thotruthaan Povoma_ , Sabalingam Nanbargal Vattam, Gorges Les Gonesse, France, 1999; published in _Vetraagi Ninra Veli_ , Vitiyal, Coimbatore, 2001, p. 50.
Shanmugam Sivalingam: Estate of Shanmugam Sivalingam for ' _Paadatha Padalkal_ ', in Shanmugam Sivalingam, _Neer Valaiyangal_ , Tamizhiyal, Chennai, 1988, pp. 112–113; for ' _Sithaninthu Pona Desamum Thoornthu Pone Manakkugaiyum_ ' (1997), in _Kalachuvadu_ , Tamiliyal, Nagerkovil, 2010, pp. 195–196; for ' _Oru Sarvadesa Agatiyin Paadal_ ' (1990), in _Sithaninthu Pona Desamum Thoornthu Pone Manakkugaiyum,_ _Kalachuvadu_ , Tamiliyal, Nagerkovil, 2010, pp. 205–206.
Sivaramani: Sivaramani for ' _Maalai Nerangalil'_ (1989), in _Sivaramani Kavithaigal_ , Women's Study Circle, Batticaloa, 1993, pp. 39–41; for _'Ennidam'_ (1989), in ibid., p. 38; for ' _Enathu Paramparaiyum Naanum_ ' (1989), in ibid., pp. 42–43; for _'Thanithu'_ (1989), in ibid., pp. 46–47; for ' _Avamaana Paduthappattaval_ ' (1990), in ibid., pp. 44–45.
Su. Vilvarathinam: Su. Vilvarathinam's estate for _Oru Paalaiyin Kural_ (1989), in _Uyirtthezhum Kaalathirkaga_ , Vitiyal, Coimbatore, 2001, pp. 157–158; for ' _Vetraki Ninra Veli_ ' (1994), in ibid., pp. 139–140; for ' _Nilavin Ethiroli_ ' (1999), in ibid., pp. 324–325.
T. Malar Chelvan: T. Malar Chelvan for ' _Anji Maraikkal Pallan_ ', in _Uyir Nizhal_ , January–July 2009, Paris, p. 58.
Tha. Agilan: Tha. Agilan for ' _Oru Paiyanin Appa Irandu Ponar_ ', in _Maranathin Vaasanai_ , E. Pathippagam, Chennai, 2009, pp. 21–27.
V. Gowribalan: V. Gowribalan for ' _Appe Ratta_ ' (2003), in _Oppanai Nizhal_ (first edition 2003), Parisal, Chennai, 2010, pp. 89–96; for ' _Irumbu_ _Paravaigal_ ', in _Oppanai Nizhal_ (first edition 2003), Parisal, Chennai, 2010, pp. 56–63; for ' _Thirumputal_ ' (2014). Unpublished in Tamil.
V.I.S. Jayapalan: V.I.S. Jayapalan for ' _Nambikkai_ ', in M.A. Nuhman and A. Jesurasa (eds), _Patinoru Eelattu Kavingargal_ , CreA, Chennai, 1984, p. 186; for ' _Kadarpuram_ ', in M.A. Nuhman and A. Jesurasa, ibid., p. 190; for ' _Ettavathu Pey_ ' (1997), in _Sarinigar_ , No. 135, 20 November–3 December 1997, Colombo; published in _Vetraagi Ninra Veli_ , Vitiyal, Coimbatore, 2001, pp. 41–42.
Vinodhine: Vinodhine for ' _Iravu_ ' (2004), in _Mugamoodi Seibaval_ , Kalachuvadu, Nagerkovil, 2007, p. 28; for ' _Nedum Pagal_ ' (2006), in ibid., p. 39; for ' _Enadhu Paadalgalai Naan_ ', in ibid., p. 76; for ' _Avargalai Konravargal_ ', in ibid., p. 30.
## Footnotes
## Introduction
Note III, in _Dark Times Filled with Light: The Selected Work of Juan Gelman_ , translated from the Spanish by Hardie St Martin, Open Letter, Rochester, New York, 2012, p. 75.
There were almost 10 lakh Tamils in the plantations at the time of signing the agreement. India was supposed to take back 5.25 lakh of people. The remaining 3 lakh people were assured Sri Lankan citizenship. The fate of the remaining 1.5 lakh Tamils in the hills was to be decided at a later date. The 5.25 lakh Tamils were supposed to leave Sri Lankan soil within fifteen years from the date of signing the agreement in batches of 36,000 per year. The Sri Lankan government was supposed to guarantee citizenship to 20,000 Tamils every year.
Some of the early militants were indeed trained by the PLO. The poetry of Palestine resonated with the political mood of the times. See, for instance, the slender but significant translation of Palestinian poetry edited by M.A. Nuhman and R. Murugaiyan, _Palesteena Kavithaigal_ (translated from English), VASA, Readers Association, Kalmunai, 1981.
The Liberation Tigers of Tamil Ealam (LTTE), the People's Liberation Organization of Tamil Ealam (PLOTE), the Tamil Ealam Liberation Organization (TELO), the Ealam Revolutionary Organization of Students (EROS), the Tamil Ealam Liberation Army (TELA) and the Ealam People's Revolutionary Liberation Front (EPRLF) were some of the main organizations that were formed during this period.
See the documentary _Burning Memories_ directed by Sridharan Someedaran, Nihari, Chennai, 2008.
Fact-finding reports for these years could be found in www.uthr.org (University Teachers for Human Rights); also in www.transcurrents.com (run by a Canada-based Sri Lankan Tamil journalist, D.B.S. Jeyaraj, still accessible, though not updated since 2012); or alternatively his current blog, www.dbsjeyaraj.com; for the LTTE-centred sources, please see www.tamilnet.com.
A particular event that precipitated the sudden change in the scenario was the fast-unto-death undertaken by the LTTE militant Dileepan, who demanded the release of Tamil militants from prisons and the end of colonization of the Tamil-speaking areas. The fast that began on 15 September 1987 lasted till 26 September 1987, when Dileepan died without succeeding in getting any of his demands.
The book _The Satanic Force_ , compiled in two volumes by the LTTE in 1990–91, chronicling the atrocities committed by the IPKF in the Tamil areas, was printed in Chennai. When it was about to be published, all the printed copies were seized by the Tamil Nadu police, after the assassination of Rajiv Gandhi in 1991. However, the first volume is available in three parts as an e-book at the website: www.ebook.yarl.com, as _The Satanic Force: Heinous Crimes of Indian Peacekeeping, LTTE Headquarters, Jaffna, 1990_. For human rights violations from all sides, see Rajan Somasundaram et al., _The Broken Palmyra: The Tamil Crisis in Sri Lanka, An Insider Account_ , The Sri Lanka Studies Institute, Jaffna, 1990.
Hardly any account exists of the plight of the Sri Lankan refugees in India. One exception is Tho. Pathinathan's _Porin Marupakkam, Eala Agathiyin Thuyara Varalaru_ , Kalachuvadu, Nagerkovil, 2007. Also, Mullai Jesudasan's _Neelam Aagi Varum Kadal_ , Nidarsanam, Tamil Ealam, 2003.
See the recent report by Meera Srinivasan, 'Elections hold little hope for "plantation Tamils"', in _The Hindu_ , Chennai, 20 September 2014, p. 12.
S. Jebansesan, _The American Mission and Modern Education in Jaffna: The_ _Contribution of Higher Educational Enterprise of the American Missionaries in Nineteenth Century_ , Kumaran Book House, Colombo–Chennai, 2013.
Professor K. Kailasapathi was a literary historian, critic and journalist from Jaffna, and is well known for his Marxist analysis of classical Sangam Tamil literature. (K. Kailasapathi, _Tamil Heroic Poetry_ , Clarendon Press, Oxford, 1968.) He was also the editor of the _Dinakaran_ daily from 1958 to 1962.
K. Sivathambi, professor of literature and a critic, was from Jaffna, and is known for his work, _Drama in Ancient Tamil Society_ (New Century Book House, Chennai, 1981).
A discussion about the role of pioneering Tamil scholars like Xavier Thaninayagam (1913–1980) and Su. Vithiyanandan (1924–1989) in the Sri Lankan Tamil struggle is beyond the scope of this Introduction.
To name a few writers of this movement: Se. Yoganathan (1941–2008), Dominic Jeeva (1927–), Ganesalingan (1927–), Thelivathai Joesph (1934–) and Anthony Jeeva (1944–). The last two are from the upcountry region.
Cheran, Yesurasa et al., _Maranathul Vaazhvom_ , 1996, Vitiyal, Coimbatore. First edition 1986, Jaffna.
For example, see Arular, _Lanka Rani_ , 2008 (1980, first edition), Saanron Pathippagam, London and Chennai.
Govindan, _Puthiyathor Ulagam_ , Theeppori, 1985, place of publication not known.
A joint effort of concerned individuals based in Colombo, and supported by the Tamil Diaspora maintains a website, www.noolaham.net as a digital platform for Sri Lankan Tamil literature.
It is puzzling to note that through all these years of turmoil, with people facing oppression from the Sri Lankan state, their literature never faced active censorship from it, except for curbs on distribution between regions during different periods.
The Research Programme and the Library at the French Institute of Pondicherry has the single largest collection in India of Sri Lankan Tamil journals, magazines, books and documentaries.
K. Daniel, who belonged to a family of _vannar_ (launderers), was a political activist of the Maoist faction of the Communist party, known for his role in the 'Popular Movement for the Eradication of Untouchability'. _Pancamar (1972)_ remains his most important novel, but a series of five others complete a faithful picture of the struggle of the untouchables in Sri Lankan Tamil society. He is often considered the pioneer of Dalit literature in Tamil.
A staged play that had a very important role in the anti-caste struggle is _Kandan Karunai_ , written by N.K. Raghunathan in 1968, and staged by many directors. See N.K. Raghunathan, _Kandan Karunai_ , Desiya Kalai Ilakkiya Peravai, Colombo, 2003.
An important playwright of this period is Kulandai M. Shanmugalingam, whose plays are available in English. See S. Pathmanathan (Sopa) (trans.), _Shanmugalingam Three Plays_ , Kumaran Book House, Colombo and Chennai, 2007.
The most well known of the upcountry writers are Thelivathai Joseph and Anthony Jeeva. The journal _Theerthakarai_ (1980–1982), which had five issues, provided a platform for a new generation of upcountry Tamil writers. See Nandalala (ed.), _Theerthakarai Kathaigal (Ilangai Malaiyaga Sirukathaigal),_ Annam, Sivagangai, 1995. Also the anthology edited by Manimekalai Kamalakanthan, _Malaiyaga Parisuk Kathaigal_ , Kalai Oli Muthaiya Pillai Ninaivu Kuzhu, Colombo, 1994.
See his book _Piramil Dharmod Jeevaramu, Srilankavin Desiya Tharkolai_ , Parivartana Publishers, Chennai, 1984. Not many are aware that he translated books for Tamil militants and even wrote a national anthem for Tamil Ealam. See Piramil, _Piramil Kavithaigal_ , Layam, Sathiyamangalam, 1998, p. 223.
Franz Kafka, 'The Silence of the Sirens', in Nahum N. Glatzer (ed.), Franz Kafka, _The Complete Stories_ , Schocken Books, New York, 1971, pp. 430–432.
Jibanananda Das, _Malloban,_ 1948.
## Enlightenment
#### _Dominic Jeeva_
* A _poun_ is a measure used in weighing gold, equivalent to eight grams.
## To His Holiness Arumuga Navalar*: An Appeal
#### _Mu. Thalaiyasingam_
* Arumuga Navalar (1822–1879) was a very influential Hindu—specifically Saivite—revivalist Tamil scholar, popularly celebrated as the Father of Tamil prose; a reformer in the academic sense with missionary influence but a casteist to the core. The dominant Vellala Saivite society in Jaffna has a tradition of deifying him.
** Navalar vehemently denounced Vallalar as a religious impostor. See, for example, _Poliyarutpaa Maruppu_ in _Piripanta-t-Tirattu,_ pp. 89–121. Vallalar's real name was Chidambaram Ramalingam (1823–1874). He was a Saiva saint and a poet. Within the Saiva sect, he was a radical who dared to address issues of inequality and poverty. He was the founder of the humanist movement Samarasa Sutha Sanmarkka Sangam. His followers believe that he disappeared into the light of the ' _aruljothi_ ', which he worshipped. For the dispute between Vallalar and Navalar, please see P. Saravanan, _Arutpa Marutpa Kandanathirattu_ , Kalachuvadu, Nagerkovil, 2010.
## One Night
#### _Maalika_
* Maalika was reportedly one of the pen names used by the poet Pudhuvai Rathinadurai.
## Hanifa and the Two Bulls
#### _Kumaramurthy_
* A disease that affects cattle, arresting their ability to masticate.
† Mixture of neem seeds and palm flowers.
## A Story Lost in Time, Lasting in Time
#### _Iravi Arunasalam_
* Mid-January to mid-February.
## Appe Ratta!*
#### _V. Gowribalan_
* This is a Sinhalese phrase, which means, 'Our Country'. It is often used as a slogan by all sections of the Sinhalese political parties.
** In Tamil, this tree is called ' _vaagai_ ' and its botanical name is _Albizia lebbeck_ (the siris tree), known for its wide and beautiful canopy.
## Iron Birds
#### _V. Gowribalan_
* _Keri Palla_ is an abusive Sinhala term for Tamils.
## Barrel-toothed Ghost
#### _T. Malar Chelvan_
* _Solaga Kaatru_ in Tamil refers to a strong wind that blows from the south during the months of May, June and July in Sri Lanka.
## A Refugee's Motherland
#### _Ki. Pi. Aravinthan_
* Echoing the famous poem of the Tamil poet Subramania Bharathi (1882–1921).
## Little Brother
#### _S. Chelian_
* High school.
## Yugapuranam: Myth of an Era
#### _Nilanthan_
* 'Before the beginning of the Bharata War, Vyasa went to his mother and said, "Mother, the earth's youth has been exhausted. Now go to the forest to perform penance."'
** Otto von Bismarck, who unified Germany, used to say 'Germans should think with their blood.'
## The Sea and Dreams
#### _Ki. Pi. Aravinthan_
* The place of the final battle between the LTTE and the Sri Lankan Army in May 2009.
† Nandi Kadal, the lagoon in which the LTTE leader V. Prabhakaran's body was found in May 2009.
## Madakkombarai in Jaffna: A Memoir
#### _Malliappu Santhi Thilakar_
(An extract from a memoir about an upcountry plantation worker family, showing the interconnections of this migrant Tamil community of workers, of more recent origins than the Jaffna–Vanni–Batticaloa Tamils, with the rest of the regions.)
Malliappu Santhi (literally, Jasmine Junction) is an important landmark on the Colombo–Kandy highway, before Hatton. Strategically located, it is the only way to enter and exit the upcountry plantations where Tamil workers live. Historically, the junction has been the site of several important labour struggles and agitations.
The Thirteenth Amendment (13A) to the Constitution of Sri Lanka created Provincial Councils in the country. This also made Sinhala and Tamil the official languages of the country, and English the link language.
Both 'Nithi' and 'Poonkuyil' are their noms de guerre, referring to the fact that they died as martyrs in the war.
EPRLF, Ealam People's Revolutionary Liberation Front, was one of the militant organizations with left leanings supported by the Government of India, and suppressed by the LTTE.
She was a well-known trade union activist of the hill-country workers, and a writer. She was married to S.K. Natesa Ayyar (1887–1947), a pioneering trade unionist among the hill-country workers, and a journalist.
A professional doctor and a human rights activist, she was assassinated by the LTTE. She also co-authored the book _The Broken Palmyra_ (The Sri Lankan Studies Institute, Claremont CA, 1990), chronicling the abuses of both the IPKF and the LTTE in Jaffna. There is a documentary film on her— _No More Tears, Sister_ , directed by Helene Klodawsky, 2005.
## Copyright Acknowledgements
* Maalika was reportedly one of the pen names used by the poet Pudhuvai Rathinadurai.
## Note on Authors
**Mahakavi (1927–1971)**
Thu. Uruthiramurthy was from Jaffna and worked as a government official. He is acclaimed for having introduced novel forms and patterns in modern Tamil poetry of Sri Lanka. As a versatile author, he wrote several lyrical plays, poetry and short stories. He also edited a journal of poetry, _Thenmozhi_ , published just for a year in 1955–56.
**Dominic Jeeva (1927–)**
Politically left-leaning, Dominic Jeeva is from Jaffna. He is a Dalit and a hairdresser by profession. His shop was known as a hub for writers and poets. He founded the literary journal _Mallikai_ in the 1960s, which continues to be published to this day. Presently, he lives in Colombo and has published several collections of his short stories, besides his autobiography.
**Mu. Thalaiyasingam (1935–1973)**
Mu. Thalaiyasingam was from Pungudu Island in Jaffna. A school teacher by profession, he was also a reformist active in anti-caste struggles and a philosopher steeped in spiritual and non-violent modes of resistance, inspired by Mahatma Gandhi. To him, literature was integral to the pursuit of a new humanism. He was the first among his generation to conceive of a Tamil homeland in Sri Lanka. He has authored works in philosophy and has written short stories and a novel.
**Neelavanan (1931–1975)**
K. Chinnathurai, a school teacher from Periya Neelavanai in Eastern Sri Lanka, assumed the pen name of Neelavanan after the name of his village. Along with many poems, he wrote lyrical plays. He was the president of the Writers Association based in Kalmunai in Eastern Sri Lanka. He also edited a literary journal, _Paadum Meen_.
**Mu. Ponnampalam (1939–)**
Mu. Ponnampalam, the younger brother of Mu. Thalaiyasingam, is from Pungudu island in Jaffna. He is known for his novel, _Noyil Iruthal_. He lives in Colombo.
**M.A. Nuhman (1944–)**
M.A. Nuhman is from Kalmunaikudi in Eastern Sri Lanka. He pursued research in linguistics and became a professor of Tamil at the University of Peradeniya, Sri Lanka. He is well known as a critic and a translator. His translation of poetry from Palestine (1981) in Tamil and his anthology of Ealam Tamil poetry (1984) are considered landmarks in Sri Lankan Tamil literature.
**A. Jesurasa (1946–)**
A. Jesurasa is from Kurunagar village in Jaffna. He worked as a postmaster. He was one of the editors of _Alai,_ an important modern literary journal, in the late 1970s. He is an avid film enthusiast and critic. He also edited the anthology of Ealam Tamil poetry of 1984 along with M.A. Nuhman. He lives in Jaffna.
**S. Sivasegaram (1942–)**
S. Sivasegaram, from Inuvil village in Jaffna, is an engineer by training. He was professor of mechanical engineering in the University of Peradeniya. A Marxist, he has published many volumes of poetry. He is also a translator.
**V.I.S. Jayapalan (1944–)**
V.I.S. Jayapalan is from Neduntheevu village in Jaffna. A graduate in economics, he is well known as a poet and essayist. He has published many collections of his poetry. Since the 1970s he has been living in Norway. He has also acted in popular Tamil movies.
**Shanmugam Sivalingam (1940–2012)**
A graduate teacher in science, Shanmugam Sivalingam hailed from Pandi Iruppu village in Eastern Sri Lanka. He was not a prolific poet and published just two poetry collections in his lifetime. He also wrote short stories. One of his sons was a militant and was killed in war.
**Piramil (1939–1997)**
Piramil was from Triconamalai, Sri Lanka. He was known by various pen names which he formulated from his practice of numerology. Even after migrating to India in the 1970s, he remained a Sri Lankan Tamil at heart. He firmly stood for a unified Sri Lankan country with a fusion of both Sinhalese and Tamil cultural traditions. He is considered a major poet in contemporary Tamil. He has also written short stories and plays.
**Sivaramani (1968–1991)**
Sivaramani, whose parents were teachers, came from Yaanaippanthi village, Jaffna. She studied English literature, political science and linguistics at the University of Jaffna. Along with her friends, she founded the Women's Study Circle in Jaffna. During her university years, she was active in resisting the fraught political environment of the late 1980s. She committed suicide in 1991. Her collection of poems was published posthumously.
**Su. Vilvarathinam (1950–2006)**
Su. Vilvarathinam, from Pungudi island in Jaffna, was a government official. He regarded Thalaiyasingam as his mentor and was active in anti-caste struggles. A powerful poet, he stood for spiritual values, yet was rebellious. Considered as the lyrical poet par excellence of his generation, he was a very good singer and orator.
**S. Ranjakumar (1959–)**
Somabala Ranjakumar is from Karaveddi, Jaffna. His only collection of short stories, _Mogavasal_, was written and published in 1989. This collection is considered to be highly significant, one that captured the deep anxieties of the Tamil people after the riots of 1983, known as Black July. He worked in a printing press and lived between Jaffna and Colombo. He now lives in Australia.
**Aswagosh (1969–)**
Ramanaiah Kathiravel, from Navindil, Karaveddi, in Jaffna, wrote under the name Puthiya Jeevan till the nineties. Afterwards he assumed the name Aswagosh. He has so far published two poetry collections, in 1997 and 1999. He has also written essays under the name Ram Kathiravel. He lives in Colombo.
**Ilavalai Wijayendran (1961–)**
Wijayendran Thiyagaraja is from Nurelia (Nuwara Eliya), near Kandy, in the hill country. He has been a journalist in Sri Lanka. He moved to Norway, from where he edited a literary journal, _Suvadugal_.
**Pa. Ahilan (1970–)**
Pakkianathan Ahilan, from Jaffna, is a postgraduate in fine arts from MS University, Baroda, India. He teaches art history in the University of Jaffna. He has published two poetry collections, in 2001 and 2011.
**Malaravan (1972–1992)**
Known by his _noms de guerre_ Captain Malaravan and Leo, Kasilingam Vijeethan was from Thirunelveli, Jaffna. He served in the army of the LTTE and was killed in combat in 1992. He was a gifted narrator, sensitive to the human context in the zone of war, despite being a shrewd military analyst. He is primarily known for his war diary, _Por Ula_ (1993). He has also written a novel, _Puyal Paravai_, that was posthumously published in 2003 in Killinocchi.
**Cheran (1958–)**
Son of the poet Mahakavi, Cheran belongs to Alavetty, Jaffna. He teaches in the Department of Sociology and Anthropology at the University of Windsor, Canada. _Maranathul Vaazhvom_, an anthology of Sri Lankan Tamil poetry that he coedited in 1985, is considered to be a significant collection.
**Maalika (1948–?)**
Maalika was reportedly the pen name of Puthuvai Rathinadurai, from Puthur village in Jaffna. A dynamic and popular lyrical poet, he headed the Arts and Culture Wing of the LTTE; his poems and songs were the mainstay of their propaganda. He surrendered to the Sri Lankan Army in May 2009, after the war. Nothing has been heard of him since. The Sri Lankan authorities refuse to entertain any queries about him.
**Majeed (1969–)**
Adam Kandu Abdul Majeed is from Akkaraippatru, Ambarai, in Eastern Sri Lanka. He has published two poetry collections. He lives in Akkaraippatru and works as an assistant librarian.
**Muralisvaran (1976–)**
Dr Rarasarathnam Muralisvaran, from Nelliyadi village in Jaffna, studied medicine in the Jaffna Medical College. He practises in Batticaloa. His collection of poems will be published towards the end of this year.
**Bose Nilhale (1975–2007)**
Chandrabose Sudhakar was from Palai near Killinocchi. He worked as a journalist. Known to be self-righteous to the point of being uncompromising, he was killed by unknown assailants in front of his family, in Vavuniya. He edited and published a literary journal, _Nilam_. His poems and writings will be published as a book for the first time by the end of 2014.
**Rashmy (1974–)**
A painter and book designer, Ahamed Rashmy Mohamed is from Akkaraippatru in Eastern Sri Lanka. He now lives in the UK, where he works as a journalist. He has published four poetry collections.
**Selvam Arulanantham (1953–)**
Selvam Arulanantham is from Sillalai, Jaffna. He edits a literary journal, _Kaalam_, published from Toronto, where he now lives. He is pivotal to the circulation of Tamil literature in the Tamil Diaspora of Canada.
**Aruntati (1957–)**
Arulananda Raja is from Naavanthurai, Jaffna. He moved to Paris in 1984. In 1996, he made a Tamil feature film, _Mugam_. He has published two poetry collections and is also a playwright and director.
**Nilanthan (1965–)**
Nilanthan is from Jaffna. Due to the war, he moved to Vanni in 1995. He lived in the war zone till 2009. Now he lives in Jaffna, where he works as a private English tutor. He is a painter, who has also written plays and political essays.
**Kumarmurthy (1956–2001)**
Kumarasamy Vinayagamurthy was from Delft (Neduntheevu), Jaffna. He grew up in Thambanai in the Vanni region. For a while, he worked on a ship. He was politically active for some time with the People's Liberation Organization of Tamil Ealam (PLOTE). He moved to Canada in 1986 and was a human rights activist. He published two short story collections.
**Iravi Arunasalam (1960–)**
A graduate in Tamil with a diploma in education, A. Ravi is from Alavetty, Jaffna. He worked as a schoolteacher in Sri Lanka before he moved to Europe, where he works as a journalist. He now lives in London. He has published two memoirs and a collection of his short stories.
**Karunakaran (1963–)**
Karunakaran, also known as Vasantharajan, from Iyackachi, Northern Sri Lanka, shifted to Vanni in 1995 due to the compulsions of the war. After the war, he moved to Jaffna, where he lives now. He was the editor of an important literary journal, _Velicham_, published from Vanni during the war years. He has published four collections of poetry and one of short stories.
**V. Gowribalan (1970–)**
V. Gowribalan is from Uppuveli, Triconamalai. Trained as a draughtsman, he shifted to Jaffna in 1989. He now lives in Batticaloa. He is a management assistant in the government. His short story collection _Oppanai Nizhal_ (2003) is a stark portrayal of the grim realities of the marginalized during the war years.
**Ilaiya Abdullah (1968–)**
M.N.M. Anas is from Mullaittivu, Northern Sri Lanka. He works as a journalist and has been writing since 1985. He has published two poetry collections, one short story collection and another one of essays. He works for a Tamil TV channel in London.
**S. Vinodhine (1969–)**
Vinodhine Sachidanandan, from Thellippalai, Jaffna, started writing in the 1980s, both in English and Tamil. She has published one poetry collection, and lives in the USA.
**Faheema Jahan (1973–)**
A mathematics teacher, Faheema Jahan is from Melsiripuram in Kurunagal district, North-western Sri Lanka. She has published three poetry collections.
**S. Chelian (1960–)**
S. Sivakumaran, from Urumpirai, Jaffna, became involved in the Tamil liberation struggle at an early age. He left Sri Lanka in 1986 as a refugee and has been living in Canada ever since. He has written short stories and plays, and has five collections of poems.
**T. Malar Chelvan (1968–)**
T. Malar Chelvan, who works in the department of culture in Batticaloa in Eastern Sri Lanka, comes from Aaraiyampathi in the same area. He has published a collection each of poetry and short stories. Since 2003, he has edited a literary journal, _Maruka_.
**Deebachelvan (1983–)**
Balendran Pradipan is from Rathinapuram, Killinocchi district, northern Sri Lanka. He is a postgraduate in journalism and media, and works as a journalist and a photographer. He has published five collections of his poetry, three of his essays and a memoir.
**Tha. Agilan (1983–)**
Agilan Thadchanamoorthy is a journalist and photographer from Killinocchi, Northern Sri Lanka. He has published a collection of his poems and a memoir. He lives in Canada where he runs a publishing house, Vadali, that specializes in bringing out rare and out-of-print works from Sri Lankan Tamil literature.
**Ki. Pi. Aravinthan (1953–)**
Christopher Francis, belonging to Jaffna, was involved in the liberation struggle during its early years. He moved to Paris in 1991, from where he edited a literary journal, _Mounam_, for many years. He has published three poetry collections. Recently, his poetry has been translated into French.
**Na. Sathyabalan (1956–)**
Nataraja Sathyabalan is from Nallur, Jaffna. He is an English teacher. He has published a poetry collection.
**Malliyappu Santhi Thilagar (1973–)**
Mylvaganam Thilakarajah, a Sri Lankan-Indian-origin upcountry Tamil, comes from North Medacombara (Tea) Estate, Watagoda, Sri Lanka. He studied management in the University of Colombo and works as a management consultant in a private firm. He has published a collection of his poems.
## Further Reading
The selection below, listed alphabetically by author, is intended to orient the reader to the particular contexts in which Sri Lankan Tamil literature emerged. We have confined ourselves to works available in English related to the writings in this anthology and their period. We have also given a list of translations of Sri Lankan Tamil literature, which are already available in print. The sources available in Tamil, including those available online, are too numerous to be listed here.
### The Background
Balasingham, Adele. _The Will to Freedom: An Inside View of Tamil Resistance_. Mitcham: Fairmax Publishing, 2011.
Balasingham, Anton. _War and Peace: Armed Struggle and Peace Efforts of Liberation Tigers_. Mitcham: Fairmax Publishing, 2004.
Bass, Daniel. _Everyday Ethnicity in Sri Lanka: Up-country Tamil Identity Politics_. Abingdon, Oxon: Routledge, 2013.
Daniel, E. Valentine. _Chapters in an Anthropography of Violence: Sri Lankans, Sinhalas and Tamils_. New Delhi: Oxford University Press, 1997.
David, S.A. _Tamil Ealam Freedom Struggle_. Chennai: World Tamil Reader's Trust, 2004.
de Soyza, Niromi. _Tamil Tigress: My Story as a Child Soldier in Sri Lanka's Bloody Civil War_. Sydney: Allen & Unwin, 2011; Pune: Mehta Publishing House, 2012.
Gunawardana, R.A.L.H. 'The People of the Lion: Sinhala Identity and Ideology in History and Historiography'. In _Sri Lanka and the Roots of Conflict_ , edited by Jonathan Spencer, pp. 70–78. London: Routledge, 1990.
———. _Historiography in a Time of Ethnic Conflict—Construction of the Past in Contemporary Sri Lanka_. Colombo: Social Scientist Association, 1995.
Harrison, Frances. _Still Counting the Dead: Survivors of Sri Lanka's Hidden War_. London: Portobello Books, 2012.
Indrapala, K. _The Evolution of an Ethnic Identity: The Tamils in Sri Lanka, c. 300 BCE to c. 1200 CE._ Sydney: MV Publications, South Asian Study Centre. (Indian edition, Colombo and Chennai: Kumaran Book House, 2006.)
Kanapathipillai, Valli. _Citizenship and Statelessness in Sri Lanka: The Case of the Tamil Estate Workers_. Anthem South Asian Studies. London, New Delhi: Anthem Press, 2012. (First edition, UK, US, 2009.)
Malathy, N. _A Fleeting Moment in My Country: The Last Years of the LTTE De-Facto State_. Atlanta: Clarity Press, 2012; New Delhi: Aakar Books, 2012.
Manivannan, Ramu. _Sri Lanka: Hiding the Elephant—Documenting Genocide, War Crimes and Crimes Against Humanity_. Chennai: Department of Politics and Public Administration, University of Madras, 2014.
McGilvray, Dennis B. _Symbolic Heat: Gender, Health and Worship among the Tamils of South India and Sri Lanka_. Boulder: Mapin Publications in association with University of Colorado Museum, 1998.
———. _Crucible of Conflict: Tamil and Muslim Society on the East Coast of Sri Lanka_. Durham: Duke University Press, 2008.
McGilvray, Dennis B. and Mirak Raheem. _Muslim Perspectives on the Sri Lankan Conflict_. Washington: East-West Center, 2007.
McGowan, William. _Only Man Is Vile: The Tragedy of Sri Lanka_. New York: Farrar, Straus and Giroux, 1992.
Mohan, Rohini. _The Seasons of Trouble: Life Amid the Ruins of Sri Lanka's War_. London: Verso Books, 2014.
Moldrich, Donovan. _Bitter Berry Bondage: The Nineteenth Century Coffee Workers of Sri Lanka_. Kandy: Coordinating Secretariat for Plantation Areas, 1989.
Nadesan, S. _A History of the Up-Country Tamil People in Sri Lanka_. Hatton: Nandalala Publication, 1993.
Nuhman, M.A. _Sri Lankan Muslims: Ethnic Identity within Cultural Diversity_. Colombo: International Centre for Ethnic Studies, 2007.
Obeyesekere, Gananath. _The Cult of the Goddess Pattini_. Chicago: University of Chicago Press, 1984.
Pfaffenberger, Bryan. _Caste in Tamil Culture: The Religious Foundations of Sudra Domination in Tamil Sri Lanka_. (Foreign and Comparative Studies, South Asian Series No. 7.) New York: Syracuse University, 1982.
Hoole, Rajan, Daya Somasundaram, K. Sritharan and Rajani Thiranagama. _The Broken Palmyra: The Tamil Crisis in Sri Lanka, An Inside Account_. Jaffna: The Sri Lanka Studies Institute, 1990. (Second edition, 1992.)
Reeves, Peter, ed. _The Encyclopaedia of the Sri Lankan Diaspora_. Singapore: Editions Didier Millet, 2013.
Sivathamby, Karthigesu. _Sri Lankan Tamil Society and Politics_. Chennai: New Century Book House, 1995.
———. _Being a Tamil and Sri Lankan_. Colombo: Aivakam, 2006.
Somasundaram, Daya. _Scarred Communities: Psychological Impacts of Man-made and Natural Disasters on Sri Lankan Society_. New Delhi: Sage, 2014.
Subramanian, Samanth. _This Divided Island: Stories from the Sri Lankan War_. Gurgaon: Hamish Hamilton, Penguin Books, 2014.
Tambiah, Stanley. _Sri Lanka: Ethnic Fratricide and the Dismantling of Democracy_. Chicago: University of Chicago Press, 1986.
———. _Buddhism Betrayed? Religion, Politics, and Violence in Sri Lanka._ (A Monograph of the World Institute for Development Economics Research.) Chicago: University of Chicago Press, 1992.
Weiss, Gordon. _The Cage: The Fight for Sri Lanka and the Last Days of the Tamil Tigers_. London: The Bodley Head, 2011.
Whitaker, Mark P. _Amiable Incoherence: Manipulating Histories and Modernities in a Batticaloa Hindu Temple_. (Sri Lankan Studies Series.) Amsterdam: VU University Press, 1999.
———. _Learning Politics from Sivaram: The Life and Death of a Revolutionary Tamil Journalist in Sri Lanka_. London: Pluto Press, 2007.
### Sri Lankan Tamil Literature
Cheran. _A Second Sunrise_. Edited and translated by Lakshmi Holmstrom and Sascha Ebeling. New Delhi: Navayana, 2012.
———. _In a Time of Burning_. Translated by Lakshmi Holmstrom. Lancs: Arc Publications, 2013.
Holmstrom, Lakshmi, Subashree Krishnaswamy, and K. Srilata, eds. _The Rapids of a Great River: The Penguin Book of Tamil Poetry_. New Delhi: Penguin Viking, 2009.
Kanganayakam, Chelva, ed. _Lutesong and Lament: Tamil Writing from Sri Lanka_. Toronto: TSAR, 2001.
———. _Wilting Laughter: Three Tamil Poets—Cheran, V.I.S. Jayapalan, Puthuvai Ratnathurai_. Translated by Chelva Kanganayakam. Toronto: TSAR, 2009.
———, ed. & tr. _You Cannot Turn Away: Poems in Tamil by Cheran_. Toronto: TSAR, 2011.
———. _In Our Translated World: Contemporary Global Tamil Poetry_. Toronto: TSAR, 2013.
Malaravan. _War Journey: Diary of a Tamil Tiger_. Translated by N. Malathy. New Delhi: Penguin Books, 2013.
Muttulingam, A. _Inauspicious Times_. Translated by Padma Narayanan. Chennai: Indian Writing, 2008.
Neminathan, M., ed. _Tamil Ealam Literature: An Anthology_. London: Tamil Information Centre, 1996. (With an Introduction by Velupillai Prabhakaran.)
Ravikumar, ed. _Waking Is Another Dream: Poems on the Genocide in Eelam_. New Delhi: Navayana, 2010.
Selvadurai, Shyam, ed. _Many Roads through Paradise: An Anthology of Sri Lankan Literature_. New Delhi: Penguin Books, 2014.
Sivasegaram, S. _About Another Matter: Poems in Translation_. Colombo: Dhesiya Kalai Ilakkiya Peravai, 2004.
Shanaathanan, T. _The Incomplete Thompu_. Raking Leaves, 2011. (This is an interesting project on art that engages with the destruction of the living environment in Jaffna, through maps, architectural sketches and paintings.)
Shanmugalingam. _Three Plays_. Translated by S. Pathmanathan. Colombo and Chennai: Kumaran Book House, 2007.
Shobasakthi. _Gorilla_. Translated by Anushiya Sivanarayanan. New Delhi: Random House, 2008.
———. _Traitor_. Translated by Anushiya Ramaswamy. New Delhi: Penguin Viking, 2010.
Subramanian, K.S., ed. & tr. _Tamil Poetry Today_. Chennai: International Institute of Tamil Studies, 2007.
Veluppillai, C. V. _In Ceylon's Tea Garden_. Talangama: Harrison Peiris for Ceylon Verse, 1956. (Second edition, Watagoda: Bakya Pathippagam, 2007.)
Wijesinha, Rajiva, ed. _Bridging Connections: An Anthology of Sri Lankan Short Stories_. New Delhi: National Book Trust, 2007.
Wijesinha, Rajiva, ed. _Mirrored Images: An Anthology of Sri Lankan Tamil Poetry_. New Delhi: National Book Trust, 2013.
**Note:** We have used the popular spellings for Tamil terms throughout this volume so as to facilitate the pronunciation and to make it easier for readers to search for Tamil resources on the Internet.
## The French Institute of Pondicherry
The French Institute of Pondicherry (IFP), UMIFRE 21 CNRS-MAEE, is a financially autonomous institution under the joint authority of the French Ministry of Foreign and European Affairs (MAEE) and the French National Centre for Scientific Research (CNRS). It is a part of the network of twenty-seven research centres under this Ministry. It also forms part of the research unit 3330 'Savoirs et Mondes Indiens' of the CNRS, along with the Centre de Sciences Humaines (CSH) in New Delhi. It fulfils its missions of research, expertise and training in Human and Social Sciences and Ecology in South and Southeast Asia. It works particularly in the fields of Indian cultural knowledge and heritage (Sanskrit language and literature, history of religions, Tamil studies, etc.), contemporary social dynamics and the natural ecosystems of South India.
French Institute of Pondicherry, 11, St Louis Street, PB 33, Pondicherry 605001, India. Tel: (91) (413) 2231609. Email: ifpcom@ifpindia.org
Website: <http://www.ifpindia.org>
## Acknowledgements
This anthology was made possible by our friends, the Sri Lankan Tamil writers, who generously allowed us to translate and publish their works. In particular, we would like to thank V. Gowribalan, Pathmanabhan Iyer, Shobhasakthi, Tha. Agilan, Pa. Ahilan, Karunakaran, Nilanthan and Paramasothi Senavarayar. We thank Professor George Hart, Kiran Keshavamurthy and N. Malathy for allowing us to use their translations in this book.
At the French Institute of Pondicherry, we thank its Director, Pierre Grard, its Secretary General, Eve Herrman, and our colleagues and friends: G. Muthushankar, Anurupa Naik, R. Narendiran, K. Ramanujam, G. Saravanan, V. Prakash, P. Balamurugan, S. Prabhavathi, A. Pankajavalli and Vanitha Bruno for all their support and encouragement. This project would not have been possible without the full-fledged support of the French Institute of Pondicherry and its resources.
Professor Francois Gros, Professor Y. Subbarayalu and Mr Eric Whittington remained committed to the fulfilment of this project right from its beginning, and we hope it meets their expectations.
Kamini Mahadevan, our editor at Penguin, made this project possible. We were fortunate to work with her closely, making it for us an enjoyable learning experience.
One of us, Rebecca Whittington, would like to thank her friend Shamila Sivakumaran and Professors George and Kausalya Hart and Chana Kronfeld at the University of California, Berkeley. She would also like to thank her husband Abhijeet, who has learnt one word in Tamil very well— _anputan._ And last but not least, her daughter Kuheli, or Monima, known in Tamil as Kuyili, or Muniyamma, who is known to thumb through the Tamil dictionary with great concentration, looking for the picture of the dog.
We thank our friend Maragadamme Sitharamane for being there in those times of translation.
We thank Anupama Krishnamurthy, Varalakshmi Krishnamurthy, Manimekalai Dhandapani, Balasubramanian and Egile Tiroutchelvy for sustaining us and making us go on.
Two of us, Kannan M. and Senthil Babu, would like to thank our nieces, Ria, Sahana, Vedha, Nivanthi and Niyantha just for being with us and 'performing the delightful miracle of remaining children while seeing the world through our eyes'.
## THE BEGINNING
Let the conversation begin...
Follow the Penguin Twitter.com@PenguinIndia
Keep up-to-date with all our stories Youtube.com/PenguinIndia
Like 'Penguin Books' on Facebook.com/PenguinIndia
Find out more about the author and discover more stories like this at Penguinbooksindia.com
##### PENGUIN BOOKS
UK | Canada | Ireland | Australia
New Zealand | India | South Africa
Penguin Books is part of the Penguin Random House group of companies whose addresses can be found at global.penguinrandomhouse.com.
This collection published 2014
Copyright © Penguin Books India and French Institute of Pondicherry, 2014
The moral right of the author has been asserted
Jacket images © Abhishek Panda
ISBN: 978-0-143-42304-1
This digital edition published in 2014.
e-ISBN: 978-9-351-18877-3
This book is sold subject to the condition that it shall not, by way of trade or otherwise, be lent, resold, hired out, or otherwise circulated without the publisher's prior consent in any form of binding or cover other than that in which it is published and without a similar condition including this condition being imposed on the subsequent purchaser.
# Contents
1. Cover
2. Title Page
3. Contents
4. About the Author
5. Introduction
6. 1. The Temple Car and the Moon: Mahakavi
7. 2. Enlightenment: Dominic Jeeva
8. 3. To His Holiness Arumuga Navalar: An Appeal: Mu. Thalaiyasingam
9. 4. Oh Driver: Neelavanan
10. 5. Walk: Mu. Ponnampalam
11. 6. Yesterday Evening, This Morning: M.A. Nuhman
12. 7. Your Plight Also: A. Jesurasa
13. 8. Journey: S. Sivasegaram
14. 9. Hope: V.I.S. Jayapalan
15. 10. Seashore: V.I.S. Jayapalan
16. 11. Unsung Songs: Shanmugam Sivalingam
17. 12. Lankapuri Raja: Piramil
18. 13. In the Evenings: Sivaramani
19. 14. I Don't Have the Words: Sivaramani
20. 15. My Lineage and I: Sivaramani
21. 16. Place: Jaffna University Canteen: Sivaramani
22. 17. Summer Scorches Day after Day . . .: Su. Vilvarathinam
23. 18. Time Will Write a Song for You: S. Ranjakumar
24. 19. Woman Humiliated: Sivaramani
25. 20. Darkness: Aswagosh
26. 21. To Those Who Come with Sticks: Ilavalai Wijayendran
27. 22. Days in the Trenches: Pa. Ahilan
28. 23. War Journey: Diary of a Tamil Tiger: Malaravan
29. 24. A Space That No Longer Is: Su. Vilvarathinam
30. 25. Heroes Rest Here: Cheran
31. 26. One Night: Maalika
32. 27. I Am a Snail . . .: Shanmugam Sivalingam
33. 28. The Eighth Ghost: V.I.S. Jayapalan
34. 29. On the Surface of the Mind: Majeed
35. 30. The Sorrow within Me Has the Surface Area of a Straight Line: Majeed
36. 31. Lost Life: R. Muralisvaran
37. 32. Veena: Bose Nilhale
38. 33. On the Present: Bose Nilhale
39. 34. Pyre: Rashmy
40. 35. The Song of an International Refugee: Shanmugam Sivalingam
41. 36. The Echo of Moonlight: Su. Vilvarathinam
42. 37. Anxious Sermon: Selvam Arulanandam
43. 38. 'Questions': Aruntati
44. 39. Earthen Towns: Nilanthan
45. 40. Hanifa and the Two Bulls: Kumarmurthy
46. 41. A Story Lost in Time, Lasting in Time: Iravi Arunasalam
47. 42. Questions for the One Who Is Coming: Karunakaran
48. 43. Appe Ratta: V. Gowribalan
49. 44. Iron Birds: V. Gowribalan
50. 45. Encounter: Ilaiya Abdullah
51. 46. Night: S. Vinodhine
52. 47. Midday: S. Vinodhine
53. 48. My Songs: S. Vinodhine
54. 49. After Catastrophe: Faheema Jahan
55. 50. Merciless Ones: S. Chelian
56. 51. Those Who Killed Them: S. Vinodhine
57. 52. Take the Child from Me: Faheema Jahan
58. 53. Barrel-toothed Ghost: T. Malar Chelvan
59. 54. Burning Nest: Karunakaran
60. 55. Black Dog: Karunakaran
61. 56. The Warrior Who Could Not Part from His Shadow: Karunakaran
62. 57. Let's Move on Again, to Yet Another Place: Deebachelvan
63. 58. A Boy's Father Dies: Tha. Agilan
64. 59. A Refugee's Motherland: Ki. Pi. Aravinthan
65. 60. Immense Land: An Introduction to Its Soil Strata: Pa. Ahilan
66. 61. Story of an Unwritten Letter: Na. Sathyabalan
67. 62. Little Brother: S. Chelian
68. 63. Restless Sea . . . Sleepless Land . . . Endless Dream: Karunakaran
69. 64. Yugapuranam: Myth of an Era: Nilanthan
70. 65. Keep All That to Yourself: Karunakaran
71. 66. The Sea and Dreams: Ki. Pi. Aravinthan
72. 67. Madakkombarai in Jaffna: A Memoir: Malliappu Santhi Thilakar
73. 68. Release: V. Gowribalan
74. Copyright Acknowledgements
75. Footnotes
1. Introduction
2. 2. Enlightenment: Dominic Jeeva
3. 3. To His Holiness Arumuga Navalar: An Appeal: Mu. Thalaiyasingam
4. 26. One Night: Maalika
5. 40. Hanifa and the Two Bulls: Kumarmurthy
6. 41. A Story Lost in Time, Lasting in Time: Iravi Arunasalam
7. 43. Appe Ratta: V. Gowribalan
8. 44. Iron Birds: V. Gowribalan
9. 53. Barrel-toothed Ghost: T. Malar Chelvan
10. 59. A Refugee's Motherland: Ki. Pi. Aravinthan
11. 62. Little Brother: S. Chelian
12. 64. Yugapuranam: Myth of an Era: Nilanthan
13. 66. The Sea and Dreams: Ki. Pi. Aravinthan
14. 67. Madakkombarai in Jaffna: A Memoir: Malliappu Santhi Thilakar
15. Copyright Acknowledgements
76. Note on Authors
77. Further Reading
78. The French Institute of Pondicherry
79. Acknowledgements
80. Follow Penguin
81. Copyright
Michael J. Kramer
The Berkeley Folk Music Festival Project
The Berkeley Folk Music Festival ran annually from 1958 to 1970 on the campus of the University of California, Berkeley. Its archive, containing over 35,000 artifacts (papers, business records, recordings, posters, ephemera, and over 10,000 photographs), has sat virtually unused in Northwestern University's Special Collections Library since the materials were purchased in 1974 from festival director Barry Olivier. The Berkeley event offers access to the understudied history of the folk revival on the West Coast. It preceded the more famous Newport Folk Festival in Rhode Island, and in fact provided a direct model for it by featuring many prominent musicians of the post-World War II folk revival in a mix of concerts, workshops, and collective music-making events. This project preserves the collection digitally and provides access to its rich holdings for research as well as in a curated online exhibition, a gallery exhibition, events and programming, and a print catalogue. The Berkeley project also connects the archival richness of the collection to other archival holdings to demonstrate how the folk music revival on the West Coast presents a different story than the one more typically told based on a focus on the East Coast and South.
The Berkeley Folk Music Festival, which took place annually on the flagship campus of the University of California between 1958 and 1970, was directed by Barry Olivier, a Berkeley-raised guitarist and folk music advocate. The Festival was one of the preeminent folk festivals on the West Coast, predating the more famous Newport Folk Festival on the East Coast and partly inspiring its workshop model as well as its mix of older and newer, more vernacular and more commercial, performers. Among others who appeared at the Berkeley Folk Music Festival were Joan Baez, Pete Seeger, Doc Watson, Alan Lomax, Howlin' Wolf, Phil Ochs, Alice Stuart, Jean Ritchie, Jean Redpath, Jesse Fuller, Big Mama Thornton, Mance Lipscomb, Mississippi John Hurt, Slim Critchlow, Archie Green, Alan Dundes, Bess Lomax Hawes, Ewan MacColl, John Fahey, Robbie Basho, the Jefferson Airplane, the Youngbloods, and a post-Janis Joplin Big Brother and the Holding Company. The festival's willingness to embrace electric rock music and other forms of what would become known as roots or Americana music makes it markedly different from Newport, where the infamous struggle over Bob Dylan going electric became a major story in folk revival history. The relationship of leftwing politics to the bohemianism of the folk revival was also slightly different, as was the larger milieu in which the event took place: a Northern California context of rapid postwar suburbanization, the expansion of mass higher education, and other social transformations driven by a technology-obsessed Cold War military-industrial economy. Berkeley exemplifies a diverse and adventurous musical and cultural milieu that arose on the West Coast—in the Bay Area in particular. Studying the festival more closely holds the promise of revising our understanding of the national folk revival in the decades after World War II.
It reframes the relationship between cultural activity in Berkeley and New Left political events such as the Free Speech Movement and People's Park. And it offers a fresh lens on how musical heritage has related to commerce and consumerism, state-funded cultural activity, technology and change, the existential search for authenticity, and the pursuit of a shared common life in postwar America and the world.
Mississippi John Hurt performing at the Greek Amphitheater, Berkeley Folk Music Festival, 1964.
The Digital Berkeley Folk Music Festival Project digitizes and preserves the archive in a way that makes its rich holdings available for a wider audience. As a mode of digital public history, it also becomes a prototype for what a digital archive can be and do—or more accurately, what digital surrogates of the original materials in the archive can enable—through online exploration and expansion. In some sense, the project itself seeks to digitally "revive" this important but understudied event from the twentieth century folk revival. This will allow scholars, educators, students, musicians, artists, and aficionados to better appreciate, analyze, and understand both the Berkeley Folk Music Festival's historical significance and how the history of the folk revival on the West Coast continues to matter in our own contemporary times. Within the Berkeley Festival's "digital river of song," new ways of critically connecting the present to the past become possible.
The project consists of six related efforts:
(1) Preservation and Documentation
(2) Print-Based Interpretation
(3) Digital Interpretation, Publication, and Interaction
(4) Face-to-Face Presentation
(5) Expansion of the Collection
(6) Outreach and Civic Engagement
Detail of the program for the 1967 Berkeley Folk Music Festival.
The Northwestern University Library and Middlebury College digital historian Michael J. Kramer and his students are working on the full preservation and documentation of the collection in both analog and digital form. Through an NEH grant, preservation efforts also examine how to model the development of a digital collection: how might we think more critically about the ways to organize and code an archive's materials so that the individual objects are available yet also coherently contextualized within the original archive? How might we crowdsource metadata and involve students in the creation of the digital archive? What are the best platforms and methods for digital preservation as it relates to presentation of the materials? How do we grapple with intellectual property rights and issues, a topic with a long and vexed (but useful) history within the folk music revival itself? Online, can we more seamlessly and usefully bring together materials concerning the Berkeley Folk Music Festival that are in other collections? Can we also connect the archive to related collections about arts festivals, the folk revival, folk music, cultural heritage, and the historical period of the postwar era in general? Can we model ways of creating a coherent archive that also participates in the linked open data movement?
Our belief is that digital, print, and face-to-face modes of interpretation and publication complement each other. Because the Berkeley Folk Music Festival contains such rich photographic documentation, one print-based project arising from the research is a catalogue (linked to a traveling exhibition, see below) that tells the history of the festival. The catalogue will feature contributions from festival participants and performers as well as scholars.
Michael Kramer is also at work on a monograph, titled This Machine Kills Fascists: Technology and Tradition in the US Folk Music Movement. The book probes the relationship between cultural heritage, memory, machines, and modernity in the folk revival movement. It focuses on figures such as Woody Guthrie, who famously wrote "This machine kills fascists" on his acoustic guitar during World War II; folklorist Alan Lomax, who computationally studied global folk song style, beginning in the 1960s; ethnomusicologist Charles Seeger, who sought to create a "melograph," an electronic notation machine for non-Western music; early ballad collectors who debated what amounted to information systems for organizing and communicating their findings; Zora Neale Hurston, who not only created a new form of audio field recording, but also pioneered the use of film for ethnographic research; Harry Smith, who wanted his influential Folkways Records Anthology of American Folk Music to function like a computer making surprising musicological connections across songs; the technological imagination of Afrofuturist musicians such as Sun Ra and the members of Parliament-Funkadelic, who are not typically included in the folk revival, but have connections to its mediations of memory and heritage; and finally, more recent nostalgic recoveries of antiquated technologies such as recording to 78 r.p.m. as well as the vibrant life of folk revival activities over online communication systems such as YouTube.
Barry Olivier, festival director, at the Faculty Glade, Berkeley Folk Music Festival, University of California, 1964.
What would it mean to extend the Berkeley Folk Music Festival into the digital realm in creative and engaging ways? How do we carry forward the transmission, adaptation, and inspiration of cultural heritage in ways that are productively pleasurable and critically aware at the same time? How might we digitally expand what Pete Seeger and others have famously called the "folk process," the passing along and reworking of songs to both remember the past and speak to the present and future? Might we even be able to draw upon the kind of learning that took place at the folk music workshops pioneered at the Berkeley Festival?
In response to these questions, we are developing a "workshop" software platform using the Berkeley archive. It will allow scholars, educators, musicians, aficionados, and the general public to explore the Berkeley Folk Music Festival collection and be a space in which participants not only can access objects, but also can "play" with them digitally. Consisting of a layered map of the Berkeley Festival year by year along with other modes of virtually "attending" the festival, the website seeks to be more than a flat presentation of the past. Rather, it will harness the virtuality of the digital to allow users to "touch" the materials in the Berkeley collection (and potentially other linked digital repository collections as well) in ways they could never do so with the actual originals.
This kind of digital archive has the potential to model how to bring archival objects into connection with their analysis in more dynamic and connected ways. Preservation feeds into analysis and interpretation through the ability to pivot in one space between, on the one hand, archival objects and, on the other, tools of multimodal experimentation, including visualization, sonification, algorithmic analysis, annotation, assemblage, horizontal and vertical narrative techniques, topic modeling, intensified collaboration among multiple users, and multimedia presentation capabilities. Users can rearrange items, collage them, compare multiple objects, layer annotations upon materials, view annotations by others, add comments, and contribute their perspectives to the archive. Their contributions themselves then become new objects in the collection.
The traditional archive, static and removed from circulation, still serves the purpose of preservation here, but it also becomes interactive, a space where memory and the past come alive in the present, and where the present itself flows or feeds back into the past in new ways. Previously concealed in the library stacks, the original repository now has a chance to come to life in a new form through digital technology. It can carry onward the best aspects of the Berkeley Folk Music Festival while also allowing for a space of critique, expansion, and even transformation.
In this imagining of what the digital archive can be and do, it potentially enables the valuable project of actively and critically passing along cultural heritage, enhancing experiences of art and music, and empowering scholars, educators, students, musicians, artists, and the general public to engage in deep, direct, meaningful historical learning.
The original Berkeley Folk Music Festival privileged face-to-face interactions. To extend this legacy and continue to investigate it, we will curate a traveling exhibition about the Berkeley Folk Music Festival. This exhibition will include poster art, photographs, sound installations, film footage, and a program of concerts, talks, roundtables, and (of course) hootenannies.
Berkeley Folk Music Festival program covers.
We are undertaking an extensive oral history project with performers and attendees at the festival to expand the collection. In the past few years, director Barry Olivier, performer Alice Stuart, and Berkeley native, banjoist, and folklorist Neil Rosenberg (along with his partner Terri Rosenberg) have visited Northwestern for public conversations and concerts, met with students in Michael Kramer's digital humanities research seminar when he was a visiting professor at Northwestern, and participated in oral history interviews in the archives. These oral histories that emerge from engagements with archival materials offer a particularly useful new avenue for understanding the festival's significance. For example, we now have a performance by Alice Stuart of "Rather Be the Devil" from the 1968 Berkeley Folk Music Festival and from her appearance at Northwestern in 2012.
Alice Stuart at the Berkeley Folk Music Festival, 1964 and 1968; performing at Northwestern University, 2012.
Working with Middlebury College's Center for Civic Engagement, Digital Liberal Arts Initiative, the Blue Bear School of Music in San Francisco, and the San Francisco Field Study Group of Northwestern University's Center for Civic Engagement, we are developing a music and technology program for students. Students at Middlebury and Northwestern will be able to travel to the Bay Area to study the history of music and community in the region. We are developing connections with educators at the primary and secondary levels as well to link contemporary music technologies of production (such as Apple's GarageBand) to folk traditions and practices. These programs offer students the opportunity to acquire digital skills, literacies, and competencies alongside musical and historical knowledge. They also build community and civic life through historically conscious artistic and technological engagements.
Associate Director of the Digital Liberal Arts Initiative
michaelk@middlebury.edu
office: (802) 443-5617 mobile: (847) 942-5182
Bio & More Info
Writing & Projects
"This Machine Kills Fascists": Technology and Tradition in the US Folk Music Movement (in progress)
The Republic of Rock: Music and Citizenship in the Sixties Counterculture (Oxford University Press, 2013; paperback, 2017)
"A Foreign Sound To Your Ear": Image Sonification for Historical Interpretation
Revising Humbead's Revised Map of the World: Digitally Remapping the Sixties Folk Music Revival
Atlantic World Forum
Culture Rover: Promiscuous Cultural Criticism
Unfamiliar Quotations
Issues in Digital History
The Republic of Rock Blog
Philosophy, Courses, Syllabi
Editing & Writing – Dramaturgy – Digital & Public Humanities | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 6,567 |
Q: Adding an int to a uint64_t in C++ I have a question regarding conversion of integers:
#include <iostream>
#include <cstdint>
using namespace std;
int main()
{
    int N, R, W, H, D;
    uint64_t sum = 0;
    uint64_t sum_2 = 0;
    cin >> W >> H >> D;
    sum += static_cast<uint64_t>(W) * H * D * 100;
    sum_2 += W * H * D * 100;
    cout << sum << endl;
    cout << sum_2 << endl;
    return 0;
}
I thought that sum should be equal to sum_2, because the uint64_t type is bigger than the int type and, during arithmetic operations, the compiler chooses the bigger type (which is uint64_t). So by my understanding, sum_2 must have uint64_t type. But it behaves as if it has int type.
Can you explain to me why sum_2 was converted to int? Why didn't it stay uint64_t?
A: Undefined behavior signed-integer overflow/underflow, and well-defined behavior unsigned-integer overflow/underflow, in C and C++
If I enter 200, 300, and 358 for W, H, and D, I get the following output, which makes perfect sense for my gcc compiler on a 64-bit Linux machine:
2148000000
18446744071562584320
Why does this make perfect sense?
Well, the default type is int, which is int32_t for the gcc compiler on a 64-bit Linux machine; its max value is 2^32/2-1 = 2147483647, and its min value is -2147483648. The line sum_2 += W * H * D * 100; does int arithmetic, since that's the type of every variable there, 100 included, and no explicit cast is used. So, after doing int arithmetic, it implicitly casts the int result into a uint64_t as it stores the result into the uint64_t sum_2 variable. The int arithmetic on the right-hand side, however, nominally produces 2148000000, which overflows past the max int value (undefined behavior signed integer overflow) and, on this compiler, wraps around to the min int value and counts up from there.
Even though according to the C and C++ standards, signed integer overflow or underflow is undefined behavior, in the gcc compiler, I know that signed integer overflow happens to roll over to negative values if it is not optimized out. This, by default, is still "undefined behavior", and a bug, however, and must not be relied upon by default. See notes below for details and information on how to make this well-defined behavior via a gcc extension.
Anyway, 2148000000 - 2147483647 = 516353 up-counts, the first of which causes roll-over. The first count up rolls over to the min int32_t value of -2147483648, and the next (516353 - 1 = 516352) counts go up to -2147483648 + 516352 = -2146967296. So, the result of W * H * D * 100 for the inputs above is now -2146967296, based on undefined behavior.
Next, that value is implicitly cast from an int (int32_t in this case) to a uint64_t in order to store it into the uint64_t sum_2 variable, resulting in well-defined behavior unsigned integer underflow. You start with -2146967296. The first down-count underflows down to uint64_t max, which is 2^64-1 = 18446744073709551615. Now subtract the remaining 2146967296 - 1 = 2146967295 counts from that and you get 18446744073709551615 - 2146967295 = 18446744071562584320, just as shown above!
Voila! With a little compiler and hardware architecture understanding, and some expected but undefined behavior, the result is perfectly explainable and makes sense!
To easily see the negative value, add this to your code:
int sum_3 = W*H*D*100;
cout << sum_3 << endl; // output: -2146967296
Notes
* Never intentionally leave undefined behavior in your code. That is known as a bug. You do not have to write ISO C++, however! If you can find compiler documentation indicating a certain behavior is well-defined, that's ok, so long as you know you are writing in the g++ language and not the C++ language, and don't expect your code to work the same across compilers. Here is an example where I do that: Using Unions for "type punning" is fine in C, and fine in gcc's C++ as well (as a gcc [g++] extension). I'm generally okay with relying on compiler extensions like this. Just be aware of what you're doing is all.
* @user17732522 makes a great point in the comments here:
"in the gcc compiler, I know that signed integer overflow happens to roll over to negative values.": That is not correct by-default. By-default GCC assumes that signed overflow does not happen and applies optimizations based on that. There is the -fwrapv and/or -fno-strict-overflow flag to enforce wrapping behavior. See https://gcc.gnu.org/onlinedocs/gcc-12.1.0/gcc/Code-Gen-Options.html#Code-Gen-Options.
Take a look at that link above (or even better, this one, to always point to the latest gcc documentation instead of the documentation for just one version: https://gcc.gnu.org/onlinedocs/gcc/Code-Gen-Options.html#Code-Gen-Options). Even though signed-integer overflow and underflow is undefined behavior (a bug!) according to the C and C++ standards, gcc allows, by extension, to make it well-defined behavior (not a bug!) so long as you use the proper gcc build flags. Using -fwrapv makes signed-integer overflow/underflow well-defined behavior as a gcc extension. Additionally, -fwrapv-pointer allows pointers to safely overflow and underflow when used in pointer arithmetic, and -fno-strict-overflow applies both -fwrapv and -fwrapv-pointer. The relevant documentation is here: https://gcc.gnu.org/onlinedocs/gcc/Code-Gen-Options.html#Code-Gen-Options (emphasis added):
These machine-independent options control the interface conventions used in code generation.
Most of them have both positive and negative forms; the negative form of -ffoo is -fno-foo.
...
* -fwrapv
  This option instructs the compiler to assume that signed arithmetic overflow of addition, subtraction and multiplication wraps around using twos-complement representation. This flag enables some optimizations and disables others. The options -ftrapv and -fwrapv override each other, so using -ftrapv -fwrapv on the command-line results in -fwrapv being effective. Note that only active options override, so using -ftrapv -fwrapv -fno-wrapv on the command-line results in -ftrapv being effective.
* -fwrapv-pointer
  This option instructs the compiler to assume that pointer arithmetic overflow on addition and subtraction wraps around using twos-complement representation. This flag disables some optimizations which assume pointer overflow is invalid.
* -fstrict-overflow
  This option implies -fno-wrapv -fno-wrapv-pointer and when negated [as -fno-strict-overflow] implies -fwrapv -fwrapv-pointer.
So, relying on signed-integer overflow or underflow withOUT using the proper gcc extension flags above is undefined behavior, and therefore a bug, and can not be safely relied upon! It may be optimized out by the compiler and not work reliably as intended without the gcc extension flags above.
My test code
Here is my total code I used for some quick checks to write this answer. I ran it with the gcc/g++ compiler on a 64-bit Linux machine. I did not use the -fwrapv or -fno-strict-overflow flags, so all signed integer overflow or underflow demonstrated below is undefined behavior, a bug, and cannot be relied upon safely without those gcc extension flags. The fact that it works is circumstantial, as the compiler could, by default, choose to optimize out the overflows in unexpected ways.
If you run this on an 8-bit microcontroller such as an Arduino Uno, you'd get different results since an int is a 2-byte int16_t by default, instead! But, now that you understand the principles, you could figure out the expected result. (Also, I think 64-bit values don't exist on that architecture, so they become 32-bit values).
#include <iostream>
#include <cstdint>
using namespace std;
int main()
{
    int N, R, W, H, D;
    uint64_t sum = 0;
    uint64_t sum_2 = 0;
    // cin >> W >> H >> D;
    W = 200;
    H = 300;
    D = 358;
    sum += static_cast<uint64_t>(W) * H * D * 100;
    sum_2 += W * H * D * 100;
    cout << sum << endl;
    cout << sum_2 << endl;
    int sum_3 = W * H * D * 100;
    cout << sum_3 << endl;
    sum_2 = -1; // underflow to uint64_t max
    cout << sum_2 << endl;
    sum_2 = 18446744073709551615ULL - 2146967295;
    cout << sum_2 << endl;
    return 0;
}
A: Just a short version of @Gabriel Staples good answer.
"and during arithmetic operations the compiler chooses the bigger type (which is uint64_t)"
There is no uint64_t in W * H * D * 100, just four int. After this multiplication, the int product (which overflowed and is UB) is assigned to an uint64_t.
Instead, use 100LLU * W * H * D to perform a wider unsigned multiplication.
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 6,414 |
Pangaea enters 2009 as an established DJ/producer and co-owner of Hessle Audio with fellow stalwarts Ramadanman and Ben UFO. Their young label is already the subject of much critical acclaim from discerning dubstep and techno fans and responsible for progressive, landmark releases including TRG's 'Broken Heart', Ramadanman's 'Blimey', Pangaea's own 'Coiled EP', 'Router' and 'You & I' alongside material from likeminded producers Untold and Martyn.
His DJ schedule is rapidly picking up pace with consistent bookings across Europe and a burgeoning reputation as a true pioneer behind the decks; a notion backed up by Mary Anne Hobbs who described his recent mix for BBC Radio 1 as "texturally one of the most exquisite mixes of the year…breathtakingly beautiful in every way." | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 1,498 |
Miloslav Denk (born 19 August 1957) is a former Czech football striker.
Football career
With Sparta he won the league title three times and also appeared in European cup competitions. After leaving Sparta he played, among other clubs, for Sabah FA in Malaysia and, after returning home, for Hradec Králové. He made 109 appearances in the Czechoslovak top league and scored 19 goals. He was still playing football at the regional level at the age of 53. Wearing the Sparta shirt, he is a regular participant in the traditional New Year's Eve veterans' derby between Sparta and Slavia.
League statistics
External links
Player History
Czechoslovak footballers
Czech footballers
VTJ Tábor players
AC Sparta Praha players
FC Hradec Králové players
Sabah FA players
Born on 19 August
Born in 1957
Living people
Men | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 9,785 |
Q: OpenGL rendering on a server I'm trying to render a 3D model on a dedicated server (which obviously doesn't have any screen). Until now, my various attempts (using Blender) have led me to the conclusion that CPU rendering is far too slow for what I want.
My scene is very simple: only a textured sphere and a camera turning around it to record it. The quality of the OpenGL render in Blender's viewport is good enough for what I want. I have the opportunity to switch to a GPU server (with additional costs).
So my questions are: is it possible to use OpenGL to render a video on a server that has neither a screen nor an Xorg server?
If that's the case, which technology should I look for? Will the rendering be as fast as in a classic OpenGL program (or at least produce 25 frames per second)? During my research I heard about Mesa3D and EGL, but as a novice I don't know if I should continue looking that way.
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 7,532 |