The Battle of Paso Cuello (Batalla de Paso Cuello) is the name given to the armed clash that took place on 19 March 1817 between the invading Portuguese army and the militias of the Artiguista resistance, on the boundary between the departments of Canelones and Florida, in the territory of present-day Uruguay. The episode was preceded a few days earlier by the Portuguese capture of the Villa de Guadalupe, an event that, according to contemporary chronicles, caused the town to be completely emptied by its inhabitants, who marched off to seek refuge behind the lines of the Oriental army, encamped a few kilometres away on the banks of the Santa Lucía River. During 1816 the Provincia Oriental was invaded by the Portuguese army led by General Carlos Federico Lecor, which occupied Cerro Largo, Santa Teresa and Maldonado despite the determined resistance mounted by the Artiguista forces. The Portuguese advanced with more than 12,000 men, most of them veterans of the Napoleonic Wars in Europe, thoroughly armed and equipped. By January 1817, with the invasion of Montevideo imminent, the Oriental forces under Artigas's delegate Miguel Barreiro were forced to abandon the city and withdraw towards Canelones, establishing their headquarters for a few days in the Villa de Guadalupe and then at the Paso Cuello crossing of the Santa Lucía River. The Portuguese army camped on the outskirts of the capital, and the remaining delegates of the cabildo went out to meet it to hand over the keys of Montevideo. Thus began a period in which the Orientals waged the so-called war of resources (guerra de recursos), which consisted of stripping the Portuguese of any provisions outside the fortress that might sustain them, thereby seeking to weaken the invading army. This war of resources was accompanied by intense guerrilla fighting, which also struck at Portuguese detachments, causing not only food shortages but also significant casualties. 
All these actions were planned and directed from the headquarters established at Paso Cuello, where a contingent of roughly 1,000 men forming the resistance army camped for a month and a half, together with the 400 men led by Juan Antonio Lavalleja who were encamped in the Toledo area. After several months of harassment by the Oriental guerrillas, in which Lavalleja played a fundamental role, General Lecor assembled 5,000 soldiers at the Portuguese headquarters established at the Quinta de Casavalle and organized a military campaign to confront the Oriental resistance. The Portuguese army invaded the Villa Guadalupe, or de los Canelones, though not before meeting strong resistance from its inhabitants. The town of Canelones was completely abandoned and left empty, a display of the population's loyalty to General Artigas: in a kind of second exodus, the townspeople withdrew towards Paso Cuello. Women played a decisive role in these events, taking charge of the safety of the children and the elderly and driving the cattle away so that the invaders could not seize them. Once the Villa de Guadalupe had been taken, General Lecor ordered the sacking of the Artigas family estancia at Sauce, then set out with 2,500 soldiers to confront the Oriental resistance established at the Paso Cuello crossing of the Santa Lucía River. At midday on 19 March 1817 the Portuguese reached the crossing and the battle began, lasting into the night. Some 550 armed Orientals stood ready to defend the position on the riverbank, covering the rest of the headquarters and the population of Villa Guadalupe, which, exodus-like, was crossing the Santa Lucía in search of refuge in the interior of the territory. 
In addition to the 2,500 heavily armed soldiers, the Portuguese had five cannons that fired throughout the battle, answered by a small artillery force on the Artiguista side. This fierce and heroic battle of more than five hours made possible the defence of the Oriental people of the Villa de Guadalupe, and found prominent figures of Uruguayan history resisting the foreign invasion. As a result, the objective of those Artiguistas was amply achieved. More than 100 Artiguistas and 50 Portuguese perished there. Later, from Purificación, General José Gervasio Artigas, accompanied by the Charrúas, decided to "come down" towards the capital to strengthen the resistance, and these events culminated in a siege of Montevideo. At Paso de Cuello converged many notable figures of the history of the Provincia Oriental, and later of the Uruguayan state: Miguel Barreiro, Tomás García de Zúñiga, Fructuoso Rivera, Juan Antonio Lavalleja, Manuel Francisco Artigas, Joaquín Suárez, Fernando Otorgues, Pedro Bermúdez, Rufino Bauzá, Manuel Oribe, Ignacio Oribe and Fray José Benito Lamas. References Uruguay in 1817 Battles of Uruguay
{ "redpajama_set_name": "RedPajamaWikipedia" }
3,148
Autoba rufipuncta is a species of moth described by Turner in 1902. Autoba rufipuncta belongs to the genus Autoba and the family Noctuidae (owlet moths). No subspecies are listed in the Catalogue of Life. Sources Nattflyn rufipuncta
{ "redpajama_set_name": "RedPajamaWikipedia" }
9,184
#include "common_tools.h"
#include "opengles.h"

typedef void (APIENTRY *glScissorExclusiveArrayvNVPROC) (jint, jint, uintptr_t);
typedef void (APIENTRY *glScissorExclusiveNVPROC) (jint, jint, jint, jint);

EXTERN_C_ENTER

JNIEXPORT void JNICALL Java_org_lwjgl_opengles_NVScissorExclusive_nglScissorExclusiveArrayvNV__IIJ(JNIEnv *__env, jclass clazz, jint first, jint count, jlong vAddress) {
    glScissorExclusiveArrayvNVPROC glScissorExclusiveArrayvNV = (glScissorExclusiveArrayvNVPROC)tlsGetFunction(762);
    uintptr_t v = (uintptr_t)vAddress;
    UNUSED_PARAM(clazz)
    glScissorExclusiveArrayvNV(first, count, v);
}

JNIEXPORT void JNICALL Java_org_lwjgl_opengles_NVScissorExclusive_glScissorExclusiveNV(JNIEnv *__env, jclass clazz, jint x, jint y, jint width, jint height) {
    glScissorExclusiveNVPROC glScissorExclusiveNV = (glScissorExclusiveNVPROC)tlsGetFunction(763);
    UNUSED_PARAM(clazz)
    glScissorExclusiveNV(x, y, width, height);
}

EXTERN_C_EXIT
{ "redpajama_set_name": "RedPajamaGithub" }
7,847
For the 13th day of the Go Goodwins Manchester Storm's #StormingChristmas Advent Calendar, we're offering all Salford University students £10 tickets for Saturday's game against the Nottingham Panthers! To take advantage of this offer please email Stuart at [email protected], but hurry – this offer ends at 5pm Friday 15th December!!!
{ "redpajama_set_name": "RedPajamaC4" }
1,084
#include "mitkPlanarEllipse.h"
#include "mitkPlaneGeometry.h"
#include "mitkProperties.h"

#include <algorithm>

mitk::PlanarEllipse::PlanarEllipse()
    : FEATURE_ID_MAJOR_AXIS(Superclass::AddFeature("Major Axis", "mm")),
      FEATURE_ID_MINOR_AXIS(Superclass::AddFeature("Minor Axis", "mm")),
      m_MinRadius(0),
      m_MaxRadius(100),
      m_MinMaxRadiusContraintsActive(false),
      m_TreatAsCircle(true)
{
    // Ellipse has four control points
    this->ResetNumberOfControlPoints( 4 );
    this->SetNumberOfPolyLines( 2 );
    this->SetProperty( "closed", mitk::BoolProperty::New(true) );
}

mitk::PlanarEllipse::~PlanarEllipse()
{
}

bool mitk::PlanarEllipse::SetControlPoint( unsigned int index, const Point2D &point, bool createIfDoesNotExist )
{
    if(index == 0) // moving center point and control points accordingly
    {
        const Point2D &centerPoint = GetControlPoint( 0 );
        Point2D boundaryPoint1 = GetControlPoint( 1 );
        Point2D boundaryPoint2 = GetControlPoint( 2 );
        Point2D boundaryPoint3 = GetControlPoint( 3 );
        vnl_vector<ScalarType> vec = (point.GetVnlVector() - centerPoint.GetVnlVector());

        boundaryPoint1[0] += vec[0];
        boundaryPoint1[1] += vec[1];
        boundaryPoint2[0] += vec[0];
        boundaryPoint2[1] += vec[1];
        boundaryPoint3[0] += vec[0];
        boundaryPoint3[1] += vec[1];
        PlanarFigure::SetControlPoint( 0, point, createIfDoesNotExist );
        PlanarFigure::SetControlPoint( 1, boundaryPoint1, createIfDoesNotExist );
        PlanarFigure::SetControlPoint( 2, boundaryPoint2, createIfDoesNotExist );
        PlanarFigure::SetControlPoint( 3, boundaryPoint3, createIfDoesNotExist );
        return true;
    }
    else if (index < 3)
    {
        PlanarFigure::SetControlPoint( index, point, createIfDoesNotExist );
        int otherIndex = index+1;
        if (otherIndex > 2)
            otherIndex = 1;

        const Point2D &centerPoint = GetControlPoint( 0 );
        Point2D otherPoint = GetControlPoint( otherIndex );
        Point2D point3 = GetControlPoint( 3 );

        Vector2D vec1 = point - centerPoint;
        Vector2D vec2;

        if (index == 1 && m_TreatAsCircle )
        {
            float x = vec1[0];
            vec2[0] = vec1[1];
            vec2[1] = x;

            if (index==1)
                vec2[0] *= -1;
            else
                vec2[1] *= -1;

            otherPoint = centerPoint+vec2;
            PlanarFigure::SetControlPoint( otherIndex, otherPoint, createIfDoesNotExist );
            float r = centerPoint.EuclideanDistanceTo(otherPoint);

            // adjust additional third control point
            Point2D p3 = this->GetControlPoint(3);
            Vector2D vec3;
            vec3[0] = p3[0]-centerPoint[0];
            vec3[1] = p3[1]-centerPoint[1];
            if (vec3[0]!=0 || vec3[1]!=0)
            {
                vec3.Normalize();
                vec3 *= r;
            }
            else
            {
                vec3[0] = r;
                vec3[1] = 0;
            }
            point3 = centerPoint + vec3;
            PlanarFigure::SetControlPoint( 3, point3, createIfDoesNotExist );
        }
        else if ( vec1.GetNorm() > 0 )
        {
            float r = centerPoint.EuclideanDistanceTo(otherPoint);
            float x = vec1[0];
            vec2[0] = vec1[1];
            vec2[1] = x;

            if (index==1)
                vec2[0] *= -1;
            else
                vec2[1] *= -1;

            vec2.Normalize();
            vec2 *= r;

            if ( vec2.GetNorm() > 0 )
            {
                otherPoint = centerPoint+vec2;
                PlanarFigure::SetControlPoint( otherIndex, otherPoint, createIfDoesNotExist );
            }

            // adjust third control point
            Vector2D vec3 = point3 - centerPoint;
            vec3.Normalize();
            double r1 = centerPoint.EuclideanDistanceTo( GetControlPoint( 1 ) );
            double r2 = centerPoint.EuclideanDistanceTo( GetControlPoint( 2 ) );
            Point2D newPoint = centerPoint + vec3*std::max(r1, r2);
            PlanarFigure::SetControlPoint( 3, newPoint, createIfDoesNotExist );
            m_TreatAsCircle = false;
        }
        return true;
    }
    else if (index == 3)
    {
        Point2D centerPoint = GetControlPoint( 0 );
        Vector2D vec3 = point - centerPoint;
        vec3.Normalize();
        double r1 = centerPoint.EuclideanDistanceTo( GetControlPoint( 1 ) );
        double r2 = centerPoint.EuclideanDistanceTo( GetControlPoint( 2 ) );
        Point2D newPoint = centerPoint + vec3*std::max(r1, r2);
        PlanarFigure::SetControlPoint( index, newPoint, createIfDoesNotExist );
        m_TreatAsCircle = false;
        return true;
    }
    return false;
}

void mitk::PlanarEllipse::PlaceFigure( const mitk::Point2D &point )
{
    PlanarFigure::PlaceFigure( point );
    m_SelectedControlPoint = 1;
}

mitk::Point2D mitk::PlanarEllipse::ApplyControlPointConstraints(unsigned int index, const Point2D &point)
{
    return point;

    Point2D indexPoint;
    this->GetPlaneGeometry()->WorldToIndex( point, indexPoint );

    BoundingBox::BoundsArrayType bounds = this->GetPlaneGeometry()->GetBounds();
    if ( indexPoint[0] < bounds[0] ) { indexPoint[0] = bounds[0]; }
    if ( indexPoint[0] > bounds[1] ) { indexPoint[0] = bounds[1]; }
    if ( indexPoint[1] < bounds[2] ) { indexPoint[1] = bounds[2]; }
    if ( indexPoint[1] > bounds[3] ) { indexPoint[1] = bounds[3]; }

    Point2D constrainedPoint;
    this->GetPlaneGeometry()->IndexToWorld( indexPoint, constrainedPoint );

    if(m_MinMaxRadiusContraintsActive)
    {
        if( index != 0)
        {
            const Point2D &centerPoint = this->GetControlPoint(0);
            double euclideanDinstanceFromCenterToPoint1 = centerPoint.EuclideanDistanceTo(point);

            Vector2D vectorProjectedPoint;
            vectorProjectedPoint = point - centerPoint;
            vectorProjectedPoint.Normalize();

            if( euclideanDinstanceFromCenterToPoint1 > m_MaxRadius )
            {
                vectorProjectedPoint *= m_MaxRadius;
                constrainedPoint = centerPoint;
                constrainedPoint += vectorProjectedPoint;
            }
            else if( euclideanDinstanceFromCenterToPoint1 < m_MinRadius )
            {
                vectorProjectedPoint *= m_MinRadius;
                constrainedPoint = centerPoint;
                constrainedPoint += vectorProjectedPoint;
            }
        }
    }
    return constrainedPoint;
}

void mitk::PlanarEllipse::GeneratePolyLine()
{
    // clear the PolyLine-Container, it will be reconstructed soon enough...
    this->ClearPolyLines();

    const Point2D &centerPoint = GetControlPoint( 0 );
    const Point2D &boundaryPoint1 = GetControlPoint( 1 );
    const Point2D &boundaryPoint2 = GetControlPoint( 2 );

    Vector2D dir = boundaryPoint1 - centerPoint;
    dir.Normalize();
    vnl_matrix_fixed<float, 2, 2> rot;

    // differentiate between clockwise and counterclockwise rotation
    int start = 0;
    int end = 64;
    if (dir[1]<0)
    {
        dir[0] = -dir[0];
        start = -32;
        end = 32;
    }
    // construct rotation matrix to align ellipse with control point vector
    rot[0][0] = dir[0];
    rot[1][1] = rot[0][0];
    rot[1][0] = sin(acos(rot[0][0]));
    rot[0][1] = -rot[1][0];

    double radius1 = centerPoint.EuclideanDistanceTo( boundaryPoint1 );
    double radius2 = centerPoint.EuclideanDistanceTo( boundaryPoint2 );

    // Generate poly-line with 64 segments
    for ( int t = start; t < end; ++t )
    {
        double alpha = (double) t * vnl_math::pi / 32.0;

        // construct the new polyline point ...
        vnl_vector_fixed< float, 2 > vec;
        vec[0] = radius1 * cos( alpha );
        vec[1] = radius2 * sin( alpha );
        vec = rot*vec;

        Point2D polyLinePoint;
        polyLinePoint[0] = centerPoint[0] + vec[0];
        polyLinePoint[1] = centerPoint[1] + vec[1];

        // ... and append it to the PolyLine.
        // No extending supported here, so we can set the index of the PolyLineElement to '0'
        this->AppendPointToPolyLine(0, polyLinePoint);
    }
    this->AppendPointToPolyLine(1, centerPoint);
    this->AppendPointToPolyLine(1, this->GetControlPoint(3));
}

void mitk::PlanarEllipse::GenerateHelperPolyLine(double /*mmPerDisplayUnit*/, unsigned int /*displayHeight*/)
{
    // A circle does not require a helper object
}

void mitk::PlanarEllipse::EvaluateFeaturesInternal()
{
    Point2D centerPoint = this->GetControlPoint(0);
    this->SetQuantity(FEATURE_ID_MAJOR_AXIS, 2 * centerPoint.EuclideanDistanceTo(this->GetControlPoint(1)));
    this->SetQuantity(FEATURE_ID_MINOR_AXIS, 2 * centerPoint.EuclideanDistanceTo(this->GetControlPoint(2)));
}

void mitk::PlanarEllipse::PrintSelf( std::ostream& os, itk::Indent indent) const
{
    Superclass::PrintSelf( os, indent );
}

bool mitk::PlanarEllipse::Equals(const mitk::PlanarFigure& other) const
{
    const mitk::PlanarEllipse* otherEllipse = dynamic_cast<const mitk::PlanarEllipse*>(&other);
    if ( otherEllipse )
    {
        if(this->m_TreatAsCircle != otherEllipse->m_TreatAsCircle)
            return false;
        return Superclass::Equals(other);
    }
    else
    {
        return false;
    }
}
{ "redpajama_set_name": "RedPajamaGithub" }
8,360
Foodpharmacy Blog: Supplements, Vitamins, Vitamin B, Vitamin B Complex Sundown Naturals, B-Complex, 100 Tablets Product name: Sundown Naturals, B-Complex, 100 Tablets Quantity: 100 Count, 0.05 kg, 4.6 x 4.6 x 9.1 cm Categories: Sundown Naturals, Supplements, Vitamins, Vitamin B, Vitamin B Complex, Gluten Free, Dairy Free, Casein Free, Non Gmo, Vegetarian Now Foods, Ultra Omega-3, 500 EPA/250 DHA, 180 Softgels Gluten Free, Dairy Free, Non-GMO, Clean Nutrition, With 6 Essential B Vitamins, For Energy Support and Heart Health, Vegetarian, Vitamin Supplement, Sundown believes in clean nutrition and being transparent. That's why you won't find genetically modified ingredients, gluten, wheat, dairy, lactose or artificial flavors in any of our products, B vitamins aid in the conversion of food into energy. Most supplements can be stored in a cool, dry place, and be sure to keep the container tightly shut and avoid exposing it to sunlight. As many as 15 percent of people in the united states have a vitamin b12 deficiency, which can lead to anemia. In the online description none of the vitamins exceed 100% of the daily value, this is important to me as certain vitamins taken in excess can cause damage to the liver and kidneys in particular. The b vitamins refer to eight vitamins (B 1, b 2, b 3, b 5, b 6, b 7, b 9, and b 12) that together are referred to as the vitamin b complex and are important in cell metabolism. How could i benefit from a vitamin b complex? Each of these vitamins has many additional functions. However, five products (Mostly b complexes) failed our tests for providing far less or more ingredient than listed. Because of the sustained release, you get an even boost of b vitamins throughout the day which can help boost your energy levels consistently. 
Natrol, 5-HTP, Extra Strength, 100 mg, 30 Capsules Sundown Naturals, B-Complex, 100 Tablets: Vitamin B Complex, Vitamin B, Vitamins, Supplements This product contains excessive and over daily recommended dosage for b vitamins intake. Unlike our standard american diet, nutrient-dense foods can deliver a vitamin-packed and mineral-packed wallop. Just the b vitamins themselves can help with immune system and cardiovascular health, plus allow your nervous system to function properly. Some people insist that the benefits of vitamin b are visible just a few days after beginning supplementation. If you have taken regular b vitamins before, you will notice that with sustained release b's, the neon yellow color will usually be less intense initially, and yet noticeable for longer as it slowly releases at an optimal absorption rate of 6-8 hours. Compared with the placebo group, the multivitamin group experienced consistent and statistically significant reductions in anxiety and perceived stress, as determined by questionnaires measuring psychological state. The label on my b-complex states it contains 50,000% of the daily value! Anabolic processes: The vitamin-dependent, citric acid cycle furnishes not only energy, but also the intermediaries for the biosynthesis of numerous key compounds, including amino acids, fatty acids and pyrimidines. Indeed, there would seem to be little evidence for supplementing with the bare minimum requirement (Rda) given the dose-response to b vitamins in terms of bioavailability and physiological benefits. Some sprays can be used on food, offering another easy way to take these supplements. B group vitamins are water-soluble molecules (They are dissolved in water). Even though energy drinks manufacturers will often boast about the high content of b vitamins in their products, having them does nothing to boost your actual energy. B-complex supplements usually pack all eight b vitamins into one pill. 
With the added vitamin c, it may also be better for those who needed the immune system boost. To help you make educated decisions, and to better understand controversial or confusing supplements, our medical experts have digested the science into these three easy-to-follow ratings. Unfortunately, this has led to an erroneous belief among non-scientists that these vitamins have a special relationship to each other. This vitamin b complex from garden of life supports mental and physical energy, blood health, heart health, immune system health and a healthy response to stress. Summary recommended intake for b vitamins varies depending on age, nutrient demands, gender and health status. During pregnancy, the demand for b vitamins, particularly b12 and folate, grows to support fetal development. Read on to learn about the daily doses of different b vitamins you need, natural sources to include in your diet, and the health benefits you can expect to reap. B-complex is especially important for energy, alertness and decreasing stress. Adequate b vitamins are essential for maintaining energy levels and additional intake is often needed by those with high levels of stress. A b-complex supplement can help almost everyone with several different body and brain functions. The b vitamins are a class of water-soluble nutrients that play an important role in maintaining normal physiologic and metabolic functions. B vitamins help your body convert food into energy on a cellular level, but they might also give you more energy in general. B-complex vitamins are often used to reduce fatigue and boost mood. This ensures you are getting the highest quality supplements from bronson. In addition to a nutrient-dense diet, some opt to take a multivitamin to help cover their bases. Although most b vitamins are eliminated in the urine, if taken in excess some can present problems. Which supplements reduce the risk of stroke? 
This high-potency liquid has enough b vitamins to meet all of your daily requirements. The health benefits of vitamin b6 uncovered by clinical research include reduction in heart disease risk. For example, vitamin b12 may be useful for those who use stomach-acid controlling drugs, including h-2 blockers and proton-pump inhibitors, or who take metformin to treat type 2 diabetes. Our fermentation method enhances the absorption of nutrients, unlocking and activating the vitamins so your body can recognize and absorb them like food. These 5 b vitamins are essential for effective nervous system functioning and work together to help metabolic functions as well. Now Foods, Joint Support, 90 Capsules Sundown Naturals Vitamin B Complex Interestingly, the commentary surrounding the equivocal nature of the evidence in this area has not included any reference to the predominant use of elderly participants in these studies, or whether providing an absolute maximum of three b vitamins (Folate, b 6, b 12), simply on the basis that these will reduce levels of homocysteine, is a rational approach, given the inextricably inter-linked functions of all eight b vitamins (And the potential for deficiencies/insufficiencies in any of these vitamins). As an example, the major human epidemiological and controlled trial research effort in this area has concentrated almost exclusively on that small sub-set of b vitamins (Folate, vitamin b 12 and, to a lesser extent vitamin b 6) that play the most obvious roles in homocysteine metabolism. For the moment, the foregoing suggests that research should, at a minimum, be redirected towards elucidating the potential benefits for brain function of both the acute and chronic administration of a full range of b vitamins rather than concentrating solely on the chronic effects of a small sub-group of three vitamins. 
B-minus by seeking health is a pure vitamin b complex formula that provides thiamin, riboflavin, niacin, vitamin b6, pantothenic acid, and biotin. These vitamins are free from all the extra fillers that you can find in a lot of other vitamins. It will likely be important to calculate a risk index for each individual, which for the case of b vitamins, would include measures of existing concentrations of b 6, folic acid, and b 12, and of brain atrophy to determine whether a response should occur, and also to design interventions with a longer follow-up than 2 years. B vitamins have historically been taken together for their synergistic role in supporting energy production, immune health, cardiovascular health and neurological health. Furthermore, evidence from human research clearly shows both that a significant proportion of the populations of developed countries suffer from deficiencies or insufficiencies in one or more of this group of vitamins, and that, in the absence of an optimal diet, administration of the entire b-vitamin group, rather than a small sub-set, at doses greatly in excess of the current governmental recommendations, would be a rational approach for preserving brain health. We therefore suggest choosing a supplement that provides 100% of the rda, and does not exceed that limit. I finally found a vitamin that i can swallow without getting stuck in my throat and needing a gallon of water to flush it down! Which supplements help to improve energy and decrease fatigue? This chapter takes a journey through the relationship between dietary b vitamin status of vegetarians, methionine intake, and concentrations of plasma total homocysteine (Thcy). However, taking supplements that contain excessively high and unnecessary quantities of b-complex vitamins could lead to serious side effects. Here are the health benefits of b-complex vitamins as well as dosage recommendations and potential side effects. 
The role of b-vitamins in mitochondrial energy production. The amounts of niacinamide and vitamin b6 used in this research may cause significant side effects and may require monitoring by a doctor. Vitamin b supplements are very simple and easy to use. They report that this vitamin is well-tolerated and effective at reducing migraine frequency in adults, though they recommend further research. It also contains a vitamin c boost for bone and muscle health. People who are 65 or older may benefit from a b-complex supplement. During a ten-year period, doctors at the north nassau mental health center in new york treated approximately 11,000 people with schizophrenia with a megavitamin regimen that included vitamin c (Up to 4 grams per day), vitamin b3 either as niacin or niacinamide (Up to 4 grams per day), vitamin b6 (Up to 800 mg per day), and vitamin e (Up to 1,200 iu per day). Are there any supplements i should avoid when taking acetaminophen (Tylenol)? Consensus science does not currently recognize any of these substances as a vitamin. If you are using a b-complex vitamin to help increase energy for workouts, you may want to add a bit extra since you will be burning off a lot of what you are taking. It is recommended that you discuss the use of vitamin b-complex and your current medications with your doctor or pharmacist. The apparent evolutionary paradox of why an organism would benefit from losing the ability to synthesise a compound required for its survival is resolved by the fact that, during the course of evolution, vitamins have been in ubiquitous and plentiful supply within the food chain. I have been using the solgar b-complex for several years, for several reasons. Taking these supplements may also improve mood, cognitive function and symptoms of depression. 
Interestingly, the orthodoxy that vitamins have to be administered for an extended period of time in order to elicit any physiological effects is not based on any evidence that vitamins do not exert acute effects. It is made from whole foods, making it completely vegan, and you get more than just the b vitamins your body needs to function efficiently. A great b-complex that is well tolerated by patients. Do any supplements improve balance or reduce the risk of falls? A popular way of increasing one's vitamin b intake is through the use of dietary supplements. Note: Other substances once thought to be vitamins were given numbers in the b-vitamin numbering scheme, but were subsequently discovered to be either not essential for life or manufactured by the body, thus not meeting the two essential qualifiers for a vitamin. It is important when taking a gummy vitamin that it meets your dietary restrictions or preferences. This means that, most of the time, the body excretes extra b vitamins in the urine. Manuka Doctor, Manuka Honey Multifloral, MGO 80+, 8.75 oz (250 g) Sundown Naturals, B-Complex, 100 Tablets Product Review I like this. Good Vitamins. A complex B at such a low cost. B-Complex. Vitamins. Easy to drink. Poorly. Liked it. Good value tiny tablets. Good This is my must-have; this is not my first order. The composition is good, the price is pleasant, and the brand is quality. I like it madly. I'll buy it again. If you press that my feedback was useful, I will be immensely grateful. It is no trouble for you, but it pleases me 🙂 Very good complex with a small dosage. I take 1 capsule per day. I ordered the same complex from another company with dosages twice as high, and got such a surge of strength that sleep was not even possible. I consider these vitamins more suitable in their effects on the body. Please mark the review useful if it helped, yes) A wonderful complex B, at such a low price. I am grateful that I once accidentally ordered it. 
At work in the summer there was a very heavy workload; accordingly, I was nervous and cried constantly; as a result there was complete nervous exhaustion, to the point that I could not even breathe fully. When the complex arrived I did not have high hopes for it, but after 10-12 days I began to evaluate my condition and realized that I had become calmer and more balanced, reacting calmly to irritants and sleeping better; it seems to me that I have settled down, i.e. there are no more of those nervous evening binges. Complex B is my salvation. A wonderful complex, what I was looking for for a long time, not overloaded with abnormal doses. Good quality. It really improves well-being. Packed well, and the vitamins smell nice. It is very easy to drink because it is small and flat, rather than the large grains that are common in American supplements. It is good to take vitamin B in the morning, so I take it once every morning. Did not help. Something like abrasions appeared on my skin, although I didn't fall anywhere. A lot of vitamins in this group. Small and easy to drink. I will order again when it is gone. Good value, tiny tablets – easy to swallow. Could easily fit double the amount in the tub though! It's easy to swallow, small like a ramune candy, and convenient because you get the whole B group in one tablet. Do the tablets become wet soon after opening the bottle? Is it okay to cut the tablet in half and use it like that? Hello, is each tablet 100mg? It's just that I was recommended to take that much. Thank you. Hi, is the folate in the form of 5-methyl-THF or folic acid? Does this not contain any Biotin (B7) or Pantothenic acid (B5)? I read that b1, b2, b6 and b12 are not compatible together? To what extent is that true? I guess the brand is aware of it, so could we have some information about this? Does it include eggs? Do these have an aftertaste? Or no taste at all? No they don't. Yes, it's okay to cut the tablet in half and use it like that. There are various amounts of different B vitamins. 
The actual amounts are in a table on the listing online. Please note that some of them are milligrams (mg) and some of them are micrograms (mcg). You'll have to read the listing for this item. You're going to have to take a lot of these to get that much. Spread them out over the day for best results. It says FOLIC ACID on the label. Hope this helps! 24OCT19. You are correct. The "SUNDOWN" B COMPLEX does not contain Biotin (B7) or Pantothenic acid (B5) – according to the label. Actually, B complex is good for our health; it is valuable that one tablet fulfills the whole range of B nutrients. And after taking it for one month I feel better. No mention of eggs. The label said no dairy and no lactose. I have a very sensitive and discerning palate yet have never noticed a taste or after-taste. Casein Free, Dairy Free, Gluten Free, Non Gmo, Sundown Naturals, Supplements, Vegetarian, Vitamin, Vitamin B, Vitamin B Complex, Vitamins Garden of Life, Raw Protein & Greens, Organic Plant Formula, Real Raw Vanilla, 19.3 oz (548 g) Renew Life, Cleanse More, 60 Vegetarian Capsules Solgar, Ubiquinol (Reduced CoQ10), 100 mg, 50 Softgels Zahler, Omega 3 Platinum+D, Advanced Omega 3 with Vitamin D3, 2,000 mg, 90 Softgels L'il Critters, Probiotic, Natural Cherry, Orange & Grape Flavor, 60 Gummies
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
4,767
Q: If a file was found more than once in subfolders - delete all using batch script

The system I'm working on looks like this:

D:\TargetFolder\Subfolder1
D:\TargetFolder\Subfolder2\Subfolder3

There is a file called "Settings.txt" that could exist in each of these folders. What I want is the following:

*If the file was found more than once in the targeted folder and all of its subfolders, then delete all of them.
*If the file was found just once in the targeted folder and all of its subfolders, then continue on with the script.
*If the file doesn't exist, then continue on with the script.

The final script could possibly be something like this:

IF exist "D:\TargetFolder\*Settings.txt" (goto delete) else goto continue
:delete
del *Settings.txt /f /q
:continue
exit

I hope I explained my question correctly. Thanks in advance.

A: @echo off
for /F "skip=1" %%a in ('dir /B /S D:\TargetFolder\Settings.txt 2^>NUL') do (
    del /Q /S D:\TargetFolder\Settings.txt >NUL
    goto break
)
:break

The for /F loop processes the file names output by the dir /S command, but the first one is skipped (because of the "skip=1" option). That is to say, if there is more than one file, the commands in the loop body are executed: the first command deletes all files named "Settings.txt", and the loop is then broken out of by the goto command. 
A: This batch file could be used for the task:

@echo off
setlocal EnableExtensions DisableDelayedExpansion
set "SettingsFile="
for /F "delims=" %%I in ('dir "D:\TargetFolder\Settings.txt" /A-D-L /B /S 2^>nul') do if not defined SettingsFile (set "SettingsFile=1") else (del "D:\TargetFolder\Settings.txt" /A /F /Q /S >nul 2>nul & goto Continue)
:Continue
endlocal

A less compact variant of the above:

@echo off
setlocal EnableExtensions DisableDelayedExpansion
set "SettingsFile="
for /F "delims=" %%I in ('dir "D:\TargetFolder\Settings.txt" /A-D-L /B /S 2^>nul') do (
    if not defined SettingsFile (
        set "SettingsFile=1"
    ) else (
        del "D:\TargetFolder\Settings.txt" /A /F /Q /S >nul 2>nul
        goto Continue
    )
)
:Continue
endlocal

First, the batch file makes sure that the environment variable SettingsFile is not defined by chance. Next, the command DIR is executed by a separate command process started in the background to search in D:\TargetFolder for files with the name Settings.txt and output them all with full path. The output of DIR is captured by FOR and processed line by line if DIR found the file Settings.txt at all.

The environment variable SettingsFile is defined with a string value (which does not really matter) on the first file Settings.txt. The FOR loop finishes without having done anything else if there is no further file Settings.txt. But on the second file Settings.txt, the command DEL is executed to delete the file Settings.txt in the specified folder and all its subfolders. The loop is exited with the command GOTO to continue batch file processing on the line below the label Continue, as the other occurrences of Settings.txt do not matter anymore and of course no longer exist once the deletion of all Settings.txt files has succeeded.

For understanding the used commands and how they work, open a command prompt window, execute there the following commands, and read entirely all help pages displayed for each command very carefully.

*del /?
*dir /?
*echo /?
*endlocal /?
*for /?
*goto /?
*if /?
*set /?
*setlocal /?

Read the Microsoft documentation about Using command redirection operators for an explanation of >nul and 2>nul. The redirection operator > must be escaped with the caret character ^ on the FOR command line to be interpreted as a literal character when the Windows command interpreter processes this command line before executing the command FOR, which executes the embedded dir command line using a separate command process started in the background with cmd.exe /c and the command line within ' appended as additional arguments.

See also single line with multiple commands using Windows batch file for an explanation of the operator &.

A: This is not difficult if you can use the PowerShell already on your Windows system. If the count of files found is greater than one, then each one is deleted. Otherwise, nothing happens. When you are confident that the correct files will be deleted, remove the -WhatIf from the Remove-Item command.

@powershell.exe -NoLogo -NoProfile -Command ^
    "$Files = Get-ChildItem -Recurse -File -Path 'C:\TargetFolder' -Filter 'Settings.txt';" ^
    "if ($Files.Count -gt 1) {" ^
    "foreach ($File in $Files) { Remove-Item $File.FullName -Whatif }" ^
    "}"

Some of the noise can be eliminated if it could be run as a PowerShell .ps1 script file.

$Files = Get-ChildItem -Recurse -File -Path 'C:\TargetFolder' -Filter 'Settings.txt'
if ($Files.Count -gt 1) {
    foreach ($File in $Files) { Remove-Item $File.FullName -Whatif }
}
Peter Boltun (born January 2, 1993) is a Slovak professional ice hockey forward currently playing for HK Dukla Michalovce of the Tipsport Liga. Boltun made his Tipsport Liga debut with HC Košice during the 2012–13 season. He later joined HK Poprad in 2014 for one season before returning to Košice. On May 30, 2019, Boltun signed with his hometown team HK Dukla Michalovce following their promotion to the Tipsport Liga. References External links 1993 births Living people HK Dukla Michalovce players HC Košice players People from Michalovce Sportspeople from the Košice Region HK Poprad players Slovak ice hockey forwards Competitors at the 2013 Winter Universiade
Q: Three very large plates of the same area are kept parallel and close to each other. They are considered as ideal black surfaces and have very high thermal conductivity. The first and third plates are maintained at temperatures 2T and 3T respectively. The temperature of the middle (i.e., second) plate under steady condition is

(a) $(\frac{65}{2})^{1/4} T$

(b) $(\frac{97}{4})^{1/4} T$

(c) $(\frac{97}{2})^{1/4} T$

(d) $(97)^{1/4} T$

Ans: (c)

Sol: Let the temperature of the middle plate be T'. In the steady state, the net power it absorbs equals the net power it emits:

$\large \sigma A [(3T)^4 - (T')^4] = \sigma A [(T')^4 - (2T)^4]$

$\large T' = (\frac{97}{2})^{1/4} T$
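A quick numerical check of the steady-state balance above (a sketch in arbitrary units, taking T = 1; the factors sigma and A cancel):

```python
# Steady state of the middle plate: net radiation absorbed from the 3T plate
# equals net radiation lost to the 2T plate, so
#   (3T)^4 - Tm^4 = Tm^4 - (2T)^4  =>  Tm^4 = ((3T)^4 + (2T)^4)/2 = (97/2) T^4.
T = 1.0
Tm4 = ((3 * T) ** 4 + (2 * T) ** 4) / 2.0  # Tm^4 = (81 + 16)/2 = 48.5
Tm = Tm4 ** 0.25
print(Tm)  # equals (97/2)**0.25, about 2.64 in units of T
```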
Mågerø is a small peninsula just south of the Norwegian city of Tønsberg. It is a part of the island Tjøme, a typical Norwegian summer vacation area. Mågerø is the location of the Royal Norwegian Air Force control and alert station for the southern part of Norway. The Norwegian Royal family have their summer vacation facility situated on Mågerø, which in its entirety has restricted military area status. References External links Peninsulas of Vestfold og Telemark
Question: There are 200 tomatoes in a bushel. If an average tomato contains about 0.20 g fat, 1.1 g protein, and 4.8 g carbohydrate, how many Calories are in bushels of average-sized tomatoes?
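The tomato question can be worked out with the standard Atwater energy factors (roughly 9 Cal/g for fat and 4 Cal/g for protein and carbohydrate; these factors are our assumption, not stated in the problem):

```python
# Calories per average tomato, using approximate Atwater factors:
# fat ~9 Cal/g, protein ~4 Cal/g, carbohydrate ~4 Cal/g (assumed values).
FAT_G, PROTEIN_G, CARB_G = 0.20, 1.1, 4.8
cal_per_tomato = 9 * FAT_G + 4 * PROTEIN_G + 4 * CARB_G  # 1.8 + 4.4 + 19.2
cal_per_bushel = 200 * cal_per_tomato                    # 200 tomatoes per bushel
print(cal_per_tomato, cal_per_bushel)  # about 25.4 Cal per tomato, 5080 Cal per bushel
```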
package com.qSilver.render.string;

import com.bStone.engine.resource.texture.TextureBase;

/**
 * Renders text from a fixed-size bitmap font atlas: glyphs are laid out
 * left-to-right, top-to-bottom in code-point order.
 */
public class FontRenderer {

    private final String name;
    private final TextureBase bitmap;
    private final int charWidth;
    private final int charHeight;

    public FontRenderer(String parName, TextureBase parTexture, int parCharWidth, int parCharHeight) {
        this.name = parName;
        this.bitmap = parTexture;
        this.charWidth = parCharWidth;
        this.charHeight = parCharHeight;
    }

    public TextureBase getTexture() {
        return this.bitmap;
    }

    public int getCharWidth() {
        return this.charWidth;
    }

    public int getCharHeight() {
        return this.charHeight;
    }

    /** Horizontal pixel offset of the glyph for this character in the atlas. */
    public int getCharU(char parChar) {
        return (((int) parChar) % (this.getTexture().getWidth() / this.charWidth)) * this.charWidth;
    }

    /** Vertical pixel offset of the glyph for this character in the atlas. */
    public int getCharV(char parChar) {
        return (((int) parChar) / (this.getTexture().getWidth() / this.charWidth)) * this.charHeight;
    }

    public String getName() {
        return this.name;
    }
}
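The getCharU/getCharV methods map a character code to pixel offsets in a grid atlas: the column is code modulo glyphs-per-row, the row is code divided by glyphs-per-row. A quick sketch of the same arithmetic (our illustration; the 128-pixel-wide atlas and 8x8 glyph size are assumed values, not taken from the class):

```python
# Same indexing as FontRenderer.getCharU/getCharV: glyphs sit in a grid in
# code-point order. Atlas width 128 and glyph size 8x8 are assumed values.
ATLAS_WIDTH, CHAR_W, CHAR_H = 128, 8, 8
CHARS_PER_ROW = ATLAS_WIDTH // CHAR_W  # 16 glyphs per row

def char_uv(ch):
    code = ord(ch)
    u = (code % CHARS_PER_ROW) * CHAR_W   # horizontal pixel offset
    v = (code // CHARS_PER_ROW) * CHAR_H  # vertical pixel offset
    return u, v

print(char_uv("A"))  # ord("A") = 65 -> column 1, row 4 -> (8, 32)
```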
## Workshop

Event date: Wednesday, 31 January 2018, 15:00 - Aula Seminari, SBAI

15:00

María Medina

A first example of nondegenerate sign-changing solution for the Yamabe problem with maximal rank

Abstract
In this talk we will construct a sequence of nondegenerate (in the sense of Duyckaerts-Kenig-Merle) nodal nonradial solutions to the critical Yamabe problem
$$-\Delta u=\frac{n(n-2)}{4}|u|^{\frac{4}{n-2}}u,\qquad u\in\mathcal{D}^{1,2}(\mathbb{R}^n),$$
which, for n = 4, provides the first example in the literature of a solution with maximal rank.
This is a joint work with M. Musso and J. Wei.

Tea and...something

16:15

Benedetta Pellacci

Università degli studi della Campania "Luigi Vanvitelli"

Nonlinear Helmholtz equations: some existence results

The aim of this talk is to present some existence results on radially symmetric oscillating solutions for a class of nonlinear autonomous Helmholtz equations and, in particular, to analyse their exact asymptotic behavior at infinity. Some generalizations to non-autonomous radial equations as well as existence results for non-radial solutions will be discussed. These results are linked with the existence of standing wave solutions of nonlinear wave equations with large frequencies.
Joint work with Rainer Mandel (Karlsruher Institut für Technologie) and Eugenio Montefusco ("Sapienza" Università di Roma)
Ministry, DG Comp talks on PPC sale terms not over yet

Negotiations between the energy ministry and the European Commission's Directorate-General for Competition (DG Comp) on the revised terms of main power utility PPC's follow-up effort to sell lignite units will continue this week, but are not expected to run beyond it, as a crucial Eurogroup meeting of eurozone finance ministers is scheduled for next Monday, March 11.

PPC's lignite disinvestment is a pending bailout requirement. It is one of the key commitments for the release, by the country's lenders, of a one-billion-euro tranche.

Throughout the previous week, the talks between the energy ministry and DG Comp were said to be nearing a deal. The fundamentals of the new sale's revised terms, which will feature improved conditions for investors following the initial effort's failure, have been set, but participation details concerning new entrants still need to be clarified, sources explained. "The main objective of the two sides is to resolve whatever pending issues remain in a way that will maximize the sale's chances of success this time around," one source said.

PPC is also making a committed effort for a successful follow-up sale. Last week, the utility's chief executive Manolis Panagiotakis provided the European Commission with a letter listing a series of factors he sees as crucial to the disinvestment's success. Panagiotakis drew attention to an EU law limiting the investment activity of non-EU investors, which he views as an obstacle to the sale. Russian, Chinese and American players of repute are interested in the PPC sale, according to the PPC boss, who is currently in Beijing for talks with Chinese firms.
\section{Introduction}
\setcounter{equation}{0}

Let $(g,h)$ be a pair of probability generating functions. By a \textit{discrete time and discrete state Galton-Watson branching process with immigration} (DBI-process) corresponding to $(g,h)$ we mean a discrete-time Markov chain $\{y(n): n = 0,1,2,\cdots\}$ with state space $\mathbb{N} := \{0,1,2,\cdots\}$ and one-step transition matrix $P(i,j)$ defined by
\begin{eqnarray}\label{1.1}
\sum^\infty_{j=0} P(i,j)z^j = g(z)^ih(z), \qquad i=0,1,2,\cdots,~ 0\le z\le 1.
\end{eqnarray}
The intuitive meaning of the process is clear from (\ref{1.1}). In particular, if $h(z) \equiv 1$, we simply call $\{y(n): n = 0,1,2,\cdots\}$ a \textit{discrete time and discrete state Galton-Watson branching process} (DB-process). Kawazu and Watanabe (1971) studied systematically the limit theorems of DBI-processes. They also characterized completely the class of the limit processes as \textit{continuous time and continuous state branching processes with immigration} (CBI-processes).

Let us consider a special class of the CBI-processes introduced in Kawazu and Watanabe (1971). Suppose that $R$ is a function on $[0,\infty)$ defined by
\begin{eqnarray}\label{1.2}
R(\lambda) = \beta\lambda - \alpha\lambda^2 - \int_0^\infty\Big(e^{-\lambda u}-1+\frac{\lambda u}{1+u^2}\Big)\mu(du),
\end{eqnarray}
where $\beta \in \mathbb{R}$ and $\alpha\ge0$ are constants and $(1\land u^2)\mu(du)$ is a finite measure on $(0,\infty)$, and $F$ is a function on $[0,\infty)$ defined by
\begin{eqnarray}\label{1.3}
F(\lambda) = b\lambda + \int_0^\infty(1-e^{-\lambda u})m(du),
\end{eqnarray}
where $b\ge0$ is a constant and $(1\land u)m(du)$ is a finite measure on $(0,\infty)$.
A Markov process $\{y(t): t\ge0\}$ with state space $\mathbb{R}_+ := [0,\infty)$ is called a \textit{CBI-process} if it has transition semigroup $(P_t)_{t\ge0}$ given by
\begin{eqnarray}\label{1.4}
\int_0^\infty e^{-\lambda y} P_t(x,dy) = \exp\bigg\{-x\psi_t(\lambda) - \int_0^tF(\psi_s(\lambda))ds\bigg\}, \qquad \lambda\ge 0,
\end{eqnarray}
where $\psi_t(\lambda)$ is the unique solution of
\begin{eqnarray}\label{1.5}
\frac{d\psi_t}{dt}(\lambda) = R(\psi_t(\lambda)), \qquad \psi_0(\lambda) = \lambda.
\end{eqnarray}
Clearly, the transition semigroup $(P_t)_{t\ge0}$ defined by (\ref{1.4}) is stochastically continuous. In particular, if $F(\lambda) \equiv 0$, we simply call $\{y(t): t\ge0\}$ a \textit{continuous time and continuous state branching process} (CB-process). A CBI-process is said to be \textit{conservative} if it does not explode, that is, $\mathbf{P}_x\{y(t) < \infty\} = 1$ for every $t\ge0$ and $x\in \mathbb{R}_+$, where $\mathbf{P}_x$ denotes the conditional law given $y(0)=x$. By Kawazu and Watanabe (1971, Theorem~1.2), the process is conservative if and only if
\begin{eqnarray*}
\int_{0+} R^*(\lambda)^{-1} d\lambda = \infty,
\end{eqnarray*}
where $R^*(\lambda) = R(\lambda) \vee 0$. (This is a correction to equation (1.21) of Kawazu and Watanabe (1971).)

Let $\{b_k\}$ and $\{c_k\}$ be sequences of positive numbers such that $b_k\to \infty$ and $c_k\to \infty$ as $k\to \infty$. Let $\{y_k(n): n\ge0\}$ be a sequence of DBI-processes corresponding to the parameters $\{(g_k,h_k)\}$ and assume $y_k(0) = c_k$.
Suppose that for all $t\ge0$ and $\lambda \ge0$ the limits
\begin{eqnarray}\label{1.6}
\lim_{k\to\infty} g_k^{[kt]}(e^{-\lambda/b_k})^{c_k} = \phi_1(t,\lambda) \quad\mbox{and}\quad \lim_{k\to\infty} \prod_{j=0}^{[kt]-1}h_k(g_k^j(e^{-\lambda/b_k})) = \phi_2(t,\lambda)
\end{eqnarray}
exist and the convergence is locally uniform in $\lambda\ge0$ for each fixed $t\ge0$, where $g_k^j$ denotes the $j$-order composition of $g_k$ and $[kt]$ denotes the integer part of $kt$. The following result was proved in Kawazu and Watanabe (1971, Theorem~2.1):

\begin{theorem}\label{t1.1}
Suppose that (\ref{1.6}) holds and $\phi_1(t,\lambda) <1$ for some $t>0$ and $\lambda >0$. Then $\{y_k([kt])/b_k: t\ge0\}$ converges in finite-dimensional distributions to a stochastically continuous and conservative CBI-process $\{y(t): t\ge0\}$ with transition semigroup given by (\ref{1.4}).
\end{theorem}

Based on this theorem, Kawazu and Watanabe (1971) showed that, given each stochastically continuous and conservative CBI-process $\{y(t): t\ge0\}$, there is a sequence of positive numbers $\{b_k\}$ with $b_k\to \infty$ and a sequence of DBI-processes $\{y_k(n): n\ge0\}$ such that $\{y_k([kt])/b_k: t\ge0\}$ converges in finite-dimensional distributions to $\{y(t): t\ge0\}$. Their results have become the basis of many studies of branching processes with immigration; see e.g.\ Pitman and Yor (1982) and Shiga and Watanabe (1973). On the other hand, since condition (\ref{1.6}) involves complicated compositions of the probability generating functions, it is sometimes not so easy to verify. In view of the characterizations (\ref{1.1}), (\ref{1.4}) and (\ref{1.5}) of the two classes of processes, one naturally expects some simple sufficient conditions for the convergence of the DBI-processes to the CBI-processes given in terms of the parameters $(g,h)$ and $(R,F)$. The purpose of this note is to provide a set of conditions of this type.
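As a standard closed-form illustration of (\ref{1.4}) and (\ref{1.5}) (added here for concreteness; it is not part of the note's argument), take $R(\lambda) = -\alpha\lambda^2$ with $\alpha>0$ and $F(\lambda) = b\lambda$. Then (\ref{1.5}) can be solved explicitly:
\begin{eqnarray*}
\frac{d\psi_t}{dt}(\lambda) = -\alpha\psi_t(\lambda)^2, \quad \psi_0(\lambda) = \lambda \qquad\Longrightarrow\qquad \psi_t(\lambda) = \frac{\lambda}{1+\alpha\lambda t},
\end{eqnarray*}
and
\begin{eqnarray*}
\int_0^t F(\psi_s(\lambda))ds = b\int_0^t \frac{\lambda\, ds}{1+\alpha\lambda s} = \frac{b}{\alpha}\log(1+\alpha\lambda t),
\end{eqnarray*}
so that (\ref{1.4}) reads
\begin{eqnarray*}
\int_0^\infty e^{-\lambda y} P_t(x,dy) = (1+\alpha\lambda t)^{-b/\alpha} \exp\Big\{-\frac{x\lambda}{1+\alpha\lambda t}\Big\}.
\end{eqnarray*}
Since $R^*(\lambda) = 0$ in this case, the conservativity criterion above is trivially satisfied.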
For the convenience of proof, we shall discuss the convergence of $\{y_k([\gamma_kt])/k: t\ge0\}$ for some sequence of positive numbers $\{\gamma_k\}$ with $\gamma_k\to \infty$, which is slightly different from the scaling of Kawazu and Watanabe (1971). Instead of the convergence of finite-dimensional distributions, we shall consider the weak convergence on the space of c\`adl\`ag functions $D([0,\infty), \mathbb{R}_+)$.

\section{The limit theorem}
\setcounter{equation}{0}

In this section, we prove a limit theorem for DBI-processes on the space $D([0,\infty), \mathbb{R}_+)$. Let $F$ be defined by (\ref{1.3}). For simplicity we assume the function $R$ is given by
\begin{eqnarray}\label{2.1}
R(\lambda) = \beta\lambda - \alpha\lambda^2 - \int_0^\infty(e^{-\lambda u}-1+\lambda u)\mu(du), \qquad \lambda\ge 0,
\end{eqnarray}
where $\beta \in \mathbb{R}$ and $\alpha\ge0$ are constants and $(u\land u^2)\mu(du)$ is a finite measure on $(0,\infty)$. Suppose that $\{y(t): t\ge0\}$ is a CBI-process corresponding to $(R,F)$. Let $\{y_k(n): n\ge0\}$ be a sequence of DBI-processes corresponding to the parameters $\{(g_k,h_k)\}$ and let $\{\gamma_k\}$ be a sequence of positive numbers. For $0\le \lambda\le k$ set
\begin{eqnarray}\label{2.2}
F_k(\lambda) = \gamma_k[1 - h_k(1-\lambda/k)]
\end{eqnarray}
and
\begin{eqnarray}\label{2.3}
R_k(\lambda) = k\gamma_k[(1-\lambda/k) - g_k(1-\lambda/k)].
\end{eqnarray}
Let us consider the following set of conditions:

(2.A) As $k\to \infty$, we have $\gamma_k \to\infty$ and $\gamma_k/k \to$ some $\gamma_0\ge 0$.

(2.B) As $k\to \infty$, the sequence $\{F_k\}$ defined by (\ref{2.2}) converges to a continuous function.

(2.C) The sequence $\{R_k\}$ defined by (\ref{2.3}) is uniformly Lipschitz on each bounded interval and converges to a continuous function as $k\to \infty$.
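These conditions can be checked numerically for concrete offspring and immigration laws. The following sketch (our illustration, not taken from the note) uses $\gamma_k = k$, Poisson offspring pgf $g_k(z) = e^{m_k(z-1)}$ with $m_k = 1 + \beta/k$, and fixed Poisson immigration pgf $h_k(z) = e^{b(z-1)}$; the expected limits are then $R(\lambda) = \beta\lambda - \lambda^2/2$ (so $\alpha = 1/2$, $\mu = 0$ in (2.1)) and $F(\lambda) = b\lambda$:

```python
import math

# Hypothetical concrete choice: gamma_k = k, Poisson(m_k) offspring with
# m_k = 1 + beta/k, Poisson(b) immigration. All parameter values are ours.
def R_k(lam, k, beta):
    m_k = 1.0 + beta / k
    g = math.exp(m_k * (-lam / k))        # g_k(1 - lam/k) = exp(m_k*((1-lam/k)-1))
    return k * k * ((1.0 - lam / k) - g)  # k*gamma_k*[(1-lam/k) - g_k(1-lam/k)]

def F_k(lam, k, b):
    h = math.exp(b * (-lam / k))          # h_k(1 - lam/k)
    return k * (1.0 - h)                  # gamma_k*[1 - h_k(1-lam/k)]

beta, b, lam = 0.3, 1.5, 2.0
for k in (10**2, 10**4, 10**6):
    # errors against R(lam) = beta*lam - lam**2/2 and F(lam) = b*lam shrink as k grows
    print(abs(R_k(lam, k, beta) - (beta * lam - lam**2 / 2)),
          abs(F_k(lam, k, b) - b * lam))
```

Here $\gamma_0 = \lim_k \gamma_k/k = 1$, consistent with condition (2.A).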
We remark that conditions (2.B) and (2.C) parallel the sufficient conditions for the convergence of continuous-time and discrete state branching processes with immigration; see, e.g., Li (1992) for the discussions in the setting of measure-valued processes. Based on the results of Li (1991), the following lemma can be proved by modifying the arguments of the proofs of Li (1992, Lemmas~3.4 and~4.1).

\begin{lemma}\label{l2.1}
(i) Under conditions (2.B) and (2.C), the limit functions $F$ and $R$ of $\{F_k\}$ and $\{R_k\}$ have representations (\ref{1.3}) and (\ref{2.1}), respectively.

(ii) For any $(F,R)$ given by (\ref{1.3}) and (\ref{2.1}), there are sequences $\{\gamma_k\}$ and $\{(g_k, h_k)\}$ as above such that (2.A), (2.B) and (2.C) hold with $F_k \to F$ and $R_k \to R$.
\end{lemma}

For $\lambda\ge0$ we set
\begin{eqnarray}\label{2.4}
S_k(\lambda) = k\gamma_k[(1-\lambda/k) - g_k(e^{-\lambda/k})].
\end{eqnarray}

\begin{lemma}\label{l2.2}
Under conditions (2.A) and (2.C), let $R = \lim_{k\to\infty} R_k$. Then we have
\begin{eqnarray}\label{2.5}
\lim_{k\to\infty} S_k(\lambda) = R(\lambda) - \gamma_0\lambda^2/2 \quad\mbox{and}\quad \lim_{k\to\infty} \gamma_k[1 - g_k(e^{-\lambda/k})] = \gamma_0\lambda
\end{eqnarray}
uniformly on each bounded interval.
\end{lemma}

\noindent\textit{Proof.~~} By the mean-value theorem we have
\begin{eqnarray}\label{2.6}
S_k(\lambda) = R_k(\lambda) - k\gamma_kg_k^\prime(\eta_k) (e^{-\lambda/k}-1+\lambda/k),
\end{eqnarray}
where $1-\lambda/k < \eta_k < e^{-\lambda/k}$ and $g_k^\prime$ denotes the derivative of $g_k$. Under condition (2.C), the sequence $R_k^\prime(\lambda) = \gamma_k[g_k^\prime (1-\lambda/k)-1]$ is uniformly bounded on each bounded interval $\lambda \in [0,l]$ for $l \ge 0$. Then $g_k^\prime(1-\lambda/k) \to 1$ uniformly on each bounded interval. In particular, we have $g_k^\prime (\eta_k) \to 1$, and the first equality in (\ref{2.5}) follows from (2.A) and (\ref{2.6}). The second equality follows by a similar argument. \hfill$\Box$\medskip

\begin{theorem}\label{t2.1}
Suppose conditions (2.A), (2.B) and (2.C) hold with $F = \lim_{k\to\infty} F_k$ and $R = \lim_{k\to\infty} R_k$. If $y_k(0)/k$ converges in distribution to $y(0)$, then $\{y_k([\gamma_kt])/k: t\ge0\}$ converges in distribution on $D([0,\infty), \mathbb{R}_+)$ to the CBI-process $\{y(t): t\ge0\}$ corresponding to $(R,F)$ with initial value $y(0)$.
\end{theorem}

\noindent\textit{Proof.~~} Let $(P_t)_{t\ge0}$ denote the transition semigroup of the CBI-process corresponding to $(R,F)$. For $\lambda>0$ and $x\ge0$ set $e_\lambda(x) = e^{-\lambda x}$. We denote by $D_1$ the linear hull of $\{e_\lambda: \lambda>0\}$. Then $D_1$ is an algebra which strongly separates the points of $\mathbb{R}_+$. Let $C_0(\mathbb{R}_+)$ be the space of continuous functions on $\mathbb{R}_+$ vanishing at infinity. By the Stone-Weierstrass theorem, $D_1$ is dense in $C_0(\mathbb{R}_+)$ for the supremum norm; see, e.g., Hewitt and Stromberg (1965, pp.98-99). For $\lambda>0$ we set
\begin{eqnarray}\label{2.7}
Ae_\lambda(x) = -e^{-\lambda x}\left[xR(\lambda) + F(\lambda)\right], \qquad x\in \mathbb{R}_+,
\end{eqnarray}
and extend the definition of $A$ to $D_1$ by linearity. Then $A := \{(f,Af): f \in D_1\}$ is a linear subspace of $C_0(\mathbb{R}_+) \times C_0(\mathbb{R}_+)$. Since $D_1$ is invariant under $(P_t)_{t\ge0}$, it is a core of $A$; see, e.g., Ethier and Kurtz (1986, p.17). With those observations it is not hard to see that the semigroup $(P_t)_{t\ge0}$ is generated by the closure of $A$; see, e.g., Ethier and Kurtz (1986, p.15 and p.17).
Note that $\{y_k (n)/k: n\ge0\}$ is a Markov chain with state space $E_k := \{0,1/k,2/k,\cdots\}$ and one-step transition probability $Q_k(x,dy)$ determined by \begin{eqnarray*}}\def\eeqnn{\end{eqnarray*} \int_{E_k}e^{-\lambda y}Q_k(x,dy) = g_k(e^{-\lambda/k})^{kx}h_k(e^{-\lambda/k}). \eeqnn Then the (discrete) generator $A_k$ of $\{y_k ([\gamma_kt])/k: t\ge0\}$ is given by \begin{eqnarray*}}\def\eeqnn{\end{eqnarray*} A_ke_\lambda(x) &=& \gamma_k\Big[g_k(e^{-\lambda/k})^{kx}h_k(e^{-\lambda/k}) - e^{-\lambda x}\Big] \\ &=& \gamma_k\Big[\exp\{xk\alpha_k(\lambda)(g_k(e^{-\lambda/k})-1)\} \exp\{\beta_k(\lambda)(h_k(e^{-\lambda/k})-1)\} - e^{-\lambda x}\Big], \eeqnn where \begin{eqnarray*}}\def\eeqnn{\end{eqnarray*} \alpha_k(\lambda) = (g_k(e^{-\lambda/k})-1)^{-1} \log g_k(e^{-\lambda/k}) \eeqnn and $\beta_k(\lambda)$ is defined by the same formula with $g_k$ replaced by $h_k$. Under conditions (2.A), (2.B) and (2.C), it is easy to show that \begin{eqnarray*}}\def\eeqnn{\end{eqnarray*} \lim_{k\to\infty}(g_k(e^{-\lambda/k})-1) = \lim_{k\to\infty}(h_k(e^{-\lambda/k})-1) = 0 \eeqnn and \begin{eqnarray*}}\def\eeqnn{\end{eqnarray*} \lim_{k\to\infty} \alpha_k(\lambda) = \lim_{k\to\infty} \beta_k(\lambda) =1. \eeqnn Then we have \begin{eqnarray}}\def\eeqlb{\end{eqnarray}\label{2.8} A_ke_\lambda(x) = -e^{-\lambda x}\left[x\alpha_k(\lambda)S_k(\lambda) + x\gamma_k(\alpha_k(\lambda)-1)\lambda + H_k(\lambda)\right] + o(1), \eeqlb where \begin{eqnarray*}}\def\eeqnn{\end{eqnarray*} H_k(\lambda) = \gamma_k\beta_k(\lambda)(1-h_k(e^{-\lambda/k})). \eeqnn By elementary calculations we find that \begin{eqnarray*}}\def\eeqnn{\end{eqnarray*} \alpha_k(\lambda) = 1 + \frac{1}{2}(1-g_k(e^{-\lambda/k})) + o(1-g_k(e^{-\lambda/k})), \eeqnn and so $\lim_{k\to\infty} \gamma_k(\alpha_k(\lambda)-1) = \gamma_0\lambda/2$ by Lemma~\ref{l2.2}. 
It follows that \begin{eqnarray*}}\def\eeqnn{\end{eqnarray*} \lim_{k\to\infty}\left[\alpha_k(\lambda)S_k(\lambda) + \gamma_k(\alpha_k(\lambda)-1)\lambda\right] = R(\lambda). \eeqnn By the argument of the proof of Lemma~\ref{l2.2} one can show that \begin{eqnarray*}}\def\eeqnn{\end{eqnarray*} \lim_{k\to\infty}H_k(\lambda) = \lim_{k\to\infty} F_k(\lambda) = F(\lambda). \eeqnn In view of (\ref{2.7}) and (\ref{2.8}) we get \begin{eqnarray*}}\def\eeqnn{\end{eqnarray*} \lim_{k\to\infty} \sup_{x\in E_k}\left|A_ke_\lambda(x) - Ae_\lambda(x)\right| = 0 \eeqnn for each $\lambda>0$. This clearly implies that \begin{eqnarray*}}\def\eeqnn{\end{eqnarray*} \lim_{k\to\infty} \sup_{x\in E_k}\left|A_kf(x) - Af(x)\right| = 0 \eeqnn for each $f\in D_1$. By Ethier and Kurtz (1986, p.226 and pp.233-234) we find that $\{y_k([\gamma_kt])/k: t\ge0\}$ converges in distribution on $D([0,\infty), \mathbb{R}_+)$ to the CBI-process corresponding to $(R,F)$. \qed By Lemma~\ref{l2.1} and Theorem~\ref{t2.1}, for any functions $(R,F)$ given by (\ref{1.3}) and (\ref{2.1}), there is a sequence of positive numbers $\{\gamma_k\}$ and a sequence of DBI-processes $\{y_k(n): n\ge0\}$ such that $\{y_k([\gamma_kt])/k: t\ge0\}$ converges in distribution on $D([0,\infty), \mathbb{R}_+)$ to the CBI-process corresponding to $(R,F)$. \section{Generalized Ray-Knight theorems} \setcounter{equation}{0} As an example of the applications of their limit theorems, Kawazu and Watanabe (1971) reproved the Ray-Knight theorems of diffusion characterizations of the Brownian local time. In this section, we generalize the results to the case of a Brownian motion with drift. We refer the reader to Le Gall and Le Jan (1998) for another adequate formulation of the Ray-Knight theorems for general L\'evy processes. Let $A = \alpha d^2/dx^2 + \beta d/dx$ for given constants $\alpha >0$ and $\beta \in \mathbb{R}$. Then $A$ generates a one-dimensional Brownian motion with drift $(X_t, \mathscr{F}_t, \mathbf{P}_x)$. 
The \textit{local time} of $\{X_t: t\ge0\}$ is a continuous two-parameter process $\{l(t,x): t\ge0, x\in \mathbb{R}\}$ such that the following property holds almost surely: \begin{eqnarray}}\def\eeqlb{\end{eqnarray}\label{3.1} 2\int_B l(t,x)dx = \int_0^t 1_B(X_s)ds, \qquad B\in \mathscr{B}(\mathbb{R}), \eeqlb where $\mathscr{B}(\mathbb{R})$ denotes the Borel $\sigma$-algebra of $\mathbb{R}$ and $1_B$ denotes the indicator function of $B$. For fixed $a\ge0$ let \begin{eqnarray}}\def\eeqlb{\end{eqnarray}\label{3.2} l^{-1}(u,a) = \inf\{t\ge0: l(t,a)=u\}. \eeqlb \begin{theorem}\sl{}\def\etheorem{\end{theorem}}\label{t3.1} The process \begin{eqnarray}}\def\eeqlb{\end{eqnarray}\label{3.3} \xi_u(t) = l(l^{-1}(u,a),a+t), \qquad t\ge0, \eeqlb is a diffusion generated by \begin{eqnarray}}\def\eeqlb{\end{eqnarray}\label{3.4} x\frac{d^2}{dx^2} + \frac{\beta}{\alpha} x\frac{d}{dx}. \eeqlb \etheorem \noindent\textit{Proof.~~}}\def\qed{\hfill$\Box$\medskip We follow the ideas of Kawazu and Watanabe (1971, Example~2.2). For $c\in \mathbb{R}$ let $\sigma_c = \inf\{t\ge0: X_t=c\}$. Let $\delta>0$ and let $u_\delta(x) = \mathbf{P}_x\{\sigma_\delta < \sigma_{-\delta}\} = 1 - \mathbf{P}_x\{\sigma_\delta > \sigma_{-\delta}\}$. Then $u_\delta(\cdot)$ satisfies \begin{eqnarray*}}\def\eeqnn{\end{eqnarray*} \alpha \frac{d^2}{dx^2}u_\delta(x) + \beta \frac{d}{dx}u_\delta(x) = 0, \qquad |x| \le \delta, \eeqnn with $u_\delta(\delta) = 1$ and $u_\delta(-\delta) = 0$. Solving this boundary value problem we find that \begin{eqnarray*}}\def\eeqnn{\end{eqnarray*} u_\delta(x) = \frac{\exp\{\beta\delta/\alpha\} - \exp\{-\beta x/\alpha\}} {\exp\{\beta\delta/\alpha\} - \exp\{-\beta\delta/\alpha\}}. \eeqnn By a $\delta$-downcrossing at $x\in \mathbb{R}$ before time $T>0$ we mean an interval $[u,v] \subset [0,T)$ such that $X_u = x+\delta$, $X_v = x$ and $x<X_t<x+\delta$ for all $u<t<v$. Let $\eta_\delta$ denote the number of $\delta$-downcrossings at $0$ before time $\sigma_{-\delta}$.
By the property of independent increments of the Brownian motion with drift we have \begin{eqnarray*}}\def\eeqnn{\end{eqnarray*} \mathbf{E}_0[z^{\eta_\delta}] = \sum_{i=0}^\infty (1-p)(pz)^i = \frac{q}{1-pz}, \eeqnn where $p=u_{\delta}(0)$, $q=1-p$ and $\mathbf{E}_0$ denotes the expectation under $\mathbf{P}_0$. Let $x_i = a + i/k$ for $i\ge0$ and $k\ge1$ and let $Z_k(i)$ denote the number of $1/k$-downcrossings at $x_i$ before time $l^{-1}(u,a)$. It is easy to see that $Z_k(i+1)$ is the sum of $Z_k(i)$ independent copies of $\eta_{1/k}$. Thus $\{Z_k(i): i=0,1,\cdots\}$ is a DB-process corresponding to the generating function \begin{eqnarray*}}\def\eeqnn{\end{eqnarray*} g_k(z) = \frac{q_k}{1-p_kz}, \eeqnn where $p_k=u_{1/k}(0)$ and $q_k=1-p_k$. By a standard result for local times of diffusion processes, \begin{eqnarray*}}\def\eeqnn{\end{eqnarray*} \lim_{k\to \infty} Z_k([kt])/k = l(l^{-1}(u,a),a+t) = \xi_u(t); \eeqnn see It\^o and McKean (1965, p.48 and p.222). Then Theorem~\ref{t2.1} implies that the limit $\{\xi_u(t): t\ge0\}$ is a CB-process corresponding to \begin{eqnarray*}}\def\eeqnn{\end{eqnarray*} R(\lambda) = \lim_{k\to\infty} k^2[(1-\lambda/k) - g_k(1-\lambda/k)] = \frac{\beta}{\alpha}\lambda - \lambda^2. \eeqnn This proves the desired result. \qed Kawazu and Watanabe (1971, Theorem~2.3 and Example~2.2) proved the results of Theorem~\ref{t3.1} in the special case $\beta=0$. In that case the generating function $g_k$ is actually independent of $k\ge 1$. In the general case, it seems difficult to check condition (\ref{1.6}) for the sequence $\{g_k\}$. By arguments similar to the above we obtain the following \begin{theorem}\sl{}\def\etheorem{\end{theorem}}\label{t3.2} The process \begin{eqnarray}}\def\eeqlb{\end{eqnarray}\label{3.5} \eta_u(t) = l(l^{-1}(u,a),a-t), \qquad 0\le t\le a, \eeqlb is a diffusion generated by \begin{eqnarray}}\def\eeqlb{\end{eqnarray}\label{3.6} x\frac{d^2}{dx^2} + \frac{\beta}{\alpha} x\frac{d}{dx} + \frac{d}{dx}.
\eeqlb \etheorem \bigskip \textbf{Acknowledgement.} I would like to thank the referee for a number of helpful comments. \noindent
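The branching mechanism $R(\lambda) = (\beta/\alpha)\lambda - \lambda^2$ computed in the proof of Theorem~\ref{t3.1} can be double-checked symbolically. The following sketch is not part of the paper: it assumes the Python library sympy, writes $c$ for $\beta/\alpha$ and $\delta$ for $1/k$, and evaluates the limit defining $R(\lambda)$ for the geometric generating function $g_k$ obtained from the downcrossing probabilities.

```python
import sympy as sp

lam, d, c = sp.symbols('lam delta c', positive=True)  # c plays the role of beta/alpha, d of 1/k

# p_k = u_{1/k}(0): chance that the drifted Brownian motion started at 0 hits 1/k before -1/k,
# from the boundary value problem solved in the proof of Theorem 3.1
p = (sp.exp(c*d) - 1) / (sp.exp(c*d) - sp.exp(-c*d))
q = 1 - p
g = q / (1 - p*(1 - lam*d))  # geometric generating function g_k evaluated at z = 1 - lam/k

# R(lam) = lim_{k -> oo} k^2 [ (1 - lam/k) - g_k(1 - lam/k) ], with d = 1/k -> 0+
R = sp.limit(((1 - lam*d) - g) / d**2, d, 0, '+')
print(sp.simplify(R))  # should agree with c*lam - lam**2, i.e. (beta/alpha)*lambda - lambda^2
```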
The Educational Service Center of Central Ohio (ESC) is requesting proposals for a new VoIP telephone system to replace the existing phone system at its central office, located at 2080 Citygate Drive, Columbus, Ohio. Proposals will be accepted until 4:00 p.m. EST on Tuesday, March 22, 2016. Please click here for the Request for Proposal document. Please click here for questions and answers that have been submitted. Please click here for a list of participating vendors.
\section{Introduction} \label{sec:intro} Suppose that we have a response vector $y\in\mathds{R}^n$, a matrix $X \in \mathds{R}^{n\times p}$ of predictor variables and the usual linear regression setup: \begin{equation} \label{eq:model} y = X\beta^* + \sigma\epsilon, \end{equation} where $\beta^* \in \mathds{R}^p$ are unknown coefficients to be estimated, $\sigma^2>0$ is the noise variance, and the components of the noise vector $\epsilon \in \mathds{R}^n$ are i.i.d. with $\mathbb{E}[\epsilon_i]=0$ and $\mathrm{Var}(\epsilon_i)=1$. We assume that $y$ has been centered, and the columns of $X$ are centered and scaled, so that we can omit an intercept in the model. The lasso estimator \cite{lasso,bp} is defined as \begin{equation} \label{eqn:lasso} \hat{\beta} = \mathop{\mathrm{argmin}}_{\beta\in\mathds{R}^p} \, \frac{1}{2}\|y-X\beta\|_2^2 + \lambda\|\beta\|_1, \end{equation} where $\lambda \geq 0$ is a tuning parameter, controlling the degree of sparsity in the estimate $\hat{\beta}$. Variable selection is important in many modern applications, for which the lasso has proven to be successful. However, this method has known limitations in certain settings: when $p>n$, there is a solution with at most $n$ non-zero coefficients, and if a group of relevant variables is highly correlated, the lasso tends to include only one of them in the model. {These conditions occur frequently in real applications, such as genomics, where we often have a large number of predictors that can be divided into highly correlated groups. It is therefore of practical interest to overcome these limitations.} The elastic net~\cite{enet} can sometimes improve the performance of the lasso. The elastic net penalty is the weighted sum of the squared $\ell_{2}$ norm and the $\ell_{1}$ norm of the coefficient vector to be estimated: $P_{\alpha}(\beta)=\frac{(1-\alpha)}{2}||\beta||_{2}^2 + \alpha||\beta||_{1}$. It is equivalent to the ridge regression penalty when $\alpha = 0$, and to the lasso penalty when $\alpha =1$.
The elastic net solves the following problem: \begin{equation} \label{eq:en} \hat{\beta} = \mathop{\mathrm{argmin}}_{\beta\in\mathds{R}^p} \, \frac{1}{2}\|y-X\beta\|_2^2 + \lambda P_{\alpha}(\beta). \end{equation} {The elastic net penalty is strictly convex, by strict convexity of the $\ell_{2}$ norm. Using this fact, the authors provide an upper bound on the distance between coefficients that correspond to highly correlated predictors.} This guarantees the grouping effect of the elastic net. Moreover, the elastic net solution can have more than $n$ non-zero coefficients, even when $p>n$, since it is equivalent to solving the lasso on an augmented dataset. It is easy to see that the elastic net regularizes the feature covariance matrix from $X^TX$ to a form $X^TX+ \frac{1-\alpha}{2}\cdot\lambda I_p$ where $I_p$ is the $p\times p$ identity matrix. By inflating the diagonal it reduces the effective size of the off-diagonal correlations. {If the feature covariance matrix is block diagonal, its connected components correspond to groups of predictors that are correlated with each other but not with predictors in other groups.} Here, we introduce a method adapted to situations where the sample covariance matrix is approximately block diagonal. Our proposed method, {\em the component lasso}, applies a more severe form of decorrelation than the elastic net to exploit this structure. Consider the inverse of the covariance matrix of the predictors. Zeros in this matrix correspond to conditionally independent variables. Recent work has focused on estimating a sparse version of the inverse covariance by optimizing the $\ell_1$ penalized log-likelihood. The so-called ``graphical lasso'' algorithm solves the problem by cycling through the variables and fitting a modified lasso regression to each one. In their ``scout'' procedure, \citeasnoun{WT2009} used the graphical lasso in a penalized regression framework to estimate the inverse covariance of $X$. 
Then they applied a modified form of the lasso to estimate the regression parameters. More recently, a connection between the graphical lasso and connected components has been established by \citeasnoun{WFS2011} and \citeasnoun{MH2012}. Specifically, the connected components in the estimated inverse covariance matrix correspond exactly to those obtained from single-linkage clustering of the correlation matrix. Clustering the correlated variables before estimating the parameters has been suggested by \citeasnoun{GE2007} and \citeasnoun{BUH2007}. In this paper, we propose a simple new idea to make use of the connected components in penalized regression. The {\em component lasso} works by (a) finding the connected components of the estimated covariance matrix, (b) solving separate lasso problems for each component, and then (c) combining the componentwise predictions into one final prediction. We show that this approach can improve the accuracy and interpretability of the lasso and elastic net methods. The method is summarized in Figure \ref{fig:clgph}. \begin{figure}[!h] \centering \includegraphics[scale=0.7]{compgraph.png} \caption{\em The component lasso steps: The predictors are split according to the estimated connected components of the sample covariance matrix. The lasso is applied to each subset of predictors to separately estimate the coefficients and to predict the response. Finally, the different coefficient vectors are combined using a non-negative least squares fit of $y$ on the $K$ predictions from each component. } \label{fig:clgph} \end{figure} The following example motivates the remainder of the paper. Consider eight predictors, and let the corresponding covariance matrix be block diagonal with two blocks. Suppose that the predictors corresponding to the first block, or equivalently component, are all signal variables. The second component contains only noise variables.
Figure~\ref{fig:paths} shows the coefficient paths for the naive and non-naive elastic net, and the component lasso before and after the non-negative least squares (NNLS) recombination step when the sample covariance is split into two blocks. The paths are plotted for all values of the tuning parameter $\lambda$. \begin{figure} \begin{minipage}[t]{0.45\textwidth} \includegraphics[width=\linewidth]{naiveen_path.png} \end{minipage} \hspace{\fill} \begin{minipage}[t]{0.45\textwidth} \includegraphics[width=\linewidth]{en_path.png} \end{minipage} \vspace*{0.5cm} \begin{minipage}[t]{0.45\textwidth} \includegraphics[width=\linewidth]{cl_nonnls.png} \end{minipage} \hspace{\fill} \begin{minipage}[t]{0.45\textwidth} \includegraphics[width=\linewidth]{cl_path.png} \end{minipage} \caption{\em Coefficient paths for: the naive elastic net (top left), the non-naive elastic net (top right), the component lasso before non-negative least squares (bottom left), and the component lasso (bottom right). The signal variables are shown in blue, while the non-signal variables are in red.} \label{fig:paths} \end{figure} The example shows the role that NNLS plays in selecting the relevant component which contains the signal variables (in blue) and reducing the coefficients of the noise variables (in red) in the second component to zero. This illustrates the possible improvements that can be achieved by finding the block-diagonal structure of the sample covariance matrix, as compared to standard methods. The remainder of the paper is organized as follows. We explain our algorithm in Section \ref{sec:comp_lass}. Section \ref{sec:ex} includes simulated and real data results. Section \ref{sec:comp} focuses on the computational complexity of the component lasso, and presents ideas for making it more efficient. We conclude the paper with a short discussion in Section \ref{sec:discussion}, including possible extensions to generalized linear models.
\section{The Component Lasso} \label{sec:comp_lass} \subsection{The main idea} The lasso minimizes the $\ell_1$ penalized criterion (\ref{eqn:lasso}) whose corresponding subgradient equation is \begin{equation} X^TX\beta - X^T y + \lambda\cdot \text{sign}(\beta) = 0, \label{eqn:subg} \end{equation} where $\text{sign}(\beta)$ is a vector with components $s_j=\text{sign} (\beta_j)$ if $\beta_j\neq 0$ and $s_j \in [-1,1]$ if $\beta_j= 0$. The solution to the lasso can be written as \begin{equation} \hat{\beta} = (X^TX)^{-}(X^Ty-\lambda\cdot \text{sign}(\hat{\beta})) \label{eqn:lassosol} \end{equation} where $(X^TX)^{-}$ represents a generalized inverse of $X^TX$. Let $\Sigma=\text{cov}(X)$. We propose replacing $(X^TX)^{-}$ by a block diagonal estimate $n^{-1}\hat\Theta \approx n^{-1}\Sigma^{-1} $, the blocks of $\hat\Theta$ being the (estimated) connected components. Finding $K$ connected components splits the subgradient equation into $K$ separate equations: \begin{equation} \label{eqn:subg2} X^T_k X_k \beta_k - X_k^T y + \lambda\cdot \text{sign}(\beta_k ) = 0 \end{equation} for $k = 1, 2,\dots K$, where $X_k$ is a subset of $X$ containing the observations of the predictors in the $k$th component, and $\beta_k$ contains the corresponding coefficients. Each subproblem can be solved individually using a standard lasso or elastic net algorithm. The resultant coefficients $\beta_k$ are then combined into a solution to the original problem. The use of the block-diagonal covariance matrix creates a substantial bias in the coefficient estimates, so the combination step is quite important. We scale the componentwise solution vectors $\hat\beta_1, \hat\beta_2, \ldots \hat\beta_K$ using a non-negative least squares refitting of $y$ on $\{\hat{y}_k=X_k\hat\beta_k\}, k=1, \dots, K$. The non-negativity constraint seems natural since each componentwise predictor should have positive correlation with the outcome. 
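As an illustration of the recombination step just described, the sketch below (ours, not from the paper; it assumes numpy and scipy, with synthetic componentwise predictions) fits non-negative weights $\hat c_k$ by NNLS; a component whose prediction does not help explain $y$ is driven to (near-)zero weight.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n, K = 50, 3
# Columns of yhat stand in for the predictions X_k beta_k from K componentwise lasso fits
yhat = rng.normal(size=(n, K))
# The response loads on the first and third components only
y = 2.0 * yhat[:, 0] + 0.5 * yhat[:, 2] + 0.1 * rng.normal(size=n)

# Non-negative least squares: minimize ||y - yhat w||_2 subject to w >= 0
w, _ = nnls(yhat, y)
print(w)  # roughly (2.0, 0.0, 0.5): the uninformative component gets (near-)zero weight
```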
The component lasso objective function, corresponding to a block diagonal estimate of the sample covariance with connected components $C_1,\dots,C_K$ is: \begin{eqnarray} J(\beta, c)= \sum_ {k=1}^K \sum_{i=1}^n \left[\frac{1}{2}\left(y_i-c_k\sum_{j\in C_k} X_{ij}\beta_j \right)^2+\lambda\left(\sum_{j \in C_k} \alpha|\beta_j| + \frac{(1-\alpha)}{2}||\beta_j||_2^2 \right) \; \right] \label{eqn:obj} \end{eqnarray} subject to $c_k\geq 0 \; \forall k$. Our algorithm (detailed below) sets $c_k=1 \; \forall k$, optimizes over $\beta$, and then optimizes over $c$. Consider an extreme case where the sample covariance matrix happens to be block diagonal with $K$ connected components. This occurs when predictors in different correlated groups are orthogonal to each other. The subgradient equation of the lasso splits naturally into separate systems of equations as in equation (\ref{eqn:subg2}) for the component lasso. The lasso coefficients will be identical to the component lasso coefficients before the NNLS step, which reweights the predictors corresponding to each component. This can be easily extended to the elastic net. Let $\lambda'=\frac{\lambda(1-\alpha)}{2}$ be the tuning parameter corresponding to the $\ell_2$ penalty. The naive elastic net problem can be written as a lasso problem on an augmented data set $(y^*,X^*)$, where $y^*=(y,0)$ is now an $n+p$ vector and $$ X^*=(1+\lambda')^{-1/2}\left[ \begin{array}{c} X \\ \sqrt {\lambda'} I \end{array}\right] .$$ The sample covariance matrix corresponding to the augmented observations $X^*$ $$(1+\lambda')^{-1}(X^TX + \lambda'I)$$ is clearly block diagonal when the predictors in different components are orthogonal. Therefore, the subgradient equation of the elastic net splits as well, and the elastic net coefficients will be identical to those of the component lasso before the NNLS step. 
In this case, the model chosen by the component lasso will involve splitting the predictors into components if the NNLS reweighting is useful in minimizing the validation MSE. \subsection{Details of connected component estimation via the graphical lasso} Given an observed covariance $S={X^TX/n}$ with $X \sim \mathcal{N}(0,\Sigma)$, the graphical lasso estimates $\Theta=\Sigma^{-1}$ by maximizing the penalized log-likelihood \begin{eqnarray} \ell(\Theta)=\log{\rm det} \Theta-{\rm tr}(S\Theta)-\tau ||\Theta||_1 \end{eqnarray} over all non-negative definite matrices $\Theta$. The KKT conditions for this problem are \begin{eqnarray} \Theta^{-1} - S - \tau \Gamma(\Theta)=0, \end{eqnarray} where $\Gamma(\Theta)$ is a matrix of componentwise subgradients $\text{sign}(\Theta_{ij})$. If $C_1, C_2 \ldots C_K$ are a partition of $1,2,\ldots p$, then \citeasnoun{WFS2011} and \citeasnoun{MH2012} show that the corresponding arrangement of $\hat\Theta(\tau)$ is block diagonal if and only if $|S_{ii'}| \leq \tau$ for all $i \in C_k, i' \in C_{k'}, k\neq k'$. This means that thresholding the absolute entries of $S$ at level $\tau$ and splitting the resulting graph into its connected components yields the connected components of $\hat\Theta(\tau)$. Furthermore, there is an interesting connection to hierarchical clustering. Specifically, the connected components correspond to the subtrees obtained when we apply single linkage agglomerative clustering to $S$ and then cut the dendrogram at level $\tau$ \cite{TWS2013}. Single linkage clustering is sometimes not very attractive in practice, since it can produce long and stringy clusters and hence components of very unequal size. However, these same authors show that under regularity conditions on $S$, application of average or complete linkage agglomerative clustering also consistently estimates the connected components. Hence we are free to use average, single or complete linkage clustering; we use average linkage in the examples of this paper.
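The screening rule above is simple to operationalize: threshold the absolute entries of $S$ at $\tau$ and read off graph connected components. A toy sketch (ours, not from the paper; it assumes numpy and scipy, and simulates two groups of four correlated predictors):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

rng = np.random.default_rng(1)
n, p = 200, 8
# Two latent factors; predictors 1-4 follow the first, predictors 5-8 the second
Z = rng.normal(size=(n, 2))
X = np.repeat(Z, 4, axis=1) + 0.5 * rng.normal(size=(n, p))
X -= X.mean(axis=0)
S = X.T @ X / n  # sample covariance

# Threshold |S| at tau; the connected components of the resulting graph are the
# connected components of the graphical lasso estimate at penalty tau
tau = 0.5
adjacency = csr_matrix(np.abs(S) > tau)
K, labels = connected_components(adjacency, directed=False)
print(K, labels)  # expect 2 components: predictors 1-4 in one, 5-8 in the other
```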
\subsection{Summary of the component lasso algorithm} \begin{enumerate} \item Apply average, single or complete linkage clustering to $S=X^TX/n$ and cut the dendrogram at level $\tau$ to produce components $C_1, C_2, \ldots C_K$. \item For each component $k=1,2,\ldots K$ and fixed elastic net parameter $\alpha$, compute a path of elastic net solutions $\hat \beta_{k,\alpha,\tau}(\lambda)$ over a grid of $\lambda$ values. Let $\hat y_{k,\alpha,\tau}(\lambda)$ be the predicted values from the $k$th fit. \item Compute the non-negative least squares (NNLS) fit of $y$ on $\{\hat y_{1,\alpha,\tau}(\lambda), \hat y_{2,\alpha,\tau}(\lambda), \ldots \hat y_{K,\alpha,\tau}(\lambda)\}$, yielding weights $\{\hat c_1, \hat c_2, \ldots \hat c_K\}$. Finally, form the overall estimate $\hat\beta_{\alpha,\tau}(\lambda) =\sum_{k=1}^K \hat c_k \hat \beta_{k,\alpha,\tau}(\lambda)$. \item Estimate optimal values of $\tau, \alpha$ and $\lambda$ by cross-validation. \end{enumerate} {\bf Remark A}. The above procedure partially optimizes the bi-convex objective function (\ref{eqn:obj}) in two stages: it sets $c_k=1 \;\forall k$, optimizes over $\beta$ and then optimizes over the $c_k$ with $\hat\beta$ fixed. Of course one could iterate these steps in the hopes of obtaining at least a local optimum of the objective function. But we have found that the simple two-step approach works well in practice and is more efficient computationally. {\bf Remark B}. The bias induced by setting blocks of the covariance matrix to zero can be seen in a simple example. Let $A$ be a block diagonal matrix with blocks $A_1, A_2$ and let the covariance of the features be $S=A+ \rho ee^T$ where $e$ is a $p$-vector of ones. Assume that $A_1, A_2$ are positive definite. 
Then by the Sherman-Morrison-Woodbury formula \begin{eqnarray} S^{-1}=A^{-1} -\frac{\rho A^{-1}ee^TA^{-1}}{1+\rho e^TA^{-1}e}. \label{eqn:smw} \end{eqnarray} The coefficients for the full least squares fit are $S^{-1}X^Ty$; if instead we set to zero the covariance elements outside of the blocks $A_1, A_2$, the estimates become $A_j^{-1} X_j^T y$ for $j=1,2$. The second term in (\ref{eqn:smw}) represents the bias in using $A^{-1}$ in place of $S^{-1}$, and is generally larger as $\rho$ increases. \section{Examples} \label{sec:ex} \subsection{Simulated examples} In this section, we study the performance of the component lasso in several simulated examples. The results show that the component lasso can achieve a lower MSE as well as better support recovery in certain settings when compared to common regression and variable selection methods. We report the test error, the false positive rate and false negative rate of the following methods: the lasso, a rescaled lasso, the lasso-OLS hybrid, ridge regression, and the naive and non-naive elastic net. {The non-naive elastic net does not correspond to rescaling the naive elastic net solution as suggested in the elastic net paper. Instead, we do a least squares fit of the response $y$ on the response that is predicted using the coefficients estimated by the naive elastic net.} The error is computed as $(\beta-\hat\beta)^T S(\beta-\hat\beta)$, where $S$ is the observed covariance matrix. The data is simulated according to the model $$ y = X\beta + \sigma\epsilon, \epsilon \sim \mathcal{N}(0,1).$$ The data generated in each example consists of a training set, a validation set to tune the parameters, and a test set to evaluate the performance of our chosen model according to the measures described above. Following the notation from \cite{enet}, we denote by $././.$ the number of observations in the training, validation and test sets respectively.
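The Sherman-Morrison-Woodbury decomposition in Remark~B above is easy to verify numerically. A minimal sketch (ours, not from the paper; it assumes numpy, with arbitrarily chosen positive definite blocks $A_1$, $A_2$ and $\rho=0.7$):

```python
import numpy as np

rng = np.random.default_rng(2)

def spd(m):
    """An arbitrary m x m symmetric positive definite block."""
    B = rng.normal(size=(m, m))
    return B @ B.T + m * np.eye(m)

p1, p2 = 3, 4
A = np.zeros((p1 + p2, p1 + p2))   # block diagonal A = diag(A1, A2)
A[:p1, :p1] = spd(p1)
A[p1:, p1:] = spd(p2)

rho = 0.7
e = np.ones((p1 + p2, 1))
S = A + rho * (e @ e.T)            # S = A + rho e e^T

# Sherman-Morrison-Woodbury: S^{-1} = A^{-1} - rho A^{-1} e e^T A^{-1} / (1 + rho e^T A^{-1} e)
Ainv = np.linalg.inv(A)
Sinv = Ainv - rho * (Ainv @ e @ e.T @ Ainv) / (1 + rho * (e.T @ Ainv @ e).item())

print(np.allclose(Sinv, np.linalg.inv(S)))  # True
bias = Sinv - Ainv  # the term dropped when the off-block covariance is zeroed out; grows with rho
```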
\begin{subsubsection}{Orthogonal components example} We generate an example with two connected components, where the predictors in different components are orthogonal. The corresponding sample covariance matrix is block diagonal with 2 blocks. As mentioned earlier, the subgradient equations of the lasso and elastic net split naturally when the components are orthogonal. Therefore, the component lasso only differs from the non-naive elastic net in the NNLS reweighting step. We generate the example as follows: $p=8$, $\sigma=3$ and $\beta=(3,1.5,0,0,2,3,0,0)$. We simulate 100 20/20/200 sets of observations such that the correlations within a component are equal to 0.8, and force the correlations between the components to be exactly 0. We then check the performance of the component lasso in two settings: when the number of components it uses is fixed to 2, and when the optimal number of components is chosen in the validation step. The corresponding test MSEs are given in Table~\ref{table:orthog}. \begin{table} \footnotesize \begin{center} \begin{tabular}{ | l | l | p{2cm} | p{2cm} |} \hline \textbf{Method} & \textbf{Median MSE} & \textbf{Median FP} & \textbf{Median FN}\\ \hline Lasso & 7.16 (0.50) & 0.40 (0.02) & 0 (0.03)\\ \hline Rescaled Lasso & 7.26 (0.49) & 0.33 (0.02) & 0 (0.02) \\ \hline Lasso-OLS Hybrid & 7.64 (0.46) & 0.33 (0.02) & 0 (0.02) \\ \hline Naive Elastic Net & 6.04 (0.45) & 0.43 (0.01) & 0 (0.02)\\ \hline Elastic Net & 5.8 (0.4) & 0.50 (0.01) & 0 (0.02)\\ \hline Ridge & 6.27 (0.44) & 0.50 (0.01) & 0 (0)\\ \hline Component Lasso (2 components) & 5.33 (0.36) & 0.43 (0.01) & 0 (0.02) \\ \hline Component Lasso & 4.76 (0.34) & 0.43 (0.01) & 0 (0.01) \\ \hline \end{tabular} \caption{\em Median MSE, false positive and false negative rates for all regression methods when predictors in different components are orthogonal. Numbers in parentheses are the standard errors.
The component lasso---with two components, and when the number of components is chosen at the validation step---achieves the lowest MSE.} \label{table:orthog} \end{center} \end{table} The lower test error achieved by the component lasso indicates that for this simulation, the use of NNLS to weight the predictors within each component is more advantageous than rescaling the entire predictor vector at once, as in the non-naive elastic net. \end{subsubsection} \begin{subsubsection}{Further examples} We consider four examples. The first and third examples are from the original lasso paper \cite{lasso}. The covariance matrix in those examples is not block diagonal, so the efficiency of the component lasso method in such a setting is not clear a priori. In the second example, we simulate a set-up that seems well adapted to the component lasso because the covariance matrix is block diagonal. The variables are split into two connected components. We test two instances of this example: one with noise and signal variables in both components, and another with a component containing only noise variables. The fourth example is taken from the elastic net paper \cite{enet}. All signal variables in that example belong to three connected components, and the remaining noise variables are independent. The elastic net is known to perform well under such conditions, and is shown in \cite{enet} to be better than the lasso at picking out the relevant correlated variables. Our examples were generated as follows: \begin{itemize} \item Example 1: $p=8$, $\sigma=3$ and $\beta=(3,1.5,0,0,2,0,0,0)$. We simulate 100 20/20/200 sets of observations with pairwise correlation ${\rm corr}(i,j)=0.5^{|i-j|}$. This gave an average signal to noise ratio (SNR) of 2.38. \item Example 2: $p=8$, $\sigma=5$ and $\beta=(3,1.5,0,0,2,3,0,0)$ or $\beta=(3,1.5,2,3,0,0,0,0)$.
We simulate 100 20/20/200 sets of observations in the following way: $$ x_{i} = Z_1 + \epsilon_i \text{ if } i \in 1,\ldots,4 $$ $$ x_{i} = Z_2 + \epsilon_i \text{ if } i \in 5,\ldots,8 $$ where $Z_1$ and $Z_2$ $\sim \mathcal{N}(0,2)$ and $\epsilon_i \sim \mathcal{N}(0,0.5)$. The main point of this example is to compare the performance of the component lasso depending on whether the signal variables are in separate connected components (signal in $C_1$ and $C_2$) or in the same one (signal in $C_1$). The respective average SNRs were 4.68 and 8.73. \item Example 3: $p=40$, $\sigma=15$ and $$\beta=(\underbrace{0,\dots,0}_{10},\underbrace{2,\dots,2}_{10},\underbrace{0,\dots,0}_{10}, \underbrace{2,\dots,2}_{10}).$$ We simulate 100 100/100/400 sets of observations with pairwise correlations ${\rm corr}(i,j)=0.5$ if $i\neq j$. This gave an average SNR of 7.72. \item Example 4: $p=40$, $\sigma=15$ and $$\beta=(\underbrace{3,\dots,3}_{15},\underbrace{0,\dots,0}_{25}).$$ The predictors are generated according to 3 correlated groups. We simulate 100 50/50/200 sets of observations according to the following model from \cite{enet}: $$ x_i=Z_1+\epsilon_i^x, Z_1 \sim \mathcal{N}(0,1), i=1,\dots,5, $$ $$ x_i=Z_2+\epsilon_i^x, Z_2 \sim \mathcal{N}(0,1), i=6,\dots,10,$$ $$ x_i=Z_3+\epsilon_i^x, Z_3 \sim \mathcal{N}(0,1), i=11,\dots,15,$$ where $ \epsilon_i^x \sim \mathcal{N}(0,0.01) \text{ for } i \in 1,\dots,15$ and $x_i \sim \mathcal{N}(0,1) \text{ for } i \in 16,\dots,40$. The corresponding correlations matrix has a block-diagonal structure. This gave an average SNR of 2.97. \end{itemize} Heat maps of sample covariance matrices corresponding to the above examples are shown in Figure~\ref{fig:covs}. 
\begin{figure} \begin{minipage}[t]{0.4\textwidth} \includegraphics[width=\linewidth]{ex1_cov.png} \end{minipage} \hspace{\fill} \begin{minipage}[t]{0.4\textwidth} \includegraphics[width=\linewidth]{ex3_cov.png} \end{minipage} \vspace{1cm} \begin{minipage}[t]{0.4\textwidth} \includegraphics[width=\linewidth]{ex2_cov.png} \end{minipage} \hspace{\fill} \begin{minipage}[t]{0.4\textwidth} \includegraphics[width=\linewidth]{ex4_Cov.png} \end{minipage} \vspace{1cm} \caption{\em Heat maps of the sample covariance matrices. Examples 1 (top left) and 3 (top right) do not have a block-diagonal structure, whereas Examples 2 (bottom left) and 4 (bottom right) do.} \label{fig:covs} \end{figure} Table~\ref{table:allsims} shows the results of common penalized regression methods on the above examples: the median MSE, median false positive and false negative rates. The component lasso performs well in all examples, including the ones where the data are not generated according to a covariance matrix with a block structure. The MSE achieved by the component lasso is the lowest in all examples. The use of the estimated connected components yields a more significant improvement in Example 2 when the signal variables are in the same component, and in Example 4 (indicated by a *). The model for both of these datasets has a block-diagonal covariance matrix, where certain components contain only signal variables, and the remaining components contain only noise variables. The NNLS reweighting step helps select the components containing the signal predictors.
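To make the reweighting step concrete, the following Python sketch minimises $\|Aw - b\|^2$ subject to $w \ge 0$ by projected gradient descent. This is a stand-in of our own for illustration; a real implementation would call an active-set NNLS solver such as Lawson--Hanson.

```python
def nnls(A, b, steps=5000):
    """Minimise ||A w - b||^2 subject to w >= 0 by projected gradient
    descent: take a gradient step, then clip negative entries to zero.
    A is a list of rows; b is a list of responses."""
    m, k = len(A), len(A[0])
    # conservative step size: 1 / trace(A^T A) never exceeds
    # 1 / (largest eigenvalue of A^T A), which keeps the iteration stable
    lr = 1.0 / (sum(A[i][j] ** 2 for i in range(m) for j in range(k)) + 1e-12)
    w = [0.0] * k
    for _ in range(steps):
        resid = [sum(A[i][j] * w[j] for j in range(k)) - b[i] for i in range(m)]
        grad = [2.0 * sum(A[i][j] * resid[i] for i in range(m)) for j in range(k)]
        w = [max(0.0, w[j] - lr * grad[j]) for j in range(k)]
    return w
```

In the component lasso this fit is applied with one column of $A$ per connected component, so zero entries of $w$ drop whole components from the prediction.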
\begin{table} \footnotesize \begin{center} \begin{tabular}{ | l | l | p{2cm} | p{2cm} |} \hline \textbf{Method} & \textbf{Median MSE} & \textbf{Median FP} & \textbf{Median FN}\\ \hline \textbf{Example 1} & & & \\ \hline Lasso & 2.44 (0.28) & 0.50 (0.02) & 0 (0.02)\\ \hline Rescaled Lasso & 2.16 (0.26) & 0.40 (0.02) & 0 (0.02) \\ \hline Lasso-OLS Hybrid & 2.10 (0.25) & 0.25 (0.02) & 0 (0.01) \\ \hline Naive Elastic Net & 2.17 (0.26) & 0.50 (0.02) & 0 (0.02)\\ \hline Elastic Net & 1.82 (0.25) & 0.50 (0.02) & 0 (0.02)\\ \hline Ridge & 2.79 (0.28) & 0.62 (0) & 0 (0)\\ \hline Component Lasso & 1.59 (0.22) & 0.40 (0.02) & 0 (0.02) \\ \hline \textbf{Example 2 (Signal in $C_1$ and $C_2$)} & & & \\ \hline Lasso & 7.63 (0.55) & 0.37 (0.02) & 0 (0.02)\\ \hline Rescaled Lasso & 7.17 (0.58) & 0.33 (0.02) & 0 (0.02) \\ \hline Lasso-OLS Hybrid& 7.48 (0.58) & 0.25 (0.02) & 0.2 (0.02) \\ \hline Naive Elastic Net & 6.08 (0.48) & 0.43 (0.01) & 0 (0.02)\\ \hline Elastic Net & 5.87 (0.40) & 0.50 (0.01) & 0 (0.02)\\ \hline Ridge & 6.61 (0.47) & 0.50 (0) & 0 (0)\\ \hline Component Lasso & 4.89 (0.33 )& 0.43 (0.01) & 0 (0.02) \\ \hline \textbf{Example 2 (Signal in $C_1$)} & & & \\ \hline Lasso & 5.95 (0.53) & 0.25 (0.02) & 0 (0.02)\\ \hline Rescaled Lasso & 5.49 (0.44) & 0.20 (0.02) & 0 (0.02)\\ \hline Lasso-OLS Hybrid& 5.31 (0.47) & 0 (0.01) & 0.2 (0.01) \\ \hline Naive Elastic Net & 4.14 (0.47) & 0.33 (0.01) & 0 (0.01)\\ \hline Elastic Net & 1.83 (0.27) & 0 (0.02) & 0 (0)\\ \hline Ridge & 4.4 (0.5) & 0.50 (0) & 0 (0)\\ \hline Component Lasso & 1.57* (0.27) & 0 (0.02) & 0 (0) \\ \hline \textbf{Example 3} & & & \\ \hline Lasso & 58.61 (1.43) & 0.31 (0.01) & 0.23 (0.01)\\ \hline Rescaled Lasso & 58.44 (1.54) & 0.31 (0.01) & 0.25 (0.01) \\ \hline Lasso-OLS Hybrid & 57.25 (1.64) & 0.28 (0.01) & 0.24 (0.01) \\ \hline Naive Elastic Net & 38.74 (0.93) & 0.41 (0) & 0.14 (0.01)\\ \hline Elastic Net & 31.75 (0.69) & 0.46 (0) & 0 (0.02)\\ \hline Ridge & 32.86 (0.74) & 0.50 (0) & 0 (0)\\ \hline 
Component Lasso & 31.16 (0.73) & 0.46 (0) & 0 (0.02)\\ \hline \textbf{Example 4} & & & \\ \hline Lasso & 46.62 (3.29) & 0.60 (0.01) & 0.37 (0.01)\\ \hline Rescaled Lasso & 28.67 (3.09) & 0.29 (0.02) & 0.31 (0) \\ \hline Lasso-OLS Hybrid & 15.75 (2.02) & 0 (0.01) & 0.32 (0.02)\\ \hline Naive Elastic Net & 44.90 (3.01) & 0.47 (0.01) & 0.20 (0.01)\\ \hline Elastic Net & 23.79 (2.66) & 0.25 (0.03) & 0 (0.01)\\ \hline Ridge & 61.74 (3.99) & 0.62 (0) & 0 (0) \\ \hline Component Lasso & 10.74* (2.34) & 0.06 (0.01) & 0.04 (0.01) \\ \hline \end{tabular} \caption{\em Median MSE, false positive and false negative rates for the four simulated examples using 7 regression methods. Numbers in parentheses are the standard errors. } \label{table:allsims} \end{center} \end{table} For every data set, the connected-component split which gave the lowest validation MSE is chosen to compute the test error. Tables 2--6 show the distribution of the number of components that minimize the error in all examples. The number of components (NOC) by itself is not an appropriate measure to verify how the predictors are being grouped. For example, consider the case where some of the connected components only contain noise variables. Then, whether those variables are grouped correctly or kept in one big component does not affect the performance of the component lasso as long as the noisy components are excluded. In order to focus on how the signal variables are split, we use the misclassification measure from \citeasnoun{CT2005} on the signal variables only: $$M(C,T)=\frac{ \sum_{i>i'}|I_C(i,i')-I_T(i,i')| }{{n \choose 2}} , $$ where $C$ is the partition of the points, $T$ corresponds to the true clustering, and $I_C(i,i')$ (respectively $I_T(i,i')$) indicates whether the clustering $C$ (respectively $T$) places $i$ and $i'$ in the same cluster. The measure quantifies the misclassification of signal variables over all signal pairs.
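Computing $M(C,T)$ amounts to counting, over all pairs of signal variables, how often the two partitions disagree about co-membership. A small Python sketch (function name ours):

```python
from itertools import combinations

def misclassification(C, T):
    """M(C, T): fraction of pairs (i, i') on which partition C and the
    true clustering T disagree about whether i and i' share a cluster.
    C and T are equal-length lists of cluster labels, restricted here
    to the signal variables."""
    n = len(C)
    disagree = sum((C[i] == C[j]) != (T[i] == T[j])
                   for i, j in combinations(range(n), 2))
    return disagree / (n * (n - 1) // 2)
```

For instance, splitting four signal variables that truly belong to one cluster into two pairs misclassifies 4 of the 6 pairs, giving $M = 2/3$; the measure is also invariant to relabeling the clusters, since only co-membership is compared.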
It can be seen from the tables that the component lasso method favors splitting the predictors into clusters with low misclassification rate. The true number of components, which corresponds to the number of diagonal blocks in the covariance matrix used to generate the data, is indicated by a *. \begin{table}[h!] \small \begin{center} \begin{tabular}{ | l | l | l| l| l |} \hline \textbf{Number of Components} & 1* & 3 & 5 & 7 \\ \hline \textbf{Number of Datasets } & 38 & 26 & 21 & 15 \\ \hline \textbf{Mis. Rate } & 0 & 0.60 & 0.86 & 1 \\ \hline \end{tabular} \caption{\em \textbf{Example 1:} Optimal NOC and misclassification rate of the signal variables. } \end{center} \end{table} \begin{table}[h!] \small \begin{center} \begin{tabular}{ | l | l | l| l| l | l | l | l | l| } \hline \textbf{N. of Components} & 1 & 2* & 3 & 4 & 5 & 6 & 7 & 8 \\ \hline \textbf{N. of Datasets } & 35 & 14 & 6 & 6 & 9 & 12 & 12 & 6 \\ \hline \textbf{Mis. Rate } & 0.67 & 0 & 0.17 & 0.20 & 0.18 & 0.25 & 0.28 & 0.33 \\ \hline \end{tabular} \caption{\em \textbf{Example 2 (Signal in $C_1$ and $C_2$):} Optimal NOC and misclassification rate of the signal variables. } \end{center} \end{table} \begin{table}[h!] \small \begin{center} \begin{tabular}{ | l | l | l| l| l | l | l | l | l| } \hline \textbf{N. of Components} & 1 & 2* & 3 & 4 & 5 & 6 & 7 & 8 \\ \hline \textbf{N. of Datasets } & 45 & 11 & 16 & 8 & 10 & 8 & 2 & 0 \\ \hline \textbf{Mis. Rate } & 0 & 0 & 0.43 & 0.63 & 0.55 & 0.71 & 0.92 & - \\ \hline \end{tabular} \caption{\em \textbf{Example 2 (Signal in $C_1$):} Optimal NOC and misclassification rate of the signal variables. } \end{center} \end{table} \begin{table}[h!] \small \begin{center} \begin{tabular}{ | l | l | l| l| l | l| l | l| l| l| } \hline \textbf{N. of Components} & 1* & 5 & 9 & 13 & 17 & 21 & 25 \\ \hline \textbf{N. of Datasets } & 59 & 18 & 13 & 5 & 3 & 1 & 1 \\ \hline \textbf{Mis. 
Rate } & 0 & 0.32 & 0.65 & 0.88 & 0.89 & 0.98 & 0.95 \\ \hline \end{tabular} \caption{\em \textbf{Example 3:} Optimal NOC and misclassification rate of the signal variables. } \end{center} \end{table} \begin{table}[h!] \small \begin{center} \begin{tabular}{ | l | l | l| l| l | l| l | l | l | l | l |} \hline \textbf{N. of Components} & 1 & 5 & 9 & 13 & 17 & 21 & 25 & 29* & 33 & 37 \\ \hline \textbf{N. of Datasets } & 20 & 28 & 14 & 2 & 3 & 1 & 2 & 15 & 11 & 4 \\ \hline \textbf{Mis. Rate } & 0.71 & 0.07 & 0.03 & 0 & 0 & 0 & 0 & 0.04 & 0.16 & 0.25 \\ \hline \end{tabular} \caption{\em \textbf{Example 4:} Optimal NOC and misclassification rate of the signal variables. The true NOC is 28. 29 is the closest value in the tested grid. } \end{center} \end{table} \end{subsubsection} \subsection{Real data example} The component lasso is designed for settings where the data consist of a large number of predictors which can be split into highly correlated subgroups. We use a dataset from genetics to evaluate the performance of the method, because data in this area tend to follow this structure. Molecular markers are fragments of DNA associated with certain locations in the genome. In recent years, the abundance of molecular markers has made it possible to use them to predict genetic traits using linear regression. The genetic value of genes that influence a trait of interest is defined as the average phenotypic value over individuals with that trait. A standard genetic model consists in writing the phenotype $y$ as a sum of genetic values such that $y=X\beta+\epsilon$, where $X$ contains genetic values of the considered molecular markers. \\ Here, we consider the wheat data set studied in~\cite{GYp2010}. The aim is to predict genetic values of a quantitative trait, specifically grain yield in a fixed type of environment. The dataset consists of 599 observations, each corresponding to a different wheat line. 
Following the analysis done in~\cite{GYp2010}, we use 1279 predictors which indicate the presence or absence of molecular markers. The grain yield response is available in 4 distinct environments. We normalize the data so that the predictors are centered and scaled, and the response is centered. We then split the available observations into equally sized training and test sets. Finally, we apply cross validation to determine the model parameters. \\ The test MSE is defined as $\sum_i(y_i-\hat{y}_i)^2/n$ for $i$ in the test set. We compare the error rates of the lasso, naive elastic net, elastic net and the component lasso. We fix the range of the number of components for the component lasso to be between 1 and 50. Table~\ref{table:real} contains the test MSE achieved by the different methods to predict grain yield in 4 environments. \\ \begin{table}[h] \begin{center} \begin{tabular}{ | l | l | l | l |} \hline \textbf{Method} & \textbf{Test MSE} & \textbf{Parameters} & \textbf{Variables Selected} \\ \hline \textbf{Environment 1} & & & \\ \hline Lasso & 0.8547 & $\lambda=5.1e^{-2}$ & 45 \ \\ \hline Naive Elastic Net & 0.9656 & $\alpha=0.05$, $\lambda=2.43e^{-4}$ & 1195 \\ \hline Elastic Net & 0.9122 & $\alpha=0.05$, $\lambda=9.58e^{-4}$ & 1151 \\ \hline Component Lasso & 0.7552 & $\alpha=0.05$, $\lambda=1.8e^{-4}$, $noc=29$ & 548 \\ \hline \textbf{Environment 2} & & & \\ \hline Lasso & 0.8875 & $\lambda=6.18e^{-3}$ & 38 \ \\ \hline Naive Elastic Net & 1.1104 & $\alpha=0.05$, $\lambda=2.68e^{-4}$ & 1191 \\ \hline Elastic Net & 1.0722 & $\alpha=0.05$, $\lambda=6.57e^{-4}$ & 1170 \\ \hline Component Lasso & 0.8775 & $\alpha=0.05$, $\lambda=2.45e^{-5}$, $noc=37$ & 564 \\ \hline \textbf{Environment 3} & & & \\ \hline Lasso & 0.8216 & $\lambda=7.23e^{-3}$ & 28 \ \\ \hline Naive Elastic Net & 1.1087 & $\alpha=0.05$, $\lambda=4.89e^{-4}$ & 1170 \\ \hline Elastic Net & 1.1249 & $\alpha=0.05$, $\lambda=1.20e^{-3}$ & 1125 \\ \hline Component Lasso & 0.8830 & $\alpha=0.05$, 
$\lambda=1.62e^{-3}$, $noc=17$ & 303 \\ \hline \textbf{Environment 4} & & & \\ \hline Lasso & 0.8068 & $\lambda=6.8e^{-3}$ & 27 \ \\ \hline Naive Elastic Net & 1.0487 & $\alpha=0.05$, $\lambda=5.18e^{-4}$ & 1157 \\ \hline Elastic Net & 0.9349 & $\alpha=0.05$, $\lambda=2.75e^{-3}$ & 1081 \\ \hline Component Lasso & 0.8200 & $\alpha=0.05$, $\lambda=3.36e^{-5}$, $noc=33$ & 564 \\ \hline \end{tabular} \caption{\em Test MSE, parameters, and number of non-zero predictors for the real data set. } \label{table:real} \end{center} \end{table} The component lasso achieves the lowest test MSE in environments 1 and 2 by splitting the variables into 29 and 37 connected components respectively. The lasso achieves the lowest MSE in environments 3 and 4. In this dataset, splitting the genomic markers into correlated groups helped improve the accuracy in certain environments. Sorting the predictors according to the connected components chosen by the component lasso and plotting the heat map of the sample covariance matrix reveals the block-diagonal structure of the genomic markers. The corresponding connected components can be seen in Figure~\ref{fig:realheat}. \begin{figure}[!h] \centering \includegraphics[scale=0.4]{heat_real.png} \caption{\em Heat map of the sample covariance matrix corresponding to the wheat data.} \label{fig:realheat} \end{figure} This example illustrates that the component lasso can provide improved prediction accuracy and interpretability in some real data problems. \subsection{Recovery of the true non-zero parameter support} \label{LM.sec.further.lasso} There has been much study of the ability of the lasso and related procedures to recover the correct model, as $n$ and $p$ grow. Examples of this work include \citeasnoun{KF2000}, \citeasnoun{GR2004}, \citeasnoun{tropp2004}, \citeasnoun{donoho2006}, \citeasnoun{Mein2007}, \citeasnoun{MB2006}, \citeasnoun{tropp2006}, \citeasnoun {ZY2006}, \citeasnoun{wainwright2006}, and \citeasnoun{BTW2007}. 
Many of the results in this area assume an ``irrepresentability'' condition on the design matrix of the form \begin{eqnarray} ||({X_{\cal S}}^TX_{\cal S})^{-1}{X_{{\cal S}}}^TX_{{\cal S}^c}{\rm sign}(\beta_1)||_\infty \leq (1-\epsilon)\; \mbox{for some}\; \epsilon \in (0,1] \label{LM.lassocond} \end{eqnarray} \cite{ZY2006}. The set ${\cal S}$ indexes the subset of features with non-zero coefficients in the true underlying model, and $X_{\cal S}$ are the columns of $X$ corresponding to those features. Similarly, ${\cal S}^c$ indexes the features with true coefficients equal to zero, and $X_{{\cal S}^c}$ the corresponding columns. The vector $\beta_1$ denotes the coefficients of the non-zero signal variables. The condition~\eqref{LM.lassocond} says that the least squares coefficients for the columns of $X_{{\cal S}^c}$ on $X_{\cal S}$ are not too large, that is, the ``good'' variables ${\cal S}$ are not too highly correlated with the nuisance variables ${\cal S}^c$. Now suppose that the signal variables and noise variables fall into two separate components $C_1, C_2$ with sufficient within-component correlation that we are able to identify them from the data. Note that $C_1$ might also contain some noise variables. Then in order to recover the signal successfully, we need only that the noise variables within $C_1$ are irrepresentable by the signal variables, as opposed to all noise variables. This result follows from the fact that for block-diagonal correlation matrices, the strong irrepresentable condition holds if and only if there exists a common $0<\eta \leq 1$ for which the strong irrepresentable condition holds for every block. \subsection{Grouping effect} The grouping effect refers to the property of a regression method that returns similar coefficients for highly correlated variables. If some predictors happen to be identical, the method should return equal coefficients for the corresponding variables.
The elastic net is shown to exhibit this property in the extreme case where predictors are identical (\cite{enet}, Lemma 2). Moreover, in Theorem 1 of the same paper, the authors bound the absolute value of the difference between coefficients $\hat{\beta}_i$ and $\hat{\beta}_j$ in terms of their sample correlation $\rho=x_i^Tx_j$. In the component lasso method, we use the elastic net or the lasso to estimate the coefficients of every connected component. If we assume that we are able to identify the components correctly from the data, then the first step of the component lasso method will preserve the grouping effect when the elastic net is used for every subproblem. NNLS fitting will also preserve the property, since variables within the same connected component are scaled by the same coefficient. \section{Computational Considerations} \label{sec:comp} We use the {\tt glmnet} package in R for fitting the lasso and elastic net \cite{friedman08:_regul_paths_gener_linear_model_coord_descen}. This package uses cyclical coordinate descent, with a ``naive'' mode for $p>500$ and a ``covariance'' mode for $p \le 500$. Empirically, the computation time for the algorithm in naive mode scales as $O(np^2)$ (or perhaps $O(np^{1.5})$). Now suppose we divide the predictors into $L$ connected components: this requires $O(np^2)$ operations and can be done without forming the sample covariance matrix $S$ (see e.g. \citeasnoun{M2002}). The lasso or elastic net fitting in each of the components takes $O(Ln (p/L)^2)=O(np^2/L)$. The final non-negative least squares fit can be done in $O(nL^2)$. Thus the overall computational complexity of the component lasso is about the same as for the lasso itself. Table~\ref{tab:timings} shows some sample timings for agglomerative clustering with different linkage methods applied to problems with different $n$ and $p$. The $p$ columns of $X$ were clustered and the code ran on a standard Linux server.
We used the {\tt Rclusterpp} R package from the CRAN repository. \begin{table} \begin{center} \begin{tabular}{|rr|rrrr|r|} \hline &&\multicolumn{4}{c}{Linkage} & glmnet\\ n & p & ave & comp & sing & Ward& \\ \hline 200&200&0.372&0.172&0.020&0.024&0.312\\ 200&1000&20.698&2.852&0.268&0.412&0.056\\ 200&2000&103.382&11.225&1.040&1.792&0.076\\ 1000&200&1.816&0.572&0.048&0.080&0.036\\ 2000&200&3.484&1.108&0.100&0.188&0.056\\ 2000&1000&191.468&27.574&3.200&8.409&0.960\\ 2000&2000&1316.306&121.244&14.765&42.722&9.453\\ \hline \end{tabular} \end{center} \caption{\em Timings for various hierarchical clustering techniques, compared to {\em glmnet}} \label{tab:timings} \end{table} We see that the clustering times scale roughly as $O(np^2)$, but some linkages are much faster than others. In contrast, the computational time for the lasso fit by {\tt glmnet} seems to grow more slowly than that for the clustering operations. However, there is potential for significant speedups in the component lasso algorithm. The main bottleneck is the clustering step, which requires about $O(np^2)$ operations, as seen above. But in fact we do not need to cluster all $p$ features. If a feature is never entered into the model, we do not need to determine its cluster membership and hence do not need to compute its inner product with other features. Consider for example the covariance mode of {\tt glmnet}. Suppose we have a model with $k$ nonzero coefficients. For {\tt glmnet}, we need to compute the inner products of these features with all other features, $kp$ in all. For the component lasso, suppose that we have $K$ clusters of equal size, and $k/K$ nonzero coefficients in each. Then we only need to compute $K(k/K)(p/K)=kp/K$ inner products, plus the number needed to determine the cluster memberships of clusters containing each of the $k$ features. This is $O(p)$ inner products. Thus the total number is reduced from $O(kp)$ to $O(kp/K+ p)$.
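The component-finding step itself does not require full hierarchical clustering. As a simpler illustration (our own sketch, not the linkage-based clustering timed above), the connected components of the graph that joins predictors whose absolute correlation exceeds a threshold can be extracted with a breadth-first search:

```python
from collections import deque

def connected_components(corr, tau):
    """Label each of the p predictors with the index of its connected
    component in the graph with an edge (i, j) iff |corr[i][j]| > tau."""
    p = len(corr)
    labels = [-1] * p
    comp = 0
    for start in range(p):
        if labels[start] != -1:
            continue
        labels[start] = comp
        queue = deque([start])
        while queue:
            i = queue.popleft()
            for j in range(p):
                if labels[j] == -1 and abs(corr[i][j]) > tau:
                    labels[j] = comp
                    queue.append(j)
        comp += 1
    return labels
```

The scan over all pairs costs $O(p^2)$ once the inner products are available; a lazy variant in the spirit of the counting argument above would only compute the row of inner products for predictors that actually enter the model.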
A careful implementation of this procedure will be done in future work. We note that cross-validation is potentially slower for the component lasso, since it needs to consider splitting the covariance matrix into multiple numbers of components. This results in an extra parameter--- the number of components--- that must be varied in the cross-validation step. Finally, the modular structure of the component lasso lends itself naturally to parallel computation. This will also be developed in future work. \section{Discussion} \label{sec:discussion} In this paper we have proposed the component lasso, a penalized regression and variable selection method. In particular, we have shown that estimating and exploiting the block-diagonal structure of the sample covariance matrix--- solving separate lasso problems and then recombining--- can yield more accurate predictions and better recovery of the support of the signal. We provide simulated and real data examples where the component lasso outperforms standard regression methods in terms of prediction error and support recovery. There are possible extensions of this work to other settings. Consider an $\ell_1$-penalized logistic regression model with outcome $y_i \in \{0,1\}$, $\mu={\rm Pr}(Y=1|x)$ and linear predictor $\eta=\log(\mu/(1-\mu))=\beta_0+x\beta$. Then the subgradient equations have the form \begin{equation} X^T W X\beta - X^T Wz + \lambda\cdot \text{sign}(\beta) = 0 \label{eqn:irls} \end{equation} with working response $z=\beta_0+X\beta+W^{-1}(y-\mu)$ and weight matrix $W={\rm diag}(\mu_1(1-\mu_1), \ldots, \mu_n(1-\mu_n))$, both evaluated at the current estimate. Typical algorithms start with some initial value $\beta'$, compute $W$ and $z$ and then solve (\ref{eqn:irls}). Then $W$ and $z$ are updated and the process is repeated until convergence. This is known as iteratively reweighted (penalized) least squares (IRLS). We see that the appropriate connected components are those of $X^T WX$: however this depends on $\beta$ and would have to be re-computed at each iteration.
We might instead set $\beta'=0$ so that $X^T WX= X^TX/4$. Hence we find the connected components of $X^TX$ and fix them. This leads to $K$ separate $\ell_1$-penalized logistic regression problems with estimates $\hat\eta_1, \hat\eta_2, \ldots \hat\eta_K$. These could be combined by a non-negative-constrained logistic regression of $y$ on $\{\hat\eta_k, k=1,2,\ldots K\}$. An analogous approach could be used for other generalized linear models. The component lasso achieves a significant reduction in prediction error in examples for which the covariance matrix has a block-diagonal structure and where some components only contain noise variables. The NNLS step allows the component lasso to select the relevant components, because it induces sparsity in the estimated coefficients. The component lasso also exhibits better performance in other examples, in which NNLS helps by weighting the contribution of each component. Thus the properties of NNLS are crucial to the performance of the method. In future work we will study the theoretical properties of the component lasso. \subsection*{Acknowledgements} The authors thank Trevor Hastie for helpful suggestions. Robert Tibshirani was supported by National Science Foundation Grant DMS-9971405 and National Institutes of Health Contract N01-HV-28183. \bibliographystyle{agsm}
Nuclear-inclusion-a endopeptidase (potyvirus NIa protease) is a protease enzyme found in potyviruses. This enzyme catalyses the following chemical reaction: hydrolysis of glutaminyl bonds, with activity further restricted by preferences for the amino acids in positions P6 - P1' that vary with the species of potyvirus, e.g. Glu-Xaa-Xaa-Tyr-Xaa-Gln \ (Ser or Gly) for the enzyme from tobacco etch virus. The enzyme is encoded in expression vectors and used for the artificial expression of recombinant fusion proteins (see TEV protease). References External links EC 3.4.22
Adopt North West mplcontact provides campaign and customer service call handling support for Adopt North West as it launches the very first Adoption Week and ongoing promotional activity. Adopt North West is a collaboration between 22 local authorities across the North West of England, from Greater Manchester and Lancashire to Merseyside and Cheshire. The group came together to encourage all kinds of people to adopt and to provide the high-quality support needed at every stage of the adoption process. The project launched in March 2014 with the very first regional Adoption Week promoting events and information resources for potential adopters. Adopt North West chose to work with mplcontact as their contact centre partner with the expertise and agent calibre to manage the enquiries generated by their campaign activity. A flexible and friendly call handling solution The local authorities within Adopt North West placed more children for adoption in 2013 than any other area in England, and so the Adopt North West team expected an even higher level of interest with their integrated media campaigns, promotional activity and programme of events offered by each of the participating authorities during 2014. With advertising slots planned for prime time television, radio and online in the North West region, Adopt North West knew they needed a contact centre partner with the flexibility and scalability of resources to manage fluctuating peaks of call traffic whilst providing a consistently high quality of customer care. Adopt North West chose to work with mplcontact following a brief procurement process and gave them the immediate task of handling all the initial enquiries coming into Adopt North West's adoption recruitment line.
"We wanted a 24/7 service so that when people do decide to make that call about adoption, which is a huge step for them, there is someone there to answer, take their details and confirm that an adoption expert from Adopt North West will be put in touch with them." Adrian Rocks, Project Manager of Adopt North West Comprehensive customer service mplcontact agents have a wide range of experience in working with clients where a high degree of empathy and consideration is required for their callers. This experience, along with the company's network of UK centres available to manage spikes in call traffic, placed them as the preferred supplier for the Adopt North West campaign. The initial launch week proved to be a success both in terms of the new enquiries it generated for Adopt North West and in how mplcontact's agents sympathetically managed each caller's enquiry. Whilst the majority of new enquiries generated by the media activity go through the Adopt North West website, the contact centre still plays a pivotal part in ensuring anyone who prefers to enquire by phone receives a prompt response. The consistency and competency of mplcontact's agents have been such that, in addition to taking initial enquiries, they also now handle calls relating to more sensitive areas concerned with Children's Services including registering concerns over a child's safety. A boost for children looking for families Since the first Adopt North West advert was aired, over 1000 people have got in touch with mplcontact handling some 250 calls and texts on behalf of Adopt North West. Adopt North West are about to initiate the second stage of the campaign in late June 2014 and mplcontact will again support that campaign activity and help callers make that initial enquiry process as easy and stress-free as possible. "We're really pleased with the response we've had to the campaign," said Adrian. 
"So many people in the region are realising that adoption is an option for them and that they could provide a family to a child. It's been really encouraging and we look forward to talking to more people about adopting." Flexible, scalable call handling resource immediately available Network of UK centres absorb peaks generated by TV ad slots Highly skilled and empathetic agents handling calls 24/7 Accurate reporting on ad slot performance and call types Can we help you with similar services? We tailor our solution to each client, ensuring that we offer the most effective relief possible for your needs. If you'd like to have a quick chat over coffee to discuss your current or future requirements for call handling and contact centre services, please get in touch! Your notes... Take a look at our recent case studies The Principality Centrick Property
// Type definitions for prosemirror-model 1.11 // Project: https://github.com/ProseMirror/prosemirror-model // Definitions by: Bradley Ayers <https://github.com/bradleyayers> // David Hahn <https://github.com/davidka> // Tim Baumann <https://github.com/timjb> // Malte Blanken <https://github.com/neknalb> // Patrick Simmelbauer <https://github.com/patsimm> // Definitions: https://github.com/DefinitelyTyped/DefinitelyTyped // TypeScript Version: 2.3 import OrderedMap = require('orderedmap'); /** * Instances of this class represent a match state of a node * type's [content expression](#model.NodeSpec.content), and can be * used to find out whether further content matches here, and whether * a given position is a valid end of the node. */ export class ContentMatch<S extends Schema = any> { /** * Get the first matching node type at this match position that can * be generated. */ defaultType?: NodeType; /** * The number of outgoing edges this node has in the finite automaton * that describes the content expression. */ edgeCount: number; /** * True when this match state represents a valid end of the node. */ validEnd: boolean; /** * Match a node type and marks, returning a match after that node * if successful. */ matchType(type: NodeType<S>): ContentMatch<S> | null | undefined; /** * Try to match a fragment. Returns the resulting match when * successful. */ matchFragment(frag: Fragment<S>, start?: number, end?: number): ContentMatch<S> | null | undefined; /** * Try to match the given fragment, and if that fails, see if it can * be made to match by inserting nodes in front of it. When * successful, return a fragment of inserted nodes (which may be * empty if nothing had to be inserted). When `toEnd` is true, only * return a fragment if the resulting match goes to the end of the * content expression. 
*/ fillBefore(after: Fragment<S>, toEnd?: boolean, startIndex?: number): Fragment<S> | null | undefined; /** * Find a set of wrapping node types that would allow a node of the * given type to appear at this position. The result may be empty * (when it fits directly) and will be null when no such wrapping * exists. */ findWrapping(target: NodeType<S>): Array<NodeType<S>> | null | undefined; /** * Get the _n_th outgoing edge from this node in the finite automaton * that describes the content expression. */ edge(n: number): { type: NodeType; next: ContentMatch }; } /** * A fragment represents a node's collection of child nodes. * * Like nodes, fragments are persistent data structures, and you * should not mutate them or their content. Rather, you create new * instances whenever needed. The API tries to make this easy. */ export class Fragment<S extends Schema = any> { /** * The size of the fragment, which is the total of the size of its * content nodes. */ size: number; /** * Invoke a callback for all descendant nodes between the given two * positions (relative to start of this fragment). Doesn't descend * into a node when the callback returns `false`. */ nodesBetween( from: number, to: number, f: ( node: ProsemirrorNode<S>, start: number, parent: ProsemirrorNode<S>, index: number ) => boolean | null | undefined | void, startPos?: number ): void; /** * Call the given callback for every descendant node. The callback * may return `false` to prevent traversal of a given node's children. */ descendants( f: ( node: ProsemirrorNode<S>, pos: number, parent: ProsemirrorNode<S> ) => boolean | null | undefined | void ): void; /** * Create a new fragment containing the combined content of this * fragment and the other. */ append(other: Fragment<S>): Fragment<S>; /** * Cut out the sub-fragment between the two given positions. */ cut(from: number, to?: number): Fragment<S>; /** * Create a new fragment in which the node at the given index is * replaced by the given node. 
*/ replaceChild(index: number, node: ProsemirrorNode<S>): Fragment<S>; /** * Compare this fragment to another one. */ eq(other: Fragment<S>): boolean; /** * The first child of the fragment, or `null` if it is empty. */ firstChild?: ProsemirrorNode<S> | null; /** * The last child of the fragment, or `null` if it is empty. */ lastChild?: ProsemirrorNode<S> | null; /** * The number of child nodes in this fragment. */ childCount: number; /** * Get the child node at the given index. Raise an error when the * index is out of range. */ child(index: number): ProsemirrorNode<S>; /** * Get the child node at the given index, if it exists. */ maybeChild(index: number): ProsemirrorNode<S> | null | undefined; /** * Call `f` for every child node, passing the node, its offset * into this parent node, and its index. */ forEach(f: (node: ProsemirrorNode<S>, offset: number, index: number) => void): void; /** * Find the first position at which this fragment and another * fragment differ, or `null` if they are the same. */ findDiffStart(other: Fragment<S>): number | null | undefined; /** * Find the first position, searching from the end, at which this * fragment and the given fragment differ, or `null` if they are the * same. Since this position will not be the same in both nodes, an * object with two separate positions is returned. */ findDiffEnd(other: ProsemirrorNode<S>): { a: number; b: number } | null | undefined; /** * Return a debugging string that describes this fragment. */ toString(): string; /** * Create a JSON-serializeable representation of this fragment. */ toJSON(): { [key: string]: any } | null | undefined; /** * Deserialize a fragment from its JSON representation. */ static fromJSON<S extends Schema = any>( schema: S, value?: { [key: string]: any } ): Fragment<S>; /** * Build a fragment from an array of nodes. Ensures that adjacent * text nodes with the same marks are joined together. 
*/ static fromArray<S extends Schema = any>(array: Array<ProsemirrorNode<S>>): Fragment<S>; /** * Create a fragment from something that can be interpreted as a set * of nodes. For `null`, it returns the empty fragment. For a * fragment, the fragment itself. For a node or array of nodes, a * fragment containing those nodes. */ static from<S extends Schema = any>( nodes?: Fragment<S> | ProsemirrorNode<S> | Array<ProsemirrorNode<S>> ): Fragment<S>; /** * An empty fragment. Intended to be reused whenever a node doesn't * contain anything (rather than allocating a new empty fragment for * each leaf node). */ static empty: Fragment; } /** * These are the options recognized by the * [`parse`](#model.DOMParser.parse) and * [`parseSlice`](#model.DOMParser.parseSlice) methods. */ export interface ParseOptions<S extends Schema = any> { /** * By default, whitespace is collapsed as per HTML's rules. Pass * `true` to preserve whitespace, but normalize newlines to * spaces, and `"full"` to preserve whitespace entirely. */ preserveWhitespace?: boolean | 'full' | null; /** * When given, the parser will, beside parsing the content, * record the document positions of the given DOM positions. It * will do so by writing to the objects, adding a `pos` property * that holds the document position. DOM positions that are not * in the parsed content will not be written to. */ findPositions?: Array<{ node: Node; offset: number }> | null; /** * The child node index to start parsing from. */ from?: number | null; /** * The child node index to stop parsing at. */ to?: number | null; /** * By default, the content is parsed into the schema's default * [top node type](#model.Schema.topNodeType). You can pass this * option to use the type and attributes from a different node * as the top container. */ topNode?: ProsemirrorNode<S> | null; /** * Provide the starting content match that content parsed into the * top node is matched against. 
*/ topMatch?: ContentMatch | null; /** * A set of additional nodes to count as * [context](#model.ParseRule.context) when parsing, above the * given [top node](#model.ParseOptions.topNode). */ context?: ResolvedPos<S> | null; } /** * A value that describes how to parse a given DOM node or inline * style as a ProseMirror node or mark. */ export interface ParseRule { /** * A CSS selector describing the kind of DOM elements to match. A * single rule should have _either_ a `tag` or a `style` property. */ tag?: string | null; /** * The namespace to match. This should be used with `tag`. * Nodes are only matched when the namespace matches or this property * is null. */ namespace?: string | null; /** * A CSS property name to match. When given, this rule matches * inline styles that list that property. May also have the form * `"property=value"`, in which case the rule only matches if the * property's value exactly matches the given value. (For more * complicated filters, use [`getAttrs`](#model.ParseRule.getAttrs) * and return undefined to indicate that the match failed.) */ style?: string | null; /** * Can be used to change the order in which the parse rules in a * schema are tried. Those with higher priority come first. Rules * without a priority are counted as having priority 50. This * property is only meaningful in a schema—when directly * constructing a parser, the order of the rule array is used. */ priority?: number | null; /** * When given, restricts this rule to only match when the current * context—the parent nodes into which the content is being * parsed—matches this expression. Should contain one or more node * names or node group names followed by single or double slashes.
* For example `"paragraph/"` means the rule only matches when the * parent node is a paragraph, `"blockquote/paragraph/"` restricts * it to be in a paragraph that is inside a blockquote, and * `"section//"` matches any position inside a section—a double * slash matches any sequence of ancestor nodes. To allow multiple * different contexts, they can be separated by a pipe (`|`) * character, as in `"blockquote/|list_item/"`. */ context?: string | null; /** * The name of the node type to create when this rule matches. Only * valid for rules with a `tag` property, not for style rules. Each * rule should have one of a `node`, `mark`, or `ignore` property * (except when it appears in a [node](#model.NodeSpec.parseDOM) or * [mark spec](#model.MarkSpec.parseDOM), in which case the `node` * or `mark` property will be derived from its position). */ node?: string | null; /** * The name of the mark type to wrap the matched content in. */ mark?: string | null; /** * When true, ignore content that matches this rule. */ ignore?: boolean | null; /** * When true, ignore the node that matches this rule, but do parse * its content. */ skip?: boolean | null; /** * Attributes for the node or mark created by this rule. When * `getAttrs` is provided, it takes precedence. */ attrs?: { [key: string]: any } | null; /** * A function used to compute the attributes for the node or mark * created by this rule. Can also be used to describe further * conditions the DOM element or style must match. When it returns * `false`, the rule won't match. When it returns null or undefined, * that is interpreted as an empty/default set of attributes. * * Called with a DOM Element for `tag` rules, and with a string (the * style's value) for `style` rules. */ getAttrs?: ((p: Node | string) => { [key: string]: any } | false | null | undefined) | null; /** * For `tag` rules that produce non-leaf nodes or marks, by default * the content of the DOM element is parsed as content of the mark * or node. 
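The context-expression format described above can be illustrated with a small standalone matcher. This is a sketch of the documented semantics only; the function name and the representation of ancestors as an array of node-type names are assumptions for illustration, not the library's internals:

```typescript
// Sketch of ParseRule.context matching: ancestors lists node names
// from the root down to the would-be parent. Each alternative
// (separated by "|") is matched right-aligned against that stack;
// a "//" (an empty part) matches any run of ancestors.
function matchesContext(expr: string, ancestors: string[]): boolean {
  return expr.split("|").some(alt => {
    // "blockquote/paragraph/" -> ["blockquote", "paragraph"];
    // the trailing slash is dropped, "//" leaves an empty wildcard part.
    const parts = alt.split("/").slice(0, -1);
    const match = (pi: number, ai: number): boolean => {
      if (pi < 0) return true; // pattern exhausted: success
      const part = parts[pi];
      if (part === "") {
        // wildcard: skip zero or more ancestors
        for (let skip = ai; skip >= -1; skip--)
          if (match(pi - 1, skip)) return true;
        return false;
      }
      return ai >= 0 && ancestors[ai] === part && match(pi - 1, ai - 1);
    };
    return match(parts.length - 1, ancestors.length - 1);
  });
}
```

For example, `matchesContext("blockquote/paragraph/", ["doc", "blockquote", "paragraph"])` holds, while the same expression fails when the paragraph is directly inside the doc.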
If the child nodes are in a descendant node, this may be * a CSS selector string that the parser must use to find the actual * content element, or a function that returns the actual content * element to the parser. */ contentElement?: string | ((p: Node) => Node) | null; /** * Can be used to override the content of a matched node. When * present, instead of parsing the node's child nodes, the result of * this function is used. */ getContent?: (<S extends Schema = any>(p: Node, schema: S) => Fragment<S>) | null; /** * Controls whether whitespace should be preserved when parsing the * content inside the matched element. `false` means whitespace may * be collapsed, `true` means that whitespace should be preserved * but newlines normalized to spaces, and `"full"` means that * newlines should also be preserved. */ preserveWhitespace?: boolean | 'full' | null; } /** * A DOM parser represents a strategy for parsing DOM content into * a ProseMirror document conforming to a given schema. Its behavior * is defined by an array of [rules](#model.ParseRule). */ export class DOMParser<S extends Schema = any> { /** * Create a parser that targets the given schema, using the given * parsing rules. */ constructor(schema: S, rules: ParseRule[]); /** * The schema into which the parser parses. */ schema: S; /** * The set of [parse rules](#model.ParseRule) that the parser * uses, in order of precedence. */ rules: ParseRule[]; /** * Parse a document from the content of a DOM node. */ parse(dom: Node, options?: ParseOptions<S>): ProsemirrorNode<S>; /** * Parses the content of the given DOM node, like * [`parse`](#model.DOMParser.parse), and takes the same set of * options. But unlike that method, which produces a whole node, * this one returns a slice that is open at the sides, meaning that * the schema constraints aren't applied to the start of nodes to * the left of the input and the end of nodes at the end.
*/ parseSlice(dom: Node, options?: ParseOptions<S>): Slice<S>; /** * Construct a DOM parser using the parsing rules listed in a * schema's [node specs](#model.NodeSpec.parseDOM), reordered by * [priority](#model.ParseRule.priority). */ static fromSchema<S extends Schema = any>(schema: S): DOMParser<S>; } /** * A mark is a piece of information that can be attached to a node, * such as it being emphasized, in code font, or a link. It has a type * and optionally a set of attributes that provide further information * (such as the target of the link). Marks are created through a * `Schema`, which controls which types exist and which * attributes they have. */ export class Mark<S extends Schema = any> { /** * The type of this mark. */ type: MarkType<S>; /** * The attributes associated with this mark. */ attrs: { [key: string]: any }; /** * Given a set of marks, create a new set which contains this one as * well, in the right position. If this mark is already in the set, * the set itself is returned. If any marks that are set to be * [exclusive](#model.MarkSpec.excludes) with this mark are present, * those are replaced by this one. */ addToSet(set: Array<Mark<S>>): Array<Mark<S>>; /** * Remove this mark from the given set, returning a new set. If this * mark is not in the set, the set itself is returned. */ removeFromSet(set: Array<Mark<S>>): Array<Mark<S>>; /** * Test whether this mark is in the given set of marks. */ isInSet(set: Array<Mark<S>>): boolean; /** * Test whether this mark has the same type and attributes as * another mark. */ eq(other: Mark<S>): boolean; /** * Convert this mark to a JSON-serializable representation. */ toJSON(): { [key: string]: any }; static fromJSON<S extends Schema = any>(schema: S, json: { [key: string]: any }): Mark<S>; /** * Test whether two sets of marks are identical.
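The set semantics documented for `addToSet` can be sketched with a toy model. This is a simplified standalone illustration; the `ToyMark` shape, the numeric `rank` standing in for schema order, and the name-based exclusion check are all assumptions of the sketch, not the real class:

```typescript
// Toy mark: `name` identifies the type, `rank` is its position in the
// schema's mark order, `excludes` lists type names it is exclusive with.
interface ToyMark { name: string; rank: number; excludes: string[]; }

// Sketch of the documented addToSet behavior: return the set itself
// when the mark is already present, drop marks the new mark excludes,
// and keep the result sorted in schema order.
function addMarkToSet(mark: ToyMark, set: ToyMark[]): ToyMark[] {
  if (set.some(m => m.name === mark.name)) return set;
  return [...set.filter(m => !mark.excludes.includes(m.name)), mark]
    .sort((a, b) => a.rank - b.rank);
}
```

For instance, adding a mark whose spec excludes `"em"` to a set containing an em mark yields a set where the em mark has been replaced.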
*/ static sameSet<S extends Schema = any>(a: Array<Mark<S>>, b: Array<Mark<S>>): boolean; /** * Create a properly sorted mark set from null, a single mark, or an * unsorted array of marks. */ static setFrom<S extends Schema = any>(marks?: Mark<S> | Array<Mark<S>>): Array<Mark<S>>; /** * The empty set of marks. */ static none: Mark[]; } /** * This class represents a node in the tree that makes up a * ProseMirror document. So a document is an instance of `Node`, with * children that are also instances of `Node`. * * Nodes are persistent data structures. Instead of changing them, you * create new ones with the content you want. Old ones keep pointing * at the old document shape. This is made cheaper by sharing * structure between the old and new data as much as possible, which a * tree shape like this (without back pointers) makes easy. * * **Do not** directly mutate the properties of a `Node` object. See * [the guide](/docs/guide/#doc) for more information. */ declare class ProsemirrorNode<S extends Schema = any> { /** * The type of node that this is. */ type: NodeType<S>; /** * An object mapping attribute names to values. The kind of * attributes allowed and required are * [determined](#model.NodeSpec.attrs) by the node type. */ attrs: { [key: string]: any }; /** * A container holding the node's children. */ content: Fragment<S>; /** * The marks (things like whether it is emphasized or part of a * link) applied to this node. */ marks: Array<Mark<S>>; /** * For text nodes, this contains the node's text content. */ text?: string | null; /** * The size of this node, as defined by the integer-based [indexing * scheme](/docs/guide/#doc.indexing). For text nodes, this is the * amount of characters. For other leaf nodes, it is one. For * non-leaf nodes, it is the size of the content plus two (the start * and end token). */ nodeSize: number; /** * The number of children that the node has. */ childCount: number; /** * Get the child node at the given index. 
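The size rules just stated (character count for text nodes, one for other leaf nodes, content size plus two for non-leaf nodes, and a fragment size that is the sum of its children's sizes) can be written out directly. A standalone sketch with a toy node shape, which is an assumption for illustration and not the real classes:

```typescript
// Toy node: either a text node, another leaf, or a block with children.
type ToyNode =
  | { kind: "text"; text: string }
  | { kind: "leaf" }
  | { kind: "block"; children: ToyNode[] };

// Fragment.size: the total of the children's sizes.
const fragmentSize = (children: ToyNode[]): number =>
  children.reduce((sum, child) => sum + toyNodeSize(child), 0);

// Node.nodeSize: characters for text nodes, 1 for other leaves, and
// the content size plus two (start and end token) for non-leaf nodes.
function toyNodeSize(node: ToyNode): number {
  switch (node.kind) {
    case "text": return node.text.length;
    case "leaf": return 1;
    case "block": return fragmentSize(node.children) + 2;
  }
}
```

So a paragraph holding the text `"hello"` has size 7 (five characters plus its two tokens), and a doc holding only that paragraph has size 9.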
Raises an error when the * index is out of range. */ child(index: number): ProsemirrorNode<S>; /** * Get the child node at the given index, if it exists. */ maybeChild(index: number): ProsemirrorNode<S> | null | undefined; /** * Call `f` for every child node, passing the node, its offset * into this parent node, and its index. */ forEach(f: (node: ProsemirrorNode<S>, offset: number, index: number) => void): void; /** * Invoke a callback for all descendant nodes recursively between * the given two positions that are relative to start of this node's * content. The callback is invoked with the node, its * parent-relative position, its parent node, and its child index. * When the callback returns false for a given node, that node's * children will not be recursed over. */ nodesBetween( from: number, to: number, f: ( node: ProsemirrorNode<S>, pos: number, parent: ProsemirrorNode<S>, index: number ) => boolean | null | undefined | void, startPos?: number ): void; /** * Call the given callback for every descendant node. Doesn't * descend into a node when the callback returns `false`. */ descendants( f: ( node: ProsemirrorNode<S>, pos: number, parent: ProsemirrorNode<S> ) => boolean | null | undefined | void ): void; /** * Concatenates all the text nodes found in this fragment and its * children. */ textContent: string; /** * Get all text between positions `from` and `to`. When * `blockSeparator` is given, it will be inserted whenever a new * block node is started. When `leafText` is given, it'll be * inserted for every non-text leaf node encountered. */ textBetween(from: number, to: number, blockSeparator?: string, leafText?: string): string; /** * Returns this node's first child, or `null` if there are no * children. */ firstChild?: ProsemirrorNode<S> | null; /** * Returns this node's last child, or `null` if there are no * children. */ lastChild?: ProsemirrorNode<S> | null; /** * Test whether two nodes represent the same piece of document. 
*/ eq(other: ProsemirrorNode<S>): boolean; /** * Compare the markup (type, attributes, and marks) of this node to * those of another. Returns `true` if both have the same markup. */ sameMarkup(other: ProsemirrorNode<S>): boolean; /** * Check whether this node's markup corresponds to the given type, * attributes, and marks. */ hasMarkup(type: NodeType<S>, attrs?: { [key: string]: any }, marks?: Array<Mark<S>>): boolean; /** * Create a new node with the same markup as this node, containing * the given content (or empty, if no content is given). */ copy(content?: Fragment<S>): ProsemirrorNode<S>; /** * Create a copy of this node, with the given set of marks instead * of the node's own marks. */ mark(marks: Array<Mark<S>>): ProsemirrorNode<S>; /** * Create a copy of this node with only the content between the * given positions. If `to` is not given, it defaults to the end of * the node. */ cut(from: number, to?: number): ProsemirrorNode<S>; /** * Cut out the part of the document between the given positions, and * return it as a `Slice` object. */ slice(from: number, to?: number): Slice<S>; /** * Replace the part of the document between the given positions with * the given slice. The slice must 'fit', meaning its open sides * must be able to connect to the surrounding content, and its * content nodes must be valid children for the node they are placed * into. If any of this is violated, an error of type * [`ReplaceError`](#model.ReplaceError) is thrown. */ replace(from: number, to: number, slice: Slice<S>): ProsemirrorNode<S>; /** * Find the node starting at the given position. */ nodeAt(pos: number): ProsemirrorNode<S> | null | undefined; /** * Find the (direct) child node after the given offset, if any, * and return it along with its index and offset relative to this * node.
*/ childAfter(pos: number): { node?: ProsemirrorNode<S> | null; index: number; offset: number }; /** * Find the (direct) child node before the given offset, if any, * and return it along with its index and offset relative to this * node. */ childBefore(pos: number): { node?: ProsemirrorNode<S> | null; index: number; offset: number }; /** * Resolve the given position in the document, returning an * [object](#model.ResolvedPos) with information about its context. */ resolve(pos: number): ResolvedPos<S>; /** * Test whether a given mark or mark type occurs in this document * between the two given positions. */ rangeHasMark(from: number, to: number, type: Mark<S> | MarkType<S>): boolean; /** * True when this is a block (non-inline node) */ isBlock: boolean; /** * True when this is a textblock node, a block node with inline * content. */ isTextblock: boolean; /** * True when this node has inline content. */ inlineContent: boolean; /** * True when this is an inline node (a text node or a node that can * appear among text). */ isInline: boolean; /** * True when this is a text node. */ isText: boolean; /** * True when this is a leaf node. */ isLeaf: boolean; /** * True when this is an atom, i.e. when it does not have directly * editable content. This is usually the same as `isLeaf`, but can * be configured with the [`atom` property](#model.NodeSpec.atom) on * a node's spec (typically used when the node is displayed as an * uneditable [node view](#view.NodeView)). */ isAtom: boolean; /** * Return a string representation of this node for debugging * purposes. */ toString(): string; /** * Get the content match in this node at the given index. */ contentMatchAt(index: number): ContentMatch<S>; /** * Test whether replacing the range between `from` and `to` (by * child index) with the given replacement fragment (which defaults * to the empty fragment) would leave the node's content valid. You * can optionally pass `start` and `end` indices into the * replacement fragment. 
*/ canReplace( from: number, to: number, replacement?: Fragment<S>, start?: number, end?: number ): boolean; /** * Test whether replacing the range `from` to `to` (by index) with a * node of the given type would leave the node's content valid. */ canReplaceWith(from: number, to: number, type: NodeType<S>, marks?: Array<Mark<S>>): boolean; /** * Test whether the given node's content could be appended to this * node. If that node is empty, this will only return true if there * is at least one node type that can appear in both nodes (to avoid * merging completely incompatible nodes). */ canAppend(other: ProsemirrorNode<S>): boolean; /** * Check whether this node and its descendants conform to the * schema, and raise an error when they do not. */ check(): void; /** * Return a JSON-serializable representation of this node. */ toJSON(): { [key: string]: any }; /** * Deserialize a node from its JSON representation. */ static fromJSON<S extends Schema = any>( schema: S, json: { [key: string]: any } ): ProsemirrorNode<S>; } export { ProsemirrorNode as Node }; /** * Error type raised by [`Node.replace`](#model.Node.replace) when * given an invalid replacement. */ export class ReplaceError extends Error { } /** * A slice represents a piece cut out of a larger document. It * stores not only a fragment, but also the depth up to which nodes on * both sides are 'open' (cut through). */ export class Slice<S extends Schema = any> { /** * Create a slice. When specifying a non-zero open depth, you must * make sure that there are nodes of at least that depth at the * appropriate side of the fragment—i.e. if the fragment is an empty * paragraph node, `openStart` and `openEnd` can't be greater than * 1. * * It is not necessary for the content of open nodes to conform to * the schema's content constraints, though it should be a valid * start/end/middle for such a node, depending on which sides are * open. */ constructor(content: Fragment<S>, openStart: number, openEnd: number); /** * The slice's content.
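The open depths given to the constructor determine how much a slice adds when inserted: each open level cuts through one boundary token on that side, so the inserted size works out to the content size minus `openStart` and `openEnd`, which matches the `size` property. A toy check of that arithmetic, where the `ToySlice` shape is an assumption for illustration:

```typescript
// Toy slice: just the three numbers the size formula uses.
interface ToySlice { contentSize: number; openStart: number; openEnd: number; }

// Inserted size: content size minus one boundary token per open depth
// on each side.
const toySliceSize = (s: ToySlice): number =>
  s.contentSize - s.openStart - s.openEnd;
```

For example, a fully closed paragraph holding `"ab"` has content size 4 and inserted size 4, while cutting through the paragraph on both sides (`openStart` 1, `openEnd` 1) leaves only the two characters, for an inserted size of 2.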
*/ content: Fragment<S>; /** * The open depth at the start. */ openStart: number; /** * The open depth at the end. */ openEnd: number; /** * The size this slice would add when inserted into a document. */ size: number; /** * Tests whether this slice is equal to another slice. */ eq(other: Slice<S>): boolean; /** * Convert a slice to a JSON-serializable representation. */ toJSON(): { [key: string]: any } | null | undefined; /** * Deserialize a slice from its JSON representation. */ static fromJSON<S extends Schema = any>(schema: S, json?: { [key: string]: any }): Slice<S>; /** * Create a slice from a fragment by taking the maximum possible * open value on both sides of the fragment. */ static maxOpen<S extends Schema = any>( fragment: Fragment<S>, openIsolating?: boolean ): Slice<S>; /** * The empty slice. */ static empty: Slice; } /** * You can [_resolve_](#model.Node.resolve) a position to get more * information about it. Objects of this class represent such a * resolved position, providing various pieces of context information, * and some helper methods. * * Throughout this interface, methods that take an optional `depth` * parameter will interpret undefined as `this.depth` and negative * numbers as `this.depth + value`. */ export class ResolvedPos<S extends Schema = any> { /** * The position that was resolved. */ pos: number; /** * The number of levels the parent node is from the root. If this * position points directly into the root node, it is 0. If it * points into a top-level paragraph, 1, and so on. */ depth: number; /** * The offset this position has into its parent node. */ parentOffset: number; /** * The parent node that the position points into. Note that even if * a position points into a text node, that node is not considered * the parent—text nodes are 'flat' in this model, and have no content. */ parent: ProsemirrorNode<S>; /** * The root node in which the position was resolved. */ doc: ProsemirrorNode<S>; /** * The ancestor node at the given level.
`p.node(p.depth)` is the * same as `p.parent`. */ node(depth?: number): ProsemirrorNode<S>; /** * The index into the ancestor at the given level. If this points at * the 3rd node in the 2nd paragraph on the top level, for example, * `p.index(0)` is 2 and `p.index(1)` is 3. */ index(depth?: number): number; /** * The index pointing after this position into the ancestor at the * given level. */ indexAfter(depth?: number): number; /** * The (absolute) position at the start of the node at the given * level. */ start(depth?: number): number; /** * The (absolute) position at the end of the node at the given * level. */ end(depth?: number): number; /** * The (absolute) position directly before the wrapping node at the * given level, or, when `level` is `this.depth + 1`, the original * position. */ before(depth?: number): number; /** * The (absolute) position directly after the wrapping node at the * given level, or the original position when `level` is `this.depth + 1`. */ after(depth?: number): number; /** * When this position points into a text node, this returns the * distance between the position and the start of the text node. * Will be zero for positions that point between nodes. */ textOffset: number; /** * Get the node directly after the position, if any. If the position * points into a text node, only the part of that node after the * position is returned. */ nodeAfter?: ProsemirrorNode<S> | null; /** * Get the node directly before the position, if any. If the * position points into a text node, only the part of that node * before the position is returned. */ nodeBefore?: ProsemirrorNode<S> | null; /** * Get the position at the given index in the parent node at the * given depth (which defaults to this.depth). */ posAtIndex(index: number, depth?: number): number; /** * Get the marks at this position, factoring in the surrounding * marks' [`inclusive`](#model.MarkSpec.inclusive) property. 
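For depths above zero, the absolute-position helpers above are related by fixed offsets: `before(d)` sits one token before `start(d)`, and `after(d)` sits one token past `end(d)`. A worked example on a document of the shape doc(paragraph("hello")), with the positions hard-coded from the documented indexing scheme rather than computed by the library:

```typescript
// doc(paragraph("hello")): position 0 is before the paragraph, its
// content spans positions 1..6, and position 7 is after it.
const startPos = 1;                        // start(1): first position inside the paragraph
const endPos = startPos + "hello".length;  // end(1) = 6: last position inside it
const beforePos = startPos - 1;            // before(1) = 0: just outside the opening token
const afterPos = endPos + 1;               // after(1) = 7: just past the closing token
```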
If the * position is at the start of a non-empty node, the marks of the * node after it (if any) are returned. */ marks(): Array<Mark<S>>; /** * Get the marks after the current position, if any, except those * that are non-inclusive and not present at position `$end`. This * is mostly useful for getting the set of marks to preserve after a * deletion. Will return `null` if this position is at the end of * its parent node or its parent node isn't a textblock (in which * case no marks should be preserved). */ marksAcross($end: ResolvedPos<S>): Array<Mark<S>> | null | undefined; /** * The depth up to which this position and the given (non-resolved) * position share the same parent nodes. */ sharedDepth(pos: number): number; /** * Returns a range based on the place where this position and the * given position diverge around block content. If both point into * the same textblock, for example, a range around that textblock * will be returned. If they point into different blocks, the range * around those blocks in their shared ancestor is returned. You can * pass in an optional predicate that will be called with a parent * node to see if a range into that parent is acceptable. */ blockRange( other?: ResolvedPos<S>, pred?: (p: ProsemirrorNode<S>) => boolean ): NodeRange<S> | null | undefined; /** * Query whether the given position shares the same parent node. */ sameParent(other: ResolvedPos<S>): boolean; /** * Return the greater of this and the given position. */ max(other: ResolvedPos<S>): ResolvedPos<S>; /** * Return the smaller of this and the given position. */ min(other: ResolvedPos<S>): ResolvedPos<S>; } /** * Represents a flat range of content, i.e. one that starts and * ends in the same node. */ export class NodeRange<S extends Schema = any> { /** * Construct a node range. `$from` and `$to` should point into the * same node until at least the given `depth`, since a node range * denotes an adjacent set of nodes in a single parent node. 
*/ constructor($from: ResolvedPos<S>, $to: ResolvedPos<S>, depth: number); /** * A resolved position along the start of the * content. May have a `depth` greater than this object's `depth` * property, since these are the positions that were used to * compute the range, not re-resolved positions directly at its * boundaries. */ $from: ResolvedPos<S>; /** * A position along the end of the content. See * caveat for [`$from`](#model.NodeRange.$from). */ $to: ResolvedPos<S>; /** * The depth of the node that this range points into. */ depth: number; /** * The position at the start of the range. */ start: number; /** * The position at the end of the range. */ end: number; /** * The parent node that the range points into. */ parent: ProsemirrorNode<S>; /** * The start index of the range in the parent node. */ startIndex: number; /** * The end index of the range in the parent node. */ endIndex: number; } /** * Node types are objects allocated once per `Schema` and used to * [tag](#model.Node.type) `Node` instances. They contain information * about the node type, such as its name and what kind of node it * represents. */ export class NodeType<S extends Schema = any> { /** * The name the node type has in this schema. */ name: string; /** * A link back to the `Schema` the node type belongs to. */ schema: S; /** * The spec that this type is based on */ spec: NodeSpec; /** * The starting match of the node type's content expression. */ contentMatch: ContentMatch<S>; /** * True if this node type has inline content. */ inlineContent: boolean; /** * True if this is a block type */ isBlock: boolean; /** * True if this is the text node type. */ isText: boolean; /** * True if this is an inline type. */ isInline: boolean; /** * True if this is a textblock type, a block that contains inline * content. */ isTextblock: boolean; /** * True for node types that allow no content. */ isLeaf: boolean; /** * True when this node is an atom, i.e. when it does not have * directly editable content. 
*/ isAtom: boolean; /** * Create a `Node` of this type. The given attributes are * checked and defaulted (you can pass `null` to use the type's * defaults entirely, if no required attributes exist). `content` * may be a `Fragment`, a node, an array of nodes, or * `null`. Similarly `marks` may be `null` to default to the empty * set of marks. */ create( attrs?: { [key: string]: any } | null, content?: Fragment<S> | ProsemirrorNode<S> | Array<ProsemirrorNode<S>>, marks?: Array<Mark<S>> ): ProsemirrorNode<S>; /** * Like [`create`](#model.NodeType.create), but check the given content * against the node type's content restrictions, and throw an error * if it doesn't match. */ createChecked( attrs?: { [key: string]: any } | null, content?: Fragment<S> | ProsemirrorNode<S> | Array<ProsemirrorNode<S>>, marks?: Array<Mark<S>> ): ProsemirrorNode<S>; /** * Like [`create`](#model.NodeType.create), but see if it is necessary to * add nodes to the start or end of the given fragment to make it * fit the node. If no fitting wrapping can be found, return null. * Note that, due to the fact that required nodes can always be * created, this will always succeed if you pass null or * `Fragment.empty` as content. */ createAndFill( attrs?: { [key: string]: any } | null, content?: Fragment<S> | ProsemirrorNode<S> | Array<ProsemirrorNode<S>>, marks?: Array<Mark<S>> ): ProsemirrorNode<S> | null | undefined; /** * Returns true if the given fragment is valid content for this node * type with the given attributes. */ validContent(content: Fragment<S>): boolean; /** * Check whether the given mark type is allowed in this node. */ allowsMarkType(markType: MarkType<S>): boolean; /** * Test whether the given set of marks are allowed in this node. */ allowsMarks(marks: Array<Mark<S>>): boolean; /** * Removes the marks that are not allowed in this node from the given set. 
   */
  allowedMarks(marks: Array<Mark<S>>): Array<Mark<S>>;
}

/**
 * Like nodes, marks (which are associated with nodes to signify
 * things like emphasis or being part of a link) are
 * [tagged](#model.Mark.type) with type objects, which are
 * instantiated once per `Schema`.
 */
export class MarkType<S extends Schema = any> {
  /** The name of the mark type. */
  name: string;
  /** The schema that this mark type instance is part of. */
  schema: S;
  /** The spec on which the type is based. */
  spec: MarkSpec;
  /**
   * Create a mark of this type. `attrs` may be `null` or an object
   * containing only some of the mark's attributes. The others, if
   * they have defaults, will be added.
   */
  create(attrs?: { [key: string]: any }): Mark<S>;
  /**
   * When there is a mark of this type in the given set, a new set
   * without it is returned. Otherwise, the input set is returned.
   */
  removeFromSet(set: Array<Mark<S>>): Array<Mark<S>>;
  /** Tests whether there is a mark of this type in the given set. */
  isInSet(set: Array<Mark<S>>): Mark<S> | null | undefined;
  /**
   * Queries whether a given mark type is
   * [excluded](#model.MarkSpec.excludes) by this one.
   */
  excludes(other: MarkType<S>): boolean;
}

/**
 * An object describing a schema, as passed to the [`Schema`](#model.Schema)
 * constructor.
 */
export interface SchemaSpec<N extends string = any, M extends string = any> {
  /**
   * The node types in this schema. Maps names to
   * [`NodeSpec`](#model.NodeSpec) objects that describe the node type
   * associated with that name. Their order is significant—it
   * determines which [parse rules](#model.NodeSpec.parseDOM) take
   * precedence by default, and which nodes come first in a given
   * [group](#model.NodeSpec.group).
   */
  nodes: { [name in N]: NodeSpec } | OrderedMap<NodeSpec>;
  /**
   * The mark types that exist in this schema. The order in which they
   * are provided determines the order in which [mark
   * sets](#model.Mark.addToSet) are sorted and in which [parse
   * rules](#model.MarkSpec.parseDOM) are tried.
   */
  marks?: { [name in M]: MarkSpec } | OrderedMap<MarkSpec> | null;
  /**
   * The name of the default top-level node for the schema. Defaults
   * to `"doc"`.
   */
  topNode?: string | null;
}

export interface NodeSpec {
  /**
   * The content expression for this node, as described in the [schema
   * guide](/docs/guide/#schema.content_expressions). When not given,
   * the node does not allow any content.
   */
  content?: string | null;
  /**
   * The marks that are allowed inside of this node. May be a
   * space-separated string referring to mark names or groups, `"_"`
   * to explicitly allow all marks, or `""` to disallow marks. When
   * not given, nodes with inline content default to allowing all
   * marks, other nodes default to not allowing marks.
   */
  marks?: string | null;
  /**
   * The group or space-separated groups to which this node belongs,
   * which can be referred to in the content expressions for the
   * schema.
   */
  group?: string | null;
  /** Should be set to true for inline nodes. (Implied for text nodes.) */
  inline?: boolean | null;
  /**
   * Can be set to true to indicate that, though this isn't a [leaf
   * node](#model.NodeType.isLeaf), it doesn't have directly editable
   * content and should be treated as a single unit in the view.
   */
  atom?: boolean | null;
  /** The attributes that nodes of this type get. */
  attrs?: { [name: string]: AttributeSpec } | null;
  /**
   * Controls whether nodes of this type can be selected as a [node
   * selection](#state.NodeSelection). Defaults to true for non-text
   * nodes.
   */
  selectable?: boolean | null;
  /**
   * Determines whether nodes of this type can be dragged without
   * being selected. Defaults to false.
   */
  draggable?: boolean | null;
  /**
   * Can be used to indicate that this node contains code, which
   * causes some commands to behave differently.
   */
  code?: boolean | null;
  /**
   * Determines whether this node is considered an important parent
   * node during replace operations (such as paste). Non-defining (the
   * default) nodes get dropped when their entire content is replaced,
   * whereas defining nodes persist and wrap the inserted content.
   * Likewise, in _inserted_ content the defining parents of the
   * content are preserved when possible. Typically,
   * non-default-paragraph textblock types, and possibly list items,
   * are marked as defining.
   */
  defining?: boolean | null;
  /**
   * When enabled (default is false), the sides of nodes of this type
   * count as boundaries that regular editing operations, like
   * backspacing or lifting, won't cross. An example of a node that
   * should probably have this enabled is a table cell.
   */
  isolating?: boolean | null;
  /**
   * Defines the default way a node of this type should be serialized
   * to DOM/HTML (as used by
   * [`DOMSerializer.fromSchema`](#model.DOMSerializer^fromSchema)).
   * Should return a DOM node or an [array
   * structure](#model.DOMOutputSpec) that describes one, with an
   * optional number zero ("hole") in it to indicate where the node's
   * content should be inserted.
   *
   * For text nodes, the default is to create a text DOM node. Though
   * it is possible to create a serializer where text is rendered
   * differently, this is not supported inside the editor, so you
   * shouldn't override that in your text node spec.
   */
  toDOM?: ((node: ProsemirrorNode) => DOMOutputSpec) | null;
  /**
   * Associates DOM parser information with this node, which can be
   * used by [`DOMParser.fromSchema`](#model.DOMParser^fromSchema) to
   * automatically derive a parser. The `node` field in the rules is
   * implied (the name of this node will be filled in automatically).
   * If you supply your own parser, you do not need to also specify
   * parsing rules in your schema.
   */
  parseDOM?: ParseRule[] | null;
  /**
   * Defines the default way a node of this type should be serialized
   * to a string representation for debugging (e.g. in error messages).
   */
  toDebugString?: ((node: ProsemirrorNode) => string) | null;
  /** Allow specifying arbitrary fields on a NodeSpec. */
  [key: string]: any;
}

export interface MarkSpec {
  /** The attributes that marks of this type get. */
  attrs?: { [name: string]: AttributeSpec } | null;
  /**
   * Whether this mark should be active when the cursor is positioned
   * at its end (or at its start when that is also the start of the
   * parent node). Defaults to true.
   */
  inclusive?: boolean | null;
  /**
   * Determines which other marks this mark can coexist with. Should
   * be a space-separated string naming other marks or groups of marks.
   * When a mark is [added](#model.Mark.addToSet) to a set, all marks
   * that it excludes are removed in the process. If the set contains
   * any mark that excludes the new mark but is not, itself, excluded
   * by the new mark, the mark cannot be added to the set. You can
   * use the value `"_"` to indicate that the mark excludes all
   * marks in the schema.
   *
   * Defaults to only being exclusive with marks of the same type. You
   * can set it to an empty string (or any string not containing the
   * mark's own name) to allow multiple marks of a given type to
   * coexist (as long as they have different attributes).
   */
  excludes?: string | null;
  /** The group or space-separated groups to which this mark belongs. */
  group?: string | null;
  /**
   * Determines whether marks of this type can span multiple adjacent
   * nodes when serialized to DOM/HTML. Defaults to true.
   */
  spanning?: boolean | null;
  /**
   * Defines the default way marks of this type should be serialized
   * to DOM/HTML.
   */
  toDOM?: ((mark: Mark, inline: boolean) => DOMOutputSpec) | null;
  /**
   * Associates DOM parser information with this mark (see the
   * corresponding [node spec field](#model.NodeSpec.parseDOM)). The
   * `mark` field in the rules is implied.
   */
  parseDOM?: ParseRule[] | null;
  /** Allow specifying arbitrary fields on a MarkSpec. */
  [key: string]: any;
}

/**
 * Used to [define](#model.NodeSpec.attrs) attributes on nodes or
 * marks.
 */
export interface AttributeSpec {
  /**
   * The default value for this attribute, to use when no explicit
   * value is provided. Attributes that have no default must be
   * provided whenever a node or mark of a type that has them is
   * created.
   */
  default?: any;
}

/**
 * A document schema. Holds [node](#model.NodeType) and [mark
 * type](#model.MarkType) objects for the nodes and marks that may
 * occur in conforming documents, and provides functionality for
 * creating and deserializing such documents.
 */
export class Schema<N extends string = any, M extends string = any> {
  /** Construct a schema from a schema [specification](#model.SchemaSpec). */
  constructor(spec: SchemaSpec<N, M>);
  /**
   * The [spec](#model.SchemaSpec) on which the schema is based,
   * with the added guarantee that its `nodes` and `marks`
   * properties are
   * [`OrderedMap`](https://github.com/marijnh/orderedmap) instances
   * (not raw objects).
   */
  spec: SchemaSpec<N, M>;
  /** An object mapping the schema's node names to node type objects. */
  nodes: { [name in N]: NodeType<Schema<N, M>> } & { [key: string]: NodeType<Schema<N, M>> };
  /** A map from mark names to mark type objects. */
  marks: { [name in M]: MarkType<Schema<N, M>> } & { [key: string]: MarkType<Schema<N, M>> };
  /**
   * The type of the [default top node](#model.SchemaSpec.topNode)
   * for this schema.
   */
  topNodeType: NodeType<Schema<N, M>>;
  /**
   * An object for storing whatever values modules may want to
   * compute and cache per schema. (If you want to store something
   * in it, try to use property names unlikely to clash.)
   */
  cached: { [key: string]: any };
  /**
   * Create a node in this schema. The `type` may be a string or a
   * `NodeType` instance. Attributes will be extended
   * with defaults, `content` may be a `Fragment`,
   * `null`, a `Node`, or an array of nodes.
   */
  node(
    type: string | NodeType<Schema<N, M>>,
    attrs?: { [key: string]: any },
    content?:
      | Fragment<Schema<N, M>>
      | ProsemirrorNode<Schema<N, M>>
      | Array<ProsemirrorNode<Schema<N, M>>>,
    marks?: Array<Mark<Schema<N, M>>>
  ): ProsemirrorNode<Schema<N, M>>;
  /** Create a text node in the schema. Empty text nodes are not allowed. */
  text(text: string, marks?: Array<Mark<Schema<N, M>>>): ProsemirrorNode<Schema<N, M>>;
  /** Create a mark with the given type and attributes. */
  mark(type: string | MarkType<Schema<N, M>>, attrs?: { [key: string]: any }): Mark<Schema<N, M>>;
  /** Deserialize a node from its JSON representation. This method is bound. */
  nodeFromJSON(json: { [key: string]: any }): ProsemirrorNode<Schema<N, M>>;
  /** Deserialize a mark from its JSON representation. This method is bound. */
  markFromJSON(json: { [key: string]: any }): Mark<Schema<N, M>>;
}

export interface DOMOutputSpecArray {
  0: string;
  1?: DOMOutputSpec | 0 | { [attr: string]: string | null | undefined };
  2?: DOMOutputSpec | 0;
  3?: DOMOutputSpec | 0;
  4?: DOMOutputSpec | 0;
  5?: DOMOutputSpec | 0;
  6?: DOMOutputSpec | 0;
  7?: DOMOutputSpec | 0;
  8?: DOMOutputSpec | 0;
  9?: DOMOutputSpec | 0;
}

export type DOMOutputSpec = string | Node | DOMOutputSpecArray;

/**
 * A DOM serializer knows how to convert ProseMirror nodes and
 * marks of various types to DOM nodes.
 */
export class DOMSerializer<S extends Schema = any> {
  /**
   * Create a serializer. `nodes` should map node names to functions
   * that take a node and return a description of the corresponding
   * DOM. `marks` does the same for mark names, but also gets an
   * argument that tells it whether the mark's content is block or
   * inline content (for typical use, it'll always be inline). A mark
   * serializer may be `null` to indicate that marks of that type
   * should not be serialized.
   */
  constructor(
    nodes: { [name: string]: (node: ProsemirrorNode<S>) => DOMOutputSpec },
    marks: { [name: string]: (mark: Mark<S>, inline: boolean) => DOMOutputSpec }
  );
  /** The node serialization functions. */
  nodes: { [name: string]: (node: ProsemirrorNode<S>) => DOMOutputSpec };
  /** The mark serialization functions. */
  marks: { [name: string]: (mark: Mark<S>, inline: boolean) => DOMOutputSpec };
  /**
   * Serialize the content of this fragment to a DOM fragment. When
   * not in the browser, the `document` option, containing a DOM
   * document, should be passed so that the serializer can create
   * nodes.
   */
  serializeFragment(fragment: Fragment<S>, options?: { [key: string]: any }): DocumentFragment;
  /**
   * Serialize this node to a DOM node. This can be useful when you
   * need to serialize a part of a document, as opposed to the whole
   * document. To serialize a whole document, use
   * [`serializeFragment`](#model.DOMSerializer.serializeFragment) on
   * its [content](#model.Node.content).
   */
  serializeNode(node: ProsemirrorNode<S>, options?: { [key: string]: any }): Node;
  /**
   * Render an [output spec](#model.DOMOutputSpec) to a DOM node. If
   * the spec has a hole (zero) in it, `contentDOM` will point at the
   * node with the hole.
   */
  static renderSpec(
    doc: Document,
    structure: DOMOutputSpec
  ): { dom: Node; contentDOM?: Node | null };
  /**
   * Build a serializer using the [`toDOM`](#model.NodeSpec.toDOM)
   * properties in a schema's node and mark specs.
   */
  static fromSchema<S extends Schema = any>(schema: S): DOMSerializer<S>;
}
» At 6:40 a.m. Wednesday, a caller reported a traffic crash on the Riverview Expressway Bridge.
» At 6:54 a.m. Wednesday, a caller reported a traffic crash on the overpass near the intersection of High Street and State 34.
» At 9:12 a.m. Wednesday, a Wisconsin Rapids woman reported someone stole her key fob. An officer learned the woman's son had hidden it in a DVD case and it was not stolen.
» At 10:16 a.m. Wednesday, a caller reported a semi took out the stoplights at the intersection of Eighth Street South and Kuhn Avenue.
» At 12:01 p.m. Wednesday, a caller reported a car hit the wall at the Riverview Expressway Bridge.
» At 2:09 p.m. Wednesday, officers responded to a disturbance in the 2700 block of Second Avenue South.
» At 2:52 p.m. Wednesday, a caller reported a two-vehicle crash near the intersection of 16th Street South and Two Mile Avenue.
» At 5:11 p.m. Wednesday, a caller reported someone left a dog in a truck overnight in the 1100 block of 16th Street North.
» At 5:17 p.m. Wednesday, a caller reported a semi hit phone lines near the intersection of 18th and Wickham avenues.
» At 6:10 p.m. Wednesday, a caller reported two women with kids slammed a child into a shopping cart and rammed it into a 4-year-old child in the 900 block of East Riverview Expressway.
» At 2:55 p.m. Tuesday, a caller reported a traffic crash near the intersection of 16th Street South and Two Mile Avenue.
» At 3:21 a.m. Wednesday, a caller reported a traffic crash near the intersection of Wood County E and X, Dexter.
Involve, Vol. 12 (2019), No. 4
ISSN: 1944-4184 (electronic), 1944-4176 (print)

Euler's formula for the zeta function at the positive even integers
Samyukta Krishnamurthy and Micah B. Milinovich
Vol. 12 (2019), No. 4, 541–548
DOI: 10.2140/involve.2019.12.541

Abstract: We give a new proof of Euler's formula for the values of the Riemann zeta function at the positive even integers. The proof involves estimating a certain integral of elementary functions two different ways and using a recurrence relation for the Bernoulli polynomials evaluated at $\frac{1}{2}$.

Keywords: Riemann zeta function, Euler, Basel problem, Bernoulli numbers, Bernoulli polynomials
Mathematical Subject Classification 2010: Primary 11M06; Secondary 11B68, 11B37

Source: https://msp.org/involve/2019/12-4/p01.xhtml
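For context, the classical identity referred to in the abstract (Euler's formula for the zeta function at the positive even integers) is, for $n = 1, 2, 3, \ldots$,
$$\zeta(2n) = \frac{(-1)^{n+1} B_{2n} (2\pi)^{2n}}{2\,(2n)!},$$
where the $B_{2n}$ are the Bernoulli numbers; the case $n=1$ is the Basel problem, $\zeta(2) = \pi^2/6$.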
namespace ray {

class CoreWorker;

namespace rpc {

/// NOTE: See src/ray/core_worker/core_worker.h on how to add a new grpc handler.
/// Disable gRPC server metrics since it incurs too high cardinality.
#define RAY_CORE_WORKER_RPC_HANDLERS                                                     \
  RPC_SERVICE_HANDLER_SERVER_METRICS_DISABLED(CoreWorkerService, PushTask, -1)           \
  RPC_SERVICE_HANDLER_SERVER_METRICS_DISABLED(                                           \
      CoreWorkerService, DirectActorCallArgWaitComplete, -1)                             \
  RPC_SERVICE_HANDLER_SERVER_METRICS_DISABLED(                                           \
      CoreWorkerService, RayletNotifyGCSRestart, -1)                                     \
  RPC_SERVICE_HANDLER_SERVER_METRICS_DISABLED(CoreWorkerService, GetObjectStatus, -1)    \
  RPC_SERVICE_HANDLER_SERVER_METRICS_DISABLED(                                           \
      CoreWorkerService, WaitForActorOutOfScope, -1)                                     \
  RPC_SERVICE_HANDLER_SERVER_METRICS_DISABLED(CoreWorkerService, PubsubLongPolling, -1)  \
  RPC_SERVICE_HANDLER_SERVER_METRICS_DISABLED(CoreWorkerService, PubsubCommandBatch, -1) \
  RPC_SERVICE_HANDLER_SERVER_METRICS_DISABLED(                                           \
      CoreWorkerService, UpdateObjectLocationBatch, -1)                                  \
  RPC_SERVICE_HANDLER_SERVER_METRICS_DISABLED(                                           \
      CoreWorkerService, GetObjectLocationsOwner, -1)                                    \
  RPC_SERVICE_HANDLER_SERVER_METRICS_DISABLED(CoreWorkerService, KillActor, -1)          \
  RPC_SERVICE_HANDLER_SERVER_METRICS_DISABLED(CoreWorkerService, CancelTask, -1)         \
  RPC_SERVICE_HANDLER_SERVER_METRICS_DISABLED(CoreWorkerService, RemoteCancelTask, -1)   \
  RPC_SERVICE_HANDLER_SERVER_METRICS_DISABLED(CoreWorkerService, GetCoreWorkerStats, -1) \
  RPC_SERVICE_HANDLER_SERVER_METRICS_DISABLED(CoreWorkerService, LocalGC, -1)            \
  RPC_SERVICE_HANDLER_SERVER_METRICS_DISABLED(CoreWorkerService, SpillObjects, -1)       \
  RPC_SERVICE_HANDLER_SERVER_METRICS_DISABLED(                                           \
      CoreWorkerService, RestoreSpilledObjects, -1)                                      \
  RPC_SERVICE_HANDLER_SERVER_METRICS_DISABLED(                                           \
      CoreWorkerService, DeleteSpilledObjects, -1)                                       \
  RPC_SERVICE_HANDLER_SERVER_METRICS_DISABLED(CoreWorkerService, PlasmaObjectReady, -1)  \
  RPC_SERVICE_HANDLER_SERVER_METRICS_DISABLED(CoreWorkerService, Exit, -1)               \
  RPC_SERVICE_HANDLER_SERVER_METRICS_DISABLED(CoreWorkerService, AssignObjectOwner, -1)

#define RAY_CORE_WORKER_DECLARE_RPC_HANDLERS                              \
  DECLARE_VOID_RPC_SERVICE_HANDLER_METHOD(PushTask)                       \
  DECLARE_VOID_RPC_SERVICE_HANDLER_METHOD(DirectActorCallArgWaitComplete) \
  DECLARE_VOID_RPC_SERVICE_HANDLER_METHOD(RayletNotifyGCSRestart)         \
  DECLARE_VOID_RPC_SERVICE_HANDLER_METHOD(GetObjectStatus)                \
  DECLARE_VOID_RPC_SERVICE_HANDLER_METHOD(WaitForActorOutOfScope)         \
  DECLARE_VOID_RPC_SERVICE_HANDLER_METHOD(PubsubLongPolling)              \
  DECLARE_VOID_RPC_SERVICE_HANDLER_METHOD(PubsubCommandBatch)             \
  DECLARE_VOID_RPC_SERVICE_HANDLER_METHOD(UpdateObjectLocationBatch)      \
  DECLARE_VOID_RPC_SERVICE_HANDLER_METHOD(GetObjectLocationsOwner)        \
  DECLARE_VOID_RPC_SERVICE_HANDLER_METHOD(KillActor)                      \
  DECLARE_VOID_RPC_SERVICE_HANDLER_METHOD(CancelTask)                     \
  DECLARE_VOID_RPC_SERVICE_HANDLER_METHOD(RemoteCancelTask)               \
  DECLARE_VOID_RPC_SERVICE_HANDLER_METHOD(GetCoreWorkerStats)             \
  DECLARE_VOID_RPC_SERVICE_HANDLER_METHOD(LocalGC)                        \
  DECLARE_VOID_RPC_SERVICE_HANDLER_METHOD(SpillObjects)                   \
  DECLARE_VOID_RPC_SERVICE_HANDLER_METHOD(RestoreSpilledObjects)          \
  DECLARE_VOID_RPC_SERVICE_HANDLER_METHOD(DeleteSpilledObjects)           \
  DECLARE_VOID_RPC_SERVICE_HANDLER_METHOD(PlasmaObjectReady)              \
  DECLARE_VOID_RPC_SERVICE_HANDLER_METHOD(Exit)                           \
  DECLARE_VOID_RPC_SERVICE_HANDLER_METHOD(AssignObjectOwner)

/// Interface of the `CoreWorkerServiceHandler`, see `src/ray/protobuf/core_worker.proto`.
class CoreWorkerServiceHandler {
 public:
  virtual ~CoreWorkerServiceHandler() {}

  /// Handlers. For all of the following handlers, the implementations can
  /// handle the request asynchronously. When handling is done, the
  /// `send_reply_callback` should be called. See
  /// src/ray/rpc/node_manager/node_manager_client.h and
  /// src/ray/protobuf/node_manager.proto for a description of the
  /// functionality of each handler.
  ///
  /// \param[in] request The request message.
  /// \param[out] reply The reply message.
  /// \param[in] send_reply_callback The callback to be called when the request is done.
  RAY_CORE_WORKER_DECLARE_RPC_HANDLERS
};

/// The `GrpcServer` for `CoreWorkerService`.
class CoreWorkerGrpcService : public GrpcService {
 public:
  /// Constructor.
  ///
  /// \param[in] main_service See super class.
  /// \param[in] service_handler The service handler that actually handles the requests.
  CoreWorkerGrpcService(instrumented_io_context &main_service,
                        CoreWorkerServiceHandler &service_handler)
      : GrpcService(main_service), service_handler_(service_handler) {}

 protected:
  grpc::Service &GetGrpcService() override { return service_; }

  void InitServerCallFactories(
      const std::unique_ptr<grpc::ServerCompletionQueue> &cq,
      std::vector<std::unique_ptr<ServerCallFactory>> *server_call_factories) override {
    RAY_CORE_WORKER_RPC_HANDLERS
  }

 private:
  /// The grpc async service object.
  CoreWorkerService::AsyncService service_;
  /// The service handler that actually handles the requests.
  CoreWorkerServiceHandler &service_handler_;
};

}  // namespace rpc
}  // namespace ray
From Bhamwiki
1906 was the 35th year after the founding of the city of Birmingham.
Elm Leaf Cemetery was renamed Elmwood Cemetery.
October 16: Vulcan went on display at the Alabama State Fairgrounds for the 1906 Alabama State Fair.
1906 Brookside strike
William Donovan founded the Donovan Provision Company, now Red Diamond.
W. A. Watts founded the Cement Block & Manufacturing Co.
Standard Portland Cement opened a plant in Leeds.
The Tutwiler Coal, Coke & Iron Company was reincorporated as the Birmingham Coal & Iron Company.
New boilers were installed at Sloss Furnaces.
The Tennessee Coal, Iron and Railroad Company was acquired by U.S. Steel.
The First National Bank of Ensley merged with the Bank of Ensley.
The Bank of Alabama (Ensley) was founded.
Sidney Lee began selling ginger-flavored tonic under the "Buffalo Rock" name.
December 4: William Lay founded the Alabama Power Company in Gadsden.
The North Alabama Conference College was renamed Birmingham College.
March 21: The Birmingham Board of Aldermen voted to display the Vulcan statue in Capitol Park, but never followed through.
D. F. Sugg was elected Mayor of Ensley in the 1906 Ensley municipal election.
B. B. Comer was elected Governor of Alabama in the 1906 general election.
McElwain Baptist Church purchased a pipe organ.
Frederick Weidman succeeded Henry Heise as pastor of Zion Lutheran Church.
Harry Vaughn's 1906 Birmingham Base Ball Club went 85-47 on their way to their first Southern Association title.
October 7: The Fairgrounds Raceway hosted its first motorcycle race.
November 17: Alabama defeated Auburn 10-0 in the Alabama-Auburn Game at the Alabama State Fairgrounds.
Architect S. Scott Joy joined the firm of Wheelock & Wheelock.
July 7: Satchel Paige, Hall of Fame pitcher
November 10: Hugh Kaul, president of the Kaul Lumber Company
Hugo Black graduated from the University of Alabama School of Law.
Houston Brice Sr graduated from Pittsburg High School in Pittsburg, Texas.
April 8: Mary Cahalan, principal of Powell School.
April 9: Laura Burton, physician
December 20: Hazel Farris, who gained fame as "Hazel the Mummy".
Joseph Riley Smith, physician
The fictional James "Slag" Wormwood, mascot of Sloss Fright Furnace, supposedly died in October 1906.
Paul Cook (August 1906) "Birmingham, Alabama: The Magic City". National magazine
Brown Marx Building
Ensley Library
Mason Building on 3rd Avenue North
The Jemison Company developed the Mountain Terrace subdivision, now part of Forest Park.
Shelby County Courthouse (with the 1906 Shelby County time capsule)
Standard Portland Cement plant in Leeds.
Expansion of the Buck Creek Mill.

In 1906, HMS Dreadnought was launched, sparking a naval race between Britain and Germany. Native American tribal governments in Indian Territory were terminated, a prerequisite for creating the state of Oklahoma in 1907. Mount Vesuvius erupted and devastated Naples. The San Francisco Earthquake (estimated magnitude 7.8) on the San Andreas Fault destroyed much of San Francisco, California, killing at least 3,000, with 225,000–300,000 left homeless, and $350 million in damages. Also in 1906, the first Grand Prix was held in Le Mans, France. The Pure Food and Drug Act of 1906 was signed into law by U.S. President Theodore Roosevelt. The first Imperial German Navy submarine, U-1, was launched. The first Victor Victrola phonographic record player was manufactured. SOS became an international distress signal.

Notable 1906 births include those of actors Mary Astor, Louise Brooks, John Carradine, Lon Chaney Jr., Lou Costello, Ozzie Nelson, and George Sanders; authors Samuel Beckett and Robert E. Howard; computer scientist and U.S. Navy admiral Grace Hopper; gangster Bugsy Siegel; Puyi, Last Emperor of China; Nazi SS officer Adolf Eichmann; physicist Ernst Ruska; Soviet leader Leonid Brezhnev; and musician, author, and actor Oscar Levant. Deaths in 1906 included composer John Knowles Paine, Confederate and later U.S. Army General Joseph "Fighting Joe" Wheeler, First Lady of the Confederate States of America Varina Davis, murder victim Grace Brown, painter Paul Cézanne, physicist Pierre Curie, and women's suffragist Susan B. Anthony. The 1906 Nobel Peace Prize was awarded to Theodore Roosevelt. Notable books published in 1906 included White Fang by Jack London. Popular music published in 1906 included "Anchors Aweigh" by Alfred Hart Miles, R. Lovell and Charles A. Zimmerman and "School Days" by Will D. Cobb and Gus Edwards.

Retrieved from "https://www.bhamwiki.com/wiki/index.php?title=1906&oldid=157840"
\section{(01/22/2015): Introduction and Overview}

\subsection{Basic motivating background}

The course will cover several topics in \emph{spectral graph methods}. By that, I mean that it will not cover spectral graph theory per se, nor the application of spectral graph methods per se. In addition, spectral methods is a more general topic, and graph methods is a more general topic. Spectral graph theory uses eigenvectors and eigenvalues (and related quantities) of matrices associated with graphs to say things about those graphs. It is a topic which has been studied from a wide range of perspectives, e.g., theoretical computer science, scientific computing, machine learning, statistics, etc., and as such it is a topic which can be viewed from a wide range of approaches. The reason for the focus on spectral graph methods is that a wide range of problems are obviously spectral graph methods, and thus they are useful in practice as well as interesting in theory; but, in addition, many other methods that are not obviously spectral graph methods really are spectral graph methods under the hood. We'll get to what I mean by that, but for now think of running some procedure that seems to work; if one were to perform a somewhat rigorous algorithmic or statistical analysis, it would turn out that the method essentially boiled down to a spectral graph method. As an example, consider the problem of viral propagation on a social network, which is usually described in terms of some sort of infectious agent, but which has strong connections with spectral graph methods. Our goal will be to understand---by drawing strength from each of the wide range of approaches that have been brought to bear on these problems---when/why spectral graph methods are useful in practical machine learning and data analysis applications, and when/why they (or a vanilla variant of them) are not useful.
In the latter case, of course, we'll be interested in understanding whether a better understanding of spectral graph methods can lead to the development of improved algorithmic and statistical techniques---both for large-scale and small-scale data. Relatedly, we will be interested in whether other methods can perform better, or whether the data are just ``bad'' in some sense.

\subsection{Types of data and types of problems}

Data comes from all sorts of places, and it can be a challenge to find a good way to represent the data in order to obtain some sort of meaningful insight from the data. Two popular ways to model data are as \emph{matrices} and as \emph{graphs}.
\begin{itemize}
\item Matrices often arise when there are $n$ things, each of which is described by $m$ features. In this case, we have an $m \times n$ matrix $A$, where each column is a data point described by a bunch of features (or vice versa) and where each row is a vector describing the value of that feature at each data point. Alternatively, matrices can arise when there are $n$ things and we have information about the correlations (or other relationships) between them.
\item Graphs often arise when there are $n$ things and the pairwise relationships between them are thought to be particularly important. Let's specify a graph by $G=(V,E)$, where $V$ is the set of vertices and $E$ is the set of edges, which are pairs of vertices. (Later they can be weighted, etc., but for now let's say they are undirected, unweighted, etc.) Examples of data graphs include the following.
\begin{itemize}
\item Discretizations of partial differential equations and other physical operators give rise to graphs, where the nodes are points in a physical medium and edges correspond to some sort of physical interaction.
\item Social networks and other internet applications give rise to graphs, where the nodes are individuals and there is an edge between two people if they are friends or have some other sort of interaction.
\item Non-social networks give rise to graphs, where, e.g., devices, routers, or computers are nodes and where there is an edge between two nodes if they are connected and/or have traffic between them.
\item Graphs arise more generally in machine learning and data analysis applications. For example, given a bunch of data points, each of which is a feature vector, we could construct a graph, where the nodes correspond to data points and there is an edge between two data points if they are close in some sense (or a soft version of this, which is what RBF kernels do).
\end{itemize}
\end{itemize}

\noindent In the same way as we can construct graphs from matrices, we can also construct matrices from graphs. We will see several examples below (e.g., adjacency matrices, Laplacians, low-rank embedding matrices, etc.). Spectral graph methods involve using eigenvectors and eigenvalues of matrices associated with graphs to do stuff. In order to do stuff, one runs some sort of algorithmic or statistical methods, but it is good to keep an eye on the types of problems that might want to be solved. Here are several canonical examples.
\begin{itemize}
\item Graph partitioning: finding clusters/communities. Here, the data might be a bunch of data points (put a picture: sprinkled into a left half and a right half) or it might be a graph (put a picture: two things connected by an edge). There are a million ways to do this, but one very popular one boils down to computing an eigenvector of the so-called Laplacian matrix and using that to partition the data. Why does such a method work? One answer, from TCS, is that it works because it is a relaxation of a combinatorial optimization problem for which there are worst-case quality-of-approximation guarantees.
Another answer, from statistics and machine learning, is that it can be used to recover hypothesized clusters, say from a stochastic blockmodel (where the graph consists of several random graphs put together) or from a low-dimensional manifold (upon which the data points sit).
\item Prediction: e.g., regression and classification. Here, there is a similar picture, and one popular procedure is to run the same algorithm, compute the same vector, and use it to classify the data. In this case, one can also ask: why does such a method work? One answer that is commonly given is that if there are meaningful clusters in the data, say drawn from a manifold, then the boundaries between the clusters correspond to low-density regions; or, relatedly, that class labels are smooth in the graph topology, or some notion of distance in $\mathbb{R}^{n}$. But what if the data are from a discrete place? Then there is the out-of-sample extension question and a bunch of other issues.
\item Centrality and ranking. These are two (different but sometimes conflated) notions from sociology and social networks having to do with how ``important'' or ``central'' an individual/node is and, relatedly, how to rank individuals/nodes. One way to do this is to choose the highest-degree node, but this is relatively easy to spam and might not be ``real'' in other ways, and so there are several related things that go by the name of spectral ranking, eigenvector centrality, and so on. The basic idea is that a node is important if important nodes think it is important. This suggests looking at loops and triangles in a graph, and when this process is iterated you get random walks and diffusions on the graph. It's not obvious that this has very strong connections with the clustering, classification, etc.\ problems described above, but it does. Basically, you compute the same eigenvector and use it to rank.
\item Encouraging or discouraging ``viral propagation.'' Here, one is given, say, a social network, and one is interested in some sort of iterative process, and one wants to understand its properties. Two examples are the following: there might be a virus or other infectious agent that goes from node to node making other agents sick; or there might be some sort of ``buzz'' about a new movie or new tennis shoes, and this goes from individual to individual. Both of these are sometimes called viral propagation, but there are important differences, not the least of which is that in the former people typically want to stop the spread of the virus, while in the latter people want to encourage the spread of the virus to sell more tennis shoes.
\end{itemize}

\subsection{Examples of graphs}

When algorithms are run on data graphs---some of which might be fairly nice but some of which might not be---it can be difficult to know why the algorithm performs as it does. For example, would it perform that way on every possible graph? Similarly, if we are not in some asymptotic limit or if worst-case analysis is somewhat too coarse, then what if anything does the method reveal about the graph? To help address these and related questions, it helps to have several examples of graphs in mind and to see how algorithms perform on those graphs. Here are several good examples.
\begin{itemize}
\item A discretization of a nice low-dimensional space, e.g., the integers/lattice in some fixed dimension: $\mathbb{Z}_d$, $\mathbb{Z}^{2}_d$, and $\mathbb{Z}^{3}_d$.
\item A star, meaning a central node to which all of the other nodes are attached.
\item A binary tree.
\item A complete graph or clique.
\item A constant-degree expander, which is basically a very sparse graph that has no good partitions. Alternatively, it can be viewed as a sparse version of the complete graph.
\item A hypercube on $2^n$ vertices.
\item A graph consisting of two complete graphs or two expanders or two copies of $\mathbb{Z}^{2}_d$ that are weakly connected by, say, a line graph. \item A lollipop, meaning a complete graph or expander, with a line graph attached, where the line/stem can have different lengths. \end{itemize} Those are common constructions when thinking about graphs. The following are examples of constructions that are more common in certain network applications. \begin{itemize} \item An Erdos-Renyi random graph, $G_{np}$, for $p=3/n$ or $p \gtrsim \log(n)/n$. \item A ``small world'' graph, which is basically a ring plus a $3$-regular random graph. \item A heavy-tailed random graph, with or without a min degree assumption, or one constructed from a preferential attachment process. \end{itemize} In addition to things that are explicitly graphs, it also helps to have several examples of matrix-based data in mind from which graphs can be constructed. These are often constructed from some sort of nearest-neighbor process. Here are several common examples. \begin{itemize} \item A nice region of a low-dimensional subspace of $\mathbb{R}^{n}$ or of a nice low-dimensional manifold embedded in $\mathbb{R}^{n}$. \item A full-dimensional Gaussian in $\mathbb{R}^{n}$. Here, most of the mass is on the shell, but what does the graph corresponding to this ``look like''? \item Two low-dimensional Gaussians in $\mathbb{R}^{n}$. This looks like a dumbbell, with two complete graphs at the two ends or two copies of $\mathbb{Z}^{n}_{d}$ at the ends, depending on how parameters are set. \end{itemize} \subsection{Some questions to consider} Here are a few questions to consider. \begin{itemize} \item If the original data are vectors that form a matrix, how sensitive are these methods to the details of the graph construction? (Answer: in theory, not very; in practice, often quite.) \item If the original data are represented by a graph, how sensitive are these methods to a bit of noise in the graph?
(Answer: in theory, not very; in practice, often quite.) \item How good a guide are worst-case computer science analysis and asymptotic statistical theory? (Answer: in theory, good; in practice, often not, but it depends on what is the reference state, e.g., manifold versus stochastic blockmodel.) \item What if you are interested in a small part of a very large graph? E.g., you and your $100$ closest friends on a social network, as opposed to you and your $10^9$ closest friends on that social network. Do you get the same results if you run some sort of local algorithm on a small part of the graph as you do if you run a global algorithm on a subgraph that is cut out? (Typically no, unless you are very careful.) \end{itemize} \subsection{Matrices for graphs} Let $G=(V,E,W)$ be an undirected, possibly weighted, graph. There are many matrices that one can associate with a graph. Two of the most basic are the \emph{adjacency matrix} and the \emph{diagonal degree matrix}. \begin{definition} Given $G=(V,E,W)$, the adjacency matrix $A\in\mathbb{R}^{n \times n}$ is defined to be \[ A_{ij} = \left\{ \begin{array}{l l} W_{ij} & \quad \text{if $(ij)\in E$}\\ 0 & \quad \text{otherwise} \end{array} \right. , \] and the diagonal degree matrix $D\in\mathbb{R}^{n \times n}$ is defined to be \[ D_{ij} = \left\{ \begin{array}{l l} \sum_k W_{ik} & \quad \text{if $i=j$}\\ 0 & \quad \text{otherwise} \end{array} \right. . \] \end{definition} Note that for unweighted graphs, i.e., when $W_{ij}$ equals $1$ or $0$ depending on whether or not there is an edge between nodes $i$ and $j$, the adjacency matrix specifies which nodes are connected and the diagonal degree matrix gives the degree of the $i^{th}$ node at the $(ii)^{th}$ diagonal position. (Given this setup, it shouldn't be surprising that most spectral graph methods generalize in nice ways from unweighted to weighted graphs. Of interest also are things like time-evolving graphs, directed graphs, etc. In those cases, the situation is more subtle/complex.
Typically, methods for those more complex graphs boil down to methods for simpler undirected, static graphs.) Much of what we will discuss has strong connections with spectral graph theory, which is an area that uses eigenvectors and eigenvalues of matrices associated with the graph to understand properties of the graph. To begin, though, we should note that it shouldn't be obvious that eigenstuff should reveal interesting graph properties---after all, graphs by themselves are essentially combinatorial things, and most traditional graph problems and algorithms don't mention anything having to do with eigenvectors. In spite of this, we will see that eigenstuff reveals a lot about graphs that is useful in machine learning and data analysis applications, and we will want to understand why this is the case and how we can take advantage of it in interesting ways. Such an approach of using eigenvectors and eigenvalues is most useful when used to understand a natural operator or natural quadratic form associated with the graphs. Perhaps surprisingly, adjacency matrices and diagonal degree matrices are not so useful in that sense---but they can be used to construct other matrices that are more useful in that sense. One natural and very useful operator to associate with a graph $G$ is the following. \begin{definition} Given $G=(V,E,W)$, the \emph{diffusion operator} is \[ W = D^{-1}A \quad \text{(or $M=AD^{-1}$, if you multiply from the other side)} . \] \end{definition} This matrix describes the behavior of diffusions and random walks on $G$. In particular, if $x \in \mathbb{R}^{n}$ is a row vector that gives the probability that a particle is at each vertex of $G$, and if the particle then moves to a random neighbor, then $xW$ is the new probability distribution of the particle. If the graph $G$ is regular, meaning that it is degree-homogeneous, then $W$ is a rescaling of $A$, but otherwise it can be very different.
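To make these definitions concrete, here is a small Python sketch (just an illustration of mine, on a made-up path graph; the helper name \texttt{graph\_matrices} is not from any reading) that builds $A$, $D$, and $W=D^{-1}A$ and takes one random-walk step:

```python
import numpy as np

def graph_matrices(n, weighted_edges):
    """Build the adjacency matrix A, diagonal degree matrix D, and the
    diffusion (random-walk) matrix W = D^{-1} A of an undirected graph."""
    A = np.zeros((n, n))
    for i, j, w in weighted_edges:
        A[i, j] = w
        A[j, i] = w
    D = np.diag(A.sum(axis=1))
    W = np.linalg.inv(D) @ A   # D^{-1} A; assumes no isolated nodes
    return A, D, W

# A 4-node path graph 0 - 1 - 2 - 3 with unit weights.
A, D, W = graph_matrices(4, [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0)])

# One step of a random walk started at node 0 (row vector times W).
x = np.array([1.0, 0.0, 0.0, 0.0])
print(x @ W)   # all of the mass moves to node 1
```

Note that each row of $W$ sums to one, i.e., $W$ is row-stochastic, which is exactly what makes the random-walk interpretation work.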
Although we won't go into too much detail right now, note that applying this operator to a vector can be interpreted as doing one step of a diffusion or random walk process. In this case, one might want to know what happens if we iteratively apply an operator like $W$. We will get back to this. One natural and very useful quadratic form to associate with a graph $G$ is the following. \begin{definition} Given $G=(V,E,W)$, the \emph{Laplacian matrix} (or combinatorial Laplacian matrix) is \[ L=D-A . \] \end{definition} Although we won't go into detail, the Laplacian has an interpretation in terms of derivatives. (This is most common/obvious in continuous applications, where it can be used to measure the smoothness of functions on some continuous place from which the graph was constructed---if it was constructed from such a place in a nice way, which is often \emph{not} the case.) Given a function or a vector $x \in \mathbb{R}^{n}$, the Laplacian quadratic form is \[ x^TLx = \sum_{(ij)\in E} (x_i-x_j)^2 . \] This is a measure of smoothness of the vector/function $x$---smoothness of $x$, in some sense, conditioned on the graph structure. (That is, it is a statement about the graph itself, independent of how it was constructed. This is of interest by itself but also for machine learning and data analysis applications, e.g., since labels associated with the nodes that correspond to a classification function might be expected to be smooth.) Alternatively, we can define the \emph{normalized Laplacian matrix}. \begin{definition} Given $G=(V,E,W)$, the \emph{normalized Laplacian matrix} is \[ \mathcal{L} = L = D^{-1/2} L D^{-1/2} = I - D^{-1/2} A D^{-1/2} . \] \end{definition} (Note that I have already started to be a little sloppy, by using the same letter to mean two different things. I'll point out as we go where this matters.) As we will see, for degree-homogeneous graphs, these two Laplacians are essentially the same, but for degree-heterogeneous graphs, they are quite different.
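Here is a quick numerical check (my own illustration; the path graph and the vector $x$ are arbitrary) that the quadratic form $x^TLx$ really is the sum of squared differences across edges, and that the two expressions for the normalized Laplacian agree:

```python
import numpy as np

# Path graph 0 - 1 - 2 - 3 (unweighted).
edges = [(0, 1), (1, 2), (2, 3)]
n = 4
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
D = np.diag(A.sum(axis=1))
L = D - A                                   # combinatorial Laplacian

# The Laplacian quadratic form equals the sum of squared edge differences.
x = np.array([3.0, 1.0, 4.0, 1.5])
quad = x @ L @ x
edge_sum = sum((x[i] - x[j]) ** 2 for i, j in edges)
print(quad, edge_sum)                       # the two numbers agree

# Normalized Laplacian: D^{-1/2} L D^{-1/2} = I - D^{-1/2} A D^{-1/2}.
Dinvsqrt = np.diag(1.0 / np.sqrt(np.diag(D)))
Lnorm = Dinvsqrt @ L @ Dinvsqrt
assert np.allclose(Lnorm, np.eye(n) - Dinvsqrt @ A @ Dinvsqrt)
```

Note also that the all-ones vector is in the null space of $L$, i.e., constant vectors are perfectly ``smooth'' on any graph.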
As a general rule, the normalized Laplacian is more appropriate for realistic degree-heterogeneous graphs, but it is worth keeping the two in mind, since there are strong connections between them and how they are computed. Similar smoothness, etc. interpretations hold for the normalized Laplacian, and this is important in many applications. \subsection{An overview of some ideas} Here is a vanilla version of a spectral graph algorithm that will be central to a lot of what we do. We'll be more precise and go into a lot more detail later. The algorithm takes as input a graph, as specified by a Laplacian matrix, $L$ or $\mathcal{L}$. \begin{enumerate} \item Compute, exactly or approximately, the leading nontrivial eigenvector of $L$ or $\mathcal{L}$. \item Use that vector to split the nodes of the graph into a left half and a right half. \end{enumerate} Those two pieces can be the two clusters, in which case this algorithm is a vanilla version of spectral graph partitioning; or they can come with labels that can be used to make predictions for classification or regression; or we can rank starting from the left and going to the right; or we can use the details of the approximate eigenvector calculation, e.g., random walks and related diffusion-based methods, to understand viral diffusion problems. But in all those cases, we are interested in the leading nontrivial eigenvector of the Laplacian. We'll have a lot more to say about that later, but for now think of it just as some vector that in some sense describes important directions in the graph, in which case what this vanilla spectral algorithm does is put or ``embed'' the nodes of the graph on this line and cut the nodes into two pieces, a left half and a right half. (The embedding has big distortion, in general, for some points at least; the two halves can be very unbalanced, etc.; but at least in very nice cases, that informal intuition is true, and it is true more generally even if the two halves are unbalanced, etc.
Understanding these issues will be important for what we do.) (Make a picture on the board.) We can ask: what is the optimization problem that this algorithm solves? As we will see eventually, in some sense, what makes a spectral graph algorithm a spectral graph algorithm is the first step, and so let's focus on that. Here is the basic spectral optimization problem that this step solves. \begin{alignat*}{4} &\text{min} & x^T L_{G} x \\ &\text{s.t.} & x^T D_{G} x = 1 \\ & & x^T D_{G} \vec{1} = 0 \\ & & x \in \mathbb{R}^V \end{alignat*} \noindent That is, find the vector that minimizes the quadratic form $x^TLx$ subject to the constraints that $x$ sits on a (degree-weighted) unit ball and that $x$ is perpendicular (in a degree-weighted norm) to the ``trivial'' all-ones vector. The solution to this problem is a vector, and it is the leading nontrivial eigenvector of $L$ or $\mathcal{L}$. Importantly, this is \emph{not} a convex optimization problem; but rather remarkably it can be solved in low-degree polytime by computing an eigenvector of $L$. How can it be that this problem is solvable if it isn't convex? After all, the usual rule of thumb is that convex things are good and non-convex things are bad. There are two (related, certainly not inconsistent) answers to this. \begin{itemize} \item One reason is that this is an eigenvector (or generalized eigenvector) problem. In fact, it involves computing the leading nontrivial eigenvector of $L$, and so it is a particularly nice eigenvalue problem. And computing eigenvectors is a \emph{relatively} easy thing to do---for example, with a black box solver, or in this special case with random walks. But more on this later. \item Another reason is that it is secretly convex, in that it is convex in a different place. Importantly, in that different place there are better duality properties for this problem, and so it can be used to understand this problem and its solution better.
\end{itemize} Both of these will be important, but let's start by focusing on the second reason. Consider the following version of the basic spectral optimization problem. \begin{eqnarray*} {\textsf{SDP}:} &\min & L \bullet X \\ & \mbox{s.t.} & \Trace{X} = I_0 \bullet X = 1 \\ & & X \succeq 0 , \end{eqnarray*} where $\bullet$ stands for the Trace, or matrix inner product, operation, \emph{i.e.}, $A \bullet B = \Trace{A B^T}=\sum_{ij}A_{ij}B_{ij}$ for matrices $A$ and $B$. \emph{Note that, both here and below, $I_0$ is sometimes the Identity on the subspace perpendicular to the all-ones vector. This will be made more consistent later.} \textsf{SDP} is a relaxation of the spectral program \textsf{SPECTRAL} from an optimization over unit vectors to an optimization over distributions over unit vectors, represented by the density matrix $X$. But, the optimal values for \textsf{SPECTRAL} and \textsf{SDP} are the same, in the sense that they are given by the second eigenvector $v$ of $L$ for \textsf{SPECTRAL} and by $X = vv^T$ for \textsf{SDP}. Thus, this is an SDP. While solving the vanilla spectral optimization problem with a black-box SDP solver is possible, it is not advisable, since one can just call a black-box eigenvalue solver (or run some non-black-box method that approximates the eigenvector). Nevertheless, the SDP can be used to understand spectral graph methods. For example, we can consider the dual of this SDP: \begin{alignat*}{4} \quad& &\text{maximize} \quad && \alpha \\ & &\text{s.t.} \quad && L_{G} \succeq \alpha L_{K_{n}} \\ & & & & \alpha \in \mathbb{R} \end{alignat*} This is a standard dual construction; the only nontrivial thing is that we write out explicitly $I_0=L_{K_n}$, as the projection onto the subspace perpendicular to $\vec{1}$ is $I_0=I-\frac{1}{n}\vec{1}\vec{1}^{T}=\frac{1}{n}L_{K_n}$, where $K_n$ is the complete graph on $n$ vertices (with the scaling absorbed into $\alpha$).
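To see the vanilla algorithm and the SDP relaxation side by side, here is a small sketch (my own, on a made-up graph consisting of two triangles joined by one edge; for simplicity it uses the unweighted constraints, i.e., it ignores the degree weighting in \textsf{SPECTRAL}):

```python
import numpy as np

# Two triangles {0,1,2} and {3,4,5} joined by the single edge (2,3):
# a toy graph with an obvious two-cluster structure.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
n = 6
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

# Step 1: leading nontrivial eigenvector of L.  eigh sorts eigenvalues in
# ascending order, so index 0 is the trivial all-ones direction and index 1
# is the one we want.
evals, evecs = np.linalg.eigh(L)
v = evecs[:, 1]

# Step 2: split the nodes by the sign of their coordinate in v.
left = {i for i in range(n) if v[i] > 0}
print(sorted(left))   # one of the two triangles

# Sanity check for the SDP relaxation: X = v v^T is feasible (trace 1, PSD)
# and attains the same objective value as the vector problem.
X = np.outer(v, v)
assert np.isclose(np.trace(L @ X), v @ L @ v)
assert np.isclose(np.trace(X), 1.0)
```

The sign split recovers the two triangles, and the rank-one matrix $X=vv^T$ is feasible for \textsf{SDP} with the same objective value, as claimed.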
We will get into more detail later about what this dual means, but informally it means that we are in some sense ``embedding'' the Laplacian of the graph in the Laplacian of a complete graph. Slightly less informally, the $\succeq$ is an inequality over graphs (we will define this in more detail later) which says that the Laplacian quadratic form of one graph is above or below that of another graph (in the sense of SPSD matrices, if you know what that means). So, in this dual, we want to choose the largest $\alpha$ such that that inequality is true. \subsection{Connections with random walks and diffusions} A final thing to note is that the vector $x^*$ that solves these problems has a natural interpretation in terms of diffusions and random walks. This shouldn't be surprising, since one of the ways this vector is used is to partition a graph into two pieces in a way that captures a qualitative notion of connectivity. The interpretation is that if you run a random walk---either a vanilla random walk defined by the matrix $W=D^{-1}A$ above, meaning that at each step you go to one of your neighbors with equal probability, or a fancier random walk---then $x^{*}$ defines the slowest direction to mix, i.e., the direction that is at the pre-asymptotic state before you get to the asymptotic stationary distribution. So, that spectral graph methods are useful in these and other applications is largely due to two things. \begin{itemize} \item Eigenvectors tend to be ``global'' things, in that they optimize a global objective over the entire graph. \item Random walks and diffusions optimize almost the same things, but they often do it in a very different way. \end{itemize} One of the themes of what we will discuss is the connection between random walks and diffusions and eigenvector-based spectral graph methods on different types of graph-based data.
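As a quick illustration of the mixing claim (again my own, on the same kind of two-triangle toy graph), we can iterate a lazy random walk and watch the distribution approach the degree-weighted stationary distribution:

```python
import numpy as np

# Two triangles joined by a single bridge edge: a toy graph where a random
# walk mixes slowly, because probability mass leaks across the bridge slowly.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
n = 6
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
deg = A.sum(axis=1)

W = A / deg[:, None]            # random-walk matrix D^{-1} A (row-stochastic)
W_lazy = 0.5 * (np.eye(n) + W)  # lazy walk, to avoid periodicity issues

pi = deg / deg.sum()            # stationary distribution, proportional to degree

x = np.zeros(n)
x[0] = 1.0                      # walk started at node 0
errs = []
for t in range(200):
    errs.append(np.abs(x - pi).sum())
    x = x @ W_lazy

print(errs[0], errs[-1])        # the l1 error decays toward 0
```

On a graph with a good partition like this one, the walk mixes slowly across the bridge edge; on an expander, the same error would decay much faster.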
Among other things, this connection will help us to address local-global issues, e.g., the global objective that defines eigenvectors versus the local nature of diffusion updates. Two things should be noted about diffusions. \begin{itemize} \item Diffusions are robust/regularized notions of eigenvectors. \item The behavior of diffusions is very different on $K_n$ or expander-like metric spaces than it is on line-like or low-dimensional metric spaces. \end{itemize} An important subtlety is that most data have some sort of degree heterogeneity, and so the extremal properties of expanders are mitigated, since it is constant-degree expanders that are most unlike low-dimensional metric spaces. (In the limit, you have a star, and there it is trivial why you don't have good partitions, but we don't want to go to that limit.) \subsection{Low-dimensional and non-low-dimensional data} Now, complete graphs are very different from line graphs. So, the vanilla spectral graph algorithm is also putting the data in a complete graph. To get a bit more intuition as to what is going on and as to how these methods will perform in real applications, consider a different type of graph known as an expander. I won't give the exact definition now---we will later---but expanders are very important, both in theory and in practice. (The former may be obvious, while the latter may be less obvious.) For now, there are three things you need to know about expanders, either constant-degree expanders or degree-heterogeneous expanders. \begin{itemize} \item Expanders are extremely sparse graphs that do not have any good clusters/partitions, in a precise sense that we will define later. \item Expanders are also the metric spaces that are least like low-dimensional spaces, e.g., a line graph, a two-dimensional grid, etc. That is, if your intuition comes from low-dimensional places like 1D or 2D places, then expanders are the metric spaces that are most different from that.
\item Expanders are sparse versions of the complete graph, in the sense that there are graph inequalities of the form $\succeq$ that relate the Laplacian quadratic forms of expanders and complete graphs. \end{itemize} So, in a certain precise sense, the vanilla spectral graph method above (as well as other non-vanilla spectral methods we will get to) put or embed the input data in two extremely different places, a line as well as a dense expander, i.e., a complete graph. Now, real data have low-dimensional properties, e.g., sometimes you can visualize them in a two-dimensional piece of paper and see something meaningful, and since they often have noise, they also have expander-like properties. (If that connection isn't obvious, it soon will be.) We will see that the properties of spectral graph methods when applied to real data sometimes depend on one interpretation and sometimes depend on the other interpretation. Indeed, many of the properties---both the good properties as well as the bugs/features---of spectral graph methods can be understood in light of this tension between embedding the data in a low-dimensional place and embedding the data in an expander-like place. There are some similarities between this---which is a statement about different types of graphs and metric spaces---and analogous statements about random vectors in $\mathbb{R}^{n}$, e.g., from a full-dimensional Gaussian distribution in $\mathbb{R}^{n}$. Some of these will be explored. \subsection{Small world and heavy-tailed examples} There are several types or classes of generative models that people consider, and different communities tend to adopt one or the other class. Spectral graph methods are applied to all of these, although they can be applied in somewhat different ways. \begin{itemize} \item Discretization or random geometric graph of some continuous low-dimensional place, e.g., a linear low-dimensional space, a low-dimensional curved manifold, etc. 
In this case, there is a natural low-dimensional geometry. (Put picture on board.) \item Stochastic blockmodels, where there are several different types of individuals, and each type interacts with individuals in the same group versus different groups with different probabilities. (Put picture on board, with different connection probabilities.) \item Small-world and heavy-tailed models. These are generative graph models, and they attempt to capture some local aspect of the data (in one case, that there is some low-dimensional geometry, and in the other case, that there is big variability in the local neighborhoods of individuals, as captured by degree or some other simple statistic) and some global aspect of the data (typically that there is a small diameter). \end{itemize} We will talk about all three of these in due course, but for now let's say just a bit about the small-world models and what spectral methods might reveal about them in light of the above comments. Small-world models start with a one-dimensional or two-dimensional geometry and add random edges in one of several ways. (Put picture here.) The idea here is that you reproduce local clustering and small diameters, which are properties that are observed empirically in many real networks. Importantly, for algorithmic and statistical design, we have intuition about low-dimensional geometries; so let's talk about the second part: random graphs. Consider $G_{np}$ and $G_{nm}$, which are the simplest random graph models, and which have an expected or an exact number of edges, respectively. In particular, start with $n$ isolated vertices/nodes; then: \begin{itemize} \item For $G_{np}$, insert each of the ${n \choose 2}$ possible edges, independently, each with probability $p$. \item For $G_{nm}$, among all ${ { n \choose 2 } \choose m}$ subsets of $m$ edges, select one uniformly at random. \end{itemize} In addition to being of theoretical interest, these models are used in all sorts of places.
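These two models are easy to sample; here is a sketch (my own, and only sensible for small $n$, since it materializes all ${n \choose 2}$ candidate edges):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

def sample_gnp(n, p):
    """G_{np}: include each of the (n choose 2) possible edges
    independently, each with probability p."""
    return [(i, j) for i, j in combinations(range(n), 2) if rng.random() < p]

def sample_gnm(n, m):
    """G_{nm}: choose one subset of exactly m edges uniformly at random."""
    all_edges = list(combinations(range(n), 2))
    idx = rng.choice(len(all_edges), size=m, replace=False)
    return [all_edges[k] for k in idx]

g1 = sample_gnp(100, 3 / 100)  # the extremely sparse regime, p = 3/n
g2 = sample_gnm(100, 150)      # exactly 150 edges
print(len(g1), len(g2))
```

Note that $G_{np}$ has a \emph{random} number of edges (with expectation $p{n \choose 2}$), while $G_{nm}$ has exactly $m$, which is the "expected or exact" distinction above.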
For example, these random graph models are the building blocks of stochastic blockmodels, in addition to providing the foundation for common generative network models. (That is, there are heavy-tailed versions of this basic model as well as other extensions, some of which we will consider, but for now let's stick with this.) These vanilla ER graphs are often presented as strawmen, which in some sense they are; but when taken with a grain of salt they can reveal a lot about data and algorithms and the relationship between the two. First, let's focus on $G_{np}$. There are four regimes of particular interest. \begin{itemize} \item $p < \frac{1}{n}$. Here, the graph $G$ is not fully-connected, and it doesn't even have a giant component, so it consists of just a bunch of small things. \item $ \frac{1}{n} \lesssim p \lesssim \frac{\log(n)}{n}$. Here there is a giant component, i.e., a set of $\Omega(n)$ nodes that are connected, that has a small $O(\log(n))$ diameter. In addition, random walks mix in $O(\log^2(n))$ steps, and the graph is locally tree-like. \item $\frac{\log(n)}{n} \lesssim p$. Here the graph is fully-connected. In addition, it has a small $O(\log(n))$ diameter and random walks mix in $O(\log^2(n))$ steps (but for a slightly different reason that we will get back to later). \item $\frac{\log(n)}{n} \ll p$. Here the graph is pretty dense, and methods that are applicable to pretty dense graphs are appropriate. \end{itemize} If $p \gtrsim \log(n)/n$, then $G_{np}$ and $G_{nm}$ are ``equivalent'' in a certain sense. But if they are extremely sparse, e.g., $p=3/n$ or $p=10/n$, and the corresponding values of $m$, then they are different. In particular, if $p=3/n$, then the graph is not fully-connected, but we can ask for a random $r$-regular graph, where $r$ is some fixed small integer. That is, fix the degree of each node to be $r$, so we have a total of $nr/2$ edges, which is \emph{almost} a member of $G_{nm}$, and look at a random such graph.
\begin{itemize} \item A random $1$-regular graph is a matching. \item A random $2$-regular graph is a disjoint union of cycles. \item A random $3$-regular graph: it is fully-connected and has a small $O(\log(n))$ diameter; it is an expander; it contains a perfect matching (a matching, i.e., a set of pairwise non-adjacent edges, that matches all vertices of the graph) and a Hamiltonian cycle (a closed loop that visits each vertex exactly once). \item A random $4$-regular graph is more complicated to analyze. \end{itemize} How does this relate to small world models? Well, let's start with a ring graph (a very simple version of a low-dimensional lattice, in which each node is connected to neighbors within a distance $k=1$) and add a matching (which is a bunch of random edges in a nice analyzable way). Recall that a random $3$-regular graph has both a Hamiltonian cycle and a perfect matching; well, it's also the case that the union of an $n$-cycle and a random matching is contiguous to a random $3$-regular graph. (Contiguity is a notion of asymptotic equivalence of random graph models that we won't go into.) This is a precise theoretical way of saying that small world models have a local geometry but globally are also expanders in a strong sense of the word. Thus, in particular, when one runs things like diffusions on them, or relatedly when one runs spectral graph algorithms (which have strong connections under the hood to diffusions) on them, what one gets will depend sensitively on the interplay between the line/low-dimensional properties and the noise/expander-like properties. It is well known that similar results hold for heavy-tailed network models such as PA models or PLRG models or many real-world networks. There, there is degree heterogeneity, and this can give a lack of measure concentration that is analogous to the extremely sparse Erdos-Renyi graphs, unless one does things like make minimum degree assumptions.
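Here is a small sketch of that ring-plus-matching picture (my own; for reproducibility it uses a deterministic antipodal matching as a stand-in for a random matching): adding the matching edges collapses the ring's $n/2$ diameter.

```python
from collections import deque

def eccentricity(adj, s):
    """Max BFS distance from node s in an adjacency-list graph
    (assumed connected)."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return max(dist.values())

def diameter(n, edges):
    adj = {i: [] for i in range(n)}
    for i, j in edges:
        adj[i].append(j)
        adj[j].append(i)
    return max(eccentricity(adj, s) for s in range(n))

n = 16
ring = [(i, (i + 1) % n) for i in range(n)]
# Deterministic antipodal matching, standing in for a random perfect matching.
matching = [(i, i + n // 2) for i in range(n // 2)]

print(diameter(n, ring))             # n/2 = 8 for the bare ring
print(diameter(n, ring + matching))  # strictly smaller once the chords are added
```

With a genuinely random matching the diameter would drop to $O(\log(n))$, which is the small-world phenomenon; the deterministic chords here are only meant to make the shortcut effect visible and testable.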
It is less well known that similar things also hold for various types of constructed graphs. Clearly, this might happen if one constructs stochastic blockmodels, since then each piece is a random graph and we are interested in the interactions between different pieces. But what if one constructs a graph with a manifold method, and there is a bit of noise? This is an empirical question; but noise, if it is noise, can be thought of as a random process, and in the same way as the low-dimensional geometry of the vanilla small world model is not too robust to adding noise, similarly geometric manifold-based methods are also not too robust. In all of this, there are algorithmic questions, as well as statistical and machine learning questions such as model selection questions and questions about how to do inference with vector-based or graph-based data, as well as mathematical questions, as well as questions about how these methods perform in practice. We will revisit many of these over the course of the semester. \subsection{Outline of class} In light of all that, here is an outline of some representative topics that we will cover. \begin{enumerate} \item Basics of graph partitioning, including spectral, flow, etc., degree heterogeneity, and other related objectives. \item Connections with diffusions and random walks, including connections with resistor networks, diffusion-based distances, expanders, etc. \item Clustering, prediction, ranking/centrality, and communities, i.e., solving a range of statistics and data analysis methods with variants of spectral methods. \item Graph construction and empirical properties, i.e., different ways graphs can be constructed and empirical aspects of ``given'' and ``constructed'' graphs. \item Machine learning and statistical approaches and uses, e.g., stochastic blockmodels, manifold methods, regularized Laplacian methods, etc. \item Computations, e.g., nearly linear time Laplacian solvers and graph algorithms in the language of linear algebra.
\end{enumerate} \section{(01/27/2015): Basic Matrix Results (1 of 3)} Reading for today. \begin{compactitem} \item ``Lecture notes,'' from Spielman's Spectral Graph Theory class, Fall 2009 and 2012 \end{compactitem} \subsection{Introduction} Today and next time, we will start with some basic results about matrices, and in particular the eigenvalues and eigenvectors of matrices, that will underlie a lot of what we will do in this class. The context is that eigenvalues and eigenvectors are complex (no pun intended, but true nonetheless) things and---in general---in many ways not so ``nice.'' For example, they can change arbitrarily as the coefficients of the matrix change, they may or may not exist, real matrices may have complex eigenvectors and eigenvalues, a matrix may or may not have a full set of $n$ eigenvectors, etc. Given those and related instabilities, an initial challenge is to understand what we can determine from the spectrum of a matrix. As it turns out, for many matrices, and in particular many matrices that underlie spectral graph methods, the situation is much nicer; and, in addition, in some cases they can be related to even nicer things like random walks and diffusions. So, let's start by explaining ``why'' this is the case. To do so, let's get some context for how/why matrices that are useful for spectral graph methods are nicer and also how these nicer matrices sit in the larger universe of arbitrary matrices. This will involve establishing a few basic linear algebraic results; then we will use them to form a basis for a lot of the rest of what we will discuss. This is good to know in general; but it is also good to know for more practical reasons. For example, it will help clarify when vanilla spectral graph methods can be extended, \emph{e.g.}, to weighted graphs or directed graphs or time-varying graphs or other types of normalizations, etc.
\subsection{Some basics} To start, recall that we are interested in the Adjacency matrix of a graph $G=(V,E)$ (or $G=(V,E,W)$ if the graph is weighted) and other matrices that are related to the Adjacency matrix. Recall that the $n \times n$ Adjacency matrix is defined to be \[ A_{ij} = \left\{ \begin{array}{l l} W_{ij} & \quad \text{if $(ij)\in E$}\\ 0 & \quad \text{otherwise} \end{array} \right. , \] where $W_{ij}=1$, for all $(i,j)\in E$, if the graph is unweighted. Later, we will talk about directed graphs, in which case the Adjacency matrix is not symmetric, but note that here it is symmetric. So, let's talk about symmetric matrices: a symmetric matrix is a matrix $A$ for which $A=A^T$, i.e., for which $A_{ij}=A_{ji}$. Almost all of what we will talk about will involve real-valued matrices. But, for a moment, we will start with complex-valued matrices. To do so, recall that if $x=\alpha+i\beta \in \mathbb{C}$ is a complex number, then $\bar{x}=\alpha-i\beta \in \mathbb{C}$ is the \emph{complex conjugate} of $x$. Then, if $M\in\mathbb{C}^{m \times n}$ is a complex-valued matrix, i.e., an $m \times n$ matrix each entry of which is a complex number, then the \emph{conjugate transpose} of $M$, which is denoted $M^{*}$, is the matrix defined as \[ \left(M^{*}\right)_{ij} = \bar{M}_{ji} . \] Note that if $M$ happens to be a real-valued $m \times n$ matrix, then this is just the transpose. If $x,y \in \mathbb{C}^{n}$ are two complex-valued vectors, then we can define their inner product to be \[ \left<x,y\right> = x^{*}y = \sum_{i=1}^{n} \bar{x}_i y_i . \] Note that from this we can also get a norm in the usual way, i.e., $\left<x,x\right> = \|x\|_2^2 \in \mathbb{R}$. Given all this, we have the following definition.
\begin{definition} If $M\in\mathbb{C}^{n \times n}$ is a square complex matrix, $\lambda \in \mathbb{C}$ is a scalar, and $x \in \mathbb{C}^{n} \backslash \{ 0 \}$ is a non-zero vector such that \begin{equation} \label{eqn:eigensystem1} M x = \lambda x \end{equation} then $\lambda$ is an \emph{eigenvalue} of $M$ and $x$ is a corresponding \emph{eigenvector} of $\lambda$. \end{definition} \noindent Note that when Eqn.~(\ref{eqn:eigensystem1}) is satisfied, then this is equivalent to \begin{equation}\label{eq:det} \left( M - \lambda I \right)x = 0 , \mbox{ for } x \ne 0 , \end{equation} where $I$ is an $n \times n$ Identity matrix. Since ~\eqref{eq:det} means $M-\lambda I$ is rank deficient, this in turn is equivalent to \[ \mbox{det}\left( M - \lambda I \right) = 0 . \] Note that this latter expression is a polynomial with $\lambda$ as the variable. That is, if we fix $M$, then the function given by $\lambda \rightarrow \mbox{det}\left( M - \lambda I \right)$ is a univariate polynomial of degree $n$ in $\lambda$. Now, it is a basic fact that every non-zero, single-variable polynomial of degree $n$ with complex coefficients has---counted with multiplicity---exactly $n$ roots. (This counting multiplicity thing might seem pedantic, but it will be important later, since this will correspond to potentially degenerate eigenvalues, and we will be interested in how the corresponding eigenvectors behave.) In particular, any square complex matrix $M$ has $n$ eigenvalues, counting multiplicities, and so there is at least one eigenvalue/eigenvector pair. As an aside, someone asked in class if this fact about complex polynomials having $n$ complex roots is obvious or intuitive. It is sufficiently basic/important to be given the name the fundamental theorem of algebra, but its proof isn't immediate or trivial. We can provide some intuition though.
Note that related formulations of this state that every non-constant single-variable polynomial with complex coefficients has at least one complex root, etc. (e.g., for polynomials with real coefficients, non-real roots come in conjugate pairs); and that the field of complex numbers is algebraically closed. In particular, the statements about having complex roots apply to real-valued polynomials, i.e., since real numbers are complex numbers, real-valued polynomials have complex roots; but it is false that real-valued polynomials always have real roots. Equivalently, the real numbers are not algebraically closed. To see this, recall that the equation $x^2-1=0$, viewed as an equation over the reals, has two real roots, $x=\pm 1$; but the equation $x^2+1=0$ does not have any real roots. Both of these equations have roots over the complex plane: the former having the real roots $x=\pm 1$, and the latter having imaginary roots $x=\pm i$. \subsection{Two results for Hermitian/symmetric matrices} Now, let's define a special class of matrices that we already mentioned. \begin{definition} A matrix $M\in\mathbb{C}^{n \times n}$ is \emph{Hermitian} if $M=M^{*}$. In addition, a matrix $M\in\mathbb{R}^{n \times n}$ is \emph{symmetric} if $M=M^{*}=M^{T}$. \end{definition} For complex-valued Hermitian matrices, we can prove the following two lemmas. \begin{lemma} Let $M$ be a Hermitian matrix. Then, all of the eigenvalues of $M$ are real. \end{lemma} \begin{Proof} Let $M$ be Hermitian, and let $\lambda\in\mathbb{C}$ and $x$ non-zero be s.t. $Mx = \lambda x$. Then it suffices to show that $\lambda=\bar{\lambda}$, since that means that $\lambda\in\mathbb{R}$. To see this, observe that \begin{eqnarray} \nonumber \left<Mx,x\right> &=& \sum_i \sum_j \bar{M_{ij}}\bar{x_j} x_i \\ \label{eqn:herm1} &=& \sum_i \sum_j M_{ji} x_i \bar{x_j} \\ \nonumber &=& \left< x,Mx\right> \end{eqnarray} where Eqn.~(\ref{eqn:herm1}) follows since $M$ is Hermitian.
But we have \[ \left<Mx,x\right> = \left< \lambda x, x\right> = \bar{\lambda} \left<x,x\right> = \bar{\lambda} \|x\|_2^2 \] and also that \[ \left<x,Mx\right> = \left< x, \lambda x\right> = \lambda \left<x,x\right> = \lambda \|x\|_2^2 . \] Thus, $\lambda = \bar{\lambda}$, and the lemma follows. \end{Proof} \begin{lemma} Let $M$ be a Hermitian matrix; and let $x$ and $y$ be eigenvectors corresponding to different eigenvalues. Then $x$ and $y$ are orthogonal. \end{lemma} \begin{Proof} Let $Mx = \lambda x$ and $My = \lambda^{\prime} y$. Then, \[ \left<Mx,y\right> = \left(Mx\right)^{*}y = x^{*}M^{*}y = x^{*}My = \left< x, My\right> . \] But, \[ \left<Mx,y\right> = \bar{\lambda} \left<x,y\right> = \lambda \left<x,y\right> , \] where the last equality uses that $\lambda$ is real, by the previous lemma, and \[ \left<x,My\right> = \lambda^{\prime} \left<x,y\right> . \] Thus \[ \left(\lambda-\lambda^{\prime}\right) \left<x,y\right> = 0 . \] Since $\lambda \ne \lambda^{\prime}$, by assumption, it follows that $\left<x,y\right> = 0$, from which the lemma follows. \end{Proof} So, Hermitian and in particular real symmetric matrices have real eigenvalues, and the eigenvectors corresponding to different eigenvalues are orthogonal. We won't talk about complex numbers and complex matrices for the rest of the term. (Actually, with one exception, since we need to establish that the entries of the eigenvectors are not complex-valued.) \subsection{Consequences of these two results} So far, we haven't said anything about a full set of orthogonal eigenvectors, etc., since, e.g., all of the eigenvectors could be the same or something funny like that. In fact, we will give a few counterexamples to show how the niceness results we establish in this class and the next class fail to hold for general matrices. Far from being pathologies, these examples will point to interesting ways that spectral methods and/or variants of spectral method ideas do or do not work more generally (e.g., periodicity, irreducibility, etc.)
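As an aside, these two lemmas are easy to sanity-check numerically. Below is a minimal sketch using \texttt{numpy}; the random Hermitian matrix is an arbitrary example, not anything specific from the text.

```python
import numpy as np

# Build an arbitrary Hermitian matrix: (B + B*)/2 is Hermitian for any square B.
rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
M = (B + B.conj().T) / 2

# np.linalg.eig makes no symmetry assumption, yet the eigenvalues of a
# Hermitian matrix come out (numerically) real, as in the first lemma.
evals, evecs = np.linalg.eig(M)
assert np.allclose(evals.imag, 0.0, atol=1e-10)

# Eigenvectors for distinct eigenvalues are orthogonal, as in the second
# lemma: the Gram matrix of the eigenvectors is (numerically) diagonal.
G = evecs.conj().T @ evecs
assert np.allclose(G - np.diag(np.diag(G)), 0.0, atol=1e-8)
```

In practice one would call \texttt{np.linalg.eigh}, which exploits the Hermitian structure and returns an orthonormal set of eigenvectors directly.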
Now, let's restrict ourselves to real-valued matrices, in which case Hermitian matrices are just symmetric matrices. With the exception of some results next time on positive and non-negative matrices, where we will consider complex-valued things, the rest of the semester will consider real-valued matrices. Today and next time, we are only talking about complex-valued matrices to set the results that underlie spectral methods in a more general context. So, let's specialize to real-valued matrices. First, let's use the above results to show that we can get a full set of (orthogonalizable) eigenvectors. This is a strong ``niceness'' result, for two reasons: (1) there is a full set of eigenvectors; and (2) the full set of eigenvectors can be chosen to be orthogonal. Of course, you can always get a full set of orthogonal vectors for $\mathbb{R}^{n}$---just work with the canonical vectors or some other set of vectors like that. But what these results say is that for symmetric matrices we can also get a full set of orthogonal vectors that in some sense have something to do with the symmetric matrix under consideration. Clearly, this could be of interest if we want to work with vectors/functions that are in some sense adapted to the data. Let's start with the following result, which says that, given at least one eigenvector (and possibly several), we can find another eigenvector that is orthogonal to it/them. Note that the existence of at least one eigenvector follows from the existence of at least one eigenvalue, which we already established. \begin{lemma} Let $M\in\mathbb{R}^{n \times n}$ be a real symmetric matrix, and let $x_1,\ldots,x_k$, where $1 \le k < n$, be orthogonal eigenvectors of $M$. Then, there is an eigenvector $x_{k+1}$ of $M$ that is orthogonal to $x_1,\ldots,x_k$. \end{lemma} \begin{Proof} Let $V$ be the $(n-k)$-dimensional subspace of $\mathbb{R}^{n}$ that contains all vectors orthogonal to $x_1,\ldots,x_k$.
Then, we claim that: for all $x \in V$, we have that $Mx \in V$. To prove the claim, note that for all $i\in[k]$, we have that \[ \left<x_i,Mx\right> = x_i^TMx = \left(Mx_i\right)^Tx = \lambda_i x_i^T x = \lambda_i \left<x_i,x\right> = 0 , \] where $x_i$ is one of the eigenvectors assumed to be given. Next, let \begin{itemize} \item $B \in \mathbb{R}^{n \times (n-k)}$ be a matrix consisting of the vectors $b_1,\ldots,b_{n-k}$ that form an orthonormal basis for $V$. (This takes advantage of the fact that $\mathbb{R}^{n}$ has a full set of exactly $n$ orthogonal vectors that span it---that are, of course, not necessarily eigenvectors.) \item $B^{\prime} = B^T$. (If $B$ is any such matrix, then $B^{\prime}$ is a matrix such that, for all $y \in V$, we have that $B^{\prime}y$ is an $(n-k)$-dimensional vector such that $BB^{\prime}y=y$. I \emph{think} we don't lose any generality by taking $B$ to be orthogonal.) \item $\lambda$ be a real eigenvalue of the real symmetric matrix \[ M^{\prime} = B^{\prime} M B \in \mathbb{R}^{(n-k)\times(n-k)} , \] with $y$ a corresponding real eigenvector of $M^{\prime}$. I.e., $M^{\prime}y=\lambda y$. \end{itemize} Then, \[ B^{\prime} M B y = \lambda y , \] and so \[ B B^{\prime} M B y = \lambda B y , \] from which it follows that \[ M B y = \lambda B y . \] The last equation follows from the second-to-last since $By \perp \{ x_1,\ldots,x_k\}$, from which it follows that $MBy \perp \{ x_1,\ldots,x_k\}$, by the above claim, and thus $BB^{\prime} MBy = MBy$. I.e., this doesn't change anything since $BB^{\prime} \xi = \xi$, for $\xi$ in that space. So, we can now construct the desired eigenvector. In particular, we can choose $x_{k+1} = By$, and we have that $Mx_{k+1} = \lambda x_{k+1}$, from which the lemma follows. \end{Proof} Clearly, we can apply the above lemma multiple times. Thus, as an important aside, the following ``spectral theorem'' is basically a corollary of the above lemma.
\begin{theorem}[Spectral Theorem] Let $M\in\mathbb{R}^{n \times n}$ be a real symmetric matrix, and let $\lambda_1,\ldots,\lambda_n$ be its real eigenvalues, including multiplicities. Then, there are $n$ orthonormal vectors $x_1,\ldots,x_n$, with $x_i\in\mathbb{R}^{n}$, such that $x_i$ is an eigenvector corresponding to $\lambda_i$, i.e., $M x_i = \lambda_i x_i$. \end{theorem} A few comments about this spectral theorem. \begin{itemize} \item This theorem and theorems like this are very important and many generalizations and variations of it exist. \item Note the wording: there are $n$ vectors ``such that $x_i$ is an eigenvector corresponding to $\lambda_i$.'' In particular, there is no claim (yet) about uniqueness, etc. We still have to be careful about that. \item From this we can derive several other things, some of which we will mention below. \end{itemize} \noindent Someone asked in class about the connection with the SVD. The equations $M x_i = \lambda_i x_i$, for all $\lambda_i$, can be written as $MX = X \Lambda$, or as $M = X \Lambda X^T$, since $X$ is orthogonal. The SVD writes an arbitrary $m \times n$ matrix $A$ as $A=U \Sigma V^T$, where $U$ and $V$ are orthogonal and $\Sigma$ is diagonal and non-negative. So, the SVD is a generalization or variant of this spectral theorem for real-valued square matrices to general $m \times n$ matrices. It is \emph{not} true, however, that the SVD of even a symmetric matrix gives the above theorem. It is true by the above theorem that you can write a symmetric matrix as $M = X \Lambda X^T$, where the eigenvalues in $\Lambda$ are real. But they might be negative. For those matrices, you also have the SVD, but there is no immediate connection. On the other hand, some matrices have all of the entries of $\Lambda$ positive/nonnegative. They are called SPD/SPSD matrices, and for them the eigenvalue decomposition of the above theorem essentially gives the SVD.
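As a numerical aside, both the decomposition $M = X \Lambda X^T$ and its relation to the SVD can be checked on small random examples. Here is a sketch using \texttt{numpy}; the matrices are arbitrary examples, not anything from the text.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
M = A + A.T          # real symmetric; eigenvalues may be negative
S = A @ A.T          # SPSD: symmetric positive semi-definite

# Spectral theorem: M = X diag(lam) X^T with X orthogonal.
lam, X = np.linalg.eigh(M)
assert np.allclose(X @ np.diag(lam) @ X.T, M)
assert np.allclose(X.T @ X, np.eye(5))

# For the SPSD matrix S, the eigendecomposition essentially is the SVD:
# the eigenvalues are non-negative and coincide with the singular values.
mu = np.linalg.eigvalsh(S)
assert np.allclose(np.sort(mu), np.sort(np.linalg.svd(S, compute_uv=False)))

# For the indefinite M, the singular values are |eigenvalues| instead.
assert np.allclose(np.sort(np.abs(lam)),
                   np.sort(np.linalg.svd(M, compute_uv=False)))
```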
(In fact, this is sometimes how the SVD is proven---take a matrix $A$ and write the eigenvalue decomposition of the SPSD matrices $AA^T$ and $A^TA$.) SPD/SPSD matrices are important, since they are basically covariance or correlation matrices; and several matrices we will encounter, e.g., Laplacian matrices, are SPD/SPSD matrices. We can use the above lemma to provide the following variational characterization of eigenvalues, which will be very important for us. \begin{theorem}[Variational Characterization of Eigenvalues] Let $M\in\mathbb{R}^{n \times n}$ be a real symmetric matrix; let $\lambda_1 \le \cdots \le \lambda_n$ be its real eigenvalues, counted with multiplicity and sorted in nondecreasing order; and let $x_1,\ldots,x_k$, for $k < n$, be orthonormal vectors such that $M x_i = \lambda_i x_i$, for $i\in[k]$. Then \[ \lambda_{k+1} = \min_{ \substack{x\in\mathbb{R}^{n}\diagdown\{\vec{0}\}\\x \perp x_i \quad \forall i\in[k]} } \frac{x^TMx}{x^Tx} , \] and any minimizer of this is an eigenvector of $\lambda_{k+1}$. \end{theorem} \begin{Proof} First, by repeatedly applying the above lemma, we get $n-k$ orthogonal eigenvectors that are also orthogonal to $x_1,\ldots,x_k$. Next, we claim that the eigenvalues of this system of $n$ orthogonal eigenvectors include all eigenvalues of $M$. The proof is that if there were any other eigenvalue, then its eigenvector would be orthogonal to the other $n$ eigenvectors, which isn't possible, since we already have $n$ orthogonal vectors in $\mathbb{R}^{n}$. Call the additional $n-k$ vectors $x_{k+1},\ldots,x_n$, where $x_i$ is an eigenvector of $\lambda_i$. (Note that we are inconsistent on whether subscripts mean elements of a vector or different vectors themselves; but it should be clear from context.)
Then, consider the minimization problem \[ \min_{ \substack{x\in\mathbb{R}^{n}\diagdown\{\vec{0}\}\\x \perp x_i \quad \forall i\in[k]} } \frac{x^TMx}{x^Tx} \] The solution $x \equiv x_{k+1}$ is feasible, and it has cost $\lambda_{k+1}$, and so $\mbox{min} \le \lambda_{k+1}$. Now, consider any arbitrary feasible solution $x$, and write it as \[ x = \sum_{i={k+1}}^{n} \alpha_i x_i . \] The cost of this solution is \[ \frac{\sum_{i=k+1}^{n}\lambda_i\alpha_i^2}{\sum_{i=k+1}^{n}\alpha_i^2} \ge \lambda_{k+1} \frac{\sum_{i=k+1}^{n}\alpha_i^2}{\sum_{i=k+1}^{n}\alpha_i^2} = \lambda_{k+1} , \] and so $\mbox{min} \ge \lambda_{k+1}$. By combining the above, we have that $\mbox{min} = \lambda_{k+1}$. Note that if $x$ is a minimizer of this expression, i.e., if the cost of $x$ equals $\lambda_{k+1}$, then $\alpha_i=0$ for all $i$ such that $\lambda_i > \lambda_{k+1}$, and so $x$ is a linear combination of eigenvectors of $\lambda_{k+1}$, and so it itself is an eigenvector of $\lambda_{k+1}$. \end{Proof} Two special cases of the above theorem are worth mentioning. \begin{itemize} \item The leading eigenvector. \[ \lambda_{1} = \min_{ x\in\mathbb{R}^{n}\diagdown\{\vec{0}\} } \frac{x^TMx}{x^Tx} \] \item The next eigenvector. \[ \lambda_{2} = \min_{ x\in\mathbb{R}^{n}\diagdown\{\vec{0}\},x \perp x_1 } \frac{x^TMx}{x^Tx} , \] where $x_1$ is a minimizer of the previous expression. \end{itemize} \subsection{Some things that were skipped} Spielman and Trevisan give two slightly different versions of the variational characterization and Courant-Fischer theorem, i.e., a min-max result, which might be of interest to present. From wikipedia, there is the following discussion of the min-max theorem which is nice.
\begin{itemize} \item Let $A \in \mathbb{R}^{n \times n}$ be a Hermitian/symmetric matrix; then the Rayleigh quotient $R_A:\mathbb{R}^{n}\diagdown\{0\}\rightarrow\mathbb{R}$ is $R_A(x) = \frac{\left<Ax,x\right>}{\left<x,x\right>}$, or equivalently $f_A(x) = \left<Ax,x\right>$, for $\|x\|_2=1$. \item Fact: for Hermitian matrices, the range of the continuous function $R_A(x)$ or $f_A(x)$ is a compact subset $[a,b]$ of $\mathbb{R}$. The max $b$ and min $a$ are also the largest and smallest eigenvalue of $A$, respectively. The min-max theorem can be viewed as a refinement of this fact. \item \begin{theorem} If $A\in\mathbb{R}^{n \times n}$ is Hermitian with eigenvalues $\lambda_1 \ge \cdots \ge \lambda_n$, then \[ \lambda_k = \max\{ \min\{ R_A(x) : x \in U, x \ne 0 \} : \mbox{dim}(U) = k \} , \] and also \[ \lambda_k = \min\{ \max\{ R_A(x) : x \in U, x \ne 0 \} : \mbox{dim}(U) = n-k+1 \} . \] \end{theorem} \item In particular, \[ \lambda_n \le R_A(x) \le \lambda_1 , \] for all $x\in\mathbb{R}^{n}\diagdown\{0\}$. \item A simpler formulation for the max and min is \begin{eqnarray*} \lambda_1 &=& \max \{ R_A(x) : x \ne 0 \} \\ \lambda_n &=& \min \{ R_A(x) : x \ne 0 \} \end{eqnarray*} \end{itemize} Another thing that follows from the min-max theorem is the Cauchy Interlacing Theorem. See Spielman's 9/16/09 notes and Wikipedia for two different forms of this. This can be used to control eigenvalues as you make changes to the matrix. It is useful, and we may revisit this later. And, finally, here is a counterexample to these results in general. Lest one think that these niceness results always hold, here is a simple non-symmetric matrix. \[ A = \left( \begin{array}{cc} 0 & 1 \\ 0 & 0 \end{array} \right) \] (This is an example of a nilpotent matrix.) \begin{definition} A \emph{nilpotent matrix} is a square matrix $A$ such that $A^k=0$ for some $k \in \mathbb{Z}^{+}$.
\end{definition} (More generally, any triangular matrix with all zeros on the diagonal is nilpotent; but a nilpotent matrix could also be a dense matrix.) For this matrix $A$, we can define $R_A(x)$ as with the Rayleigh quotient. Then, \begin{itemize} \item The only eigenvalue of $A$ equals $0$. \item The maximum value of $R_A(x)$ is equal to $\frac{1}{2}$, which is larger than $0$. \end{itemize} So, in particular, for non-symmetric matrices the Rayleigh quotient doesn't say much about the spectrum. \subsection{Summary} Today we showed that any symmetric matrix (e.g., adjacency matrix $A$ of an undirected graph, Laplacian matrix, but more generally) is nice in that it has a full set of $n$ real eigenvalues and a full set of $n$ orthonormal eigenvectors. Next time, we will ask what those eigenvectors look like, since spectral methods make crucial use of that. To do so, we will consider a different class of matrices, namely positive or nonnegative (not PSD or SPSD, but element-wise positive or nonnegative) and we will look at the extremal, i.e., top or bottom, eigenvectors. \section{(01/29/2015): Basic Matrix Results (2 of 3)} Reading for today. \begin{compactitem} \item Same as last class. \end{compactitem} \subsection{Review and overview} Last time, we considered symmetric matrices, and we showed that if $M$ is an $n \times n$ real-valued symmetric matrix, then the following hold. \begin{itemize} \item There are $n$ eigenvalues, counting multiplicity, that are all real. \item The eigenvectors corresponding to different eigenvalues are orthogonal. \item Given $k$ orthogonal eigenvectors, we can construct one more that is orthogonal to those $k$, and thus we can iterate this process to get a full set of $n$ orthogonal eigenvectors. \item This spectral theorem leads to a variational characterization of eigenvalues/eigenvectors and other useful characterizations. \end{itemize} These results say that symmetric matrices have several ``nice'' properties, and we will see that spectral methods will use these extensively.
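As a quick numerical check of this review, and of the nilpotent counterexample from the previous subsection, here is a minimal sketch using \texttt{numpy}; the symmetric matrix is an arbitrary random example.

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((4, 4))
M = B + B.T                         # real symmetric: the "nice" case

# For symmetric M, every Rayleigh quotient lies in [lambda_min, lambda_max].
lam = np.linalg.eigvalsh(M)         # real, sorted in nondecreasing order
x = rng.standard_normal(4)
R = (x @ M @ x) / (x @ x)
assert lam[0] - 1e-12 <= R <= lam[-1] + 1e-12

# For the non-symmetric nilpotent matrix from last class this fails:
# the only eigenvalue is 0, yet the Rayleigh quotient at (1, 1) is 1/2.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
assert np.allclose(np.linalg.eigvals(A), 0.0)
y = np.array([1.0, 1.0])
assert np.isclose((y @ A @ y) / (y @ y), 0.5)
```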
Today, we will consider a different class of matrices and establish a different type of ``niceness'' result, which will also be used extensively by spectral methods. In particular, we want to say something about what the eigenvectors, and in particular the extremal eigenvectors, e.g., the largest one or few, or the smallest one or few, ``look like.'' The reason is that spectral methods---both vanilla and non-vanilla variants---will rely crucially on this; thus, understanding when and why this is true will be helpful to see how spectral methods sit with respect to other types of methods, to understand when they can be generalized, or not, and so on. The class of matrices we will consider consists of \emph{positive matrices} as well as related \emph{non-negative matrices}. By positive/non-negative, we mean that this holds element-wise. Matrices of this form could be, e.g., the symmetric adjacency matrix of an undirected graph, but they could also be the non-symmetric adjacency matrix of a directed graph. (In the latter case, of course, it is not a symmetric matrix, and so the results of the last class don't apply directly.) In addition, the undirected/directed graphs could be weighted, assuming in both cases that weights are non-negative. In addition, it could apply more generally to any positive/non-negative matrix (although, in fact, we will be able to take a positive/non-negative matrix and interpret it as the adjacency matrix of a graph). The main theory that is used to make statements in this context and that we will discuss today and next time is something called Perron-Frobenius theory. \subsection{Some initial examples} Perron-Frobenius theory deals with positive/non-negative vectors and matrices, i.e., vectors and matrices that are entry-wise positive/nonnegative. Before proceeding with the main results of Perron-Frobenius theory, let us see a few examples of why it might be of interest and when it doesn't hold. \textbf{Example.} Non-symmetric and not non-negative matrix.
Let's start with the following matrix, which is neither positive/non-negative nor symmetric. \[ A = \left( \begin{array}{cc} 0 & -1 \\ 2 & 3 \end{array} \right) . \] The characteristic polynomial of this matrix is \begin{eqnarray*} \mbox{det}\left( A - \lambda I \right) &=& \left| \begin{array}{cc} -\lambda & -1 \\ 2 & 3-\lambda \end{array} \right| \\ &=& -\lambda(3-\lambda)+2 \\ &=& \lambda^2-3\lambda+2 \\ &=& \left(\lambda-1\right)\left(\lambda-2\right) , \end{eqnarray*} from which it follows that the eigenvalues are $1$ and $2$. Plugging in $\lambda=1$, we get $x_1+x_2=0$, and so the eigenvector corresponding to $\lambda=1$ is \[ x_{\lambda=1} = \frac{1}{\sqrt{2}} \left( \begin{array}{c} 1 \\ -1 \end{array} \right). \] Plugging in $\lambda=2$, we get $2x_1+x_2=0$, and so the eigenvector corresponding to $\lambda=2$ is \[ x_{\lambda=2} = \frac{1}{\sqrt{5}} \left( \begin{array}{c} 1 \\ -2 \end{array} \right). \] So, this matrix has two eigenvalues and two eigenvectors, but they are not orthogonal, which is ok, since $A$ is not symmetric. \textbf{Example.} Defective matrix. Consider the following matrix, which is an example of a ``defective'' matrix. \[ A = \left( \begin{array}{cc} 1 & 1 \\ 0 & 1 \end{array} \right) . \] The characteristic polynomial is \begin{equation*} \mbox{det}\left( A - \lambda I \right) = \left| \begin{array}{cc} 1-\lambda & 1 \\ 0 & 1-\lambda \end{array} \right| = \left(1-\lambda\right)^{2} , \end{equation*} and so $1$ is a double root. If we plug this in, then we get the system of equations \begin{equation*} \left( \begin{array}{cc} 0 & 1 \\ 0 & 0 \end{array} \right) \left( \begin{array}{c} x_1 \\ x_2 \end{array} \right) = \left( \begin{array}{c} 0 \\ 0 \end{array} \right) , \end{equation*} meaning that $x_2=0$ and $x_1$ is arbitrary. (BTW, note that the matrix that appears in that system of equations is a nilpotent matrix. See below.
From the last class, this has a value of the Rayleigh quotient that is not in the closed interval defined by the min to max eigenvalue.) Thus, there is only one linearly independent eigenvector corresponding to the double eigenvalue $\lambda=1$ and it is \[ x_{\lambda=1} = \left( \begin{array}{c} 1 \\ 0 \end{array} \right). \] \textbf{Example.} Nilpotent matrix. Consider the following matrix, \[ A = \left( \begin{array}{cc} 0 & 1 \\ 0 & 0 \end{array} \right) . \] The only eigenvalue of this equals zero. The eigenvector is the same as in the above example. But this matrix has the property that if you raise it to some finite power then it equals the all-zeros matrix. \textbf{Example.} Identity. The problem above with having only one linearly independent eigenvector is \emph{not} due to the multiplicity in eigenvalues. For example, consider the following identity matrix, \[ A = \left( \begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array} \right) . \] which has characteristic polynomial $\left(1-\lambda\right)^{2}=0$, and so has $\lambda=1$ as a repeated root. Although it has a repeated root, it has two linearly independent eigenvectors. For example, \[ x_{1} = \left( \begin{array}{c} 1 \\ 0 \end{array} \right) \quad \mbox{and} \quad x_{2} = \left( \begin{array}{c} 0 \\ 1 \end{array} \right) , \] or, alternatively, \[ x_{1} = \frac{1}{\sqrt{2}} \left( \begin{array}{c} 1 \\ 1 \end{array} \right) \quad \mbox{and} \quad x_{2} = \frac{1}{\sqrt{2}} \left( \begin{array}{c} -1 \\ 1 \end{array} \right) . \] This distinction as to whether there are multiple eigenvectors associated with a degenerate eigenvalue is an important distinction, and so we introduce the following definitions.
\begin{definition} Given a matrix $A$, for an eigenvalue $\lambda_i$ \begin{itemize} \item its \emph{algebraic multiplicity}, denoted $\mu_A(\lambda_i)$, is the multiplicity of $\lambda_i$ as a root of the characteristic polynomial; and \item its \emph{geometric multiplicity}, denoted $\gamma_A(\lambda_i)$, is the maximum number of linearly independent eigenvectors associated with it. \end{itemize} \end{definition} Here are some facts (and terminology) concerning the relationship between the algebraic multiplicity and the geometric multiplicity of an eigenvalue. \begin{itemize} \item $ 1 \le \gamma_A(\lambda_i) \le \mu_A(\lambda_i)$. \item If $\mu_A(\lambda_i)=1$, then $\lambda_i$ is a simple eigenvalue. \item If $\gamma_A(\lambda_i)=\mu_A(\lambda_i)$, then $\lambda_i$ is a semi-simple eigenvalue. \item If $\gamma_A(\lambda_i) < \mu_A(\lambda_i)$, for some $i$, then the matrix $A$ is defective. Defective matrices are more complicated since you need things like Jordan forms, and so they are messier. \item If $\sum_i \gamma_A(\lambda_i) = n$, then $A$ has $n$ linearly independent eigenvectors. In this case, $A$ is diagonalizable. I.e., we can write $AQ = Q\Lambda$, and so $Q^{-1}AQ = \Lambda$. And conversely. \end{itemize} \subsection{Basic ideas behind Perron-Frobenius theory} The basic idea of Perron-Frobenius theory is that if you have a matrix $A$ with all positive entries (think of it as the adjacency matrix of a general, i.e., possibly directed, graph) then it is ``nice'' in several ways: \begin{itemize} \item there is one simple real eigenvalue of $A$ that has magnitude larger than all other eigenvalues; \item the eigenvector associated with this eigenvalue has all positive entries; \item if you increase/decrease the magnitude of the entries of $A$, then that maximum eigenvalue increases/decreases; and \item a few other related properties.
\end{itemize} These results generalize to non-negative matrices (and slightly more generally, but that is of less interest in general). There are a few gotchas that you have to watch out for, and those typically have an intuitive meaning. So, it will be important to understand not only how to establish the above statements, but also what the gotchas mean and how to avoid them. These are quite strong claims, and they are certainly false in general, even for non-negative matrices, without those additional assumptions. About that, note that every nonnegative matrix is the limit of positive matrices, and so there exists an eigenvector with nonnegative components. Clearly, the corresponding eigenvalue is nonnegative and greater than or equal to every other eigenvalue in absolute value. Consider the following examples. \textbf{Example.} Symmetric matrix. Consider the following matrix. \[ A = \left( \begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array} \right) . \] This is a non-negative matrix, and there is an eigenvalue equal to $1$. However, there exists another eigenvalue whose absolute value equals (and is not strictly less than) that of this maximal one: the eigenvalues are $-1$ and $1$, both of which have absolute value $1$. \textbf{Example.} Non-symmetric matrix. Consider the following matrix. \[ A = \left( \begin{array}{cc} 0 & 1 \\ 0 & 0 \end{array} \right) . \] This is a matrix in which the maximum eigenvalue is not simple. The only root of the characteristic polynomial is $0$, and the corresponding eigenvector, i.e., $\left( \begin{array}{c} 1 \\ 0 \end{array} \right) $, is not strictly positive. These two counterexamples contain the basic ideas underlying the two main gotchas that must be dealt with when generalizing Perron-Frobenius theory to only non-negative matrices. (As an aside, it is worth wondering what is unusual about that latter matrix and how it can be generalized.
One is \[ A = \left( \begin{array}{cccc} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ \end{array} \right) , \] and there are others. These simple examples might seem trivial, but they contain several key ideas we will see later.) One point of these examples is that the requirement that the entries of the matrix $A$ be strictly positive is important for Perron-Frobenius theory to hold. If instead we only have non-negativity, we need further assumptions on $A$, which we will see below (and in the special case of matrices associated with graphs, irreducibility of the matrix is equivalent to strong connectedness of the graph). \subsection{Reducibility and types of connectedness} We get a non-trivial generalization of Perron-Frobenius theory from all-positive matrices to non-negative matrices, if we work with the class of irreducible matrices. (We will get an even cleaner statement if we work with the class of irreducible aperiodic matrices. We will start with the former first, and then we will get to the latter.) We start with the following definition, which applies to an $n \times n$ matrix $A$. For those readers familiar with Markov chains and related topics, there is an obvious interpretation we will get to, but for now we just provide the linear algebraic definition. \begin{definition} A matrix $A \in \mathbb{R}^{n \times n}$ is \emph{reducible} if there exists a permutation matrix $P$ such that \[ C = PAP^T = \left( \begin{array}{cc} A_{11} & A_{12} \\ 0 & A_{22} \end{array} \right) , \] with $A_{11} \in \mathbb{R}^{r \times r}$ and $A_{22} \in \mathbb{R}^{(n-r) \times (n-r)}$, where $0 < r < n$. (Note that the off-diagonal matrices, $0$ and $A_{12}$, will in general be rectangular.) A matrix $A\in\mathbb{R}^{n \times n}$ is \emph{irreducible} if it is not reducible. \end{definition} As an aside, here is another definition that you may come across and that we may point to later.
\begin{definition} A nonnegative matrix $A\in\mathbb{R}^{n \times n}$ is \emph{irreducible} if $\forall (i,j) \in[n]^2 , \exists t \in \mathbb{N} : \left(A^t\right)_{ij} >0$. And it is \emph{primitive} if $ \exists t \in \mathbb{N} , \forall (i,j) \in[n]^2 : \left(A^t\right)_{ij} >0$. \end{definition} This is less intuitive, but I'm mentioning it since these are algebraic and linear algebraic ideas, and we haven't yet connected it with random walks. But later we will understand this in terms of things like lazy random walks (which is more intuitive for most people than the gcd definition of aperiodicity/primitiveness). Fact: If $A$, a non-negative square matrix, is nilpotent (i.e., s.t. $A^k=0$, for some $k\in\mathbb{Z}^{+}$), then it is reducible. \begin{Proof} By contradiction, suppose $A$ is irreducible and nilpotent. Let $k$ be the smallest positive integer such that $A^k=0$. Then we know $A^{k-1}\neq 0$, so $\left(A^{k-1}\right)_{ij}> 0$ for some $i,j$. Since $A$ is irreducible, we know there exists $t\ge 1$ such that $\left(A^t\right)_{ji}> 0$. Since all powers of $A$ are non-negative, $\left(A^{k-1+t}\right)_{ii}=A^{k-1}_{i,\cdot}A^t_{\cdot,i}\ge \left(A^{k-1}\right)_{ij}\left(A^t\right)_{ji} > 0$, which gives a contradiction, since we have $A^k=0 \Rightarrow A^{k'}=0 \quad \forall k'\ge k$, while $k-1+t\ge k$, but $A^{k-1+t}\neq 0$. \end{Proof} We start with a lemma that, when viewed the right way, i.e., in a way that is formal but not intuitive, is trivial to prove. \begin{lemma} Let $A\in\mathbb{R}^{n \times n}$ be a non-negative square matrix. If $A$ is primitive, then $A$ is irreducible. \end{lemma} \begin{Proof} $\exists\forall\rightarrow\forall\exists$: if a single $t$ works for all pairs $(i,j)$, then for each pair $(i,j)$ there certainly exists some $t$ that works. \end{Proof} It can be shown that the converse is false. But we can establish a sort of converse in the following lemma. (It is a sort of converse since $A$ and $I+A$ are related, and in particular in our applications to spectral graph theory the latter will essentially have an interpretation in terms of a lazy random walk associated with the former.)
\begin{lemma} Let $A\in\mathbb{R}^{n \times n}$ be a non-negative square matrix. If $A$ is irreducible, then $I+A$ is primitive. \end{lemma} \begin{Proof} Write out the binomial expansion \[ \left(I+A\right)^n = \sum_{k=0}^{n} {n \choose k} A^k . \] Since $A$ is irreducible, for each pair $(i,j)$ with $i \ne j$ there is some $t$, which can be taken to be at most $n$, such that $\left(A^t\right)_{ij}>0$; the $k=0$ term, $I$, takes care of the diagonal entries; and all of the terms in the sum are non-negative. So every entry of $\left(I+A\right)^n$ is positive. \end{Proof} Note that a positive matrix may be viewed as the adjacency matrix of a weighted complete graph. Let's be more precise about directed and undirected graphs. \begin{definition} A \emph{directed graph} $G(A)$ associated with an $n \times n$ nonnegative matrix $A$ consists of $n$ nodes/vertices $P_1,\ldots,P_n$, where an edge leads from $P_i$ to $P_j$ iff $A_{ij} \ne 0$. \end{definition} Since the edges of a directed graph are directed, the connectivity properties are a little more subtle than for undirected graphs. Here, we need the following. We will probably at least mention other variants later. \begin{definition} A directed graph $G$ is \emph{strongly connected} if $\forall$ ordered pairs $(P_i,P_j)$ of vertices of $G$, $\exists$ a path, i.e., a sequence of edges, $(P_i,P_{l_1}), (P_{l_1},P_{l_2}),\ldots, (P_{l_{r-1}},P_{j})$, which leads from $P_i$ to $P_j$. The \emph{length} of the path is $r$. \end{definition} Fact: The graph $G(A^k)$ of a nonnegative matrix $A$ consists of all paths of $G(A)$ of length $k$ (i.e., there is an edge from $i$ to $j$ in $G(A^k)$ iff there is a path of length $k$ from $i$ to $j$ in $G$). Keep this fact in mind since different variants of spectral methods involve weighting paths of different lengths in different ways. Here is a theorem that connects the linear algebraic idea of irreducibility with the graph theoretic idea of connectedness. Like many things that tie together notions from two different areas, it can seem trivial when it is presented in such a way that it looks obvious; but it really is connecting two quite different ideas. We will see more of this later.
\begin{theorem} An $n \times n$ matrix $A$ is irreducible iff the corresponding directed graph $G(A)$ is strongly connected. \end{theorem} \begin{Proof} Let $A$ be an irreducible matrix. Assume, for contradiction, that $G(A)$ is \emph{not} strongly connected. Then, there exists an ordered pair of nodes, call them $(P_i,P_j)$, s.t. there does not exist a path from $P_i$ to $P_j$. In this case, let $S_1$ be the set of nodes \emph{not} reachable from $P_i$, and let $S_2$ be the remainder of the nodes, i.e., $P_i$ together with the nodes reachable from it. Note that there is no edge from any node $P_{\ell}\in S_2$ to any node $P_q \in S_1$, since otherwise $P_q$ would be reachable from $P_i$, and so we should have $P_q\in S_2$. And note that both sets are nonempty, since $P_j\in S_1$ and $P_i\in S_2$. Let $r = |S_1|$ and $n-r = |S_2|$. Consider a permutation transformation $C=PAP^T$ that reorders the nodes of $G(A)$ such that \[ \begin{cases} P_1,P_2,\cdots,P_r \in S_1 \\ P_{r+1},P_{r+2},\cdots,P_n \in S_2 \end{cases} \] That is, \[ C_{k\ell} = 0 \quad\forall \begin{cases} k = r+1,r+2,\ldots,n \\ \ell = 1,2,\ldots,r . \end{cases} \] But this is a contradiction, since $A$ is irreducible. Conversely, assume that $G(A)$ is strongly connected, and assume for contradiction that $A$ is not irreducible. Reversing the order of the above argument, we arrive at the conclusion that $G(A)$ is not strongly connected, which is a contradiction. \end{Proof} We conclude by noting that, informally, there are two types of irreducibility. To see this, recall that in the definition of reducibility/irreducibility, we have the following matrix: \[ C = PAP^T = \left( \begin{array}{cc} A_{11} & A_{12} \\ 0 & A_{22} \end{array} \right) . \] In one type, $A_{12} \ne 0$: in this case, we can go from the first set to the second set and get stuck in some sort of sink. (We haven't made that precise, in terms of random walk interpretations, but there is some sort of interaction between the two groups.)
In the other type, $A_{12} = 0$: in this case, there are two parts that don't talk with each other, and so essentially there are two separate graphs/matrices. \subsection{Basics of Perron-Frobenius theory} Let's start with the following definition. (Note here that we are using subscripts to refer to elements of a vector, which is inconsistent with what we did in the last class.) \begin{definition} A vector $x\in\mathbb{R}^{n}$ is \emph{positive} (resp., \emph{non-negative}) if all of the entries of the vector are positive (resp., non-negative), i.e., if $x_i > 0$ for all $i\in[n]$ (resp., if $x_i \ge 0$ for all $i\in[n]$). \end{definition} A similar definition holds for $m \times n$ matrices. Note that this is \emph{not} the same as SPD/SPSD matrices. Let's also provide the following definition. \begin{definition} Let $\lambda_1,\ldots,\lambda_n$ be the (real or complex) eigenvalues of a matrix $A\in\mathbb{C}^{n \times n}$. Then the \emph{spectral radius} is $ \rho_A = \rho(A) = \max_i \left( \left| \lambda_i \right| \right) . $ \end{definition} Here is a basic statement of the Perron-Frobenius theorem. \begin{theorem}[Perron-Frobenius] Let $A\in\mathbb{R}^{n \times n}$ be an irreducible non-negative matrix. Then, \begin{enumerate} \item $A$ has a positive real eigenvalue equal to its spectral radius. \item That eigenvalue $\rho_A$ has algebraic and geometric multiplicity equal to one. \item The one eigenvector $x$ associated with the eigenvalue $\rho_A$ has all positive entries. \item $\rho_A$ increases when any entry of $A$ increases. \item There is no other non-negative eigenvector of $A$ different than $x$. \item If, in addition, $A$ is primitive, then each other eigenvalue $\lambda$ of $A$ satisfies $\left| \lambda \right| < \rho_A$. \end{enumerate} \end{theorem} Before giving the proof, which we will do next class, let's first start with some ideas that will suggest how to do the proof. Let $P=\left(I+A\right)^{n}$.
Since $P$ is positive, for every non-negative and non-null vector $v$ we have that $Pv >0$ element-wise. Relatedly, if $v \le w$ element-wise, and $v \ne w$, then $Pv < Pw$. Let \[ Q = \left\{ x \in \mathbb{R}^{n} \mbox{ s.t. } x \geq 0 , x \neq 0 \right\} \] be the nonnegative orthant, excluding the origin. In addition, let \[ C = \left\{ x \in \mathbb{R}^{n} \mbox{ s.t. } x \geq 0 , ||x||=1 \right\} , \] where $||\cdot||$ is any vector norm. Clearly, $C$ is compact, i.e., closed and bounded. Then, for all $z \in Q$, we can define the following function: let \[ f(z) = \max \left\{ s\in\mathbb{R} : sz \le Az \right\} = \min_{1 \le i \le n,z_i \ne 0} \frac{\left(Az\right)_{i}}{z_i} . \] Here are facts about $f$. \begin{itemize} \item $f(rz) = f(z)$, for all $r > 0$. \item If $Az = \lambda z$, i.e., if $(\lambda,z)$ is an eigenpair, then $f(z) = \lambda$. \item If $sz \le Az$, then $sPz \le PAz = APz$, where the latter follows since $A$ and $P$ clearly commute. So, \[ f(z) \le f(Pz) . \] In addition, if $z$ is \emph{not} an eigenvector of $A$, then $sz \ne Az$, for all $s$; and $sPz < APz$. From the second expression for $f(z)$ above, we have in this case that \[ f(z) < f(Pz) , \] i.e., an inequality in general but a strict inequality if $z$ is not an eigenvector. \end{itemize} This \emph{suggests} an idea for the proof: look for a positive vector that maximizes the function $f$; show it is the eigenvector we want in the theorem; and show that it establishes the properties stated in the theorem. \section{(02/03/2015): Basic Matrix Results (3 of 3)} Reading for today. \begin{compactitem} \item Same as last class. \end{compactitem} \subsection{Review and overview} Recall the basic statement of the Perron-Frobenius theorem from last class. \begin{theorem}[Perron-Frobenius] Let $A\in\mathbb{R}^{n \times n}$ be an irreducible non-negative matrix.
Then, \begin{enumerate} \item $A$ has a positive real eigenvalue $\lambda_{max}$, which is equal to the spectral radius, and $\lambda_{max}$ has an associated eigenvector $x$ with all positive entries. \item If $0 \le B \le A$, with $B \ne A$, then every eigenvalue $\sigma$ of $B$ satisfies $|\sigma| < \lambda_{max} = \rho_A$. (Note that $B$ does not need to be irreducible.) In particular, $B$ can be obtained from $A$ by zeroing out entries; and also all of the diagonal minors $A_{(i)}$ obtained from $A$ by deleting the $i^{th}$ row/column have eigenvalues with absolute value strictly less than $\lambda_{max} = \rho_A$. Informally, this says: $\rho_A$ increases when any entry of $A$ increases. \item That eigenvalue $\rho_A$ has algebraic and geometric multiplicity equal to one. \item If $y \ge 0$, $y \ne 0$ is a vector and $\mu$ is a number such that $A y \le \mu y$, then $y > 0$ and $\mu \ge \lambda_{max}$; with $\mu = \lambda_{max}$ iff $y$ is a multiple of $x$. Informally, this says: there is no other non-negative eigenvector of $A$ different than $x$. \item If, in addition, $A$ is primitive/aperiodic, then each other eigenvalue $\lambda$ of $A$ satisfies $\left| \lambda \right| < \rho_A$. \item If, in addition, $A$ is primitive/aperiodic, then \[ \lim_{t \rightarrow \infty} \left( \frac{1}{\rho_A} A \right)^{t} = xy^T , \] where $x$ and $y$ are positive eigenvectors of $A$ and $A^T$ with eigenvalue $\rho_A$, i.e., $Ax = \rho_A x$ and $A^T y = \rho_A y$ (i.e., $y^TA = \rho_Ay^T$), normalized such that $x^Ty=1$. \end{enumerate} \end{theorem} Today, we will do three things: (1) we will prove this theorem; (2) we will also discuss periodicity/aperiodicity issues; (3) we will also briefly discuss the first connectivity/non-connectivity result for Adjacency and Laplacian matrices of graphs that will use the ideas we have developed in the last few~classes.
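Before proceeding with the proof, here is a small numerical illustration of the first conclusions of the theorem, in pure Python (the $2 \times 2$ example matrix and the helper names are our own illustrative choices, not anything from the notes): power iteration recovers a positive eigenvalue equal to the spectral radius, together with a strictly positive eigenvector.

```python
# Sketch: for the primitive matrix A below (eigenvalues 2 and -1),
# repeated multiplication converges to the Perron eigenpair.

def matvec(X, v):
    return [sum(X[i][j] * v[j] for j in range(len(v))) for i in range(len(X))]

def power_iteration(A, steps=200):
    """Estimate the spectral radius and a (sum-normalized) Perron vector."""
    v = [1.0] * len(A)
    rho = 1.0
    for _ in range(steps):
        w = matvec(A, v)
        rho = max(abs(wi) for wi in w)
        v = [wi / rho for wi in w]
    s = sum(v)
    return rho, [vi / s for vi in v]

A = [[1.0, 2.0],
     [1.0, 0.0]]                      # irreducible and primitive

rho, x = power_iteration(A)
assert abs(rho - 2.0) < 1e-9          # rho_A is a positive real eigenvalue
assert all(xi > 0 for xi in x)        # ... with an all-positive eigenvector
Ax = matvec(A, x)
assert all(abs(Ax[i] - rho * x[i]) < 1e-9 for i in range(len(x)))
```

The convergence of these normalized iterates is driven by the last item of the theorem: $(A/\rho_A)^t$ tends to the rank-one matrix $xy^T$.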
Before proceeding, one note: an interpretation of a matrix $B$ generated from $A$ by zeroing out an entry or an entire row/column is that you can remove an edge from a graph or you can remove a node and all of the associated edges from a graph. (The monotonicity provided by that part of this theorem will be important for making claims about how the spectral radius behaves when such changes are made to a graph.) This obviously holds true for Adjacency matrices, and a similar statement also holds true for Laplacian matrices. \subsection{Proof of the Perron-Frobenius theorem} We start with some general notation and definitions; then we prove each part of the theorem in~turn. Recall from last time that we let $P=\left(I+A\right)^{n}$ and thus $P$ is positive. Thus, for every non-negative and non-null vector $v$, we have that $Pv >0$ element-wise; and (equivalently) if $v \le w$ element-wise, and $v \ne w$, then we have that $Pv < Pw$. Recall also that we defined \begin{eqnarray*} Q &=& \left\{ x \in \mathbb{R}^{n} \mbox{ s.t. } x \geq 0 , x \neq 0 \right\} \\ C &=& \left\{ x \in \mathbb{R}^{n} \mbox{ s.t. } x \geq 0 , ||x||=1 \right\} , \end{eqnarray*} where $||\cdot||$ is any vector norm. Note in particular that this means that $C$ is compact, i.e., closed and bounded. Recall also that, for all $z \in Q$, we defined the following function: let \[ f(z) = \max \left\{ s\in\mathbb{R} : sz \le Az \right\} = \min_{1 \le i \le n,z_i \ne 0} \frac{\left(Az\right)_{i}}{z_i} . \] Finally, recall several facts about the function $f$. \begin{itemize} \item $f(rz) = f(z)$, for all $r > 0$. \item If $Az = \lambda z$, i.e., if $(\lambda,z)$ is an eigenpair, then $f(z) = \lambda$. \item In general, $ f(z) \le f(Pz) $; and if $z$ is \emph{not} an eigenvector of $A$, then $ f(z) < f(Pz) $. (The reason for the former is that if $sz \le Az$, then $sPz \le PAz = APz$.
The reason for the latter is that in this case $sz \ne Az$, for all $s$, and $sPz < APz$, and by considering the second expression for $f(z)$ above.) \end{itemize} We will prove the theorem in several steps. \subsection{Positive eigenvalue with positive eigenvector.} Here, we will show that there is a positive eigenvalue $\lambda^{*}$ and that the associated eigenvector $x^{*}$ is a positive vector. To do so, consider $P(C)$, the image of $C$ under the action of the operator $P$. This is a compact set, and all vectors in $P(C)$ are positive. By the second expression in the definition of $f(\cdot)$ above, we have that $f$ is continuous on $P(C)$. Thus, $f$ achieves its maximum value on $P(C)$, i.e., there exists a vector $x \in P(C)$ such that \[ f(x) = \sup_{z \in C} f(Pz) . \] Since $f(z) \le f(Pz)$, the vector $x$ realizes the maximum value $f_{max}$ of $f$ on $Q$. So, \[ f_{max} = f(x) \le f(Px) \le f_{max} . \] Thus, from the third property of $f$ above, $x$ is an eigenvector of $A$ with eigenvalue $f_{max}$. Since $x \in P(C)$, then $x$ is a positive vector; and since $Ax > 0$ and $Ax = f_{max}x$, it follows that $f_{max}>0$. (Note that this result shows that $f_{max}=\lambda^{*}$ is achieved on an eigenvector $x=x^{*}$, but it doesn't show yet that it is equal to the spectral radius.) \subsection{That eigenvalue equals the spectral radius.} Here, we will show that $f_{max}= \rho_A$, i.e., $f_{max}$ equals the spectral radius. To do so, let $z\in\mathbb{C}^{n}$ be an eigenvector of $A$ with eigenvalue $\lambda\in\mathbb{C}$; and let $|z|$ be the vector, each entry of which equals $|z_i|$. Then, $|z| \in Q$. We claim that $ |\lambda| |z| \le A |z| $. To establish the claim, rewrite it entry-wise as $ |\lambda| |z_i| \le \sum_{k=1}^{n} A_{ik} |z_k| $.
Then, since $Az = \lambda z$, i.e., $\lambda z_i = \sum_{k=1}^{n} A_{ik} z_k $, and since $A_{ik}\ge0$, we have that \[ |\lambda| |z_i| = \left| \sum_{k=1}^{n} A_{ik} z_k \right| \le \sum_{k=1}^{n} A_{ik} |z_k| , \] from which the claim follows. Thus, by the definition of $f$ (i.e., since $f(z)=\min_{i}\frac{(Az)_{i}}{z_{i}}$), we have that $|\lambda| \le f(|z|)$. Hence, $|\lambda| \le f_{max}$, and thus $\rho_A \le f_{max}$ (where $\rho_A$ is the spectral radius). Conversely, from the above, i.e., since $f_{max}$ is an eigenvalue and so its absolute value is at most the spectral radius, we have that $f_{max} \le \rho_A$. Thus, $f_{max}=\rho_A$. \subsection{An extra claim to make.} \label{sxn:extra-claim} We would like to establish the following result: \[ f(z) = f_{max} \Rightarrow \left( Az = f_{max}z \mbox{ and } z > 0 \right) . \] To establish this result, observe that it was shown above that if $f(z)=f_{max}$, then $f(z)=f(Pz)$. Thus, $z$ is an eigenvector of $A$ for eigenvalue $f_{max}$. It follows that $z$ is also an eigenvector of $P$, with $Pz = \left(1+f_{max}\right)^{n} z$. Since $P$ is positive, we have that $Pz > 0$, and so $z$ is positive. \subsection{Monotonicity of spectral radius.} Here, we would like to show that $0 \le B \le A$ and $B \ne A$ implies that $\rho_B < \rho_A$. (Recall that $B$ need \emph{not} be irreducible, but $A$ is.) To do so, suppose that $Bz = \lambda z$, with $z \in \mathbb{C}^{n}$ and with $\lambda\in\mathbb{C}$. Then, \[ |\lambda| |z| \le B |z| \le A |z| , \] from which it follows that \[ |\lambda| \le f_A(|z|) \le \rho_A , \] and thus $\rho_B \le \rho_A$. Next, assume for contradiction that $|\lambda| = \rho_A$. Then, from the above, we have that $f_A(|z|) = \rho_A$. Thus, from the claim in Section~\ref{sxn:extra-claim}, it follows that $|z|$ is an eigenvector of $A$ for the eigenvalue $\rho_A$ and also that $|z|$ is positive. Hence, $B |z| = A |z|$, with $|z| > 0$; but this is impossible unless $A=B$.
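The monotonicity just proved is easy to observe numerically. Here is a minimal sketch in pure Python (the matrices and the helper name are our own illustrative choices, not anything from the notes): decreasing a single entry of an irreducible non-negative matrix strictly decreases the power-iteration estimate of the spectral radius.

```python
# Sketch: 0 <= B <= A with B != A forces rho(B) < rho(A).

def spectral_radius(M, steps=300):
    """Power-iteration estimate of rho(M); converges since M is primitive."""
    v = [1.0] * len(M)
    rho = 1.0
    for _ in range(steps):
        w = [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]
        rho = max(abs(wi) for wi in w)
        v = [wi / rho for wi in w]
    return rho

A = [[1.0, 1.0],
     [1.0, 1.0]]        # rho(A) = 2
B = [[1.0, 0.5],
     [1.0, 1.0]]        # one entry of A decreased; rho(B) = 1 + sqrt(1/2)

assert abs(spectral_radius(A) - 2.0) < 1e-9
assert spectral_radius(B) < spectral_radius(A)      # strict decrease
```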
\textbf{Remark.} Replacing the $i^{th}$ row/column of $A$ by zeros gives a non-negative matrix $A_{i}$ such that $0 \le A_{i} \le A$. Moreover, $A_{i} \ne A$, since the irreducibility of $A$ precludes the possibility that all entries in a row are equal to zero. Thus, $\rho(A_{i}) < \rho_A$; and since the eigenvalues of $A_{i}$ are those of the smaller matrix $A_{(i)}$, obtained by eliminating the $i^{th}$ row/column of $A$, together with one extra zero, all of the eigenvalues of $A_{(i)}$ have absolute value strictly less than $\rho_A$. \subsection{Algebraic/geometric multiplicities equal one.} Here, we will show that the algebraic and geometric multiplicity of $\lambda_{max}$ equal $1$. Recall that the geometric multiplicity is less than or equal to the algebraic multiplicity, and that both are at least equal to one, so it suffices to prove this for the algebraic multiplicity. Before proceeding, also define the following, given a square matrix $A$: \begin{itemize} \item Let $A_{(i)}$ be the matrix obtained by eliminating the $i^{th}$ row/column. In particular, this is a smaller matrix, with one dimension less along each column/row. \item Let $A_{i}$ be the matrix obtained by zeroing out the $i^{th}$ row/column. In particular, this is a matrix of the same size, with all the entries in one full row/column zeroed out. \end{itemize} To establish this result, here is a lemma that we will use; its proof (which we won't provide) boils down to expanding $\mbox{det}\left(\Lambda-A\right)$ along the $i^{th}$ row. \begin{lemma} Let $A$ be a square matrix, and let $\Lambda$ be a diagonal matrix of the same size with $\lambda_1,\ldots,\lambda_n$ (as variables) along the diagonal. Then, \[ \frac{\partial}{\partial \lambda_i} \mbox{det}\left( \Lambda-A \right) = \mbox{det}\left( \Lambda_{(i)} - A_{(i)} \right) , \] where the subscript $(i)$ means the matrix obtained by eliminating the $i^{th}$ row/column from each matrix.
\end{lemma} Next, set $\lambda_i = \lambda$ and apply the chain rule from calculus to get \[ \frac{d}{d\lambda} \mbox{det}\left( \lambda I - A \right) = \sum_{i=1}^{n} \mbox{det}\left( \lambda I - A_{(i)} \right) . \] Finally, note that \[ \mbox{det}\left( \lambda I - A_{i} \right) = \lambda \mbox{det} \left( \lambda I - A_{(i)} \right) . \] But by what we just proved (in the Remark above), all of the eigenvalues of $A_{(i)}$ have absolute value strictly less than $\rho_A$, and so the real polynomial $\mbox{det}\left( \lambda I - A_{(i)} \right)$ satisfies $\mbox{det}\left( \rho_A I - A_{(i)} \right) > 0 $. Thus, the derivative of the characteristic polynomial of $A$ is nonzero at $\rho_A$, and so the algebraic multiplicity equals $1$. \subsection{No other non-negative eigenvectors, etc.} Here, we will prove the claim about other non-negative vectors, including that there are no other non-negative eigenvectors. To start, we claim that: $0 \le B \le A \Rightarrow f_{max}(B) \le f_{max}(A)$. (This is related to but a little different than the similar result we had above.) To establish the claim, note that if $z \in Q$ is s.t. $sz \le Bz$, then $sz \le Az$ (since $Bz \le Az$), and so $f_B(z) \le f_A(z)$, for all $z$. We can apply that claim to $A^T$, from which it follows that $A^T$ has a positive eigenvalue, call it $\eta$. So, there exists a row vector, $w > 0$ s.t. $w^TA = \eta w^T$. Recall that $x > 0$ is an eigenvector of $A$ with maximum eigenvalue $\lambda_{max}$. Thus, \[ w^TAx = \eta w^Tx = \lambda_{max} w^Tx , \] and thus $\eta = \lambda_{max}$ (since $w^Tx > 0$). Next, suppose that $y \in Q$ and $Ay \le \mu y$. Then, \[ \lambda_{max} w^Ty = w^T A y \le \mu w^Ty , \] from which it follows that $\lambda_{max} \le \mu$. (This is since all components of $w$ are positive and some component of $y$ is positive, and so $w^Ty > 0$.) In particular, if $Ay = \mu y$, then $\mu = \lambda_{max}$. Further, if $y \in Q$ and $Ay \le \mu y$, then $\mu \ge 0$ and $y > 0$. (This is since $0 < Py = \left(I+A\right)^{n}y \le \left( 1+\mu\right)^{n}y$.)
This proves the first two parts of the result; now, let's prove the last part of the result. If $\mu = \lambda_{max}$, then $w^T (Ay - \lambda_{max}y ) = 0$. But, $Ay - \lambda_{max} y \le 0$. So, given this, from $w^T\left(Ay-\lambda_{max}y\right) = 0$, it follows that $Ay = \lambda_{max}y$. Since $y$ must be an eigenvector with eigenvalue $\lambda_{max}$, the last result (i.e., that $y$ is a scalar multiple of $x$) follows since $\lambda_{max}$ has multiplicity $1$. To establish the converse direction, march through these steps in the other direction. \subsection{Strict inequality for aperiodic matrices} Here, we would like to establish the result that the eigenvalue we have been talking about is strictly larger in magnitude than the other eigenvalues, under the aperiodicity assumption. To do so, recall that the $t^{th}$ powers of the eigenvalues of $A$ are the eigenvalues of $A^t$. So, if we want to show that there do \emph{not} exist eigenvalues of a primitive matrix with absolute value $=\rho_A$, other than $\rho_A$, then it suffices to prove this for a positive matrix $A$. Let $A$ be a positive matrix, and suppose that $Az = \lambda z$, with $z\in\mathbb{C}^{n}$, $\lambda\in\mathbb{C}$, and $|\lambda|=\rho_A$, in which case the goal is to show that $\lambda = \rho_A$. (We will do this by showing that any eigenvector with eigenvalue equal in magnitude to $\rho_A$ is collinear with the top eigenvector; i.e., we will show that such a $z$ is a multiple of $|z|$, and thus that there is no eigenvalue with absolute value $\rho_A$ other than $\rho_A$ itself.) Then, \[ \rho_A |z| = |Az| \le A |z| , \] from which it follows that \[ \rho_A \le f(|z|) \le \rho_A , \] which implies that $f(|z|) = \rho_A $. From a result above, this implies that $|z|$ is an eigenvector of $A$ with eigenvalue $\rho_A$. Moreover, $|Az| = A|z|$. In particular, \[ \left| \sum_{i=1}^{n} A_{1i} z_{i} \right| = \sum_{i=1}^{n} A_{1i} | z_i | . \] Since all of the entries of $A$ are positive, this implies that there exists a number $u\in\mathbb{C}$ (with $|u|=1$) s.t.
for all $i\in[n]$, we have that $z_i = u|z_i|$. Hence, $z$ and $|z|$ are collinear eigenvectors of $A$. So, the corresponding eigenvalues $\lambda$ and $\rho_A$ are equal, as required. \subsection{Limit for aperiodic matrices} Here, we would like to establish the limiting result. To do so, note that $A^T$ has the same spectrum (including multiplicities) as $A$; and in particular the spectral radius of $A^T$ equals $\rho_A$. Moreover, since $A^T$ is irreducible (a consequence of being primitive), we can apply the Perron-Frobenius theorem to it to get $A^Ty = \rho_A y$, i.e., $y^TA = \rho_A y^T$. Here $y$ is determined up to a scalar multiple, and so let's choose it s.t. $x^Ty = \sum_{i=1}^{n} x_iy_i = 1$. Next, observe that we can decompose the $n$-dimensional vector space $\mathbb{R}^{n}$ into two parts, \[ \mathbb{R}^{n} = R \oplus N , \] where both $R$ and $N$ are invariant under the action of $A$. To do this, define the rank-one matrix $H=xy^T$, and: \begin{itemize} \item let $R$ be the \emph{image space} of $H$; and \item let $N$ be the \emph{null space} of $H$. \end{itemize} Note that $H$ is a projection matrix (in particular, $H^2=xy^Txy^T=x(y^Tx)y^T=H$, since $x^Ty=1$), and thus $I-H$ is also a projection matrix, and the image space of $I-H$ is $N$. Also, \[ AH = Axy^T = \rho_A xy^T = x \rho_A y^T = x y^T A = HA . \] So, we have a direct sum decomposition of the space $\mathbb{R}^{n}$ into $R \oplus N$, and this decomposition is invariant under the action of $A$. Given this, observe that the restriction of $A$ to $N$ has all of its eigenvalues strictly less than $\rho_A$ in absolute value, while the restriction of $A$ to the one-dimensional space $R$ is simply a multiplication/scaling by $\rho_A$. So, if $P$ is defined to be $P = \frac{1}{\rho_A}A$, then the restriction of $P$ to $N$ has its eigenvalues $<1$ in absolute value. This decomposition is also invariant under all positive integral powers of $P$.
So, the restriction of $P^k$ to $N$ tends to zero as $k \rightarrow \infty$, while the restriction of $P$ to $R$ is the identity. So, $\lim_{t \rightarrow \infty} \left( \frac{1}{\rho_A}A \right)^{t} = H = xy^T$. \subsection{Additional discussion of periodicity/aperiodicity and cyclicity/primitiveness} Let's switch gears and discuss the periodicity/aperiodicity and cyclicity/primitiveness issues. (This is an algebraic characterization, and it holds for general non-negative matrices. I think that most people find this less intuitive than the characterization in terms of connected components, but it's worth at least knowing about it.) Start with the following definition. \begin{definition} The \emph{cyclicity} of an irreducible non-negative matrix $A$ is the g.c.d. (greatest common divisor) of the lengths of the cycles in the associated graph. \end{definition} \noindent Let $\mathbb{N}_{ij}$ be the subset of the positive integers given by \[ \mathbb{N}_{ij} = \{ t \in \mathbb{N} \mbox{ s.t. } (A^t)_{ij} > 0 \} , \] that is, the set of values $ t \in \mathbb{N} $ s.t. the $(i,j)$ entry of the matrix $A^t$ is positive (i.e., there exists a path from $i$ to $j$ of length $t$). Then, to define $\gamma$ to be the cyclicity of $A$, first define $\gamma_i = \mbox{gcd}\left( \mathbb{N}_{ii} \right)$, and then $\gamma = \mbox{gcd} \left( \{ \gamma_i \mbox{ s.t. } i \in V \} \right) $. Note that each $\mathbb{N}_{ii}$ is closed under addition, and so it is a semi-group. Here is a lemma from number theory (that we won't prove). \begin{lemma} A set $\mathbb{N}$ of positive integers that is closed under addition contains all but a finite number of multiples of its g.c.d. \end{lemma} \noindent From this it follows that $\forall i \in [n] , \gamma_i = \gamma$. The following theorem (which we state but won't prove) provides several related conditions for an irreducible matrix to be primitive. \begin{theorem} Let $A$ be an irreducible matrix. Then, the following are equivalent.
\begin{enumerate} \item The matrix $A$ is primitive. \item All of the eigenvalues of $A$ different from its spectral radius $\rho_A$ satisfy $|\lambda| < \rho_A$. \item The sequence of matrices $\left( \frac{1}{\rho_A}A \right)^{t}$ converges to a positive matrix. \item There exists an $i \in [n]$ s.t. $\gamma_i = 1$. \item The cyclicity of $A$ equals $1$. \end{enumerate} \end{theorem} For completeness, note that sometimes one comes across the following definition. \begin{definition} Let $A$ be an irreducible non-negative square matrix. The \emph{period} of $A$ is the g.c.d. of all natural numbers $m$ s.t. $\left(A^m\right)_{ii} > 0$ for some $i$. Equivalently, it is the g.c.d. of the lengths of closed directed paths of the directed graph $G_A$ associated with $A$. \end{definition} \textbf{Fact.} All of the statements of the Perron-Frobenius theorem for positive matrices remain true for irreducible aperiodic matrices. In addition, all of those statements generalize to periodic matrices. The main difference in this generalization is that for periodic matrices the ``top'' eigenvalue isn't ``top'' any more, in the sense that there are other eigenvalues with equal absolute value that are different: they equal $\rho_A$ times the $p^{th}$ roots of unity, where $p$ is the period. Here is an example of a generalization. \begin{theorem} \label{thm:pf-generalization} Let $A$ be an irreducible non-negative $n \times n$ matrix, with period equal to $h$ and spectral radius equal to $\rho_A = r$. Then, \begin{enumerate} \item $r > 0$, and it is an eigenvalue of $A$. \item $r$ is a simple eigenvalue, and both its left and right eigenspace are one-dimensional. \item $A$ has left/right eigenvectors $v$/$w$ with eigenvalue $r$, each of which has all positive entries. \item $A$ has exactly $h$ complex eigenvalues with absolute value $=r$; each is a simple root of the characteristic polynomial, and together they equal $r$ times the $h^{th}$ roots of unity, i.e., $\lambda_k = r e^{2\pi i k/h}$, for $k = 0,\ldots,h-1$.
\item If $h > 1$, then there exists a permutation matrix $P$ s.t. \begin{equation} PAP^T = \left( \begin{array}{ccccc} 0 & A_{12} & & & 0 \\ & 0 & A_{23} & & \\ & & \ddots & \ddots & \\ 0 & & & 0 & A_{h-1,h} \\ A_{h1} & 0 & & & 0 \\ \end{array} \right) . \label{eqn:block-matrix-periodic} \end{equation} \end{enumerate} \end{theorem} \subsection{Additional discussion of directedness, periodicity, etc.} Today, we have been describing Perron-Frobenius theory for non-negative matrices. There are a lot of connections with graphs, but the theory can be developed algebraically and linear-algebraically, i.e., without any mention of graphs. (We saw a hint of this with the g.c.d. definitions.) In particular, Theorem~\ref{thm:pf-generalization} is a statement about matrices, and it's fair to ask what this might say about graphs we will encounter. So, before concluding, let's look at it and in particular at Eqn.~(\ref{eqn:block-matrix-periodic}) and ask what that might say about graphs---and in particular undirected graphs---we will consider. To do so, recall that the Adjacency Matrix of an undirected graph is symmetric; and, informally, there are several different ways (up to permutations, etc.) it can ``look.'' In particular: \begin{itemize} \item It can look like this: \begin{equation} A = \left( \begin{array}{cc} A_{11} & A_{12} \\ A_{12}^{T} & A_{22} \end{array} \right) , \label{eqn:block-matrix-vanilla} \end{equation} where let's assume that all-zeros blocks are represented as $0$ and so each $A_{ij}$ is not all-zeros. This corresponds to a vanilla graph you would probably write down if you were asked to write down a graph. \item It can look like this: \begin{equation} A = \left( \begin{array}{cc} A_{11} & 0 \\ 0 & A_{22} \end{array} \right) , \label{eqn:block-matrix-disconnected} \end{equation} in which case the corresponding graph is not connected.
\item It can even look like this: \begin{equation} A = \left( \begin{array}{cc} 0 & A_{12} \\ A_{21} & 0 \end{array} \right) , \label{eqn:block-matrix-bipartite} \end{equation} which has the interpretation of having two sets of nodes, each of which has edges to only the other set, and which will correspond to a bipartite graph. \item Of course, it could be a line-like graph, which would look like a tridiagonal banded matrix, which is harder for me to draw in latex, or it can look like all sorts of other things. \item But it can\emph{not} look like this: \begin{equation} A = \left( \begin{array}{cc} A_{11} & A_{12} \\ 0 & A_{22} \end{array} \right) , \label{eqn:block-matrix-reducible1} \end{equation} and it can\emph{not} look like this: \begin{equation} A = \left( \begin{array}{cc} 0 & A_{12} \\ 0 & 0 \end{array} \right) , \label{eqn:block-matrix-reducible2} \end{equation} where recall we are assuming that each $A_{ij}$ is not all-zeros. In both of these cases, these matrices are not symmetric. \end{itemize} \noindent In light of today's results and looking forward, it's worth commenting for a moment on the relationship between Eqn.~(\ref{eqn:block-matrix-periodic}) and Eqns.~(\ref{eqn:block-matrix-vanilla}) through (\ref{eqn:block-matrix-reducible2}). Here are a few things to note. \begin{itemize} \item One might think from Eqn.~(\ref{eqn:block-matrix-periodic}) that periodicity means that the graph is directed, and so if we work with undirected graphs we can ignore it. That's true if the periodicity is $3$ or more, but note that the matrix of Eqn.~(\ref{eqn:block-matrix-bipartite}) is periodic with period equal to $2$. In particular, Eqn.~(\ref{eqn:block-matrix-bipartite}) is of the form of Eqn.~(\ref{eqn:block-matrix-periodic}) with the period $h=2$. (Its eigenvalues are real, which they need to be since the matrix is symmetric; this is consistent, since the complex ``$2^{th}$ roots of unity,'' which equal $\pm1$, are both real.)
\item You can think of Eqn.~(\ref{eqn:block-matrix-disconnected}) as a special case of Eqn.~(\ref{eqn:block-matrix-reducible1}), with the $A_{12}$ block equal to $0$, but it is not so helpful to do so, since its behavior is very different from that of an irreducible matrix with $A_{12} \ne 0$. \item For directed graphs, e.g., the graph that would correspond to Eqn.~(\ref{eqn:block-matrix-reducible1}) (or Eqn.~(\ref{eqn:block-matrix-reducible2})), there is very little spectral theory. It is of interest in practice since edges are often directed. But, most spectral graph methods for directed graphs basically come up---either explicitly or implicitly---with some sort of symmetrized version of the directed graph and then apply undirected spectral graph methods to that symmetrized graph. (Time permitting, we'll see an example of this at some point this semester.) \item You can think of Eqn.~(\ref{eqn:block-matrix-reducible1}) as corresponding to a ``bow tie'' picture (that I drew on the board and that is a popular model for the directed web graph and other directed graphs). Although this is directed, it can be made irreducible by adding a rank-one update of the form $11^T$ to the adjacency matrix, e.g., $A \rightarrow A+ \epsilon 11^T$. This has a very natural interpretation in terms of random walkers, it is the basis for a lot of so-called ``spectral ranking'' methods, and it is a very popular way to deal with directed (and undirected) graphs. In addition, for reasons we will point out later, we can get spectral methods to work in a very natural way in this particular case, even if the initial graph is undirected. \end{itemize} \section{(02/05/2015): Overview of Graph Partitioning} Reading for today.
\begin{compactitem} \item ``Survey: Graph clustering,'' in Computer Science Review, by Schaeffer \item ``Geometry, Flows, and Graph-Partitioning Algorithms,'' in CACM, by Arora, Rao, and Vazirani \end{compactitem} The problem of \emph{graph partitioning} or \emph{graph clustering} refers to a general class of problems that deals with the following task: given a graph $G=(V,E)$, group the vertices of the graph into groups or clusters or communities. (One might be interested in cases where this graph is weighted, directed, etc., but for now let's consider undirected, possibly weighted, graphs. Dealing with weighted graphs is straightforward, but extensions to directed graphs are more problematic.) The graphs might be given or constructed, and there may or may not be extra information on the nodes/edges that is available, but insofar as the black box algorithm that actually does the graph partitioning is concerned, all that is available is the information in the graph, i.e., the nodes and edges or weighted edges. Thus, the graph partitioning algorithm takes into account the node and edge properties, and thus it typically relies on some sort of ``edge counting'' metric to optimize. Typically, the goal is to group nodes in such a manner that nodes within a cluster are more similar to each other than to nodes in different clusters, \emph{e.g.}, more and/or better edges within clusters and relatively few edges between clusters. \subsection{Some general comments} Two immediate questions arise. \begin{itemize} \item A first question is to settle on an objective that captures this bicriteria. There are several ways to quantify this bicriteria, which we will describe, but each tries to cut a data graph into $2$ or more ``good'' or ``nice'' pieces. \item A second question to address is how to compute the optimal solution to that objective.
In some cases, it is ``easy,'' e.g., it is computable in low-degree polynomial time, while in other cases it is ``hard,'' e.g., it is intractable in the sense that the corresponding decision problem is NP-hard or NP-complete. \end{itemize} In the case of an intractable objective, people are often interested in computing some sort of approximate solution to the objective that has been decided upon. Alternatively, people may run a procedure without a well-defined objective stated and decided upon beforehand, and in some cases this procedure returns answers that are useful. Moreover, the procedures often bear some sort of resemblance to the steps of algorithms that solve well-defined objectives exactly. Clearly, there is potential interest in understanding the relationship between these two complementary approaches: this will help people who run procedures know what they are optimizing; this can feed back and help to develop statistically-principled and more-scalable procedures; and so on. Here, we will focus on several different methods (i.e., classes of algorithms, e.g., ``spectral graph algorithms'' as well as other classes of methods) that are very widespread in practice and that can be analyzed to prove strong bounds on the quality of the partitions found. The methods are the following. \begin{enumerate} \item Spectral-based methods. This could include either global or local methods, both of which come with some sort of Cheeger Inequality. \item Flow-based methods. These have connections with the min-cut/max-flow theorem, and they can be viewed in terms of embeddings via their LP formulation, and here too there is a local improvement version. \end{enumerate} In addition, we will also probably consider methods that combine spectral and flow in various ways. Note that most or all of the theoretically-principled methods people use have steps that boil down to one of these.
Of course, we will also make connections with methods such as local improvement heuristics that are less theoretically-principled but that are often important in practice. Before doing that, we should point out something that has been implicit in the discussion so far. That is, while computer scientists (and in particular TCS) often draw a strong distinction between problems and algorithms, researchers in other areas (in particular machine learning and data analysis as well as quantitatively-inclined people in nearly every other applied area) often do not. For the latter people, one might run some sort of procedure that solves something insofar as, e.g., it finds clusters that are useful by a downstream metric. As you can imagine, there is a proliferation of such methods. One of the questions we will address is when we can understand those procedures in terms of the above theoretically-principled methods. In many cases, we can; and that can help to understand when/why these algorithms work and when/why they don't. Also, while we will mostly focus on a particular objective (called expansion or conductance) that probably is the combinatorial objective that most closely captures the bicriteria of being well-connected intra-cluster and not well-connected inter-cluster, we will probably talk about some other related methods. For example, finding dense subgraphs, and finding so-called good-modularity partitions. Those are also of widespread interest; they will illustrate other ways that spectral methods can be used; and understanding the relationship between those objectives and expansion/conductance is important. Before proceeding, a word of caution: For a given objective quantifying how ``good'' is a partition, it is \emph{not} the case that all graphs have good partitions---but all graph partitioning algorithms (as will other algorithms) will return some answer, \emph{i.e.}, they will give you some output clustering. 
In particular, there is a class of graphs called \emph{expanders} that do not have good clusters with respect to the so-called expansion/conductance objective function. (Many real data graphs have strong expander-like properties.) In this case, i.e., when there are no good clusters, the simple answer is just to say don't do clustering. Of course, it can sometimes in practice be difficult to tell if you are in that case. (For example, with a thousand graphs and a thousand methods---that may or may not be related but that have different knobs and so are at least minorly-different---you are bound to find things that look like clusters, and controlling false discovery, etc., is tricky in general but in particular for graph-based data.) Alternatively, especially in practice, you might have a graph that has both expander-like properties and non-expander-like properties, e.g., in different parts of the graph. A toy example of this could be given by the lollipop graph. In that case, it might be good to know how algorithms behave on different classes of graphs and/or different parts of the graph. Question (raised by this): Can we certify that there are no good clusters in a graph? Or certify the nonexistence of hypothesized things more generally? We will get back to this later. Let's go back to finding an objective we want to consider. As a general rule of thumb, when most people talk about clusters or communities (for some reason, in network and especially in social graph applications clusters are often called communities---they may have a different downstream, e.g., sociological motivation, but operationally they are typically found with some sort of graph clustering algorithm) ``desirable'' or ``good'' clusters tend to have the following properties: \begin{enumerate} \item Internally (intra) - well connected with other members of the cluster.
Minimally, this means that it should be connected---but it is a challenge to guarantee this in a statistically and algorithmically meaningful manner. More generally, this might mean that it is ``morally connected''---\emph{e.g.}, that there are several paths between vertices within the cluster and that these paths should be internal to the cluster. (Note: this takes advantage of the fact that we can classify edges incident to $v \in C$ as internal (connected to other members of $C$) and external (connected to $\bar{C}$).) \item Externally (inter) - relatively poor connections between members of a cluster and members of a different cluster. For example, this might mean that there are very few edges with one endpoint in one cluster and the other endpoint in the other cluster. \end{enumerate} Note that this implies that we can classify edges, i.e., pairwise connections, incident to a vertex $v \in C$ into edges that are internal (connected to other members of $C$) and edges that are external (connected to members of $\bar{C}$). This technically is well-defined; and, informally, it makes sense, since if we are modeling the data as a graph, then we are saying that things and pairwise relationships between things are of primary importance. So, we want a relatively dense or well-connected (very informally, those two notions are similar, but they are often different when one focuses on a particular quantification of the informal notion) induced subgraph with relatively few inter-connections between pieces. Here are extreme cases to consider: \begin{itemize} \item Connected component, \emph{i.e.}, the ``entire graph,'' if the graph is connected, or one connected component if the graph is not connected. \item Clique or maximal clique, \emph{i.e.}, complete subgraph or a maximal complete subgraph, \emph{i.e.}, a subgraph to which no other vertices can be added without loss of the clique property. \end{itemize} But how do we quantify this more generally?
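To make the internal/external edge classification above concrete, here is a minimal Python sketch (the adjacency-list representation, the toy two-triangle graph, and the function name are ours, chosen for illustration):

```python
# Toy graph (made-up): two triangles {0,1,2} and {3,4,5} joined by edge (2,3).
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
C = {0, 1, 2}  # candidate cluster

def classify_edges(v, C):
    """Split the neighbors of v into endpoints of internal edges
    (inside the cluster C) and external edges (in the complement of C)."""
    internal = {u for u in adj[v] if u in C}
    external = {u for u in adj[v] if u not in C}
    return internal, external

# Node 2 sits on the boundary: two internal edges, one external edge.
print(classify_edges(2, C))  # ({0, 1}, {3})
```

A good cluster in the sense above is one where, for most $v \in C$, the internal set dominates the external set.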
\subsection{A first try with min-cuts} Here we will describe an objective that has been used to partition graphs. Although it is widely-used for certain applications, it will have certain aspects that are undesirable for many other applications. In particular, we cover it for a few reasons: first, as a starter objective before we get to a better objective; second, since the dual is related to a non-spectral way to partition graphs; and third, although it doesn't take into account the bi-criteria we have outlined, understanding it will be a basis for a lot of the stuff later. \subsubsection{Min-cuts and the Min-cut problem} We start with the following definition. \begin{definition} Let $G=(V,E)$ be a graph. A \emph{cut} $C=(S,T)$ is a partition of the vertex set $V$ of $G$. An \emph{$s$-$t$-cut} $C=(S,T)$ of $G=(V,E)$ is a cut $C$ s.t. $s \in S$ and $t \in T$, where $s,t \in V$ are pre-specified source and sink vertices/nodes. A \emph{cut set} is $\{(u,v) \in E : u \in S, v \in T \}$, i.e., the edges with one endpoint on each side of the cut. \end{definition} The above definition applies to both directed and undirected graphs. Notice that in the directed case, the cut set contains the edges from nodes in $S$ to nodes in $T$, but not those from $T$ to $S$. Given this set-up, the \emph{min-cut problem} is: find the ``smallest'' cut, \emph{i.e.}, find the cut with the ``smallest'' cut set, \emph{i.e.}, the one with the smallest boundary (or sum of edge weights, more generally). That is: \begin{definition} The \emph{capacity} of an $s$-$t$-cut is $c(S,T) = \sum_{(u,v) \in E : u \in S, v \in T} c_{uv}$. In this case, the \emph{Min-Cut Problem} is to solve \[ \min_{S \subseteq V : s \in S,\, t \in \bar{S}} c(S,\bar{S}) .
\] \end{definition} That is, the problem is to find the ``smallest'' cut, where by smallest we mean the cut with the smallest total edge capacity across it, i.e., with the smallest ``boundary.'' Things to note about this formalization: \begin{enumerate} \item Good: Solvable in low-degree polynomial time. (As we will see, $\mbox{min-cut} = \mbox{max-flow}$ is related.) \item Bad: Often get very unbalanced cut. (This is not \emph{necessarily} a problem, as maybe there are no good cuts, but for this formalization, this happens even when it is known that there are good large cuts. This objective tends to nibble off small things, even when there are bigger partitions of interest.) This is problematic for several reasons: \begin{itemize} \item \textbf{In theory.} Cut algorithms are used as a sub-routine in divide-and-conquer algorithms, and if we keep nibbling off small pieces then the recursion depth is very deep; alternatively, control over inference is often obtained by drawing strength from a bunch of data that are well-separated from other data, and so if that bunch is very small then the inference control is weak. \item \textbf{In practice.} Often, we want to ``interpret'' the clusters or partitions, and it is not nice if the sets returned are uninteresting or trivial. Alternatively, one might want to do bucket testing or something related, and when the clusters are very small, it might not be worth the time. \end{itemize} (As a forward-looking pointer, we will see that an ``improvement'' of the idea of cut and min-cut may also get very imbalanced partitions, but it does so for a more subtle/non-trivial reason. So, one could view this as a bug or as a feature, but since the reason here is somewhat trivial people typically view it as a bug associated with the choice of this particular objective in many applications.) \end{enumerate} \subsubsection{A slight detour: the Max-Flow Problem} Here is a slight detour (w.r.t.
spectral methods per se), but it is one that we will get back to, and it is related to our first try objective. Here is a seemingly-different problem called the Max-Flow problem. \begin{definition} The \emph{capacity} of an edge $e \in E$ is a mapping $c:E\rightarrow\mathbb{R}^{+}$, denoted $c_e$ or $c_{uv}$ (which will be a constraint on the maximum amount of flow we allow on that edge). \end{definition} \begin{definition} A \emph{flow} in a directed graph is a mapping $f:E\rightarrow\mathbb{R}$, denoted $f_e$ or $f_{uv}$ s.t.: \begin{itemize} \item $f_{uv} \le c_{uv}, \quad \forall (u,v) \in E$ (Capacity constraint.) \item $\sum_{v:(u,v) \in E} f_{uv} = \sum_{v:(v,u) \in E} f_{vu}, \quad\forall u \in V\backslash \{s,t\}$ (Conservation of flow, except at source and sink.) \item $f_e\geq 0 \quad \forall e\in E$ (Obey edge directions.) \end{itemize} A \emph{flow} in an undirected graph is a mapping $f:E\rightarrow\mathbb{R}$, denoted $f_e$. We arbitrarily assign directions to each edge, say $e=(u,v)$, and when we write $f_{(v,u)}$, it is just notation for $-f_{(u,v)}$: \begin{itemize} \item $|f_{uv}| \le c_{uv}, \quad \forall (u,v) \in E$ (Capacity constraint.) \item $\sum_{v:(u,v) \in E} f_{uv} = 0, \quad\forall u \in V\backslash \{s,t\}$ (Conservation of flow, except at source and sink.) \end{itemize} \end{definition} \begin{definition} The \emph{value of the flow} is $|f| = \sum_{v \in V} |f_{sv}|$, where $s$ is the source. (This is the amount of flow flowing out of $s$. It is easy to see that, as all the nodes other than $s,t$ obey the flow conservation constraint, the flow out of $s$ is the same as the flow into $t$. This is the amount of flow flowing from $s$ to $t$.) In this case, the \emph{Max-Flow Problem} is \[ \max |f| . \] \end{definition} Note: what we have just defined is really the ``single commodity flow problem'' since there exists only $1$ commodity that we are routing and thus only $1$ source/sink pair $s$ and $t$ that we are routing from/to.
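To make these definitions concrete, the Max-Flow Problem can be solved by repeatedly pushing flow along shortest augmenting paths (the Edmonds-Karp variant of Ford-Fulkerson). Here is a minimal, self-contained Python sketch; the example graph and its capacities are made up for illustration, and the returned reachable set anticipates the connection between maximum flows and minimum cuts:

```python
from collections import deque

def edmonds_karp(capacity, s, t):
    """Max s-t flow via shortest augmenting paths (Edmonds-Karp).

    capacity: dict u -> {v: c_uv} of nonnegative edge capacities.
    Returns (flow_value, S), where S is the source side of a min cut,
    i.e., the nodes still reachable from s in the final residual graph.
    """
    # Build residual capacities, including reverse edges initialized to 0.
    res = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u, nbrs in capacity.items():
        for v in nbrs:
            res.setdefault(v, {}).setdefault(u, 0)

    flow = 0
    while True:
        # BFS for an augmenting s-t path in the residual graph.
        parent = {s: None}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            # No augmenting path left: the flow is maximum, and the set
            # of nodes reachable from s gives a minimum s-t cut (S, V-S).
            return flow, set(parent)
        # Find the bottleneck capacity along the path and push flow.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= aug
            res[v][u] += aug
        flow += aug

# Made-up example; the min cut is ({s}, rest) with capacity 3 + 2 = 5.
cap = {"s": {"a": 3, "b": 2}, "a": {"b": 1, "t": 2}, "b": {"t": 3}, "t": {}}
value, S = edmonds_karp(cap, "s", "t")
print(value, S)  # 5 {'s'}
```

Note that the flow value ($5$) equals the capacity of the cut $(\{s\}, V \setminus \{s\})$, as the theorem stated next guarantees.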
(We will soon see an important generalization of this to something called \emph{multi-commodity flow}, and this will be very related to a non-spectral method for graph partitioning.) Here is an important result that we won't prove. \begin{theorem}[Max-Flow-Min-Cut Theorem] The max value of an $s$-$t$ flow is equal to the min capacity of an $s$-$t$ cut. \end{theorem} Here, we state the Max-Flow problem and the Min-Cut problem, in terms of the primal and dual optimization problems. \textsc{Primal: (Max-Flow)}: \begin{align*} \max & \quad |f| \\ \mbox{s.t.~~} & \quad f_{uv} \le c_{uv} ,\quad (uv)\in E \\ & \sum_{v:(vu)\in E} f_{vu} - \sum_{v:(uv)\in E} f_{uv} \le 0 , \quad u \in V \\ & f_{uv} \ge 0 \end{align*} \textsc{Dual: (Min-Cut)}: \begin{align*} \min & \quad \sum_{(i, j) \in E} c_{ij} d_{ij} \\ \mbox{s.t.~~} & \quad d_{ij} - p_i + p_j \ge 0, (ij)\in E \\ & \quad p_s=1, p_t=0, \\ & \quad p_i \geq 0, i \in V \\ & \quad d_{ij} \geq 0, ij \in E \end{align*} There are two ideas here that are important that we will revisit. \begin{itemize} \item Weak duality: for any instance and any feasible flows and cuts, $\mbox{max-flow} \le \mbox{min-cut}$. \item Strong duality: for any instance, $\exists$ feasible flow and feasible cut s.t. the objective functions are equal, \emph{i.e.}, s.t. $\mbox{max-flow} = \mbox{min-cut}$. \end{itemize} We are not going to go into these details here---for people who have seen it, it is just to set the context, and for people who haven't seen it, it is to give an important fyi. But we will note the following. \begin{itemize} \item Weak duality generalizes to many settings, and in particular to multi-commodity flow; but strong duality does not. The question then is: does there exist a cut s.t. equality is (at least approximately) achieved? \item We can get an approximate version of strong duality, \emph{i.e.}, an approximate Min-Cut-Max-Flow theorem in the multi-commodity case. That we can get such a bound will have numerous algorithmic implications, in particular for graph partitioning.
\item We can translate this (in particular, the all-pairs multi-commodity flow problem) into $2$-way graph partitioning problems (this should not be immediately obvious, but we will cover it later) and get nontrivial approximation guarantees. \end{itemize} About the last point: for flows/cuts we have introduced special source and sink nodes, $s$ and $t$, but when we apply it back to graph partitioning there won't be any special source/sink nodes, basically since we will relate it to the all-pairs multi-commodity flow problem, i.e., where we consider all ${n \choose 2}$ possible source-sink pairs. \subsection{Beyond simple min-cut to ``better'' quotient cut objectives} The way we described what is a ``good'' clustering above was in terms of an intra-connectivity versus inter-connectivity bi-criterion. So, let's revisit and push on that. A related thing or a different way (that gives the same result in many cases, but sometimes does not) is to say that the bi-criterion is: \begin{itemize} \item We want a good ``cut value''---not too many crossing edges---where the \emph{cut value} is $E(S, \bar{S})$ or a weighted version of that. I.e., what we just considered with min-cut. \item We want good ``balance'' properties---\emph{i.e.}, both sides of the cut should be roughly the same size---so both $S, \bar{S}$ are the same size or approximately the same size. \end{itemize} There are several ways to impose a balance condition. Some are richer or more fruitful (in theory and/or in practice) than others. Here are several. First, we can add ``hard'' or \emph{explicit} balance conditions: \begin{itemize} \item Graph bisection---find a min cut s.t. $|S| = |\bar{S}| = n/2$, \emph{i.e.}, ask for exactly $50$-$50$ balance. \item $\beta$-balanced cut---find a min cut s.t. $|S| = \beta n $, $ |\bar{S}| = (1-\beta) n$, \emph{i.e.}, give a bit of wiggle room and ask for exactly (or more generally no worse than), say, a $70$-$30$ balance.
\end{itemize} Second, there are also ``soft'' or \emph{implicit} balance conditions, where there is a penalty and separated nodes ``pay'' for edges in the cut. (Actually, these are not ``implicit'' in the way we will use the word later; here it is more like ``hoped for, and in certain intuitive cases it is true.'' And they are not quite soft, in that they can still lead to imbalance; but when they do it is for much more subtle and interesting reasons.) Most of these are usually formalized as quotient-cut-style objectives: \begin{itemize} \item Expansion: $\frac{E(S,\bar{S})}{\frac{|S|}{n}}$ or $\frac{E(S,\bar{S})}{\min\{|S|,|\bar{S}|\}}$ (define this as $h(S)$) (a.k.a. q-cut) \item Sparsity: $\frac{E(S, \bar S)}{|S| |\bar S|}$ (define this as $sp(S)$) (a.k.a. approximate-expansion) \item Conductance: $\frac{E(S, \bar S)}{\frac{\mbox{Vol}\left(S\right)}{n}}$ or $\frac{E(S,\bar{S})}{\min(\mbox{Vol}(S),\mbox{Vol}(\bar{S}))}$ (a.k.a. Normalized cut) \item Normalized-cut: $\frac{E(S,\bar{S})}{\mbox{Vol}(S)\cdot\mbox{Vol}(\bar{S})}$ (a.k.a. approximate conductance) \end{itemize} Here, $E(S,\bar{S})$ is the number of edges between $S$ and $\bar{S}$, or more generally for a weighted graph the sum of the edge weights between $S$ and $\bar{S}$, and $\mbox{Vol}(S) = \sum_{i \in S} \mbox{deg}(i)$. In addition, the denominators in all four cases correspond to different volume notions: the first two are based on the number of nodes in $S$, and the last two are based on the number of edges in $S$ (i.e., the sum of the degrees of the nodes in $S$). Before proceeding, it is worth asking what if we had taken a difference rather than a ratio, e.g., $SA-VOL$, rather than $SA/VOL$. At the high level we are discussing now, that would do the same thing---but, quantitatively, using an additive objective will generally give very different results than using a ratio objective, in particular when one is interested in fairly small clusters.
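The min-variant quotient objectives are easy to compute for a given set $S$. Here is a minimal Python sketch on a toy graph (the two-triangle graph and all names here are ours, chosen so the intended cluster is obvious):

```python
# Toy graph: two triangles {0,1,2} and {3,4,5} joined by the single edge (2,3).
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
nodes = set(range(6))
deg = {u: 0 for u in nodes}
for u, v in edges:
    deg[u] += 1
    deg[v] += 1

def cut(S):
    """E(S, S-bar): number of edges with exactly one endpoint in S."""
    return sum(1 for u, v in edges if (u in S) != (v in S))

def vol(S):
    """Vol(S): sum of the degrees of the nodes in S."""
    return sum(deg[u] for u in S)

def expansion(S):
    return cut(S) / min(len(S), len(nodes - S))

def conductance(S):
    return cut(S) / min(vol(S), vol(nodes - S))

S = {0, 1, 2}
print(expansion(S), conductance(S))  # 1/3 and 1/7
```

Here the cut $(\{0,1,2\}, \{3,4,5\})$ has one crossing edge, $|S| = 3$, and $\mbox{Vol}(S) = 7$, so $h(S) = 1/3$ and the conductance is $1/7$.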
(As an FYI, the first two, i.e., expansion and sparsity, are typically used in the theory algorithms, since they tend to highlight the essential points; while the latter two, i.e., conductance and normalized cuts, are more often used in data analysis, machine learning, and other applications, since issues of normalization are dealt with better.) Here are several things to note: \begin{itemize} \item Expansion provides a \emph{slightly} stronger bias toward being well-balanced than sparsity (i.e., between a $10$-$90$ cut and a $40$-$60$ cut, the advantage in the denominator for the more balanced $40$-$60$ cut in expansion is $4:1$, while it is $2400:900<4:1$ in sparsity) and there \emph{might} be some cases where this is important. That is, the product variants have a ``factor of $2$'' weaker preference for balance than the min variants. Similarly for normalized cuts versus conductance. \item That being said, that difference is swamped by the following. Expansion and sparsity are the ``same'' (in the following sense): \[\min_S h(S) \approx n \cdot \min_S sp(S)\] Similarly for normalized cuts versus conductance. \item Somewhat more precisely, although the expansion of any particular set isn't in general close to the sparsity of that set, the expansion problem and the sparsity problem are equivalent in the following sense:\\ It is clear that \[\text{argmin}_S \Phi'(G)=\text{argmin}_S n\Phi'(G)=\text{argmin}_S \frac{C(S,\bar{S})}{\min\{|S|,|\bar{S}|\}}\frac{n}{\max\{|S|,|\bar{S}|\}} . \] As $1<\frac{n}{\max\{|S|,|\bar{S}|\}}\leq 2$, the min partition we find by optimizing sparsity will also give, off by a multiplicative factor of $2$, the optimal expansion. As we will see, this is small compared to the $O(\log n)$ approximation from flow or the quadratic factor with Cheeger, and so is not worth worrying about from an optimization perspective. Thus, we will be mostly cavalier about going back and forth. \item Of course, the sets achieving the optimal may be very different.
An analogous thing was seen in vector spaces---the optimal may rotate by $90$ degrees, but for many things you only need that the Rayleigh quotient is approximately optimal. Here, however, the situation is worse. Asking for the certificates achieving the optimum is a more difficult thing---in the vector space case, this means making awkward ``gap'' assumptions, and in the graph case it means making strong and awkward combinatorial statements. \item Expansion $\ne$ Conductance, in general, except for regular graphs. (Similarly, Sparsity $\ne$ Normalized Cuts, in general, except for regular graphs.) The latter is in general preferable for heterogeneous graphs, \emph{i.e.}, very irregular graphs. The reason is that there are closer connections with random walks and we get tighter versions of Cheeger's inequality if we take the weights into account. \item Quotient cuts capture exactly the surface area to volume bicriteria that we wanted. (As a forward pointer, a question is the following: what does this mean if the data come from a low dimensional space versus a high dimensional space; or if the data are more or less expander-like; and what is the relationship between the original data being low or high dimensional versus the graph being expander-like or not?) \item For ``space-like'' graphs, these two bicriteria are ``synergistic,'' in that they work together; for expanders, they are ``uncoupled,'' in that the best cuts don't depend on size, as they are all bad; and for many ``real-world'' heavy-tailed informatics graphs, they are ``anti-correlated,'' in that better balance means worse cuts. \item An obvious question: are there other notions, e.g., of ``volume,'' that might be useful and that will lead to similar results we can show about this? (In some cases, the answer to this is yes: we may revisit this later.) Moreover, one might want to choose a different reweighting for statistical or robustness reasons.
\end{itemize} (We will get back to the issues raised by that second-to-last point later when we discuss ``local'' partitioning methods. We simply note that ``space-like'' graphs include, \emph{e.g.}, $\mathbb{Z}^{2}$ or random geometric graphs or ``nice'' planar graphs or graphs that ``live'' on the earth. More generally, there is a trade-off and we might get very imbalanced clusters or even disconnected clusters. For example, for the $G(n, p)$ random graph model, if $p \gtrsim \log^{2} n / n $ then we have an expander, while for extremely sparse random graphs, \emph{i.e.}, $ p \lesssim \log n / n $, then due to lack of concentration we can have deep small cuts but be expander-like at larger size scales.) \subsection{Overview of Graph Partitioning Algorithms} Here, we will briefly describe the ``lay of the land'' when it comes to graph partitioning algorithms---in the next few classes, we will go into a lot more details about these methods. There are three basic ideas you need to know for graph partitioning, in that nearly all methods can be understood in terms of some combination of these methods. \begin{itemize} \item Local Improvement (and multi-resolution). \item Spectral Methods. \item Flow-based Methods. \end{itemize} As we will see, in addition to being of interest in clustering data graphs, graph partitioning is a nice test-case since it has been very well-studied in theory and in practice and there exists a large number of very different algorithms for dealing with it, the respective strengths and weaknesses of which are well-known. \subsubsection{Local Improvement} \emph{Local improvement} methods refer to a class of methods that take an input partition and do more-or-less naive steps to get a better partition: \begin{itemize} \item $70$s: Kernighan-Lin. \item $80$s: Fiduccia-Mattheyses---FM and KL start with a partition and improve the cuts by flipping nodes back and forth. Local minima can be a big problem for these methods.
But they can be useful as a post-processing step---they can give a big difference in practice. FM is better than KL since it runs in linear time, and it is still commonly used, often in packages. \item $90$s: Chaco, Metis, etc. In particular, the METIS algorithm from Karypis and Kumar works very well in practice, especially on low dimensional graphs. \end{itemize} The methods of the $90$s used the idea of local improvement, coupled with the basically linear algebraic idea of \emph{multiresolution}, to get algorithms that are designed to work well on space-like graphs and that can perform very well in practice. The basic idea is: \begin{itemize} \item Contract edges to get a smaller graph. \item Cut the resulting graph. \item Unfold back up to the original graph. \end{itemize} Informally, the basic idea is that if there is some sort of geometry, say the graph being partitioned is the road network of the US, i.e., it lives on a two-dimensional surface, then we can ``coarse grain'' over the geometry, to get effective nodes and edges, and then partition over the coarsely-defined graph. The algorithm will, of course, work for any graph, and one of the difficulties people have when applying algorithms as a black box is that the coarse graining follows rules that can behave in funny ways when applied to a graph that doesn't have the underlying geometry. Here are several things to note: \begin{itemize} \item These methods grew out of scientific computing and parallel processing, so they tend to work on ``space-like'' graphs, where there are nice homogeneity properties---even if the matrices aren't low-rank, they might be diagonal plus low-rank off-diagonal blocks for physical reasons, or whatever. \item The idea was used previously to speed up convergence of iterative methods. \item Multiresolution allows globally coherent solutions, so it avoids some of the local minima problems.
\item $90$s: Karger showed that one can compute min-cut by randomly contracting edges, and so multiresolution may not be just changing the resolution at which one views the graph, but it may be taking advantage of this property also. \end{itemize} An important point is that local improvement (and even multiresolution methods) can easily get stuck in local optima. Thus, they are of limited interest by themselves. But they can be very useful to ``clean up'' or ``improve'' the output of other methods, e.g., spectral methods, that in a principled way lead to a good solution, but where the solution can be improved a bit by doing some sort of moderately-greedy local improvement. \subsubsection{Spectral methods} \emph{Spectral methods} refer to a class of methods that, at root, are a relaxation or rounding method derived from an NP-hard QIP and that involves eigenvector computations. In the case that we are discussing, it is the QIP formulation of the graph bisection problem that is relaxed. Here is a bit of incomplete history. \begin{itemize} \item Donath and Hoffman (ca. 72,73) introduced the idea of using the leading eigenvector of the Adjacency Matrix $A_G$ as a heuristic to find good partitions. \item Fiedler (ca. 73) associated the second smallest eigenvalue of the Laplacian $L_G$ with graph connectivity and suggested splitting the graph by the value along the associated eigenvector. \item Barnes and Hoffman (82,83) and Boppana (87) used LP, SDP, and convex programming methods to look at the leading nontrivial eigenvector. \item Cheeger (ca. 70) established connections with isoperimetric relationships on continuous manifolds, establishing what is now known as Cheeger's Inequality. \item $80$s: saw performance guarantees from Alon-Milman, Jerrum-Sinclair, etc., connecting $\lambda_2$ to expanders and rapidly mixing Markov chains.
\item 80s: saw improvements to approximate eigenvector computation, e.g., Lanczos methods, which made computing eigenvectors more practical and easier. \item 80s/90s: saw algorithms to find separators in certain classes of graphs, e.g., planar graphs, graphs of bounded degree or genus, etc. \item Early 90s: saw lots of empirical work showing that spectral partitioning works for ``real'' graphs such as those arising in scientific computing applications. \item Spielman and Teng (96) showed that ``spectral partitioning works'' on bounded degree planar graphs and well-shaped meshes, i.e., in the application where it is usually applied. \item Guattery and Miller (95, 97) showed ``spectral partitioning doesn't work'' on certain classes of graphs, e.g., the cockroach graph, in the sense that there are graphs for which the quadratic factor is achieved. That particular result holds for vanilla spectral, but similar constructions hold for non-vanilla spectral partitioning methods. \item Leighton and Rao (87, 98) established a bound on the duality gap for multi-commodity flow problems, and used multi-commodity flow methods to get an $O(\log n)$ approximation to the graph partitioning problem. \item LLR (95) considered the geometry of graphs and algorithmic applications, and interpreted LR as embedding $G$ in a metric space, making the connection with the $O(\log n)$ approximation guarantee via Bourgain's embedding lemma. \item 90s: saw lots of work in TCS on LP/SDP relaxations of IPs and randomized rounding to get $\{\pm 1\}$ solutions from fractional solutions. \item Chung (97) focused on the normalized Laplacian for degree-irregular graphs and the associated metric of conductance. \item Shi and Malik (99) used normalized cuts for computer vision applications, which is essentially a version of conductance.
\item Early 00s: saw lots of work in ML inventing and reinventing and reinterpreting spectral partitioning methods, including relating it to other problems like semi-supervised learning and prediction (with, e.g., boundaries between classes being given by low-density regions). \item Early 00s: saw lots of work in ML on manifold learning, etc., where one constructs a graph and recovers a hypothesized manifold; constructs graphs for semi-supervised learning applications; and where the diffusion/resistance coordinates are better or more useful/robust than geodesic distances. \item ARV (05) got an SDP-based embedding to get an $O(\sqrt{\log n})$ approximation, which combined ideas from spectral and flow; and there was related follow-up work. \item 00s: saw local/locally-biased spectral methods and flow-based ``improve'' methods. \item 00s: saw lots of spectral-like methods like viral diffusions with social/complex networks. \end{itemize} For the moment and for simplicity, say that we are working with unweighted graphs. The graph partitioning QIP is: \begin{align*} \min & \quad x^TLx \\ \mbox{s.t.~~} & \quad x^T1=0 \\ & x_i\in\{-1,+1\} \end{align*} and the spectral relaxation is: \begin{align*} \min & \quad x^TLx \\ \mbox{s.t.~~} & \quad x^T1=0 \\ & x_i\in\mathbb{R},\quad x^Tx=n \end{align*} That is, we relax $x$ from being in $\{-1,1\}$, which is a discrete/combinatorial constraint, to being a real continuous number that is $1$ ``on average.'' (One could relax in other ways---\emph{e.g.}, we could relax to say that its magnitude is equal to $1$, but that it sits on a higher-dimensional sphere. We will see an example of this later. Or other things, like relaxing to a metric.) This spectral relaxation is not obviously a nice problem, e.g., it is not even convex; but it can be shown that the solution to this relaxation can be computed as the second smallest eigenvector of $L$, the Fiedler vector, so we can use an eigensolver to get the eigenvector.
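As a concrete illustration, the Fiedler vector of a small graph can be computed directly with numpy, and a sign-based (hyperplane-at-zero) split then recovers a bisection. The toy two-triangle graph below is ours, chosen so the intended partition is obvious:

```python
import numpy as np

# Toy graph: two triangles {0,1,2} and {3,4,5} joined by edge (2, 3).
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
n = 6
A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1.0
L = np.diag(A.sum(axis=1)) - A  # graph Laplacian L = D - A

# eigh returns eigenvalues in ascending order for a symmetric matrix;
# the first eigenvalue is 0 (constant vector), the second is lambda_2.
vals, vecs = np.linalg.eigh(L)
fiedler = vecs[:, 1]  # eigenvector of the second-smallest eigenvalue

# Hyperplane rounding at 0: split by the sign of the Fiedler vector.
S = {i for i in range(n) if fiedler[i] < 0}
print(sorted(S))  # one of the two triangles: [0, 1, 2] or [3, 4, 5]
```

(Which triangle lands on which side depends on the arbitrary sign of the eigenvector; either way the sign split recovers the intended bisection, cutting only the bridge edge.)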
Given that vector, we then have to perform a \emph{rounding} to get an actual cut. That is, we need to take the real-valued/fractional solution obtained from the continuous relaxation and round it back to $\{-1,+1\}$. There are different ways to do that. So, here is the basic spectral partitioning method. \begin{itemize} \item Compute an eigenvector of the above program. \item Cut according to some rule, \emph{e.g.}, do a hyperplane rounding, or perform some other more complex rounding rule. \item Post-process with a local improvements method. \end{itemize} The hyperplane rounding is the easiest to analyze, and we will do it here, but not surprisingly factors of $2$ can matter in practice; and so---\emph{when spectral is an appropriate thing to do}---other rounding rules often do better in practice. (But that is a ``tweak'' on the larger question of spectral versus flow approximations.) In particular, we can do local improvements here to make the output slightly better in practice. Also, there is the issue of what exactly is a rounding, e.g., if one performs a sophisticated flow-based rounding then one may obtain a better objective function but a worse cut value. Hyperplane rounding involves: \begin{itemize} \item Choose a split point $\hat{x}$ along the vector $x$ \item Partition nodes into $2$ sets: $\{x_i<\hat{x}\}$ and $\{x_i > \hat{x} \}$ \end{itemize} By \emph{Vanilla spectral}, we refer to spectral with hyperplane rounding of the Fiedler vector embedding. Given this setup of spectral-based partitioning, what can go ``wrong'' with this approach? \begin{itemize} \item We can choose the wrong direction for the cut: \begin{itemize} \item Example---Guattery and Miller construct an example that is ``quadratically bad'' by taking advantage of the confusion that spectral has between ``long paths'' and ``deep cuts.'' \item Random walk interpretation---long paths can also cause slow mixing since the expected progress of a $t$-step random walk is $O(\sqrt{t})$.
\end{itemize} \item The hyperplane rounding can hide good cuts: \begin{itemize} \item In practice, it is often better to post-process with FM to improve the solution, especially if one wants good cuts, i.e., cuts with good objective function value. \end{itemize} \end{itemize} An important point to emphasize is that, although both of these examples of ``wrong'' mean that the task one is trying to accomplish might not work, i.e., one might not find the best partition, sometimes that is not all bad. For example, the fact that spectral methods ``strip off'' long stringy pieces might be ok if, e.g., one obtains partitions that are ``nice'' in other ways. That is, the direction chosen by spectral partitioning might be nice or regularized relative to the optimal direction. We will see examples of this, and in fact it is often for this reason that spectral performs well in practice. Similarly, the rounding step can also potentially give an implicit regularization, compared to more sophisticated rounding methods, and we will return to discuss this. \subsubsection{Flow-based methods} There is another class of methods that uses very different ideas to partition graphs. Although this will not be our main focus, since they are not spectral methods, we will spend a few classes on it, and it will be good to know about them since in many ways they provide a strong contrast with spectral methods. This class of flow-based methods uses the ``all pairs'' multicommodity flow procedure to reveal bottlenecks in the graph. Intuitively, flow should be ``perpendicular'' to the cut (i.e., in the sense of complementary slackness for LPs, and the general relationship between primal/dual variables and dual/primal constraints). The idea is to route a large number of commodities \emph{simultaneously} between random pairs of nodes and then choose the cut with the most edges congested---the idea being that a bottleneck in the flow computation corresponds to a good~cut.
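Before developing the flow approach in earnest, the vanilla spectral pipeline described above (Fiedler vector plus hyperplane rounding) can be sketched end-to-end. This is our own minimal illustration, assuming numpy; the function names and the toy graph are ours, and instead of a single fixed split point we sweep over all thresholds and keep the prefix with the best edge expansion, a common variant of hyperplane rounding:

```python
import numpy as np

def fiedler_vector(A):
    """Second-smallest eigenvector of the Laplacian L = D - A."""
    L = np.diag(A.sum(axis=1)) - A
    vals, vecs = np.linalg.eigh(L)   # eigenvalues in ascending order
    return vecs[:, 1]

def sweep_cut(A, x):
    """Hyperplane (sweep) rounding: try every split point along x and keep
    the prefix set S with the smallest edge expansion E(S, S-bar)/(d |S|).
    Assumes A is the adjacency matrix of an unweighted d-regular graph."""
    n = A.shape[0]
    d = int(A[0].sum())
    order = np.argsort(x)
    best_set, best_phi = None, np.inf
    for k in range(1, n):            # sweep over prefixes, 1 <= |S| <= n-1
        mask = np.zeros(n, dtype=bool)
        mask[order[:k]] = True
        cut = A[mask][:, ~mask].sum()        # edges crossing (S, S-bar)
        phi = cut / (d * min(k, n - k))      # expansion of the smaller side
        if phi < best_phi:
            best_phi, best_set = phi, set(order[:k].tolist())
    return best_set, best_phi

# Toy input (ours): the 8-cycle, a 2-regular graph whose sparsest cuts are arcs.
n = 8
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0

S, phi = sweep_cut(A, fiedler_vector(A))
print(len(S), phi)   # a half-cycle: 4 nodes with expansion 2/(2*4) = 0.25
```

In practice this output would then be post-processed with a local-improvement heuristic such as FM, as mentioned above.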
Recall that the single commodity max-flow-min-cut procedure has zero duality gap, but that is not the case for the multi-commodity problem. On the other hand, the $k$-commodity problem has an $O(\log k)$ duality gap---this result is due to LR and LLR, and it says that there is an \emph{approximate} min-flow-max-cut. Also, it implies an $O(\log n)$ gap for the all-pairs problem. The following is an important point to note. \begin{claim} The $O(\log n)$ gap is tight on expanders. \end{claim} For flow, there are connections to embedding and linear programming, so as we will see, we can think of the algorithm as being: \begin{itemize} \item Relax flow to LP, and solve the LP. \item Embed solution in the $\ell_1$ metric space. \item Round solution to $\{0,1\}$. \end{itemize} \subsection{Advanced material and general comments} We will conclude with a brief discussion of these results in a broader context. Some of these issues we may return to later. \subsubsection{Extensions of the basic spectral/flow ideas} Given the basic setup of spectral and flow methods, both of which come with strong theory, here are some extensions of the basic ideas. \begin{itemize} \item Huge graphs. Here we want to do computations depending on the size of the sets and not the size of the graph, \emph{i.e.}, we don't even want to touch all the nodes in the graph, and we want to return a cut that is near an input seed set of nodes. This includes ``local'' spectral methods that take advantage of diffusion to approximate eigenvectors and get Cheeger-like guarantees. \item Improvement Methods. Here we want to ``improve'' an input partition---there are both spectral and flow versions. \item Combining Spectral and Flow. \begin{itemize} \item ARV solves an SDP, which takes time like $O(n^{4.5})$ or so; but we can do it faster (\emph{e.g.}, on graphs with $\approx 10^{5}$ nodes) using ideas related to approximate multiplicative weights.
\item There are strong connections here to online learning---roughly since we can view ``worst case'' analysis as a ``game'' between a cut player and a matching player. \item Similarly, there are strong connections to boosting, which suggests that these combinations might have interesting statistical properties. \end{itemize} \end{itemize} A final word to reemphasize: at least as important for what we will be doing as understanding when these methods work is understanding when these methods ``fail''---that is, when they achieve their worst case quality-of-approximation guarantees: \begin{itemize} \item Spectral methods ``fail'' on graphs with ``long stringy'' pieces, like that constructed by Guattery and Miller. \item Flow-based methods ``fail'' on expander graphs (and, more generally, on graphs where most of the $\binom{n}{2}$ pairs of nodes are $\Theta(\log n)$ apart). \end{itemize} Importantly, a lot of real data have ``stringy'' pieces, as well as expander-like parts; and so it is not hard to see artifacts of spectral and flow-based approximation algorithms when they are run on real data. \subsubsection{Additional comments on these methods} Here are some other comments on spectral versus flow. \begin{itemize} \item The SVD gives good ``global'' but not good ``local'' guarantees. For example, it provides global reconstruction error, and going to the low-dimensional space might help to speed up all sorts of algorithms; but any pair of distances might be changed a lot in the low-dimensional space, since the distance constraints are only satisfied on average. This should be contrasted with flow-based embedding methods and all sorts of other embedding methods that are used in TCS and related areas, where one obtains very strong local or pairwise guarantees. There are two important (but not immediately obvious) consequences of this.
\begin{itemize} \item The lack of local guarantees makes it hard to exploit these embeddings algorithmically (in worst-case), whereas the pair-wise guarantees provided by other types of embeddings mean that you can get worst-case bounds and show that the solution to the subproblem approximates in worst case the solution to the original problem. \item That being said, the global guarantee means that one obtains results that are more robust to noise and not very sensitive to a few ``bad'' distances, which explains why spectral methods are more popular in many machine learning and data analysis applications. \item That local guarantees hold for all pair-wise interactions (which is what yields worst-case bounds for non-spectral embeddings) essentially means that we are ``overfitting'' or ``most sensitive to'' data points that are most far apart. This is counter to a common design principle, e.g., exploited by Gaussian rbf kernels and other NN methods, that the most reliable information in the data is given by nearby points rather than far away points. \end{itemize} \end{itemize} \subsection{References} \begin{enumerate} \item S. E. Schaeffer, ``Graph clustering,'' Computer Science Review, 1(1):27--64, 2007. \item B. W. Kernighan and S. Lin, ``An efficient heuristic procedure for partitioning graphs,'' Bell System Technical Journal, 49(2):291--307, 1970. \item C. M. Fiduccia and R. M. Mattheyses, ``A linear-time heuristic for improving network partitions,'' Proceedings of the 19th Design Automation Conference, 1982. \item G. Karypis and V. Kumar, ``A fast and high quality multilevel scheme for partitioning irregular graphs,'' SIAM Journal on Scientific Computing, 20(1):359--392, 1999. \end{enumerate} \section{% (02/10/2015): Spectral Methods for Partitioning Graphs (1 of 2): Introduction to spectral partitioning and Cheeger's Inequality} Reading for today.
\begin{compactitem} \item ``Lecture Notes on Expansion, Sparsest Cut, and Spectral Graph Theory,'' by Trevisan \end{compactitem} Today and next time, we will cover what is known as \emph{spectral graph partitioning}, and in particular we will discuss and prove Cheeger's Inequality. This result is central to all of spectral graph theory as well as a wide range of other related spectral graph methods. (For example, the isoperimetric ``capacity control'' that it provides underlies a lot of classification, etc. methods in machine learning that are not explicitly formulated as partitioning problems.) Cheeger's Inequality relates the quality of the cluster found with spectral graph partitioning to the best possible (but intractable to compute) cluster, formulated in terms of the combinatorial objectives of expansion/conductance. Before describing it, we will cover a few things to relate what we have done in the last few classes with how similar results are sometimes presented elsewhere. \subsection{Other ways to define the Laplacian} Recall that $L=D-A$ is the graph Laplacian, or we could work with the normalized Laplacian, in which case $L=I-D^{-1/2}AD^{-1/2}$. While these definitions might not make it obvious, the Laplacian actually has several very intuitive properties (that could alternatively be used as definitions). Here, we go over two of these. \subsubsection{As a sum of simpler Laplacians} Again, let's consider $d$-regular graphs. (Much of the theory is easier for this case, and expanders are more extremal in this case; but the theory goes through to degree-heterogeneous graphs, and this will be more natural in many applications, and so we will get back to this later.) Recall the definition of the Adjacency Matrix of an unweighted graph $G=(V,E)$: \[ A_{ij} = \left\{ \begin{array}{l l} 1 & \quad \text{if $(ij)\in E$}\\ 0 & \quad \text{otherwise} \end{array} \right.
, \] In this case, we can define the Laplacian as $L=D-A$ or the normalized Laplacian as $L=I-\frac{1}{d}A$. Here is an alternate definition for the Laplacian $L=D-A$. Let $G_{12}$ be a graph on two vertices with one edge between those two vertices, and define \[ L_{G_{12}} = \left( \begin{array}{cc} 1 & -1 \\ -1 & 1 \end{array} \right) \] Then, given a graph on $n$ vertices with just one edge between vertex $u$ and $v$, we can define $L$ to be the all-zeros matrix, except for the intersection between the $u^{th}$ and $v^{th}$ column and row, where we define that intersection to be $$ L_{G_{uv}} = \left( \begin{array}{cc} 1 & -1 \\ -1 & 1 \end{array} \right) .$$ Then, for a general graph $G=(V,E)$, we can define \[ L_G = \sum_{(u,v) \in E} L_{G_{uv}} . \] This provides a simpler way to think about the Laplacian and in particular changes in the Laplacian, e.g., when one adds or removes edges. In addition, note also that this generalizes in a natural way to $ \sum_{(u,v) \in E} w_{uv}L_{G_{uv}}$ if the graph $G=(V,E,W)$ is weighted. \textbf{Fact.} This is identical to the definition $L=D-A$. It is simple to prove this: each $L_{G_{uv}}$ contributes $1$ to the diagonal entries $(u,u)$ and $(v,v)$ and $-1$ to the off-diagonal entries $(u,v)$ and $(v,u)$, so summing over all edges puts the degrees on the diagonal and $-A$ off the diagonal. From this characterization, several things follow easily. For example, \[ x^TLx = \sum_{(u,v) \in E} w_{uv} \left(x_u-x_v\right)^2 , \] from which it follows that if $v$ is an eigenvector of $L$ with eigenvalue $\lambda$, then $\lambda v^Tv = v^TLv \ge 0$. This means that every eigenvalue is nonnegative, i.e., $L$ is SPSD. \subsubsection{In terms of discrete derivatives} Here are some notes that I didn't cover in class that relate the Laplacian matrix to a discrete notion of a derivative. In classical vector analysis, the Laplace operator is a differential operator given by the divergence of the gradient of a function in Euclidean space.
It is denoted: \[ \nabla\cdot\nabla \mbox{ or } \nabla^2 \mbox{ or } \triangle \] In the Cartesian coordinate system, it takes the form: \[ \nabla = \left( \frac{\partial}{\partial x_1},\cdots,\frac{\partial}{\partial x_n} \right) , \] and so \[ \triangle f = \sum_{i=1}^{n} \frac{\partial^2 f}{\partial x_i^2} . \] This expression arises in the analysis of differential equations of many physical phenomena, e.g., electromagnetic/gravitational potentials, diffusion equations for heat/fluid flow, wave propagation, quantum mechanics, etc. The \emph{discrete Laplacian} is defined in an analogous manner. To do so, somewhat more pedantically, let's introduce a discrete analogue of the gradient and divergence operators in graphs. Given an undirected graph $G=(V,E)$ (which for simplicity we take as unweighted), fix an \emph{arbitrary} orientation of the edges. Then, let $K\in\mathbb{R}^{V \times E}$ be the edge-incidence matrix of $G$, defined as \[ K_{ue} = \left\{ \begin{array}{l l} +1 & \quad \text{if edge $e$ exits vertex $u$}\\ -1 & \quad \text{if edge $e$ enters vertex $u$}\\ 0 & \quad \text{otherwise} \end{array} \right. . \] Then, \begin{itemize} \item define the \emph{gradient} as follows: let $f:V\rightarrow\mathbb{R}$ be a function on vertices, viewed as a row vector indexed by $V$; then $K$ maps $f \rightarrow fK$, a vector indexed by $E$ that measures the change of $f$ along edges of the graph; and if $e$ is an edge from $u$ to $v$, then $\left(fK\right)_{e} = f_u - f_v$. \item define the \emph{divergence} as follows: let $g:E\rightarrow\mathbb{R}$ be a function on edges, viewed as a column vector indexed by $E$; then $K$ maps $g \rightarrow Kg$, a vector indexed by $V$; if we think of $g$ as describing flow, then its divergence at a vertex $v$ is the net outbound flow: $\left(Kg\right)_{v} = \sum_{e \mbox{ exits } v} g_e - \sum_{e \mbox{ enters } v} g_e $ \item define the \emph{Laplacian} as follows: it should map $f$ to $KK^Tf$, where $f:V\rightarrow\mathbb{R}$.
So, $L = L_G = KK^T$ is the discrete Laplacian. \end{itemize} Note that it is easy to show that \[ L_{uv} = \left\{ \begin{array}{l l} -1 & \quad \text{if $(u,v) \in E$} \\ \mbox{deg}(u) & \quad \text{if $u=v$} \\ 0 & \quad \text{otherwise} \end{array} \right. , \] which is in agreement with the previous definition. Note also that \[ fLf^T = fKK^Tf^T = \|fK\|_2^2 = \sum_{(u,v)\in E} \left( f_u-f_v \right)^2 , \] which we will later interpret as a smoothness condition for functions on the vertices of the graph. \subsection{Characterizing graph connectivity} Here, we provide a characterization in terms of eigenvalues of the Laplacian of whether or not a graph is connected. Cheeger's Inequality may be viewed as a ``soft'' version of this result. \subsubsection{A Perron-Frobenius style result for the Laplacian} What does the Laplacian tell us about the graph? A lot of things. Here is a start. This is a Perron-Frobenius style result for the Laplacian. \begin{theorem} \label{thm:perrof-frob-lapl} Let $G$ be a $d$-regular undirected graph, let $L=I-\frac{1}{d}A$ be the normalized Laplacian; and let $\lambda_1 \le \lambda_2 \le \cdots \le \lambda_n$ be the real eigenvalues, including multiplicity. Then: \begin{enumerate} \item $\lambda_1 = 0$, and the associated eigenvector $x_{1} = \frac{\vec{1}}{\sqrt{n}} = \left( \frac{1}{\sqrt{n}}, \ldots, \frac{1}{\sqrt{n}} \right)$. \item $\lambda_n \le 2$. \item $\lambda_k = 0$ iff $G$ has at least $k$ connected components. (In particular, $\lambda_2>0$ iff $G$ is connected.) \item $\lambda_n = 2$ iff at least one connected component is bipartite. \end{enumerate} \end{theorem} \begin{Proof} Note that if $x\in\mathbb{R}^{n}$, then $x^TLx= \frac{1}{d}\sum_{(u,v)\in E} \left( x_u - x_v \right)^{2}$ and also \[ \lambda_1 = \min_{x\in\mathbb{R}^{n}\diagdown\{0\}} \frac{x^TLx}{x^Tx} \ge 0 .
\] Take $\vec{1} = \left(1,\ldots,1\right)$, in which case $\vec{1}^TL\vec{1}=0$, and so $0$ is the smallest eigenvalue, and $\vec{1}$ is one of the eigenvectors in the eigenspace of this eigenvalue. This proves part $1$. We also have the following formulation of $\lambda_k$ by Courant-Fischer: \[ \lambda_k = \min_{\substack{S\subseteq \mathbb{R}^n \\ dim(S)=k}}\max_{x\in S\diagdown\{\vec{0}\}}\frac{x^TLx}{x^Tx} = \min_{\substack{S\subseteq \mathbb{R}^n \\ dim(S)=k}}\max_{x\in S\diagdown\{\vec{0}\}} \frac{\sum_{(u,v)\in E} \left(x_u-x_v\right)^{2}}{d \sum_u x_u^2} \] So, if $\lambda_k = 0$, then $\exists$ a $k$-dimensional subspace $S$ such that $\forall x \in S$, we have $\sum_{(u,v)\in E} \left(x_u-x_v\right)^{2}=0$. But this means that $\forall x\in S$, we have $x_u=x_v \quad \forall$ edges $(u,v)\in E$ with positive weight, and so $x_u = x_v$, for any $u,v$ in the same connected component. This means that $x \in S$ is constant within each connected component of $G$. So, $k=\mbox{dim}(S) \le \Xi$, where $\Xi$ is the number of connected components. Conversely, if $G$ has $\ge k$ connected components, then we can let $S$ be the space of vectors that are constant on each component; and this $S$ has dimension $\ge k$. Furthermore, $\forall x \in S$, we have that $\sum_{(u,v) \in E} \left( x_u-x_v \right)^{2} = 0$. Thus $\max_{x\in S_k\diagdown\{\vec{0}\}}\frac{x^TLx}{x^Tx}=0$ for any dimension $k$ subspace $S_k$ of the $S$ we choose. Then it is clear from Courant-Fischer that $\lambda_k=0$, as any such $S_k$ provides an upper bound. Finally, to study $\lambda_n = 2$, note that \[ \lambda_n = \max_{x\in\mathbb{R}^{n}\diagdown\{\vec{0}\}} \frac{x^TLx}{x^Tx} \] This follows by using the variational characterization of the eigenvalues of $-L$ and noting that $-\lambda_n$ is the smallest eigenvalue of $-L$. Then, observe that $\forall x \in \mathbb{R}^{n}$, we have that \[ 2 - \frac{x^TLx}{x^Tx} = \frac{\sum_{(u,v) \in E} \left( x_u + x_v \right)^{2}}{d\sum_u x_u^2}\geq 0 , \] from which it follows that $\lambda_n \le 2$ (also $\lambda_k\le 2$ for all $k=2,\ldots,n$).
In addition, if $\lambda_n = 2$, then $\exists x\ne 0$ s.t. $\sum_{(u,v)\in E} \left(x_u + x_v \right)^{2} = 0$. This means that $x_u = -x_v$, for all edges $(u,v)\in E$. Let $v$ be a vertex s.t. $x_v = a \ne 0$. Define sets \begin{eqnarray*} A &=& \{ v: x_v = a \} \\ B &=& \{ v: x_v = -a \} \\ R &=& \{ v: x_v \ne \pm a \} . \end{eqnarray*} Then, the set $A \cup B$ is disconnected from the rest of the graph, since otherwise an edge with an endpoint in $A \cup B$ and the other endpoint in $R$ would give a positive contribution to $\sum_{ij} A_{ij} \left( x_i + x_j \right)^{2} $. Also, every edge incident on $A$ has other endpoint in $B$, and vice versa. So $A \cup B$ is a bipartite connected component (or a collection of connected components) of $G$, with bipartition $A,B$. \end{Proof} (Here is an aside. That proof was from Trevisan; Spielman has a somewhat easier proof, but it is only for two components. I need to decide how much I want to emphasize the possibility of using $k$ eigenvectors for soft partitioning---I'm leaning toward it, since several students asked about it---and if I do I should probably go with the version of here that mentions $k$ components.) As an FYI, here is Spielman's proof of $\lambda_2=0$ iff $G$ is disconnected; or, equivalently, that \[ \lambda_2 >0 \Leftrightarrow G \mbox{ is connected} . \] Start with proving the first direction: if $G$ is disconnected, then $\lambda_2 = 0$. If $G$ is disconnected, then $G$ is the union of (at least) $2$ graphs, call them $G_1$ and $G_2$. Then, we can renumber the vertices so that we can write the Laplacian of $G$ as \[ L_G = \left( \begin{array}{cc} L_{G_1} & 0 \\ 0 & L_{G_2} \end{array} \right) \] So, $L_G$ has at least $2$ orthogonal eigenvectors with eigenvalue $0$, i.e., $\left( \begin{array}{c} 1 \\ 0 \end{array} \right)$ and $\left( \begin{array}{c} 0 \\ 1 \end{array} \right)$, where the two vectors are given with the same renumbering as in the Laplacians.
Conversely, if $G$ is connected and $x$ is an eigenvector such that $L_Gx = 0 x$, then, $L_G x = 0$, and $x^TL_Gx = \sum_{(ij)\in E} \left(x_i - x_j\right)^{2} = 0$. So, for all $(u,v)$ connected by an edge, we have that $x_u = x_v$. Applying this iteratively, it follows that $x$ is a constant vector, i.e., $x_u = x_v$, for all $u,v$. So, the eigenspace of eigenvalue $0$ has dimension $1$. This is the end of the aside.) \subsubsection{Relationship with previous Perron-Frobenius results} Theorem~\ref{thm:perrof-frob-lapl} is an important result, and it has several important extensions and variations. In particular, the ``$\lambda_2>0$ iff $G$ is connected'' result is a ``hard'' connectivity statement. We will be interested in how this result can be extended to a ``soft'' connectivity, e.g., ``$\lambda_2$ is far from $0$ iff the graph is well-connected,'' and the associated Cheeger Inequality. That will come soon enough. First, however, we will describe how this result relates to the previous things we discussed in the last several weeks, e.g., to the Perron-Frobenius result which was formulated in terms of non-negative matrices. To do so, here is a similar result, formulated slightly differently. \begin{lemma} \label{thm:perrof-frob-adj} Let $A_G$ be the Adjacency Matrix of a $d$-regular graph, and recall that it has $n$ real eigenvalues $\alpha_1 \ge \cdots \ge \alpha_n$ and $n$ associated orthogonal eigenvectors $v_i$ s.t. $A v_i = \alpha_i v_i$. Then, \begin{itemize} \item $\alpha_1 = d$, with $v_{1} = \frac{\vec{1}}{\sqrt{n}} = \left( \frac{1}{\sqrt{n}}, \ldots, \frac{1}{\sqrt{n}} \right) $. \item $\alpha_n \ge -d$. \item The graph is connected iff $\alpha_1 > \alpha_2$. \item The graph is bipartite iff $\alpha_1 = -\alpha_n$, i.e., if $\alpha_n = -d$. \end{itemize} \end{lemma} Lemma~\ref{thm:perrof-frob-adj} has two changes, relative to Theorem~\ref{thm:perrof-frob-lapl}.
\begin{itemize} \item The first is that it is a statement about the Adjacency Matrix, rather than the Laplacian. \item The second is that it is stated in terms of a ``scale,'' i.e., the eigenvalues depend on $d$. \end{itemize} When we are dealing with degree-regular graphs, then $A$ is trivially related to $L=D-A$ (we will see this below) and also trivially related to $L=I-\frac{1}{d}A$ (since this just rescales the previous $L$ by $1/d$). We could have removed the scale from Lemma~\ref{thm:perrof-frob-adj} by multiplying the Adjacency Matrix by $1/d$ (in which case, e.g., the eigenvalues would be in $[-1,1]$, rather than $[-d,d]$), but it is more common to remove the scale from the Laplacian. Indeed, if we had worked with $L=D-A$, then we would have had the scale there too; we will see that below. (When we are dealing with degree-heterogeneous graphs, the situation is more complicated. The reason is basically that the eigenvectors of the Adjacency matrix and unnormalized Laplacian don't have to be related to the diagonal degree matrix $D$, which defines the weighted norm that relates the normalized and unnormalized Laplacian. In the degree-heterogeneous case, working with the normalized Laplacian will be more natural due to connections with random walks. That can be interpreted as working with an unnormalized Laplacian, with an appropriate degree-weighted norm, but then the trivial connection with the eigen-information of the Adjacency matrix is lost. We will revisit this below too.) In the above, $A\in\mathbb{R}^{n \times n}$ is the Adjacency Matrix of an undirected graph $G=(V,E)$. This will provide the most direct connection with the Perron-Frobenius results we talked about last week. Here are a few questions about the Adjacency Matrix. \begin{itemize} \item Question: Is it symmetric? Answer: Yes, so there are real eigenvalues and a full set of orthonormal eigenvectors. \item Question: Is it positive? Answer: No, unless it is a complete graph.
In the weighted case, it could be positive, if there were all the edges but they had different weights; but in general it is not positive, since some edges might be missing. \item Question: Is it nonnegative? Answer: Yes. \item Question: Is it irreducible? Answer: If no, i.e., if it is reducible, then \[ A = \left( \begin{array}{cc} A_{11} & A_{12} \\ 0 & A_{22} \end{array} \right) \] must also have $A_{12}=0$ by symmetry, meaning that the graph is disconnected, in which case we should think of it as two graphs. So, if the graph is connected then it is irreducible. \item Question: Is it aperiodic? Answer: If no, then, since it is symmetric, it must look like \[ A = \left( \begin{array}{cc} 0 & A_{12} \\ A_{12}^{T} & 0 \end{array} \right) , \] meaning that it has period equal to $2$, and so the ``second'' large eigenvalue, i.e., the one on the complex circle equal to a root of unity, is real and equal to $-1$. \end{itemize} How do we know that the trivial eigenvector is uniform? Well, we know that there is only one all-positive eigenvector. Let's try the all-ones vector $\vec{1}$. In this case, we get \[ A \vec{1} = d \vec{1} , \] which means that $\alpha_1 = d$ and $v_{1} = \frac{\vec{1}}{\sqrt{n}} = \left( \frac{1}{\sqrt{n}}, \ldots, \frac{1}{\sqrt{n}} \right) $. So, the graph is connected if $\alpha_1 > \alpha_2$, and the graph is bipartite if $\alpha_1 = -\alpha_n$. For the Laplacian $L=D-A$, there exists a close relationship between the spectrum of $A$ and $L$. (Recall, we are still considering the $d$-regular case.) To see this, let $d = \alpha_1 \ge \cdots \ge \alpha_n$ be the eigenvalues of $A$ with associated orthonormal eigenvectors $v_1,\ldots,v_n$. (We know they are orthonormal, since $A$ is symmetric.) In addition, let $ 0 \le \lambda_1 \le \cdots \le \lambda_n$ be the eigenvalues of $L$. (We know they are all real and in fact all nonnegative from the above alternative definition.)
Then, \[ \alpha_i = d-\lambda_i \] and \[ A_G v_i = \left(dI-L_G\right) v_i = \left(d-\lambda_i\right)v_i . \] So, $L$ ``inherits'' eigen-stuff from $A$. So, even though $L$ isn't positive or non-negative, we get Perron-Frobenius style results for it, in addition to the results we get for it since it is a symmetric matrix. In addition, if $L \rightarrow D^{-1/2}L D^{-1/2}$, then the eigenvalues of $L$ lie in $[0,2]$, and so on. This can be viewed as changing variables $y \leftarrow D^{-1/2}x$, and then defining the Laplacian (above) and the Rayleigh quotient in the degree-weighted dot product. (So, many of the results we will discuss today and next time go through to degree-heterogeneous graphs, for this reason. But some of the results, in particular the result having to do with expanders being least like low-dimensional Euclidean space, do not.) \subsection{Statement of the basic Cheeger Inequality} We know that $\lambda_2$ captures a ``hard'' notion of connectivity, since the above result in Theorem~\ref{thm:perrof-frob-lapl} states that $\lambda_2 = 0 \Leftrightarrow G \mbox{ is disconnected}$. Can we get a ``soft'' version of this? To do so, let's go back to $d$-regular graphs, and recall the definition. \begin{definition} Let $G=(V,E)$ be a $d$-regular graph, and let $\left(S,\bar{S}\right)$ be a cut, i.e., a partition of the vertex set. Then, \begin{itemize} \item the \emph{sparsity of $S$} is: $\sigma\left(S\right) = \frac{E\left(S,\bar{S}\right)}{\frac{d}{|V|}|S|\cdot|\bar{S}|}$ \item the \emph{edge expansion of $S$} is: $\phi\left(S\right) = \frac{E\left(S,\bar{S}\right)}{d|S|}$ \end{itemize} \end{definition} This definition holds for sets of nodes $S \subset V$, and we can extend them to hold for the graph $G$. \begin{definition} Let $G=(V,E)$ be a $d$-regular graph. Then, \begin{itemize} \item the \emph{sparsity of $G$} is: $\sigma\left(G\right) = \min_{S \subset V: S \neq \emptyset, S \neq V} \sigma\left(S\right) $.
\item the \emph{edge expansion of $G$} is: $\phi\left(G\right) = \min_{S \subset V: |S| \le \frac{|V|}{2}} \phi\left(S\right) $. \end{itemize} \end{definition} For $d$-regular graphs, the graph partitioning problem is to find the sparsity or edge expansion of $G$. Note that this means finding a number, i.e., the value of the objective function at the optimum; people often also want to find the corresponding set of nodes, and algorithms can do that, but the ``quality of approximation'' refers to that number. \textbf{Fact.} For all $d$-regular graphs $G$, and for all $S \subset V$ s.t. $|S| \le \frac{|V|}{2}$, we have that \[ \frac{1}{2}\sigma\left(S\right) \le \phi\left(S\right) \le \sigma\left(S\right) . \] Thus, since $\sigma\left(S\right) = \sigma\left(\bar{S}\right)$, we have that \[ \frac{1}{2}\sigma\left(G\right) \le \phi\left(G\right) \le \sigma\left(G\right) . \] BTW, this is what we mean when we say that these two objectives are ``equivalent'' or ``almost equivalent,'' since that factor of $2$ ``doesn't matter.'' By this we mean: \begin{itemize} \item If one is interested in theory, then this factor of $2$ is well below the guidance that theory can provide. That is, this objective is intractable to compute exactly, and the only approximation algorithms give quadratic or logarithmic (or square root of log) approximations. If they could provide $1\pm\epsilon$ approximations, then this would matter, but they can't and they are much coarser than this factor of $2$. \item If one is interested in practice, then we can often do much better than this factor-of-$2$ improvement with various local improvement heuristics. \item In many cases, people actually write one and optimize the other. \item Typically in theory one is most interested in the number, i.e., the value of the objective, and so we are ok by the above comment.
On the other hand, typically in practice, one is interested in using that vector to do things, e.g., make statements that the two clusters are close; but that requires stronger assumptions to say anything nontrivial about the actual clusters. \end{itemize} Given all that, here is the basic statement of Cheeger's inequality. \begin{theorem}[Cheeger's Inequality] Recall that \[ \lambda_2 = \min_{x : x\perp\vec{1}, x \neq \vec{0}}\frac{x^TLx}{x^Tx} \] where $L=I-\frac{1}{d}A$. Then, \[ \frac{\lambda_2}{2} \le \phi(G) \le \sqrt{ 2 \lambda_2 } . \] \end{theorem} \subsection{Comments on the basic Cheeger Inequality} Here are some notes about the basic Cheeger Inequality. \begin{itemize} \item This result ``sandwiches'' $\lambda_2$ and $\phi$ close to each other on both sides. Clearly, from this result it immediately follows that \[ \frac{\phi(G)^{2}}{2} \le \lambda_2 \le 2 \phi(G) . \] \item Later, we will see that $\phi(G)$ is large, i.e., is bounded away from $0$, if the graph is well-connected. In addition, other related things, e.g., that random walks will mix rapidly, will also hold. So, this result says that $\lambda_2$ is large if the graph is well-connected and small if the graph is not well-connected. So, it is a soft version of the hard connectivity statement that we had before. \item The inequality $\frac{\lambda_2}{2} \le \phi(G)$ is sometimes known as the ``easy direction'' of Cheeger's Inequality. The reason is that the proof is more straightforward and boils down to showing one of two related things: that you can present a test vector, which is roughly the indicator vector for a set of interest, and since $\lambda_2$ is a min of a Rayleigh quotient, then it is lower than the Rayleigh quotient of the test vector; or that the Rayleigh quotient is a relaxation of the sparsest cut problem, i.e., it is minimizing the same objective over a larger set.
\item The inequality $\phi(G) \le \sqrt{ 2 \lambda_2 }$ is sometimes known as the ``hard direction'' of Cheeger's Inequality. The reason is that the proof is constructive and is basically a vanilla spectral partitioning algorithm. Again, there are two related proofs for the ``hard'' direction of Cheeger. One way uses a notion of inequalities over graphs; the other way can be formulated as a randomized rounding argument. \item Before dismissing the easy direction, note that it gives a polynomial-time certificate that a graph is expander-like, \emph{i.e.}, that $\forall$ cuts (and there are $2^n$ of them to check) at least a certain number of edges cross that cut. (So the fact that it holds is actually pretty strong---we have a polynomial-time computable certificate of having no sparse cuts, which you can imagine is of interest since the naive way to check is to check everything.) \end{itemize} Before proceeding, a question came up in the class about whether the upper or lower bound was more interesting or useful in applications. It really depends on what you want. \begin{itemize} \item For example, if you are in a networking application where you are routing bits and you are interested in making sure that your network is well-connected, then you are most interested in the easy direction, since that gives you a quick-to-compute certificate that the graph is well-connected and that your bits won't get stuck in a bottleneck. \item Alternatively, if you want to run a divide and conquer algorithm or you want to do some sort of statistical inference, both of which might require showing that you have clusters in your graph that are well-separated from the rest of the data, then you might be more interested in the hard direction, which provides an upper bound on the intractable-to-compute expansion and so is a certificate that there are some well-separated clusters.
\end{itemize} \section{% (02/12/2015): Spectral Methods for Partitioning Graphs (2 of 2): Proof of Cheeger's Inequality} Reading for today. \begin{compactitem} \item Same as last class. \end{compactitem} Here, we will prove the easy direction and the hard direction of Cheeger's Inequality. Recall that what we want to show is that \[ \frac{\lambda_2}{2} \le \phi(G) \le \sqrt{ 2 \lambda_2 } . \] \subsection{Proof of the easy direction of Cheeger's Inequality} For the easy direction, recall that what we want to prove is that \[ \lambda_2 \le \sigma(G) \le 2 \phi(G) . \] To do this, we will show that the Rayleigh quotient is a relaxation of the sparsest cut problem. Let's start by restating the sparsest cut problem: \begin{eqnarray} \nonumber \sigma(G) &=& \min_{S \subset V : S \neq \emptyset, S \neq V} \frac{E\left(S,\bar{S}\right)}{\frac{d}{|V|}|S|\cdot|\bar{S}|} \\ \nonumber &=& \min_{ x\in\{0,1\}^{n} \diagdown \{\vec{0},\vec{1} \} } \frac{ \sum_{{\{u,v\}}\in E} | x_u - x_v | }{ \frac{d}{n}\sum_{\substack{ \{u,v\} \in V \times V }} | x_u - x_v | } \\ \label{eqn:sparsity-quadratic} &=& \min_{ x\in\{0,1\}^{n} \diagdown \{\vec{0},\vec{1} \} } \frac{ \sum_{{\{u,v\}}\in E} | x_u - x_v |^2 }{ \frac{d}{n}\sum_{\substack{ \{u,v\} \in V \times V}} | x_u - x_v |^2 } , \end{eqnarray} where the last equality follows since $x_u$ and $x_v$ are Boolean values, which means that $|x_u-x_v|$ is also a Boolean value. Next, recall that \begin{equation} \lambda_2 = \min_{x\in\mathbb{R}^{n}\diagdown\{\vec{0}\},x\perp\vec{1} } \frac{ \sum_{\{u,v\}\in E} |x_u-x_v|^2 }{ d\sum_v x_v^2 } . \label{eqn:lambda-quadratic-almost} \end{equation} Given that, we claim the following. \begin{claim} \begin{equation} \lambda_2 = \min_{x\in\mathbb{R}^{n}\diagdown Span\{\vec{1}\} } \frac{ \sum_{\{u,v\}\in E} |x_u-x_v|^2 }{ \frac{d}{n} \sum_{\{u,v\}} |x_u-x_v|^2 } .
\label{eqn:lambda-quadratic} \end{equation} \end{claim} \begin{Proof} Note that \begin{eqnarray*} \sum_{u,v} |x_u-x_v|^2 &=& \sum_{u,v} x_u^2 + \sum_{u,v} x_v^2 - 2 \sum_{u,v} x_u x_v \\ &=& 2n \sum_v x_v^2 -2 \left( \sum_v x_v \right)^2 . \end{eqnarray*} Note that for all $x\in\mathbb{R}^{n}\diagdown\{\vec{0}\}$ s.t. $x \perp \vec{1}$, we have that $\sum_vx_v=0$, so \begin{eqnarray*} \sum_v x_v^2 &=& \frac{1}{2n} \sum_{u,v} |x_u - x_v |^2 \\ &=& \frac{1}{n} \sum_{ \{u,v\} } |x_u-x_v |^2 , \end{eqnarray*} where the first sum is over ordered pairs $(u,v)$ of nodes, and where the second sum is over unordered pairs $\{u,v\}$ (i.e., we count both $(u,v)$ and $(v,u)$ in the first sum, but not in the second sum). So, \[ \min_{x\in\mathbb{R}^{n}\diagdown\{\vec{0}\},x\perp\vec{1} } \frac{ \sum_{\{u,v\}\in E} |x_u-x_v|^2 }{ d\sum_v x_v^2 } = \min_{x\in\mathbb{R}^{n}\diagdown\{0\},x\perp\vec{1} } \frac{ \sum_{\{u,v\}\in E} |x_u-x_v|^2 }{ \frac{d}{n} \sum_{\{u,v\}} |x_u-x_v|^2 } . \] Next, we need to remove the constraint $x\perp\vec{1}$, since the claim doesn't have that. To do so, let's choose an $x^{*}$ that minimizes the right-hand side of Eqn.~(\ref{eqn:lambda-quadratic}). Observe the following. If we shift every coordinate of that vector $x^{*}$ by the same constant, then we obtain another optimal solution, since the shift will cancel in all the expressions in the numerator and denominator. (This works for any shift, and we will choose a particular shift to get what we want.) So, we can define $x^{\prime}$ s.t. $x_{v}^{\prime} = x_v - \frac{1}{n} \sum_u x_u$, and note that the entries of $x^{\prime}$ sum to zero. Thus $x^{\prime} \perp \vec{1}$. Note that we need $x\not\in Span(\vec{1})$ to have $x^{\prime}\neq \vec{0}$. So, \[ \min_{x\in\mathbb{R}^{n}\diagdown\{0\},x\perp\vec{1} } \frac{ \sum_{\{u,v\}\in E} |x_u-x_v|^2 }{ \frac{d}{n} \sum_{\{u,v\}} |x_u-x_v|^2 } = \min_{x\in\mathbb{R}^{n}\diagdown Span\{\vec{1}\} } \frac{ \sum_{\{u,v\}\in E} |x_u-x_v|^2 }{ \frac{d}{n} \sum_{\{u,v\}} |x_u-x_v|^2 } .
\] This establishes the claim. \end{Proof} So, from Eqn.~(\ref{eqn:sparsity-quadratic}) and Eqn.~(\ref{eqn:lambda-quadratic}), it follows that $\lambda_2$ is a continuous relaxation of $\sigma(G)$, and so $\lambda_2 \le \sigma(G)$, from which the easy direction of Cheeger's Inequality follows. \subsection{Some additional comments} Here are some things to note. \begin{itemize} \item There is nothing required or forced on us about the use of this relaxation, and in fact there are other relaxations. We will get to them later. Some of them lead to traditional algorithms, and one of them provides the basis for flow-based graph partitioning algorithms. \item Informally, this relaxation says that we can replace the $x \in \{0,1\}^{n}$ or $x\in \{ -1,1\}^{n}$ constraint with the constraint that $x$ satisfies this ``on average.'' By that, we mean that $x$ in the relaxed problem is on the unit ball, but any particular value of $x$ might get distorted a lot, relative to its ``original'' $\{0,1\}$ or $\{ -1,1\}$ value. In particular, note that this is a very ``global'' constraint. As we will see, that has some good features, e.g., many of the well-known good statistical properties; but, as we will see, it has the consequence that any particular local pairwise metric information gets distorted, and thus it doesn't lead to the usual worst-case bounds that are given only in terms of $n$, the size of the graph (that are popular in TCS). \item While providing the ``easy'' direction, this lemma gives a quick low-degree polynomial time (whatever time it takes to compute an exact or approximate leading nontrivial eigenvector) certificate that a given graph is expander-like, in the sense that for all cuts, at least a certain number of edges cross it.
\item There has been a lot of work in recent years using approaches like this one; I don't know the exact history in terms of who did it first, but it was explained by Trevisan very cleanly in course notes he has had, and this and the proof of the other direction are taken from that. In particular, he describes the randomized rounding method for the other direction. Spielman has slightly different proofs. These proofs here are a combination of results from them. \item We could have proven this ``easy direction'' by just providing a test vector, e.g., a test vector that is related to the indicator vector of a partition. We went with this approach to highlight similarities and differences with flow-based methods, which we will discuss in a week or two. \item The other reason to describe $\lambda_2$ as a relaxation of $\sigma(G)$ is that the proof of the other direction, i.e., that $\phi(G) \le \sqrt{ 2 \lambda_2 }$, can be structured as a randomized rounding algorithm, i.e., given a real-valued solution to Eqn.~(\ref{eqn:lambda-quadratic}), find a similarly good solution to Eqn.~(\ref{eqn:sparsity-quadratic}). This is what we will do next time. \end{itemize} \subsection{A more general result for the hard direction} For the hard direction, recall that what we want to prove is that \[ \phi(G) \le \sqrt{ 2 \lambda_2 } . \] Here, we will state---and then we will prove---a more general result. For the proof, we will use the randomized rounding method. The proof of this result is algorithmic/constructive, and it can be seen as an analysis for the following algorithm. \textsc{VanillaSpectralPartitioning}. Given as input a graph $G=(V,E)$ and a vector $x\in\mathbb{R}^{n}$: \begin{enumerate} \item Sort the vertices of $V$ in non-decreasing order of values of entries of $x$, i.e., let $V = \{ v_1,\cdots,v_n\}$, where $x_{v_1} \le \cdots\le x_{v_n}$. \item Let $i \in [n-1]$ be s.t.
\[ \max \{ \phi \left(\left\{ v_1 ,\cdots,v_i \right\}\right) , \phi \left(\left\{ v_{i+1},\cdots,v_n \right\}\right) \}, \] is minimal. \item Output $S = \{ v_1,\ldots,v_i\}$ and $\bar{S} = \{ v_{i+1},\ldots v_n\}$. \end{enumerate} This is called a ``sweep cut,'' since it involves sweeping over the input vector and looking at $n$ (rather than $2^n$) partitions to find a good partition. We have formulated this algorithm as taking as input a graph $G$ and any vector $x$. You might be more familiar with the version that takes as input a graph $G$, first computes the leading nontrivial eigenvector, and then performs a sweep cut. We have formulated it the way we did for two reasons. \begin{itemize} \item We will want to separate out the spectral partitioning question from the question about how to compute the leading eigenvector or some approximation to it. For example, say that we don't run an iteration ``forever,'' i.e., to the asymptotic state to get an ``exact'' answer to machine precision. Then we have a vector that only approximates the leading nontrivial eigenvector. Can we still use that vector and get nontrivial results? There are several interesting results here, and we will get back to this. \item We will want to separate out the issue of the global eigenvector from something about the structure of the relaxation. We will see that we can use this result to get local and locally-biased partitions, using both optimization and random walk based ideas. In particular, we will use this to generalize to locally-biased spectral methods. \end{itemize} So, establishing the following lemma is sufficient for what we want. \begin{lemma} Let $G=(V,E)$ be a $d$-regular graph, and let $x\in\mathbb{R}^{n}$ be s.t. $x\perp\vec{1}$. Define \[ R(x) = \frac{ \sum_{ \{u,v \} \in E } |x_u-x_v |^2 }{ d \sum_v x_v^2 } \] and let $S$ be the side with at most $|V|/2$ nodes of the output cut of \textsc{VanillaSpectralPartitioning}.
Then, \[ \phi(S) \le \sqrt{ 2 R(x) } . \] \end{lemma} \noindent Before proving this lemma, here are several things to note. \begin{itemize} \item If we apply this lemma to a vector $x$ that is an eigenvector of $\lambda_2$, then $R(x) = \lambda_2$, and so we have that $\phi(G)\le \phi(S) \le \sqrt{2\lambda_2}$, i.e., we have the difficult direction of Cheeger's Inequality. \item On the other hand, any vector whose Rayleigh quotient is close to that of $\lambda_2$ also gives a good solution. This ``rotational ambiguity'' is the usual thing with eigenvectors, and it is different from the quality of approximation of the relaxation to the original expansion IP. We get ``goodness'' results for such a broad class of vectors to sweep over since we are measuring goodness rather modestly: only in terms of objective function value. Clearly, the actual clusters might change a lot and in general will be very different if we sweep over two vectors that are orthogonal to each other. \item This result actually holds for vectors $x$ more generally, i.e., vectors that have nothing to do with the leading eigenvector/eigenvalue, as we will see below with locally-biased spectral methods, where we will use it to get upper bounds on locally-biased variants of Cheeger's Inequality. \item In this case, in ``eigenvector time,'' we have found a set $S$ with expansion s.t. $\phi(S) \le \sqrt{2\lambda_2} \le 2 \sqrt{\phi(G)}$. \item This is \emph{not} a constant-factor approximation, or any nontrivial approximation factor in terms of $n$; and it is incomparable with other methods (e.g., flow-based methods) that do provide such an approximation factor. It is, however, nontrivial in terms of an important structural parameter of the graph. Moreover, it is efficient and useful in many machine learning and data analysis applications. \item The above algorithm can be implemented in roughly $O\left( |V| \log|V| + |E| \right)$ time, assuming arithmetic operations and comparisons take constant time.
This is since once we have computed $$ E\left( \{v_1,\ldots,v_i\},\{v_{i+1},\ldots,v_n\} \right) , $$ it only takes $O(\mbox{degree}(v_{i+1}))$ time to compute $$ E\left( \{v_1,\ldots,v_{i+1}\},\{v_{i+2},\ldots,v_n\} \right) . $$ \item As a theoretical point, there exists a nearly linear time algorithm to find a vector $x$ such that $R(x) \approx \lambda_2$, and so by coupling that algorithm with the above algorithm we can find a cut with expansion $O\left( \sqrt{\phi(G)} \right)$ in nearly-linear time. Not surprisingly, there is a lot of work on providing good implementations for these nearly linear time algorithms. We will return to some of these later. \item This quadratic factor is ``tight,'' in that there are graphs that are that bad; we will get to these graphs (rings or the Guattery-Miller cockroach, depending on exactly how you ask this question) below. \end{itemize} \subsection{Proof of the more general lemma implying the hard direction of Cheeger's Inequality} Note that $\lambda_2$ is a relaxation of $\sigma(G)$ and the lemma provides a rounding algorithm for real vectors that are a solution of the relaxation. So, we will view this in terms of a method from TCS known as randomized rounding. This is a useful thing to know, and other methods, e.g., flow-based methods that we will discuss soon, can also be analyzed in a similar manner. For those who don't know, here is the one-minute summary of randomized rounding. \begin{itemize} \item It is a method for designing and analyzing the quality of approximation algorithms. \item The idea is to use the probabilistic method to convert the optimal solution of a relaxation of a problem into an approximately optimal solution of the original problem. (The probabilistic method is a method from combinatorics to prove the existence of objects.
It involves randomly choosing objects from some specified class in some manner, i.e., according to some probability distribution, and showing that the desired objects can be found with probability $>0$, which implies that the object exists. Note that it is an existential/non-constructive and not an algorithmic/constructive method.) \item The usual approach to using randomized rounding is the following. \begin{itemize} \item Formulate a problem as an integer program or integer linear program (IP/ILP). \item Compute the optimal fractional solution $x$ to the LP relaxation of this IP. \item Round the fractional solution $x$ of the LP to an integral solution $x^{\prime}$ of the IP. \end{itemize} \item Clearly, if the objective is a min, then $ \mbox{cost}(x) \le \mbox{cost}(x^{\prime})$. The goal is to show that $\mbox{cost}(x^{\prime})$ is not much more than $\mbox{cost}(x)$. \item Generally, the method involves showing that, given any fractional solution $x$ of the LP, w.p. $>0$ the randomized rounding procedure produces an integral solution $x^{\prime}$ that approximates $x$ to some factor. \item Then, to be computationally efficient, one must show that $x^{\prime} \approx x$ w.h.p. (in which case the algorithm can stay randomized) or one must use a method like the method of conditional probabilities (to derandomize it). \end{itemize} Let's simplify notation: let $V = \{1,\ldots,n\}$; and so $x_1 \le x_2 \le \cdots \le x_n$. In this case, the goal is to show that there exists $i\in[n]$ s.t. \[ \phi\left( \{ 1 ,\ldots,i \} \right) \le \sqrt{ 2 R(x) } \quad \mbox{and} \quad \phi\left( \{ i+1,\ldots,n \} \right) \le \sqrt{ 2 R(x) } . \] We will prove the lemma by showing that there exists a distribution $D$ over sets $S$ of the form $\{ 1,\ldots,i\}$ s.t. \begin{equation} \frac{ \mathbb{E}_{S \sim D}\left\{ E(S,\bar{S}) \right\} }{\mathbb{E}_{S \sim D}\left\{ d\min\{ |S|,|\bar{S}| \} \right\} } \le \sqrt{2 R(x) } .
\label{eqn:ratio-expectations} \end{equation} Before establishing this, note that Eqn.~(\ref{eqn:ratio-expectations}) does \emph{not} immediately imply the lemma. Why? In general, it is the case that $\mathbb{E}\left\{ \frac{X}{Y} \right\} \neq \frac{\mathbb{E}\left\{X\right\}}{\mathbb{E}\left\{Y\right\}} $, but it suffices to establish something similar. \textbf{Fact.} For random variables $X$ and $Y$ over the same sample space, even though $\mathbb{E}\left\{ \frac{X}{Y} \right\} \neq \frac{\mathbb{E}\left\{X\right\}}{\mathbb{E}\left\{Y\right\}} $ in general, it is the case that \[ \mathbb{P}\left\{ \frac{X}{Y} \le \frac{\mathbb{E}\left\{X\right\}}{\mathbb{E}\left\{Y\right\}} \right\} > 0 , \] provided that $Y > 0$ over the entire sample space. But, by linearity of expectation, from Eqn.~(\ref{eqn:ratio-expectations}) it follows that \[ \mathbb{E}_{S \sim D}\left[ E(S,\bar{S}) - d\sqrt{2R(x)}\min\{ |S|,|\bar{S}| \}\right] \le 0 . \] So, there exists a set $S$ in the sample space s.t. \[ E(S,\bar{S}) - d\sqrt{2R(x)}\min\{ |S|,|\bar{S}| \} \le 0 . \] That is, for $S$ and $\bar{S}$, at least one of which has size $\le \frac{n}{2}$, \[ \phi(S) \le \sqrt{ 2 R(x) } , \] from which the lemma will follow. So, because of this, it will suffice to establish Eqn.~(\ref{eqn:ratio-expectations}). So, let's do that. Assume, WLOG, that $x_{\lceil\frac{n}{2}\rceil} = 0$, i.e., the median of the entries of $x$ equals $0$; and that $x_1^2+x_n^2=1$. This is WLOG since, if $x\perp\vec{1}$, then adding a fixed constant $c$ to all entries of $x$ can only decrease the Rayleigh quotient: \begin{eqnarray*} R\left(x+(c,\ldots,c)\right) &=& \frac{ \sum_{ \{u,v\} \in E} |(x_u+c) - (x_v+c)|^2 }{ d \sum_v (x_v+c)^2 } \\ &=& \frac{ \sum_{ \{u,v\} \in E} | x_u - x_v |^2 }{ d \sum_v x_v^2 + 2dc\sum_v x_v + dnc^2 } \\ &=& \frac{ \sum_{ \{u,v\} \in E} | x_u - x_v |^2 }{ d \sum_v x_v^2 + dnc^2 } \\ &\le& R(x) .
\end{eqnarray*} Also, multiplying all entries by a fixed constant does \emph{not} change the value of $R(x)$, nor does it change the property that $x_1 \le \cdots \le x_n$. We have made these choices since they will allow us to define a distribution $D$ over sets $S$ s.t. \begin{equation} \mathbb{E}_{S \sim D} \min\left\{ |S|,|\bar{S}| \right\} = \sum_i x_i^2 . \label{eqn:expect2} \end{equation} Define a distribution $D$ over sets $\{1,\ldots,i\}$, $1\le i\le n-1$, as the outcome of the following probabilistic process. \begin{enumerate} \item Choose a $t \in [ x_1,x_n ] \subset \mathbb{R}$ with probability density function equal to $f(t) = 2 |t|$, i.e., for $x_1 \le a \le b \le x_n$, let \[ \mathbb{P}\left[ a \le t \le b \right] = \int_a^b 2|t| dt = \left\{ \begin{array}{l l} | a^2-b^2 | & \quad \text{if $a$ and $b$ have the same sign}\\ a^2+b^2 & \quad \text{if $a$ and $b$ have different signs} \end{array} \right. . \] \item Let $S = \{ u : x_u \le t \}$. \end{enumerate} From this definition: \begin{itemize} \item The probability that an element $i \le \frac{n}{2}$ belongs to the smaller of the sets $S,\bar{S}$ equals the probability of $i\in S$ and $|S|\le |\bar{S}|$, which equals the probability that the threshold $t$ is in the range $[x_i,0]$, which equals $x_i^2$. \item The probability that an element $i > \frac{n}{2}$ belongs to the smaller of the sets $S,\bar{S}$ equals the probability of $i\in \bar{S}$ and $|S|\ge |\bar{S}|$, which equals the probability that the threshold $t$ is in the range $[0,x_i]$, which equals $x_i^2$. \end{itemize} So, Eqn.~(\ref{eqn:expect2}) follows from linearity of expectation. Next, we want to estimate the expected number of edges between $S$ and $\bar{S}$, i.e., \[ \mathbb{E}\left[ E\left(S,\bar{S}\right)\right] = \sum_{(i,j)\in E} \mathbb{P}\left[ \mbox{edge } (i,j) \mbox{ is cut by } (S,\bar{S}) \right] .
\] To estimate this, note that the event that the edge $(i,j)$ is cut by the partition $(S,\bar{S})$ happens when $t$ falls in between $x_i$ and $x_j$. So, \begin{itemize} \item if $x_i$ and $x_j$ have the same sign, then \[ \mathbb{P}\left[ \mbox{edge } (i,j) \mbox{ is cut by } (S,\bar{S}) \right]=|x_i^2-x_j^2| ; \] \item if $x_i$ and $x_j$ have different signs, then \[ \mathbb{P}\left[ \mbox{edge } (i,j) \mbox{ is cut by } (S,\bar{S}) \right] = x_i^2 + x_j^2 . \] \end{itemize} The following expression is an upper bound that covers both cases: \[ \mathbb{P}\left[ \mbox{edge } (i,j) \mbox{ is cut by } (S,\bar{S}) \right] \le | x_i - x_j | \cdot \left( |x_i| + |x_j| \right) . \] Plugging into the expression for the expected number of cut edges, and applying the Cauchy-Schwarz inequality, gives \begin{eqnarray*} \mathbb{E} E\left(S,\bar{S}\right) &\le& \sum_{(i,j) \in E} |x_i - x_j| \left( |x_i|+|x_j|\right) \\ &\le& \sqrt{ \sum_{(i,j)\in E} \left(x_i-x_j\right)^2 } \sqrt{ \sum_{(i,j)\in E} \left(|x_i|+|x_j|\right)^{2} } . \end{eqnarray*} Finally, to deal with the expression $\sum_{(i,j)\in E} \left(|x_i|+|x_j|\right)^{2}$, recall that $(a+b)^2 \le 2a^2+2b^2$. Thus, \[ \sum_{(i,j)\in E} \left(|x_i|+|x_j|\right)^{2} \le \sum_{(i,j) \in E} \left( 2x_i^2 + 2x_j^2 \right) = 2d \sum_i x_i^2 . \] Putting all of the pieces together, we have that \[ \frac{ \mathbb{E} \left[E\left(S,\bar{S}\right) \right]}{ \mathbb{E}\left[d \min\{ |S|,|\bar{S}| \}\right] } \le \frac{ \sqrt{ \sum_{(i,j)\in E} \left(x_i-x_j\right)^2 } \sqrt{ 2d \sum_i x_i^2 } }{ d \sum_i x_i^2 } = \sqrt{ 2R(x) } , \] from which the result follows. \section{% (02/17/2015): Expanders, in theory and in practice (1 of 2)} Reading for today. \begin{compactitem} \item ``Expander graphs and their applications,'' in Bull. Amer. Math.
Soc., by Hoory, Linial, and Wigderson \end{compactitem} \subsection{Introduction and Overview} \emph{Expander graphs}, also called \emph{expanders}, are remarkable structures that are widely-used in TCS and discrete mathematics. They have a wide range of applications: \begin{itemize} \item They reduce the need for randomness and are useful for derandomizing randomized algorithms---so, if random bits are a valuable resource and thus you want to derandomize some of the randomized algorithms we discussed before, then this is a good place to start. \item They can be used to find good error-correcting codes that are efficiently encodable and decodable---roughly, the reason is that they spread things out. \item They can be used to provide a new proof of the so-called PCP theorem, which provides a new characterization of the complexity class NP, with applications to the hardness of approximation. \item They are a useful concept in data analysis applications, since expanders look random, or are empirically quasi-random, and it is often the case that the data, especially when viewed at large, look pretty noisy. \end{itemize} For such useful things, it is somewhat surprising that (although they are very well-known in computer science and TCS in particular due to their algorithmic and complexity connections) expanders are almost unknown outside computer science. This is unfortunate since: \begin{itemize} \item The world is just a bigger place when you know about expanders. \item Expanders have a number of initially counterintuitive properties, like being very sparse and very well-connected, that are typical of a lot of data and thus that are good to have an intuition about. \item They are ``extremal'' in many ways, so they are a good limiting case if you want to see how far you can push your ideas/algorithms to work.
\item Expanders are the structures that are ``most unlike'' low-dimensional spaces---so if you don't know about them, then your understanding of the mathematical structures that can be used to describe data, as well as of the possible ways that data can look, will be rather limited, \emph{e.g.}, you might think that curved low-dimensional spaces are good ideas. \end{itemize} Related to the comment about expanders having extremal properties, if you know how your algorithm behaves on, say, expanders, hypercubes (which are similar and different in interesting ways), trees (which we won't get to as much, but will mention), and low-dimensional spaces, then you probably have a pretty good idea of how it will behave on your data. That is very different than knowing how it will behave in any one of those places, which doesn't give you much insight into how it will behave more generally; this extremal property is used mostly by TCS people for algorithm development, but it can be invaluable for understanding how/when your algorithm works and when it doesn't on your non-worst-case data. We will talk about expander graphs. One issue is that we can define expanders both for degree-homogeneous graphs as well as for degree-heterogeneous graphs; and, although many of the basic ideas are similar in the two cases, there are some important differences between the two cases. After defining them (which can be done via expansion/conductance or the leading nontrivial eigenvalue of the combinatorial/normalized Laplacian), we will focus on the following aspects of expanders and expander-like graphs. \begin{itemize} \item Expanders are graphs that are very well-connected. \item Expanders are graphs that are sparse versions/approximations of a complete graph. \item Expanders are graphs on which diffusions and random walks mix rapidly. \item Expanders are the metric spaces that are least like low-dimensional Euclidean spaces.
\end{itemize} Along the way, we might have a chance to mention a few other things, e.g.: how big $\lambda_2$ could be with Ramanujan graphs and Wigner's semicircle result; trivial ways with $d_{max}$ to extend the Cheeger Inequality to degree-heterogeneous graphs, as well as non-trivial ways with the normalized Laplacian; pseudorandom graphs, converses, and the Expander Mixing Lemma; and maybe others. Before beginning with some definitions, we should note that we can't draw a meaningful/interpretable picture of an expander, which is unfortunate since people like to visualize things. The reason for that is that there are no good ``cuts'' in an expander---relatedly, they embed poorly in low-dimensional spaces, which is what you are doing when you visualize on a two-dimensional piece of paper. The remedy for this is to compute all sorts of other things to try to get a non-visual intuition about how they behave. \subsection{A first definition of expanders} Let's start by working with $d$-regular graphs---we'll relax this regularity assumption later. But many of the most extremal properties of expanders hold for degree-regular graphs, so we will consider them first. \begin{definition} A graph $G=(V,E)$ is \emph{$d$-regular} if all vertices have the same degree $d$, \emph{i.e.}, each vertex is incident to exactly $d$ edges. \end{definition} Also, it will be useful to have the following notion of the set of edges between two sets $S$ and $T$ (or from $S$ to $T$), both of which are subsets of the vertex set (which may or may not be the complement of each other). \begin{definition} For $S,T \subset V$, denote \[ E(S,T)=\{(u,v)\in E|\; u\in S,\, v\in T\} . \] \end{definition} Given this notation, we can define the expansion of a graph. (This is slightly different from other definitions I have given.) 
\begin{definition} The \emph{expansion} or \emph{edge expansion ratio} of a graph $G$ is \[ h(G) = \min_{S:|S|\le\frac{n}{2}} \frac{E(S,\bar{S})}{|S|} . \] \end{definition} Note that this is slightly different (just in terms of the scaling) than the edge expansion of $G$ which we defined before as: $$\phi\left(G\right) = \min_{S \subset V: |S| \le \frac{|V|}{2}} \frac{E\left(S,\bar{S}\right)}{d|S|} .$$ We'll use this today, since I'll be following a proof from HLW, and they use this, and following their notation should make it easier. There should be no surprises, except just be aware that there is a factor of $d$ difference from what you might expect. (As an aside, recall that there are a number of extensions of this basic idea that measure other or finer-grained versions of how well-connected a graph is: \begin{itemize} \item Different notions of boundary---\emph{e.g.}, vertex expansion. \item Size-resolved minima---as arise in Markov chains and in measuring how good communities are as a function of size. \item Different denominators, which measure different notions of the ``size'' of a set $S$: \begin{itemize} \item Sparsity or cut ratio: $\min \frac{E(S,\bar{S})}{|S|\cdot|\bar{S}|}$---this is equivalent to expansion in a certain sense that we will get to. \item Conductance or NCut---this is identical for $d$-regular graphs but is more useful in practice and gives tighter bounds in theory if there is degree heterogeneity. \end{itemize} \end{itemize} We won't deal with these immediately, but we will get back to some later. This ends the aside.) In either case above, the expansion is a measure that quantifies how well-connected the graph is. Given this, informally we call a $d$-regular graph $G$ an \emph{expander} if $h(G) \geq \epsilon$ where $\epsilon$ is a constant. More precisely, let's define an expander: \begin{definition} A graph $G$ is a $(d,\epsilon)$-expander if it is $d$-regular and $h(G) \geq \epsilon$, where $\epsilon$ is a constant, independent of $n$.
\end{definition} Alternatively, sometimes expansion is defined in terms of a sequence of graphs: \begin{definition} A sequence of $d$-regular graphs $\{G_i\}_{i \in \mathbb{Z}^{+}}$ is a family of \emph{expander graphs} if $\exists \epsilon > 0$ s.t. $h(G_i)\geq\epsilon , \forall i$. \end{definition} \noindent If we have done the normalization correctly, then $h(G) \in [0,d]$ and $\phi(G) \in [0,1]$, where large means more expander-like and small means that there are good partitions. So, think of the constant $\epsilon$ as $d/10$ (and it would be $1/10$, if we used the $\phi(G)$ normalization). Of course, there is a theory/practice issue here, e.g., sometimes you are given a single graph and sometimes it can be hard to tell a moderately large constant from a factor of $\log(n)$; we will return to these issues later. \subsection{Alternative definition via eigenvalues} Although expanders can be a little tricky and counterintuitive, there are a number of ways to deal with them. One of those ways, but certainly not the only way, is to compute eigenvectors and eigenvalues associated with matrices related to the graph. For example, if we compute the second eigenvalue of the Laplacian, then we have Cheeger's Inequality, which says that if the graph $G$ is an expander, then we have a (non-tight, due to the quadratic approximation) bound on the second eigenvalue, and vice versa. That is, one way to test if a graph is an expander is to compute that eigenvalue and check. Of central interest to a lot of things is $\lambda_2^{LAP}$, which is the Fiedler value or second smallest eigenvalue of the Laplacian. Two things to note: \begin{itemize} \item If we work with Adjacency matrices rather than Laplacians, then we are interested in how far $\lambda_2^{ADJ}$ is from $d$. \item We often normalize things so as to interpret them in terms of a random walk, in which case the top eigenvalue $=1$, with the top eigenvector being the stationary distribution.
In that case, we are interested in how far $\lambda_2$ is from $1$. \end{itemize} (Since I'm drawing notes from several different places, we'll be a little inconsistent on what the notation means, but we should be consistent within each class or section of class.) Here is Cheeger's Inequality, stated in terms of $h(G)$ above. \begin{itemize} \item If $ 0=\lambda_1 \leq \lambda_2 \leq \cdots \leq \lambda_n $ are the eigenvalues of the Laplacian (not normalized, i.e. $D-A$) of a $d$-regular graph $G$, then: \[ \frac{\lambda_2}{2} \leq h(G) \leq \sqrt{ 2 d \lambda_2 } \] The $\sqrt{d}$ in the upper bound is due to our scaling. \end{itemize} Alternatively, here is Cheeger's Inequality, stated in terms of $h(G)$ for an Adjacency Matrix. \begin{itemize} \item If $d=\mu_{1}\geq\mu_{2}\geq\ldots\geq\mu_{n}$ are the eigenvalues of the Adjacency Matrix $A(G)$ of a $d$-regular graph $G$, then: \[ \frac{d-\mu_{2}}{2}\leq h(G)\leq\sqrt{2d(d-\mu_{2})} \] \end{itemize} Therefore, the expansion of the graph is related to its spectral gap ($d-\mu_{2}$). Thus, we can define a graph to be an expander if $\mu_2 \leq d-\epsilon $ or $\lambda_2 \geq \epsilon$, where $\lambda_2$ is the second eigenvalue of the matrix $L(G) = D - A(G)$ and $D$ is the diagonal degree matrix. Slightly more formally, here is the alternate definition of expanders: \begin{definition} A sequence of $d$-regular graphs $\{G_n\}_{n \in \mathbb{N}}$ is a family of expander graphs if $|\lambda_i^{ADJ}|\le d- \epsilon$ for all $i \ge 2$, \emph{i.e.}, if all the eigenvalues of $A$ other than the largest are bounded away from $d$. \end{definition} \noindent \textbf{Remark.} The last requirement can be written as $\lambda_2^{LAP}\ge c, \forall n$, \emph{i.e.}, that all the nonzero eigenvalues of the Laplacian are bounded below by some constant $c>0$. In terms of the edge expansion $\phi(G)$ we defined last time, this definition would become the following. \begin{definition} A family of constant-degree expanders is a family of graphs $\{ G_n \}_{n\in \mathbb{N}}$ s.t.
each graph $G_n$ is a $d$-regular graph on $n$ vertices and such that there exists an absolute constant $\phi^{*}$, independent of $n$, s.t. $\phi(G_n) \ge \phi^{*}$, for all $n$. \end{definition} \subsection{Expanders and Non-expanders} A clique or a complete graph is an expander, if we relax the requirement that the $d$-regular graph have a fixed $d$, independent of $n$. Moreover, $G_{n,p}$ (the random graph), for $p \gtrsim \frac{\log(n)}{n}$, is also an expander, with $d$ growing only weakly with $n$. (We may show that later.) Of greatest interest---at least for theoretical considerations---is the case that $d$ is a constant independent of $n$. \subsubsection{Very sparse expanders} In this case, the idea of an expander, \emph{i.e.}, an \emph{extremely} sparse and \emph{extremely} well-connected graph is nice; but do they exist? It wasn't obvious until someone proved it, but the answer is YES. In fact, a typical $d$-regular graph is an expander with high probability under certain random graph models. Here is a theorem that we will not prove. \begin{theorem} Fix an integer $d \ge 3$. Then, a randomly chosen $d$-regular graph is an expander w.h.p. \end{theorem} \noindent \textbf{Remark.} Clearly, the above theorem is false if $d=1$ (in which case we get a disjoint union of edges) or if $d=2$ (in which case we get a disjoint union of cycles); but it holds even for $d=3$. \noindent \textbf{Remark.} The point of comparison for this should be the regime $d \gtrsim \log(n)$, \emph{i.e.}, $p \gtrsim \frac{\log(n)}{n}$. In this case, measure concentrates in the asymptotic regime, and so it is plausible (and can be proved to be true) that the graph has no good partitions. To understand this, recall that one common random graph model is the Erd\H{o}s--R\'enyi $\mathcal{G}_{n,p}$ model, where there are $n$ vertices and each edge is chosen to exist with probability $p$.
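To make the expansion definition and the Cheeger sandwich from above concrete, here is a minimal, self-contained sketch (Python with \texttt{numpy} as an assumed dependency; the particular graphs, an $8$-cycle and $K_8$, are just convenient toy examples) that computes $h(G)$ by brute force over all subsets and checks $\frac{\lambda_2}{2} \le h(G) \le \sqrt{2d\lambda_2}$. Needless to say, brute-force enumeration only scales to toy graphs.

```python
import itertools
import numpy as np

def expansion(adj):
    """Brute-force h(G) = min over |S| <= n/2 of E(S, S-bar) / |S|."""
    n = len(adj)
    best = float("inf")
    for k in range(1, n // 2 + 1):
        for S in itertools.combinations(range(n), k):
            Sset = set(S)
            cut = sum(1 for u in S for v in range(n)
                      if v not in Sset and adj[u][v])
            best = min(best, cut / k)
    return best

def cycle(n):
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
    return A

K8 = np.ones((8, 8)) - np.eye(8)   # complete graph, d = 7
C8 = cycle(8)                      # ring, d = 2

h_K8, h_C8 = expansion(K8), expansion(C8)  # 4.0 and 0.5

# Cheeger sandwich for C8 (d = 2), using lambda_2 of the Laplacian D - A:
lam2 = np.sort(np.linalg.eigvalsh(np.diag(C8.sum(axis=1)) - C8))[1]
assert lam2 / 2 <= h_C8 <= np.sqrt(2 * 2 * lam2)
print(h_K8, h_C8)
```

The clique is very well-connected ($h(K_8)=4$, achieved by a $50$-$50$ split), while the best cut of the ring (an arc of $4$ vertices, cut by $2$ edges) gives $h(C_8)=1/2$, consistent with rings not forming an expander family.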
(We will probably describe this ER model as well as some of its basic properties later; at a minimum, we will revisit it when we talk about stochastic blockmodels.) The related $\mathcal{G}_{n,m}$ model is another common model, in which a graph with $n$ vertices and $m$ edges is chosen uniformly at random. An important fact is that if we set $p$ such that there are on average $m$ edges, then $\mathcal{G}_{n,m}$ is very similar (in strong senses of the word) to $\mathcal{G}_{n,p}$---if $p \geq \log n /n$. (That is the basis for the oft-made observation that $\mathcal{G}_{n,m}$ and $\mathcal{G}_{n,p}$ are ``the same.'') However, for the above definition of expanders, we require in addition that $d$ is a constant. Importantly, in that regime, the graphs are sparse enough that measure hasn't concentrated, and they are \emph{not} the same. In particular, if $p = 3/n$, $\mathcal{G}_{n,p}$ usually generates a graph that is not connected (and there are other properties that we might return to later). However, (by the above theorem) a random $3$-regular graph---which also has $m=\frac{3n}{2}$ edges---usually is a connected graph with very high expansion. We can think of this randomized expander construction as a version of $\mathcal{G}_{n,m}$, further constrained to $d$-regular graphs. \textbf{Remark.} There are explicit deterministic constructions for expanders---they have algorithmic applications. That is an FYI, but for what we will be doing that won't matter much. Moreover, later we will see that the basic idea is still useful even when we aren't satisfying the basic definition of expanders given above, e.g., when there is degree heterogeneity, when a graph has good small but no good large cuts, etc. \subsubsection{Some non-expanders} It might not be clear how big is big and how small is small---in particular, how big can $h$ (or $\lambda$) be. Relatedly, how ``connected'' can a graph be? To answer this, let's consider a few graphs. \begin{itemize} \item Path graph.
(For a path graph, $\mu_1 = \Theta(1/n^2)$; in this list, $\mu_1$ denotes the smallest nonzero eigenvalue of the Laplacian, i.e., the Fiedler value.) If we remove $1$ edge, then we can cut the graph into two $50$-$50$ pieces. \item Two-dimensional $\sqrt{n} \times \sqrt{n}$ grid. (For a $\sqrt{n} \times \sqrt{n}$ grid, $\mu_1 = \Theta(1/n)$.) Here, you can't disconnect the graph by removing $1$ edge, and the removal of a constant number of edges can only remove a constant number of vertices from the graph. But, it is possible to remove $\sqrt{n}$ of the edges, i.e., an $O(\frac{1}{\sqrt{n}})$ fraction of the total, and split the graph into two $50$-$50$ pieces. \item For a 3D grid, $\mu_1 = \Theta(1/n^{2/3})$. \item A $k$-dimensional hypercube is still better connected. But it is possible to remove a very small fraction of the edges (the edges of a dimension cut, which are a $\frac{1}{k}= \frac{1}{\log(n)}$ fraction of the total) and split half the vertices from the other half. \item For a binary tree, e.g., a complete binary tree on $n$ vertices, $\mu_1 = \Theta(1/n)$. \item For a $K_n - K_n$ dumbbell, (two expanders or complete graphs joined by an edge) $\mu_1 = \Theta(1/n)$. \item For a ring on $n$ vertices, $\mu_1 = \Theta(1/ n^2)$. \item Clique. Here, to remove a $p$ fraction of the vertices from the rest, you must remove a $\ge p(1-p)$ fraction of the edges. That is, it is very well connected. (While we can take a complete graph to be the ``gold standard'' for connectivity, it does, however, have the problem that it is dense; thus, we will be interested in sparse versions of a complete graph that are similarly well-connected.) \item For an expander, $\mu_1 = \Theta(1)$. \end{itemize} \noindent \textbf{Remark.} A basic question to ask is whether, say, $\mu_1 \sim \Theta(\mbox{poly}(1/ n))$ is ``good'' or ``bad,'' say, in some applied sense?
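The scalings in the list above are easy to check numerically. Here is a sketch (Python with \texttt{numpy} as an assumed dependency) that computes the Fiedler value, i.e., the smallest nonzero Laplacian eigenvalue, for a path, a ring, and a $\sqrt{n}\times\sqrt{n}$ grid, all on $n=64$ vertices.

```python
import numpy as np

def laplacian(edges, n):
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1; L[v, v] += 1
        L[u, v] -= 1; L[v, u] -= 1
    return L

def fiedler(edges, n):
    """Second-smallest eigenvalue of the (unnormalized) Laplacian D - A."""
    return np.sort(np.linalg.eigvalsh(laplacian(edges, n)))[1]

n = 64
path = [(i, i + 1) for i in range(n - 1)]
ring = [(i, (i + 1) % n) for i in range(n)]
s = 8  # sqrt(n) x sqrt(n) grid
grid = [(i * s + j, i * s + j + 1) for i in range(s) for j in range(s - 1)] \
     + [(i * s + j, (i + 1) * s + j) for i in range(s - 1) for j in range(s)]

lp, lr, lg = fiedler(path, n), fiedler(ring, n), fiedler(grid, n)
# path and ring scale like 1/n^2; the 2-d grid scales like 1/n
print(lp, lr, lg)
```

Consistent with the closed forms $2(1-\cos(\pi/n))$ for the path and $2(1-\cos(2\pi/n))$ for the ring, the path and ring values are tiny ($\approx 0.0024$ and $\approx 0.0096$), while the grid value ($\approx 0.15$) is much larger, reflecting the $1/n^2$ versus $1/n$ scalings in the list.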
The answer is that it can be either: it can be bad, if you are interested in connectivity, e.g., a network where nodes are communication devices or computers and edges correspond to an available link; or it can be good, either for algorithmic reasons if, e.g., you are interested in divide and conquer algorithms, or for statistical reasons, since this can be used to quantify conditional independence and inference. \noindent \textbf{Remark.} Recall the quadratic relationship between $d-\lambda_2$ and $h$. If $d-\lambda_2$ is $\Theta(1)$, then that is not much of a difference (a topic to which we will return later), but if it is $\Theta(1/n)$ or $\Theta(1/n^2)$ then it makes a big difference. A consequence of this is that, by TCS standards, spectral partitioning does a reasonably good job partitioning expanders (basically since the quadratic of a constant is a constant), while everyone else would wonder why it makes sense to partition expanders; while by TCS standards, spectral partitioning does \emph{not} do well in general, since it has a worst-case approximation factor that depends on $n$, while everyone else would say that it does pretty well on their data sets. \subsubsection{How large can the spectral gap be?} A question of interest is: how large can the spectral gap be? The answer here depends on the relationship between $n$, the number of nodes in the graph, and $d$, the degree of each node (assumed to be the same for now). In particular, the answer is different if $d$ is fixed as $n$ grows or if $d$ grows with $n$ as $n$ grows. As an extreme example of the latter case, consider the complete graph $K_n$ on $n$ vertices, in which case $d=n-1$. The adjacency matrix of $K_n$ is $A_{K_n} = J-I$, where $J$ is the all-ones matrix, and where $I=I_n$ is the diagonal identity matrix. The spectrum of the adjacency matrix of $K_n$ is $\{n-1,-1,\ldots,-1\}$, and so $\lambda=\max(|\mu_{2}|,|\mu_{n}|)=1$.
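The $K_n$ spectrum is easy to verify numerically; and, for contrast with the fixed-$d$ case discussed next, the sketch below (Python/\texttt{numpy}; the $3$-regular Petersen graph is just a convenient illustrative example, not one used in the notes) also computes the adjacency spectrum of a constant-degree graph, whose spectral gap $d-\mu_2$ is a constant.

```python
import numpy as np

n = 8
A_Kn = np.ones((n, n)) - np.eye(n)           # A_{K_n} = J - I
mu = np.sort(np.linalg.eigvalsh(A_Kn))[::-1]  # descending order
# spectrum is {n-1, -1, ..., -1}:
assert abs(mu[0] - (n - 1)) < 1e-9 and np.allclose(mu[1:], -1)

# Petersen graph (3-regular, 10 vertices): outer 5-cycle, spokes, inner pentagram.
P = np.zeros((10, 10))
for i in range(5):
    for (u, v) in [(i, (i + 1) % 5),          # outer cycle
                   (i, i + 5),                # spoke
                   (5 + i, 5 + (i + 2) % 5)]: # inner pentagram
        P[u, v] = P[v, u] = 1
mu_P = np.sort(np.linalg.eigvalsh(P))[::-1]
print(mu_P)  # spectrum {3, 1 (x5), -2 (x4)}, so the gap d - mu_2 = 2
```

So for $K_n$ the gap $d-\mu_2 = n$ grows with $n$, while for a fixed-degree graph like this one the gap is a constant, which is the regime the next theorem addresses.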
More interesting for us here is the case that $d$ is fixed and $n$ is large, in which case $n \gg d$, and in which case we have the following theorem (which is due to Alon and Boppana). \begin{theorem}[Alon-Boppana] Denoting $\lambda=\max(|\mu_{2}|,|\mu_{n}|)$, we have, for every $d$-regular graph: \[ \lambda\geq2\sqrt{d-1}-o_{n}(1) \] \end{theorem} \noindent So, the eigengap $d-\mu_2$ is not larger than $d-2\sqrt{d-1}$ (up to the $o_n(1)$ term). For those familiar with Wigner's semicircle law, note the similar form. The next question is: How tight is this? In fact, it is pretty close to tight in the following sense: there exist constructions of graphs, called Ramanujan graphs, for which the second eigenvalue of $L(G)$ is $\lambda_2(G) = d- 2 \sqrt{d-1}$, and so the tightness is achieved. Note also that this is of the same scale as Wigner's semicircle law; the precise statements are somewhat different, but the connection should not be surprising. \subsection{Why is $d$ fixed?} A question that arises is: why is $d$ fixed in the definition, since there is often degree variability in practice? Basically, that is because it makes things harder, and so it is significant that expanders exist even then. Moreover, for certain theoretical issues that is important. But, in practice the idea of an expander is still useful, and so we go into that here. We can define expanders: i.t.o. boundary expansion; or i.t.o. $\lambda_2$. The intuition is that such a graph is well-connected, and we then get lots of nice properties: \begin{itemize} \item Well-connected, so random walks converge fast. \item Quasi-random, meaning that it is empirically random (although in a fairly weak sense). \end{itemize} Here are several things to note: \begin{itemize} \item Most theorems in graph theory go through to weighted graphs, if you are willing to have factors like $\frac{w_{max}}{w_{min}}$---that is a problem if there is very significant degree heterogeneity or heterogeneity in weights, as is common.
So in that case many of those results are less interesting. \item In many applications the data are extremely sparse, like a constant number of edges on average (although there may be a big variance). \item There are several realms of $d$, since it might not be obvious what is big and what is small: \begin{itemize} \item $d=n$: complete (or nearly complete) graph. \item $d=\Omega(\mbox{polylog}(n))$: still dense, certainly in a theoretical sense, as this is basically the asymptotic region. \item $d=\Theta(\mbox{polylog}(n))$: still sufficiently dense that measure concentrates, \emph{i.e.}, enough concentration for applications; Haar measure is uniform, and there are no ``outliers.'' \item $d=\Theta(1)$: In this regime things are very sparse, $G_{nm} \ne G_{np}$, so you have a situation where the graph has a giant component but isn't fully connected; so $3$-regular random graphs are different from $G_{np}$ with $p=\frac{3}{n}$. \end{itemize} You should think in terms of $d=\Theta(\mbox{polylog}(n))$ at most, although one often can't tell $O(\log n)$ versus a big constant, and comparing trivial statistics can hide what you want. \item The main properties we will show will generalize to degree variability. In particular: \begin{itemize} \item High expansion $\rightarrow$ high conductance. \item Random walks converge to the ``uniform'' distribution $\rightarrow$ random walks converge to a distribution that is uniform over the edges, meaning proportional to the degree of a node. \item Expander Mixing Property $\rightarrow$ Discrepancy and Empirical Quasi-randomness. \end{itemize} \end{itemize} So, for theoretical applications, we need $d=\Theta(1)$; but for data applications, think i.t.o. a graph being expander-like, i.e., think of some of the things we are discussing as being relevant for the properties of that data graph, if: (1) it has good conductance properties; and (2) it is empirically quasi-random.
This happens when data are extremely sparse and pretty noisy, both of which they often are. \subsection{Expanders are graphs that are very well-connected} Here, we will describe several results that quantify the idea that expanders are graphs that are very well-connected. \subsubsection{Robustness of the largest component to the removal of edges} Here is an example of a lemma characterizing how constant-degree graphs with constant expansion are very sparse graphs with extremely good connectivity properties. In words, what the following lemma says is that the removal of $k$ edges cannot cause more than $O\left(\frac{k}{d}\right)$ vertices to be disconnected from the rest. (Note that it is always possible to disconnect $\frac{k}{d}$ vertices after removing $k$ edges, so the connectivity of an expander is the best possible.) \begin{lemma} Let $G=(V,E)$ be a regular graph with expansion $\phi$. Then, after an $\epsilon < \phi$ fraction of the edges are adversarially removed, the graph has a connected component that has at least a $1-\frac{\epsilon}{2\phi}$ fraction of the vertices. \end{lemma} \begin{Proof} Let $d$ be the degree of $G$. Let $E^{\prime} \subseteq E$ be an arbitrary subset of $\le \epsilon |E| = \epsilon d \frac{|V|}{2}$ edges. Let $C_1,\ldots,C_m$ be the connected components of the graph $(V,E\diagdown E^{\prime})$, ordered s.t. \[ |C_1| \ge |C_2| \ge \cdots \ge |C_m| . \] In this case, we want to prove that \[ |C_1| \ge |V|\left( 1-\frac{\epsilon}{2\phi} \right) . \] To do this, note that \[ |E^{\prime}| \ge \frac{1}{2} \sum_{i \ne j} E\left(C_i,C_j\right) = \frac{1}{2} \sum_i E\left(C_i,V\diagdown C_i\right) . \] So, if $|C_1| \le \frac{|V|}{2}$, then \[ |E^{\prime}| \ge \frac{1}{2} \sum_i d \phi |C_i| = \frac{1}{2} d \phi |V| , \] which is a contradiction if $\phi > \epsilon$ (since $|E^{\prime}| \le \epsilon d \frac{|V|}{2}$). On the other hand, if $|C_1| \ge \frac{|V|}{2}$, then let's define $S$ to be $S= C_2 \cup \ldots \cup C_m$.
Then, we have \[ |E^{\prime}| \ge E(C_1,S) \ge d \phi |S| , \] which implies that \[ |S| \le \frac{\epsilon}{2\phi} |V| , \] and so $|C_1| \ge \left( 1-\frac{\epsilon}{2\phi}\right) |V|$, from which the lemma follows. \end{Proof} \subsubsection{Relatedly, expanders exhibit quasi-randomness} In addition to being well-connected in the above sense (and other senses), expanders also ``look random'' in certain senses. \paragraph{One direction} For example, here I will discuss connections with something I will call ``empirical quasi-randomness.'' It is a particular notion of things looking random that will be useful for what we will discuss. Basically, it says that the number of edges between any two subsets of nodes is very close to the expected value, which is what you would see in a random graph. Somewhat more precisely, it says that when $\lambda$ below is small, then the graph has the following quasi-randomness property: for every two disjoint sets of vertices, $S$ and $T$, the number of edges between $S$ and $T$ is close to $\frac{d}{n}|S|\cdot|T|$, i.e., what we would expect a random graph with the same average degree $d$ to have. (Of course, this could also hide other structures of potential interest, as we will discuss later, but it is a reasonable notion of ``looking random'' at a large scale.) Here, I will do it in terms of expansion---we can generalize it and do it with conductance and discrepancy, and we may do that later. We will start with the following theorem, called the ``Expander Mixing Lemma,'' which shows that if the spectral gap is large, then the number of edges between two subsets of the graph vertices can be approximated by the same number for a random graph, \emph{i.e.}, what would be expected on average, so it looks empirically random.
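Before stating that lemma formally, here is a brute-force numerical check of the quasi-randomness property just described (Python/\texttt{numpy}; the $3$-regular Petersen graph, with $d=3$, $n=10$, and $\lambda=\max(|\mu_2|,|\mu_n|)=2$, is just a convenient hypothetical test case). It verifies that $\bigl||E(S,T)|-\frac{d}{n}|S|\cdot|T|\bigr|\le\lambda\sqrt{|S|\cdot|T|}$ over all pairs of disjoint nonempty vertex sets.

```python
import itertools
import numpy as np

# Petersen graph: 3-regular on 10 vertices.
A = np.zeros((10, 10))
for i in range(5):
    for (u, v) in [(i, (i + 1) % 5), (i, i + 5), (5 + i, 5 + (i + 2) % 5)]:
        A[u, v] = A[v, u] = 1

d, n = 3, 10
mu = np.sort(np.linalg.eigvalsh(A))          # ascending order
lam = max(abs(mu[0]), abs(mu[-2]))           # max(|mu_n|, |mu_2|)

# Worst normalized deviation of |E(S,T)| from (d/n)|S||T| over disjoint S, T.
worst = 0.0
V = range(n)
for k in range(1, n):
    for S in map(set, itertools.combinations(V, k)):
        rest = [v for v in V if v not in S]
        for j in range(1, len(rest) + 1):
            for T in itertools.combinations(rest, j):
                E_ST = sum(A[u][v] for u in S for v in T)
                dev = abs(E_ST - d * len(S) * len(T) / n)
                worst = max(worst, dev / np.sqrt(len(S) * len(T)))

assert worst <= lam + 1e-9  # the Expander Mixing bound holds for every pair
print(worst, lam)
```

The worst deviation over all pairs stays below $\lambda$, which is exactly the content of the lemma stated next.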
Note that $\frac{d}{n}|S|\cdot|T|$ is the average value of the number of edges between the two sets of nodes in a random graph; also, note that $\lambda\sqrt{|S|\cdot|T|}$ is an ``additive'' scale factor, which might be very large, e.g., too large for the following lemma to give an interesting bound, in particular when one of $S$ or $T$ is very small. \begin{theorem}[Expander Mixing Lemma] Let $G=(V,E)$ be a $d$-regular graph, with $|V|=n$ and $\lambda=\max(|\mu_{2}|,|\mu_{n}|)$, where $\mu_i$ is the $i$-th largest eigenvalue of the (non-normalized) Adjacency Matrix. Then, for all $S,T\subseteq V$, we have the following: \[ \left|\left|E(S,T)\right|-\frac{d}{n}|S|\cdot|T|\right| \leq\lambda\sqrt{|S|\cdot|T|} . \] \end{theorem} \begin{proof} Define $\chi_{S}$ and $\chi_{T}$ to be the characteristic vectors of $S$ and $T$. Then, if $\{v_{j}\}_{j=1}^{n}$ are orthonormal eigenvectors of $A_{G}$, with $v_1 = \frac{1}{\sqrt{n}}(1,\ldots,1)$, then we can write the expansion of $\chi_{S}$ and $\chi_{T}$ in terms of those eigenvectors as: $\chi_{S}=\sum_{i}\alpha_{i}v_{i}$ and $\chi_{T}=\sum_{j}\beta_{j}v_{j}$. Thus, \begin{eqnarray*} \left|E(S,T)\right| &=& \chi_{S}^{T}A\chi_{T} \\ &=& \left( \sum_{i}\alpha_{i}v_{i} \right)^{T} A \left( \sum_{j}\beta_{j}v_{j} \right) \\ &=& \left( \sum_{i}\alpha_{i}v_{i} \right)^{T} \left( \sum_{j}\mu_{j}\beta_{j}v_{j} \right)\\ &=& \sum_i\mu_i\alpha_i\beta_i \quad\mbox{since the $v_i$'s are orthonormal} . \end{eqnarray*} Thus, \begin{eqnarray*} \left|E(S,T)\right| &=& \sum\mu_{i}\alpha_{i}\beta_{i} \\ &=& \mu_{1}\alpha_1\beta_1+\sum_{i\geq 2}\mu_{i}\alpha_{i}\beta_{i} \\ &=& d\frac{|S|\cdot|T|}{n}+\sum_{i\geq 2}\mu_{i}\alpha_{i}\beta_{i} , \end{eqnarray*} where the last equality holds because $\alpha_{1}= \langle \chi_{S},\frac{\overrightarrow{1}}{\sqrt{n}} \rangle=\frac{|S|}{\sqrt{n}}$ and (similarly) $\beta_{1}=\frac{|T|}{\sqrt{n}}$, and $\mu_{1}=d$.
Hence, \begin{eqnarray*} \left|\left|E(S,T)\right|-\frac{d}{n}|S|\cdot|T|\right| & =& \left|\sum_{i=2}^{n}\mu_{i}\alpha_{i}\beta_{i}\right| \\ &\leq& \sum_{i\geq 2}|\mu_{i}\alpha_{i}\beta_{i}| \\ &\leq& \lambda\sum_{i\geq 2}|\alpha_{i}||\beta_{i}| \\ &\leq& \lambda||\alpha||_{2}||\beta||_{2} = \lambda||\chi_{S}||_{2}||\chi_{T}||_{2} = \lambda\sqrt{|S|\cdot|T|} \end{eqnarray*} \end{proof} \paragraph{Other direction} There is also a partial converse to this result: \begin{theorem}[Bilu and Linial] Let $G$ be a $d$-regular graph, and suppose that \[ \left| E(S,T) - \frac{d}{n}|S|\cdot|T| \right| \leq \rho \sqrt{|S|\cdot|T|} \] holds $\forall$ disjoint $S$,$T$ and for some $\rho > 0$. Then \[ \lambda \le O\left( \rho\left( 1+ \log(\frac{d}{\rho}) \right) \right) \] \end{theorem} \subsubsection{Some extra comments} We have been describing these results in terms of regular and unweighted graphs, mainly for simplicity of analysis, since the statements of the theorems don't change much under generalization. It is important to note that these results can be generalized to weighted graphs with heterogeneous degrees by using discrepancy. Informally, think of these characterizations as intuitively defining what the interesting properties of an expander are for real data, or what an expander is more generally, or what it means for a data set to look expander-like. Although we won't worry too much about those issues, it is important to note that for certain, mostly algorithmic and theoretical, applications, facts such as $d=\Theta(1)$ are very important. \subsection{Expanders are graphs that are sparse versions/approximations of a complete graph} To quantify the idea that constant-degree expanders are sparse approximations to the complete graph, we need two steps: \begin{enumerate} \item first, a way to say that two graphs are close; and \item second, a way to show that, with respect to that closeness measure, expanders and the complete graph are close.
\end{enumerate} \subsubsection{A metric of closeness between two graphs} For the first step, we will view a graph as a Laplacian and vice versa, and we will consider the partial order over PSD matrices. In particular, recall that for a symmetric matrix $A$, we can write \[ A \succeq 0 \] to mean that \[ A \in PSD \] (and, relatedly, $A \succ 0$ to mean that it is PD). In this case, we can write $A \succeq B$ to mean that $A-B \succeq 0$. Note that $\succeq$ is a partial order. Unlike the real numbers, where every pair is comparable, for symmetric matrices, some pairs are comparable and some are not. But for pairs to which it does apply, it acts like a total order, in that, e.g., \begin{eqnarray*} & & A \succeq B \mbox{ and } B \succeq C \mbox{ implies } A \succeq C \\ & & A \succeq B \mbox{ implies that } A + C \succeq B+C , \end{eqnarray*} for symmetric matrices $A$, $B$, and $C$. By viewing a graph as its Laplacian, we can use this to define an inequality over graphs. In particular, for graphs $G$ and $H$, we can write \[ G \succeq H \mbox{ to mean that } L_G \succeq L_H . \] In particular, from our previous results, we know that if $G=(V,E)$ is a graph and $H=(V,F)$ is a subgraph of $G$, then $L_G \succeq L_H$. This follows since the Laplacian of a graph is the sum of the Laplacians of its edges: i.e., since $F \subseteq E$, we have \[ L_G = \sum_{e \in E} L_e = \sum_{e \in F} L_e + \sum_{e \in E \diagdown F} L_e \succeq \sum_{e \in F} L_e = L_H , \] which follows since $\sum_{e \in E \diagdown F} L_e \succeq 0$. That last expression uses the additive property of the order; now let's look at the multiplicative property that is also respected by that order. If we have a graph $G=(V,E)$ and a graph $H=(V,E^{\prime})$, let's define the graph $c \cdot H$ to be the same as the graph $H$, except that every edge weight is multiplied by $c$. Then, we can prove relationships between graphs such as the following. \begin{lemma} If $G$ and $H$ are graphs s.t.
\[ G \succeq c \cdot H \] then, for all $k$ we have that \[ \lambda_k(G) \ge c \lambda_k(H) . \] \label{lem:graphic-ineq-mult} \end{lemma} \begin{Proof} The proof is by the min-max Courant-Fischer variational characterization. We won't do it in detail. See DS, 09/10/12. \end{Proof} \noindent From this, we can prove more general relationships, e.g., bounds if edges are removed or reweighted. In particular, the following two lemmas are almost corollaries of Lemma~\ref{lem:graphic-ineq-mult}. \begin{lemma} If $G$ is a graph and $H$ is obtained by adding an edge to $G$ or increasing the weight of an edge in $G$, then, for all $i$, we have that $\lambda_i(G) \le \lambda_i(H)$. \end{lemma} \begin{lemma} If $G=(V,E,W_1)$ is a graph and $H=(V,E,W_2)$ is a graph that differs from $G$ only in its weights, then \[ G \succeq \min_{e \in E} \frac{w_1(e)}{w_2(e)} H . \] \end{lemma} Given the above discussion, we can use this to define the notion that two graphs approximate each other, basically by saying that they are close if their Laplacian quadratic forms are close. In particular, here is the definition. \begin{definition} Let $G$ and $H$ be graphs. We say that $H$ is a $c$-approximation to $G$ if \[ cH \succeq G \succeq \frac{1}{c} H . \] \label{def:graph-c-approx} \end{definition} \noindent As a special case, note that if $c=1+\epsilon$, for some small $\epsilon\in(0,1)$, then we have that the two graphs are very close. \subsubsection{Expanders and complete graphs are close in that metric} Given this notion of closeness between two graphs, we can now show that constant-degree expanders are sparse approximations of the complete graph. The following theorem is one formalization of this idea. This establishes the closeness; and, since constant-degree expanders are very sparse, this result shows that they are sparse approximations of the complete graph.
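The graphic inequalities above are easy to check numerically. Here is a sketch (Python/\texttt{numpy}; the particular graphs, a $6$-cycle $H$ and a $G$ obtained by adding one chord, are just hypothetical examples) verifying that $L_G \succeq L_H$ when $H$ is a subgraph of $G$, and the resulting eigenvalue monotonicity $\lambda_k(G) \ge \lambda_k(H)$.

```python
import numpy as np

def laplacian(edges, n):
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1; L[v, v] += 1
        L[u, v] -= 1; L[v, u] -= 1
    return L

n = 6
H_edges = [(i, (i + 1) % n) for i in range(n)]  # 6-cycle
G_edges = H_edges + [(0, 3)]                    # add a chord, so H subgraph of G

L_H = laplacian(H_edges, n)
L_G = laplacian(G_edges, n)

# G >= H in the PSD order: L_G - L_H is PSD (its smallest eigenvalue is >= 0).
assert np.linalg.eigvalsh(L_G - L_H).min() > -1e-9

# Hence lambda_k(G) >= lambda_k(H) for every k.
lam_G = np.sort(np.linalg.eigvalsh(L_G))
lam_H = np.sort(np.linalg.eigvalsh(L_H))
assert np.all(lam_G >= lam_H - 1e-9)
print(lam_G, lam_H)
```

Here $L_G - L_H$ is exactly the Laplacian of the single chord, which is manifestly PSD, so this is the additive property of the order in action.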
(We note in passing that it is known more generally that every graph can be approximated by a sparse graph; this graph sparsification problem is of interest in many areas, and we might return to it.) \begin{theorem} For every $\epsilon > 0$, there exists a $d > 0$ such that for all sufficiently large $n$, there is a $d$-regular graph $G_n$ that is a $(1\pm\epsilon)$-approximation of the complete graph $K_n$. \end{theorem} \begin{Proof} Recall that a constant-degree expander is a $d$-regular graph whose Adjacency Matrix eigenvalues satisfy \begin{equation} |\alpha_i| \le \epsilon d , \label{eqn:expander-eigenval-bound} \end{equation} for all $i \ge 2$, for some $\epsilon < 1$. We will show that graphs satisfying this condition also satisfy the condition of Def.~\ref{def:graph-c-approx} (with $c=1+\epsilon$) to be a good approximation of the complete graph. To do so, recall that \[ \left(1-\epsilon\right) H \preceq G \preceq \left(1+\epsilon\right) H \] means that \[ \left(1-\epsilon\right) x^TL_Hx \le x^TL_Gx \le \left(1+\epsilon\right) x^TL_Hx . \] Let $G$ be a graph whose Adjacency Matrix eigenvalues satisfy Eqn.~(\ref{eqn:expander-eigenval-bound}). Given this, recall that the Laplacian eigenvalues satisfy $\lambda_i = d-\alpha_i$, and so all of the non-zero eigenvalues of $L_G$ are in the interval between $\left(1-\epsilon\right)d$ and $\left(1+\epsilon\right)d$. I.e., for all $x$ s.t. $x \perp \vec{1}$, we have that \[ \left(1-\epsilon\right) d\, x^Tx \le x^TL_Gx \le \left(1+\epsilon\right) d\, x^Tx . \] (This follows from Courant-Fischer or by expanding $x$ in an eigenvector basis.) On the other hand, for the complete graph $K_n$, we know that all vectors $x$ that are $\perp \vec{1}$ satisfy \[ x^T L_{K_n}x = n x^Tx . \] So, let $H$ be the graph \[ H = \frac{d}{n} K_{n} , \] from which it follows that \[ x^TL_Hx = d x^Tx . \] Thus, the graph $G$ is a $(1+\epsilon)$-approximation of the graph $H$, from which the theorem follows.
\end{Proof} \noindent For completeness, consider $G-H$ and let's look at its norm to see that it is small. First note that \[ \left(1-\epsilon\right)H \preceq G \preceq \left(1+\epsilon\right)H \mbox{ implies that } -\epsilon H \preceq G-H \preceq \epsilon H . \] Since $G$ and $H$ are symmetric, and all of the eigenvalues of $L_H$ are either $0$ or $d$, this tells us that \[ \|L_G - L_H\|_2 \le \epsilon d . \] \subsection{Expanders are graphs on which diffusions and random walks mix rapidly} We will have more to say about different types of diffusions and random walks later, so for now we will only work with one variant and establish one simple variant of the idea that random walks on expander graphs mix or equilibrate quickly to their equilibrium distribution. Let $G=(V,E,W)$ be a weighted graph, and we want to understand something about how random walks behave on $G$. One might expect that if, e.g., the graph were a dumbbell graph, then random walks that started in the one half would take a very long time to reach the other half; on the other hand, one might hope that if there are no such bottlenecks, e.g., bottlenecks revealed by the expansion or the second eigenvalue, then random walks would mix relatively quickly. To see this, let $p_t \in \mathbb{R}^{n}$, where $n$ is the number of nodes in the graph, be a probability distribution at time $t$. This is just some probability distribution over the nodes, \emph{e.g.}, it could be a discrete Dirac $\delta$-function, \emph{i.e.}, the indicator of a node, at time $t=0$; it could be the uniform distribution; or it could be something else. Given this distribution at time $t$, the transition rule that governs the distribution at time $t+1$ is: \begin{itemize} \item To go to $p_{t+1}$, move to a neighbor with probability proportional to the weight of the edge. (In the case of unweighted graphs, this means that we move to each neighbor with equal probability.)
That is, to get to $p_{t+1}$ from $p_t$, sum over neighbors \[ p_{t+1}(u) = \sum_{v:(u,v) \in E} \frac{W(u,v)}{d(v)} p_{t}(v) \] where $d(v) = \sum_u W(u,v)$ is the weighted degree of $v$. \end{itemize} As a technical point, there are going to be bottlenecks to mixing, and so we will often consider a ``lazy'' random walk. Laziness removes the trivial obstruction that a bipartite graph does not mix (i.e., the stationary distribution doesn't exist as a limit), and it only increases the mixing time by a factor of two (intuitively, in expectation, two steps of the ``lazy'' walk correspond to one step of the simple random walk). That factor of two doesn't matter in theory, since there we are interested in polynomial versus exponential times; and in practice the issues might be easy to diagnose or can be dealt with in less aggressive ways. Plus, it's nicer in theory, since then things are SPSD. By making a random walk ``lazy,'' we mean the following: Let \[ p_{t+1}(u) = \frac{1}{2} p_{t}(u) + \frac{1}{2} \sum_{v:(u,v) \in E} \frac{W(u,v)}{d(v)} p_{t}(v) . \] That is, $p_{t+1} = \frac{1}{2}\left(I+AD^{-1}\right)p_{t}$, and so the transition matrix $W_{G}=A_{G} D_{G}^{-1}$ is replaced with $W_{G} = \frac{1}{2}\left(I+A_{G}D_{G}^{-1}\right)$---this is an asymmetric matrix that is similar (in the linear-algebraic sense) to a symmetric matrix related to the normalized Laplacian. Then, after $t$ steps, we are basically considering $W_G^t$, in the sense that \[ p_0 \rightarrow p_t = W p_{t-1} = W^2 p_{t-2} = \cdots = W^t p_0 . \] \textbf{Fact.} Regardless of the initial distribution, the lazy random walk converges to $\pi(i) = \frac{d(i)}{\sum_j d(j)}$, which is the right eigenvector of $W$ with eigenvalue $1$. \textbf{Fact.} If $1=\omega_1 \ge \omega_2 \ge \cdots \ge \omega_n \ge 0$ are the eigenvalues of $W$, with $\pi(i) = \frac{d(i)}{\sum_j d(j)}$, then $\omega_2$ governs the rate of convergence to the stationary distribution. There are a number of ways to formalize this ``rate of mixing'' result, depending on the norm used and other things.
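The transition rule and the two Facts above can be sketched as follows (Python/\texttt{numpy}; the small non-regular graph, a triangle with a pendant vertex, is a hypothetical example chosen so that the stationary distribution is visibly proportional to the degrees).

```python
import numpy as np

# Triangle {0,1,2} with a pendant vertex 3 attached to node 2.
A = np.zeros((4, 4))
for u, v in [(0, 1), (1, 2), (0, 2), (2, 3)]:
    A[u, v] = A[v, u] = 1
D = np.diag(A.sum(axis=1))

# Lazy walk: W = (1/2)(I + A D^{-1}); columns sum to 1, acting on column vectors.
W = 0.5 * (np.eye(4) + A @ np.linalg.inv(D))

p = np.array([1.0, 0.0, 0.0, 0.0])  # start as a delta distribution at node 0
for _ in range(200):
    p = W @ p                       # p_{t+1} = W p_t

pi = A.sum(axis=1) / A.sum()        # pi(i) = d(i) / sum_j d(j)
assert np.abs(p - pi).sum() < 1e-8  # converged to the degree-weighted distribution
print(p, pi)
```

The degrees here are $(2,2,3,1)$, so the walk converges to $\pi=(\frac{2}{8},\frac{2}{8},\frac{3}{8},\frac{1}{8})$ regardless of the start, illustrating the first Fact; shrinking the spectral gap (e.g., with a dumbbell-like graph) would slow this convergence, per the second Fact.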
In particular, a very good way is with the total variation distance, which is defined as: \[ \|p-q\|_{TVD} = \max_{S \subseteq V} \left\{ \sum_{v \in S} p_v - \sum_{v \in S} q_v \right\} = \frac{1}{2} \|p-q\|_1 . \] (There are other measures if you are interested in mixing rates of Markov chains.) But the basic point is that if $1-\omega_2$ is large, \emph{i.e.}, you are an expander, then a random walk converges fast. For example: \begin{theorem} Assume $G=(V,E)$ with $|V|=n$ is $d$-regular, $A$ is the adjacency matrix of $G$, and $\hat{A}=\frac{1}{d}A$ is the transition matrix of a random walk on $G$, i.e., the normalized Adjacency Matrix. Also, assume $\lambda=\max(|\mu_{2}|,|\mu_{n}|)=\alpha d$ (recall $\mu_i$ is the $i$-th largest eigenvalue of $A$, not $\hat{A}$). Then \[ ||\hat{A}^{t}p-u||_{1}\leq\sqrt{n}\alpha^{t} , \] where $u$ is the stationary distribution of the random walk, which is the uniform distribution in the undirected $d$-regular graph, and $p$ is an arbitrary initial distribution on $V$. In particular, if $t \ge \frac{c}{1-\alpha}\log\left(\frac{n}{\epsilon}\right)$, for some absolute constant $c$ independent of $n$, then $\| u - \hat{A}^{t} p \|_1 \le \epsilon$. \end{theorem} \begin{proof} Let us define the matrix $\hat J = \frac{1}{n} \vec{1} \vec{1}^\top $, where, as before, $\vec{1}$ is the all-ones vector of length $n$. Note that, for any probability vector $p$, we have \begin{align*} \hat J p &= \frac{1}{n} \vec{1} \vec{1}^\top p \\ &= \frac{1}{n} \vec{1} \cdot 1 \\ &= u. \end{align*} Now, since $\hat A = \frac{1}{d} A $ we have $\hat \mu_i = \mu_i/d$, where $\hat \mu_i$ denotes the $i$th largest eigenvalue of $\hat A$, and the eigenvectors of $\hat A$ are equal to those of $A$.
Hence, we have \begin{align*} \big \Vert \hat A ^t - \hat J \big \Vert_2 &= \max_{w:\Vert w \Vert_2 \leq 1} \Vert (\hat A^t - \hat J) w \Vert_2 \\ &= \sigma_{\max} \left( \hat A^t - \hat J \right) \\ &= \sigma_{\max} \left( \sum_{i=1}^{n} \hat \mu_i^t v_i v_i^\top - \frac{1}{n} \vec{1} \vec{1}^\top \right) \\ &\overset{(a)}{=} \sigma_{\max} \left( \sum_{i=2}^{n} \hat \mu_i^t v_i v_i^\top \right) \\ &= \max\{ \vert \hat \mu_2^t \vert, \vert \hat \mu_n^t \vert\} \\ &= \alpha^t, \end{align*} where $(a)$ follows since $v_1 = \frac{1}{\sqrt{n}} \vec{1}$ and $\hat \mu_1 = 1$. Then, \begin{align*} \big \Vert \hat A^t p - u \big \Vert_1 &\leq \sqrt{n} \big\Vert \hat A^t p - u \big\Vert_2 \\ &= \sqrt{n} \big\Vert \hat A^t p - \hat J p \big\Vert_2 \\ &\leq \sqrt{n} \big\Vert \hat A^t - \hat J \big\Vert_2 \big \Vert p \big \Vert_2 \\ &\leq \sqrt{n} \alpha^t, \end{align*} which concludes the proof. \end{proof} This theorem shows that if the spectral gap is large (i.e., $\alpha$ is small), then the walk mixes rapidly. This is one example of a large body of work on rapidly mixing Markov chains. For example, there are extensions of this to degree-heterogeneous graphs and all sorts of other things. Later, we might revisit this a little, when we see how tight this is; in particular, one issue that arises when we discuss local and locally-biased spectral methods is that how quickly a random walk mixes depends not only on the second eigenvalue but also on the size of the set achieving that minimum conductance value. \section{(02/19/2015): Expanders, in theory and in practice (2 of 2)} Reading for today. \begin{compactitem} \item Same as last class. \end{compactitem} Here, we will describe how expanders are the metric spaces that are least like low-dimensional Euclidean spaces (or, for that matter, any-dimensional Euclidean spaces). Someone asked at the end of the previous class about what would an expander ``look like'' if we were to draw it.
The point of these characterizations of expanders---that they don't have good partitions, that they embed poorly in low-dimensional spaces, etc.---is that \emph{you can't draw them to see what they look like}, or at least you can't draw them in any particularly meaningful way. The reason is that if you could draw them on the board or a two-dimensional piece of paper, then you would have an embedding into two dimensions. Relatedly, you would have partitioned the expander into two parts, i.e., those nodes on the left half of the page, and those nodes on the right half of the page. Any such picture would have roughly as many edges crossing between the two halves as it had on either half, meaning that it would be a non-interpretable mess. This is the reason that we are going through these seemingly-circuitous characterizations of the properties of expanders---they are important, but since they can't be visualized, we can only characterize their properties and gain intuition about their behavior via these indirect means. \subsection{Introduction to Metric Space Perspective on Expanders} To understand expanders from a metric space perspective, and in particular to understand how they are the metric spaces that are least like low-dimensional Euclidean spaces, let's back up a bit to the seemingly-exotic subject of \emph{metric spaces} (although in retrospect it will not seem so exotic or be so surprising that it is relevant). \begin{itemize} \item Finite-dimensional Euclidean space, i.e., $\mathbb{R}^{n}$, with $n < \infty$, is an example of a metric space that is very nice but also quite structured and limited. \item When you go to infinite-dimensional Hilbert spaces, things get much more complex; but $\infty$-dimensional RKHSs, as used in ML, are $\infty$-dimensional Hilbert spaces that are sufficiently regularized that they inherit most of the nice properties of $\mathbb{R}^{n}$. \item If we measure distances in $\mathbb{R}^{n}$ w.r.t.
other norms, \emph{e.g.}, $\ell_1$ or $\ell_{\infty}$, then we step outside the domain of Hilbert spaces to the domain of Banach spaces or normed vector spaces. \item A graph $G=(V,E)$ is completely characterized by its shortest path or geodesic metric; so the metric space is the nodes, with the distance being the geodesic distance between the nodes. Of course, you can modify this metric by adding nonnegative weights to edges, as with some nonlinear dimensionality reduction methods. Also, you can assign a vector to each vertex and thus view a graph geometrically. (We will get back to the question of whether there are other distances that one can associate with a graph, e.g., resistance or diffusion-based distances; and we will ask what is the relationship between these and the geodesic distance.) \item The data may not be obviously a matrix or a graph. Maybe you just have similarity/dissimilarity information, \emph{e.g.}, between DNA sequences, protein sequences, or microarray expression levels. Of course, you might want to relate these things to matrices or graphs in some way, as with RKHSs, but let's deal with metrics first. \end{itemize} So, let's talk about metric spaces more generally. The goal will be to understand how good/bad things can be when we consider metric information about the data. So, we start with a definition: \begin{definition} $(X,d)$ is a \emph{metric space} if \begin{itemize} \item $d:X \times X \rightarrow \mathbb{R}^{+}$ (nonnegativity) \item $d(x,y)=0$ iff $x=y$ \item $d(x,y)=d(y,x)$ (symmetry) \item $d(x,y) \le d(x,z)+d(z,y)$ (triangle inequality) \end{itemize} \end{definition} The idea is that there is a function over the set $X$ that takes as input pairs of elements and that satisfies a generalization of our intuition from Euclidean distances: namely, nonnegativity, the second condition above, symmetry, and the triangle inequality.
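As a quick sanity check on the definition, the geodesic distance of a graph (our main example above) can be computed and verified against all four axioms; this is a minimal sketch on a small hypothetical graph, using Floyd--Warshall for the shortest paths:

```python
import itertools

# Geodesic (shortest-path) distances of a small hypothetical graph,
# computed with Floyd-Warshall, then checked against the four axioms.
INF = float("inf")
n = 4
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
for u, v in edges:
    d[u][v] = d[v][u] = 1
for k in range(n):                     # Floyd-Warshall
    for i in range(n):
        for j in range(n):
            d[i][j] = min(d[i][j], d[i][k] + d[k][j])

for x, y, z in itertools.product(range(n), repeat=3):
    assert d[x][y] >= 0                        # nonnegativity
    assert (d[x][y] == 0) == (x == y)          # d(x,y)=0 iff x=y
    assert d[x][y] == d[y][x]                  # symmetry
    assert d[x][y] <= d[x][z] + d[z][y]        # triangle inequality
print("the geodesic distance is a metric")
```

(For a connected graph with positive edge weights, all four checks pass; the point of what follows is how far such a metric can be from a Euclidean one.)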
Importantly, this metric does not need to come from a dot product, and so although the intuition about distances from Euclidean spaces is the motivation, it is significantly different and more general. Also, we should note that if various conditions are satisfied, then various metric-like things are obtained: \begin{itemize} \item If the second condition above is relaxed, but the other conditions are satisfied, then we have a \emph{pseudometric}. \item If symmetry is relaxed, but the other conditions are satisfied, then we have a \emph{quasimetric}. \item If the triangle inequality is relaxed, but the other conditions are satisfied, then we have a \emph{semimetric}. \end{itemize} We should note that those names are not completely standard, and to confuse matters further sometimes the relaxed quantities are called metrics---for example, we will encounter the so-called \emph{cut metric} describing distances with respect to cuts in a graph, which is not really a metric since the second condition above is not satisfied. More generally, the distances can come from a Gram matrix or a kernel, or even from running algorithms in an infinite-dimensional space. Some of these metrics can be a little counterintuitive, and so for a range of reasons it is useful to ask how similar or different two metrics are, \emph{e.g.}, can we think of a metric as a tweak of a low-dimensional space, in which case we might hope that some of our previous machinery might apply. So, we have the following question: \begin{question} How well can a given metric space $(X,d)$ be approximated by $\ell_2$, where $\ell_2$ is the metric space $(\mathbb{R}^{n},||\cdot||)$, where $\forall x,y\in\mathbb{R}^{n}$, we have $||x-y||^2=\sum_{i=1}^{n}(x_i-y_i)^2$.
\end{question} The idea here is that we want to replace the metric $d$ with something $d'$ that is ``nicer,'' while still preserving distances---in that case, since a lot of algorithms use only distances, we can work with $d'$ in the nicer place, and get results that are algorithmically and/or statistically better without introducing too much error. That is, maybe it's faster without too much loss, as we formulated it before; or maybe it is better, in that the nicer place introduces some sort of smoothing. Of course, we could ask this about metrics other than $\ell_2$; we just start with that since we have been talking about it. There are a number of ways to compare metric spaces. Here we will start by defining a measure of distortion between two metrics. \begin{definition} Given a metric space $(X, d)$ and our old friend the metric space $(\mathbb{R}^{n}, \ell_2)$, and a mapping $f: X \rightarrow \mathbb{R}^{n}$: \begin{itemize} \item $expansion(f) = \max_{x_1 \ne x_2 \in X} \frac{\left\|f(x_1) - f(x_2)\right\|_2}{d(x_1, x_2)}$ \item $contraction(f) = \max_{x_1 \ne x_2 \in X} \frac{d(x_1, x_2)}{\left\|f(x_1) - f(x_2)\right\|_2}$ \item $distortion(f) = expansion(f) \cdot contraction(f)$ \end{itemize} \end{definition} As usual, there are several things we can note: \begin{itemize} \item An embedding with distortion $1$ is an \emph{isometry}. This is very limiting for most applications of interest, which is OK since it is also an unnecessarily strong notion of similarity for most applications of interest, so we will instead look for low-distortion embeddings. \item There is also interest in embedding into $\ell_1$, which we will return to below when talking about graph partitioning. \item There is also interest in embedding into other ``nice'' places, like trees, but we will not be talking about that in this class. \item As a side comment, a theorem of Dvoretzky implies that, among embeddings into normed spaces, embedding into $\ell_2$ is the hardest.
So, aside from being something we have already seen, this partially justifies the use of $\ell_2$ and the central role of $\ell_2$ in embedding theory more generally. \end{itemize} Here, we should note that we have already seen one example (actually, several related examples) of a low-distortion embedding. Here we will phrase the JL lemma that we saw before in our new nomenclature. \begin{theorem}[JL Lemma] Let $X$ be an $n$-point set in Euclidean space, \emph{i.e.}, $X \subset \ell_2^n$, and fix $\epsilon \in (0,1]$. Then $\exists$ a $(1+\epsilon)$-embedding of $X$ into $\ell_2^k$, where $k=O\left(\frac{\log n}{\epsilon^2}\right)$. \end{theorem} That is, Johnson-Lindenstrauss says that we can map $x_i \rightarrow f(x_i)$ such that each distance is within a factor of $1 \pm \epsilon$ of the original. A word of notation and some technical comments: For $x\in\mathbb{R}^{d}$ and $p \in [1,\infty)$, the $\ell_p$ norm of $x$ is defined as $||x||_p = \left( \sum_{i=1}^{d} |x_i|^p \right)^{1/p}$. Let $\ell_p^d$ denote the space $\mathbb{R}^{d}$ equipped with the $\ell_p$ norm. Sometimes we are interested in embeddings into some space $\ell_p^d$, with $p$ given but the dimension $d$ unrestricted, \emph{e.g.}, into \emph{some} Euclidean space s.t. $X$ embeds well. We also write $\ell_p$ for the space of all sequences $(x_1,x_2, \ldots)$ with $||x||_p < \infty$, where $||x||_p$ is defined as $||x||_p = \left( \sum_{i=1}^{\infty} |x_i|^p \right)^{1/p}$. In this case, embedding into $\ell_p$ is shorthand for embedding into $\ell_p^d$ for some $d$. Here is an important theorem related to this and that we will return to later. \begin{theorem}[Bourgain] Every $n$-point metric space $(X,d)$ can be embedded into Euclidean space $\ell_2$ with distortion $\leq O(\log n)$. \end{theorem} \begin{proof} [Proof Idea.] (The proof idea is nifty and used in other contexts, but we won't use it much later, except to point out how flow-based methods do something similar.)
The basic idea is: given $(X, d)$, map each point $x \rightarrow \phi(x)$ in an $O(\log^2 n)$-dimensional space whose coordinates equal the distances to randomly chosen subsets $S \subseteq X$. That is, given $(X,d)$, map every point $x \in X$ to $\phi(x)$, an $O(\log^2 n)$-dimensional vector, where coordinates in $\phi(\cdot)$ correspond to subsets $S \subseteq X$, and s.t. the coordinate corresponding to $S$ in $\phi(x)$ is $d(x,S) = \min_{s \in S} d(x,s)$. Then, to define the map, specify the collection of subsets used, selected carefully but randomly---select $O(\log n)$ subsets of size $1$, $O(\log n)$ subsets of size $2$, and likewise of sizes $4$, $8$, $\ldots$, $\frac{n}{2}$. One can then show that this map is a low-distortion embedding. \end{proof} Note that the dimension of the Euclidean space was originally $O(\log^2n)$, but it has been improved to $O(\log n)$, which I think is tight. Note also that the proof is algorithmic, in that it gives an efficient randomized algorithm. Several questions arise: \begin{itemize} \item Q: Is this bound tight? A: YES, on expanders. \item Q: Let $c_2(X,d)$ be the distortion of the embedding of $X$ into $\ell_2$; can we compute $c_2(X,d)$ for a given metric? A: YES, with an SDP. \item Q: Are there metrics such that $c_2(X,d) \ll \log n$? A: YES, we saw it with JL, \emph{i.e.}, high-dimensional Euclidean spaces, which might be trivial since we allow the dimension to float in the embedding, but there are others we won't get to. \end{itemize} \subsubsection {Primal} The problem of whether a given metric space is $\gamma$-embeddable into $\ell_2$ is polynomial-time solvable. Note: this does not specify the dimension, just whether there is some dimension; asking the same question with dimension constraints or a fixed dimension is in general much harder. Here, the condition that the distortion is $\leq \gamma$ can be expressed as a system of linear inequalities in the Gram matrix corresponding to the vectors $\phi(x)$.
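Concretely, the key point is the identity $||u_i-u_j||^2 = Z_{ii}+Z_{jj}-2Z_{ij}$, which makes the distortion conditions linear in the entries of the Gram matrix $Z=UU^T$; here is a minimal numerical sketch (with an arbitrary, hypothetical embedding $U$):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 5, 3
U = rng.standard_normal((n, k))   # rows u_i = f(x_i): a hypothetical embedding
Z = U @ U.T                       # Gram matrix; PSD by construction

# ||u_i - u_j||^2 = Z_ii + Z_jj - 2 Z_ij, so the conditions
# d(x_i,x_j)^2 <= ||u_i - u_j||^2 <= gamma^2 d(x_i,x_j)^2
# become *linear* inequalities in the entries of Z.
for i in range(n):
    for j in range(n):
        lhs = np.sum((U[i] - U[j]) ** 2)
        rhs = Z[i, i] + Z[j, j] - 2 * Z[i, j]
        assert np.isclose(lhs, rhs)

assert np.all(np.linalg.eigvalsh(Z) >= -1e-10)   # Z is PSD
print("squared distances are linear in the Gram matrix Z")
```

The design point is that optimizing over embeddings $U$ directly is nonconvex, while optimizing over PSD matrices $Z$ subject to these linear constraints is convex.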
So, the computation of $c_2(X,d)$ is an SDP---which is easy or hard, depending on how you view SDPs---actually, given an input metric space $(X,d)$ and an $\epsilon > 0$, we can determine $c_2(X,d)$ to relative error $\leq\epsilon$ in $\mbox{poly}(n,1/\epsilon)$ time. Here is a basic theorem in the area: \begin{theorem}[LLR] $\exists$ a poly-time algorithm that, given as input a metric space $(X,d)$, computes $c_2(X,d)$, where $c_2(X,d)$ is the least possible distortion of any embedding of $(X,d)$ into $(\mathbb{R}^{n},\ell_2)$. \end{theorem} \begin{proof} The proof is from HLW, and it is based on semidefinite programming. Let $(X,d)$ be the metric space, let $|X|=n$, and let $f:X\rightarrow\ell_2$. WLOG, scale $f$ s.t. $contraction(f)=1$. Then, $distortion(f)\leq\gamma$ iff \begin{equation} d(x_i,x_j)^2 \leq ||f(x_i)-f(x_j)||^2 \leq \gamma^2 d(x_i,x_j)^2 . \label{eqn:llr1-eq1} \end{equation} Then, let $u_i=f(x_i)$ be the $i$-th row of the embedding matrix $U$, and let $Z=UU^T$. Note that $Z\in PSD$, and conversely, if $Z\in PSD$, then $Z=UU^T$, for some matrix $U$. Note also: \begin{eqnarray*} ||f(x_i)-f(x_j)||^2 &=& ||u_i-u_j||^2 \\ &=& (u_i-u_j)^T(u_i-u_j) \\ &=& u_i^Tu_i + u_j^Tu_j - 2u_i^Tu_j \\ &=& Z_{ii} + Z_{jj} - 2Z_{ij} . \end{eqnarray*} So, instead of finding a $u_i=f(x_i)$ s.t.~(\ref{eqn:llr1-eq1}) holds, we can find a $Z \in PSD$ s.t. \begin{equation} d(x_i,x_j)^2 \leq Z_{ii} + Z_{jj} - 2Z_{ij} \leq \gamma^2 d(x_i,x_j)^2 . \label{eqn:llr1-eq2} \end{equation} Thus, $c_2\leq\gamma$ iff $\exists Z \in PSD$ s.t.~(\ref{eqn:llr1-eq2}) holds $\forall ij$. So, this is a semidefinite feasibility problem, and we can solve it with standard methods, e.g., interior point or ellipsoid; and all the usual issues apply. \end{proof} \subsubsection {Dual} The above is a Primal version of the optimization problem. If we look at the corresponding Dual problem, then this gives a characterization of $c_2(X,d)$ that is useful in proving lower bounds.
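As a concrete preview of how such a dual certificate proves lower bounds, here is a minimal numerical sketch (in numpy) for the $4$-cycle with its geodesic metric. The certificate matrix here follows the construction $P=(dI-A_{G})+\frac{\epsilon}{2}(B-I)$ that appears in the expander argument later (with $d=2$, $\epsilon=d-\lambda_2=2$, and $B$ the antipodal matching), and it recovers the known value $c_2(C_4)=\sqrt{2}$:

```python
import numpy as np

# The 4-cycle: adjacency A, geodesic distances D, antipodal matching B.
A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], float)
B = np.array([[0, 0, 1, 0], [0, 0, 0, 1], [1, 0, 0, 0], [0, 1, 0, 0]], float)
D = np.array([[0, 1, 2, 1], [1, 0, 1, 2], [2, 1, 0, 1], [1, 2, 1, 0]], float)

deg, eps = 2.0, 2.0                    # degree d, and eps = d - lambda_2
P = (deg * np.eye(4) - A) + (eps / 2) * (B - np.eye(4))

assert np.allclose(P @ np.ones(4), 0)            # P 1 = 0
assert np.all(np.linalg.eigvalsh(P) >= -1e-10)   # P is PSD: a valid certificate

num = np.sum(np.where(P > 0, P, 0.0) * D**2)     # sum over P_ij > 0
den = -np.sum(np.where(P < 0, P, 0.0) * D**2)    # -sum over P_ij < 0
lower_bound = np.sqrt(num / den)
print(lower_bound)                               # 1.4142... = sqrt(2)
```

This bound is tight for $C_4$: placing the vertices at the corners of a unit square achieves distortion exactly $\sqrt{2}$.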
(This idea will also come up later in graph partitioning, and elsewhere.) To go from the Primal to the Dual, we must take a nonnegative linear combination of constraints. So we must write $Z \in PSD$ in such a way, since that is the constraint causing a problem; the following lemma will do that. \begin{lemma} $Z \in PSD$ iff $\sum_{ij} q_{ij} z_{ij} \geq 0, \forall Q \in PSD$. \end{lemma} \begin{proof} First, we will consider rank-$1$ matrices; the general result will follow since general PSD matrices are a nonnegative linear combination of rank-$1$ PSD matrices of the form $qq^T$, \emph{i.e.}, $Q=qq^T$. First, start with the $\Leftarrow$ direction: for $q\in\mathbb{R}^{n}$, let $Q$ be the PSD matrix s.t. $Q_{ij}=q_iq_j$; then \[ q^TZq = \sum_{ij} q_iZ_{ij}q_j = \sum_{ij} Q_{ij}z_{ij} \geq 0 , \] where the inequality follows from the hypothesis, applied to the PSD matrix $Q=qq^T$. Thus, $Z \in PSD$. For the $\Rightarrow$ direction: let $Q$ be a rank-$1$ PSD matrix; thus, it has the form $Q=qq^T$, or $Q_{ij}=q_iq_j$, for $q\in\mathbb{R}^{n}$. Then, \[ \sum_{ij}Q_{ij}z_{ij} = \sum_{ij}q_iZ_{ij}q_j \geq 0 , \] where the inequality follows since $Z$ is $PSD$. Thus, since $Q \in PSD$ implies that $Q = \sum_i q_i q_i^T = \sum_i \Omega_i$, with each $\Omega_i$ being a rank-$1$ PSD matrix, the lemma follows by working through things. \end{proof} Now that we have this characterization of $Z \in PSD$ in terms of a set of (nonnegative) linear combinations of constraints, we are ready to get our Dual problem, which will give us the nice characterization of $c_2(X,d)$. Recall that finding an embedding $f(x_i) = u_i$ is equivalent to finding a matrix $Z \in PSD$, which (by the lemma) is equivalent to $\sum_{ij} q_{ij}z_{ij} \geq 0, \forall Q \in PSD$. So, the Primal constraints are: \renewcommand{\labelenumi}{\Roman{enumi}.} \begin{enumerate} \item $\sum q_{ij}z_{ij} \geq 0 $ for all $Q \in PSD$ \item $z_{ii} + z_{jj} - 2z_{ij} \geq {d(x_i,x_j)}^2 $ \item $\gamma^2{d(x_i,x_j)}^2 \geq z_{ii} + z_{jj} - 2z_{ij}$ , \end{enumerate} which hold $\forall ij$. Thus, we can get the following theorem.
\begin{theorem}[LLR] \[ C_2(X,d) = \max_{P \in PSD,\ P\vec{1}=\vec{0}} \sqrt{\frac{\sum_{P_{ij} >0} P_{ij}d(x_i,x_j)^2}{-\sum_{P_{ij} <0} P_{ij}d(x_i,x_j)^2}} \] \end{theorem} \begin{proof} The dual program is the statement that, for $\gamma < C_2(X,d)$, there must exist a nonnegative combination of the constraints of the primal problem that yields a contradiction. So, we will assume $\gamma < C_2(X,d)$ and look for a contradiction, \emph{i.e.}, look for a linear combination of constraints such that the primal gives a contradiction. Thus, the goal is to construct a nonnegative linear combination of primal constraints that gives a contradiction with $Q \cdot Z = \sum_{ij} q_{ij}z_{ij} \geq 0$. Recall that the cone of PSD matrices is convex. The goal is to zero out the $z_{ij}$. (I) above says that $Q\cdot Z = \sum_{ij}q_{ij}z_{ij} \geq 0$. Note that, since the PSD cone is convex, a nonnegative linear combination of such constraints, $\sum_{k} \alpha_k Q_k \cdot Z \geq 0$, can be written as $P \cdot Z \geq 0$, for some $P \in PSD$. So, modifying the first constraint from the primal, you get \renewcommand{\labelenumi}{\Roman{enumi}'.} \begin{enumerate} \item $\sum_{ij} P_{ij}z_{ij} = P \cdot Z \geq 0 $, for some $P \in PSD $ \\ To construct $P$, choose the elements such that you zero out the $z_{ij}$ in the following manner. \begin{itemize} \item If $P_{ij}>0$, multiply the second constraint from the primal by $ P_{ij}/2$ (\emph{i.e.}, the constraint ${d(x_i,x_j)}^2 \leq z_{ii} + z_{jj} - 2z_{ij} $). \item If $P_{ij}<0$, multiply the third constraint from the primal by $-P_{ij}/2$ (\emph{i.e.}, the constraint $z_{ii} + z_{jj} - 2z_{ij} \leq \gamma^2{d(x_i,x_j)}^2 $). \item If $P_{ij}=0$, multiply the constraints involving $z_{ij}$ by $0$.
\end{itemize} This gives \begin{eqnarray*} \frac{P_{ij}}{2}\left( z_{ii} + z_{jj} - 2z_{ij} \right) &\geq& \frac{P_{ij}}{2} {d(x_i,x_j)}^2 \\ -\frac{P_{ij}}{2} \gamma^2{d(x_i,x_j)}^2 &\geq& -\frac{P_{ij}}{2} \left( z_{ii} + z_{jj} - 2z_{ij} \right) \end{eqnarray*} from which it follows that you can modify the other constraints from the primal to be: \item $\sum_{{ij}: P_{ij}>0} \frac{P_{ij}}{2} (z_{ii} + z_{jj} - 2z_{ij}) \geq \sum_{{ij}: P_{ij}>0} \frac{P_{ij}}{2} d(x_i,x_j)^{2}$ \item $\sum_{{ij}: P_{ij}<0} \frac{P_{ij}}{2} (z_{ii} + z_{jj} - 2z_{ij}) \geq \sum_{{ij}: P_{ij}<0} \frac{P_{ij}}{2} \gamma^2 d(x_i,x_j)^2$ \end{enumerate} If we add those two constraints, then we get \[ \sum_{ij: P_{ij} \ne 0} \frac{P_{ij}}{2}(z_{ii}+z_{jj}-2z_{ij}) \geq \sum_{ij:P_{ij}>0} \frac{P_{ij}}{2}d(x_i,x_j)^2 + \sum_{ij:P_{ij}<0} \frac{P_{ij}}{2}\gamma^2d(x_i,x_j)^2 . \] Since we chose $P$ s.t. $P\vec{1} = \vec{0}$ (i.e., $\sum_j P_{ij}=0$ for all $i$, and $\sum_i P_{ij}=0$ for all $j$ by symmetry), we have $\sum_{ij: i \ne j} \frac{P_{ij}}{2}(z_{ii}+z_{jj}) = -\sum_i P_{ii}z_{ii}$, and so the left-hand side above equals $-\sum_{ij} P_{ij}z_{ij} = -P \cdot Z \leq 0$, by constraint I'. So, it follows that \[ 0 \geq \sum_{ij:P_{ij}>0} P_{ij}d(x_i,x_j)^2 + \sum_{ij:P_{ij}<0} \gamma^2 P_{ij} d(x_i,x_j)^2 . \] This last statement is FALSE if \[ \gamma^2 < \frac{ \sum_{ij:P_{ij}>0} P_{ij} d(x_i,x_j)^2 }{ \sum_{ij:P_{ij}<0} (-P_{ij}) d(x_i,x_j)^2 } , \] and so the theorem follows. (In brief: adding the second and third constraints above gives \begin{center} $ 0 \geq \sum_{ij: P_{ij} >0 } P_{ij} d(x_i,x_j)^{2} + \sum_{ij: P_{ij} <0 } P_{ij}\gamma^2 d(x_i,x_j)^2 , $ \end{center} which is false if you choose $\gamma$ to be small---in particular, it is false if $\gamma^2$ is less than the ratio above, from which the theorem follows.)
\end{proof} \subsection{Metric Embedding into $\ell_{2}$} We will show that expanders embed poorly in $\ell_2$---this is the basis for the claim that they are the metric spaces that are least like low-dimensional spaces in a very strong sense. It is easy to see that an expander can be embedded into $\ell_{2}$ with distortion $O(\log\, n)$ (just note that any graph can be embedded with distortion equal to its diameter)---in fact, any metric space can be embedded with that distortion. We will show that this result is tight, and thus that expanders are the worst. The basic idea for showing that expanders embed poorly in $\ell_2$ is: If $G$ is a $d$-regular, $\epsilon$-expander, then $\lambda_2$ of $A_G$ is $< d-\delta$, for $\delta = \delta(d,\epsilon)$ independent of $n$. The vertices of a bounded-degree graph can be paired up s.t. the two vertices in each pair are at distance $\Omega(\log n)$ from each other. We can then let $B$ be a permutation matrix for the pairing, and use the matrix $P = dI - A + \frac{\delta}{2}(B-I)$. Note: we could give a simpler proof, using the theorem of Bourgain, that expanders don't embed well in $\ell_2$, since we can embed in $\mathbb{R}^{\mathrm{diameter}}$, and $\mathrm{diameter}(\mbox{expander})=\log n$. But we go through this here to avoid (too much) magic. Start with the following definitions: \begin{definition} A \emph{Hamilton cycle} in a graph $G=(V,E)$ is a cycle that visits every vertex exactly once (except that the start vertex is also the end vertex). \end{definition} \begin{definition} A \emph{matching} is a set of pairwise non-adjacent edges, \emph{i.e.}, no two edges share a common vertex. A vertex is \emph{matched} if it is incident to an edge of the matching. A \emph{perfect matching} is a matching that matches all the vertices of the graph. \end{definition} The following theorem (Dirac's theorem) is the only piece of magic we will use here: \begin{theorem} A simple graph on $n \ge 3$ vertices is Hamiltonian if every vertex has degree $\ge \frac{n}{2}$.
\end{theorem} Note that if every vertex has degree $\ge \frac{n}{2}$, then the graph is actually quite dense, and so from Szemeredi-type results relating dense graphs to random graphs it might not be so surprising that there is a lot of wiggle room. \textbf{Note:} A Hamilton cycle immediately gives a matching. Thus, we have the following lemma: \begin{lemma} Let $G=(V,E)$ be a $d$-regular graph, with $|V|=n$, and let $H=(V,E')$ be the graph with the same vertex set as $G$ in which two vertices $u$ and $v$ are adjacent iff $d_{G}(u,v)\geq \lfloor\log_{d}n\rfloor$. Then, $H$ has a matching with $n/2$ edges. \end{lemma} \begin{proof} Since $G$ is a $d$-regular graph, for any vertex $x\in V$ and any radius $r$, at most roughly $d^{r}$ vertices $y\in V$ can have $d_{G}(x,y)\leq r$, \emph{i.e.}, only that many vertices are within distance $r$. If $r = \lfloor \log_{d} n \rfloor -1 $, then there are $\leq \frac{n}{2}$ vertices within distance $r$; that is, at least half of the nodes of $G$ are at distance greater than $\log_{d}n-2$ from $x$; this means every node in $H$ has degree at least $n/2$. So, by the above theorem, $H$ has a Hamilton cycle and thus a perfect matching, and the lemma follows. \end{proof} Finally, we get to the main theorem that says that expanders embed poorly in $\ell_2$---note that this is a particularly strong statement or notion of non-embedding, as by Bourgain we know any graph (with the graph distance metric) can be embedded into $\ell_2$ with distortion $O(\log n)$, so expanders are the worst case in this sense. \begin{theorem} Let $d \geq 3$, and let $\epsilon > 0$. If $G=(V,E)$ is a $d$-regular graph with $|V|=n$ and $\lambda_2(A_G)\leq d-\epsilon$, then \[ C_{2}(G)=\Omega(\log\, n) \] where the constant inside the $\Omega$ depends on $d,\epsilon$.
\end{theorem} \begin{proof} To prove the lower bound, we use the characterization from the last section, which says that the minimum distortion of embedding a metric space $(X,d)$ into $\ell_{2}$, denoted by $C_{2}(X,d)$, is: \begin{equation} C_{2}(X,d)=\max_{P\in PSD,P\overrightarrow{1}=\overrightarrow{0}}\sqrt{\frac{\sum_{p_{ij}>0}p_{ij}d(x_{i},x_{j})^{2}}{-\sum_{p_{ij}<0}p_{ij}d(x_{i},x_{j})^{2}}} \label{eq:c2_x_d} \end{equation} and we will find some feasible $P$ that gives the lower bound. Let $B$ be the adjacency matrix of the matching in $H$, whose existence was proved in the previous lemma. Then, define \[ P=(dI-A_{G})+\frac{\epsilon}{2}(B-I) . \] Then, we claim that $P\overrightarrow{1}=\overrightarrow{0}$. To see this, notice that both $dI-A_G$ and $I-B$ are (non-normalized) Laplacians, as $B$ is the adjacency matrix of a perfect matching (i.e., a $1$-regular graph), and Laplacians annihilate $\overrightarrow{1}$. Next, we claim that $P\in\mbox{PSD}$. To prove this second claim, note that, for any $x\perp\overrightarrow{1}$, we have \[ x^{T}(dI-A_{G})x = dx^Tx-x^TA_Gx \geq (d-\lambda_2)||x||^2 \geq \epsilon||x||^{2} \] (by the assumption on $\lambda_{2}$); and \begin{eqnarray*} x^{T}(B-I)x & =& \sum_{(i,j)\in B}2x_{i}x_{j}-\sum_{i}x_{i}^{2} \\ & =& \sum_{(i,j)\in B}\left(2x_{i}x_{j}-x_{i}^{2}-x_{j}^{2}\right) \\ & =& -\sum_{(i,j)\in B}\left(x_{i}-x_{j}\right)^{2} \\ &\geq& -2\sum_{(i,j)\in B} \left(x_{i}^{2}+x_{j}^{2}\right) \\ & =& -2||x||^{2} . \end{eqnarray*} \noindent The second and last lines are since $||x||^2=\sum_{(i,j)\in B} (x_i^2+x_j^2)$, because $B$ is a matching and so each $i$ shows up in the sum exactly once. So, we have that \begin{eqnarray*} x^TPx & =& x^T(dI-A_G)x + \frac{\epsilon}{2}x^T(B-I)x \\ &\geq& \epsilon||x||^2 -\frac{\epsilon}{2}\cdot2||x||^2 \\ & =& 0 . \end{eqnarray*} Next, evaluate the numerator and the denominator.
\begin{eqnarray*} -\sum_{P_{ij}<0} d(i,j)^2 P_{ij} & =& dn \\ \sum_{P_{ij}>0} d(i,j)^2P_{ij} &\geq& \frac{\epsilon}{2}n\lfloor\log_{d}n\rfloor^2 \end{eqnarray*} where the latter follows since the distances between vertices matched in $B$ are at least $\lfloor \log_{d}n\rfloor$. Thus, for this $P$, we have that: \[ \sqrt{\frac{\sum_{p_{ij}>0}p_{ij}d(x_{i},x_{j})^{2}}{-\sum_{p_{ij}<0}p_{ij}d(x_{i},x_{j})^{2}}} \geq \sqrt{\frac{ \frac{\epsilon}{2}n\lfloor\log_{d}n\rfloor^2 }{dn}} = \Theta(\log n) \] and thus, from~(\ref{eq:c2_x_d}), that $C_2$ is at least this big, \emph{i.e.}, that: \[ C_{2}(G)=\Omega(\log\, n) . \] \end{proof} \section{(02/24/2015): Flow-based Methods for Partitioning Graphs (1 of 2)} Reading for today. \begin{compactitem} \item ``Multicommodity max-flow min-cut theorems and their use in designing approximation algorithms,'' in JACM, by Leighton and Rao \item ``Efficient Maximum Flow Algorithms,'' in CACM, by Goldberg and Tarjan \end{compactitem} \subsection{Introduction to flow-based methods} Last time, we described the properties of expander graphs and showed that they have several ``extremal'' properties. Before that, we described a vanilla spectral partitioning algorithm, which led to the statement and proof of Cheeger's Inequality. Recall that one direction viewed $\lambda_2$ as a relaxation of the conductance or expansion problem; while the other direction gave a ``quadratic'' bound as well as a constructive proof of a graph partitioning algorithm. The basic idea was to compute a vector, show that it is a relaxation of the original problem, and show that one doesn't lose too much in the process. Later, we will see that there are nice connections between these methods and low-dimensional spaces and hypothesized manifolds. Lest one think that this is the only way to compute partitions, we turn now to a \emph{very} different method to partition graphs---it is based on the ideas of single-commodity and multi-commodity flows.
It is \emph{not} a spectral method, but it is important to know about for spectral methods (e.g., when/why spectral methods work and how to diagnose things when they don't work as one expects): the reason is basically that flow-based graph algorithms ``succeed'' and ``fail'' in very different ways than spectral methods; and the reason for this is that they too implicitly involve embedding the input graph in a metric/geometric place, but one which is very different than the line/clique that spectral methods implicitly embed the input into. \subsection{Some high-level comments on spectral versus flow} Recall that the key idea underlying graph partitioning algorithms is to take a graph $G=(V,E)$ and spread out the vertices $V$ in some abstract space while \emph{not} spreading out edges $E$ too much, and then to partition the vertices in that space into two (or more) sets. \begin{itemize} \item \textbf{Spectral methods} do this by putting nodes on an eigenvector and then partitioning based on a sweep cut over the partitions defined by that eigenvector. For spectral methods, several summary points are worth noting. \begin{itemize} \item They achieve ``quadratic'' worst-case guarantees from Cheeger's Inequality. This quadratic factor is ``real,'' e.g., it is not an artifact of the analysis and there are graphs on which it is achieved. \item They are ``good'' when the graph has high conductance or expansion, \emph{i.e.}, when it is a good expander (in either the degree-weighted or degree-unweighted sense). \item They are associated with some sort of underlying geometry, e.g., as defined by where the nodes get mapped to in the leading eigenvector; but it is not really a metric embedding (or at least not a ``good'' metric embedding, since it only preserves average distances---which is typical of these $\ell_2$-based methods---and not the distance between every pair of nodes).
\end{itemize} \item \textbf{Flow-based methods} do this by using multi-commodity flows to reveal bottlenecks in the graph and then partitioning based on those bottlenecks. To contrast with spectral methods, note the following summary points for flow-based methods. \begin{itemize} \item They achieve $O(\log n)$ worst-case guarantees. This $O(\log n)$ is ``real,'' in that it too is not an artifact of the analysis and there are graphs on which it is achieved. \item They are ``good'' when the graph has sparse cuts, \emph{i.e.}, when it is not a good expander. \item They are also associated with a geometry, but one which is very different than that of spectral methods. In particular, although not immediately obvious, they can be viewed as embedding the finite metric space of the graph into the $\ell_1$ metric, \emph{i.e.}, a very different metric than before, and then partitioning there. \end{itemize} \end{itemize} \noindent The point here is that these two methods are in many ways complementary, in the sense that they succeed and fail in different places. Relatedly, while ``real'' data might not be exactly one of the idealized places where spectral or flow-based methods succeed or fail, a lot of graph-based data have some sort of low-dimensional structure, and a lot of graph-based data are sufficiently noisy that it is fruitful to view them as having expander-like properties. Finally, the comments about spectral methods being good on expanders and flow methods being good on non-expanders might seem strange. After all, most people who use spectral methods are not particularly interested in partitioning graphs that do not have good partitions (and instead they take advantage of results that show that spectral methods find good partitions in graphs that are ``morally'' low-dimensional, e.g., graphs like bounded-degree planar graphs and well-shaped meshes).
The reason for this comment is that the measure we have been using to evaluate cluster quality is the objective function value, and not the quality of the clusters in some other sense. By this measure, the quadratic degradation of a constant (which is the expansion value of an expander) is still a constant, while the quadratic degradation of, e.g., $1/\sqrt{n}$ is very large relative to $1/\sqrt{n}$, i.e., the spectral guarantee is nontrivial in one case and not in the other. For flow-based methods, by contrast, the $O(\log n)$ factor is much larger than the constant gap on an expander and so gives trivial results there, while it is much smaller than the $\sqrt{n}$-type factors relevant for morally low-dimensional graphs, leading to nontrivial results on the objective in that case. That these qualitative guides are the opposite of what is commonly done in practice is a real challenge for theory, and it is a topic we will return to later. \subsection{Flow-based graph partitioning} Recall the basic ideas about flow and the single commodity flow problem: \begin{itemize} \item $G=(V,E)$ is a graph, and each edge $e \in E$ has capacity $c(e)$, which is the maximum amount of flow allowed through it. Also, we are given a source $s$ and a sink $t$. \item (We are going to be applying it to graph partitioning by trying to use network flow ideas to reveal bottlenecks in the graph, and then cut there.) \item So, the goal is to route as much flow $s \rightarrow t$ as possible without violating any of the capacity constraints. \item Max-Flow is the maximum amount of such flow. \item Min-Cut is the minimum amount of capacity that must be removed from the network to disconnect $s$ from $t$; that is, \[ \text{Min-Cut} =\min_{U \subset V: s \in U,t \in \bar{U}} \sum_{e\in(U,\bar{U})}c(e) . \] (Note that there is no ratio here, since we can assume that demand $=1$, WLOG, but that we won't be able to do this later.)
\item Weak duality is one thing and is relatively easy to show: \begin{claim} $maxflow \le mincut$ \end{claim} \begin{proof} For any $ U \subseteq V$ that has $s$ and $t$ on opposite sides, all flow from $s$ to $t$ must be routed through edges in $(U,\bar{U})$, so the total flow is bounded by the capacity of that cut; minimizing over cuts gives the claim. \end{proof} \item Strong duality is another thing and is harder: \begin{claim} $maxflow = mincut$ \end{claim} \begin{proof} It suffices to show that the mincut bound is always achievable, which is the harder direction. We won't do it here. \end{proof} \end{itemize} \noindent All of this discussion so far is for $k=1$ single-commodity flow. There are also multi-commodity versions of this flow/cut problem that have been widely studied. Here is the basic definition. \begin{definition} Given $k\ge1$ commodities, each with a source $s_i$, a sink $t_i$, and a demand $D_i$, i.e., we have $(s_i,t_i,D_i)$ for each $i\in[k]$, the \emph{multi-commodity flow problem} is to simultaneously route $D_i$ units of flow from $s_i$ to $t_i$, for all $i$, while respecting capacity constraints, \emph{i.e.}, s.t. the total amount of all commodities passing through any edge is $\le$ the capacity of that edge. There are several variants, including: \begin{itemize} \item \emph{Max throughput flow}: maximize the amount of flow summed over all commodities. \item \emph{Max concurrent flow}: specify a demand $D_i$ for each commodity, and then maximize the \emph{fraction} of demand $D_i$ that can be simultaneously shipped or routed by flow: \[ \max f \text{~s.t.~}fD_i \text{~units of flow go from $s_i$ to $t_i$,} \] \emph{i.e.}, the maximum $f$ s.t. $fD_i$ units of commodity $i$ are simultaneously routed without violating the capacity constraints. \end{itemize} \end{definition} That is, for this multicommodity flow problem, if the flow of commodity $i$ along edge $(u,v)$ is $f_i(u,v)$, then an \emph{assignment of flow} satisfies: \begin{itemize} \item Capacity constraints.
\[ \sum_{i=1}^{k}|f_i(u,v)| \le c(u,v) \] \item Flow conservation \begin{eqnarray*} & & \sum_{w \in V} f_i(u,w) = 0 \quad \mbox{when } u \ne s_i,t_i \\ & & \mbox{For all } u,v \quad f_i(u,v) = -f_i(v,u) \end{eqnarray*} \item Demand satisfaction. \[ \sum_{w \in V} f_i(s_i,w) = \sum_{w \in V} f_i (w,t_i) = D_i \] \end{itemize} Then the goal is to find a flow that maximizes one of the above variants of the problem. We can also define a related cut problem. \begin{definition} The \emph{MinCut} or \emph{Sparsest Cut} $\Xi$---of an undirected multicommodity flow problem---is the minimum over all cuts of the ratio of the capacity of the cut to the demand of the cut, \emph{i.e.}, \[ \Xi = \min_{U\subseteq V}\frac{C(U,\bar U)}{D(U,\bar U)} , \] where \begin{align*} C(U,\bar U)&=\sum_{e\in (U,\bar U)}C(e)\\ D(U,\bar U)&=\sum_{\substack{i:s_i\in U\\t_i\in \bar U \text{ or vice versa}}}D_i \end{align*} That is, $C(U,\bar{U})$ is the sum of capacities of edges linking $U$ to $\bar{U}$; and $D(U,\bar{U})$ is the sum of demands with source and sink on opposite sides of the $(U,\bar{U})$ cut. \end{definition} Finally, we point out the following two variants (since they will map to the expansion and conductance objectives when we consider the application of multi-commodity flow to graph partitioning). There are, of course, other variants of flow problems that we don't consider that are of interest if one is primarily interested in flow problems. \begin{definition} The \emph{Uniform Multi-Commodity Flow Problem (UMFP)}: here, there is a demand for every pair of nodes, and the demand for every commodity is the same, \emph{i.e.}, the demands are uniform, and WLOG we take them to equal $1$. The \emph{Product Multicommodity Flow Problem (PMFP)}: here, there is a nonnegative weight function on the nodes, \emph{i.e.}, $\pi:V\to \mathbb{R}^{+}$, and the demand between a pair of nodes $u$ and $v$ is $\pi(u)\pi(v)$.
\end{definition} Here are several comments about the last definition. \begin{itemize} \item UMFP is a special case of PMFP with $\pi(i)=1$. \item If $\pi(i)=1$, then we will get a way to approximate the expansion objective. \item If $\pi(v)=deg(v)$, then we will get a way to approximate the conductance objective. \end{itemize} The MaxFlow-MinCut theorem for single commodity flow is nice since it relates two fundamental graph theoretic entities (that are in some sense dual) via the min-max theorem. But, prior to LR, very little was known about the relationship between MaxFlow and MinCut in this more general multi-commodity flow setting for general graphs. For certain special graphs, it was known that there was an equality, i.e., a zero duality gap; but for general graphs, it was only known that MaxFlow is within a factor $k$ of the MinCut. This result can be obtained since each commodity can be optimized separately in the obvious trivial way by using $\frac{1}{k}$ of the capacity of the edges---this might be ok for $k=\Theta(1)$, but it is clearly bad if $k \sim n$ or $k \sim n^2$. Somewhat more technically, if we consider the LP formulation of the Max Multicommodity Flow Problem, then we can make the following observations. \begin{itemize} \item The dual of this is the LP relaxation of the Min Multicut Problem, \emph{i.e.}, the optimal integral solution to the dual is the Min Multicut. \item In general, the vertices of the \emph{dual polytope} are NOT integral. \item But, for single commodity flow, they are integral, and so the MaxFlow-MinCut theorem is a consequence of LP duality. \item For certain special graphs, it can be shown that they are integral, in which case one has zero duality gap for those graphs. \item Thus, for multicommodity flow: MaxFlow $=$ Min Fractional (\emph{i.e.}, relaxed) Multicut.
\end{itemize} Here are several facts that we will spend some time discussing: \begin{paragraph}{Fact 1} If we have $k$ commodities, then one can show that the max-flow/min-cut gap is $\le O(\log k)$. This can be shown directly, or it can be shown more generally via metric embedding methods. \end{paragraph} \begin{paragraph}{Fact 2} If certain conditions are satisfied, then the duality gap $= 0$. If one looks at the dual polytope, then whether or not this is the case depends on whether the optimal solution is integral or not. This, in turn, depends on the special structure of the input. \end{paragraph} \begin{paragraph}{Fact 3} For $k$ commodities, LR showed that the worst case (over input graphs) gap is $\Omega(\log k)$. LLR interpreted this geometrically in terms of embeddings. The worst case is achieved, and it is achieved on expanders. \end{paragraph} Here, we will spend some time showing these results directly. Then, next time, we will describe them more generally from the metric embedding perspective, since that will highlight better the similarities and differences with spectral methods. \subsection{Duality gap properties for flow-based methods} Here, we will show that there is a non-zero duality gap of size $\Theta(\log n)$. We will do it in two steps: first, by exhibiting a particular graph (any expander) that has a gap of at least $\Omega(\log n)$; and second, by showing that the gap is $O(\log n)$, i.e., is never worse than that. Let's start by showing that there is a graph for which the duality gap is nonzero by exhibiting a graph for which it is at least $\Omega(\log n)$. \begin{theorem}[LR] $\forall n, \exists$ an $n$-node UMFP with MaxFlow $f$ and MinCut $\Xi$ s.t. \[ f \le O\left( \frac{\Xi}{\log n} \right) \] \end{theorem} \begin{proof} Consider any graph with certain expansion properties. In particular, let $G$ be a $3$-regular $n$-node graph with unit edge capacities s.t.
\[ C(U,\bar{U}) = |(U,\bar{U})| \geq \mathrm{const} \cdot \min\{|U|,|\bar{U}|\} ,\quad \forall U \subseteq V , \] \emph{i.e.}, an expander. (Such graphs exist by Margulis, zig-zag, etc. constructions; and moreover a randomly-selected $3$-regular graph satisfies this w.h.p.) The first claim is that: \begin{eqnarray*} \Xi &=& \min_{U \subseteq V} \frac{|(U,\bar{U})|}{|U|\cdot|\bar{U}|} \\ &\geq& \min_{U \subseteq V} \frac{\mathrm{const}}{\max\{|U|,|\bar{U}|\}} \\ &=& \frac{\mathrm{const}}{n-1} . \end{eqnarray*} The second claim is that: \begin{claim} The MaxFlow for the UMFP is $\leq \frac{6}{(n-1)(\log n -2)}$, which is a $\Theta(\log n)$ factor smaller than the MinCut. \end{claim} \begin{proof}[Proof of claim] Since the graph is $3$-regular, there are at most $\frac{n}{2}$ nodes within distance $\log n - 3$ of any $v \in V$. So, for at least half of the ${n \choose 2}$ commodities, the shortest path connecting $s_i$ and $t_i$ has at least $\log n -2 $ edges. To sustain a flow of $f$ for such a commodity, at least $f(\log n -2)$ capacity must be used by that commodity. So, to sustain a flow $f$ for all ${n \choose 2}$ commodities, the capacity of the network must be $\geq \frac{1}{2}{n \choose 2}f(\log n-2)$. \end{proof} Since the graph is $3$-regular with unit capacities, the total capacity is $\leq \frac{3n}{2}$. So, \[ \frac{1}{2}{n \choose 2}f(\log n -2 ) \leq \mbox{CAPACITY} \leq \frac{3n}{2} . \] So, \begin{eqnarray*} f &\leq& \frac{3n}{{n \choose 2}(\log n-2)} \\ &=& \frac{6}{(n-1)(\log n -2)} \\ &\leq& \frac{6\Xi}{\mathrm{const}(\log n -2)} \quad\mbox{since } \Xi \geq \frac{\mathrm{const}}{n-1} \\ &=& O\left(\frac{\Xi}{\log n} \right) . \end{eqnarray*} That is, the MaxFlow for the UMFP on an expander $G$ is a $\Theta(\log n)$ factor smaller than the MinCut. \end{proof} An expander has diameter $O(\log n)$, and so for expanders the gap can't be worse than $\Theta(\log n)$; the following shows that this is true more generally.
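As an aside, the ball-growth step of this counting argument is easy to check numerically: in any $3$-regular graph, the ball of radius $r$ around a vertex has at most $1 + 3(2^r - 1) = 3\cdot 2^r - 2$ nodes, so for $r = \log_2 n - 3$ this is below $n/2$. Here is a minimal sketch in Python; the particular test graph (a circulant, which is $3$-regular but not claimed to be an expander) is our own illustrative choice.

```python
from collections import deque

def circulant_3_regular(n):
    """3-regular graph on n nodes (n even): cycle edges plus
    'diameter' chords i <-> i + n/2.  Illustrative only; this
    particular graph is NOT claimed to be an expander."""
    return {v: {(v - 1) % n, (v + 1) % n, (v + n // 2) % n}
            for v in range(n)}

def ball_size(adj, source, radius):
    """Number of nodes within BFS distance `radius` of `source`."""
    dist = {source: 0}
    q = deque([source])
    while q:
        u = q.popleft()
        if dist[u] == radius:
            continue
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return len(dist)

n = 512                  # so log2(n) = 9
adj = circulant_3_regular(n)
r = 9 - 3                # radius log2(n) - 3
# In any 3-regular graph, |B(v, r)| <= 3 * 2^r - 2.
bound = 3 * 2**r - 2     # = 190 < n/2 = 256
sizes = [ball_size(adj, v, r) for v in range(n)]
print(max(sizes), bound, n // 2)
```

Since every ball of radius $\log_2 n - 3$ misses at least half the nodes, at least half of the ${n \choose 2}$ source-sink pairs are at distance $\geq \log_2 n - 2$, which is exactly the step used in the proof.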
\begin{theorem}[LR] The MinCut of the UMFP can never be more than a $\Theta(\log n)$ factor larger than the MaxFlow, \emph{i.e.}, \[ \Omega\left(\frac{\Xi}{\log n}\right) \leq f \leq \Xi . \] \end{theorem} \begin{proof} We can do this by exhibiting a polynomial-time algorithm that finds a cut $(U,\bar{U})$ s.t. \[ \frac{C(U,\bar{U})}{|U|\cdot|\bar{U}|} \leq O(f\log n) , \] where the LHS is the \emph{ratio cost} of $(U,\bar{U})$; and recall that $\Xi = \min_{U \subseteq V} \frac{C(U,\bar{U})}{|U|\cdot|\bar{U}|}$. The algorithm is based on the LP dual---the dual of the Multicommodity Flow Problem is the problem of assigning weights (where the weights are thought of as distances) to the edges of $G$ s.t. one maximizes the cumulative distance between source and sink pairs. (We won't actually go through this now since there is a more elegant and enlightening formulation.) \end{proof} In the UMFP, the demand across a cut $(U,\bar{U})$ is given by $D(U,\bar U)=|U||\bar U|$. So, in particular, the mincut is given by: \begin{align*} \text{min cut:}\quad\Xi &= \min_{U\subseteq V}\frac{C(U,\bar U)}{|U||\bar U|}\\ &=\min_{U\subseteq V}\frac{E(U,\bar U)}{|U||\bar U|} \quad\text{if all capacities $=1$} , \end{align*} where $C(U,\bar{U})= \sum_{e\in(U,\bar{U})} C(e)$. That is, if the demands are uniform and all the capacities are equal to one, then from the UMFP we get our old friend, the sparsest cut $\sim$ best expansion. Among other things, this implies that the $O(\log n)$ approximation for general Multicommodity Flow is ``inherited'' by the algorithm for the sparsest cut problem, and many other related problems as well. In particular, one way to use flow is the following: check all $2^n$ cuts, and use the single-commodity zero-duality-gap result to take the one with the best single commodity flow, which gives the one with the best mincut.
What these results say is that, instead, we can consider the all-pairs multi-commodity flow problem, check a lot less, and get a result that is only a little worse. \subsection{Algorithmic Applications} \label{sec:algor-appl} Let's go back to our old friend the \emph{sparsest cut problem}, and here we will make explicit connections with flow-based graph partitioning by viewing it from an optimization perspective. This will in turn provide us with an algorithm (the one mentioned in the proof of the above theorem) to solve the problem. Recall that in the sparsest cut problem, we are given: a graph $G = (V,E)$; a cost function $c(e),\forall e \in E$, \emph{i.e.}, $c: E \rightarrow \mathbb{R}^{+}$; and $k$ pairs of nodes/vertices $(s_i, t_i)$. We will write the problem as an Integer Program (IP). To do so, \begin{itemize} \item let $x(e)$, $e \in E$, where $x(e) \in \{0,1\}$, indicate whether an edge $e$ is cut; \item let $y(i)$, $i \in [k]$, where $y(i) \in \{0,1\}$, indicate whether commodity $i$ is cut, \emph{i.e.}, is disconnected by this cut; and \item let $\mathcal{P}_i$, $i=1,2,\ldots,k$, be the set of paths between $s_i$ and $t_i$. \end{itemize} Then, what we want is to solve: \begin{align*} \min & \frac{\sum_{e \in E}{c(e)x(e)}}{\sum_{i=1}^k{d(i)y(i)}} \\ \mbox{s.t.~~} & \sum_{e \in P}{x(e)} \geq y(i), \quad \forall P \in \mathcal{P}_i, \quad \forall i=1,2,\ldots,k \\ & y(i) \in \{0, 1\}, \quad i \in[k] \\ & x(e) \in \{0, 1\}, \quad e \in E \end{align*} We want to consider the relaxation of this in which we replace the last two constraints by $y(i)\geq 0$ and $x(e)\geq 0$. In doing so, note that if $(x,y)$ is a feasible fractional solution, then $(\alpha x,\alpha y)$ is also a feasible fractional solution with the same objective function value. So, WLOG, we can choose the normalization $\alpha$ s.t.
$\sum_{i=1}^{k} d(i)y(i) = 1$ to get the following LP: \begin{align*} \min & \sum_{e \in E} c(e)x(e) \\ \mbox{s.t.~~} & \sum_{i=1}^{k} d(i)y(i) = 1 \\ & \sum_{e \in P}{x(e)} \geq y(i), \quad \forall P \in \mathcal{P}_i, \quad \forall i=1,2,\ldots,k \\ & y(i) \geq 0 , \quad x(e) \geq 0 . \end{align*} Below, we will show that we can compute a cut with sparsity ratio within a factor of $O(\log k)$ of this optimal fractional solution. BTW, before we do that, let's write the LP dual: \begin{align*} \max & \quad \alpha \\ \mbox{s.t.~~} & \sum_{p\in\mathcal{P}_i} f(p) \geq \alpha d(i) ,\quad \forall i \in [k] \\ & \sum_{i=1}^{k} \sum_{p\in\mathcal{P}_i(e)} f(p) \leq c(e) ,\quad \forall e \in E \\ & f(p) \geq 0, \quad \forall p \in \mathcal{P}_i, i \in [k] . \end{align*} This is the Max Concurrent Flow problem, and the $O(\log k)$ approximation gives an \emph{approximate MaxFlow-MinCut Theorem}. So, to solve this sparsest cut problem, our strategy will be: \begin{itemize} \item Solve the LP (either the primal or the dual). \item Round the solution to an integral value. \end{itemize} Thus, there are some similarities with spectral---we first solve something to get a real-valued solution, and then we have to ``round'' to get an integral solution, and the ball game will be to show that we don't lose too much. Of course, solving the LP will \emph{implicitly} embed us in a very different place than solving an eigenvector problem, so we should expect to see different artifactual properties between the two approximation algorithms. The above discussion gives algorithms that run in polynomial time: \begin{itemize} \item Off-the-shelf LP solvers (due to the connection with LP). \item Algorithms for approximately optimal flows or distance functions (\emph{i.e.}, ones that take advantage of the structure of the LP). \item The fastest variant of the sparsest cut algorithm runs in $\tilde{O}(n^2)$ time, with Benczur-Karger sparsification.
\item A standard algorithm runs in something like $\tilde{O}(n^{3/2})$ time with push-relabel methods. \item Finally, there is a lot of work on using Laplacian-based solvers to do better, and we may return to these later. \end{itemize} Important Note: The way you would actually solve this in practice is to use some sort of push-relabel code, which is relatively fast, as opposed to the general LP procedure just described, which is easier to analyze theoretically. \subsection{Flow Improve} Here is an aside that with luck we will get back to later. These algorithms have a running time that is large enough that it can be challenging to apply them to very large graphs---e.g., $O(n^{3/2})$ or especially $O(n^2)$ is certainly too large for ``massive'' graphs. (Implementations of faster algorithms are still very much a topic of research.) A question arises: can we do something like ``local spectral'' (which, recall, consisted roughly of a few steps of a random walk) to do a local flow improvement? The answer is YES---and here it is. The so-called \texttt{Improve} algorithm of AL, as well as the related MQI method: this algorithm is useful by itself for improving partitions from, say, Metis or some other very fast procedure; and it is useful as a way to speed up spectral and/or as one way to combine spectral-based algorithms with flow-based algorithms. In more detail, the goal of a local improvement algorithm is: given a partition, find a strictly better partition. A local improvement algorithm is useful in the following contexts: \begin{itemize} \item METIS: post-process with a flow-based improvement heuristic. \item Vanilla spectral: post-process with an improvement method. \item Local improvement at each step of an online/iterative algorithm. \end{itemize} MQI and Improve essentially construct and solve a sequence of $s$-$t$ MinCut problems on a modified quotient cut objective to add and remove vertices from a proposed cut. (We won't describe them in detail now.)
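Since MQI and \textsc{Improve} reduce everything to a sequence of $s$-$t$ MinCut computations, it may help to see that primitive concretely. Below is a minimal Edmonds--Karp max-flow sketch in Python that also reads the min cut off the final residual graph, illustrating the single-commodity MaxFlow-MinCut equality from above; the toy network and its capacities are made up purely for illustration.

```python
from collections import deque, defaultdict

def edmonds_karp(cap, s, t):
    """Max s-t flow via BFS augmenting paths.
    `cap` is a dict-of-dicts of residual capacities (modified in place)."""
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            break
        # Collect the path, find the bottleneck, then push flow.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= aug
            cap[v].setdefault(u, 0)
            cap[v][u] += aug
        flow += aug
    # Min cut: nodes reachable from s in the final residual graph.
    reach, q = {s}, deque([s])
    while q:
        u = q.popleft()
        for v, c in cap[u].items():
            if c > 0 and v not in reach:
                reach.add(v)
                q.append(v)
    return flow, reach

def make_cap():
    # Toy network; capacities are arbitrary, for illustration only.
    edges = [('s', 'a', 3), ('s', 'b', 2), ('a', 'b', 1),
             ('a', 't', 2), ('b', 't', 3)]
    cap = defaultdict(dict)
    for u, v, c in edges:
        cap[u][v] = c
        cap[v].setdefault(u, 0)  # residual back-edge
    return cap

cap = make_cap()
flow, reach = edmonds_karp(cap, 's', 't')
print(flow)  # max-flow value; equals the min s-t cut capacity (strong duality)
```

Note that, exactly as in the single-commodity discussion above, the cut $(U,\bar U)$ with $U$ the residual-reachable set certifies optimality of the flow.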
Here is the basic theorem stating their running time and approximation quality bounds. \begin{theorem} Let $A\subseteq V$ s.t. $\pi(A)\le \pi(\bar A)$, and let $S=\textsc{Improve}(A)$ be the output of the Flow Improve Algorithm. Then \begin{enumerate} \item If $C\subseteq A$ (\emph{i.e.}, for all $C \subseteq A$), then $Q(S)\le Q(C)$ (where $Q(S) = \frac{|\partial S|}{\mbox{Vol}(S)}$). \item If $C$ is such that \begin{align*} \frac{\pi(A\cap C)}{\pi(C)} \ge \frac{\pi(A)}{\pi(V)}+\epsilon\frac{\pi(\bar A)}{\pi(V)} , \quad\mbox{for some $\epsilon$} , \end{align*} \emph{i.e.}, if $C$ is $\epsilon$-more-correlated with $A$ than random, \emph{i.e.}, if the fraction of $C$ that is in $A$ is $\epsilon$-better than random, then $Q(S)\le \frac{1}{\epsilon}Q(C)$, \emph{i.e.}, a bound on nearby cuts. \item The algorithm takes: (1) $\pi(V)^2$ iterations if the vertex weights are integral, \emph{i.e.}, $\pi(v) \in \mathbb{Z}$; and (2) $m$ iterations if the edges are unweighted. \end{enumerate} \end{theorem} \section{(02/26/2015): Flow-based Methods for Partitioning Graphs (2 of 2)} Reading for today. \begin{compactitem} \item Same as last class. \end{compactitem} Recall from last time that we are looking at flow-based graph partitioning algorithms. Last time, we covered the basics of flow-based methods, and we showed how they are very different than spectral methods. This time, we will discuss flow-based graph partitioning from an embedding perspective. We will see that flow-based algorithms implicitly embed the data in a metric space, but one that is very different than the place where spectral-based algorithms embed the data. Thus, not only do they run different steps operationally and get incomparable quality-of-approximation bounds, but they also implicitly put the data in a very different place---thus ``explaining'' many of the empirical results that are observed.
(BTW, I have made some comments that spectral methods can scale up to much larger input graphs by using diffusion and random walk ideas, a topic we will get back to later. For the moment, let me just note that the way flow is used is not \emph{immediately} relevant for such ``massive'' data. For example, the running time of a typical flow-based algorithm will be $O(n^2)$, since it involves a multi-commodity variant of flow; single-commodity variants of flow-based algorithms run in $O(n^{3/2})$ time; and more recent work has focused on using Laplacian solver ideas to do even better, i.e., to run in time that is nearly-linear in the size of the input. There is a lot of interest, mostly from within TCS so far, in this area; and these fast solvers hold the promise to make these methods applicable to much larger graphs. I'm hoping to return to some of these Laplacian solver topics at the end of the semester, and I'm also planning on giving at least a brief introduction to some of the ideas about how to couple spectral and flow methods later.) \subsection{Review some things about $\ell_1$ and $\ell_2$} Let's review a few things about $\ell_1$ and $\ell_2$ and related topics. \begin{definition} The \emph{$\ell_p$-norm} on $\mathbb{R}^{k}$ is $||x||_p = \left( \sum_{i=1}^{k} |x_i|^p \right)^{1/p}$. A finite metric space $(X,\rho)$ is \emph{realized} in $\ell_p^k$ if $\exists f:X \rightarrow \mathbb{R}^{k}$ s.t. $\rho(x,y) = ||f(x)-f(y)||_p$, and it is an \emph{$\ell_p$-metric} if it can be realized in $\ell_p^k$ for some $k$. The \emph{metric induced by $\ell_p$} is: $d(x,y) = ||x-y||_p ,\quad \forall x,y \in \ell_p$. \end{definition} For flow-based methods, we will be most interested in $\ell_1$, while for spectral-based methods, we are interested in $\ell_2$. For other values of $p$ (except $p=\infty$, which we won't discuss here), the $\ell_p$ norm is usually of more theoretical interest. The spaces $\ell_1$ and $\ell_2$ are very different.
(Think $\ell_1$ regression, i.e., least absolute deviations, versus $\ell_2$ regression, i.e., least-squares regression; or think $\ell_1$-regularized $\ell_2$ regression, i.e., the lasso, versus $\ell_2$-regularized $\ell_2$ regression, i.e., ridge regression. These differences are often ``explained'' in terms of differences between the unit ball in the $\ell_1$ norm versus the unit ball in the $\ell_2$ norm, with the former being ``pointy'' and the latter being ``smooth.'' In particular, note that in those situations $\ell_1$ often has some sort of connection with sparsity and sparse solutions.) Here is a comparison between $\ell_1$ and $\ell_2$ with respect to the spectral/flow algorithms for the graph partitioning problem we have been considering. \begin{itemize} \item $\ell_2$ norm: \begin{itemize} \item Good for dimension reduction. \item Efficient polynomial time algorithm to compute an embedding of any finite metric space. \item Connections to low-dimensional manifolds, diffusions, etc. \end{itemize} \item $\ell_1$ norm: \begin{itemize} \item Not good for dimension reduction. \item NP-hard to compute the optimal embedding. \item Connections to partitions/cuts, multicommodity flow, etc. \end{itemize} \end{itemize} The following is a fundamental result in the area that is also central to understanding why flow-based graph partitioning algorithms work. \begin{theorem}[Bourgain] Every $n$-point metric space $d$ admits an $\alpha$-distortion embedding into $\ell_p$, $\forall p$, with $\alpha = O(\log n)$. \end{theorem} \begin{proof}[Proof idea] The proof is similar to but more general than the proof for the corresponding embedding claim for $\ell_2$. The idea is to use the so-called Frechet embedding method, where the embedding is given by the distances from points to carefully randomly chosen subsets of nodes. \end{proof} \noindent Note that we saw the $\ell_2$ version of this before.
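The Frechet embedding idea in this proof sketch is simple enough to demonstrate directly: map each point $x$ to the vector of distances $\left(d(x,S_1),\ldots,d(x,S_m)\right)$ for random subsets $S_j$ whose sizes range over powers of two. The sketch below (with a made-up finite metric: random points in the plane) only checks the easy direction, namely that each coordinate map is $1$-Lipschitz by the triangle inequality, so the (normalized) embedding never expands distances; the $O(\log n)$ bound on the contraction is the actual content of Bourgain's theorem.

```python
import math
import random
from itertools import combinations

random.seed(0)

# An arbitrary finite metric space: random points in the plane.
pts = [(random.random(), random.random()) for _ in range(32)]
n = len(pts)

def dist(x, y):
    return math.hypot(pts[x][0] - pts[y][0], pts[x][1] - pts[y][1])

def d_to_set(x, S):
    """Distance from point x to the set S (a Frechet coordinate)."""
    return min(dist(x, s) for s in S)

# Frechet-style embedding: coordinates are distances to random subsets
# whose sizes are powers of two, with a few independent trials per size.
subsets = [frozenset(random.sample(range(n), 2 ** i))
           for i in range(6) for _ in range(4)]
m = len(subsets)
f = {x: [d_to_set(x, S) for S in subsets] for x in range(n)}

worst = 0.0  # largest observed contraction of the normalized l1 embedding
for x, y in combinations(range(n), 2):
    l1 = sum(abs(a - b) for a, b in zip(f[x], f[y]))
    # |d(x,S) - d(y,S)| <= d(x,y) per coordinate, so l1 <= m * d(x,y).
    assert l1 <= m * dist(x, y) + 1e-9
    worst = max(worst, dist(x, y) / (l1 / m + 1e-12))
print(worst)
```

The printed value is the empirical distortion of this particular (unoptimized) embedding; Bourgain's theorem says that with the right choice of subsets it is $O(\log n)$.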
Note also that the original embedding had $2^n$ dimensions, but LLR proved that it can be done with $O(\log^2 n)$ dimensions. \subsection{Connection between $\ell_1$ metrics and cut metrics} First, recall what a metric is. \begin{definition} A \emph{metric} is a function $d:V \times V \rightarrow \mathbb{R}$ s.t.: (1) $d(x,y)=0 \mbox{ if } x=y$ (and sometimes $d(x,y)=0 \mbox{ iff } x=y$); (2) $d(x,y) = d(y,x)$; and (3) $d(x,y) \leq d(x,z) + d(z,y)$. \end{definition} (Recall also that sometimes the word ``metric'' is used even if one or more of those conditions is/are relaxed.) Next, recall the definition of the ``cut metric,'' and recall that this is really not a metric but is instead just a semi-metric. \begin{definition} Given $G=(V,E)$ and a set $S \subset V$, $\delta_S$ is the ``cut metric'' for $S$ if \[ \delta_S(i,j) = | \chi_S(i) - \chi_S(j) | , \] where \[ \chi_S(i) = \left\{ \begin{array}{ll} 0 \text{ if } i \in S \\ 1 \text{ otherwise} . \end{array} \right. \] Thus \[ \delta_S(i,j) = \left\{ \begin{array}{ll} 0 \text{ if } i,j \in S,\mbox{ or } i,j\in\bar{S} \\ 1 \text{ otherwise} . \end{array} \right. \] (That is, $\delta_S(i,j)$ is the indicator of $i$ and $j$ being on different sides of the cut defined by $S$.) \end{definition} There are important connections between $\ell_1$ metrics and cut metrics. In particular: \begin{itemize} \item There exists a representation of the $\ell_1$ metrics as conical combinations of cut metrics. \item Cut metrics are the extreme rays of the $\ell_1$ cone. \item For these reasons, instead of minimizing a ratio of linear functions over the convex cone, we can minimize the ratio over the extreme rays of the convex cone: the minimum of a ratio function over a cone equals the minimum over its extreme rays. \end{itemize} We'll spend most of today going over these results. \textbf{Fact:} An $n$-point metric space can be associated with a vector in $\mathbb{R}^{n \choose 2}$, with each coordinate corresponding to a pair of vertices.
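To make the cut metric concrete, and using the Fact above that a metric on $n$ points is a vector indexed by pairs: $\delta_S$ is just a $0/1$ vector over pairs, and one can check directly that it satisfies the triangle inequality while allowing $\delta_S(i,j)=0$ for $i \ne j$ (hence ``semi-metric''). A small sketch in Python; the choices of $n$ and $S$ are arbitrary illustrations.

```python
from itertools import combinations

def cut_metric(n, S):
    """delta_S as a dict over pairs (i, j), i < j:
    1 iff i and j are separated by the cut (S, S-bar)."""
    S = set(S)
    return {(i, j): int((i in S) != (j in S))
            for i, j in combinations(range(n), 2)}

def d(delta, i, j):
    """Look up delta_S(i, j), handling order and the diagonal."""
    if i == j:
        return 0
    return delta[(min(i, j), max(i, j))]

n, S = 6, {0, 2, 5}
delta = cut_metric(n, S)

# delta_S is a semimetric: d(i,i)=0, symmetric by construction, and the
# triangle inequality holds (a 0/1 violation would force i,j to be on
# the same side).  But d(i,j)=0 for distinct same-side i,j.
for i in range(n):
    for j in range(n):
        for k in range(n):
            assert d(delta, i, j) <= d(delta, i, k) + d(delta, k, j)
print(sum(delta.values()))  # number of separated pairs = |S| * |S-bar|
```

This pair-indexed representation is exactly the vector in $\mathbb{R}^{n \choose 2}$ from the Fact above, which is the form used in the cone arguments that follow.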
\textbf{Fact:} Given a metric $d$, we will refer to the corresponding vector as $\bar{d}$. Then, $\alpha\bar{d}+(1-\alpha)\bar{d}^{'}$ is a metric, $\forall\alpha\in[0,1]$. In addition, $\forall\alpha\geq0$ and $\forall\bar{d}$, $\alpha\bar{d}$ is a metric. So, the set of all metrics forms a convex cone in $\mathbb{R}^{n \choose 2}$. In somewhat more detail, we have the following result: \begin{claim} The set of $\ell_1$ metrics is a convex cone, \emph{i.e.}, if $d_1$ and $d_2$ are $\ell_1$ metrics, and if $\lambda_1,\lambda_2 \in \mathbb{R}^{+}$, then $ \lambda_1 d_1 + \lambda_2 d_2$ is an $\ell_1$ metric. \end{claim} \begin{proof} Recall that the line metric is an $\ell_1$ metric. Let $d^{(i)}(x,y) = |x_i - y_i|$, for $x,y \in \mathbb{R}^{k}$. If $d$ is an $\ell_1$ metric, then it is a sum of line metrics (one per coordinate); so a conical combination of $\ell_1$ metrics is again a sum of line metrics, realized by concatenating the scaled coordinates, and hence is an $\ell_1$ metric. \end{proof} \noindent \textbf{Fact.} The analogous result does \emph{not} hold for $\ell_2$. Next, we have the following theorem: \begin{theorem} Let $d$ be a finite $\ell_1$ metric. Then, $d$ can be written as \[ d = \sum_{S \subset [n]} \alpha_S \delta_S , \] for some constants $\alpha_S \geq 0$ and cut metrics $\delta_S$. That is, \begin{eqnarray*} {CUT}_{n} &=& \{ d : d = \sum_{S \subseteq V} \alpha_S \delta_S , \alpha \geq 0\} \\ &=& \mbox{Positive cone generated by cut metrics} \\ &=& \mbox{All $n$-point subsets of $\mathbb{R}^{n}$, under the $\ell_1$ norm.} \end{eqnarray*} \end{theorem} \begin{proof} Consider any metric $d \in {CUT}_{n}$. Then, each $S$ with $\alpha_S > 0$ contributes a dimension, and in that dimension, we can put \[ \begin{cases} \alpha_S & \mbox{if } x \in \bar{S} \\ 0 & \mbox{if } x \in S . \end{cases} \] So, ${CUT}_{n} \subseteq \ell_1\mbox{ metrics}$. For the other direction, consider a set of $n$ points from $\mathbb{R}^{n}$. Take one dimension $d$ and sort the points in increasing order of their values along that dimension.
Say that we get $v_1,\ldots,v_k$ as the set of distinct values; then define $k-1$ cut metrics $S_i = \{ x : x_d < v_{i+1} \}$, and let $\alpha_i = v_{i+1} - v_{i}$, \emph{i.e.}, $k-1$ coefficients. So, along this dimension, we have that \[ |x_d-y_d|=\sum_{i=1}^{k-1}\alpha_{i}\delta_{S_i}(x,y) . \] But one can construct such cut metrics for every dimension, and summing over dimensions expresses the $\ell_1$ metric as a conical combination of cut metrics. So every $n$-point $\ell_1$ metric is in ${CUT}_{n}$; thus, $\ell_1 \subseteq {CUT}_{n}$. \end{proof} \subsection{Relating this to a graph partitioning objective} Why is this result above useful? The usefulness of this characterization is that we are going to want to optimize functions, and rather than optimize functions over all cut metrics, \emph{i.e.}, over the extreme rays, we will optimize over the full convex cone, \emph{i.e.}, over $\ell_1$ metrics. This leads us to the following lemma: \begin{lemma} Let $C \subset \mathbb{R}^{n}$ be a convex cone, and let $f,g:\mathbb{R}^{n}_{+}\rightarrow\mathbb{R}^{+}$ be linear functions. And assume that $\min_{x \in C} \frac{f(x)}{g(x)}$ exists. Then \[ \min_{x \in C} \frac{f(x)}{g(x)} = \min_{x\mbox{ in extreme rays of } C} \frac{f(x)}{g(x)} . \] \end{lemma} \begin{proof} Let $x_0$ be the optimum. Since $x_0 \in C$, we have that $x_0 = \sum_i a_i y_i$, where $a_i \in \mathbb{R}^{+}$ and the $y_i$ are extreme rays of $C$. Thus, \begin{eqnarray*} \frac{f(x_0)}{g(x_0)} = \frac{f(\sum_i a_i y_i)}{g(\sum_i a_i y_i)} &=& \frac{ \sum_i f(a_i y_i) }{ \sum_i g(a_i y_i) } \\ &\geq& \frac{ f(a_j y_j) }{ g(a_j y_j) } \qquad \mbox{where $j$ attains the minimum ratio} \\ &=& \frac{ f(y_j) }{ g(y_j) } , \end{eqnarray*} where the first and third lines follow by linearity, and where the second line follows since \[ \frac{ \sum_i \alpha_i }{ \sum_i \beta_i } \geq \min_j \frac{ \alpha_j }{ \beta_j } \] in general.
\end{proof} To see the connections of all of this to the sparsest cut problem, recall that given a graph $G=(V,E)$ we define the expansion $h_G$ and sparsity $\phi_G$ as follows: \begin{align*} h_G & := \min_{S \subseteq V} \frac{E(S, \bar{S})}{\min \{ |S|, |\bar{S}| \}} \\ \phi_G & := \min_{S \subseteq V} \frac{E(S, \bar{S})}{\frac{1}{n} |S| |\bar{S}|} , \end{align*} and also that: \[ h_G \leq \phi_G \leq 2 h_G . \] (This normalization might be different than what we had a few classes ago.) Given this, we can write sparsest cut as the following optimization problem: \begin{lemma} Solving \[ \phi_G = \min_{S \subseteq V} \frac{E(S, \bar{S})}{\frac{1}{n} |S| |\bar{S}|} \] is equivalent to solving: \begin{align*} \min & \sum_{(ij)\in E} d_{ij} \\ \mbox{s.t.~~} & \sum_{ij \in V} d_{ij} = 1 \\ & d \in \ell_1\mbox{ metrics} \end{align*} \end{lemma} \begin{proof} Let $\delta_S$ be the cut metric for $S$. Then, \[ \frac{ |E(S,\bar{S})| }{ |S|\cdot|\bar{S}| } = \frac{ \sum_{ij \in E} \delta_S(i,j) }{ \sum_{\forall ij} \delta_S(i,j) } . \] So, \[ \phi_G = \min_S \frac{ \sum_{ij \in E} \delta_S(i,j) }{ \sum_{\forall ij} \delta_S(i,j) } . \] Since $\ell_1$ metrics are conical combinations of cut metrics, and cut metrics are extreme rays of the $\ell_1$ cone, by the above lemma this ratio is minimized at one of the extreme rays of the cone. So, \[ \phi_G = \min_{d\in\ell_1} \frac{ \sum_{ij \in E} d_{ij} }{ \sum_{\forall ij} d_{ij} } . \] Since this is invariant to scaling, WLOG we can assume $\sum_{\forall ij} d_{ij} = 1$; and we get the lemma. \end{proof} \subsection{Turning this into an algorithm} It is important to note that the above formulation is still intractable---we have just changed the notation/characterization. But the new notation/characterization suggests that we might be able to \emph{relax} the optimization problem (\emph{i.e.}, optimize the same objective function over a larger set)---as we did with spectral, if you recall.
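Before turning to the relaxation, note that the cut-metric decomposition used in these arguments (one threshold cut per consecutive pair of distinct coordinate values, per dimension, with weight equal to the gap) is constructive and easy to sanity-check numerically. Here is a minimal sketch on a made-up point set in $\mathbb{R}^2$:

```python
from itertools import combinations

def l1_as_cuts(points):
    """Decompose the l1 metric of `points` into weighted threshold cuts:
    for each dimension, sort the distinct values v_1 < ... < v_k and use
    cuts S_i = {x : x_dim < v_{i+1}} with weight alpha_i = v_{i+1} - v_i."""
    dim = len(points[0])
    cuts = []  # list of (alpha, frozenset of point indices in S)
    for dmn in range(dim):
        vals = sorted({p[dmn] for p in points})
        for lo, hi in zip(vals, vals[1:]):
            S = frozenset(i for i, p in enumerate(points) if p[dmn] < hi)
            cuts.append((hi - lo, S))
    return cuts

def cut_dist(cuts, i, j):
    """Distance under the conical combination sum_S alpha_S * delta_S."""
    return sum(a for a, S in cuts if (i in S) != (j in S))

points = [(0, 3), (1, 1), (4, 0), (2, 2)]   # arbitrary points in R^2
cuts = l1_as_cuts(points)

for i, j in combinations(range(len(points)), 2):
    l1 = sum(abs(a - b) for a, b in zip(points[i], points[j]))
    assert cut_dist(cuts, i, j) == l1   # decomposition reproduces l1 exactly
print(len(cuts))
```

This is precisely the $\ell_1 \subseteq {CUT}_n$ direction of the theorem above, dimension by dimension.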
So, the relaxation we will consider is the following: relax the requirement that $d \in \ell_1 \mbox{ Metric}$ to $d \in \mbox{ Any Metric}$. We can do this by adding $3{n \choose 3}$ triangle inequalities to get the following LP: \begin{align*} \lambda^{*} = \min & \sum_{ij \in E} d_{ij} \\ \mbox{s.t.~~} & \sum_{\forall ij \in V} d_{ij} = 1 \\ & d_{ij} \geq 0 \\ & d_{ij} = d_{ji} \\ & d_{ij} \leq d_{ik} + d_{jk} \quad\forall i,j,k \quad\mbox{triples} \end{align*} (Obviously, since there are a lot of constraints, a naive solution won't be good for big data, but we will see that we can be a bit smarter.) Clearly, \[ \lambda^{*} \leq \phi^{*} = \mbox{ Solution with $d \in \ell_1\quad\mbox{Metric constraint}$} \] (basically since we are minimizing over a larger set). So, our goal is to show that we don't lose too much, \emph{i.e.}, that: \[ \phi^{*} \leq O(\log n) \lambda^{*} . \] Here is the \textsc{Algorithm}. Given as input a graph $G$, do the following: \begin{itemize} \item Solve the above LP to get a metric/distance $d:V \times V \rightarrow \mathbb{R}^{+}$. \item Use the (constructive) Bourgain embedding result to embed $d$ into an $\ell_1$ metric (with, of course, an associated $O(\log n)$ distortion). \item Round the $\ell_1$ metric (the solution) to get a cut. \begin{itemize} \item For each dimension/direction, convert the $\ell_1$ embedding/metric along that dimension to a cut metric representation. \item Choose the best. \end{itemize} \end{itemize} Of course, this is what is going on under the hood---if you were actually going to do it on systems of any size you would use something more specialized, like specialized flow or push-relabel code. Here are several things to note. \begin{itemize} \item If we have an $\ell_1$ embedding with distortion factor $\xi$, then we can approximate the cut up to a factor of $\xi$. \item Everything above is polynomial time, as we will show in the next theorem. 
\item In practice, we can solve this with specialized code to solve the dual of the corresponding multicommodity flow problem. \item Recall that one can ``localize'' spectral by running random walks from a seed node. Flow is harder to localize, but recall the \textsc{Improve} algorithm, which is still $\tilde{O}(n^{3/2})$. \item We can combine spectral and flow, as we will discuss, in various ways. \end{itemize} \begin{theorem} The algorithm above is a polynomial time algorithm that provides an $O(\log n)$ approximation to the sparsest cut problem. \end{theorem} \begin{proof} First, note that solving the LP is a polynomial time computation that yields a metric $d^{*}$. Then, note that the Bourgain embedding lemma is constructive, giving an embedding of $d^{*}$ into $d \in \ell_{1}^{O(\log^2 n)}$ with distortion $O(\log n)$. So, we can write $d$ as a linear combination of $O(n \log^2 n)$ cut metrics $d = \sum_{S \in \mathcal{S}} \alpha_S\delta_S$, where $|\mathcal{S}| = O( n \log^2 n)$. Note: \begin{eqnarray*} \min_{S\in\mathcal{S}} \frac{ \sum_{ij \in E} \delta_S(ij) }{ \sum_{\forall ij} \delta_S(ij) } &\leq& \frac{ \sum_{ij \in E} d_{ij} }{ \sum_{\forall ij} d_{ij} } \\ &\leq& O(\log n)\frac{ \sum_{ij \in E} d^{*}_{ij} }{ \sum_{\forall ij} d^{*}_{ij} } , \end{eqnarray*} where the first inequality follows since $d$ is in the cone of cut metrics, and where the second inequality follows from Bourgain's theorem. But, \[ \frac{ \sum_{ij \in E} d^{*}_{ij} }{ \sum_{\forall ij} d^{*}_{ij} } = \min_{d^{'} \mbox{ is metric}} \frac{ \sum_{ij \in E} d^{'}_{ij} }{ \sum_{\forall ij} d^{'}_{ij} } \\ \leq \min_{\forall S} \frac{ \sum_{ij \in E} d_{S}(ij) }{ \sum_{\forall ij} d_{S}(ij) } , \] where the equality follows from the LP solution and the inequality follows since the LP is a relaxation of the cut-metric formulation. 
Thus, \[ \min_{S\in\mathcal{S}} \frac{ \sum_{ij \in E} \delta_S(ij) }{ \sum_{\forall ij} \delta_S(ij) } \leq O(\log n) \min_{\forall S} \frac{ \sum_{ij \in E} d_{S}(ij) }{ \sum_{\forall ij} d_{S}(ij) } . \] This establishes the theorem. \end{proof} So, we can also approximate the value of the objective---how do we actually find a cut from this? (Note that sometimes in the theory of approximation algorithms you \emph{don't} get anything more than an approximation to the optimal number, but that is somewhat dissatisfying if you want to use the output of the approximation algorithm for some downstream data application.) To see this: \begin{itemize} \item Any $\ell_1$ metric can be written as a conic combination of cut metrics---in our case, with $O(n \log^2 n)$ nonzeros---$d^{\sigma} = \sum_S \alpha_S \delta_S$. \item So, pick the best cut from among the ones with nonzero $\alpha$ in the cut decomposition of $d^{\sigma}$. \end{itemize} \subsection{Summary of where we are} Above we showed that \begin{eqnarray*} \phi_G &=& \min_{S \subset V} \frac{E(S,\bar{S})}{\frac{1}{n}|S||\bar{S}|} \\ &=& \min \sum_{ij \in E} d_{ij} \\ & & \mbox{s.t.~~} \sum_{ij \in V} d_{ij}=1 \\ & & d \in \ell_1\mbox{ metric} \end{eqnarray*} can be approximated by relaxing it to \begin{align*} \min & \sum_{ij \in E} d_{ij} \\ \mbox{s.t.~~} & \sum_{ij \in V} d_{ij} = 1 \\ & d \in \mbox{Metric} \end{align*} This relaxation is different than the relaxation associated with spectral, where we showed that \[ \phi = \min_{x \in \{0,1\}^V} \frac{ \sum_{ij} A_{ij} |x_i-x_j|^2 }{ \frac{1}{n}\sum_{ij} |x_i-x_j|^2 } \] can be relaxed to \[ d-\lambda_2 = \min_{x\perp\vec{1}} \frac{ \sum_{ij} A_{ij} (x_i-x_j)^2 }{ \frac{1}{n}\sum_{ij} (x_i-x_j)^2 } \] (for a $d$-regular graph), which can be solved with the second eigenvector of the Laplacian. Note that these two relaxations are very different and incomparable, in the sense that one is not uniformly better than the other. 
This is related to them succeeding and failing in different places, and it is related to them parametrizing problems differently, and it can be used to diagnose the properties of how each class of algorithms performs on real data. Later, we will show how to generalize this previous flow-based result and combine it with spectral. Here are several questions that the above discussion raises. \begin{itemize} \item What else can you relax to? \item In particular, can we relax to something else and improve the $O(\log n)$ factor? \item Can we combine these two incomparable ideas to get better bounds in the worst case and/or in practice? \item Can we combine these ideas to develop algorithms that smooth or regularize better in applications for different classes of graphs? \item Can we use these ideas to do better learning, e.g., semi-supervised learning on graphs? \end{itemize} We will address some of these questions later in the term, as there is a lot of interest in these and related questions. \section{(03/03/2015): Some Practical Considerations (1 of 4): How spectral clustering is typically done in practice} Reading for today. \begin{compactitem} \item ``A Tutorial on Spectral Clustering,'' in Statistics and Computing, by von Luxburg \end{compactitem} Today, we will shift gears. So far, we have gone over the theory of graph partitioning, including spectral (and non-spectral) methods, focusing on \emph{why} and \emph{when} they work. Now, we will describe a little about \emph{how} and \emph{where} these methods are used. In particular, for the next few classes, we will talk somewhat informally about some practical issues, e.g., how spectral clustering is done in practice, how people construct graphs to analyze their data, and connections with linear and kernel dimensionality reduction methods. 
Rather than aiming to be comprehensive, the goal will be to provide illustrative examples (to place these results in a broader context and also to help people define the scope of their projects). Then, after that, we will get back to some theoretical questions making precise the how and where. In particular, we will then shift to talk about how diffusions and random walks provide a robust notion of an eigenvector and how they can be used to extend many of the vanilla spectral methods we have been discussing to very non-vanilla settings. This will then lead to how we can use spectral graph methods for other related problems like manifold modeling, stochastic blockmodeling, Laplacian solvers, etc. Today, we will follow the von Luxburg review. This review was written from a machine learning perspective, and in many ways it is a very good overview of spectral clustering methods; but beware: it also makes some claims (e.g., about the quality-of-approximation guarantees that can be proven about the output of spectral graph methods) that---given what we have covered so far---you should immediately see are not correct. \subsection{Motivation and general approach} The motivation here is two-fold. \begin{itemize} \item Clustering is an extremely common method for what is often called \emph{exploratory data analysis}. For example, it is very common for a person, when confronted with a new data set, to try to get a first view of the data by identifying subsets of it that have similar behavior or properties. \item Spectral clustering methods in particular are a very popular class of clustering methods. They are usually very simple to implement with standard linear algebra libraries; and they often outperform other methods such as $k$-means, hierarchical clustering, etc. 
\end{itemize} The first thing to note regarding general approaches is that Section 2 of the von Luxburg review starts by saying ``Given a set of data points $x_1,\ldots,x_n$ and some notion of similarity $s_{ij} \ge 0$ between all pairs of data points $x_i$ and $x_j$ ...'' That is, the data are vectors. Thus, any graphs that might be constructed by algorithms are constructed from primary data that are vectors and are useful as intermediate steps only. This will have several obvious and non-obvious consequences. This is a very common way to view the data (and thus spectral graph methods), especially in areas such as statistics, machine learning, and other areas outside of algorithmic computer science. That perspective is not good or bad per se, but it is worth emphasizing that difference. In particular, the approach we will now discuss will be very different than what we have been discussing so far, which is more common in CS and TCS and where the data were a graph $G=(V,E)$, e.g., the single web graph out there, and thus in some sense a single data point. Many of the differences between more algorithmic and more machine learning or statistical approaches can be understood in terms of this difference. We will revisit it later when we talk about manifold modeling, stochastic blockmodeling, Laplacian solvers, and related topics. \subsection{Constructing graphs from data} If the data are vectors with associated similarity information, then an obvious thing to do is to represent that data as a graph $G=(V,E)$, where each vertex $v_i \in V$ is associated with a data point $x_i$, and an edge $e = (v_i, v_j) \in E$ is defined if $s_{ij}$ is larger than some threshold. Here, the threshold could perhaps equal zero, and the edges might be weighted by $s_{ij}$. In this case, an obvious idea to cluster the vector data is to cluster the nodes of the corresponding graph. Now, let's consider how to specify the similarity information $s_{ij}$. 
There are many ways to construct a similarity graph, given the data points $\{x_i\}_{i=1}^{n}$ as well as pairwise similarity (or distance) information $s_{ij}$ (or $d_{ij}$). Here we describe several of the most popular. \begin{itemize} \item \textbf{$\epsilon$-NN graphs.} Here, we connect all pairs of points with distance $d_{ij} \le \epsilon$. Since the distance ``scale'' is set (by $\epsilon$), it is common not to include the weights. The justification is that, in certain idealized situations, including weights would not incorporate more information. \item \textbf{$k$-NN graphs.} Here, we connect vertex $i$ with vertex $j$ if $v_j$ is among the $k$-NN of $v_i$, where NNs are given by the distance $d_{ij}$. Note that this is a directed graph. There are two common ways to make it undirected. First, include an edge if ($v_i$ connects to $v_j$ OR $v_j$ connects to $v_i$), i.e., simply ignore directions; and second, include an edge only if ($v_i$ connects to $v_j$ AND $v_j$ connects to $v_i$), which gives the mutual $k$-NN graph. In either of those cases, the number of neighbors per node doesn't equal $k$; sometimes people filter it back to exactly $k$ edges per node and sometimes not. In either case, weights are typically included. \item \textbf{Fully-connected weighted graphs.} Here, we connect all points with a positive similarity to each other. Often, we want the similarity function to represent local neighborhoods, and so $s_{ij}$ is either transformed into another form or constructed to represent this. A popular choice is the Gaussian similarity kernel \[ s(x_i,x_j) = \exp \left ( - \frac{ \| x_i-x_j \|_2^2 }{2\sigma^2} \right) , \] where $\sigma$ is a parameter that, informally, acts like a width. This gives a matrix that has a number of nice properties, e.g., its entries are positive and it is SPSD, and so it is good for MLers who like kernel-based methods. Moreover, it has a strong mathematical basis, e.g., in scientific computing. 
(Of course, people sometimes use this $s_{ij}=s(x_i,x_j)$ information to construct $\epsilon$-NN or $k$-NN graphs.) \end{itemize} Note that in describing those various ways to construct a graph from the vector data, we are already starting to see a bunch of knobs that can be played with, and this is typical of these graph construction methods. Here are some comments about that graph construction approach. \begin{itemize} \item Choosing the similarity function is basically an art. One of the criteria is that typically one is not interested in resolving differences that are large, i.e., between moderately large and very large distances, since the goal is simply to ensure that those points are not close and/or since (for domain-specific reasons) that is the least reliable similarity information. \item Sometimes this approach is of interest in semi-supervised and transductive learning. In this case, one often has a lot of unlabeled data and only a little bit of labeled data; and one wants to use the unlabeled data to help define some sort of geometry to act as a prior to maximize the usefulness of the labeled data in making predictions for the unlabeled data. Although this is often thought of as defining a non-linear manifold, you should think of it as using unlabeled data to specify a data-dependent model class to learn with respect to. (That makes sense especially if the labeled and unlabeled data come from the same distribution, since in that case looking at the unlabeled data is akin to looking at more training data.) As we will see, these methods often have an interpretation in terms of a kernel, and so they are used to learn linear functions in implicitly-defined feature spaces anyway. \item $k$-NN, $\epsilon$-NN, and fully-connected weighted graphs are all the same in certain very idealized situations, but they can be very different in practice. 
$k$-NN often homogenizes more, which people often like, and/or it connects points of different ``size scales,'' which people often find~useful. \item Choosing $k$, $\epsilon$, and $\sigma$ large can easily ``short circuit'' nice local structure, unless (and sometimes even if) the local structure is extremely nice (e.g., one-dimensional). This essentially injects large-scale noise and expander-like structure; and in that case one should expect very different properties of the constructed graph (and thus very different results when one runs algorithms). \item The fully-connected weighted graph case goes from being a rank-one complete graph to being a diagonal matrix, as one varies $\sigma$. An important question (that is rarely studied) is what that graph looks like as one does a ``filtration'' from no edges to a complete graph. \item Informally, it is often thought that mutual-$k$-NN is between $\epsilon$-NN and $k$-NN: it connects points within regions of constant density, but it doesn't connect regions of very different density. (For $\epsilon$-NN, points on different scales don't get connected.) In particular, this means that it is good for detecting clusters of different densities. \item If one uses a fully-connected graph and then sparsifies it, it is often hoped that the ``fundamental structure'' is revealed and is nontrivial. This is true in some cases, some of which we will return to later, but it is also very non-robust. \item As a rule of thumb, people often choose parameters s.t. $\epsilon$-NN and $k$-NN graphs are at least ``connected.'' While this seems reasonable, there is an important question of whether it homogenizes too much, in particular if there are interesting heterogeneities in the graph. \end{itemize} \subsection{Connections between different Laplacian and random walk matrices} Recall the combinatorial or non-normalized Laplacian \[ L=D-W , \] and the normalized Laplacian \[ L_{sym} = D^{-1/2}LD^{-1/2} = I - D^{-1/2}WD^{-1/2} . 
\] There is also a random walk matrix that we will go into in more detail in a few classes and that for today we will call by the (somewhat non-standard) name random walk Laplacian \[ L_{rw} = D^{-1}L = I-D^{-1}W = D^{-1/2}L_{sym}D^{1/2} . \] Here is a lemma connecting them. \begin{lemma} Given the above definitions of $L$, $L_{sym}$, and $L_{rw}$, we have the following. \begin{enumerate} \item For all $x\in\mathbb{R}^{n}$, \[ x^TL_{sym}x = \frac{1}{2} \sum_{ij} W_{ij} \left( \frac{x_i}{\sqrt{d_i}} - \frac{x_j}{\sqrt{d_j}} \right)^{2} . \] \item $\lambda$ is an eigenvalue of $L_{rw}$ with eigenvector $u$ iff $\lambda$ is an eigenvalue of $L_{sym}$ with eigenvector $w=D^{1/2}u$. \item $\lambda$ is an eigenvalue of $L_{rw}$ with eigenvector $u$ iff $\lambda$ and $u$ solve the generalized eigenvalue problem $Lu = \lambda D u$. \item $0$ is an eigenvalue of $L_{rw}$ with eigenvector $\vec{1}$ iff $0$ is an eigenvalue of $L_{sym}$ with eigenvector $D^{1/2}\vec{1}$. \item $L_{sym}$ and $L_{rw}$ are PSD and have $n$ non-negative real-valued eigenvalues $0 = \lambda_1 \le \cdots \le \lambda_n$. \end{enumerate} \end{lemma} \noindent Hopefully none of these claims are surprising by now, but they do make explicit some of the connections between different vectors and different things that could be computed, e.g., one might solve the generalized eigenvalue problem $Lu = \lambda D u$ or run a random walk to approximate $u$ and then from that rescale it to get a vector for $L_{sym}$. \subsection{Using constructed data graphs} Spectral clustering, as it is often used in practice, often involves first computing several eigenvectors (or running some sort of procedure that computes some sort of approximate eigenvectors) and then performing $k$-means in a low-dimensional space defined by them. Here are several things to note. \begin{itemize} \item This is harder to analyze than the vanilla spectral clustering we have so far been considering. 
The reason is that one must analyze the $k$-means algorithm also. In this context, $k$-means is essentially used as a rounding algorithm. \item A partial justification of this is provided by the theoretical result on using the leading $k$ eigenvectors that you considered on the first homework. \item A partial justification is also given by a result we will get to below that shows that it works in very idealized situations. \end{itemize} We can use different Laplacians in different ways, as well as different clustering, $k$-means, etc. algorithms in different ways to get spectral-like clustering algorithms. Here, we describe $3$ canonical algorithms (that use $L$, $L_{rw}$, and $L_{sym}$) to give an example of several related approaches. Assume that we have $n$ points, $x_1,\ldots,x_n$, that we measure pairwise similarities $s_{ij} = s(x_i,x_j)$ with a symmetric nonnegative similarity function, and that we denote the similarity matrix by $S = \left(S_{ij}\right)_{i,j=1,\ldots,n}$. The following algorithm, let's call it \textsc{PopularSpectralClustering}, takes as input a similarity matrix $S\in\mathbb{R}^{n \times n}$ and a positive integer $k\in\mathbb{Z}^{+}$, the number of clusters; and it returns $k$ clusters. It does the following steps. \begin{enumerate} \item Construct a similarity graph (e.g., with $\epsilon$-NN, $k$-NN, fully-connected graph, etc.) \item Compute the unnormalized Laplacian $L=D-W$, where $W$ is the (weighted) adjacency matrix of the constructed graph. \item \begin{itemize} \item If (use $L$) \\ then compute the first $k$ eigenvectors $u_1,\ldots,u_k$ of $L$, \item else if (use $L_{rw}$) \\ then compute the first $k$ generalized eigenvectors $u_1,\ldots,u_k$ of the generalized eigenvalue problem $Lu = \lambda D u$. (Note by the above that these are eigenvectors of $L_{rw}$.) \item else if (use $L_{sym}$) \\ then compute the first $k$ eigenvectors $u_1,\ldots,u_k$ of $L_{sym}$. \end{itemize} \item Let $U\in\mathbb{R}^{n \times k}$ be the matrix containing the vectors $u_1,\ldots,u_k$ as columns. 
\item \begin{itemize} \item If (use $L_{sym}$) \\ then $u_{ij} \leftarrow u_{ij} / \left( \sum_{l} u_{il}^{2} \right)^{1/2}$, i.e., normalize $U$ row-wise. \end{itemize} \item For $i\in\{1,\ldots,n\}$, let $y_i\in\mathbb{R}^{k}$ be a vector containing the $i^{th}$ row of $U$. \item Cluster the points $\left( y_i \right)_{i\in[n]}$ in $\mathbb{R}^{k}$ with a $k$-means algorithm into clusters $C_1,\ldots,C_k$. \item Return: clusters $A_1,\ldots,A_k$, with $A_i = \{ j : y_j \in C_i \} $. \end{enumerate} Here are some comments about the \textsc{PopularSpectralClustering} algorithm. \begin{itemize} \item The first step is to construct a graph, and we discussed above that there are a lot of knobs. In particular, the \textsc{PopularSpectralClustering} algorithm is not ``well-specified'' or ``well-defined,'' in the sense that the algorithms we have been talking about thus far are. It might be better to think of this as an algorithmic approach, with several knobs that can be played with, that comes with suggestive but weaker theory than what we have been describing so~far. \item $k$-means is often used in the last step, but it is not necessary, and it is not particularly principled (although it is often reasonable if the data tend to cluster well in the space defined by $U$). Other methods have been used but are less popular, presumably since $k$-means is good enough and there are enough knobs earlier in the pipeline that the last step isn't the bottleneck to getting good results. Ultimately, to get quality-of-approximation guarantees for an algorithm like this, you need to resort to a Cheeger-like bound or a heuristic justification or weaker theory that provides justification in idealized cases. \item In this context, $k$-means is essentially a rounding step to take a continuous embedding, provided by the continuous vectors $\{ y_i \}$, where $y_i \in \mathbb{R}^{k}$, and put them into one of $k$ discrete values. This is analogous to what the sweep cut did. 
But we will also see that this embedding, given by $\{y_i\}_{i=1}^{n}$, can be used for all sorts of other things. \item Remark: If one considers the $k$-means objective function written as an IP and then relaxes it (from having the constraint that each data point goes into one of $k$ clusters, written as an orthogonal matrix with one nonzero per row, to being a general orthogonal matrix), then you get an objective function, the solution to which can be computed by computing a truncated SVD, i.e., the top $k$ singular vectors. This provides a $2$-approximation to the $k$-means objective. There are better approximation algorithms for the $k$-means objective, when measured by the quality-of-approximation, but this does provide an interesting connection. \item The rescaling done in the ``If (use $L_{sym}$) then'' is typical of many spectral algorithms, and it can be the source of confusion. (Note that the rescaling is done with respect to $\left(P_U\right)_{ii} = \left(UU^T\right)_{ii}$, i.e., the statistical leverage scores of $U$, and this means that more ``outlying'' points get down-weighted.) From what we have discussed before, it should not be surprising that we need to do this to get the ``right'' vector to work with, e.g., for the Cheeger theory we talked about before to be as tight as possible. On the other hand, if you are approaching this from the perspective of engineering an algorithm that returns clusters when you expect them, it can seem somewhat ad hoc. There are many other similar ad hoc and seemingly ad hoc decisions that are made when engineering implementations of spectral graph methods, and this leads to a large proliferation of spectral-based methods, many of which are very similar ``under the hood.'' \end{itemize} All of these algorithms take the input data $\{x_i\}$ and change the representation in a lossy way to get data points $y_i\in\mathbb{R}^{k}$. 
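As one concrete illustration of this pipeline, here is a minimal NumPy sketch of the $L_{sym}$ variant; the function name and the deterministic farthest-point initialization for the tiny $k$-means step are our own simplifications, not part of the algorithm as stated above.

```python
import numpy as np

def spectral_clustering_sketch(W, k, iters=100):
    """Sketch of the L_sym variant: embed via the first k eigenvectors of
    L_sym = I - D^{-1/2} W D^{-1/2}, row-normalize, then cluster the rows."""
    W = np.asarray(W, dtype=float)
    d = W.sum(axis=1)
    D_isqrt = np.diag(1.0 / np.sqrt(d))
    L_sym = np.eye(len(W)) - D_isqrt @ W @ D_isqrt
    _, vecs = np.linalg.eigh(L_sym)                   # eigenvalues ascending
    U = vecs[:, :k]                                   # first k eigenvectors
    U = U / np.linalg.norm(U, axis=1, keepdims=True)  # row-wise normalization
    # Tiny Lloyd's k-means on the rows y_i, with deterministic
    # farthest-point initialization (a simple stand-in for a real k-means).
    centers = [U[0]]
    for _ in range(1, k):
        dists = np.min([((U - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(U[np.argmax(dists)])
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.argmin(((U[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = U[labels == j].mean(axis=0)
    return labels
```

On a similarity matrix with two strong blocks joined by weak links, this recovers the two blocks; a production implementation would of course use a real $k$-means routine and sparse eigensolvers.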
Because of the properties of the Laplacian (some of which we have been discussing, and some of which we will get back to), this \emph{often} enhances the cluster properties of the data. In idealized cases, this approach works as expected, as the following example shows. Say that we sample data points from $\mathbb{R}$ from four equally-spaced Gaussians, and from that we construct a NN graph. (Depending on the RBF width of that graph, we might have an essentially complete graph or an essentially disconnected graph, but let's say we choose parameters as the pedagogical example suggests.) Then $\lambda_1=0$; $\lambda_2$, $\lambda_3$, and $\lambda_4$ are small; and $\lambda_5$ and up are larger. In addition, $v_1$ is flat; and higher eigenfunctions are sinusoids of increasing frequency. The first few eigenvectors can be used to split the data into the four natural clusters (the computed eigenvectors may come out as less clean linear combinations within the nearly-degenerate eigenspace, but they can be chosen to split the clusters as the pedagogical example suggests). But this idealized case is chosen to be ``almost disconnected,'' and so it shouldn't be surprising that the eigenvectors can be chosen to be almost cluster indicator vectors. Two things: the situation gets much messier for real data if you consider more eigenvectors; and the situation gets much messier for real data if the clusters are, say, 2D or 3D with realistic noise. \subsection{Connections with graph cuts and other objectives} Here, we will briefly relate what we have been discussing today with what we discussed over the last month. In particular, we describe the graph cut point of view on this spectral clustering algorithm. I'll follow the notation of the von Luxburg review, so you can go back to that, even though it is very different from what we used before. 
The point here is not to be detailed/precise, but instead to remind you what we have been covering in another notation that is common, especially in ML, and also to derive an objective that we haven't covered but that is a popular one to which to add constraints. To make connections with the \textsc{PopularSpectralClustering} algorithm and MinCut, RatioCut, and NormalizedCut, recall that \[ \mbox{RatioCut}(A_1,\ldots,A_k) = \frac{1}{2} \sum_{i=1}^{k} \frac{W\left(A_i,\bar{A}_i\right)}{|A_i|} = \sum_{i=1}^{k} \frac{ \mbox{cut}\left( A_i,\bar{A}_i \right) }{ |A_i| } , \] where $\mbox{cut}\left( A_i,\bar{A}_i \right) = \frac{1}{2} W\left( A_i,\bar{A}_i \right)$. First, let's consider the case $k=2$ (which is what we discussed before). In this case, we want to solve the following problem: \begin{equation} \min_{A \subset V} \mbox{RatioCut}\left(A,\bar{A}\right) . \label{eqn:ratio-cut-1} \end{equation} Given $A \subset V$, we can define a function $f = \left( f_1,\ldots,f_n \right)^T \in \mathbb{R}^{n}$ s.t. \begin{equation} f_{i} = \left\{ \begin{array}{l l} \sqrt{ |\bar{A}|/|A| } & \quad \text{if $v_i \in A$} \\ -\sqrt{ |A|/|\bar{A}| } & \quad \text{if $v_i \in \bar{A}$} \end{array} \right. . \label{eqn:f-indicator-2} \end{equation} In this case, we can write Eqn.~(\ref{eqn:ratio-cut-1}) as follows: \begin{align*} \min_{A \subset V} & f^TLf \\ \mbox{s.t.~~} & f \perp \vec{1} \\ & f \mbox{ defined as in Eqn.~(\ref{eqn:f-indicator-2})} \\ & \|f\|=\sqrt{n} . \end{align*} In this case, we can relax this objective to obtain \begin{align*} \min_{f \in \mathbb{R}^{n}} & f^TLf \\ \mbox{s.t.~~} & f \perp \vec{1} \\ & \|f\|=\sqrt{n} , \end{align*} which can then be solved by computing the eigenvector of $L$ with the second-smallest eigenvalue. Next, let's consider the case $k > 2$ (which is more common in practice). 
In this case, given a partition of the vertex set $V$ into $k$ sets $A_1,\ldots,A_k$, we can define $k$ indicator vectors $h_j = \left( h_{1j},\ldots,h_{nj}\right)^T$ by \begin{equation} h_{ij} = \left\{ \begin{array}{l l} 1/ \sqrt{ |A_j| } & \quad v_i \in A_j, \quad i\in[n],j\in[k] \\ 0 & \quad \text{otherwise} \end{array} \right. . \label{eqn:f-indicator-k} \end{equation} Then, we can set the matrix $H \in \mathbb{R}^{n \times k}$ as the matrix containing those $k$ indicator vectors as columns, and observe that $H^TH=I$, i.e., $H$ has orthonormal columns (and is a rather special such matrix, since it has only one nonzero per row). We note the following observation; this is a particular way to write the RatioCut problem as a Trace problem that appears in many places. \begin{claim} $\mbox{RatioCut}(A_1,\ldots,A_k) = \mbox{Tr}(H^TLH)$ \end{claim} \begin{proof} Observe that \[ h_i^TLh_i = \frac{ \mbox{cut}\left( A_i,\bar{A}_i \right)}{ |A_i| } \] and also that \[ h_i^TLh_i = \left(H^TLH\right)_{ii} . \] Thus, we can write \[ \mbox{RatioCut}\left(A_1,\ldots,A_k\right) = \sum_{i=1}^{k} h_i^T L h_i = \sum_{i=1}^{k} \left( H^T L H \right)_{ii} = \mbox{Tr}\left(H^TLH\right) . \] \end{proof} So, we can write the problem of \[ \min \mbox{RatioCut}\left(A_1,\ldots,A_k\right) \] as follows: \begin{align*} \min_{A_1,\ldots,A_k} & \mbox{Tr}\left(H^TLH\right) \\ \mbox{s.t.~~} & H^TH=I \\ & H \mbox{ defined as in Eqn.~(\ref{eqn:f-indicator-k})} . \end{align*} We can relax this by letting the entries of $H$ be arbitrary elements of $\mathbb{R}$ (still subject to the overall orthogonality constraint on $H$) to get \begin{align*} \min_{H\in\mathbb{R}^{n \times k}} & \mbox{Tr}\left(H^TLH\right) \\ \mbox{s.t.~~} & H^TH=I , \end{align*} and the solution to this is obtained by computing the first $k$ eigenvectors of $L$. 
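The trace identity in the claim is easy to check numerically (NumPy assumed; the random weighted graph and the particular partition are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random symmetric weighted graph on n nodes (zero diagonal).
n, k = 7, 3
W = np.triu(rng.random((n, n)), 1)
W = W + W.T
L = np.diag(W.sum(axis=1)) - W     # unnormalized Laplacian L = D - W

# A partition A_1, ..., A_k and its indicator matrix H with H_{ij} = 1/sqrt(|A_j|).
parts = [[0, 1], [2, 3, 4], [5, 6]]
H = np.zeros((n, k))
for j, A in enumerate(parts):
    H[A, j] = 1.0 / np.sqrt(len(A))

# RatioCut computed directly: sum_j cut(A_j, A_j-bar) / |A_j|, where cut sums
# the weights of the edges leaving A_j, counting each crossing edge once
# (the convention that matches the quadratic form h^T L h).
ratio_cut = sum(
    W[np.ix_(A, [v for v in range(n) if v not in A])].sum() / len(A)
    for A in parts
)

print(ratio_cut, np.trace(H.T @ L @ H))   # the two quantities agree
```

The printed quantities agree to machine precision, and $H^TH=I$ as claimed.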
Of course, similar derivations could be provided for the NormalizedCut objective, in which case we get similar results, except that we deal with degree weights, degree-weighted constraints, etc. In particular, for $k>2$, if we define indicator vectors $h_j = \left( h_{1j},\ldots,h_{nj}\right)^T$ by \begin{equation} h_{ij} = \left\{ \begin{array}{l l} 1/ \sqrt{ \mbox{Vol}(A_j) } & \quad v_i \in A_j, \quad i\in[n],j\in[k] \\ 0 & \quad \text{otherwise} \end{array} \right. . \label{eqn:f-indicator-kgt2} \end{equation} then the problem of minimizing NormalizedCut is \begin{align*} \min_{A_1,\ldots,A_k} & \mbox{Tr}\left(H^TLH\right) \\ \mbox{s.t.~~} & H^TDH=I \\ & H \mbox{ defined as in Eqn.~(\ref{eqn:f-indicator-kgt2})} , \end{align*} and if we let $T = D^{1/2}H$, then the spectral relaxation is \begin{align*} \min_{T\in\mathbb{R}^{n \times k}} & \mbox{Tr}\left(T^TD^{-1/2}LD^{-1/2}T\right) \\ \mbox{s.t.~~} & T^TT=I , \end{align*} and the solution $T$ to this trace minimization problem is given by the first $k$ eigenvectors of $L_{sym}$. Then $H=D^{-1/2}T$, in which case $H$ consists of the first $k$ eigenvectors of $L_{rw}$, or the first $k$ generalized eigenvectors of $Lu=\lambda D u$. Trace optimization problems of this general form arise in many related applications. For example: \begin{itemize} \item One often uses this objective as a starting point, e.g., to add sparsity or other constraints, as in one variation of ``sparse PCA.'' \item Some of the methods we will discuss next time, i.e., LE/LLE/etc., do something very similar but from a different motivation, and this provides other ways to model the data. \item As noted above, the $k$-means objective can actually be written as an objective with a similar constraint matrix, i.e., if $H$ is the cluster indicator matrix for the points, then $H^TH=I$ and $H$ has one non-zero per row. 
If we relax that constraint to be any orthogonal matrix such that $H^TH=I$, then we get an objective function, the solution to which is the truncated SVD; and this provides a $2$-approximation to the $k$-means problem. \end{itemize} \section{% (03/05/2015): Some Practical Considerations (2 of 4): Basic perturbation theory and basic dimensionality reduction } Reading for today. \begin{compactitem} \item ``A kernel view of the dimensionality reduction of manifolds,'' in ICML, by Ham, et al. \end{compactitem} Today, we will cover two topics: the Davis-Kahan-$\sin\left(\theta\right)$ theorem, which is a basic result from matrix perturbation theory that can be used to understand the robustness of spectral clustering in idealized cases; and basic linear dimensionality reduction methods that, while not spectral graph methods by themselves, have close connections and are often used with spectral graph methods. \subsection{Basic perturbation theory} One way to analyze spectral graph methods---as well as matrix algorithms much more generally---is via matrix perturbation theory. Matrix perturbation theory asks: how do the eigenvalues and eigenvectors of a matrix $A$ change if we add a (small) perturbation $E$, i.e., if we are working with the matrix $\tilde{A}=A+E$? Depending on the situation, this can be useful in one or more of several~ways. \begin{itemize} \item \textbf{Statistically.} There is often noise in the input data, and we might want to make claims about the unobserved processes that generate the observed data. In this case, $A$ might be the hypothesized data, e.g., that has some nice structure; we observe and are working with $\tilde{A}=A+E$, where $E$ might be Gaussian noise, Gaussian plus spiked noise, or whatever; and we want to make claims that algorithms we run on $\tilde{A}$ say something about the unobserved $A$.
\item \textbf{Algorithmically.} Here, one has the observed matrix $A$, and one wants to make claims about $A$, but for algorithmic reasons (or other reasons, but typically algorithmic reasons if randomness is being exploited as a computational resource), one performs random sampling or random projections and computes on the sample/projection. This amounts to constructing a sketch $\tilde{A}=A+E$ of the full input matrix $A$, where $E$ is whatever is lost in the construction of the sketch, and one wants to provide guarantees about $A$ by computing on $\tilde{A}$. \item \textbf{Numerically.} This arises since computers can't represent real numbers exactly, i.e., there is round-off error, even if it is at the level of machine precision, and thus it is of interest to know the sensitivity of problems and/or algorithms to such round-off errors. In this case, $A$ is the answer that would have been computed in exact arithmetic, while $\tilde{A}$ is the answer that is computed in the presence of round-off error. (E.g., inverting a non-invertible matrix is very sensitive, but inverting an orthogonal matrix is not, as quantified by the condition number of the input matrix.) \end{itemize} The usual reference for matrix perturbation theory is the book of Stewart and Sun, which was written primarily with numerical issues in mind. Most perturbation theorems say that some notion of distance between eigenstuff, e.g., between eigenvalues or between subspaces defined by eigenvectors of $A$ and $\tilde{A}$, depends on the norm of the error/perturbation $E$, often via something like a condition number that quantifies the robustness of problems. (E.g., it is easier to estimate extremal eigenvalues than eigenvectors that are buried deep in the spectrum of $A$, and it is easier if $E$ is smaller.) For spectral graph methods, certain forms of matrix perturbation theory can provide some intuition and qualitative guidance as to when spectral clustering works.
We will cover one such result that is particularly simple to state and think about. When it works, it works well; but since we are only going to describe a particular case of it, when it fails, it might fail ungracefully. In some cases, more sophisticated variants of this result can provide guidance. When applied to spectral graph methods, matrix perturbation theory is usually used in the following way. Recall that if a graph has $k$ disconnected components, then $0 = \lambda_1 = \lambda_2 = \ldots = \lambda_k < \lambda_{k+1} $, and the corresponding eigenvectors $v_1,v_2,\ldots,v_k$ can be chosen to be the indicator vectors of the connected components. In this case, the connected components are a reasonable notion of clusters, and the $k$-means algorithm should trivially find the correct clustering. If we let $A$ be the Adjacency Matrix for this graph, then recall that it splits into $k$ pieces. Let's assume that this is the idealized unobserved case, and the data that we observe, i.e., the graph we are given or the graph that we construct, is a noisy version of this, call it $\tilde{A} = A+E$, where $E$ is the noise/error. Among other things, $E$ will introduce ``cross talk'' between the clusters, so they are no longer disconnected. In this case, if $E$ is small, then we might hope that perturbation theory would show that the first $k$ eigenvalues remain small and well-separated from the rest, and that the first $k$ eigenvectors of $\tilde{A}$ are perturbed versions of the original indicator vectors. As stated, this is not true, and the main reason for this is that $\lambda_{k+1}$ (and others) could be very small. (We saw a version of this before, when we showed that we don't actually need to compute the leading eigenvector, but instead any vector whose Rayleigh quotient was similar would give similar results---where by similar results we mean results on the objective function, as opposed to the actual clustering.)
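The idealized disconnected case recalled above is easy to verify numerically. In this hedged sketch (the toy graph is our own construction), a graph with $k=2$ components has exactly two zero eigenvalues of its Laplacian, and the rows of the corresponding eigenvector matrix are constant on each component, so $k$-means on those rows trivially recovers the components:

```python
import numpy as np

# Two disconnected triangles: k = 2 connected components.
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5)]:
    A[i, j] = A[j, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

w, V = np.linalg.eigh(L)                       # ascending eigenvalues
# 0 = lambda_1 = lambda_2 < lambda_3: one zero eigenvalue per component.
assert np.allclose(w[:2], 0.0) and w[2] > 1e-8

# The null space is spanned by the component indicator vectors, so the
# embedding rows V[i, :2] coincide within each component.
emb = V[:, :2]
assert np.allclose(emb[0], emb[1]) and np.allclose(emb[0], emb[2])
assert np.allclose(emb[3], emb[4]) and np.allclose(emb[3], emb[5])
```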
But, if we account for this, then we can get an interesting perturbation bound. (While interesting, in the context of spectral graph methods, this bound is somewhat weak, in the sense that the perturbations are often much larger and the spectral gaps are often much smaller than the theorem permits.) This result is known as the Davis-Kahan theorem; and it is used to bound the distance between the eigenspaces of symmetric matrices under symmetric perturbations. (We saw before that symmetric matrices are much ``nicer'' than general matrices. Fortunately, they are very common in machine learning and data analysis, even if it means considering correlation matrices $XX^T$ or $X^TX$. Note that if we relaxed this requirement here, then this result would be false, and to get generalizations, we would have to consider all sorts of other messier things like pseudo-spectra.) To bound the distance between the eigenspaces, let's define the notion of an angle (a canonical or principal angle) between two subspaces. \begin{definition} Let $\mathcal{V}_1$ and $\mathcal{V}_2$ be two $p$-dimensional subspaces of $\mathbb{R}^{d}$, and let $V_1$ and $V_2$ be two orthogonal matrices (i.e., $V_1^TV_1=I$ and $V_2^TV_2=I$) spanning $\mathcal{V}_1$ and $\mathcal{V}_2$. Then the \emph{principal angles} $\{\theta_i\}_{i=1}^{p}$ are s.t. $\cos(\theta_i)$ are the singular values of $V_1^TV_2$. \end{definition} Several things to note. First, for $p=1$, this is the usual definition of an angle between two vectors/lines. Second, one can define angles between subspaces of different dimensions, which is of interest if there is a chance that the perturbation introduces rank deficiency, but we won't need that here. Third, this is actually a full vector of angles, and one could choose the largest to be \emph{the} angle between the subspaces, if one wanted.
\begin{definition} Let $\sin\left(\theta\left(\mathcal{V}_1,\mathcal{V}_2\right)\right)$ be the diagonal matrix with the sine of the canonical angles along the diagonal. \end{definition} Here is the Davis-Kahan-$\sin\left(\theta\right)$ theorem. We won't prove it. \begin{theorem}[Davis-Kahan] \label{thm:davis-kahan} Let $A,E\in\mathbb{R}^{n \times n}$ be symmetric matrices, and consider $\tilde{A}=A+E$. Let $S_1 \subset \mathbb{R}$ be an interval; and denote by $\sigma_{S_1}\left(A\right)$ the eigenvalues of $A$ in $S_1$, and by $V_1$ the eigenspace corresponding to those eigenvalues. Ditto for $\sigma_{S_1}\left(\tilde{A}\right)$ and $\tilde{V}_1$. Define the distance between the interval $S_1$ and the spectrum of $A$ outside of $S_1$ as \[ \delta = \min \{ | \lambda-s | : \lambda\mbox{ is eigenvalue of }A, \lambda \notin S_1, s \in S_1 \} . \] Then the distance $d\left(V_1,\tilde{V}_1\right) = \|\sin \theta \left( V_1,\tilde{V}_1 \right) \|$ between the two subspaces $V_1$ and $\tilde{V}_1$ can be bounded as \[ d\left(V_1,\tilde{V}_1\right) \le \frac{\|E\|}{\delta} , \] where $\|\cdot\|$ denotes the spectral or Frobenius norm. \end{theorem} What does this result mean? For spectral clustering, let $L$ be the original (symmetric, and SPSD) hypothesized matrix, with $k$ disjoint clusters, and let $\tilde{L}$ be the perturbed observed matrix. In addition, we want to choose the interval such that the first $k$ eigenvalues of both $L$ and $\tilde{L}$ are in it, and so let's choose the interval as follows. Let $S_1 = [0,\lambda_k]$ (where we recall that the first $k$ eigenvalues of the unperturbed matrix equal $0$); in this case, $ \delta = | \lambda_k - \lambda_{k+1} | $, i.e., $\delta$ equals the ``spectral gap'' between the $k^{th}$ and the $(k+1)^{st}$ eigenvalue.
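To make the theorem concrete, here is a hedged numerical sketch (the graph, noise level, and random seed are arbitrary choices of ours): we perturb the Laplacian of a graph with $k=2$ disconnected components by a small symmetric $E$, compute the sine of the largest principal angle between the two leading eigenspaces, and check it against $\|E\|/\delta$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Idealized L: two disconnected triangles, so 0 = lambda_1 = lambda_2 < lambda_3.
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5)]:
    A[i, j] = A[j, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

# Small symmetric perturbation ("cross talk" between the two clusters).
E = rng.normal(scale=1e-3, size=(6, 6))
E = (E + E.T) / 2
k = 2

w, V = np.linalg.eigh(L)
_, Vt = np.linalg.eigh(L + E)
V1, V1t = V[:, :k], Vt[:, :k]

# cos(theta_i) are the singular values of V1^T V1t; take the largest angle.
cos_theta = np.linalg.svd(V1.T @ V1t, compute_uv=False)
dist = np.sqrt(max(0.0, 1.0 - cos_theta.min() ** 2))   # ||sin Theta||_2

delta = w[k] - w[k - 1]            # spectral gap: lambda_3 - lambda_2 = 3 here
bound = np.linalg.norm(E, 2) / delta
assert dist <= bound               # the Davis-Kahan bound holds comfortably
```

With noise at scale $10^{-3}$ and gap $\delta=3$, the bound is loose; shrinking the gap or growing $\|E\|$ makes it bite.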
Thus, the above theorem says that the bound on the distance $d$ between the subspaces defined by the first $k$ eigenvectors of $L$ and $\tilde{L}$ is less if: (1) the norm of the error matrix $\|E\|$ is smaller; and (2) the value of $\delta$, i.e., the spectral gap, is larger. (In particular, note that we need the angle to be less than 90 degrees to get nontrivial results, which is the usual case; otherwise, rank is lost). This result provides a useful qualitative guide, and there are some more refined versions of it, but note the following. \begin{itemize} \item It is rarely the case that we see a nontrivial eigenvalue gap in real data. \item It is better to have methods that are robust to slow spectral decay. Such methods exist, but they are more involved in terms of the linear algebra, and so many users of spectral graph methods avoid them. We won't cover them here. \item This issue is analogous to what we saw with Cheeger's Inequality, where we saw that we got similar bounds on the objective function value for any vector whose Rayleigh quotient was close to the value of $\lambda_2$, but the actual vector might change a lot (since if there is a very small spectral gap, then permissible vectors might ``swing'' by 90 degrees). \item BTW, although this invalidates the hypotheses of Theorem~\ref{thm:davis-kahan}, the results of spectral algorithms might still be useful, basically since they are used as intermediate steps, i.e., features for some other task. \end{itemize} That being said, knowing this result is useful since it suggests and explains some of the eigenvalue heuristics that people do to make vanilla spectral clustering work. As an example of this, recall the row-wise reweighting we saw last time. As a general rule, eigenvectors of orthogonal matrices are robust, but not otherwise in general.
Here, this manifests itself in whether or not the components of an eigenvector on a given component are ``bounded away from zero,'' meaning that there is a nontrivial spectral gap. For $L$ and $L_{rw}$, the eigenvectors are indicator vectors, so there is no need to worry about this, since they will be as robust as possible to perturbation. But for $L_{sym}$, the eigenvector is $D^{1/2}\vec{1}_A$, and if there is substantial degree variability then this is a problem, i.e., for low-degree vertices their entries can be very small, and it is difficult to deal with them under perturbation. So, the row-normalization is designed to robustify the algorithms. This ``reweigh to robustify'' is an after-the-fact justification. One could alternately note that all the results for degree-homogeneous Cheeger bounds go through to degree-heterogeneous cases, if one puts in factors of $d_{max}/d_{min}$ everywhere. But this leads to much weaker bounds than if one considers conductance and incorporates this into the sweep cut. I.e., from the perspective of optimization objectives, the reason to reweigh is to get tighter Cheeger's Inequality guarantees. \subsection{Linear dimensionality reduction methods} There are a wide range of methods that do the following: construct a graph from the original data; and then perform computations on the graph to do feature identification, clustering, classification, regression, etc. on the original data. (We saw one example of this when we constructed a graph, computed its top $k$ eigenvectors, and then performed $k$-means on the original data in the space thereby defined.) These methods are sometimes called \emph{non-linear dimensionality reduction methods} since the constructed graphs can be interpreted as so-called kernels and since the resulting methods can be interpreted as kernel-based machine learning methods. 
Thus, they indirectly boil down to computing the SVD---indirectly in that it is in a feature space that is implicitly defined by the kernel. This general approach is used for many other problems, and so we will describe it in some~detail. To understand this, we will first need to understand a little bit about \emph{linear dimensionality reduction methods} (meaning, basically, those methods that directly boil down to computing the SVD or truncated SVD of the input data) as well as kernel-based machine learning methods. Both are large topics in their own right, and we will only touch the surface. \subsubsection{PCA (Principal components analysis)} Principal components analysis (PCA) is a common method for linear dimensionality reduction that seeks to find a ``maximum variance subspace'' to describe the data. In more detail, say we are given $\{x_i\}_{i=1}^{n}$, with each $x_i\in\mathbb{R}^{m}$, and let's assume that the data have been centered in that $\sum_ix_i=0$. Then, our goal is to find a subspace $P$, and an embedding $\vec{y}_i=P\vec{x}_i$, where $P^2=P$, s.t. \[ \mbox{Var}(\vec{y}) = \frac{1}{n} \sum_i ||Px_i||^2 \] is largest, \emph{i.e.}, maximize the projected variance, or where \[ \mbox{Err}(\vec{y}) = \frac{1}{n} \sum_i ||x_i-Px_i||_2^2 \] is smallest, \emph{i.e.}, minimize the reconstruction error. Since Euclidean spaces are so structured, the solution to these two problems is identical, and is basically given by computing the SVD or truncated~SVD: \begin{itemize} \item Let $C = \frac{1}{n}\sum_i x_i x_i^T$, \emph{i.e.}, $C \sim XX^T$. \item Define the variance as $\mbox{Var}(\vec{y}) = \mbox{Trace}(PCP)$. \item Do the eigendecomposition to get $C = \sum_{i=1}^{m} \lambda_i \hat{e}_i\hat{e}_i^T$, where $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_m \geq 0$. \item Let $P = \sum_{i=1}^{d} \hat{e}_i\hat{e}_i^T$, and then project onto the subspace spanned by the top $d$ eigenvectors of $C$.
\end{itemize} \subsubsection{MDS (Multi-Dimensional Scaling)} A different method (that boils down to taking advantage of the same structural result, the SVD) is that of Multi-Dimensional Scaling (MDS), which asks for the subspace that best preserves inter-point distances. In more detail, say we are given $\{x_i\}_{i=1}^{n}$, with $x_i\in\mathbb{R}^{D}$, and let's assume that the data are centered in that $\sum_ix_i=0$. Then, we have $\frac{n(n-1)}{2}$ pairwise distances, denoted $\Delta_{ij}$. The goal is to find vectors $\vec{y}_i$ such that: \[ || \vec{y}_i - \vec{y}_j || \approx \Delta_{ij} . \] We have the following lemma: \begin{lemma} If $\Delta_{ij}$ denotes the Euclidean distance between zero-mean vectors, then the inner products are \[ G_{ij} = \frac{1}{2}\left( \frac{1}{n} \sum_k\left( \Delta_{ik}^{2} + \Delta_{kj}^{2}\right) - \Delta_{ij}^2 - \frac{1}{n^2} \sum_{kl} \Delta_{kl}^{2} \right) . \] \end{lemma} Since the goal is to preserve dot products (which are a proxy for and in some cases related to distances), we will choose $\vec{y}_i$ to minimize \[ \mbox{Err}(\vec{y}) = \sum_{ij} \left( G_{ij}-\vec{y}_i\cdot\vec{y}_j\right)^{2} . \] The spectral decomposition of $G$ is given as \[ G = \sum_{i=1}^{n} \lambda_i \hat{v}_i\hat{v}_i^T , \] where $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n \geq 0$. In this case, the optimal approximation is given by \[ y_{i\xi} = \sqrt{\lambda_{\xi}}v_{\xi i} , \] for $\xi = 1,2,\ldots,d$, with $d \leq n$, which are simply scaled truncated eigenvectors. Thus $G \sim X^T X$. \subsubsection{Comparison of PCA and MDS} At one level of granularity, PCA and MDS are ``the same,'' since they both boil down to computing a low-rank approximation to the original data. It is worth looking at them in a little more detail, since they come from different motivations and they generalize to non-linear situations in different ways. In addition, there are a few points worth making as a comparison with some of the graph partitioning results we discussed.
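The ``sameness'' can be checked directly. In this sketch (the data, sizes, and names are our own choices), the PCA embedding obtained from the $m \times m$ covariance matrix and the MDS embedding obtained from the $n \times n$ Gram matrix agree coordinate-wise, up to a sign per coordinate, because both come from the SVD of the centered data matrix:

```python
import numpy as np

rng = np.random.default_rng(1)

X = rng.normal(size=(50, 5))         # n = 50 points in R^5 (rows are points)
X -= X.mean(axis=0)                  # center: sum_i x_i = 0
n, m = X.shape
d = 2

# PCA: top-d eigenvectors of the m x m covariance, then project the data.
C = X.T @ X / n
_, EC = np.linalg.eigh(C)            # ascending order, so reverse below
Y_pca = X @ EC[:, ::-1][:, :d]

# MDS: top-d eigenpairs of the n x n Gram matrix, scaled eigenvectors.
G = X @ X.T
wG, VG = np.linalg.eigh(G)
Y_mds = VG[:, ::-1][:, :d] * np.sqrt(wG[::-1][:d])

# Same embedding up to a sign per coordinate (if X = U S W^T, both are U S).
for j in range(d):
    i0 = np.argmax(np.abs(Y_pca[:, j]))
    s = np.sign(Y_pca[i0, j] * Y_mds[i0, j])
    assert np.allclose(Y_pca[:, j], s * Y_mds[:, j])
```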
To compare PCA and MDS: \begin{itemize} \item $C_{ij} = \frac{1}{n}\sum_k x_{ki}x_{kj}$ is a $m \times m$ covariance matrix and takes roughly $O((n+d)m^2)$ time to compute. \item $G_{ij} = \vec{x}_i\cdot\vec{x}_j$ is an $n \times n$ Gram matrix and takes roughly $O((m+d)n^2)$ time to compute. \end{itemize} Here are several things to note: \begin{itemize} \item PCA computes a low-dimensional representation that most faithfully preserves the covariance structure, in an ``averaged'' sense. It minimizes the reconstruction error \[ E_{PCA} = \sum_i || x_i - \sum_{\xi=1}^{d} (x_i \cdot e_{\xi})e_{\xi} ||_2^2 , \] or equivalently it finds a subspace with maximum variance. The basis for this subspace is given by the top $d$ eigenvectors of the $m \times m$ covariance matrix $C=\frac{1}{n}\sum_i x_i x_i^T$. \item MDS computes a low-dimensional representation of the high-dimensional data that most faithfully preserves inner products, \emph{i.e.}, that minimizes \[ E_{MDS} = \sum_{ij} \left( x_i \cdot x_j - \phi_i \cdot \phi_j \right)^2 . \] It does so by computing the Gram matrix of inner products $G_{ij} = x_i \cdot x_j$, so $G \approx X^TX$. If the top $d$ eigenvectors of this are $\{v_{i}\}_{i=1}^{d}$ and the eigenvalues are $\{\lambda_{i}\}_{i=1}^{d}$, then the embedding MDS returns is $\Phi_{i\xi} = \sqrt{\lambda_{\xi}}v_{\xi i}$. \item Although MDS is designed to preserve inner products, it is often motivated to preserve pairwise distances. To see the connection, let \[ S_{ij} = ||x_i-x_j||^2 \] be the matrix of squared inter-point distances. If the points are centered, then a Gram matrix consistent with these squared distances can be derived from the transformation \[ G = -\frac{1}{2} \left(I-uu^T\right) S \left(I-uu^T\right) , \] where $u = \frac{1}{\sqrt{n}}( 1,\cdots,1)$. \end{itemize} Here are several additional things to note with respect to PCA and MDS and kernel methods: \begin{itemize} \item One can ``kernelize'' PCA by writing everything in terms of dot products.
The way to see this is to note that we can ``map'' the data $X$ to a feature space $\mathcal{F}$ via a feature map $\phi$. Since $C = \frac{1}{n}\sum_{j=1}^{n}\phi(x_j)\phi(x_j)^T$ is a covariance matrix, PCA can be computed from solving the eigenvalue problem: Find a $\lambda > 0$ and a vector $v \ne 0$ s.t. \begin{equation} \lambda v = C v = \frac{1}{n} \sum_{j=1}^{n} (\phi(x_j)\cdot v) \phi(x_j) . \label{eqn:pca-ker1} \end{equation} So, all the eigenvectors $v_i$ with $\lambda_i$ must be in the span of the mapped data, \emph{i.e.}, $v \in \mbox{Span}\{\phi(x_1),\ldots,\phi(x_n)\}$, \emph{i.e.}, $v = \sum_{i=1}^{n} \alpha_i \phi(x_i)$ for some set of coefficients $\{\alpha_i\}_{i=1}^{n}$. If we take the dot product of both sides of~(\ref{eqn:pca-ker1}) with $\phi(x_k)$, then we get \[ \lambda(\phi(x_k)\cdot v) = (\phi(x_k)\cdot C v ) , \qquad k=1,\ldots,n . \] If we then define the matrix $K \in \mathbb{R}^{n \times n}$ by \begin{equation} K_{ij} = (\phi(x_i)\cdot\phi(x_j)) = k(x_i,x_j) , \label{eqn:pca-ker2} \end{equation} then to compute the eigenvalues we only need \[ \lambda \vec{\alpha} = K \vec{\alpha} , \qquad \alpha = (\alpha_1,\ldots,\alpha_n)^T . \] Note that we need to normalize the eigenpairs $(\lambda_k,\alpha^k)$, and we also need to center the data in feature space, which we can do by replacing $K$ with $\hat{K} = K - 1_n K - K 1_n + 1_n K 1_n$, where $1_n$ is the $n \times n$ matrix with all entries equal to $1/n$. To extract the feature of a new pattern $\phi(x)$, we project it onto $v^k$: \begin{equation} \label{eqn:pca-ker3} (v^k\cdot\phi(x)) = \sum_{i=1}^{n} \alpha_i^k \phi(x_i)\cdot\phi(x) = \sum_{i=1}^{n} \alpha_i^k k(x_i,x) . \end{equation} So, the nonlinearities enter: \begin{itemize} \item The calculation of the matrix elements in~(\ref{eqn:pca-ker2}). \item The evaluation of the expression~(\ref{eqn:pca-ker3}). \end{itemize} But, we can just compute eigenvalue problems, and there is no need to go explicitly to the high-dimensional space. For more details on this, see ``An Introduction to Kernel-Based Learning Algorithms,'' by M{\"u}ller et al. or ``Nonlinear component analysis as a kernel eigenvalue problem,'' by Sch{\"o}lkopf et al.
\item Kernel PCA, at least for isotropic kernels $K$, where $K(x_i,x_j)=f(||x_i-x_j||)$, is a form of MDS and vice versa. For more details on this, see ``On a Connection between Kernel PCA and Metric Multidimensional Scaling,'' by Williams and ``Dimensionality Reduction: A Short Tutorial,'' by Ghodsi. To see this, recall that \begin{itemize} \item From the distances-squared, $\{\delta_{ij}\}_{ij}$, where $\delta_{ij} = ||x_i-x_j||^2_2 = (x_i - x_j)^T (x_i - x_j)$, we can construct a matrix $A$ with $A_{ij} = - \frac{1}{2} \delta_{ij}$. \item Then, we can let $B = HAH$, where $H$ is a ``centering'' matrix ($H=I-\frac{1}{n}\mathbf{1}\mathbf{1}^T$). This can be interpreted as centering, but really it is just a projection matrix (of a form not unlike ones we have seen). \item Note that $B = HX(HX)^T$ (and $b_{ij} = (x_i-\bar{x})^T(x_j-\bar{x})$, with $\bar{x}=\frac{1}{n}\sum_ix_i$), and thus $B$ is SPSD. \item In the feature space, $\tilde{\delta}_{ij}$ is the Euclidean distance: \begin{eqnarray*} \tilde{\delta}_{ij} &=& (\phi(x_i) - \phi(x_j))^T (\phi(x_i) - \phi(x_j)) \\ &=& ||\phi(x_i) - \phi(x_j)||_2^2 \\ &=& 2(1-r(\delta_{ij})) , \end{eqnarray*} where the last line follows since, for an isotropic kernel, $k(x_i,x_j) = r(\delta_{ij})$, with $r(0) = 1$ (if $K_{ij} = f(||x_i-x_j||)$, then $K_{ij} = r(\delta_{ij})$). In this case, the feature-space $A$ is such that $A_{ij} = r(\delta_{ij}) -1$, \emph{i.e.}, $A=K-\mathbf{1}\mathbf{1}^T$; and since the centering matrix annihilates $\mathbf{1}\mathbf{1}^T$, we have $HAH=HKH$. \end{itemize} So, $K_{MDS} = -\frac{1}{2}(I-ee^T)S(I-ee^T)$, where $S$ is the matrix of squared distances. \end{itemize} So, the bottom line is that PCA and MDS take the data matrix and use the SVD to derive embeddings from eigenvalues and eigenvectors. (In the linear case both PCA and MDS rely on the SVD and can be computed in $O(mn^2)$ time ($m > n$).) They are very similar due to the linear structure and SVD/spectral theory.
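The double-centering identities used above are easy to verify numerically. This sketch (toy data of our own choosing) checks, for the linear kernel $K=XX^T$, that $-\frac{1}{2}HSH = HKH$, where $S$ is the matrix of squared Euclidean distances and $H$ is the centering matrix, and that this agrees with the $\hat{K}$ centering formula:

```python
import numpy as np

rng = np.random.default_rng(2)

X = rng.normal(size=(30, 4))                    # rows are data points
n = X.shape[0]
K = X @ X.T                                     # linear kernel: k(x_i,x_j) = x_i . x_j

# Squared Euclidean distances S_ij = ||x_i - x_j||^2.
S = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)

# Centering matrix H = I - (1/n) 1 1^T.
Hc = np.eye(n) - np.full((n, n), 1.0 / n)

# Double centering: -1/2 H S H recovers the centered Gram matrix H K H,
# since H annihilates the row- and column-constant parts of S.
G = -0.5 * Hc @ S @ Hc
assert np.allclose(G, Hc @ K @ Hc)

# Equivalently, K_hat = K - 1_n K - K 1_n + 1_n K 1_n with (1_n)_ij = 1/n.
one = np.full((n, n), 1.0 / n)
K_hat = K - one @ K - K @ one + one @ K @ one
assert np.allclose(K_hat, Hc @ K @ Hc)
```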
If we start doing nonlinear learning methods or adding additional constraints, then these methods generalize in somewhat different ways. \subsubsection{An aside on kernels and SPSD matrices} The last few comments were about ``kernelizing'' PCA and MDS. Here, we discuss this kernel issue somewhat more generally. Recall that, given a collection $\mathcal{X}$ of data points, which are often but not necessarily elements of $\mathbb{R}^{m}$, techniques such as linear Support Vector Machines (SVMs), Gaussian Processes (GPs), Principal Component Analysis (PCA), and the related Singular Value Decomposition (SVD), identify and extract structure from $\mathcal{X}$ by computing linear functions, i.e., functions in the form of dot products, of the data. (For example, in PCA the subspace spanned by the first $k$ eigenvectors is used to give a $k$-dimensional model of the data with minimal residual; thus, it provides a low-dimensional representation of the data.) Said another way, these algorithms can be written in such a way that they only ``touch'' the data via the correlations between pairs of data points. That is, even if these algorithms are often written in such a way that they access the actual data vectors, they can be written in such a way that they only access the correlations between pairs of data vectors. In principle, then, given an ``oracle'' for a different correlation matrix, one could run the same algorithm by providing correlations from the oracle, rather than the correlations from the original correlation matrix. This is of interest essentially since it provides much greater flexibility in possible computations; or, said another way, it provides much greater flexibility in statistical modeling, without introducing too much additional computational expense. For example, in some cases, there is some sort of nonlinear structure in the data; or the data, e.g. text, may not support the basic linear operations of addition and scalar multiplication.
More commonly, one may simply be interested in working with more flexible statistical models that depend on the data being analyzed, without making assumptions about the underlying geometry of the hypothesized data. In these cases, a class of statistical learning algorithms known as \emph{kernel-based learning methods} have proved to be quite useful. These methods implicitly map the data into much higher-dimensional spaces, e.g., even up to certain $\infty$-dimensional Hilbert spaces, where information about their mutual positions (in the form of inner products) is used for constructing classification, regression, or clustering rules. There are two points that are important here. First, there is often an efficient method to compute inner products between very complex or even infinite dimensional vectors. Second, while general $\infty$-dimensional Hilbert spaces are relatively poorly-structured objects, a certain class of $\infty$-dimensional Hilbert spaces known as Reproducing kernel Hilbert spaces (RKHSs) are sufficiently-heavily regularized that---informally---all of the ``nice'' behaviors of finite-dimensional Euclidean spaces still hold. Thus, kernel-based algorithms provide a way to deal with nonlinear structure by reducing nonlinear algorithms to algorithms that are linear in some (potentially $\infty$-dimensional but heavily regularized) feature space $\mathcal{F}$ that is non-linearly related to the original input space. The generality of this framework should be emphasized. There are some kernels, e.g., Gaussian rbfs, polynomials, etc., that might be called \emph{a priori kernels}, since they take a general form that doesn't depend (too) heavily on the data; but there are other kernels that might be called \emph{data-dependent kernels} that depend very strongly on the data. 
In particular, several of the methods to construct graphs from data that we will discuss next time, e.g., Isomap, local linear embedding, Laplacian eigenmap, etc., can be interpreted as providing data-dependent kernels. These methods first induce some sort of local neighborhood structure on the data and then use this local structure to find a global embedding of the data into a lower dimensional space. The manner in which these different algorithms use the local information to construct the global embedding is quite different; but in general they can be interpreted as kernel PCA applied to specially-constructed Gram matrices. Thus, while they are sometimes described in terms of finding non-linear manifold structure, it is often more fruitful to think of them as constructing a data-dependent kernel, in which case they are useful or not depending on issues related to whether kernel methods are useful or whether mis-specified models are useful. \section{% (03/10/2015): Some Practical Considerations (3 of 4): Non-linear dimension reduction methods} Reading for today. \begin{compactitem} \item ``Laplacian Eigenmaps for dimensionality reduction and data representation,'' in Neural Computation, by Belkin and Niyogi \item ``Diffusion maps and coarse-graining: a unified framework for dimensionality reduction, graph partitioning, and data set parameterization,'' in IEEE-PAMI, by Lafon and Lee \end{compactitem} Today, we will describe several related methods to identify structure in data. The general idea is to do some sort of ``dimensionality reduction'' that is more general than just linear structure that is identified by a straightforward application of the SVD or truncated SVD to the input data. The connection with what we have been discussing is that these procedures construct graphs from the data, and then they perform eigenanalysis on those graphs in order to construct ``low-dimensional embeddings'' of the data. 
These (and many other related) methods are often called \emph{non-linear dimension reduction methods}. In some special cases they identify structure that is meaningfully non-linear; but it is best to think of them as constructing graphs to then construct data-dependent representations of the data that (like other kernel methods) are linear in some nonlinearly transformed version of the data. \subsection{Some general comments} The general framework for these methods is the following. \begin{itemize} \item Derive some (typically sparse) graph from the data, \emph{e.g.}, by connecting nearby data points with an $\epsilon$-NN or $k$-NN rule. \item Derive a matrix from the graph (viz. adjacency matrix, Laplacian matrix). \item Derive an embedding of the data into $\mathbb{R}^{d}$ using the eigenvectors of that matrix. \end{itemize} \emph{Many} algorithms fit this general outline. Here are a few things worth noting about them. \begin{itemize} \item They are not really algorithms in the sense we have been discussing, i.e., there is not a well-defined objective function that one is trying to optimize (exactly or approximately) and for which one is trying to prove running time or quality-of-approximation bounds. \item There exist theorems that say when each of these methods ``works,'' but those theoretical results have assumptions that tend to be rather strong and/or unrealistic. \item Typically one has some sort of intuition and one shows that the algorithm works on some data in certain specialized cases. It is often hard to generalize beyond those special cases, and so it is probably best to think of these as ``exploratory data analysis'' tools that construct data-dependent kernels. \item The intuition underlying these methods is often that the data ``live'' on a low-dimensional manifold. Manifolds are very general structures; but in this context, it is best to think of them as being ``curved'' low-dimensional spaces.
The idealized story is that the processes generating the data have only a few degrees of freedom, that might not correspond to a linear subspace, and we want to reconstruct or find that low-dimensional manifold. \item The procedures used in constructing these data-driven kernels depend on relatively-simple algorithmic primitives: shortest path computations; least-squares approximation; SDP optimization; and eigenvalue decomposition. Since these primitives are relatively simple and well-understood and since they can be run relatively quickly, non-linear dimensionality reduction methods that use them are often used to explore the data. \item This approach is often of interest in semi-supervised learning, where there is a lot of unlabeled data but very little labeled data, and where we want to use the unlabeled data to construct some sort of ``prior'' to help with predictions. \end{itemize} There are a large number of these and they have been reviewed elsewhere; here we will only review those that will be related to the algorithmic and statistical problems we will return to later. \subsection{ISOMAP} ISOMAP takes as input vectors $x_i \in \mathbb{R}^{D}, i=1,\ldots,n$, and it gives as output vectors $y_i \in \mathbb{R}^{d}$, where $d \ll D$. The stated goal/desire is to make near (resp. far) points stay close (resp. far); and the idea to achieve this is to preserve geodesic distance along a submanifold. The algorithm uses geodesic distances in an MDS computation: \begin{enumerate} \item \textbf{Step 1.} Build the nearest neighbor graph, using $k$-NN or $\epsilon$-NN. The choice here is a bit of an art. Typically, one wants to preserve properties such as that the data are connected and/or that they are thought of as being a discretization of a submanifold. Note that the $k$-NN here scales as $O(n^2D)$. \item \textbf{Step 2.} Look at the shortest path or geodesic distance between all pairs of points. That is, compute geodesics. 
Dijkstra's algorithm for shortest paths runs in $O(n^2 \log n + n^2 k )$ time. \item \textbf{Step 3.} Do Metric Multi-Dimensional Scaling (MDS) based on $A$, the shortest path distance matrix. The top $d$ eigenvectors of the Gram matrix then give the embedding. They can be computed in $\approx O(n^2 d)$ time. \end{enumerate} \emph{Advantages} \begin{itemize} \item Runs in polynomial time. \item There are no local minima. \item It is non-iterative. \item It can be used in an ``exploratory'' manner. \end{itemize} \emph{Disadvantages} \begin{itemize} \item Very sensitive to the choice of $\epsilon$ and $k$, which is an art that is coupled in nontrivial ways with data pre-processing. \item No immediate ``out-of-sample extension,'' since there is no obvious geometry to a graph, unless it is assumed about the original data. \item Super-linear running time---computation with all the data points can be expensive, if the number of data points is large. A solution is to choose ``landmark points,'' but for this to work one needs to have already sampled at very high sampling density, which is often not realistic. \end{itemize} These strengths and weaknesses are not peculiar to ISOMAP; they are typical of other graph-based spectral dimensionality-reduction methods (those we turn to next as well as most others). \subsection{Local Linear Embedding (LLE) } For LLE, the input is vectors $x_i\in\mathbb{R}^{D},i=1,\ldots,n$; and the output is vectors $y_i\in\mathbb{R}^{d}$, with $d \ll D$.
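Before moving on, the three ISOMAP steps above can be sketched in a few lines of code. This is a minimal illustration under our own choices (the function name, the dense $O(n^2 D)$ distance computation, and the toy data are not from any reference implementation; a serious implementation would use sparse graphs and landmark points):

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path

def isomap(X, k, d):
    """Minimal ISOMAP sketch: k-NN graph, geodesic distances, classical MDS."""
    n = X.shape[0]
    # Step 1: k-NN graph; inf marks a non-edge for shortest_path
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    G = np.full((n, n), np.inf)
    for i in range(n):
        nbrs = np.argsort(dist[i])[1:k + 1]     # skip self at position 0
        G[i, nbrs] = dist[i, nbrs]
        G[nbrs, i] = dist[nbrs, i]
    # Step 2: geodesic (shortest-path) distances via Dijkstra
    geo = shortest_path(G, method='D')
    # Step 3: classical MDS on the squared geodesic distance matrix
    A = -0.5 * geo**2
    H = np.eye(n) - np.ones((n, n)) / n         # centering matrix
    B = H @ A @ H
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:d]            # top d eigenpairs
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))
```

For data sampled from a connected curve or surface, the output rows give the $d$-dimensional embedding whose Euclidean distances approximate the geodesic distances.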
\emph{Algorithm} \begin{description} \item [Step 1 : Construct the Adjacency Graph] There are two common variations: \begin{enumerate} \item $\epsilon$ neighborhood \item $k$ Nearest Neighbor Graph \end{enumerate} Basically, this involves doing some sort of NN search; the metric of closeness or similarity used is based on prior knowledge; and the usual (implicit or explicit) working assumption is that the neighborhood in the graph $\approx$ the neighborhood of the underlying hypothesized manifold, at least in a ``locally linear'' sense. \item [Step 2 : Choosing weights] That is, construct the graph. Weights $W_{ij}$ must be chosen for all edges $ij$. The idea is that each input point and its $k$-NN can be viewed as samples from a small approximately linear patch on a low-dimensional submanifold, and we can choose weights $W_{ij}$ to get small reconstruction error. That is, weights are chosen based on the projection of each data point on the linear subspace generated by its neighbors: \begin{eqnarray*} & \min_W \Phi(W) = \sum_i ||x_i - \sum_j W_{ij}x_j ||_2^2 & \\ & \mbox{s.t. } W_{ij} = 0 \mbox{ if } (ij) \not\in E \\ & \quad \sum_j W_{ij}=1, \forall i \end{eqnarray*} \item [Step 3 : Mapping to Embedded Co-ordinates] Compute output $y \in \mathbb{R}^d$ by solving the same equations, but now with the $y_i$ as variables. That is, let \[ \Psi(y)=\sum_{i} ||y_i - \sum_j W_{ij} y_j ||^2 , \] for a fixed $W$, and then we want to solve \begin{eqnarray*} & \min \Psi(y) \\ & \quad \mbox{s.t. } \sum_i y_i = 0 \\ & \quad \frac{1}{N} \sum_i y_i y_i^T = I \end{eqnarray*} Solving this minimization problem reduces to finding the eigenvectors corresponding to the $d+1$ lowest eigenvalues of the positive semidefinite matrix $M=(I-W)^T(I-W)$, so that $\Psi(y)=y^TMy$. \end{description} Of course, the lowest eigenvalue is uninteresting for other reasons, so it is typically not included.
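The three steps can be sketched as follows. This is a minimal illustration under our own assumptions (the function name is hypothetical, and the small regularization added to the local Gram matrix, which is standard practice when $k > D$, is our own choice); the last line drops the bottom, constant eigenvector, as just discussed:

```python
import numpy as np

def lle(X, k, d, reg=1e-3):
    """LLE sketch: local reconstruction weights, then bottom eigenvectors of M."""
    n = X.shape[0]
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(dist[i])[1:k + 1]      # k nearest neighbors (skip self)
        Z = X[nbrs] - X[i]                       # shift local patch to the origin
        C = Z @ Z.T                              # local Gram matrix
        C += reg * np.trace(C) * np.eye(k)       # regularize (needed if k > D)
        w = np.linalg.solve(C, np.ones(k))       # minimize reconstruction error
        W[i, nbrs] = w / w.sum()                 # enforce sum_j W_ij = 1
    M = (np.eye(n) - W).T @ (np.eye(n) - W)      # Psi(y) = y^T M y
    vals, vecs = np.linalg.eigh(M)
    return vecs[:, 1:d + 1]                      # drop the bottom (constant) eigenvector
```

The returned columns are the eigenvectors for the $2$nd through $(d+1)$st smallest eigenvalues of $M$, i.e., the embedding coordinates.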
Since we are really computing an embedding, we could keep it if we wanted, but it would not be useful (it would assign uniform values to every node or values proportional to the degree to every node) for the downstream tasks people want to do. \subsection{Laplacian Eigenmaps (LE)} For LE (which we will see has some similarities with LLE), the input is vectors $x_i\in\mathbb{R}^{D},i=1,\ldots,n$; and the output is vectors $y_i\in\mathbb{R}^{d}$, with $d \ll D$. The idea is to compute a low-dimensional representation that preserves proximity relations (basically, a quadratic penalty on nearby points), mapping nearby points to nearby points, where ``nearness'' is encoded in $G$. \emph{Algorithm} \begin{description} \item [Step 1 : Construct the Adjacency Graph] Again, there are two common variations: \begin{enumerate} \item $\epsilon$-neighborhood \item $k$-Nearest Neighbor Graph \end{enumerate} \item [Step 2 : Choosing weights] We use the following rule to assign weights to neighbors: \[ W_{ij}=\left\{ \begin{array}{c c} e^{-\frac{||x_i-x_j||^2}{4t}} & \text{if vertices $i$ \& $j$ are connected by an edge} \\ 0 & \text{otherwise} \end{array} \right. \] Alternatively, we could simply set $W_{ij}=1$ for vertices $i$ and $j$ that are connected by an edge---it should be obvious that this gives similar results as the above rule under appropriate limits, basically since the exponential decay introduces a scale that is a ``soft'' version of this ``hard'' $0$-$1$ rule. As a practical matter, there is usually a fair amount of ``cooking'' to get things to work, and this is one of the knobs to turn to cook things. \item [Step 3 : Eigenmaps] Let \[ \Psi(y)=\sum_{i,j}\frac{w_{ij}||y_i-y_j||^2}{\sqrt{D_{ii} \cdot D_{jj}}} \] where $D$ is the diagonal matrix with $D_{jj}=\sum_i w_{ij}$, and we compute $ y \in \mathbb{R}^d $ such that \begin{eqnarray*} & \min \Psi(y) \\ & \quad \mbox{s.t.
} \sum_i y_i = 0 \quad\mbox{centered} \\ & \quad \mbox{and } \frac{1}{N} \sum_i y_i y_i^T = I \quad\mbox{unit covariance} \end{eqnarray*} that is, the constraints remove the trivial solutions that would otherwise minimize $\Psi$ on each connected component of the graph. This can be computed from the bottom $d+1$ eigenvectors of $\mathcal{L} = I-D^{-1/2}WD^{-1/2}$, after dropping the bottom eigenvector (for the reason mentioned above). \end{description} LE has close connections to analysis on manifolds, and understanding it will shed light on when it is appropriate to use and what its limitations are. \begin{itemize} \item Laplacian in $\mathbb{R}^{d}$: $\Delta f = -\sum_i \frac{\partial^2 f }{ \partial x_i^2 }$ \item Manifold Laplacian: change is measured along the tangent space of the manifold. \end{itemize} The weighted graph $\approx$ a discretized representation of the manifold. There are a number of analogies, \emph{e.g.}, Stokes' Theorem (which classically is a statement about the integration of differential forms which generalizes popular theorems from vector calculus about integrating over a boundary versus integrating inside the region, and which thus generalizes the fundamental theorem of calculus): \begin{itemize} \item Manifold: $\int_{M} ||\nabla f||^2 = \int_{M} f \Delta f$ \item Graph: $\sum_{(ij)\in E} w_{ij}(f_i- f_j)^2 = f^T L f$ \end{itemize} An extension of LE to so-called Diffusion Maps (which we will get to next time) will provide additional insight on these connections. Note that the Laplacian is like a derivative, and so minimizing it will be something like minimizing the norm of a derivative. \subsection{Interpretation as data-dependent kernels} As we mentioned, these procedures can be viewed as constructing data-dependent kernels. There are a number of technical issues, mostly having to do with the discrete-to-continuous transition, that we won't get into in this brief discussion. This perspective sheds light on why they work, when they work, and when they might not be expected to work.
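Before turning to that interpretation, the LE construction described above (heat-kernel weights on a $k$-NN graph, then the bottom eigenvectors of $\mathcal{L} = I - D^{-1/2}WD^{-1/2}$) can be sketched concretely. The function name and toy usage are our own illustrative choices, not a reference implementation:

```python
import numpy as np

def laplacian_eigenmaps(X, k, d, t=1.0):
    """LE sketch: heat-kernel k-NN weights, bottom eigenvectors of I - D^{-1/2} W D^{-1/2}."""
    n = X.shape[0]
    dist2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(dist2[i])[1:k + 1]
        W[i, nbrs] = np.exp(-dist2[i, nbrs] / (4.0 * t))   # heat-kernel weights
    W = np.maximum(W, W.T)                                 # symmetrize the k-NN graph
    dsqrt = np.sqrt(W.sum(axis=1))
    Lnorm = np.eye(n) - W / np.outer(dsqrt, dsqrt)         # normalized Laplacian
    vals, vecs = np.linalg.eigh(Lnorm)
    return vecs[:, 1:d + 1]                                # drop the trivial bottom eigenvector
```

As in LLE, the bottom eigenvector is dropped; the remaining $d$ eigenvectors give the embedding coordinates.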
\begin{itemize} \item ISOMAP. Recall that for MDS, we have $\delta_{ij}^{2} = (x_i-x_j)^T(x_i-x_j)=$ dissimilarities. Then $A$ is a matrix s.t. $A_{ij} = -\frac{1}{2}\delta_{ij}^{2}$, and $B=HAH$, with $H=I_n-\frac{1}{n}11^T$, a ``centering'' or ``projection'' matrix. So, if the kernel $K(x_i,x_j)$ is stationary, i.e., if it is a function of $||x_i-x_j||^2 = \delta_{ij}^{2}$, as it is in the above construction, then $K(x_i,x_j)=r(\delta_{ij})$, for some $r$ that scales s.t. $r(0)=1$. Then $\tilde{\delta}_{ij}^{2}$ is the Euclidean distance in feature space, and if $A$ s.t. $A_{ij} = r(\delta_{ij})-1$, then $A=K-11^T$. The ``centering matrix'' $H$ annihilates $11^T$, so $HAH=HKH$. (See the Williams paper.) So, \[ K_{ISOMAP} = HAH = -\frac{1}{2} H \mathcal{D} H = -\frac{1}{2}\left(I-\frac{1}{n}11^T\right) \mathcal{D} \left(I-\frac{1}{n}11^T\right) \] where $\mathcal{D}$ is the squared geodesic distance matrix. \item LE. Recall that LE minimizes $\psi^T L \psi = \frac{1}{2}\sum_{ij} (\psi_i-\psi_j)^2 W_{ij}$, and doing this involved computing eigenvectors of $L$ or $\mathcal{L}$, depending on the construction. The point is that $L$ has close connections to diffusions on a graph---think of it in terms of a continuous time Markov chain: \[ \frac{\partial \psi(t)}{\partial t} = - L \psi(t) . \] The solution is a Green's function or heat kernel, related to the matrix exponential of $L$: \[ K_t = \exp(-Lt) = \sum_{\xi} \phi_{\xi}\phi_{\xi}^{T} e^{-\lambda_{\xi}t} , \] where $\phi_{\xi}$, $\lambda_{\xi}$ are the eigenvectors/eigenvalues of $L$. Then, $\psi(t) = K_t \psi(0)$ is the general solution, and under assumptions one can show that \[ K_L = \frac{1}{2}L^{+} \] is related to the ``commute time'' distance of diffusion: \[ C_{ij} \sim L_{ii}^{+} + L_{jj}^{+} - L_{ij}^{+} - L_{ji}^{+} . \] For the difference between the commute time in a continuous time Markov chain and the geodesic distance on the graph, think of the dumbbell example; we will get to this in more detail next time. \item LLE.
Recall that this says that we are approximating each point as a linear combination of its neighbors. Let $(W_{n})_{ij}, i,j\in[n]$ be the weight of a point $x_j$ in the expansion of $x_i$; then one can show that \[ K_n(x_i,x_j) = \left( (I-W_n)^T(I-W_n) \right)_{ij} \] is PD on the domain $\mathcal{X}:x_1,\ldots,x_n$. Also, one can show that if $\lambda$ is the largest eigenvalue of $(I-W)^T(I-W)=M$, then \[ K_{LLE} = \left( (\lambda-1)I + W^T + W - W^TW \right) \] is a PSD matrix, and thus a kernel. Note also that, under assumptions, we can view $M=(I-W)^T(I-W)$ as $\mathcal{L}^2$. \end{itemize} \subsection{Connection to random walks on the graph: more on LE and diffusions} Recall that, given a data set consisting of vectors, we can construct a graph $G=(V,E)$ in one of several ways. Given that graph, consider the problem of mapping $G$ to a \emph{line} s.t. connected points stay as close together as possible. Let $\vec{y}=(y_1,\ldots,y_n)^T$ be the map, and by a ``good'' map we will mean one that minimizes $\sum_{ij} (y_i-y_j)^2W_{ij}$ under appropriate constraints, \emph{i.e.}, it will incur a big penalty if $W_{ij}$ is large and $y_i$ and $y_j$ are far apart. This has important connections with diffusions and random walks that we now discuss. We start with the following claim: \begin{claim} $\frac{1}{2}\sum_{i,j} w_{ij} (y_i-y_j)^2=y^{T}L y$ \end{claim} \begin{proof} Recall that $W_{ij}$ is symmetric and that $D_{ii}= \sum_j W_{ij}$. Then: \begin{align*} \frac{1}{2}\sum_{i,j} w_{ij} (y_i-y_j)^2&=\frac{1}{2}\sum_{i,j} \left( w_{ij}y_i^2+w_{ij} y_j^2-2w_{ij}y_iy_j \right)\\ &=\sum_{i} D_{ii} y_i^2-\sum_{ij} w_{ij} y_iy_j\\ &=y^T L y \end{align*} where $L=D-W$, $W$ is the weight matrix, and $L$ is the Laplacian, a symmetric, positive semidefinite matrix that can be thought of as an operator on functions defined on the vertices of $G$. \end{proof} The following, which we have seen before, is a corollary: \begin{claim} $L$ is SPSD. \end{claim} \begin{proof} Immediate, since $W_{ij}\geq0$.
\end{proof} Recall that the solution to \begin{eqnarray*} \arg \min y^T L y\\ \textrm{s.t.} \quad y^T Dy=1 , \end{eqnarray*} where $D$ gives a measure of the importance of the vertices, turns out to be given by solving a generalized eigenvector problem $$ Ly=\lambda Dy , $$ \emph{i.e.}, computing the bottom eigenvectors of that eigenproblem. Actually, we usually solve, \begin{eqnarray*} \arg \min y^T L y\\ \textrm{s.t.} \quad y^TD\vec{1}=0\\ y^T Dy=1 , \end{eqnarray*} the reason being that since $\vec{1}=(1,\ldots,1)$ is an eigenvector with eigenvalue $0$, it is typically removed. As we saw above, the condition $y^T D\vec{1} = 0$ can be interpreted as removing a ``translation invariance'' in $y$ and so is removed. Alternatively, it is uninteresting if these coordinates are going to be used simply to provide an embedding for other downstream tasks. Also, the condition $y^T D y=1$ removes an arbitrary scaling factor in the embedding, which is a more benign thing to do if we are using the coordinates as an embedding. From the previous discussion on Cheeger's Inequality, we know that putting the graph on a line (and, in that setting, sweeping to find a good partition) can reveal when the graph doesn't ``look like'' a line graph. Among other things, the distances on the line may be distorted a lot, relative to the original distances in the graph (even if they are preserved ``on average''). Thus, more generally, the goal is to embed a graph in $\mathbb{R}^{n}$ s.t. distances are ``meaningful,'' where meaningful might mean the same as or very similar to distances in the graph. Let $G=(V,E,W)$. We know that if $W$ is symmetric, $W=W^T$, and point-wise positive, $W(x,y)\geq0$, then we can interpret pairwise similarities as probability mass transitions in a Markov chain on a graph. Let $d(x) = \sum_z W(x,z) = $ the degree of node $x$.
Then let $P$ be the $n \times n$ matrix with entries \[ P(x,y) = \frac{W(x,y)}{d(x)} , \] which is the transition probability of going from $x$ to $y$ in one step, and which reflects the first-order neighborhood of the graph. Then, $P^{t}$ encodes higher-order neighborhoods of the graph; this is sometimes interpreted as $\approx$ the ``intrinsic geometry'' of an underlying hypothesized manifold. If the graph is connected, as it usually is due to data preprocessing, then \[ \lim_{t \rightarrow \infty} P^{t}(x,y) = \phi_{0}(y) , \] where $\phi_{0}$ is the unique stationary distribution \[ \phi_{0}(x) = \frac{d(x)}{\sum_z d(z)} \] of the associated Markov chain. (Note: the Markov chain is reversible, since detailed balance is satisfied: $\phi_{0}(x)P_{1}(x,y) = \phi_{0}(y)P_1(y,x)$.) Thus, the graph $G$ defines a random walk. For a node $i$, the probability of going from $i$ to $j$ is $P'_{ij}=\dfrac{w_{ij}}{d_i}$, where $d_i=\sum_{j} w_{ij}$. Suppose you are at node $i$ and you move from $i$ in the following way: $$ \begin{cases} \textrm{move to a neighbor chosen u.a.r., each w.p. } \dfrac{1}{d_i} & \textrm{w.p.}\frac{1}{2} \\ \textrm{stay at node $i$} & \textrm{w.p.} \frac{1}{2} \end{cases} $$ Then the transition matrix is $P \in {\mathbb{R}}^{n \times n}$, with $P = \dfrac{1}{2}I + \dfrac{1}{2} P'$. This is the so-called ``lazy'' random walk. \textbf{Fact:} If $G$ is connected, then for any initial measure $v$ on the vertices, $\lim_{t\rightarrow \infty} (P^{t} v)(i)=\dfrac{d_i}{\sum_{j} d_j} = \phi_0(i)$, i.e., the walk converges to the stationary distribution. $P$ is related to the normalized Laplacian. If we look at the pre-asymptotic state, for $1 \ll t \ll \infty$, we could define similarity between vertices $x$ and $z$ in terms of the similarity between the two densities $P_t(x,\cdot)$ and $P_t(z,\cdot)$. That is, \begin{itemize} \item For $t \in (0,\infty)$, we want a metric between nodes s.t. $x$ and $z$ are close if $P_{t}(x,\cdot)$ and $P_{t}(z,\cdot)$ are close.
\item There are various notions of closeness: \emph{e.g.}, the $\ell_1$ norm (see the Szummer and Jaakkola reference); Kullback-Leibler divergences; $\ell_2$ distances; and so on. \item The $\ell_2$ distance is what we will focus on here (although we might revisit some of the others later). \end{itemize} In this setting, the $\ell_2$ distance is defined as \begin{eqnarray} \nonumber D_t^2(x,z) &=& ||P_t(x,\cdot)-P_t(z,\cdot)||^2_{1/\phi_0} \\ &=& \sum_{y} \frac{(P_t(x,y)-P_t(z,y))^2}{\phi_0(y)} . \label{eqn:dist1_xxx} \end{eqnarray} (So, in particular, the weights $\frac{1}{\phi_{0}(y)}$ penalize discrepancies on domains of low density more than high-density domains.) This notion of distance between points will be small if they are connected by many paths. The intuition is that if there are many paths, then there is some sort of geometric redundancy, and we can do inference as with low-rank approximations. (BTW, it is worth thinking about the difference between minimum spanning trees and random spanning trees, or degree and spectral measures of ranking like eigenvector centrality.) \section{% (03/12/2015): Some Practical Considerations (4 of 4): More on diffusions and semi-supervised graph construction} Reading for today. \begin{compactitem} \item ``Transductive learning via spectral graph partitioning,'' in ICML, by Joachims \item ``Semi-Supervised Learning Using Gaussian Fields and Harmonic Functions,'' in ICML, by Zhu, Ghahramani, and Lafferty \item ``Learning with local and global consistency,'' in NIPS, by Zhou et al. \end{compactitem} Today, we will continue talking about the connection between non-linear dimensionality reduction methods and diffusions and random walks. We will also describe several methods for constructing graphs that are ``semi-supervised,'' in the sense that there are labels associated with some of the data points, and we will use these labels in the process of constructing the graph.
These will give rise to expressions similar to those we saw with the unsupervised methods, although there will be some important differences. \subsection{Introduction to diffusion-based distances in graph construction} Recall from last time that we have our graph $G$, and if we run a random walk to the asymptotic state, then we converge to the stationary distribution. Here, we are interested in looking at the pre-asymptotic state, i.e., at a time $t$ such that $1 \ll t \ll \infty$, and we want to define similarity between vertices $x$ and $z$, i.e., a metric between nodes $x$ and $z$, such that $x$ and $z$ are close if $P_{t}(x,\cdot)$ and $P_{t}(z,\cdot)$ are close. Although there are other distance notions one could use, we will work with the $\ell_2$ distance. In this setting, the $\ell_2$ distance is defined as \begin{eqnarray} \nonumber D_t^2(x,z) &=& ||P_t(x,\cdot)-P_t(z,\cdot)||^2_{1/\phi_0} \\ &=& \sum_{y} \frac{(P_t(x,y)-P_t(z,y))^2}{\phi_0(y)} . \label{eqn:dist1} \end{eqnarray} Suppose the transition matrix $P$ has left and right eigenvectors and eigenvalues $|\lambda_0|\geq|\lambda_1|\geq\ldots\geq|\lambda_{n-1}|\geq0$ s.t. \begin{eqnarray*} \phi_j^TP &=& \lambda_j\phi_j^T \\ P\psi_j &=& \lambda_j \psi_j , \end{eqnarray*} where we note that $\lambda_0=1$ and $\psi_0=\vec{1}$. Normalize s.t. $\phi_k^T\psi_{\ell} = \delta_{k\ell}$ so that \begin{eqnarray*} \| \phi_{\ell} \|^2_{1/\phi_0} &=& \sum_x \frac{ \phi_{\ell}^2(x) }{ \phi_0(x) } = 1 \\ \| \psi_{\ell} \|^2_{\phi_0} &=& \sum_x \psi_{\ell}^2(x) \phi_0(x) = 1 , \end{eqnarray*} i.e., normalize the left (resp. right) eigenvectors of $P$ w.r.t. $1/\phi_0$ (resp. $\phi_0$). If $P_t(x,y)$ is the kernel of the $t^{th}$ iterate of $P$, then we have the following spectral decomposition: \begin{equation} P_t(x,y)=\sum_j \lambda_j^t \psi_j(x)\phi_j(y) .
\label{eqn:spectdecomp1} \end{equation} (This is essentially a weighted PCA/SVD of $P^t$, where as usual the first $k$ terms provide the ``best'' rank-$k$ approximation, where the ``best'' is w.r.t. $\|A\|^2 = \sum_x \sum_y \phi_0(x) A(x,y)^2 \phi_0(y)$.) If we insert Eqn.~(\ref{eqn:spectdecomp1}) into Eqn.~(\ref{eqn:dist1}), then one can show that the $\ell_2$ distance is: \begin{eqnarray} \label{eqn:diffdist1} D_t^2(x,z) &=& \sum_{j=1}^{n-1} \lambda_j^{2t} (\psi_j(x)-\psi_j(z))^2 \\ \label{eqn:diffdist2} &\approx& \sum_{j=1}^{k} \lambda_j^{2t} (\psi_j(x)-\psi_j(z))^2 \\ \label{eqn:diffdist3} &\approx& \sum_{j=1}^{k} (\psi_j(x)-\psi_j(z))^2 . \end{eqnarray} Eqn.~(\ref{eqn:diffdist1}) says that this provides a legitimate distance, and establishing this is on HW2. The approximation of Eqn.~(\ref{eqn:diffdist2}) holds if the eigenvalues decay quickly, and the approximation of Eqn.~(\ref{eqn:diffdist3}) holds if the large eigenvalues are all roughly the same size. This latter expression is what is provided by LE. \subsection{More on diffusion-based distances in graph construction} Recall from last time that LE can be extended to include all of the eigenvectors and also to weight the embedding by the eigenvalues. This gives rise to an embedding based on diffusions, sometimes called Diffusion Maps, that defines a distance in Euclidean space that is related to a diffusion-based distance on the graph. In particular, once we choose $k$ in some way, then here is the picture of the Diffusion Map: \begin{center} Map $\Psi_t: X \rightarrow \left(\begin{array}{c} \lambda_1^t \psi_1(x)\\ \vdots\\ \lambda_k^t\psi_k(x) \end{array}\right)$ \end{center} and so \begin{eqnarray*} D_t^2(x,z) &\approx& \sum_{j=1}^{k} \lambda_j^{2t} (\psi_j(x)-\psi_j(z))^2 \\ &=& ||\Psi_t(x)-\Psi_t(z)||^2 . \end{eqnarray*} This is a low-dimensional embedding, where the Euclidean distance in the embedded space corresponds to the ``diffusion distance'' in the original graph.
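This correspondence is easy to check numerically. The following sketch (our own construction, not from the references) computes the diffusion-map coordinates $\lambda_j^t\psi_j(x)$ via the symmetric conjugate $D^{-1/2}WD^{-1/2}$ of $P=D^{-1}W$, with $\psi_j$ normalized so that $\|\psi_j\|^2_{\phi_0}=1$; keeping all $n-1$ nontrivial coordinates, the Euclidean distance between embeddings reproduces $D_t^2(x,z)$ exactly:

```python
import numpy as np

def diffusion_map(W, d, t):
    """Diffusion-map coordinates (lambda_j^t psi_j(x))_{j=1..d} from affinities W."""
    deg = W.sum(axis=1)
    # Symmetric conjugate S = D^{-1/2} W D^{-1/2} of the walk P = D^{-1} W
    S = W / np.sqrt(np.outer(deg, deg))
    vals, U = np.linalg.eigh(S)
    order = np.argsort(-vals)                    # lambda_0 = 1 comes first
    vals, U = vals[order], U[:, order]
    # Right eigenvectors of P, normalized so that ||psi_j||^2_{phi_0} = 1
    psi = U / np.sqrt(deg)[:, None] * np.sqrt(deg.sum())
    return psi[:, 1:d + 1] * vals[1:d + 1] ** t  # drop the trivial psi_0
```

Working through $S$ keeps everything in terms of a symmetric eigenproblem, which is the standard numerical trick for reversible chains.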
Here is the picture of the relationships: \begin{center} \[\begin{array}{ccc} G &\leftrightarrow &R^n\\ | &\quad &| \\ \textrm{diffusion} & \leftrightarrow &||\cdot||_2 \end{array} \] \end{center} \textbf{Fact:} This defines a distance. If you think of a diffusion notion of distance in a graph $G$, this identically equals the Euclidean distance $||\cdot ||_2$ between $\Psi(i)$ and $\Psi(j)$. The diffusion notion of distance is related to the commute time between node $i$ and node $j$. We will describe this next time when we talk about resistor networks. In particular, Laplacian Eigenmaps chooses $k=k^*$, for some fixed $k^*$, and sets $t=0$. Under certain nice limits, this is close to the Laplace-Beltrami operator on the hypothesized manifold. But, more generally, these results will hold even if the graph is not drawn from such a nice setting. \begin{itemize} \item None of this discussion assumes that the original vector data come from low-dimensional manifolds, although one does get a nice interpretation in that case. \item It is fair to ask what happens with the spectral decomposition when there is no manifold, e.g., a star graph or a constant-degree expander. Many of the ideas we talked about earlier in the semester would be relevant in this case. \end{itemize} \subsection{A simple result connecting random walks to NCUT/conductance} There are connections with graph partitioning, but understanding the connection with random walks opens up a lot of other possibilities. Here, we describe one particularly simple result: that a random walker, starting in the stationary distribution, goes between a set and its complement with probability that depends on the NCUT of that set.
Although this result is simple, understanding it will be good since it will open up several other possibilities: what if one doesn't run a random walk to the asymptotic state; what if one runs a random walk just a few steps starting from an arbitrary distribution; when does the random walk provide regularization; and so on? Here, we provide an interpretation for the NCUT objective (and thus related to normalized spectral clustering as well as conductance): when minimizing NCUT, we are looking for a cut through the graph s.t. the random walk seldom transitions from $A$ to $\bar{A}$, where $A \subset V$. (This result says that conductance/NCUT is not actually looking for good cuts/partitions, but instead should be interpreted as providing bottlenecks to diffusion.) \begin{lemma} Let $P=D^{-1}W$ be a random walk transition matrix. Let $ \left( X_t \right)_{t \in \mathbb{Z}^{+}}$ be the random walk started at $X_0 \sim \pi$, i.e., started from the stationary distribution. For disjoint subsets $A,B \subset V$, let $\mathbb{P}\left[ B | A \right] = \mathbb{P}\left[ X_1 \in B | X_0 \in A \right]$. Then, \[ NCUT\left(A,\bar{A}\right) = \mathbb{P}\left[ \bar{A} | A \right] + \mathbb{P}\left[ A | \bar{A} \right] .
\] \end{lemma} \begin{proof} Observe that \begin{eqnarray*} \mathbb{P}\left[ X_0 \in A \mbox{ and } X_1 \in B \right] &=& \sum_{i \in A,j \in B} \mathbb{P}\left[ X_0 = i \mbox{ and } X_1 = j \right] \\ &=& \sum_{i \in A,j \in B} \pi_i P_{ij} \\ &=& \sum_{i \in A,j \in B} \frac{ d_i }{ \mbox{Vol}(V) } \frac{ W_{ij} }{ d_i } \\ &=& \frac{ 1 }{ \mbox{Vol}(V) } \sum_{i \in A,j \in B} W_{ij} \end{eqnarray*} From this, we have that \begin{eqnarray*} \mathbb{P}\left[ X_1 \in B | X_0 \in A \right] &=& \frac{ \mathbb{P}\left[ X_0 \in A \mbox{ and } X_1 \in B \right] }{ \mathbb{P}\left[ X_0 \in A \right] } \\ &=& \left( \frac{1}{\mbox{Vol}(V) } \sum_{i \in A, j\in B } W_{ij} \right) \left( \frac{ \mbox{Vol}(A) }{ \mbox{Vol}(V) } \right)^{-1} \\ &=& \frac{ 1 }{ \mbox{Vol}(A) } \sum_{i \in A,j \in B} W_{ij} \end{eqnarray*} The lemma follows from this and the definition of NCUT. \end{proof} \subsection{Overview of semi-supervised methods for graph construction} Above, we constructed graphs/Laplacians in an unsupervised manner. That is, there were just data points that were feature vectors without any classification/regression labels associated with them; and we constructed graphs from those unlabeled data points by using various NN rules and optimizing objectives that quantified the idea that nearby data points should be close or smooth in the graph. The graphs were then used for problems such as data representation and unsupervised graph clustering that don't involve classification/regression labels, although in some cases they were also used for classification or regression problems. Here, we consider the situation in which some of the data have labels and we want to construct graphs to help make predictions for the unlabeled data. In between the extremes of pure unsupervised learning and pure supervised learning, there are semi-supervised learning, transductive learning, and several other related classes of machine learning methods.
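Returning briefly to the lemma above: it is easy to verify numerically. The following sketch (our own helper functions, not from the lecture) checks $NCUT(A,\bar{A}) = \mathbb{P}[\bar{A}|A] + \mathbb{P}[A|\bar{A}]$ on a random weighted graph:

```python
import numpy as np

def ncut(W, A, Abar):
    """NCUT(A, Abar) = cut/Vol(A) + cut/Vol(Abar) for a symmetric weight matrix W."""
    cut = W[np.ix_(A, Abar)].sum()
    return cut / W[A].sum() + cut / W[Abar].sum()

def step_prob(W, A, B):
    """P[X_1 in B | X_0 in A] for the walk P = D^{-1} W started at stationarity."""
    deg = W.sum(axis=1)
    pi = deg / deg.sum()                      # stationary distribution
    P = W / deg[:, None]
    joint = sum(pi[i] * P[i, j] for i in A for j in B)
    return joint / pi[A].sum()
```

The identity holds exactly (not just approximately), since each conditional probability telescopes to $\mbox{cut}(A,\bar{A})/\mbox{Vol}(A)$ as in the proof.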
It is in this intermediate regime where using labels for graph construction is most interesting. Before proceeding, here are a few things to note. \begin{itemize} \item If unlabeled data and labeled data come from the same distribution, then there is no difference between unlabeled data and test data. Thus, this transductive setup amounts to being able to see the test data (but not the labels) before doing training. (Basically, the transductive learner can look at all of the data, including the test data without labels and as much training data as desired, to structure the hypothesis space.) \item People have used labels to augment the graph construction process in an explicit manner. Below, we will describe an example of how this is done implicitly in several cases. \item We will see that linear equations of a certain form that involve Laplacian matrices will arise. This form is rather natural, and it has strong similarities with what we saw previously in the unsupervised setting. In addition, it also has strong similarities with local spectral methods and locally-biased spectral methods that we will get to in a few weeks. \end{itemize} We will consider three approaches (those of Joachims, ZGL, and Zhou et al.). Each of these general approaches considers constructing an augmented graph, with extra nodes, $s$ and $t$, that connect to the nodes that have labels. Each can be understood within the following framework. Let $L=D-A = B^TCB$, where $B$ is the unweighted edge-incidence matrix and $C$ is the diagonal matrix of edge weights. Recall, then, that the $s,t$-mincut problem is: \[ \min_{x \mbox{ s.t. } x_s=1,x_t=0} \|Bx \|_{C,1} = \min_{x \mbox{ s.t. } x_s=1,x_t=0} \sum_{(u,v) \in E} C_{(u,v)}|x_u-x_v | . \] The $\ell_2$ minorant of this $s,t$-mincut problem is: \[ \min_{x \mbox{ s.t. } x_s=1,x_t=0} \|Bx \|_{C,2} = \min_{x \mbox{ s.t. } x_s=1,x_t=0} \left( \sum_{(u,v) \in E} C_{(u,v)} |x_u-x_v |^2 \right)^{1/2} . \] This latter problem is equivalent to: \[ \min_{x \mbox{ s.t.
} x_s=1,x_t=0} \frac{1}{2} \|Bx \|_{C,2}^{2} = \min_{x \mbox{ s.t. } x_s=1,x_t=0} \sum_{(u,v) \in E} C_{(u,v)} |x_u-x_v |^2 = \min_{x \mbox{ s.t. } x_s=1,x_t=0} x^T L x . \] The methods will construct Laplacian-based expressions by considering various types of $s$-$t$ mincut problems and then relaxing to the associated $\ell_2$ minorant. \subsection{Three examples of semi-supervised graph construction methods} The Joachims paper does a bunch of things, e.g., several engineering heuristics that are difficult to relate to a general setting but that no doubt help the results in practice, but the following is essentially what he does. Given a graph and labels, he wants to find a vector $\vec{y}$ s.t. it satisfies the bicriteria: \begin{enumerate} \item it minimizes $y^TLy$, and \item it has values $$ \begin{cases} 1 & \textrm{ for nodes in class $j$} \\ -1 & \textrm{ for nodes not in class $j$} \end{cases} $$ \end{enumerate} What he does is essentially the following. Given $G=(V,E)$, he adds extra nodes to the node set: $$ \begin{cases} s & \textrm{ with the current class} \\ t & \textrm{ with the other class} \end{cases} $$ Then, there is the issue of what to add as weights when those nodes connect to nodes in $V$. The two obvious choices of weights are $1$ (i.e., uniform) or equal to the degree. Of course, other choices are possible, and that is a parameter one could play with. Here, we will consider the uniform case. Motivated by problems with mincut (basically, what we described before, where it tends to cut off small pieces), Joachims instead considers NCUT and so arrives at the following solution: \[ Y = \left( D_S + L \right) ^{-1} S , \] where $D_S$ is a diagonal matrix with the row sums of $S$ on the diagonal, and $S$ is a vector corresponding to the class connected to the node with label $s$. Note that since most of the rows of $S$ equal zero (since they are unlabeled nodes) and the rest have only 1 nonzero, $D_S$ is a sparse diagonal indicator matrix.
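A small sketch of this solve, under our own assumptions (the function name and the toy two-cluster graph are illustrative, not from the paper): labeled nodes enter through the sparse diagonal $D_S$ and the label vector $S$, and the resulting prediction propagates the labels smoothly through the graph:

```python
import numpy as np

def transduce(W, labels):
    """Solve (D_S + L) Y = S, where `labels` maps node index -> +1/-1."""
    n = W.shape[0]
    L = np.diag(W.sum(axis=1)) - W            # graph Laplacian
    s = np.zeros(n)
    for i, y in labels.items():
        s[i] = y
    D_S = np.diag((s != 0).astype(float))     # indicator of labeled nodes
    return np.linalg.solve(D_S + L, s)
```

On a graph with two dense clusters joined by a weak edge, labeling one node in each cluster gives predictions whose signs agree with the cluster membership, which is the behavior the smoothness term $y^TLy$ is meant to produce.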
ZGL is similar to Joachims, except that they strictly enforce labeling on the labeled samples. Their setting is that they are interpreting this in terms of a Gaussian random field on the graph (which is like a NN method, except that nearest labels are captured in terms of random vectors on $G$). They provide hard constraints on class labels. In particular, they want to solve \begin{eqnarray*} \min_y & & \frac{1}{2} y^T L y \\ \mbox{s.t.} & & y_i = \left\{ \begin{array}{l l} 1 & \quad \text{node $i$ is labeled in class $j$}\\ 0 & \quad \text{node $i$ is labeled in another class}\\ \mbox{free} & \quad \text{otherwise} \end{array} \right. \end{eqnarray*} This essentially involves constructing a new graph from $G$, where the labels involve adding extra nodes, $s$ and $t$, where $s$ links to the current class and $t$ links to the other class. For ZGL, the weights $=\infty$ between $s$ and the current class as well as between $t$ and the other class. Zhou et al. extend this to account explicitly for the bicriteria that: \begin{enumerate} \item \label{point:zhao1} nearby points should have the same labels; and \item \label{point:zhao2} points on the same structure (clusters or manifolds) should have the same label. \end{enumerate} Note that Point~\ref{point:zhao1} is a ``local'' condition, in that it depends on nearby data points, while Point~\ref{point:zhao2} is a ``global'' condition, in that it depends on large-scale properties of clusters. They point out that different semi-supervised methods take into account these two different properties and weight them in different ways. Zhou et al. try to take both into account by ``iteratively'' spreading the ZGL-style label information to get a classification function that has local and global consistency properties and is sufficiently smooth with respect to labeled and unlabeled data. In more detail, here is the Zhou et al. algorithm. \begin{enumerate} \item Form the affinity matrix $W$ from an rbf kernel. \item Construct $\tilde{W} = D^{-1/2}WD^{-1/2}$.
\item Iterate $Y(t+1) = \alpha \tilde{W} Y(t) + \left(1-\alpha\right)S $, where $\alpha \in (0,1)$ is a parameter, and where $S$ is a class label vector. \item Let $Y^{*}$ be the limit, and output it. \end{enumerate} Note that $Y^{*} = \left(1-\alpha\right)\left(I-\alpha\tilde{W}\right)^{-1}S$. Alternatively, \[ Y = \left( L + \alpha D \right)^{-1} S = \left( D - \beta A \right)^{-1} S , \] where $Y$ is the prediction vector, and where $S$ is the matrix of labels. Here, we have chosen/defined $\alpha = \frac{1-\beta}{\beta}$ to relate the two forms. Then the ``mincut graph'' has $G=(V,E)$, with nodes $s$ and $t$ such that $$ \begin{cases} s & \textrm{ connects to each sample labeled with class $j$ with weight $\alpha$} \\ t & \textrm{ connects to \emph{all} nodes in $G$ (except $s$) with weight $\alpha(d_i-s_i)$} \end{cases} $$ The $\ell_2$ minorant of this mincut graph has: \[ \min_{x \mbox{ s.t. } x_s=1,x_t=0} \frac{1}{2} \|Bx \|_{C,2}^{2} = \min_{x \mbox{ s.t. } x_s=1,x_t=0} \frac{1}{2} \left( \begin{array}{c} 1 \\ y \\ 0 \end{array} \right)^{T} \left( \begin{array}{ccc} \alpha e^Ts & -\alpha s^T & 0 \\ -\alpha s & \alpha D + L & -\alpha(d-s) \\ 0 & -\alpha(d-s)^T & \alpha e^T (d-s) \\ \end{array} \right) \left( \begin{array}{c} 1 \\ y \\ 0 \end{array} \right) . \] In this case, $y$ solves $\left(\alpha D + L\right) y = \alpha s$. We will see an equation of this form below in a somewhat different context. But for the semi-supervised learning context, we can interpret this as a class-specific smoothness constraint. To do so, define \[ Q(Y) = \frac{1}{2}\left( \sum_{i,j=1}^{n} W_{ij} \| \frac{1}{\sqrt{D_{ii}}}Y_i - \frac{1}{\sqrt{D_{jj}}} Y_j\|^2 + \mu \sum_{i=1}^{n} \| Y_i - S_i \|^2 \right) \] to be the ``cost'' associated with the prediction $Y$. The first term is a ``smoothness constraint,'' which is the sum of local variations, and it reflects that a good classification should not change much between nearby points.
The second term is a ``fitting constraint,'' and it says that a good classification function should not change much from the initial assignment provided by the labeled data. The parameter $\mu$ governs the trade-off between the two terms. To see how this gives rise to the previous expression, observe that \[ \frac{\partial Q(Y)}{\partial Y} |_{Y=Y^*} = Y^* - \tilde{W}Y^* + \mu \left( Y^* - S \right) = 0 , \] from which it follows that \[ Y^* - \frac{1}{1+\mu} \tilde{W}Y^* - \frac{\mu}{1+\mu} S = 0 . \] If we define $\alpha = \frac{1}{1+\mu}$ and $\beta = \frac{\mu}{1+\mu}$, so that $\alpha+\beta=1$, then we have that \[ \left(I-\alpha \tilde{W} \right) Y^* = \beta S , \] and thus that \[ Y^* = \beta \left( I-\alpha \tilde{W} \right)^{-1} S . \] \section{% (03/17/2015): Modeling graphs with electrical networks} Reading for today. \begin{compactitem} \item ``Random Walks and Electric Networks,'' in arXiv, by Doyle and Snell \end{compactitem} \subsection{Electrical network approach to graphs} So far, we have been adopting the usual approach to spectral graph theory: understand graphs via the eigenvectors and eigenvalues of associated matrices. For example, given a graph $G=(V,E)$, we defined an adjacency matrix $A$ and considered the eigensystem $A v = \lambda v$, and we also defined the Laplacian matrix $L=D-A$ and considered the Laplacian quadratic form $x^TLx = \sum_{(ij) \in E} (x_i-x_j)^2$. There are other ways to think about spectral graph methods that, while related, are different in important ways. In particular, one can draw from \emph{physical intuition} and define physically-based models from the graph $G$, and one can also consider more directly vectors that are obtained from various \emph{diffusions and random walks} on $G$. We will do the former today, and we will do the latter next time. \subsection{A physical model for a graph} In many physical systems, one has the idea that there is an equilibrium state and that the system goes back to that equilibrium state when disturbed. 
When the system is very near equilibrium, the force pushing it back to the equilibrium state is linear in the displacement from equilibrium, one can often define a potential energy that is quadratic in the displacement from equilibrium, and then the equilibrium state is the minimum of that potential energy function. In this context, let's think about the edges of a graph $G=(V,E)$ as physical ``springs,'' in which case the weights on the edges correspond to a spring constant $k$. Then, the force, as a function of the displacement $x$ from equilibrium, is $F(x)=kx$, and the corresponding potential energy is $U(x) = \frac{1}{2}kx^2$. In this case, i.e., if the graph is viewed as a spring network, then if we nail down some of the vertices and then let the rest settle to an equilibrium position, then we are interested in finding the minimum of the potential energy \[ \sum_{(ij)\in E} \left(x_i-x_j\right)^{2} = x^TLx , \] subject to the constraints on the nodes we have nailed down. In this case, the energy is minimized when the non-fixed vertices have values equal to \[ x_i = \frac{1}{d_i} \sum_{(ij) \in E} x_j , \] i.e., when the value on any node equals the average of the values on its neighbors. (This is the so-called \emph{harmonic property}, which is very important, e.g., in harmonic analysis.) As we have mentioned previously and will go into in more detail below, eigenvectors can be unstable things, and having some physical intuition can only help; so let's go a little deeper into these connections. First, recall that the \emph{standard/weighted geodesic graph metric} defines a distance $d(a,b)$ between vertices $a$ and $b$ as the length of the minimum-length path connecting $a$ and $b$, i.e., the number of edges, or the sum of weights over edges, on that path. (This is the ``usual'' notion of distance/metric on the nodes of a graph, but it will be different from the distances/metrics implied by spectral methods and by what we will discuss today.) 
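To make the ``nail down some vertices and let the rest settle'' picture concrete, here is a small sketch (the path graph and the boundary values are illustrative): minimizing $x^TLx$ with some coordinates fixed reduces to a linear solve on the interior block of the Laplacian, and the solution satisfies exactly the harmonic property above.

```python
import numpy as np

# path graph on 5 nodes; a unit spring on each edge
n = 5
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

# "nail down" the endpoints, then minimize x^T L x over the free coordinates
boundary = {0: 0.0, 4: 1.0}
interior = [i for i in range(n) if i not in boundary]
b_idx = list(boundary)
b_val = np.array([boundary[i] for i in b_idx])
x = np.zeros(n)
x[b_idx] = b_val
# stationarity at the free nodes: L_II x_I = -L_IB x_B
x[interior] = np.linalg.solve(L[np.ix_(interior, interior)],
                              -L[np.ix_(interior, b_idx)] @ b_val)

# on a path the minimizer is the linear ramp between the nailed values
assert np.allclose(x, [0.0, 0.25, 0.5, 0.75, 1.0])
# harmonic property: each free node equals the average of its neighbors
for i in interior:
    nbrs = np.nonzero(A[i])[0]
    assert np.isclose(x[i], x[nbrs].mean())
```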
Here, we will \emph{model a graph $G=(V,E)$ as an electrical circuit}. (By this, we mean a circuit that arises in electromagnetism and electrical engineering.) This will allow us to use physical analogues, and it will allow us to get more robust proofs for several results. In addition, it will allow us to define another notion of distance that is closer to diffusions. As background, here are some physical facts from electromagnetism that we would like to mimic and that we would like our model to incorporate. \begin{itemize} \item A basic \emph{direct current electrical circuit} consists of a \emph{battery} and one or more \emph{circuit elements} connected by wires. Although other circuit elements are possible, here we will only consider the use of resistors. A battery consists of two distinct vertices, call them $\{a,b\}$, one of which is the source, the other of which is the sink. (Although we use the same terms, ``source'' and ``sink,'' as we used with flow-based methods, the sources and sinks here will obey different rules.) A \emph{resistor} between two points $a$ and $b$, i.e., between two nodes in $G$, has an associated (undirected and symmetric) quantity $r_{ab}$ called a \emph{resistance} (and an associated conductance $c_{ab} = \frac{1}{r_{ab}}$). Also, there is a \emph{current} $Y_{ab}$ and a \emph{potential difference} $V_{ab}$ between nodes $a$ and $b$. \end{itemize} Initially, we can define the resistance between two nodes that are connected by an edge to depend (typically inversely) on the weight of that edge, but we want to extend the idea of resistance to a resistance between any two nodes. To do so, an important notion is that of \emph{effective resistance}, which is the following. Given a collection of resistors between nodes $a$ and $b$, they can be replaced with a single effective resistor with some other resistance. Here is how the value of that effective resistance is determined. 
\begin{itemize} \item If $a$ and $b$ have a node $c$ between them, i.e., the resistors are in series, and there are resistances $r_1=R_{ac}$ and $r_2 = R_{cb}$, then the effective resistance between $a$ and $b$ is given by $R_{ab}=r_1+r_2$. \item If $a$ and $b$ have no nodes between them but they are connected by two edges with resistances $r_1$ and $r_2$, i.e., the resistors are in parallel, then the effective resistance between $a$ and $b$ is given by $R_{ab} = \frac{1}{\frac{1}{r_1}+\frac{1}{r_2}}$. \item These rules can be applied recursively. \end{itemize} From this it should be clear that the number of paths as well as their lengths contribute to the effective resistance. In particular, having $k$ parallel edges/paths leads to an effective resistance that is decreased by a factor of $k$; and adding the first additional edge between two nodes has a big impact on the effective resistance, but subsequent edges have less of an effect. Note that this is vaguely similar to the way diffusions and random walks behave, and the distances/metrics they might imply, as opposed to the geodesic paths/distances defined above, but there is no formal connection (yet!). Let a voltage source be connected between vertices $a$ and $b$, and let $Y>0$ be the net current out of source $a$ and into sink $b$. Here we define two basic rules that our resistor networks must obey. \begin{definition} \label{def:kirchoff-current} The \emph{Kirchhoff current law} states that the current $Y_{ij}$ between vertices $i$ and $j$ (where $Y_{ij}=-Y_{ji}$) satisfies \[ \sum_{j \in N(i)} Y_{ij} = \left\{ \begin{array}{l l} Y & \quad i=a \\ -Y & \quad i=b \\ 0 & \quad \text{otherwise} \end{array} \right. , \] where $N(i)$ refers to the nodes that are neighbors of node $i$. \end{definition} \begin{definition} \label{def:kirchoff-potential} The \emph{Kirchhoff circuit/potential law} states that for every cycle $C$ in the network, \[ \sum_{(ij)\in C} Y_{ij} R_{ij} = 0. 
\] \end{definition} From Definition~\ref{def:kirchoff-potential}, it follows that there is a so-called \emph{potential function} on the vertices/nodes of the graph. This is known as Ohm's Law. \begin{definition} \label{def:ohms-law} \emph{Ohm's Law} states that, to any vertex $i$ in the vertex set of $G$, there is an associated potential, call it $V_i$, such that for all edges $(ij) \in E$ in the graph \[ Y_{ij}R_{ij} = V_i - V_j . \] \end{definition} Given this potential function, we can define the effective resistance between any two nodes in $G$, i.e., between two nodes that are not necessarily connected by an edge. \begin{definition} Given two nodes, $a$ and $b$, in $G$, the effective resistance is $R_{ab} = \frac{V_a-V_b}{Y} $. \end{definition} \textbf{Fact.} Given a graph $G$ with edge resistances $R_{ij}$, and given some source-sink pair $(a,b)$, the effective resistance exists, it is unique, and (although we have defined it in terms of a current) it does \emph{not} depend on the net current. \subsection{Some properties of resistor networks} Although we have started with this physical motivation, there is a close connection between resistor networks and what we have been discussing so far this semester. To see this, let's start with the following definition, which is a special case of the Moore-Penrose pseudoinverse. \begin{definition} The \emph{Laplacian pseudoinverse} is the unique matrix satisfying: \begin{enumerate} \item $L^{+}\vec{1}=0$; and \item For all $w \perp \vec{1}: \quad L^{+}w=v \mbox{ s.t. } Lv=w \mbox{ and } v\perp\vec{1} $ . \end{enumerate} \end{definition} Given this, we have the following theorem. Note that here we take the resistances on edges to be the inverse of the weights on those edges, which is probably the most common choice. \begin{theorem} Assume that the resistances of the edges of $G=(V,E)$ are given by $R_{ij} = \frac{1}{w_{ij}}$. 
Then, the effective resistance between any two nodes $a$ and $b$ is given by: \begin{eqnarray*} R_{ab} &=& \left(e_a-e_b\right)^{T} L^{+} \left(e_a-e_b\right) \\ &=& L_{aa}^{+} -2L_{ab}^{+} + L_{bb}^{+} . \end{eqnarray*} \end{theorem} \begin{proof} The idea of the proof is that, given a graph, edge resistances, and net current, there always exist currents $Y$ and potentials $V$ satisfying Kirchhoff's current and potential laws; in addition, the vector of potentials is unique up to a constant, and the currents are unique. I'll omit the details of this since it is part of HW2. \end{proof} Since the effective resistance between any two nodes is well-defined, we can define the total effective resistance of the graph. (This is sometimes called the Kirchhoff index.) \begin{definition} The \emph{total effective resistance} is $R^{tot} =\sum_{i<j} R_{ij}$. \end{definition} Before proceeding, think for a minute about why one might be interested in such a thing. Below, we will show that the effective resistance is a distance; and so the total effective resistance is the sum of the distances between all pairs of points in the metric space. Informally, this can be used to measure the total ``size'' or ``capacity'' of a graph. We used a similar thing (but for the geodesic distance) when we showed that expander graphs had a $\Theta\left(\log(n)\right)$ duality gap. In that case, we did this, essentially, by exploiting the fact that there was a lot of flow to route and that most pairs of nodes were distance $\Theta\left(\log(n)\right)$ apart in the geodesic distance. The quantity $R^{tot}$ can be expressed exactly in terms of the Laplacian eigenvalues (all of them, and not just the first one or first few). Here is the theorem (that we won't prove). \begin{theorem} Let $\lambda_i$ be the Laplacian eigenvalues. Then, for a connected graph, $R^{tot} = n \sum_{i=2}^{n} \frac{1}{\lambda_i}$. 
\end{theorem} Of course, we can get a (weak) bound on $R^{tot}$ using just the leading nontrivial Laplacian eigenvalue. \begin{corollary} \[ \frac{n}{\lambda_2} \le R^{tot} \le \frac{n(n-1)}{\lambda_2} \] \end{corollary} Next, we show that the effective resistance is a distance function. For this reason, it is sometimes called the resistance distance. \begin{theorem} The effective resistance $R$ is a metric. \end{theorem} \begin{proof} We will establish the three properties of a metric. First, from the above theorem, $R_{ij}=0 \iff i=j$. The reason is that $e_i-e_j$ is in the null space of $L^{+}$ (which is $\mbox{span}(\vec{1})$) iff $i=j$. Since the pseudoinverse of $L$ has eigenvalues $0,\lambda_2^{-1},\ldots,\lambda_n^{-1}$, it is PSD, and so $R_{ij} \ge 0$. Second, since the pseudoinverse is symmetric, we have that $R_{ij}=R_{ji}$. So, the only nontrivial thing is to show that the triangle inequality holds. To do so, we show two claims. \begin{claim} \label{claim:resist-metric-claim1} Let $Y_{ab}$ be the vector $e_a-e_b= \left\{ \begin{array}{l l} 1 & \quad \mbox{ at } a \\ -1 & \quad \mbox{ at } b \\ 0 & \quad \text{elsewhere,} \end{array} \right. $ and let $V_{ab} = L^{+} Y_{ab}$. Then, $V_{ab}(a) \ge V_{ab}(c) \ge V_{ab}(b)$, for all $c$. \end{claim} \begin{proof} Recall that $V_{ab}$ is the induced potential when we have $1$ Amp going in at $a$ and $1$ Amp coming out at $b$. For every vertex $c$, other than $a$ and $b$, the total flow is $0$, which means $\sum_{x\sim c}\frac{1}{R_{xc}}(V_{ab}(x)-V_{ab}(c))=0$, and it is easy to see that $V_{ab}(c)=\frac{\sum_{x\sim c}C_{xc}V_{ab}(x)}{\sum_{x\sim c}C_{xc}}$, where $C_{xc}=\frac{1}{R_{xc}}$ is the conductance between $x$ and $c$. That is, $V_{ab}(c)$ has a value equal to the weighted average of the values of $V_{ab}(x)$ at its neighbors. We can use this to prove the claim by contradiction. Assume that there exists a $c$ s.t. $V_{ab}(c) > V_{ab}(a) $. If there are several such nodes, then let $c$ be the node s.t. 
$V_{ab}(c)$ is the largest. In this case, $V_{ab}(c)$ is larger than the values at its neighbors. This is a contradiction, since $V_{ab}(c)$ is a weighted average of the potentials at its neighbors. The proof of the other half of the claim is similar. (Also, $V_{ab}(a)\geq V_{ab}(b)$, since $V_{ab}(a)-V_{ab}(b)=R_{ab}\geq 0$.) \end{proof} \begin{claim} $R_{eff}(a,b) + R_{eff}(b,c) \ge R_{eff}(a,c) $ \end{claim} \begin{proof} Let $Y_{ab}$ and $Y_{bc}$ be the external currents from sending one unit of current from $a \rightarrow b$ and $b \rightarrow c$, respectively. Note that $Y_{ac} = Y_{ab} + Y_{bc}$. Define the voltages $V_{ab} = L^{+} Y_{ab}$, $V_{bc} = L^{+} Y_{bc}$, and $V_{ac} = L^{+} Y_{ac}$. By linearity, $V_{ac} = V_{ab} + V_{bc}$. Thus, it follows that \[ R_{eff}(a,c) = Y_{ac}^TV_{ac}= Y_{ac}^TV_{ab} + Y_{ac}^{T}V_{bc} . \] By Claim~\ref{claim:resist-metric-claim1}, it follows that \[ Y_{ac}^TV_{ab} = V_{ab}(a) - V_{ab}(c) \le V_{ab}(a) - V_{ab}(b) = R_{eff}(a,b) . \] Similarly, $Y_{ac}^T V_{bc} \le R_{eff}(b,c)$. This establishes the claim. \end{proof} The theorem follows from these two claims. \end{proof} Here are some things to note regarding the resistance distance. \begin{itemize} \item $R_{eff}$ is a non-increasing function of the edge weights. \item $R_{eff}$ does not increase when edges are added. \item $R^{tot}$ strictly decreases when edges are added and weights are increased. \end{itemize} Note that these observations are essentially claims about the distance properties of two graphs, call them $G$ and $G^{\prime}$, when one graph is constructed from the other graph by making changes to one or more edges. We have said that both geodesic distances and resistance distances are legitimate notions of distances between the nodes of a graph. One might wonder about the relationship between them. 
In the same way that there are different norms for vectors in $\mathbb{R}^{n}$, e.g., the $\ell_{1}$, $\ell_{2}$, and $\ell_{\infty}$ norms, and those norms have characteristic sizes with respect to each other, so too we can talk about the relative sizes of different distances on the nodes of a graph. Here is a theorem relating the resistance distance to the geodesic distance. \begin{theorem} For $R_{eff}$ and the geodesic distance $d$: \begin{enumerate} \item $R_{eff}(a,b) = d(a,b)$ iff there exists only one path between $a$ and $b$. \item $R_{eff}(a,b) < d(a,b)$ otherwise. \end{enumerate} \end{theorem} \begin{proof} If there is only one path $P$ between $a$ and $b$, then $Y_{ij}=Y$, for all $(ij)$ on this path (by the Kirchhoff current law), and $V_i-V_j = Y R_{ij}$. It follows that \[ R_{ab} = \frac{V_a-V_b}{Y} = \sum_{(ij) \in P} \frac{V_i-V_j}{Y} = \sum_{(ij) \in P} R_{ij} = d_{ab} . \] If a path between $a$ and $b$ is added, so that now there are multiple paths between $a$ and $b$, this new path might use part of the path $P$. If it does, then call the shared part of the path $P_1$, call the rest of $P$ $P_2$, and call the rest of the new path $P_3$. Observe that the current through each edge of $P_1$ is $Y$; and, in addition, that the current through each edge of $P_2$ and $P_3$ is the same for each edge in the path, call them $Y_2$ and $Y_3$, respectively. Due to the Kirchhoff current law and the Kirchhoff circuit/potential law, we have that $Y_2+Y_3=Y$ and also that $Y_2,Y_3 > 0$, from which it follows that $Y_2 < Y$. Finally, \begin{eqnarray*} R_{ab} &=& \frac{V_a-V_b}{Y} \\ &=& \sum_{(ij) \in P_1} \frac{V_i-V_j}{Y} + \sum_{(ij) \in P_2} \frac{V_i-V_j}{Y} \\ &<& \sum_{(ij) \in P_1} \frac{V_i-V_j}{Y} + \sum_{(ij) \in P_2} \frac{V_i-V_j}{Y_2} \\ &=& \sum_{(ij) \in P_1}R_{ij} + \sum_{(ij) \in P_2} R_{ij} \\ &=& d(a,b) \end{eqnarray*} The result follows since $R_{eff}$ doesn't increase when edges are added. 
\end{proof} In a graph that is a tree, there is a unique path between any two vertices, and so we have the following result. \begin{claim} The metrics $R_{eff}$ and $d$ are the same on a tree. That is, on a tree, $R_{eff}(a,b)=d(a,b)$, for all nodes $a$ and $b$. \end{claim} \textbf{Fact.} $R_{eff}$ can be used to bound several quantities of interest, in particular the commute time, the cover time, etc. We won't go into detail on this. Here is how $R_{eff}$ behaves in some simple examples. \begin{itemize} \item Complete graph $K_n$. Of all graphs, this has the minimum $R_{eff}^{tot}$: $R_{eff}^{tot}(K_n)=n-1$. \item Path graph $P_n$. Among connected graphs, the path graph has the maximum $R_{eff}^{tot}$: $R_{eff}^{tot}(P_n) = \frac{1}{6}(n-1)n(n+1)$. \item Star $S_n$. Among trees, this has the minimum $R_{eff}^{tot}$: $R_{eff}^{tot}(S_n) = (n-1)^2$. \end{itemize} \subsection{Extensions to infinite graphs} All of what we have been describing so far is for finite graphs. Many problems of interest have to do with infinite graphs. Perhaps the most basic is whether random walks are recurrent. In addition to being of interest in its own right, considering this question on infinite graphs should provide some intuition for how random-walk-based spectral methods perform on the finite graphs we have been considering. \begin{definition} A random walk is \emph{recurrent} if the walker passes through every point with probability $1$, or equivalently if the walker returns to the starting point with probability $1$. Otherwise, the random walk is \emph{transient}. \end{definition} Note that---if we were to be precise---then we would have to define this for a single node, be precise about which of those two notions we are considering, etc. It turns out that those two notions are equivalent and that a random walk is recurrent for one node iff it is recurrent for every node in the graph. We'll not go into these details here. 
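The machinery above is easy to check numerically via the pseudoinverse formula $R_{ab} = L^{+}_{aa} - 2L^{+}_{ab} + L^{+}_{bb}$. This sketch (the particular graphs are illustrative) verifies the series rule, the parallel rule, and the value $R^{tot}_{eff}(K_n)=n-1$, counting each pair once:

```python
import numpy as np

def eff_resistance(A):
    """All-pairs effective resistances from a weighted adjacency matrix A,
    via R_ab = L+_aa - 2 L+_ab + L+_bb (resistances are inverse weights)."""
    L = np.diag(A.sum(axis=1)) - A
    Lp = np.linalg.pinv(L)
    d = np.diag(Lp)
    return d[:, None] + d[None, :] - 2 * Lp

# series rule: a path on 3 nodes with unit resistors gives R_02 = 1 + 1 = 2
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
assert np.isclose(eff_resistance(A)[0, 2], 2.0)

# parallel rule: edge weight 2 = two unit resistors in parallel, so R = 1/2
A2 = np.array([[0.0, 2.0], [2.0, 0.0]])
assert np.isclose(eff_resistance(A2)[0, 1], 0.5)

# total effective resistance of K_n is n - 1 (each pair counted once)
n = 6
K = np.ones((n, n)) - np.eye(n)
R = eff_resistance(K)
assert np.isclose(R.sum() / 2, n - 1)
```

The $K_n$ check also agrees with the eigenvalue formula, since $K_n$ has nonzero Laplacian eigenvalues all equal to $n$, giving $n\cdot(n-1)\cdot\frac{1}{n}=n-1$.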
For irreducible, aperiodic random walks on finite graphs, this discussion is of less interest, since a random walk will eventually visit every node, with limiting probability proportional to its degree; but consider three of the simplest infinite graphs: $\mathbb{Z}$, $\mathbb{Z}^2$, and $\mathbb{Z}^3$. Informally, as the dimension increases, there are more neighbors for each node and more space to get lost in, and so it should be harder to return to the starting node. Making this precise, i.e., proving whether a random walk on these graphs is recurrent, is a standard problem, one version of which appears on HW2. The basic idea is to use something called Rayleigh's Monotonicity Law as well as the procedures of shorting and cutting. Rayleigh's Monotonicity Law is a version of the result we described before, which says that $R_{eff}$ between two points $a$ and $b$ varies monotonically with the individual resistances. Given this, one can do two things to a graph $G$: \begin{itemize} \item \emph{Shorting} vertices $u$ and $v$: this is ``electrical vertex identification.'' \item \emph{Cutting} edges between $u$ and $v$: this is ``electrical edge deletion.'' \end{itemize} Both of these procedures involve constructing a new graph $G^{\prime}$ from the original graph $G$ (so that we can analyze $G^{\prime}$ and make claims about $G$). Here are the things you need to know about shorting and cutting: \begin{itemize} \item Shorting a network can only decrease $R_{eff}$. \item Cutting a network can only increase $R_{eff}$. \end{itemize} For $\mathbb{Z}^2$, if you short in ``Manhattan circles'' around the origin, then this only decreases $R_{eff}$, and you can show that $R_{eff}=\infty$ on the shorted graph, and thus $R_{eff}=\infty$ on the original $\mathbb{Z}^2$. For $\mathbb{Z}^3$, if you cut in a rather complex way, then you can show that $R_{eff} < \infty$ on the cut graph, meaning that $R_{eff} < \infty$ on the original $\mathbb{Z}^{3}$. 
This, coupled with the following theorem, establishes that random walks on $\mathbb{Z}^2$ are recurrent, but random walks on $\mathbb{Z}^{3}$ are transient. \begin{theorem} A network is recurrent iff $R_{eff} = \infty$. \end{theorem} Using these ideas to prove the recurrence claims is left for HW2: getting the result for $\mathbb{Z}$ is straightforward; getting it for $\mathbb{Z}^2$ is more involved but should be possible; and getting it for $\mathbb{Z}^3$ is fairly tricky---look it up on the web, but it is left as extra credit. \section{% (03/19/2015): Diffusions and Random Walks as Robust Eigenvectors} Reading for today. \begin{compactitem} \item ``Implementing regularization implicitly via approximate eigenvector computation,'' in ICML, by Mahoney and Orecchia \item ``Regularized Laplacian Estimation and Fast Eigenvector Approximation,'' in NIPS, by Perry and Mahoney \end{compactitem} Last time, we talked about electrical networks, and we saw that we could reproduce some of the things we have been doing with spectral methods with more physically intuitive techniques. These methods are of interest since they are typically more robust than using eigenvectors, and they often lead to simpler proofs. Today, we will go into more detail about a similar idea, namely whether we can interpret random walks and diffusions as providing robust or regularized or stable analogues of eigenvectors. Many of the most interesting recent results in spectral graph methods adopt this approach of using diffusions and random walks rather than eigenvectors. We will only scratch the surface of this approach. \subsection{Overview of this approach} There are several advantages to thinking about diffusions and random walks as providing a robust alternative to eigenvectors. \begin{itemize} \item New insight into spectral graph methods. \item Robustness/stability is a good thing in many situations. \item Extend global spectral methods to local spectral analogues. 
\item Design new algorithms, e.g., for Laplacian solvers. \item Explain why diffusion-based heuristics work as they do in social network analysis, computer vision, machine learning, and many other applications. \end{itemize} Before getting into this, step back for a moment, and recall that spectral methods have many nice theoretical and practical properties. \begin{itemize} \item \textbf{Practically.} Efficient to implement; can exploit very efficient linear algebra routines; perform very well in practice, in many cases better than theory would suggest. (This last claim means, e.g., that there is an intuition in areas such as computer vision and social network analysis that even if you could solve the best expansion/conductance problem exactly, you wouldn't want to, basically since the approximate solution that spectral methods provide is ``better.'') \item \textbf{Theoretically.} Connections between spectral and combinatorial ideas; and connections with Markov chains and probability theory that provide a geometric viewpoint. \end{itemize} Recently, there have been very fast algorithms that combine spectral and combinatorial ideas. They rely on an optimization framework, e.g., they solve max flow problems by relating them to these spectral-based optimization ideas. These algorithms use diffusion-based ideas, which are a relatively new trend in spectral graph theory. To better understand this new trend, recall that the classical view of spectral methods is based on Cheeger's Inequality and involves computing an eigenvector and performing sweep cuts to reveal sparse cuts/partitions. The new trend is to replace eigenvectors with vectors obtained by running random walks. This has been used in: \begin{itemize} \item fast algorithms for graph partitioning and related problems; \item local spectral graph partitioning algorithms; and \item analysis of real social and information networks. 
\end{itemize} There are several different types of random walks, e.g., the Heat Kernel, PageRank, etc., and different walks are better in different situations. So, one question is: why and how do random walks arise naturally from an optimization framework? One advantage of a random walk is that, to compute an eigenvector in a very large graph, a vanilla application of the power method or other related iterative methods (especially black box linear algebra methods) might be too slow, and so instead one might run a random walk on the graph to get a quick approximation. Let $W=AD^{-1}$ be the natural random walk matrix, and let $L=D-A$ be the Laplacian. As we have discussed, it is well-known that the second eigenvector of the Laplacian can be computed by iterating $W$. \begin{itemize} \item For ``any'' vector $y_0$ (or ``any'' vector $y_0$ s.t. $y_0^TD^{-1}\vec{1}=0$, or any random vector $y_0$ s.t. $y_0^TD^{-1}\vec{1}=0$), we can compute $D^{-1}W^{t}y_0$; and we can take the limit as $t \rightarrow \infty$ to get \[ v_2(L) = \lim_{t \rightarrow \infty} \frac{ D^{-1}W^{t} y_0 }{ \| W^{t}y_0 \|_{D^{-1}} } , \] where $v_2(L)$ is the leading nontrivial eigenvector of the Laplacian. \item If time is a precious resource, then one alternative is to avoid iterating to convergence, i.e., don't let $t \rightarrow \infty$ (which of course one never does in practice, but by this we mean don't iterate to anywhere near machine precision), but instead do some sort of ``early stopping.'' In that case, one does not obtain an eigenvector, but it is of interest to say something about the vector that is computed. In many cases, this is useful, either as an approximate eigenvector or as a locally-biased analogue of the leading eigenvector. This is very common in practice, and we will look at it in theory. \end{itemize} Another nice aspect of replacing an eigenvector with a random walk, or of truncating the power iteration early, is that the vectors that are thereby returned are more robust. 
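Here is a hedged sketch of the early-stopping idea (the two-clique graph, the lazy walk, and the number of steps are illustrative choices, not from the notes): even a few steps of a deflated lazy walk already align closely with the leading nontrivial eigenvector of the normalized walk matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
# two 5-cliques joined by a single edge: a small spectral gap, so the
# second eigendirection dominates the walk's slow mixing
n = 10
A = np.zeros((n, n))
A[:5, :5] = 1.0
A[5:, 5:] = 1.0
np.fill_diagonal(A, 0.0)
A[4, 5] = A[5, 4] = 1.0

d = A.sum(axis=1)
N = A / np.sqrt(np.outer(d, d))      # symmetrized walk matrix D^{-1/2} A D^{-1/2}
M = 0.5 * (np.eye(n) + N)            # lazy version: eigenvalues in [0, 1]

top = np.sqrt(d) / np.linalg.norm(np.sqrt(d))  # trivial top eigenvector of N and M

# ten steps of the lazy walk, with the trivial direction projected off
y = rng.normal(size=n)
y -= (y @ top) * top
for _ in range(10):
    y = M @ y
    y -= (y @ top) * top             # re-deflate for numerical hygiene
y /= np.linalg.norm(y)

# compare against the exact second eigenvector of N
evals, evecs = np.linalg.eigh(N)     # eigenvalues sorted in ascending order
v2 = evecs[:, -2]
assert abs(y @ v2) > 0.9             # already well aligned after a few steps
```

The laziness is there only to kill the negative eigenvalues of $N$, so that the power iteration converges toward the second-largest eigendirection rather than oscillating.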
The idea should be familiar to statisticians and machine learners, although in a somewhat different form. Say that there is a ``ground truth'' graph that we want to understand but that the measurement we make, i.e., the graph that we actually see and that we have available to compute with, is a noisy version of this ground truth graph. So, if we want to compute the leading nontrivial eigenvector of the unseen graph, then computing the leading nontrivial eigenvector of the observed graph is in general not a particularly good idea. The reason is that it can be very sensitive to noise, e.g., mistakes or noise in the edges. On the other hand, if we perform a random walk and keep the random walk vector, then that is a better estimate of the ground truth eigendirection. (So, the idea is that eigenvectors are unstable but random walks are not unstable.) A different but related question is the following: why are random walks useful in the design of fast algorithms? (After all, there is no ``ground truth'' model in this case---we are simply running an algorithm on the graph that is given, and we want to prove results about the algorithm applied to that graph.) The reason is similar, but the motivation is different. If we want to have a fast iterative algorithm, then we want to work with objects that are stable, basically so that we can track the progress of the algorithm. Working with vectors that are the output of random walks will be better in this sense. Today, we will cover an optimization perspective on this. (We won't cover the many applications of these ideas to graph partitioning and related algorithmic problems.) \subsection{Regularization, robustness, and instability of linear optimization} Again, take a step back. What is regularization? The usual way it is described (at least in machine learning and data analysis) is the following. 
We have an optimization problem \begin{eqnarray*} \min_{x \in \mathcal{S}} f(x) , \end{eqnarray*} where $f(x)$ is a (penalty) function, and where $\mathcal{S}$ is some constraint set. This problem might not be particularly well-posed or well-conditioned, in the sense that the solution might change a lot if the input is changed a little. In order to get a more well-behaved version of the optimization problem, e.g., one whose solution changes more gradually as problem parameters are varied, one might instead try to solve the problem \begin{eqnarray*} \min_{x \in \mathcal{S}} f(x) + \lambda g(x) , \end{eqnarray*} where $\lambda \in \mathbb{R}^{+}$ is a parameter, and where $g(x)$ is a (regularization) function. The idea is that $g(x)$ is ``nice'' in some way, e.g., it is convex or smooth, and $\lambda$ governs the relative importance of the two terms, $f(x)$ and $g(x)$. Depending on the specific situation, the advantage of solving the latter optimization problem is that one obtains a more stable optimum, a unique optimum, or smoothness conditions. More generally, the benefits of including such a regularization function in ML and statistics are that one obtains increased stability, one obtains decreased sensitivity to noise, and one can avoid overfitting. Here is an illustration of the instability of eigenvectors. Say that we have a graph $G$ that is basically an expander, except that it is connected to two small poorly-connected components. That is, each of the two components is well-connected internally but poorly-connected to the rest of $G$, e.g., connected by a single edge. One can easily choose the edges/weights in $G$ so that the leading non-trivial eigenvector of $G$ has most of its mass, say a $1-\epsilon$ fraction of its mass, on the first small component. 
In addition, one can then easily construct a perturbation, e.g., removing one edge from $G$ to construct a graph $G^{\prime}$, such that $G^{\prime}$ has a $1-\epsilon$ fraction of its mass on the second component. That is, a small perturbation that consists of removing one edge can completely shift the eigenvector---not only its direction but also where in $\mathbb{R}^{n}$ its mass is supported. Let's emphasize that last point. Recalling our discussion of the Davis-Kahan theorem, as well as the distinction between the Rayleigh quotient objective and the actual partition found by performing a sweep cut, we know that if there is a small spectral gap, then eigenvectors can swing by 90 degrees. Although the example just provided has aspects of that, it is even more sensitive: not only does the direction of the vector change in $\mathbb{R}^{n}$, but also the mass along the coordinate axes in $\mathbb{R}^{n}$ where the eigenvector is localized changes dramatically under a very minor perturbation to~$G$. To understand this phenomenon better, here is the usual quadratic optimization formulation of the leading eigenvector problem. For simplicity, let's consider a $d$-regular graph, in which case we get the following. \begin{eqnarray} \label{eqn:spectral-quadratic-formulation} \mbox{Quadratic Formulation}: & \frac{1}{d} \min_{x\in\mathbb{R}^{n}} & x^TLx \\ \nonumber & \mbox{s.t.} & \|x\| = 1 \\ \nonumber & & x \perp \vec{1} . \end{eqnarray} This is an optimization over vectors $x\in\mathbb{R}^{n}$. Alternatively, we can consider the following optimization problem over SPSD matrices. \begin{eqnarray} \label{eqn:spectral-sdp-formulation} \mbox{SDP Formulation}: & \frac{1}{d} \min_{X\in\mathbb{R}^{n \times n}} & L \bullet X \\ \nonumber & \mbox{s.t.} & I \bullet X = 1 \\ \nonumber & & J \bullet X = 0 \\ \nonumber & & X \succeq 0 . \end{eqnarray} Recall that $ I \bullet X = \Trace{X} $ and that $J= 11^T$. 
Recall also that if a matrix $X$ is rank-one and thus can be written as $X=xx^T$, then $L \bullet X = x^TLx$. These two optimization problems, Problem~\ref{eqn:spectral-quadratic-formulation} and Problem~\ref{eqn:spectral-sdp-formulation}, are equivalent, in that if $x^{*}$ is the vector solution to the former and $X^{*}$ is a solution of the latter, then $X^{*}=x^{*}{x^{*}}^{T}$. In particular, note that although there is no constraint on the SDP formulation that the solution is rank-one, the solution turns out to be rank one. Observe that this is a linear SDP, in that the objective and all the constraints are linear. Linear SDPs, just like LPs, can be very unstable. To see this in the simpler setting of LPs, consider a convex set $S \subset \mathbb{R}^{n}$ and a linear optimization problem: \begin{equation*} f(c) = \mbox{arg}\min_{x \in S} c^Tx . \end{equation*} The optimal solution $f(c)$ might be very unstable to perturbations of $c$, in that we can have \[ \|c^{\prime}-c\| \le \delta \quad \mbox{and} \quad \|f(c^{\prime})-f(c) \| \gg \delta . \] (With respect to our Linear SDP, think of the vector $x$ as the PSD variable $X$ and think of the vector $c$ as the Laplacian $L$.) That is, we can change the input $c$ (or $L$) a little bit and the solution changes a lot. One way to fix this is to introduce a regularization term $g(x)$ that is strongly convex. So, consider the same convex set $S \subset \mathbb{R}^{n}$ and a regularized linear optimization problem \begin{equation*} f(c) = \mbox{arg}\min_{x \in S} c^Tx + \lambda g(x) , \end{equation*} where $\lambda\in\mathbb{R}^{+}$ is a parameter and where $g(x)$ is $\sigma$-strongly convex. Since this is just an illustrative example, we won't define precisely the term $\sigma$-strongly convex, but we note that $\sigma$ is a lower bound on the curvature of $g(\cdot)$, and so the parameter $\sigma$ determines how strongly convex the function $g(x)$ is.
Then, since perturbing $c$ by at most $\delta$ perturbs the slope of the objective at $f(c)$ by at most $\delta$, strong convexity ensures that the new optimum $f(c^{\prime})$ lies at distance less than $\frac{\delta}{\sigma}$ from $f(c)$. So, we have \begin{equation} \label{eqn:robustness-regularized-linear-optimiz} \|c^{\prime}-c\| \le \delta \Rightarrow \|f(c^{\prime})-f(c) \| < \delta/\sigma , \end{equation} i.e., the strong convexity of $g(x)$ makes stable a problem that was not stable before. \subsection{Structural characterization of a regularized SDP} How does this translate to the eigenvector problem? Well, recall that the leading eigenvector of the Laplacian solves the SDP, where $X$ appears linearly in the objective and constraints, as given in Problem~\ref{eqn:spectral-sdp-formulation}. We will show that several different variants of random walks exactly optimize regularized versions of this SDP. In particular, they optimize problems of the form \begin{eqnarray} \label{eqn:spectral-sdp-formulation-regularized} \mbox{SDP Formulation}: & \frac{1}{d} \min_{X\in\mathbb{R}^{n \times n}} & L \bullet X + \lambda G(X) \\ \nonumber & \mbox{s.t.} & I \bullet X = 1 \\ \nonumber & & J \bullet X = 0 \\ \nonumber & & X \succeq 0 , \end{eqnarray} where $G(X)$ is an appropriate regularization function that depends on the specific form of the random walk and that (among other things) is strongly convex. To give an interpretation of what we are doing, consider the eigenvector decomposition of $X$, where \begin{equation} X = \sum_{i} p_i v_i v_i^T , \quad \mbox{where} \quad \left\{ \begin{array}{l l} \forall i \quad p_i \ge 0 \\ \sum_i p_i = 1 \\ \forall i \quad v_i^T \vec{1} = 0 \end{array} \right. \end{equation} I've actually normalized things so that the eigenvalues sum to $1$. If we do this, then the eigenvalues of $X$ define a probability distribution.
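To make the stability bound of Eqn.~(\ref{eqn:robustness-regularized-linear-optimiz}) concrete, here is a minimal numerical sketch (not from the text; the setup is the simplest available one): take $S$ to be the probability simplex and $g$ the negative entropy, which is strongly convex on the simplex. The unregularized minimizer is a simplex vertex and flips completely under a tiny perturbation of $c$; the entropy-regularized minimizer has a softmax closed form and moves by only about $\delta/\lambda$.

```python
import numpy as np

# Stability sketch: minimize c.x over the probability simplex.
# Unregularized, the solution sits at a vertex and can jump completely under
# a tiny change in c.  With the negative-entropy regularizer
# g(x) = sum_i x_i log x_i, the minimizer is softmax(-c/lambda) and moves
# smoothly, by roughly delta/lambda in l1.
def argmin_linear(c):
    x = np.zeros_like(c)
    x[np.argmin(c)] = 1.0          # optimum is a simplex vertex
    return x

def argmin_entropy_reg(c, lam):
    z = np.exp(-c / lam)           # closed form: softmax(-c / lambda)
    return z / z.sum()

delta = 1e-6
c  = np.array([1.0, 1.0 + delta])
cp = np.array([1.0 + delta, 1.0])  # perturbation of size delta

jump  = np.abs(argmin_linear(c) - argmin_linear(cp)).sum()   # = 2: full swap
drift = np.abs(argmin_entropy_reg(c, 0.1)
               - argmin_entropy_reg(cp, 0.1)).sum()          # ~ delta / 0.1
print(jump, drift)
```

The vector softmax here is the analogue of the normalized matrix exponential that shows up for the von Neumann entropy regularizer.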
If we don't regularize in Problem~\ref{eqn:spectral-sdp-formulation-regularized}, i.e., if we set $\lambda=0$, then the optimal solution to Problem~\ref{eqn:spectral-sdp-formulation-regularized} puts all the weight on the second eigenvector (since $X^{*}=x^{*}{x^{*}}^{T}$). If instead we regularize, then the regularization term ensures that the weight is spread out on all the eigenvectors, i.e., the optimal solution $X^{*}= \sum_i \alpha_i v_i v_i^T$, for some set of coefficients $\{\alpha_i\}_{i=1}^{n}$. So, the solution is not rank-one, but it is more stable. \textbf{Fact.} If we take this optimization framework and put in ``reasonable'' choices for $G(X)$, then we can recover algorithms that are commonly used in the design of fast algorithms and elsewhere. That the solution is not rank one makes sense from this perspective: if we iterate $t \rightarrow \infty$, then all the other eigendirections are washed out, and we are left with the leading direction; but if we only iterate to a finite $t$, then we still have admixing from the other eigendirections. To see this in more detail, consider the following three types of random walks. (Recall that $M=AD^{-1}$ and $W=\frac{1}{2}\left(I+M\right)$.) \begin{itemize} \item \textbf{Heat Kernel.} $H_t = \exp\left(-tL\right) = \sum_{k=0}^{\infty} \frac{\left(-t\right)^{k}}{k!} L^k = \sum_{i=1}^{n} e^{-\lambda_it} P_i$, where $P_i$ is a projection matrix onto that eigendirection. \item \textbf{PageRank.} $R_{\gamma} = \gamma \left( I - \left( 1-\gamma \right) M \right)^{-1}$. This follows since the PageRank vector is the solution to $$ \pi(\gamma,s) = \gamma s + (1-\gamma) M \pi(\gamma,s) ,$$ which can be written as $$\pi(\gamma,s) = \gamma \sum_{t=0}^{\infty} \left(1-\gamma \right)^{t} M^t s = R_{\gamma} s .$$ \item \textbf{Truncated Lazy Random Walk.} $W_{\alpha} = \alpha I + (1-\alpha) M$. 
\end{itemize} These are formal expressions describing the action of each of those three types of random walks, in the sense that the specified matrix maps the input to the output: to obtain the output vector, compute the matrix and multiply it by the input vector. Clearly, of course, these random walks would not be implemented by computing these matrices explicitly; instead, one would iteratively apply a one-step version of them to the input vector. Here are the three regularizers we will consider. \begin{itemize} \item \textbf{von Neumann entropy.} Here, $G_H = \Trace{ X \log X } = \sum_i p_i \log p_i$. \item \textbf{Log-determinant.} Here, $G_D = -\log\det \left( X \right)$. \item \textbf{Matrix $p$-norm, $p >0$.} Here, $G_p = \frac{1}{p} \| X \|_p^p = \frac{1}{p} \Trace{ X^p } = \frac{1}{p} \sum_i p_i^p$. \end{itemize} And here are the connections that we want to establish. \begin{itemize} \item $G=G_H \Rightarrow^{entropy} X^{*} \sim H_t$, with $t=\lambda$. \item $G=G_D \Rightarrow^{logdet} X^{*} \sim R_{\gamma}$, with $\gamma \sim \lambda$. \item $G=G_p \Rightarrow^{p-norm} X^{*} \sim W_{\alpha}^{t}$, with $t \sim \lambda$. \end{itemize} Here is the basic structural theorem that will allow us to make precise this connection between random walks and regularized SDPs. Note that its proof is a quite straightforward application of duality ideas. \begin{theorem} \label{thm:reg-rand-walk-struct} Recall the regularized SDP of Problem~\ref{eqn:spectral-sdp-formulation-regularized}, and let $\lambda = 1/\eta$. Let $G$ be a connected, weighted, undirected graph, and let $L$ be its normalized Laplacian. Then the following are sufficient conditions for $X^{*}$ to be the solution of the regularized SDP. \begin{enumerate} \item $X^{*} = \left( \nabla G \right)^{-1} \left( \eta \left( \lambda^{*} I - L \right) \right)$, for some $\lambda^{*}\in\mathbb{R}$. \item $I \bullet X^{*} = 1$. \item $X^{*} \succeq 0$.
\end{enumerate} \end{theorem} \begin{proof} Write the Lagrangian $\mathcal{L}$ of the above SDP as \[ \mathcal{L} = L \bullet X + \frac{1}{\eta}G(X) - \lambda \left( I \bullet X - 1 \right) - U \bullet X , \] where $\lambda\in\mathbb{R}$ and where $U \succeq 0$. Then, the dual objective function is \[ h\left( \lambda,U \right) = \min_{X \succeq 0} \mathcal{L}(X,\lambda,U) . \] Since $G(\cdot)$ is strictly convex, differentiable, and rotationally invariant, the gradient of $G$ over the positive semi-definite cone is invertible, and the RHS is minimized when \[ X = \left( \nabla G \right)^{-1} \left( \eta \left(- L + \lambda^{*} I + U \right) \right) , \] where $\lambda^{*}$ is chosen s.t. $I \bullet X^{*} = 1$. Hence, \begin{eqnarray*} h \left( \lambda^{*},0 \right) &=& L \bullet X^{*} + \frac{1}{\eta} G(X^{*}) - \lambda^{*} \left( I \cdot X^{*} - 1 \right) \\ &=& L \bullet X^{*} + \frac{1}{\eta} G(X^{*}) . \end{eqnarray*} By Weak Duality, this implies that $X^{*}$ is the optimal solution to the regularized SDP. \end{proof} \subsection{Deriving different random walks from Theorem~\ref{thm:reg-rand-walk-struct}} To derive the Heat Kernel random walk from Theorem~\ref{thm:reg-rand-walk-struct}, let's do the following. 
Since \[ G_H(X) = \Trace{X \log(X) } - \Trace{X} , \] it follows that $\left(\nabla G\right)(X)=\log(X)$ and thus that $\left( \nabla G \right)^{-1}(Y) = \exp(Y)$, from which it follows that \begin{eqnarray*} X^{*} &=& \left( \nabla G \right)^{-1} \left( \eta \left ( \lambda I - L \right) \right) \\ &=& \exp\left( \eta \left( \lambda I - L \right) \right) \quad\mbox{for an appropriate choice of $\eta,\lambda$}\\ &=& \exp \left( -\eta L \right) \exp \left( \eta \lambda \right) \\ &=& \frac{ H_{\eta} }{ \Trace{ H_{\eta} } } , \end{eqnarray*} where the last line follows if we set $\lambda = \frac{-1}{\eta}\log\left( \Trace{ \exp \left( -\eta L \right) } \right)$. To derive the PageRank random walk from Theorem~\ref{thm:reg-rand-walk-struct}, we follow a similar derivation. Since \[ G_D(X) = -\log\det\left(X\right) , \] it follows that $\left(\nabla G \right)(X) = -X^{-1}$ and thus that $\left( \nabla G \right)^{-1}(Y) = -Y^{-1}$, from which it follows that \begin{eqnarray*} X^{*} &=& \left( \nabla G \right)^{-1} \left( \eta \left ( \lambda I - L \right) \right) \\ &=& -\left( \eta \left( \lambda I-L \right) \right)^{-1} \quad\mbox{for an appropriate choice of $\eta,\lambda$}\\ &=& \frac{ D^{-1/2} R_{\gamma} D^{-1/2} }{ \Trace{ R_{\gamma} } } , \end{eqnarray*} for $\eta,\lambda$ chosen appropriately. Deriving the truncated iterated random walk or other forms of diffusions is similar. We will go into more details on the connection with PageRank next time, but for now we just state that the solution can be written in the form \[ x^{*} = c \left( L_G - \alpha L_{K_n} \right)^{+} D s , \] for a constant $c$ and a parameter $\alpha$. That is, it is of the form of the solution to a linear equation, i.e., $L_G^{+}s$, except that there is a term that moderates the effect of the graph by adding the Laplacian of the complete graph. This is essentially a regularization term, although it is not usually described as such. See the Gleich article for more details on this.
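As a quick numerical check of the PageRank expressions above, here is a sketch on a toy graph (a $4$-cycle with an illustrative teleportation parameter, not an example from the text) verifying that the resolvent form $R_{\gamma} = \gamma \left( I - (1-\gamma) M \right)^{-1}$ agrees with the power series $\gamma \sum_t (1-\gamma)^t M^t$:

```python
import numpy as np

# Check the PageRank identity used above:
#   R_gamma = gamma (I - (1-gamma) M)^{-1} = gamma sum_t (1-gamma)^t M^t,
# with M = A D^{-1} column-stochastic, on a toy 4-cycle.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
D = np.diag(A.sum(axis=0))
M = A @ np.linalg.inv(D)              # columns of M sum to 1

gamma, n = 0.15, A.shape[0]
R = gamma * np.linalg.inv(np.eye(n) - (1 - gamma) * M)

# Truncated power series; the tail is O((1-gamma)^200), numerically zero.
S = sum(gamma * (1 - gamma) ** t * np.linalg.matrix_power(M, t)
        for t in range(200))

print(np.max(np.abs(R - S)))
print(R.sum(axis=0))   # columns of R_gamma sum to 1
```

Since $\vec{1}^T M = \vec{1}^T$, the columns of $R_{\gamma}$ sum to one, i.e., $R_{\gamma}$ maps probability distributions to probability distributions.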
\subsection{Interpreting Heat Kernel random walks in terms of stability} Here, we will relate the two previous results for the heat kernel. Again, for simplicity, assume that $G$ is $d$-regular. Recall that the Heat Kernel random walk is a continuous time Markov chain, modeling the diffusion of heat along the edges of $G$. Transitions take place in continuous time $t$ with an exponential distribution: \[ \frac{\partial \rho(t) }{\partial t} = -L \frac{\rho(t)}{d} \Rightarrow \rho(t) = \exp\left( -\frac{t}{d}L \right) \rho(0) . \] That is, this describes the way that the probability distribution changes from one step to the next and how it is related to $L$. In particular, the Heat Kernel can be interpreted as a Poisson distribution over the number of steps of the natural random walk $M=AD^{-1}$, where we get the following: \[ e^{ - \frac{t}{d} L } = e^{-t} \sum_{k=0}^{\infty} \frac{t^k}{k!} M^k . \] What this means is: pick a number of steps from the Poisson distribution; and then perform that number of steps of the natural random walk. So, if we have two graphs $G$ and $G^{\prime}$ and they are close, say in an $\ell_{\infty}$ norm sense, meaning that the edges only change a little, then we can show the following. (Here, we will normalize the two graphs so that their respective eigenvalues sum to $1$.) The statement analogous to Statement~\ref{eqn:robustness-regularized-linear-optimiz} is the following. \[ \| G - G^{\prime} \|_{\infty} \le \delta \Rightarrow \| \frac{ H_{G}^{t} }{ I \bullet H_{G}^{t} } - \frac{ H_{G^{\prime}}^{t} }{ I \bullet H_{G^{\prime}}^{t} } \|_{1} \le t \delta . \] Here, $\| \cdot \|_{1}$ is some other norm (the $\ell_1$ norm over the eigenvalues) that we won't describe in detail. Observe that the bound on the RHS depends on how close the graphs are ($\delta$) as well as how long the Heat Kernel random walk runs ($t$). If the graphs are far apart ($\delta$ is large), then the bound is weak.
If the random walk is run for a long time ($t \rightarrow \infty$), then the bound is also very weak. But, if the walk is not run too long, then we get a robustness result. And, this follows from the strong convexity of the regularization term that the heat kernel implicitly optimizes exactly. \subsection{A statistical interpretation of this implicit regularization result} Above, we provided three different senses in which early-stopped random walks can be interpreted as providing a robust or regularized notion of the leading eigenvectors of the Laplacian: e.g., in the sense that, in addition to approximating the Rayleigh quotient, they also exactly optimize a regularized version of the Rayleigh quotient. Some people interpret regularization in terms of statistical priors, and so let's consider this next. In particular, let's now give a statistical interpretation to this implicit regularization result. By a ``statistical interpretation,'' I mean a derivation analogous to the manner in which $\ell_2$ or $\ell_1$ regularized $\ell_2$ regression can be interpreted in terms of a Gaussian or Laplace prior on the coefficients of the regression problem. This basically provides a Bayesian interpretation of regularized linear regression. The derivation below will show that the solutions to Problem~\ref{eqn:spectral-sdp-formulation-regularized} that random walks implicitly optimize can be interpreted as a regularized estimate of the pseudoinverse of the Laplacian, and so in some sense it provides a Bayesian interpretation of the implicit regularization provided by random~walks. To start, let's describe the analogous results for vanilla linear regression. For some (statistics) students, this is well-known; but for other (non-statistics) students, it likely is not. The basic idea should be clear; and we cover it here to establish notation and nomenclature.
Let's assume that we see $n$ predictor-response pairs in $\mathbb{R}^{p}\times\mathbb{R}$, call them $\{\left( x_i,y_i \right) \}_{i=1}^{n}$, and the goal is to find a parameter vector $\beta\in\mathbb{R}^{p}$ such that $\beta^{T}x_i \approx y_i$. A common thing to do is to choose $\beta$ by minimizing the RSS (residual sum of squares), i.e., choosing \[ F(\beta) = RSS(\beta) = \sum_{i=1}^{n} \left( y_i - \beta^Tx_i \right)^2 . \] Alternatively, we could optimize a regularized version of this objective. In particular, we have \begin{eqnarray*} \mbox{Ridge regression:} & & \min_{\beta} F(\beta) + \lambda \| \beta \|_2^2 \\ \mbox{Lasso regression:} & & \min_{\beta} F(\beta) + \lambda \| \beta \|_1 . \end{eqnarray*} To derive these two versions of regularized linear regression, let's model $y_i$ as independent random variables with distribution dependent on $\beta$ as follows: \begin{equation} \label{eqn:ls-eq0} y_i \sim N \left( \beta^T x_i, \sigma^2 \right) , \end{equation} i.e., each $y_i$ is a Gaussian random variable with mean $\beta^Tx_i$ and known variance $\sigma^2$. This induces a conditional density for $y$ as follows: \begin{equation} \label{eqn:ls-eq1} p\left( y | \beta \right) \sim \exp\{ \frac{-1}{2\sigma^2}F(\beta) \} , \end{equation} where the constant of proportionality depends only on $y$ and $\sigma$. From this, we can derive the vanilla least-squares estimator. But, we can also assume that $\beta$ is a random variable with distribution $p(\beta)$, which is known as a prior distribution, as follows: \begin{equation} \label{eqn:ls-eq2} p(\beta) \sim \exp \{ -U(\beta) \} , \end{equation} where we adopt that functional form without loss of generality.
Since these two random variables are dependent, upon observing $y$, we have information on $\beta$, and this can be encoded in a posterior density, $p\left( \beta | y \right)$, which can be computed from Bayes' rule as follows: \begin{eqnarray} \nonumber p\left( \beta | y \right) &\sim& p\left(y | \beta \right) p(\beta) \\ &\sim& \exp \{ \frac{-1}{2\sigma^2} F(\beta) - U(\beta) \} . \label{eqn:ls-eq3} \end{eqnarray} We can form the MAP, the maximum a posteriori, estimate of $\beta$ by solving \[ \max_{\beta} p\left( \beta | y \right) \mbox{ \textbf{iff} } \min_{\beta} - \log p\left( \beta | y \right) . \] From this we can derive ridge regression and Lasso regression: \begin{eqnarray*} U(\beta) &=& \frac{\lambda}{2\sigma^2} \|\beta\|_2^2 \quad \Rightarrow \mbox{Ridge regression} \\ U(\beta) &=& \frac{\lambda}{2\sigma^2} \|\beta\|_1 \quad \Rightarrow \mbox{Lasso regression} \end{eqnarray*} To derive the analogous result for regularized eigenvectors, we will follow the analogous setup. What we will do is the following. Given a graph $G$, i.e., a ``sample'' Laplacian $L$, assume it is a random object drawn from a ``population'' Laplacian $\mathcal{L}$. \begin{itemize} \item This induces a conditional density for $L$, call it $p\left( L | \mathcal{L} \right)$. \item Then, we can assume prior information about the population Laplacian $\mathcal{L}$ in the form of $p\left(\mathcal{L}\right)$. \item Then, given the observed $L$, we can estimate the population Laplacian by maximizing its posterior density $p\left(\mathcal{L} | L\right)$. \end{itemize} While this setup is analogous to the derivation for least-squares, there are also differences. In particular, one important difference between the two approaches is that here there is one data point, i.e., the graph/Laplacian is a single data point, and so we need to invent a population from which it was drawn. 
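For the ridge case just derived, the MAP computation can be sketched numerically (the data below are synthetic and the variable names illustrative): minimizing $F(\beta) + \lambda\|\beta\|_2^2$ has the closed form $\hat\beta = \left(X^TX+\lambda I\right)^{-1}X^Ty$, which we verify by checking that the gradient of the regularized RSS vanishes at $\hat\beta$.

```python
import numpy as np

# MAP estimate for ridge regression: with a Gaussian prior on beta, the
# posterior mode minimizes RSS(beta) + lam * ||beta||_2^2, whose closed
# form is beta_hat = (X^T X + lam I)^{-1} X^T y.  Synthetic toy data.
rng = np.random.default_rng(0)
n, p, lam = 50, 3, 0.5
X = rng.standard_normal((n, p))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + 0.1 * rng.standard_normal(n)

beta_hat = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

def objective(b):
    return np.sum((y - X @ b) ** 2) + lam * np.sum(b ** 2)

# First-order optimality: the gradient 2 (X^T X + lam I) b - 2 X^T y
# vanishes at beta_hat, and any perturbation increases the objective
# (the objective is strictly convex).
grad = 2 * (X.T @ X + lam * np.eye(p)) @ beta_hat - 2 * X.T @ y
print(np.max(np.abs(grad)))
```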
It's like treating the entire matrix $X$ and vector $y$ in the least-squares problem as a single data point, rather than $n$ data points, each of which was drawn from the same distribution. (That's not a minor technicality: in many situations, including the algorithmic approach we adopted before, it is more natural to think of a graph as a single data point, rather than as a collection of data points, and a lot of statistical theory breaks down when we observe $N=1$ data point.) In more detail, recall that a Laplacian is an SPSD matrix with a very particular structure, and let's construct/hypothesize a population from which it was drawn. To do so, let's assume that the $n$ nodes in the population and in the sample have the same degrees. If $d=\left(d_1,\ldots, d_n \right)$ is the degree vector, and $D = \mbox{diag}\left( d_1,\ldots,d_n \right)$ is the diagonal degree matrix, then we can define the set \[ \chi = \{ X : X \succeq 0 , X D^{1/2}\vec{1} = 0 , \mbox{rank}(X) = n-1 \} . \] So, the population Laplacian and sample Laplacian are both members of $\chi$. To model $L$, let's use a scaled Wishart matrix with expectation $\mathcal{L}$. (This distribution plays the role of the Gaussian distribution in the least-squares derivation. Note that this is a plausible thing to assume, but other assumptions might be possible too.) Let $m \ge n-1$ be a scale parameter (analogous to the variance), and suppose that $L$ is distributed over $\chi$ as $\frac{1}{m}\mbox{Wishart}\left(\mathcal{L},m\right)$. Then $\mathbb{E}\left[ L | \mathcal{L} \right] = \mathcal{L}$ and $L$ has the conditional density \begin{equation} \label{eqn:lap-cond-likelihood} p\left( L | \mathcal{L} \right) \sim \frac{1}{\det\left(\mathcal{L}\right)^{m/2}} \exp \{ \frac{-m}{2} \Trace{ L \mathcal{L}^{+} } \} . \end{equation} This is analogous to Eqn~(\ref{eqn:ls-eq1}) above.
Next, we can say that $\mathcal{L}$ is a random object with prior density $p\left(\mathcal{L}\right)$, which without loss of generality we can take to be of the following form: \[ p\left(\mathcal{L}\right) \sim \exp \{ -U\left(\mathcal{L}\right) \} , \] where $U$ is supported on a subset $\bar{\chi} \subseteq \chi$. This is analogous to Eqn~(\ref{eqn:ls-eq2}) above. Then, observing $L$, the posterior distribution for $\mathcal{L}$ is the following: \begin{eqnarray*} p\left(\mathcal{L}|L \right) &\sim& p \left( L | \mathcal{L} \right) p \left( \mathcal{L} \right) \\ &\sim& \exp \{ \frac{-m}{2}\Trace{ L\mathcal{L}^{+} } + \frac{m}{2} \log \det \left( \mathcal{L}^{+} \right) - U\left(\mathcal{L} \right) \} , \end{eqnarray*} with support determined by $\bar{\chi}$. This is analogous to Eqn~(\ref{eqn:ls-eq3}) above. If we denote by $\hat{\mathcal{L}}$ the MAP estimate of $\mathcal{L}$, then it follows that $\hat{\mathcal{L}}^{+}$ is the solution of the following optimization problem. \begin{eqnarray} \label{eqn:sdp-stat} & \min_{X} & L \bullet X + \frac{2}{m} U\left(X^{+}\right) - \log\det\left(X\right) \\ \nonumber & \mbox{s.t.} & X \in \bar{\chi} \subseteq \chi . \end{eqnarray} If $\bar{\chi} = \{ X : \Trace{ X } =1 \} \cap \chi$, then Problem~\ref{eqn:sdp-stat} is the same as Problem~\ref{eqn:spectral-sdp-formulation-regularized}, except for the $\log\det\left(X\right)$ term. This is almost the regularized SDP we had above. Next, we present a prior that will be related to the PageRank procedure. This will make the connection with the regularized SDP more precise. In particular, we present a prior for the population Laplacian that permits us to exploit the above estimation framework to show that the MAP estimate is related to a PageRank computation. The criteria for the prior are so-called neutrality and invariance conditions. It is to be supported on $\chi$; in particular, any $X \in \chi$ will have rank $n-1$ and satisfy $XD^{1/2}\vec{1}=0$.
The prior will depend only on the eigenvalues of the Laplacian (or equivalently of the inverse Laplacian). Let $\mathcal{L}^{+} = \tau O \Lambda O^{T}$ be the spectral decomposition of the inverse Laplacian, where $\tau$ is a scale parameter. We will require that the distribution for $\lambda = \left( \lambda_1,\ldots, \lambda_n \right)$ be exchangeable (i.e., invariant under permutations) and neutral (i.e., $\lambda(v)$ is independent of $\frac{\lambda(u)}{1-\lambda(v)}$, for $u \ne v$, for all $v$). The only non-degenerate possibility is that $\lambda$ is distributed as a Dirichlet distribution as follows: \begin{equation} \label{eqn:dirichlet-prior} p\left(\mathcal{L}\right) \sim p(\tau) \prod_{v=1}^{n-1} \lambda(v)^{\alpha-1} , \end{equation} where $\alpha$ is a so-called shape parameter. Then, we have the following lemma. \begin{lemma} \label{lem:reg-sdp-stat-prior-pagerank} Given the conditional likelihood for $L$ given $\mathcal{L}$ in Eqn.~(\ref{eqn:lap-cond-likelihood}) and the prior density for $\mathcal{L}$ given in Eqn.~(\ref{eqn:dirichlet-prior}); if $\hat{\mathcal{L}}$ is the MAP estimate of $\mathcal{L}$, then \[ \frac{ \hat{\mathcal{L}}^{+} }{ \Trace{ \hat{\mathcal{L}}^{+} } } \] solves the regularized SDP, with $G(X) = -\log\det(X)$ and with the value of $\eta$ given in the proof~below. \end{lemma} \begin{proof} For $\mathcal{L}$ in the support set of the posterior, we can define $\tau = \Trace{ \mathcal{L}^{+} }$ and $\Theta = \frac{1}{\tau} \mathcal{L}^{+}$, so that $\mbox{rank}(\Theta) = n-1$ and $\Trace{\Theta} = 1$. Then, $p\left(\mathcal{L}\right) \sim \exp \{ -U\left( \mathcal{L} \right) \}$, where \[ U\left(\mathcal{L}\right) = - \log \{ p(\tau) \det\left(\Theta\right)^{\alpha-1} \} = -(\alpha-1) \log \det \left(\Theta\right) - \log \left( p(\tau) \right) .
\] Thus, \begin{eqnarray*} p\left( \mathcal{L} | L \right) &\sim& \exp \{ -\frac{m}{2}\Trace{ L\mathcal{L}^{+} } + \frac{m}{2} \log\det\left(\mathcal{L}^{+}\right) - U\left(\mathcal{L}\right) \} \\ &\sim& \exp \{ \frac{-m\tau}{2}\Trace{L\Theta} + \frac{m+2(\alpha-1)}{2}\log\det\left(\Theta\right) + g(\tau) \} , \end{eqnarray*} where the second line follows since $\det\left(\mathcal{L}^{+}\right) = \tau^{n-1}\det\left(\Theta\right)$, and where $g(\tau) = \frac{m(n-1)}{2}\log(\tau) + \log p(\tau)$. If $\hat{\mathcal{L}}$ maximizes the posterior likelihood, then define $\hat{\tau} = \Trace{\hat{\mathcal{L}}^{+}}$ and $\hat{\Theta} = \frac{1}{\hat{\tau}}\hat{\mathcal{L}}^{+}$, and so $\hat{\Theta}$ must minimize $\Trace{L\hat{\Theta}} - \frac{1}{\eta} \log\det\left( \hat{\Theta} \right)$, where \[ \eta = \frac{ m \hat{\tau} }{ m+2(\alpha-1) } . \] This $\hat{\Theta}$ solves the regularized SDP with $G(X) = -\log\det\left(X\right)$. \end{proof} \textbf{Remark.} Lemma~\ref{lem:reg-sdp-stat-prior-pagerank} provides a statistical interpretation of the regularized problem that is optimized by an approximate PageRank diffusion algorithm, in the sense that it gives a general statistical estimation procedure that leads to the Rayleigh quotient as well as a statistical prior related to PageRank. One can write down priors for the Heat Kernel and other random walks; see the two references if you are interested. Note, however, that the prior for PageRank makes things particularly simple. The reason is that the extra term in Problem~\ref{eqn:sdp-stat}, i.e., the $\log\det\left(X\right)$ term, is of the same form as the regularization function that the approximate PageRank computation implicitly regularizes with respect to. Thus, we can choose parameters to make this term cancel. Otherwise, there are extra terms floating around, and the statistical interpretation is more complex. \section{(03/31/2015): Local Spectral Methods (1 of 4): Introduction and Overview} Reading for today.
\begin{compactitem} \item ``Spectral Ranking'', in arXiv, by Vigna \item ``PageRank beyond the Web,'' in SIAM Review, by Gleich \end{compactitem} Last time, we showed that certain random walks and diffusion-based methods, \emph{when not run to the asymptotic limit}, exactly solve regularized versions of the Rayleigh quotient objective (in addition to approximating the Rayleigh quotient, in a manner that depends on the specific random walk and how the spectrum of $L$ decays). There are two ways to think about these results. \begin{itemize} \item One way to think about this is that one runs almost to the asymptotic state and then one gets a vector that is ``close'' to the leading eigenvector of $L$. Note, however, that the statement of implicit regularization from last time does \emph{not} depend on the initial condition or how long the walk was run. (The value of the regularization parameter, etc., does, but the form of the statement does not.) Thus ... \item Another way to think about this is that one starts at any node, say a localized ``seed set'' of nodes, e.g., in which all of the initial probability mass is on one node or a small number of nodes that are nearby each other in the graph topology, and then one runs only a small number of steps of the random walk or diffusion. In this case, it might be more natural/useful to try to quantify the idea that: if one starts the random walk on the small side of a bottleneck to mixing, and if one runs only a few steps of a random walk, then one might get stuck in that small set. \end{itemize} The latter is the basic idea of so-called \emph{local spectral methods}, which are a class of algorithms that have received a lot of attention recently. 
Basically, they try to extend the ideas of global spectral methods, where we compute eigenvectors, random walks, etc., that reveal structure about the entire graph, e.g., that find a partition that is quadratically-good in the sense of Cheeger's Inequality to the best conductance/expansion cut in the graph, to methods that reveal interesting structure in locally-biased parts of the graph. Not only do these provide locally-biased versions of global spectral methods, but since spectral methods are often used to provide a ranking for the nodes in a graph and/or to solve other machine learning problems, these also can be used to provide a locally-biased or personalized version of a ranking function and/or to solve other machine learning problems in a locally-biased manner. \subsection{Overview of local spectral methods and spectral ranking} Here is a brief history of local and locally-biased spectral methods. \begin{itemize} \item LS: developed a basic locally-biased mixing result in the context of mixing of Markov chains in convex bodies. They basically show a partial converse to the easy direction of Cheeger's Inequality---namely, that if the conductance $\phi(G)$ of the graph $G$ is big, then \emph{every} random walk must converge quickly---and from this they also show that if the random walk fails to converge quickly, then by examining the probability distribution that arises after a few steps one can find a cut of small conductance. \item ST: used the LS result to get an algorithm for local spectral graph partitioning that used truncated random walks. They used this to find good well-balanced graph partitions in nearly-linear time, which they then used as a subroutine in their efforts to develop nearly linear time solvers for Laplacian-based linear systems (a topic to which we will return briefly at the end of the semester). \item ACL/AC: improved the ST result by computing a personalized PageRank vector. 
This improves the fast algorithms for Laplacian-based linear solvers, and it is of interest in its own right, so we will spend some time on it. \item C: showed that similar results can be obtained by doing heat kernel computations. \item AP: showed that similar results can be obtained with an evolving set method (that we won't discuss in detail). \item MOV: provided an optimization perspective on these local spectral methods. That is, they provided a locally-biased optimization objective that, if optimized exactly, leads to similar locally-biased Cheeger-like bounds. \item GM: characterized the connection between the strongly-local ACL and the weakly-local MOV in terms of $\ell_1$ regularization (i.e., a popular form of sparsity-inducing regularization) of $\ell_2$ regression problems. \end{itemize} There are several reasons why one might be interested in these methods. \begin{itemize} \item Develop faster algorithms. This is of particular interest if we can compute locally-biased partitions without even touching all of the graph. This is the basis for a lot of work on nearly linear time solvers for Laplacian-based linear systems. \item Improved statistical properties. If we can compute locally-biased things, e.g., locally-biased partitions, without even touching the entire graph, then that certainly implies that we are robust to what happens on the other side of the graph. That is, we have essentially engineered some sort of regularization into the approximation algorithm; and it might be of interest to quantify this. \item Locally exploring graphs. One might be interested in finding small clusters or partitions that are of interest in a small part of a graph, e.g., a given individual in a large social graph, in situations when those locally-biased clusters are not well-correlated with the leading or with any global eigenvector. \end{itemize} We will touch on all these themes over the next four classes.
For now, let $G=(V,E)$, and recall that $\mbox{Vol}(G) = \sum_{v \in V} d_v$ (so, in particular, we have that $\mbox{Vol}(G) = 2 |E| = 2m$). Also, $A$ is the Adjacency Matrix, $W = D^{-1}A$, and $\mathcal{L} = I - D^{-1}A$ is the random walk normalized Laplacian. For a vector $x\in\mathbb{R}^{n}$, let's define its support as \[ \mbox{Supp}(x) = \{ i \in V = [n] : x_i \ne 0 \} . \] Then, here is the transition kernel for the vanilla random walk. \[ \mathbb{P}\left[ x_{t+1}=j | x_t = i \right] = \begin{cases} \frac{1}{d_i} \quad \text{if } i \sim j \\ 0 \quad \text{otherwise} . \end{cases} \] If we write this as a transition matrix operating on a (row) vector, then we have that \[ p(t) = s W^t , \] where $W$ is the transition matrix, and where $s=p(0)$ is the initial distribution, with $\|s\|_1 = 1$. Then, $p = \vec{1}^T D/(\vec{1}^T D \vec{1})$ is the stationary distribution, i.e., \[ \lim_{t\rightarrow\infty} \mathbb{P}\left[ x_t = i \right] = \frac{d_i}{\mbox{Vol}(G)} , \] independent of $s = p(0)$, as long as $G$ is connected and not bipartite. (If it is bipartite, then let $W \rightarrow W_{LAZY} = \frac{1}{2} \left( I + D^{-1}A \right)$, and the same result holds.) There are two common interpretations of this asymptotic random walk. \begin{itemize} \item Interpretation 1: the limit of a random walk. \item Interpretation 2: a measure of the importance of a node. \end{itemize} With respect to the latter interpretation, think of an edge as denoting importance, and then what we want to find is the important nodes (often for directed graphs, but we aren't considering that here). Indeed, one of the simplest \emph{centrality measures} in social graph analysis is the degree of a node. For a range of reasons, e.g., since that is easy to spam, a refinement of that is to say that important nodes are those nodes that have links to important nodes. This leads to a large area known as \emph{spectral ranking} methods.
This area applies the theory of matrices or linear maps---basically eigenvectors and eigenvalues, but also related things like random walks---to matrices that represent some sort of relationship between entities. This has a long history, most recently made well-known by the PageRank procedure (which is one version of it). Here, we will follow Vigna's outline and description in his ``Spectral Ranking'' notes---his description is very nice since it provides the general picture in a general context, and then he shows that with several seemingly-minor tweaks, one can obtain a range of related spectral graph~methods. \subsection{Basics of spectral ranking} To start, take a step back, and let $M \in \mathbb{R}^{n \times n}$, where each column/row of $M$ represents some sort of entity, and $M_{ij}$ represents some sort of \emph{endorsement or approval} of entity $j$ from entity $i$. (So far, one could potentially have negative entries, with the obvious interpretation, but this will often be removed later, basically since one has more structure if entries must be nonnegative.) As Vigna describes, Seeley (in 1949) observed that one should define importance/approval recursively, since that will capture the idea that an entity is important/approved if other important entities think it is important/approved. In this case, recursive could mean that \[ r = rM , \] i.e., that the index of the $i^{th}$ node equals the weighted sum of the indices of the entities that endorse the $i^{th}$ node. This isn't always possible, and indeed Seeley considers nonnegative matrices that don't have any all-zeros rows, in which case uniqueness, etc., follow from the Perron-Frobenius ideas we discussed before. This involves the left eigenvectors, as we have discussed; one could also look at the right eigenvectors, but the endorsement/approval interpretation fails to hold.
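As a small illustrative sketch (the endorsement matrix below is made up), Seeley's recursive index can be computed by power iteration on the dominant left eigenvector; since the index is defined only up to scale, we normalize at each step.

```python
import numpy as np

# Hypothetical nonnegative endorsement matrix with no all-zero rows:
# M[i, j] is entity i's endorsement of entity j.
M = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 3.0],
              [1.0, 1.0, 0.0]])

# Power iteration for the dominant *left* eigenvector, i.e., r with r M = lam r.
r = np.ones(M.shape[0])
for _ in range(500):
    r = r @ M
    r /= r.sum()          # normalize; the index is defined only up to scale

# Since r sums to one, summing r M estimates the dominant eigenvalue.
lam = (r @ M).sum()
print(np.allclose(r @ M, lam * r, atol=1e-8))
```

Because $M$ is nonnegative with no all-zero rows (and here irreducible and aperiodic), Perron-Frobenius guarantees this iteration converges to a unique nonnegative ranking.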
Later, Wei (in 1952) and Kendall (in 1955) were interested in \emph{ranking} sports teams, and they said essentially that better teams are those teams that beat better teams. This involves looking at the rank induced by \[ \lim_{k\rightarrow\infty} M^k \vec{1}^T \] and then appealing to Perron-Frobenius theory. The significant point here is three-fold. \begin{itemize} \item The motivation is very different than Seeley's endorsement motivation. \item Using dominant eigenvectors on one side or the other dates back to mid 20th century, i.e., well before recent interest in the topic. \item The relevant notion of convergence in both of these motivating applications is that of \emph{convergence in rank} (where rank means the rank order of values of nodes in the leading eigenvector, and not anything to do with the linear-algebraic rank of the underlying matrix). In particular, the actual values of the entries of the vector are not important. This is very different than other notions of convergence when considering leading eigenvectors of Laplacians, e.g., the value of the Rayleigh quotient. \end{itemize} Here is the generalization. Consider matrices $M$ with real and positive dominant eigenvalue $\lambda_0$ and its eigenvector, i.e., a vector $r$ such that $ \lambda_0 r = rM$, where let's say that the dimension of the eigenspace is one. \begin{definition} The \emph{left spectral ranking} associated with $M$ is, or is given by, the dominant left eigenvector. \end{definition} If the eigenspace does not have dimension one, then there is the usual ambiguity problem (which is sometimes simply assumed away, but which can be enforced by a reasonable rule---we will see a common way to do the latter in a few minutes), but if the eigenspace has dimension one, then we can talk of \emph{the} spectral ranking.
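Here is a minimal sketch of the Wei--Kendall idea on a made-up win/loss matrix; the quantity that stabilizes under iteration is the induced rank order of the entries, which is the relevant notion of convergence here. (In this made-up example, teams 1 and 2 have the same number of wins, but team 2 ends up ranked higher because it beat the top team.)

```python
import numpy as np

# Hypothetical win/loss matrix for four teams: M[i, j] = 1 if team i beat team j.
M = np.array([[0, 1, 1, 1],
              [0, 0, 1, 1],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Iterate v <- M v, normalizing; the direction converges by Perron-Frobenius,
# and what matters for ranking is only the induced *order* of the entries.
v = np.ones(4)
orders = []
for _ in range(60):
    v = M @ v
    v /= np.linalg.norm(v)
    orders.append(tuple(int(i) for i in np.argsort(-v)))

print(orders[-1])   # the stabilized rank order of the four teams
```

The values of $v$ keep changing early on, but the rank order freezes well before the vector itself converges to high precision.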
Note that it is defined only up to a constant: this is not a problem if all the coordinates have the same sign, but it introduces an ambiguity otherwise, and how this ambiguity is resolved can lead to different outcomes in ``boundary cases'' where it matters; see the Gleich article for examples and details of this. Of course one could apply the same thing to $M^T$. The mathematics is similar, but the motivation is different. In particular, Vigna argues that the endorsement motivation leads to the left eigenvectors, while the influence/better-than motivation leads to the right eigenvectors. The next idea to introduce is that of ``damping.'' (As we will see, this will have a reasonable generative ``story'' associated with it, and it will have a reasonable statistical interpretation, but it is also important for a technical reason having to do with ensuring that the dimension of the eigenspace is one.) Let $M$ be a zero-one matrix; then $\left( M^k \right)_{ij}$ is the number of directed paths from $i \rightarrow j$ in a directed graph defined by $M$. In this case, an obvious idea for measuring the importance of $j$ is to count the number of paths going into $j$, since they represent recursive endorsements. This count is given by \[ \vec{1}\left( I + M + M^2 + \cdots \right) = \vec{1} \sum_{k=0}^{\infty} M^{k} , \] but this does \emph{not} work, since the convergence of this series is not guaranteed, and it does not happen in~general. If, instead, one can guarantee that the spectral radius of $M$ is less than one, i.e., that $\lambda_0 < 1$, then this infinite sum does converge. One way to do this is to introduce a damping factor $\alpha$ to obtain \[ \vec{1}\left( I + \alpha M + \alpha^2 M^2 + \cdots \right) = \vec{1} \sum_{k=0}^{\infty} \left( \alpha M\right)^{k} . \] This infinite sum does converge, as the spectral radius of $\alpha M$ is strictly less than $1$, if $\alpha < \frac{1}{\lambda_0}$. Katz (in 1953) proposed this.
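A minimal numerical sketch of Katz's damped count (on a made-up endorsement matrix): the damped series is summed directly and checked against its closed form $\vec{1}\left(I-\alpha M\right)^{-1}$.

```python
import numpy as np

# Hypothetical 0/1 endorsement matrix (a small directed graph).
M = np.array([[0, 1, 0, 1],
              [0, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 1, 0, 0]], dtype=float)

lam0 = max(abs(np.linalg.eigvals(M)))   # spectral radius of M
alpha = 0.5 / lam0                      # damping: alpha < 1/lambda_0

# Katz's damped path count: 1 (I + aM + a^2 M^2 + ...), truncated.
one = np.ones(4)
katz_series = sum(one @ np.linalg.matrix_power(alpha * M, k) for k in range(60))

# The same index from the closed form x (I - aM) = 1, solved as a linear system.
katz_solve = np.linalg.solve((np.eye(4) - alpha * M).T, one)

print(np.allclose(katz_series, katz_solve, atol=1e-8))
```

With $\alpha\lambda_0 = 1/2$, the truncated series is accurate to machine precision after a few dozen terms, since the tail decays geometrically.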
Note that \[ \vec{1} \sum_{k=0}^{\infty} \left( \alpha M \right)^{k} = \vec{1} \left( I - \alpha M \right)^{-1} . \] So, in particular, we can compute this index by solving the \emph{linear system} as follows. \[ x \left( I - \alpha M \right) = \vec{1} . \] This is a particularly well-structured system of linear equations, basically a system of equations where the constraint matrix can be written in terms of a Laplacian. There has been a lot of work recently on developing fast solvers for systems of this form, and we will get back to this topic in a few weeks. A generalization of this was given by Hubbell (in 1965), who said that one could define a status index $r$ by using a recursive equation $r = v+rM$, where $v$ is a ``boundary condition'' or ``exogenous contribution.'' This gives \[ r = v \left( I - M \right)^{-1} = v \sum_{k=0}^{\infty} M^k . \] So, we are generalizing from the case where the exogenous contribution is $\vec{1}$ to an arbitrary vector $v$. This converges if $\lambda_0 < 1$; otherwise, one could introduce a damping factor (as we will do below). To see precisely how these are all related, let's consider the basic spectral ranking equation \[ \lambda_0 r = r M . \] If the eigenspace of $\lambda_0$ has dimension greater than $1$, then there is no clear choice for the ranking $r$. One idea in this case is to perturb $M$ to satisfy this property, but we want to apply a ``structured perturbation'' in such a way that many of the other spectral properties are not damaged. Here is the relevant theorem, which is due to Brauer (in 1952), and which we won't prove. \begin{theorem} Let $A \in \mathbb{C}^{n \times n}$, let \[ \lambda_0, \lambda_1, \ldots, \lambda_{n-1} \] be the eigenvalues of $A$, and let $x\in\mathbb{C}^{n}$ be a nonzero vector such that $Ax^T = \lambda_0 x^T$. Then, for all vectors $v\in\mathbb{C}^{n}$, the eigenvalues of $A+x^Tv$ are given by \[ \lambda_0 + vx^T, \lambda_1, \ldots, \lambda_{n-1} .
\] \end{theorem} That is, if we perturb the original matrix by a rank-one update, where the rank-one update is of the form of the outer product of an eigenvector of the matrix and an arbitrary vector, then one eigenvalue changes, while all the others stay the same. In particular, this result can be used to split degenerate eigenvalues and to introduce a \emph{gap} into the spectrum of $M$. To see this, let's consider a rank-one convex perturbation of our matrix $M$ by using a vector $v$ such that $vx^T = \lambda_0$ and by applying the theorem to $\alpha M$ and $\left(1-\alpha\right)x^Tv$. If we do this then we get \[ \lambda_0 r = r \left( \alpha M + \left(1-\alpha\right) x^Tv \right) . \] Next, note that $\alpha M + \left(1-\alpha\right) x^Tv$ has the same dominant eigenvalue as $M$, but with algebraic multiplicity $1$, and all the other eigenvalues are multiplied by $\alpha \in (0,1)$. This ensures a unique $r$. The cost is that it introduces extra parameters ($\alpha$ if we set $v$ to be an all-ones vector, and also the vector $v$ if we choose it more generally). These parameters can be interpreted in various ways, as we will see. Since $r$ is still defined only up to a constant, we can impose the constraint that $rx^T=\lambda_0$. (Note that if $x = \vec{1}$, then this says that the sum of $r$'s coordinates is $\lambda_0$, which if all the coordinates have the same sign means that $\|r\|_1 = \lambda_0$.) Then we get \[ \lambda_0 r = \alpha r M + \left(1-\alpha\right) \lambda_0 v . \] Thus, \[ r \left( \lambda_0 - \alpha M \right) = \left( 1-\alpha \right) \lambda_0 v .
\] From this it follows that \begin{eqnarray*} r &=& \left(1-\alpha\right) \lambda_0 v \left( \lambda_0 - \alpha M \right)^{-1} \\ &=& \left(1-\alpha\right) v \left( I - \frac{\alpha}{\lambda_0} M \right)^{-1} \\ &=& \left(1-\alpha\right) v \sum_{k=0}^{\infty} \left( \frac{\alpha}{\lambda_0} M \right)^{k} \\ &=& \left(1-\lambda_0\beta\right) v \sum_{k=0}^{\infty} \left( \beta M \right)^{k} , \end{eqnarray*} which converges if $\alpha < 1$, i.e., if $\beta < \frac{1}{\lambda_0}$. That is, this approach shows that the Katz-Hubbell index can be obtained as a rank-one perturbation of the original matrix. In a little bit, we will get to what this rank-one perturbation ``means'' in different situations. To review, we started with a matrix $M$ with possibly many eigenvectors associated with the dominant eigenvalue, and we introduced a structured perturbation to get a specific eigenvector associated with $\lambda_0$, given the boundary condition~$v$. The standard story is that if we start from a generic nonnegative matrix and normalize its rows to get a stochastic matrix, then we get a Markovian spectral ranking, which is the limit distribution of the random walk. Here, we are slightly more general, as is captured by the following definition. \begin{definition} Given the matrix $M$, the damped spectral ranking of $M$ with boundary condition $v$ and damping factor $\alpha$ is \[ r_0 = \left(1-\alpha\right) v \sum_{k=0}^{\infty} \left( \frac{\alpha}{\lambda_0} M \right)^{k} , \] for $|\alpha| < 1$. \end{definition} The interpretation of the boundary condition (in the damped and in the standard case) is the following. \begin{itemize} \item In the damped case, the Markov chain is restarted to a fixed distribution $v$, and there is a single stationary distribution which is the limit of every starting distribution. \item In the standard case, $v$ is the starting distribution from which we capture the limit using an eigenprojection.
\end{itemize} While in some sense equivalent, these two interpretations suggest different questions and interpretations, and we will consider both of these over the next few classes. \subsection{A brief aside} Here is an aside we will get back to over the next few classes. Consider a vanilla random walk, where at each time step, we follow the graph with some probability $\beta$, and we randomly jump to any uniformly-chosen node in the graph with probability $1-\beta$. This Markov chain has stationary distribution \[ p_{\beta} = \frac{1-\beta}{n}\vec{1} + \beta p_{\beta} W . \] This is often known as PageRank, which has received a lot of attention in web ranking, but from the above discussion it should be clear that it is one form of the general case of spectral ranking methods. We can also ask for a ``personalized'' version of this, by which we informally mean a ranking of the nodes that in some sense is conditioned on or personalized for a given node or a given seed set of nodes. We can get this by using a personalized PageRank, by randomly jumping (not to any node chosen uniformly at random but) to a ``seed set'' $s$ of nodes. This PPR is the unique solution to \[ p_{\beta}(s) = \left(1-\beta\right)s + \beta p_{\beta}(s) W , \] i.e., it is of the same form as the expression above, except that an all-ones vector has been replaced by a seed set or boundary condition vector $s$. This latter expression solves the linear equation \[ \left( I - \beta W \right) p_{\beta} (s) = \left( 1-\beta \right) s . \] We can write this expression as an infinite sum as \[ p_{\beta}(s) = \left( 1-\beta \right) s + \left( 1-\beta \right) \sum_{t=1}^{\infty} \beta^t s W^t . \] Thus, note that the following formulations of PageRank (as well as spectral ranking more generally) are equivalent. \begin{itemize} \item $\left(I-\beta W \right) x = \left( 1-\beta \right) s$ \item $\left( \gamma D + L \right) z = \gamma s$, where $\beta=\frac{1}{1+\gamma}$ and $x = Dz$.
\end{itemize} As noted above, this latter expression is of the form of Laplacian-based linear equations. It is of the same form that arises in those semi-supervised learning examples that we discussed. We will talk toward the end of the term about how to solve equations of this form more generally. \section{% (04/02/2015): Local Spectral Methods (2 of 4): Computing spectral ranking with the push procedure} Reading for today. \begin{compactitem} \item ``The Push Algorithm for Spectral Ranking'', in arXiv, by Boldi and Vigna \item ``Local Graph Partitioning using PageRank Vectors,'' in FOCS, by Andersen, Chung, and Lang \end{compactitem} Last time, we talked about spectral ranking methods, and we observed that they can be computed as eigenvectors of certain matrices or as the solution to systems of linear equations with certain constraint matrices. These computations can be performed with a black box solver, but they can also be done with specialized solvers that take advantage of the special structure of these matrices. (For example, a vanilla spectral ranking method, e.g., one with a preference vector $\vec{v}$ that is an all-ones vector $\vec{1}$, has a large eigenvalue gap, and this means that one can obtain a good solution with just a few steps of a simple iterative method.) As you can imagine, this is a large topic. Here, we will focus in particular on how to solve for these spectral rankings in the particular case when the preference vector $\vec{v}$ has small support, i.e., when it has its mass localized on a small seed set of nodes. This is a particularly important use case, and the methods developed for it are useful much more generally. In particular, in this class, we will describe how to compute personalized/local spectral rankings with a procedure called the \emph{push procedure}. This has several interesting properties, in general, but in particular when computing locally-biased or personalized spectral rankings in a large graph.
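Before getting to the push procedure, here is a minimal sketch (on a made-up graph) computing an exact personalized spectral ranking both as the fixed point of its defining recursion and by solving the corresponding linear system; the push procedure approximates this while touching only part of the graph.

```python
import numpy as np

# Made-up undirected graph; W = D^{-1} A acts on row vectors.
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
W = A / A.sum(axis=1)[:, None]
beta = 0.85
s = np.array([1.0, 0, 0, 0, 0])          # seed distribution

# Fixed-point iteration on p = (1-beta) s + beta p W.
p = s.copy()
for _ in range(300):
    p = (1 - beta) * s + beta * (p @ W)

# The same vector from the linear system x (I - beta W) = (1-beta) s.
p_solve = np.linalg.solve((np.eye(5) - beta * W).T, (1 - beta) * s)

print(np.allclose(p, p_solve, atol=1e-10))
```

The iteration contracts at rate $\beta$, so a few hundred matrix-vector products already agree with the direct solve to high precision.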
As with the last class, we will follow Boldi and Vigna's notes on ``The Push Algorithm for Spectral Ranking'' on the topic. \subsection{Background on the method} To start, let $M \in \mathbb{R}^{n \times n}$ be a nonnegative matrix; and WLOG assume that $\| M \|_1 =1$, i.e., assume that $M$ is stochastic. (Actually, this assumes that $M$ is sub-stochastic and that at least one row of $M$ sums to $1$; this distinction won't matter for what we do, but it could matter in other cases, e.g., when dealing with directed graphs, etc.) Then, let $v$ be a nonnegative vector s.t. $\|v\|_1=1$, i.e., assume that $v$ is a distribution, and let $\alpha \in [0,1)$. Then, recall that we defined the spectral ranking of $M$ with preference vector $v$ and damping factor $\alpha$ to be \begin{eqnarray*} r &=& \left(1-\alpha\right) v \left(I-\alpha M\right)^{-1} \\ &=& \left(1-\alpha\right) v \sum_{k=0}^{\infty} \alpha^k M^k . \end{eqnarray*} For this, note that $r$ need not be a distribution unless $M$ is stochastic (although it is usually applied in that case). Note also that the linear operator is defined for all $\alpha \in \left[0,\frac{1}{\rho(M)} \right)$, but estimating $\rho(M)$ can be difficult. (We'll see an interesting example next week of what arises when we push this parameter to the limit.) From this perspective, while it is difficult to ``guess'' what is the spectral ranking $r$ associated with a preference vector $v$, it is ``easy'' to solve the inverse problem: given $r$, solve $$ v = \frac{1}{1-\alpha} r \left(I-\alpha M \right) . $$ (While this equation is always true, the resulting preference vector might not be a distribution; otherwise, one could obtain any spectral ranking using a suitable preference vector.) While this is obvious, it has important consequences for computing spectral rankings and approximate spectral rankings that we will now describe. Consider the indicator vector $\mathcal{X}_x(z) = [ x=z]$.
If we want to obtain the vector $\left(1-\alpha\right)\mathcal{X}_x$ as a spectral ranking, then the associated preference vector $v$ has the simple form: \begin{eqnarray} \label{eqn:pref-vect} v &=& \frac{1}{1-\alpha}\left(1-\alpha\right) \mathcal{X}_x \left( I-\alpha M\right) \\ \nonumber &=& \mathcal{X}_x - \alpha \sum_{x \rightarrow y} M_{xy} \mathcal{X}_y \\ \nonumber &=& \mathcal{X}_x - \alpha \frac{1}{d(x)} \sum_{x \rightarrow y} \mathcal{X}_y , \end{eqnarray} where the second equality holds in general, and the third holds when $M_{xy} = \frac{1}{d(x)}$ is the natural random walk, in which case we can take it out of the summation. The important point for us here is that if $v$ is highly concentrated, e.g., if it is an indicator vector of a small set of nodes, and if $\alpha$ is not too close to $1$, then most of the updates done by the linear solver or by an iterative method either don't update much or update below a threshold level. Motivated by this, we will discuss the so-called \emph{push algorithm}---this uses a particular form of updates to reduce computational burden. This was used in work of Jeh and Widom and also of Berkhin on personalized page rank, and it was used with this name by ACL. Although they all apply it to PageRank, it applies for the steady state of Markov chains with restart, and it is basically an algorithm for spectral ranking with damping (see the Boldi, Lonati, Santini, Vigna paper). \subsection{The basic push procedure} The basic idea of this approach is that, rather than computing an \emph{exact} PPR vector by iterating the corresponding Markov chain (e.g., with vanilla matrix-vector multiplies) until it converges, it is also possible to consider computing an \emph{approximate} PPR vector much more efficiently.
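The ``easy'' inverse problem above can be checked numerically; this is a minimal sketch with a made-up graph, taking $M$ to be the natural random walk matrix $D^{-1}A$.

```python
import numpy as np

# Made-up graph; M is the natural random walk matrix D^{-1} A.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
d = A.sum(axis=1)
M = A / d[:, None]
alpha = 0.85
x = 0
chi_x = np.eye(4)[x]                     # indicator vector of node x

# Preference vector whose spectral ranking is (1-alpha) chi_x:
v = chi_x @ (np.eye(4) - alpha * M)
# It matches the explicit form chi_x - (alpha/d(x)) sum_{x->y} chi_y:
v_manual = chi_x - (alpha / d[x]) * A[x]

# Forward check: the spectral ranking of v is indeed (1-alpha) chi_x.
r = (1 - alpha) * v @ np.linalg.inv(np.eye(4) - alpha * M)
print(np.allclose(v, v_manual), np.allclose(r, (1 - alpha) * chi_x))
```

Note that $v$ here has negative entries, illustrating the caveat that the recovered preference vector need not be a distribution.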
Recall that the PPR vector is the unique solution to \[ \pi_{\alpha}(s) = \left(1-\alpha\right)s + \alpha \pi_{\alpha}(s) W , \] and it can be written as an infinite sum \[ \pi_{\alpha}(s) = \left(1-\alpha\right)s + \left(1-\alpha\right) \sum_{t=1}^{\infty} \alpha^t s W^t . \] With this notation, we can define the following notion of approximation of a PPR vector. \begin{definition} An \emph{$\epsilon$-approximate PageRank vector} for $\pi_{\alpha}(s)$ is any PageRank vector $\pi_{\alpha}(s-r)$, where the vector $r$ is nonnegative and satisfies $r(v) \le \epsilon d_v$, for all $v \in V$. \end{definition} \textbf{Fact.} The approximation error of an $\epsilon$-approximate PageRank vector on any set of nodes $S$ can be bounded in terms of the $\mbox{Vol}(S)$ and $\epsilon$. Here is a basic lemma from ACL. \begin{lemma} For all $\epsilon$-approximate PageRank vectors $\pi_{\alpha}(s-r)$, and for all $S \subset V$, we have \[ \pi_{\alpha}(s) 1^T_S \ge \pi_{\alpha}(s-r) 1^T_S \ge \pi_{\alpha}(s) 1^T_S - \epsilon \mbox{Vol}(S) . \] \end{lemma} Here is an algorithm to compute an $\epsilon$-approximate PageRank vector; let's call this algorithm \textsc{ApproxPR($s,\alpha,\epsilon$)}. \begin{enumerate} \item Let $p=\vec{0}$ and $r = \vec{s}$. \item While $r_u \ge \epsilon d_u$ for some vertex $u$, \begin{itemize} \item Pick a $u$ such that $r_u \ge \epsilon d_u$ \item Apply \textsc{Push($u$)} \end{itemize} \item Return the vectors $p$ and $r$ \end{enumerate} And here is the \textsc{Push($u$)} algorithm that is called by the \textsc{ApproxPR($s,\alpha,\epsilon$)} algorithm. \begin{enumerate} \item Let $p^{\prime} = p$ and $r^{\prime} = r$, except for the following updates: \begin{compactitem} \item $p_u^{\prime} = p_u + \alpha r_u$ \item $r_u^{\prime} = \left( 1-\alpha \right) \frac{r_u}{2}$ \item $r_v^{\prime} = r_v + \left(1-\alpha\right) \frac{r_u}{2d_u}$, for all vertices $v$ such that $(u,v) \in E$.
\end{compactitem} \end{enumerate} Note that we haven't specified the order in which the pushes are executed, i.e., in which the \textsc{Push($u$)} algorithm is called by the \textsc{ApproxPR($s,\alpha,\epsilon$)} algorithm, and so they can be done in different ways, leading to slightly different algorithms. Here is the theorem that ACL establishes about this procedure. \begin{theorem} Algorithm \textsc{ApproxPR($s,\alpha,\epsilon$)} has the following properties. \begin{itemize} \item For all starting vectors $s$ with $\|s\|_1 \le 1$ and for all $\epsilon \in (0,1]$, the algorithm returns an $\epsilon$-approximate $p$ for $p_{\alpha}(s)$. \item The support of $p$ satisfies \[ \mbox{Vol(Supp}(p)) \le \frac{2}{(1+\alpha)\epsilon} . \] \item The running time of the algorithm is $O\left( \frac{1}{\alpha\epsilon} \right)$. \end{itemize} \end{theorem} The idea of the proof---outlined roughly above---is that Algorithm \textsc{Push($u$)} preserves the approximate PageRank condition $p = \pi \left( s-r \right)$; and the stopping condition ensures that it is an $\epsilon$ approximation. The running time follows since $\|r\|_1=1$ and it decreases by at least $\alpha\epsilon d_u$ each time \textsc{Push($u$)} is called. \subsection{More discussion of the basic push procedure} The basic idea of the method is to keep track of two vectors, $p$ which is the vector of current approximations and $r$ which is the vector of residuals, such that the following global invariant is satisfied at every step of the algorithm. \begin{equation} \label{eqn:invariant} p + \left(1-\alpha\right) r \left(I-\alpha M\right)^{-1} = \left(1-\alpha\right) v \left(I-\alpha M\right)^{-1} . \end{equation} Initially, $p=0$ and $r=v$, and so this invariant is trivially satisfied. Subsequently, at each step the push method increases $p$ and decreases $r$ to keep the invariant satisfied. To do so, it iteratively ``pushes'' some node $x$.
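The \textsc{ApproxPR}/\textsc{Push} pseudocode above can be transcribed directly; here is a minimal sketch on a made-up graph. (In ACL's convention, these particular updates approximate $\alpha s \left(I - (1-\alpha) W_{LAZY}\right)^{-1}$, with $W_{LAZY}$ the lazy walk matrix and $\alpha$ playing the role of a teleportation probability.)

```python
import numpy as np

# Made-up small graph.
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
d = A.sum(axis=1)
n = len(d)
alpha, eps = 0.15, 1e-4

p = np.zeros(n)
r = np.eye(n)[0]                     # seed vector s = chi_0
while True:
    above = np.where(r >= eps * d)[0]
    if len(above) == 0:              # stop: r(v) < eps d(v) everywhere
        break
    u = above[0]                     # any push order is allowed
    ru = r[u]
    p[u] += alpha * ru                            # p'_u = p_u + alpha r_u
    r[u] = (1 - alpha) * ru / 2                   # r'_u = (1-alpha) r_u / 2
    r[A[u] > 0] += (1 - alpha) * ru / (2 * d[u])  # spread rest to neighbors

print(p.round(4), float(r.sum()))
```

Each push moves $\alpha r_u \ge \alpha\epsilon d_u$ of residual mass into $p$, which is exactly the accounting behind the running-time bound.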
A \emph{push on $x$} adds $\left(1-\alpha\right) r_x \mathcal{X}_x$ to the vector $p$. To keep the invariant true, the method must update $r$. To do so, think of $r$ as a preference vector, in which we are just trying to solve Eqn.~(\ref{eqn:pref-vect}). By linearity, if we subtract \[ r_x \left( \mathcal{X}_x - \alpha \sum_{x \rightarrow y} M_{xy} \mathcal{X}_y \right) \] from $r$, then the value $\left( 1-\alpha \right) r \left( I-\alpha M \right)^{-1}$ will decrease by $\left(1-\alpha\right) r_x \mathcal{X}_x$, thus preserving the invariant. This is a good choice since \begin{compactitem} \item We zero out the $x^{th}$ entry of $r$. \item We add small positive quantities to a small set of entries (a small set, if the graph is sparse, which is the use case we are considering). This increases the $\ell_1$ norm of $p$ by $\left( 1-\alpha\right)r_x$, and it decreases $r$ by at least the same amount. \end{compactitem} Since we don't create negative entries in this process, it always holds that \[ \|p\|_1 + \|r\|_1 \le 1 , \] and thus we can keep track of the two norms at each update. The error in the estimate is then given by the following. \begin{eqnarray*} \| \left(1-\alpha \right) r \left( I - \alpha M \right)^{-1} \|_1 &=& \left(1-\alpha\right) \| r \sum_{k \ge 0} \alpha^k M^k \|_1 \\ &\le& \left(1-\alpha\right) \|r \|_1 \sum_{k \ge 0} \alpha^k \| M^k \|_1 \\ &\le& \| r \|_1 \end{eqnarray*} So, in particular, we can control the absolute error of the algorithm by controlling the $\ell_1$ error of the residual. \subsection{A different interpretation of the same process} Here is a different interpretation of the push method. Some might find it useful---if so, good, and if not, then just ignore the following.
Just as various random walks can be related to diffusion of heat/mass, one can think of PageRank as related to the diffusion of a substance where some fraction of that substance gets stuck in place at each time step---e.g., diffusing paint, where the point is that part of the paint dries in place and then stops diffusing/flowing. (Think here of doing the updates to push asynchronously.) At each time step, a $1-\alpha$ fraction of the paint dries in place, and an $\alpha$ fraction of the paint does a lazy random walk. So, we need to keep track of two quantities: the amount of wet paint (that still moves at the next step) and the amount of dry paint (that is probability mass that won't move again). Let \begin{itemize} \item $p : V \rightarrow \mathbb{R}$ be a vector that says how much paint is stuck at each vertex. \item $r : V \rightarrow \mathbb{R}$ be a vector saying how much wet paint remains at each vertex. \end{itemize} Then, at $t=0$, $r^0 = \mathcal{X}_u$, and these vectors evolve as \begin{eqnarray*} p^{t+1} &=& p^t + \left(1-\alpha\right) r^t \\ r^{t+1} &=& \alpha r^t \hat{W} . \end{eqnarray*} Given this, if we let $p^{\infty}$ be where paint is dried at the end of the process, then \begin{eqnarray*} p^{\infty} &=& \left(1-\alpha\right) \sum_{t \ge 0} r^t \\ &=& \left( 1-\alpha \right) \sum_{t \ge 0} \alpha^t r^0 \hat{W}^t \\ &=& \left(1-\alpha\right) \sum_{t \ge 0} \alpha^t \mathcal{X}_u \hat{W}^t \end{eqnarray*} This is simply PPR, with a different scaling and $\alpha$. Recall here that $\hat{W} = \frac{1}{2}I+\frac{1}{2}W$.
Since $\left(I-X\right)^{-1} = \sum_{i=0}^{\infty} X^{i}$, if the spectral radius of $X$ is less than $1$, it follows that \begin{eqnarray*} p^{\infty} &=& \left(1-\alpha\right) \mathcal{X}_u \left(I - \alpha \hat{W} \right)^{-1} \\ &=& \left(1-\alpha \right) \mathcal{X}_u \left( \left(1-\frac{\alpha}{2} \right) I - \frac{\alpha}{2}W \right)^{-1} \\ &=& \gamma \mathcal{X}_u \left( I - \left( 1-\gamma \right) W \right)^{-1} , \end{eqnarray*} where the parameter $\gamma = \frac{2(1-\alpha)}{2-\alpha}$. Note that we don't need to follow the above time-stepped process: we can compute this by solving a linear equation, or by running a random walk in which one keeps track of two vectors that have this interpretation in terms of diffusing paint. But, Jeh-Widom and Berkhin note that rather than doing it with these equations, one can instead choose a vertex, say that a $1-\alpha$ fraction of the paint at that vertex is dry, and then push the wet paint to the neighbors according to the above rule. (That is, we can ignore time and do it asynchronously.) But this just gives us the push process. To see this, let $\pi_{p,r}$ be the vector of ``dried paint'' that we eventually compute. Then \begin{eqnarray*} \pi_{p,r} &=& p + \left(1-\alpha\right) \sum_{t \ge 0} r \alpha^t W^t \\ &=& p + \left( 1-\alpha \right) r \left( I -\alpha W \right)^{-1} \end{eqnarray*} In this case, the updates we wrote above, written another way, are the following: pick a vertex $u$ and create $p^{\prime}$ and $r^{\prime}$ as follows. \begin{itemize} \item $p^{\prime}(u) = p(u) + \left(1-\alpha\right) r(u)$ \item $r^{\prime}(u) = 0$ \item $r^{\prime}(v) = r(v) + \frac{\alpha}{d(u)} r(u)$, for all neighbors $v$ of $u$. \end{itemize} Then, it can be shown that \[ \pi_{p^{\prime},r^{\prime}} = \pi_{p,r} , \] which is the invariant that we noted before. So, the idea of computing the approximate PageRank vectors is the following.
\begin{itemize} \item Pick a vertex where $r(u)$ is large or largest and distribute the paint according to that rule. \item Choose a threshold $\epsilon$ and don't bother to process a vertex if $r(u) \le \epsilon d(u)$. \end{itemize} Then, it can be shown that \begin{lemma} The process will stop within $\frac{1}{\epsilon\left(1-\alpha\right)}$ iterations. \end{lemma} \subsection{Using this to find sets of low-conductance} This provides a way to ``explore'' a large graph. Two things to note about this. \begin{itemize} \item This way is very different than DFS or BFS, which are not so good if the diameter is very small, as it is in many real-world graphs. \item The sets of nodes that are found will be different than with a geodesic metric. In particular, diffusions might get stuck in small-conductance sets. \end{itemize} Following up on that second point, ACL showed how to use approximate PPR to find sets of low conductance, if they start from a random vertex in the set. Importantly, since they do it with the Approximate PPR vector, the number of nodes that is touched is proportional to the size of the output set. So, in particular, if there is a set of small conductance, then the algorithm will run very quickly. Here is the basic idea. ACL first did it, and a simpler approach/analysis can be found in AC07. \begin{itemize} \item Given $\pi_v$, construct $q_v(u) = \frac{ \pi_v(u) }{ d(u) }$. \item Number the vertices, WLOG, such that $q_v(1) \ge q_v(2) \ge \cdots \ge q_v(n)$, and let $S_k = \{ 1,\ldots,k \}$. \end{itemize} Then, it can be shown that, starting from a random vertex in a set of low conductance, one of the sweep sets $S_k$ has low conductance. Computationally, these and related diffusion-based methods use only local information in the graph.
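Here is a minimal sketch of this sweep-cut construction on a made-up graph with a planted low-conductance cut (two 4-cliques joined by a single edge); for simplicity the exact PPR vector is used in place of the approximate one.

```python
import numpy as np

# Two 4-cliques joined by one edge: a planted low-conductance cut.
n = 8
A = np.zeros((n, n))
A[:4, :4] = 1 - np.eye(4)
A[4:, 4:] = 1 - np.eye(4)
A[3, 4] = A[4, 3] = 1
d = A.sum(axis=1)
W = A / d[:, None]

# Exact PPR seeded at vertex 0, from pi (I - beta W) = (1-beta) s.
beta = 0.9
s = np.eye(n)[0]
pi = np.linalg.solve((np.eye(n) - beta * W).T, (1 - beta) * s)

# Sweep cut over the degree-normalized vector q(u) = pi(u) / d(u).
order = np.argsort(-pi / d)
vol = d.sum()
best, best_h = None, np.inf
for k in range(1, n):
    mask = np.zeros(n, dtype=bool)
    mask[order[:k]] = True
    cut = A[mask][:, ~mask].sum()                     # |E(S, S-bar)|
    h = cut / min(d[mask].sum(), vol - d[mask].sum()) # Cheeger ratio h_S
    if h < best_h:
        best_h, best = h, set(int(i) for i in order[:k])

print(sorted(best), best_h)
```

On this example the sweep recovers the planted clique containing the seed, whose Cheeger ratio is $1/13$.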
If one can look at the entire graph, then one can get information about global eigenvectors, which approximate sparse cuts. But, if the graphs are huge, then one might be interested in the following question: \begin{itemize} \item Given a graph $G=(V,E)$ and a node $v \in V$, find a set of nodes $S \subseteq V$ such that the Cheeger ratio $h_S$ is small. \end{itemize} Recall that $h_S = \frac{|E(S,\bar{S})|}{\min \{ \mbox{Vol}(S),\mbox{Vol}(\bar{S}) \} }$. Before, we were interested in good global clusters, and thus we defined $h_G = \min_{S \subset V} h_S$ and tried to optimize it, e.g., by computing global eigenvectors; but here we are not interested in global eigenvectors and global clusters. Note that it is not immediately obvious that diffusions and PageRank are related to this notion of local clustering, but we will see that they are. In particular, we are interested in the following. \begin{itemize} \item Given a seed node $s$, we want to find a small set of nodes that is near $s$ and that is well connected internally but that is less well connected with the rest of the graph. \end{itemize} The intuition is that if a diffusion is started in such a cluster, then it is unlikely to leave the cluster since the cluster is relatively poorly-connected with the rest of the graph. This is often true, but one must be careful, since a diffusion that starts, e.g., at the boundary of a set $S$ can leave $S$ at the first step. So, the precise statement is going to be rather complicated. Here is a lemma for a basic diffusion process of a graph $G$. \begin{lemma} Let $C \subset V$ be a set with Cheeger ratio $h_C$, and let $t_0$ be a time parameter. Define $C_{t_0}$ to be a subset of $C$ such that for all $v \in C_{t_0}$ and all $t \le t_0$, we have that $\vec{1}_v W^t \vec{1}_{\bar{C}}^T \le t_0 h_C$. Then, the volume of $C_{t_0}$ is such that \[ \mbox{Vol}\left( C_{t_0} \right) \ge \frac{1}{2} \mbox{Vol}\left( C \right) .
\] \end{lemma} This is a complicated lemma, but it implies that for a vertex set $C \subset V$ with small Cheeger ratio, there are many nodes $v \in C$ such that the probability that a random walk started at $v$ leaves $C$ is~low. There is a similar result for the PageRank diffusion process. The statement is as follows. \begin{lemma} Let $C \subset V$ have Cheeger ratio $h_C$, and let $\alpha \in (0,1]$. Then, there is a subset $C_{\alpha} \subseteq C$ with volume $\mbox{Vol}\left( C_{\alpha} \right) \ge \frac{1}{2} \mbox{Vol}\left( C \right)$ such that for all $v \in C_{\alpha}$, the PageRank vector $\pi_{\alpha}(\vec{1}_v)$ satisfies \[ \pi_{\alpha}\left(\vec{1}_v\right)^T \vec{1}_C \ge 1- \frac{h_C}{\alpha} . \] \end{lemma} This is also a complicated lemma, but it implies that for any vertex set $C \subseteq V$ with small $h_C$, there are many nodes $v \in C$ for which PPR, using $v$ as a start node, is small on nodes outside of~$C$. These and related results show that there is a relationship or correlation between the Cheeger ratio and diffusion processes, even for ``short'' random walks that do not reach the asymptotic limit. In particular, for a set $C \subset V$ with small $h_C$, it is relatively hard for a diffusion process started within $C$ to leave $C$. We can use these ideas to get \emph{local spectral clustering algorithms}. To do this, we need a method for creating vertex cuts from a PR or random walk vector. Here is the way. \begin{itemize} \item Say that $\pi_{\alpha}(s)$ is a PPR vector. \item Then, create a set $C_s$ by doing a sweep cut over $\pi_{\alpha}(s)$. \end{itemize} That is, do the usual sweep cut, except on the vector returned by the algorithm, rather than the leading nontrivial eigenvector of the Laplacian. Here is one such lemma that can be proved about such a local spectral algorithm.
\begin{lemma} If $\pi_{\alpha}(s)$ is a PPR vector with $\|s\|_1 \le 1$ and there exists $S \subseteq V$ and a constant $\delta$ such that $\pi_{\alpha}(s)^T \vec{1}_S - \frac{\mbox{Vol}(S)}{\mbox{Vol}(G)} > \delta$, then \[ h_{C_s} < \left( \frac{12\alpha\log\left( 4\sqrt{\mbox{Vol}(S)}/\delta \right)}{\delta} \right)^{1/2} . \] \end{lemma} This too is a complicated lemma, but the point is that if there exists a set $S$ where the PPR is much larger than the stationary distribution $\frac{d}{\mbox{Vol}(G)}$, then a sweep cut over $\pi_{\alpha}(s)$ produces a set $C_s$ with low Cheeger ratio: $O\left( \sqrt{ \alpha \log \left( \mbox{Vol}(S) \right) } \right)$. This result is for PPR; there is a similar result for \emph{approximate} PPR, which has a still more complicated statement. Two things to note. \begin{itemize} \item While the statements of these theoretical results are quite complicated, these methods do very well in practice in many applications. (We won't discuss this much.) \item These algorithms were originally used as primitives for the Laplacian solvers, a topic to which we will return in a few weeks. \end{itemize} While one can prove results about the output of these algorithms, one might also be interested in what these algorithms optimize. Clearly, they \emph{approximately} optimize something related to a local version of the global spectral partitioning objective; so the question is what do they optimize \emph{exactly}. This is the topic we will turn to next---it will turn out that there are interesting connections here with implicit regularization ideas. \section{(04/07/2015): Local Spectral Methods (3 of 4): An optimization perspective on local spectral methods} Reading for today.
\begin{compactitem} \item ``A Local Spectral Method for Graphs: with Applications to Improving Graph Partitions and Exploring Data Graphs Locally,'' in JMLR, by Mahoney, Orecchia, and Vishnoi \end{compactitem} Last time, we considered local spectral methods that involve short random walks started at a small set of localized seed nodes. Several things are worth noting about this. \begin{itemize} \item The basic idea is that these random walks tend to get trapped in good conductance clusters, if there is a good conductance cluster around the seed node. A similar statement holds for approximate localized random walks, e.g., the ACL push procedure---meaning, in particular, that one can implement them ``quickly,'' e.g., with the push algorithm, without even touching all of the nodes in $G$. \item The exact statement of the theorems that can be proven about how these procedures can be used to find good locally-biased clusters is quite technically complicated---since, e.g., one could step outside of the initial set of nodes if one starts near the boundary---certainly the statement is much more complicated than that for the vanilla global spectral method. \item The global spectral method is on the one hand a fairly straightforward algorithm (compute an eigenvector or some other related vector and then perform a sweep cut with it) and on the other hand a fairly straightforward objective (optimize the Rayleigh quotient subject to a few reasonable constraints). \item Global spectral methods often do very well in practice. \item Local spectral methods often do very well in practice. \item A natural question is: what objective do local spectral methods optimize---exactly, not approximately? Or, relatedly, can one construct an objective that is quickly-solvable and that also comes with similar locally-biased Cheeger-like guarantees?
\end{itemize} To this end, today we will present a local spectral ansatz that will have several appealing properties: \begin{itemize} \item It can be computed fairly quickly, as a PPR. \item It comes with locally-biased Cheeger-like guarantees. \item It has the same form as several of the semi-supervised objectives we discussed. \item Its solution touches all the nodes of the input graph (and thus it is not as quick to compute as the push procedure which does not). \item The strongly local spectral methods that don't touch all of the nodes of the input graph are basically $\ell_1$-regularized variants of it. \end{itemize} \subsection{A locally-biased spectral ansatz} Here is what we would like to do. \begin{itemize} \item We would like to introduce an ansatz for an objective function for locally-biased spectral graph partitioning. This objective function should be a locally-biased version of the usual global spectral partitioning objective; its optimum should be relatively-quickly computable; it should be useful to highlight locally-interesting properties in large data graphs; and it should have some connection to the local spectral algorithms that we have been discussing. \end{itemize} In addition to having an optimization formulation of locally-biased spectral partitioning methods, there are at least two reasons one would be interested in such an objective. \begin{itemize} \item A small sparse cut might be poorly correlated with the second (or even all) global eigenvectors of $L$, and so it might be invisible to global spectral methods. \item We might have exogenous information about a specific region of a large graph in which we are most interested, and so we might want a method that finds clusters near that region, e.g., to do exploratory data analysis. \end{itemize} Here is the approach we will take. \begin{itemize} \item We will start with the usual global spectral partitioning objective function and add to it a certain locality constraint. 
\item This program will be a non-convex problem (as is the global spectral partitioning problem), but its solution will be computable as a linear equation that is a generalization of the PR spectral ranking method. \item In addition, we will show that it can be used to find locally biased partitions near an input seed node, it has connections with the ACL push-based local spectral method, etc. \end{itemize} Let's set notation. The Laplacian is $L=D-A$; and the normalized Laplacian is $\mathcal{L}=D^{-1/2}LD^{-1/2}$. The degree-weighted inner product is given by $x^TDy = \sum_{i=1}^{n} x_iy_id_i$. In this case, the weighted complete graph is given by \[ A_{K_n} = \frac{1}{\mbox{Vol}(G)}D11^TD , \] in which case $D_{K_n}=D_G$ and thus \[ L_{K_n}=D_{K_n} - A_{K_n} = D_G - \frac{1}{\mbox{Vol}(G)}D11^TD . \] Given this notation, see the left panel of Figure~\ref{fig:spectral} for the usual spectral program $\mathsf{Spectral}(G)$, and see the right panel of Figure~\ref{fig:spectral} for \textsf{LocalSpectral}$(G,s,\kappa)$, a locally-biased spectral program. (Later, we'll call the \textsf{LocalSpectral} objective ``MOV,'' to compare and contrast it with the ``ACL'' push~procedure.) \begin{center} \begin{figure} \begin{minipage}{0.5\textwidth} \begin{alignat*}{4} &\text{min} & x^T L_{G} x \\ &\text{s.t.} & x^T D_{G} x = 1 \\ & & (x^T D_{G} 1)^2 = 0 \\ & & x \in \mathbb{R}^V \end{alignat*} \end{minipage} \begin{minipage}{0.5\textwidth} \begin{alignat*}{4} &\text{min} & x^T L_{G} x \\ &\text{s.t.} & x^T D_{G} x = 1 \\ & & (x^T D_{G} 1)^2 = 0 \\ & &(x^T D_{G} s) ^2 \geq \kappa \\ & & x \in \mathbb{R}^V \end{alignat*} \end{minipage} \caption{Global and local spectral optimization programs. Left: The usual spectral program $\mathsf{Spectral}(G)$. Right: A locally-biased spectral program \textsf{LocalSpectral}$(G,s,\kappa)$. In both cases, the optimization variable is the vector $x \in \mathbb{R}^{n}$. 
} \label{fig:spectral} \end{figure} \end{center} In the above, we assume WLOG that \[ s \text{ is such that } \left\{ \begin{array}{l l} s^TDs=1 & \\ s^TD1=0 & \end{array} \right. . \] This ``WLOG'' just says that one can subtract off the part of $s$ along the all-ones vector; we could have parametrized the problem to include this component and gotten similar results to what we will present below, had we not done this. Note that $s$ can actually be any vector (that isn't in the span of the all-ones vector); but it is convenient to think of it as an indicator vector of a small ``seed set'' of nodes $S \subset V$. The constraint $\left(x^TDs\right)^2\ge\kappa$ says that the projection of the solution $x$ onto $s$, in the $D$-inner product, is at least $\sqrt{\kappa}$ in absolute value, where $\kappa\in(0,1)$. Here is the interpretation of this constraint. \begin{itemize} \item The vector $x$ must be in a spherical cap centered at $s$ with angle at most $\mbox{arccos}\left(\sqrt{\kappa}\right)$ from $s$. \item Higher values of $\kappa$ correspond to finding a vector that is more well-correlated with the seed vector. While the technical details are very different than with strongly local spectral methods such as ACL, informally one should think of this as corresponding to shorter random walks or, relatedly, higher values of the teleportation parameter that teleports the walk back to the original seed set of nodes. \item If $\kappa=0$, then there is no correlation constraint, in which case we recover \textsf{Spectral}($G$). \end{itemize} \subsection{A geometric notion of correlation} Although \textsf{LocalSpectral} is just an objective function and no geometry is explicitly imposed, there is a geometric interpretation of it in terms of a notion of correlation between cuts in $G$. Let's make explicit the geometric notion of correlation between cuts (or, equivalently, between partitions, or sets of nodes) that is used by \textsf{LocalSpectral}.
Given a cut $(T,\bar{T})$ in a graph $G=(V,E)$, a natural vector in $\mathbb{R}^{n}$ to associate with it is its indicator/characteristic vector, in which case the correlation between a cut $(T,\bar{T})$ and another cut $(U,\bar{U})$ can be captured by the inner product of the characteristic vectors of the two cuts. Since we are working on the space orthogonal to the degree-weighted all-ones vector, we'll do this after we remove from the characteristic vector its projection along the all-ones vector. In that case, again, a notion of correlation is related to the inner product of two such vectors for two cuts. More precisely, given a set of nodes $T \subseteq V$, or equivalently a cut $(T,\bar{T})$, one can define the unit vector $s_{T}$~as \[ s_T \stackrel{\textup{def}}{=} \sqrt{\frac{\mathrm{vol}(T)\mathrm{vol}(\bar{T})}{2m}} \; \left(\frac{1_T}{\mathrm{vol}(T)} - \frac{1_{\bar{T}}}{\mathrm{vol}(\bar{T})}\right) , \] in which case \[ s_{T}(i) = \left\{ \begin{array}{ll} \sqrt{\frac{\mathrm{vol}(T)\mathrm{vol}(\bar{T})}{2m}} \cdot \frac{1}{\mathrm{vol}(T)} & \mbox{if $i \in T $} \\ - \sqrt{\frac{\mathrm{vol}(T)\mathrm{vol}(\bar{T})}{2m}} \cdot \frac{1}{\mathrm{vol}(\bar{ T})} & \mbox{if $i \in \bar{T}$} \end{array} \right. . \] Several observations are immediate from this definition. \begin{itemize} \item One can replace $s_{T}$ by $s_{\bar{T}}$ and the correlation remains the same with any other set, and so this is well-defined. Also, $s_T = - s_{\bar{T}}$; but since here we only consider quadratic functions of $s_T,$ we can consider both $s_T$ and $s_{\bar{T}}$ to be representative vectors for the cut $(T, \bar{T}).$ \item Defined this way, it immediately follows that $ s_T^T D_G 1 = 0$ and that $ s_T^T D_G s_T = 1$. 
Thus, $s_T \in \mathcal{S}_{D}$ for $T \subseteq V$, where we denote by $\mathcal{S}_{D}$ the set of vectors $\{x \in \mathbb{R}^V: x^T D_G 1 = 0\}$; and $s_T$ can be seen as an appropriately normalized version of the vector consisting of the uniform distribution over $T$ minus the uniform distribution over $\bar{T}$. \item One can introduce the following measure of correlation between two sets of nodes, or equivalently between two cuts, say a cut $(T, \bar{T})$ and a cut $(U, \bar{U})$: \[ K(T,U) \stackrel{\textup{def}}{=} ( s_T^T D_G s_U )^2 . \] Then it is easy to show that: $K(T,U) \in [0,1]$; $K(T,U) = 1$ if and only if $T=U$ or $\bar{T}=U$; $K(T,U) = K(\bar{T}, U)$; and $K(T,U) = K(T, \bar{U})$. \item Although we have described this notion of geometric correlation in terms of vectors of the form $s_T \in \mathcal{S}_{D}$ that represent partitions $(T,\bar{T})$, this correlation is clearly well-defined for other vectors $s \in \mathcal{S}_{D}$ for which there is not such a simple interpretation in terms of cuts. \end{itemize} Below we will show that the solution to \textsf{LocalSpectral} can be characterized in terms of a PPR vector. If we were interested in objectives that had solutions of different forms, e.g., the form of a heat kernel, then this would correspond to an objective function with a different constraint, and this would then imply a different form of correlation. \subsection{Solution of \textsf{LocalSpectral}} Here is the basic theorem characterizing the form of the solution of \textsf{LocalSpectral}. \begin{theorem}[Solution Characterization] \label{thm:pagerank} Let $s \in \mathbb{R}^{n}$ be a seed vector such that $s^T D_G 1 =0$, $s^T D_G s = 1$, and $s^T D_G v_2 \neq 0$, where $v_{2}$ is the second generalized eigenvector of $L_G$ with respect to $D_G$. In addition, let $1> \kappa \geq 0$ be a correlation parameter, and let $x^{\star}$ be an optimal solution to \textsf{LocalSpectral}$(G,s,\kappa)$.
Then, there exists some $\gamma \in (-\infty, \lambda_{2}(G))$ and a $c \in [0, \infty]$ such that \begin{equation} \label{eqn:xstar} x^{\star} = c(L_{G}-\gamma D_G)^{+} D_G s. \end{equation} \end{theorem} Before presenting the proof of this theorem, here are several things to note. \begin{itemize} \item $s$ and $\kappa$ are the parameters of the program; $c$ is a normalization factor that rescales the norm of the solution vector to be $1$ (and that can be computed in linear time, given the solution vector); and $\gamma$ is implicitly defined by $\kappa$, $G$, and~$s$. \item The correct setting of $\gamma$ ensures that $(s^T D_{G} x^\star)^2 = \kappa,$ i.e., that $x^\star$ is found exactly on the boundary of the feasible region. \item $x^\star$ and $\gamma$ change as $\kappa$ changes. In particular, as $\kappa$ goes to $1$, $\gamma$ tends to $-\infty$ and $x^\star$ approaches $s$; conversely, as $\kappa$ goes to $0$, $\gamma$ goes to $\lambda_2(G)$ and $x^\star$ tends towards $v_2$, the global eigenvector. \item For a fixed choice of $G$, $s$, and $\kappa$, an $\epsilon$-approximate solution to \textsf{LocalSpectral} can be computed in time $\tilde{O}\left(\frac{m}{\sqrt{\lambda_{2}(G)}} \cdot \log(\frac{1}{\epsilon })\right)$ using the Conjugate Gradient Method; or in time $\tilde{O}\left(m \log(\frac{1}{\epsilon })\right)$ using the Spielman-Teng linear-equation solver (that we will discuss in a few weeks), where the $\tilde{O}$ notation hides $\log\log(n)$ factors. This is true for a fixed value of $\gamma$, and the correct setting of $\gamma$ can be found by binary search. While that is theoretically true, and while there is a lot of work recently on developing practically-fast nearly-linear-time Laplacian-based solvers, this approach might not be appropriate in certain applications. 
For example, in many applications, one has precomputed an eigenvector decomposition of $L_G$, and then one can use those vectors and obtain an approximate solution with a small number of inner products. This can often be much faster in~practice. \end{itemize} In particular, solving \textsf{LocalSpectral} is \emph{not} ``fast'' in the sense of the original local spectral methods, i.e., in that the running time of those methods depends on the size of the output and doesn't depend on the size of the graph. But the running time to solve \textsf{LocalSpectral} \emph{is} fast, in that its solution depends essentially on computing a leading eigenvector of a Laplacian $L$ and/or can be solved with ``nearly linear time'' solvers that we will discuss in a few weeks. While Eqn.~(\ref{eqn:xstar}) is written in the form of a linear equation, there is a close connection between the solution vector $x^\star$ and the Personalized PageRank (PPR) spectral ranking procedure. \begin{itemize} \item Given a vector $s \in \mathbb{R}^{n}$ and a \emph{teleportation} constant $\alpha> 0$, the PPR vector can be written as \[ \mbox{pr}_{\alpha,s}=\left(L_{G}+\frac{1-\alpha}{\alpha}D_{G}\right)^{-1}D_{G}s . \] By setting $\gamma = -\frac{1-\alpha}{\alpha}$, one can see that the optimal solution to \textsf{LocalSpectral} is a generalization of PPR. \item In particular, this means that for high values of the correlation parameter $\kappa$ for which the corresponding $\gamma$ satisfies $\gamma < 0$, the optimal solution to \textsf{LocalSpectral} takes the form of a PPR vector. On the other hand, when $\gamma \geq 0,$ the optimal solution to \textsf{LocalSpectral} provides a smooth way of transitioning from the PPR vector to the global second eigenvector~$v_2$.
\item Another way to interpret this is to say that for values of $\kappa$ such that $\gamma <0$, one could compute the solution to \textsf{LocalSpectral} with a random walk or by solving a linear equation, while for values of $\kappa$ for which $\gamma>0$, one can only compute the solution by solving a linear equation and not by performing a random walk. \end{itemize} About the last point, we have talked about how random walks compute regularized or robust versions of the leading nontrivial eigenvector of $L$---it would be interesting to characterize an algorithmic/statistical tradeoff here, e.g., if/how in this context certain classes of random walk based algorithms are less powerful algorithmically than related classes of linear equation based algorithms, while implicitly computing regularized solutions more quickly for the parameter values for which they are able to compute solutions. \subsection{Proof of Theorem~\ref{thm:pagerank}} Here is an outline of the proof, which essentially involves ``lifting'' a rank-one constraint to obtain an SDP in order to get strong duality to apply. \begin{itemize} \item Although \textsf{LocalSpectral} is not a convex optimization problem, it can be relaxed to an SDP that is convex. \item From strong duality and complementary slackness, the solution to the SDP is rank one. \item Thus, the vector making up the rank-one component of this rank-one solution is the solution to \textsf{LocalSpectral}. \item This vector has the form of a PPR. \end{itemize} Here are some more details. Consider the primal $\textsf{SDP}_p$ and dual $\textsf{SDP}_d$ SDPs, given in the left panel and right panel, respectively, of Figure~\ref{fig:sdp}.
\begin{center} \begin{figure} \begin{minipage}{0.5\textwidth} \begin{alignat*}{4} \quad& &\text{minimize} \quad && L_{G} \circ X\\ & &\text{s.t.} \quad && L_{K_{n}} \circ X = 1\ \\ & & && ( D_{G} s)( D_{G} s)^T \circ X \geq \kappa \\ & & & & X \succeq 0 \end{alignat*} \end{minipage} \begin{minipage}{0.5\textwidth} \begin{alignat*}{4} \quad& &\text{maximize} \quad && \alpha + \kappa \beta\\ & &\text{s.t.} \quad && L_{G} \succeq \alpha L_{K_{n}} + \beta ( D_{G} {s})( D_{G} {s})^T \ \\ & & && \beta \geq 0 \\ & & & & \alpha \in \mathbb{R} \end{alignat*} \end{minipage} \caption{Left: Primal SDP relaxation of \textsf{LocalSpectral}$(G,s, \kappa)$: $\textsf{SDP}_{p}(G,s,\kappa)$. For this primal, the optimization variable is $X \in \mathbb{R}^{n \times n}$ such that $X$ is SPSD. Right: Dual SDP relaxation of \textsf{LocalSpectral}$(G,s, \kappa)$: $\textsf{SDP}_{d}(G,s,\kappa)$. For this dual, the optimization variables are $\alpha,\beta\in\mathbb{R}$.} \label{fig:sdp} \end{figure} \end{center} Here is a sequence of claims. \begin{claim} The primal SDP, $\textsf{SDP}_p$, is a relaxation of \textsf{LocalSpectral}. \end{claim} \begin{proof} Consider $x\in\mathbb{R}^{n}$, a feasible vector for \textsf{LocalSpectral}. Then, the SPSD matrix $X=xx^T$ is feasible for \textsf{SDP}$_p$ and achieves the same objective value. \end{proof} \begin{claim} Strong duality holds between $\textsf{SDP}_p$ and $\textsf{SDP}_d$. \end{claim} \begin{proof} The program $\textsf{SDP}_p$ is convex, and so it suffices to check that Slater's constraint qualification conditions hold for $\textsf{SDP}_p$. To do so, consider $X=ss^T$. Then, \[ \left(D_Gs\right)\left(D_Gs\right)^T \circ ss^T = \left(s^TD_Gs\right)^2 = 1 > \kappa . \] \end{proof} \begin{claim} The following feasibility and complementary slackness conditions are sufficient for a primal-dual pair $X^{*}$, $\alpha^{*}$, $\beta^{*}$ to be an optimal solution.
The feasibility conditions are: \begin{eqnarray} \nonumber L_{K_{n}} \circ X^\star &=& 1 , \label{F1} \\ \nonumber ( D_{G} {s})( D_{G} {s})^T \circ X^\star &\geq& \kappa , \label{F2} \\ L_{G}- \alpha^\star L_{K_{n}} - \beta^\star ( D_{G} {s})( D_{G} {s})^T &\succeq& 0 , \mbox{ and} \label{F3} \\ \nonumber \beta^\star &\geq& 0 \label{F4} , \end{eqnarray} and the complementary slackness conditions are: \begin{eqnarray} \nonumber \alpha^\star( L_{K_{n}} \circ X^\star - 1) &=& 0 , \label{C1} \\ \beta^\star ( ( D_{G} {s})( D_{G} {s})^T \circ X^\star - \kappa) &=& 0 \label{C2} , \mbox{ and} \\ X^\star \circ ( L_{G}- \alpha^\star L_{K_{n}} - \beta^\star ( D_{G} {s})( D_{G} {s})^T ) &=& 0 \label{C3} . \end{eqnarray} \end{claim} \begin{proof} This follows from the convexity of $\textsf{SDP}_p$ and Slater's condition. \end{proof} \begin{claim} The feasibility and complementary slackness conditions, coupled with the assumptions of the theorem, imply that $X^{*}$ is rank one and that, when $\alpha^{\star} < \lambda_{2}(G)$, $\beta^{*} > 0$. \end{claim} \begin{proof} If we plug $v_{2}$ into Eqn.~\eqref{F3}, then we obtain that $ v_{2}^{T}L_{G}v_{2} - \alpha^{\star} -\beta^{\star} (v_{2}^T D_{G} s)^{2} \geq 0.$ But $ v_{2}^{T}L_{G}v_{2}=\lambda_{2}(G)$ and $\beta^{\star} \geq 0.$ Hence, $\lambda_{2}(G) \geq \alpha^{\star}.$ Suppose $\alpha^\star = \lambda_2(G).$ As $s^T D_{G} v_2 \neq 0,$ it must be the case that $\beta^\star = 0.$ Hence, by Equation~\eqref{C3}, we must have $X^\star \circ L_{G} = \lambda_2(G),$ which implies that $X^\star = v_2v_2^T,$ {\em i.e.}, the optimum for \textsf{LocalSpectral} is the global eigenvector $v_2$. This corresponds to a choice of $\gamma = \lambda_2(G)$ and $c$ tending to infinity.
Otherwise, we may assume that $\alpha^\star< \lambda_2(G).$ Hence, since $G$ is connected and $\alpha^{\star} <\lambda_{2}(G),$ $L_{G}-\alpha^{\star}L_{K_{n}}$ has rank exactly $n-1$ and kernel parallel to the vector $1.$ From the complementary slackness condition \eqref{C3} we can deduce that the image of $X^{\star}$ is in the kernel of $ L_{G}- \alpha^\star L_{K_{n}} - \beta^\star ( D_{G} {s})( D_{G} {s})^T.$ If $\beta^\star > 0,$ we have that $ \beta^\star ( D_{G} {s})( D_{G} {s})^T$ is a rank one matrix and, since $s^T D_{G} 1 = 0,$ it reduces the rank of $L_{G}-\alpha^{\star}L_{K_{n}}$ by exactly one. If $\beta^{\star}=0,$ then $X^{\star}$ must be $0$, which is not possible if $\textsf{SDP}_{p}(G,s,\kappa)$ is feasible. Hence, the rank of $ L_{G}- \alpha^\star L_{K_{n}} - \beta^\star ( D_{G} {s})( D_{G} {s})^T$ must be exactly $n-2.$ As we may assume that $1$ is in the kernel of $X^\star$, $X^{\star}$ must be of rank one. This proves the claim. \end{proof} \textbf{Remark.} It would be nice to have a cleaner proof of this that is more intuitive and that doesn't rely on ``boundary condition'' arguments as much. Now we complete the proof of the theorem. From the claim it follows that $X^{\star}=x^{\star}x^{\star T}$, where $x^{\star}$ satisfies the equation $$ (L_{G}- \alpha^\star L_{K_{n}} - \beta^\star ( D_{G} {s})( D_{G} {s})^T)x^{\star}=0. $$ From the second complementary slackness condition, Equation~\eqref{C2}, and the fact that $\beta^{\star}>0,$ we obtain that $ (x^{\star})^T D_{G} s = \pm \sqrt{\kappa}.$ Thus, $x^{\star} =\pm \beta^{\star} \sqrt{\kappa} (L_{G}-\alpha^{\star}L_{K_{n}})^{+}D_{G}s,$ as required. (Here, since $s^T D_{G} 1=0$ and $x^{\star T} D_{G} 1 = 0$, replacing $L_{K_{n}}$ by $D_{G}$ changes the solution only along the all-ones direction, which gives the form in the theorem with $\gamma=\alpha^{\star}$.) \subsection{Additional comments on the \textsf{LocalSpectral} optimization program} Here, we provide some additional discussion for this locally-biased spectral partitioning objective.
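Before moving on, the solution characterization of Theorem~\ref{thm:pagerank} can be checked numerically by computing $x^{\star} \propto (L_{G}-\gamma D_{G})^{+} D_{G} s$ directly with dense linear algebra. The following is an illustrative sketch only (the example graph, the choice of $\gamma$, and all names are ours); at scale one would use a fast linear solver, and $\gamma$ would be set by binary search so that $(x^{\star T} D_G s)^2 = \kappa$.

```python
import numpy as np

def local_spectral_solution(A, s, gamma):
    """Compute x proportional to (L_G - gamma*D_G)^+ D_G s, with x^T D_G x = 1.

    A: symmetric adjacency matrix; s: seed vector with s^T D 1 = 0 and
    s^T D s = 1; gamma must be below the second generalized eigenvalue.
    Illustrative sketch of the form given in the solution-characterization
    theorem, not a scalable solver.
    """
    D = np.diag(A.sum(axis=1))
    L = D - A                                    # graph Laplacian
    x = np.linalg.pinv(L - gamma * D) @ (D @ s)
    return x / np.sqrt(x @ D @ x)                # rescale so x^T D x = 1
```

For $\gamma < 0$ the matrix $L_G - \gamma D_G$ is positive definite, so the pseudoinverse is an inverse and $(L_G - \gamma D_G)\,x^{\star} \propto D_G s$ exactly; this is the PPR regime, with $\gamma = -\frac{1-\alpha}{\alpha}$.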
Recall that the proof we provided for Cheeger's Inequality showed that in some sense the usual global spectral methods ``embed'' the input graph $G$ into a complete graph; we would like to say something similar here. To do so, observe that the dual of \textsf{LocalSpectral} is given by the following. \begin{eqnarray*} \label{prog:spectral-local-d1} &\text{maximize} & \alpha + \beta \kappa \\ &\text{s.t.} & L_{G} \succeq \alpha L_{K_n} + \beta \Omega_T \\ & & \beta \ge 0 , \end{eqnarray*} where $\Omega_T=D_Gs_Ts_T^TD_G$. Alternatively, by subtracting the second constraint of \textsf{LocalSpectral} from the first constraint, it follows that $$ x^T\left(L_{K_n}-L_{K_n}s_Ts_T^TL_{K_n}\right)x \le 1-\kappa . $$ Then it can be shown that $$ L_{K_n}-L_{K_n}s_Ts_T^TL_{K_n} = \frac{L_{K_{T}}}{\mathrm{vol}(\bar{T})} + \frac{L_{K_{\bar{T}}}}{\mathrm{vol}(T)} , $$ where $L_{K_{T}}$ is the Laplacian of the $D_G$-weighted complete graph on the vertex set $T$. Thus, \textsf{LocalSpectral} is equivalent~to \begin{eqnarray*} \label{prog:spectral-local-p2} &\text{minimize} & x^T L_{G} x \\ &\text{s.t.} & x^T L_{K_n} x = 1 \\ & & x^T\left( \frac{L_{K_{T}}}{\mathrm{vol}(\bar{T})} + \frac{L_{K_{\bar{T}}}}{\mathrm{vol}(T)} \right)x \le 1-\kappa . \end{eqnarray*} The dual of this program is given by the following. \begin{eqnarray*} \label{prog:spectral-local-d2A} &\text{maximize} & \alpha - \beta(1-\kappa) \\ \label{prog:spectral-local-d2B} &\text{s.t.} & L_{G} \succeq \alpha L_{K_n} - \beta\left( \frac{L_{K_{T}}}{\mathrm{vol}(\bar{T})} + \frac{L_{K_{\bar{T}}}}{\mathrm{vol}(T)} \right) \\ \label{prog:spectral-local-d2C} & & \beta \ge 0 . \end{eqnarray*} Thus, from the perspective of this dual, \textsf{LocalSpectral} can be viewed as ``embedding'' a combination of a complete graph $K_n$ and a weighted combination of complete graphs on the sets $T$ and $\bar{T}$, i.e., $K_T$ and $K_{\bar{T}}$.
Depending on the value of $\beta$, the latter terms clearly discourage cuts that substantially cut into $T$ or $\bar{T}$, thus encouraging partitions that are well-correlated with the input cut $(T,\bar{T})$. If we can establish a precise connection between the optimization-based \textsf{LocalSpectral} procedure and operational diffusion-based procedures such as the ACL push procedure, then this would provide additional insight as to ``why'' the short local random walks get stuck in small seed sets of nodes. This will be one of the topics for next time. \section{(04/09/2015): Local Spectral Methods (4 of 4): Strongly and weakly locally-biased graph partitioning} Reading for today. \begin{compactitem} \item ``Anti-differentiating Approximation Algorithms: A case study with Min-cuts, Spectral, and Flow,'' in ICML, by Gleich and Mahoney \item ``Think Locally, Act Locally: The Detection of Small, Medium-Sized, and Large Communities in Large Networks,'' in PRE, by Jeub, Balachandran, Porter, Mucha, and Mahoney \end{compactitem} Last time we introduced an objective function (\textsf{LocalSpectral}) that looked like the usual global spectral partitioning problem, except that it had a locality constraint, and we showed that its solution is of the form of a PPR vector. Today, we will do two things. \begin{itemize} \item We will introduce a locally-biased graph partitioning problem, and we will show that the solution to \textsf{LocalSpectral} can be used to compute approximate solutions to that problem. \item We will describe the relationship between this problem and what the strongly-local spectral methods, e.g., the ACL push method, compute. \end{itemize} \subsection{Locally-biased graph partitioning} We start with a definition. \begin{definition}[Locally-biased graph partitioning problem.] \label{def:locally-biased-partitioning} Given a graph $G=(V,E)$, an input node $u \in V$, and a number $k \in \mathbb{Z}^{+}$, find a set of nodes $T \subset V$ s.t.
$$ \phi(u,k) = \min_{T \subset V : u \in T, \mbox{Vol}(T) \le k} \phi(T) , $$ i.e., find the best conductance set of nodes of volume not greater than $k$ that contains the node $u$. \end{definition} That is, rather than look for the best conductance cluster in the entire graph (which we considered before), look instead for the best conductance cluster that contains a specified seed node and that is not too large. Before proceeding, let's state a version of Cheeger's Inequality that applies not just to the leading nontrivial eigenvector of $L$ but instead to \emph{any} ``test vector.'' \begin{theorem} \label{thm:cheeger2} Let $x\in\mathbb{R}^{n}$ s.t. $x^TD \vec{1} = 0$. Then there exists a threshold $t$ such that $S \equiv \mbox{SweepCut}_t(x) \equiv \{ i : x_i \ge t \} $ satisfies $ \frac{x^TLx}{x^TDx} \ge \frac{\phi(S)^2}{8} $. \end{theorem} \textbf{Remark.} This form of Cheeger's Inequality provides additional flexibility in at least two ways. First, if one has computed an approximate Fiedler vector, e.g., by running a random walk many steps but not quite to the asymptotic state, then one can appeal to this result to show that Cheeger-like guarantees hold for that vector, i.e., one can obtain a ``quadratically-good'' approximation to the global conductance objective function using that vector. Alternatively, one can apply this to \emph{any} vector, e.g., a vector obtained by running a random walk just a few steps from a localized seed node. This latter flexibility makes this form of Cheeger's Inequality very useful for establishing bounds with both strongly and weakly local spectral methods. Let's also recall the objective with which we are working; we call it \textsf{LocalSpectral}$(G,s,\kappa)$ or \textsf{LocalSpectral}. Here it is.
\begin{alignat*}{4} &\text{min} & x^T L_{G} x \\ &\text{s.t.} & x^T D_{G} x = 1 \\ & & (x^T D_{G} 1)^2 = 0 \\ & & (x^T D_{G} s )^{2} \geq \kappa \\ & & x \in \mathbb{R}^n \end{alignat*} Let's start with our first result, which says that \textsf{LocalSpectral} is a relaxation of the intractable combinatorial problem that is the locally-biased version of the global spectral partitioning problem (in a manner analogous to how the global spectral partitioning problem is a relaxation of the intractable problem of finding the best conductance partition in the entire graph). More precisely, we can choose the seed set $s$ and correlation parameter $\kappa$ such that $\textsf{LocalSpectral}(G,s,\kappa)$ is a relaxation of the problem defined in Definition~\ref{def:locally-biased-partitioning}. \begin{theorem} \label{thm:relaxation} For $u \in V$, \textsf{LocalSpectral}$(G,v_{\{u\}},1/k)$ is a relaxation of the problem of finding a minimum conductance cut $T$ in $G$ which contains the vertex $u$ and is of volume at most~$k$. In particular, $\lambda(G,v_{\{u\}},1/k) \leq \phi(u,k)$. \end{theorem} \begin{proof} If we let $x=v_{T}$ in \textsf{LocalSpectral}$(G,v_{\{u\}},1/k)$, then $v_{T}^{T}L_{G}v_{T}= \phi(T)$, $v_{T}^{T}D_{G}1=0$, and $v_{T}^{T}D_{G}v_{T}=1$. Moreover, we have that $$ (v_{T}^{T}D_{G}v_{\{u\}})^{2} = \frac{d_{u}(2m-\mathrm{vol}(T))}{\mathrm{vol}(T) (2m-d_{u})} \geq 1/k , $$ which establishes the theorem. \end{proof} Next, let's apply sweep cut rounding to get locally-biased cuts that are quadratically good, thus establishing a locally-biased analogue of the hard direction of Cheeger's Inequality for this problem. In particular, we can apply Theorem~\ref{thm:cheeger2} to the optimal solution for $\textsf{LocalSpectral}(G,v_{\{u\}},1/k)$ and obtain a cut $T$ whose conductance is quadratically close to the optimal value $\lambda(G,v_{\{u\}},1/k)$.
By Theorem~\ref{thm:relaxation}, this implies that $\phi(T) \leq O(\sqrt{\phi(u,k)})$, which essentially establishes the following theorem. \begin{theorem}[Finding a Cut] \label{thm:cut} Given an unweighted graph $G=(V,E)$, a vertex $u \in V$ and a positive integer $k$, we can find a cut in $G$ of conductance at most $O(\sqrt{ \phi(u,k)})$ by computing a sweep cut of the optimal vector for $\textsf{LocalSpectral}(G, v_{\{u\}},1/k)$. \end{theorem} \textbf{Remark.} What this theorem states is that we can perform a sweep cut over the vector that is the solution to $\textsf{LocalSpectral}(G,v_{\{u\}},1/k)$ in order to obtain a locally-biased partition; and that this partition comes with quality-of-approximation guarantees analogous to those provided for the global problem $\textsf{Spectral}(G)$ by Cheeger's inequality. We can also use the optimal value of \textsf{LocalSpectral} to provide lower bounds on the conductance value of other cuts, as a function of how well-correlated they are with the input seed vector $s$. In particular, if the seed vector corresponds to a cut $U$, then we get lower bounds on the conductance of other cuts $T$ in terms of the correlation between $U$ and $T$. \begin{theorem}[Cut Improvement] \label{thm:improve} Let $G$ be a graph and $s \in \mathbb{R}^{n}$ be such that $s^{T}D_{G} 1=0,$ where $D_{G}$ is the degree matrix of $G$. In addition, let $\kappa \geq 0$ be a correlation parameter. Then, for all sets $T \subseteq V$, letting $\kappa' \stackrel{\textup{def}}{=} (s^{T}D_{G}v_{T})^{2}$, we have that \[ \phi(T) \geq \left\{ \begin{array}{ll} \lambda(G,s,\kappa) & \mbox{if $\kappa \leq \kappa'$} \\ \frac{\kappa'}{\kappa} \cdot \lambda(G,s,\kappa) & \mbox{if $\kappa' \leq \kappa$.} \end{array} \right.
\] In particular, if $s=s_{U}$ for some $U \subseteq V,$ then note that $\kappa'= K(U,T).$ \end{theorem} \begin{proof} It follows from the results that we established in the last class that $\lambda(G,s,\kappa)$ is the same as the optimal value of $\textsf{SDP}_{p}(G,s,\kappa)$ which, by strong duality, is the same as the optimal value of $\textsf{SDP}_{d}(G,s,\kappa)$. Let $\alpha^{\star},\beta^{\star}$ be the optimal dual values to $\textsf{SDP}_{d}(G, s,\kappa).$ Then, from the dual feasibility constraint $ L_{G}- \alpha^\star L_{K_{n}} - \beta^\star ( D_G {s})( D_G {s})^T \succeq 0 ,$ it follows that $$ s_{T}^{T}L_{G}s_{T} - \alpha^{\star}s_{T}^{T}L_{K_{n}}s_{T}-\beta^{\star} (s^T D_G s_{T})^{2} \geq 0. $$ Notice that since $ s_{T}^T D_G 1 =0$, it follows that $ s_{T}^{T}L_{K_{n}}s_{T}=s_{T}^{T}D_Gs_{T}=1$. Further, since $s_{T}^{T}L_{G}s_{T}=\phi(T),$ we obtain, if $\kappa \leq \kappa',$ that $$ \phi(T) \geq \alpha^{\star} + \beta^{\star} ( s^T D_G s_{T} )^{2} \geq \alpha^{\star} + \beta^{\star}\kappa = \lambda (G,s,\kappa). $$ If, on the other hand, $\kappa' \leq \kappa,$ then $$ \phi(T) \geq \alpha^{\star} + \beta^{\star} (s^T D_G s_{T}) ^{2} = \alpha^{\star} + \beta^{\star}\kappa' \geq \frac{ \kappa'}{\kappa} \cdot (\alpha^{\star} + \beta^{\star}\kappa) =\frac{ \kappa'}{\kappa} \cdot \lambda (G,s,\kappa) . $$ Finally, observe that if $s=s_{U}$ for some $U \subseteq V,$ then $ (s_{U}^T D_G s_{T} )^{2} =K(U,T).$ Note that strong duality was used here. \end{proof} \textbf{Remark.} We call this result a ``cut improvement'' result since it is the spectral analogue of the flow-based ``cut improvement'' algorithms we mentioned when doing flow-based graph partitioning. \begin{itemize} \item These flow-based cut improvement algorithms were originally used as post-processing algorithms to improve partitions found by other algorithms. For example, GGT, LR (Lang-Rao), and AL (which we mentioned before).
\item They provide guarantees of the form: for any cut $\left(C,\bar{C}\right)$ that is $\epsilon$-correlated with the input cut, the cut output by the cut improvement algorithm has conductance $\le$ some function of the conductance of $\left(C,\bar{C}\right)$ and $\epsilon$. \item Theorem~\ref{thm:improve} shows that, while the cut output by this spectral-based ``improvement'' algorithm might \emph{not} actually be improved relative to the input (as flow-based cut-improvement algorithms are often guaranteed to do), its quality does not decrease too much, and in addition one can make claims about the cut quality of ``nearby'' cuts. \item Although we don't have time to discuss it, these two operations can be viewed as building blocks or ``primitives'' that can be combined in various ways to develop algorithms for other problems, e.g., finding minimum conductance cuts. \end{itemize} \subsection{Relationship between strongly and weakly local spectral methods} So far, we have described two different ways to think about local spectral algorithms. \begin{itemize} \item \textbf{Operational.} This approach provides an algorithm, and one can prove locally-biased Cheeger-like guarantees. The exact statement of these results is quite complex, but the running time of these methods is extremely fast since they don't even need to touch all the nodes of a big~graph. \item \textbf{Optimization.} This approach provides a well-defined optimization objective, and one can prove locally-biased Cheeger-like guarantees. The exact statement of these results is much simpler, but the running time is only moderately fast, since it involves computing eigenvectors or solving linear equations on sparse graphs, and this involves at least touching all the nodes of a big~graph. \end{itemize} An obvious question here is the following. \begin{itemize} \item What is the precise relationship between these two approaches?
\end{itemize} We'll answer this question by considering the weakly-local \textsf{LocalSpectral} optimization problem (that we'll call MOV below) and the PPR-based local spectral algorithm due to ACL (that we'll call ACL below). What we'll show is roughly the following. \begin{itemize} \item If MOV optimizes an $\ell_2$-based penalty, then ACL optimizes an $\ell_1$-regularized version of that $\ell_2$ penalty. \end{itemize} That's interesting since $\ell_1$ regularization is often introduced to enforce or encourage sparsity. Of course, there is no $\ell_1$ regularization in the statement of the strongly local spectral methods like ACL, but clearly they enforce some sort of sparsity, since they don't even touch most of the nodes of a large graph. Thus, this result can be interpreted as providing an implicit regularization characterization of a fast approximation algorithm. \subsection{Setup for implicit $\ell_1$ regularization in strongly local spectral methods} Recall that $L=D-A=B^TCB$, where $B$ is the unweighted edge-incidence matrix. Then \[ \|Bx\|_{C,1} = \sum_{(ij)\in E} C_{(ij)} |x_i-x_j| = \mbox{cut}(S) , \] where $x$ is the indicator vector of the set $S=\{i:x_i=1\}$. In addition, we can obtain a spectral problem by changing $\|\cdot\|_1 \rightarrow \|\cdot\|_2$ to get \[ \|Bx\|_{C,2}^{2} = \sum_{(ij)\in E} C_{(ij)} \left(x_i-x_j\right)^2 . \] Let's consider a specific $(s,t)$-cut problem that is inspired by the AL FlowImprove procedure. To do so, fix a set of vertices (like we did when we did the semi-supervised eigenvector construction), and define a \emph{new} graph that we will call the ``localized cut graph.'' Basically, this new graph will be the original graph augmented with two additional nodes, call them $s$ and $t$, that are connected by weighted edges to the nodes of the original graph. Here is the definition.
\begin{definition}[localized cut graph] Let $G = (V,E)$ be a graph, let $S$ be a set of vertices, possibly empty, let $\bar{S}$ be the complement set, and let $\alpha$ be a non-negative constant. Then the \emph{localized cut graph} is the weighted, undirected graph with adjacency matrix: \[ A_S = \left[ \begin{array}{ccc} 0 & \alpha d_S^T & 0 \\ \alpha d_S & A & \alpha d_{\bar{S}} \\ 0 & \alpha d_{\bar{S}}^T & 0 \end{array} \right] \] where $d_S = D e_S$ is a degree vector localized on the set $S$, $A$ is the adjacency matrix of the original graph $G$, and $\alpha \ge 0$ is a non-negative weight. Note that the first vertex is $s$ and the last vertex is $t$. \end{definition} We'll use the parameters $\alpha$ and $S$ to denote the matrices for the localized cut graph. For example, the \emph{incidence matrix} $B(S)$ of the localized cut graph, which depends on the set $S$, is given by the~following. \[ B(S) = \left[ \begin{array}{ccc} e & -I_S & 0 \\ 0 & B & 0 \\ 0 & -I_{\bar{S}} & e \end{array} \right] , \] where, recall, $I_S$ denotes the columns of the identity matrix corresponding to vertices in $S$. The edge-weights of the localized cut graph are given by the diagonal matrix $C(\alpha)$, which depends on the value $\alpha$. Given this, recall that the $1$-norm formulation of the LP for the min-$s,t$-cut problem, i.e., the minimum weighted $s,t$ cut in the flow graph, is given by the following. \begin{alignat*}{4} &\text{min} & \|B(S)x\|_{C(\alpha),1} \\ &\text{s.t.} &x_s=1 , x_t=0 , x \ge 0 . \end{alignat*} Here is a theorem that shows that PageRank implicitly solves a $2$-norm variation of the $1$-norm formulation of the $s,t$-cut problem. \begin{theorem} \label{thm:antiderivative1} Let $B(S)$ be the incidence matrix for the localized cut graph, and $C(\alpha)$ be the edge-weight matrix.
The PageRank vector $z$ that solves \[ (\alpha D + L) z = \alpha v \] with $v = d_{S}/\mathrm{vol}(S)$ is a renormalized solution of the 2-norm cut computation: \begin{eqnarray} \label{eq:pr-cut} \text{min} & \|B(S)x\|_{C(\alpha),2} \\ \nonumber \text{s.t.} & x_s=1 , x_t=0 . \end{eqnarray} Specifically, if $x(\alpha,S)$ is the solution of Prob.~\eqref{eq:pr-cut}, then \[ x(\alpha,S) = \left[ \begin{array}{c} 1 \\ \mathrm{vol}(S) z \\ 0 \end{array} \right] . \] \end{theorem} \begin{proof} The key idea is that the 2-norm problem corresponds with a quadratic objective, which PageRank solves. The quadratic objective for the 2-norm approximate cut is: \begin{eqnarray*} \| B(S) x \|_{C(\alpha),2}^2 &=& x^T B(S)^T C(\alpha) B(S) x \\ &=& x^T \left[ \begin{array}{ccc} \alpha \mbox{vol}(S) & -\alpha d_S^T & 0 \\ -\alpha d_S & L + \alpha D & -\alpha d_{\bar{S}} \\ 0 & -\alpha d_{\bar{S}}^T & \alpha \mbox{vol}(\bar{S}) \end{array} \right] x. \end{eqnarray*} If we apply the constraints that $x_s = 1$ and $x_t = 0$ and let $x_G$ be the free set of variables, then we arrive at the unconstrained objective: \begin{eqnarray*} & & \hspace{-70mm} \left[ \begin{array}{ccc} 1 & x_G^T & 0 \end{array} \right] \left[ \begin{array}{ccc} \alpha \mbox{vol}(S) & -\alpha d_S^T & 0 \\ -\alpha d_S & L + \alpha D & -\alpha d_{\bar{S}} \\ 0 & -\alpha d_{\bar{S}}^T & \alpha \mbox{vol}(\bar{S}) \end{array} \right] \left[ \begin{array}{c} 1 \\ x_G \\ 0 \end{array} \right] \\ \hspace{+50mm} &=& x_G^T (L + \alpha D) x_G - 2 \alpha x_G^T d_{S} + \alpha \mbox{vol}(S). \end{eqnarray*} Here, the solution $x_G$ solves the linear system \[ (\alpha D + L) x_G = \alpha d_{S}. \] The vector $x_G = \mathrm{vol}(S) z$, where $z$ is the solution of the PageRank problem defined in the theorem, which concludes the proof. \end{proof} Theorem~\ref{thm:antiderivative1} essentially says that for each PR problem, there is a related cut/flow problem that ``gives rise'' to it.
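To see this correspondence concretely, here is a small numerical sketch (in Python/NumPy; the graph, seed set, and value of $\alpha$ are arbitrary illustrative choices, not from the notes). It assembles the Laplacian $B(S)^T C(\alpha) B(S)$ of the localized cut graph, solves the constrained 2-norm cut problem by eliminating the coordinates fixed at $x_s = 1$ and $x_t = 0$, and checks the claimed renormalization against the PageRank-style system $(\alpha D + L) z = \alpha d_S/\mathrm{vol}(S)$.

```python
import numpy as np

# A small, arbitrary undirected graph (illustrative only).
A = np.array([[0., 1., 1., 0., 0.],
              [1., 0., 1., 0., 0.],
              [1., 1., 0., 1., 0.],
              [0., 0., 1., 0., 1.],
              [0., 0., 0., 1., 0.]])
d = A.sum(axis=1)
D = np.diag(d)
L = D - A
n = len(d)

alpha = 0.25
S = [0, 1]                       # seed set
e_S = np.zeros(n); e_S[S] = 1.0
d_S = d * e_S                    # degree vector localized on S
d_Sbar = d - d_S
vol_S = d_S.sum()

# Laplacian of the localized cut graph, vertex order (s, V, t);
# this is B(S)^T C(alpha) B(S) from the proof above.
L_S = np.zeros((n + 2, n + 2))
L_S[0, 0] = alpha * vol_S
L_S[0, 1:n+1] = -alpha * d_S
L_S[1:n+1, 0] = -alpha * d_S
L_S[1:n+1, 1:n+1] = L + alpha * D
L_S[n+1, 1:n+1] = -alpha * d_Sbar
L_S[1:n+1, n+1] = -alpha * d_Sbar
L_S[n+1, n+1] = alpha * d_Sbar.sum()

# Minimize x^T L_S x subject to x_s = 1, x_t = 0: stationarity on the
# free block gives (L + alpha*D) x_G = alpha * d_S.
x_G = np.linalg.solve(L_S[1:n+1, 1:n+1], -L_S[1:n+1, 0])

# PageRank-style system (alpha*D + L) z = alpha*v with v = d_S / vol(S).
z = np.linalg.solve(alpha * D + L, alpha * d_S / vol_S)

print(np.allclose(x_G, vol_S * z))   # True: x(alpha, S) = [1; vol(S) z; 0]
```

Note that the equality $x_G = \mathrm{vol}(S)\, z$ holds here by linearity of the solve, which is exactly the renormalization claim in the theorem.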
One can also establish the reverse relationship that extracts a cut/flow problem from \emph{any} PageRank problem. To show this, first note that the proof of Theorem~\ref{thm:antiderivative1} works since the edges we added had weights proportional to the degree of the node, and hence the increase to the degree of the nodes was proportional to their current degree. This causes the diagonal of the Laplacian matrix of the localized cut graph to become $\alpha D + D$. This idea forms the basis of our subsequent analysis. For a general PageRank problem, however, we require a slightly more general definition of the localized cut graph, which we call a \emph{PageRank cut graph}. Here is the definition. \begin{definition} Let $G = (V,E)$ be a graph, and let $s \ge 0$ be a vector such that $d - s \ge 0$. Let $s$ connect to each node in $G$ with weights given by the vector $\alpha s$, and let $t$ connect to each node in $G$ with weights given by $\alpha (d - s)$. Then the \emph{PageRank cut graph} is the weighted, undirected graph with adjacency matrix: \[ A(s) = \left[ \begin{array}{ccc} 0 & \alpha s^T & 0 \\ \alpha s & A & \alpha (d - s) \\ 0 & \alpha (d - s)^T & 0 \end{array} \right] . \] \end{definition} We use $B(s)$ to refer to the incidence matrix of this PageRank cut graph. Note that if $s=d_S$, then this is simply the original construction. With this, we state the following theorem, which is a sort of converse to Theorem~\ref{thm:antiderivative1}. The proof is similar to that of Theorem~\ref{thm:antiderivative1} and so it is omitted. \begin{theorem} \label{thm:antiderivative1converse} Consider any PageRank problem that fits the framework of \[ (I - \beta P^T) x = (1-\beta) v . \] The PageRank vector $z$ that solves \[ ( \alpha D + L) z = \alpha v \] is a renormalized solution of the 2-norm cut computation: \begin{eqnarray} \label{eq:pr-unif} \text{min} & \| B(s) x \|_{C(\alpha),2} \\ \nonumber & x_s = 1, x_t = 0 \end{eqnarray} with $s = v$. 
Specifically, if $x(\alpha,s)$ is the solution of the 2-norm cut, then \[ x(\alpha,s) = \left[ \begin{array}{c} 1 \\ z \\ 0 \end{array} \right] . \] \end{theorem} Two things are worth noting about this result. \begin{itemize} \item A corollary of this result is the following: if $s = e$, then the solution of a 2-norm cut is a reweighted, renormalized solution of PageRank with $v = e/n$. That is, as a corollary of this approach, the \emph{standard} PageRank problem with $v = e/n$ gives rise to a cut problem where $s$ connects to each node with weight $\alpha$ and $t$ connects to each node $v$ with weight $\alpha(d_v - 1)$. \item This also holds for the semi-supervised learning results we discussed. In particular, e.g., the procedure of Zhou et al. for semi-supervised learning on graphs computes the following: \[ ( I - \beta D^{-1/2}AD^{-1/2})^{-1} Y . \] (The other procedures solve a very similar problem.) This is exactly a PageRank equation for a degree-based scaling of the labels, and thus the construction from Theorem~\ref{thm:antiderivative1converse} is directly applicable. \end{itemize} \subsection{Implicit $\ell_1$ regularization in strongly local spectral methods} In light of these results, let's now move on to the ACL procedure. We will show a connection between it and an $\ell_1$ regularized version of an $\ell_2$ objective, as established in Theorem~\ref{thm:antiderivative1converse}. In particular, we will show that the ACL procedure for \emph{approximating} a PPR vector \emph{exactly} computes a hybrid $1$-norm/$2$-norm variant of the min-cut problem. The balance between these two terms (the $\ell_2$ term from Problem~\ref{eq:pr-unif} and an additional $\ell_1$ term) has the effect of producing sparse PageRank solutions that also have sparse truncated residuals, and it also provides an interesting connection with $\ell_1$-regularized $\ell_2$-regression problems.
We start by reviewing the ACL method and describing it in such a way as to make these connections easier to establish. Consider the problem $(I - \beta A D^{-1}) x = (1-\beta) v$, where $v = e_i$ is localized onto a single node. In addition to the PageRank parameter $\beta$, the procedure has two parameters: $\tau > 0$ is an accuracy parameter that determines when to stop, and $0 < \rho \le 1$ is an additional approximation term that we introduce. As $\tau \to 0$, the computed solution $x$ goes to the PPR vector that is non-zero everywhere. The value of $\rho$ has been $1/2$ in most previous implementations of the procedure; and here we present a modified procedure that makes the effect of $\rho$ explicit. \begin{enumerate} \item $x^{(1)} = 0, r^{(1)} = (1-\beta) e_i $, $k = 1$ \item \emph{while} any $r_j > \tau d_j$ \qquad \emph{(where $d_j$ is the degree of node $j$)} \item \hspace{1em} $x^{(k+1)} = x^{(k)} + (r_j - \tau d_j \rho) e_j$ \item \hspace{1em} $r^{(k+1)}_i = \begin{cases} \tau d_j \rho & i = j \\ r^{(k)}_i + \beta (r_j - \tau d_j \rho) / d_j & i \sim j \\ r^{(k)}_i & \text{otherwise} \end{cases}$ \item \hspace{1em} $k \leftarrow k + 1$ \end{enumerate} As we have noted previously, one of the important properties of this procedure is that the algorithm maintains the invariant $r = (1-\beta) v - (I - \beta A D^{-1}) x $ throughout. For any $0 \le \rho \le 1$, this algorithm converges because the sum of entries in the residual always decreases monotonically. At the solution we will have \[ 0 \le r \le \tau d, \] which provides an $\infty$-norm style worst-case \emph{approximation} guarantee to the exact PageRank solution. Consider the following theorem.
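As an aside, the push steps above are simple enough to sketch directly in code. Here is a minimal Python/NumPy sketch (the graph, seed, and parameter values are arbitrary illustrative choices); it maintains the invariant $r = (1-\beta)v - (I - \beta A D^{-1})x$ and stops once $0 \le r \le \tau d$.

```python
import numpy as np

def acl_push(A, seed, beta=0.85, tau=1e-3, rho=0.5):
    """Sketch of the ACL push procedure for approximating the PPR solve
    (I - beta*A*D^{-1}) x = (1 - beta) e_seed.  Returns (x, r), where the
    invariant r = (1-beta) e_seed - (I - beta*A*D^{-1}) x is maintained
    throughout, and 0 <= r <= tau*d at termination."""
    n = A.shape[0]
    d = A.sum(axis=1)
    x, r = np.zeros(n), np.zeros(n)
    r[seed] = 1.0 - beta
    queue = [seed]
    while queue:
        j = queue.pop()
        if r[j] <= tau * d[j]:       # stale queue entry; nothing to push
            continue
        m = r[j] - tau * d[j] * rho  # mass moved from residual to solution
        x[j] += m
        r[j] = tau * d[j] * rho
        for i in np.nonzero(A[j])[0]:           # neighbors i ~ j
            r[i] += beta * m * A[j, i] / d[j]
            if r[i] > tau * d[i]:
                queue.append(i)
    return x, r

# Illustrative use on a small 4-cycle.
A = np.array([[0., 1., 0., 1.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [1., 0., 1., 0.]])
d = A.sum(axis=1)
beta, tau = 0.85, 1e-3
x, r = acl_push(A, seed=0, beta=beta, tau=tau)
v = np.array([1., 0., 0., 0.])
# The invariant and the stopping condition hold at the returned solution.
assert np.allclose(r, (1 - beta) * v - (x - beta * A @ (x / d)))
assert np.all(r >= -1e-12) and np.all(r <= tau * d + 1e-12)
```

Each push removes $(1-\beta)$ of the pushed mass from the total residual, which is why the procedure terminates for any $0 \le \rho \le 1$.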
In the same way that Theorem~\ref{thm:antiderivative1converse} establishes that a PageRank vector can be interpreted as optimizing an $\ell_2$ objective involving the edge-incidence matrix, the following theorem establishes that, in the case that $\rho = 1$, the ACL procedure to approximate this vector can be interpreted as solving an $\ell_1$-regularized $\ell_2$ objective. That is, in addition to \emph{approximating} the solution to the objective function that is optimized by the PPR, this algorithm also \emph{exactly} computes the solution to an $\ell_1$ regularized version of the same objective. \begin{theorem} \label{thm:implicitL1Reg} Fix a subset of vertices $S$. Let $x$ be the output from the ACL procedure with $\rho = 1$, $0 < \beta < 1$, $v = d_{S}/\mathrm{vol}(S)$, and $\tau$ fixed. Set $\alpha = \frac{1-\beta}{\beta}$, $\kappa = \tau \mathrm{vol}(S) / \beta$, and let $z_G$ be the solution on graph vertices of the sparsity-regularized cut problem: \begin{eqnarray} \label{eq:acl-pr} \text{min} & \frac{1}{2}\| B(s) z \|_{C(\alpha),2}^{2} + \kappa \| Dz \|_{1} \\ \nonumber \text{s.t.} & z_s = 1, z_t = 0, z \ge 0 , \end{eqnarray} where $z = \left[\begin{array}{c} 1 \\ z_G \\ 0 \end{array}\right]$ as above. Then $x = D z_G/\mbox{vol}(S)$. \end{theorem} \begin{proof} If we expand the objective function and apply the constraint $z_s = 1, z_t = 0$, then Prob.~\eqref{eq:acl-pr} becomes: \begin{eqnarray} \text{min} & \frac{1}{2} z_G^T (\alpha D + L) z_G - \alpha z_G^T d_{S} + \frac{\alpha}{2} \mbox{vol}(S) + \kappa d^Tz_G \\ \nonumber \text{s.t.} & z_G \ge 0 \end{eqnarray} Consider the optimality conditions of this quadratic problem (where $s$ is the vector of Lagrange multipliers): \[ \begin{aligned} 0 & = (\alpha D + L) z_G - \alpha d_{S} + \kappa d - s \\ s & \ge 0 \\ z_G & \ge 0 \\ z_G^T s& = 0. \end{aligned} \] These are both necessary and sufficient because $(\alpha D + L)$ is positive definite. In addition, and for the same reason, the solution is unique.
In the remainder of the proof, we demonstrate that the vector $x$ produced by the ACL method satisfies these conditions. To do so, we first translate the optimality conditions to the equivalent PageRank normalization: \[ \begin{aligned} 0 & = (I - \beta A D^{-1}) D z_G/\mbox{vol}(S) - (1-\beta) d_{S} / \mbox{vol}(S) + \beta \kappa / \mbox{vol}(S) d - \beta s / \mbox{vol}(S) \\ s & \ge 0 \qquad z_G \ge 0 \qquad z_G^T s = 0. \end{aligned} \] When the ACL procedure finishes with $\beta$, $\rho$, and $\tau$ as in the theorem, the vectors $x$ and $r$ satisfy: \[ \begin{aligned} r & = (1-\beta) v - (I - \beta A D^{-1}) x \\ x & \ge 0 \\ 0 & \le r \le \tau d = \beta \kappa / \mbox{vol}(S) d. \end{aligned} \] Thus, if we set $s$ such that $ \beta s / \mbox{vol}(S) = \beta \kappa / \mbox{vol}(S) d - r $, then we satisfy the first condition with $x = D z_G/\mbox{vol}(S)$. All of these transformations preserve $x \ge 0$ and $z_G \ge 0$. Also, because $\tau d \ge r$, we also have $s \ge 0$. What remains to be shown is $z_G^T s = 0$. Here, we show $x^T (\tau d - r) = 0$, which is equivalent to the condition $z_G^T s = 0$ because the non-zero structure of the vectors is identical. Matching the non-zero structure suffices because $z_G^T s = 0$ is equivalent to having either $x_i = 0$ or $\tau d_i - r_i = 0$ (or both) for all $i$. If $x_i \not= 0$, then at some point in the execution, the vertex $i$ was chosen at the step $r_j > \tau d_j$. In that iteration, we set $r_i = \tau d_i$. If any later step increments $r_i$ above the threshold, we must revisit vertex $i$ and set $r_i = \tau d_i$ again. Thus, at the solution, $x_i \not= 0$ requires $r_i = \tau d_i$. For such a component, $s_i = 0$, using the definition above. For $x_i = 0$, the value of $s_i$ is irrelevant, and thus we have $x^T (\tau d - r) = 0$. \end{proof} \textbf{Remark.} Finally, a comment about $\rho$, which is set to $1$ in this theorem but equals $1/2$ in most prior uses of the ACL push method.
The proof of Theorem~\ref{thm:implicitL1Reg} makes the role of $\rho$ clear. If $\rho < 1$, then the output from ACL is \emph{not} equivalent to the solution of Prob.~\eqref{eq:acl-pr}, i.e., the renormalized solution will \emph{not} satisfy $z_G^T s = 0$; setting $\rho < 1$, however, \emph{will} compute a solution much more rapidly. It is a nice open problem to get a clean statement of implicit regularization when $\rho < 1$. \section{(04/14/2015): Some Statistical Inference Issues (1 of 3): Introduction and Overview} Reading for today. \begin{compactitem} \item ``Towards a theoretical foundation for Laplacian-based manifold methods,'' in JCSS, by Belkin and Niyogi \end{compactitem} \subsection{Overview of some statistical inference issues} So far, most of what we have been doing on spectral methods has focused on various sorts of algorithms---often but not necessarily worst-case algorithms. That is, there has been a bias toward algorithms that are more rather than less well-motivated statistically---but there hasn't been a lot of statistical emphasis \emph{per se}. Instead, most of the statistical arguments have been informal and by analogy, e.g., if the data are nice, then one should obtain some sort of smoothness, and Laplacians achieve that in a certain sense; or diffusions on graphs should look like diffusions on low-dimensional spaces or a complete graph; or diffusions are robust analogues of eigenvectors, which we illustrated in several ways; and so on. Now, we will spend a few classes trying to make this statistical connection a little more precise. As you can imagine, this is a large area, and we will only be able to scratch the surface, but we will try to give an idea of the space, as well as some of the gotchas of naively applying existing statistical or algorithmic methods here---so think of this as pointing to lots of interesting open questions about how to do statistically-principled large-scale computing, rather than the final word on the topic.
From a statistical perspective, many of the issues that arise are somewhat different than much of what we have been considering. \begin{itemize} \item Computation is much less important (but perhaps it should be much more so). \item Typically, one has some sort of model (usually explicit, but sometimes implicit, as we saw with the statistical characterization of the implicit regularization of diffusion-based methods), and one wants to compute something that is optimal for that model. \item In this case, one might want to show things like convergence or consistency (basically, that what is being computed on the empirical data converges to the answer that is expected, as the number of data points $n \rightarrow \infty$). \end{itemize} For spectral methods, at a high level, there are basically two types of reference states or classes of models that are commonly-used: one is with respect to some sort of very low-dimensional space; and the other is with respect to some sort of random graph model. \begin{itemize} \item \textbf{Low-dimensional spaces.} In the simplest case, this is a line; more generally, this is a low-dimensional linear subspace; and, more generally, this is a low-dimensional manifold. Informally, one should think of low-dimensional manifolds in this context as basically low-dimensional spaces that are curved a bit; or, relatedly, that the data are low-dimensional, but perhaps not in the original representation. This manifold perspective provides added descriptive flexibility, and it permits one to take advantage of connections between the geometry of continuous spaces and graphs (which in very special cases are a discretization of those continuous spaces). \item \textbf{Random graphs.} In the simplest case, this is simply the $G_{nm}$ or $G_{np}$ Erdos-Renyi (ER) random graph. More generally, one is interested in finding clusters, and so one works with the stochastic blockmodel (which can be thought of as a bunch of ER graphs pasted together).
Of course, there are many other extensions of basic random graph models, e.g., to include degree variability, latent factors, multiple cluster types, etc. \end{itemize} These two places provide two simple reference states for statistical claims about spectral graph methods; and the types of guarantees one obtains are somewhat different, depending on which of these reference states is assumed. Interestingly (and, perhaps, not surprisingly), these two places have a direct connection with the two complementary places (line graphs and expanders) into which spectral methods implicitly embed the data. In both of these cases, one looks for theorems of the form: ``If the data are drawn from this place and things are extremely nice (e.g., lots of data and not too much noise) then good things happen (e.g., finding the leading vector, recovering hypothesized clusters, etc.) if you run a spectral method.'' We will cover several examples of this. A real challenge arises when you have realistic noise and sparsity properties in the data, and this is a topic of ongoing research. As just alluded to, another issue that arises is that one needs to specify not only the hypothesized statistical model (some type of low-dimensional manifold or some type of random graph model here) but also one needs to specify exactly what is the problem one wants to solve. Here are several~examples. \begin{itemize} \item One can ask to recover the objective function value of the objective you write down. \item One can ask to recover the leading nontrivial eigenvector of the data. \item One can ask to converge to the Laplacian of the hypothesized model. \item One can ask to find clusters that are present in the hypothesized model. \end{itemize} The first bullet above is most like what we have been discussing so far. In most cases, however, people want to use the solution to that objective for something else, and the other bullets are examples of that.
Typically in these cases one is asking for a lot more than the objective function value, e.g., one wants to recover the ``certificate'' or actual solution vector achieving the optimum, or some function of it like the clusters that are found by sweeping along it, and so one needs stronger assumptions. Importantly, many of the convergence and statistical issues are quite different, depending on the exact problem being considered. \begin{itemize} \item \textbf{Today}, we will assume that the data points are drawn from a low-dimensional manifold and that from the empirical point cloud of data we construct an empirical graph Laplacian; and we will ask how this empirical Laplacian relates to the Laplacian operator on the manifold. \item \textbf{Next time}, we will ask whether spectral clustering is consistent in the sense that it converges to something meaningful as $n \rightarrow \infty$, and we will provide sufficient conditions for this (and we will see that the seemingly-minor details of the differences between unnormalized spectral clustering and normalized spectral clustering lead to very different statistical results). \item \textbf{On the day after that}, we will consider results for spectral clustering in the stochastic blockmodel, for both vanilla situations as well as for situations in which the data are very~sparse. \end{itemize} \subsection{Introduction to manifold issues} Manifold-based ML is an area that has received a lot of attention recently, but for what we will discuss today one should think back to the discussion we had of Laplacian Eigenmaps. At root, this method defines a set of features that can then be used for various tasks such as data set parametrization, clustering, classification, etc. Often the features are useful, but sometimes they are not; here are several examples of when the features developed by LE and related methods are often less than useful.
\begin{itemize} \item \textbf{Global eigenvectors are localized.} In this case, ``slowly-varying'' functions (by the usual precise definition) are not so slowly-varying (in a sense that most people would find intuitive). \item \textbf{Global eigenvectors are not useful.} This may arise if one is interested in a small local part of the graph and if information of interest is not well-correlated with the leading or with any eigenvector. \item \textbf{Data are not meaningfully low-dimensional.} Even if one believes that there is some sort of hypothesized curved low-dimensional space, there may not be a small number of eigenvectors that capture most of this information. (This does \emph{not} necessarily mean that the data are ``high rank,'' since it is possible that the spectrum decays, just very slowly.) This is more common for very sparse and noisy data, which are of course very common. \end{itemize} Note that the locally-biased learning methods we described, e.g., the \textsf{LocalSpectral} procedure, the PPR procedure, etc., were motivated by common situations in which global methods such as LE and related methods have challenges. While it may be fine to have a ``feature generation machine,'' most people prefer some sort of theoretical justification that says when a method works in some idealized situation. To that end, many of the methods like LE assume that the data are drawn from some sort of low-dimensional manifold. Today, we will talk about one statistical aspect of that having to do with converging to the manifold. To start, here is a simple version of the ``manifold story'' for a classification problem. Consider a $2$-class classification problem with classes $C_1$ and $C_2$, where the data elements are drawn from some space $\mathcal{X}$, whose elements are to be classified.
A statistical or probabilistic model typically includes the following two ingredients: a probability density $p(x)$ on $\mathcal{X}$; and class densities $\{ p \left( C_i | x \in \mathcal{X} \right) \}$, for $i\in\{1,2\}$. Importantly, if there are unlabeled data, then the unlabeled data don't tell us much about the conditional class distributions, as we can't identify classes without labels, but the unlabeled data can help us to improve our estimate of the probability distribution $p(x)$. That is, the unlabeled data tell us about $p(x)$, and the labeled data tell us about $\{ p \left( C_i | x \in \mathcal{X} \right) \}$. If we say that the data come from a low-dimensional manifold $\mathcal{X}$, then a natural geometric object to consider is the Laplace-Beltrami operator on $\mathcal{X}$. In particular, let $\mathcal{M} \subset \mathbb{R}^{k}$ be an $n$-dimensional compact manifold isometrically embedded in $\mathbb{R}^{k}$. (Think of this as an $n$-dimensional ``surface'' in $\mathbb{R}^{k}$.) The Riemannian structure on $\mathcal{M}$ induces a volume form that allows us to integrate functions defined on $\mathcal{M}$. The square-integrable functions form a Hilbert space $\mathcal{L}^2\left(\mathcal{M}\right)$. Let $C^{\infty}\left(\mathcal{M}\right)$ be the space of infinitely-differentiable functions on $\mathcal{M}$. Then, the Laplace-Beltrami operator is a second-order differential operator $\Delta_{\mathcal{M}} : C^{\infty}\left(\mathcal{M}\right) \rightarrow C^{\infty}\left(\mathcal{M}\right)$. We will define this in more detail below; for now, just note that if the manifold is $\mathbb{R}^{n}$, then the Laplace-Beltrami operator is $\Delta = - \sum_{i=1}^{n} \frac{\partial^2}{\partial x_i^2}$. There are two important properties of the Laplace-Beltrami operator. \begin{itemize} \item \textbf{It provides a basis for $\mathcal{L}^2\left(\mathcal{M}\right)$.} In general, $\Delta$ is a PSD self-adjoint operator (w.r.t.
the $\mathcal{L}^2$ inner product) on twice differentiable functions. In addition, if $\mathcal{M}$ is a \emph{compact} manifold, then $\Delta$ has a discrete spectrum, the smallest eigenvalue of $\Delta$ equals $0$ and the associated eigenfunction is the constant eigenfunction, and the eigenfunctions of $\Delta$ provide an orthonormal basis for the Hilbert space $\mathcal{L}^2\left(\mathcal{M}\right)$. In that case, any function $f \in \mathcal{L}^2\left(\mathcal{M}\right)$ can be written as $f(x) = \sum_{i=1}^{\infty} a_i e_i(x)$, where $e_i$ are the eigenfunctions of $\Delta$, i.e., where $\Delta e_i = \lambda_i e_i$. In this case, the simplest model for the classification problem is that the class membership is a square-integrable function, call it $m: \mathcal{M}\rightarrow\{-1,+1\}$, in which case the classification problem can be interpreted as interpolating a function on the manifold. Then we can choose the coefficients to get an optimal fit, $m(x) = \sum_{i=1}^{n} a_i e_i(x)$, in the same way as we might approximate a signal with a Fourier series. (In fact, if $\mathcal{M}$ is the unit circle, call it $S^1$, then $\Delta_{S^1} f(\theta) = -\frac{d^2f(\theta)}{d \theta^2}$, and the eigenfunctions are sinusoids with eigenvalues $\{1^2,2^2,\ldots\}$, and we get the usual Fourier series.) \item \textbf{It provides a smoothness functional.} Recall that a simple measure of the degree of smoothness for a function $f$ on the unit circle $S^1$ is $$ S(f) = \int_{S^1} | f^{\prime}(\theta)|^2 d\theta . $$ In particular, $f$ is smooth if this quantity is small. If we take this expression and integrate by parts, then we get $$ S(f) = \int_{S^1} | f^{\prime}(\theta)|^2 d\theta = \int_{S^1} f \Delta f d\theta = \langle \Delta f,f \rangle_{\mathcal{L}^2\left(S^1\right)} .
$$ More generally, if $f : \mathcal{M} \rightarrow \mathbb{R}$, then it follows that $$ S(f) = \int_{\mathcal{M}} | \nabla f |^2 d\mu = \int_{\mathcal{M}} f \Delta f d\mu = \langle \Delta f,f \rangle_{\mathcal{L}^2\left(\mathcal{M}\right)} . $$ So, in particular, the smoothness of the eigenfunctions is controlled by the eigenvalues, i.e., $$ S(e_i) = \langle \Delta e_i , e_i \rangle_{\mathcal{L}^2\left(\mathcal{M}\right)} = \lambda_i , $$ and for arbitrary $f$ that can be expressed as $f = \sum_i \alpha_i e_i$, we have that $$ S(f) = \langle \Delta f , f \rangle = \left\langle \sum_i \alpha_i \Delta e_i , \sum_i \alpha_i e_i \right\rangle = \sum_i \lambda_i \alpha_i^2 . $$ (So, in particular, approximating a function $f$ by its first $k$ eigenfunctions is a way to control the smoothness of the approximation; and the linear subspace of functions on which the smoothness functional is finite is a RKHS.) \end{itemize} This has strong connections with a range of RKHS problems. (Recall that a RKHS is a Hilbert space of functions where the evaluation functionals, the functionals that evaluate functions at a point, are bounded linear functionals.) Since the Laplace-Beltrami operator on $\mathcal{M}$ can be used to provide a basis for $\mathcal{L}^2\left(\mathcal{M}\right)$, we can take various classes of functions that are defined on the manifold and solve problems of the form \begin{equation} \min_{f \in H} \sum_i \left( y_i - f(x_i) \right)^2 + \lambda G(f) , \label{eqn:reg-opt} \end{equation} where $H$ is a space of functions $f : \mathcal{M} \rightarrow \mathbb{R}$. In general, the first term is the empirical risk, and the second term is a stabilizer or regularization term. As an example, one could choose $G(f) = \int_{\mathcal{M}} \langle \nabla f, \nabla f \rangle = \sum_i \alpha_i^2 \lambda_i $ (since $f = \sum_i \alpha_i e_i(x)$), and $H = \{ f = \sum_i \alpha_i e_i | G(f) < \infty \}$, in which case one gets an optimization problem that is quadratic in the $\alpha$ variables.
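To make the last point concrete, here is a small numerical sketch (an added illustration, not from the original notes) of solving the regularized problem of Eqn.~(\ref{eqn:reg-opt}) in an eigenfunction basis. The discretized circle, the truncated basis of sinusoids with eigenvalues $j^2$, and all parameter values below are illustrative assumptions; the point is that the problem is a small quadratic in the $\alpha$ variables with a closed-form solution.

```python
import numpy as np

# Sketch: regularized least-squares fit in an eigenfunction basis, with
# penalty G(f) = sum_i lambda_i alpha_i^2.  We discretize the unit circle,
# so the basis functions are sinusoids and the eigenvalues are j^2, as in
# the Fourier-series example above.  All parameters are illustrative.
rng = np.random.default_rng(0)
n, k, lam = 200, 9, 1e-2
theta = np.linspace(0, 2 * np.pi, n, endpoint=False)

# Orthonormal basis on the discretized circle: constant, sin, cos, ...
E = np.ones((n, k)) / np.sqrt(n)
eigs = np.zeros(k)
for j in range(1, (k + 1) // 2 + 1):
    if 2 * j - 1 < k:
        E[:, 2 * j - 1] = np.sin(j * theta) * np.sqrt(2.0 / n)
        eigs[2 * j - 1] = j ** 2
    if 2 * j < k:
        E[:, 2 * j] = np.cos(j * theta) * np.sqrt(2.0 / n)
        eigs[2 * j] = j ** 2

# Noisy samples y_i of a smooth target function on the circle.
y = np.sin(theta) + 0.1 * rng.standard_normal(n)

# Quadratic in alpha: minimize ||y - E alpha||^2 + lam * alpha' Lambda alpha.
alpha = np.linalg.solve(E.T @ E + lam * np.diag(eigs), E.T @ y)
f_hat = E @ alpha
print(np.max(np.abs(f_hat - np.sin(theta))))   # small reconstruction error
```

Note that the penalty shrinks high-frequency coefficients the most, since their eigenvalues $j^2$ are largest; this is the smoothness control described above.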
As an aside that is relevant to what we discussed last week with the heat kernel, let's go through the construction of a RKHS that is invariantly defined on the manifold $\mathcal{M}$. To do so, let's fix an infinite sequence of non-negative numbers $\{ \mu_i | i \in \mathbb{Z}^{+} \}$ s.t. $\sum_i \mu_i < \infty$ (as we will consider in the examples below). Then, define the following linear space of continuous functions $$ H = \left\{ f = \sum_i \alpha_i e_i | \sum_i \frac{\alpha_i^2}{\mu_i} < \infty \right\} . $$ Then, we can define the inner product as: for $f = \sum_i \alpha_i e_i$ and $g = \sum_i \beta_i e_i$, we have $\left\langle f,g \right\rangle = \sum_i \frac{\alpha_i \beta_i}{ \mu_i} $. Then, $H$ is a RKHS with the following kernel: $K(p,q) = \sum_i \mu_i e_i(p)e_i(q)$. Then, given this, we can solve regularized optimization problems of the form given in Eqn.~(\ref{eqn:reg-opt}) above. In addition, we can get other choices of kernels by using different choices of $\mu$ vectors. For example, if we let $\mu_i = e^{-t\lambda_i}$, where $\lambda_i$ are the eigenvalues of $\Delta$, then we get the heat kernel corresponding to heat diffusion on the manifold; if we let $\mu_i=0$, for all $i > i^{*}$, then we are solving an optimization problem in a finite dimensional space; and so on. All of this discussion has been for data drawn from a hypothesized manifold $\mathcal{M}$. Since we are interested in a smoothness measure for functions on a graph, then if we think of the graph as a model for the manifold, then we want the value of a function not to change too much between adjacent points. In that case, we get $$ S_G(f) = \sum_{i \sim j} W_{ij} \left( f_i - f_j \right)^2 , $$ and it can be shown that $$ S_G(f) = f^T L f = \langle f,Lf \rangle_G = \sum_{i=1}^{n} \lambda_i \langle f,e_i \rangle_G^2 . $$ Of course, this is the discrete object with which we have been working all along.
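As a quick numerical check of the discrete smoothness identity just stated, here is a sketch (an added illustration; the random weighted graph and seed are arbitrary assumptions) verifying that the edge sum, the quadratic form, and the spectral expansion all agree.

```python
import numpy as np

# Sketch: checking the discrete smoothness identity
#   S_G(f) = sum_{i~j} W_ij (f_i - f_j)^2 = f' L f = sum_i lambda_i <f,e_i>^2
# on a small random weighted graph (symmetric nonnegative W, zero diagonal).
rng = np.random.default_rng(1)
n = 8
A = rng.random((n, n))
W = np.triu(A, 1) + np.triu(A, 1).T            # symmetric weights, zero diagonal
L = np.diag(W.sum(axis=1)) - W                 # combinatorial graph Laplacian

f = rng.standard_normal(n)

# Direct sum over unordered edges i ~ j.
S_edges = sum(W[i, j] * (f[i] - f[j]) ** 2
              for i in range(n) for j in range(i + 1, n))

# Quadratic form and spectral expansion.
S_quad = f @ L @ f
lam, E = np.linalg.eigh(L)                     # columns of E are eigenvectors e_i
S_spec = np.sum(lam * (E.T @ f) ** 2)

print(S_edges, S_quad, S_spec)                 # all three values agree
```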
Viewed from the manifold perspective, this corresponds to the discrete analogue of the integration by parts we performed above. In addition, we can use all of this to consider questions having to do with ``regularization on manifolds and graphs,'' as we have alluded to in the past. To make this connection somewhat more precise, recall that for a RKHS, there exists a kernel $K : X \times X \rightarrow \mathbb{R}$ such that $f(x) = \left< f(\cdot),K(x,\cdot) \right>_H$. For us today, the domain $X$ could be a manifold $\mathcal{M}$ (in which case we are interested in kernels $K: \mathcal{M} \times \mathcal{M} \rightarrow\mathbb{R}$), or it could be points from $\mathbb{R}^{n}$ (say, on the nodes of the graph that was constructed from the original empirical data by a nearest neighbor rule, in which case we are interested in kernels $K:\mathbb{R}^{n}\times\mathbb{R}^{n}\rightarrow\mathbb{R}$). We haven't said anything precise yet about how these two relate, so now let's turn to that and ask about connections between kernels constructed from these two different places, as $n \rightarrow\infty$. \subsection{Convergence of Laplacians, setup and background} Now let's look at questions of convergence. If $H$ is a RKHS of functions $f : \mathcal{M} \rightarrow \mathbb{R}$ invariantly defined on $\mathcal{M}$, then the key goal is to minimize regularized risk functionals of the form $$ E_{\lambda} = \min_{f \in H} \mathbb{E}\left[ \left( y-f(x)\right)^2 \right] + \lambda \|f\|_H^2 . $$ In principle, we could do this---if we had an infinite amount of data available \emph{and} the true manifold were known. Instead, we minimize the empirical risk, which is of the form $$ \hat{E}_{\lambda,n} = \min_{f \in H} \frac{1}{n} \sum \left( y_i - f(x_i) \right)^2 + \lambda \|f \|_H^2 . $$ The big question is: how far is $\hat{E}_{\lambda,n}$ from $E_{\lambda}$?
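The empirical minimization can be computed explicitly: for a RKHS, the minimizer is a kernel expansion over the samples whose coefficients solve a linear system (this is the representer theorem, applied here as background knowledge rather than anything from the notes). Below is a minimal sketch; the Gaussian kernel is only a stand-in for an invariantly-defined kernel on $\mathcal{M}$, and the target function and all parameter values are illustrative assumptions.

```python
import numpy as np

# Sketch: minimizing the empirical regularized risk over a RKHS.  By the
# representer theorem the minimizer is f = sum_i c_i K(x_i, .), and the
# coefficients solve (K + n*lam*I) c = y (kernel ridge regression).
rng = np.random.default_rng(2)
n, lam, t = 100, 1e-3, 0.1
x = rng.uniform(-1, 1, n)
y = np.sin(3 * x) + 0.1 * rng.standard_normal(n)

# Gaussian kernel matrix (a stand-in for a kernel defined on the manifold).
K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (4 * t))
c = np.linalg.solve(K + n * lam * np.eye(n), y)

f_hat = K @ c                                   # fitted values at the samples
emp_risk = np.mean((y - f_hat) ** 2) + lam * c @ K @ c
print(emp_risk)
```

New points $x$ would be predicted by $\sum_i c_i K(x_i, x)$; the gap between this empirical minimum and $E_{\lambda}$ is exactly the question raised above.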
The point here is the following: assuming the manifold is known or can be estimated from the data, then making this connection is a relatively-straightforward application of Hoeffding bounds and regularization/stability ideas. But: \begin{itemize} \item In theory, establishing convergence to the hypothesized manifold is challenging. We will get to this below. \item In practice, testing the hypothesis that the data are drawn from a manifold in some meaningful sense of the word is harder still. (For some reason, this question is not asked in this area. It's worth thinking about what would be good test statistics to validate or invalidate the manifold hypothesis, e.g., is the fact that the best-conductance clusters are not well balanced sufficient to invalidate it?) \end{itemize} So, the goal here is to describe conditions under which the Laplacian constructed from the point cloud of sample points in $\mathcal{X}$ converges to the Laplace-Beltrami operator on the underlying hypothesized manifold $\mathcal{M}$. From this perspective, the primary data are points in $\mathcal{X}$ that are assumed to be drawn from an underlying manifold, with uniform or nonuniform density, and we want to make the claim that the Adjacency Matrix or Laplacian Matrix of the empirical data converges to that of the manifold. (That is, the data are not a graph, as will arise with the discussion of the stochastic block model.) In particular, the graph is an empirical object, and if we view spectral graph algorithms as applying to that empirical object, then they are statistically justified when they can be related to the underlying processes generating the data. What we will describe today is the following. \begin{itemize} \item For data drawn from a uniform distribution on a manifold $\mathcal{M}$, the graph Laplacian converges to the Laplace-Beltrami operator, as $n \rightarrow \infty$ and the kernel bandwidth is chosen appropriately (where the convergence is uniform over points on the manifold and for a class of functions).
\item The same argument applies for arbitrary probability distributions, except that one converges to a weighted Laplacian; and in this case the weights can be removed to obtain convergence to the normalized Laplacian. (Reweighting can be done in other ways to converge to other quantities of interest, but we won't discuss that in detail.) \end{itemize} Consider a compact smooth manifold $\mathcal{M}$ isometrically embedded in $\mathbb{R}^{n}$. The embedding induces a measure corresponding to the volume form $\mu$ on the manifold (e.g., the volume form for a closed curve, i.e., an embedding of the circle, measures the usual curve length in $\mathbb{R}^{n}$). The Laplace-Beltrami operator $\Delta_{\mathcal{M}}$ is the key geometric object associated to a Riemannian manifold. Given $\rho \in \mathcal{M}$, the tangent space $T_{\rho}\mathcal{M}$ can be identified with the affine space of tangent vectors to $\mathcal{M}$ at $\rho$. (This vector space has a natural inner product induced by the embedding $\mathcal{M} \subset \mathbb{R}^{n}$.) So, given a differentiable function $f:\mathcal{M} \rightarrow \mathbb{R}$, let $\nabla_{\mathcal{M}}f$ be the gradient vector field on $\mathcal{M}$ (where $\nabla_{\mathcal{M}}f(\rho)$ points in the direction of steepest ascent of $f$ at $\rho$). Here is the definition. \begin{definition} The \emph{Laplace-Beltrami operator} $\Delta_{\mathcal{M}}$ is the negative divergence of the gradient, i.e., $$ \Delta_{\mathcal{M}}f = - \mbox{div} \left( \nabla_{\mathcal{M}} f \right) . $$ Alternatively, $\Delta_{\mathcal{M}}$ can be defined as the unique operator s.t., for any two differentiable functions $f$ and $h$, $$ \int_{\mathcal{M}} h(x) \Delta_{\mathcal{M}} f(x) d \mu(x) = \int_{\mathcal{M}} \left< \nabla_{\mathcal{M}} h(x), \nabla_{\mathcal{M}} f(x) \right> d \mu , $$ where the inner product is on the tangent space and $\mu$ is the uniform measure. \end{definition} In $\mathbb{R}^{n}$, we have $\Delta f = - \sum_i \frac{\partial^2 f}{\partial x_i^2}$.
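As a quick sanity check on this definition (an added illustration), consider the unit circle $S^1$ with arclength coordinate $\theta$, so that $\nabla_{S^1} f = f^{\prime}(\theta)$ along the unit tangent direction and the divergence of a tangent field $v(\theta)$ is $v^{\prime}(\theta)$. The definition then gives $\Delta_{S^1} f = -f^{\prime\prime}(\theta)$, in agreement with the Fourier example above; and the defining integration-by-parts identity holds with no boundary terms, since the circle is closed: $$ \int_{S^1} h \, \Delta_{S^1} f \, d\theta = -\int_{S^1} h f^{\prime\prime} \, d\theta = \int_{S^1} h^{\prime} f^{\prime} \, d\theta = \int_{S^1} \left< \nabla_{S^1} h, \nabla_{S^1} f \right> d\theta . $$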
More generally, on a $k$-dimensional manifold $\mathcal{M}$, in a local coordinate system $\left( x_1, \ldots, x_k \right)$, with a metric tensor $g_{ij}$, if $g^{ij}$ are the components of the inverse of the metric tensor, then the Laplace-Beltrami operator applied to a function $f$ is $$ \Delta_{\mathcal{M}} f = -\frac{1}{\sqrt{\mbox{det}(g)}} \sum_j \frac{\partial}{\partial x_j} \left( \sqrt{\mbox{det}(g)} \sum_i g^{ij} \frac{\partial f}{ \partial x_i} \right) . $$ (If the manifold has nonuniform measure $\nu$, given by $d \nu(x) = P(x) d \mu (x)$, for some function $P(x)$ and with $d\mu$ being the canonical measure corresponding to the volume form, then we have the more general notion of a \emph{weighted manifold Laplacian}: $\Delta_{\mathcal{M},\nu} f = \Delta_P f = -\frac{1}{P(x)} \mbox{div} \left( P(x) \nabla_{\mathcal{M}} f \right) $.) The question is: how do we reconstruct $\Delta_{\mathcal{M}}$, given a finite sample of data points from the manifold? Here are the basic objects (in addition to $\Delta_{\mathcal{M}}$) that are used to answer this question. \begin{itemize} \item \textbf{Empirical Graph Laplacian.} Given a sample of $n$ points $x_1, \ldots, x_n$ from $\mathcal{M}$, we can construct a weighted graph with weights $W_{ij} = e^{-\|x_i-x_j\|^2/(4t)}$, and then \[ \left( L_n^t\right)_{ij} = \left\{ \begin{array}{l l} -W_{ij} & \quad \text{if $i \ne j$}\\ \sum_k W_{ik} & \quad \text{if $i=j$} \end{array} \right. . \] Call $L_n^t$ the graph Laplacian matrix. We can think of $L_n^t$ as an operator on functions defined on the $n$ empirical data points: \[ L_n^t f(x_i) = f(x_i) \sum_j e^{-\|x_i-x_j\|^2/(4t)} - \sum_j f(x_j) e^{-\|x_i-x_j\|^2/(4t)} , \] but this operator operates only on the empirical data, i.e., it says nothing about other points from $\mathcal{M}$ or the ambient space in which $\mathcal{M}$ is embedded. \item \textbf{Point Cloud Laplace operator.} This formulation extends the previous definition to any function on the ambient space.
Denote this by $\underbar{L}_n^t$ to get \[ \underbar{L}_n^t f(x) = f(x) \frac{1}{n} \sum_j e^{-\|x-x_j\|^2/(4t)} - \frac{1}{n}\sum_j f(x_j) e^{-\|x-x_j\|^2/(4t)} . \] (So, in particular, when evaluated on the empirical data points, we have that $\underbar{L}_n^t f(x_i) = \frac{1}{n} L_n^t f(x_i)$.) Call $\underbar{L}_n^t$ the Laplacian associated to the point cloud $x_1,\ldots,x_n$. \item \textbf{Functional approximation to the Laplace-Beltrami operator.} Given a measure $\nu$ on $\mathcal{M}$, we can construct an operator \[ \underbar{L}^tf(x) = f(x) \int_{\mathcal{M}} e^{-\|x-y\|^2/(4t)} d\nu(y) - \int_{\mathcal{M}} f(y) e^{-\|x-y\|^2/(4t)} d\nu(y) . \] Observe that $\underbar{L}_n^t$ is just a special form of $\underbar{L}^t$, corresponding to the Dirac measure supported on $x_1,\ldots,x_n$. \end{itemize} \subsection{Convergence of Laplacians, main result and discussion} The main result they describe establishes a connection between the graph Laplacian associated to a point cloud (which is an extension of the graph Laplacian from the empirical data points to the ambient space) and the Laplace-Beltrami operator on the underlying manifold $\mathcal{M}$. Here is the main result. \begin{theorem} Let $x_1,\ldots,x_n$ be data points sampled from a uniform distribution on the manifold $\mathcal{M} \subset \mathbb{R}^{N}$, where $k$ is the intrinsic dimension of $\mathcal{M}$. Choose $t_n = n^{-1/(k+2+\alpha)}$, for $\alpha > 0$, and let $f \in C^{\infty}\left(\mathcal{M}\right)$. Then \[ \lim_{n \rightarrow \infty} \frac{1}{t_n(4 \pi t_n)^{k/2}} \underbar{L}_n^{t_n} f(x) = \frac{1}{\mbox{Vol}\left(\mathcal{M}\right)} \Delta_{\mathcal{M}} f(x) , \] where the limit is taken in probability and $\mbox{Vol}\left(\mathcal{M}\right)$ is the volume of the manifold with respect to the canonical measure. \end{theorem} We are not going to go through the proof in detail, but we will outline some key ideas used in the proof. Before doing that, here are some things to note.
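Before turning to the discussion, here is a small numerical sketch (an added illustration; the circle data, bandwidth, test function, and seed are all assumptions) of the objects just defined: it builds $L_n^t$ from a point cloud sampled from the circle and checks that the point cloud operator agrees with $\frac{1}{n} L_n^t$ on the empirical data points.

```python
import numpy as np

# Sketch: the empirical graph Laplacian L_n^t and the point cloud Laplace
# operator, for points sampled from the unit circle S^1 embedded in R^2.
rng = np.random.default_rng(3)
n, t = 300, 0.01
phi = rng.uniform(0, 2 * np.pi, n)
X = np.column_stack([np.cos(phi), np.sin(phi)])       # point cloud on S^1

sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
W = np.exp(-sq / (4 * t))
L = np.diag(W.sum(axis=1)) - W                        # the matrix L_n^t

def f(x):
    return x[..., 0]                                  # a function on ambient R^2

def L_bar(x):
    """Point cloud Laplace operator applied to f at an ambient point x."""
    w = np.exp(-np.sum((x - X) ** 2, axis=-1) / (4 * t))
    return f(x) * w.mean() - np.mean(w * f(X))

# On the data points, the point cloud operator equals (1/n) L_n^t:
lhs = np.array([L_bar(X[i]) for i in range(n)])
rhs = (L @ f(X)) / n
print(np.max(np.abs(lhs - rhs)))                      # agreement to round-off
```

Unlike $L_n^t$, the function `L_bar` can also be evaluated at ambient points that are not in the sample, which is exactly the extension the second bullet describes.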
\begin{itemize} \item This theorem asserts pointwise convergence of $\underbar{L}_n^t f(p)$ to $\Delta_{\mathcal{M}} f(p)$, for a fixed function $f$ and a fixed point $p$. \item Uniformity over all $p \in \mathcal{M}$ follows almost immediately from the compactness of $\mathcal{M}$. \item Uniform convergence over a class of functions, e.g., functions in $C^{k}\left(\mathcal{M}\right)$ with bounded $k^{th}$ derivative, follows with more effort. \item One can consider a more general probability distribution $P$ on $\mathcal{M}$ according to which data points are sampled---we will get back to an example of this below. \end{itemize} For the proof, the easier part is to show that $\underbar{L}_n^t \rightarrow \underbar{L}^t$, as $n \rightarrow \infty$, if points are sampled uniformly: this uses some basic concentration results. The harder part is to connect $\underbar{L}^t$ and $\Delta_{\mathcal{M}}$: what must be shown is that, as $t \rightarrow 0$, $\underbar{L}^t$, appropriately scaled, converges to $\Delta_{\mathcal{M}}$. Here are the basic proof ideas, which exploit heavily connections with the heat equation on $\mathcal{M}$. For simplicity, consider first $\mathbb{R}^{k}$, where we have the following theorem. \begin{theorem}[Solution to heat equation on $\mathbb{R}^{k}$] Let $f(x)$ be a sufficiently differentiable bounded function, and define \[ H^t f(x) = \left(4 \pi t \right)^{-k/2} \int_{\mathbb{R}^{k}} e^{-\frac{\|x-y\|^2}{4t}} f(y) dy . \] Then \[ f(x) = \lim_{t \rightarrow 0} H^t f(x) , \] and the function $u(x,t) = H^t f(x)$ satisfies the heat equation \[ \frac{\partial}{\partial t} u(x,t) + \Delta u(x,t) = 0 \] with initial condition $u(x,0) = f(x)$. \end{theorem} This result for the heat equation is the key result for approximating the Laplace operator.
\begin{eqnarray*} \Delta f(x) &=& -\frac{\partial}{\partial t} u(x,t) |_{t=0} \\ &=& - \frac{\partial}{\partial t} H^t f(x) |_{t=0} \\ &=& \lim_{t \rightarrow 0} \frac{1}{t} \left( f(x) - H^t f(x) \right) . \end{eqnarray*} By this last result, we have a scheme for approximating the Laplace operator. To do so, recall that the heat kernel is the Gaussian that integrates to $1$, and so \[ \Delta f(x) = \lim_{t \rightarrow 0} -\frac{1}{t} \left( \left(4 \pi t\right)^{-k/2} \int_{\mathbb{R}^{k}} e^{\frac{-\|x-y\|^2}{4t}}f(y)dy - f(x) \left(4 \pi t\right)^{-k/2} \int_{\mathbb{R}^{k}} e^{\frac{-\|x-y\|^2}{4t}} dy \right) . \] It can be shown that this can be approximated from the point cloud $x_1,\ldots,x_n$ by computing the empirical version \begin{eqnarray*} \hat{\Delta} f(x) &=& \frac{1}{t} \frac{\left(4 \pi t\right)^{-k/2}}{n} \left( f(x)\sum_i e^{\frac{-\|x-x_i\|^2}{4t}} - \sum_i e^{\frac{-\|x-x_i\|^2}{4t}} f(x_i) \right) \\ &=& \frac{1}{t\left(4\pi t\right)^{k/2}} \underbar{L}_n^t f(x) . \end{eqnarray*} It is relatively straightforward to extend this to a convergence result for $\mathbb{R}^{k}$. To extend it to a convergence result for arbitrary manifolds $\mathcal{M}$, two issues arise: \begin{itemize} \item With very few exceptions, we don't know the exact form of the heat kernel $H_{\mathcal{M}}^t(x,y)$. (It has the nice form of a Gaussian for $\mathcal{M}=\mathbb{R}^{k}$.) \item Even asymptotic forms of the heat kernel require knowing the geodesic distances between points in the point cloud, but we can only observe distances in the ambient space. \end{itemize} See their paper for how they deal with these two issues; this involves methods from differential geometry that are very nice but that are not directly relevant to what we are doing. Next, what about sampling with respect to nonuniform probability distributions? Using the above proof, we can establish that we converge to a weighted Laplacian.
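Here is a one-dimensional numerical sketch of this approximation scheme (an added illustration; the sample size, bandwidth, and test function are assumptions). For $f(x) = x^2$ and the sign convention $\Delta = -d^2/dx^2$ used throughout, $\Delta f = -2$; sampling uniformly from $[-1,1]$ introduces the density factor $1/\mbox{Vol} = 1/2$ that appears in the theorem above, so the scaled estimate at an interior point should be near $-2 \cdot \frac{1}{2} = -1$.

```python
import numpy as np

# Sketch: the empirical approximation hat-Delta f(x) on R^1 (k = 1).
# For f(x) = x^2, Delta f = -f'' = -2; with points sampled uniformly from
# [-1, 1] (so density 1/2), the estimate at x = 0 should approach -1.
rng = np.random.default_rng(4)
n, t = 200_000, 1e-3
xs = rng.uniform(-1, 1, n)

def f(y):
    return y ** 2

def delta_hat(x):
    w = np.exp(-(x - xs) ** 2 / (4 * t))
    scale = 1.0 / (t * (4 * np.pi * t) ** 0.5)   # 1 / (t (4 pi t)^{k/2})
    return scale * (f(x) * w.mean() - np.mean(w * f(xs)))

est = delta_hat(0.0)
print(est)                                       # close to -1
```

Shrinking $t$ too fast relative to $n$ makes the estimate noisy, which is why the theorem couples the bandwidth to the sample size via $t_n = n^{-1/(k+2+\alpha)}$.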
If this is not of interest, then one can instead normalize differently and get one of two results. \begin{itemize} \item The weighted scaling factors can be removed by using a different normalization of the weights of the point cloud. This different normalization basically amounts to considering the normalized Laplacian. See below. \item With yet a different normalization, we can recover the Laplace-Beltrami operator on the manifold. The significance of this is that it is possible to separate geometric aspects of the manifold from the probability distribution on it. This is of interest to harmonic analysts, and it underlies extensions of Diffusion Maps beyond Laplacian Eigenmaps. \end{itemize} As for the first point, suppose we have a compact Riemannian manifold $\mathcal{M}$ and a probability distribution $P:\mathcal{M} \rightarrow \mathbb{R}^{+}$ according to which points are drawn in an i.i.d. fashion. Assume that $a \le P(x) \le b$, for all $x \in \mathcal{M}$. Then, define the point cloud Laplacian operator as \[\underbar{L}_n^t f(x) = \frac{1}{n} \sum_{i=1}^{n} W(x,x_i) \left( f(x) - f(x_i) \right) . \] If $W(x,x_i) = e^{-\frac{\|x-x_i\|^2}{4t}}$, then this corresponds to the operator we described above. In order to normalize the weights, let \[ W(x,x_i) = \frac{1}{t} \frac{G_t(x,x_i)}{\sqrt{\hat{d}_t(x)}\sqrt{\hat{d}_t(x_i)}} , \] where \begin{eqnarray*} G_t(x,x_i) &=& \frac{1}{\left( 4 \pi t\right)^{k/2} }e^{-\frac{\|x-x_i\|^2}{4t} } , \\ \hat{d}_t(x) &=& \frac{1}{n} \sum_{j} G_t(x,x_j) , \quad\mbox{and} \\ \hat{d}_t(x_i) &=& \frac{1}{n-1} \sum_{j \ne i} G_t(x_i,x_j) , \end{eqnarray*} where the latter two quantities are empirical estimates of the degree function $d_t(x)$, where \[ d_t(x) = \int_{\mathcal{M}} G_t (x,y) P(y) \, d\mbox{Vol}(y) . \] Note that we get a degree function---which is a continuous function defined on $\mathcal{M}$.
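The normalization just described can be sketched numerically as follows (an added illustration; the non-uniform density on the circle, the bandwidth, and the seed are assumptions). The empirical degree function tracks the sampling density $P$, which is exactly the effect that dividing by $\sqrt{\hat{d}_t(x)\hat{d}_t(x_i)}$ is designed to cancel.

```python
import numpy as np

# Sketch: the empirical degree estimates d_hat_t and the normalized weights
# W(x_i, x_j) = (1/t) G_t(x_i, x_j) / sqrt(d_hat_i d_hat_j), for points
# drawn non-uniformly from the unit circle (intrinsic dimension k = 1).
rng = np.random.default_rng(5)
n, t, k = 500, 0.01, 1
phi = 2 * np.pi * rng.beta(2, 2, n)          # sampling density peaks near phi = pi
X = np.column_stack([np.cos(phi), np.sin(phi)])

sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
G = np.exp(-sq / (4 * t)) / (4 * np.pi * t) ** (k / 2)
np.fill_diagonal(G, 0.0)

d_hat = G.sum(axis=1) / (n - 1)              # empirical degree at each x_i
W_norm = G / (t * np.sqrt(np.outer(d_hat, d_hat)))

# The degree function tracks the sampling density: it is larger where points
# are dense (near phi = pi) than where they are sparse (near phi = 0).
dense_pt = np.argmin(np.abs(phi - np.pi))
sparse_pt = np.argmin(phi)
print(d_hat[dense_pt] > d_hat[sparse_pt])    # True with high probability
```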
This function bears some resemblance to the diagonal degree matrix of a graph, and it can be thought of as a multiplication operator, but it has very different properties than an integral operator like the heat kernel. We will see this same function next time, and it will be important when we get to the consistency of normalized versus unnormalized spectral clustering. \section{(04/16/2015): Some Statistical Inference Issues (2 of 3): Convergence and consistency questions} Reading for today. \begin{compactitem} \item ``Consistency of spectral clustering,'' in Annals of Statistics, by von Luxburg, Belkin, and Bousquet \end{compactitem} Last time, we talked about whether the Laplacian constructed from point clouds converged to the Laplace-Beltrami operator on the manifold from which the data were drawn, under the assumption that the unseen hypothesized data points are drawn from a probability distribution that is supported on a low-dimensional Riemannian manifold. While potentially interesting, that result is a little unsatisfactory for a number of reasons, basically since one typically does not test the hypothesis that the underlying manifold even exists, and since the result doesn't imply anything statistical about cluster quality or prediction quality or some other inferential goal. For example, if one is going to use the Laplacian for spectral clustering, then probably a more interesting question is to ask whether the actual clusters that are identified make any sense, e.g., do they converge, are they consistent, etc. So, let's consider these questions. Today and next time, we will do this in two different ways. \begin{itemize} \item Today, we will address the question of the consistency of spectral clustering when there are data points drawn from some space $\mathcal{X}$ and we have similarity/dissimilarity information about the points. We will follow the paper ``Consistency of spectral clustering,'' by von Luxburg, Belkin, and Bousquet.
\item Next time, we will ask similar questions but for a slightly different data model, i.e., when the data are from very simple random graph models. As we will see, some of the issues will be similar to what we discuss today, but some of the issues will be different. \end{itemize} I'll start today with some general discussion on: algorithmic versus statistical approaches; similarity and dissimilarity functions; and embedding data in Hilbert versus Banach spaces. Although I covered this in class briefly, for completeness I'll go into more detail here. \subsection{Some general discussion on algorithmic versus statistical approaches} When discussing statistical issues, we need to say something about our model of the data generation mechanism, and we will discuss one such model here. This is quite different than the algorithmic perspective, and there are a few points that would be helpful to clarify. To do so, let's take a step back and ask: how are the data or training points generated? Here are two possible answers. \begin{itemize} \item \textbf{Deterministic setting.} Here, someone just provides us with a fixed set of objects (consisting, e.g., of a set of vectors or a single graph) and we have to work with this particular set of data. This setting is more like the algorithmic approach we have been adopting when we prove worst-case bounds. \item \textbf{Probabilistic setting.} Here, we can consider the objects as a random sample generated from some unknown probability distribution $P$. For example, this $P$ could be on (Euclidean or Hilbert or Banach or some other) space $\mathcal{X}$. Alternatively, this $P$ could be over random graphs or stochastic blockmodels. \end{itemize} There are many differences between these two approaches. One is the question of what counts as ``full knowledge.'' A related question has to do with the objective that is of interest.
\begin{itemize} \item In the deterministic setting, the data at hand count as full knowledge, since they are all there is. Thus, when one runs computations, one wants to make statements about the data at hand, e.g., how close in quality the output of an approximation algorithm is to the output of a more expensive exact computation. \item In the probabilistic setting, complete or full knowledge is to know $P$ exactly, and the finite sample contains only noisy information about $P$. Thus, when we run computations, we are only secondarily interested in the data at hand, since we are more interested in $P$, or relatedly in what we can say if we draw another noisy sample from $P$ tomorrow. \end{itemize} Sometimes, people think of the deterministic setting as a special case of the probabilistic setting, in which the data space equals the sample space and one has sampled all the data. Sometimes this perspective is useful, and sometimes it is not. In either setting, one simple problem of potential interest (that we have been discussing) is clustering: given training data $(x_i)_{i=1,\ldots,n}$, where the $x_i$ correspond to some features/patterns but for which there are no labels available, the goal is to find some sort of meaningful clusters. Another problem of potential interest is classification: given training points $(x_i,y_i)_{i=1,\ldots,n}$, where the $x_i$ correspond to some features/patterns and the $y_i$ correspond to labels, the goal is to infer a rule to assign a correct $y$ to a new $x$. It is often said that, in some sense, in the supervised case, \emph{what} we want to achieve is well-understood, and we just need to specify \emph{how} to achieve it; while in the unsupervised case, both \emph{what} we want to achieve as well as \emph{how} we want to achieve it is not well-specified. This is a popular view from statistics and ML; and, while it has some truth to it, it hides several things.
\begin{itemize} \item In both cases, one specifies---implicitly or explicitly---an objective and tries to optimize it. In particular, while the vague idea that we want to predict labels is reasonable, one obtains very different objectives, and thus very different algorithmic and statistical properties, depending on how sensitive one is to, e.g., false positives versus false negatives. Deciding on the precise form of this can be as much of an art as deciding on an unsupervised clustering objective. \item The objective to be optimized could depend on just the data at hand, or it could depend on some unseen hypothesized data (i.e., drawn from $P$). In the supervised case, that might be obvious; but even in the unsupervised case, one typically is not interested in the output per se, but instead in using it for some downstream task (that is often not specified). \end{itemize} All that being said, it is clearly easier to validate the supervised case. But we have also seen that the computations in the supervised case often boil down to computations that are identical to computations that arise in the unsupervised case. For example, in both cases locally-biased spectral ranking methods arise, but they arise for somewhat different reasons, and thus they are used in somewhat different ways. Due to the randomness in the generation of the training set, it is common to study ML algorithms from a statistical or probabilistic point of view and to model the data as coming from a probability space.
For example, in the supervised case, the unseen data are often modeled by a probability space of the form \[ \left(\left( \mathcal{X}\times\mathcal{Y} \right), \sigma\left( \mathcal{B}_{\mathcal{X}}\times\mathcal{B}_{\mathcal{Y}} \right), P \right) , \] where $\mathcal{X}$ is the feature/pattern space and $\mathcal{Y}$ is the label space, $\mathcal{B}_{\mathcal{X}}$ and $\mathcal{B}_{\mathcal{Y}}$ are $\sigma$-algebras on $\mathcal{X}$ and $\mathcal{Y}$, and $P$ is a joint probability distribution on patterns and labels. (Don't worry about the $\sigma$-algebra and measure theoretic issues if you aren't familiar with them, but note that $P$ is the main object of interest, and this is what we were talking about last time with labeled versus unlabeled data.) The typical assumption in this case is that $P$ is unknown, but that one can sample $\mathcal{X}\times\mathcal{Y}$ from $P$. On the other hand, in the unsupervised case, there is no $\mathcal{Y}$, and so in that case the unseen data are more often modeled by a probability space of the form \[ \left( \mathcal{X}, \mathcal{B}_{\mathcal{X}}, P \right) , \] in which case the training data points $(x_i)_{i=1,\ldots,n}$ are drawn from $P$. From the probabilistic perspective, one is less interested in the objective function quality on the data at hand, and instead one is often interested in finite-sample performance issues and/or asymptotic convergence issues. For example, here are some questions of interest. \begin{itemize} \item Does the classifier constructed by a given algorithm on a finite sample converge to a limit classifier as $n \rightarrow \infty$? \item If it converges, is the limit classifier the best possible; and if not, how suboptimal is it? \item How fast does convergence take place, as a function of increasing $n$? \item Can we estimate the difference between the finite-sample classifier and the optimal classifier, given only the sample?
\end{itemize} Today, we will look at the convergence of spectral clustering from this probabilistic perspective. But first, let's go into a little more detail about similarities and dissimilarities. \subsection{Some general discussion on similarities and dissimilarities} When applying all sorts of algorithms, and spectral algorithms in particular, MLers work with some notion either of similarity or dissimilarity. For example, spectral clustering uses an adjacency matrix, which is a sort of similarity function. Informally, a dissimilarity function is a notion that is somewhat like a distance measure; and a similarity/affinity function measures similarities and is sometimes thought about as a kernel matrix. Some of those intuitions map to what we have been discussing, e.g., metrics and metric spaces, but in some cases there are differences. Let's start first with dissimilarity/distance functions. In ML, people are often a little less precise than, say, in TCS; and---as used in ML---dissimilarity functions satisfy some or most or all of the following, but typically at least the first two. \begin{itemize} \item (D1) $d(x,x)=0$ \item (D2) $d(x,y) \ge 0$ \item (D3) $d(x,y) = d(y,x)$ \item (D4) $d(x,y)=0 \Rightarrow x=y$ \item (D5) $d(x,y) + d(y,z) \ge d(x,z)$ \end{itemize} Here are some things to note about dissimilarity and metric functions. \begin{itemize} \item Being more precise, a \emph{metric} satisfies all of these conditions; and a \emph{semi-metric} satisfies all of these except for (D4). \item MLers are often interested in dissimilarity measures that do \emph{not} satisfy (D3), e.g., the Kullback-Leibler ``distance.'' \item There is also interest in cases where (D4) is not satisfied. In particular, the so-called cut metric---which we used for flow-based graph partitioning---is a semi-metric. \item Condition (D4) says that if two points have distance equal to zero, then they are really the same point.
Clearly, if this is not satisfied, then one should expect an algorithm to have difficulty discriminating points (in clustering, classification, etc. problems) which have distance zero. \end{itemize} Here are some commonly used methods to transform non-metric dissimilarity functions into proper metric functions. \begin{itemize} \item If $d$ is a distance function and $x_0\in\mathcal{X}$ is arbitrary, then $\tilde{d}(x,y) = | d(x,x_0) - d(y,x_0) |$ is a semi-metric on $\mathcal{X}$. \item If $\left(\mathcal{X},d\right)$ is a finite dissimilarity space with $d$ symmetric and definite, then \[ \tilde{d} = \left\{ \begin{array}{l l} d(x,y)+c & \quad \text{if $x \ne y$} \\ 0 & \quad \text{if $x = y$} \end{array} \right. , \] with $c \ge \max_{p,q,r \in \mathcal{X}} | d(p,q)-d(p,r)-d(r,q) | $, is a metric. \item If $D$ is a dissimilarity matrix, then there exist constants $h$ and $k$ such that the matrices with elements $\tilde{d}_{ij} = \left( d_{ij}^{2} + h \right)^{1/2}$, for $i \ne j$, and $\bar{d}_{ij} = d_{ij} + k$, for $i \ne j$, are Euclidean. \item If $d$ is a metric, so are $d+c$, $d^{1/r}$, $\frac{d}{d+c}$, for $c \ge 0$ and $r \ge 1$. If $w: \mathbb{R}\rightarrow\mathbb{R}$ is a monotonically increasing function s.t. $w(x)=0 \iff x=0$ and $w(x+y) \le w(x) + w(y)$, then if $d(\cdot,\cdot)$ is a metric, $w(d(\cdot,\cdot))$ is also a metric. \end{itemize} Next, let's go to similarity functions. As used in ML, similarity functions satisfy some subset of the following. \begin{itemize} \item (S1) $s(x,x) > 0$ \item (S2) $s(x,y)=s(y,x)$ \item (S3) $s(x,y) \ge 0$ \item (S4) $\sum_{i,j=1}^{n} c_ic_j s(x_i,x_j) \ge 0$, for all $n \in \mathbb{N}, c_i\in\mathbb{R}, x_i\in\mathcal{X}$, i.e., $s$ is positive semi-definite (PSD). \end{itemize} Here are things to note about these similarity functions.
\begin{itemize} \item The non-negativity is actually \emph{not} satisfied by two examples of similarity functions that are commonly used: correlation coefficients and scalar products. \item One can transform a bounded similarity function to a nonnegative similarity function by adding an offset: $s(x,y) = s(x,y) +c$ for some $c$. \item If $S$ is PSD, then it is a kernel. This is a rather strong requirement that is mainly satisfied by scalar products in Hilbert spaces. \end{itemize} It is common to transform \emph{similarities to dissimilarities}. Here are two ways to do that. \begin{itemize} \item If the similarity is a scalar product in a Euclidean space (i.e., PD), then one can compute the metric \[ d(x,y)^2 = \left<x-y,x-y\right> = \left<x,x\right> - 2\left<x,y\right> + \left<y,y\right> . \] \item If the similarity function is normalized, i.e., $0 \le s(x,y) \le 1$, and $s(x,x)=1$, for all $x,y$, then $d=1-s$ is a distance. \end{itemize} It is also common to transform \emph{dissimilarities to similarities}. Here are two ways to do that. \begin{itemize} \item If the distance is Euclidean, then one can compute a PD similarity \[ s(x,y) = \frac{1}{2}\left( d(x,0)^2 + d(y,0)^2 - d(x,y)^2 \right) , \] where $0 \in \mathcal{X}$ is an arbitrary origin. \item If $d$ is a dissimilarity, then a nonnegative decreasing function of $d$ is a similarity, e.g., $s(x,y) = \exp\left( -d(x,y)^2/t \right)$, for $t > 0$, and also $s(x,y) = \frac{1}{1+d(x,y)}$. \end{itemize} These and related transformations are often used at the data preprocessing step, often in a somewhat ad hoc manner. Note, though, that the use of any one of them implies something about what one thinks the data ``looks like'' as well as about how algorithms will perform on the data. \subsection{Some general discussion on embedding data in Hilbert and Banach spaces} Here, we discuss embedding data (in the form of similarity or dissimilarity functions) into Hilbert and Banach spaces.
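To make these transformations concrete, here is a small numerical sketch (in Python, with toy data and an arbitrary bandwidth $t=1$, neither of which comes from the notes above): starting from a Euclidean distance matrix, apply the Gaussian transform $s = \exp(-d^2/t)$ to get a normalized similarity, and then recover a dissimilarity via $d = 1-s$.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))                 # 20 toy points in R^3

# Pairwise Euclidean distances d(x, y).
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)

# Dissimilarity -> similarity via the Gaussian transform s = exp(-d^2 / t);
# the bandwidth t = 1 is an arbitrary choice for this sketch.
t = 1.0
S = np.exp(-D ** 2 / t)

# S is normalized (0 <= s <= 1, s(x, x) = 1), so d' = 1 - s is again a
# dissimilarity, as stated above: zero diagonal, nonnegative, symmetric.
D_new = 1.0 - S
```

Note that the resulting $D_{\mathrm{new}}$ satisfies (D1)--(D3) by construction; whether the stronger conditions hold depends on the transform used.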
To do so, we start with an informal definition (informal since the precise notion of dissimilarity is a little vague, as discussed above). \begin{definition} A space $\left(\mathcal{X},d\right)$ is a dissimilarity space or a metric space, depending on whether $d$ is a dissimilarity function or a metric function. \end{definition} An important question for distance/metric functions, i.e., real metrics that satisfy the above conditions, is the following: when can a given metric space $\left(\mathcal{X},d\right)$ be embedded \emph{isometrically} in Euclidean space (or, slightly more generally, Hilbert space $\mathcal{H}$)? That is, the goal is to find a mapping $\phi : \mathcal{X} \rightarrow \mathcal{H}$ such that $d(x,y) = \|\phi(x)-\phi(y)\|$, for all $x,y\in\mathcal{X}$. (While this was something we relaxed before, e.g., when we looked at flow-based algorithms and looked at relaxations where there were distortions but they were not too large, e.g., $O(\log n)$, asking for isometric embeddings is more common in functional analysis.) To answer this question, note that distance in a Euclidean vector space satisfies (D1)--(D5), and so a necessary condition for the above is that (D1)--(D5) be satisfied. The well-known Schoenberg theorem characterizes which metric spaces can be isometrically embedded in Hilbert~space. \begin{theorem} A metric space $\left(\mathcal{X},d\right)$ can be embedded isometrically into Hilbert space iff $-d^2$ is conditionally positive definite, i.e., iff \[ -\sum_{i,j=1}^{\ell} c_ic_j d^2(x_i,x_j) \ge 0 \] for all $\ell\in\mathbb{N}, x_i,x_j\in\mathcal{X}, c_i,c_j\in\mathbb{R}$, with $\sum_ic_i = 0$. \end{theorem} Informally, this says that Euclidean spaces and Hilbert spaces are not ``big enough'' for arbitrary metric spaces. (We saw this before when we showed that constant degree expanders do not embed well in Euclidean spaces.)
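As a sanity check on Schoenberg's condition, the following sketch (hypothetical toy data, not from the notes) verifies numerically that $-d^2$ is conditionally positive definite when $d$ is the Euclidean metric on a finite point set: for any coefficient vector $c$ with $\sum_i c_i = 0$, the quadratic form $-\sum_{i,j} c_i c_j d^2(x_i,x_j)$ is nonnegative (indeed, expanding the squares shows it equals $2\|\sum_i c_i x_i\|^2$).

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(15, 2))                 # toy points with the Euclidean metric

# Squared distance matrix d^2(x_i, x_j).
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)

# Schoenberg's condition: for every c with sum(c) = 0, the quadratic form
# -sum_{i,j} c_i c_j d^2(x_i, x_j) is nonnegative; track the worst case over trials.
worst = np.inf
for _ in range(100):
    c = rng.normal(size=15)
    c -= c.mean()                            # enforce sum(c) = 0
    worst = min(worst, -c @ D2 @ c)
```

Running this, `worst` stays nonnegative (up to floating-point error), consistent with the fact that Euclidean metrics embed isometrically into Hilbert space.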
More generally, though, isometric embeddings into certain Banach spaces can be achieved for arbitrary metric spaces. (More on this later.) For completeness, we have the following definition. \begin{definition} Let $X$ be a vector space over $\mathbb{C}$. Then $X$ is a \emph{normed linear space} if for all $f\in X$, there exists a number, $\|f\|\in\mathbb{R}$, called the norm of $f$ s.t.: (1) $\|f\| \ge 0$; (2) $\|f\| = 0 \mbox{ iff } f=0$; (3) $\|cf\| = |c|\|f\|$, for all scalars $c$; (4) $\|f+g\| \le \|f\|+\|g\|$. A \emph{Banach space} is a complete normed linear space. A \emph{Hilbert space} is a Banach space whose norm is determined by an inner product. \end{definition} This is a large area, most of which is off topic for us. If you are not familiar with it, just note that RKHSs are particularly nice Hilbert spaces that are sufficiently heavily regularized that the nice properties of $\mathbb{R}^{n}$, for $n<\infty$, still hold; general infinite-dimensional Hilbert spaces are more general and less well-behaved; and general Banach spaces are even more general and less well-behaved. Since it is determined by an inner product, the norm for a Hilbert space is essentially an $\ell_2$ norm; and so, if you are familiar with the $\ell_1$ or $\ell_{\infty}$ norms and how they differ from the $\ell_2$ norm, then that might help provide very rough intuition on how Banach spaces can be more general than Hilbert~spaces. \subsection{Overview of consistency of normalized and unnormalized Laplacian spectral methods} Today, we will look at the convergence of spectral clustering from this probabilistic perspective. Following the von Luxburg, Belkin, and Bousquet paper, we will address the following two questions. \begin{itemize} \item Q1: Does spectral clustering converge to some limit clustering as more and more data points are sampled, i.e., as $n\rightarrow\infty$?
\item Q2: If it does converge, then is the limit clustering a useful partition of the input space from which the data are drawn? \end{itemize} One reason for focusing on these questions is that it can be quite difficult to determine what is a cluster and what is a good cluster, and so as a more modest goal one can ask for ``consistency,'' i.e., that the clustering constructed on a finite sample drawn from some distribution converges to a fixed limit clustering of the whole data space when $n \rightarrow \infty$. Clearly, this notion is particularly relevant in the probabilistic setting, since then we obtain a partitioning of the underlying space $\mathcal{X}$ from which the data are drawn. Informally, this will provide an ``explanation'' for why spectral clustering works. Importantly, though, this consistency ``explanation'' will be very different than the ``explanations'' that have been offered in the deterministic or algorithmic setting, where the data at hand represent full knowledge. In particular, when just viewing the data at hand, we have provided the following informal explanation of why spectral clustering works. \begin{itemize} \item Spectral clustering works since it wants to find clusters s.t. the probability of random walks staying within a cluster is higher and the probability of going to the complement is smaller. \item Spectral clustering works since it approximates via Cheeger's Inequality the intractable expansion/conductance objective. \end{itemize} In both of those cases, we are providing an explanation in terms of the data at hand; i.e., while we might have an underlying space $\mathcal{X}$ in the back of our mind, they are statements about the data at hand, or actually the graph constructed from the data at hand. The answer to the above two questions (Q1 and Q2) will be basically the following. \begin{itemize} \item Spectral clustering with the normalized Laplacian is consistent under very general conditions. 
For the normalized Laplacian, when it can be applied, the corresponding clustering does converge to a limit. \item Spectral clustering with the non-normalized Laplacian is not consistent, except under very specific conditions. These conditions have to do with, e.g., variability in the degree distribution, and these conditions often do \emph{not} hold in practice. \item In either case, if the method converges, then the limit does have intuitively appealing properties and splits the space $\mathcal{X}$ up into two pieces that are reasonable; but for the non-normalized Laplacian one will obtain a trivial limit if the strong conditions are not satisfied. \end{itemize} As with last class, we won't go through all the details, and instead the goal will be to show some of the issues that arise and tools that are used if one wants to establish statistical results in this area; and also to show you how things can ``break down'' in non-ideal situations. To talk about convergence/consistency of spectral clustering, we need to make statements about eigenvectors, and for this we need to use the spectral theory of bounded linear operators, i.e., methods from functional analysis. In particular, the information we will need will be somewhat different than what we needed in the last class when we talked about the convergence of the Laplacian to the hypothesized Laplace-Beltrami operator, but there will be some similarities. Today, we are going to view the data points as coming from some Hilbert or Banach space, call it $\mathcal{X}$, and from these data points we will construct an empirical Laplacian. (Next time, we will consider graphs that are directly constructed via random graph processes and stochastic block models.) The main step today will be to establish the convergence of the eigenvalues and eigenvectors of random graph Laplacian matrices for growing sample sizes.
This boils down to questions of convergence of random Laplacian matrices constructed from sample point sets. (Note that although there has been a lot of work in random matrix theory on the convergence of random matrices with i.i.d. entries or random matrices with fixed sample size, e.g., covariance matrices, this work isn't directly relevant here, basically since the random Laplacian matrix grows with the sample size $n$ and since the entries of the random Laplacian matrix are not independent. Thus, more direct proof methods need to be used here.) Assume we have a sample $\{ x_1, \ldots, x_n \}$ and a pairwise similarity $k: \mathcal{X}\times\mathcal{X} \rightarrow \mathbb{R}$, which is usually symmetric and nonnegative. For any fixed data set of $n$ points, define the following: \begin{itemize} \item the Laplacian $L_n = D_n-K_n$, \item the normalized Laplacian $L_n^{\prime} = D_n^{-1/2}L_nD_n^{-1/2}$, and \item the random walk Laplacian $L_n^{\prime\prime} = D_n^{-1}L_n$. \end{itemize} (Although it is different than what we used before, the notation of the von Luxburg, Belkin, and Bousquet paper is what we will use here.) Note that here we assume that $d_i > 0$, for all $i$. We are interested in computing the leading eigenvector or several of the leading eigenvectors of one of these matrices and then clustering with them. To see the kind of convergence result one could hope for, consider the second eigenvector $\left(v_1,\ldots,v_n\right)^T$ of $L_n$, and let's interpret it as a function $f_n$ on the discrete space $\mathcal{X}_n = \{ X_1,\ldots,X_n \}$ by defining the function $f_n(X_i)=v_i$. (This is the view we have been adopting all along.) Then, we can perform clustering by performing a sweep cut, or we can cluster based on whether the value of $f_n$ is above or below a certain threshold.
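The three Laplacians just defined, and the thresholding of $f_n$, can be sketched as follows (a toy illustration, assuming a Gaussian similarity with bandwidth $1$ and a synthetic two-cluster sample; none of these choices come from the von Luxburg, Belkin, and Bousquet paper).

```python
import numpy as np

rng = np.random.default_rng(2)
n = 60
# Synthetic two-cluster sample on the line (an illustrative choice).
X = np.vstack([rng.normal(-2.0, 0.5, size=(n // 2, 1)),
               rng.normal(+2.0, 0.5, size=(n // 2, 1))])

# Gaussian similarity k(x, y) = exp(-|x - y|^2 / t), bandwidth t = 1 (arbitrary).
K = np.exp(-(X - X.T) ** 2 / 1.0)
d = K.sum(axis=1)
D = np.diag(d)

L = D - K                                            # Laplacian L_n
Lsym = np.diag(d ** -0.5) @ L @ np.diag(d ** -0.5)   # L_n' = D_n^{-1/2} L_n D_n^{-1/2}
Lrw = np.diag(1.0 / d) @ L                           # L_n'' = D_n^{-1} L_n

# Second eigenvector of L_n, read as a function f_n(X_i) = v_i on the sample,
# then clustered by thresholding at zero.
vals, vecs = np.linalg.eigh(L)
f_n = vecs[:, 1]
labels = (f_n > 0).astype(int)
```

On data this well separated, the threshold of $f_n$ recovers the two clusters; the consistency question below is what happens to this $f_n$ as $n\rightarrow\infty$.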
Then, in the limit $n\rightarrow\infty$, we would like $f_n\rightarrow f$, where $f$ is a function on the entire space $\mathcal{X}$, such that we can threshold $f$ to partition $\mathcal{X}$. To do this, we can do the following. \begin{compactenum} \item Choose the limit space to be $C\left(\mathcal{X}\right)$, the space of continuous functions on $\mathcal{X}$. \item Construct a function $d \in C\left(\mathcal{X}\right)$, a degree function, that is the ``limit'' as $n\rightarrow\infty$ of the discrete degree vector $\left(d_1,\ldots,d_n\right)$. \item Construct linear operators $U$, $U^{\prime}$, and $U^{\prime\prime}$ on $C\left(\mathcal{X}\right)$ that are the limits of the discrete operators $L_n$, $L_n^{\prime}$, and $L_n^{\prime\prime}$. \item Prove that certain eigenfunctions of the discrete operators ``converge'' to the eigenfunctions of the limit operators. \item Use the eigenfunctions of the limit operator to construct a partition for the entire space $\mathcal{X}$. \end{compactenum} We won't get into details about the convergence properties here, but below we will highlight a few interesting aspects of the limiting process. The main result they show is that in the case of normalized spectral clustering, the limit behaves well, and things converge to a sensible partition of the entire space; while in the case of unnormalized spectral clustering, the convergence properties are much worse (for reasons that are interesting and that we will describe). \subsection{Details of consistency of normalized and unnormalized Laplacian spectral methods} Here is an overview of the two main results in more detail. \textbf{Result 1.} (Convergence of normalized spectral clustering.)
Under mild assumptions, \emph{if the first $r$ eigenvalues of the limit operator $U^{\prime}$ satisfy $\lambda_i \ne 1$ and have multiplicity one}, then \begin{compactitem} \item the same holds for the first $r$ eigenvalues of $L_n^{\prime}$, as $n\rightarrow\infty$; \item the first $r$ eigenvalues of $L_n^{\prime}$ converge to the first $r$ eigenvalues of $U^{\prime}$; \item the corresponding eigenvectors converge; and \item the clusters found from the first $r$ eigenvectors on finite samples converge to a limit clustering of the entire data space. \end{compactitem} \textbf{Result 2.} (Convergence of unnormalized spectral clustering.) Under mild assumptions, \emph{if the first $r$ eigenvalues of the limit operator $U$ do not lie in the range of the degree function $d$ and have multiplicity one}, then \begin{compactitem} \item the same holds for the first $r$ eigenvalues of $L_n$, as $n\rightarrow\infty$; \item the first $r$ eigenvalues of $L_n$ converge to the first $r$ eigenvalues of $U$; \item the corresponding eigenvectors converge; and \item the clusters found from the first $r$ eigenvectors on finite samples converge to a limit clustering of the entire data space. \end{compactitem} Although both of these results have a similar structure (``if the inputs are nice, then one obtains good clusters''), the ``niceness'' assumptions are very different: for normalized spectral clustering, it is the rather innocuous assumption that $\lambda_i \ne 1$, while for unnormalized spectral clustering it is the much stronger assumption that $\lambda_i \notin \mbox{range}(d)$. This assumption is necessary, as it is needed to ensure that the eigenvalue $\lambda_i$ is isolated in the spectrum of the limit operator. This is a requirement to be able to apply perturbation theory to the convergence of eigenvectors. In particular, here is another result. \textbf{Result 3.} (The condition $\lambda \notin \mbox{range}(d)$ is necessary.)
\begin{compactitem} \item There exist similarity functions such that there exist no nonzero eigenvalues outside of $\mbox{range}(d)$. \item In this case, the sequence of second eigenvalues of $\frac{1}{n}L_n$ converges to $\min d(x)$, and the corresponding eigenvectors do \emph{not} yield a sensible clustering of the entire data space. \item For a wide class of similarity functions, there exist only finitely many eigenvalues, say $r_0$ of them, outside of $\mbox{range}(d)$, and the same problems arise if one clusters with $r > r_0$ eigenfunctions. \item The condition $\lambda \notin \mbox{range}(d)$ refers to the limit and cannot be verified on a finite sample. \end{compactitem} That is, unnormalized spectral clustering can fail completely, and one cannot detect it with a finite~sample. The reason for the difference between the first two results is the following. \begin{itemize} \item In the case of normalized spectral clustering, the limit operator $U^{\prime}$ has the form $U^{\prime}=I-T$, where $T$ is a compact linear operator. Thus, the spectrum of $U^{\prime}$ is well-behaved, and all the eigenvalues $\lambda \ne 1$ are isolated and have finite multiplicity. \item In the case of unnormalized spectral clustering, the limit operator $U$ has the form $U=M-S$, where $M$ is a multiplication operator, and $S$ is a compact integral operator. Thus, the spectrum of $U$ is not as nice as that of $U^{\prime}$, since it contains the interval $\mbox{range}(d)$, and the eigenvalues will be isolated only if $\lambda_i \notin \mbox{range}(d)$. \end{itemize} Let's get into more detail about how these differences arise. To do so, let's make the following assumptions about the data. \begin{itemize} \item The data space $\mathcal{X}$ is a compact metric space, $\mathcal{B}$ is the Borel $\sigma$-algebra on $\mathcal{X}$, and $P$ is a probability measure on $\left(\mathcal{X},\mathcal{B}\right)$. We draw a sample of points $\left(X_i\right)_{i\in\mathbb{N}}$ i.i.d. from $P$.
The similarity function $k: \mathcal{X} \times \mathcal{X} \rightarrow \mathbb{R}$ is symmetric, continuous, and there exists an $\ell > 0$ such that $k(x,y) > \ell$, for all $x,y\in\mathcal{X}$. (The assumption that $k$ is bounded away from $0$ is needed due to the division in the normalized Laplacian.) \end{itemize} For $f:\mathcal{X}\rightarrow\mathbb{R}$, we can denote the range of $f$ by $\mbox{range}(f)$. Then, if $\mathcal{X}$ is connected and $f$ is continuous, then $\mbox{range}(f) = \left[ \inf_x f(x), \sup_x f(x) \right]$. Then we can define the following. \begin{definition} The \emph{restriction operator} $\rho_n : C\left(\mathcal{X}\right)\rightarrow\mathbb{R}^{n}$ denotes the random operator which maps a function to its values on the first $n$ data points, i.e., \[ \rho_n(f) = \left( f(X_1),\ldots,f(X_n) \right)^T . \] \end{definition} Here are some facts from spectral and perturbation theory of linear operators that are needed. Let $E$ be a real-valued Banach space, and let $T: E \rightarrow E$ be a bounded linear operator. Then, an \emph{eigenvalue} of $T$ is defined to be a real or complex number $\lambda$ such that \[ Tf=\lambda f, \mbox{ for some nonzero } f \in E . \] Note that $\lambda$ is an eigenvalue of $T$ iff the operator $T-\lambda$ has a nontrivial kernel (recall that if $L:V \rightarrow W$ then $\mbox{ker}(L) = \{ v \in V : L(v) = 0 \}$) or equivalently if $T-\lambda$ is \emph{not} injective (recall that $f:A \rightarrow B$ is injective iff $\forall a,b \in A$ we have that $f(a)=f(b) \Rightarrow a=b$, i.e., different elements of the domain do not get mapped to the same element). Then, the \emph{resolvent} of $T$ is defined to be \[ \rho(T) = \{ \lambda \in \mathbb{R} : \left(\lambda-T\right)^{-1} \mbox{ exists and is bounded} \} , \] and the \emph{spectrum} of $T$ is defined to be \[ \sigma(T) = \mathbb{R} \setminus \rho(T) . \] This holds very generally, and it is the way the spectrum is generalized in functional analysis.
(Note that if $E$ is finite dimensional, then every non-invertible operator is not injective; and so $\lambda \in \sigma(T) \Rightarrow \lambda \mbox{ is an eigenvalue of } T$. If $E$ is infinite dimensional, this can fail; basically, one can have operators that are injective but that have no bounded inverse, in which case the spectrum can contain more than just eigenvalues.) We say that a part $\sigma_{iso} \subset \sigma(T)$ is \emph{isolated} if there exists an open neighborhood $\xi \subset \mathbb{C}$ of $\sigma_{iso}$ such that $\sigma(T) \cap \xi = \sigma_{iso}$. If the spectrum $\sigma(T)$ of a bounded operator $T$ in a Banach space $E$ consists of isolated parts, then for each isolated part of the spectrum, a \emph{spectral projection} $P_{iso}$ can be \emph{defined} operationally as a path integral over the complex plane of a path $\Gamma$ that encloses $\sigma_{iso}$ and that separates it from the rest of $\sigma(T)$, i.e., for $\sigma_{iso}\subset\sigma(T)$, the corresponding spectral projection is \[ P_{iso} = \frac{1}{2 \pi i} \int_{\Gamma} \left( T- \lambda I \right)^{-1} d\lambda , \] where $\Gamma$ is a closed Jordan curve in the complex plane separating $\sigma_{iso}$ from the rest of the spectrum. If $\lambda$ is an isolated eigenvalue of $\sigma(T)$, then the dimension of the range of the spectral projection $P_{\lambda}$ is defined to be the \emph{algebraic multiplicity} of $\lambda$ (for a finite dimensional Banach space, this is the multiplicity of the root $\lambda$ of the characteristic polynomial, as we saw before), and the \emph{geometric multiplicity} is the dimension of the eigenspace of $\lambda$. One can split up the spectrum into two parts: the \emph{discrete spectrum} $\sigma_d(T)$ is the part of $\sigma(T)$ that consists of isolated eigenvalues of $T$ with finite algebraic multiplicity; and the \emph{essential spectrum} is $\sigma_{ess}(T) = \sigma(T) \setminus \sigma_d(T) $.
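The distinction between discrete and essential spectrum is exactly what will separate the normalized and unnormalized cases below. As a discretized sketch (with a hypothetical degree function $d(x) = 2+\sin(2\pi x)$ on $[0,1]$, chosen only for this illustration): a multiplication operator $M_d$ discretizes to a diagonal matrix whose eigenvalues fill up $\mbox{range}(d)$ densely as the grid grows (an essential-spectrum-like interval), while a compact integral operator with a smooth kernel has rapidly decaying eigenvalues accumulating only at $0$.

```python
import numpy as np

# Discretize [0, 1]; d(x) = 2 + sin(2*pi*x) is a hypothetical degree function
# with range(d) = [1, 3] (chosen only for this sketch).
x = np.linspace(0.0, 1.0, 400)
d = 2.0 + np.sin(2 * np.pi * x)

# Multiplication operator M_d: its discretization is diag(d), so its
# eigenvalues are just the values of d and fill [1, 3] densely as the grid grows.
M_eigs = np.sort(d)

# Compact integral operator with a smooth Gaussian kernel: its discretized
# eigenvalues decay rapidly toward 0, i.e., isolated eigenvalues accumulating at 0.
S = np.exp(-np.subtract.outer(x, x) ** 2) / len(x)
S_eigs = np.sort(np.abs(np.linalg.eigvalsh(S)))[::-1]
```

The contrast shows up numerically: `M_eigs` spreads across $[1,3]$ with no gaps, while `S_eigs` drops by many orders of magnitude within the first couple dozen eigenvalues.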
It is a fact that the essential spectrum cannot be changed by a finite-dimensional or compact perturbation of an operator, i.e., for a bounded operator $T$ and a compact operator $V$, it holds that $\sigma_{ess}(T+V) = \sigma_{ess}(T)$. The important point here is that one can define spectral projections only for isolated parts of the spectrum of an operator and that these isolated parts of the spectrum are the only parts to which perturbation theory can be applied. Given this, one has perturbation results for compact operators. We aren't going to state these precisely, but the following is an informal statement. \begin{itemize} \item Let $\left( E , \|\cdot\|_E \right)$ be a Banach space, and $\left(T_n\right)_n$ and $T$ bounded linear operators on $E$ with $T_n\rightarrow T$. Let $\lambda \in \sigma(T)$ be an isolated eigenvalue with finite multiplicity $m$, and let $\xi \subset \mathbb{C}$ be an open neighborhood of $\lambda$ such that $\sigma(T) \cap \xi = \{\lambda\}$. Then, \begin{compactitem} \item eigenvalues converge, \item spectral projections converge, and \item if $\lambda$ is a simple eigenvalue, then the corresponding eigenvector converges. \end{compactitem} \end{itemize} We aren't going to go through the details of their convergence argument, but we will discuss the following issues. The technical difficulty with proving convergence of normalized/unnormalized spectral clustering, e.g., the convergence of $\left(v_n\right)_{n \in \mathbb{N}}$ or of $\left(L_n^{\prime}\right)_{n\in\mathbb{N}}$, is that for different sample sizes $n$, the vectors $v_n$ have different lengths and the matrices $L_n^{\prime}$ have different dimensions, and so they ``live'' in different spaces for different values of $n$. For this reason, one can't apply the usual notions of convergence.
Instead, one must show that there exist functions $f\in C\left(\mathcal{X}\right)$ such that $\| v_n - \rho_n f \| \rightarrow 0$, i.e., such that the eigenvector $v_n$ and the restriction of $f$ to the sample converge. Relatedly, one relates the Laplacians to some other operator such that they are all defined on the same space. In particular, one can define a sequence $(U_n)$ of operators that are related to the matrices $(L_n)$; but each operator $U_n$ is defined on the space $C(\mathcal{X})$ of continuous functions on $\mathcal{X}$, independent of $n$. All this involves constructing various functions and operators on $C\left(\mathcal{X}\right)$. There are basically two \emph{types} of operators, integral operators and multiplication operators, and they will enter in somewhat different ways (that will be responsible for the difference in the convergence properties between normalized and unnormalized spectral clustering). So, here are some basic facts about integral operators and multiplication operators. \begin{definition} Let $\left(\mathcal{X},\mathcal{B},\mu\right)$ be a probability space, and let $k \in L_2\left( \mathcal{X}\times\mathcal{X},\mathcal{B}\times\mathcal{B},\mu\times\mu \right)$. Then, the function $S: L_2\left(\mathcal{X},\mathcal{B},\mu\right) \rightarrow L_2\left(\mathcal{X},\mathcal{B},\mu\right)$ defined as \[ Sf(x) = \int_{\mathcal{X}} k(x,y) f(y) d\mu(y) \] is an \emph{integral operator} with kernel $k$. \end{definition} If $\mathcal{X}$ is compact and $k$ is continuous, then (among other things) the integral operator $S$ is compact. \begin{definition} Let $\left(\mathcal{X},\mathcal{B},\mu\right)$ be a probability space, and let $d \in L_{\infty}\left(\mathcal{X},\mathcal{B},\mu\right)$. Then a \emph{multiplication operator} $M_d : L_2\left(\mathcal{X},\mathcal{B},\mu\right) \rightarrow L_2\left(\mathcal{X},\mathcal{B},\mu\right)$ is \[ M_d f = fd .
\] \end{definition} This is a bounded linear operator; but if $d$ is non-constant, then the operator $M_d$ is \emph{not} compact. Given the above two different types of operators, let's introduce specific operators on $C\left(\mathcal{X}\right)$ corresponding to matrices we are interested in. (In general, we will proceed by identifying vectors $\left( v_1,\ldots,v_n \right)^{T} \in \mathbb{R}^{n}$ with functions $f \in C\left(\mathcal{X}\right)$ such that $f(x_i) = v_i$ and extending linear operators on $\mathbb{R}^{n}$ to deal with such functions rather than vectors.) Start with the unnormalized Laplacian: $L_n = D_n - K_n$, where $D_n = \mbox{diag}(d_i)$, with $d_i = \sum_{j} K(x_i,x_j)$. We want to relate the degree vector $\left( d_1,\ldots,d_n \right)^T$ to a function on $C\left(\mathcal{X}\right)$. To do so, define the true and empirical degree functions: \begin{eqnarray*} d(x) &=& \int k(x,y) dP(y) \in C\left(\mathcal{X}\right) \\ d_n(x) &=& \int k(x,y) dP_n(y) \in C\left(\mathcal{X}\right) \end{eqnarray*} (Note that $d_n \rightarrow d$ as $n\rightarrow\infty$ by a LLN.) By definition, $d_n(X_i) = \frac{1}{n}d_i$, and so the empirical degree function agrees with the degrees of the points $X_i$, up to the scaling $\frac{1}{n}$. Next, we want to find an operator acting on $C\left(\mathcal{X}\right) $ that behaves similarly to the matrix $D_n$ on $\mathbb{R}^{n}$. Applying $D_n$ to a vector $f = \left(f_1,\ldots,f_n\right)^{T} \in \mathbb{R}^{n}$ gives $\left(D_nf\right)_i = d_if_i$, i.e., each element is multiplied by $d_i$. So, in particular, we can interpret $\frac{1}{n}D_n$ as a multiplication operator. Thus, we can define the true and empirical multiplication operators: \begin{eqnarray*} M_d : C\left(\mathcal{X}\right) \rightarrow C\left(\mathcal{X}\right) & & \quad M_d f(x) = d(x) f(x) \\ M_{d_n} : C\left(\mathcal{X}\right) \rightarrow C\left(\mathcal{X}\right) & & \quad M_{d_n}f(x) = d_n(x) f(x) \end{eqnarray*} Next, we will look at the matrix $K_n$.
Applying it to a vector $f \in \mathbb{R}^{n}$ gives $\left( K_n f \right)_i = \sum_j K(x_i,x_j) f_j $. Thus, we can define the empirical and true integral operators: \begin{eqnarray*} S_{n} : C\left(\mathcal{X}\right) \rightarrow C\left(\mathcal{X}\right) & & \quad S_{n}f(x) = \int k(x,y) f(y) dP_n(y) \\ S : C\left(\mathcal{X}\right) \rightarrow C\left(\mathcal{X}\right) & & \quad S f(x) = \int k(x,y) f(y) dP(y) \end{eqnarray*} With these definitions, we can define the \emph{empirical unnormalized graph Laplacian}, $U_n : C\left(\mathcal{X}\right)\rightarrow C\left(\mathcal{X}\right)$, and the \emph{true unnormalized graph Laplacian}, $U : C\left(\mathcal{X}\right)\rightarrow C\left(\mathcal{X}\right)$, as \begin{eqnarray*} U_nf(x) &=& M_{d_n} f(x) - S_nf(x) = \int k(x,y) \left( f(x)-f(y) \right) dP_n(y) \\ Uf(x) &=& M_d f(x) - Sf(x) = \int k(x,y) \left( f(x)-f(y) \right) dP(y) \end{eqnarray*} For the normalized Laplacian, we can proceed as follows. Recall that $v$ is an eigenvector of $L_n^{\prime}$ with eigenvalue $\lambda$ iff $v$ is an eigenvector of $H_n^{\prime} = D^{-1/2} K_n D^{-1/2}$ with eigenvalue $1-\lambda$. So, consider $H_n^{\prime}$, defined as follows. The matrix $H_n^{\prime}$ operates on a vector $f = \left( f_1,\ldots, f_n \right)^{T}$ as $\left( H_n^{\prime} f \right)_{i} = \sum_j \frac{ K(x_i,x_j) }{ \sqrt{ d_id_j} } f_j $. Thus, we can define the normalized empirical and true similarity functions \begin{eqnarray*} h_n(x,y) &=& k(x,y) / \sqrt{ d_n(x) d_n(y) } \\ h(x,y) &=& k(x,y) / \sqrt{ d(x) d(y) } \end{eqnarray*} and introduce two integral operators \begin{eqnarray*} T_{n} : C\left(\mathcal{X}\right) \rightarrow C\left(\mathcal{X}\right) & & \quad T_{n}f(x) = \int h_n(x,y) f(y) dP_n(y) \\ T : C\left(\mathcal{X}\right) \rightarrow C\left(\mathcal{X}\right) & & \quad T f(x) = \int h(x,y) f(y) dP(y) \end{eqnarray*} Note that for these operators the scaling factors $\frac{1}{n}$ which are hidden in $P_n$ and $d_n$ cancel each other.
Said another way, the matrix $H_n^{\prime}$ already has the $\frac{1}{n}$ scaling factor built in---as opposed to the matrix $K_n$ in the unnormalized case. So, contrary to the unnormalized case, we do not have to scale the matrix $H_n^{\prime}$ with the $\frac{1}{n}$ factor. All of the above is machinery that enables us to transfer the problem of convergence of Laplacian matrices to problems of convergence of sequences of operators on $C\left(\mathcal{X}\right)$. Given the above, they establish a lemma which, informally, says that under the general assumptions: \begin{compactitem} \item the functions $d_n$ and $d$ are continuous, bounded from below by $\ell > 0$, and bounded from above by $\|k\|_{\infty}$, \item all the operators are bounded, \item all the integral operators are compact, and \item all the operator norms can be controlled. \end{compactitem} The hard work is to show that the empirical quantities converge to the true quantities; this is done with the perturbation result above (where, recall, the perturbation theory can be applied only to isolated parts of the spectrum). In particular: \begin{itemize} \item In the normalized case, this is true if $\lambda \ne 1$ is an eigenvalue of $U^{\prime}$ that is of interest. The reason is that $U^{\prime} = I - T$, where $T$ is a compact operator. \item In the unnormalized case, this is true if $\lambda \notin \mbox{range}(d)$ is an eigenvalue of $U$ that is of interest. The reason is that $U = M_{d}-S$ is \emph{not} a compact operator, unless $M_d$ is a multiple of the~identity. \end{itemize} So, the key difference is the condition under which eigenvalues of the limit operator are isolated in the spectrum: for the normalized case, this is true if $\lambda \neq 1$, while for the non-normalized case, this is true if $\lambda \notin \mbox{range}(d)$. In addition to the ``positive'' results above, a ``negative'' result of the form given in the following lemma can be established.
\begin{lemma}[Clustering fails if the condition $\lambda\notin\mbox{range}(d)$ is violated.] Assume that $\sigma(U) = \{0\} \cup\mbox{range}(d)$, with eigenvalue $0$ having multiplicity $1$, and that the probability distribution $P$ on $\mathcal{X}$ has no point masses. Then the sequence of second eigenvalues of $\frac{1}{n}L_n$ converges to $\min_{x\in\mathcal{X}}d(x)$. The corresponding eigenfunction will approximate the characteristic function of some $x\in\mathcal{X}$ with $d(x)=\min_{x\in\mathcal{X}}d(x)$, or a linear combination of such functions. \end{lemma} That is, in this case, the corresponding eigenfunction does \emph{not} contain any useful information for clustering (and one cannot even check, with a finite sample of data points, whether $\lambda\notin\mbox{range}(d)$ holds). While the analysis here has been somewhat abstract, the important point is that this is \emph{not} a pathological situation: a very simple example of this failure is given in the paper; and this phenomenon will arise whenever there is substantial degree heterogeneity, which is very common in practice. \section{(04/21/2015): Some Statistical Inference Issues (3 of 3): Stochastic blockmodels} Reading for today. \begin{compactitem} \item ``Spectral clustering and the high-dimensional stochastic blockmodel,'' in The Annals of Statistics, by Rohe, Chatterjee, and Yu \item ``Regularized Spectral Clustering under the Degree-Corrected Stochastic Blockmodel,'' in NIPS, by Qin and Rohe \end{compactitem} Today, we will finish up talking about statistical inference issues by discussing them in the context of stochastic blockmodels. These are different models of data generation than we discussed in the last few classes, and they illustrate somewhat different issues.
\subsection{Introduction to stochastic block modeling} As opposed to working with expansion or conductance---or some other ``edge counting'' objective like cut value, modularity, etc.---the \emph{stochastic block model (SBM)} is an example of a so-called probabilistic or generative model. Generative models are a popular way to encode assumptions about the way that latent/unknown parameters interact to create edges $(ij)$. They then assign a probability value to each edge $(ij)$ in a network. There are several advantages to this approach. \begin{itemize} \item It makes the assumptions about the world/data explicit. This is as opposed to encoding them into an objective and/or approximation algorithm---we saw several examples of reverse engineering the implicit properties of approximation algorithms. \item The parameters can sometimes be interpreted with respect to hypotheses about the network structure. \item It allows us to use likelihood scores to compare different parameterizations or different models. \item It allows us to estimate missing structures based on partial observations of graph structure. \end{itemize} There are also several disadvantages to this approach. The most obvious are the following. \begin{itemize} \item One must fit the model to the data, and fitting the model can be complicated and/or computationally expensive. \item As a result of this, various approximation algorithms are used to fit the parameters. This in turn leads to the question of what is the effect of those approximations versus what is the effect of the original hypothesized model? (I.e., we are back in the other case of reverse engineering the implicit statistical properties underlying approximation algorithms, except here it is in the approximation algorithm to estimate the parameters of a generative model.) This problem is particularly acute for sparse and noisy data, as is common.
\end{itemize} Like other generative models, SBMs define a probability distribution over graphs, $\mathbb{P}\left[ G | \Theta \right]$, where $\Theta$ is a set of parameters that govern probabilities under the model. Given a specific $\Theta$, we can then draw or \emph{generate} a graph $G$ from the distribution by flipping appropriately-biased coins. Note that \emph{inference} is the reverse task: given a graph $G$, either just given to us or generated synthetically by a model, we want to recover the model, i.e., we want to find the specific values of $\Theta$ that generated~it. The simplest version of an SBM is specified by the following. \begin{itemize} \item A positive integer $k$, a scalar value denoting the number of blocks. \item A vector $\vec{z}\in\{1,\ldots,k\}^{n}$, where $z_i$ gives the group index of vertex $i$. \item A matrix $M\in\mathbb{R}^{k \times k}$, a stochastic block matrix, where $M_{ij}$ gives the probability that a vertex of type $i$ links to a vertex of type $j$. \end{itemize} Then, one generates edge $(ij)$ with probability $M_{z_iz_j}$. That is, edges are not identically distributed, but they are conditionally independent, i.e., conditioned on their types, all edges are independent, and for a given pair of types $(ij)$, edges are i.i.d. Observe that the SBM has a relatively large number of parameters, ${k \choose 2}$, even after we have chosen the labeling on the vertices. This has pluses and minuses. \begin{itemize} \item Plus: it allows one the flexibility to model lots of possible structures and reproduce lots of quantities of interest. \item Minus: it means that there is a lot of flexibility, thus making the possibility of overfitting more likely. \end{itemize} Here are some simple examples of SBMs. \begin{itemize} \item If $k=1$ and $M_{ij}=p$, for all $i,j$, then we recover the vanilla ER model. \item Assortative networks, if $M_{ii} > M_{ij}$, for $i \ne j$. \item Disassortative networks, if $M_{ii} < M_{ij}$, for $i \ne j$.
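As a concrete illustration of the generative process just described, here is a minimal NumPy sketch that draws a graph from an SBM by flipping a biased coin for each pair of vertices; the helper name `sample_sbm` and the specific parameter values are our own choices, not from the notes.

```python
import numpy as np

def sample_sbm(z, M, rng):
    """Draw an undirected, simple graph from an SBM.

    z : length-n array of block labels in {0, ..., k-1}
    M : k x k matrix of block-to-block edge probabilities
    """
    n = len(z)
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            # edge (i, j) appears independently with probability M[z_i, z_j]
            if rng.random() < M[z[i], z[j]]:
                A[i, j] = A[j, i] = 1
    return A

rng = np.random.default_rng(1)
# an assortative example: two blocks, within-block prob p=0.5 > between-block q=0.1
z = np.array([0] * 50 + [1] * 50)
M = np.array([[0.5, 0.1],
              [0.1, 0.5]])
A = sample_sbm(z, M, rng)
```

With $k=1$ and $M=(p)$, this reduces to drawing from the vanilla ER model.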
\end{itemize} \subsection{Warming up with the simplest SBM} To illustrate some of the points we will make in a simple context, consider the ER model. \begin{itemize} \item If, say, $p = \frac{1}{2}$ and the graph $G$ has more than a handful of nodes, then it will be very easy to estimate $p$, i.e., to estimate the parameter vector $\Theta$ of this simple SBM, basically since measure concentration will occur very quickly and the empirical estimate of $p$ we obtain by counting the number of edges will be very close to its expected value, i.e., to $p$. More generally, if $n$ is large and $p \gtrsim \frac{\log(n)}{n}$, then measure will still concentrate, i.e., the empirical and expected values of $p$ will be close, and we will be able to estimate $p$ well. (This is related to the well-known observation that if $p \gtrsim\frac{\log(n)}{n}$, then $G_{np}$ and $G_{nm}$ are very similar, for appropriately chosen values of $p$ and $m$.) \item If, on the other hand, say, $p=\frac{3}{n}$, then this is not true. In this regime, measure has \emph{not} concentrated for most statistics of interest: the graph is not even fully connected; the giant component has nodes of degree almost $O\left(\log(n)\right)$; and the giant component has small sets of nodes of size $\Theta\left( \log(n) \right)$ that have conductance $O\left( \frac{1}{\log(n)} \right)$. (Contrast all of these with a $3$-regular random graph, which is fully connected, is degree-homogeneous, and is a very good~expander.) \end{itemize} In these cases, when measure concentration fails to occur, e.g., due to exogenously-specified degree heterogeneity or due to extreme sparsity, one will have difficulty with recovering parameters of hypothesized models. More generally, similar problems arise, and the challenge will be to show that one can reconstruct the model under as broad a range of parameters as possible.
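The contrast between the two regimes can be simulated directly. The following sketch (with our own choice of $n$ and seed) estimates $p$ from the edge count in the dense regime, and exhibits a failure of concentration at $p=3/n$ by counting isolated vertices, i.e., the graph is not even connected there.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

def sample_er(n, p, rng):
    # generate a symmetric 0/1 adjacency matrix of G(n, p)
    U = np.triu(rng.random((n, n)) < p, k=1)
    return (U | U.T).astype(int)

# dense regime p = 1/2: the edge-count estimate of p concentrates sharply
A = sample_er(n, 0.5, rng)
p_hat = A.sum() / (n * (n - 1))        # each edge counted twice, over n(n-1)/2 pairs

# very sparse regime p = 3/n: many statistics of interest fail to concentrate;
# in particular the graph is disconnected (isolated vertices appear)
A_sparse = sample_er(n, 3 / n, rng)
isolated = int((A_sparse.sum(axis=1) == 0).sum())
```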
\subsection{A result for a spectral algorithm for the simplest nontrivial SBM} Let's go into detail on the following simple SBM (which is the simplest aside from ER). \begin{itemize} \item Choose a partition of the vertices, call the parts $V^1$ and $V^2$, and WLOG let $V^1 = \{ 1,\ldots,\frac{n}{2} \}$ and $V^2 = \{ \frac{n}{2}+1,\ldots,n \}$. \item Then, choose probabilities $p > q$ and place edges between vertices $i$ and $j$ with probability \[ \mathbb{P}\left[ (ij) \in E \right] = \left\{ \begin{array}{l l} q & \quad \text{if $i \in V^1$ and $j \in V^2$ or $i \in V^2$ and $j \in V^1$ }\\ p & \quad \text{otherwise} \end{array} \right. , \] \end{itemize} In addition to being the ``second simplest'' SBM, this is also a simple example of a \emph{planted partition model}, which is commonly studied in TCS and related areas. Here is a fact: \[ \mathbb{E}\left[ \text{number of edges crossing bw } V^1 \text{ and } V^2 \right] = q |V^1| |V^2| . \] In addition, if $p$ is sufficiently larger than $q$, then, in expectation, every other partition has more crossing edges. This is the basis for recovering the model. Of course, if $p$ is only slightly but not sufficiently larger than $q$, then there might be fluctuational effects such that it is difficult to find this from the empirical graph. This is analogous to having difficulty with recovering $p$ from very sparse ER, as we discussed. Within the SBM framework, the most important inferential task is recovering cluster membership of nodes from a single observation of a graph (i.e., the two clusters in this simple planted partition form of the SBM). There are a variety of procedures to do this, and here we will describe spectral~methods. In particular, we will follow a simple analysis motivated by McSherry's analysis, as described by Spielman, that will provide a ``positive'' result for sufficiently dense matrices where $p$ and $q$ are sufficiently far apart.
Then, we will discuss this model more generally, with an emphasis on how to deal with very low-degree nodes that lead to measure concentration problems. In particular, we will focus on a form of regularized spectral clustering, as done by Qin and Rohe in their paper ``Regularized spectral clustering under the degree-corrected stochastic blockmodel.'' This has connections with what we have done with the Laplacian over the last few weeks. To start, let $M$ be the \emph{population adjacency matrix}, i.e., the hypothesized matrix, as described above. That is, \[ M = \left( \begin{array}{cc} p\vec{1}\vec{1}^{T} & q\vec{1}\vec{1}^{T} \\ q\vec{1}\vec{1}^{T} & p\vec{1}\vec{1}^{T} \end{array} \right) \] Then, let $A$ be the \emph{empirical adjacency matrix}, i.e., the actual matrix that is generated by flipping coins and on which we will perform computations. This is generated as follows: let $A_{ij}=1$ w.p. $M_{ij}$ and s.t. $A_{ij}=A_{ji}$. So, the basic goal is going to be to recover clusters in $M$ by looking at information in $A$. Let's look at the eigenvectors. First, since $M\vec{1} = \frac{n}{2}(p+q)\vec{1}$, we have \begin{eqnarray*} \mu_1 &=& \frac{n}{2}(p+q) \\ w_1 &=& \vec{1} , \end{eqnarray*} where $\mu_1$ and $w_1$ are the leading eigenvalue and eigenvector, respectively. Then, since the second eigenvector (of $M$) is constant on each cluster, we have that $Mw_2 = \mu_2 w_2$, where \begin{eqnarray*} \mu_2 &=& \frac{n}{2}(p-q) \\ w_2(i) &=& \begin{cases} \phantom{-}\frac{1}{\sqrt{n}} & \text{ if } i \in V^1 \\ -\frac{1}{\sqrt{n}} & \text{ if } i \in V^2 \end{cases} . \end{eqnarray*} In that case, here is a simple algorithm for finding the planted bisection. \begin{enumerate} \item Compute $v_2$, the eigenvector of the second largest eigenvalue of $A$. \item Set $S = \{ i : v_2(i) \geq 0 \}$. \item Guess that $S$ is one side of the bisection and that $\bar{S}$ is the other side.
\end{enumerate} We will show that, under not unreasonable assumptions on $p$, $q$, and $n$, running this algorithm gets the hypothesized cluster mostly right. Why is this? The basic idea is that $A$ is a perturbed version of $M$, and so by perturbation theory the eigenvectors of $A$ should look like the eigenvectors of $M$. Let's define $R = A-M$. We are going to view $R$ as a random matrix that depends on the noise/randomness in the coin flipping process. Since matrix perturbation theory bounds depend on (among other things) the norm of the perturbation, the goal is to bound the probability that $\|R\|_2$ is large. There are several methods from random matrix theory that give results of this general form, and one or the other is appropriate, depending on the exact statement that one wants to prove. For example, if you are familiar with Wigner's semi-circle law, it is of this general form. More recently, Furedi and Komlos proved another version; as did Krivelevich and Vu; and Vu. Here we state a result due to Vu. \begin{theorem} With probability tending to one, if $p \ge c \frac{\log^4(n)}{n}$, for a constant $c$, then \[ \|R\|_2 \le 3 \sqrt{pn} . \] \end{theorem} The key question in theorems like this is the value of $p$. Here, one has that $p \gtrsim \frac{\log^4(n)}{n}$, meaning that one can get pretty sparse (relative to $p=1$) but not extremely sparse (relative to $p=\frac{1}{n}$ or $p=\frac{3}{n}$). If one wants stronger results (e.g., not just mis-classifying only a constant fraction of the vertices, which we will do below, but instead predicting correctly for all but a small fraction of the vertices), then one needs $p$ to be larger and the graph to be denser. As with the ER example, the reason for this is that we need to establish concentration of appropriate estimators. Let's go on to perturbation theory for eigenvectors.
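Before turning to the perturbation-theoretic analysis, the three-step algorithm above can be sanity-checked numerically on a synthetic planted bisection; the parameter values below are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, q = 400, 0.5, 0.2
# planted bisection: V^1 = first n/2 vertices, V^2 = last n/2 vertices
z = np.array([0] * (n // 2) + [1] * (n // 2))
P = np.where(z[:, None] == z[None, :], p, q)     # within-block p, across-block q
U = np.triu(rng.random((n, n)) < P, k=1)
A = (U | U.T).astype(float)                      # symmetric empirical adjacency matrix

# step 1: eigenvector of the second largest eigenvalue of A
vals, vecs = np.linalg.eigh(A)                   # eigenvalues in ascending order
v2 = vecs[:, -2]
# step 2: threshold at zero
S = v2 >= 0
# step 3: guess that (S, S-bar) is the bisection; count errors up to label swap
err = min((S != (z == 0)).sum(), (S != (z == 1)).sum())
```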
Let $\alpha_1 \ge \alpha_2 \ge \cdots \ge \alpha_n$ be the eigenvalues of $A$, and let $\mu_1 > \mu_2 > \mu_3 = \cdots = \mu_n = 0$ be the eigenvalues of $M$. Here is a fact from matrix perturbation theory that we mentioned before: for all $i$, \[ |\alpha_i - \mu_i | \le \|A-M\|_2 = \|R\|_2 . \] The following two claims are easy to establish. \begin{claim} If $\|R\|_2 < \frac{n}{4}(p-q)$, then \[ \frac{n}{4}(p-q) < \alpha_2 < \frac{3n}{4}(p-q) \] \end{claim} \begin{claim} If, in addition, $q > \frac{p}{3}$, then $\frac{3n}{4}(p-q) < \alpha_1$. \end{claim} From these results, we have a separation, and so we can view $\alpha_2$ as a perturbation of $\mu_2$. The question is: can we view $v_2$ as a perturbation of $w_2$? The answer is Yes. Here is a statement of this result. \begin{theorem} Let $A$, $M$ be symmetric matrices, and let $R=A-M$. Let $\alpha_1 \ge \cdots \ge \alpha_n$ be the eigenvalues of $A$, with $v_1,\cdots,v_n$ the corresponding eigenvectors. Let $\mu_1 \ge \cdots \ge \mu_n$ be the eigenvalues of $M$, with $w_1,\cdots,w_n$ the corresponding eigenvectors. Let $\theta_i$ be the angle between $v_i$ and $w_i$. Then, \begin{eqnarray*} \sin \theta_i \le \frac{ 2 \| R \|_2 }{ \min_{j \ne i} |\alpha_i - \alpha_j | } \\ \sin \theta_i \le \frac{ 2 \| R \|_2 }{ \min_{j \ne i} |\mu_i - \mu_j | } \end{eqnarray*} \end{theorem} \begin{proof} WLOG, we can assume $\mu_i=0$, since the matrices $M-\mu_i I$ and $A-\mu_i I$ have the same eigenvectors as $M$ and $A$, and $M-\mu_i I$ has its $i$th eigenvalue equal to $0$. Since the theorem is vacuous if $\mu_i$ has multiplicity greater than one, we can assume unit multiplicity, and that $w_i$ is a unit vector in the null space of $M$. Due to the assumption that $\mu_i = 0$, we have that $|\alpha_i| \le \|R\|_2$. Then, expand $v_i$ in an eigenbasis of $M$: $v_i = \sum_j c_j w_j$, where $c_j = w_j^T v_i$. Let $\delta = \min_{j \ne i} |\mu_j|$.
Then observe that \[ \|Mv_i\|_2^2 = \sum_j c_j^2 \mu_j^2 \ge \sum_{j \ne i} c_j^2 \delta^2 = \delta^2 \sum_{j \ne i} c_j^2 = \delta^2 \left( 1-c_i^2 \right) = \delta^2 \sin^2 \theta_i \] and also that \[ \| Mv_i \| \le \| Av_i \| + \| Rv_i \| = |\alpha_i| + \| R v_i \| \le 2 \| R \|_2 . \] So, from this it follows that $\sin \theta_i \le \frac{ 2 \|R\|_2 }{ \delta } $ . \end{proof} This is essentially a version of the Davis-Kahan result we saw before. Note that it says that the amount by which eigenvectors are perturbed depends on how close the other eigenvalues are, which is what we would expect. Next, we use this for partitioning the simple SBM. We want to show that not too many vertices are mis-classified. \begin{theorem} Given the two-class SBM defined above, assume that $p \ge c \frac{\log^4(n)}{n}$ and that $q > p/3$. If one runs the spectral algorithm described above, then at most a constant fraction of the vertices are misclassified. \end{theorem} \begin{proof} Consider the vector $\vec{\delta} = v_2 - w_2$. For all $i \in V$ that are misclassified by $v_2$, we have that $ | \delta(i) | \ge \frac{1}{\sqrt{n}} $. So, if $v_2$ misclassifies $k$ vertices, then $\|\delta\| \ge \sqrt{k/n}$. Since $v_2$ and $w_2$ are unit vectors, we have the crude bound that $\|\delta\| \le \sqrt{2} \sin \theta_2$. Next, we can combine this with the perturbation theory result above. Since $q > p/3$, we have that $\min_{j \ne 2} | \mu_2 - \mu_j | = \frac{n}{2}(p-q)$; and since $p \ge c \frac{\log^4(n)}{n}$, we have that $\|R\| \le 3 \sqrt{pn}$. Then, \[ \sin \theta_2 \le \frac{ 3 \sqrt{pn} }{ \frac{n}{2}(p-q) } = \frac{ 6 \sqrt{p} }{ \sqrt{n}(p-q) }. \] So, the number $k$ of mis-classified vertices satisfies $\sqrt{\frac{k}{n}} \le \frac{ 6 \sqrt{p} }{ \sqrt{n}(p-q) } $, and thus $ k \le \frac{ 36 p }{ (p-q)^2 } $. \end{proof} So, in particular, if $p$ and $q$ are both constant, then we expect to misclassify at most a constant fraction of the vertices.
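The second inequality of the eigenvector perturbation theorem above can be checked numerically. In this sketch (our own construction, not from the notes), $M$ is a rank-two matrix with the same eigenstructure as the population matrix used above, and $R$ is a random symmetric perturbation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# M: symmetric, mu_1 = 70 (on the all-ones direction), mu_2 = 60 (on w2), rest 0
w2 = np.concatenate([np.ones(n // 2), -np.ones(n // 2)]) / np.sqrt(n)
M = 70.0 * np.ones((n, n)) / n + 60.0 * np.outer(w2, w2)

# R: a random symmetric perturbation with spectral norm roughly 2
G = rng.normal(size=(n, n))
R = (G + G.T) / np.sqrt(2 * n)
A = M + R

# v2: eigenvector of the second largest eigenvalue of A
_, vecs = np.linalg.eigh(A)              # ascending eigenvalue order
v2 = vecs[:, -2]
# sin of the angle between v2 and w2 (both unit vectors)
sin_theta = np.sqrt(max(0.0, 1.0 - (v2 @ w2) ** 2))

# the theorem's bound: 2 ||R||_2 / min_{j != 2} |mu_2 - mu_j|
norm_R = np.linalg.norm(R, 2)
delta = min(abs(60.0 - 70.0), abs(60.0 - 0.0))   # eigengap around mu_2 = 60
bound = 2 * norm_R / delta
```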
E.g., if $p=\frac{1}{2}$ and $q = p - \frac{12}{\sqrt{n}}$, then $\frac{ 36p }{(p-q)^2 } = \frac{n}{8}$, and so only a constant fraction of the vertices are misclassified. This analysis is a very simple result, and it has been extended in various ways. \begin{itemize} \item The Ng et al. algorithm we discussed before computes $k$ vectors and then does $k$-means, making similar gap assumptions. \item Extensions to more than two blocks, blocks that are not the same size, etc. \item Extensions to include degree variability, as well as homophily and other empirically-observed properties of networks. \end{itemize} The general form of the analysis we have described goes through to these cases, under the following types of assumptions. \begin{itemize} \item The matrix is dense enough. Depending on the types of recovery guarantees that are hoped for, this could mean that each node has $\Omega(n)$ edges, or perhaps $\Omega(\mbox{polylog}(n))$ edges. \item The degree heterogeneity is not too severe. Depending on the precise algorithm that is run, this can manifest itself by placing an upper bound on the degree of the highest degree node and/or placing a lower bound on the degree of the lowest degree node. \item The number of clusters is fixed, i.e., it does not grow as a function of $n$, and each of the clusters is not too small, say a constant fraction of the nodes. \end{itemize} Importantly, \emph{none} of these simplifying assumptions are true for most ``real world'' graphs. As such, there has been a lot of recent work focusing on dealing with these issues and making algorithms for SBMs work under broader assumptions. Next, we will consider one such~extension. \subsection{Regularized spectral clustering for SBMs} Here, we will consider a version of the degree-corrected SBM, and we will consider doing a form of \emph{regularized spectral clustering (RSC)} for it. Recall the definition of the basic SBM.
\begin{definition} Given nodes $V=[n]$, let $z:[n]\rightarrow[k]$ be a partition of the $n$ nodes into $k$ blocks, i.e., $z_i$ is the block membership of the $i^{th}$ node. Let $B \in [0,1]^{k \times k}$. Then, under the SBM, we have that the probability of an edge between $i$ and $j$ is \[ P_{ij} = B_{z_iz_j}, \text{ for all } i,j \in \{1,\ldots,n\} . \] \end{definition} In particular, this means that, given $z$, the edges are independent. Many real-world graphs have substantial degree heterogeneity, and thus it is common to incorporate this into generative models. Here is the extension of the SBM to the \emph{Degree-corrected stochastic block model (DC-SBM)}, which introduces additional parameters $\theta_i$, for $i \in[n]$, to control the node~degree. \begin{definition} Given the same setup as for the SBM, specify also additional parameters $\theta_i$, for $i\in[n]$. Then, under the DC-SBM, the probability of an edge between $i$ and $j$ is \[ P_{ij} = \theta_i \theta_j B_{z_iz_j} , \] where $\theta_i \theta_j B_{z_iz_j} \in [0,1] $, for all $i,j \in [n]$. \end{definition} Note: to make the DC-SBM identifiable (i.e., so that it is possible in principle to learn the true model parameters, say given an infinite number of observations, which is clearly a condition that is needed for inference), one can impose the constraint that $\sum_i \theta_i \delta_{z_i,r} = 1$, for each block $r$. (This condition says that $\sum_i \theta_i = 1$ within each block.) In this case $B_{st}$, for $s \ne t$, is the expected number of links between block $s$ and block $t$; and $B_{st}$, for $s = t$, is the expected number of links within block $s$. Let's say that $A \in \{0,1\}^{n \times n}$ is the adjacency matrix, and $L = D^{-1/2}A D^{-1/2}$. In addition, let $\mathcal{A} = \mathbb{E}\left[ A \right] $ be the population matrix, under the DC-SBM.
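To make the DC-SBM edge probabilities and the identifiability normalization concrete, here is a small NumPy sketch; the particular $B$, $\theta$, and block sizes are our own choices. It also checks the factorization $\mathcal{A} = \Theta Z B Z^T \Theta$ of the population matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 12, 2
z = np.array([0] * 6 + [1] * 6)          # block memberships

# degree parameters, normalized so that sum_i theta_i = 1 within each block
theta = rng.uniform(0.5, 1.5, size=n)
for r in range(k):
    theta[z == r] /= theta[z == r].sum()

# with this normalization, B_st is the expected number of links between blocks
B = np.array([[4.0, 1.0],
              [1.0, 4.0]])

# DC-SBM edge probabilities: P_ij = theta_i * theta_j * B_{z_i z_j}
P = theta[:, None] * theta[None, :] * B[np.ix_(z, z)]

# the same matrix via the factorization Theta Z B Z^T Theta
Theta = np.diag(theta)
Z = np.eye(k)[z]                          # n x k membership matrix, Z_it = 1 iff z_i = t
P2 = Theta @ Z @ B @ Z.T @ Theta
```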
Then, one can express $\mathcal{A}$ as $\mathcal{A} = \Theta Z B Z^T \Theta$, where $\Theta = \mbox{diag}(\theta_1,\ldots,\theta_n) \in \mathbb{R}^{n \times n}$, and where $Z \in \{0,1\}^{n \times k}$ is a membership matrix with $Z_{it} = 1$ iff node $i$ is in block $t$, i.e., iff $z_i = t$. We are going to be interested in very sparse matrices, for which the minimum node degree is very small, in which case a vanilla algorithm will fail to recover the SBM blocks. Thus, we will need to introduce a regularized version of the Laplacian. Here is the~definition. \begin{definition} Let $\tau > 0$. The \emph{regularized graph Laplacian} is $L_{\tau} = D_{\tau}^{-1/2} A D_{\tau}^{-1/2} \in \mathbb{R}^{n \times n}$, with $D_{\tau} = D + \tau I$, for $\tau > 0$. \end{definition} This is defined for the empirical data; but given this, we can define the corresponding population~quantities: \begin{eqnarray*} \mathcal{D}_{ii} &=& \sum_j \mathcal{A}_{ij} \\ \mathcal{D}_{\tau} &=& \mathcal{D} + \tau I \\ \mathcal{L} &=& \mathcal{D}^{-1/2} \mathcal{A} \mathcal{D}^{-1/2} \\ \mathcal{L}_{\tau} &=& \mathcal{D}_{\tau}^{-1/2} \mathcal{A} \mathcal{D}_{\tau}^{-1/2} \end{eqnarray*} Two things to note. \begin{itemize} \item Under the DC-SBM, if the model is identifiable, then one should be able to determine the partition from $\mathcal{A}$ (which we don't have direct access to, given the empirical data). \item One also wants to determine the partition from the empirical data $A$, under broader assumptions than before, in particular under smaller minimum degree. \end{itemize} Here is a description of the basic algorithm of Qin and Rohe. Basically, it is the Ng et al. algorithm that we described before, except that we apply it to the regularized graph Laplacian, i.e., it involves finding the leading eigenvectors of $L_{\tau}$ and then clustering in the low dimensional space. Given as input an adjacency matrix $A$, the number of clusters $k$, and the regularizer $\tau \ge 0$, the algorithm is the following.
\begin{enumerate} \item Compute $L_{\tau}$. \item Compute the matrix $X_{\tau} = \left[ X_1^{\tau} ,\ldots, X_k^{\tau}\right] \in \mathbb{R}^{n \times k}$, the orthogonal matrix consisting of the $k$ largest eigenvectors of $L_{\tau}$. \item Compute the matrix $X_{\tau}^{*} \in \mathbb{R}^{n \times k}$ by normalizing each row of $X_{\tau}$ to have unit length, i.e., project each row of $X_{\tau}$ onto the unit sphere in $\mathbb{R}^{k}$, i.e., $X_{ij}^{*,\tau} = X_{ij}^{\tau} / \left( \sum_j (X_{ij}^{\tau})^2 \right)^{1/2}$. \item Run $k$-means on the rows of $X^{*}_{\tau}$ to create $k$ non-overlapping clusters $V_1,\ldots,V_k$. \item Output $V_1,\ldots,V_k$; node $i$ is assigned to cluster $r$ if the $i^{th}$ row of $X^{*}_{\tau}$ is assigned to $V_r$. \end{enumerate} There are a number of empirical/theoretical tradeoffs in determining the best value for $\tau$, but one can think of $\tau$ as being the average node degree. There are several things one can show here. First, one can show that $L_{\tau}$ is close to $\mathcal{L}_{\tau}$. \begin{theorem} Let $G$ be the random graph with $\mathbb{P}\left[ \mbox{edge bw } ij \right] = P_{ij}$. Let $\delta = \min_i \mathcal{D}_{ii}$ be the minimum expected degree of $G$. If $\delta+\tau > O\left(\log(n)\right)$, then with constant probability \[ \| L_{\tau} - \mathcal{L}_{\tau} \| \le O(1) \sqrt{ \frac{ \log(n) }{ \delta + \tau } } . \] \end{theorem} \textbf{Remark.} Previous results required that the minimum degree $\delta \ge O(\log(n))$, so this result generalizes those to allow $\delta$ to be much smaller, assuming the regularization parameter $\tau$ is large enough. Importantly, typical real networks do \emph{not} satisfy the condition that $\delta \ge O(\log(n))$, and RSC is most interesting when this condition fails. So, we can apply this result to graphs with small node degrees.
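The five steps above can be sketched in a few lines of NumPy. This is a bare-bones illustration (dense eigensolver, a simple Lloyd-style $k$-means with farthest-first initialization), not the authors' implementation, and the test graph and parameters are our own choices; following the remark above, $\tau$ is set to the average node degree.

```python
import numpy as np

def regularized_spectral_clustering(A, k, tau, n_iter=50):
    """A bare-bones version of the RSC steps listed above (dense, O(n^3))."""
    n = A.shape[0]
    # step 1: L_tau = D_tau^{-1/2} A D_tau^{-1/2}, with D_tau = D + tau*I
    inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1) + tau)
    L_tau = inv_sqrt[:, None] * A * inv_sqrt[None, :]
    # step 2: eigenvectors of the k largest eigenvalues of L_tau
    _, vecs = np.linalg.eigh(L_tau)          # ascending eigenvalue order
    X = vecs[:, -k:]
    # step 3: project each row onto the unit sphere in R^k
    Xstar = X / np.maximum(np.linalg.norm(X, axis=1, keepdims=True), 1e-12)
    # step 4: k-means on the rows (farthest-first init, then Lloyd updates)
    chosen = [0]
    for _ in range(1, k):
        d = ((Xstar[:, None, :] - Xstar[chosen][None, :, :]) ** 2).sum(-1).min(1)
        chosen.append(int(d.argmax()))
    centers = Xstar[chosen].copy()
    for _ in range(n_iter):
        dists = ((Xstar[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(axis=1)
        for r in range(k):
            if (labels == r).any():
                centers[r] = Xstar[labels == r].mean(axis=0)
    # step 5: the k-means assignment is the cluster assignment
    return labels

# a small, fairly sparse two-block example
rng = np.random.default_rng(0)
n = 200
z = np.array([0] * (n // 2) + [1] * (n // 2))
P = np.where(z[:, None] == z[None, :], 0.2, 0.02)
U = np.triu(rng.random((n, n)) < P, k=1)
A = (U | U.T).astype(float)
labels = regularized_spectral_clustering(A, k=2, tau=A.sum() / n)
```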
\textbf{Remark.} The form of $L_{\tau}$ is similar to many of the results we have discussed, and one can imagine implementing RSC (and obtaining this theorem as well as those given below) by computing approximations such as what we have discussed. So far as I know, that has not been done. Second, one can bound the difference between the empirical and population eigenvectors. For this, one needs an additional concept. \begin{itemize} \item Given an $n \times k$ matrix $A$, the \emph{statistical leverage scores} of $A$ are the diagonal elements of the projection matrix onto the span of $A$. \end{itemize} In particular, if the $n \times k$ matrix $U$ is an orthogonal matrix for the column span of $A$, then the leverage scores of $A$ are the squared Euclidean norms of the \emph{rows} of $U$. For a ``tall'' matrix $A$, the $i^{th}$ leverage score has an interpretation in terms of the leverage or influence that the $i^{th}$ row of $A$ has on the least-squares fit problem defined by $A$. In the following, we will use an extension of the leverage scores, defined relative to the best rank-$k$ approximation to the matrix. \begin{theorem} \label{thm:eigenvectors} Let $X_{\tau}, \mathcal{X}_{\tau} \in \mathbb{R}^{n \times k}$ contain the top $k$ eigenvectors of $L_{\tau}$ and $\mathcal{L}_{\tau}$, respectively. Let \[ \xi = \min_i \{ \min \{ \| X_{\tau}^{i} \|_2 , \| \mathcal{X}_{\tau}^{i} \|_2 \} \} . \] Let $X_{\tau}^{*}$ and $\mathcal{X}_{\tau}^{*}$ be the row normalized versions of $X_{\tau}$ and $\mathcal{X}_{\tau}$. Assume that $\sqrt{ \frac{ k \log(n) }{ \delta + \tau } } \le O( \lambda_k)$ and $\delta + \tau > O(\log(n))$. Then, with constant probability, \begin{eqnarray*} \| X_{\tau} - \mathcal{X}_{\tau} O \|_F &\le& O \left( \frac{1}{\lambda_k} \sqrt{ \frac{ k \log(n) }{ \delta+\tau } } \right) \\ \| X_{\tau}^{*} - \mathcal{X}_{\tau}^{*} O \|_F &\le& O \left( \frac{1}{\xi\lambda_k} \sqrt{ \frac{ k \log(n) }{ \delta+\tau } } \right) , \end{eqnarray*} where $O$ is a rotation matrix.
\end{theorem} Note that the smallest leverage score $\xi$ enters the second expression but not the first expression. That is, it does not enter the bound for the eigenvector matrices themselves, but it does enter the bound for their row-normalized versions. We can use these results to derive a misclassification rate for RSC. The basic idea for the misclassification rate is to run $k$-means on the rows of $X_{\tau}^{*}$ and also on the rows of $\mathcal{X}_{\tau}^{*}$. Then, one can say that a node in the empirical data is clustered correctly if it is closer to the centroid of the corresponding cluster in the population data. This basic idea needs to be modified to take into account the fact that if any $\lambda_i$ are equal, then only the subspace spanned by the eigenvectors is identifiable, so we consider this up to a rotation $O$. \begin{definition} If $C_i O^T$ is closer to $\mathcal{C}_i $ than to any other $\mathcal{C}_j$, then we say that node $i$ is correctly clustered; and we define the misclassified nodes to be \[ \mathcal{M} = \left\{ i : \exists j \ne i \mbox{ s.t. } \| C_i O^T -\mathcal{C}_i \|_2 > \| C_i O^T - \mathcal{C}_j \|_2 \right\} . \] \end{definition} Third, one can bound the misclassification rate of the RSC classifier with the following theorem. \begin{theorem} \label{thm:misclassification} With constant probability, the misclassification rate is \[ \frac{|\mathcal{M}|}{n} \le c \frac{ k \log(n) }{ n \xi^2 (\delta+\tau) \lambda_k^2 }. \] \end{theorem} Here too the smallest leverage score determines the overall quality. \textbf{Remark.} This is the first result that explicitly relates leverage scores to the statistical performance of a spectral clustering algorithm. This is a large topic, but to get a slightly better sense of it, recall that the leverage scores of $\mathcal{L}_{\tau}$ are $\| \mathcal{X}_{\tau}^{i} \|_2^2 = \frac{ \theta_i^{\tau} }{ \sum_j \theta_j^{\tau} \delta_{z_jz_i} }$.
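The leverage scores themselves are easy to compute from an orthonormal basis for the (top-$k$) column span. Here is a small sketch, with our own function name, that also checks the equivalent projection-matrix definition for a tall full-rank matrix.

```python
import numpy as np

def leverage_scores(A, k=None):
    """Statistical leverage scores of A (optionally relative to rank k).

    They are the diagonal of the projection matrix onto the span of
    (the top-k left singular subspace of) A, i.e., the squared Euclidean
    norms of the rows of an orthonormal basis for that subspace.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    if k is not None:
        U = U[:, :k]                      # restrict to the top-k subspace
    return (U ** 2).sum(axis=1)

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 3))              # a tall matrix, full column rank a.s.
ell = leverage_scores(A)
```

The scores sum to the subspace dimension and each lies in $[0,1]$; nodes (rows) with small leverage are exactly the ones the remark above flags as problematic for the row-normalization step.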
So, in particular, if a node $i$ has a small expected degree, then $\theta_i^{\tau}$ is small and $\| \mathcal{X}_{\tau}^{i} \|_2 $ is small. Since $\xi$ appears in the denominator of the above theorems, this leads to a worse bound for the statistical claims in these theorems. In particular, the problem arises due to projecting $X_{\tau}^{i}$ onto the unit sphere, i.e., while large-leverage nodes don't cause a problem, errors for small-leverage rows can be amplified---this didn't arise when we were just making claims about the empirical data, e.g., the first claim of Theorem~\ref{thm:eigenvectors}, but when considering statistical performance, e.g., the second claim of Theorem~\ref{thm:eigenvectors} or the claim of Theorem~\ref{thm:misclassification}, noisy measurements are amplified for nodes with small leverage scores. \section{(04/23/2015): Laplacian solvers (1 of 2)} Reading for today. \begin{compactitem} \item ``Effective Resistances, Statistical Leverage, and Applications to Linear Equation Solving,'' in arXiv, by Drineas and Mahoney \item ``A fast solver for a class of linear systems,'' in CACM, by Koutis, Miller, and Peng \item ``Spectral Sparsification of Graphs: Theory and Algorithms,'' in CACM, by Batson, Spielman, Srivastava, and Teng \end{compactitem} (Note: the lecture notes for this class and the next are taken from the lecture notes for the final two classes of the class I taught on Randomized Linear Algebra in Fall 2013.) \subsection{Overview} We have seen problems that can be written in the form of a system of linear equations with Laplacian constraint matrices, i.e., \[ Lx = b . \] For example, we saw this with the various semi-supervised learning methods as well as with the MOV weakly-local spectral method.
In some cases, this arises in slightly modified form, e.g., as an augmented/modified graph and/or if there are additional projections (e.g., the Zhou et al. paper on ``Learning with labeled and unlabeled data on a directed graph,'' which is related to the other semi-supervised methods we discussed, does this explicitly). Today and next time we will discuss how to solve linear equations of this form. \subsection{Basic statement and outline} While perhaps not obvious, solving linear equations of this form is a useful \emph{algorithmic primitive}---like divide-and-conquer and other such primitives---much more generally, and thus there has been a lot of work on it in recent years. Here is a more precise statement of the use of this problem as a primitive. \begin{definition} The \emph{Laplacian Primitive} concerns systems of linear equations defined by Laplacian constraint matrices: \begin{itemize} \item \textsc{INPUT}: a Laplacian $L \in \mathbb{R}^{n \times n}$, a vector $b\in\mathbb{R}^{n}$ such that $\sum_{i=1}^{n} b_i = 0$, and a number $\epsilon>0$. \item \textsc{OUTPUT}: a vector $\tilde{x}_{opt} \in \mathbb{R}^{n}$ such that $\|\tilde{x}_{opt} - L^{\dagger} b \|_{L} \le \epsilon \| L^{\dagger} b\|_{L}$, where for a vector $z\in\mathbb{R}^{n}$ the $L$-norm is given by $\|z\|_L=\sqrt{z^TLz}$. \end{itemize} \end{definition} While we will focus on linear equations with Laplacian constraint matrices, most of the results in this area hold for a slightly broader class of problems. In particular, they hold for any linear system $Ax=b$, where $A$ is an SDD (symmetric diagonally dominant) matrix, i.e., a symmetric matrix in which the diagonal entry of each row is at least the sum of the absolute values of the off-diagonal entries in that row. The reason for this is that SDD systems are linear-time reducible to Laplacian linear systems via a construction that only doubles the number of nonzero entries in the matrix.
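As a concrete baseline for the discussion that follows, here is a sketch that solves a small Laplacian system both by the pseudoinverse (the simple but slow $O(n^3)$ route) and by conjugate gradient, and measures error in the $\|\cdot\|_L$ norm from the definition; the graph, weights, and tolerances are our own choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30
# a connected weighted graph: the complete graph with random positive weights
W = np.triu(rng.uniform(0.5, 1.5, size=(n, n)), k=1)
W = W + W.T
L = np.diag(W.sum(axis=1)) - W            # the graph Laplacian

# a right-hand side with entries summing to zero, so that b is in range(L)
b = rng.normal(size=n)
b -= b.mean()

def L_norm(z, L):
    # the L-norm from the definition: ||z||_L = sqrt(z^T L z)
    return np.sqrt(z @ L @ z)

# "exact" solution x_opt = L^+ b via the pseudoinverse
x_star = np.linalg.pinv(L) @ b

def cg(L, b, iters=200, tol=1e-10):
    # conjugate gradient on the (singular) system L x = b; fine since b is
    # orthogonal to the all-ones null space of L
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Lp = L @ p
        alpha = rs / (p @ Lp)
        x += alpha * p
        r -= alpha * Lp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

x_cg = cg(L, b)
```

A fast solver, in the sense of the theorem below, would replace the dense solve while still meeting the relative $\|\cdot\|_L$-norm guarantee.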
As mentioned, the main reason for the interest in this topic is that, given a fast, e.g., nearly linear time algorithm, for the Laplacian Primitive, defined above, one can obtain a fast algorithm for all sorts of other basic graph problems. Here are several examples of such problems. \begin{itemize} \item Approximate Fiedler vectors. \item Electrical flows. \item Effective resistance computations. \item Semi-supervised learning for labeled data. \item Cover time of random walks. \item Max flow and min cut and other combinatorial problems. \end{itemize} Some of these problems we have discussed. While it might not be surprising that problems like effective resistance computations and semi-supervised learning for labeled data can be solved with this primitive, it should be surprising that max flow and min cut and other combinatorial problems can be solved with this primitive. We won't have time to discuss this in detail, but some of the theoretically fastest algorithms for these problems are based on using this primitive. Here is a statement of the basic result that led to interest in this area. \begin{theorem}[ST] There is a randomized algorithm for the Laplacian Primitive that runs in expected time $O\left( m \log^{O(1)} (n) \log \left(1/\epsilon\right) \right)$, where $n$ is the number of nodes in $L$, $m$ is the number of nonzero entries in $L$, and $\epsilon$ is the precision parameter. \end{theorem} Although the basic algorithm of ST had something like the $50^{th}$ power in the exponent of the logarithm, it was a substantial theoretical breakthrough, and since then it has been improved by KMP to only a single log, leading to algorithms that are practical or almost practical. Also, although we won't discuss it in detail, many of the local and locally-biased spectral methods we have discussed arose out of this line of work in an effort to develop and/or improve this basic result. At a high level, the basic algorithm is as follows. 
\begin{enumerate} \item Compute a sketch of the input by sparsifying the input graph. \item Use the sketch to construct a solution, e.g., by solving the subproblem with any black box solver or by using the sketch as a preconditioner for an iterative algorithm on the original~problem. \end{enumerate} Thus, the basic idea of these methods is very simple; but to get the methods to work in the allotted time, and in particular to work in nearly-linear time, is very complicated. Today and next time, we will discuss these methods, including a simple but slow method in more detail and a fast but complicated method in less detail. \begin{itemize} \item \textbf{Today.} We will describe a simple, non-iterative, but slow algorithm. This algorithm provides a very simple version of the two steps of the basic algorithm described above; and, while slow, this algorithm highlights several basic ideas of the more sophisticated versions of these~methods. \item \textbf{Next time.} We will describe a fast algorithm that provides a much more sophisticated implementation of the two steps of this basic algorithm. Importantly, it makes nontrivial use of combinatorial ideas and couples the linear algebra with combinatorial preconditioning in interesting~ways. \end{itemize} \subsection{A simple slow algorithm that highlights the basic ideas} Here, we describe in more detail a very simple algorithm to solve Laplacian-based linear systems. It will be good to understand this algorithm before we get to the fast but more complicated versions of the algorithm. Recall that $L = D-W = B^TWB$ is our Laplacian, where $B$ is the $m \times n$ edge-incidence matrix, and where $W$ is an $m \times m$ edge weight matrix. In particular, note that $m > n$ (assume the graph is connected to avoid trivial cases), and so the matrix $B$ is a \emph{tall} matrix. Here is a restatement of the above problem.
\begin{definition} Given as input a Laplacian matrix $L \in \mathbb{R}^{n \times n}$, a vector $b \in \mathbb{R}^{n}$, compute \[ \mbox{argmin}_{x\in\mathbb{R}^{n}} \| Lx - b \|_2 . \] The minimum $\ell_2$-norm solution $x_{opt}$ is given by $x_{opt} = L^{\dagger}b$, where $L^{\dagger}$ is the Moore-Penrose generalized inverse of $L$. \end{definition} We have reformulated this as a regression problem since doing so makes the proof below, which is based on RLA (Randomized Linear Algebra) methods, cleaner. The reader familiar with linear algebra might be concerned about the Moore-Penrose generalized inverse since, e.g., it is typically not well-behaved with respect to perturbations in the data matrix. Here, the situation is particularly simple: although $L$ is rank-deficient, (1) it behaves as an invertible operator if we work with vectors $b\perp\vec{1}$, and (2) because this null space is particularly simple, the pathologies that typically arise with the Moore-Penrose generalized inverse do \emph{not} arise here. So, it isn't too far off to think of this as the inverse. Here is a simple algorithm to solve this problem. This algorithm takes as input $L$, $b$, and $\epsilon$; and it returns as output a vector $\tilde{x}_{opt}$. \begin{enumerate} \item Form $B$ and $W$, define $\Phi = W^{1/2}B \in \mathbb{R}^{m \times n}$, let $U_{\Phi}\in\mathbb{R}^{m \times n}$ be an orthogonal matrix spanning the column space of $\Phi$, and let $\left( U_{\Phi} \right)_{(i)}$ denote the $i^{th}$ row of $U_{\Phi}$. \item Let $p_i$, for $i \in [m]$ such that $\sum_{i=1}^{m}p_i=1$, be given by \begin{equation} \label{eqn:lev-scores} p_i \ge \beta\frac{ \| \left( U_{\Phi} \right)_{(i)}\|_2^2 }{ \| U_{\Phi} \|_F^2 } = \frac{\beta}{n} \| \left( U_{\Phi} \right)_{(i)} \|_2^2 \end{equation} for some value of $\beta\in(0,1]$. (Think of $\beta=1$, which is a legitimate choice, but the additional flexibility of allowing $\beta\in(0,1)$ will be important in the next class.)
\item In $r = O\left( \frac{n}{\epsilon} \log\left( \frac{n}{\epsilon} \right) \right)$ i.i.d. trials, sample rows of $\Phi$ according to the probabilities $\{p_i\}_{i=1}^{m}$, rescaling each sampled row by $\frac{1}{\sqrt{rp_i}}$; let $S\in\mathbb{R}^{r \times m}$ denote the corresponding sampling-and-rescaling matrix. \item Form the sparsified Laplacian $\tilde{L} = \Phi^T S^T S \Phi$, and return $\tilde{x}_{opt} = \tilde{L}^{\dagger} b$.
\end{enumerate} A key aspect of this algorithm is that the sketch is formed by choosing elements of the Laplacian with the probabilities in Eqn.~(\ref{eqn:lev-scores}); these quantities are known as the statistical leverage scores, and they are of central interest in RLA. Here is a more general definition of these scores. \begin{definition} Given a matrix $A\in\mathbb{R}^{m \times n}$, where $m > n$, the $i^{th}$ leverage score is \[ \left(P_{A}\right)_{ii} = \left(U_AU_A^T\right)_{ii} = \| \left( U_A \right)_{(i)} \|_2^2 , \] i.e., it is equal to the $i^{th}$ diagonal element of the projection matrix onto the column span of $A$. \end{definition} Here is a definition of a seemingly-unrelated notion that we talked about before. \begin{definition} Given $G=(V,E)$, a connected, weighted, undirected graph with $n$ nodes, $m$ edges, and corresponding weights $w_e \ge 0$, for all $e \in E$, let $L=B^TWB$. Then, the effective resistance $R_{e}$ across an edge $e \in E$ is given by the corresponding diagonal element of the matrix $R=BL^{\dagger}B^T$. \end{definition} Here is a lemma relating these two quantities. \begin{lemma} Let $\Phi = W^{1/2} B$ denote the scaled edge-incidence matrix. If $\ell_i$ is the leverage score of the $i^{th}$ row of $\Phi$, then $\frac{\ell_i}{w_i}$ is the effective resistance of the $i^{th}$ edge. \end{lemma} \begin{proof} Consider the matrix \[ P = W^{1/2}B \left( B^T W B \right)^{\dagger} B^T W^{1/2} \in \mathbb{R}^{m \times m} , \] and notice that $P = W^{1/2}R W^{1/2}$ is a rescaled version of $R = B L^{\dagger} B^T$, whose diagonal elements are the effective resistances. Since $\Phi = W^{1/2}B$, it follows that \[ P = \Phi \left( \Phi^T \Phi \right)^{\dagger} \Phi^T . \] Let $U_{\Phi}$ be an orthogonal matrix spanning the columns of $\Phi$. Then, $P=U_{\Phi}U_{\Phi}^T$, and so \[ P_{ii} = \left( U_{\Phi} U_{\Phi}^T \right)_{ii} = \| \left( U_{\Phi} \right)_{(i)} \|_2^2 , \] which establishes the lemma.
\end{proof} So, informally, we sparsify the graph by biasing our random sampling toward edges that are ``important'' or ``influential'' in the sense that they have large statistical leverage or effective resistance, and then we use the sparsified graph to solve the subproblem. Here is the main theorem for this algorithm. \begin{theorem} With constant probability, $\| x_{opt} - \tilde{x}_{opt} \|_L \le \epsilon \| x_{opt} \|_L$. \end{theorem} \begin{proof} The main idea of the proof is that we are forming a sketch of the Laplacian by randomly sampling elements, which corresponds to randomly sampling rows of the edge-incidence matrix, and that we need to ensure that the corresponding sketch of the edge-incidence matrix is a so-called subspace-preserving embedding. If that holds, then the singular values of the edge-incidence matrix and its sketch are close, and thus the eigenvalues of the Laplacian and its sketch are close, and thus the original Laplacian and the sparsified Laplacian are ``close,'' in the sense that the quadratic form of one is close to the quadratic form of the other. Here are the details. By definition, \[ \| x_{opt} - \tilde{x}_{opt} \|_L^2 = \left(x_{opt}-\tilde{x}_{opt} \right)^{T} L \left(x_{opt}-\tilde{x}_{opt} \right) . \] Recall that $L = B^TWB$, that $x_{opt} = L^{\dagger}b$, and that $\tilde{x}_{opt} = \tilde{L}^{\dagger}b$. So, \begin{eqnarray*} \| x_{opt} - \tilde{x}_{opt} \|_L^2 &=& \left(x_{opt}-\tilde{x}_{opt} \right)^{T} B^TWB \left(x_{opt}-\tilde{x}_{opt} \right) \\ &=& \| W^{1/2} B \left( x_{opt}-\tilde{x}_{opt} \right) \|_2^2 . \end{eqnarray*} Let $\Phi \in \mathbb{R}^{m \times n}$ be defined as $\Phi = W^{1/2}B$, and let its SVD be $\Phi = U_{\Phi} \Sigma_{\Phi} V_{\Phi}^T$. Then \[ L = \Phi^T\Phi = V_{\Phi} \Sigma_{\Phi}^{2} V_{\Phi}^T \] and \[ x_{opt} = L^{\dagger} b = V_{\Phi} \Sigma_{\Phi}^{-2} V_{\Phi}^T b .
\] In addition \[ \tilde{L} = \Phi^T S^T S \Phi = \left( S\Phi \right)^{T} \left( S\Phi \right) \] and also \[ \tilde{x}_{opt} = \tilde{L}^{\dagger}b = \left( S\Phi \right)^{\dagger} \left( S\Phi \right)^{T\dagger} b = \left( SU_{\Phi}\Sigma_{\Phi}V_{\Phi}^{T}\right)^{\dagger} \left( SU_{\Phi}\Sigma_{\Phi}V_{\Phi}^{T}\right)^{T\dagger} b \] By combining these expressions, we get that \begin{eqnarray*} \| x_{opt} - \tilde{x}_{opt} \|_L^2 &=& \| \Phi \left( x_{opt} - \tilde{x}_{opt} \right) \|_2^2 \\ &=& \| U_{\Phi}\Sigma_{\Phi}V_{\Phi}^{T} \left( V_{\Phi} \Sigma_{\Phi}^{-2} V_{\Phi}^T - \left( SU_{\Phi}\Sigma_{\Phi}V_{\Phi}^{T}\right)^{\dagger} \left( SU_{\Phi}\Sigma_{\Phi}V_{\Phi}^{T}\right)^{T\dagger} \right) b \|_2^2 \\ &=& \| \Sigma_{\Phi}^{-1} V_{\Phi}^T b - \Sigma_{\Phi} \left( SU_{\Phi}\Sigma_{\Phi}V_{\Phi}^{T}\right)^{\dagger} \left( SU_{\Phi}\Sigma_{\Phi}V_{\Phi}^{T}\right)^{T\dagger} V_{\Phi} b \|_2^2 \end{eqnarray*} Next, we note the following: \[ \mathbb{E}\left[ \| U_{\Phi}^T S^T S U_{\Phi} - I \|_2 \right] \le \sqrt{\epsilon} , \] where of course the expectation can be removed by standard methods. This follows from a result of Rudelson-Vershynin, and it can also be obtained as a matrix concentration bound. This is a key result in RLA, and it holds since we are sampling $O\left(\frac{n}{\epsilon} \log \left( \frac{n}{\epsilon} \right) \right)$ rows from $U$ according to the leverage score sampling probabilities. From standard matrix perturbation theory, it thus follows that \[ \left| \sigma_i \left( U_{\Phi}^TS^TSU_{\Phi} \right) - 1 \right| = \left| \sigma_i^2\left(SU_{\Phi}\right)-1 \right| \le \sqrt{\epsilon} . \] So, in particular, the matrix $SU_{\Phi}$ has the same rank as the matrix $U_{\Phi}$. (This is a so-called subspace embedding, which is a key result in RLA; next time we will interpret it in terms of graphic inequalities that we discussed before.) In the rest of the proof, let's condition on this random event being true. 
Since $SU_{\Phi}$ is full rank, it follows that \[ \left( SU_{\Phi}\Sigma_{\Phi}\right)^{\dagger} = \Sigma_{\Phi}^{-1} \left( SU_{\Phi} \right)^{\dagger} . \] So, we have that \begin{eqnarray*} \| x_{opt} - \tilde{x}_{opt} \|_L^2 &=& \| \Sigma_{\Phi}^{-1} V_{\Phi}^T b - \left( SU_{\Phi}\right)^{\dagger}\left(SU_{\Phi}\right)^{T\dagger} \Sigma_{\Phi}^{-1}V_{\Phi}^T b \|_2^2 \\ &=& \| \Sigma_{\Phi}^{-1} V_{\Phi}^T b - V_{\Omega}\Sigma_{\Omega}^{-2}V_{\Omega}^{T} \Sigma_{\Phi}^{-1}V_{\Phi}^T b \|_2^2 , \end{eqnarray*} where the second line follows if we define $\Omega = S U_{\Phi}$ and let its SVD be \[ \Omega = SU_{\Phi} = U_{\Omega} \Sigma_{\Omega} V_{\Omega}^T . \] Then, let $\Sigma_{\Omega}^{-2} = I+E$, for a diagonal error matrix $E$, and use that $V_{\Omega}^TV_{\Omega} = V_{\Omega}V_{\Omega}^T = I$ to write \begin{eqnarray*} \|x_{opt}-\tilde{x}_{opt} \|_L^2 &=& \| \Sigma_{\Phi}^{-1}V_{\Phi}^Tb - V_{\Omega}\left(I+E\right) V_{\Omega}^{T} \Sigma_{\Phi}^{-1} V_{\Phi}^{T} b \|_2^2 \\ &=& \| V_{\Omega}E V_{\Omega}^T \Sigma_{\Phi}^{-1} V_{\Phi}^{T} b \|_2^2 \\ &=& \| E V_{\Omega}^T \Sigma_{\Phi}^{-1} V_{\Phi}^{T} b \|_2^2 \\ &\le& \| E V_{\Omega}^T\|_2^2 \| \Sigma_{\Phi}^{-1} V_{\Phi}^{T} b \|_2^2 \\ &=& \| E \|_2^2 \| \Sigma_{\Phi}^{-1} V_{\Phi}^{T} b \|_2^2 . \end{eqnarray*} Since we want to bound $\|E\|_2$, note that \[ \left|E_{ii}\right| = \left| \sigma_i^{-2}\left(\Omega\right) - 1 \right| = \left| \sigma_i^{-2}\left(SU_{\Phi}\right) - 1 \right| . \] So, \[ \|E\|_2 = \max_i \left| \sigma_i^{-2}\left(SU_{\Phi}\right) - 1 \right| \le \sqrt{\epsilon} , \] up to a small inflation of the constant, since each $\sigma_i^{2}\left(SU_{\Phi}\right) \in \left[1-\sqrt{\epsilon},1+\sqrt{\epsilon}\right]$. So, \[ \| x_{opt}-\tilde{x}_{opt} \|_L^2 \le \epsilon \| \Sigma_{\Phi}^{-1}V_{\Phi}^T b \|_2^2 .
\] In addition, we can derive that \begin{eqnarray*} \| x_{opt} \|_L^2 &=& x_{opt}^T L x_{opt} \\ &=& \left( W^{1/2}B x_{opt} \right)^T \left( W^{1/2}B x_{opt} \right) \\ &=& \| \Phi x_{opt} \|_2^2 \\ &=& \| U_{\Phi} \Sigma_{\Phi} V_{\Phi}^T V_{\Phi} \Sigma_{\Phi}^{-2} V_{\Phi}^T b \|_2^2 \\ &=& \| \Sigma_{\Phi}^{-1} V_{\Phi}^{T} b \|_2^2 . \end{eqnarray*} So, it follows that \[ \|x_{opt}-\tilde{x}_{opt} \|_L^2 \le \epsilon \| x_{opt} \|_L^2 , \] which establishes the main result. \end{proof} Before concluding, here is where we stand. This is a very simple algorithm that highlights the basic ideas of Laplacian-based solvers, but it is not fast. To make it fast, two things need to be done. \begin{itemize} \item We need to compute or approximate the leverage scores quickly. This step is very nontrivial. The original algorithm of ST (that had the $\log^{50}(n)$ term) involved using local random walks (such as what we discussed before; in fact, the ACL algorithm was developed to improve this step, relative to the original ST result) to construct well-balanced partitions in nearly-linear time. Then, it was shown that one could use effective resistances; this was discovered by SS independently of the RLA-based method outlined above, but it was also noted that one could call the nearly linear time solver to approximate them. Then, it was shown that one could relate them to spanning trees to construct combinatorial preconditioners. If this step is done very carefully, then one obtains an algorithm that runs in nearly linear time. In particular, though, one needs to go beyond the linear algebra and exploit the combinatorial properties of graphs, and in particular find low-stretch spanning trees. \item Instead of solving the subproblem on the sketch, we need to use the sketch to create a preconditioner for the original problem and then solve a preconditioned version of the original problem.
This step is relatively straightforward, although it involves applying an iterative algorithm that is less common than popular CG-based methods. \end{itemize} We will go through both of these in more detail next time. \section{(04/28/2015): Laplacian solvers (2 of 2)} Reading for today. \begin{compactitem} \item Same as last class. \end{compactitem} Last time, we talked about a very simple solver for Laplacian-based systems of linear equations, i.e., systems of linear equations of the form $Ax=b$, where the constraint matrix $A$ is the Laplacian of a graph. This is not fully general---Laplacians are SPSD matrices of a particular form---but equations of this form arise in many applications, certain other SPSD problems such as those based on SDD matrices can be reduced to this, and there has been a lot of work recently on this topic since it is a primitive for many other problems. The solver from last time is very simple, and it highlights the key ideas used in fast solvers, but it is very slow. Today, we will describe how to take those basic ideas and, by coupling them with certain graph theoretic tools in various ways, obtain a ``fast'' nearly linear time solver for Laplacian-based systems of linear equations. In particular, today will be based on the Batson-Spielman-Srivastava-Teng and the Koutis-Miller-Peng articles. \subsection{Review from last time and general comments} Let's start with a review of what we covered last time. Here is a very simple algorithm. Given as input the Laplacian $L$ of a graph $G=(V,E)$ and a right hand side vector $b$, do the following. \begin{itemize} \item Construct a sketch of $G$ by sampling elements of $G$, i.e., rows of the edge-node incidence matrix, with probability proportional to the leverage scores of that row, i.e., the effective resistances of that edge.
\item Use the sketch to construct a solution, e.g., by solving the subproblem with a black box or using it as a preconditioner to solve the original problem with an iterative method. \end{itemize} The basic result we proved last time is the following. \begin{theorem} Given a graph $G$ with Laplacian $L$, let $x_{opt}$ be the optimal solution of $Lx=b$; then the above algorithm returns a vector $\tilde{x}_{opt}$ such that, with constant probability, \begin{equation} \label{eqn:solution-approximation} \| x_{opt} - \tilde{x}_{opt} \|_{L} \le \epsilon \| x_{opt} \|_{L} . \end{equation} \end{theorem} The proof of this result boiled down to showing that, by sampling with respect to a judiciously-chosen set of nonuniform importance sampling probabilities, then one obtains a data-dependent subspace embedding of the edge-incidence matrix. Technically, the main thing to establish was that, if $U$ is an $m \times n$ orthogonal matrix spanning the column space of the weighted edge-incidence matrix, in which case $I = I_n = U^TU$, then \begin{equation} \label{eqn:subspace-embedding} \| I - \left(SU\right)^{T}\left(SU\right) \|_2 \le \epsilon , \end{equation} where $S$ is a sampling matrix that represents the effect of sampling elements from $L$. The sampling probabilities that are used to create the sketch are weighted versions of the statistical leverage scores of the edge-incidence matrix, and thus they also are equal to the effective resistance of the corresponding edge in the graph. Importantly, although we didn't describe it in detail, the theory that provides bounds of the form of Eqn.~(\ref{eqn:subspace-embedding}) is robust to the exact form of the importance sampling probabilities, e.g., bounds of the same form hold if any other probabilities are used that are ``close'' (in a sense that we will discuss) to the statistical leverage scores. 
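To make this subspace-embedding condition concrete, here is a numpy sketch (the graph, weights, and oversampling factor are illustrative choices, with $\beta = 1$) that samples rows of the orthogonal basis $U$ with leverage-score probabilities and checks that $(SU)^{T}(SU)$ is close to the identity:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative connected weighted graph: random edges plus a spanning path.
n = 50
upper = np.triu(rng.random((n, n)) < 0.2, 1)
edges = sorted(set(zip(*np.nonzero(upper))) | {(i, i + 1) for i in range(n - 1)})
m = len(edges)
w = rng.uniform(0.5, 2.0, size=m)

# Edge-incidence matrix B (m x n) and Phi = W^{1/2} B
B = np.zeros((m, n))
for k, (u, v) in enumerate(edges):
    B[k, u], B[k, v] = 1.0, -1.0
Phi = np.sqrt(w)[:, None] * B

# Orthogonal basis for the column span of Phi; rank is n-1 for a connected graph
U_full, s, _ = np.linalg.svd(Phi, full_matrices=False)
rank = int(np.sum(s > 1e-10))
U = U_full[:, :rank]

# Leverage scores and sampling probabilities p_i = ||U_(i)||_2^2 / rank
lev = np.sum(U**2, axis=1)
p = lev / lev.sum()

# Sample r rows i.i.d. with probabilities p_i, rescale by 1/sqrt(r p_i)
r = 8 * rank * int(np.log(rank) + 1)
idx = rng.choice(m, size=r, p=p)
SU = U[idx] / np.sqrt(r * p[idx])[:, None]

# Deviation ||(SU)^T (SU) - I||_2; small deviation means the sketch
# preserves the subspace (all singular values of SU are near 1).
dev = np.linalg.norm(SU.T @ SU - np.eye(rank), 2)
```

With this oversampling, the deviation is well below $1$ with high probability, which in particular implies that $SU$ has the same rank as $U$.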
The running time of this simple strawman algorithm consists of two parts, both of which the fast algorithms we will discuss today improve upon. \begin{itemize} \item Compute the leverage scores, exactly or approximately. A naive computation of the leverage scores takes $O(mn^2)$ time, e.g., with a black box QR decomposition routine. Since they are related to the effective resistances, one can---theoretically at least---compute them with any one of a variety of fast nearly linear time solvers (although one has a chicken-and-egg problem, since the solver itself needs those quantities). Alternatively, since one does not need the exact leverage scores, one could hope to approximate them in some way---below, we will discuss how this can be done with low-stretch spanning trees. \item Solve the subproblem, exactly or approximately. A naive computation of the solution to the subproblem can be done in $O(n^3)$ time with standard direct methods, or it can be done with an iterative algorithm that requires a number of matrix-vector multiplications that depends on the condition number of $L$ (which in general could be large, e.g., $\Omega(n)$) times $m$, the number of nonzero elements of $L$. Below, we will see how this can be improved with sophisticated versions of certain preconditioned iterative algorithms. \end{itemize} More generally, here are several issues that arise. \begin{itemize} \item Does one use exact or approximate leverage scores? Approximate leverage scores are sufficient for the worst-case theory, and we will see that this can be accomplished by using LSSTs, i.e., combinatorial techniques. \item How good a sketch is necessary? Last time, we sampled $\Theta\left( \frac{ n \log(n) }{\epsilon^2} \right)$ elements from $L$ to obtain a $1\pm\epsilon$ subspace embedding, i.e., to satisfy Eqn.~(\ref{eqn:subspace-embedding}), and this leads to an $\epsilon$-approximate solution of the form of Eqn~(\ref{eqn:solution-approximation}).
For an iterative method, this might be overkill, and it might suffice to satisfy Eqn.~(\ref{eqn:subspace-embedding}) for, say, $\epsilon = \frac{1}{2}$. \item What is the dependence on $\epsilon$? Last time, we sampled and then solved the subproblem, and thus the complexity with respect to $\epsilon$ is given by the usual random sampling results. In particular, since the complexity is a low-degree polynomial in $\frac{1}{\epsilon}$, it will be essentially impossible to obtain a high-precision solution, e.g., with $\epsilon = 10^{-16}$, as is of interest in certain~applications. \item What is the dependence on the condition number $\kappa(L)$? In general, the condition number can be very large, and this will manifest itself in a large number of iterations (certainly in worst case, but also actually quite commonly). By working with a preconditioned iterative algorithm, one should aim for a condition number of the preconditioned problem that is quite small, e.g., if not constant then $\log(n)$ or less. In general, there will be a tradeoff between the quality of the preconditioner and the number of iterations needed to solve the preconditioned~problem. \item Should one solve the subproblem directly or use it to construct a preconditioner for the original problem? Several of the results just outlined suggest that an appropriate iterative method should be used, and this is what leads to the best results. \end{itemize} \textbf{Remark.} Although we are not going to describe it in detail, we should note that the LSSTs will essentially allow us to approximate the large leverage scores, but they won't have anything to say about the small leverage scores. We saw (in a different context) when we were discussing statistical inference issues that controlling the small leverage scores can be important (for proving statistical claims about unseen data, but not for claims on the empirical data).
Likely similar issues arise here, and likely this issue can be mitigated by using implicitly regularized Laplacians, e.g., as implicitly computed by certain spectral ranking methods we discussed, but as far as I know no one has explicitly addressed these questions. \subsection{Solving linear equations with direct and iterative methods} Let's start with the second step of the above two-level algorithm, i.e., how to use the sketch from the first step to construct an approximate solution, and in particular how to use it to construct a preconditioner for an iterative algorithm. Then, later we will get back to the first step of how to construct the sketch. As you probably know, there are a wide range of methods to solve linear systems of the form $Ax=b$, but they fall into two broad categories. \begin{itemize} \item \textbf{Direct methods.} These include Gaussian elimination, which runs in $O(n^3)$ time; and Strassen-like algorithms, which run in $O(n^{\omega})$ time, where $\omega$ ranges from Strassen's $2.807$ down to roughly $2.37$ for the best known methods. Both require storing the full set of, in general, $O(n^2)$ entries. Faster algorithms exist if $A$ is structured. For example, if $A$ is $n \times n$ PSD with $m$ nonzero entries, then conjugate gradients, used as a direct solver, takes $O(mn)$ time, which if $m = O(n)$ is just $O(n^2)$. That is, in this case, the time it takes is proportional to the time it takes just to write down the inverse. Alternatively, if $A$ is the adjacency matrix of a path graph or any tree, then the running time is $O(m)$; and so on. \item \textbf{Iterative methods.} These methods don't compute an exact answer, but they do compute an $\epsilon$-approximate solution, where $\epsilon$ depends on the structural properties of $A$ and the number of iterations, and where $\epsilon$ can be made smaller with additional iterations. In general, iterations are performed by doing matrix-vector multiplications.
Advantages of iterative methods include that one only needs to store $A$, these algorithms are sometimes very simple, and they are often faster than running a direct solver. Disadvantages include that one doesn't obtain an exact answer, it can be hard to predict the number of iterations, and the running time depends on the eigenvalues of $A$, e.g., the condition number $\kappa(A) = \frac{\lambda_{max}(A)}{\lambda_{min}(A)}$. Examples include the Richardson iteration, various Conjugate Gradient like algorithms, and the Chebyshev~iteration. \end{itemize} Since the running time of iterative algorithms depends on the properties of $A$, so-called \emph{preconditioning methods} are a class of methods to transform a given input problem into another problem such that the modified problem has the same or a related solution to the original problem; and such that the modified problem can be solved with an iterative method more quickly. For example, to solve $Ax=b$, with $A\in\mathbb{R}^{n \times n}$ and with $m=\textbf{nnz}(A)$, if we define $\kappa(A) = \frac{\lambda_{max}(A)}{\lambda_{min}(A)}$, where $\lambda_{max}$ and $\lambda_{min}$ are the maximum and minimum non-zero eigenvalues of $A$, to be the condition number of $A$, then CG runs in \[ O\left( \sqrt{\kappa{(A)}}\log\left(1/\epsilon\right) \right) \] iterations (each of which involves a matrix-vector multiplication taking $O(m)$ time) to compute an $\epsilon$-accurate solution to $Ax=b$. By an $\epsilon$-accurate approximation, here we mean the same notion that we used above, i.e., that $$\|\tilde{x}_{opt}-A^{\dagger}b\|_{A}\le\epsilon\|A^{\dagger}b\|_{A},$$ where the so-called $A$-norm is given by $\|y\|_{A}=\sqrt{y^TAy}$. This $A$-norm is related to the usual Euclidean norm as follows: $\|y\|_{A}\le\kappa(A)\|y\|_2$ and $\|y\|_2\le\kappa(A)\|y\|_{A}$.
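Since the notes don't give code for this, here is a hedged numpy sketch of preconditioned CG (the test matrix is an artificial SDD example, and the Jacobi preconditioner is just one simple choice, not the combinatorial preconditioners discussed below) illustrating how a preconditioner $B$ with small $\kappa(B^{-1}A)$ reduces the iteration count:

```python
import numpy as np

def pcg(A, b, M_solve, tol=1e-10, max_iter=1000):
    """Preconditioned conjugate gradient for SPD A; M_solve(r) applies B^{-1}."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_solve(r)
    p = z.copy()
    rz = r @ z
    for k in range(1, max_iter + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return x, k
        z = M_solve(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

# Artificial SDD test problem: a path-graph Laplacian plus a widely varying
# diagonal, so the unpreconditioned condition number is large.
n = 200
L = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, 0] = L[-1, -1] = 1.0
A = L + np.diag(np.logspace(0, 3, n))
b = np.ones(n)

x_plain, iters_plain = pcg(A, b, lambda r: r)           # no preconditioning
x_jac, iters_jac = pcg(A, b, lambda r: r / np.diag(A))  # Jacobi: B = diag(A)
```

Here Jacobi works well only because the difficulty is an imbalanced diagonal; for a bare graph Laplacian it helps little, which is why the graph-theoretic preconditioners discussed next are needed.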
While the $A$-norm is perhaps unfamiliar, in the context of iterative algorithms it is not too dissimilar to the usual Euclidean norm, in that, given an $\epsilon$-approximation for the former, we can obtain an $\epsilon$-approximation for the latter with $O\left( \log\left( \kappa(A)/\epsilon \right) \right)$ extra iterations. In this context, preconditioning typically means solving \[ B^{-1}Ax = B^{-1}b , \] where $B$ is chosen such that $\kappa\left(B^{-1}A\right)$ is small and such that it is easy to solve problems of the form $Bz=c$. The two extreme cases are $B=I$, which is easy to compute and apply but doesn't help solve the original problem, and $B=A^{-1}$, which means that the iterative algorithm would converge after zero steps but which is difficult to compute. The running time of the preconditioned problem involves $$O\left( \sqrt{\kappa\left( B^{-1}A \right)} \log\left(1/\epsilon \right) \right)$$ matrix vector multiplications. The quantity $\kappa\left( B^{-1}A \right)$ is sometimes known as the \emph{relative condition number of $A$ with respect to $B$}---in general, finding a $B$ that makes it smaller takes more initial time but leads to fewer iterations. (This was the basis for the comment above that there is a tradeoff in choosing the quality of the preconditioner, and it is true more generally.) These ideas apply more generally, but we consider applying them here to Laplacians. So, in particular, given a graph $G$ and its Laplacian $L_{G}$, one way to precondition it is to look for a different graph $H$ such that $L_H \approx L_G$. For example, one could use the sparsified graph that we computed with the algorithm from last class. That sparsified graph is actually an $\epsilon$-good preconditioner, but it is too expensive to compute. To understand how we can go beyond the linear algebra and exploit graph theoretic ideas to get good approximations more quickly, let's discuss different ways in which two graphs can be close to one another.
\subsection{Different ways two graphs can be close} We have talked formally and informally about different ways graphs can be close, e.g., we used the idea of similar Laplacian quadratic forms when talking about Cheeger's Inequality and the quality of spectral partitioning methods. We will be interested in that notion, but we will also be interested in other notions, so let's now discuss this topic in more detail. \begin{itemize} \item \textbf{Cut similarity.} One way to quantify the idea that two graphs are close is to say that they are similar in terms of their cuts or partitions. The standard result in this area is due to BK, who developed the notion of \emph{cut similarity} to develop fast algorithms for min cut and max flow and other related combinatorial problems. This notion of similarity says that two graphs are close if the weights of the cuts, i.e., the sum of edges or edge weights crossing a partition, are close for all cuts. To define it, recall that, given a graph $G=(V,E,W)$ and a set $S \subset V$, we can define $\mbox{cut}_G(S) = \sum_{u \in S, v \in \bar{S} } W_{(uv)}$. Here is the definition. \begin{definition} Given two graphs, $G=(V,E,W)$ and $\tilde{G} = (V,\tilde{E},\tilde{W})$, on the same vertex set, we say that $G$ and $\tilde{G}$ are \emph{$\sigma$-cut-similar} if, for all $S\subseteq V$, we have that \[ \frac{1}{\sigma} \mbox{cut}_{\tilde{G}}(S) \le \mbox{cut}_{G}(S) \le \sigma \mbox{cut}_{\tilde{G}}(S) . \] \end{definition} As an example of a result in this area, the following theorem shows that every graph is cut-similar to a graph with average degree $O(\log(n))$ and that one can compute that cut-similar graph quickly. \begin{theorem}[BK] For all $\epsilon > 0$, every graph $G=(V,E,W)$ has a $\left(1+\epsilon\right)$-cut-similar graph $\tilde{G}=(V,\tilde{E},\tilde{W})$ such that $\tilde{E} \subseteq E$ and $|\tilde{E}| = O\left( n \log(n) / \epsilon^2 \right)$.
In addition, the graph $\tilde{G}$ can be computed in $O\left( m \log^3 (n) + m \log(n) / \epsilon^2 \right)$ time. \end{theorem} \item \textbf{Spectral similarity.} ST introduced the idea of spectral similarity in the context of nearly linear time solvers. One can view this in two complementary ways. \begin{itemize} \item As a generalization of cut similarity. \item As a special case of subspace embeddings, as used in RLA. \end{itemize} We will do the former here, but we will point out the latter at an appropriate point. Given $G=(V,E,W)$, recall that $L:\mathbb{R}^{n}\rightarrow\mathbb{R}$ is a quadratic form associated with $G$ such that \[ L(x) = \sum_{(uv) \in E} W_{(uv)} \left( x_u - x_v \right)^2 . \] If $S \subset V$ and if $x$ is an indicator/characteristic vector for the set $S$, i.e., it equals $1$ on nodes $u \in S$, and it equals $0$ on nodes $v \notin S$, then for those indicator vectors $x$, we have that $L(x) = \mbox{cut}_{G}(S)$. We can also ask about the values it takes for other vectors $x$. So, let's define the following. \begin{definition} Given two graphs, $G=(V,E,W)$ and $\tilde{G}=(V,\tilde{E},\tilde{W})$, on the same vertex set, we say that $G$ and $\tilde{G}$ are \emph{$\sigma$-spectrally similar} if, for all $x\in\mathbb{R}^{n}$, we have that \[ \frac{1}{\sigma} L_{\tilde{G}}(x) \le L_{G}(x) \le \sigma L_{\tilde{G}}(x) . \] \end{definition} That is, two graphs are spectrally similar if their Laplacian quadratic forms are close. In addition to being a generalization of cut similarity, this also corresponds to a special case of subspace embeddings, restricted from general matrices to edge-incidence matrices and their associated~Laplacians.
\begin{itemize} \item To see this, recall that subspace embeddings preserve the geometry of the subspace and that this is quantified by saying that all the singular values of the sampled/sketched version of the edge-incidence matrix are close to $1$, i.e., close to those of the edge-incidence matrix of the original un-sampled graph. Then, by considering the Laplacian, rather than the edge-incidence matrix, the singular values of the original and sketched Laplacian are also close, up to the square of the approximation factor on the edge-incidence matrix. \end{itemize} Here are several other things to note about spectral similarity. \begin{itemize} \item Two graphs can be cut-similar but not spectrally-similar. For example, consider $G$ to be an $n$-vertex path and $\tilde{G}$ to be an $n$-vertex cycle. They are $2$-cut similar but are only $n$-spectrally similar. \item Spectral similarity is identical to the notion of relative condition number in NLA that we mentioned above. Recall, given $A$ and $B$, then $ A \preceq B$ iff $x^TAx \le x^TBx$, for all $x\in\mathbb{R}^{n}$. Then, $A$ and $B$, if they are Laplacians, are spectrally similar if $\frac{1}{\sigma}B \preceq A \preceq \sigma B$. In this case, they have similar eigenvalues, since: from the Courant-Fischer results, if $\lambda_1,\ldots,\lambda_n$ are the eigenvalues of $A$ and $\tilde{\lambda}_1,\ldots,\tilde{\lambda}_n$ are the eigenvalues of $B$, then for all $i$ we have that $\frac{1}{\sigma} \tilde{\lambda}_i \le \lambda_i \le \sigma \tilde{\lambda}_i$. \item More generally, spectral similarity means that the two graphs will share many spectral or linear algebraic properties, e.g., effective resistances, resistance distances, etc. \end{itemize} \item \textbf{Distance similarity.} If one assigns a length to every edge $e \in E$, then these lengths induce a shortest path distance between every $u,v \in V$.
Thus, given a graph $G=(V,E,W)$, we can let $d: V \times V \rightarrow \mathbb{R}^{+}$ be the shortest path distance. Given this, we can define the following notion of similarity. \begin{definition} Given two graphs, $G=(V,E,W)$ and $\tilde{G}=(V,\tilde{E},\tilde{W})$, on the same vertex set, we say that $G$ and $\tilde{G}$ are \emph{$\sigma$-distance similar} if, for all pairs of vertices $u,v \in V$, we have that \[ \frac{1}{\sigma} \tilde{d}(u,v) \le d(u,v) \le \sigma \tilde{d}(u,v) . \] \end{definition} Note that if $\tilde{G}$ is a subgraph of $G$, then $d_{G}(u,v) \le d_{\tilde{G}}(u,v)$, since shortest-path distances can only increase. (Importantly, this does \emph{not} necessarily hold if the edges of the subgraph are re-weighted, as was done in the simple algorithm from the last class when the subgraph was constructed; we will get back to this later.) In this case, a spanner is a subgraph such that distances in the other direction are not changed too much. \begin{definition} Given a graph $G=(V,E,W)$, a \emph{$t$-spanner} is a subgraph of $G$ such that for all $u,v \in V$, we have that $d_{\tilde{G}}(u,v) \le t d_{G}(u,v)$. \end{definition} There has been a range of work in TCS on spanners (e.g., it is known that every graph has a $(2t+1)$-spanner with $O\left( n^{1+1/t} \right)$ edges) that isn't directly relevant to what we are doing. We will be most interested in spanners that are trees or nearly trees. \begin{definition} Given a graph $G=(V,E,W)$, a \emph{spanning tree} is a tree that includes all vertices in $G$ and is a subgraph of $G$. \end{definition} There are various related notions that have been studied in different contexts: for example, minimum spanning trees, random spanning trees, and low-stretch spanning trees (LSSTs). Again, to understand some of the differences, think of a path versus a cycle. For today, we will be interested in LSSTs.
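As a quick illustration of these definitions, the $t$-spanner condition can be checked numerically by comparing all-pairs shortest-path distances between a graph and a subgraph. The following sketch (our own naming and conventions, not from the lecture) does this by brute force with Dijkstra's algorithm:

```python
# Illustrative sketch: verify the t-spanner condition d_H(u,v) <= t * d_G(u,v)
# for a subgraph H of G via brute-force all-pairs shortest paths.
import heapq
from itertools import combinations

def dijkstra(adj, s):
    # adj: {u: {v: weight}}; returns shortest-path distances from s
    dist = {u: float("inf") for u in adj}
    dist[s] = 0.0
    pq = [(0.0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u].items():
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

def spanner_stretch(adj_G, adj_H):
    # Largest factor by which subgraph distances exceed distances in G;
    # H is a t-spanner of G exactly when this value is at most t.
    dG = {u: dijkstra(adj_G, u) for u in adj_G}
    dH = {u: dijkstra(adj_H, u) for u in adj_H}
    return max(
        [dH[u][v] / dG[u][v] for u, v in combinations(adj_G, 2)] + [1.0]
    )
```

On a $4$-cycle with the path obtained by deleting one edge, the returned stretch is $3$, matching the path-versus-cycle intuition used throughout this section.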
The most extreme form of a sparse spanner is a LSST, which has only $n-1$ edges but which approximates pairwise distances up to small, e.g., hopefully polylog,~factors. \end{itemize} \subsection{Sparsified graphs} Here is an aside with some more details about sparsified graphs, which is of interest since this is the first step of our Laplacian-based linear equation solver algorithm. Let's define the following, which is a slight variant of the above. \begin{definition} Given a graph $G$, a \emph{$(\sigma,d)$-spectral sparsifier} of $G$ is a graph $\tilde{G}$ such that \begin{compactenum} \item $\tilde{G}$ is $\sigma$-spectrally similar to $G$. \item The edges of $\tilde{G}$ are reweighted versions of the edges of $G$. \item $\tilde{G}$ has $\le d|V|$ edges. \end{compactenum} \end{definition} \textbf{Fact.} Expanders can be thought of as sparse versions of the complete graph; and, if edges are weighted appropriately, they are spectral sparsifiers of the complete graph. This holds true more generally for other graphs. Here are examples of such results. \begin{itemize} \item SS showed that every graph has a $\left(1+\epsilon, O\left( \frac{\log(n)}{\epsilon^2} \right) \right)$-spectral sparsifier. This was shown by SS with an effective resistance argument; and it follows from what we discussed last time: last time, we showed that sampling with respect to the leverage scores gives a subspace embedding, which preserves the geometry of the subspace, which preserves the Laplacian quadratic form, which implies the spectral sparsification claim. \item BSS showed that every $n$-node graph $G$ has a $\left( \frac{\sqrt{d}+1}{\sqrt{d}-1},d\right)$-spectral sparsifier (which in general is more expensive to compute than running a nearly linear time solver). In particular, $G$ has a $\left( 1+2\epsilon, \frac{4}{\epsilon^2} \right)$-spectral sparsifier, for every $\epsilon\in(0,1)$.
\end{itemize} Finally, there are several ways to speed up the computation of graph sparsification algorithms, relative to the strawman that we presented in the last class. \begin{itemize} \item Given the relationship between the leverage scores and the effective resistances and that the effective resistances can be computed with a nearly linear time solver, one can use the ST or KMP solver to speed up the computation of graph sparsifiers. \item One can use local spectral methods, e.g., diffusion-based methods from ST or the push algorithm of ACL, to compute well-balanced partitions in nearly linear time and from them obtain spectral sparsifiers. \item Union of random spanning trees. It is known that, e.g., the union of two random spanning trees is $O(\log(n))$-cut similar to $G$; that the union of $O\left( \log^2(n)/\epsilon^2 \right)$ reweighted random spanning trees is a $(1+\epsilon)$-cut sparsifier; and so on. This suggests looking at spanning trees and other related combinatorial quantities that can be quickly computed to speed up the computation of graph sparsifiers. We turn to this next. \end{itemize} \subsection{Back to Laplacian-based linear systems} KMP considered the use of combinatorial preconditioners, an idea that traces back to Vaidya. They coupled this with a fact that has been used extensively in RLA: that only approximate leverage scores are actually needed in the sampling process to create a sparse sketch of $L$. In particular, they compute upper estimates of the leverage scores or effective resistances of each edge, and they compute these estimates on a modified graph, in which each upper estimate is sufficiently good. The modified graph is rather simple: take a LSST and increase its weights. Although the sampling probabilities obtained from the LSST are strictly greater than the effective resistances, they are not too much greater in aggregate.
This, coupled with a rather complicated iterative preconditioning scheme, together with careful accounting and careful data structures, will lead to a solver that runs in $O\left( m \log(n)\log(1/\epsilon) \right)$ time, up to $\log\log(n)$ factors. We will discuss each of these briefly in turn. \paragraph{Use of approximate leverage scores.} Recall from last class that an important step in the algorithm was to use nonuniform importance sampling probabilities. In particular, if we sampled edges from the edge-incidence matrix with probabilities $\{p_i\}_{i=1}^{m}$, where each $p_i = \ell_i$, with $\ell_i$ the effective resistance or statistical leverage score of the weighted edge-incidence matrix, then we showed that if we sampled $r=O\left( n\log(n)/\epsilon\right)$ edges, then it follows that \[ \| I - \left(SU_{\Phi}\right)^{T}\left(SU_{\Phi}\right) \|_2 \le \epsilon , \] from which we were able to obtain a good relative-error solution. Using probabilities exactly equal to the leverage scores is overkill, and the same result holds if we use any probabilities $p_i^{\prime}$ that are ``close'' to $p_i$ in the following sense: if \[ p_i^{\prime} \ge \beta \ell_i , \] for $\beta\in(0,1]$ and $\sum_{i=1}^{m} p_i^{\prime}=1$, then the same result follows if we sample $r=O\left( n\log(n)/(\beta\epsilon)\right)$ edges, i.e., if we oversample by a factor of $1/\beta$. The key point here is that it is essential not to underestimate the high-leverage edges too much. It is, however, acceptable if we overestimate and thus oversample some low-leverage edges, as long as we don't do it too much. In particular, let's say that we have the leverage scores $\{\ell_i\}_{i=1}^{m}$ and overestimation factors $\{\gamma_i\}_{i=1}^{m}$, where each $\gamma_i \ge 1$. From this, we can consider the probabilities \[ p_i^{\prime\prime} = \frac{\gamma_i\ell_i}{\sum_{i=1}^{m}\gamma_i\ell_i} .
\] If $\sum_{i=1}^{m}\gamma_i\ell_i$ is not too large, say $O\left(n\log(n)\right)$ or some other factor that is only slightly larger than $n$, then dividing by it (to normalize $\{\gamma_i\ell_i\}_{i=1}^{m}$ to unity to be a probability distribution) does not decrease the probabilities for the high-leverage components too much, and so we can use the probabilities $p_i^{\prime\prime}$ with an extra amount of oversampling that equals $\frac{1}{\beta} = \sum_{i=1}^{m}\gamma_i\ell_i$. \paragraph{Use of LSSTs as combinatorial preconditioners.} Here, the idea is to use a LSST, i.e., use a particular form of ``combinatorial preconditioning,'' to replace $\ell_i=\ell_{(uv)}$ with the stretch of the edge $(uv)$ in the LSST. Vaidya was the first to suggest the use of spanning trees of $L$ as building blocks for the preconditioning matrix $B$. The idea is then that the linear system, if the constraint matrix is the Laplacian of a tree, can be solved in $O(n)$ time with Gaussian elimination. (Adding a few edges back into the tree gives a preconditioner that is only better, and it is still easy to solve.) Boman-Hendrickson used a LSST as a stand-alone preconditioner. ST used a preconditioner that is a LSST plus a small number of extra edges. KMP had additional extensions that we describe here. Two questions arise with this approach. \begin{itemize} \item Q1: What is the appropriate base tree? \item Q2: Which off-tree edges should be added into the preconditioner? \end{itemize} One idea is to use a tree that concentrates the maximum possible weight from the total weight of the edges in $L$. This is what Vaidya did; and, while it led to good results, the results weren't good enough for what we are discussing here. (In particular, note that it doesn't discriminate between different trees in unweighted graphs, and it won't provide a bias toward the middle edge of a dumbbell graph.)
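To make the $O(n)$ tree-solve claim above concrete, here is a sketch (our own naming and conventions, not from the notes): for a tree Laplacian $L_T$, eliminating leaves folds each node's right-hand side into its parent, and a single back-substitution pass recovers $x$ (fixed by setting $x_{\mathrm{root}}=0$, since $L_T$ is singular and we assume $b$ is in its range):

```python
# Sketch: solve L_T x = b in O(n) when L_T is the Laplacian of a tree, by
# eliminating leaves upward and back-substituting. Assumes sum(b) == 0.
def solve_tree_laplacian(parent, weight, b):
    # parent[i]: parent of node i (root is node 0 with parent[0] == 0);
    # weight[i]: weight of edge {i, parent[i]} (weight[0] unused); nodes are
    # numbered so that parent[i] < i, i.e., in topological order from the root.
    n = len(parent)
    b = list(b)
    # forward elimination: fold each node's net flow into its parent
    for i in range(n - 1, 0, -1):
        b[parent[i]] += b[i]
    # back substitution: x_i = x_parent + b_i / w_i, with x_root = 0
    x = [0.0] * n
    for i in range(1, n):
        x[i] = x[parent[i]] + b[i] / weight[i]
    return x
```

On the path $0\!-\!1\!-\!2$ with unit weights and $b=(1,0,-1)$, this returns $x=(0,-1,-2)$, which one can check satisfies $L_T x = b$.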
Another idea is to use a tree that concentrates mass on high leverage/influence edges, i.e., edges with the highest leverage in the edge-incidence matrix or effective resistance in the corresponding Laplacian. The key idea to make this work is that of \emph{stretch}. To define this, recall that for every edge $(u,v)\in E$ in the original graph Laplacian $L$, there is a unique ``detour'' path between $u$ and $v$ in the tree $T$. \begin{definition} The \emph{stretch} of the edge with respect to $T$ equals the distortion caused by this detour. \end{definition} In the unweighted case, this stretch is simply the length of the tree path, i.e., the length in $T$ of the path between nodes $u$ and $v$ that are connected by an edge in $G$. Given this, we can define the~following. \begin{definition} The \emph{total stretch} of a graph $G$ and its Laplacian $L$ with respect to a tree $T$ is the sum of the stretches of all off-tree edges. Then, a \emph{low-stretch spanning tree (LSST)} $T$ is a tree such that the total stretch is low. \end{definition} Informally, a LSST is one that provides good ``on average'' detours for edges of the graph, i.e., there can be a few pairs of nodes that are stretched a lot, but there can't be too many such~pairs. There are many algorithms for LSSTs. For example, here is a result that is particularly relevant for us. \begin{theorem} Every graph $G$ has a spanning tree $T$ with total stretch $\tilde{O}\left( m \log(n) \right)$, and this tree can be found in $\tilde{O}\left( m \log(n) \right)$ time. \end{theorem} In particular, we can use the stretches of pairs of nodes in the tree $T$ in place of the leverage scores or effective resistances as importance sampling probabilities: they are larger than the leverage scores, and there might be a few that are much larger, but the total sum is not much larger than the total sum of the leverage scores (which equals $n-1$).
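The total-stretch computation just described is simple to sketch in the unweighted case (illustrative code with our own naming, not from the notes): the stretch of an off-tree edge $(u,v)$ is the length of the unique tree path between $u$ and $v$.

```python
# Illustrative sketch (unweighted case): the stretch of an off-tree edge (u,v)
# with respect to a spanning tree T is the length of the unique u-v path in T.
from collections import deque

def tree_path_length(tree_adj, u, v):
    # BFS from u; in a tree the u-v path (and hence its length) is unique
    dist = {u: 0}
    q = deque([u])
    while q:
        x = q.popleft()
        for y in tree_adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                q.append(y)
    return dist[v]

def total_stretch(edges, tree_edges, tree_adj):
    # Sum of stretches over all off-tree edges of the graph
    off_tree = [
        (u, v) for u, v in edges
        if (u, v) not in tree_edges and (v, u) not in tree_edges
    ]
    return sum(tree_path_length(tree_adj, u, v) for u, v in off_tree)
```

For the $4$-cycle with the path $0\!-\!1\!-\!2\!-\!3$ as the spanning tree, the single off-tree edge $(3,0)$ has stretch $3$, so the total stretch is $3$.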
\paragraph{Paying careful attention to data structures, bookkeeping, and recursive preconditioning.} Basically, to get everything to work in the allotted time, one needs a preconditioner $B$ that is an extremely good approximation to $L$ and that can be computed in linear time. What we did in the last class was to compute a ``one step'' preconditioner, and likely any such ``one step'' preconditioner won't be substantially easier to compute than solving the equation; and so KMP consider recursion in the construction of their preconditioner. \begin{itemize} \item In a recursive preconditioning method, the system in the preconditioner $B$ is not solved exactly but only approximately, via a recursive invocation of the same iterative method. So, one must find a preconditioner for $B$, a preconditioner for it, and so on. This gives a multilevel hierarchy of progressively smaller graphs. To make the total work small, i.e., $O(kn)$, for some constant $k$, one needs the graphs in the hierarchy to get small sufficiently fast. It is sufficient that the graph on the $(i+1)^{th}$ level is smaller than the graph on the $i^{th}$ level by a factor of $\frac{1}{2k}$. However, one must converge within $O(kn)$ work. So, one can use CG/Chebyshev, which need $O(k)$ iterations to converge when $B$ is a $k^2$-approximation of $L$ (as opposed to the $O(k^2)$ iterations which are needed for something like Richardson's iteration). \end{itemize} So, a LSST is a good base; and a LSST also tells us which off-tree edges, i.e., which additional edges from $G$ that are not in $T$, should go into the preconditioner. \begin{itemize} \item This leads to an $\tilde{O}\left( m \log^2(n)\log(1/\epsilon) \right)$ algorithm. \end{itemize} If one keeps sampling based on the same tree and does some other more complicated and careful stuff, then one obtains a hierarchical graph and is able to remove the second log factor to yield a potentially practical solver.
\begin{itemize} \item This leads to an $\tilde{O}\left( m \log (n)\log(1/\epsilon) \right)$ algorithm. \end{itemize} See the BSST and KMP papers for all the details. \section*{Acknowledgments} \end{document}
Q: How to add Custom Authentication in SSRS 2016 - Using ADFS & OWIN I am working on adding custom authentication on top of SSRS 2016. There is a very good article available to do the same, Custom Security Sample 2016 - https://github.com/Microsoft/Reporting-Services/tree/master/CustomSecuritySample2016 The steps Microsoft has provided to add custom authentication basically amount to adding forms authentication, in which the user name/password is taken as input on logon.aspx. I did this and it worked as expected. But my requirement is to authenticate the user using ADFS (Active Directory Federation Services). And luckily there is another good article available at dotnetcurry.com/windows-azure/1166/aspnet-mvc-multiple-adfs-owin-katana Thanks to the above article I was able to authenticate using ADFS and OWIN in a sample MVC web forms application. But here is the problem: when I try to add the above sample login into the Custom Security Sample provided by Microsoft (link above), I am getting an exception Exception of type 'System.Web.HttpUnhandledException' was thrown. No owin.Environment item was found in the context. So here is what I am doing. I added a Startup.cs file to the CustomSecuritySample2016 solution provided at https://github.com/Microsoft/Reporting-Services/tree/master/CustomSecuritySample2016 on GitHub. Then I also added a Startup.Auth.cs file under the App_Start folder. Modified the web.config file to add the following settings. <add key="owin:AppStartup" value="Microsoft.Samples.ReportingServices.CustomSecurity.Startup, Microsoft.Samples.ReportingServices.CustomSecurity" /> <add key="owin:AutomaticAppStartup" value="true" /> *Then on the Page_Load event of Logon.aspx I am just trying to access the GetOwinContext method by extending HttpContext. private void Page_Load(object sender, System.EventArgs e) { var ct = HttpContext.Current.GetOwinContext(); } *Rest of the changes are exactly the same as mentioned in the custom security sample 2016 code on GitHub (link above).
I already googled for this exception and I have already added this setting as per the accepted answers out there, but mine is still throwing this exception. <add key="owin:AppStartup" value="Microsoft.Samples.ReportingServices.CustomSecurity.Startup, Microsoft.Samples.ReportingServices.CustomSecurity" /> <add key="owin:AutomaticAppStartup" value="true" /> Is it because SSRS doesn't allow loading any DLL that is not added in RSReportServer.config?
\section{Introduction}\label{sec:intro} Historically, humans have performed inconsistently in judgemental forecasting \cite{Makridakis2010,TetlockExp2017}, which incorporates subjective opinion and probability estimates into predictions \cite{Lawrence2006}. Yet, human judgement remains essential in cases where pure statistical methods are not applicable, e.g. where historical data alone is insufficient or for one-off, more `unknowable' events \cite{Petropoulos2016,Arvan2019,deBaets2020}. Judgemental forecasting is widely relied upon for decision-making \cite{Nikolopoulos2021}, in myriad fields from epidemiology to national security \cite{Nikolopoulos2015,Litsiou2019}. Effective tools to help humans improve their predictive capabilities thus have enormous potential for impact. Two recent global events -- the COVID-19 pandemic and the US withdrawal from Afghanistan -- underscore this by highlighting the human and financial cost of predictive deficiency. A multi-purpose system which could improve our ability to predict the incidence and impact of events by as little as 5\% could save millions of lives and be worth trillions of dollars per year \cite{TetlockGard2016}. Research on judgemental forecasting (see \cite{Lawrence2006,Zellner2021} for overviews), including the recent\AX{,} groundbreaking `Superforecasting Experiment' \cite{TetlockGard2016}, is instructive in establishing the desired properties for systems for supporting forecasting. In addition to reaffirming the importance of fine-grained probabilistic reasoning \cite{Mellers2015}, this literature points to the benefits of some group techniques versus solo forecasting \cite{Landeta2011,Tetlock2014art}, of synthesising qualitative and quantitative information \cite{Lawrence2006}, of combating agents' irrationality \cite{Chang2016} and of high agent engagement with the forecasting challenge, e.g. robust debating \cite{Landeta2011} and frequent prediction updates \cite{Mellers2015}.
Meanwhile, \emph{computational argumentation} (see \cite{AImagazine17,handbook} for recent overviews) is a field of AI which involves reasoning with uncertainty and resolving conflicting information, e.g. in natural language debates. As such, it is an ideal candidate for aggregating the broad, polymorphous set of information involved in judgemental group forecasting. An extensive and growing literature is based on various argumentation frameworks -- rule-based systems for aggregating, representing and evaluating sets of arguments, such as those applied in the contexts of \emph{scheduling} \cite{Cyras_19}, \emph{fact checking} \cite{Kotonya_20} or in various instances of \emph{explainable AI} \cite{Cyras_21}. Subsets of the requirements for forecasting systems are addressed by individual formalisms, e.g. \emph{probabilistic argumentation} \AX{\cite{Dung2010,Thimm2012,Hunter2013,Fazzinga2018}} may effectively represent and analyse uncertain arguments about the future. However, we posit that a purpose-built argumentation framework for forecasting is essential to effectively utilise argumentation's reasoning capabilities in this context. \begin{figure*} \includegraphics[width=\textwidth]{images/FAF_diagram.png} \caption{The step-by-step process of a FAF over its lifetime.} \label{fig:FAFdiag} \end{figure*} In this paper, we attempt to cross-fertilise these two as of yet unconnected academic areas. We draw from forecasting literature to inform the design of a new computational argumentation approach: \emph{Forecasting Argumentation Frameworks} (FAFs). FAFs empower (human and artificial) agents to structure debates in real time and to deliver argumentation-based forecasting. They offer an approach in the spirit of \emph{deliberative democracy} \cite{Bessette1980} to respond to a forecasting problem over time. 
The steps which underpin FAFs are depicted in Figure \ref{fig:FAFdiag} (referenced throughout) and can be described in simple terms \FT{as follows}: a FAF is initialised with a time limit \FT{(for the overall forecasting process and for each iteration therein)} and a pre-agreed `base-rate' forecast $\ensuremath{\mathcal{F}}$ (Stage 1), e.g. based on historical data. \FT{Then,} the forecast is revised by one or more (non-concurrent) debates, \BI{in the form of `update frameworks' (Stage 2)}, opened and resolved by participating agents \FT{(}until \FT{the} specified time limit is reached\FT{)}. Each update framework begins with a proposed revision to the current forecast (Stage 2a), and proceeds with a cycle of argumentation (Stage 2b) about the proposed forecast, voting on said argumentation and forecasting. Forecasts deemed `irrational' with a view to agents' argumentation and voting are blocked. Finally, the rational forecasts are aggregated and the result replaces the current group forecast (Stage 2c). This process may be repeated over time \BI{in an indefinite number of update frameworks} (thus continually \BI{revising} the group forecast) until the \FT{(overall)} time limit is reached. The composite nature of this process enables the appraisal of new information relevant to the forecasting question as and when it arrives. Rather than confronting an unbounded forecasting question with a diffuse set of possible debates open at once, all agents concentrate their argumentation on a single topic (a proposal) at any given time. After giving the necessary background on forecasting and argumentation (§\ref{sec:background}), we formalise our \FT{update} framework\FT{s for Step 2a} (§\ref{sec:fw}). We then give \FT{our} notion of rationality \FT{(Step 2b)}, along with \FT{our} new method for \FT{aggregating rational forecasts (Step 2c)} from a group of agents (§\ref{sec:forecasting}) \FT{and FAFs overall}. 
We explore the underlying properties of \FT{FAFs} (§\ref{sec:props}), before describing \FT{\AX{an experiment} with \FT{a prototype implementing} our approach} (§\ref{sec:experiments}). Finally, we conclude and suggest potentially fruitful avenues for future work (§\ref{sec:conclusions}). \section{Background}\label{sec:background} \subsection{Forecasting} Studies on the efficacy of judgemental forecasting have shown mixed results \cite{Makridakis2010,TetlockExp2017,Goodwin2019}. Limitations of the judgemental approach are a result of well-documented cognitive biases \cite{Kahneman2012}, irrationalities in human probabilistic reasoning which lead to distortion of forecasts. Manifold methodologies have been explored to improve judgemental forecasting accuracy to varying success \cite{Lawrence2006}. These methodologies include, but are not limited to, prediction intervals \cite{Lawrence1989}, decomposition \cite{MacGregorDonaldG1994Jdwd}, structured analogies \cite{Green2007,Nikolopoulos2015} and unaided judgement \cite{Litsiou2019}. Various group forecasting techniques have also been explored \cite{Linstone1975,Delbecq1986,Landeta2011}, although the risks of groupthink \cite{McNees1987} and the importance of maintaining the independence of each group member's individual forecast are well established \cite{Armstrong2001}. Recent advances in the field have been led by Tetlock and Mellers' superforecasting experiment \cite{TetlockGard2016}, which leveraged \AX{geopolitical} forecasting tournaments and a base of 5000 volunteer forecasters to identify individuals with consistently exceptional accuracy (top 2\%). The experiment\AR{'s} findings underline the effectiveness of group forecasting orientated around debating \cite{Tetlock2014art}, and demonstrate a specific cognitive-intellectual approach conducive for forecasting \cite{Mellers20151,Mellers2015}, but stop short of suggesting a concrete universal methodology for higher accuracy.
Instead, Tetlock draws on his own work and previous literature to crystallise a broad set of methodological principles by which superforecasters abide \cite[pg.144]{TetlockGard2016}: \begin{itemize} \item \emph{Pragmatic}: not wedded to any idea or agenda; \item \emph{Analytical:} capable of stepping back from the tip-of-your-nose perspective and considering other views; \item \emph{Dragonfly-eyed:} value diverse views and synthesise them into their own; \item \emph{Probabilistic:} judge using many grades of maybe; \item \emph{Thoughtful updaters:} when facts change, they change their minds; \item \emph{Good intuitive psychologists:} aware of the value of checking thinking for cognitive and emotional biases. \end{itemize} Subsequent research after the superforecasting experiment has included exploring further optimal forecasting tournament preparation \cite{penn_global_2021,Katsagounos2021} and extending Tetlock and Mellers' approach to answer broader, more time-distant questions \cite{georgetown}. It should be noted that there have been no recent advances on computational tool\AX{kits} for the field similar to that proposed in this paper. \subsection{Computational Argumentation} We posit that existing argumentation formalisms are not well suited for the aforementioned future-based arguments, which are necessarily semantically and structurally different from arguments about present or past concerns.
Specifically, forecasting arguments are inherently probabilistic and must deal with the passage of time and its implications for the outcomes at hand. Further, several other important characteristics can be drawn from the forecasting literature which render current argumentation formalisms unsuitable, e.g. the paramountcy of dealing with bias (in data and cognitive), forming granular conclusions, fostering group debate and the co-occurrence of qualitative and quantitative arguing. Nonetheless, several of these characteristics have been previously explored in argumentation and our formalisation draws from several existing approaches. First and foremost, it draws in spirit from abstract argumentation frameworks (AAFs) \cite{Dung1995}, in that the arguments' inner contents are ignored and the focus is on the relationships between arguments. However, we consider arguments of different types and \AX{an additional relation of} support (pro), \AX{rather than} attack (con) alone as in \cite{Dung1995}. Past work has also introduced probabilistic constraints into argumentation frameworks. {Probabilistic AAFs} (prAAFs) propose two divergent ways for modelling uncertainty in abstract argumentation using probabilities - the constellation approach \cite{Dung2010,Li2012} and the epistemic approach \cite{Hunter2013,Hunter2014,Hunter2020}. These formalisations use probability as a means to assess uncertainty over the validity of arguments (epistemic) or graph topology (constellation), but do not enable reasoning \emph{with} or \emph{about} probability, which is fundamental in forecasting. In exploring temporality, \cite{Cobo2010} augment AAFs by providing each argument with a limited lifetime. Temporal constraints have been extended in \cite{Cobo2012} and \cite{Baron2014}. Elsewhere, \cite{Rago2017} have used argumentation to model irrationality or bias in agents.
Finally, a wide range of gradual evaluation methods have gone beyond traditional qualitative semantics by measuring arguments' acceptability on a scale (normally [0,1]) \cite{Leite2011,Evripidou2012,Amgoud2017,Amgoud2018,Amgoud2016}. Many of these approaches have been unified as Quantitative Bipolar Argumentation Frameworks (QBAFs) in \cite{Baroni2018}. Amongst existing approaches, of special relevance in this paper are Quantitative Argumentation Debate (QuAD) frameworks \cite{Baroni2015}, i.e. 5-tuples ⟨$\mathcal{X}^a$, $\mathcal{X}^c$, $\mathcal{X}^p$, $\mathcal{R}$, $\ensuremath{\mathcal{\tau}}$⟩ where $\mathcal{X}^a$ is a finite set of \emph{answer} arguments (to implicit \emph{issues}); $\mathcal{X}^c$ is a finite set of \emph{con} arguments; $\mathcal{X}^p$ is a finite set of \emph{pro} arguments; $\mathcal{X}^a$, $\mathcal{X}^c$ and $\mathcal{X}^p$ are pairwise disjoint; $\mathcal{R} \subseteq (\mathcal{X}^c \cup \mathcal{X}^p) \times (\mathcal{X}^a \cup \mathcal{X}^c \cup \mathcal{X}^p)$ is an acyclic binary relation; $\ensuremath{\mathcal{\tau}}$ : $(\mathcal{X}^a \cup \mathcal{X}^c \cup \mathcal{X}^p) \rightarrow [0,1]$ is a total function: $\ensuremath{\mathcal{\tau}}(a)$ is the \emph{base score} of $a$. Here, attackers and supporters of arguments are determined by the pro and con arguments they are in relation with. Formally, for any $a\in\mathcal{X}^a \cup \mathcal{X}^c \cup \mathcal{X}^p$, the set of \emph{con} (\emph{pro}\AX{)} \emph{arguments} of $a$ is $\mathcal{R}^-(a) = \{b\in\mathcal{X}^c|(b,a)\in\mathcal{R}\}$ ($\mathcal{R}^+(a) = \{b\in\mathcal{X}^p|(b,a)\in\mathcal{R}\}$, resp.). Arguments in QuAD frameworks are scored by the \emph{Discontinuity-Free QuAD} (DF-QuAD) algorithm \cite{Rago2016}, using the argument's intrinsic base score and the \emph{strengths} of its pro/con arguments. 
\FTn{Given that DF-QuAD is used to define our method (see Def.~\ref{def:conscore}), for completeness we define it formally here.} DF-QuAD's \emph{strength aggregation function} is defined as $\Sigma : [0,1]^* \rightarrow [0,1]$, where for $\mathcal{S} = (v_1,\ldots,v_n) \in [0,1]^*$: if $n=0$, $\Sigma(\mathcal{S}) = 0$; if $n=1$, $\Sigma(\mathcal{S}) = v_1$; if $n=2$, $\Sigma(\mathcal{S}) = f(v_1, v_2)$; if $n>2$, $\Sigma(\mathcal{S}) = f(\Sigma(v_1,\ldots,v_{n-1}), v_n)$; with the \emph{base function} $f : [0,1]\times [0,1] \rightarrow [0,1]$ defined, for $v_1, v_2\in [0,1]$, as: $f(v_1,v_2)=v_1+(1-v_1)\cdot v_2 = v_1 + v_2 - v_1\cdot v_2$. After separate aggregation of the argument's pro/con descendants, the combination function $c : [0,1]\times [0,1]\times [0,1]\rightarrow [0,1]$ combines $v^-$ and $v^+$ with the argument's base score ($v^0$): $c(v^0,v^-,v^+)=v^0-v^0\cdot\mid v^+ - v^-\mid\:if\:v^-\geq v^+$ and $c(v^0,v^-,v^+)=v^0+(1-v^0)\cdot\mid v^+ - v^-\mid\:if\:v^-< v^+$, resp.\ The inputs for the combination function are provided by the \emph{score function}, $\ensuremath{\mathcal{\sigma}} : \mathcal{X}^a\cup\mathcal{X}^c\cup\mathcal{X}^p\rightarrow [0,1]$, which gives the argument's strength, as follows: for any $\ensuremath{x} \in \mathcal{X}^a\cup\mathcal{X}^c\cup\mathcal{X}^p$: $\ensuremath{\mathcal{\sigma}}(\ensuremath{x}) = c(\ensuremath{\mathcal{\tau}}(\ensuremath{x}),\Sigma(\ensuremath{\mathcal{\sigma}}(\mathcal{R}^-(\ensuremath{x}))),\Sigma(\ensuremath{\mathcal{\sigma}}(\mathcal{R}^+(\ensuremath{x}))))$ where if $(\ensuremath{x}_1,\ldots,\ensuremath{x}_n)$ is an arbitrary permutation of the ($n \geq 0$) con arguments in $\mathcal{R}^-(\ensuremath{x})$, $\ensuremath{\mathcal{\sigma}}(\mathcal{R}^-(\ensuremath{x}))=(\ensuremath{\mathcal{\sigma}}(\ensuremath{x}_1),\ldots,\ensuremath{\mathcal{\sigma}}(\ensuremath{x}_n))$ (similarly for pro arguments).
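For concreteness, the $\Sigma$, $f$ and $c$ definitions above can be transcribed directly as follows (an illustrative sketch; the dictionary-based argument encoding with keys base/pro/con is ours, not from the paper):

```python
# Direct transcription of DF-QuAD: aggregate() is Sigma (folding the base
# function f(v1, v2) = v1 + v2 - v1*v2), and score() is the score function
# sigma, combining the base score with aggregated pro/con strengths via c.
def aggregate(vs):
    s = 0.0
    for v in vs:
        s = s + v - s * v  # f(s, v)
    return s

def score(arg):
    # arg: {"base": float, "pro": [args], "con": [args]} (our encoding)
    v_minus = aggregate([score(a) for a in arg.get("con", [])])
    v_plus = aggregate([score(a) for a in arg.get("pro", [])])
    v0 = arg["base"]
    if v_minus >= v_plus:
        return v0 - v0 * (v_minus - v_plus)
    return v0 + (1 - v0) * (v_plus - v_minus)
```

A leaf argument keeps its base score; e.g. a single pro child of strength $0.5$ lifts a base score of $0.5$ to $0.75$, while a single con child of the same strength lowers it to $0.25$.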
Note that the DF-QuAD notion of $\ensuremath{\mathcal{\sigma}}$ can be applied to any argumentation framework where arguments are equipped with base scores and pro/con arguments. We will do so later, for our novel formalism. \section{Update \AX{F}rameworks}\label{sec:fw} We begin by defining the individual components of our frameworks, starting with the fundamental notion of a \emph{forecast}. \FT{This} is a probability estimate for the positive outcome of a given (binary) question. \begin{definition} A \emph{forecast} $\ensuremath{\mathcal{F}}$ is the probability $P(\ensuremath{\mathcal{Q}}=true) \in [0,1]$ for a given \emph{forecasting question} $\ensuremath{\mathcal{Q}}$. \end{definition} \begin{example} \label{FAFEx} Consider the forecasting question $\ensuremath{\mathcal{Q}}$: \emph{`Will the Tokyo \AX{2020 Summer} Olympics be cancelled/postponed to another year?'}. \AX{Here, the $true$ outcome amounts to the Olympics being cancelled/postponed (and $false$ to it taking place in 2020 as planned).} Then, a forecast $\ensuremath{\mathcal{F}}$ may be $P(\ensuremath{\mathcal{Q}}=true)= 0.15$, which amounts to a 15\% probability of the Olympics \BIn{being cancelled/postponed}. \BI{Note that $\ensuremath{\mathcal{F}}$ may have been introduced as part of an update framework (herein described), or as an initial base rate at the outset of a FAF (Stage 1 in Figure \ref{fig:FAFdiag}).} \end{example} In the remainder, we will often drop $\ensuremath{\mathcal{Q}}$, implicitly assuming it is given, and use $P(true)$ to stand for $P(\ensuremath{\mathcal{Q}}=true)$. 
In order to empower agents to reason about probabilities and thus support forecasting, we need, in addition to \emph{pro/con} arguments as in QuAD frameworks, two new argument types: \begin{itemize} \item \emph{proposal} arguments, each about some forecast (and its underlying forecasting question); each proposal argument $\ensuremath{\mathcal{P}}$ has a \emph{forecast} and, optionally, some supporting \emph{evidence}; and \item \emph{amendment} arguments, which \AX{suggest a modification to} some forecast\AX{'s probability} by increasing or decreasing it, and are accordingly separated into disjoint classes of \emph{increase} and \emph{decrease} (amendment) arguments.\footnote{Note that we decline to include a third type of amendment argument for arguing that $\ensuremath{\Forecast^\Proposal}$ is just right. This choice rests on the assumption that additional information always necessitates a change to $\ensuremath{\Forecast^\Proposal}$, however granular that change may be. This does not restrict individual agents arguing about $\ensuremath{\Forecast^\Proposal}$ from casting $\ensuremath{\Forecast^\Proposal}$ as their own final forecast. However, rather than cohering their argumentation around $\ensuremath{\Forecast^\Proposal}$, which we hypothesise would lead to high risk of groupthink~\cite{McNees1987}, agents are compelled to consider the impact of their amendment arguments on this more granular level.} \end{itemize} Note that amendment arguments are introduced specifically for arguing about a proposal argument, given that traditional QuAD pro/con arguments are of limited use when the goal is to judge the acceptability of a probability, and that in forecasting agents must not only decide \emph{if} they agree/disagree but also \emph{how} they agree/disagree (i.e. whether they believe the forecast is too low or too high considering, if available, the evidence). Amendment arguments, with their increase and decrease classes, provide for this. 
\begin{example}\label{ProposalExample} A proposal argument $\ensuremath{\mathcal{P}}$ in the Tokyo Olympics setting may comprise the forecast: \emph{\AX{`}There is a 75\% chance that the Olympics will be cancelled/postponed to another year'}. It may also include evidence: \emph{`A new poll today shows that 80\% of the Japanese public want the Olympics to be cancelled. The Japanese government is likely to buckle under this pressure.'} This argument may aim to prompt updating the earlier forecast in Example~\ref{FAFEx}. A \emph{decrease} amendment argument may be $\ensuremath{\decarg_1}$: \emph{`The International Olympic Committee and the Japanese government will ignore the views of the Japanese public'}. An \emph{increase} amendment argument may be $\ensuremath{\incarg_1}$: \emph{`Japan's increasingly popular opposition parties will leverage this to make an even stronger case for cancellation'}. \end{example} Intuitively, a proposal argument is the focal point of the argumentation. It typically suggests a new forecast to replace prior forecasts, argued on the basis of some new evidence (as in the earlier example). We will see that proposal arguments remain immutable through each debate (update framework), which takes place via amendment arguments and standard pro/con arguments. Note that, wrt QuAD argument types, proposal arguments replace issues and amendment arguments replace answers, in that the former are driving the debates and the latter are the options up for debate. Note also that amendment arguments merely state a direction wrt $\ensuremath{\Forecast^\Proposal}$ and do not contain any more information, such as \emph{how much} to alter $\ensuremath{\Forecast^\Proposal}$ by. We will see that alteration can be determined by \emph{scoring} amendment arguments. 
Proposal and amendment arguments, alongside pro/con arguments, form part of our novel update frameworks \BI{(Stage 2 of Figure \ref{fig:FAFdiag})}, defined as follows: \begin{definition} An \emph{update framework} is a nonad ⟨$\ensuremath{\mathcal{P}}, \ensuremath{\mathcal{X}}, \ensuremath{\AmmArgs^-}, \ensuremath{\AmmArgs^+}, \ensuremath{\Rels^p}, \ensuremath{\Rels}, \ensuremath{\mathcal{A}}, \ensuremath{\mathcal{V}}, \ensuremath{\Forecast^\Agents}$⟩ such that: \item[$\bullet$] $\ensuremath{\mathcal{P}}$ is a single proposal argument with \emph{forecast} $\ensuremath{\Forecast^\Proposal}$ and, optionally, \emph{evidence} $\mathcal{E}^\ensuremath{\mathcal{P}}$ for this forecast; \item[$\bullet$] $\ensuremath{\mathcal{X}} = \ensuremath{\AmmArgs^\uparrow} \cup \ensuremath{\AmmArgs^\downarrow}$ is a finite set of \emph{amendment arguments} composed of subsets $\ensuremath{\AmmArgs^\uparrow}$ of \emph{increase} arguments and $\ensuremath{\AmmArgs^\downarrow}$ of \emph{decrease} arguments; \item[$\bullet$] $\ensuremath{\AmmArgs^-}$ is a finite set of \emph{con} arguments; \item[$\bullet$] $\ensuremath{\AmmArgs^+}$ is a finite set of \emph{pro} arguments; \item[$\bullet$] the sets $\{\ensuremath{\mathcal{P}}\}$, $\ensuremath{\AmmArgs^\uparrow}$, $\ensuremath{\AmmArgs^\downarrow}$, $\ensuremath{\AmmArgs^-}$ and $\ensuremath{\AmmArgs^+}$ are pairwise disjoint; \item[$\bullet$] $\ensuremath{\Rels^p}$ $\subseteq$ $\ensuremath{\mathcal{X}}$ $\times$ $\{\ensuremath{\mathcal{P}}\}$ is a directed acyclic binary relation between amendment arguments and the proposal argument (we may refer to this relation informally as `probabilistic'); \item[$\bullet$] $\ensuremath{\Rels}$ $\subseteq$ ($\ensuremath{\AmmArgs^-}$ $\cup$ $\ensuremath{\AmmArgs^+}$) $\times$ ($\ensuremath{\mathcal{X}}$ $\cup$ $\ensuremath{\AmmArgs^-}$ $\cup$ $\ensuremath{\AmmArgs^+}$) is a directed acyclic, binary relation \FTn{from} pro/con arguments \FTn{to} amendment\FTn{/pro/con arguments} (we may refer to this relation informally as 
`argumentative'); \item[$\bullet$] $\ensuremath{\mathcal{A}} = \{ \ensuremath{a}_1, \ldots, \ensuremath{a}_n \}$ is a finite set of \emph{agents} ($n > 1$); \item[$\bullet$] $\ensuremath{\mathcal{V}}$ : $\ensuremath{\mathcal{A}}$ $\times$ ($\ensuremath{\AmmArgs^-}$ $\cup$ $\ensuremath{\AmmArgs^+}$) $\rightarrow$ [0, 1] is a total function such that $\ensuremath{\mathcal{V}}(\ensuremath{a},\ensuremath{x})$ is the \emph{vote} of agent $\ensuremath{a}\in\ensuremath{\mathcal{A}}$ on argument $\ensuremath{x} \in \ensuremath{\AmmArgs^-} \cup \ensuremath{\AmmArgs^+}$; with an abuse of notation, we let $\ensuremath{\mathcal{V}}_\ensuremath{a}$ : ($\ensuremath{\AmmArgs^-}$ $\cup$ $\ensuremath{\AmmArgs^+}$) $\rightarrow [0, 1]$ represent the votes of a \emph{single} agent $\ensuremath{a}\in\ensuremath{\mathcal{A}}$, e.g. $\ensuremath{\mathcal{V}}_\ensuremath{a}(\ensuremath{x}) = \ensuremath{\mathcal{V}}(\ensuremath{a},\ensuremath{x})$; \item[$\bullet$] $\ensuremath{\Forecast^\Agents} = \{ \ensuremath{\Forecast^\Agents}_{\ensuremath{a}_1}, \ldots, \ensuremath{\Forecast^\Agents}_{\ensuremath{a}_n} \}$ is such that $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_i} $, where $i \in \{ 1, \ldots n \}$, is the \emph{forecast} of agent $\ensuremath{a}_i\in\ensuremath{\mathcal{A}}$. \end{definition} \BIn{Note that pro \AX{(}con\AX{)} arguments can be seen as supporting (attacking, resp.) 
other arguments via $\ensuremath{\mathcal{R}}$, as in the case of conventional QuAD frameworks~\cite{Baroni2015}.} \begin{example}\label{eg:tokyo} A possible update framework in our running setting may include $\ensuremath{\mathcal{P}}$ as in Example~\ref{ProposalExample} as well as (see Table \ref{table:tokyo}) $\ensuremath{\mathcal{X}}=\{\ensuremath{\decarg_1}, \ensuremath{\decarg_2}, \ensuremath{\incarg_1}\}$, $\ensuremath{\AmmArgs^-}=\{\ensuremath{\attarg_1}, \ensuremath{\attarg_2}, \ensuremath{\attarg_3}\}$, $\ensuremath{\AmmArgs^+}=\{\ensuremath{\supparg_1}, \ensuremath{\supparg_2}\}$, $\ensuremath{\Rels^p}=\{(\ensuremath{\decarg_1}, \ensuremath{\mathcal{P}})$, $(\ensuremath{\decarg_2}, \ensuremath{\mathcal{P}}), (\ensuremath{\incarg_1}, \ensuremath{\mathcal{P}})\}$, and $\ensuremath{\mathcal{R}}=\{(\ensuremath{\attarg_1}, \ensuremath{\decarg_1}), (\ensuremath{\attarg_2}, \ensuremath{\decarg_1}), (\ensuremath{\attarg_3}, \ensuremath{\incarg_1})$, $(\ensuremath{\supparg_1}, \ensuremath{\decarg_2}),$ $ (\ensuremath{\supparg_2}, \ensuremath{\incarg_1})\}$. Figure \ref{fig:tokyo} gives a graphical representation of these arguments and relations. \BIn{Assuming $\ensuremath{\mathcal{A}}=\{alice, bob, charlie\}$, $\ensuremath{\mathcal{V}}$ may be such that $\AX{\ensuremath{\mathcal{V}}_{alice}(\ensuremath{\attarg_1})} = 1$, $\AX{\ensuremath{\mathcal{V}}_{bob}(\ensuremath{\supparg_1})} = 0$, and so on.} \end{example} \begin{table}[t] \begin{tabular}{p{0.7cm}p{6.7cm}} \hline & Content \\ \hline $\ensuremath{\mathcal{P}}$ & `A new poll today shows that 80\% of the Japanese public want the Olympics to be cancelled owing to COVID-19, and the Japanese government is likely to buckle under this pressure ($\mathcal{E}^\ensuremath{\mathcal{P}})$. Thus, there is a 75\% chance that the Olympics will be cancelled/postponed to another year' ($\ensuremath{\Forecast^\Proposal}$). 
\\ $\ensuremath{\decarg_1}$ & `The International Olympic Committee and the Japanese government will ignore the views of the Japanese public'. \\ $\ensuremath{\decarg_2}$ & `This poll comes from an unreliable source.' \vspace{2mm}\\ $\ensuremath{\incarg_1}$ & `Japan's increasingly popular opposition parties will leverage this to make an even stronger case for cancellation.' \\ $\ensuremath{\attarg_1}$ & `The IOC is bluffing - people are dying, Japan is experiencing a strike. They will not go ahead with the games if there is a risk of mass death.' \\ $\ensuremath{\attarg_2}$ & `The Japanese government may renege on its commitment to the IOC, and use legislative or immigration levers to block the event.' \\ $\ensuremath{\attarg_3}$ & `Japan's government has sustained a high-approval rating in the last year and is strong enough to ward off opposition attacks.' \\ $\ensuremath{\supparg_1}$ & `This pollster has a track record of failure on Japanese domestic issues.' \\ $\ensuremath{\supparg_2}$ & `Rising anti-government sentiment on Japanese Twitter indicates that voters may be receptive to such arguments.' \\ \hline \end{tabular} \caption{Arguments in the update framework in Example~\ref{eg:tokyo}.} \label{table:tokyo} \end{table} \begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{images/FAF1.png} \centering \caption{\BIn{A graphical representation of arguments and relations in the update framework from Example~\ref{eg:tokyo}. Nodes represent proposal ($\ensuremath{\mathcal{P}}$), increase ($\uparrow$), decrease ($\downarrow$), pro ($+$) and con ($-$) arguments, while \FTn{dashed/solid} edges indicate, resp., the $\ensuremath{\Rels^p}$/$\ensuremath{\mathcal{R}}$ relations.}} \label{fig:tokyo} \end{figure} Several considerations about update frameworks are in order. 
First, they represent `stratified' debates: graphically, they can be represented as trees with the proposal argument as root, amendment arguments as children of the root, and pro/con arguments forming the lower layers, as shown in Figure \ref{fig:tokyo}. This tree structure serves to focus argumentation towards the proposal (i.e. the probability and, if available, evidence) it puts forward. Second, we have chosen to impose a `structure' on proposal arguments, whereby their forecast is distinct from their (optional) evidence. Here the forecast has special primacy over the evidence, because forecasts are the vital reference point and the drivers of debates in FAFs. They are, accordingly, both mandatory and required to `stand out' to participating agents. In the spirit of abstract argumentation \cite{Dung1995}, we nonetheless treat all arguments, including proposal arguments, as `abstract', and focus on relations between them rather than between their components. In practice, therefore, amendment arguments may relate to a proposal argument's forecast but also, if present, to its evidence. We opt for this abstract view on the assumption that the flexibility of this approach better suits judgmental forecasting, which has a diversity of use cases (e.g. including politics, economics and sport) where different argumentative approaches may be deployed (i.e. quantitative, qualitative, directly attacking amendment nodes or raising alternative POVs) and wherein forecasters may lack even a basic knowledge of argumentation. We leave the study of structured variants of our framework (e.g. see overview in \cite{structArg}) to future work: these may consider finer-grained representations of all arguments in terms of different components, and finer-grained notions of relations between components, rather than full arguments. Third, in update frameworks, voting is restricted to pro/con arguments. 
Preventing agents from voting directly on amendment arguments mitigates the risk of arbitrary judgements: agents cannot make off-the-cuff estimations but can only express their beliefs via (pro/con) argumentation, thus ensuring a more rigorous process of appraisal for the proposal and amendment arguments. Note that rather than facilitating voting on arguments using a two-valued perspective (i.e. positive/negative) or a three-valued perspective (i.e. positive/negative/neutral), $\ensuremath{\mathcal{V}}$ allows agents to cast more granular judgements of (pro/con) argument acceptability, the need for which has been highlighted in the literature \cite{Mellers2015}. Finally, although we envisage that arguments of all types are put forward by agents during debates, we do not capture this mapping in update frameworks. Thus, we do not capture who put forward which arguments, but instead only use votes to encode and understand agents' views. This enables more nuanced reasoning and full engagement on the part of agents with alternative viewpoints (i.e. an agent may freely argue both for and against a point before taking an explicit view with their voting). Such conditions are essential in a healthy forecasting debate \cite{Landeta2011,Mellers2015}. In the remainder of this paper, with an abuse of notation, we often use $\ensuremath{\Forecast^\Proposal}$ to denote, specifically, the probability advocated in $\ensuremath{\Forecast^\Proposal}$ (e.g. 0.75 in Example \ref{ProposalExample}). \section{Aggregating Rational Forecasts}\label{sec:forecasting} In this section we formally introduce (in \AX{§}\ref{subsec:rationality}) our notion of rationality and discuss how it may be used to identify\BI{, and subsequently `block',} undesirable behaviour in forecasters. 
We then define (in \AX{§}\ref{subsec:aggregation}) a method for calculating a revised forecast \BI{(Stage 2c of Figure \ref{fig:FAFdiag})}, which aggregates the views of all agents in the update framework, whilst optimising their overall forecasting accuracy. \subsection{Rationality}\label{subsec:rationality} Characterising an agent's view as irrational offers opportunities to refine the accuracy of their forecast (and thus the overall aggregated group forecast). Our definition of rationality is inspired by, but goes beyond, that of QuAD-V \cite{Rago2017}, which was introduced for the e-polling context. Whilst update frameworks eventually produce a single aggregated forecast on the basis of group deliberation, each agent is first evaluated for their rationality on an individual basis. Thus, as in QuAD-V, in order to define rationality for individual agents, we first reduce frameworks to \emph{delegate frameworks} for each agent, which are the restriction of update frameworks to a single agent. \begin{definition} A \emph{delegate framework} for an agent $\ensuremath{a}$ is $\ensuremath{u}_{\ensuremath{a}} =$ ⟨$\ensuremath{\mathcal{P}}, \ensuremath{\mathcal{X}}, \ensuremath{\AmmArgs^-}, \ensuremath{\AmmArgs^+}, \ensuremath{\Rels^p}, \ensuremath{\Rels}, \ensuremath{a}, \ensuremath{\mathcal{V}}_{\ensuremath{a}}, \ensuremath{\Forecast^\Agents}_{\ensuremath{a}}$⟩. \end{definition} Note that all arguments in an update framework are included in each agent's delegate framework, but only the agent's votes and forecast are carried over. Recognising the irrationality of an agent requires comparing the agent's forecast against (an aggregation of) their opinions on the amendment arguments and, by extension, on the proposal argument. To this end, we evaluate the different parts of the update framework as follows. 
We use the DF-QuAD algorithm \cite{Rago2016} to score each amendment argument for the agent, in the context of the pro/con arguments `linked' to it via $\ensuremath{\mathcal{R}}$ within the agent's delegate framework. We refer to the DF-QuAD score function as $\ensuremath{\mathcal{\sigma}}$. This requires a choice of base scores for amendment arguments as well as pro/con arguments. We assume the same base score $\ensuremath{\mathcal{\tau}}(\ensuremath{x})=0.5$ for all $\ensuremath{x} \in \ensuremath{\mathcal{X}}$; in contrast, the base score of pro/con arguments is a result of the votes they received from the agent, in the spirit of QuAD-V \cite{Rago2017}. The intuition behind assigning a neutral (0.5) base score for amendment arguments is that an agent's estimation of their strength from the outset would be susceptible to bias and inaccuracy. Once each amendment argument has been scored (using $\ensuremath{\mathcal{\sigma}}$) for the agent, we aggregate the scores of all amendment arguments (for the same agent) to calculate the agent's \emph{confidence score} in the proposal argument (which underpins our rationality constraints), by weighing the mean strength of the increase amendment arguments against that of the decrease amendment arguments: \begin{definition}\label{def:conscore} Given a delegate framework $\ensuremath{u}_{\ensuremath{a}}$ = ⟨$\ensuremath{\mathcal{P}}$, $\ensuremath{\mathcal{X}}$, $\ensuremath{\AmmArgs^-}$, $\ensuremath{\AmmArgs^+}$, $\ensuremath{\Rels^p}$, $\ensuremath{\Rels}$, $\ensuremath{a}$, $\ensuremath{\mathcal{V}}_{\ensuremath{a}}$, $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}}$⟩, let $\ensuremath{\AmmArgs^\uparrow} = \{ \ensuremath{\incarg_1}, \ensuremath{\incarg_2}, \ldots , \ensuremath{\arg^\uparrow}_i \}$ and $\ensuremath{\AmmArgs^\downarrow} = \{ \ensuremath{\decarg_1}, \ensuremath{\decarg_2}, \ldots , \ensuremath{\arg^\downarrow}_j \}$. 
Then, $\ensuremath{a}$'s \emph{confidence score} is as follows: \begin{align} &\text{if } i\neq0, j\neq0: \quad \ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}}) = \frac{1}{i} \sum_{k=1}^{i} \ensuremath{\mathcal{\sigma}}(\ensuremath{\arg^\uparrow}_k) - \frac{1}{j} \sum_{l=1}^{j} \ensuremath{\mathcal{\sigma}}(\ensuremath{\arg^\downarrow}_l); \nonumber \\ &\text{if } i\neq0, j=0: \quad \ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}}) = \frac{1}{i} \sum_{k=1}^{i} \ensuremath{\mathcal{\sigma}}(\ensuremath{\arg^\uparrow}_k); \nonumber \\ &\text{if } i=0, j\neq0: \quad \ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}}) = - \frac{1}{j} \sum_{l=1}^{j} \ensuremath{\mathcal{\sigma}}(\ensuremath{\arg^\downarrow}_l); \nonumber \\ &\text{if } i=0, j=0: \quad \ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}}) = 0. \nonumber \end{align} \end{definition} Note that $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}}) \in [-1,1]$, which denotes the overall views of the agent on the forecast $\ensuremath{\Forecast^\Proposal}$ (i.e. as to whether it should be \emph{increased} or \emph{decreased}, and how far). A negative (positive) $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}})$ indicates that an agent believes that $\ensuremath{\Forecast^\Proposal}$ should be amended down (up, resp.). The size of $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}})$ reflects the degree of the agent's certainty in either direction. In turn, we can constrain an agent's forecast $\ensuremath{\Forecast^\Agents}_\ensuremath{a}$ if it contradicts this belief as follows. 
\begin{definition}\label{def:irrationality} Given a delegate framework $\ensuremath{u}_{\ensuremath{a}}$ = ⟨$\ensuremath{\mathcal{P}}$, $\ensuremath{\mathcal{X}}$, $\ensuremath{\AmmArgs^-}$, $\ensuremath{\AmmArgs^+}$, $\ensuremath{\Rels^p}$, $\ensuremath{\Rels}$, $\ensuremath{a}$, $\ensuremath{\mathcal{V}}_{\ensuremath{a}}$, $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}}$⟩, $\ensuremath{a}$'s forecast $\ensuremath{\Forecast^\Agents}_\ensuremath{a}$ is \emph{strictly rational} (wrt $\ensuremath{u}_{\ensuremath{a}}$) iff: \begin{align} &\text{if } \ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}}) < 0 \text{ then } \ensuremath{\Forecast^\Agents}_\ensuremath{a} < \ensuremath{\Forecast^\Proposal}; \nonumber \\ &\text{if } \ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}}) > 0 \text{ then } \ensuremath{\Forecast^\Agents}_\ensuremath{a} > \ensuremath{\Forecast^\Proposal}; \nonumber \\ &\mid\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}})\mid \geq \frac{\mid\ensuremath{\Forecast^\Proposal} - \ensuremath{\Forecast^\Agents}_\ensuremath{a}\mid}{\ensuremath{\Forecast^\Proposal}}. \nonumber \end{align} \end{definition} Hereafter, we refer to forecasts which violate the first two constraints as, resp., \emph{irrational increase} and \emph{irrational decrease} forecasts, and to forecasts which violate the final constraint as \emph{irrational scale} forecasts. This definition of rationality preserves the integrity of the group forecast in two ways. 
First, it prevents agents from forecasting against their beliefs: an agent cannot increase $\ensuremath{\Forecast^\Proposal}$ if $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}}) < 0$ and an agent cannot decrease $\ensuremath{\Forecast^\Proposal}$ if $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}}) > 0$; further, it ensures that agents cannot make forecasts disproportionate to their confidence score -- \emph{how far} an agent $\ensuremath{a}$ deviates from the proposed change $\ensuremath{\Forecast^\Proposal}$ is restricted by $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}})$; finally, an agent must have $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}})$ greater than or equal to the relative change to $\ensuremath{\Forecast^\Proposal}$ denoted in their forecast $\ensuremath{\Forecast^\Agents}_\ensuremath{a}$. Note that the \emph{irrational scale} constraint deals with just one direction of proportionality (i.e. providing only a maximum threshold for $\ensuremath{\Forecast^\Agents}_\ensuremath{a}$'s deviation from $\ensuremath{\Forecast^\Proposal}$, but no minimum threshold). Here, we avoid bidirectional proportionality on the grounds that such a constraint would impose an arbitrary notion of arguments' `impact' on agents. An agent may have a very high $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}})$, indicating \FT{their} belief that $\ensuremath{\Forecast^\Proposal}$ is too low, but \AX{may}, we suggest, rationally choose to increase $\ensuremath{\Forecast^\Proposal}$ by only a small amount (e.g. if, despite \FT{their} general agreement with the arguments, \FT{they} believe the overall issue at stake in $\ensuremath{\mathcal{P}}$ to be minor or low impact to the overall forecasting question). 
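To make Defs.~\ref{def:conscore} and \ref{def:irrationality} concrete, they reduce to a short Python sketch (the function names are ours and purely illustrative; \texttt{inc}/\texttt{dec} are assumed to hold the DF-QuAD strengths $\ensuremath{\mathcal{\sigma}}$ of the agent's increase/decrease amendment arguments, and $\ensuremath{\Forecast^\Proposal} > 0$ is assumed for the scale check):

```python
def confidence(inc, dec):
    """Confidence score C_a(P): mean strength of increase amendment
    arguments minus mean strength of decrease ones (0.0 if a class is
    empty). Result lies in [-1, 1]."""
    up = sum(inc) / len(inc) if inc else 0.0
    down = sum(dec) / len(dec) if dec else 0.0
    return up - down

def violations(c, fp, fa):
    """Names of the strict-rationality constraints violated by an agent's
    forecast fa, given confidence score c and proposed forecast fp > 0."""
    out = []
    if c < 0 and not fa < fp:
        out.append("irrational increase")
    if c > 0 and not fa > fp:
        out.append("irrational decrease")
    if abs(c) < abs(fp - fa) / fp:
        out.append("irrational scale")
    return out
```

For instance, with $\ensuremath{\mathcal{C}}_{alice}(\ensuremath{\mathcal{P}}) = -0.5$ and $\ensuremath{\Forecast^\Proposal} = 0.75$, any forecast in $[0.375, 0.75)$ passes all three checks, matching the 50\% maximum relative decrease discussed in the text.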
Our definition of rationality, which relies on notions of argument strength derived from DF-QuAD, thus informs but does not wholly dictate agents' forecasting, affording them considerable freedom. We leave alternative, stricter definitions of rationality, which may derive from probabilistic conceptions of argument strength, to future work. \begin{example} Consider our running Tokyo Olympics example, with the same arguments and relations from Example \ref{eg:tokyo} and an agent \BIn{$alice$} with a confidence score \BIn{$\ensuremath{\mathcal{C}}_{alice}(\ensuremath{\mathcal{P}}) = -0.5$}. From this we know that \BIn{$alice$} believes that the suggested $\ensuremath{\Forecast^\Proposal}$ in the proposal argument $\ensuremath{\mathcal{P}}$ should be decreased. Then, under our definition of rationality, \BIn{$alice$'s} forecast \BIn{$\ensuremath{\Forecast^\Agents}_{alice}$} is `rational' if it decreases $\ensuremath{\Forecast^\Proposal}$ by up to 50\%. \end{example} If an agent's forecast $\ensuremath{\Forecast^\Agents}_\ensuremath{a}$ violates these rationality constraints then \BI{it is `blocked'} and the agent is prompted to return to the argumentation graph. From here, they may carry out one or more of the following actions to render their forecast rational: a. Revise their forecast; b. Revise their votes on arguments; c. Add new arguments to the update framework (and vote on them). Whilst a) and b) occur on an agent-by-agent basis, confined to each delegate framework, c) affects the shared update framework and requires special consideration. Each time new \AX{arguments} are added to the shared graph, every agent must vote on \AX{them}, even if they have already made a rational forecast. 
In certain cases, after an agent has voted on a new argument, it is possible that their rational forecast is made irrational. In this instance, the agent must resolve their irrationality via the steps above. In this way, the update framework can be refined on an iterative basis until the graph is no longer being modified and all agents' forecasts are rational. At this stage, the update framework has reached a stable state and the agents $\ensuremath{\mathcal{A}}$ are collectively rational: \begin{definition} Given an update framework $\ensuremath{u}$ = ⟨$\ensuremath{\mathcal{P}}$, $\ensuremath{\mathcal{X}}$, $\ensuremath{\AmmArgs^-}$, $\ensuremath{\AmmArgs^+}$, $\ensuremath{\Rels^p}$, $\ensuremath{\Rels}$, $\ensuremath{\mathcal{A}}$, $\ensuremath{\mathcal{V}}$, $\ensuremath{\Forecast^\Agents}$⟩, $\ensuremath{\mathcal{A}}$ is \emph{collectively rational} (wrt \emph{u}) iff $\forall \ensuremath{a} \in \ensuremath{\mathcal{A}}$, $\ensuremath{a}$ is individually rational (wrt the delegate framework $\ensuremath{u}_{\ensuremath{a}}$ = ⟨$\ensuremath{\mathcal{P}}$, $\ensuremath{\mathcal{X}}$, $\ensuremath{\AmmArgs^-}$, $\ensuremath{\AmmArgs^+}$, $\ensuremath{\Rels^p}$, $\ensuremath{\Rels}$, $\ensuremath{a}$, $\ensuremath{\mathcal{V}}_{\ensuremath{a}}$, $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}}$⟩). \end{definition} When $\ensuremath{\mathcal{A}}$ is collectively rational, the update framework $u$ becomes immutable and the aggregation (defined next) \AX{produces} a group forecast $\ensuremath{\Forecast^g}$, which becomes the new $\ensuremath{\mathcal{F}}$. \subsection{Aggregating Forecasts}\label{subsec:aggregation} After all the agents have made a rational forecast, an aggregation function is applied to produce one collective forecast. One advantage of forecasting debates vis-a-vis \AX{the} many other forms of debate, is that a ground truth always exists -- an event either happens or does not. 
This means that, over time and after enough FAF instantiations, data on the forecasting success of different agents can be amassed. In turn, the relative historical performance of forecasting agents can inform the aggregation of group forecasts. In update frameworks, a weighted aggregation function based on Brier Scoring \cite{Brier1950} is used, such that more accurate forecasting agents have greater influence over the final forecast. Brier Scores are a widely used criterion to measure the accuracy of probabilistic predictions, effectively gauging the distance between a forecaster's predictions and an outcome after it has(n't) happened, as follows. \begin{definition} \label{def:bscore} Given an agent $\ensuremath{a}$, a non-empty series of forecasts $\ensuremath{\Forecast^\Agents}_\ensuremath{a}(1), \ldots, \ensuremath{\Forecast^\Agents}_\ensuremath{a}(\ensuremath{\mathcal{N}}_{\ensuremath{a}})$ with corresponding actual outcomes $\ensuremath{\mathcal{O}}_1, \ldots,$ $\ensuremath{\mathcal{O}}_{\ensuremath{\mathcal{N}}_{\ensuremath{a}}} \in \{true, false\}$ (where $\ensuremath{\mathcal{N}}_{\ensuremath{a}}>0$ is the number of forecasts $\ensuremath{a}$ has made in a non-empty sequence of as many update frameworks), $\ensuremath{a}$'s Brier Score $\ensuremath{b}_{\ensuremath{a}} \in [0, 1]$ is as follows: \begin{align} \ensuremath{b}_{\ensuremath{a}} = \frac{1}{\ensuremath{\mathcal{N}}_{\ensuremath{a}}} \sum_{t=1}^{\ensuremath{\mathcal{N}}_{\ensuremath{a}}} (\ensuremath{\Forecast^\Agents}_\ensuremath{a}(t) - val(\ensuremath{\mathcal{O}}_t))^2 \nonumber \end{align} where $val(\ensuremath{\mathcal{O}}_t)=1$ if $\ensuremath{\mathcal{O}}_t=true$, and 0 otherwise. \end{definition} A Brier Score $\ensuremath{b}$ is effectively the mean squared error used to gauge forecasting accuracy, where a low $\ensuremath{b}$ indicates high accuracy and high $\ensuremath{b}$ indicates low accuracy. 
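The Brier Score of Def.~\ref{def:bscore} is straightforward to compute; a minimal sketch (list-based, with boolean outcomes; names are ours):

```python
def brier(forecasts, outcomes):
    """Mean squared error between an agent's probabilistic forecasts and
    the realised outcomes (True iff the event happened). Lower is better."""
    assert forecasts and len(forecasts) == len(outcomes)
    return sum((f - (1.0 if o else 0.0)) ** 2
               for f, o in zip(forecasts, outcomes)) / len(forecasts)
```

For instance, a forecaster who assigned 0.8 to an event that happened and 0.3 to one that did not scores $(0.04 + 0.09)/2 = 0.065$.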
This can be used in the update framework's aggregation function via the weighted arithmetic mean as follows. \AX{E}ach Brier Score is inverted to ensure that more (less, resp.) accurate forecasters have higher (lower, resp.) weighted influence\AX{s} on $\ensuremath{\Forecast^g}$: \begin{definition}\label{def:group} Given a set of agents $\ensuremath{\mathcal{A}} = \{\ensuremath{a}_1, \ldots,\ensuremath{a}_n\}$, their corresponding set of Brier Scores $\ensuremath{b} = \{\ensuremath{b}_{\ensuremath{a}_1}, \ldots,\ensuremath{b}_{\ensuremath{a}_n}\}$ and their forecasts $\ensuremath{\Forecast^\Agents} = \{\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_1}, \ldots,\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_n}\}$, and letting, for $i \!\!\in\!\! \{ 1, \ldots, n\}$, $w_{i} \!\!=\!\! 1-\ensuremath{b}_{\ensuremath{a}_i}$, the \emph{group forecast} $\ensuremath{\Forecast^g}$ is as follows: \begin{align} &\text{if } \sum_{i=1}^{n}w_{i} \neq 0: & &\ensuremath{\Forecast^g} = \frac{\sum_{i=1}^{n}w_{i}\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_i}}{\sum_{i=1}^{n}w_{i}}; \nonumber \\ &\text{otherwise}: & &\ensuremath{\Forecast^g} = 0. 
\nonumber \end{align} \end{definition} This group forecast could be `activated' after a fixed number of debates (with the mean average used prior), when sufficient data has been collected on the accuracy of participating agents, or after a single debate, in the context of our general \emph{Forecasting Argumentation Frameworks}: \begin{definition} A \emph{Forecasting Argumentation Framework} (FAF) is a triple ⟨$ \ensuremath{\mathcal{F}}, \ensuremath{\mathcal{U}}, \ensuremath{\mathcal{T}}$⟩ such that: \item[$\bullet$] $\ensuremath{\mathcal{F}}$ is a \emph{forecast}; \item[$\bullet$] $\ensuremath{\mathcal{U}}$ is a finite, non-empty sequence of update frameworks with \ensuremath{\mathcal{F}}\ the forecast of the proposal argument in the first update framework in the sequence\AR{;} the forecast of each subsequent update framework is the group forecast of the previous update framework's agents' forecasts; \item[$\bullet$] $\ensuremath{\mathcal{T}}$ is a preset time limit representing the lifetime of the FAF; \item[$\bullet$] each agent's forecast wrt the agent's delegate framework drawn from each update framework is strictly rational. \end{definition} \begin{example} \BIn{Consider our running Tokyo Olympics example: the overall FAF may be composed of $\ensuremath{\mathcal{F}} = 0.15$, update frameworks $\ensuremath{\mathcal{U}} = \{ u_1, u_2, u_3 \}$ and time limit $\ensuremath{\mathcal{T}}=14\ days$, where $u_3$ is the latest (and therefore the only open) update framework after, for example, four days.} \end{example} \AX{T}he superforecasting literature explores a range of forecast aggregation algorithms: extremizing algorithms \cite{Baron2014} and variations on logistic \AX{and} Fourier $L_2E$ regression \cite{Cross2018}, with considerable success. \AX{T}hese approaches \AX{aim} at ensuring that less certain \AX{or less} accurate forecasts have a lesser influence over the final aggregated forecast. 
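The weighted aggregation of Def.~\ref{def:group} can be sketched as follows (an illustration under our own naming; each agent's weight is the complement of their Brier Score):

```python
def group_forecast(forecasts, brier_scores):
    """Weighted arithmetic mean of agents' forecasts with weights
    w_i = 1 - b_i, so historically accurate forecasters count for more.
    Returns 0.0 in the degenerate case where all weights are zero."""
    weights = [1.0 - b for b in brier_scores]
    total = sum(weights)
    if total == 0:
        return 0.0
    return sum(w * f for w, f in zip(weights, forecasts)) / total
```

For instance, two agents forecasting 0.6 and 0.8 with Brier Scores 0 and 0.5 yield weights 1 and 0.5, and hence a group forecast of $1.0/1.5 \approx 0.67$, closer to the historically more accurate agent.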
We believe that FAFs apply a more intuitive algorithm \AX{since} much of the `work' needed to bypass inaccurate and erroneous forecasting is \AX{expedited} via argumentation. \section{Properties}\label{sec:props} We now undertake a theoretical analysis of FAFs by considering mathematical properties they satisfy. Note that the properties of the DF-QuAD algorithm (see \cite{Rago2016}) hold (for the amendment and pro/con arguments) here. For brevity, we focus on novel properties unique to FAFs which relate to our new argument types. These properties focus on aggregated group forecasts wrt a debate (update framework). They imply two broad and, we posit, desirable principles: \emph{balance} and \emph{unequal representation}. We assume for this section a generic update framework $\ensuremath{u} = $ ⟨$\ensuremath{\mathcal{P}}$, $\ensuremath{\mathcal{X}}$, $\ensuremath{\AmmArgs^\downarrow}$, $\ensuremath{\AmmArgs^\uparrow}$, $\ensuremath{\Rels^p}$, $\ensuremath{\Rels}$, $\ensuremath{\mathcal{A}}$, $\ensuremath{\mathcal{V}}$, $\ensuremath{\Forecast^\Agents}$⟩ with group forecast $\ensuremath{\Forecast^g}$. \paragraph{Balance.} The intuition for these properties is that differences between $\ensuremath{\Forecast^g}$ and $\ensuremath{\Forecast^\Proposal}$ correspond to imbalances between the \emph{increase} and \emph{decrease} amendment arguments. The first result states that $\ensuremath{\Forecast^g}$ only differs from $\ensuremath{\Forecast^\Proposal}$ if $\ensuremath{\Forecast^\Proposal}$ is the dialectical target of amendment arguments. \begin{proposition} \label{prop:balance1} If $\ensuremath{\mathcal{X}}\!\!=\!\!\emptyset$ ($\ensuremath{\AmmArgs^\downarrow}\!\!=\!\!\emptyset$ and $\ensuremath{\AmmArgs^\uparrow}\!\!=\!\!\emptyset$), then $\ensuremath{\Forecast^g}\!\!=\!\!\ensuremath{\Forecast^\Proposal}$.
\end{proposition} \begin{proof} \AX{If $\ensuremath{\AmmArgs^\downarrow}\!\!=\!\!\emptyset$ and $\ensuremath{\AmmArgs^\uparrow}\!\!=\!\!\emptyset$ then $\forall \ensuremath{a} \!\in\! \ensuremath{\mathcal{A}}$, $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}})\!=\!0$ by Def.~\ref{def:conscore} and $\ensuremath{\Forecast^\Agents}_\ensuremath{a}=\ensuremath{\Forecast^\Proposal}$ by Def.~\ref{def:irrationality}. Then, $\ensuremath{\Forecast^g}=\ensuremath{\Forecast^\Proposal}$ by Def.~\ref{def:group}.} \end{proof} \AX{T}his simple proposition conveys an important property for forecasting: for an agent to put forward a different forecast, amendment arguments must have been introduced. \begin{example} In the Olympics setting, the group of agents could only forecast higher or lower than the proposed forecast $\ensuremath{\Forecast^\Proposal}$ after the addition of at least one of \AX{the} amendment arguments $\ensuremath{\decarg_1}$, $\ensuremath{\decarg_2}$ or $\ensuremath{\incarg_1}$. \end{example} In the absence of increase \FTn{(decrease)} amendment arguments, if there are decrease \FTn{(increase, resp.)} amendment arguments, then $\ensuremath{\Forecast^g}$ is not higher \FTn{(lower, resp.)} than $\ensuremath{\Forecast^\Proposal}$. \begin{proposition}\label{prop:balance2} If $\ensuremath{\AmmArgs^\downarrow}\neq\emptyset$ and $\ensuremath{\AmmArgs^\uparrow}=\emptyset$, then $\ensuremath{\Forecast^g} \leq\ensuremath{\Forecast^\Proposal}$. \FTn{\label{balance3prop} If $\ensuremath{\AmmArgs^\downarrow}=\emptyset$ and $\ensuremath{\AmmArgs^\uparrow}\neq\emptyset$, then $\ensuremath{\Forecast^g}\geq\ensuremath{\Forecast^\Proposal}$.} \end{proposition} \begin{proof} \AX{If $\ensuremath{\AmmArgs^\downarrow}\!\! \neq \!\!\emptyset$ and $\ensuremath{\AmmArgs^\uparrow}\!\!=\!\!\emptyset$ then $\forall \ensuremath{a} \!\in\!
\ensuremath{\mathcal{A}}$, $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}})\!\leq\!0$ by Def.~\ref{def:conscore} and then $\ensuremath{\Forecast^\Agents}_\ensuremath{a}\!\leq\!\ensuremath{\Forecast^\Proposal}$ by Def.~\ref{def:irrationality}. Then, by Def.~\ref{def:group}, $\ensuremath{\Forecast^g}\!\leq\!\ensuremath{\Forecast^\Proposal}$. If $\ensuremath{\AmmArgs^\downarrow}\!\!=\!\!\emptyset$ and $\ensuremath{\AmmArgs^\uparrow}\!\!\neq\!\!\emptyset$ then $\forall \ensuremath{a} \!\in\! \ensuremath{\mathcal{A}}$, $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}})\!\geq\!0$ by Def.~\ref{def:conscore} and then $\ensuremath{\Forecast^\Agents}_\ensuremath{a}\!\geq\!\ensuremath{\Forecast^\Proposal}$ by Def.~\ref{def:irrationality}. Then, by Def.~\ref{def:group}, $\ensuremath{\Forecast^g}\!\geq\!\ensuremath{\Forecast^\Proposal}$.} \end{proof} This proposition demonstrates that, if a decrease \BIn{(increase)} amendment argument has an effect on proposal arguments, it can only be as its name implies. \begin{example} \BIn{In the Olympics setting, the agents could not forecast higher than the proposed forecast $\ensuremath{\Forecast^\Proposal}$ if either of the decrease amendment arguments $\ensuremath{\decarg_1}$ or $\ensuremath{\decarg_2}$ \AX{had} been added, but the increase argument $\ensuremath{\incarg_1}$ \AX{had} not. Likewise, \AX{the} agents could not forecast lower than $\ensuremath{\Forecast^\Proposal}$ if $\ensuremath{\incarg_1}$ \AX{had} been added, but \AX{neither} of $\ensuremath{\decarg_1}$ or $\ensuremath{\decarg_2}$ \AX{had}.} \end{example} If $\ensuremath{\Forecast^g}$ is lower \BIn{(higher)} than $\ensuremath{\Forecast^\Proposal}$, there is at least one decrease \BIn{(increase, resp.)} argument. \begin{proposition} \label{prop:balance4} If $\ensuremath{\Forecast^g}<\ensuremath{\Forecast^\Proposal}$, then $\ensuremath{\AmmArgs^\downarrow}\neq\emptyset$.
\BIn{If $\ensuremath{\Forecast^g}>\ensuremath{\Forecast^\Proposal}$, then $\ensuremath{\AmmArgs^\uparrow}\neq\emptyset$.} \end{proposition} \begin{proof} \AX{ If $\ensuremath{\Forecast^g}<\ensuremath{\Forecast^\Proposal}$ then, by Def.~\ref{def:group}, $\exists \ensuremath{a} \in \ensuremath{\mathcal{A}}$ where $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}}<\ensuremath{\Forecast^\Proposal}$, for which it holds from Def.~\ref{def:irrationality} that $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}})<0$. Then, irrespective of $\ensuremath{\AmmArgs^\uparrow}$, $\ensuremath{\AmmArgs^\downarrow}\neq\emptyset$. If $\ensuremath{\Forecast^g}>\ensuremath{\Forecast^\Proposal}$ then, by Def.~\ref{def:group}, $\exists \ensuremath{a} \in \ensuremath{\mathcal{A}}$ where $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}}>\ensuremath{\Forecast^\Proposal}$, for which it holds from Def.~\ref{def:irrationality} that $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}})>0$. Then, irrespective of \BIn{$\ensuremath{\AmmArgs^\downarrow}$, $\ensuremath{\AmmArgs^\uparrow}\neq\emptyset$}. } \end{proof} We can see here that the only way an agent can decrease \BIn{(increase)} the forecast is \FT{by adding} decrease \BIn{(increase, resp.)} arguments, ensuring the debate is structured as \FT{intended}. \begin{example} \BIn{In the Olympics setting, the group of agents could only produce a group forecast $\ensuremath{\Forecast^g}$ lower than $\ensuremath{\Forecast^\Proposal}$ due to the presence of \emph{decrease} amendment arguments $\ensuremath{\decarg_1}$ or $\ensuremath{\decarg_2}$. Likewise, the group of agents could only produce a $\ensuremath{\Forecast^g}$ higher than $\ensuremath{\Forecast^\Proposal}$ due to the presence of $\ensuremath{\incarg_1}$.} \end{example} \paragraph{Unequal representation.} FAFs exhibit instances of unequal representation in the final voting process.
In formulating the following properties, we distinguish between two forms of unequal representation. First, \emph{dictatorship}, where a single agent dictates $\ensuremath{\Forecast^g}$ with no input from other agents. Second, \emph{pure oligarchy}, where a group of agents dictates $\ensuremath{\Forecast^g}$ with no input from other agents outside the group. In the forecasting setting, these properties are desirable as they guarantee higher accuracy \AX{from} the group forecast $\ensuremath{\Forecast^g}$. An agent with a forecasting record of \emph{some} accuracy exercises \emph{dictatorship} over the group forecast $\ensuremath{\Forecast^g}$ if the rest of the participating \AX{agents} have a record of total inaccuracy. \begin{proposition}\label{prop:dictatorship} If $\ensuremath{a}_d\in\ensuremath{\mathcal{A}}$ has a Brier score $\ensuremath{b}_{\ensuremath{a}_d}<1$ and $\forall \ensuremath{a}_z\in\ensuremath{\mathcal{A}} \setminus \{\ensuremath{a}_d$\}, $\ensuremath{b}_{\ensuremath{a}_z} = 1$, then $\ensuremath{\Forecast^g}=\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_d}$. \end{proposition} \begin{proof} \AX{ By Def.~\ref{def:group}: if $\ensuremath{b}_{\ensuremath{a}_z} \!\!\!=\!\! 1$ $\forall \ensuremath{a}_z\!\in\!\ensuremath{\mathcal{A}} \!\setminus\! \{\!\ensuremath{a}_d\!\}$, then $w_{\ensuremath{a}_z}\!\!\!=\!0$; and if $\ensuremath{b}_{\ensuremath{a}_d}\!\!<\!\!1$, then $w_{\ensuremath{a}_d}\!\!>\!\!0$. Then, again by Def.~\ref{def:group}, $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_d}$ is weighted at 100\% and $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_z}$ is weighted at 0\% so $\ensuremath{\Forecast^g}\!\!=\!\!\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_d}$. } \end{proof} This proposition demonstrates how we will disregard agents with total inaccuracy, even in \FT{the} extreme case where we allow one (more accurate) agent to dictate the forecast.
\begin{example} \BIn{In the running example, if \AX{alice, bob and charlie have Brier scores of 0.5, 1 and 1, resp., bob's and charlie's forecasts have} no impact on $\ensuremath{\Forecast^g}$, whilst \AX{alice's} forecast becomes the group forecast $\ensuremath{\Forecast^g}$.} \end{example} A group of agents with a forecasting record of \emph{some} accuracy exercises \emph{pure oligarchy} over $\ensuremath{\Forecast^g}$ if the rest of the \AX{agents} all have a record of total inaccuracy. \begin{proposition}\label{oligarchytotalprop} Let $\ensuremath{\mathcal{A}} = \ensuremath{\mathcal{A}}_o \cup \ensuremath{\mathcal{A}}_z$ where $\ensuremath{\mathcal{A}}_o \cap \ensuremath{\mathcal{A}}_z = \emptyset$, $\ensuremath{b}_{\ensuremath{a}_o}<1$ $\forall \ensuremath{a}_o \in \ensuremath{\mathcal{A}}_o$ and $\ensuremath{b}_{\ensuremath{a}_z}=1$ $\forall \ensuremath{a}_z \in \ensuremath{\mathcal{A}}_z$. Then, $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_o}$ is weighted at $>0\%$ and $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_z}$ is weighted at $0\%$. \end{proposition} \begin{proof} \AX{ By Def.~\ref{def:group}: if $\ensuremath{b}_{\ensuremath{a}_z} = 1$ $\forall \ensuremath{a}_z\in\ensuremath{\mathcal{A}}_z$, then $w_{\ensuremath{a}_z}=0$; and if $\ensuremath{b}_{\ensuremath{a}_o}<1$ $\forall \ensuremath{a}_o\in\ensuremath{\mathcal{A}}_o$, then $w_{\ensuremath{a}_o}>0$. Then, again by Def.~\ref{def:group}, $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_o}$ is weighted at $> 0\%$ and $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_z}$ is weighted at $0\%$. } \end{proof} This proposition extends the behaviour from Proposition \ref{prop:dictatorship} to the (more desirable) case where fewer agents have a record of total inaccuracy.
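Both unequal-representation properties can be checked numerically against a minimal re-implementation of the group forecast of Def.~\ref{def:group} (an illustrative sketch; function and variable names are our own):

```python
def group_forecast(forecasts, brier_scores):
    """Weighted mean with weights 1 - Brier score, as in the group forecast
    definition; returns 0 when all weights vanish."""
    weights = [1.0 - b for b in brier_scores]
    total = sum(weights)
    if total == 0:
        return 0.0
    return sum(w * f for w, f in zip(weights, forecasts)) / total

# Dictatorship: alice (Brier 0.5) vs totally inaccurate bob and charlie
# (Brier 1): alice's forecast alone becomes the group forecast.
assert group_forecast([0.3, 0.9, 0.1], [0.5, 1.0, 1.0]) == 0.3

# Pure oligarchy: agents with Brier score 1 get weight 0; the rest share 100%.
weights = [1.0 - b for b in [1.0, 0.2, 0.6]]
assert weights[0] == 0.0 and all(w > 0 for w in weights[1:])
```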
\begin{example} \BIn{In the running example, if \AX{alice, bob and charlie have Brier scores of 1, 0.2 and 0.6, resp., alice's forecast} has no impact on $\ensuremath{\Forecast^g}$, whilst \AX{bob and charlie's} aggregated forecast becomes the group forecast $\ensuremath{\Forecast^g}$.} \end{example} \section{Evaluation}\label{sec:experiments} \BI{We conducted an experiment using a dataset obtained from the `Superforecasting' project, Good Judgment Inc \cite{GJInc}, to simulate four past forecasting debates in FAFs. This dataset contained 1770 datapoints (698 `forecasts' and 1072 `comments') posted by 242 anonymised users with a range of expertise. The original debates had occurred on the publicly available group forecasting platform, the Good Judgment Open (GJO)\footnote{https://www.gjopen.com/}, providing a suitable baseline against which to compare FAFs' accuracy.} \BI{For the experiment, we used a prototype implementation of FAFs in the form of the publicly available web platform called \emph{Arg\&Forecast} (see \cite{Irwin2022} for an introduction to the platform and an additional human experiment with FAFs). Python's Gensim topic modelling library \cite{rehurek2011gensim} was used to separate the datapoints for each debate into contextual-temporal groups that could form update frameworks.} In each update framework the proposal forecast was set to the mean average of forecasts made in the update framework window and each argument appeared only once. Gensim was further used to simulate voting, matching users to specific arguments they (dis)approved of.
Some 4,700 votes \AX{were then} generated with a three-valued system (where votes were taken from \{0,0.5,1\}) to ensure consistency: if a user voiced approval for an argument in the debate time window, their vote for the corresponding argument(s) was set to 1; disapproval for an argument led to a vote of 0, and (in the most common case) if a user did not mention an argument at all, their vote for the corresponding argument(s) defaulted to 0.5. With the views of all participating users wrt the proposal argument encoded in each update framework's votes, forecasts could then be simulated. If a forecast was irrational, violating any of the three constraints in Def.~\ref{def:irrationality} (referred to \AX{in the following} as \emph{increase}, \emph{decrease} and \emph{scale}, resp.), it was blocked and, to mimic real life use, an automatic `follow up' forecast was made. The `follow up' forecast would be the closest possible prediction (to their original choice) a user could make whilst remaining `rational'. \BI{Note that evaluation of the aggregation function described in \AX{§}\ref{subsec:aggregation} was outside this experiment, since the past forecasting accuracy of the dataset's 242 anonymised users was unavailable. Instead, we used \AX{the} mean average whilst adopting the GJO's method for scoring the accuracy of a user and/or group over the lifetime of the question \cite{roesch_2015}. This meant calculating a daily forecast and daily Brier score for each user, for every day of the question. After users made their first rational forecast, that forecast became their `daily forecast' until it was updated with a new forecast. 
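This carry-forward daily scoring can be sketched as follows (illustrative Python; we assume the binary Brier score $b=(f-o)^2 \in [0,1]$, consistent with the weights $1-\ensuremath{b}$ of §\ref{subsec:aggregation}, and the names are our own):

```python
def daily_brier_scores(forecast_log, outcome, num_days):
    """Score one user's forecasts day by day: the latest forecast is carried
    forward as the `daily forecast' until updated; days before the user's
    first forecast are not scored.

    forecast_log -- sorted list of (day, forecast) pairs
    outcome      -- realised outcome of the binary question (0 or 1)
    Assumes the binary Brier score b = (f - o)^2, in [0, 1].
    """
    log = dict(forecast_log)
    scores, current = [], None
    for day in range(num_days):
        if day in log:
            current = log[day]  # the user updated their forecast on this day
        if current is not None:
            scores.append((current - outcome) ** 2)
    return scores

# A user forecasts 0.7 on day 0 and revises to 0.9 on day 3; the outcome is 1.
daily = daily_brier_scores([(0, 0.7), (3, 0.9)], outcome=1, num_days=5)
average_brier = sum(daily) / len(daily)  # the user's average daily Brier score
```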
Average and range of daily Brier scores allowed reliable comparison between (individual and aggregated) performance of the GJO versus the FAF implementation.} \begin{table}[t] \begin{tabular}{@{}llll@{}} \toprule Q & Group $\ensuremath{b}$ & $\min(\ensuremath{b})$ & $\max(\ensuremath{b})$ \\ \midrule Q1 & 0.1013 (0.1187) & 0.0214 (0) & 0.4054 (1) \\ Q2 & 0.216 (0.1741) & 0 (0) & 0.3853 (1) \\ Q3 & 0.01206 (0.0227) & 0.0003 (0) & 0.0942 (0.8281) \\ Q4 & 0.5263 (0.5518) & 0 (0) & 0.71 (1) \\ \midrule \textbf{All} & \textbf{0.2039 (0.217)} & \textbf{0 (0)} & \textbf{1 (1)} \\ \bottomrule \end{tabular} \caption{The accuracy of the platform group versus control, where \AX{`}Group $\ensuremath{b}$\AX{'} is the aggregated (mean) Brier score, `$\min(\ensuremath{b})$' is the lowest individual Brier score and `$\max(\ensuremath{b})$' is the highest individual Brier score. Q1-Q4 indicate the four simulated debates.} \label{accuracyExp1} \end{table} \begin{table}[t] \begin{tabular}{llllll} \hline \multirow{2}{*}{Q} & \multirow{2}{*}{$\overline{\ensuremath{\mathcal{C}}}$} & \multirow{2}{*}{Forecasts} & \multicolumn{3}{c}{Irrational Forecasts} \\ \cline{4-6} & & & \multicolumn{1}{c}{\emph{Increase} } \!\!\!\! & \multicolumn{1}{c}{\emph{Decrease} } \!\!\!\! & \multicolumn{1}{c}{\emph{Scale} }\!\! \!\!
\\ \hline Q1 & -0.0418 & 366 & 63 & 101 & 170 \\ Q2 & 0.1827 & 84 & 11 & 15 & 34 \\ Q3 & -0.4393 & 164 & 53 & 0 & 86 \\ Q4 & 0.3664 & 84 & 4 & 19 & 15 \\ \hline All & -0.0891 & 698 & 131 & 135 & 305 \\ \hline \end{tabular} \caption{Auxiliary results from \FT{the experiment}, where $\overline{\ensuremath{\mathcal{C}}}$ is the average confidence score, `Forecasts' is number of forecasts made in each question and `Irrational Forecasts' the number in each question which violated each constraint in §\ref{subsec:rationality}.} \label{exp1auxinfo} \end{table} \paragraph{Results.} As Table \ref{accuracyExp1} demonstrates, simulating forecasting debates from GJO in \emph{Arg\&Forecast} led to predictive accuracy improvements in three of the four debates. \BIn{This is reflected in these debates by a substantial reduction in Brier scores versus control.} The greatest accuracy improvement in absolute terms was in Q4, which saw a Brier score decrease of 0.0255. In relative terms, Brier score decreases ranged from 5\% (Q4) to 47\% (Q3). \BIn{The average Brier score decrease was 33\%, representing a significant improvement in forecasting accuracy across the board}. \BIn{Table \ref{exp1auxinfo} demonstrates how \AX{our} rationality constraints drove forward this improvement}. 82\% of forecasts made across the four debates were classified as irrational \BIn{and subsequently moderated with a rational `follow up' forecast}. Notably, there were more \emph{irrational scale} forecasts than \emph{irrational increase} and \emph{irrational decrease} forecasts combined. These results demonstrate how argumentation-based rationality constraints can play an active role in facilitating higher forecasting accuracy, signalling the early promise of FAFs. \section{Conclusions}\label{sec:conclusions} We have introduced the Forecasting Argumentation Framework (FAF), a multi-agent argumentation framework which supports forecasting debates and probability estimates. 
FAFs are composite argumentation frameworks, comprised of multiple non-concurrent update frameworks which themselves depend on three new argument types and a novel definition of rationality for the forecasting context. Our theoretical and empirical evaluation demonstrates the potential of FAFs, namely in increasing forecasting accuracy, holding intuitive properties, identifying irrational behaviour and driving higher engagement with the forecasting question (more arguments and responses, and more forecasts in the user study). These strengths align with requirements set out by previous research in the field of judgmental forecasting. There \AX{is} a multitude of possible directions for future work. First, FAFs are equipped to deal only with two-valued outcomes but, given the prevalence of forecasting issues with multi-valued outcomes (e.g. `Who will win the next UK election?'), expanding their capability would add value. Second, further work may focus on the rationality constraints, e.g. by introducing additional parameters to adjust their strictness, or \AX{by implementing} alternative interpretations of rationality. Third, future work could explore constraining agents' argumentation. This could involve using past Brier scores to limit the quantity or strength of agents' arguments and also to give them greater leeway wrt the rationality constraints. \FTn{Fourth, our method relies upon acyclic graphs: we believe that they are intuitive for users and note that all Good Judgment Open debates were acyclic; nonetheless, the inclusion of cyclic relations (e.g. to allow \AX{con} arguments that attack each other) could expand the scope of the argumentative reasoning \AX{in FAFs}.} Finally, there is an immediate need for larger scale human experiments. \newpage \section*{Acknowledgements} The authors would like to thank Prof. Anthony Hunter for his helpful contributions to discussions in the build up to this work. \BIn{Special thanks, in addition, go to Prof. Philip E.
Tetlock and the Good Judgment Project team for their warm cooperation and for providing datasets for the experiments.} \AX{Finally, the authors would like to thank the anonymous reviewers and meta-reviewer for their suggestions, which led to a significantly improved paper.} \bibliographystyle{kr} \section{Introduction}\label{sec:intro} Historically, humans have performed inconsistently in judgemental forecasting \cite{Makridakis2010,TetlockExp2017}, which incorporates subjective opinion and probability estimates into predictions \cite{Lawrence2006}. Yet, human judgement remains essential in cases where pure statistical methods are not applicable, e.g. where historical data alone is insufficient or for one-off, more `unknowable' events \cite{Petropoulos2016,Arvan2019,deBaets2020}. Judgemental forecasting is widely relied upon for decision-making \cite{Nikolopoulos2021}, in myriad fields from epidemiology to national security \cite{Nikolopoulos2015,Litsiou2019}. Effective tools to help humans improve their predictive capabilities thus have enormous potential for impact. Two recent global events -- the COVID-19 pandemic and the US withdrawal from Afghanistan -- underscore this by highlighting the human and financial cost of predictive deficiency. A multi-purpose system which could improve our ability to predict the incidence and impact of events by as little as 5\% could save millions of lives and be worth trillions of dollars per year \cite{TetlockGard2016}. Research on judgemental forecasting (see \cite{Lawrence2006,Zellner2021} for overviews), including the recent\AX{,} groundbreaking `Superforecasting Experiment' \cite{TetlockGard2016}, is instructive in establishing the desired properties for systems supporting forecasting.
In addition to reaffirming the importance of fine-grained probabilistic reasoning \cite{Mellers2015}, this literature points to the benefits of some group techniques versus solo forecasting \cite{Landeta2011,Tetlock2014art}, of synthesising qualitative and quantitative information \cite{Lawrence2006}, of combating agents' irrationality \cite{Chang2016} and of high agent engagement with the forecasting challenge, e.g. robust debating \cite{Landeta2011} and frequent prediction updates \cite{Mellers2015}. Meanwhile, \emph{computational argumentation} (see \cite{AImagazine17,handbook} for recent overviews) is a field of AI which involves reasoning with uncertainty and resolving conflicting information, e.g. in natural language debates. As such, it is an ideal candidate for aggregating the broad, polymorphous set of information involved in judgemental group forecasting. An extensive and growing literature is based on various argumentation frameworks -- rule-based systems for aggregating, representing and evaluating sets of arguments, such as those applied in the contexts of \emph{scheduling} \cite{Cyras_19}, \emph{fact checking} \cite{Kotonya_20} or in various instances of \emph{explainable AI} \cite{Cyras_21}. Subsets of the requirements for forecasting systems are addressed by individual formalisms, e.g. \emph{probabilistic argumentation} \AX{\cite{Dung2010,Thimm2012,Hunter2013,Fazzinga2018}} may effectively represent and analyse uncertain arguments about the future. However, we posit that a purpose-built argumentation framework for forecasting is essential to effectively utilise argumentation's reasoning capabilities in this context. \begin{figure*} \includegraphics[width=\textwidth]{images/FAF_diagram.png} \caption{The step-by-step process of a FAF over its lifetime.} \label{fig:FAFdiag} \end{figure*} In this paper, we attempt to cross-fertilise these two as of yet unconnected academic areas. 
We draw from forecasting literature to inform the design of a new computational argumentation approach: \emph{Forecasting Argumentation Frameworks} (FAFs). FAFs empower (human and artificial) agents to structure debates in real time and to deliver argumentation-based forecasting. They offer an approach in the spirit of \emph{deliberative democracy} \cite{Bessette1980} to respond to a forecasting problem over time. The steps which underpin FAFs are depicted in Figure \ref{fig:FAFdiag} (referenced throughout) and can be described in simple terms \FT{as follows}: a FAF is initialised with a time limit \FT{(for the overall forecasting process and for each iteration therein)} and a pre-agreed `base-rate' forecast $\ensuremath{\mathcal{F}}$ (Stage 1), e.g. based on historical data. \FT{Then,} the forecast is revised by one or more (non-concurrent) debates, \BI{in the form of `update frameworks' (Stage 2)}, opened and resolved by participating agents \FT{(}until \FT{the} specified time limit is reached\FT{)}. Each update framework begins with a proposed revision to the current forecast (Stage 2a), and proceeds with a cycle of argumentation (Stage 2b) about the proposed forecast, voting on said argumentation and forecasting. Forecasts deemed `irrational' with a view to agents' argumentation and voting are blocked. Finally, the rational forecasts are aggregated and the result replaces the current group forecast (Stage 2c). This process may be repeated over time \BI{in an indefinite number of update frameworks} (thus continually \BI{revising} the group forecast) until the \FT{(overall)} time limit is reached. The composite nature of this process enables the appraisal of new information relevant to the forecasting question as and when it arrives. Rather than confronting an unbounded forecasting question with a diffuse set of possible debates open at once, all agents concentrate their argumentation on a single topic (a proposal) at any given time. 
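The staged process just described can be summarised in a short Python sketch (illustrative only; the names and the toy resolution function are our own, and the real Stages 2a-2c involve the argumentation, voting and aggregation defined later in the paper):

```python
def run_faf(base_rate, update_windows, resolve_update_framework):
    """Sketch of the FAF lifecycle.
    Stage 1: initialise with a pre-agreed base-rate forecast.
    Stage 2: for each (non-concurrent) update framework, open a debate on a
    proposed revision (2a), argue/vote/forecast (2b), and replace the group
    forecast with the aggregate of the rational forecasts (2c)."""
    forecast = base_rate                                       # Stage 1
    for window in update_windows:                              # until time limit
        forecast = resolve_update_framework(window, forecast)  # Stages 2a-2c
    return forecast

# Toy resolution: each debate nudges the forecast toward its window's signal.
print(run_faf(0.15, [0.3, 0.6], lambda w, f: (f + w) / 2))
```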
After giving the necessary background on forecasting and argumentation (§\ref{sec:background}), we formalise our \FT{update} framework\FT{s for Step 2a} (§\ref{sec:fw}). We then give \FT{our} notion of rationality \FT{(Step 2b)}, along with \FT{our} new method for \FT{aggregating rational forecasts (Step 2c)} from a group of agents (§\ref{sec:forecasting}) \FT{and FAFs overall}. We explore the underlying properties of \FT{FAFs} (§\ref{sec:props}), before describing \FT{\AX{an experiment} with \FT{a prototype implementing} our approach (§\ref{sec:experiments}). Finally, we conclude and suggest potentially fruitful avenues for future work (§\ref{sec:conclusions}). \section{Background}\label{sec:background} \subsection{Forecasting} Studies on the efficacy of judgemental forecasting have shown mixed results \cite{Makridakis2010,TetlockExp2017,Goodwin2019}. Limitations of the judgemental approach are a result of well-documented cognitive biases \cite{Kahneman2012}, irrationalities in human probabilistic reasoning which lead to distortion of forecasts. Manifold methodologies have been explored to improve judgemental forecasting accuracy to varying success \cite{Lawrence2006}. These methodologies include, but are not limited to, prediction intervals \cite{Lawrence1989}, decomposition \cite{MacGregorDonaldG1994Jdwd}, structured analogies \cite{Green2007,Nikolopoulos2015} and unaided judgement \cite{Litsiou2019}. Various group forecasting techniques have also been explored \cite{Linstone1975,Delbecq1986,Landeta2011}, although the risks of groupthink \cite{McNees1987} and the importance of maintaining the independence of each group member's individual forecast are well established \cite{Armstrong2001}. 
Recent advances in the field have been led by Tetlock and Mellers' superforecasting experiment \cite{TetlockGard2016}, which leveraged \AX{geopolitical} forecasting tournaments and a base of 5000 volunteer forecasters to identify individuals with consistently exceptional accuracy (top 2\%). The experiment\AR{'s} findings underline the effectiveness of group forecasting orientated around debating \cite{Tetlock2014art}, and demonstrate a specific cognitive-intellectual approach conducive for forecasting \cite{Mellers20151,Mellers2015}, but stop short of suggesting a concrete universal methodology for higher accuracy. Instead, Tetlock draws on his own work and previous literature to crystallise a broad set of methodological principles by which superforecasters abide \cite[pg.144]{TetlockGard2016}: \begin{itemize} \item \emph{Pragmatic}: not wedded to any idea or agenda; \item \emph{Analytical:} capable of stepping back from the tip-of-your-nose perspective and considering other views; \item \emph{Dragonfly-eyed:} value diverse views and synthesise them into their own; \item \emph{Probabilistic:} judge using many grades of maybe; \item \emph{Thoughtful updaters:} when facts change, they change their minds; \item \emph{Good intuitive psychologists:} aware of the value of checking thinking for cognitive and emotional biases. \end{itemize} Subsequent research after the superforecasting experiment has included exploring further optimal forecasting tournament preparation \cite{penn_global_2021,Katsagounos2021} and extending Tetlock and Mellers' approach to answer broader, more time-distant questions \cite{georgetown}. It should be noted that there have been no recent advances on computational tool\AX{kits} for the field similar to that proposed in this paper. 
\subsection{Computational Argumentation} We posit that existing argumentation formalisms are not well suited for the aforementioned future-based arguments, which are necessarily semantically and structurally different from arguments about present or past concerns. Specifically, forecasting arguments are inherently probabilistic and must deal with the passage of time and its implications for the outcomes at hand. Further, several other important characteristics can be drawn from the forecasting literature which render current argumentation formalisms unsuitable, e.g. the paramountcy of dealing with bias (in data and cognitive), forming granular conclusions, fostering group debate and the co-occurrence of qualitative and quantitative arguing. Nonetheless, several of these characteristics have been previously explored in argumentation and our formalisation draws from several existing approaches. First and foremost, it draws in spirit from abstract argumentation frameworks (AAFs) \cite{Dung1995}, in that the arguments' inner contents are ignored and the focus is on the relationships between arguments. However, we consider arguments of different types and \AX{an additional relation of} support (pro), \AX{rather than} attack (con) alone as in \cite{Dung1995}. Past work has also introduced probabilistic constraints into argumentation frameworks.
{Probabilistic AAFs} (prAAFs) propose two divergent ways for modelling uncertainty in abstract argumentation using probabilities - the constellation approach \cite{Dung2010,Li2012} and the epistemic approach \cite{Hunter2013,Hunter2014,Hunter2020}. These formalisations use probability as a means to assess uncertainty over the validity of arguments (epistemic) or graph topology (constellation), but do not enable reasoning \emph{with} or \emph{about} probability, which is fundamental in forecasting. In exploring temporality, \cite{Cobo2010} augment AAFs by providing each argument with a limited lifetime. Temporal constraints have been extended in \cite{Cobo2012} and \cite{Baron2014}. Elsewhere, \cite{Rago2017} have used argumentation to model irrationality or bias in agents. Finally, a wide range of gradual evaluation methods have gone beyond traditional qualitative semantics by measuring arguments' acceptability on a scale (normally [0,1]) \cite{Leite2011,Evripidou2012,Amgoud2017,Amgoud2018,Amgoud2016}. Many of these approaches have been unified as Quantitative Bipolar Argumentation Frameworks (QBAFs) in \cite{Baroni2018}. Amongst existing approaches, of special relevance in this paper are Quantitative Argumentation Debate (QuAD) frameworks \cite{Baroni2015}, i.e. 
5-tuples ⟨$\mathcal{X}^a$, $\mathcal{X}^c$, $\mathcal{X}^p$, $\mathcal{R}$, $\ensuremath{\mathcal{\tau}}$⟩ where $\mathcal{X}^a$ is a finite set of \emph{answer} arguments (to implicit \emph{issues}); $\mathcal{X}^c$ is a finite set of \emph{con} arguments; $\mathcal{X}^p$ is a finite set of \emph{pro} arguments; $\mathcal{X}^a$, $\mathcal{X}^c$ and $\mathcal{X}^p$ are pairwise disjoint; $\mathcal{R} \subseteq (\mathcal{X}^c \cup \mathcal{X}^p) \times (\mathcal{X}^a \cup \mathcal{X}^c \cup \mathcal{X}^p)$ is an acyclic binary relation; $\ensuremath{\mathcal{\tau}}$ : $(\mathcal{X}^a \cup \mathcal{X}^c \cup \mathcal{X}^p) \rightarrow [0,1]$ is a total function: $\ensuremath{\mathcal{\tau}}(a)$ is the \emph{base score} of $a$. Here, attackers and supporters of arguments are determined by the pro and con arguments they are in relation with. Formally, for any $a\in\mathcal{X}^a \cup \mathcal{X}^c \cup \mathcal{X}^p$, the set of \emph{con} (\emph{pro}\AX{)} \emph{arguments} of $a$ is $\mathcal{R}^-(a) = \{b\in\mathcal{X}^c|(b,a)\in\mathcal{R}\}$ ($\mathcal{R}^+(a) = \{b\in\mathcal{X}^p|(b,a)\in\mathcal{R}\}$, resp.). Arguments in QuAD frameworks are scored by the \emph{Discontinuity-Free QuAD} (DF-QuAD) algorithm \cite{Rago2016}, using the argument's intrinsic base score and the \emph{strengths} of its pro/con arguments. \FTn{Given that DF-QuAD is used to define our method (see Def.~\ref{def:conscore}), for completeness we define it formally here.} DF-QuAD's \emph{strength aggregation function} is defined as $\Sigma : [0,1]^* \rightarrow [0,1]$, where for $\mathcal{S} = (v_1,\ldots,v_n) \in [0,1]^*$: if $n=0$, $\Sigma(\mathcal{S}) = 0$; if $n=1$, $\Sigma(\mathcal{S}) = v_1$; if $n=2$, $\Sigma(\mathcal{S}) = f(v_1, v_2)$; if $n>2$, $\Sigma(\mathcal{S}) = f(\Sigma(v_1,\ldots,v_{n-1}), v_n)$; with the \emph{base function} $f : [0,1]\times [0,1] \rightarrow [0,1]$ defined, for $v_1, v_2\in [0,1]$, as: $f(v_1,v_2)=v_1+(1-v_1)\cdot v_2 = v_1 + v_2 - v_1\cdot v_2$.
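Since $f$ is associative, for $n \geq 1$ this aggregation amounts to the probabilistic sum $1-\prod_{i}(1-v_i)$. A minimal Python sketch of the strength aggregation (function names are ours, for illustration only):

```python
from functools import reduce

def f(v1: float, v2: float) -> float:
    # DF-QuAD base function: f(v1, v2) = v1 + (1 - v1) * v2
    return v1 + v2 - v1 * v2

def aggregate(scores: list) -> float:
    # Strength aggregation Sigma : [0,1]* -> [0,1];
    # the empty sequence aggregates to 0, a singleton to itself.
    return reduce(f, scores, 0.0)
```

For instance, `aggregate([0.5, 0.5])` yields `0.75`, reflecting that two moderately strong arguments jointly give a stronger aggregate than either alone.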
After separate aggregation of the argument's pro/con descendants, the combination function $c : [0,1]\times [0,1]\times [0,1]\rightarrow [0,1]$ combines $v^-$ and $v^+$ with the argument's base score ($v^0$): $c(v^0,v^-,v^+)=v^0-v^0\cdot\mid v^+ - v^-\mid\:if\:v^-\geq v^+$ and $c(v^0,v^-,v^+)=v^0+(1-v^0)\cdot\mid v^+ - v^-\mid\:if\:v^-< v^+$, resp.\ The inputs for the combination function are provided by the \emph{score function}, $\ensuremath{\mathcal{\sigma}} : \mathcal{X}^a\cup\mathcal{X}^c\cup\mathcal{X}^p\rightarrow [0,1]$, which gives the argument's strength, as follows: for any $\ensuremath{x} \in \mathcal{X}^a\cup\mathcal{X}^c\cup\mathcal{X}^p$: $\ensuremath{\mathcal{\sigma}}(\ensuremath{x}) = c(\ensuremath{\mathcal{\tau}}(\ensuremath{x}),\Sigma(\ensuremath{\mathcal{\sigma}}(\mathcal{R}^-(\ensuremath{x}))),\Sigma(\ensuremath{\mathcal{\sigma}}(\mathcal{R}^+(\ensuremath{x}))))$ where if $(\ensuremath{x}_1,\ldots,\ensuremath{x}_n)$ is an arbitrary permutation of the ($n \geq 0$) con arguments in $\mathcal{R}^-(\ensuremath{x})$, $\ensuremath{\mathcal{\sigma}}(\mathcal{R}^-(\ensuremath{x}))=(\ensuremath{\mathcal{\sigma}}(\ensuremath{x}_1),\ldots,\ensuremath{\mathcal{\sigma}}(\ensuremath{x}_n))$ (similarly for pro arguments). Note that the DF-QuAD notion of $\ensuremath{\mathcal{\sigma}}$ can be applied to any argumentation framework where arguments are equipped with base scores and pro/con arguments. We will do so later, for our novel formalism. \section{Update \AX{F}rameworks}\label{sec:fw} We begin by defining the individual components of our frameworks, starting with the fundamental notion of a \emph{forecast}. \FT{This} is a probability estimate for the positive outcome of a given (binary) question. \begin{definition} A \emph{forecast} $\ensuremath{\mathcal{F}}$ is the probability $P(\ensuremath{\mathcal{Q}}=true) \in [0,1]$ for a given \emph{forecasting question} $\ensuremath{\mathcal{Q}}$.
\end{definition} \begin{example} \label{FAFEx} Consider the forecasting question $\ensuremath{\mathcal{Q}}$: \emph{`Will the Tokyo \AX{2020 Summer} Olympics be cancelled/postponed to another year?'}. \AX{Here, the $true$ outcome amounts to the Olympics being cancelled/postponed (and $false$ to it taking place in 2020 as planned).} Then, a forecast $\ensuremath{\mathcal{F}}$ may be $P(\ensuremath{\mathcal{Q}}=true)= 0.15$, which amounts to a 15\% probability of the Olympics \BIn{being cancelled/postponed}. \BI{Note that $\ensuremath{\mathcal{F}}$ may have been introduced as part of an update framework (herein described), or as an initial base rate at the outset of a FAF (Stage 1 in Figure \ref{fig:FAFdiag}).} \end{example} In the remainder, we will often drop $\ensuremath{\mathcal{Q}}$, implicitly assuming it is given, and use $P(true)$ to stand for $P(\ensuremath{\mathcal{Q}}=true)$. In order to empower agents to reason about probabilities and thus support forecasting, we need, in addition to \emph{pro/con} arguments as in QuAD frameworks, two new argument types: \begin{itemize} \item \emph{proposal} arguments, each about some forecast (and its underlying forecasting question); each proposal argument $\ensuremath{\mathcal{P}}$ has a \emph{forecast} and, optionally, some supporting \emph{evidence}; and \item \emph{amendment} arguments, which \AX{suggest a modification to} some forecast\AX{'s probability} by increasing or decreasing it, and are accordingly separated into disjoint classes of \emph{increase} and \emph{decrease} (amendment) arguments.\footnote{Note that we decline to include a third type of amendment argument for arguing that $\ensuremath{\Forecast^\Proposal}$ is just right. This choice rests on the assumption that additional information always necessitates a change to $\ensuremath{\Forecast^\Proposal}$, however granular that change may be.
This does not restrict individual agents arguing about $\ensuremath{\Forecast^\Proposal}$ from casting $\ensuremath{\Forecast^\Proposal}$ as their own final forecast. However, rather than cohering their argumentation around $\ensuremath{\Forecast^\Proposal}$, which we hypothesise would lead to high risk of groupthink~\cite{McNees1987}, agents are compelled to consider the impact of their amendment arguments on this more granular level.} \end{itemize} Note that amendment arguments are introduced specifically for arguing about a proposal argument, given that traditional QuAD pro/con arguments are of limited use when the goal is to judge the acceptability of a probability, and that in forecasting agents must not only decide \emph{if} they agree/disagree but also \emph{how} they agree/disagree (i.e. whether they believe the forecast is too low or too high considering, if available, the evidence). Amendment arguments, with their increase and decrease classes, provide for this. \begin{example}\label{ProposalExample} A proposal argument $\ensuremath{\mathcal{P}}$ in the Tokyo Olympics setting may comprise forecast: \emph{\AX{`}There is a 75\% chance that the Olympics will be cancelled/postponed to another year'}. It may also include evidence: \emph{`A new poll today shows that 80\% of the Japanese public want the Olympics to be cancelled. The Japanese government is likely to buckle under this pressure.'} This argument may aim to prompt updating the earlier forecast in Example~\ref{FAFEx}. A \emph{decrease} amendment argument may be $\ensuremath{\decarg_1}$: \emph{`The International Olympic Committee and the Japanese government will ignore the views of the Japanese public'}. An \emph{increase} amendment argument may be $\ensuremath{\incarg_1}$: \emph{`Japan's increasingly popular opposition parties will leverage this to make an even stronger case for cancellation'}. \end{example} Intuitively, a proposal argument is the focal point of the argumentation.
It typically suggests a new forecast to replace prior forecasts, argued on the basis of some new evidence (as in the earlier example). We will see that proposal arguments remain immutable through each debate (update framework), which takes place via amendment arguments and standard pro/con arguments. Note that, wrt QuAD argument types, proposal arguments replace issues and amendment arguments replace answers, in that the former are driving the debates and the latter are the options up for debate. Note also that amendment arguments merely state a direction wrt $\ensuremath{\Forecast^\Proposal}$ and do not contain any more information, such as \emph{how much} to alter $\ensuremath{\Forecast^\Proposal}$ by. We will see that alteration can be determined by \emph{scoring} amendment arguments. Proposal and amendment arguments, alongside pro/con arguments, form part of our novel update frameworks \BI{(Stage 2 of Figure \ref{fig:FAFdiag})}, defined as follows: \begin{definition} An \emph{update framework} is a nonad ⟨$\ensuremath{\mathcal{P}}, \ensuremath{\mathcal{X}}, \ensuremath{\AmmArgs^-}, \ensuremath{\AmmArgs^+}, \ensuremath{\Rels^p}, \ensuremath{\Rels}, \ensuremath{\mathcal{A}}, \ensuremath{\mathcal{V}}, \ensuremath{\Forecast^\Agents}$⟩ such that: \item[$\bullet$] $\ensuremath{\mathcal{P}}$ is a single proposal argument with \emph{forecast} $\ensuremath{\Forecast^\Proposal}$ and, optionally, \emph{evidence} $\mathcal{E}^\ensuremath{\mathcal{P}}$ for this forecast; \item[$\bullet$] $\ensuremath{\mathcal{X}} = \ensuremath{\AmmArgs^\uparrow} \cup \ensuremath{\AmmArgs^\downarrow}$ is a finite set of \emph{amendment arguments} composed of subsets $\ensuremath{\AmmArgs^\uparrow}$ of \emph{increase} arguments and $\ensuremath{\AmmArgs^\downarrow}$ of \emph{decrease} arguments; \item[$\bullet$] $\ensuremath{\AmmArgs^-}$ is a finite set of \emph{con} arguments; \item[$\bullet$] $\ensuremath{\AmmArgs^+}$ is a finite set of \emph{pro} arguments; \item[$\bullet$] the sets
$\{\ensuremath{\mathcal{P}}\}$, $\ensuremath{\AmmArgs^\uparrow}$, $\ensuremath{\AmmArgs^\downarrow}$, $\ensuremath{\AmmArgs^-}$ and $\ensuremath{\AmmArgs^+}$ are pairwise disjoint; \item[$\bullet$] $\ensuremath{\Rels^p}$ $\subseteq$ $\ensuremath{\mathcal{X}}$ $\times$ $\{\ensuremath{\mathcal{P}}\}$ is a directed acyclic binary relation between amendment arguments and the proposal argument (we may refer to this relation informally as `probabilistic'); \item[$\bullet$] $\ensuremath{\Rels}$ $\subseteq$ ($\ensuremath{\AmmArgs^-}$ $\cup$ $\ensuremath{\AmmArgs^+}$) $\times$ ($\ensuremath{\mathcal{X}}$ $\cup$ $\ensuremath{\AmmArgs^-}$ $\cup$ $\ensuremath{\AmmArgs^+}$) is a directed acyclic, binary relation \FTn{from} pro/con arguments \FTn{to} amendment\FTn{/pro/con arguments} (we may refer to this relation informally as `argumentative'); \item[$\bullet$] $\ensuremath{\mathcal{A}} = \{ \ensuremath{a}_1, \ldots, \ensuremath{a}_n \}$ is a finite set of \emph{agents} $(n >1$); \item[$\bullet$] $\ensuremath{\mathcal{V}}$ : $\ensuremath{\mathcal{A}}$ $\times$ ($\ensuremath{\AmmArgs^-}$ $\cup$ $\ensuremath{\AmmArgs^+}$) $\rightarrow$ [0, 1] is a total function such that $\ensuremath{\mathcal{V}}(\ensuremath{a},\ensuremath{x})$ is the \emph{vote} of agent $\ensuremath{a}\in\ensuremath{\mathcal{A}}$ on argument $\ensuremath{x} \in \ensuremath{\AmmArgs^-} \cup \ensuremath{\AmmArgs^+}$; with an abuse of notation, we let $\ensuremath{\mathcal{V}}_\ensuremath{a}$ : ($\ensuremath{\AmmArgs^-}$ $\cup$ $\ensuremath{\AmmArgs^+}$) $\rightarrow [0, 1]$ represent the votes of a \emph{single} agent $\ensuremath{a}\in\ensuremath{\mathcal{A}}$, e.g. 
$\ensuremath{\mathcal{V}}_\ensuremath{a}(\ensuremath{x}) = \ensuremath{\mathcal{V}}(\ensuremath{a},\ensuremath{x})$; \item[$\bullet$] $\ensuremath{\Forecast^\Agents} = \{ \ensuremath{\Forecast^\Agents}_{\ensuremath{a}_1}, \ldots, \ensuremath{\Forecast^\Agents}_{\ensuremath{a}_n} \}$ is such that $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_i}$, where $i \in \{ 1, \ldots n \}$, is the \emph{forecast} of agent $\ensuremath{a}_i\in\ensuremath{\mathcal{A}}$. \end{definition} \BIn{Note that pro \AX{(}con\AX{)} arguments can be seen as supporting (attacking, resp.) other arguments via $\ensuremath{\mathcal{R}}$, as in the case of conventional QuAD frameworks~\cite{Baroni2015}.} \begin{example}\label{eg:tokyo} A possible update framework in our running setting may include $\ensuremath{\mathcal{P}}$ as in Example~\ref{ProposalExample} as well as (see Table \ref{table:tokyo}) $\ensuremath{\mathcal{X}}=\{\ensuremath{\decarg_1}, \ensuremath{\decarg_2}, \ensuremath{\incarg_1}\}$, $\ensuremath{\AmmArgs^-}=\{\ensuremath{\attarg_1}, \ensuremath{\attarg_2}, \ensuremath{\attarg_3}\}$, $\ensuremath{\AmmArgs^+}=\{\ensuremath{\supparg_1}, \ensuremath{\supparg_2}\}$, $\ensuremath{\Rels^p}=\{(\ensuremath{\decarg_1}, \ensuremath{\mathcal{P}})$, $(\ensuremath{\decarg_2}, \ensuremath{\mathcal{P}}), (\ensuremath{\incarg_1}, \ensuremath{\mathcal{P}})\}$, and $\ensuremath{\mathcal{R}}=\{(\ensuremath{\attarg_1}, \ensuremath{\decarg_1}), (\ensuremath{\attarg_2}, \ensuremath{\decarg_1}), (\ensuremath{\attarg_3}, \ensuremath{\incarg_1})$, $(\ensuremath{\supparg_1}, \ensuremath{\decarg_2}),$ $ (\ensuremath{\supparg_2}, \ensuremath{\incarg_1})\}$. Figure \ref{fig:tokyo} gives a graphical representation of these arguments and relations.
\BIn{Assuming $\ensuremath{\mathcal{A}}=\{alice, bob, charlie\}$, $\ensuremath{\mathcal{V}}$ may be such that $\AX{\ensuremath{\mathcal{V}}_{alice}(\ensuremath{\attarg_1})} = 1$, $\AX{\ensuremath{\mathcal{V}}_{bob}(\ensuremath{\supparg_1})} = 0$, and so on.} \end{example} \begin{table}[t] \begin{tabular}{p{0.7cm}p{6.7cm}} \hline & Content \\ \hline $\ensuremath{\mathcal{P}}$ & `A new poll today shows that 80\% of the Japanese public want the Olympics to be cancelled owing to COVID-19, and the Japanese government is likely to buckle under this pressure ($\mathcal{E}^\ensuremath{\mathcal{P}})$. Thus, there is a 75\% chance that the Olympics will be cancelled/postponed to another year' ($\ensuremath{\Forecast^\Proposal}$). \\ $\ensuremath{\decarg_1}$ & `The International Olympic Committee and the Japanese government will ignore the views of the Japanese public'. \\ $\ensuremath{\decarg_2}$ & `This poll comes from an unreliable source.' \vspace{2mm}\\ $\ensuremath{\incarg_1}$ & `Japan's increasingly popular opposition parties will leverage this to make an even stronger case for cancellation.' \\ $\ensuremath{\attarg_1}$ & `The IOC is bluffing - people are dying, Japan is experiencing a strike. They will not go ahead with the games if there is a risk of mass death.' \\ $\ensuremath{\attarg_2}$ & `The Japanese government may renege on its commitment to the IOC, and use legislative or immigration levers to block the event.' \\ $\ensuremath{\attarg_3}$ & `Japan's government has sustained a high-approval rating in the last year and is strong enough to ward off opposition attacks.' \\ $\ensuremath{\supparg_1}$ & `This pollster has a track record of failure on Japanese domestic issues.' \\ $\ensuremath{\supparg_2}$ & `Rising anti-government sentiment on Japanese Twitter indicates that voters may be receptive to such arguments.' 
\\ \hline \end{tabular} \caption{Arguments in the update framework in Example~\ref{eg:tokyo}.} \label{table:tokyo} \end{table} \begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{images/FAF1.png} \centering \caption {\BIn{A graphical representation of arguments and relations in the update framework from Example~\ref{eg:tokyo}. Nodes represent proposal ($\ensuremath{\mathcal{P}}$), increase ($\uparrow$), decrease ($\downarrow$), pro ($+$) and con ($-$) arguments, while \FTn{dashed/solid} edges indicate, resp., the $\ensuremath{\Rels^p}$/$\ensuremath{\mathcal{R}}$ relations. } } \label{fig:tokyo} \end{figure} Several considerations about update frameworks are in order. Firstly, they represent `stratified' debates: graphically, they can be represented as trees with the proposal argument as root, amendment arguments as children of the root, and pro/con arguments forming the lower layers, as shown in Figure \ref{fig:tokyo}. This tree structure serves to focus argumentation towards the proposal (i.e. the probability and, if available, evidence) it puts forward. Second, we have chosen to impose a `structure' on proposal arguments, whereby their forecast is distinct from their (optional) evidence. Here the forecast has special primacy over the evidence, because forecasts are the vital reference point and the drivers of debates in FAFs. They are, accordingly, both mandatory and required to `stand out' to participating agents. In the spirit of abstract argumentation \cite{Dung1995}, we nonetheless treat all arguments, including proposal arguments, as `abstract', and focus on relations between them rather than between their components. In practice, therefore, amendment arguments may relate to a proposal argument's forecast but also, if present, to its evidence. We opt for this abstract view on the assumption that the flexibility of this approach better suits judgmental forecasting, which has a diversity of use cases (e.g.
including politics, economics and sport) where different argumentative approaches may be deployed (i.e. quantitative, qualitative, directly attacking amendment nodes or raising alternative POVs) and wherein forecasters may lack even a basic knowledge of argumentation. We leave the study of structured variants of our framework (e.g. see overview in \cite{structArg}) to future work: these may consider finer-grained representations of all arguments in terms of different components, and finer-grained notions of relations between components, rather than full arguments. Third, in update frameworks, voting is restricted to pro/con arguments. Preventing agents from voting directly on amendment arguments mitigates against the risk of arbitrary judgements: agents cannot make off-the-cuff estimations but can only express their beliefs via (pro/con) argumentation, thus ensuring a more rigorous process of appraisal for the proposal and amendment arguments. Note that rather than facilitating voting on arguments using a two-valued perspective (i.e. positive/negative) or a three-valued perspective (i.e. positive/negative/neutral), $\ensuremath{\mathcal{V}}$ allows agents to cast more granular judgements of (pro/con) argument acceptability, the need for which has been highlighted in the literature \cite{Mellers2015}. Finally, although we envisage that arguments of all types are put forward by agents during debates, we do not capture this mapping in update frameworks. Thus, we do not capture who put forward which arguments, but instead only use votes to encode and understand agents' views. This enables more nuanced reasoning and full engagement on the part of agents with alternative viewpoints (i.e. an agent may freely argue both for and against a point before taking an explicit view with their voting). Such conditions are essential in a healthy forecasting debate \cite{Landeta2011,Mellers2015}. 
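To make the structure of update frameworks concrete, the running Tokyo example can be encoded as a simple data structure. This is a minimal illustrative sketch under our own naming (the class, field and argument names are ours, not part of the formalism):

```python
from dataclasses import dataclass, field

@dataclass
class UpdateFramework:
    # The nonad <P, X, X-, X+, R^p, R, A, V, F^A>, with X split into
    # its increase/decrease subsets and agents left implicit in `votes`.
    proposal: str                 # P (its forecast F^P kept separately)
    forecast: float               # F^P in [0, 1]
    increase: set                 # X^up
    decrease: set                 # X^down
    cons: set                     # X-
    pros: set                     # X+
    rel_p: set                    # R^p, pairs (amendment, P)
    rel: set                      # R, pairs (pro/con, target)
    votes: dict = field(default_factory=dict)  # (agent, arg) -> [0, 1]

# The Tokyo update framework, with abbreviated argument names.
u = UpdateFramework(
    proposal="P", forecast=0.75,
    increase={"inc1"}, decrease={"dec1", "dec2"},
    cons={"c1", "c2", "c3"}, pros={"p1", "p2"},
    rel_p={("dec1", "P"), ("dec2", "P"), ("inc1", "P")},
    rel={("c1", "dec1"), ("c2", "dec1"), ("c3", "inc1"),
         ("p1", "dec2"), ("p2", "inc1")},
    votes={("alice", "c1"): 1.0, ("bob", "p1"): 0.0},
)
```

The disjointness and acyclicity conditions of the definition are not enforced here; a fuller implementation would validate them on construction.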
In the remainder of this paper, with an abuse of notation, we often use $\ensuremath{\Forecast^\Proposal}$ to denote, specifically, the probability advocated in $\ensuremath{\Forecast^\Proposal}$ (e.g. 0.75 in Example \ref{ProposalExample}). \section{Aggregating Rational Forecasts }\label{sec:forecasting} In this section we formally introduce (in \AX{§}\ref{subsec:rationality}) our notion of rationality and discuss how it may be used to identify\BI{, and subsequently `block',} undesirable behaviour in forecasters. We then define (in \AX{§}\ref{subsec:aggregation}) a method for calculating a revised forecast \BI{(Stage 2c of Figure \ref{fig:FAFdiag})}, which aggregates the views of all agents in the update framework, whilst optimising their overall forecasting accuracy. \subsection{Rationality}\label{subsec:rationality} Characterising an agent's view as irrational offers opportunities to refine the accuracy of their forecast (and thus the overall aggregated group forecast). Our definition of rationality is inspired by, but goes beyond, that of QuAD-V \cite{Rago2017}, which was introduced for the e-polling context. Whilst update frameworks eventually produce a single aggregated forecast on the basis of group deliberation, each agent is first evaluated for their rationality on an individual basis. Thus, as in QuAD-V, in order to define rationality for individual agents, we first reduce frameworks to \emph{delegate frameworks} for each agent, which are the restriction of update frameworks to a single agent. \begin{definition} A \emph{delegate framework} for an agent $\ensuremath{a}$ is $\ensuremath{u}_{\ensuremath{a}} =$ ⟨$\ensuremath{\mathcal{P}}, \ensuremath{\mathcal{X}}, \ensuremath{\AmmArgs^-}, \ensuremath{\AmmArgs^+}, \ensuremath{\Rels^p}, \ensuremath{\Rels}, \ensuremath{a}, \ensuremath{\mathcal{V}}_{\ensuremath{a}}, \ensuremath{\Forecast^\Agents}_{\ensuremath{a}}$⟩. 
\end{definition} Note that all arguments in an update framework are included in each agent's delegate framework, but only the agent's votes and forecast are carried over. Recognising the irrationality of an agent requires comparing the agent's forecast against (an aggregation of) their opinions on the amendment arguments and, by extension, on the proposal argument. To this end, we evaluate the different parts of the update framework as follows. We use the DF-QuAD algorithm \cite{Rago2016} to score each amendment argument for the agent, in the context of the pro/con arguments `linked' to it via $\ensuremath{\mathcal{R}}$ in the agent's delegate framework. We refer to the DF-QuAD score function as $\ensuremath{\mathcal{\sigma}}$. This requires a choice of base scores for amendment arguments as well as pro/con arguments. We assume the same base score $\ensuremath{\mathcal{\tau}}(\ensuremath{x})=0.5$ for all $\ensuremath{x} \in \ensuremath{\mathcal{X}}$; in contrast, the base score of pro/con arguments is a result of the votes they received from the agent, in the spirit of QuAD-V \cite{Rago2017}. The intuition behind assigning a neutral (0.5) base score to amendment arguments is that an agent's estimation of their strength from the outset would be susceptible to bias and inaccuracy.
Once each amendment argument has been scored (using $\ensuremath{\mathcal{\sigma}}$) for the agent, we aggregate the scores of all amendment arguments (for the same agent) to calculate the agent's \emph{confidence score} in the proposal argument (which underpins our rationality constraints), by weighting the mean strength of the proposal's increase amendment arguments against that of its decrease amendment arguments: \begin{definition}\label{def:conscore} Given a delegate framework $\ensuremath{u}_{\ensuremath{a}}$ = ⟨$\ensuremath{\mathcal{P}}$, $\ensuremath{\mathcal{X}}$, $\ensuremath{\AmmArgs^-}$, $\ensuremath{\AmmArgs^+}$, $\ensuremath{\Rels^p}$, $\ensuremath{\Rels}$, $\ensuremath{a}$, $\ensuremath{\mathcal{V}}_{\ensuremath{a}}$, $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}}$⟩, let $\ensuremath{\AmmArgs^\uparrow} = \{ \ensuremath{\incarg_1}, \ensuremath{\incarg_2}, \ldots , \ensuremath{\arg^\uparrow}_i \}$ and $\ensuremath{\AmmArgs^\downarrow} = \{ \ensuremath{\decarg_1}, \ensuremath{\decarg_2}, \ldots , \ensuremath{\arg^\downarrow}_j \}$. Then, $\ensuremath{a}$'s \emph{confidence score} is as follows: \begin{align} &\text{if } i\neq0, j\neq0: \quad \ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}}) = \frac{1}{i} \sum_{k=1}^{i} \ensuremath{\mathcal{\sigma}}(\ensuremath{\arg^\uparrow}_k) - \frac{1}{j} \sum_{l=1}^{j} \ensuremath{\mathcal{\sigma}}(\ensuremath{\arg^\downarrow}_l); \nonumber \\ &\text{if } i\neq0, j=0: \quad \ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}}) = \frac{1}{i} \sum_{k=1}^{i} \ensuremath{\mathcal{\sigma}}(\ensuremath{\arg^\uparrow}_k); \nonumber \\ &\text{if } i=0, j\neq0: \quad \ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}}) = - \frac{1}{j} \sum_{l=1}^{j} \ensuremath{\mathcal{\sigma}}(\ensuremath{\arg^\downarrow}_l); \nonumber \\ &\text{if } i=0, j=0: \quad \ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}}) = 0.
\nonumber \end{align} \end{definition} Note that $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}}) \in [-1,1]$, which denotes the overall views of the agent on the forecast $\ensuremath{\Forecast^\Proposal}$ (i.e. as to whether it should be \emph{increased} or \emph{decreased}, and how far). A negative (positive) $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}})$ indicates that an agent believes that $\ensuremath{\Forecast^\Proposal}$ should be amended down (up, resp.). The size of $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}})$ reflects the degree of the agent's certainty in either direction. In turn, we can constrain an agent's forecast $\ensuremath{\Forecast^\Agents}_\ensuremath{a}$ if it contradicts this belief as follows. \begin{definition}\label{def:irrationality} Given a delegate framework $\ensuremath{u}_{\ensuremath{a}}$ = ⟨$\ensuremath{\mathcal{P}}$, $\ensuremath{\mathcal{X}}$, $\ensuremath{\AmmArgs^-}$, $\ensuremath{\AmmArgs^+}$, $\ensuremath{\Rels^p}$, $\ensuremath{\Rels}$, $\ensuremath{a}$, $\ensuremath{\mathcal{V}}_{\ensuremath{a}}$, $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}}$⟩, $\ensuremath{a}$'s forecast $\ensuremath{\Forecast^\Agents}_\ensuremath{a}$ is \emph{strictly rational} (wrt $\ensuremath{u}_{\ensuremath{a}}$) iff: \begin{align} &\text{if } \ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}}) < 0 \text{ then } \ensuremath{\Forecast^\Agents}_\ensuremath{a} < \ensuremath{\Forecast^\Proposal}; \nonumber \\ &\text{if } \ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}}) > 0 \text{ then } \ensuremath{\Forecast^\Agents}_\ensuremath{a} > \ensuremath{\Forecast^\Proposal}; \nonumber \\ &\mid\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}})\mid \geq \frac{\mid\ensuremath{\Forecast^\Proposal} - \ensuremath{\Forecast^\Agents}_\ensuremath{a}\mid}{\ensuremath{\Forecast^\Proposal}}. \nonumber \end{align} \end{definition} Hereafter, we refer to forecasts which
violate the first two constraints as, resp., \emph{irrational increase} and \emph{irrational decrease} forecasts, and to forecasts which violate the final constraint as \emph{irrational scale} forecasts. This definition of rationality preserves the integrity of the group forecast in two ways. First, it prevents agents from forecasting against their beliefs: an agent cannot increase $\ensuremath{\Forecast^\Proposal}$ if $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}}) < 0$ and an agent cannot decrease $\ensuremath{\Forecast^\Proposal}$ if $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}}) > 0$. Second, it ensures that agents cannot make forecasts disproportionate to their confidence score: \emph{how far} an agent $\ensuremath{a}$ deviates from the proposed change $\ensuremath{\Forecast^\Proposal}$ is restricted by $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}})$, i.e. an agent must have $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}})$ greater than or equal to the relative change to $\ensuremath{\Forecast^\Proposal}$ denoted in their forecast $\ensuremath{\Forecast^\Agents}_\ensuremath{a}$. Note that the \emph{irrational scale} constraint deals with just one direction of proportionality (i.e. providing only a maximum threshold for $\ensuremath{\Forecast^\Agents}_\ensuremath{a}$'s deviation from $\ensuremath{\Forecast^\Proposal}$, but no minimum threshold). Here, we avoid bidirectional proportionality on the grounds that such a constraint would impose an arbitrary notion of arguments' `impact' on agents. An agent may have a very high $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}})$, indicating \FT{their} belief that $\ensuremath{\Forecast^\Proposal}$ is too low, but \AX{may}, we suggest, rationally choose to increase $\ensuremath{\Forecast^\Proposal}$ by only a small amount (e.g.
if, despite \FT{their} general agreement with the arguments, \FT{they} believe the overall issue at stake in $\ensuremath{\mathcal{P}}$ to be minor or low impact to the overall forecasting question). Our definition of rationality, which relies on notions of argument strength derived from DF-QuAD, thus informs but does not wholly dictate agents' forecasting, affording them considerable freedom. We leave alternative, stricter definitions of rationality, which may derive from probabilistic conceptions of argument strength, to future work. \begin{example} Consider our running Tokyo Olympics example, with the same arguments and relations from Example \ref{eg:tokyo} and an agent \BIn{$alice$} with a confidence score \BIn{$\ensuremath{\mathcal{C}}_{alice}(\ensuremath{\mathcal{P}}) = -0.5$}. From this we know that \BIn{$alice$} believes that the suggested $\ensuremath{\Forecast^\Proposal}$ in the proposal argument $\ensuremath{\mathcal{P}}$ should be decreased. Then, under our definition of rationality, \BIn{$alice$'s} forecast \BIn{$\ensuremath{\Forecast^\Agents}_{alice}$} is `rational' if it decreases $\ensuremath{\Forecast^\Proposal}$ by up to 50\%. \end{example} If an agent's forecast $\ensuremath{\Forecast^\Agents}_\ensuremath{a}$ violates these rationality constraints then \BI{it is `blocked'} and the agent is prompted to return to the argumentation graph. From here, they may carry out one or more of the following actions to render their forecast rational: a. Revise their forecast; b. Revise their votes on arguments; c. Add new arguments (and vote on them). \iffalse \begin{enumerate}[label=\alph*.] \item Revise their forecast; \item Revise their votes on arguments; \item Add new arguments to the update framework (and vote on them). \end{enumerate} \fi Whilst a) and b) occur on an agent-by-agent basis, confined to each delegate framework, c) affects the shared update framework and requires special consideration.
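The confidence score and strict-rationality check defined above can be sketched as follows. This is an illustrative Python fragment under our own naming; the amendment scores would in practice come from DF-QuAD's score function $\sigma$:

```python
def confidence_score(inc_scores, dec_scores):
    # C_a(P): mean strength of increase arguments minus mean strength
    # of decrease arguments; an empty side contributes 0.
    up = sum(inc_scores) / len(inc_scores) if inc_scores else 0.0
    down = sum(dec_scores) / len(dec_scores) if dec_scores else 0.0
    return up - down

def strictly_rational(c, proposal, forecast):
    # The three constraints: the forecast's direction must match the
    # sign of c, and |c| must bound the relative change to F^P.
    if c < 0 and forecast >= proposal:
        return False  # irrational increase
    if c > 0 and forecast <= proposal:
        return False  # irrational decrease
    return abs(c) >= abs(proposal - forecast) / proposal
```

With $\mathcal{C}_{alice}(\mathcal{P}) = -0.5$ and $\ensuremath{\Forecast^\Proposal} = 0.75$ as in the example, a forecast of $0.5$ passes all three constraints, while $0.3$ fails the scale constraint (a relative decrease of 60\%, exceeding $|\mathcal{C}_{alice}(\mathcal{P})|$).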
Each time new \AX{arguments} are added to the shared graph, every agent must vote on \AX{them}, even if they have already made a rational forecast. In certain cases, after an agent has voted on a new argument, it is possible that their rational forecast is made irrational. In this instance, the agent must resolve their irrationality via the steps above. In this way, the update framework can be refined on an iterative basis until the graph is no longer being modified and all agents' forecasts are rational. At this stage, the update framework has reached a stable state and the agents $\ensuremath{\mathcal{A}}$ are collectively rational: \begin{definition} Given an update framework $\ensuremath{u}$ = ⟨$\ensuremath{\mathcal{P}}$, $\ensuremath{\mathcal{X}}$, $\ensuremath{\AmmArgs^-}$, $\ensuremath{\AmmArgs^+}$, $\ensuremath{\Rels^p}$, $\ensuremath{\Rels}$, $\ensuremath{\mathcal{A}}$, $\ensuremath{\mathcal{V}}$, $\ensuremath{\Forecast^\Agents}$⟩, $\ensuremath{\mathcal{A}}$ is \emph{collectively rational} (wrt \emph{u}) iff $\forall \ensuremath{a} \in \ensuremath{\mathcal{A}}$, $\ensuremath{a}$ is individually rational (wrt the delegate framework $\ensuremath{u}_{\ensuremath{a}}$ = ⟨$\ensuremath{\mathcal{P}}$, $\ensuremath{\mathcal{X}}$, $\ensuremath{\AmmArgs^-}$, $\ensuremath{\AmmArgs^+}$, $\ensuremath{\Rels^p}$, $\ensuremath{\Rels}$, $\ensuremath{a}$, $\ensuremath{\mathcal{V}}_{\ensuremath{a}}$, $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}}$⟩). \end{definition} When $\ensuremath{\mathcal{A}}$ is collectively rational, the update framework $u$ becomes immutable and the aggregation (defined next) \AX{produces} a group forecast $\ensuremath{\Forecast^g}$, which becomes the new $\ensuremath{\mathcal{F}}$. \subsection{Aggregating Forecasts}\label{subsec:aggregation} After all the agents have made a rational forecast, an aggregation function is applied to produce one collective forecast. 
One advantage of forecasting debates vis-a-vis \AX{the} many other forms of debate is that a ground truth always exists -- an event either happens or does not. This means that, over time and after enough FAF instantiations, data on the forecasting success of different agents can be amassed. In turn, the relative historical performance of forecasting agents can inform the aggregation of group forecasts. In update frameworks, a weighted aggregation function based on Brier Scoring \cite{Brier1950} is used, such that more accurate forecasting agents have greater influence over the final forecast. Brier Scores are a widely used criterion to measure the accuracy of probabilistic predictions, effectively gauging the distance between a forecaster's predictions and an outcome after it has(n't) happened, as follows. \begin{definition} \label{def:bscore} Given an agent $\ensuremath{a}$, a non-empty series of forecasts $\ensuremath{\Forecast^\Agents}_\ensuremath{a}(1), \ldots, \ensuremath{\Forecast^\Agents}_\ensuremath{a}(\ensuremath{\mathcal{N}}_{\ensuremath{a}})$ with corresponding actual outcomes $\ensuremath{\mathcal{O}}_1, \ldots,$ $\ensuremath{\mathcal{O}}_{\ensuremath{\mathcal{N}}_{\ensuremath{a}}} \in \{true, false\}$ (where $\ensuremath{\mathcal{N}}_{\ensuremath{a}}>0$ is the number of forecasts $\ensuremath{a}$ has made in a non-empty sequence of as many update frameworks), $\ensuremath{a}$'s Brier Score $\ensuremath{b}_{\ensuremath{a}} \in [0, 1]$ is as follows: \begin{align} \ensuremath{b}_{\ensuremath{a}} = \frac{1}{\ensuremath{\mathcal{N}}_{\ensuremath{a}}} \sum_{t=1}^{\ensuremath{\mathcal{N}}_{\ensuremath{a}}} (\ensuremath{\Forecast^\Agents}_\ensuremath{a}(t) - val(\ensuremath{\mathcal{O}}_t))^2 \nonumber \end{align} where $val(\ensuremath{\mathcal{O}}_t)=1$ if $\ensuremath{\mathcal{O}}_t=true$, and 0 otherwise.
\end{definition} A Brier Score $\ensuremath{b}$ is effectively the mean squared error used to gauge forecasting accuracy, where a low $\ensuremath{b}$ indicates high accuracy and high $\ensuremath{b}$ indicates low accuracy. This can be used in the update framework's aggregation function via the weighted arithmetic mean as follows. \AX{E}ach Brier Score is inverted to ensure that more (less, resp.) accurate forecasters have higher (lower, resp.) weighted influence\AX{s} on $\ensuremath{\Forecast^g}$: \begin{definition}\label{def:group} Given a set of agents $\ensuremath{\mathcal{A}} = \{\ensuremath{a}_1, \ldots,\ensuremath{a}_n\}$, their corresponding set of Brier Scores $\ensuremath{b} = \{\ensuremath{b}_{\ensuremath{a}_1}, \ldots,\ensuremath{b}_{\ensuremath{a}_n}\}$ and their forecasts $\ensuremath{\Forecast^\Agents} = \{\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_1}, \ldots,\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_n}\}$, and letting, for $i \!\!\in\!\! \{ 1, \ldots, n\}$, $w_{i} \!\!=\!\! 1-\ensuremath{b}_{\ensuremath{a}_i}$, the \emph{group forecast} $\ensuremath{\Forecast^g}$ is as follows: \begin{align} &\text{if } \sum_{i=1}^{n}w_{i} \neq 0: & &\ensuremath{\Forecast^g} = \frac{\sum_{i=1}^{n}w_{i}\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_i}}{\sum_{i=1}^{n}w_{i}}; \nonumber \\ &\text{otherwise}: & &\ensuremath{\Forecast^g} = 0. 
\nonumber \end{align} \end{definition} This group forecast could be `activated' after a fixed number of debates (with the mean average used prior), when sufficient data has been collected on the accuracy of participating agents, or after a single debate, in the context of our general \emph{Forecasting Argumentation Frameworks}: \begin{definition} A \emph{Forecasting Argumentation Framework} (FAF) is a triple ⟨$ \ensuremath{\mathcal{F}}, \ensuremath{\mathcal{U}}, \ensuremath{\mathcal{T}}$⟩ such that: \begin{itemize} \item[$\bullet$] $\ensuremath{\mathcal{F}}$ is a \emph{forecast}; \item[$\bullet$] $\ensuremath{\mathcal{U}}$ is a finite, non-empty sequence of update frameworks with \ensuremath{\mathcal{F}}\ the forecast of the proposal argument in the first update framework in the sequence\AR{;} the forecast of each subsequent update framework is the group forecast of the previous update framework's agents' forecasts; \item[$\bullet$] $\ensuremath{\mathcal{T}}$ is a preset time limit representing the lifetime of the FAF; \item[$\bullet$] each agent's forecast wrt the agent's delegate framework drawn from each update framework is strictly rational. \end{itemize} \end{definition} \begin{example} \BIn{Consider our running Tokyo Olympics example: the overall FAF may be composed of $\ensuremath{\mathcal{F}} = 0.15$, update frameworks $\ensuremath{\mathcal{U}} = \{ u_1, u_2, u_3 \}$ and time limit $\ensuremath{\mathcal{T}}=14\ days$, where $u_3$ is the latest (and therefore the only open) update framework after, for example, four days.} \end{example} \AX{T}he superforecasting literature explores a range of forecast aggregation algorithms: extremizing algorithms \cite{Baron2014}, variations on logistic \AX{and} Fourier $L_2E$ regression \cite{Cross2018}, with considerable success. \AX{T}hese approaches \AX{aim} at ensuring that less certain \AX{or less} accurate forecasts have a lesser influence over the final aggregated forecast.
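The Brier scoring and inverse-Brier weighting defined above translate directly into code. The following is a minimal illustrative sketch in Python (not the authors' implementation; all function and variable names are ours), mirroring the two definitions term by term:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between an agent's probabilistic forecasts
    and the realised true/false outcomes (b_a in the definition above)."""
    assert len(forecasts) == len(outcomes) > 0
    return sum((f - (1.0 if o else 0.0)) ** 2
               for f, o in zip(forecasts, outcomes)) / len(forecasts)

def group_forecast(agent_forecasts, brier_scores):
    """Weighted arithmetic mean with weights w_i = 1 - b_i, so that
    historically accurate agents pull the group forecast harder."""
    weights = [1.0 - b for b in brier_scores]
    total = sum(weights)
    if total == 0:  # every agent has a record of total inaccuracy
        return 0.0
    return sum(w * f for w, f in zip(weights, agent_forecasts)) / total
```

For instance, two forecasts of 0.8 and 0.3 against outcomes $true$ and $false$ give a Brier score of $((0.8-1)^2 + 0.3^2)/2 = 0.065$, while a perfectly inaccurate agent ($\ensuremath{b}=1$) receives weight 0 and is ignored by the aggregation.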
We believe that FAFs apply a more intuitive algorithm \AX{since} much of the `work' needed to bypass inaccurate and erroneous forecasting is \AX{expedited} via argumentation. \section{Properties}\label{sec:props} We now undertake a theoretical analysis of FAFs by considering mathematical properties they satisfy. Note that the properties of the DF-QuAD algorithm (see \cite{Rago2016}) hold (for the amendment and pro/con arguments) here. For brevity, we focus on novel properties unique to FAFs which relate to our new argument types. These properties focus on aggregated group forecasts wrt a debate (update framework). They imply the two broad, and we posit, desirable, principles of \emph{balance} and \emph{unequal representation}. We assume for this section a generic update framework $\ensuremath{u} = $ ⟨$\ensuremath{\mathcal{P}}$, $\ensuremath{\mathcal{X}}$, $\ensuremath{\AmmArgs^-}$, $\ensuremath{\AmmArgs^\uparrow}$, $\ensuremath{\Rels^p}$, $\ensuremath{\Rels}$, $\ensuremath{\mathcal{A}}$, $\ensuremath{\mathcal{V}}$, $\ensuremath{\Forecast^\Agents}$⟩ with group forecast $\ensuremath{\Forecast^g}$. \paragraph{Balance.} The intuition for these properties is that differences between $\ensuremath{\Forecast^g}$ and $\ensuremath{\Forecast^\Proposal}$ correspond to imbalances between the \emph{increase} and \emph{decrease} amendment arguments. The first result states that $\ensuremath{\Forecast^g}$ only differs from $\ensuremath{\Forecast^\Proposal}$ if $\ensuremath{\Forecast^\Proposal}$ is the dialectical target of amendment arguments. \begin{proposition} \label{prop:balance1} If $\ensuremath{\mathcal{X}}\!\!=\!\!\emptyset$ ($\ensuremath{\AmmArgs^\downarrow}\!\!=\!\!\emptyset$ and $\ensuremath{\AmmArgs^\uparrow}\!\!=\!\!\emptyset$), then $\ensuremath{\Forecast^g}\!\!=\!\!\ensuremath{\Forecast^\Proposal}$. 
\end{proposition} \begin{proof} \AX{If $\ensuremath{\AmmArgs^\downarrow}\!\!=\!\!\emptyset$ and $\ensuremath{\AmmArgs^\uparrow}\!\!=\!\!\emptyset$ then $\forall \ensuremath{a} \!\in\! \ensuremath{\mathcal{A}}$, $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}})\!=\!0$ by Def.~\ref{def:conscore} and $\ensuremath{\Forecast^\Agents}_\ensuremath{a}=\ensuremath{\Forecast^\Proposal}$ by Def.~\ref{def:irrationality}. Then, $\ensuremath{\Forecast^g}=\ensuremath{\Forecast^\Proposal}$ by Def.~\ref{def:group}.} \end{proof} \AX{T}his simple proposition conveys an important property for forecasting: for an agent to put forward a different forecast, amendment arguments must have been introduced. \begin{example} In the Olympics setting, the group of agents could only forecast higher or lower than the proposed forecast $\ensuremath{\Forecast^\Proposal}$ after the addition of at least one of \AX{the} amendment arguments $\ensuremath{\decarg_1}$, $\ensuremath{\decarg_2}$ or $\ensuremath{\incarg_1}$. \end{example} In the absence of increase \FTn{(decrease)} amendment arguments, if there are decrease \FTn{(increase, resp.)} amendment arguments, then $\ensuremath{\Forecast^g}$ is not higher \FTn{(lower, resp.)} than $\ensuremath{\Forecast^\Proposal}$. \begin{proposition}\label{prop:balance2} If $\ensuremath{\AmmArgs^\downarrow}\neq\emptyset$ and $\ensuremath{\AmmArgs^\uparrow}=\emptyset$, then $\ensuremath{\Forecast^g} \leq\ensuremath{\Forecast^\Proposal}$. \FTn{\label{balance3prop} If $\ensuremath{\AmmArgs^\downarrow}=\emptyset$ and $\ensuremath{\AmmArgs^\uparrow}\neq\emptyset$, then $\ensuremath{\Forecast^g}\geq\ensuremath{\Forecast^\Proposal}$.} \end{proposition} \begin{proof} \AX{If $\ensuremath{\AmmArgs^\downarrow}\!\! \neq \!\!\emptyset$ and $\ensuremath{\AmmArgs^\uparrow}\!\!=\!\!\emptyset$ then $\forall \ensuremath{a} \!\in\!
\ensuremath{\mathcal{A}}$, $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}})\!\leq\!0$ by Def.~\ref{def:conscore} and then $\ensuremath{\Forecast^\Agents}_\ensuremath{a}\!\leq\!\ensuremath{\Forecast^\Proposal}$ by Def.~\ref{def:irrationality}. Then, by Def.~\ref{def:group}, $\ensuremath{\Forecast^g}\!\leq\!\ensuremath{\Forecast^\Proposal}$. If $\ensuremath{\AmmArgs^\downarrow}\!\!=\!\!\emptyset$ and $\ensuremath{\AmmArgs^\uparrow}\!\!\neq\!\!\emptyset$ then $\forall \ensuremath{a} \!\in\! \ensuremath{\mathcal{A}}$, $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}})\!\geq\!0$ by Def.~\ref{def:conscore} and then $\ensuremath{\Forecast^\Agents}_\ensuremath{a}\!\geq\!\ensuremath{\Forecast^\Proposal}$ by Def.~\ref{def:irrationality}. Then, by Def.~\ref{def:group}, $\ensuremath{\Forecast^g}\!\geq\!\ensuremath{\Forecast^\Proposal}$.} \end{proof} This proposition demonstrates that, if a decrease \BIn{(increase)} amendment argument has an effect on proposal arguments, it can only be as its name implies. \begin{example} \BIn{In the Olympics setting, the agents could not forecast higher than the proposed forecast $\ensuremath{\Forecast^\Proposal}$ if either of the decrease amendment arguments $\ensuremath{\decarg_1}$ or $\ensuremath{\decarg_2}$ \AX{had} been added, but the increase argument $\ensuremath{\incarg_1}$ \AX{had} not. Likewise, \AX{the} agents could not forecast lower than $\ensuremath{\Forecast^\Proposal}$ if $\ensuremath{\incarg_1}$ \AX{had} been added, but \AX{neither} of $\ensuremath{\decarg_1}$ or $\ensuremath{\decarg_2}$ \AX{had}.} \end{example} If $\ensuremath{\Forecast^g}$ is lower \BIn{(higher)} than $\ensuremath{\Forecast^\Proposal}$, there is at least one decrease \BIn{(increase, resp.)} argument. \begin{proposition} \label{prop:balance4} If $\ensuremath{\Forecast^g}<\ensuremath{\Forecast^\Proposal}$, then $\ensuremath{\AmmArgs^\downarrow}\neq\emptyset$.
\BIn{If $\ensuremath{\Forecast^g}>\ensuremath{\Forecast^\Proposal}$, then $\ensuremath{\AmmArgs^\uparrow}\neq\emptyset$.} \end{proposition} \begin{proof} \AX{ If $\ensuremath{\Forecast^g}<\ensuremath{\Forecast^\Proposal}$ then, by Def.~\ref{def:group}, $\exists \ensuremath{a} \in \ensuremath{\mathcal{A}}$ where $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}}<\ensuremath{\Forecast^\Proposal}$, for which it holds from Def.~\ref{def:irrationality} that $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}})<0$. Then, irrespective of $\ensuremath{\AmmArgs^\uparrow}$, $\ensuremath{\AmmArgs^\downarrow}\neq\emptyset$. If $\ensuremath{\Forecast^g}>\ensuremath{\Forecast^\Proposal}$ then, by Def.~\ref{def:group}, $\exists \ensuremath{a} \in \ensuremath{\mathcal{A}}$ where $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}}>\ensuremath{\Forecast^\Proposal}$, for which it holds from Def.~\ref{def:irrationality} that $\ensuremath{\mathcal{C}}_{\ensuremath{a}}(\ensuremath{\mathcal{P}})>0$. Then, irrespective of \BIn{$\ensuremath{\AmmArgs^\downarrow}$, $\ensuremath{\AmmArgs^\uparrow}\neq\emptyset$}. } \end{proof} We can see here that the only way an agent can decrease \BIn{(increase)} the forecast is \FT{by adding} decrease \BIn{(increase, resp.)} arguments, ensuring the debate is structured as \FT{intended}. \begin{example} \BIn{In the Olympics setting, the group of agents could only produce a group forecast $\ensuremath{\Forecast^g}$ lower than $\ensuremath{\Forecast^\Proposal}$ due to the presence of \emph{decrease} amendment arguments $\ensuremath{\decarg_1}$ or $\ensuremath{\decarg_2}$. Likewise, the group of agents could only produce a $\ensuremath{\Forecast^g}$ higher than $\ensuremath{\Forecast^\Proposal}$ due to the presence of $\ensuremath{\incarg_1}$.} \end{example} \paragraph{Unequal representation.} AFs exhibit instances of unequal representation in the final voting process. 
In formulating the following properties, we distinguish between two forms of unequal representation. First, \emph{dictatorship}, where a single agent dictates $\ensuremath{\Forecast^g}$ with no input from other agents. Second, \emph{pure oligarchy}, where a group of agents dictates $\ensuremath{\Forecast^g}$ with no input from other agents outside the group. In the forecasting setting, these properties are desirable as they guarantee higher accuracy \AX{from} the group forecast $\ensuremath{\Forecast^g}$. An agent with a forecasting record of \emph{some} accuracy exercises \emph{dictatorship} over the group forecast $\ensuremath{\Forecast^g}$, if the rest of the participating \AX{agents} have a record of total inaccuracy. \begin{proposition}\label{prop:dictatorship} If $\ensuremath{a}_d\in\ensuremath{\mathcal{A}}$ has a Brier score $\ensuremath{b}_{\ensuremath{a}_d}<1$ and $\forall \ensuremath{a}_z\in\ensuremath{\mathcal{A}} \setminus \{\ensuremath{a}_d\}$, $\ensuremath{b}_{\ensuremath{a}_z} = 1$, then $\ensuremath{\Forecast^g}=\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_d}$. \end{proposition} \begin{proof} \AX{ By Def.~\ref{def:group}: if $\ensuremath{b}_{\ensuremath{a}_z} \!\!\!=\!\! 1$ $\forall \ensuremath{a}_z\!\in\!\ensuremath{\mathcal{A}} \!\setminus\! \{\!\ensuremath{a}_d\!\}$, then $w_{\ensuremath{a}_z}\!\!\!=\!0$; and if $\ensuremath{b}_{\ensuremath{a}_d}\!\!<\!\!1$, then $w_{\ensuremath{a}_d}\!\!>\!\!0$. Then, again by Def.~\ref{def:group}, $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_d}$ is weighted at 100\% and $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_z}$ is weighted at 0\%, so $\ensuremath{\Forecast^g}\!\!=\!\!\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_d}$. } \end{proof} This proposition demonstrates how we will disregard agents with total inaccuracy, even in \FT{the} extreme case where we allow one (more accurate) agent to dictate the forecast.
\begin{example} \BIn{In the running example, if \AX{alice, bob and charlie have Brier scores of 0.5, 1 and 1, resp., bob's and charlie's forecasts have} no impact on $\ensuremath{\Forecast^g}$, whilst \AX{alice's} forecast becomes the group forecast $\ensuremath{\Forecast^g}$.} \end{example} A group of agents with a forecasting record of \emph{some} accuracy exercises \emph{pure oligarchy} over $\ensuremath{\Forecast^g}$ if the rest of the \AX{agents} all have a record of total inaccuracy. \begin{proposition}\label{oligarchytotalprop} Let $\ensuremath{\mathcal{A}} = \ensuremath{\mathcal{A}}_o \cup \ensuremath{\mathcal{A}}_z$ where $\ensuremath{\mathcal{A}}_o \cap \ensuremath{\mathcal{A}}_z = \emptyset$, $\ensuremath{b}_{\ensuremath{a}_o}<1$ $\forall \ensuremath{a}_o \in \ensuremath{\mathcal{A}}_o$ and $\ensuremath{b}_{\ensuremath{a}_z}=1$ $\forall \ensuremath{a}_z \in \ensuremath{\mathcal{A}}_z$. Then, $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_o}$ is weighted at $>0\%$ and $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_z}$ is weighted at $0\%$. \end{proposition} \begin{proof} \AX{ By Def.~\ref{def:group}: if $\ensuremath{b}_{\ensuremath{a}_z} = 1$ $\forall \ensuremath{a}_z\in\ensuremath{\mathcal{A}}_z$, then $w_{\ensuremath{a}_z}=0$; and if $\ensuremath{b}_{\ensuremath{a}_o}<1$ $\forall \ensuremath{a}_o\in\ensuremath{\mathcal{A}}_o$, then $w_{\ensuremath{a}_o}>0$. Then, again by Def.~\ref{def:group}, $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_o}$ is weighted at $> 0\%$ and $\ensuremath{\Forecast^\Agents}_{\ensuremath{a}_z}$ is weighted at $0\%$. } \end{proof} This proposition extends the behaviour from Proposition \ref{prop:dictatorship} to the (more desirable) case where fewer agents have a record of total inaccuracy.
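Both unequal-representation properties can be checked numerically against the weighted mean of the group forecast definition. Below is an illustrative sketch (our own, with made-up Brier scores and forecasts):

```python
def group_forecast(agent_forecasts, brier_scores):
    # inverse-Brier weighted mean with w_i = 1 - b_i; 0 if all weights vanish
    weights = [1.0 - b for b in brier_scores]
    total = sum(weights)
    return (sum(w * f for w, f in zip(weights, agent_forecasts)) / total
            if total else 0.0)

# dictatorship: agents with Brier score 1 carry zero weight, so the single
# partially accurate agent (b = 0.5) dictates the group forecast
assert abs(group_forecast([0.2, 0.9, 0.7], [0.5, 1.0, 1.0]) - 0.2) < 1e-12

# pure oligarchy: only the totally inaccurate agent (b = 1) is ignored;
# the remaining two agents share the weight
assert abs(group_forecast([0.9, 0.2, 0.5], [1.0, 0.2, 0.6]) - 0.3) < 1e-9
```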
\begin{example} \BIn{In the running example, if \AX{alice, bob and charlie have Brier scores of 1, 0.2 and 0.6, resp., alice's forecast} has no impact on $\ensuremath{\Forecast^g}$, whilst \AX{bob and charlie's} aggregated forecast becomes the group forecast $\ensuremath{\Forecast^g}$.} \end{example} \section{Evaluation}\label{sec:experiments} \BI{We conducted an experiment using a dataset obtained from the `Superforecasting' project, Good Judgment Inc \cite{GJInc}, to simulate four past forecasting debates in FAFs. This dataset contained 1770 datapoints (698 `forecasts' and 1072 `comments') posted by 242 anonymised users with a range of expertise. The original debates had occurred on the publicly available group forecasting platform, the Good Judgment Open (GJO)\footnote{https://www.gjopen.com/}, providing a suitable baseline against which to compare FAFs' accuracy.} \BI{For the experiment, we used a prototype implementation of FAFs in the form of the publicly available web platform called \emph{Arg\&Forecast} (see \cite{Irwin2022} for an introduction to the platform and an additional human experiment with FAFs). Python's Gensim topic modelling library \cite{rehurek2011gensim} was used to separate the datapoints for each debate into contextual-temporal groups that could form update frameworks.} In each update framework the proposal forecast was set to the mean average of forecasts made in the update framework window and each argument appeared only once. Gensim was further used to simulate voting, matching users to specific arguments they (dis)approved of.
Some 4,700 votes \AX{were then} generated with a three-valued system (where votes were taken from \{0,0.5,1\}) to ensure consistency: if a user voiced approval for an argument in the debate time window, their vote for the corresponding argument(s) was set to 1; disapproval for an argument led to a vote of 0, and (in the most common case) if a user did not mention an argument at all, their vote for the corresponding argument(s) defaulted to 0.5. With the views of all participating users wrt the proposal argument encoded in each update framework's votes, forecasts could then be simulated. If a forecast was irrational, violating any of the three constraints in Def.~\ref{def:irrationality} (referred to \AX{in the following} as \emph{increase}, \emph{decrease} and \emph{scale}, resp.), it was blocked and, to mimic real life use, an automatic `follow up' forecast was made. The `follow up' forecast would be the closest possible prediction (to their original choice) a user could make whilst remaining `rational'. \BI{Note that evaluation of the aggregation function described in \AX{§}\ref{subsec:aggregation} was outside this experiment, since the past forecasting accuracy of the dataset's 242 anonymised users was unavailable. Instead, we used \AX{the} mean average whilst adopting the GJO's method for scoring the accuracy of a user and/or group over the lifetime of the question \cite{roesch_2015}. This meant calculating a daily forecast and daily Brier score for each user, for every day of the question. After users made their first rational forecast, that forecast became their `daily forecast' until it was updated with a new forecast. 
Average and range of daily Brier scores allowed reliable comparison between (individual and aggregated) performance of the GJO versus the FAF implementation.} \begin{table}[t] \begin{tabular}{@{}llll@{}} \toprule Q & Group $\ensuremath{b}$ & $min(\ensuremath{b})$ & $max(\ensuremath{b})$ \\ \midrule Q1 & 0.1013 (0.1187) & 0.0214 (0) & 0.4054 (1) \\ Q2 & 0.216 (0.1741) & 0 (0) & 0.3853 (1) \\ Q3 & 0.01206 (0.0227) & 0.0003 (0) & 0.0942 (0.8281) \\ Q4 & 0.5263 (0.5518) & 0 (0) & 0.71 (1) \\ \midrule \textbf{All} & \textbf{0.2039 (0.217)} & \textbf{0 (0)} & \textbf{1 (1)} \\ \bottomrule \end{tabular} \caption{The accuracy of the platform group versus control, where \AX{`}Group $\ensuremath{b}$\AX{'} is the aggregated (mean) Brier score, `$min(\ensuremath{b})$' is the lowest individual Brier score and `$max(\ensuremath{b})$' is the highest individual Brier score. Q1--Q4 indicate the four simulated debates.} \label{accuracyExp1} \end{table} \begin{table}[t] \begin{tabular}{llllll} \hline \multirow{2}{*}{Q} & \multirow{2}{*}{$\overline{\ensuremath{\mathcal{C}}}$} & \multirow{2}{*}{Forecasts} & \multicolumn{3}{c}{Irrational Forecasts} \\ \cline{4-6} & & & \multicolumn{1}{c}{\emph{Increase} } \!\!\!\! & \multicolumn{1}{c}{\emph{Decrease} } \!\!\!\! & \multicolumn{1}{c}{\emph{Scale} }\!\! \!\!
\\ \hline Q1 & -0.0418 & 366 & 63 & 101 & 170 \\ Q2 & 0.1827 & 84 & 11 & 15 & 34 \\ Q3 & -0.4393 & 164 & 53 & 0 & 86 \\ Q4 & 0.3664 & 84 & 4 & 19 & 15 \\ \hline All & -0.0891 & 698 & 131 & 135 & 305 \\ \hline \end{tabular} \caption{Auxiliary results from \FT{the experiment}, where $\overline{\ensuremath{\mathcal{C}}}$ is the average confidence score, `Forecasts' is the number of forecasts made in each question and `Irrational Forecasts' is the number of forecasts in each question violating each constraint in §\ref{subsec:rationality}.} \label{exp1auxinfo} \end{table} \paragraph{Results.} As Table \ref{accuracyExp1} demonstrates, simulating forecasting debates from GJO in \emph{Arg\&Forecast} led to predictive accuracy improvements in three of the four debates. \BIn{This is reflected in these debates by a substantial reduction in Brier scores versus control.} The greatest accuracy improvement in absolute terms was in Q4, which saw a Brier score decrease of 0.0255. In relative terms, Brier score decreases ranged from 5\% (Q4) to 47\% (Q3). \BIn{The average Brier score decrease was 33\%, representing a significant improvement in forecasting accuracy across the board}. \BIn{Table \ref{exp1auxinfo} demonstrates how \AX{our} rationality constraints drove forward this improvement}. 82\% of forecasts made across the four debates were classified as irrational \BIn{and subsequently moderated with a rational `follow up' forecast}. Notably, there were more \emph{irrational scale} forecasts than \emph{irrational increase} and \emph{irrational decrease} forecasts combined. These results demonstrate how argumentation-based rationality constraints can play an active role in facilitating higher forecasting accuracy, signalling the early promise of FAFs. \section{Conclusions}\label{sec:conclusions} We have introduced the Forecasting Argumentation Framework (FAF), a multi-agent argumentation framework which supports forecasting debates and probability estimates.
FAFs are composite argumentation frameworks, comprised of multiple non-concurrent update frameworks which themselves depend on three new argument types and a novel definition of rationality for the forecasting context. Our theoretical and empirical evaluation demonstrates the potential of FAFs, namely in increasing forecasting accuracy, holding intuitive properties, identifying irrational behaviour and driving higher engagement with the forecasting question (more arguments and responses, and more forecasts in the user study). These strengths align with requirements set out by previous research in the field of judgmental forecasting. There \AX{is} a multitude of possible directions for future work. First, FAFs are equipped to deal only with two-valued outcomes but, given the prevalence of forecasting issues with multi-valued outcomes (e.g. `Who will win the next UK election?'), expanding their capability would add value. Second, further work may focus on the rationality constraints, e.g. by introducing additional parameters to adjust their strictness, or \AX{by implementing} alternative interpretations of rationality. Third, future work could explore constraining agents' argumentation. This could involve using past Brier scores to limit the quantity or strength of agents' arguments and also to give them greater leeway wrt the rationality constraints. \FTn{Fourth, our method relies upon acyclic graphs: we believe that they are intuitive for users and note that all Good Judgment Open debates were acyclic; nonetheless, the inclusion of cyclic relations (e.g. to allow \AX{con} arguments that attack each other) could expand the scope of the argumentative reasoning \AX{in FAFs}.} Finally, there is an immediate need for larger scale human experiments. \newpage \section*{Acknowledgements} The authors would like to thank Prof. Anthony Hunter for his helpful contributions to discussions in the build up to this work. \BIn{Special thanks, in addition, go to Prof. Philip E.
Tetlock and the Good Judgment Project team for their warm cooperation and for providing datasets for the experiments.} \AX{Finally, the authors would like to thank the anonymous reviewers and meta-reviewer for their suggestions, which led to a significantly improved paper.} \bibliographystyle{kr}
package org.collectionspace.chain.csp.persistence.services;

import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

import javax.activation.DataSource;

/**
 * A read-only {@link DataSource} that exposes a String as a UTF-8 byte stream.
 */
public class UTF8StringDataSource implements DataSource {
	private String mime_type;
	private byte[] data;

	public UTF8StringDataSource(String source, String mime_type) throws IOException {
		this.mime_type = mime_type;
		data = source.getBytes("UTF-8");
	}

	public String getContentType() {
		return mime_type;
	}

	public InputStream getInputStream() throws IOException {
		return new ByteArrayInputStream(data);
	}

	public String getName() {
		return "[a string]";
	}

	public OutputStream getOutputStream() throws IOException {
		// Writing is not supported: the data is fixed at construction time.
		throw new IOException(getClass().getCanonicalName() + " is readonly");
	}
}
\section{Introduction} Under the Dutch province of Groningen lies one of the largest gas fields in the world. The reservoir lies at a depth of 3~km in Rotliegend sandstone and contains an estimated 2800 billion cubic metres of gas. Between the start of production in 1963 and 2012, around 2000 billion cubic metres of gas was produced by the NAM \textit{(Nederlandse Aardolie Maatschappij)}, a partnership between Shell and ExxonMobil. As a result of taxes and its participation in NAM, the Dutch government typically receives 70\% of the profit from the Groningen Gas Field (GGF), although in some periods this can be even as high as 90\% \citep{Groningen_env}. Despite the economic advantages of the gas extraction for the Dutch government finances, there is also a serious drawback. Since 1986, anthropogenic (man-made) seismicity has been observed in the, otherwise mostly aseismic, northern part of the Netherlands, and especially in the province of Groningen. When the gas is extracted, the porous layer of sandstone, in which it is contained, compacts. Normally, this happens gradually, and the surface subsides without causing any problem. However, when this process happens e.g.\@ close to fault lines, the sandstone layers can locally compact differently, which causes seismic activity \citep{vanEck, Groningen_env}. Because of this anthropogenic seismicity, houses have been damaged and the NAM has paid around 200 million euro of compensation up to 2014. Moreover, several thousands of houses need to be reinforced to avoid serious damage caused by future potential seismic activity. \citet{vanEck} also mentions other social impacts of the seismic activity including declining house prices and concerns about breaching of the dykes in the gas field area in case of a large seismic event. One of the obvious parameters responsible for the damage caused by seismic activity is the magnitude of the seismic event, which is directly linked to the energy released by the seismic event.
So far, the largest (local) seismic event magnitude observed in the GGF is $M=3.6$ which occurred on 16 August 2012 near the village of Huizinge, municipality of Loppersum. A Modified Mercalli Intensity of VI was observed less than 4~km from the event epicentre \citep{KNMI_Huizinge}. The event caused significant damage to the infrastructure. A natural question arises: what is the maximum possible seismic event magnitude $T_M$ which can be generated by the GGF? Knowledge of this parameter is required by the local authorities, the engineering community, disaster management agencies, environmentalists, and the insurance industry. Its value depends on the regional tectonic setting of the area, the presence of active (capable) tectonic faults and, up to certain extent, the production regime. According to a comprehensive study of anthropogenic seismicity since 1929, the largest observed seismic event magnitude caused by oil and gas extraction is 7.3 \citep{Davies}. This event and another event of magnitude $\sim$7.0 took place near Gazli in Uzbekistan, in an area that is known to be aseismic. At the Lacq gas field in France, an event of magnitude $\sim$6.0 was recorded \citep{Lacq}. It is uncertain that the events were indeed of anthropogenic origin, but several factors suggest that these are examples of the strongest seismic events related to gas extraction from the gas fields. Seismicity generated by groundwater extraction has a similar character. On 11 May 2011, in Lorca, Spain, extensive groundwater extraction caused the occurrence of a shallow (2-4~km) seismic event of magnitude 5.1, leading to nine casualties and significant damage to infrastructure \citep{Lorca}. The purpose of this research is to assess the maximum possible seismic event magnitude $T_M$, based on the available seismic event catalogue of anthropogenic seismicity generated by the GGF. 
Several such estimates for the area have been made by the KNMI \textit{(Koninklijk Nederlands Meteorologisch Instituut)}: 3.3 in 1995, 3.8 in 1998 and 3.9 in 2004. In March 2016, a workshop was held in Amsterdam to provide an estimate for the maximum possible seismic event magnitude, which can be generated by the GGF (see \citet{NAM} for an overview of the results). The range of $T_M$ estimates, provided by the experts, is 3.8 to 5.0. So far, the epicentres of all occurred seismic events are within the areas of the gas extraction or not more than 500~m outside of the extraction area. This indicates that the observed seismicity can be classified as anthropogenic. However, it cannot be excluded that in the future the stresses generated by the gas extraction will be able to trigger tectonic origin stresses, resulting in significantly stronger events outside of the gas field. As a rule, such events can be significantly stronger than purely induced \citep[see e.g.\@][]{Gibowicz_Kijko}. So far, experts have found no evidence that the Groningen gas fields are capable of triggering significantly stronger seismicity than already observed. However, if such events would occur, experts believe that an event of magnitude at most 7.25 can take place \citep{NAM}. The estimation of $T_M$ can be done in many different ways. For a review of different methods applicable for the assessment of $T_M$, see e.g.\@ \citet{KijkoGraham, Kijko, Wheeler, KS, VermeulenKijko}. A comprehensive discussion of $T_M$ assessment techniques, mainly related and applicable to fluid injection, is provided in \citet{Yeck}. Unlike \citet{Shapiro} and \citet{Hallo}, \citet{Yeck} assumed that the parameters describing the anthropogenic seismic regime (seismic activity rate, the $b$-value of Gutenberg-Richter and an upper limit of magnitude $T_M$) are subject to significant spatial and temporal variation. Especially prone to time-space fluctuation is the value of $T_M$. 
\citet{Yeck} suggests two different approaches for the assessment of this parameter. The first one is based on the observation \citep[see e.g.\@][]{McGarr76, McGarr14, McGarr02, Nicol} that the maximum seismic event magnitude is linearly proportional to the logarithm of the cumulative volume of fluid injected/extracted. However, \citet{Yeck} does not answer the question of whether such a time-dependence plot saturates. Since fault sizes are limited, and the seismic event magnitude is linked to the fault size, the magnitudes also need to have an upper limit. Based on this simple physical consideration, the $T_M$ value must reach a certain upper limit. The second approach to assess $T_M$, which is explored in \citet{Yeck}, is based on the relationship between the size of the fault rupture and the seismic event magnitude \citep[see e.g.\@][]{Wells,Stirling}. A similar approach, extended by application of the logic tree formalism, is suggested in \citet{Bommer}. The drawback of the proposed method is the fact that anthropogenic seismicity often takes place in previously inactive areas with unknown and unmapped faults. Clearly, assessment of the upper limit of magnitude $T_M$ can be done using statistical tools, in particular extreme value theory (EVT). In this work, the EVT formalism is applied for assessment of the maximum possible seismic event magnitude in the GGF, by application of two different parameter estimation techniques, as developed in \citet{Truncation,trunc_real}. Our work also includes analyses of the confidence bounds of the upper limit of the magnitude distribution. For this purpose, we applied the asymptotic techniques as developed in \citet{Truncation,trunc_real}. Other EVT-based estimators using the moment estimator \citep{Dek} or the peaks-over-threshold maximum likelihood (POT-ML) approach have also been applied \citep[see e.g.\@][]{SoE,dHF}.
However, comprehensive tests based on simulated data show that moment and POT-ML based estimators perform worse for truncated distributions than the estimators developed in \citet{Truncation,trunc_real}. For this reason, the moment and the POT-ML endpoint estimators are not discussed in this work. Recently, another endpoint estimator based on EVT was proposed in \citet{FANR2017}. It is however not suitable to estimate the endpoint when the distribution is truncated, as is for example the case for the Gutenberg-Richter distribution we discuss later. When applying the estimator to simulated data or the GGF data example, which we consider later in this paper, the method of \citet{FANR2017} yields very volatile estimates. Therefore, it is not included in this paper. \newpage The EVT-based estimators of the upper limit of the earthquake magnitudes, or equivalently the endpoint of the magnitude distribution, have received rather limited attention in the respectable seismological literature. \citet{Pisarenko_Mmax,Pisarenko_Mmax2,PisarenkoRodkin,VermeulenKijko} are notable exceptions. Based on empirical evidence, it is often assumed that earthquake magnitudes follow the so-called Gutenberg-Richter (GR) distribution \citep{GR}. The original GR magnitude distribution has no upper limit. After right truncation of the GR distribution, or physically speaking, after introducing the upper limit of seismic event magnitude \citep{Hamilton,Page}, the cumulative distribution function (CDF) takes the form \[F_M(m)=\mathbb{P}(M\leq m)=\begin{cases} 0 & \text{if } m \leq t_M\\ \frac{\exp(-\beta t_M)-\exp(-\beta m)}{\exp(-\beta t_M)-\exp(-\beta T_M)} & \text{if } t_M< m < T_M\\ 1 & \text{if } m \geq T_M, \end{cases} \] where $t_M > 0$ is the level of completeness of the seismic event catalogue, $T_M$ is the maximum possible magnitude, i.e.\@ the upper limit (truncation point) of the magnitude distribution, and $\beta > 0$ the distribution parameter.
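As an illustration, the truncated GR distribution is straightforward to evaluate and to sample from by inverse-transform sampling, since $F_M$ can be inverted in closed form. The sketch below is our own (the parameter values are hypothetical, not Groningen estimates):

```python
import math
import random

def trunc_gr_cdf(m, t_M, T_M, beta):
    """CDF of the right-truncated Gutenberg-Richter distribution."""
    if m <= t_M:
        return 0.0
    if m >= T_M:
        return 1.0
    a, b = math.exp(-beta * t_M), math.exp(-beta * T_M)
    return (a - math.exp(-beta * m)) / (a - b)

def trunc_gr_sample(t_M, T_M, beta, u=None):
    """Draw one magnitude by solving F_M(m) = u for m (inverse transform)."""
    if u is None:
        u = random.random()
    a, b = math.exp(-beta * t_M), math.exp(-beta * T_M)
    return -math.log(a - u * (a - b)) / beta

# hypothetical values: completeness level 1.5, upper limit 4.0, beta = 2.0
m = trunc_gr_sample(1.5, 4.0, 2.0, u=0.7)
assert 1.5 < m < 4.0
assert abs(trunc_gr_cdf(m, 1.5, 4.0, 2.0) - 0.7) < 1e-12
```

Note that every sampled magnitude lies strictly below $T_M$, which is precisely what makes estimating the truncation point from an observed catalogue non-trivial.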
Note that the Gutenberg-Richter distribution is not only derived empirically. There are several attempts \citep[see e.g.\@][]{Scholz,Scholz2,Rundle} to derive the GR relation based on the physical principles of earthquake occurrence or by application of the universal concept of entropy \citep[see e.g.\@][]{BerrilDavis}. Several parametric estimators of $T_M$ have been derived, which are based on the GR magnitude distribution \citep[see e.g.\@][]{Pisarenko,Pisarenko_Mmax,Raschke}. Here, we only look at one parametric estimator of $T_M$: the Kijko-Sellevoll estimator \citep{Kijko_Sellevoll,Kijko}. Moreover, we analyse a parametric upper confidence bound for $T_M$ based on the truncated GR distribution \citep{Pisarenko91}. This technique is applied in \citet{Potsdam} to assess the maximum possible seismic event magnitude in the GGF. Note that Bayesian estimators for the maximum earthquake magnitude have also been considered, see e.g.\@ \citet{Cornell_Bayesian,Holschneider_Bayesian, Kijko_Bayesian}. Another parametric model for earthquake magnitudes is the tapered Pareto distribution, a.k.a.\@ the modified GR distribution \citep[see e.g.\@][]{KaganJackson, KaganSchoenberg}. However, unlike the truncated GR distribution, this model does not provide an upper bound for the magnitudes, which makes it unrealistic from a physical point of view. \citet{Potsdam}, like e.g.\@ \citet{PisarenkoRodkin}, provides estimates for the maximum seismic event magnitude expected to occur within different time intervals (time horizons). It is important to note that in our work, we do not try to estimate that quantity; we only look at estimates for the time-independent maximum possible seismic event magnitude. In the next section, we discuss the different endpoint estimators that can be applied to estimate the maximum possible seismic magnitude $T_M$. In Section 3, we apply these methods to estimate $T_M$ for the GGF. Moreover, we also discuss upper confidence bounds for $T_M$.
Afterwards, we compare the performance of the EVT-based estimators with some discussed in \citet{KS} using simulations, assuming that the seismic event magnitude is distributed according to the truncated GR distribution. \section{Overview of applied estimators} We now discuss several different types of endpoint estimators: the EVT-based estimators are presented in Section~\ref{sec:endpointEVT}, the non-parametric estimators as discussed in \citet{KS} are described in Section~\ref{sec:endpointNP} and the parametric Kijko-Sellevoll estimator is presented in Section~\ref{sec:endpointKS}. We provide only brief details for the estimators already in use for assessment of the upper limit of the seismic event magnitude. More details can be found in \citet{Kijko, KS}. In all cases where order statistics are used, the ordered sample of magnitudes is denoted as $M_{1,n} \leq \ldots \leq M_{n,n}$. \subsection{EVT-based estimators}\label{sec:endpointEVT} We consider two EVT-based estimators of the endpoint: the truncated generalised Pareto distribution (GPD) estimator using the framework from \citet{trunc_real} and the truncated Pareto estimator of \citet{Truncation}. The methodology for modelling the upper tail of the distribution of a random variable $Y$ relies on the fact that the maximum of independent measurements $Y_i, \; i=1,\ldots,n,$ can be approximated by the generalised extreme value distribution: as $n\to \infty$, \begin{equation} \mathbb{P}\left(\frac{\displaystyle\max_{i=1,\ldots,n}Y_i -b_n}{a_n} \leq y \right) \to G_{\xi} (y) = \exp \left( - (1 + \xi y)^{-1/\xi} \right), \;\; 1+\xi y>0, \label{eq:maxd} \end{equation} where $b_n \in \mathbb{R}$, $a_n >0$ and $\xi \in \mathbb{R}$ are the location, scale and shape parameters, respectively. For $\xi =0$, $G_0(y)$ has to be read as $\exp\left(- \exp (-y)\right)$. In fact, \eqref{eq:maxd} represents the only possible non-degenerate limits for maxima of independent and identically distributed sequences $Y_i$.
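The convergence in \eqref{eq:maxd} can be checked numerically. For standard exponential data one may take $b_n = \ln n$ and $a_n = 1$, so the normalised maxima should be approximately Gumbel distributed ($\xi = 0$). A small Monte Carlo sketch (sample sizes and seed are arbitrary):

```python
import math
import numpy as np

rng = np.random.default_rng(1)
n, reps = 500, 5000

# Maxima of standard exponential samples, centred by b_n = ln(n) (a_n = 1).
maxima = rng.exponential(size=(reps, n)).max(axis=1) - math.log(n)

# Compare the empirical CDF at y = 1 with the Gumbel limit G_0(1) = exp(-exp(-1)).
empirical = float(np.mean(maxima <= 1.0))
gumbel = math.exp(-math.exp(-1.0))
print(empirical, gumbel)  # the two values should be close
```

The discrepancy is of the order $1/\sqrt{\text{reps}}$, illustrating that the Gumbel domain already gives a good approximation for moderate sample sizes.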
Let $F_Y (y) = \mathbb{P}(Y \leq y)$ denote the CDF, $\bar{F}_Y(y) = 1-F_Y(y)$ the right tail function (RTF), and $Q_Y (p) = \inf \{y \,|\, F_Y(y) \geq p \}$ ($0<p<1$) the quantile function of a random variable $Y$. \par Condition \eqref{eq:maxd} is equivalent to the convergence of the distribution of excesses (or peaks) over high thresholds $t$ to the generalised Pareto distribution (GPD): as $t$ tends to the endpoint of the distribution of $Y$, \begin{equation} \mathbb{P} \left(\frac{Y -t}{\sigma_t}>y \,\middle|\, Y>t\right) = \frac{\bar{F}_Y (t+y \sigma_t )}{\bar{F}_Y (t)} \to H_{\xi}(y) = -\ln G_{\xi} (y) = \left( 1+\xi y\right)^{-1/\xi}, \label{eq:pot} \end{equation} where $\sigma_t >0$. The shape parameter $\xi$ is often called the extreme value index (EVI). The specific case $\xi >0$ consists of the Pareto-type distributions defined through \begin{equation} \frac{Q_Y(1-\frac{1}{vy})}{Q_Y(1-\frac{1}{y})} \to_{y\to \infty} v^{\xi} \quad \mbox{ and } \quad \mathbb{P} \left(\frac Yt>y \,\middle|\, Y>t\right) = \frac{\bar{F}_Y (ty )}{\bar{F}_Y (t)} \to_{t\to \infty} y^{-1/\xi}. \label{eq:Pa} \end{equation} The max-domain of attraction (MDA) in case $\xi=0$ is called the Gumbel domain, to which exponentially decreasing tails belong. Finally, distributions in the domain corresponding to negative values of the EVI have finite right endpoints. Right truncation models for $X$, based on a parent variable $Y$ satisfying the above extreme value assumptions, are obtained from \begin{equation} X =_d (Y\,|\,Y<T), \label{eq:T} \end{equation} for some $T>0$. The odds of the truncated probability mass under the untruncated distribution $Y$ are denoted by $D_T = \bar{F}_Y (T)/F_Y(T)$.
Truncation with the threshold $t=t_n \to \infty$ is defined through the assumption \begin{equation} \frac{T-t}{\sigma_t} \to \kappa >0, \label{eq:C} \end{equation} which then entails that for $x\in (0,\kappa)$ \begin{equation} \mathbb{P}\left(\frac{X-t}{\sigma_t} >x \,\middle|\, X>t\right) \to \frac{(1+ \xi x)^{-1/\xi} - (1+ \xi \kappa )^{-1/\xi}} {1-(1+ \xi \kappa)^{-1/\xi}}. \label{eq:FT} \end{equation} This corresponds to situations where the deviation from the GPD behaviour due to truncation at a high value $T$ will be visible in the data from $t$ on, and the approximation of the peaks-over-threshold (POT) distribution using the limit distribution in \eqref{eq:FT} appears more appropriate than with a simple GPD. In the specific case of Pareto-type distributions (i.e.\@ $\xi >0$) condition \eqref{eq:FT} can be simplified to \begin{equation} \mathbb{P}\left(\frac{X}{t} >x \,\middle|\, X>t\right) \to \frac{x^{-1/\xi} - \rho^{-1/\xi}}{1- \rho^{-1/\xi}}, \;\; 1< x < \rho, \label{eq:FT+} \end{equation} assuming that $T/t \to \rho > 1$. In practice, one has to choose a certain threshold $t$. Often, one takes it equal to the $(k+1)$-th largest observation $X_{n-k,n}$ and then computes the estimator for many values of $k$. \subsubsection{Truncated GPD estimator} We can estimate the endpoint of the magnitude distribution using the techniques developed in \citet{trunc_real}. Its estimator for the truncation point $T_M$ is based on condition \eqref{eq:FT} for the variable $M$ where $\xi$ is the EVI of $Y$, the parent variable of $M$, see Table~\ref{tab:notation_magen}. 
The corresponding estimator for the endpoint is then given by \begin{equation} \hat{T}^M_{k} = M_{n-k,n} + \frac{1}{\hat\tau_k}\left[\left( \frac{1-\frac1k}{(1+ \hat\tau _k (M_{n,n}-M_{n-k,n}))^{-1/\hat\xi_k}-\frac1k} \right)^{\hat\xi _k} -1 \right], \label{eq:hatTmle} \end{equation} with $\hat{\xi}_k$ and $\hat{\tau}_k$ the estimates for $\xi$ and $\tau = \xi/\sigma_t$ obtained by application of the maximum likelihood principle. See \citet{trunc_real} for more details on estimation and testing. We will call this estimator the \textit{Truncated GPD}. \par Using Theorem~2 in \citet{trunc_real} with $p=0$, we obtain an approximate \mbox{$100(1-\alpha)\%$} upper confidence bound for $T_M$: \begin{equation}\label{eq:CI_trMLE} \hat{T}^M_{k} -(\ln\alpha+1)\frac{\frac{k+1}{(n+1)\hat{D}_{T,k}}}{k+1}\left(1+\frac{k+1}{(n+1)\hat{D}_{T,k}}\right)^{\hat{\xi}_k}\frac{\hat{\xi}_k}{\hat{\tau}_k}\, . \end{equation} One has to note that in \eqref{eq:CI_trMLE}, second order terms have been omitted, and $\hat{D}_{T,k}$ denote the estimates for the truncation odds $D_T$, see \citet{trunc_real}. \begin{table}[ht] \centering \resizebox{\textwidth}{!}{ \begin{tabular}{c|cc|cc} \hline Variable & EVI & Endpoint & Parent variable & EVI of parent variable \\ \hline Magnitude $M$ & $\xi_M$ & $T_M$ & $Y$ with $M =_d (Y \, | \, Y<T_M)$ & $\xi$ \\ Energy $E$ & $\xi_E$ & $T_E$ & $Y_E$ with $E=_d (Y_E \, | \, Y_E<T_E)$ & $\xi_{Y_E}$ \\ \hline \end{tabular}}% \caption{Magnitude and energy: overview of notation.}\label{tab:notation_magen} \end{table} \subsubsection{Truncated Pareto estimator}\label{sec:trHill} The endpoint estimator of \citet{Truncation} is based on condition \eqref{eq:FT+} and is hence only suitable for truncated Pareto-type tails. Since the (truncated) GR magnitude distribution is a truncated exponential distribution, we expect that this estimator cannot be applied to the magnitudes directly. 
Instead, we use the following empirical relationship between the (local) earthquake magnitude $M$ and the energy released by earthquakes \citep{MGS} \begin{equation} E = 2\times 10^{1.5(M-1)} = \exp(\ln2+(M-1)1.5\ln10), \label{eq:energy_mag} \end{equation} or conversely \begin{equation} M = \frac{\log_{10} \left( \frac{E}2\right)}{1.5}+1=\frac{\ln \left( \frac{E}2\right)}{1.5\ln10}+1, \label{eq:mag_energy} \end{equation} where the energy is expressed in megajoules (MJ). We thus expect the energy to follow a truncated Pareto-type distribution. Therefore, we apply the estimator of \citet{Truncation} to the energy and transform the endpoint back to the magnitudes using \eqref{eq:mag_energy}. By denoting the parent variable of $E$ by $Y_E$, we have \mbox{$ E=_d (Y_E \,|\, Y_E<T_E)$}, where $T_E$ is the endpoint for $E$, see Table~\ref{tab:notation_magen}. \par Using the approach of \citet{Truncation} applied to the variable $E$, the endpoint for the energy is then estimated as \begin{equation}\label{eq:endpoint_trHill_energy} \hat{T}^{E,+}_{k} = 2\times 10^{1.5(M_{n-k,n}-1)} \left(\frac{\left(\frac{2\times 10^{1.5(M_{n-k,n}-1)}}{2\times 10^{1.5(M_{n,n}-1)}}\right)^{1/\hat{\xi}^{Y_E,+}_k}-\frac1{k+1}}{1-\frac1{k+1}}\right)^{-\hat{\xi}^{Y_E,+}_k}. \end{equation} Here, $\hat{\xi}^{Y_E,+}_k$ are the estimates for $\xi_{Y_E}$, the extreme value index of $Y_E$. See \citet{Truncation} for more details on estimation and testing. Note that $\rho$ (see \eqref{eq:FT+}) is estimated by $E_{n,n}/E_{n-k,n}$. Transforming the estimated endpoints for the energy gives the following endpoint estimates for the magnitudes: \begin{equation}\label{eq:endpoint_trHill} \hat{T}^{M,+}_{k} = \frac{\log_{10} \left( \frac{\hat{T}^{E,+}_{k}}2\right)}{1.5} +1. \end{equation} We denote this estimator by \textit{Truncated Pareto}. \par Using the asymptotic results in \citet{Truncation}, an approximate $100(1-\alpha)\%$ upper confidence bound for $T_E$ can be constructed.
Application of Theorem~2 in \citet{Truncation}, after omitting second-order terms again, gives the following approximate \mbox{$100(1-\alpha)\%$} upper confidence bound for $T_E$: \[ \exp\left(\ln \hat{T}^{E,+}_{k}-\frac{\frac{k+1}{(n+1)\hat{D}^{E,+}_{T,k}}\hat{\xi}^{Y_E,+}_k}{k+1}(\ln\alpha+1)\right).\] Here, $\hat{D}^{E,+}_{T,k}$ are the truncated Pareto estimates for the truncation odds $D_T^E= \bar{F}_{Y_E} (T_E)/F_{Y_E}(T_E)$, see \citet{Truncation}. This upper bound can then be transformed back to the magnitude level as before to get an approximate $100(1-\alpha)\%$ upper confidence bound for $T_M$: \begin{equation}\label{eq:CI_trHill} \frac{\ln\left(\frac{\hat{T}^{E,+}_{k}}2\right)}{1.5\ln10} + 1 -\frac{\frac{\frac{k+1}{(n+1)\hat{D}^{E,+}_{T,k}}\hat{\xi}^{Y_E,+}_k}{k+1}(\ln\alpha+1)}{1.5\ln10}=\hat{T}^{M,+}_{k} -\frac{\frac{\frac{k+1}{(n+1)\hat{D}^{E,+}_{T,k}}\hat{\xi}^{Y_E,+}_k}{k+1}(\ln\alpha+1)}{1.5\ln10}. \end{equation} \subsection{Non-parametric estimators}\label{sec:endpointNP} The next estimators are all based on the fact that \begin{equation}\label{eq:endpoint_E} \mathbb{E}(M_{n,n}) = \int_{t_M}^{T_M} m \,d F_M^n(m)= T_M - \int_{t_M}^{T_M} F_M^n(m) \, dm, \end{equation} see \citet{KS}. Hence, $T_M$ can be estimated by \[\hat{T}^M= M_{n,n} + \Delta\] with $\Delta$ an estimator for $\int_{t_M}^{T_M} F_M^n(m) \, dm$. \subsubsection{Non-parametric with Gaussian kernel} The CDF in \eqref{eq:endpoint_E} can be estimated using a Gaussian kernel. The estimator for the endpoint is then obtained as the iterative solution of the equation \begin{equation*}\label{eq:endpoint_npG1} T_M = M_{n,n} +\Delta \end{equation*} with \begin{equation}\label{eq:endpoint_npG2} \Delta = \int_{t_M}^{T_M} \left(\frac{\sum_{i=1}^n \Phi\left(\frac{m-M_i}h\right)-\Phi\left(\frac{t_M-M_i}h\right)}{\sum_{i=1}^n \Phi\left(\frac{T_M-M_i}h\right)-\Phi\left(\frac{t_M-M_i}h\right)}\right)^n \,dm \end{equation} and $\Phi$ the standard normal CDF. 
The bandwidth $h$ is chosen using unbiased cross-validation. We denote this estimator by \textit{N-P-G}. For more details we refer to \citet{Kijko2} and Equations 28 and 29 in \citet{KS}. \subsubsection{Non-parametric based on order statistics} \citet{Cooke79} proposes to approximate the CDF in \eqref{eq:endpoint_E} with the empirical CDF. The corresponding endpoint estimator, see Equation 33 in \citet{KS}, is given by \begin{equation}\label{eq:endpoint_npOS} \hat{T}_n^{M,N-P-OS} = M_{n,n} + \left[ M_{n,n}-(1-\exp(-1)) \sum_{i=0}^{n-1} \exp(-i) M_{n-i,n}\right]. \end{equation} We denote this estimator by \textit{N-P-OS}. \par \citet{Cooke79} also constructed an approximate $100(1-\alpha)\%$ upper confidence bound for $T_M$: \begin{equation}\label{eq:CI_npOS} M_{n,n} + \frac{M_{n,n}-M_{n-1,n}}{(1-\alpha)^{-\nu}-1}, \end{equation} where the parameter $\nu$ is determined by \begin{equation*} \lim_{y\uparrow 0} \frac{1-F_M(T_M+cy)}{1-F_M(T_M+y)}=c^{1/\nu} \label{eq:nu_Cooke} \end{equation*} for every constant $c>0$. Note that $\nu=1$ for upper truncated distributions, which can be proved by application of the mean value theorem. Since it is often assumed that magnitude data come from an upper truncated distribution, e.g.\@ the truncated Gutenberg-Richter distribution, we use $\nu=1$ in the remainder. \subsubsection{Few largest observations} Later, \citet{Cooke80} proposed a simple estimator that only uses the maximum and the $k$-th largest magnitude $M_{n-k+1,n}$. This estimator, see Equation 38 in \citet{KS}, is equal to \begin{equation}\label{eq:endpoint_FL} \hat{T}^{M,FL}_{k} = M_{n,n} + \left[\frac1k(M_{n,n}-M_{n-k+1,n})\right]. \vspace{-0.05cm} \end{equation} We denote this estimator by \textit{FL}. \subsubsection{Extended FL} The previous estimator only uses two observations.
It can be extended as \begin{equation}\label{eq:endpoint_EFL} \hat{T}^{M,EFL}_{k} = M_{n,n} + \left[\frac1{k} \left(M_{n,n} - \frac1{k-1} \sum_{i=2}^k M_{n-i+1,n}\right)\right], \end{equation} see Equation 40 in \citet{KS}. We denote this estimator by \textit{EFL}. \subsubsection{Robson -- Whitlock} \citet{RW} propose the following simple estimator: \begin{equation}\label{eq:endpoint_RW} \hat{T}_2^{M,R-W} = M_{n,n} +\left[ M_{n,n} - M_{n-1,n}\right], \end{equation} see Equation 42 in \citet{KS}. We denote this estimator by \textit{R-W}. \par Another approximate $100(1-\alpha)\%$ upper confidence bound for $T_M$ was derived in \citet{RW}: \begin{equation}\label{eq:CI_RW} M_{n,n} + \frac{1-\alpha}{\alpha}\left(M_{n,n}-M_{n-1,n}\right). \end{equation} Note that this corresponds to the upper confidence bound \eqref{eq:CI_npOS} of \citet{Cooke79} (with $\nu=1$). \subsubsection{Robson -- Whitlock -- Cooke} The previous estimator can be improved, in terms of MSE, as shown in \citet{Cooke79}. The improved estimator is obtained as \begin{equation}\label{eq:endpoint_RWC} \hat{T}_2^{M,R-W-C} = M_{n,n} + \left[\frac1{2\nu}(M_{n,n} - M_{n-1,n})\right], \end{equation} see Equation 46 in \citet{KS}. As before, we take $\nu$ equal to 1. We denote this estimator by \textit{R-W-C}. Note that this estimator corresponds to the FL estimator for $k=2$. \subsection{Parametric estimator: Kijko -- Sellevoll}\label{sec:endpointKS} \citet{Kijko_Sellevoll} introduced the equation (see Equation 13 in \citet{KS}) \begin{equation}\label{eq:endpoint_KS} T_M= M_{n,n} +\left[\frac{E_1(n_2)-E_1(n_1)}{\beta \exp(-n_2)}+t_M\exp(-n)\right] \end{equation} with \[n_1=\frac{n}{1-\exp(-\beta(T_M-t_M))},\quad n_2=n_1\exp(-\beta(T_M-t_M)),\] and $E_1(z)=\int_z^{\infty} \exp(-s)/s\, ds $ the exponential integral function. Since these expressions depend on $T_M$, we obtain $T_M$ using an iterative procedure.
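The order-statistic estimators above (N-P-OS, FL, EFL, R-W, R-W-C) and the K-S fixed-point iteration are short to implement. The sketch below is ours, not code from the cited papers; for K-S, $\beta$ is passed in as a fixed value (its estimation is discussed in the text), and the starting value and tolerances are arbitrary choices:

```python
import numpy as np
from scipy.special import exp1  # exponential integral E_1(z)

def np_os(m):
    """N-P-OS endpoint estimator of Cooke (1979)."""
    m = np.sort(np.asarray(m))
    n = m.size
    weights = np.exp(-np.arange(n))               # e^{-i}, i = 0, ..., n-1
    weighted_sum = (1 - np.exp(-1.0)) * np.sum(weights * m[::-1])
    return m[-1] + (m[-1] - weighted_sum)

def fl(m, k):
    """Few-largest-observations (FL) estimator."""
    m = np.sort(np.asarray(m))
    return m[-1] + (m[-1] - m[-k]) / k

def efl(m, k):
    """Extended FL (EFL) estimator, k >= 2."""
    m = np.sort(np.asarray(m))
    mean_part = np.mean(m[-k:-1])                 # 2nd to k-th largest magnitudes
    return m[-1] + (m[-1] - mean_part) / k

def rw(m):
    """Robson-Whitlock (R-W) estimator."""
    m = np.sort(np.asarray(m))
    return m[-1] + (m[-1] - m[-2])

def rwc(m):
    """Robson-Whitlock-Cooke (R-W-C) estimator with nu = 1."""
    m = np.sort(np.asarray(m))
    return m[-1] + (m[-1] - m[-2]) / 2.0

def ks_endpoint(m, t_m, beta, tol=1e-8, max_iter=500):
    """Kijko-Sellevoll fixed-point iteration for T_M, with beta taken as given."""
    m = np.asarray(m)
    n = m.size
    m_max = m.max()
    t = m_max + 0.5                               # starting value
    for _ in range(max_iter):
        d = beta * (t - t_m)
        n1 = n / (1.0 - np.exp(-d))
        n2 = n1 * np.exp(-d)
        t_new = (m_max + (exp1(n2) - exp1(n1)) / (beta * np.exp(-n2))
                 + t_m * np.exp(-n))
        if abs(t_new - t) < tol:
            break
        t = t_new
    return t_new
```

In the full K-S procedure, $\beta$ would be updated in every iteration step as well; we keep it fixed here for brevity.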
The parameter $\beta$ is estimated based on the truncated Gutenberg-Richter law using maximum likelihood, see \citet{Page} and Chapter~12 in \citet{Gibowicz_Kijko}. It is estimated iteratively using the equation \[\frac{1}{\beta} = \overline{M}_n -t_M + \frac{(T_M-t_M)\exp(-\beta(T_M-t_M))}{1-\exp(-\beta(T_M-t_M))},\] where $\overline{M}_n=1/n\sum_{i=1}^n M_i$ is the sample mean of $M_1,\ldots,M_n$. Using a Taylor expansion, this becomes \begin{equation}\label{eq:beta_Taylor} \hat{\beta} = \hat{\beta}_0\left(1-\hat{\beta}_0\frac{(T_M-t_M)\exp(-\hat{\beta}_0(T_M-t_M))}{1-\exp(-\hat{\beta}_0(T_M-t_M))}\right) \end{equation} where $\hat{\beta}_0 = \frac{1}{\overline{M}_n- t_M}$ is the Aki-Utsu \citep{Aki, Utsu} estimator for $\beta$. This approach requires no inner iteration for $\beta$ and is thus preferred for computational reasons. In each iteration step (for $T_M$), we first update the estimate of $\beta$ using \eqref{eq:beta_Taylor}, and then improve the estimate of $T_M$. We denote this estimator of the maximum magnitude by \textit{K-S}. Note that of all discussed estimators, this is the only one that uses the truncated Gutenberg-Richter law directly. \par Based on the truncated Gutenberg-Richter law, a parametric $100(1-\alpha)\%$ upper confidence bound for $T_M$ can be constructed (see Equation~19 in \citet{Pisarenko91}): \begin{equation}\label{eq:CI_GR} t_M-\frac1{\beta}\ln\left(\frac{\exp(-\beta(M_{n,n}-t_M))-1}{\alpha^{1/n}}+1\right), \end{equation} where we estimate $\beta$ using the K-S method. \citet{Holschneider_Bayesian,ZH_CI} noted that the upper confidence bound as defined in \citet{Pisarenko91} is infinite if the maximum observed seismic event magnitude is larger than $t_M-\frac1{\beta}\ln(1-\alpha^{1/n})$. For the GGF magnitude data, this happens when $\alpha\leq 0.061$. Therefore, we consider $\alpha=0.1$ in the data example and the simulations.
A comprehensive discussion on this subject, including a condition on the existence of Pisarenko's original $T_M$ estimator, can be found in \citet{VermeulenKijko}. \section{Estimation of the maximum possible seismic event magnitude generated by the GGF}\label{sec:Groningen} \begin{figure}[!h] \vspace{-0.7cm} \centering \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\textwidth,trim={5cm 0 5cm 0},clip]{NL_induced3.eps}% \caption{}\label{fig:NL_ind}% \end{subfigure}% \hfill \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\textwidth,trim={5cm 0 4.75cm 0},clip]{Groningen_induced2.eps}% \caption{}\label{fig:Groningen_ind}% \end{subfigure}% \newline \begin{subfigure}{0.5\textwidth} \centering \includegraphics[height=\linewidth, angle=270]{Groningen_index.eps} \caption{}\label{fig:Groningen_data}% \end{subfigure} \caption{Locations of anthropogenic seismicity in (a) the Netherlands and (b) Groningen between December 1986 and 31 December 2016 with magnitudes at least 1.5, and (c) magnitude plot of anthropogenic seismicity in the considered area with magnitudes at least 1.5.}% \end{figure} In this section, we attempt to estimate the maximum possible seismic event magnitude which can be generated by gas extraction in the GGF. The database of the seismicity of anthropogenic origin in the area is downloaded from the website of the KNMI: \url{https://www.knmi.nl/kennis-en-datacentrum/dataset/aardbevingscatalogus}. The database contains (local) magnitudes $M$ of seismic events of anthropogenic origin in the Netherlands. We only consider events from the database that are located within the rectangle determined by (53.1\degree N, 6.5\degree E), (53.1\degree N, 7\degree E), (53.5\degree N, 7\degree E) and (53.5\degree N, 6.5\degree E), see Figure~\ref{fig:NL_ind}. The selected area is almost the same as the area that was considered in \citet{Potsdam}. 
The extracted database contains 286 seismic events with magnitudes at least 1.5, which have been recorded between December 1986 and 31 December 2016. The events, together with the boundaries of the selected area and approximate contours of the whole GGF (green), are shown in Figure~\ref{fig:Groningen_ind}. A plot of the magnitudes of the selected events is shown in Figure~\ref{fig:Groningen_data}. The dataset was tested for serial correlation, and no significant correlation could be detected. Moreover, comparing the analysis using all earthquakes (as is done in this section) with the analysis using only more recent earthquakes did not indicate non-stationarity in the data which is also confirmed by Figure~\ref{fig:Groningen_data}. \begin{figure}[!h] \centering \begin{subfigure}{0.495\textwidth} \centering \includegraphics[height=\textwidth, angle=270]{Groningen_ParetoQQ.eps}% \caption{}\label{fig:Groningen_ParetoQQ}% \end{subfigure}% \hfill \begin{subfigure}{0.495\textwidth} \centering \includegraphics[height=\textwidth, angle=270]{Groningen_ExpQQ.eps}% \caption{}\label{fig:Groningen_expQQ} \end{subfigure} \begin{subfigure}{0.495\textwidth} \centering \includegraphics[height=\textwidth, angle=270]{Groningen_MeanExcess.eps}% \caption{}\label{fig:Groningen_MeanExcess}% \end{subfigure} \hfill \begin{subfigure}{0.495\textwidth} \centering \includegraphics[height=\textwidth, angle=270]{Groningen_ExpQQ_fit.eps}% \caption{}\label{fig:Groningen_ExpQQ_fit}% \end{subfigure} \caption{Groningen gas field anthropogenic seismicity: (a) Pareto QQ-plot of energy data, (b) exponential QQ-plot of magnitude data, (c) mean excess plot of magnitude data and (d) exponential QQ-plot of magnitude data with fit based on the $k=125$ largest magnitudes.}% \end{figure} \begin{figure}[!h] \centering \begin{subfigure}{0.495\textwidth} \centering \includegraphics[height=\textwidth, angle=270]{Groningen_xi.eps}% \caption{}% \label{fig:Groningen_xi}% \end{subfigure} \hfill 
\begin{subfigure}{0.495\textwidth} \centering \includegraphics[height=\textwidth, angle=270]{Groningen_truncodds.eps}% \caption{}% \label{fig:Groningen_DT}% \end{subfigure}% \newline \begin{subfigure}{0.495\textwidth} \centering \includegraphics[height=\textwidth, angle=270]{Groningen_trTest_mle.eps}% \caption{}% \label{fig:Groningen_trTest_mle}% \end{subfigure} \begin{subfigure}{0.495\textwidth} \centering \includegraphics[height=\textwidth, angle=270]{Groningen_trTest.eps}% \caption{}% \label{fig:Groningen_trTest}% \end{subfigure} \caption{GGF anthropogenic seismicity: (a) estimates of $\xi$ (full line) and $\xi_{Y_E}$ (dashed line), (b) estimates of the truncation odds $D_T$, (c) P-values of a test for truncation based on the truncated GPD and (d) P-values of a test for truncation based on the truncated Pareto.}% \end{figure} The magnitudes in the database are rounded to one decimal digit, and hence there are several ties in the dataset. Therefore, we smoothed the data by adding independent uniform random numbers within the range [-0.05, 0.05] to all magnitudes that occur more than once. This ensures that all observations are unique. We then retain the 250 magnitudes larger than or equal to $t_M=1.5$. The choice of 1.5 as threshold in the Groningen case is standard in the geological literature, see e.g.\@ \citet{KNMI}. The exponential QQ-plot in Figure~\ref{fig:Groningen_expQQ} indicates that an exponential distribution is indeed suitable for the magnitudes, but the bending off at the largest observations suggests a possible upper truncated tail. The same behaviour is seen in the mean excess plot \citep[see e.g.\@ Chapter~1 in][]{SoE} in Figure~\ref{fig:Groningen_MeanExcess}: the first horizontal part suggests that the data come from an exponential-like distribution, whereas the downward trend at the end indicates an upper truncation point. 
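The exploratory diagnostics above are easy to reproduce. A sketch of the empirical mean excess function and the coordinates of the exponential QQ-plot (function names are ours):

```python
import numpy as np

def mean_excess(data, t):
    """Empirical mean excess function e(t) = mean(X - t | X > t)."""
    exc = data[data > t] - t
    return exc.mean() if exc.size else np.nan

def exp_qq(data):
    """Exponential QQ-plot coordinates (-ln(1 - i/(n+1)), X_{i,n})."""
    x = np.sort(data)
    n = x.size
    theo = -np.log(1.0 - np.arange(1, n + 1) / (n + 1.0))
    return theo, x
```

For a non-truncated exponential tail the QQ-plot is approximately linear and $e(t)$ roughly constant at $1/\beta$; upper truncation bends both downwards at the largest observations, which is the behaviour seen in the figures.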
Note that the Pareto QQ-plot of the energy in Figure~\ref{fig:Groningen_ParetoQQ} suggests that the energy follows a truncated Pareto distribution as discussed in Section~\ref{sec:trHill}. When applying the truncated GPD estimator to the magnitudes, a value of $\xi$ around 0 is found suggesting again an exponential-like distribution, see Figure~\ref{fig:Groningen_xi}. The parameter $\xi_{Y_E}$ is estimated by the truncated Pareto estimator to be around 1.8. The estimators for $D_T$ based on the truncated GPD and truncated Pareto estimators for $\xi$ and $\xi_{Y_E}$, respectively, suggest that the truncation odds are around 1\%, see Figure~\ref{fig:Groningen_DT}. Next, we test (directly and via the energy) if the data come indeed from an upper truncated distribution. Under the null hypotheses of both tests, the data come from an unbounded, hence not upper truncated, distribution. The P-values of a test for truncation based on the truncated GPD \citep{trunc_real} in Figure~\ref{fig:Groningen_trTest_mle} indicate, for larger values of $k$, that the magnitude data come from an upper truncated distribution. Similarly, P-values of a test for truncation based on the truncated Pareto \citep{trHill, Truncation} in Figure~\ref{fig:Groningen_trTest} indicate that, for values of $k$ above 75, the distribution of the energy is upper truncated. Note that the significance level of the tests, 10\%, is indicated by the horizontal lines in Figure~\ref{fig:Groningen_trTest_mle} and~\ref{fig:Groningen_trTest}. Finally, the fit provided by the truncated GPD with $k=125$, and hence $\hat{\xi}_{125}\approx0$, models the data well, see Figure~\ref{fig:Groningen_ExpQQ_fit}. All these elements suggest that the truncated Gutenberg-Richter distribution, i.e.\@ a doubly truncated exponential distribution, might indeed be a suitable model for the GGF magnitude data. 
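The energies underlying the Pareto QQ-plot follow from the magnitudes via \eqref{eq:energy_mag}. A direct transcription of the conversion and its inverse (energies in MJ, as in the text):

```python
import math

def mag_to_energy(m):
    """Energy in MJ released by an event of local magnitude m."""
    return 2.0 * 10.0 ** (1.5 * (m - 1.0))

def energy_to_mag(e):
    """Inverse relation: local magnitude from energy in MJ."""
    return math.log10(e / 2.0) / 1.5 + 1.0
```

Since \eqref{eq:energy_mag} is an exponential transformation, an exponential-type magnitude tail maps to a Pareto-type energy tail, which is why the truncated Pareto estimator is applied on the energy scale.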
\begin{figure}[!h] \centering \begin{subfigure}{0.625\textwidth} \centering \includegraphics[height=\textwidth, angle=270]{Groningen_endpoint.eps}% \caption{}% \label{fig:Groningen_endpoint_all}% \end{subfigure} \hfill \begin{subfigure}{0.625\textwidth} \centering \includegraphics[height=\textwidth, angle=270]{Groningen_endpoint_CI90_legend2.eps}% \caption{}% \label{fig:Groningen_CI}% \end{subfigure}% \caption{GGF anthropogenic seismicity: (a) estimates of the maximum possible earthquake magnitude $T_M$ and (b) 90\% upper confidence bounds for $T_M$.}% \end{figure} \par Next, we compute all discussed estimates for the maximum possible earthquake magnitude (Figure~\ref{fig:Groningen_endpoint_all}). For estimators where no value of $k$ needs to be chosen, the dot indicates how many observations are used: 2 or $n$. All estimators suggest that the endpoint lies between 3.61 and 3.80 on the Richter scale. Note however, that for the estimators of the endpoint based on EVT, we need to look at larger values of $k$ where a more stable pattern emerges as the test for truncation was only significant for $k\geq 75$. For $k$ around 125, the EVT-based methods estimate the endpoint around 3.76. Note that the EVT estimates for $k=n$ are close to the estimates of the N-P-G and K-S methods which use all $n$ observations above 1.5. All other methods lead to estimates for the endpoint that are lower than the EVT results. \par Additionally, we look at 90\% upper confidence bounds for the endpoint as discussed above. The endpoint estimators are given by the full orange (truncated GPD), dashed blue (truncated Pareto), purple long dashed (N-P-OS) and grey dash-dotted (K-S) lines in Figure~\ref{fig:Groningen_CI}. The corresponding 90\% upper bounds are added as dash-dotted lines in the same colour. The upper bounds using the truncated GPD \eqref{eq:CI_trMLE} and truncated Pareto \eqref{eq:CI_trHill} take values of 4.04 and 3.98, respectively, for $k=125$. 
The 90\% upper bound \eqref{eq:CI_npOS} takes a value of 4.50, and the parametric 90\% upper bound \eqref{eq:CI_GR} is equal to 4.32 (grey point). Note that the latter two confidence bounds are based on $n$ magnitudes and should hence be compared with the EVT-based upper bounds for $k=n$ (4.03 and 4.04). We summarised the obtained estimates and 90\% confidence bounds for the maximum possible earthquake magnitude in Table~\ref{tab:results}. Note that for the estimators where $k$ needs to be chosen, we indicate the chosen value of $k$ in the last column. Fixed values of $k$, e.g.\@ $2$ for the R-W estimator, are indicated in the last column in italics. \begin{table}[!h] \centering \resizebox{\textwidth}{!}{ \begin{tabular}{l|r|r|r} \hline Estimator & Estimated $T_M$ & 90\% upper confidence bound & $k$ \\ \hline Truncated GPD & 3.77 & 4.04 & 125\\ Truncated Pareto & 3.75 & 3.98 & 125\\ Non-parametric Gaussian (N-P-G) & 3.78 & / & $n=$\textit{250}\\ Non-parametric order statistics (N-P-OS) & 3.68 & 4.50 & $n=$\textit{250}\\ Few largest observations (FL) & 3.61 & / & 250\\ Extended few largest observations (EFL) & 3.61 & / & 250\\ Robson -- Whitlock (R-W) & 3.70 & / & \textit{2}\\ Robson -- Whitlock -- Cooke (R-W-C) & 3.65 & / & \textit{2} \\ Kijko -- Sellevoll (K-S) & 3.77 & 4.32 & $n=$\textit{250}\\ \hline \end{tabular}}% \caption{Summary of estimates and 90\% confidence bounds for the maximum possible earthquake magnitude in the GGF.}\label{tab:results} \end{table} \section{Simulations}\label{sec:Groningen_sim} The performance of the nine applied estimators of the upper limit of the magnitude distribution was tested using simulations. We generated 5000 magnitude samples of size 250 from the truncated Gutenberg-Richter distribution with level of completeness $t_M = 1.5$, rate parameter $\beta = 2.1203$ and three different endpoints: $T_M = 3.75$, 4.0 and 4.5. 
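Such samples can be drawn by inverse-transform sampling from the truncated Gutenberg-Richter distribution. A sketch using the stated parameter values (the seed is arbitrary):

```python
import numpy as np

def rtrunc_gr(rng, size, beta, t_m, big_t):
    """Inverse-transform sample from the truncated Gutenberg-Richter distribution."""
    u = rng.random(size)
    scale = 1.0 - np.exp(-beta * (big_t - t_m))
    return t_m - np.log(1.0 - u * scale) / beta

rng = np.random.default_rng(12345)  # arbitrary seed
sample = rtrunc_gr(rng, 250, beta=2.1203, t_m=1.5, big_t=4.0)
```

All simulated magnitudes lie in $[t_M, T_M)$ by construction, mimicking a catalogue with completeness level 1.5 and a known endpoint.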
The parameter $\beta$ was estimated from the GGF data by application of \eqref{eq:beta_Taylor} \citep{Gibowicz_Kijko}. Note that these endpoints correspond to the 99.2\%, 99.5\% and 99.8\% quantiles of the shifted exponential distribution with $\beta = 2.1203$ and level of completeness $t_M = 1.5$. For each of these simulations, we plot the relative mean, the relative mean squared error (MSE) and the coverage percentage of the upper confidence bounds over the 5000 simulations. These plots can be found in Appendix~\ref{sec:app}. The simulations show that the truncated GPD and truncated Pareto estimators have the lowest bias, over all three considered truncation points. However, their MSE is among the highest, which indicates that these estimators have the largest variances. As expected, the bias and MSE of all nine analysed estimators increase when the endpoint gets larger. For simulations with endpoint 3.75 and 4.0 (which seem to be realistic scenarios), on average, the EVT estimators slightly overestimate the true endpoint. When $T_M = 4.5$, all estimates of $T_M$, except K-S, are on average too low. The coverage percentages of the upper confidence bounds are defined as the percentage of times that the obtained upper bounds are larger than the true endpoint. In theory, these percentages should be equal to 90\%. When the endpoint gets larger, the observed coverage percentages decrease. The coverage percentage for the upper bound \eqref{eq:CI_npOS} of \citet{Cooke79} is closer to 90\% than the ones for the upper bounds of the EVT-based estimators. The performance of the first two EVT-based estimators is rather similar, with a slight advantage for the truncated Pareto. Since second-order bias terms were not taken into account for the upper bounds \eqref{eq:CI_trMLE} and \eqref{eq:CI_trHill}, developing bias-reduced methods can improve these upper bounds.
The parametric upper confidence bound \eqref{eq:CI_GR}, which uses all $n=250$ observations, performs similarly to the one using the truncated Pareto for $k$ large when the endpoint is 3.75. For higher endpoints, this upper confidence bound performs much worse than the other ones. It is important to note that the parametric K-S estimator is designed specifically for the truncated Gutenberg-Richter distribution, which we consider in these simulations, whereas the EVT-based estimators are also suitable for other upper truncated distributions. The good performance of the EVT-based estimators on different upper truncated distributions, e.g.\@ a truncated lognormal distribution, is shown through simulations in \citet{Truncation, trunc_real}. \section{Conclusions} In our work, we investigated the performance of nine different estimators of the endpoint of the magnitude distribution, and applied them to the estimation of the maximum possible seismic event magnitude generated by gas production in the Groningen gas field. The analysis includes a comparison of EVT-based estimators, non-parametric estimators and a parametric estimator. Since the available database contains only a few large magnitude events, all estimates of the upper magnitude limit come with significant uncertainty. The quantification of this uncertainty is a problem in its own right, which requires careful consideration and effort, no less than the assessment of the upper limit of magnitude itself. Based on the application of the nine different techniques, the maximum possible seismic event magnitude of anthropogenic origin in the Groningen gas field is estimated to be in the range 3.61 to 3.80. The 90\% upper confidence bounds vary from 3.85 to 4.50. In addition, the extreme value analysis in Section~\ref{sec:Groningen} suggests that the widely used truncated Gutenberg-Richter distribution might indeed be appropriate to model the distribution of seismic event magnitudes in the Groningen gas field.
However, the EVT-based and non-parametric estimators do not require knowledge of the magnitude distribution, which gives them more flexibility than their parametric counterparts. Based on simulations from the truncated GR distribution, it is clear that the EVT-based methods perform well when estimating the endpoint. It is important to note that these methods usually provide an assessment with a positive bias, which means that, on average, the true endpoint is overestimated, whereas the other estimators (except K-S and N-P-G), on average, are too low. The upper confidence bounds based on the two EVT-based estimators are sharper than the other ones. However, the simulations point out that they are too sharp, indicating the need for bias reduction. In general, the presence of bias is not an obstacle leading to disqualification of any of the applied endpoint assessment procedures. It would be very useful to study the bias in detail. If we knew the bias, it could be used to correct the endpoint estimator \citep{LasockiUrban}, and potentially lead to improvement of any of the discussed procedures. Moreover, if additional, independent high-quality information is available, the Bayesian formalism provides a powerful tool, capable of both improving the endpoint estimates and providing a more reliable assessment of its confidence bounds. Overall, we can conclude that the EVT-based estimators of \citet{Truncation,trunc_real} are a valuable addition to the already existing methods for estimation of the area-characteristic, maximum possible seismic event magnitude. \bibliographystyle{spbasic}
\section{Introduction} Currently, we know that neutrinos oscillate and have a tiny mass. In the theoretical framework of three active neutrinos, the differences of the squared neutrino masses for normal (inverted) hierarchy are given by~ $\Delta m^{2}_{21} \left( 10^{-5} \, \textrm{eV}^{2} \right) = 7.60_{-0.18}^{+0.19},$ and $\left| \Delta m^{2}_{31} \right| \left( 10^{-3} \, \textrm{eV}^{2} \right)= 2.48_{-0.07}^{+0.05}~ (2.38_{-0.06}^{+0.05})$. Additionally, we have the values of the mixing angles $\sin^{2} \theta_{12} / 10^{-1} = 3.23 \pm 0.16,$ $\sin^{2} \theta_{23} / 10^{-1} = 5.67_{-1.24}^{+0.32}~(5.73_{-0.39}^{+0.25})$ and $\sin^{2} \theta_{13} / 10^{-2}= 2.26 \pm 0.12~(2.29 \pm 0.12)$~\cite{Forero:2014bxa}. At present, there is not yet solid evidence on the Dirac CP-violating phase or on the ordering of the neutrino masses. The NOvA \cite{Adamson:2017qqn} and KamLAND-Zen \cite{KamLAND-Zen:2016pfg} Collaborations may shed light on the hierarchy in the coming years. Although the Standard Model (SM) works almost perfectly, the experimental neutrino data cannot be explained within this framework. If the neutrino sector opens a window to new physics, what are the new model and the extra ingredients needed to accommodate the masses and mixings? In this line of thought, the simplest route to include small neutrino masses and mixings in the SM is to add the missing right-handed neutrino (RHN) states to the matter content, and then invoke the see-saw mechanism~\cite{Minkowski:1977sc, GellMann:1980vs, Yanagida:1979as, Mohapatra:1979ia, Schechter:1980gr, Mohapatra:1980yp, Schechter:1981cv}. However, we should point out that the RHN mass scale is introduced by hand, with no relation whatsoever to the Higgs mechanism that gives mass to all other fields.
Nonetheless, this problem may be alleviated if the minimal extension of the SM is replaced by the left-right symmetric model (LRSM) \cite{Pati:1974yy, Mohapatra:1974gc, Senjanovic:1975rk, Senjanovic:1978ev, Mohapatra:1979ia}, where the RHN's are already included in the matter content. Additionally, the see-saw mechanism comes in rather naturally in the context of left-right symmetric scenarios, alongside other nice features, such as the restoration of Parity Symmetry and the appearance of right-handed currents at high energy, which make such extensions very appealing. Recently, the left-right scenarios have been revisited~\cite{Chen:2013fna, Senjanovic:2014pva, Dev:2015kca, Chakrabortty:2016wkl, Dev:2016dja, Mitra:2016kov, Senjanovic:2016bya, Senjanovic:2015yea, Lindner:2016lpp, Patra:2015bga, Lindner:2016bgg} in order to make contact with the latest experimental data from the LHC. Moreover, the dark matter problem~\cite{Dev:2016xcp, Berlin:2016eem, Patra:2015vmp} and the diphoton excess anomaly~\cite{Hati:2016thk, Deppisch:2016scs, Dev:2015vjd, Dey:2015bur, Das:2015ysz} have been explored in this kind of scenario. Explaining the peculiar neutrino mixing pattern (besides the CKM mixing matrix) has been a hard task. Along this line, mass textures have played an important role in trying to solve this puzzle~\cite{Fritzsch:1999ee}. In fact, discrete symmetries may be the missing ingredient needed to understand the mixings, and several discrete groups have been proposed~\cite{Ishimori:2010au, Ishimori:2012zz, King:2013eh} to obtain the mass textures in an elegant way.
In this line of thought, the ${\bf S}_{3}$ flavor symmetry, in particular, is a good candidate to handle the Yukawa couplings for leptons and quarks, and it has been studied extensively in different frameworks~\cite{Chen:2004rr, Felix:2006pn, Mondragon:2007af, Canales:2011ug, Canales:2012ix, Kubo:2012ty, Canales:2012dr, GonzalezCanales:2012kj, GonzalezCanales:2012za, Canales:2013ura, Canales:2013cga, Hernandez:2014lpa, Hernandez:2014vta, Hernandez:2015dga, Hernandez:2015zeh, Hernandez:2015hrt, Arbelaez:2016mhg, Hernandez:2013hea, CarcamoHernandez:2016pdu, Das:2014fea, Das:2015sca, Pramanick:2016mdp}. In most of these works, the notion of flavor has been extended to the scalar sector, such that three Higgs doublets are required to accommodate the PMNS and CKM mixing matrices. Although there are many flavored models in the literature, the LRSM has received little attention in the context of the flavor puzzle~\cite{GomezIzquierdo:2009id, Dev:2013oxa, Rodejohann:2015hka}. It is not an easy task to study the mixings in the LRSM, since the structure of the gauge group increases the number of parameters in the Yukawa sector compared to the SM. However, as was shown in the early works, Parity Symmetry might substantially reduce the number of gauge and Yukawa couplings; this makes it possible to calculate the right-handed CKM matrix \cite{Senjanovic:2016bya, Senjanovic:2015yea}, which is crucial for studying in great detail the $W_{R}$ gauge boson that is a prediction of the LRSM. Then, it is fundamental to face the flavor puzzle in this kind of theoretical framework. Therefore, we propose a non-minimal LRSM with Parity Symmetry where the fermion mixings arise as a result of imposing an ${\bf S}_{3}\otimes {\bf Z}_{2}$ flavor symmetry, and an extra ${\bf Z}^{e}_{2}$ symmetry is considered to suppress some Yukawa couplings in the lepton sector.
Additionally, a non-conventional assignment of the matter content under the ${\bf S}_{3}$ symmetry is made, and this is the clear difference between the previous studies and this one. As a consequence, in the lepton sector, the effective neutrino mass matrix possesses approximately the $\mu-\tau$ symmetry~\cite{Mohapatra:1998ka, Lam:2001fb, Kitabayashi:2002jd, Grimus:2003kq, Koide:2003rx, Fukuyama:1997ky, Gupta:2013it, Grimus:2012hu, Xing:2015fdg, Luo:2014upa, Ahn:2014gva, Rivera-Agudelo:2015vza, Zhao:2016orh, Biswas:2016yan}. The breaking of the $\mu-\tau$ symmetry induces a sizable nonzero $\theta_{13}$, and the deviation of $\theta_{23}$ from $45^{\circ}$ is strongly controlled by a free parameter $\epsilon$ and the complex neutrino masses. An analytic study of the extreme Majorana phases is then performed, since these turn out to be relevant to enhance or suppress the reactor and atmospheric angles. Thus, we can constrain the parameter space of the $\epsilon$ parameter and the lightest neutrino mass that accommodate the mixing angles. The highlighted results are: a) the normal hierarchy is ruled out, since the reactor angle comes out tiny for any values of the Majorana phases; b) for the inverted hierarchy there is one combination of the extreme phases where the values of the reactor and atmospheric angles are compatible with the data at the $2$--$3\sigma$ C.L., but the parameter space is tight; c) the model favors the degenerate ordering for one combination of the extreme Majorana phases. In this case, the reactor and atmospheric angles are compatible with the experimental data for a large set of values of the free parameters. The quark sector will be discussed exhaustively in a future work; however, some preliminary results will be commented on. The paper is organized as follows: we present, in Sec. II, the matter content of the model and also its assignment under the ${\bf S}_{3}$ symmetry.
In addition, we briefly explain the scalar sector and argue about the need to include the ${\bf Z}^{e}_{2}$ symmetry. In Sec. III, the fermion mass matrices are obtained, and we focus on the lepton sector to obtain the mixing matrices. We present, in Sec. IV, the PMNS matrix that the model predicts. Finally, we present an analytic study of the mixing angles and our results in Sec. V, and we close our discussion with a summary of conclusions. \section{Flavored Left-Right Symmetric Model} The minimal LRSM is based on the usual $SU(3)_{c}\otimes SU(2)_{L}\otimes SU(2)_{R}\otimes U(1)_{B-L}$ gauge symmetry, where Parity Symmetry, $\mathcal{P}$, is assumed to be a symmetry at high energy but is broken at the electroweak scale, since there are no right-handed currents. The matter fields and their respective quantum numbers (in parentheses) under the gauge symmetry are given by {\scriptsize \begin{align} Q_{(L, R)}&=\left( \ba{c} u \\ d \\ \end{array} \right)_{(L, R)}\sim {\left(3, (2,1), (1,2), 1/3\right)},\quad (L, R)=\left( \ba{c} \nu \\ \ell \\ \end{array} \right)_{(L, R)}\sim { (1, (2,1), (1,2), -1)},\nonumber\\ \Phi&=\left( \ba{cc} \phi^{0} & \phi^{'+} \\ \phi^{-} & \phi^{'0} \\ \end{array} \right)\sim \left(1, 2, 2, 0 \right);\quad \Delta_{(L, R)}=\left( \ba{cc} \frac{\delta^{+}}{2} & \delta^{++} \\ \delta^{0} & -\frac{\delta^{+}}{2} \\ \end{array} \right)_{(L, R)}\sim \left(1, (3,1), (1,3), 2 \right). \label{eq1} \end{align}} The gauge invariant Yukawa mass term is given by {\scriptsize \begin{align} -\mathcal{L}_{Y}=\bar{Q}_{L}\left[y^{q}\Phi+ \tilde{y}^{q}\tilde{\Phi} \right]Q_{R}+ \bar{L}\left[y^{\ell}\Phi+ \tilde{y}^{\ell}\tilde{\Phi} \right]R+ y^{L}\bar{L}\Delta_{L}L^{c}+y^{R}\bar{R}^{c}\Delta_{R}R+h.c. \label{yt} \end{align}} where the family indices have been suppressed and $\tilde{\Phi}_{i}=-i\sigma_{2}\Phi^{\ast}_{i}i\sigma_{2}$.
Here, Parity Symmetry will be assumed in the above Lagrangian; this requires that $\Psi_{i L}\leftrightarrow \Psi_{i R}$, $\Phi_{i} \leftrightarrow \Phi^{\dg}_{i}$ and $\Delta_{i L}\leftrightarrow \Delta^{\dg}_{i R}$ for fermions and scalar fields, respectively. Thereby, the number of Yukawa couplings is substantially reduced, and so is the number of gauge couplings. In particular, for the Yukawa couplings we have that $y=y^{\dagger}$, $\tilde{y}=\tilde{y}^{\dagger}$ and $y^{R}=y^{L}$. On the other hand, for our purposes the scalar potential will be left aside. In the minimal LRSM the spontaneous symmetry breaking proceeds as follows: Parity Symmetry is broken at the scale where the right-handed triplet $\Delta_{R}$ acquires its vacuum expectation value (vev). At this first stage the RHN's become massive; the rest of the particles acquire mass once the Higgs scalars get their vev's. Explicitly, {\scriptsize \begin{align} \langle\Delta_{L, R}\rangle= \left( \ba{cc} 0 & 0 \\ v_{L,R} & 0 \\ \end{array} \right),\quad \langle\Phi\rangle= \left( \ba{cc} k & 0 \\ 0 & k^{\prime} \\ \end{array} \right), \quad \langle \tilde{\Phi}\rangle= \left( \ba{cc} k^{\prime \ast} & 0 \\ 0 & k^{\ast} \\ \end{array} \right).\label{eq6} \end{align}} As a result, the Yukawa mass term is given by {\scriptsize \begin{align} -\mathcal{L}_{Y}=\bar{q}_{i L} \left({\bf M}_{q} \right)_{ij}q_{j R}+\bar{\ell}_{i L} \left( {\bf M}_{\ell}\right)_{ij}\ell_{j R} +\dfrac{1}{2}\bar{\nu}_{i L}\left({\bf M}_{\nu}\right)_{ij}\nu^{c}_{j L}+\dfrac{1}{2}\bar{\nu}^{c}_{i R}\left({\bf M}_{R}\right)_{ij}\nu_{j R}+h.c.\label{eq7} \end{align}} where the type I see-saw mechanism has been realized, ${\bf M}_{\nu}=-{\bf M}_{D} {\bf M}^{-1}_{R} {\bf M}^{T}_{D}$; the ${\bf M}_{L}$ contribution has been neglected for simplicity. In the present model, the Yukawa mass term will be controlled by the ${\bf S}_{3}$ flavor symmetry.
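The scale suppression behind the type I see-saw relation ${\bf M}_{\nu}=-{\bf M}_{D} {\bf M}^{-1}_{R} {\bf M}^{T}_{D}$ can be illustrated numerically; the mass scales in the following sketch are purely hypothetical inputs chosen for illustration, not values extracted from the model:

```python
import numpy as np

# Type I see-saw: M_nu = -M_D M_R^{-1} M_D^T. The numbers are
# illustrative only; they show how a heavy M_R suppresses the
# light neutrino masses relative to the Dirac scale.
M_D = np.diag([1e-3, 0.1, 1.0])   # Dirac mass scale in GeV (hypothetical)
M_R = 1e14 * np.eye(3)            # heavy right-handed Majorana scale in GeV

M_nu = -M_D @ np.linalg.inv(M_R) @ M_D.T
print(np.abs(np.diag(M_nu)) * 1e9)   # light neutrino masses in eV
```

Even a Dirac entry at the electroweak scale is pushed far below the eV range once $M_R$ sits near a grand-unification-like scale, which is the point of the mechanism.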
The non-Abelian group ${\bf S}_{3}$ is the permutation group of three objects, and it has three irreducible representations: two 1-dimensional ones, ${\bf 1}_{S}$ and ${\bf 1}_{A}$, and one 2-dimensional representation, ${\bf 2}$ (for a detailed study see \cite{Ishimori:2010au}). The multiplication rules among them are {\scriptsize \begin{align}\label{rules} &{\bf 1}_{S}\otimes {\bf 1}_{S}={\bf 1}_{S},\quad {\bf 1}_{S}\otimes {\bf 1}_{A}={\bf 1}_{A},\quad {\bf 1}_{A}\otimes {\bf 1}_{S}={\bf 1}_{A},\quad {\bf 1}_{A}\otimes {\bf 1}_{A}={\bf 1}_{S},\nonumber\\& {\bf 1}_{S}\otimes {\bf 2}={\bf 2},\quad {\bf 1}_{A}\otimes {\bf 2}={\bf 2},\quad {\bf 2}\otimes {\bf 1}_{S}={\bf 2},\quad {\bf 2}\otimes {\bf 1}_{A}={\bf 2};\nonumber\\ &\begin{pmatrix} a_{1} \\ a_{2} \end{pmatrix}_{{\bf 2}} \otimes \begin{pmatrix} b_{1} \\ b_{2} \end{pmatrix}_{{\bf 2}} = \left(a_{1}b_{1}+a_{2}b_{2}\right)_{{\bf 1}_{S}} \oplus \left(a_{1}b_{2}-a_{2}b_{1}\right)_{{\bf 1}_{A}} \oplus \begin{pmatrix} a_{1}b_{2}+a_{2}b_{1} \\ a_{1}b_{1}-a_{2}b_{2} \end{pmatrix}_{{\bf 2}}. \end{align}} Having briefly introduced the gauge group and the non-Abelian flavor group, let us build the gauge- and flavor-invariant Yukawa mass term. To do this, we will consider three Higgs bidoublets as well as three left-right triplets, with the purpose of getting the mixing in the lepton sector. Here, we want to emphasize a clear difference between this model and the previous ones with the ${\bf S}_{3}$ symmetry. In our model, the quark and lepton families have been assigned in a different way under the irreducible representations of ${\bf S}_{3}$. Explicitly, for the quarks and the Higgs sector respectively, the first and second families have been put together in a flavor doublet ${\bf 2}$, and the third family is a singlet ${\bf 1}_{S}$. On the contrary, for the lepton sector, the first family is a singlet ${\bf 1}_{S}$ and the second and third families are put in a doublet ${\bf 2}$.
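The multiplication rules above can be verified numerically in the real 2-dimensional representation of ${\bf S}_{3}$, generated by a rotation by $2\pi/3$ and a reflection; this basis choice is an assumption on our part, made so that the stated product rules hold literally:

```python
import numpy as np

# Real 2-dim representation of S3 ~ D3: rotation by 2*pi/3 plus a
# reflection generate all six group elements (assumed basis).
c, s = np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)
C = np.array([[c, -s], [s, c]])          # cyclic generator (even permutation)
S = np.array([[-1.0, 0.0], [0.0, 1.0]])  # reflection (odd permutation)
elements = [np.eye(2), C, C @ C, S, C @ S, C @ C @ S]

def doublet_product(a, b):
    """2 (x) 2 decomposition: 1_S, 1_A, and the doublet component."""
    return (a @ b,
            a[0] * b[1] - a[1] * b[0],
            np.array([a[0] * b[1] + a[1] * b[0],
                      a[0] * b[0] - a[1] * b[1]]))

rng = np.random.default_rng(0)
a, b = rng.normal(size=2), rng.normal(size=2)
s0, s1, d = doublet_product(a, b)
for g in elements:
    t0, t1, td = doublet_product(g @ a, g @ b)
    assert np.isclose(t0, s0)                     # 1_S: invariant
    assert np.isclose(t1, np.linalg.det(g) * s1)  # 1_A: sign of permutation
    assert np.allclose(td, g @ d)                 # 2: transforms as a doublet
print("S3 product rules verified for all six group elements")
```

The singlet combination is invariant, the antisymmetric singlet flips sign under odd permutations, and the doublet combination transforms covariantly, exactly as Eq.~(\ref{rules}) states.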
The advantage of making this choice is that the quark mass matrices may be put into a two-texture form that fits the CKM matrix very well; in the lepton sector, on the other hand, the appearance of the approximate $\mu-\tau$ symmetry in the effective neutrino mass matrix is a good signal for understanding the mixings. Remarkably, the NOvA Collaboration is testing the $\mu-\tau$ symmetry and some results have been released \cite{Adamson:2017qqn}. The matter content of the model transforms nontrivially under the ${\bf S}_{3}$ symmetry, as displayed in the table below. Here, the ${\bf Z_{2}}$ symmetry has been added in order to forbid some Yukawa couplings in the lepton sector. \begin{table}[ht] \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \hline {\footnotesize Matter} & {\footnotesize $Q_{I (L, R)}$} & {\footnotesize $Q_{3 (L, R)}$} & {\footnotesize $(L_{1}, R_{1})$} & {\footnotesize $(L_{J}, R_{J})$} & {\footnotesize $\Phi_{I}$} & {\footnotesize $\Phi_{3}$} & {\footnotesize $\Delta_{I(L, R)}$ } & {\footnotesize $\Delta_{3 (L, R)}$} \\ \hline {\footnotesize \bf $S_{3}$} & {\footnotesize \bf $2$} & {\footnotesize \bf $1_{S}$} & {\footnotesize \bf $1_{S}$} & {\footnotesize \bf $2$} & {\footnotesize \bf $2$} & {\footnotesize \bf $1_{S}$} & {\footnotesize \bf $2$} & {\footnotesize \bf $1_{S}$} \\ \hline {\footnotesize \bf $Z_{2}$} & {\footnotesize $1$} & {\footnotesize $1$} & {\footnotesize $1$} & {\footnotesize $-1$} & {\footnotesize $1$} & {\footnotesize $1$} & {\footnotesize \bf $-1$} & {\footnotesize \bf $1$} \\ \hline \hline \end{tabular}\caption{Non-minimal left-right model.
Here, $I=1,2$ and $J=2,3$.} \end{center} \end{table} Thus, the most general Yukawa mass term, that respects the ${\bf S}_{3}\otimes {\bf Z}_{2}$ flavour symmetry and the gauge group, is given as {\scriptsize \begin{align} -\mathcal{L}_{Y}&=y^{q}_{1}\left[\bar{Q}_{1 L}\left(\Phi_{1} Q_{2 R}+\Phi_{2} Q_{1 R}\right)+\bar{Q}_{2 L}\left(\Phi_{1} Q_{1 R}-\Phi_{2}Q_{2 R}\right)\right]+y^{q}_{2}\left[\bar{Q}_{1 L}\Phi_{3}Q_{1 R}+\bar{Q}_{2 L}\Phi_{3} Q_{2 R}\right]+y^{q}_{3}\left[\bar{Q}_{1 L}\Phi_{1}+\bar{Q}_{2 L}\Phi_{2}\right]Q_{3 R}\nonumber\\&+y^{q}_{4}\bar{Q}_{3 L}\left[\Phi_{1}Q_{1 R}+\Phi_{2}Q_{2 R}\right]+y^{q}_{5}\bar{Q}_{3 L}\Phi_{3}Q_{3 R}+\tilde{y}^{q}_{1}\left[\bar{Q}_{1 L}\left(\tilde{\Phi}_{1} Q_{2 R}+\tilde{\Phi}_{2}Q_{1 R}\right)+\bar{Q}_{2 L}\left(\tilde{\Phi}_{1} Q_{1 R}-\tilde{\Phi}_{2}Q_{2 R}\right)\right]\nonumber\\&+\tilde{y}^{q}_{2}\left[\bar{Q}_{1 L}\tilde{\Phi}_{3}Q_{1 R}+\bar{Q}_{2 L}\tilde{\Phi}_{3} Q_{2 R}\right]+\tilde{y}^{q}_{3}\left[\bar{Q}_{1 L}\tilde{\Phi}_{1}+\bar{Q}_{2 L}\tilde{\Phi}_{2}\right]Q_{3 R}+\tilde{y}^{q}_{4}\bar{Q}_{3 L}\left[\tilde{\Phi}_{1}Q_{1 R}+\tilde{\Phi}_{2}Q_{2 R}\right]+\tilde{y}^{q}_{5}\bar{Q}_{3 L}\tilde{\Phi}_{3}Q_{3 R} +y^{\ell}_{1}\bar{L}_{1}\Phi_{3}R_{1}\nonumber\\&+y^{\ell}_{2}\left[(\bar{L}_{2}\Phi_{2}+\bar{L}_{3}\Phi_{1})R_{2}+(\bar{L}_{2}\Phi_{1}-\bar{L}_{3}\Phi_{2})R_{3} \right]+y^{\ell}_{3}\left[\bar{L}_{2}\Phi_{3}R_{2}+\bar{L}_{3}\Phi_{3}R_{3}\right]+ \tilde{y}^{\ell}_{1}\bar{L}_{1}\tilde{\Phi}_{3}R_{1}\nonumber\\&+\tilde{y}^{\ell}_{2}\left[(\bar{L}_{2}\tilde{\Phi}_{2}+\bar{L}_{3}\tilde{\Phi}_{1})R_{2}+(\bar{L}_{2}\tilde{\Phi}_{1}-\bar{L}_{3}\tilde{\Phi}_{2})R_{3} \right]+\tilde{y}^{ \ell}_{3}\left[\bar{L}_{2}\tilde{\Phi}_{3}R_{2}+\bar{L}_{3}\tilde{\Phi}_{3}R_{3}\right] +y^{L}_{1}\bar{L}_{1}\Delta_{3 L}L^{c}_{1}+y^{L}_{2}\bar{L}_{1}\left[\Delta_{1 L }L^{c}_{2}+\Delta_{2 L}L^{c}_{3}\right]\nonumber\\&+y^{L}_{3} \left[ \bar{L}_{2}\Delta_{1 L}+\bar{L}_{3}\Delta_{2 
L}\right]L^{c}_{1}+y^{L}_{4}\left[\bar{L}_{2}\Delta_{3 L}L^{c}_{2}+\bar{L}_{3}\Delta_{3 L}L^{c}_{3}\right] +y^{R}_{1}\bar{R}^{c}_{1}\Delta_{3 R}R_{1}+ y^{R}_{2}\bar{R}^{c}_{1}\left[\Delta_{1 R}R_{2}+\Delta_{2 R}R_{3}\right]\nonumber\\&+y^{R}_{3}\left[\bar{R}^{c}_{2}\Delta_{1 R}+\bar{R}^{c}_{3}\Delta_{2 R}\right]R_{1}+ y^{R}_{4}\left[\bar{R}^{c}_{2}\Delta_{3 R}R_{2}+\bar{R}^{c}_{3}\Delta_{3 R}R_{3}\right]+h.c.,\label{eq2} \end{align}} In this flavored model, we have to keep in mind that Parity Symmetry will be assumed in the above Lagrangian, in such a way that the number of Yukawa couplings is reduced. Moreover, we stress that an extra symmetry ${\bf Z}^{e}_{2}$ is used to get a diagonal charged lepton and Dirac neutrino mass matrix, whereas the Majorana mass matrices retain their forms. Explicitly, in the above Lagrangian, we demand that {\footnotesize \begin{align} L_{3}\leftrightarrow-L_{3},\quad R_{3}\leftrightarrow-R_{3},\quad \Delta_{2 L}\leftrightarrow -\Delta_{2 L},\quad \Delta_{2 R}\leftrightarrow -\Delta_{2 R},\label{exss} \end{align}} so that the terms $\bar{L}_{2}R_{3}$ and $\bar{L}_{3}R_{2}$ are absent in the lepton sector. As was already commented, because of our interest in studying the masses and mixings of the fermions, the scalar potential will not be analyzed for the moment. We ought to comment that this study is not trivial, since the scalar sector has been augmented so that the potential is rather complicated, but this study has to be done eventually since it is crucial for theoretical and phenomenological purposes.
From Eq.(\ref{eq6}) and Eq.(\ref{eq2}), the mass matrices have the following structure {\scriptsize \begin{align} {\bf M}_{q}=\begin{pmatrix} a_{q}+b^{\prime}_{q} & b_{q} & c_{q} \\ b_{q} & a_{q}-b^{\prime}_{q} & c^{\prime}_{q} \\ f_{q} & f^{\prime}_{q} & g_{q} \end{pmatrix},\quad {\bf M}_{\ell}=\begin{pmatrix} a_{\ell} & 0 & 0 \\ 0 & b_{\ell}+c_{\ell} & 0 \\ 0 & 0 & b_{\ell}-c_{\ell} \end{pmatrix} ,\quad {\bf M}_{(L, R)}=\begin{pmatrix} a_{(L, R)} & b_{(L, R)} & b^{\prime}_{(L, R)} \\ b_{(L, R)} & c_{(L, R)} & 0 \\ b^{\prime}_{(L, R)} & 0 & c_{(L, R)} \end{pmatrix},\label{eq8} \end{align}} where $q= u, d$ and $\ell=e, \nu_{D}$. Explicitly, the matrix elements for quarks and leptons are given as {\scriptsize \begin{align} &a_{u}=y^{q}_{2}k_{3}+\tilde{y}^{q}_{2}k^{\prime \ast}_{3},\quad b^{\prime}_{u}=y^{q}_{1}k_{2}+\tilde{y}^{q}_{1}k^{\prime \ast}_{2} ,\quad b_{u}=y^{q}_{1}k_{1}+\tilde{y}^{q}_{1}k^{\prime \ast}_{1},\quad c_{u}=y^{q}_{3}k_{1}+\tilde{y}^{q}_{3}k^{\prime \ast}_{1},\, c^{\prime}_{u}=y^{q}_{3}k_{2}+\tilde{y}^{q}_{3}k^{\prime \ast}_{2},\quad f_{u}= y^{\dg q}_{3}k_{1}+\tilde{y}^{\dg q}_{3}k^{\prime \ast}_{1},\nonumber\\ &f^{\prime}_{u}= y^{\dg q}_{3}k_{2}+\tilde{y}^{\dg q}_{3}k^{\prime \ast}_{2} ,\quad g_{u}=y^{q}_{5}k_{3}+\tilde{y}^{q}_{5}k^{\prime \ast}_{3},\quad a_{d}=y^{q}_{2}k^{\prime}_{3}+\tilde{y}^{q}_{2}k^{\ast}_{3},\quad b^{\prime}_{d}=y^{q}_{1}k^{\prime}_{2}+\tilde{y}^{q}_{1}k^{\ast}_{2},\quad b_{d}=y^{q}_{1}k^{\prime}_{1}+\tilde{y}^{q}_{1}k^{\ast}_{1},\quad c_{d}=y^{q}_{3}k^{\prime}_{1}+\tilde{y}^{q}_{3}k^{\ast}_{1};\nonumber\\ &c^{\prime}_{d}=y^{q}_{3}k^{\prime}_{2}+\tilde{y}^{q}_{3}k^{\ast}_{2},\quad f_{d}= y^{\dg q}_{3}k^{\prime}_{1}+\tilde{y}^{\dg q}_{3}k^{\ast}_{1},\quad f^{\prime}_{d}= y^{\dg q}_{3}k^{\prime}_{2}+\tilde{y}^{\dg q}_{3}k^{\ast}_{2} ,\quad g_{d}=y^{q}_{5}k^{\prime}_{3}+\tilde{y}^{q}_{5}k^{\ast}_{3},\quad a_{D}=y^{\ell}_{1}k_{3}+\tilde{y}^{\ell}_{1}k^{\prime \ast}_{3},\quad b_{D}=y^{\ell}_{3}k_{3}+\tilde{y}_{3}k^{\prime \ast}_{3},\nonumber\\ &c_{D}=y^{\ell}_{2}k_{2}+\tilde{y}_{2}k^{\prime \ast}_{2},\quad a_{e}=y^{\ell}_{1}k^{\prime}_{3}+\tilde{y}^{\ell}_{1}k^{\ast}_{3},\quad b_{e}=y^{\ell}_{3}k^{\prime}_{3}+\tilde{y}_{3}k^{\ast}_{3},\quad c_{e}=y^{\ell}_{2}k^{\prime}_{2}+\tilde{y}_{2}k^{\ast}_{2},\quad a_{(L, R)}=y^{R}_{1}v_{1(L, R)},\quad b_{(L, R)}=y^{R}_{2}v_{2(L, R)}\nonumber\\ &b^{\prime}_{(L, R)}=y^{R}_{2}v_{3(L, R)},\quad c_{(L, R)}=y^{R}_{4}v_{1(L, R)}. \label{eq9} \end{align}} where Parity Symmetry has been considered. Remarkably, we will end up having a complex symmetric (diagonal) quark (lepton) mass matrix if the vev's are complex; in the literature this scenario is well known as {\bf pseudomanifest left-right symmetry} \cite{Langacker:1989xa, Harari:1983gq}. If the vev's are real, the quark (lepton) mass matrix is hermitian (real) and the number of CP phases is reduced; this framework is known as {\bf manifest left-right symmetry}\cite{Beg:1977ti, Langacker:1989xa}. In this work, we will discuss only the first framework; the second one, together with its consequences for the quark sector, will be studied in an extended version of the model. \section{Masses and Mixings} In principle, we can further reduce the number of free parameters in the mass matrices by considering a certain alignment of the vev's, see Eq.(\ref{eq9}). Thus, for the moment, we will assume that the vev's of $\Phi_{1}$ and $\Phi_{2}$ are degenerate. Explicitly, we demand that $k_{1}=k_{2}\equiv k$ and $k^{\prime}_{1}=k^{\prime}_{2}\equiv k^{\prime}$. Additionally, $v_{1 R}=v_{2 R}=v_{R}$. Therefore, we have: {\bf Pseudomanifest left-right theory}.
{\scriptsize \begin{align} {\bf M}_{q}=\begin{pmatrix} a_{q}+b_{q} & b_{q} & c_{q} \\ b_{q} & a_{q}-b_{q} & c_{q} \\ c_{q} & c_{q} & g_{q} \end{pmatrix},\quad {\bf M}_{\ell}=\begin{pmatrix} a_{\ell} & 0 & 0 \\ 0 & b_{\ell}+c_{\ell} & 0 \\ 0 & 0 & b_{\ell}-c_{\ell} \end{pmatrix} ,\quad {\bf M}_{(L, R)}=\begin{pmatrix} a_{(L, R)} & b_{(L, R)} & b_{(L, R)} \\ b_{(L, R)} & c_{(L, R)} & 0 \\ b_{(L, R)} & 0 & c_{(L, R)} \end{pmatrix}. \label{eq10} \end{align}} {\bf Manifest left-right theory}. {\scriptsize \begin{align} {\bf M}_{q}=\begin{pmatrix} a_{q}+b_{q} & b_{q} & c_{q} \\ b_{q} & a_{q}-b_{q} & c_{q} \\ c^{\ast}_{q} & c^{\ast}_{q} & g_{q} \end{pmatrix},\quad {\bf M}_{\ell}=\begin{pmatrix} a_{\ell} & 0 & 0 \\ 0 & b_{\ell}+c_{\ell} & 0 \\ 0 & 0 & b_{\ell}-c_{\ell} \end{pmatrix} ,\quad {\bf M}_{(L, R)}=\begin{pmatrix} a_{(L, R)} & b_{(L, R)} & b_{(L, R)} \\ b_{(L, R)} & c_{(L, R)} & 0 \\ b_{(L, R)} & 0 & c_{(L, R)} \end{pmatrix}. \label{eq10.m} \end{align}} As was already commented, the full analysis of the quark masses and mixings will be left aside for the moment. However, we make some brief comments. In the {\bf pseudomanifest} framework, the ${\bf M}_{q}$ mass matrix may be put into a two-texture form that fits the CKM matrix very well. The {\bf manifest} framework is tackled in a similar way. In that case, the quark mixing matrix has fewer free parameters than in the previous framework since the mass matrix is hermitian; this study, and its predictions for the mixing angles, is work in progress. \subsection{Charged Leptons} The ${\bf M}_{e}$ mass matrix is complex and diagonal, so one could directly identify the physical masses; however, we will make a similarity transformation in order to avoid fine tuning of the free parameters.
What we mean is the following: the ${\bf M}_{e}$ mass matrix is diagonalized by ${\bf U}_{e L}={\bf S}_{23}{\bf P}_{e}$ and ${\bf U}_{e R}={\bf S}_{23}{\bf P}^{\dg}_{e}$, that is, ${\hat{\bf M}_{e}}=\textrm{diag.}(\vert m_{e}\vert, \vert m_{\mu}\vert,\vert m_{\tau}\vert)={\bf U}^{\dg}_{e L}{\bf M}_{e}{\bf U}_{e R} ={\bf P}^{\dg}_{e}{\bf m}_{e}{\bf P}^{\dg}_{e}$ with ${\bf m}_{e}={\bf S}^{T}_{23}{\bf M}_{e}{\bf S}_{23}$. After factorizing the phases, we have ${\bf m}_{e}={\bf P}_{e}{\bf \bar{m}_{e}}{\bf P}_{e}$~ where {\scriptsize \begin{align} {\bf \bar{m}}_{e}=\textrm{diag.}(m_{e}, m_{\mu}, m_{\tau}),\quad {\bf S}_{23}=\begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix},\quad {\bf P}_{e}=\textrm{diag.}(e^{i\eta_{e}}, e^{i\eta_{\mu}}, e^{i\eta_{\tau}}) \label{eue} \end{align} } As a result, one obtains $\vert m_{e}\vert=\vert a_{e}\vert$, $\vert m_{\mu}\vert=\vert b_{e}-c_{e}\vert$ and $\vert m_{\tau}\vert=\vert b_{e}+c_{e}\vert$. \subsection{Neutrinos} On the other hand, the ${\bf M}_{\nu}$ effective neutrino mass matrix is given as {\scriptsize \begin{align} {\bf M}_{\nu}=\begin{pmatrix} \mathcal{X}a^{2}_{D}& -a_{D}\mathcal{Y}(b_{D}+c_{D}) & -a_{D}\mathcal{Y}(b_{D}-c_{D}) \\ -a_{D}\mathcal{Y}(b_{D}+c_{D})& \mathcal{W}(b_{D}+c_{D})^{2} & \mathcal{Z}(b^{2}_{D}-c^{2}_{D}) \\ -a_{D}\mathcal{Y}(b_{D}-c_{D})& \mathcal{Z}(b^{2}_{D}-c^{2}_{D}) & \mathcal{W}(b_{D}-c_{D})^{2} \end{pmatrix}\quad\textrm{where}\quad {\bf M}^{-1}_{R}\equiv\begin{pmatrix} \mathcal{X}& -\mathcal{Y} & -\mathcal{Y} \\ -\mathcal{Y} & \mathcal{W} & \mathcal{Z} \\ -\mathcal{Y} & \mathcal{Z} & \mathcal{W} \label{efm} \end{pmatrix} \end{align}} Now, as a hypothesis, we will assume that $b_{D}$ is larger than $c_{D}$; in this way the effective mass matrix can be written as {\scriptsize \begin{align} {\bf M}_{\nu}\equiv\begin{pmatrix} A_{\nu}& -B_{\nu}(1+\epsilon) & -B_{\nu}(1-\epsilon) \\ -B_{\nu}(1+\epsilon)& C_{\nu}(1+\epsilon)^{2} & D_{\nu}(1-\epsilon^{2}) \\ -B_{\nu}(1-\epsilon)&
D_{\nu}(1-\epsilon^{2}) & C_{\nu}(1-\epsilon)^{2}\label{efm2} \end{pmatrix} \end{align}} where $A_{\nu}\equiv \mathcal{X}a^{2}_{D}$, $B_{\nu} \equiv\mathcal{Y}a_{D}b_{D}$, $C_{\nu}\equiv\mathcal{W}b^{2}_{D}$ and $D_{\nu}\equiv\mathcal{Z}b^{2}_{D}$ are complex. Besides, $\epsilon\equiv c_{D}/b_{D}$ is complex too. Here, we want to stress that this last parameter will be treated as a perturbation of the effective mass matrix, such that $\vert \epsilon \vert \ll 1$. To be more specific, $\vert \epsilon \vert\leq 0.3$ in order to break the $\mu-\tau$ symmetry softly. Hereafter, we will therefore neglect the quadratic $\epsilon^{2}$ terms in the above matrix. Having done this, we go back to the effective neutrino mass matrix. In order to cancel the ${\bf S}_{23}$ contribution that comes from the charged lepton sector, we proceed as follows with ${\bf M}_{\nu}$. We know that $\hat{\bf M}_{\nu}=\textrm{diag.}(m_{\nu_{1}}, m_{\nu_{2}}, m_{\nu_{3}})={\bf U}^{\dg}_{\nu}{\bf M}_{\nu}{\bf U}^{\ast}_{\nu}$, so ${\bf U}_{\nu}={\bf S}_{23}{\bf \mathcal{U}_{\nu}}$, where the latter mixing matrix will be obtained below.
Then, $\hat{\bf M}_{\nu}={\bf \mathcal{U}^{\dg}_{\nu}}{\bf \mathcal{M}_{\nu}}{\bf \mathcal{U}^{\ast}_{\nu}}$ with {\scriptsize \begin{align} {\bf \mathcal{M}_{\nu}}= {\bf S}^{T}_{23}{\bf M}_{\nu}{\bf S}_{23}\approx\begin{pmatrix} A_{\nu}& -B_{\nu}(1-\epsilon) & -B_{\nu}(1+\epsilon) \\ -B_{\nu}(1-\epsilon)& C_{\nu}(1-2\epsilon) & D_{\nu} \\ -B_{\nu}(1+\epsilon)& D_{\nu} & C_{\nu}(1+2\epsilon)\label{efm3} \end{pmatrix} \end{align} } When the $\epsilon$ parameter is switched off, the effective mass matrix, denoted by ${\bf \mathcal{M}^{0}_{\nu}}$, possesses the $\mu-\tau$ symmetry and is diagonalized by {\scriptsize \begin{align} {\bf \mathcal{U}}^{0}_{\nu}=\begin{pmatrix} \cos{\theta}_{\nu}~e^{i(\eta_{\nu}+\pi)} & \sin{\theta}_{\nu}~e^{i(\eta_{\nu}+\pi)} & 0 \\ -\frac{\sin{\theta}_{\nu}}{\sqrt{2}}& \frac{\cos{\theta}_{\nu}}{\sqrt{2}} & -\frac{1}{\sqrt{2}} \\ -\frac{\sin{\theta}_{\nu}}{\sqrt{2}}& \frac{\cos{\theta}_{\nu}}{\sqrt{2}} & \frac{1}{\sqrt{2}} \end{pmatrix} \label{ubm} \end{align} } where the ${\bf \mathcal{M}^{0}_{\nu}}$ matrix elements are fixed in terms of the complex physical neutrino masses, the free parameter $\theta_{\nu}$ and the Dirac CP phase $\eta_{\nu}$.
To be more explicit, {\scriptsize \begin{align} A_{\nu}&=(m^{0}_{\nu_{1}}\cos^{2}{\theta}_{\nu}+m^{0}_{\nu_{2}}\sin^{2}{\theta}_{\nu})e^{2i(\eta_{\nu}+\pi)},\quad -B_{\nu}=\frac{\sin{2\theta_{\nu}}}{\sqrt{8}}(m^{0}_{\nu_{2}}-m^{0}_{\nu_{1}})e^{i(\eta_{\nu}+\pi)};\nonumber\\ C_{\nu}&=\frac{1}{2}(m^{0}_{\nu_{1}}\sin^{2}{\theta}_{\nu}+m^{0}_{\nu_{2}}\cos^{2}{\theta}_{\nu}+m^{0}_{\nu_{3}}),\quad D_{\nu}=\frac{1}{2}(m^{0}_{\nu_{1}}\sin^{2}{\theta}_{\nu}+m^{0}_{\nu_{2}}\cos^{2}{\theta}_{\nu}-m^{0}_{\nu_{3}}).\label{mne} \end{align} } Including the $\epsilon$ parameter, we can write the effective mass matrix as ${\bf \mathcal{M}}_{\nu}={\bf \mathcal{M}^{0}_{\nu}}+{\bf \mathcal{M}^{\epsilon}_{\nu}}$, where the second matrix contains the perturbation; then, applying ${\bf\mathcal{U}^{0}_{\nu}}$, one gets ${\bf \mathcal{M}}_{\nu}={\bf \mathcal{U}^{0\dg}_{\nu}}({\bf \mathcal{M}^{0}_{\nu}}+{\bf \mathcal{M}^{\epsilon}_{\nu}}){\bf \mathcal{U}^{0 \ast}_{\nu}}$. Explicitly, {\scriptsize \begin{align} {\bf \mathcal{M}}_{\nu}=\textrm{Diag.}(m^{0}_{\nu_{1}}, m^{0}_{\nu_{2}}, m^{0}_{\nu_{3}})+\begin{pmatrix} 0 & 0 &-\sin{\theta_{\nu}}(m^{0}_{\nu_{3}}+m^{0}_{\nu_{1}})\epsilon \\ 0 & 0 & \cos{\theta_{\nu}}(m^{0}_{\nu_{3}}+m^{0}_{\nu_{2}})\epsilon \\ -\sin{\theta_{\nu}}(m^{0}_{\nu_{3}}+m^{0}_{\nu_{1}})\epsilon& \cos{\theta_{\nu}}(m^{0}_{\nu_{3}}+m^{0}_{\nu_{2}})\epsilon & 0\label{mnp} \end{pmatrix} \end{align} } The contribution of the second matrix to the mixing matrix is given by {\scriptsize \begin{align} {\bf \mathcal{U}}^{\epsilon}_{\nu}\approx\begin{pmatrix} N_{1}& 0 & -N_{3}\sin{\theta_{\nu}}r_{1}\epsilon \\ 0 & N_{2} & N_{3}\cos{\theta_{\nu}}r_{2}\epsilon \\ N_{1}\sin{\theta_{\nu}}r_{1}\epsilon & -N_{2}\cos{\theta_{\nu}}r_{2}\epsilon & N_{3} \end{pmatrix} \label{ubpr} \end{align}} where we have defined the complex mass ratios $r_{(1, 2)}\equiv (m^{0}_{\nu_{3}}+m^{0}_{\nu_{(1, 2)}})/(m^{0}_{\nu_{3}}-m^{0}_{\nu_{(1, 2)}})$.
Here, $N_{1}$, $N_{2}$ and $N_{3}$ are the normalization factors which are given as {\scriptsize \begin{align} N_{1}=\left(1+\sin^{2}{\theta_{\nu}}\vert r_{1}\epsilon \vert^{2}\right )^{-1/2},\quad N_{2}=\left(1+\cos^{2}{\theta_{\nu}}\vert r_{2}\epsilon \vert^{2}\right)^{-1/2},\quad N_{3}=\left(1+\sin^{2}{\theta_{\nu}}\vert r_{1}\epsilon \vert^{2}+\cos^{2}{\theta_{\nu}}\vert r_{2}\epsilon \vert^{2}\right)^{-1/2}.\label{nofac} \end{align}} Finally, the effective mass matrix given in Eq.(\ref{efm2}) is diagonalized approximately by ${\bf U}_{\nu}\approx {\bf S}_{23}{\bf \mathcal{U}^{0}_{\nu}{\bf \mathcal{U}^{\epsilon}_{\nu}}}$. Therefore, the theoretical PMNS mixing matrix is written as $V_{PMNS}={\bf U}^{\dg}_{e L}{\bf U}_{\nu}\approx {\bf P}^{\dg}_{e}{\bf \mathcal{U}^{0}_{\nu}{\bf \mathcal{U}^{\epsilon}_{\nu}}}$. \section{PMNS Mixing Matrix} The PMNS mixing matrix is given explicitly as {\scriptsize \begin{align} {\bf V}_{PMNS}={\bf P}^{\dagger}_{e}\begin{pmatrix} \cos{\theta_{\nu}}N_{1}e^{i(\eta_{\nu}+\pi)}& \sin{\theta_{\nu}}N_{2}e^{i(\eta_{\nu}+\pi)} & \sin{2\theta_{\nu}}\frac{N_{3}}{2}(r_{2}-r_{1})\epsilon e^{i(\eta_{\nu}+\pi)} \\ -\frac{\sin{\theta_{\nu}}}{\sqrt{2}}N_{1}(1+r_{1}\epsilon)& \frac{\cos{\theta_{\nu}}}{\sqrt{2}}N_{2}(1+r_{2}\epsilon) & -\frac{N_{3}}{\sqrt{2}}\left[1-\epsilon~r_{3}\right] \\ -\frac{\sin{\theta_{\nu}}}{\sqrt{2}}N_{1}(1-r_{1}\epsilon)& \frac{\cos{\theta_{\nu}}}{\sqrt{2}}N_{2}(1-r_{2}\epsilon) & \frac{N_{3}}{\sqrt{2}}\left[1+\epsilon~ r_{3}\right] \end{pmatrix} \label{pmma} \end{align} } where $r_{3}\equiv r_{2}\cos^{2}{\theta_{\nu}}+r_{1}\sin^{2}{\theta_{\nu}}$. 
On the other hand, comparing the magnitudes of the entries of ${\bf V}_{PMNS}$ with the mixing matrix in the standard parametrization of the PMNS, we obtain the following expressions for the lepton mixing angles {\scriptsize \begin{align} \sin^{2}{\theta}_{13}&=\vert {\bf V}_{13}\vert^{2} =\frac{\sin^{2}{2\theta_{\nu}}}{4}N^{2}_{3}\vert \epsilon \vert^{2}~\vert r_{2}-r_{1} \vert^{2},\quad \sin^{2}{\theta}_{23}=\dfrac{\vert {\bf V}_{23}\vert^{2}}{1-\vert {\bf V}_{13}\vert^{2}}=\dfrac{N^{2}_{3}}{2}\frac{\vert 1-\epsilon~r_{3} \vert ^{2}}{1- \sin^{2}{\theta_{13}}},\nonumber\\ \sin^{2}{\theta_{12}}&=\dfrac{\vert {\bf V}_{12}\vert^{2}}{1-\vert {\bf V}_{13}\vert^{2}}= \dfrac{N^{2}_{2}\sin^{2}{\theta_{\nu}}}{1-\sin^{2}{\theta}_{13}}.\label{mixang} \end{align} } As can be noticed, if $\epsilon$ vanishes, one recovers the exact $\mu-\tau$ symmetry, where $\theta_{13}=0^{\circ}$ and $\theta_{23}=45^{\circ}$. Additionally, we have to point out that the reactor and atmospheric angles depend strongly on the neutrino mass ratios, so that these angles are sensitive to the Majorana phases. At the same time, the reactor angle does not depend on the phase of the parameter $\epsilon$, whereas the atmospheric one depends clearly on this phase. \section{Analytic Study and Results} In order to make an analytic study of the above formulas, let us emphasize that we are working in a perturbative regime, which means that~$\vert \epsilon \vert \leq 0.3$. Then the $N_{i}$ normalization factors should be of order $1$ so that, as is usual in models where the $\mu-\tau$ symmetry is broken softly, the solar angle is directly related to the free parameter $\theta_{\nu}$, as can be seen in Eq. (\ref{mixang}).
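As a quick numerical sanity check of Eq.~(\ref{mixang}), the sketch below (an illustrative transcription of the formulas, with arbitrary real test masses) verifies that switching off $\epsilon$ recovers the $\mu-\tau$ symmetric predictions $\theta_{13}=0^{\circ}$ and $\theta_{23}=45^{\circ}$:

```python
import math

# Illustrative transcription of Eq. (mixang); the masses are arbitrary
# real test values (in eV), not a fit to data.
def mixing_angles(eps, theta_nu, m1, m2, m3):
    r1 = (m3 + m1) / (m3 - m1)
    r2 = (m3 + m2) / (m3 - m2)
    r3 = r2 * math.cos(theta_nu)**2 + r1 * math.sin(theta_nu)**2
    N2 = (1 + math.cos(theta_nu)**2 * abs(r2 * eps)**2) ** -0.5
    N3 = (1 + math.sin(theta_nu)**2 * abs(r1 * eps)**2
            + math.cos(theta_nu)**2 * abs(r2 * eps)**2) ** -0.5
    s13 = (math.sin(2 * theta_nu)**2 / 4) * N3**2 * abs(eps)**2 * abs(r2 - r1)**2
    s23 = 0.5 * N3**2 * abs(1 - eps * r3)**2 / (1 - s13)
    s12 = N2**2 * math.sin(theta_nu)**2 / (1 - s13)
    return s12, s13, s23

theta_nu = math.asin(1 / math.sqrt(3))   # solar-angle input used in the text
s12, s13, s23 = mixing_angles(0.0, theta_nu, 0.01, 0.02, 0.05)
print(s13, s23)  # 0.0 0.5 -> theta13 = 0 and theta23 = 45 deg
```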
Therefore, at leading order we have that \begin{align} \sin^{2}{\theta_{12}}=\sin^{2}{\theta_{\nu}},\qquad\textrm{then},\qquad \theta_{12}=\theta_{\nu}.\label{sol} \end{align} Hence, throughout the analytic study we will consider $\sin{\theta_{\nu}}\approx 1/\sqrt{3}$, which is a good approximation to the solar angle. Additionally, we will analyze the extreme Majorana phases of the complex neutrino masses for each hierarchy. What we mean by extreme phases is that these can be either $0$ or $\pi$. Explicitly, $m^{0}_{\nu_{i}}=\pm \vert m^{0}_{\nu_{i}}\vert $, for $i=1, 2, 3$, where $\vert m^{0}_{\nu_{i}}\vert$ is the absolute mass. As we will see, these phases can be relevant to enhance or suppress the reactor and atmospheric angles. In the following, the lightest neutrino mass and the $\vert \epsilon \vert$ parameter will be constrained. {\bf Normal hierarchy}. From experimental data, the absolute neutrino masses are $\vert m^{0}_{\nu_{3}}\vert= \sqrt{\Delta m^{2}_{31}+\vert m^{0 }_{\nu_{1}}\vert^{2}}$ and $\vert m^{0}_{\nu_{2}}\vert=\sqrt{\Delta m^{2}_{21}+\vert m^{0}_{\nu_{1}}\vert^{2} }$.
Now, the mass ratios $r_{1}$, $r_{2}$ and $r_{3}$ can be approximated as follows {\scriptsize \begin{align} r_{1}&\approx 1+2\frac{m^{0}_{\nu_{1}}}{m^{0}_{\nu_{3}}}\approx 1,\qquad r_{2}\approx 1+2\frac{m^{0}_{\nu_{2}}}{m^{0}_{\nu_{3}}},\qquad r_{3}\approx 1+2\frac{m^{0}_{\nu_{2}}}{m^{0}_{\nu_{3}}}\cos^{2}{\theta_{\nu}}\label{massrat} \end{align}} As a result, we obtain {\scriptsize \begin{align} \sin^{2}{\theta}_{13}&\approx\sin^{2}{2\theta_{\nu}}\vert \epsilon \vert^{2} \left|\frac{m^{0}_{\nu_{2}}}{m^{0}_{\nu_{3}}}\right|^{2},\quad \sin^{2}{\theta}_{23}\approx \dfrac{1}{2}\dfrac{ \left| 1-\epsilon \left(1+2\frac{m^{0}_{\nu_{2}}}{m^{0}_{\nu_{3}}}\cos^{2}{\theta_{\nu}} \right) \right|^{2} }{1-\sin^{2}{\theta}_{13}}.\label{annh} \end{align}} As can be noticed, if the strict normal hierarchy is assumed, the reactor angle comes out very small since $\vert m^{0}_{\nu_{2}}/m^{0}_{\nu_{3}}\vert^{2}\approx \Delta m^{2}_{21}/\Delta m^{2}_{31}$ and $\vert \epsilon \vert \leq 0.3$. This holds for any extreme Majorana phases in the neutrino masses, and this result does not change substantially if $m^{0}_{\nu_{1}}$ is non-zero. Therefore, the normal spectrum is ruled out for~$\vert \epsilon \vert \leq 0.3$. {\bf Inverted hierarchy}. In this case, we have that $\vert m^{0}_{\nu_{2}}\vert=\sqrt{\Delta m^{2}_{13}+\Delta m^{2}_{21}+\vert m^{0 }_{\nu_{3}}\vert^{2}}$ and $\vert m^{0}_{\nu_{1}}\vert=\sqrt{\Delta m^{2}_{13}+\vert m^{0}_{\nu_{3}}\vert^{2}}$.
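The normal-hierarchy suppression just quoted can be checked with a two-line estimate of Eq.~(\ref{annh}); the mass-squared differences below are typical global-fit values, assumed here for illustration only:

```python
# Strict normal hierarchy: m1 ~ 0, so |m2/m3|^2 ~ dm21/dm31 and the reactor
# angle of Eq. (annh) stays tiny even at the maximal perturbation |eps| = 0.3.
dm21, dm31 = 7.5e-5, 2.5e-3        # eV^2, typical global-fit values (assumption)
sin2_2theta_nu = 8.0 / 9.0         # sin^2(2*theta_nu) for sin(theta_nu) = 1/sqrt(3)
eps = 0.3                          # maximal perturbation considered in the text

s13 = sin2_2theta_nu * eps**2 * (dm21 / dm31)
print(s13)  # ~2.4e-3, an order of magnitude below the measured ~0.022
```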
The mass ratios $r_{1}$, $r_{2}$ and $r_{3}$ are written approximately as {\scriptsize \begin{align} r_{(1, 2)}&\approx -\left(1+2\frac{m^{0}_{\nu_{3}}}{m^{0}_{\nu_{(1,2)}}}\right),\quad r_{2}-r_{1}\approx 2m^{0}_{\nu_{3}}\left[\frac{m^{0}_{\nu_{2}}-m^{0}_{\nu_{1}}}{m^{0}_{\nu_{2}}m^{0}_{\nu_{1}}}\right],\quad r_{3}\approx -\left[1+2\frac{m^{0}_{\nu_{3}}\left(m^{0}_{\nu_{1}}\cos^{2}{\theta_{\nu}}+m^{0}_{\nu_{2}}\sin^{2}{\theta_{\nu}}\right) }{m^{0}_{\nu_{2}}m^{0}_{\nu_{1}}}\right]\label{massratin} \end{align}} Due to the mass difference $m^{0}_{\nu_{2}}-m^{0}_{\nu_{1}}$ in the factor $r_{2}-r_{1}$, the reactor angle can be small or large, since the relative signs of these two masses may conspire to achieve it. Then, there are four independent cases where the signs of the masses can substantially affect the mixing angles: \begin{itemize} \item {\bf Case A}. If $m^{0}_{\nu_{i}}> 0$. {\scriptsize \begin{align} r_{2}-r_{1}\approx 2\vert m^{0}_{\nu_{3}}\vert \left[\frac{\vert m^{0}_{\nu_{2}}\vert -\vert m^{0}_{\nu_{1}}\vert }{\vert m^{0}_{\nu_{2}}\vert \vert m^{0}_{\nu_{1}}\vert}\right],\quad r_{3}\approx -\left[1+2\frac{\vert m^{0}_{\nu_{3}}\vert \left(\vert m^{0}_{\nu_{2}}\vert \sin^{2}{\theta_{\nu}}+\vert m^{0}_{\nu_{1}}\vert \cos^{2}{\theta_{\nu}}\right) }{\vert m^{0}_{\nu_{2}}\vert \vert m^{0}_{\nu_{1}}\vert} \right]\label{caih} \end{align}} \item {\bf Case B}. If $m^{0}_{\nu_{(2, 1)}}> 0$ and $m^{0}_{\nu_{3}}<0$. {\scriptsize \begin{align} r_{2}-r_{1}\approx -2\vert m^{0}_{\nu_{3}}\vert \left[\frac{\vert m^{0}_{\nu_{2}}\vert -\vert m^{0}_{\nu_{1}}\vert }{\vert m^{0}_{\nu_{2}}\vert \vert m^{0}_{\nu_{1}}\vert}\right],\quad r_{3}\approx -\left[1-2\frac{\vert m^{0}_{\nu_{3}}\vert \left(\vert m^{0}_{\nu_{2}}\vert \sin^{2}{\theta_{\nu}}+\vert m^{0}_{\nu_{1}}\vert \cos^{2}{\theta_{\nu}}\right) }{\vert m^{0}_{\nu_{2}}\vert \vert m^{0}_{\nu_{1}}\vert} \right]\label{cbih} \end{align}} \item {\bf Case C}. If $m^{0}_{\nu_{(3, 2)}}> 0$ and $m^{0}_{\nu_{1}}<0$.
{\scriptsize \begin{align} r_{2}-r_{1}\approx -2\vert m^{0}_{\nu_{3}}\vert \left[\frac{\vert m^{0}_{\nu_{2}}\vert +\vert m^{0}_{\nu_{1}}\vert }{\vert m^{0}_{\nu_{2}}\vert \vert m^{0}_{\nu_{1}}\vert}\right],\quad r_{3}\approx -\left[1-2\frac{\vert m^{0}_{\nu_{3}}\vert \left(\vert m^{0}_{\nu_{2}}\vert \sin^{2}{\theta_{\nu}}-\vert m^{0}_{\nu_{1}}\vert \cos^{2}{\theta_{\nu}}\right)}{\vert m^{0}_{\nu_{2}}\vert \vert m^{0}_{\nu_{1}}\vert} \right]\label{ccih} \end{align}} \item {\bf Case D}. If $m^{0}_{\nu_{2}}>0$ and $m^{0}_{\nu_{(3, 1)}}< 0$. {\scriptsize \begin{align} r_{2}-r_{1}\approx 2\vert m^{0}_{\nu_{3}}\vert \left[\frac{\vert m^{0}_{\nu_{2}}\vert +\vert m^{0}_{\nu_{1}}\vert }{\vert m^{0}_{\nu_{2}}\vert \vert m^{0}_{\nu_{1}}\vert}\right],\quad r_{3}\approx -\left[1+2\frac{\vert m^{0}_{\nu_{3}}\vert \left(\vert m^{0}_{\nu_{2}}\vert \sin^{2}{\theta_{\nu}}-\vert m^{0}_{\nu_{1}}\vert \cos^{2}{\theta_{\nu}}\right)}{\vert m^{0}_{\nu_{2}}\vert \vert m^{0}_{\nu_{1}}\vert} \right]\label{cdih} \end{align}} \end{itemize} Notice that if the strict inverted hierarchy were realized, so that $r_{2}-r_{1}=0$ and $r_{3}=-1$, we would have $\sin^{2}{\theta_{13}}=0$ and $\sin^{2}{\theta_{23}}=N^{2}_{3}\vert 1+\epsilon \vert^{2}/2$, which is not compatible with the observations. Nonetheless, this strict ordering allows us to infer that the magnitude of the $\vert \epsilon \vert e^{i\alpha_{\epsilon}}$ parameter has to be small in order not to deviate the atmospheric angle too far from $45^{\circ}$ and, at the same time, large enough to enhance the reactor one. Here, the associated phase $\alpha_{\epsilon}$ determines whether the atmospheric angle lies above or below $45^{\circ}$. Along this line, the NOvA experiment has disfavored the lower octant \cite{Adamson:2017qqn}.
On the contrary, if the constraint on the lightest neutrino mass is relaxed, the reactor angle comes out nonzero and the atmospheric one has an extra contribution, $r_{3}$, which can enlarge or reduce the required $\vert \epsilon \vert$ magnitude since $r_{3}$ may be greater or smaller than $1$. As a result, the factor $\vert \epsilon~r_{3}\vert $ might deviate the atmospheric angle drastically from $45^{\circ}$. Notice that, roughly speaking, the reactor angle turns out to be the same for {\bf Cases A} and {\bf B}, and also for {\bf C} and {\bf D}. The key difference among them comes from the atmospheric angle, as can be seen in Eqs.(\ref{caih})-(\ref{cdih}). Now, from the absolute value of the neutrino masses we have $\vert m^{0}_{\nu_{2}}\vert \approx \vert m^{0}_{\nu_{1}}\vert (1+2R_{1})$, then {\scriptsize \begin{align} \vert m^{0}_{\nu_{2}}\vert -\vert m^{0}_{\nu_{1}}\vert \approx 2\vert m^{0}_{\nu_{1}}\vert R_{1} ,\quad \vert m^{0}_{\nu_{2}}\vert +\vert m^{0}_{\nu_{1}}\vert \approx 2 \vert m^{0}_{\nu_{1}}\vert\left[1+R_{1}\right],\quad \vert m^{0}_{\nu_{1}}\vert \vert m^{0}_{\nu_{2}}\vert\approx \vert m^{0}_{\nu_{1}}\vert^{2}\left[1+2 R_{1} \right],\label{masdih} \end{align}} where $R_{1}\equiv \Delta m^{2}_{21}/4 \vert m^{0}_{\nu_{1}}\vert^{2}\approx \mathcal{O}(10^{-3})$ if the lightest neutrino mass $\vert m^{0}_{\nu_{3}}\vert$ is tiny. Therefore, for {\bf Cases A} and {\bf B}, we have {\scriptsize \begin{align} \sin^{2}{\theta_{13}}\approx \frac{32}{9} | \epsilon |^{2} R^{2}_{1} \left| \frac{m^{0}_{\nu_{3}}}{m^{0}_{\nu_{1}}} \right|^{2},\qquad \sin^{2}{\theta_{23}}\approx \frac{1}{2}\frac{\left|1+\epsilon \left(1\pm 2 \left|\frac{m^{0}_{\nu_{3}}}{m^{0}_{\nu_{1}}}\right|\right)\right|^{2}}{1-\sin^{2}{\theta_{13}}}\label{reica} \end{align} } where the upper (lower) sign, in the atmospheric angle, stands for {\bf Case A} ({\bf Case B}).
Here, we have to keep in mind that $\vert m^{0}_{\nu_{3}}\vert/\vert m^{0}_{\nu_{1}}\vert< 1$, so we can conclude that the first two scenarios are ruled out since the reactor angle is proportional to the small quantity $(\vert m^{0}_{\nu_{3}}\vert/\vert m^{0}_{\nu_{1}}\vert)R_{1}\vert \epsilon \vert $, with $\vert \epsilon \vert \leq 0.3$. For {\bf Case C} ({\bf Case D}) the corresponding sign is the upper (lower) one; the mixing angles are then given by {\scriptsize \begin{align} \sin^{2}{\theta}_{13}&\approx \frac{32}{9}\vert\epsilon \vert^{2} \left| \frac{ m^{0}_{\nu_{3}} }{ m^{0}_{\nu_{1}}}\right|^{2}(1-R_{1})^{2},\quad \sin^{2}{\theta}_{23}\approx \frac{1}{2}\frac{\left|1+\epsilon \left(1\pm \frac{2}{3} \left|\frac{m^{0}_{\nu_{3}}}{m^{0}_{\nu_{1}}}\right|\right)\right|^{2}}{1-\sin^{2}{\theta_{13}}}.\label{anih} \end{align}} From these formulas, in general, a large $\vert \epsilon\vert $ value will be needed to compensate the small lightest neutrino mass $\vert m^{0}_{\nu_{3}}\vert$ and reach the allowed region for the reactor angle, whereas the atmospheric angle prefers small $\vert \epsilon \vert$ values. In addition, since $r_{3}<0$, the phase of the complex parameter is taken to be $\alpha_{\epsilon}=0$ to increase the atmospheric angle value. In order to fix ideas, we obtain for {\bf Case C}: (a) if $\vert \epsilon\vert\approx 0.3$, it is required that $\vert m^{0}_{\nu_{3}}\vert/\vert m^{0}_{\nu_{1}}\vert \approx 0.26$ to obtain $\sin^{2}{\theta_{13}}\approx 0.0229$. As a consequence, we get $\sin^{2}{\theta_{23}}\approx 0.94$, which is too large; (b) if $\vert \epsilon\vert\approx 0.1$, then we need $\vert m^{0}_{\nu_{3}}\vert/\vert m^{0}_{\nu_{1}}\vert \approx 0.8$ to get $\sin^{2}{\theta_{13}}\approx 0.0229$, and therefore $\sin^{2}{\theta_{23}}\approx 0.68$, which is still large in comparison to the central value.
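The two benchmarks just quoted for {\bf Case C} can be reproduced from Eq.~(\ref{anih}); a minimal sketch with the small correction $R_{1}$ neglected and $\alpha_{\epsilon}=0$ (real positive $\epsilon$):

```python
# Case C of the inverted hierarchy, Eq. (anih) with the upper sign,
# alpha_eps = 0 and R1 neglected; ratio = |m3|/|m1|.
def case_c(eps, ratio):
    s13 = (32.0 / 9.0) * eps**2 * ratio**2
    s23 = 0.5 * (1.0 + eps * (1.0 + (2.0 / 3.0) * ratio))**2 / (1.0 - s13)
    return s13, s23

s13_a, s23_a = case_c(0.3, 0.26)   # benchmark (a) of the text
s13_b, s23_b = case_c(0.1, 0.80)   # benchmark (b) of the text
print(round(s13_a, 3), round(s23_a, 2))   # 0.022 0.93 -- theta23 far too large
print(round(s13_b, 3), round(s23_b, 2))   # 0.023 0.68
```

The numbers land within rounding of those quoted in the text, confirming that Case C overshoots the atmospheric angle once the reactor angle is accommodated.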
\begin{figure}[ht] \centering \includegraphics[scale=0.55]{ihCC1.png}\hspace{0.3cm}\includegraphics[scale=0.55]{ihCD1.png}\hspace{0.3cm} \caption{ $\sin^{2}{\theta_{23}}$ versus $\sin^{2}{\theta_{13}}$. The left and right panels stand for {\bf Case C} and {\bf Case D}, respectively. The dot-dashed, dashed and thick lines stand for $1~\sigma$, $2~\sigma$ and $3~\sigma$, respectively for each case. \label{fi1}} \end{figure} For {\bf Case D}, the reactor angle has approximately the same values for $\vert \epsilon\vert\approx 0.3$, $\vert \epsilon\vert\approx 0.1$ and their respective $\vert m^{0}_{\nu_{3}}\vert/\vert m^{0}_{\nu_{1}}\vert$ mass ratios as in the above case. Then, with these values of $\vert \epsilon \vert$, we obtain $\sin^{2}{\theta_{23}}\approx 0.79$ and $\sin^{2}{\theta_{23}}\approx 0.56$, respectively. Notice that both values approach the allowed region for this mixing angle, so this case is more favorable than {\bf Case C}. This happens because a large $\vert \epsilon \vert$ contribution to the atmospheric angle is suppressed by $r_{3}$, which is smaller than $1$, while the reactor angle prefers large $\vert \epsilon \vert$ values. Let us remark the following: if $\vert \epsilon \vert$ is tiny, the neutrino mass ratio $\vert m^{0}_{\nu_{3}}\vert/\vert m^{0}_{\nu_{1}}\vert$ would have to be larger than $1$ to enhance the reactor angle, but such a ratio violates the inverted ordering. This statement is valid for {\bf Cases C} and {\bf D}. At the same time, if $\alpha_{\epsilon}=\pi$ is chosen, the atmospheric angle would be tiny for the same values of $\vert \epsilon \vert$ and $\vert m^{0}_{\nu_{3}}\vert/\vert m^{0}_{\nu_{1}}\vert$, as can be verified directly from Eq. (\ref{anih}). In order to get a complete view of the parameter space, let us show some plots for the reactor and atmospheric angles. We have considered the exact formulas given in Eq.
(\ref{mixang}); the observables, namely the $\theta_{12}$ solar angle, $\Delta m^{2}_{21}$ and $\Delta m^{2}_{13}$, were taken within their allowed ranges up to $3~\sigma$. Then, Fig.~{\ref{fi1}} shows the atmospheric angle versus the reactor one for {\bf Case C} and {\bf Case D}. These scatter plots clearly support our analytic result for {\bf Case C}, that is, both mixing angles cannot be accommodated simultaneously. In {\bf Case D}, the reactor angle is consistent with the experimental data, while the atmospheric one is large but still consistent within $2-3~\sigma$ of its allowed region. In addition, for {\bf Case D}, the parameter space is shown in Fig.~{\ref{fi2}}. As can be seen, the atmospheric angle prefers small values of $\vert \epsilon \vert$ whereas the reactor one needs a large value, as was already pointed out. Moreover, the set of allowed values for $\vert \epsilon \vert$ and $\vert m^{0}_{\nu_{3}}\vert$ is tight. \begin{figure}[ht] \centering \includegraphics[scale=0.55]{ihCD2.png}\hspace{0.3cm}\includegraphics[scale=0.55]{ihCD3.png} \caption{{\bf Case D}: Allowed region for $\sin^{2}{\theta_{23}}$. The dot-dashed, dashed and thick lines stand for $1~\sigma$, $2~\sigma$ and $3~\sigma$, respectively.\label{fi2}} \end{figure} {\bf Degenerate hierarchy}. In this case, $\vert m^{0}_{\nu_{3}}\vert \approxeq \vert m^{0}_{\nu_{2}}\vert \approxeq \vert m^{0}_{\nu_{1}}\vert\approxeq m_{0}$, with $m_{0}\gtrsim 0.1~eV$. Then, the absolute neutrino masses can be written as $\vert m^{0}_{\nu_{3}} \vert= \sqrt{\Delta m^{2}_{31}+m^{2}_{0}}\approx m_{0}\left(1+\Delta m^{2}_{31}/2m^{2}_{0}\right)$ and $\vert m^{0}_{\nu_{2}} \vert=\sqrt{\Delta m^{2}_{21}+m^{2}_{0}}\approx m_{0}\left(1+\Delta m^{2}_{21}/2m^{2}_{0}\right)$. As in the inverted case, there are four independent cases for the signs, which are shown below. \begin{itemize} \item {\bf Case A}.
If $m^{0}_{\nu_{i}}>0 $, {\scriptsize \begin{align} r^{A}_{1}=\frac{\vert m^{0}_{\nu_{3}}\vert+m_{0}}{\vert m^{0}_{\nu_{3}}\vert-m_{0}},\qquad r^{A}_{2}=\frac{\vert m^{0}_{\nu_{3}}\vert+\vert m^{0}_{\nu_{2}}\vert}{\vert m^{0}_{\nu_{3}}\vert-\vert m^{0}_{\nu_{2}}\vert},\qquad r^{A}_{3}=r^{A}_{2}\cos^{2}{\theta_{\nu}}+r^{A}_{1}\sin^{2}{\theta_{\nu}}.\label{dh1} \end{align}} \item {\bf Case B}. If $m^{0}_{\nu_{(2, 1)}}>0$ and $m^{0}_{\nu_{3}}<0$, {\scriptsize \begin{align} r^{B}_{1}=\frac{\vert m^{0}_{\nu_{3}}\vert-m_{0}}{\vert m^{0}_{\nu_{3}}\vert+m_{0}}=\frac{1}{r^{A}_{1}},\qquad r^{B}_{2}=\frac{\vert m^{0}_{\nu_{3}}\vert-\vert m^{0}_{\nu_{2}}\vert}{\vert m^{0}_{\nu_{3}}\vert+\vert m^{0}_{\nu_{2}}\vert}=\frac{1}{r^{A}_{2}},\qquad r^{B}_{3}=r^{B}_{2}\cos^{2}{\theta_{\nu}}+r^{B}_{1}\sin^{2}{\theta_{\nu}}.\label{dh4} \end{align}} \item {\bf Case C}. If $m^{0}_{\nu_{(3, 2)}}>0$ and $m^{0}_{\nu_{1}}=-m_{0}$, {\scriptsize \begin{align} r^{C}_{1}=\frac{\vert m^{0}_{\nu_{3}}\vert-m_{0}}{\vert m^{0}_{\nu_{3}}\vert+m_{0}}=\frac{1}{r^{A}_{1}},\qquad r^{C}_{2}=\frac{\vert m^{0}_{\nu_{3}}\vert+\vert m^{0}_{\nu_{2}}\vert}{\vert m^{0}_{\nu_{3}}\vert-\vert m^{0}_{\nu_{2}}\vert}=r^{A}_{2},\qquad r^{C}_{3}=r^{C}_{2}\cos^{2}{\theta_{\nu}}+r^{C}_{1}\sin^{2}{\theta_{\nu}}. \label{dh2} \end{align}} \item {\bf Case D}. If $m^{0}_{\nu_{2}}>0$ and $m^{0}_{\nu_{(3, 1)}}<0$, {\scriptsize \begin{align} r^{D}_{1}=\frac{\vert m^{0}_{\nu_{3}}\vert+m_{0}}{\vert m^{0}_{\nu_{3}}\vert-m_{0}}=r^{A}_{1},\qquad r^{D}_{2}=\frac{\vert m^{0}_{\nu_{3}}\vert-\vert m^{0}_{\nu_{2}}\vert}{\vert m^{0}_{\nu_{3}}\vert+\vert m^{0}_{\nu_{2}}\vert}=\frac{1}{r^{A}_{2}}, \qquad r^{D}_{3}=r^{D}_{2}\cos^{2}{\theta_{\nu}}+r^{D}_{1}\sin^{2}{\theta_{\nu}}.
\label{dh3} \end{align}} \end{itemize} Notice that {\scriptsize \begin{align} \vert m^{0}_{\nu_{3}}\vert- m_{0}&\approx 2 m_{0}R_{2},\quad \vert m^{0}_{\nu_{3}}\vert+ m_{0}\approx 2m_{0}\left(1+R_{2}\right),\nonumber\\ \vert m^{0}_{\nu_{3}}\vert- \vert m^{0}_{\nu_{2}}\vert &\approx 2m_{0}R_{2}\left(1-R_{3}\right),\quad \vert m^{0}_{\nu_{3}}\vert+ \vert m^{0}_{\nu_{2}}\vert \approx 2 m_{0}\left[1+R_{2}+R_{4}\right],\label{dh5} \end{align}} with $R_{2}\equiv \Delta m^{2}_{31}/4m^{2}_{0}$, $R_{3}=\Delta m^{2}_{21}/\Delta m^{2}_{31}$ and $R_{4}=\Delta m^{2}_{21}/4m^{2}_{0}$, where $R_{4}< R_{3}\lesssim R_{2}$. Thus, $r^{A}_{1}\approx (1+R_{2})/R_{2}$ and $r^{A}_{2}\approx r^{A}_{1}(1+R_{3})$. To fix ideas on the order of magnitude of each defined quantity, we use the data for the inverted hierarchy and their respective central values. Then $R_{2}\sim 6\times 10^{-2}$, $R_{3}\sim 3\times 10^{-2}$, $R_{4}\sim 2\times 10^{-3}$ and $r^{A}_{1}\sim 17$ with $m_{0}=0.1~eV$. Indeed, $R_{2}$ and $R_{4}$ might be fairly small, and therefore $r^{A}_{1}$ quite large, since $m_{0}\gtrsim 0.1~eV$. Therefore, in {\bf Case A}, $r^{A}_{2}-r^{A}_{1}\approx r^{A}_{1}R_{3}$ and $r^{A}_{3}\approx r^{A}_{1}$, then {\scriptsize \begin{align} \sin^{2}{\theta_{13}}&\approx \frac{2}{9}\left|\epsilon \right|^{2}\left[r^{A}_{1} R_{3}\right]^{2},\qquad \sin^{2}{\theta_{23}}\approx \frac{1}{2}\frac{\left|1-\epsilon r^{A}_{1}\right|^{2}}{1-\sin^{2}{\theta_{13}}}. \end{align}} In {\bf Case B}, $r^{B}_{2}-r^{B}_{1}\approx -R_{3}/r^{A}_{1}$ and $r^{B}_{3}\approx 1/r^{A}_{1}$, so that {\scriptsize \begin{align} \sin^{2}{\theta_{13}}&\approx \frac{2}{9}\left|\epsilon \right|^{2}\left[ \frac{R_{3}}{r^{A}_{1}}\right]^{2},\qquad \sin^{2}{\theta_{23}}\approx \frac{1}{2}\frac{\left|1-\frac{\epsilon}{ r^{A}_{1}}\right|^{2}}{1-\sin^{2}{\theta_{13}}}.
\end{align} } Thus, in the former case, if the reactor angle is fixed to its central value ($\sin^{2}{\theta_{13}}\approx 0.0229$), with the above values for $r^{A}_{1}$ and $R_{3}$, we obtain $\vert \epsilon \vert \approx 0.5$, which means a strong breaking of the $\mu-\tau$ symmetry. As a result, the atmospheric angle comes out too large. Analogously, for the second case one gets $\vert \epsilon \vert\approx 10^{2}$ if the reactor angle is fixed to its central value. As a consequence, the atmospheric angle is also quite large. Therefore, these two cases are excluded; the only feasible cases are the last two. For {\bf Case C}, from Eq.(\ref{dh2}), we have $r^{C}_{2}-r^{C}_{1}\approx r^{A}_{1}(1+R_{3})$ and $r^{C}_{3}\approx r^{A}_{1}\cos^{2}{\theta_{\nu}}$, so that {\scriptsize \begin{align} \sin^{2}{\theta}_{13}\approx\frac{2}{9}\vert \epsilon \vert^{2}\left[r^{A}_{1}(1+R_{3})\right]^{2},\qquad \sin^{2}{\theta_{23}}\approx \frac{1}{2}\frac{\left| 1-\frac{2}{3}r^{A}_{1}\epsilon \right|^{2} }{1-\sin^{2}{\theta_{13}}}. \label{dh8} \end{align}} For {\bf Case D}, from Eq. (\ref{dh3}), we obtain $r^{D}_{2}-r^{D}_{1}\approx -r^{A}_{1}$ and $r^{D}_{3}\approx r^{A}_{1}\sin^{2}{\theta_{\nu}}$. Then, {\scriptsize \begin{align} \sin^{2}{\theta}_{13}\approx \frac{2}{9}\vert \epsilon \vert^{2} \left[r^{A}_{1} \right]^{2},\qquad \sin^{2}{\theta_{23}}\approx \frac{1}{2}\frac{\left| 1-\frac{1}{3}r^{A}_{1}\epsilon \right|^{2}}{1-\sin^{2}{\theta_{13}}}.\label{dh9} \end{align}} \begin{figure}[ht] \centering \includegraphics[scale=0.55]{dhCC1.png}\hspace{0.3cm}\includegraphics[scale=0.55]{dhCD1.png} \caption{$\sin^{2}{\theta_{23}}$ versus $\sin^{2}{\theta_{13}}$. The left and right panels stand for {\bf Case C} and {\bf Case D}, respectively. The dot-dashed, dashed and thick lines stand for $1~\sigma$, $2~\sigma$ and $3~\sigma$, respectively for each case.
\label{fd1}} \end{figure} Roughly speaking, as in the inverted case, the reactor angle has approximately the same behavior for both cases, but the atmospheric angle comes out different. Here, on the other hand, notice that $r_{3}>0$ for both cases; then, if $\alpha_{\epsilon}=0$, the atmospheric angle would be smaller than $45^{\circ}$, which is far away from the experimental data, as can be verified from Eq.(\ref{dh8}) and Eq.(\ref{dh9}). In order to increase this value, it is necessary that $\alpha_{\epsilon}=\pi$. Additionally, because $r^{A}_{1}\ggg 1$, the value of $\vert \epsilon \vert$ should be of the order of $10^{-2}$ in order not to enhance the atmospheric angle too much; of course, we must be careful not to spoil the reactor angle, or vice versa. Now, in {\bf Cases C} and {\bf D}, if the reactor angle is fixed to its central value ($\sin^{2}{\theta_{13}}\approx 0.0229$), then it is required that $\vert \epsilon \vert\sim 2\times 10^{-2}$, so that one obtains $\sin^{2}{\theta_{23}}\approx 0.74$ and $\sin^{2}{\theta_{23}}\approx 0.63$, respectively. As can be seen, the favored case is the latter, because the $\epsilon r^{D}_{3}$ contribution to the atmospheric angle is smaller than $\epsilon r^{C}_{3}$, so that the atmospheric angle is only softly deviated from $45^{\circ}$. Now, an interesting fact is the following: if $m_{0}$ is increased within its allowed range, then $r^{A}_{1}$ becomes quite large and therefore a tiny $\vert \epsilon \vert$ value is needed so as not to deviate the atmospheric angle too far from $45^{\circ}$ and, at the same time, to stay in the allowed region for the reactor angle. In this hierarchy, the $\mu-\tau$ symmetry is broken softly. We will now explore the complete parameter space for both cases.
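The degenerate-hierarchy benchmark quoted above can be reproduced from Eqs.~(\ref{dh8}) and (\ref{dh9}); a sketch assuming $r^{A}_{1}=17$ (i.e. $m_{0}=0.1$ eV) and $\alpha_{\epsilon}=\pi$, so that $\vert 1-r_{3}\epsilon\vert \to 1+r_{3}\vert\epsilon\vert$:

```python
import math

# Fix sin^2(theta13) to its central value, solve the Case D reactor-angle
# formula for |eps|, then evaluate the atmospheric angle in Cases C and D.
r1A = 17.0                                     # r_1^A for m0 = 0.1 eV
s13 = 0.0229                                   # central value of sin^2(theta13)
eps = math.sqrt(s13 * 9.0 / (2.0 * r1A**2))    # from s13 = (2/9) eps^2 (r1A)^2

s23_C = 0.5 * (1.0 + (2.0 / 3.0) * r1A * eps)**2 / (1.0 - s13)  # Eq. (dh8)
s23_D = 0.5 * (1.0 + (1.0 / 3.0) * r1A * eps)**2 / (1.0 - s13)  # Eq. (dh9)
print(round(eps, 3))      # 0.019, i.e. |eps| ~ 2e-2 as quoted in the text
print(round(s23_D, 2))    # 0.63 -- Case D stays closest to the data
print(round(s23_C, 2))    # ~0.75, close to the 0.74 quoted in the text
```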
The exact formulas for the mixing angles have been used with the respective extreme Majorana phases for each case, together with the allowed values of $\Delta m^{2}_{21}$ and $\Delta m^{2}_{13}$ for the inverted ordering, and $\theta_{\nu}$ fixed by the solar angle as a good approximation. Therefore, in Fig.~\ref{fd1}, the atmospheric angle versus the reactor one is shown up to $3~\sigma$. These panels allow us to compare the two cases and support our analytic result: in {\bf Case D}, both angles of interest are accommodated very well. In Fig.~\ref{fd2}, as can be seen, there is a large region of parameter space where the atmospheric angle, and therefore the reactor one, is accommodated in good agreement with the experimental data. At the end of the day, the degenerate ordering is favored over the inverted one. \begin{figure}[ht] \centering \includegraphics[scale=0.55]{dhCD2.png}\hspace{0.3cm}\includegraphics[scale=0.55]{dhCD3.png} \caption{{\bf Case D}: Allowed region for $\sin^{2}{\theta_{23}}$. The dot-dashed, dashed and thick lines stand for $1~\sigma$, $2~\sigma$ and $3~\sigma$, respectively.\label{fd2}} \end{figure} \section{Conclusions} We have extended the scalar sector of the LRSM in order to obtain masses and mixings for the fermions. In the lepton sector, neutrino masses and mixings have been studied in the limit of a slightly broken $\mu-\tau$ symmetry, so that the reactor and atmospheric angles depend strongly on the free parameter $\epsilon$, which characterizes the $\mu-\tau$ symmetry breaking, and on the neutrino masses. Due to this last fact, the mixing angles are sensitive to the extreme Majorana phases, which may increase or decrease their respective values. Therefore, we have made an analytic study of the role that the extreme Majorana phases might play in each hierarchy. Additionally, the $\epsilon$ free parameter and the lightest neutrino mass have been constrained.
The main results are the following: (a) the model predicts a tiny value for the reactor angle in the normal hierarchy, and this result holds for any choice of extreme Majorana phases. Then, the normal ordering is completely ruled out for $\vert \epsilon\vert \leq 0.3$; (b) in the inverted hierarchy there is one combination of the extreme Majorana phases for which the reactor and atmospheric angles are compatible up to $2-3~\sigma$ within the allowed region for the latter angle. This scenario is fairly constrained since the parameter space is quite tight; (c) the degenerate ordering is the most viable scenario to accommodate simultaneously the reactor and atmospheric angles. In this case, there is one combination of the extreme Majorana phases for which both angles are consistent with the current limits imposed by the experimental data on $\sin^{2}\theta_{23}$ and $\sin^{2}\theta_{13}$. At the same time, a set of values for $\epsilon$ and the lightest neutrino mass was found such that the $\mu-\tau$ symmetry is broken softly. Remarkably, the viable cases predict that $\theta_{23}>45^{\circ}$. For the moment, the quark sector has been left aside for future work, but we have pointed out that the mass matrices possess textures that might fit the CKM matrix. Although the model is quite elaborate, it is fairly predictive and testable by the future results that the NOvA and KamLAND-Zen collaborations will provide. \section*{Acknowledgements} We would like to thank Myriam Mondrag\'on and Abdel P\'erez-Lorenzana for their useful comments and discussion on the manuscript. This work was partially supported by PAPIIT grant IN111115. The author thanks Red de Altas Energ\'{\i}as-CONACYT for financial support. \bibliographystyle{unsrt}
\section*{Supplementary Information} \section*{Experimental setup and superfluid velocity calculation.} \label{sec:A-setup} Flow in the Helmholtz resonators is driven and sensed capacitively using two aluminum electrodes deposited on the top and bottom wall of the device, forming a parallel plate capacitor (see \cite*{Souris2017} for details on the fabrication process). An alternating voltage of amplitude $U_0$ applied to the electrodes of the device causes a periodic deformation of the walls of the basin due to the electrostatic force, which pushes the superfluid in and out of the basin through the two side channels and into the bath, thus driving the Helmholtz mode. The Helmholtz resonance is observed as a periodic variation of the capacitance of the device. The resonator is wired in a bridge circuit shown in Fig.~\ref{fig:wiring}, balanced to the capacitance $C_0$ of the resonator at rest. A change in this capacitance caused by the Helmholtz resonance results in bridge imbalance and a current $I$ through the detector G. An example of the resulting spectrum of two resonators wired in parallel is shown in Fig.~\ref{fig:spectrum}. The current is first amplified by a transimpedance amplifier (Stanford Research SR570) and then measured by a lock-in amplifier (Zurich Instruments HF2LI) referenced to the frequency of the excitation $U_0$. A standard 9V battery is used as the source of the bias voltage ($U_\mathrm{B} = 9.2$ V). The capacitance bridge is a General Radio 1615-A. \begin{figure}[h!] \centering \includegraphics{wiring} \caption{Measurement scheme. Aluminum electrodes of the Helmholtz resonator form a parallel plate capacitor of capacitance $C_0$ biased by the battery $U_\mathrm{B}$. The resonator is wired as one arm of a capacitance bridge (the other arm being the capacitor $C_\mathrm{b}$), which is balanced such that the current $I$ through the detector G is approximately zero when the flow of the helium is negligible.
When the Helmholtz mechanical mode is excited by the oscillating voltage $U_0$, the oscillating pressure in the basin changes the capacitance of the device and thus produces a nonzero current through the detector G. The bridge circuit is isolated from the battery voltage $U_B$ by the two bias tees (BT). The transformer ratio is 1:1 on the resonator arm and adjustable for the $C_b$ arm.} \label{fig:wiring} \end{figure} \begin{figure} \centering \includegraphics{peaks} \caption{The spectrum of two Helmholtz resonators wired in parallel, measured at 1.4 K and normalized by the drive voltage amplitude $U_0$. The fact that the normalized peaks do not overlap for increasing drives indicates nonlinear dissipation. The higher-frequency peak corresponds to the device with $D=1067$ nm and the lower-frequency one to $D = 805$ nm (see Eq.~\ref{eq:omega0}).} \label{fig:spectrum} \end{figure} In this section we first analyze the equation of motion of the fluid in the channels, approximated as a mass on a spring, and derive the relationship between the oscillating driving voltage $U_0$ and the pressure gradient in the channels. Next, we calculate the superfluid velocity in the channels from the current $I$ through the detector. \textbf{Helmholtz equation of motion. --} We derive the equation of motion of the superfluid in the Helmholtz resonator with explicit drive and damping forces. We approximate the flow in the channel as a mass on a spring, which is displaced by a distance $y$ (positive in the direction away from the basin). The average displacement of the plates of the basin is denoted by $x$ (positive when the basin contracts).
We begin by calculating the change in total density of the fluid inside the basin as a response to the mean deformation of the basin $x$ and the displacement of the superfluid inside the channel $y$: \begin{equation} \label{eq:delta-rho} \delta \rho = \delta\left(\frac{M_B}{V_B}\right) = \frac{\delta M_B}{V_B} - \frac{M_B}{V_B^2}\delta V_B = \frac{1}{V_B}\left( - 2a\rho_s y + 2\rho A x\right). \end{equation} Here $M_B = \rho V_B$ is the mass of the fluid inside the basin, $V_B = AD$ is the volume of the basin ($A$ being its area and $D$ the confinement) and from the assumption that only the superfluid moves $\delta M_B = -2a\rho_s y$ ($a = wD$ is the cross-sectional area of the channel; the factor of 2 comes from the two channels), and $\delta V_B = -2Ax$ is the change in basin volume due to motion of the plates. A change in density corresponds to a change in pressure via the compressibility $\chi$, $\delta \rho = \rho\chi\delta P$, or \begin{equation} \label{eq:delta-P} \delta P = \frac{1}{\rho\chi V_B}\left(2\rho A x - 2a \rho_s y\right). \end{equation} Balancing forces on the plate (neglecting its inertia) yields \begin{equation} \label{eq:plate-balance} \ensuremath{F_\mathrm{es}} = \frac{1}{2}k_p x + A\delta P, \end{equation} where $\ensuremath{F_\mathrm{es}} = C_0U^2/(2D)$ is the electrostatic force between the parallel plates of the capacitor formed by the circular electrodes in the basin, $U=U_B + U_0$ is the total applied voltage and $k_p = 2.4\times10^7$ N/m \cite*{Souris2017} is the stiffness of the substrate deflection (note that this is double that in Ref.~\cite*{Souris2017}, where the stiffness refers to deflection of both plates in parallel). Expressing $x$ from Eq.~\ref{eq:plate-balance} and substituting back in to Eq.~\ref{eq:delta-P} yields \begin{equation} \label{eq:delta-P-F} \delta P = \frac{k_p}{\rho(\chi V_Bk_p + 4A^2)}\left[\frac{4\rho A}{k_p}\ensuremath{F_\mathrm{es}} - 2a\rho_s y\right]. 
\end{equation} The superfluid inside the channel is accelerated by the pressure \begin{align} \label{eq:y-motion} \rho_s a l \ddot y &= \frac{\rho_s}{\rho}a\delta P - F_f,\\ \rho_s a l \ddot y &= \frac{\rho_sk_pa}{\rho^2(\chi V_Bk_p + 4A^2)}\left[\frac{4\rho A}{k_p}\ensuremath{F_\mathrm{es}} - 2a\rho_s y\right] - F_f, \end{align} where we included a friction force $F_f = al\rho_s\zeta\dot y$. Here $\zeta$ is a friction parameter with units of frequency but will remain otherwise unspecified for now. Rearranging, \begin{equation} \label{eq:y-lho} \ddot y + \frac{2\rho_s k_p a}{l\rho^2(\chi V_B k_p + 4A^2)} y + \zeta\dot y = \frac{1}{\rho}\frac{4A}{l(\chi V_B k_p + 4A^2)}\ensuremath{F_\mathrm{es}}, \end{equation} from which the resonance frequency follows \begin{equation} \label{eq:omega0} \omega_0^2 = \frac{2a}{l\rho}\frac{\rho_s}{\rho}\frac{k_p}{4A^2(1 + \Sigma)}, \end{equation} where $\Sigma = \chi D k_p/(4A)$. Finally, the driving pressure gradient, the quantity shown on the x-axis in Fig.~2A,B of the main text, is given by \begin{equation} \frac{\delta P}{l} = \frac{4A}{(\chi V_B k_p + 4A^2)l}\ensuremath{F_\mathrm{es}} = \frac{4AC_0U_B}{(\chi V_B k_p + 4A^2)Dl}U_0, \end{equation} where we take only the component of the force $F_\mathrm{es}$ on resonance with the Helmholtz mode ($U_0$ being the AC drive), $F_\mathrm{es}^\mathrm{res} = C_0U_0U_B/D$. \textbf{Calculation of velocity from detector current. --} Whenever the capacitance bridge in the measurement circuit shown in Fig.~\ref{fig:wiring} becomes imbalanced, current will flow through the detector. 
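As a concreteness check, the resonance frequency of Eq.~\ref{eq:omega0} can be evaluated numerically. In the sketch below only $k_p$, $D$ and $w$ are taken from the text; the channel length $l$, basin area $A$ and the He-II properties are illustrative assumed values, so the result indicates only the order of magnitude:

```python
import math

# Device and fluid parameters. Values marked (assumed) are illustrative
# placeholders, not quoted in the text.
k_p = 2.4e7       # substrate stiffness, N/m (from text)
D = 1067e-9       # confinement, m (from text)
w = 1.6e-3        # channel width, m (from text)
l = 2.0e-3        # channel length, m (assumed)
A = 7.0e-6        # basin plate area, m^2 (assumed, ~3 mm diameter)
rho = 145.0       # He-II density, kg/m^3
rho_s = 0.95*rho  # superfluid density at ~1.3 K
chi = 1.2e-7      # compressibility 1/(rho c^2), 1/Pa

a = w*D                                # channel cross-section
Sigma = chi*D*k_p/(4*A)                # dimensionless stiffness ratio
# Eq. (omega0): omega0^2 = (2a/(l rho)) (rho_s/rho) k_p / (4 A^2 (1 + Sigma))
omega0_sq = (2*a/(l*rho))*(rho_s/rho)*k_p/(4*A**2*(1 + Sigma))
f0 = math.sqrt(omega0_sq)/(2*math.pi)
print(f"Helmholtz frequency f0 = {f0:.0f} Hz")  # order kHz for these assumed dimensions
```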
Assuming that the bridge is tuned to the total capacitance of the devices at rest, the current through the detector at the frequency of the drive (the only component detected by the lock-in amplifier) is given only by the oscillation of the device capacitance due to the Helmholtz resonance, \begin{equation} \label{eq:I-C} I = \diff{(CU)}{t} = U_B\diff{C}{t} = U_B\diff{C}{y}\dot y, \end{equation} where $U_B$ is the bias voltage. The change of capacitance with superfluid displacement in the channel can be written as \begin{equation} \label{eq:dCdy} \diff{C}{y} = \frac{C_0}{\varepsilon}\diff{\varepsilon}{\rho}\diff{\rho}{y} + 2\frac{C_0}{D}\diff{x}{y}, \end{equation} where $C_0 = \varepsilon_0\varepsilon A_\mathrm{el} / D$ is the capacitance of an undisturbed device with $\varepsilon_0,\varepsilon$ being the vacuum permittivity and dielectric constant of helium, respectively, and $A_\mathrm{el}$ the area of the electrodes. The spacing between electrodes is given by $h = D - 2x$, hence the second term in Eq.~\ref{eq:dCdy}. Neglecting the dependence of the polarizability of helium on density \cite*{Kierstead1976} and using the Clausius-Mossotti relation we can estimate the change in dielectric constant as \begin{equation} \label{eq:eps-rho} \diff{\varepsilon}{\rho} = \frac{\varepsilon - 1}{\rho}. \end{equation} The change of density with superfluid displacement can be calculated directly from Eq.~\ref{eq:delta-P-F} and using $\ensuremath{\mathrm{d}} \rho = \rho\chi\ensuremath{\mathrm{d}} P$, \begin{equation} \label{eq:rho-y} \diff{\rho}{y} = -\frac{2a\rho_s}{V_B}\frac{2\Sigma}{1 + 2\Sigma}, \end{equation} where $\Sigma = \chi D k_p / (4A)$. Differentiating Eq.~\ref{eq:delta-rho} with respect to $y$ and putting the result equal to Eq.~\ref{eq:rho-y} results in an equation for $\ensuremath{\mathrm{d}} x / \ensuremath{\mathrm{d}} y$ and yields \begin{equation} \label{eq:x-y} \diff{x}{y} = \frac{a\rho_s}{A\rho}\left(1 - \frac{2\Sigma}{1 + 2\Sigma}\right).
\end{equation} Inserting Eq.~\ref{eq:x-y}, Eq.~\ref{eq:rho-y} and Eq.~\ref{eq:eps-rho} back into Eq.~\ref{eq:dCdy} yields \begin{equation} \label{eq:dCdy-final} g \equiv \frac{1}{C_0}\diff{C}{y} = \frac{2a\rho_s}{V_B\rho}\left(1 - 2\frac{\varepsilon - 1}{\varepsilon}\Sigma\right)\frac{1}{1 + 2\Sigma}. \end{equation} Finally, the flow velocity is calculated as \begin{equation} \label{eq:I-V} \dot y = \frac{I}{gC_0U_B}, \end{equation} assuming that the background has been subtracted from $I$. \subsection{Two-dimensionality of turbulence} \label{sec:two-dim} To what extent can the studied flow be considered 2D? The thickness of the flow channel is too large (i.e., $D$ is much larger than the coherence length of $\approx$1~\AA) for finite-size effects of 2D superfluidity to be relevant \cite*{Bishop1978}. 2D turbulence, on the other hand, requires only that the fluctuating velocity is restricted to 2D. This is essentially controlled by the channel aspect ratio and damping of the self-induced vortex motion. Turbulence in He-II, especially when forced by a pressure gradient in large systems, typically behaves quasi-classically---as a classical liquid with effective viscosity \cite{Babuin2014}. Thus we first verify that the turbulence could be considered two-dimensional based on classical fluid dynamics criteria. The forced superflow induced by the Helmholtz resonance is naturally 2D; however, the device geometry (specifically, the sharp corners near where the basin connects to the channel) induces shear on the scale of the channel width $w = 1.6$ mm, which can, in principle, drive a 3D flow instability. It was shown by Benavides and Alexakis \cite*{Benavides2017} for systems of reduced dimensionality that the direction of the turbulent energy cascade critically depends on the ratio $w/D$ of forcing to confinement scale. Specifically, for $w/D \gtrsim \sqrt{\mathrm{Re}}$ the turbulence develops the 2D inverse energy cascade.
Here, $\mathrm{Re}$ is the Reynolds number, which we define for our system using the effective quasi-classical viscosity of He-II \cite{Babuin2014}, $\nu_\mathrm{eff} \approx 0.1\kappa$, as $\mathrm{Re} = wv_s/\nu_\mathrm{eff}$. In our experiments $w/D \approx 1500$ for $D = 1067$ nm and the highest experimentally achieved $\sqrt{\mathrm{Re}} \approx 400$. Therefore, from the standpoint of classical turbulence, the turbulence in our devices ought to be in the 2D regime. It should be noted, however, that even if a few vertical modes of motion are possible, the inverse energy cascade responsible for the appearance of large-scale features is still expected to be present \cite*{Pouquet2017}. The turbulent fluctuations, however, will also be strongly affected by the presence of quantized vortices, whose core size is on the scale of $a_0 \approx 0.1$ nm---significantly smaller than the confinement imposed by the device. A potential complication arises from vortex pinning on rough surfaces. The RMS surface roughness of our devices is expected to be about 1 nm \cite{Duh2012}, which puts the flow velocity required to dislodge a vortex from a typical surface defect \cite{Schwarz1985} at about 4~cm/s. The velocities we observe in the turbulent regime are significantly higher, thus it is unlikely that pinning plays an important role for our results. It is in principle possible that a portion of the vortices in the flow are intrinsically three-dimensional, e.g. half-loops pinned on one of the opposing confining walls. To assess the importance of such vortices we estimate their lifetime in the configuration shown in Fig.~\ref{fig:loop-lifetime}(a). We assume a circular vortex attached to one wall, aligned perpendicular to the applied oscillating flow.
The self-induced velocity of the ring (neglecting pinning) as a function of its radius is given by \cite{Donnelly1991} \begin{equation} v_i(R) = \frac{\kappa}{4\pi R} \left[\log\left(\frac{8R}{a_0}\right) - \frac{1}{4}\right], \end{equation} and, for stationary normal fluid, the change in radius is given by \cite{Donnelly1991} \begin{equation} \dot R = \alpha\left[V_\mathrm{s}(t) - v_i(R)\right], \end{equation} where $\alpha$ is the mutual friction constant \cite{Donnelly1998} and $V_\mathrm{s}(t) = V_\mathrm{s0}\sin\Omega t$ is the imposed superflow. We numerically integrate the evolution of $R$ for a range of initial radii $R_0 = R(t=0)$ and velocity amplitudes, terminating the calculation when either $R\approx 0$ and the loop is annihilated or when $R\approx D = 1$ $\mathrm{\mu}$m and the loop reconnects with the opposing wall, thus transforming into a vortex dipole. As shown in Fig.~\ref{fig:loop-lifetime}(b), the typical lifetime $t^*$ of half-rings for the parameters typical of our experiment is shorter than the flow oscillation period $T_0$, reaching, at most, about 0.6$T_0$ for a very specific choice of parameters. Vortex loops attached to a surface are thus short-lived transient objects. Creation and expansion of these loops is a likely scenario for vortex splitting and unpolarized injection, which feature in the quasi-2D model of Eqs.~(3,4) of the main text, discussed further in the next section. Note, however, that we neglected the effects of the opposing wall on the self-induced velocity of the ring. This will cause the vortex to deform and be attracted to the opposing wall, thus slightly altering the lifetime. Changing the phase of the oscillating flow either does not significantly influence the outcome or causes loops of all sizes to quickly decay. Changing the angle between the plane of the loop and flow velocity would result in a somewhat more complicated transient flow, which is, however, unlikely to terminate in a significantly different manner. 
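The integration described above can be reproduced with a simple explicit scheme. A minimal sketch, using the stated $f = 1200$ Hz and $\alpha = 0.034$; the core size $a_0$, the initial radii and the velocity amplitudes are illustrative assumptions:

```python
import math

kappa, a0 = 9.97e-8, 1.0e-10   # circulation quantum (m^2/s), core size (m, assumed)
alpha = 0.034                   # mutual friction constant at ~1.3 K
f, D = 1200.0, 1.0e-6           # flow frequency (Hz), confinement (m)
Omega, T0 = 2*math.pi*f, 1/f

def v_i(R):
    """Self-induced velocity of a half-ring of radius R (Donnelly 1991)."""
    return kappa/(4*math.pi*R)*(math.log(8*R/a0) - 0.25)

def lifetime(R0, Vs0, dt=T0/20000):
    """Euler-integrate dR/dt = alpha*(Vs(t) - v_i(R)) until the loop
    annihilates (R -> a0) or reconnects with the opposing wall (R -> D)."""
    R, t = R0, 0.0
    while a0 < R < D and t < 10*T0:
        R += alpha*(Vs0*math.sin(Omega*t) - v_i(R))*dt
        t += dt
    fate = "reconnects" if R >= D else "annihilates" if R <= a0 else "survives"
    return t/T0, fate

print(lifetime(R0=100e-9, Vs0=0.5))   # small loop in weak drive: annihilates
print(lifetime(R0=400e-9, Vs0=3.0))   # large loop in strong drive: reconnects
```

Either fate is reached well within one oscillation period, consistent with the $t^* \lesssim 0.6\,T_0$ lifetimes quoted above.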
\begin{figure} \centering \includegraphics{half_loop_lifetime.pdf} \caption{(a) Configuration of the flow and vortex for calculation of the lifetime of the loop. Under the imposed oscillatory flow, the loop can either annihilate or expand, reconnect with the opposite wall and thus create a dipole pair of vortices spanning the confinement. (b) Time $t^*$, relative to the flow oscillation period $T_0$, for a half-ring to grow to the size of the channel (and thus reconnect with the opposite wall to form a vortex dipole, red scale) or to completely annihilate (blue scale) in oscillating superfluid flow of varying amplitudes. Flow frequency was assumed to be $f=1200$ Hz ($T_0 = 1/f = 0.83$ ms) and mutual friction constant $\alpha = 0.034$ (corresponding to approximately 1.3 K). Higher $\alpha$ (temperature) typically results in a shorter lifetime.} \label{fig:loop-lifetime} \end{figure} The calculation outlined above is not valid for vortex loop radii $R$ comparable with the surface roughness $b\approx 1$ nm, since the vortex will be subject to the highly nonuniform flow resulting from the surface imperfections. Stagg \emph{et al.} \cite{Stagg2017} studied flow close to an irregular surface by simulating vortices in a Bose-Einstein condensate in the zero-temperature limit with the Gross-Pitaevskii equation (GPE). In that work, a dense layer of vortices was found in the rough landscape of the surface, sustained by intrinsic nucleation of vortices on the protruding peaks of the surface. It is possible that such a dense boundary layer exists in our case as well; however, results obtained using the GPE ought to be adopted with caution for helium at finite temperatures. Intrinsic nucleation of vortices in He-II requires significantly higher velocities, and mutual friction at finite temperatures will strongly damp any small, highly-curved vortex structures.
Regardless, this boundary layer is expected to be confined to within the scale of the surface roughness \cite{Stagg2017}, which in our case is about 0.1\% of the confinement, making it unlikely to contribute significantly to the observed macroscopic drag. Finally, the vortices connecting the two confining walls can, in principle, deform arbitrarily on the scale of the confinement, $D\approx 10^4 a_0$. We estimate the dynamical importance of these deformations by comparing their typical rate of decay to the time scale of their forcing, i.e., the flow oscillation period. We assume that the vertical modes of flow, mediated by the vortex deformation, take the form of a cascade of Kelvin waves---helical wave modes on vortices \cite*{Donnelly1991}. The decay time of a Kelvin wave mode of wave vector $k$ is $\tau \simeq 4\pi/(\alpha\kappa k^2)$ \cite*{Barenghi1985a}. The smallest admissible $k \approx 2\pi/D$ results for $T = 1.3$ K and $D=1067$ nm in $\tau \approx 30$ $\mu$s. Increasing temperature will decrease $\tau$. The decay of Kelvin waves is thus significantly faster than the time scale of their pumping (i.e., the flow period, which is of the order of 1 ms) and comparable to the inverse frequency of the Kelvin mode itself \cite*{Donnelly1991}, i.e., no Kelvin wave cascade is likely to develop along the individual vortices since the largest scales are already in the dissipative range. Other modes of vortex deformation (e.g., solitons \cite*{Hopfinger1982}), which cannot be decomposed into Kelvin waves, are possible. However, since the local velocity of the deformed line, and thus its decay rate mediated by mutual friction, are primarily determined by the local curvature, we expect the decay of these deformations to be comparable to that of the Kelvin waves. The amplitude of thermally excited Kelvin waves is also expected to be negligible \cite{Barenghi1985a}. We therefore consider the vortices in our system that span the confinement to be point-like.
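The Kelvin-wave decay estimate above is easy to reproduce. In the sketch below the version including the logarithmic factor of the Kelvin dispersion is our assumption (not stated in the text); the two variants bracket the $\approx 30$ $\mu$s figure quoted above, and both are far below the $\sim$1 ms flow period:

```python
import math

kappa, alpha = 9.97e-8, 0.034   # circulation quantum (m^2/s), mutual friction at 1.3 K
D, a0 = 1067e-9, 1.0e-10        # confinement (m), vortex core size (m, assumed)

k = 2*math.pi/D                          # smallest admissible Kelvin wave vector
tau_simple = 4*math.pi/(alpha*kappa*k**2)            # decay time, no log factor
omega = kappa*k**2/(4*math.pi)*math.log(1/(k*a0))    # Kelvin frequency with log factor (assumed)
tau_log = 1/(alpha*omega)                            # decay time with log factor
print(tau_simple, tau_log)   # ~1e-4 s and ~1e-5 s, both << 1 ms flow period
```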
Decreasing the temperature, particularly below 1 K, would suppress the mutual friction damping and allow the vortices to deform strongly. Therefore we expect the turbulence to cease to be 2D-like at sufficiently low temperatures. \subsection{Vortex density model} \label{sec:model} Discrete quantized vortices are transported by flow similar to how vorticity is transported in classical 2D flow \cite*{Kraichnan1980}: \begin{equation} \frac{\partial n_\pm}{\partial t} + (\mathbf{v}_\mathrm{s}\cdot\nabla)n_\pm = (n_+ + n_-)b(\mathbf{v}_\mathrm{s}) - dn_+n_- + g_\pm, \end{equation} where the terms on the right hand side correspond, respectively, to splitting, decay by collision and generation of vortices by the external drive. Note that this is not simply a passive scalar transport with source terms, since $\mathbf{v}_\mathrm{s}$ depends on the vortex distribution. We expect the splitting rate $b$ to depend on velocity, possibly exhibiting critical behavior itself. For simplicity, however, we take $b$ to be constant since in a high-velocity regime it is likely to be dominated by the flow oscillation period, which is independent of velocity. Averaged over the flow oscillation period, the advection term $(\mathbf{v}_\mathrm{s}\cdot\nabla)n_\pm$ will have no effect on the vortex density far from the system boundaries. In the region near the boundaries, however, some vortices will be transported toward the wall and annihilated, i.e., the average effect of the advection is to reduce the vortex number. Since the vortex density will vary on the scale of the channel width, we approximate the gradient term as $(\mathbf{v}_\mathrm{s}\cdot\nabla)n_\pm \approx v_\mathrm{s}n_\pm/w$. Putting $a = b - pv_\mathrm{s}/w$, where $p$ characterises the inhomogeneous distribution of vortices throughout the channel, we recover Eqs.~1,2 of the main text. 
The assumption of velocity-independent $b$ and $d$ limits the applicability of the model to turbulent states at relatively high velocity and makes it unsuitable for modelling the transition to turbulence from the laminar state or predicting the scaling of vortex number with velocity. Following from Eqs.~1, 2 of the main text, the total vortex density $n = n_+ + n_-$ and polarization $s = (n_+ - n_-)/n$ obey \begin{equation} \label{eq:dndt-S} \diff{n}{t} = (a+b)n - \frac{1}{2}dn^2(1 - s^2) + g, \end{equation} and \begin{equation} \label{eq:dsdt-S} \diff{s}{t} = -2bs + \frac{1}{2}dns(1 - s^2) + \frac{g_s}{n}, \end{equation} where we grouped all terms depending on $g_\pm$ into new terms $g = g_+ + g_-$ and $g_s = (1-s)g_+ - (1+s)g_-$. The generation terms $g_\pm$ in Eqs.~1, 2 of the main text represent extrinsic or intrinsic nucleation of vortices and are likely to be concentrated near the sharp corners connecting the basin and the channel. The vortices generated at these edges in a polarized configuration are advected into the channel, where they contribute to the observed drag. Near the corners, however, the polarization $s$ will likely be dominated by the instantaneous flow and, averaged over the flow oscillation period, $s\approx 0$, making $g_s \approx g_+ - g_-$ independent of $s$. For simplicity we adopt $g$ and $g_s$ as independent control parameters, rather than $g_\pm$. It should be noted, however, that it is the assumption of $s$-independent $g_s$ that allows for bistable solutions. The equations above are assumed to be local, but spatially averaged quantities are required for comparison with the experiment. The total vortex density $n$ is always positive and thus, to a first approximation, can be replaced by its spatial average. The vortex polarization $s$, on the other hand, has a vanishing average since we assume that the flow will remain on average neutral.
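The bistability admitted by Eqs.~\ref{eq:dndt-S}, \ref{eq:dsdt-S} can be illustrated by direct integration. All parameter values below are illustrative assumptions chosen only to place the system in the bistable regime (with $a+b<0$ so that $n$ saturates, and $g_s = 0$ so the two branches are exactly degenerate); they are not fitted to the experiment:

```python
# Toy Euler integration of the coupled (n, s) equations with constant coefficients.
apb, b, d, g, gs = -0.2, 0.2, 1.0, 1.0, 0.0   # a+b, b, d, g, g_s (all assumed)

def evolve(n, s, dt=0.01, steps=40000):
    for _ in range(steps):
        dn = apb*n - 0.5*d*n**2*(1 - s**2) + g
        ds = -2*b*s + 0.5*d*n*s*(1 - s**2) + (gs/n if n > 0 else 0.0)
        n, s = n + dn*dt, s + ds*dt
    return n, s

# The s = 0 state is unstable here; the sign of the initial polarization
# selects which of the two near-degenerate turbulent states is reached.
print(evolve(1.2, +0.1))   # -> n ~ 1.67, s ~ +0.72
print(evolve(1.2, -0.1))   # -> n ~ 1.67, s ~ -0.72
```

For these parameters the fixed point can also be found by hand: the steady $s$-equation gives $n(1-s^2) = 4b/d = 0.8$, and inserting this into the steady $n$-equation yields $n = 5/3$ and $s = \pm\sqrt{0.52} \approx \pm 0.72$.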
To connect Eq.~\ref{eq:dsdt-S} to averaged quantities, let us consider the simplified device geometry shown in Fig.~\ref{fig:simple-geometry}. The basin is removed and a single channel runs through the entire length of the device, but otherwise we assume general flow features similar to the real device (e.g., flow direction, behavior of the generation terms $g,g_s$). From the symmetry of the problem, $s$ is anti-symmetric with respect to mirroring about either of the axes and thus can be decomposed into orthogonal modes as \begin{equation} \label{eq:s-expansion} s(u, v) = \sum_{k,l} s_{kl}\sin\left(\frac{2\pi k}{L}u\right)\sin\left(\frac{2\pi l}{W}v\right), \end{equation} and the spatial average is then given by $\langle s^2\rangle = 1/4\sum_{kl}|s_{kl}|^2$. In the actual device geometry the modes in the expansion Eq.~\ref{eq:s-expansion} will be more complicated but could, in principle, be constructed by a suitable transformation of the rectangular domain of Fig.~\ref{fig:simple-geometry} onto the actual device geometry. Truncating the expansion at the lowest $s_{11}$ mode (which is likely to be the dominant term in the generation $g_s$) allows us to essentially use Eqs.~\ref{eq:dndt-S}, \ref{eq:dsdt-S} as they are and recover the results from the main text. \begin{figure} \centering \includegraphics{s_modes_simply} \caption{Simplified geometry for the decomposition of $s$ into orthogonal modes.} \label{fig:simple-geometry} \end{figure} In principle higher modes $s_{kl}$ can be considered, where Eq.~\ref{eq:dsdt-S} would be replaced by a set of equations for each mode coupled through nonlinear terms. The generation term $g_s$ is unlikely to have a single-mode decomposition and the nonlinear terms (in $s$) in Eq.~\ref{eq:dsdt-S} will excite higher modes at the expense of lower modes. This picture is fully consistent with the forward enstrophy (quadratic integral of vorticity) cascade of classical 2D turbulence \cite*{Kraichnan1980}. 
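The averaging identity $\langle s^2\rangle = \frac{1}{4}\sum_{kl}|s_{kl}|^2$ follows from the orthogonality of the sine modes and can be checked numerically; the mode amplitudes below are arbitrary:

```python
import math

L = W = 1.0
modes = {(1, 1): 0.8, (2, 1): 0.3, (1, 3): -0.2}   # arbitrary s_kl amplitudes

def s(u, v):
    """Truncated expansion of the polarization field, Eq. (s-expansion)."""
    return sum(c*math.sin(2*math.pi*k*u/L)*math.sin(2*math.pi*l*v/W)
               for (k, l), c in modes.items())

# Midpoint-rule spatial average of s^2 over the rectangular domain.
N = 400
avg = sum(s((i + 0.5)/N, (j + 0.5)/N)**2 for i in range(N) for j in range(N))/N**2
exact = 0.25*sum(c**2 for c in modes.values())
print(avg, exact)   # the two agree: cross terms average to zero, sin^2 averages to 1/4
```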
Higher modes will again exhibit near-degeneracy of the $s_{kl} > 0$ and $s_{kl} < 0$ solutions, lifted by the appropriate mode of $g_s$ and, possibly, by the lower-lying modes through the nonlinear terms. This will result in a more general multi-stability of the mean vortex number $n$, as illustrated in Fig.~\ref{fig:multi-stability}b. As the flow velocity or drive increases, the system will randomly select either the $s>0$ or $s<0$ solution. As the drive increases further, higher-order modes will become important; their signs will again be selected randomly, splitting each branch further. The beginning of this tree of turbulent states is, perhaps, already seen in the high-velocity part of the pressure-velocity curves at 1.4 K shown in Fig.~2a of the main text and highlighted in Fig.~\ref{fig:multi-stability}, where three distinct turbulent branches are clearly seen. \begin{figure} \centering \includegraphics{splitting} \caption{(a) Three distinct turbulent branches of the pressure-velocity curve at 1.4 K (the laminar regime, see Fig.~2a,b of the main text, is not shown in this plot). The multi-stable behaviour hints at the involvement of higher modes of the vortex polarization. (b) An illustration of a tree of multi-stable states generated by Eqs.~\ref{eq:dndt-S}, \ref{eq:dsdt-S}.} \label{fig:multi-stability} \end{figure} \subsection{Comparison of turbulence in 805 nm and 1067 nm confinements} \label{sec:805nm} The velocity-pressure gradient curves for the 805 nm confinement (shown in Fig.~\ref{fig:fv-800}) were measured in parallel with the 1067 nm confinement (shown in Fig.~\ref{fig:fv-1000} and Fig.~2 of the main text) under identical conditions. The bistability is again present in the 805 nm confinement, but in a weaker form and in the temperature range of 1.6--1.8 K. The bistability also tends to be suppressed at high velocities.
\begin{figure} \centering \includegraphics{Fv_all_800} \caption{Velocity-pressure gradient relationship for the 805 nm confinement, under conditions identical to the 1067 nm confinement shown in Fig.~2 of the main text and Fig.~\ref{fig:fv-1000}. Darker curves show increasing pressure gradient, lighter decreasing. The bistability is observed here in a weaker form between 1.6 and 1.8 K.} \label{fig:fv-800} \end{figure} \begin{figure} \centering \includegraphics{Fv_all_1000} \caption{Velocity-pressure gradient relationship for the 1067 nm confinement; same data as in Fig.~2 of the main text, shown for a wider range of the pressure gradient. Darker curves show increasing pressure gradient, lighter decreasing. A few random transitions from the less-dissipative to the more-dissipative state are visible for the bistable turbulence at $T < 1.8$ K. The probability of obtaining the more-dissipative state in the high-velocity regimes can be improved by suddenly increasing the velocity from zero to the target velocity, without the preceding slow ramp-up. These measurements are included within the ramp-down sets of curves. We have never observed a transition from the more-dissipative to the less-dissipative state.} \label{fig:fv-1000} \end{figure} Within the model of Sec.~\ref{sec:model} the bistability can be destroyed in several ways, apart from the already discussed temperature dependence of the decay parameter $d$. For example, increasing the splitting rate $b$ above a critical value will result in a single solution with $s\approx 0$ (i.e., frequent splitting will completely mix the flow). Similarly, increasing $g_s$ above a critical value will destabilise the solution with the sign of $s$ opposite to the sign of $g_s$ (i.e., opposing polarization will be overwhelmed by the strong drive).
Additionally, the confinement of the device likely affects the $d$ parameter as well -- the smaller 805 nm device allows for less lateral deformation of the vortices and hence lowers the effective cross-section for collision, thus reducing $d$, which tends to reduce the bistability. In fact, the bistability is not necessarily destroyed completely. If the environmental disturbances (e.g., vibration of the cryostat) are non-negligible compared to the relative stability of the less-stable state (controlled, for example, by the $d$ parameter), the flow would stochastically transition to the more stable state whenever a sufficiently strong fluctuation randomly occurs. This scenario is consistent with the fact that the transition between the two turbulent states in the temperature range 1.6--1.8 K for the 805 nm confinement does not appear to have a well-defined critical velocity. The device-dependence of the $d$ parameter discussed above, however, does not account for the complete lack of bistability at lower temperatures in the 805 nm device. One possibility for this observation is that the critical velocity of type II (in Fig.~2(b) of the main text) moved beyond the critical velocity of type I. Once the laminar flow becomes unstable, only one turbulent state would be available, which would thus be the only state observed. Indeed, this would be consistent with the relatively narrow hysteretic region at low temperatures in Fig.~\ref{fig:fv-800}. Additionally, Fig.~2(c) of the main text could suggest that the closing of the gap between the critical velocities of types I and II is plausible even for the 1067 nm device at lower temperatures. However, due to the lack of data from lower temperatures and the lack of a model of the critical velocities, this scenario ought to be regarded as speculation at this point.
In order to describe the destruction of the bistability precisely, a significantly more detailed understanding of the critical velocities, the vortex-boundary interaction, and the parameters of Eqs.~\ref{eq:dndt-S}, \ref{eq:dsdt-S} that stem from them would be required. This will depend strongly, for example, on the morphology of the surface \cite*{Stagg2017} and is beyond the scope of this work. \end{document}
PATH="$HOME/.bin:/usr/local/sbin:/usr/local/texbin:$PATH"

# load rbenv if available
if command -v rbenv >/dev/null; then
  eval "$(rbenv init - --no-rehash)"
fi

# mkdir .git/safe in the root of repositories you trust
PATH=".git/safe/../../bin:$PATH"

export -U PATH
\section{Introduction} Understanding activity in galactic nuclei requires high spatial resolution. Kormendy \& Richstone (1995) have outlined the techniques for quantifying the supermassive black holes that power active galactic nuclei (AGN). Our strategy (Mould et al 2012) is good-seeing infrared spectroscopy of AGN in a volume-limited sample, followed by adaptive optics spectroscopy on large aperture telescopes. In this paper we present Palomar TripleSpec spectra of a number of nearby radiogalaxies of early type. NGC 1326 is a ring barred S0 galaxy in the Fornax cluster with circumnuclear star formation (Buta et al 2000). Our second galaxy is a Hubble Atlas polar ring galaxy, an S0 Seyfert 2. Schinnerer \& Scoville (2002) detected four giant molecular cloud associations within the polar ring in NGC 2685 (the Helix), with of order 10$^7$ M$_\odot$ of molecular hydrogen. Dust has been detected with Spitzer in our third S0 galaxy, NGC 5273, totalling 2.5 $\times$ 10$^5$ M$_\odot$ (Martini et al 2013). NGC 5838 has a nuclear star cluster of 5 $\times$ 10$^7$ M$_\odot$ (Scott \& Graham 2013). \section{Sample and observations} We have drawn our radiogalaxy sample from Brown et al (2011), further limiting the distance to 20 Mpc in order to have 100 pc resolution in 1$^{\prime\prime}$ seeing. Observations of NGC 1326, 2685, 5273 \& 5838 were obtained on the Hale Telescope in 2011 and 2012. The acquisition of our Palomar TripleSpec spectra was described by Mould et al (2012) and the data reduction was outlined by Batt et al (2014, Paper I); we recap both very briefly here. The spectrograph has a resolution of 2600 with a 1$^{\prime\prime}$ slit, and observations were made with the nucleus in two slit positions in an ABBA pattern, totalling 4 $\times$ 5 minutes. These were followed by observations of an A0 star for telluric correction and seeing measurement.
Flatfielded spectra were subtracted and extracted at different impact parameters along the slit, yielding the wavelength shifts and first overtone CO line widths given in Table 1. The IRAF cross-correlation task $fxcor$ was used for this purpose with the Gemini library stellar template HD2490 interpolated to the same resolution. Table 1 gives the radial position of the extracted spectrum in column (1), the pixel shift between that and the template in column (2), the peak height of the cross-correlation in column (3) and the FWHM of the fit to the cross-correlation in column (4). The units of columns (2--4) are pixels. \begin{deluxetable}{llllllll} \tabletypesize{\small} \tablecaption{Raw cross-correlation data} \tablehead{\colhead{}& \colhead{}& \colhead{}& \colhead{}& \colhead{}& \colhead{}& \colhead{}& \colhead{}} \startdata {\bf NGC 2685}\\ position& pixel shift& peak& fwhm& position& pixel shift& peak& fwhm\\ (arcsec)&[2]&[3]&[4]&(arcsec)&[2]&[3]&[4]\\ 0 & -189.96 & 0.23& 15.4& 0 & -190.55 & 0.21& 12.16\\ 0 & -190.68 & 0.26& 14.96& 0 & -190.7 & 0.25& 15.1\\ 0.9 & -190.5 & 0.26& 14.2& 1.63 & -187.09 & 0.24& 14.8\\ 0.73 & -188.4 & 0.27& 13.9& 0.79 & -188.99 & 0.18& 14.98\\ 0.79 & -190.65 & 0.26& 13.41& 0.84 & -189.96 & 0.2& 12.7\\ 0.79 & -191.01 & 0.2& 12.3\\ \\ {\bf NGC 5838}\\ position& peak height& fwhm& shift& position& peak height& fwhm& shift\\ (arcsec)&[2]&[3]&[4]&(arcsec)&[2]&[3]&[4]\\ 0 &0.366 &27.4& -0.76& 0 &0.519 &29.1& -0.67\\ 0.316 &0.329 &27.1& -1.31& 0.632 &0.301 &13.1& -0.1\\ 0.948 &0.333 &14.2& -0.6& 1.264 &0.368 &10.2& -0.41\\ 1.58 &0.331 &10& -0.27& 1.896 &0.314 &8.18& -0.08\\ 2.212 &0.322 &11.4& -0.28& 0.316 &0.325 &13.1& 0.55\\ 0.632 &0.391 &8.35& -0.03& 0.948 &0.398 &8.99& 0.249\\ 1.264 &0.476 &10.6& 0.01& 1.58 &0.462 &12.3& 0.4\\ 1.896 &0.484 &18.6& 0.89& 2.212 &0.503 &14.3& 1.01\\ 0.316 &0.397 &13.7& -0.13& 0.632 &0.398 &14.6& -0.4\\ 0.948 &0.394 &11.7& -0.71& 1.264 &0.39 &13.9& -0.49\\ 1.58 &0.418 &8.8& -0.47& 1.896 &0.381 &10.1& -0.52\\ \\
{\bf NGC 5273}\\ position& peak& fwhm& position& peak& fwhm\\ (arcsec)&[2]&[3]&(arcsec)&[2]&[3]\\ 0 & 0.28& 28.36& 0 & 0.22& 15.6\\ 0 & 0.28& 23.22& 0 & 0.22& 20.63\\ 1.57 & 0.13& 21.09& 0.73 & 0.26& 17.12\\ 1.99 & 0.23& 16.76& 1.09 & 0.23& 24.18\\ 1.58 & 0.12& 11.94& 0.79 & 0.28& 20.57\\ 1.69 & 0.12& 14.22& 0.9 & 0.2& 11.31\\ 1.18 & 0.18& 21.12& 1.97 & 0.16& 7.22\\ 0.54 & 0.19& 22.96& 1.33 & 0.1& 21.68\\ 0.84 & 0.29& 27.98& 1.63 & 0.13& 15.2\\ 1.07 & 0.24& 24.97& 1.86 & 0.19& 14.78\\ \\ {\bf NGC 1326}\\ position& peak& fwhm& position& peak& fwhm\\ 0&0.4&20.61& 0&0.39&20.56\\ 0&0.4&20.13& 0&0.42&20.84\\ 0.78&0.43&22.4& 0.56&0.41&21.94\\ 1.62&0.43&22.17& 1.12&0.38&20.38\\ 1.01&0.42&22.8& 0.95&0.43&21.99\\ 1.57&0.41&22.61& 1.8&0.39&24.8\\ 0.45&0.42&21.96& 0.62&0.44&21.96\\ 1.41&0.41&20.65& 1.07&0.42&21.61\\ 0.9&0.41&20.88& 0.68&0.42&21.4\\ 1.52&0.39&20.5& 1.52&0.41&22.17\\ \enddata \end{deluxetable} \section{Kinematics and dynamics} \subsection{NGC 1326} NGC 1326 has been imaged by the Hubble Space Telescope (Figure 1) and its ultraviolet light distribution is displayed in the radial profile from the IRAF STSDAS surface photometry task $ellipse$ in Figure 2. The Jeans equation allows us to predict the velocity dispersion profile $\sigma$(r) corresponding to this light distribution, assuming spherically distributed stars on isotropic orbits. To do this, we need the logarithmic derivatives with respect to radius of the density and velocity dispersion profiles. The former is obtained numerically using an Abel transform, the latter by calculating the (small) slope of the velocity dispersion data. The visual mass-to-light ratio is a free parameter in this model and we fit it to the data at r $>$ 80 pc, finding M/L = 6.5 in solar units, a normal value for a stellar population not dominated by dark matter. TripleSpec line width values were normalized in the same way as in Paper I.
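The Jeans-equation prediction described above can be illustrated with a toy model. In the sketch below the power-law stellar density, its normalization and the outer truncation radius are illustrative assumptions (not the measured profile of NGC 1326); the sketch only shows how adding a central point mass of the order considered in the text raises the predicted $\sigma$(r) at small radii:

```python
import math

G = 4.301e-6                         # gravitational constant, kpc (km/s)^2 / Msun
gamma, rho0, r0 = 1.5, 1.0e9, 0.1    # toy density rho = rho0*(r/r0)^-gamma (assumed)
M_BH = 1.0e7                         # trial central black hole mass, Msun

def rho(r):
    return rho0*(r/r0)**(-gamma)

def M_star(r):
    """Analytic enclosed stellar mass for the power-law density."""
    return 4*math.pi*rho0*r0**gamma*r**(3 - gamma)/(3 - gamma)

def sigma(r, Mbh=0.0, rmax=1.0, n=2000):
    """Isotropic spherical Jeans: rho*sigma^2(r) = int_r^rmax rho G M_tot / r'^2 dr'.
    Midpoint integration on a logarithmic grid."""
    h = (math.log(rmax) - math.log(r))/n
    total = 0.0
    for i in range(n):
        x = math.exp(math.log(r) + (i + 0.5)*h)
        total += rho(x)*G*(M_star(x) + Mbh)/x**2 * x*h   # dr' = x*h on the log grid
    return math.sqrt(total/rho(r))

r = 0.05   # 50 pc, in kpc
print(sigma(r), sigma(r, M_BH))   # the black hole raises the central dispersion
```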
The addition of a 1 $\times$ 10$^7 M_\odot$ black hole modifies the mass distribution and $\sigma$(r). It is a better fit to the data than the solid line in the lower part of Figure 2. The no-BH model is ruled out with 70\% confidence based on $\chi^2$. \subsection{NGC 2685} NGC 2685 is a polar ring galaxy, known as `the spindle'. The HST nuclear image is reproduced in Figure 3 and the light distribution has been fitted with a `nuker profile' (Lauer et al 2007). The profile and the model fit appear in Figure~4, and $\chi^2$ per degree of freedom implies that M$_\bullet~>$ 3 $\times$ 10$^7M_\odot$ with less than 20\% probability. This is consistent with Beifiori et al (2009), who find an upper limit M$_\bullet~<$ 1.1 $\times$ 10$^7M_\odot$. The innermost datapoint has been located, not at zero radius as Table 1 would imply, but at the effective light centre of the zero-radius observation, taking account of seeing. \subsection{NGC 5273} We fitted a nuker profile to archival HST WFPC2 PC data (Figure 5), obtaining ($\alpha, \beta, \gamma$) = (1.8, 1.8, 0.75), and normalized the profile to the surface photometry of Mu\~noz Marin et al (2007) with r$_b$ = 50 pc. Figure 6 is the model fit, and $\chi^2$ per degree of freedom implies that M$_\bullet~ >$ 10$^8M_\odot$ with less than 25\% probability. We assumed the Tonry et al (2001) surface brightness fluctuation distance of m-M = 31.09 $\pm$ 0.26. The TripleSpec spectrum also shows an interesting He I 10830\AA~ line (Figure 10). Silhouetted against the broad line region helium emission and its luminous (10$^8$ L$_\odot$) X-ray gas (Liu 2011) is a P-Cyg profile of cooler (kT $\sim$ 30 eV) neutral gas with a terminal outflow velocity of 750 km/sec. This object will repay IFU study of its circumnuclear gas and modelling to determine the outflow rate. \subsection{NGC 5838} Calculation of a predicted stellar velocity dispersion profile was described for galaxies with nuker profiles in Paper I. NGC 5838 has such a profile (Lauer et al 2007).
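The `nuker profile' used in the fits above has the standard parametric form of Lauer et al., with inner slope $\gamma$, outer slope $\beta$, break sharpness $\alpha$ and break radius r$_b$. A minimal sketch using the NGC 5273 values ($\alpha, \beta, \gamma$) = (1.8, 1.8, 0.75) and r$_b$ = 50 pc; the intensity normalization is arbitrary:

```python
def nuker(r, Ib=1.0, rb=50.0, alpha=1.8, beta=1.8, gamma=0.75):
    """Nuker law: Ib is the surface brightness at the break radius rb (pc);
    I ~ r^-gamma well inside rb and I ~ r^-beta well outside."""
    return Ib * 2**((beta - gamma)/alpha) * (rb/r)**gamma \
             * (1 + (r/rb)**alpha)**((gamma - beta)/alpha)

# At r = rb the prefactor cancels the break term, so nuker(rb) = Ib exactly.
print(nuker(5.0), nuker(50.0), nuker(500.0))
```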
Figure 7 shows the nucleus of NGC 5838 and Figure 8 shows a fit with M/L = 30 and a black hole of 1 $\times$ 10$^8$ M$_\odot$. Note that Lauer et al assume V-H = 2.39 in converting NICMOS data to visual magnitudes. We also adopted their distance of 22.2 Mpc. The no BH model is rejected with 98\% confidence. \section{Summary} We summarize our findings in Table 2. In two cases we have SMBH detections; in two cases we have upper limits on the SMBH mass. Our upper limit for NGC 5273 is consistent with the result from reverberation mapping of 4.7 $\pm$ 1.6 $\times$ 10$^6~M_\odot$ by Bentz et al (2014). Figure 9 shows our 4 radio galaxies in their Magorrian diagram. NGC 5838 is plotted at $\sigma$ = 290 km/sec (McElroy 1995). \vspace*{1 cm} \centerline{\bf Table 2: Black hole masses}
\begin{tabbing}
Namessss\=Typess\=Distances\=Msssssss\=ssssss\=ss\kill
Name\>Type\>Distance\>M$_V$\>M/L\>SMBH\\
NGC\>\>(Mpc)\>\>\>M$_\odot$\\
N1326\>SB0+\>20.5\>--21.05\>6.5\>1 $\times$~10$^7$ \\
N2685\>SB0+\>14.3\>--19.72\>1.3\>$<~3^*\times~10^7$\\
N5273\>S0\>20\>--20.1\>1$^\dagger$\>$<$ 1 $\times$ 10$^8$\\
N5838\>S0-\>22.2\>--20.51\>30\>1 $\times$ 10$^8$ \\
~ *1.1 (Beifiori et al 2009)\>\>\>\>$^\dagger$ UV M/L\\
\end{tabbing}
\acknowledgements We thank our referee for comments that improved the paper. We are grateful for the support of the Australian Research Council through DP140100435. GC acknowledges support from STFC grant ST/K005596/1. Spectra were extracted using a version of the Spextool program modified for the Palomar TripleSpec Spectrograph (Cushing et al. 2004; M. Cushing, private communication 2011). We acknowledge the Hubble Legacy Archive, a facility of STScI, which is operated by AURA for the National Aeronautics and Space Administration (NASA). This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with NASA.
This research has also made use of IRAF, software written by NOAO and data products from the Gemini Observatory, which are operated by AURA under a cooperative agreement with NSF. David Batt was a summer student at Swinburne University while this work was carried out.
Chris Paul's First NBA Finals Trip Comes After Topping Michael Jordan for a Ridiculous NBA Record by Bob Garcia IV on July 1, 2021 Star point guard Chris Paul helped seal the Phoenix Suns' spot in the 2021 NBA Finals after a dominant performance in the Game 6 win over the Los Angeles Clippers. Paul became the difference-maker that secured his first career trip to the NBA Finals. His stellar performance also put him in the NBA playoff record books over former Chicago Bulls' great Michael Jordan. Chris Paul leads the Suns to the NBA Finals With an opportunity to secure his first NBA Finals berth, Paul showed out with one of his best career playoff performances. The 36-year-old guided the Suns throughout the game despite the Clippers' strong push to force an all-or-nothing Game 7. Paul led the charge in the second half with 31 of his game-high and career-playoff best 41 points with zero turnovers. The impressive outing put him alongside Michael Jordan, Shaquille O'Neal, and Hakeem Olajuwon as the only players to record at least 40 points with no turnovers in a conference finals game since turnovers became an official statistic, according to ESPN Stats & Info. Paul dominated the contest behind his scoring as he knocked down 7-of-8 attempts from beyond the arc while hitting 16-of-24 overall shots. The Suns also had a solid performance from big man Deandre Ayton, who poured in 16 points, 17 rebounds, and two blocks. Meanwhile, Devin Booker notched 22 points, and Jae Crowder tallied 19 points behind five made 3-pointers. The series close-out win lifts Phoenix to the NBA Finals for the first time in 28 years with a chance to secure the franchise's first championship in its 53 years of existence. Paul's dominant performance in Game 6 also put him in the NBA record books over Jordan. After dropping 37 in the series-clincher against Denver, Chris Paul poured in 41 Wednesday night against the Clippers.
He became the oldest player in postseason history with 35 points in consecutive close-out games within a postseason. The previous oldest? Michael Jordan. — ESPN Stats & Info (@ESPNStatsInfo) July 1, 2021 Paul's brilliant playoff performance marked not only the Suns' first trip to the NBA finals in nearly three decades but also his first go-around on that stage in his 16th campaign. The star guard stepped up by providing the game-changing outing. According to ESPN's Stats & Info, his 41 points in Game 6 against the Clippers, coupled with his 37-point performance in the series-clinching win over the Denver Nuggets, made him the oldest player to post at least 35 points in consecutive close-out playoff games. It's a feat that pushes him ahead of Jordan in the record books, as Jordan set the previous mark at age 33 during the 1996 playoffs. Meanwhile, Paul also topped the former Chicago Bulls great as the oldest player to score 40-plus points in a postseason series-clinching win. Paul delivered the most pivotal performance of his NBA career just when the Suns needed it to reach the NBA Finals. Despite missing the first two games of the Western Conference Finals due to testing positive for COVID-19, he still became the difference-maker in the series. Beyond that, it's another moment that will add to his legacy as an all-time great player. A career-defining moment lies ahead In Paul's 16th season, he holds the golden opportunity to finally earn the NBA championship that has eluded him throughout his career. Paul has already established himself as one of the game's greatest point guards, but an NBA title remains missing from his resume. He has been remarkable in reshaping the Suns behind his play and leadership, transforming the franchise into a legitimate championship contender. It took Paul until his 1,214th career game to make his first NBA Finals appearance.
However, it's a moment he's ready for, as he guides the Suns forward to take on either the Milwaukee Bucks or Atlanta Hawks. "Sixteen years of this," Paul said via CBS Sports. "Sixteen years of surgeries, hard work, losses, bad losses, but we're gonna enjoy tonight. We're gonna enjoy it." Ultimately, Paul sits on the doorstep of the crowning achievement of his NBA career.
\section{Introduction}\label{sec-intro} The notion of \emph{tangled circuit diagram} was introduced in \cite{RSW-TangCirc-2012} as part of a programme (\cite{KSW97b,KSW00a, KSW00b, KSW02,RSW04,RSW05,RSW08,dFSWcsg}) to study the kinds of parallel and sequential networks which occur in Computer Science. Allowing tangling introduced questions of geometric interest, and the search for invariants led to connections with knot invariants. This paper is an attempt to concentrate attention on some very specific tangled circuits, which we called blocked braids, consisting of two components with the same number of ports joined by a braid of wires. Such circuits have a group structure (analogous to the sum of knots) and we describe some progress in understanding these groups. This work appeared in the Laurea Magistrale Thesis of Davide Maglia \cite{M12}. \section{Blocked-braid groups} \begin{dfn}\label{def-braided-monoidal} A \emph{braided strict monoidal category} (\cite{jsbmc}) is a category ${\bf C}$ with a functor, called tensor, $\otimes : {\bf C} \times {\bf C} \to {\bf C}$ and a ``unit'' object $I$ together with a natural family of isomorphisms ${\tau}_{A,B} : A \otimes B \to B \otimes A$ called twist satisfying \begin{itemize} \item[1)] $\otimes$ is associative and unitary on objects and arrows, \item[2)] the following diagrams commute for objects $A, B, C$: $$ \bfig \place(-400,200)[B1:] \Vtriangle/>`>`<-/<600,400>[A\otimes B\otimes C`B\otimes C\otimes A`B\otimes A\otimes C;{\tau}`{\tau}\otimes 1`1\otimes{\tau}] \efig $$ and $$ \bfig \place(-400,200)[B2:] \Vtriangle/>`>`<-/<600,400>[A\otimes B\otimes C`C\otimes A\otimes B`A\otimes C\otimes B;{\tau}`1\otimes{\tau}`{\tau}\otimes 1] \efig $$ \end{itemize} We will denote the $n$-fold tensor product of an object $A$ by $A^n$.
\end{dfn} \begin{dfn}\label{def-tangle-algebra} A tangle algebra in a braided strict monoidal category (with twist $\tau$) is an object $A$ equipped with arrows $\eta : I \to A \otimes A$ and ${\epsilon} : A \otimes A \to I$ that satisfy the equations: \begin{itemize} \item[(i)] $({\epsilon}\otimes 1_A) (1_A\otimes \eta) = 1_A = (1_A\otimes {\epsilon})(\eta\otimes 1_A) $ \item[(ii)] ${\epsilon} {\tau}_{A,A} = {\epsilon}$ and ${\tau}_{A,A}\eta = \eta$. \end{itemize} \end{dfn} \begin{dfn}\label{def-tangle-category}\cite{FY89} The category $\mathbf{Tangle}_n$ is the free braided strict monoidal category generated by an object $A$ with a tangle algebra structure, and two arrows $R:I\to A^n$ and $S:A^n\to I$. \end{dfn} \begin{dfn}\label{def-braid} We call an arrow $B:A^n\to A^n$ in $\mathbf{Tangle}_n$ a \emph{braid in $\mathbf{Tangle}_n$} if it is a composite of arrows each of which is a tensor of identities and arrows ${\tau}_{A,A}$. It is straightforward to see that since ${\tau}$ has an inverse then so also has any braid in $\mathbf{Tangle}_n$. \end{dfn} We can picture braids in an obvious way (though notice that the order of composition of arrows in the expression is reversed in the picture). For example, the braid $$ (1\otimes {\tau})({\tau}\otimes 1)(1\otimes {\tau})({\tau}\otimes 1)(1\otimes {\tau})({\tau}\otimes 1)$$ is pictured as: \begin{center} \includegraphics[width=250mm,bb=-100 0 900 100]{pix/abraid.jpg} \end{center} \begin{dfn}\label{def-blocked-braid} A blocked braid on $n$ strings is defined to be an arrow in $\mathbf{Tangle}_n$ from $I$ to $I$ of the form $SBR$ where $B$ is a braid in $\mathbf{Tangle}_n$. Notice $SBR=SB'R$ does not imply that $B=B'$. 
\end{dfn} The picture of the blocked braid $$ S(1\otimes {\tau})({\tau}\otimes 1)(1\otimes {\tau})({\tau}\otimes 1)(1\otimes {\tau})({\tau}\otimes 1)R$$ is: \begin{center} \includegraphics[width=200mm,bb=-100 0 900 100]{pix/bbraid.jpg} \end{center} \begin{dfn}\label{def-blocked-braid-group} The blocked-braid group on $n$ strings has as elements the blocked braids on $n$ strings. The identity of the group is $S1_{A^n}R$. The composition of $SBR$ and $SB'R$ is $SBB'R$. The inverse of $SBR$ is $SB^{-1}R$. \end{dfn} We need to check that composition is well-defined. That is, to show that if $SBR=SB'R$ and $SCR=SC'R$ ($B,B',C,C'$ braids in $\mathbf{Tangle}_n$) then $SBCR=SB'C'R$. It clearly suffices to check this for $C=C'$. Since the category $\mathbf{Tangle}_n$ is free with the appropriate structure, and no assumptions were made on $R$ and $S$ except their domains and codomains, then clearly $SBR=SB'R$ implies that $S'BR'=S'B'R'$ for any $S':A^n\to I$ and $R':I\to A^n$. Take $S'=S$ and $R'=CR$. Then $SBCR=SB'CR$. The group axioms are now clearly satisfied. \begin{rem}\label{rem-composition-well-defined} Notice the similarity of the composition operation with the sum of knots, in which a piece of each knot is removed to form the composition. \end{rem} \begin{dfn} The braid group ${\bf B}_n$ on $n$ strings is generated by $n-1$ elements ${\sigma}_1, {\sigma}_2,\dots,{\sigma}_{n-1}$ and satisfies the equations \begin{itemize} \item[(i)] ${\sigma}_i{\sigma}_j ={\sigma}_j{\sigma}_i$ if $i+1<j$, \item[(ii)] ${\sigma}_i{\sigma}_{i+1}{\sigma}_i ={\sigma}_{i+1}{\sigma}_{i}{\sigma}_{i+1}$. \end{itemize} \end{dfn} \begin{prop} In $\mathbf{Tangle}_n$ (or more generally in any braided strict monoidal category) if we define ${\sigma}_i:A^n\to A^n$ $(i<n)$ by ${\sigma}_i=1_{A^{i-1}}\otimes{\tau} \otimes 1_{A^{n-i-1}}$ then these ${\sigma}_i$ satisfy the equations of the group ${\bf B}_n$. This is also clearly true of the arrows $S{\sigma}_i R$ in $\mathbf{BB}_n$.
Hence there is a surjective homomorphism from ${\bf B}_n$ to $\mathbf{BB}_n$. \end{prop} \begin{rem} It is convenient to use the symbol ${\sigma}_i$ in several different senses as we have done above: 1) as an element of ${\bf B}_n$ in any of the braid groups with $n>i$, and 2) as an arrow in $\mathbf{Tangle}_n$ from $A^n$ to $A^n$ for $n>i$. In each case we will make clear by the context which of the meanings is intended for ${\sigma}_i$. \end{rem} \section{Categories of Tangled Relations}\label{sec-tangled-relations} In order to distinguish blocked-braids we describe a family of categories $\mathbf{TRel}_G$, introduced in \cite{RSW-TangCirc-2012}. \subsection{The definition of $\mathbf{TRel}_G$}\label{subsec-defn-trel} We will describe a braided modification of the category $\mathbf{Rel}$ of relations with an object with a tangle algebra structure. \begin{definition}\label{def-trel} Let $G$ be a group. The objects of $\mathbf{TRel}_G$ are the formal powers of $G$, and the arrows from $G^m$ to $G^n$ are relations $R$ from the set $G^m$ to the set $G^n$ satisfying: \begin{itemize} \item[1)] if $(x_1,...,x_m)R(y_1,...y_n)$ then also for all $g$ in $G$ \\ $(gx_1g^{-1}, ...,gx_mg^{-1})R(gy_1g^{-1}, ... ,gy_ng^{-1})$, \item[2)] if $(x_1,...,x_m)R(y_1,...y_n)$ then $x_1...x_m(y_1...y_n)^{-1}\in Z(G)$ (the center of $G$). \end{itemize} Composition and identities are defined to be composition and identity of relations. The tensor is defined on objects by $G^m\otimes G^n = G^{m+n}$ and on arrows by product of relations. The twist $${\tau}:G\otimes G\to G\otimes G$$ is the functional relation $$(x,y)\sim (xyx^{-1},x). $$ \end{definition} \begin{prop}\cite{RSW-TangCirc-2012} The object $G\in \mathbf{TRel}_G$ has a tangle algebra structure given by the arrows $\eta$ and ${\epsilon}$, where $\eta$ is the functional relation $*\sim (x,x^{-1})$ and ${\epsilon}$ is the opposite relation of $\eta$.
As a consequence, given any particular relations $R:G^0\to G^n$ and $S:G^n\to G^0$ in $\mathbf{TRel}_G$ we have a representation of $\mathbf{BB}_n$ in $\mathbf{TRel}_G(G^n,G^n)$. Clearly blocked-braids which are distinct in such a representation are distinct in $\mathbf{BB}_n$. \end{prop} \subsection{The category of relations} The usual symmetric monoidal category $\mathbf{Rel}$ of relations, whose objects are sets, whose arrows are relations, and whose symmetry $X\times Y\to Y\times X$ is the functional relation $(x,y)\mapsto (y,x)$, has tangle algebra structures on each object $X$: $\eta$ is the relation $\eta: *\sim (x,x)$ and ${\epsilon}$ is the opposite relation to $\eta$ \cite{CW87}. The representation of ${\bf B}_n$ in $\mathbf{Rel}(X^n,X^n)$ is that each braid $B$ goes to the corresponding permutation, denoted $\phi(B)$, of the factors of $X^n$. Further, given any particular relations $R:X^0\to X^n$, $S:X^n\to X^0$ (that is, subsets of $X^n$) there is a representation of the blocked-braid group which takes $SBR\in \mathbf{BB}_n$ to the subset of $X^0$ given by the composite of relations $S\phi(B)R:X^0\to X^n\to X^0$. It is easy to check that these representations detect the permutation of the braid in a blocked braid. Hence there is a surjective homomorphism from $\mathbf{BB}_n$ to the symmetric group $\Sigma_n$. \section{$\mathbf{BB}_2$ is $\mathbb{Z}_2$}\label{sec-BB2} To prove that $\mathbf{BB}_2=\mathbb{Z}_2$ it suffices to show that $S{\sigma}_1R=S{\sigma}_1^{-1}R$ in $\mathbf{BB}_2$, since the surjection onto $\Sigma_2$ already shows that $S{\sigma}_1R$ is not the identity. We will prove a more general result.
\begin{prop}\label{inverse-theorem} In $\mathbf{BB}_n$ the following holds: $$S{\sigma}_{n-1}{\sigma}_{n-2}\cdots{\sigma}_1R=S{\sigma}_{n-1}^{-1}{\sigma}_{n-2}^{-1}\cdots{\sigma}_1^{-1}R.$$ Clearly also $$S{\sigma}_{1}{\sigma}_{2}\cdots{\sigma}_{n-1}R=S{\sigma}_{1}^{-1}{\sigma}_{2}^{-1}\cdots{\sigma}_{n-1}^{-1}R.$$ \end{prop} We give an algebraic proof, but a proof which makes the geometric content clear is available for the case of $\mathbf{BB}_2$ in \cite{RSW-TangCirc-2012}. \vspace{\baselineskip}\noindent{\bf Proof. } \begin{align*} S{\sigma}_{n-1}{\sigma}_{n-2}\cdots{\sigma}_1 R &=S( 1_{A^{n-2}}\otimes {\tau})(1_{A^{n-3}}\otimes {\tau} \otimes 1_{A})(\cdots )({\tau}\otimes 1_{A^{n-2}})R\\ &=S{\sigma}_{n-1}{\sigma}_{n-2}(\cdots)({\tau}\otimes 1_{A^{n-2}})(1\otimes {\epsilon}\otimes 1_{A^{n-1}})(1_{A^2}\otimes R)(\eta)\tag{duality} \\ &=S({\epsilon}\otimes 1_{A^{n}})(1_{A}\otimes R\otimes 1_A)({\tau}\eta)\tag{naturality} \\ &=S({\epsilon}\otimes 1_{A^{n}})(1_{A}\otimes R\otimes 1_A)(\eta)\tag{commutativity}\\ \end{align*} The final expression does not involve ${\tau}$. It is clear that repeating the above argument commencing with the right-hand-side reduces to the same final expression. \qed \section{$\mathbf{BB}_3$ has order $6$ or $12$}\label{sec-BB3} \subsection{Equations in $\mathbf{BB}_3$} Let $a=S{\sigma}_1R$ and $b=S{\sigma}_2R$ be the blocked braids on three strings. Clearly $\mathbf{BB}_3$ is generated by $a$ and $b$. The first equation satisfied by $a$, $b$ comes directly from the Yang-Baxter equation: $$aba=S({\tau}\otimes 1)(1\otimes {\tau})({\tau}\otimes 1)R=S(1\otimes {\tau})({\tau}\otimes 1)(1\otimes {\tau})R=bab.$$ A further equation comes from the proposition of section \ref{sec-BB2}, namely $$abba=(S{\sigma}_1{\sigma}_2R)(S{\sigma}_2{\sigma}_1R)=(S{\sigma}_1{\sigma}_2R)(S{\sigma}_2^{-1}{\sigma}_1^{-1}R)=S1R=1.$$ It is straightforward to see that the group generated by two letters $a$ and $b$ satisfying $aba=bab$ and $abba=1$ has $12$ elements.
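The count of $12$ can be checked against a concrete realization. The sketch below (plain Python, our own construction, not part of the paper) builds the semidirect product $\mathbb{Z}_3\rtimes\mathbb{Z}_4$ as pairs $(i,j)$ with multiplication $(i,j)(k,l)=(i+(-1)^jk \bmod 3,\; j+l \bmod 4)$, and exhibits elements $a,b$ satisfying $aba=bab$ and $abba=1$ which generate all $12$ elements. This shows that the presented group surjects onto a group of order $12$; the matching upper bound is the separate enumeration argument alluded to above.

```python
# Z3 x| Z4 with the generator t of Z4 acting on the generator c of Z3 by
# t c t^{-1} = c^{-1}; elements are pairs (i, j) representing c^i t^j.
def mul(p, q):
    (i, j), (k, l) = p, q
    return ((i + (-1) ** j * k) % 3, (j + l) % 4)

E = (0, 0)              # identity
A = (0, 1)              # candidate for a: the Z4 generator t
B = mul((2, 0), A)      # candidate for b: c^{-1} t

def inv(p):
    # brute-force inverse in the 12-element group
    for i in range(3):
        for j in range(4):
            if mul(p, (i, j)) == E:
                return (i, j)

def generated(gens):
    # closure of {E} under right multiplication by the generators
    elems, frontier = {E}, set(gens)
    while frontier:
        new = {mul(x, g) for x in elems | frontier for g in gens} - elems - frontier
        elems |= frontier
        frontier = new
    return elems
```

In this realization $a$ has order $4$, $c=ab^{-1}$ has order $3$, and $a^{-1}ca=c^{-1}$, consistent with the semidirect product structure.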
The elements $a$ and $b$ both have order $4$ while $c=ab^{-1}$ has order $3$. Further, $a^{-1}ca=c^{-1}$, hence the three-element subgroup $\{1,c,c^2\}$ is normal, and in fact $$<a,b; aba=bab,abba=1>$$ is the semidirect product of $\mathbb{Z}_3$ by $\mathbb{Z}_4$ with the non-trivial action of $\mathbb{Z}_4$ on $\mathbb{Z}_3$. Of course this does not completely identify $\mathbf{BB}_3$ since it could be that $\mathbf{BB}_3$ is the symmetric group on three elements; this would be the case if, for example, also $aa=1$. \begin{rem} We suspect that $\mathbf{BB}_3$ has $12$ elements but have not been able to prove it. It was pointed out in \cite{RSW-TangCirc-2012} that $\mathbf{TRel}_G$ is unable to distinguish between the blocked braids $aa$ and $1$ in $\mathbf{BB}_3$. \end{rem} \section{Dirac's belt trick}\label{sec-dirac} \begin{dfn} The \emph{torsion} (through $\pi/2$) of $n$ strings, denoted $T_n$, is the braid in ${\bf B}_n$ $$({\sigma}_1)({\sigma}_2{\sigma}_1)(\cdots)({\sigma}_{n-2}{\sigma}_{n-3}\cdots{\sigma}_1)({\sigma}_{n-1}{\sigma}_{n-2}\cdots{\sigma}_2{\sigma}_1).$$ \end{dfn} \begin{rem} We will consider the braid $$({\sigma}_1)({\sigma}_2{\sigma}_1)(\cdots)({\sigma}_{n-2}{\sigma}_{n-3}\cdots{\sigma}_1)({\sigma}_{n-1}{\sigma}_{n-2}\cdots{\sigma}_2{\sigma}_1)$$ also as an element of ${\bf B}_m$ for $m>n$ and we will still denote it $T_n$, though in ${\bf B}_m$ it will not be called a torsion. \end{rem} In \cite{RSW-TangCirc-2012} there is a geometric sketch of a proof that $ST_3^4R=S1R$ in $\mathbf{BB}_3$ (three strings). The aim of this section is to prove algebraically that $(ST_nR)^4=S1R$ in $\mathbf{BB}_n$ ($n$ strings). We suspect that $(ST_nR)^2$ is \emph{not} the identity in $\mathbf{BB}_n$ but are unable to find a proof. In $\mathbf{TRel}_G$ it is always the case that $ST_n^2R=S1R$.
\begin{prop}\label{other-torsion} The braid $T_n$ satisfies $$T_n= ({\sigma}_{1}{\sigma}_{2}{\sigma}_{3}\cdots {\sigma}_{n-1})(\cdots)({\sigma}_1{\sigma}_2{\sigma}_3)({\sigma}_1{\sigma}_2)({\sigma}_1).$$ \end{prop} \vspace{\baselineskip}\noindent{\bf Proof. } Straightforward from the equations (ii) of braid groups. \qed \begin{dfn} We denote as $Q_n$ the braid ${\sigma}_{n-1}{\sigma}_{n-2}\cdots {\sigma}_{1} $. By $Q_n^k$ we mean the $k$th power of $Q_n$. Similarly let $P_n$ be the braid ${\sigma}_{1}{\sigma}_{2}\cdots {\sigma}_{n-1} $, and let $P_n^k$ be the $k$th power of $P_n$ \end{dfn} \begin{prop}\label{theorem4} $({\sigma}_{k-1}^{-1}\cdots{\sigma}_{2}^{-1}{\sigma}_1^{-1})Q_k^{k-1}=Q_{k-1}^{k-1}$. \end{prop} \vspace{\baselineskip}\noindent{\bf Proof. } The key point in the proof is the fact that for $i<k$ $${\sigma}_i^{-1}Q_{k}^{k-i}=Q_{k}^{k-i-1}Q_{k-1}.$$ The pattern will be clear from the example of $k=4$. \begin{align*} ({\sigma}_3^{-1}{\sigma}_2^{-1}{\sigma}_1^{-1})Q_4^3 &={\sigma}_3^{-1}{\sigma}_2^{-1}{\sigma}_1^{-1}{\sigma}_3{\sigma}_2{\sigma}_1{\sigma}_3{\sigma}_2{\sigma}_1{\sigma}_3{\sigma}_2{\sigma}_1\\ &={\sigma}_3^{-1}{\sigma}_2^{-1}{\sigma}_3{\sigma}_1^{-1}{\sigma}_2{\sigma}_1{\sigma}_3{\sigma}_2{\sigma}_1{\sigma}_3{\sigma}_2{\sigma}_1\\ &={\sigma}_3^{-1}{\sigma}_2^{-1}{\sigma}_3{\sigma}_2{\sigma}_1{\sigma}_2^{-1}{\sigma}_3{\sigma}_2{\sigma}_1{\sigma}_3{\sigma}_2{\sigma}_1\\ &={\sigma}_3^{-1}{\sigma}_2^{-1}{\sigma}_3{\sigma}_2{\sigma}_1{\sigma}_3{\sigma}_2{\sigma}_3^{-1}{\sigma}_1{\sigma}_3{\sigma}_2{\sigma}_1\\ &={\sigma}_3^{-1}{\sigma}_2^{-1}{\sigma}_3{\sigma}_2{\sigma}_1{\sigma}_3{\sigma}_2{\sigma}_1{\sigma}_2{\sigma}_1\\ &={\sigma}_3^{-1}{\sigma}_3{\sigma}_2{\sigma}_3^{-1}{\sigma}_1{\sigma}_3{\sigma}_2{\sigma}_1{\sigma}_2{\sigma}_1\\ &={\sigma}_2{\sigma}_1{\sigma}_2{\sigma}_1{\sigma}_2{\sigma}_1\\ &=Q_3^3\\ \end{align*} \qed \begin{prop}\label{prop-Q} The braid $T_k$ in ${\bf B}_n$ $(n\geq k)$ satisfies $T_kT_k=Q_{k}^{k}$. 
\end{prop} \vspace{\baselineskip}\noindent{\bf Proof. } The proof will be by induction on $k$. Assume the result for $k$. Notice that $T_{k+1}=T_k({\sigma}_k{\sigma}_{k-1}\cdots {\sigma}_1)$. But also by proposition \ref{other-torsion} $T_{k+1}=({\sigma}_{1}{\sigma}_{2}{\sigma}_{3}\cdots {\sigma}_{k})T_{k}$. Hence $$T_{k+1}T_{k+1}= T_k({\sigma}_k{\sigma}_{k-1}\cdots {\sigma}_1)({\sigma}_{1}{\sigma}_{2}{\sigma}_{3}\cdots {\sigma}_{k})T_{k}.$$ But it is easy to check that ${\sigma}_i$ $(i<k)$ commutes with $({\sigma}_k{\sigma}_{k-1}\cdots {\sigma}_1)({\sigma}_1{\sigma}_{2}{\sigma}_{3}\cdots {\sigma}_{k})$ and hence any braid in ${\bf B}_k$, and in particular $T_k$ commutes with $({\sigma}_k{\sigma}_{k-1}\cdots {\sigma}_1)({\sigma}_{1}{\sigma}_{2}{\sigma}_{3}\cdots {\sigma}_{k})$. Hence \begin{align*} T_{k+1}T_{k+1}&=({\sigma}_k{\sigma}_{k-1}\cdots {\sigma}_1)({\sigma}_{1}{\sigma}_{2}{\sigma}_{3}\cdots {\sigma}_{k})T_kT_k\\ &=({\sigma}_k{\sigma}_{k-1}\cdots {\sigma}_1)({\sigma}_{1}{\sigma}_{2}{\sigma}_{3}\cdots {\sigma}_{k})Q_{k}^k\tag{inductive hypothesis}\\ &=({\sigma}_k{\sigma}_{k-1}\cdots {\sigma}_1)({\sigma}_{1}{\sigma}_{2}{\sigma}_{3}\cdots {\sigma}_{k})({\sigma}_{k}^{-1}\cdots {\sigma}_{2}^{-1}{\sigma}_{1}^{-1})Q_{k+1}^k\tag{Proposition \ref{theorem4}}\\ &=({\sigma}_k{\sigma}_{k-1}\cdots {\sigma}_1)Q_{k+1}^k\\ &=Q_{k+1}^{k+1}.\\ \end{align*} \qed \begin{prop}\label{prop-P} The braid $T_k$ in ${\bf B}_n$ $(n\geq k)$ satisfies $T_kT_k=P_{k}^{k}$. \end{prop} \vspace{\baselineskip}\noindent{\bf Proof. } The proof follows from Proposition \ref{other-torsion} and the symmetry of the equations for the braid groups. \qed Finally the fact that \emph{in the blocked-braid group} four torsions equals the identity: \begin{thm} In $\mathbf{BB}_n$, $(ST_nR)^4=S1R$. \end{thm} \vspace{\baselineskip}\noindent{\bf Proof. } Notice that Proposition \ref{inverse-theorem} says that, in $\mathbf{BB}_n$, $SP_nR=SQ_n^{-1}R$. 
Hence in $\mathbf{BB}_n$ \begin{align*} (ST_nR)^4&=(ST_n^2R)(ST_n^2R)\\ &=(SQ_n^nR)(SP_n^nR)\tag{Propositions \ref{prop-Q},\ref{prop-P}}\\ &=(SQ_n^nR)(SQ_n^{-n}R)\tag{Proposition \ref{inverse-theorem}}\\ &=SQ_n^nQ_n^{-n}R\\ &=SR.\\ \end{align*} \qed \section{$\mathbf{BB}_n$ is infinite if $n>3$}\label{sec-BB4} \begin{thm} $\mathbf{BB}_n$ is infinite if $n>3$. \end{thm} \vspace{\baselineskip}\noindent{\bf Proof. } We will show that the elements $S{\sigma}_1^kR$ $(k=0,1,2,\cdots)$ are all distinct in $\mathbf{BB}_n$ $(n>3)$ by looking at representations of $\mathbf{BB}_n$ in $\mathbf{TRel}_{SL_2(\mathbb{Z})}$. As the relation $R:I\to SL_2(\mathbb{Z})^n$ we will take the conjugacy class of the $n$-tuple $(x,y,z,w,e,e,\cdots ,e)$ where $$x= \left( \begin{array}{cc} 1 & 1 \\ 0 & 1 \end{array} \right), y= \left( \begin{array}{cc} 1 & 0 \\ 1 & 1 \end{array} \right), z= \left( \begin{array}{cc} 1 & 0 \\ -1 & 1 \end{array} \right), w= \left( \begin{array}{cc} 1 & -1 \\ 0 & 1 \end{array} \right), e= \left( \begin{array}{cc} 1 & 0 \\ 0& 1 \end{array} \right). $$ Notice that $xyzwe^{n-4}=e\in Z(SL_2(\mathbb{Z}))$ and hence the relation is in $\mathbf{TRel}_{SL_2(\mathbb{Z})}$. We will now consider just the case $n=4$ because the argument extends trivially to the cases $n>4$. We will show that the conjugacy classes $S_k={\sigma}_1^kR$ $(k=0,1,2,\cdots)$ are pairwise disjoint and hence $S_k{\sigma}_1^lR=\emptyset$ except when $k=l$ in which case it is the one point set. The conjugacy classes $S_k$ $(k=0,1,2,\dots)$ are the conjugacy classes of the $4$-tuples \begin{align*} (x,y,z,w), (xyx^{-1},x,z,w), &(xyxy^{-1}x^{-1}, xyx^{-1},z,w),\\ &(xyxyx^{-1}y^{-1}x^{-1},xyxy^{-1}x^{-1},z,w),\cdots\\ \end{align*} If $(u_1,v_1,z,w)$, $(u_2,v_2,z,w)$ are two different elements in this list which generate the same conjugacy class, there must exist an element $g\in SL_2(\mathbb{Z})$ such that $g^{-1}u_1g=u_2$, $g^{-1}v_1g=v_2$, $g^{-1}zg=z$, $g^{-1}wg=w$. But $g^{-1}zg=z$, $g^{-1}wg=w$ imply that $g$ is either the identity matrix $e$ or $-e$. Then $g^{-1}u_1g=u_2$, $g^{-1}v_1g=v_2$ imply that $u_1=u_2$ and $v_1=v_2$. However, a straightforward calculation of the sequence of matrices $$x,xyx^{-1},xyxy^{-1}x^{-1},xyxyx^{-1}y^{-1}x^{-1},\cdots$$ shows that no repetition occurs: the maximum absolute value of the entries of the $i$th matrix in the sequence is strictly increasing as $i$ increases. \qed
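The matrix computation invoked in the last step is easy to reproduce. The sketch below (plain Python, our own verification, not part of the paper) encodes the matrices as nested tuples, checks that $xyzw=e$, generates the first components of ${\sigma}_1^k(x,y,\dots)$ under $(u,v)\mapsto(uvu^{-1},u)$, and confirms that the maximum absolute entry is strictly increasing (the first few maxima are $1,2,4,9,25,\dots$, growing roughly geometrically).

```python
def mmul(A, B):
    # 2x2 integer matrix product
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    return ((a * e + b * g, a * f + b * h), (c * e + d * g, c * f + d * h))

def minv(A):
    # inverse of a 2x2 integer matrix with determinant 1
    (a, b), (c, d) = A
    assert a * d - b * c == 1
    return ((d, -b), (-c, a))

x = ((1, 1), (0, 1))
y = ((1, 0), (1, 1))
z = ((1, 0), (-1, 1))
w = ((1, -1), (0, 1))
e = ((1, 0), (0, 1))

def conjugate_sequence(n):
    # first components of sigma_1^k (x, y, ...) for k = 0, ..., n-1
    u, v = x, y
    out = []
    for _ in range(n):
        out.append(u)
        u, v = mmul(mmul(u, v), minv(u)), u
    return out
```

Since the maxima strictly increase, the tuples never repeat and the classes $S_k$ are pairwise distinct.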
\section{Introduction} \label{sec:intro} The solar surface is covered by granules that constitute the tops of small convective cells. They appear when the hot plasma coming from the solar interior rises into the solar atmosphere and radiatively cools down. The study of photospheric flows is based on observational data for velocity fields derived from Fourier local correlation tracking (FLCT) applied to intensity maps, \textit{e.g.} \citet{Fisher2008}, and velocity fields obtained from numerical simulations based on magnetohydrodynamics (MHD) equations. Regions of strong negative divergence of the velocity field tend to behave as sinks and concentrate the magnetic field \citep{Balmaceda2010}, and the observational data also suggest that downdraft centers may display vortical motions \citep{Bonet2010}. The main theory concerning the creation of the observed intergranular vortices describes the vortical dynamics as originating at those downdraft centers: as the plasma diverging from the granule centers has an angular momentum in relation to the sink, the elements of fluid rotate as they approach the downdraft center. This process is also known as the ``bathtub effect'' and it is related to a free, i.e., unforced, vortex. The ``bathtub effect'' was suggested as a photospheric vortex creation mechanism by \citet{Nordlund85} based on simulations of convective motions. Not all downdraft centers present vortex dynamics, however. One necessary condition for their formation is the existence of vorticity in the sink region \citep{Simon_1997}. In the downdraft centers, the concentration of magnetic flux leads to the generation of vorticity by the magnetic tension term, which dominates the vorticity evolution in the solar atmosphere \citep{Shelyag2011}.
Other mechanisms have also been shown to lead to vortices in the solar atmosphere, e.g., based on radiative hydrodynamic simulations, \citet{Kitiashvili2012a} found that both horizontal and vertical vortex tubes at the intergranular lanes can be generated by Kelvin–Helmholtz instability of shearing flows. Vortical flows in the photosphere have been investigated using velocity fields derived from both observations \citep{Bonet2010, Attie08,Bonet2008,Balmaceda2010, Giagkiozis2017,Requerey2017, Tziotziou2018, Tziotziou2019, Shetye2019} and magnetohydrodynamics (MHD) simulations \citep[see e.g.][]{Kitiashvili2012, moll12, Wedemeyer2012, Wedemeyer2014, kato17}. The vortices observed in the solar atmosphere present a radius ranging from 0.1 to around 2 Mm \citep{Bonet2010,Silva_2018,Giagkiozis2017} and have an average lifetime of around 0.29 minutes \citep{Giagkiozis2017}, but in supergranular convection it is possible to find vortices that last for hours \citep{Requerey2017,Chian2019}. Swirling motions have also been detected in the chromosphere based on Ca II 8542 \AA \hspace{1mm} and H$\alpha$ line observations \citep{Wedemeyer09, Shetye2019}. Those observations indicate that chromospheric swirls last for 10 minutes, and their vortical motions extend to around 1 Mm from the center. These swirls also present different shapes and seem to be correlated to magnetic concentrations at downdraft centers and can be observed in different line emissions \citep{Shetye2019}. \cite{Wedemeyer2012} showed observational signatures of vortical motions at different height levels that are spatially correlated, which suggests that the solar photospheric vortices are most likely the lower part of solar atmospheric vortex tubes that extend up to the solar corona. Nevertheless, \cite{Shetye2019} could not determine whether the observed swirls correspond to motions in the photosphere or to a propagating Alfvén wave.
The magnetic field in the intergranular lanes also interacts with vortices, affecting their dynamics. Magnetohydrodynamical simulations show that the presence of the magnetic field intensifies the vortex tube effects in the chromosphere in regions of weak magnetic field \citep{Kitiashvili2012}. On the other hand, the vortical flow gives stability to the magnetic fluxtube \citep{Requerey2017} and drags the magnetic field. \cite{Wedemeyer2012} and \cite{Wedemeyer2014} suggest that, as the magnetic field is twisted, it drives the vortical motion of the plasma in the chromosphere, which is, in turn, observed as swirl signatures in different line emission observations. Based on MHD modeling, \cite{moll12} and \cite{shelyag2013} found that the magnetic field lines are not considerably twisted, whereas the simulation results of \cite{Wedemeyer2014} and \cite{Rappazzo2019} display a rotating magnetic field coexisting with a kinematic vortex. In this paper, we apply a state-of-the-art vortex detection method, Instantaneous Vorticity Deviation (IVD), to precisely define vortex tubes in the solar atmosphere. We investigate the dynamics across the vortical flows at different height levels and their impact on the magnetic field. The paper is organized as follows. First, we introduce the IVD technique and the construction of three-dimensional vortices in section \ref{sec:Methodology}. We then proceed in section \ref{sec:results} to describe the detected vortices and show radial profiles from selected vortices for velocity and magnetic field related variables. Section \ref{sec:discussions} deepens the analysis of the relationship between the magnetic field and the vortex dynamics. Lastly, conclusions are presented in section \ref{sec:conclusions}. \section{Methodology} \label{sec:Methodology} We analyze the data from the radiative MHD simulations of magnetoconvection in the solar photosphere and upper convection zone obtained with the MuRAM code \citep{voegler2005}.
The rectangular domain we use has $960 \times 960 \times 160$ grid cells, which cover a region of 24 Mm in the $x$- and $y$-directions and 1.6 Mm in the vertical $z$-direction. The model realistically simulates a solar plage region with a net vertical magnetic field of $200~\mathrm{G}$. The visible solar surface (Rosseland optical depth $\tau=1$) is located at $ H= 0.0~\mathrm{Mm}$, 600 km below the upper boundary. The upper boundary of the simulation domain is located in the temperature minimum. The simulation region size and resolution are chosen such that they cover the horizontal and vertical convective spatial scales in the solar photosphere. The simulation box is positioned in the solar atmosphere so that it covers the region where the radiation comes from and where the transition from magnetically dominated (atmospheric part of the domain) to fluid-dominated (interior part) dynamics occurs, both of which are specifically of interest for this study. A standard gravitationally stratified radiative resistive magnetohydrodynamic model is used in the simulations, which are self-consistent with only a small number of parameters, such as solar gravity acceleration, the average outward radiative flux, initial vertical uniform magnetic field strength, and solar photospheric chemical composition. Further information regarding how those terms were implemented and the values used can be found in \cite{voegler2005}. The system of equations solved is essentially nonadiabatic. The equation of state is used in a very general form with a tabulated functional dependence of pressure and temperature on density and internal energy per unit volume. The nonideal MHD terms are ohmic resistive. The average parameters of the modeled atmosphere were checked to make sure that it reached a quasi-stationary state. It was found that the total box mass and the net radiative flux oscillated around their required constant values.
As small scales are of interest for this study, the phase of these 5-minute oscillations is of no importance. Partial ionization in the solar interior and photosphere is taken into account through the nonideal equation of state, as explained by \cite{voegler2005}. In this paper, we focus on a fraction of the whole domain, with $240 \times 240$ grid points. In Fig. \ref{fig:domain} we display both the 2D view of the whole $xy$-plane from the original domain (left panel) and the 3D view of the partition used in our investigations. Our studies then concern a domain of size 6 Mm $\times$ 6 Mm $\times$ 1.6 Mm, which is large enough to cover multiple granules and their intergranular regions. \begin{figure}[ht!] \plotone{domains.png} \caption{Simulation domain at $t=0$. (a) 2D view: the $xy$-plane of the whole domain at $H=0.0~\mathrm{Mm}$ colored by the $z$-component of the velocity. The part of the domain investigated in this paper is delimited by a black square. (b) 3D view of the region within the black square in (a). The $xy$-plane at $H=0.0~\mathrm{Mm}$ ($z =1.0~\mathrm{Mm}$) is colored by the $z$-component of the velocity. The black lines are the magnetic field lines. \label{fig:domain}} \end{figure} We applied the IVD technique \citep{Haller2016} to find vortices in the upper part of our domain, $z\geqslant 1.0$ Mm, shown in Fig. \ref{fig:domain}(b). The IVD field is computed by the following expression \begin{equation} \label{Eq:3} \mbox{IVD}(\mathbf{x},t):= |\mathbf{\omega}(\mathbf{x},t) -\langle \mathbf{\omega}(t) \rangle|, \end{equation} where $\mathbf{x}$ is a position vector, $\mathbf{\omega} = \nabla \times \mathbf{u}$ is the vorticity, and $\langle \cdot \rangle$ denotes the instantaneous spatial average. \cite{Haller2016} establishes the boundary of a given vortex in 2D as the outermost convex closed contour of the IVD scalar field around the vortex center, which in turn is defined by a local maximum of the IVD field.
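As a concrete illustration of the IVD definition above, the following minimal Python sketch computes the field on a single 2D plane; the uniform grid, the finite-difference curl, and the solid-body test flow are illustrative assumptions, not the production pipeline used in this work.

```python
import numpy as np

def ivd_field(u, v, dx, dy):
    """IVD on a 2D plane: |omega_z(x) - <omega_z>|, with <.> the spatial mean."""
    # z-component of vorticity, dv/dx - du/dy, by central differences
    omega_z = np.gradient(v, dx, axis=1) - np.gradient(u, dy, axis=0)
    return np.abs(omega_z - omega_z.mean())

# Sanity check with solid-body rotation: omega_z is uniform (= 2 here),
# so the deviation from the spatial mean vanishes everywhere.
n = 64
x = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, x)
u, v = -Y, X
ivd = ivd_field(u, v, x[1] - x[0], x[1] - x[0])
print(ivd.max())  # ~0, up to round-off
```

A vortex center would then be sought as a local maximum of `ivd`, with the boundary taken as the outermost convex closed contour around it.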
Physically, this contour provides a locus of particles with the same intrinsic rotation rates \citep{Haller2016}. In other words, IVD defines the vortex boundary using an intuitive notion whereby the particles have a coherent rotation along an approximately elliptical curve. In 3D flows, the method can be applied on successive 2D planes, interpolating among the closed contours found in each plane to form a vortex tube. A significant advantage of employing IVD is that the only parameter that needs to be chosen is the maximum amount of deviation from convexity a curve may have and still describe a vortex. This parameter is called the convexity deficiency, $\epsilon$, and it is defined as: \begin{equation} \epsilon = \frac{A_{ch} - A_c}{A_c}, \end{equation} where $A_c$ is the area enclosed by the extracted contour, whereas $A_{ch}$ stands for the area enclosed by its convex hull. IVD is the instantaneous (Eulerian) version of the Lagrangian Averaged Vorticity Deviation (LAVD) method, which has been successfully employed in a number of hydrodynamic and plasma problems and shown to perform better than other available vortex detection methods (see, e.g., \citet{Hadjighasem2017, Silva_2018}). Nonetheless, IVD and LAVD have some limitations when the velocity field has convex regions with strong shear, leading to high vorticity even if no coherent swirling motion is present, as reported by \citet{Silva_2018}. This may also cause a difference between the position of a local maximum of IVD and a true vortex center. To avoid such problems, the $d$-parameter was proposed by \citet{Silva_2018} to filter out false vortex detections by IVD/LAVD. The $d$-parameter first detects vortex centers as points in the flow surrounded by fluid particles that undergo circular motions during a certain time interval. The circular motion is determined by checking the relative positions of displacement vectors obtained by integrating four particles surrounding each grid point.
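The convexity deficiency can be evaluated directly from a contour's vertices. The sketch below is a simplified illustration: it uses the shoelace formula for the contour area and SciPy's convex hull, and it assumes the non-negative form $\epsilon = (A_{ch}-A_c)/A_c$, since the hull area is never smaller than the area of the closed contour it envelops.

```python
import numpy as np
from scipy.spatial import ConvexHull

def convexity_deficiency(contour):
    """Convexity deficiency of a closed 2D contour given as an (N, 2)
    array of vertices in order (without repeating the first point)."""
    x, y = contour[:, 0], contour[:, 1]
    # Shoelace formula for the area enclosed by the contour
    a_c = 0.5 * np.abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    a_ch = ConvexHull(contour).volume  # in 2D, .volume is the hull area
    return (a_ch - a_c) / a_c

# Unit square with a triangular notch: contour area 0.75, hull area 1
contour = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.5, 0.5], [0.0, 1.0]])
eps = convexity_deficiency(contour)
print(round(eps, 3))  # 0.333
```

A candidate contour would be accepted as a vortex boundary only while `eps` stays below the chosen threshold.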
Once the vortex centers are found with the $d$-parameter, the IVD operator is employed to detect the vortex boundaries surrounding each vortex center. \subsection{Three-dimensional vortices} The construction of three-dimensional vortices performed in this work is based on the analysis of a series of two-dimensional IVD fields, which were computed for $xy$-planes within a range of $z$ above the solar surface. The choice of horizontal planes is based on previous works, e.g., \cite{Kitiashvili2012a} and \cite{Wedemeyer2012}, which show that vortical motions are mainly in the horizontal direction. As we are mostly interested in solar atmospheric vortices, we use the range from $H = 0$ to $H=0.5$ Mm, which corresponds to $z =1.0~\mathrm{Mm}$ to $z =1.5~\mathrm{Mm}$. The time evolution of the vortices was determined by computing the IVD field at all time frames within a time interval of 50 s, starting at $t=0$, which is labeled $t_0$ from now on. To avoid most of the above-mentioned problems due to the presence of shear in the flow, we first apply the $d$-criterion to determine possible vortex centers and also dismiss any false detections. As the $d$-criterion is based on the particle displacement after a time interval and IVD is an instantaneous method, we compute the $d$-criterion in the $xy$-plane within the minimal time interval of $\Delta t=2.5~\mathrm{s}$ between frames. Then, for each time frame, we pick the points that obey the $d$-criterion in different horizontal layers that are sufficiently close to determine a vortex core line. Thereby, we are able to compute the core lines of possible vertical vortices, as shown for $t=0$ in Fig. \ref{fig:corelines}. \begin{figure}[ht!] \plotone{corelines.png} \caption{3D simulation domain at $t= 0$ with the $xy$-plane at $H=0.0~\mathrm{Mm}$ ($z =1.0~\mathrm{Mm}$) colored by the $z$-component of the velocity. The core lines of vertical vortices as computed by the $d$-criterion are shown in purple.
} \label{fig:corelines} \end{figure} After that, we use each core line to compute the vortex boundary in all $xy$-planes from $H=0.0$ to $H=0.5$ Mm. The vortex boundary around the points given by the core line was defined as the outermost convex contour of the IVD field in each $xy$-plane. In order to obtain this contour, we apply a convexity deficiency of $\epsilon=0.03$. The 3D vortex is then obtained as the group of all contours of a given vortex core line, as illustrated in Fig. \ref{fig:3Dvortex}. \begin{figure}[ht!] \plotone{vortex_construction.png} \caption{3D vortex construction from the IVD field. The left panel shows the IVD field for the $xy$-plane at $H=0.0$ Mm (or $z =1.0~\mathrm{Mm}$) with the dark blue line denoting the vortex boundary. The bottom-right panel is a closer view of the vortex boundary, and the upper-right panel shows the complete vortex from IVD contours in several planes and its core line in orange.} \label{fig:3Dvortex} \end{figure} One can see that even though the IVD field was computed separately for each height level, the final vortex boundary varies smoothly and shows spatial coherence. \section{Results} \label{sec:results} We detect a total of 17 vortices that persist for the whole time interval considered in this analysis, 50 s. Eleven vortices rotate counterclockwise, and six rotate clockwise. The vortices present different shapes and sizes, as illustrated in Fig.\ \ref{fig:allvortices1}(a), which shows the 17 detected vortex boundaries colored in orange for $t=25~\mathrm{s}$ and the $xy$-plane at $H=0.0~\mathrm{Mm}$ colored by the intensity of the magnetic field. Another common feature in the geometry of the vortices is the widening of the vortex boundaries in the upper part of the domain. The vortex radius is, on average, around 40 km at $H=0$ and up to approximately 80 km at $H =0.5$ Mm.
Therefore, the vortex radius at $H= 0.0$ Mm is one order of magnitude smaller than that of the vortices obtained using the two-dimensional velocity field derived from observational data \citep{Bonet2008,Silva_2018,Giagkiozis2017}. In Fig.\ \ref{fig:allvortices}(a) we see the instantaneous spiraling velocity streamlines, which were traced from points within the vortex boundary. The vortices are located between regions of strong current density, as indicated in Fig.\ \ref{fig:allvortices}(a), and they tend to appear in low-pressure regions, as displayed in Fig.\ \ref{fig:allvortices}(b). The magnetic field lines traced from points encompassed by the vortex boundaries are shown in red in Fig.\ \ref{fig:allvortices}(b); they are mostly vertical and organized in tube structures. \begin{figure*} \gridline{\fig{Bsurface2.png}{0.8\textwidth}{} } \caption{3D simulation domain at $t=25$ s. Label numbers identify the vortices detected by IVD. (a) $xy$-plane at $z=1.0$ Mm or $H=0.0$ Mm colored by the intensity of the magnetic field; the vortex boundaries computed by IVD for $t=25$ s are shown in orange.\label{fig:allvortices1}} \end{figure*} \begin{figure*} \gridline{ \fig{streamV.png}{0.85\textwidth}{(a)} } \gridline{\fig{streamB.png}{0.85\textwidth}{(b)} } \caption{3D simulation domain at $t=25$ s. Label numbers identify the vortices detected by IVD. (a) $xy$-plane at $H =0.0$ Mm (or $z =1.0~\mathrm{Mm}$) colored by the magnitude of the current density; the instantaneous streamlines are for the velocity field, traced from points within each vortex boundary. (b) The magnetic field lines are shown in red, traced from points within each vortex boundary\label{fig:allvortices}, and the $xy$-plane at $H =0.0$ Mm ($z =1.0~\mathrm{Mm}$) is colored by the pressure.} \end{figure*} For our analysis, we select three vortices, \#7, \#8, and \#12, which are located in different parts of the domain.
This choice was based on the fact that those vortices' boundaries were detected both in the photosphere and in the upper part of the domain. In addition, they give a good representation of the dynamics found for the detected solar vortices. In Fig.~\ref{fig:selected}, we show field lines in red for the magnetic field and in dark khaki for the velocity field for those selected vortices at the initial time instant, $t=t_0$. They were traced from random points within the vortex boundary, and the colors in the $xy$-planes in the figure correspond to the local temperatures at the constant heights of $H=0$ and $H=0.5~\mathrm{Mm}$. The vortices seem to encompass regions of different temperature ranges in the photosphere and in the chromosphere. We see that the vortical dynamics imposed on the plasma also seem to influence the temperature distribution, dragging and mixing the hot and cold plasma. \begin{figure}[ht!] \includegraphics[width=\textwidth]{selected.png} \caption{Magnetic and velocity field lines traced from random points within the vortex boundary are shown in red and dark khaki, respectively, for vortices \#7, \#8 and \#12 at $t=t_0$. The $xy$-planes are placed at $H= 0$ Mm (or $z =1.0~\mathrm{Mm}$) and $H=0.5$ Mm (or $z =1.5~\mathrm{Mm}$) and are colored by the plasma temperature. \label{fig:selected}} \end{figure} \subsection{Radial profiles} The plasma dynamics across the vortex flow can be studied using the radial profile, which describes the changes in the plasma as one moves away from the center of the vortex toward its boundary. Each vortex boundary in a given $xy$-plane is formed by a group of vertices that are not necessarily at the same distance from the vortex center, as illustrated in Fig.\ \ref{fig:vertices12} for vortex \#12 at $H = 0.5$ Mm. For each vertex, we set a grid with 20 equally spaced points along the line segment from the identified vortex center to the vertex (a vortex ``radius'').
The distance from those grid points to the vortex center, $r$, is normalized by the distance from the given boundary vertex to the vortex center, $R$. The physical quantities at the grid points are obtained by linear interpolation. At each height level, we obtain the general tendency of the vortex radial profile by averaging the radial profiles, obtained for each vertex, along the angular direction. \begin{figure*} \gridline{\fig{vertices.png}{0.3\textwidth}{} } \caption{Contour of vortex \#12 at $H=0.5$ Mm for $t=t_0$, given by a red line. The vertices of the vortex are indicated by red circles, and the vortex center is given by the green circle. The black dashed lines represent the radius of the vortex for each vertex. \label{fig:vertices12}} \end{figure*} All the figures of the radial profile plots show the variable's average distribution along the vortex radius from the center, $r=0$, to the vertex, $r=R$, for different times, $t_0= 0$ (red lines), $t_1=25$ s (green lines), and $t_2=50$ s (blue lines), and different heights, (a)-(c) $H = 0.1$ Mm, (d)-(f) $H = 0.3$ Mm, (g)-(i) $H = 0.5$ Mm. The left $y$-axis displays the values for the averaged radial profile given by the solid lines, the right $y$-axis is for the averaged radial profile shown by the dashed lines, and all the values are in cgs units. The radial profiles for vortex \#7 are in the first column of the figures, the panels in the middle column are for vortex \#8, and the last column depicts the radial profiles of vortex \#12. For nonmagnetized flows, the main aspects of the vortical flow are drawn from the tangential velocity distribution along the radii. All the detected vortices present the tendency for the tangential velocity profile illustrated by the solid lines in Fig.\ \ref{fig:velsradial} for the selected vortices, \textit{i.e.}, the tangential velocity decreases by around 90\% from the vortex boundary inward, reaching a minimum close to the center.
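The radial-profile construction described above can be sketched as follows; this minimal Python illustration assumes a 2D scalar field sampled in grid units, bilinear interpolation along each center-to-vertex segment, and a plain mean for the angular average (the arrays are hypothetical, not the simulation output).

```python
import numpy as np
from scipy.ndimage import map_coordinates

def radial_profile(field, center, vertices, n_r=20):
    """Angular average of the radial profiles of a 2D `field` (indexed
    [y, x], grid units): each center-to-vertex segment is sampled at
    n_r points, with the radius r normalized by that vertex's distance R."""
    s = np.linspace(0.0, 1.0, n_r)  # r/R along each segment
    profiles = []
    for vx, vy in vertices:
        xs = center[0] + s * (vx - center[0])
        ys = center[1] + s * (vy - center[1])
        # bilinear interpolation of the field at the sample points
        profiles.append(map_coordinates(field, [ys, xs], order=1))
    return s, np.mean(profiles, axis=0)

# Radially symmetric test field: the profile must grow linearly with r/R
y, x = np.mgrid[0:101, 0:101]
field = np.hypot(x - 50, y - 50)
verts = [(60.0, 50.0), (50.0, 70.0), (40.0, 50.0), (50.0, 30.0)]  # R = 10 or 20
s, prof = radial_profile(field, (50.0, 50.0), verts)
print(np.allclose(prof, 15.0 * s))  # True: the mean R is 15 grid units
```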
This behavior is in agreement with observational data \citep{Simon_1997} and also with MHD simulations \citep{Onishchenko2018}. The differences found among the vortices concern mainly the sign of the averaged tangential velocity as well as its time evolution. The tangential velocities of vortices \#7 and \#8 tend to decrease by around 20\% at $H=0.5$ Mm, whereas the variations in the lower part are less than 10\%. The averaged intensity of the tangential velocity for vortex \#12 increases by more than 30\% in both the upper and lower parts of the vortex tube. The negative sign of the averaged tangential velocity of vortex \#12 indicates that it rotates clockwise, as opposed to the rotation direction of vortices \#7 and \#8. Since most of the vortical flow is along the $xy$-plane, we compute the radial distribution of the $z$-component of the vorticity vector, $\omega_z$, which is shown by the dashed lines and the right $y$-axis in Fig.\ \ref{fig:velsradial}. The maximum $\omega_z$ is around the vortex center, and it decreases in the radial direction, in agreement with 2D vortices from observational data \citep{Simon_1997}. The difference found between the vorticity at the center and at the boundary tends to be greater at lower heights. Among the three vortices analyzed, \#12 displays the least variation of $\omega_z$ from the center to its boundary. In general, $\omega_z$ tends to vary more over time at the center than on the vortex boundary, and it decreases in time, except for vortex \#12. The averaged plasma $\beta$ within the vortex is also indicated in orange for each height level. We see that, in general, the vortices present low plasma $\beta$. The lowest value is found for vortex \#8, which has a plasma $\beta$ that is 10 times lower than that of the other vortices, \#7 and \#12. \begin{figure*}[htp!]
\gridline{\fig{VthetW7atZ110.pdf}{0.3\textwidth}{(a)V7} \fig{VthetW8atZ110.pdf}{0.3\textwidth}{(b)V8} \fig{VthetW12atZ110.pdf}{0.3\textwidth}{(c)V12} } \gridline{\fig{VthetW7atZ130.pdf}{0.3\textwidth}{(d)V7} \fig{VthetW8atZ130.pdf}{0.3\textwidth}{(e)V8} \fig{VthetW12atZ130.pdf}{0.3\textwidth}{(f)V12} } \gridline{\fig{VthetW7atZ150.pdf}{0.3\textwidth}{(g)V7} \fig{VthetW8atZ150.pdf}{0.3\textwidth}{(h)V8} \fig{VthetW12atZ150.pdf}{0.3\textwidth}{(i)V12} } \caption{The average tangential velocity (left $y$-axis, solid lines) along the vortex radius from the center, $r=0$, to the boundary, $r=R$, and the $z$-component of the vorticity vector (right $y$-axis, dashed lines). The red lines are for the initial time $t_0= 0$, the green lines are for $t_1=25$ s, and the blue lines are for $t_2=50$ s. The radial profiles are shown for vortices \#7 (a)(d)(g), \#8 (b)(e)(h) and \#12 (c)(f)(i) at different heights: $H = 0.1$ Mm (a)(b)(c), $H = 0.3$ Mm (d)(e)(f), $H = 0.5$ Mm (g)(h)(i). The averaged value of plasma $\beta$ at the analyzed times for the region within the vortex is displayed in orange. \label{fig:velsradial}} \end{figure*} Figure \ref{fig:vzradial} shows the averaged radial profile of the $z$-component of the velocity field. All vortices encompass downflows at $H= 0.1$ Mm. For vortices \#7 and \#12, one can see a tendency of interchange between up- and downflows at other height levels during their lifetime. As for vortex \#8, there is only downflow, which tends to decrease as a function of time. The downflows tend to be greater at the center, whereas the upflows are stronger around the vortex boundary. \begin{figure*}[htp!]
\gridline{\fig{Uz7atZ110.pdf}{0.3\textwidth}{(a)V7} \fig{Uz8atZ110.pdf}{0.3\textwidth}{(b)V8} \fig{Uz12atZ110.pdf}{0.3\textwidth}{(c)V12} } \gridline{\fig{Uz7atZ130.pdf}{0.3\textwidth}{(d)V7} \fig{Uz8atZ130.pdf}{0.3\textwidth}{(e)V8} \fig{Uz12atZ130.pdf}{0.3\textwidth}{(f)V12} } \gridline{\fig{Uz7atZ150.pdf}{0.3\textwidth}{(g)V7} \fig{Uz8atZ150.pdf}{0.3\textwidth}{(h)V8} \fig{Uz12atZ150.pdf}{0.3\textwidth}{(i)V12} } \caption{The average $z$-component of the velocity field along the vortex radius from the center, $r=0$, to the boundary, $r=R$. The red lines are for the initial time $t_0= 0$, the green lines are for $t_1=25$ s, and the blue lines are for $t_2=50$ s. The radial profiles are shown for vortices \#7 (a)(d)(g), \#8 (b)(e)(h) and \#12 (c)(f)(i) at different heights: $H = 0.1$ Mm (a)(b)(c), $H = 0.3$ Mm (d)(e)(f), $H = 0.5$ Mm (g)(h)(i). \label{fig:vzradial}} \end{figure*} In order to obtain a general model for the tangential velocity profile within the vortex region, we have tried four different fitting polynomials: \begin{itemize} \item Linear approximation: $v_{\theta} = ar $. \item Quadratic approximation: $v_{\theta} = ar^{2} + br +c $. \item Cubic approximation: $v_{\theta} = ar^{3} +br^{2} + cr +d $. \item Vortex model approximation: $v_{\theta} = -ar^{3} +br^{5} + cr^{7} $. \end{itemize} The vortex model approximation is based on the work of \cite{Rodriguez2012}, which established a series expansion to describe the common aspects of existing vortex models in nonmagnetized fluids. Those models are generally based on approximate solutions to the Navier-Stokes equations, which are obtained using different assumptions regarding boundary conditions and viscosity effects. Some of the classical vortex models, like the Lamb--Oseen and Burgers vortices (e.g., \citet{acheson1990}), could not provide a suitable fit to our data and were therefore left out of this study, replaced by the general model proposed by \cite{Rodriguez2012}.
To evaluate the best approximation, we compute the percentage average relative error, $E(i)$, of each approximation at each point \textit{i} along the radial direction up to the vortex boundary: \begin{equation} E(i) = 100\times\frac{\lambda(i) -\lambda_0(i)}{\lambda_0(i)}, \end{equation} where $\lambda$ is the value obtained by the polynomial approximation and $\lambda_0$ is the actual value obtained for the tangential velocity. We then average $E(i)$ at each height level to obtain $E$ for each vortex. In Table \ref{tab:fit_vtheta}, we display the value of $E$ at different height levels, averaged over all the detected vortices at different times. We see that the cubic approximation gives the best description of the tangential velocity distribution as a function of the vortex radius. Figure \ref{fig:vthetacubic} shows the cubic fit for vortex \#12 at $H=0.1$ Mm, 0.3 Mm, and 0.5 Mm for $t=50$ s. The tangential velocity is normalized by its maximum intensity at the given height level. \begin{table}[htp!] \centering \begin{tabular}{|c|c|c|c|c|} \hline Height (Mm) & Linear & Quadratic & Cubic & Vortex Model \\ \hline 0 & 12.47 & 2.68 & 1.13 & 4.78 \\ 0.1 & 8.93 & 1.80 & 1.21 & 2.16\\ 0.2 & 8.52 & 1.19 & 0.50 & 1.13 \\ 0.3 & 8.05 & 0.97 & 0.29 & 0.99 \\ 0.4 & 6.89 & 0.95 & 0.34 & 0.91 \\ 0.5 & 6.43 & 1.14 & 0.39 & 1.29 \\ \hline \end{tabular} \caption{Average Relative Error of Polynomial Fits for the Radial Profiles of the Tangential Velocities of All Detected Vortices.} \label{tab:fit_vtheta} \end{table} The solar vortex tubes present similar curves concerning the description of the rotating flow. A vortex with solid-body rotation has a tangential velocity given by: \begin{equation} V_\theta= \Omega r, \label{eq:solid} \end{equation} where $\Omega$ is the uniform angular velocity and $r$ is the vortex radius. Therefore, the tangential velocity would have a linear dependence on $r$ in rigid-body rotation.
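The fit comparison and the error metric above can be sketched in a few lines of Python. This is a hedged illustration only: the tangential velocity profile is synthetic, absolute values are taken so that signed errors do not cancel in the average, and `numpy.polyfit` is used, which for the linear case includes an intercept (unlike the pure $v_{\theta} = ar$ form above).

```python
import numpy as np

def avg_relative_error(fit, data):
    """Percentage average relative error E between a fit and the data."""
    return np.mean(100.0 * np.abs(fit - data) / np.abs(data))

# Synthetic tangential-velocity profile on the normalized radius r/R
# (illustrative only, not the simulation data)
r = np.linspace(0.05, 1.0, 20)
v_theta = 0.2 * r + 0.8 * r**3

lin = np.polyval(np.polyfit(r, v_theta, 1), r)  # linear fit
cub = np.polyval(np.polyfit(r, v_theta, 3), r)  # cubic fit
print(avg_relative_error(lin, v_theta) > avg_relative_error(cub, v_theta))  # True
```

On a profile with a genuine cubic component, the cubic fit's average error falls orders of magnitude below the linear fit's, mirroring the ordering of the columns in the table.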
According to Table \ref{tab:fit_vtheta}, the solar vortices tend to deviate from rigid-body rotation, as $V_{\theta}$ has a better fit with a cubic dependence. We see from Table \ref{tab:fit_vtheta} that closer to the photosphere, the deviation from solid-body rotation is more than 10\%, whereas it decreases in the upper parts of the vortex. The small relative errors found for the cubic approximation are a clear indication that solar vortices do not present rigid-body rotation. \begin{figure*}[htp!] \gridline{\fig{cubic_vthetafit_12for_t=126300_z110.png}{0.3\textwidth}{(a)} \fig{cubic_vthetafit_12for_t=126300_z130.png}{0.3\textwidth}{(b)} \fig{cubic_vthetafit_12for_t=126300_z150.png}{0.3\textwidth}{(c)} } \caption{Radial profile of the tangential velocity of vortex \#12 normalized by its maximum value (red dots). The blue curve depicts the cubic function approximation for the profile. \label{fig:vthetacubic}} \end{figure*} We now turn to the properties of the magnetic field inside the vortices. Figure \ref{fig:BJradial} shows the averaged radial profile of the magnetic field intensity (solid lines, left axis) and the current density intensity (dashed lines, right axis) for vortices \#7, \#8, and \#12. The magnetic field at the center tends to be between 1\% and 12\% higher than at the boundary. This difference tends to increase over time, and it is also greater in the parts of the vortices closer to the photosphere. The exception is vortex \#8, which has stronger magnetic field intensities at its border at $H=0.1$ Mm, where it also loses intensity compared to initial times. Another interesting feature of vortex \#8 is that it encompasses a magnetic field from $\sim$30\% up to $\sim$45\% larger than the other selected vortices at any given height level. For most of the detected vortices, we found that the magnetic field intensity tends to increase by around 1\%--15\% at the vortex center.
The current density intensity is within the same range at each height level for all those three vortices, even though there are considerable differences in the magnetic field intensities inside each vortex. Initially, at time $t_0$, the current density is higher at the center in the upper part of the vortex, whereas, at $H=0.1$ Mm, the boundary presents greater current density values. Over time, the vortex boundary tends to hold the highest current density within the vortex. Except for vortex \#12, the current density intensity tends to decrease over time. \begin{figure*} \gridline{\fig{BfieldJ7atZ110.pdf}{0.3\textwidth}{(a)V7} \fig{BfieldJ8atZ110.pdf}{0.3\textwidth}{(b)V8} \fig{BfieldJ12atZ110.pdf}{0.3\textwidth}{(c)V12} } \gridline{\fig{BfieldJ7atZ130.pdf}{0.3\textwidth}{(d)V7} \fig{BfieldJ8atZ130.pdf}{0.3\textwidth}{(e)V8} \fig{BfieldJ12atZ130.pdf}{0.3\textwidth}{(f)V12} } \gridline{\fig{BfieldJ7atZ150.pdf}{0.3\textwidth}{(g)V7} \fig{BfieldJ8atZ150.pdf}{0.3\textwidth}{(h)V8} \fig{BfieldJ12atZ150.pdf}{0.3\textwidth}{(i)V12} } \caption{The average magnetic field intensity (left $y$-axis, solid lines) and current density intensity (right $y$-axis, dashed lines) along the vortex radius from the center, $r=0$, to the boundary, $r=R$. The red lines are for the initial time $t_0= 0$, the green lines are for $t_1=25$ s, and the blue lines are for $t_2=50$ s. The radial profiles are shown for vortices \#7 (a)(d)(g), \#8 (b)(e)(h) and \#12 (c)(f)(i) at different heights: $H = 0.1$ Mm (a)(b)(c), $H = 0.3$ Mm (d)(e)(f), $H = 0.5$ Mm (g)(h)(i). \label{fig:BJradial} } \end{figure*} We also apply the first three fitting polynomials mentioned above to fit the magnetic field intensity radial profiles of the vortices. The average relative errors of each function for different height levels are shown in Table \ref{tab:Bfit}.
Again, the best fit is given by the cubic approximation, which is shown for vortex \#7 at different height levels for $t = 50$ s in Fig.\ \ref{fig:Bfit}. \begin{table}[htp!] \centering \begin{tabular}{c|c|c|c} \hline Height (Mm) & Linear & Quadratic & Cubic \\ \hline 0 & 0.65 & 0.062 & 0.035 \\ 0.1 & 0.56 & 0.11 & 0.037 \\ 0.2 & 0.36 & 0.072 & 0.028 \\ 0.3 & 0.34 & 0.071 & 0.024 \\ 0.4 & 0.37 & 0.055 & 0.019 \\ 0.5 & 0.33 & 0.056 & 0.020 \end{tabular} \caption{Average Relative Error of Polynomial Fits for the Magnetic Field Radial Profiles of All Detected Vortices.} \label{tab:Bfit} \end{table} \begin{figure*} \gridline{\fig{cubic_Bfit_7for_t=126300_z110.png}{0.3\textwidth}{(a)} \fig{cubic_Bfit_7for_t=126300_z130.png}{0.3\textwidth}{(b)} \fig{cubic_Bfit_7for_t=126300_z150.png}{0.3\textwidth}{(c)} } \caption{Magnetic field intensity radial profile of vortex \#7 for $t= 50$ s normalized by its maximum value (red dots). The blue curve depicts the cubic function approximation for the profile. \label{fig:Bfit}} \end{figure*} \section{Discussions} \label{sec:discussions} The magnetic field concentration by solar vortices seems to saturate at a given time, as the vortex presenting the highest magnetic field intensity, \#8, displays negligible increments over time. The highest growth of magnetic field concentration is found in vortex \#12, which holds the lowest magnetic field intensity. The vortices with higher magnetic field concentration also display considerably higher vorticity values, indicating that the magnetic field makes an important contribution to the vorticity evolution. For instance, vortex \#12 presents the most significant vorticity increase over time, and vortex \#8 displays the highest $\omega_z$ among the three analyzed vortices. The importance of the magnetic field for vortex dynamics is also suggested by the differences found between the tangential velocity radial profiles of solar atmospheric vortices and vortex models of nonmagnetized fluids.
As the pressure gradient, $\nabla P$, is an important force in the dynamics of nonmagnetized vortex flows, we compare the radial force balance between $\nabla P$ and the Lorentz force, $L$, in the horizontal plane. Figure \ref{fig:Fgradpradial} shows the horizontal intensities of both forces with the same axis ranges in order to facilitate comparisons. Vortices \#7 and \#12 tend to have their dynamics alternately ruled by the pressure gradient and by the Lorentz force. The presence of an intense magnetic field in vortex \#8 leads to the Lorentz force dominating the dynamics in the horizontal plane over the pressure gradient force, which is confirmed by the low plasma $\beta$ found for \#8. \begin{figure*} \gridline{\fig{balance7atZ110.pdf}{0.3\textwidth}{(a)V7} \fig{balance8atZ110.pdf}{0.3\textwidth}{(b)V8} \fig{balance12atZ110.pdf}{0.3\textwidth}{(c)V12} } \gridline{\fig{balance7atZ130.pdf}{0.3\textwidth}{(d)V7} \fig{balance8atZ130.pdf}{0.3\textwidth}{(e)V8} \fig{balance12atZ130.pdf}{0.3\textwidth}{(f)V12} } \gridline{\fig{balance7atZ150.pdf}{0.3\textwidth}{(g)V7} \fig{balance8atZ150.pdf}{0.3\textwidth}{(h)V8} \fig{balance12atZ150.pdf}{0.3\textwidth}{(i)V12} } \caption{The average intensity of the horizontal component of the Lorentz force (left $y$-axis, solid lines) and the average intensity of the horizontal component of the pressure gradient (right $y$-axis, dashed lines) along the vortex radius from the center, $r=0$, to the boundary, $r=R$. The red lines are for the initial time $t_0= 0$, the green lines are for $t_1=25$ s, and the blue lines are for $t_2=50$ s. The radial profiles are shown for vortices \#7 (a)(d)(g), \#8 (b)(e)(h) and \#12 (c)(f)(i) at different heights: $H = 0.1$ Mm (a)(b)(c), $H = 0.3$ Mm (d)(e)(f), $H = 0.5$ Mm (g)(h)(i).
\label{fig:Fgradpradial}} \end{figure*} In order to investigate the dynamics imposed by vortical motions on the magnetic field lines, we select a set of points at the boundary of vortices \#7, \#8, and \#12 at $H = 0$ and advect them for $\Delta t =50$ s. Figure \ref{fig:Blinesadv} shows the $xy$-plane at $H = 0$ colored by pressure and (a) the velocity field streamlines at $t=0$, (b) the magnetic field streamlines for the same points at $t=0$, and (c) the magnetic field lines at $t=50$ s for the advected points. All the vortices shown in Figure \ref{fig:Blinesadv} seem to drag the magnetic field rooted at $H= 0$, leading to some torsion of those lines. Vortex \#8 has the lowest plasma $\beta$ among the analyzed vortices, and it is also the one with the least torsion. The vortex with a higher plasma $\beta$, vortex \#12, was the only one in the whole analyzed domain that was able to create and sustain a twisted magnetic flux tube. The differences between vortex \#12 and the other detected vortices are its higher tangential velocity and plasma $\beta$ values. Among the 17 detected vortices, some had an even higher plasma $\beta$ than \#12, but their $V_\theta$ values were between 50\% and 80\% of the values found for \#12 at different height levels. \begin{figure*}[htp!]
\gridline{\fig{v7_V_125300_adv.jpeg}{0.25\textwidth}{(a)Velocity field lines for V7 at $t=t_0$} \fig{v7_B_125300_adv.jpeg}{0.25\textwidth}{(b)Magnetic field lines for V7 at $t=t_0$} \fig{v7_B_126300_adv.jpeg}{0.25\textwidth}{(c)Magnetic field lines for V7 at $t=t_f$} } \gridline{\fig{v8_V_125300_adv.jpeg}{0.25\textwidth}{(d)Velocity field lines for V8 at $t=t_0$} \fig{v8_B_125300_adv.jpeg}{0.25\textwidth}{(e)Magnetic field lines for V8 at $t=t_0$} \fig{v8_B_126300_adv.jpeg}{0.25\textwidth}{(f)Magnetic field lines for V8 at $t=t_f$} } \gridline{ \fig{v12_V_125300_adv.jpeg}{0.25\textwidth}{(g)Velocity field lines for V12 at $t=t_0$} \fig{v12_B_125300_adv.jpeg}{0.25\textwidth}{(h)Magnetic field lines for V12 at $t=t_0$} \fig{v12_B_126300_adv.jpeg}{0.25\textwidth}{(i)Magnetic field lines for V12 at $t=t_f$} } \caption{Close-up view of the vortex region for vortices \#7 (a,b,c), \#8 (d,e,f) and \#12 (g,h,i), where the $xy$-plane is colored by pressure. The panels on the left (middle) display the velocity (magnetic) streamlines traced from the vortex boundary detected by IVD at $t_0=0$ for $H = 0.0$ Mm. The panels on the right depict the magnetic field lines from points originally at the vortex boundary detected by IVD at $t_0$ for $H = 0.0$ Mm and advected in time to $t=50$ s. \label{fig:Blinesadv}} \end{figure*} \section{Conclusions} \label{sec:conclusions} The 17 vertical vortex tubes detected in this work were found within intergranular regions, in areas of high magnetic flux concentration. The vortex tubes in the lower solar atmosphere were precisely computed, allowing the study of the plasma dynamics across the vortical flow. The solar vortex tubes present different shapes as a function of time, being deformed by the forces acting on the flow. The horizontal radii of the vortices are, on average, 40 km at the photosphere and around 80 km at the upper part of the simulation domain.
As the domain only reaches 600 km above the surface, the upper parts of the vortices are located in the lower chromosphere and do not correspond to the chromospheric swirls observed in line emission \citep{Wedemeyer2012, Leenaarts2013, Shetye2019}. Even so, our results indicate that photospheric and chromospheric vortices are part of the same 3D vortex tube. For the detected vortices, the part lying in the chromosphere tends to cover an area almost twice as large as the part of the vortex at $H= 0.0$ Mm. Another relevant aspect concerning the linking of parts of the vortex at different height levels is the similarity of the radial profiles throughout the solar atmosphere, which confirms that chromospheric and photospheric vortices are not only part of the same vortex tube but are also subject to similar dynamics. More specifically, the tangential velocity profiles show that the plasma rotates in the same direction at all height levels, with the chromospheric part of the vortex rotating up to twice as fast as the photospheric part. At all height levels, once the plasma is dragged into the vortex tube, its tangential velocity decreases, indicating eddy viscosity effects. The plasma also carries vorticity, which is, in turn, concentrated in the low-pressure vortex regions, as confirmed by the vorticity profile and also matched by observations of mesogranular flows \citep{Simon_1997}. The in- and outflow of vorticity implies that the vortex is not a conservative system and, therefore, the assumption of conservation of angular momentum as a vortex generation mechanism is misleading.
Also, both the tangential velocity and vorticity profiles indicate that the ``bathtub effect'' mechanism is likely not responsible for the observed vortices, as no part of the vortex presents the expected ``bathtub'' behavior; \textit{i.e.}, a tangential velocity that initially increases from the boundary toward the center of the vortex and then decays closer to the center. For the solar vortices, the tangential velocity only decreases from the boundary to the center, indicating that another mechanism is responsible for vortex creation. Within the higher parts of the vortex tubes, there are intermittent upflows that were described by the investigations of \cite{Kitiashvili2012,Kitiashvili13}, who used MHD simulations to describe the turbulent convection of quiet-Sun regions. Those plasma upflow jets were observed in the upper part of the vortex tube; they are stronger at the vortex boundary and can become downflows at the center of the vortex, as suggested by \cite{Kitiashvili13}. The solar vortex tubes also concentrate the magnetic field with a cubic dependence on the radius, leading to the formation of magnetic flux tubes above the solar surface. The vortices reach a maximum of around 1300 G at the solar surface and up to around 600 G at their upper levels. The magnetic concentrations found for the detected vortices are similar to the ones detected in other MHD simulations \citep{Kitiashvili2012, Wedemeyer2014}. The magnetic field was found to play an essential role in vortex dynamics. The main forces acting on the vortex, the pressure gradient and the Lorentz force, have similar intensities at all height levels, which indicates that magnetic effects are as important as hydrodynamic terms for the vortex evolution.
The importance of the Lorentz force in vortex dynamics was previously hinted at by \cite{Kitiashvili13} for the highest parts of a simulated magnetized solar atmosphere, but our studies suggest that this actually applies to the whole vortex tube. Also, the magnetic field contributions to vorticity seem to be an essential aspect of vorticity evolution, which confirms the findings of \cite{Shelyag2011}. Another corroboration of the importance of the magnetic field in solar vortex evolution lies in the fact that the tangential velocity profiles in the solar atmosphere are better fitted by a cubic approximation than by a general model for vortices in nonmagnetized fluids. In turn, the magnetic field is also impacted by the vortices' dynamics, leading to torsion and bending of the magnetic field. For the 17 analyzed solar vortices, only \#12 had a magnetic vortex, as defined by \cite{Rempel2019}, cospatial with the kinematic solar vortex tube. Those results are in accordance with the findings of \cite{moll12}, who show that the magnetic field lines tend to expand with height and do not present significant twisting. Our findings also indicate that the generation of twisted magnetic flux tubes by vortical motions in the photosphere can occur only when vortices with sufficiently high tangential speeds exist at various height levels to overcome the magnetic tension and twist the field lines. Our study hints that most of the detected solar vortices will not have cospatial magnetic vortices in the atmosphere, but instead slightly bent magnetic flux tubes. As the rotation of the magnetic field by kinematic vortices is believed to generate the detected chromospheric swirls in line emission observations \citep{Wedemeyer2012,Wedemeyer2014}, our results indicate that the number of photospheric vortices is likely larger than the number of observed chromospheric swirls.
\acknowledgments SSAS, VF, GV and ER are grateful to The Royal Society, International Exchanges Scheme, collaboration with Brazil (IES$\backslash$R1$\backslash$191114). VF and GV are grateful to Science and Technology Facilities Council (STFC) grant ST/M000826/1 and to The Royal Society, International Exchanges Scheme, collaboration with Chile (IE170301). VF would like to thank the International Space Science Institute (ISSI) in Bern, Switzerland, for the hospitality provided to the members of the team on `The Nature and Physics of Vortex Flows in Solar Plasmas'. E.L.R. acknowledges Brazilian agencies CAPES, CNPq, and FAPESP (Grants No. 88881.309066/2018-01, No. 304449/2017-2 and No. 16/24970-7) for their financial support. This research has also received financial support from the European Union's Horizon 2020 research and innovation program under grant agreement No. 824135 (SOLARNET). This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior – Brasil (CAPES) – Finance Code 88882.316962/2019-01.
Q: How to execute a complex batch command on the fly I am looking for a way to run a DOS/Windows batch command directly from C# code without saving it as a .BAT file first. I'm mainly interested in running a DOS command combined with an stdin stream. Let's say I need to execute something like this: echo 'abcd' | programXXX.exe -arg1 --getArgsFromStdIn After that, programXXX.exe will take the 'abcd' string as -arg1. What I'm doing now is just creating a bat file in the TMP directory, running it, and deleting it after execution. What I need is to run it "on the fly" from .NET code without saving it to a file first. (The main reason is security, but I also do not want to leave rubbish behind when the program crashes, etc.) Do you know how to achieve that? A: You can use Process.Start. It takes several parameters, and you can pass command-line arguments through to it. A: I would offer two suggestions: * use redirected IO and launch the program from within your code * you can use PowerShell as in this tutorial A: A switch to PowerShell (PSH) would give you a much greater ability to execute commands. PSH executes in process, and multiple command lines can be executed in a single runspace (scope/context) with full control over the input and output object pipelines: var runspace = RunspaceFactory.CreateRunspace(); PSSnapInException pex; var loadedSnapIn = runspace.RunspaceConfiguration.AddPSSnapIn(SnapInName, out pex); runspace.Open(); var pipe = runspace.CreatePipeline(commandline); var output = pipe.Invoke(); This creates the runspace, loads a snap-in (i.e. extra custom commands), sets up a command and executes it, collecting the collection of returned objects.
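The redirected-IO suggestion can be sketched as follows (shown in Python for brevity; the C# equivalent sets RedirectStandardInput = true and UseShellExecute = false on ProcessStartInfo and writes to process.StandardInput). The child command here is just a stand-in for programXXX.exe:

```python
import subprocess
import sys

# Start the child process directly and feed 'abcd' through stdin --
# no temporary .BAT file is ever written to disk.
# In practice the argument list would be something like
# ["programXXX.exe", "-arg1", "--getArgsFromStdIn"] (placeholder name).
proc = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.stdin.read().upper())"],
    input="abcd",           # replaces `echo 'abcd' |`
    capture_output=True,
    text=True,
)
print(proc.stdout.strip())  # the child received "abcd" on stdin
```

Because the command and its arguments are passed as a list and stdin is fed programmatically, nothing goes through a shell, which also addresses the security concern in the question.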
var webpack = require('webpack');
var HtmlWebpackPlugin = require('html-webpack-plugin');
var ExtractTextPlugin = require('extract-text-webpack-plugin');
var glob = require('glob');
var path = require('path');

function resolve(dir) {
  return path.join(__dirname, dir);
}

var nodeModules = resolve('node_modules');
var port = 8080;
var rootPath = resolve('./dist');
var _entries = {};
var fileNames = [];
var jsDir = resolve('./src/entry');
var entryFiles = glob.sync(`${jsDir}/*.js`);

entryFiles.forEach((filePath) => {
  var filename = filePath.substring(filePath.lastIndexOf('/') + 1, filePath.lastIndexOf('.'));
  _entries[filename] = filePath;
  fileNames.push(filename);
});

var webpackConfig = {
  entry: Object.assign({}, { vendor: ['vue'] }, _entries),
  output: {
    filename: '[name].[hash:8].min.js',
    path: rootPath,
    publicPath: './'
  },
  resolve: {
    extensions: ['.js', '.vue'],
    alias: {
      'vue$': 'vue/dist/vue.common.js',
    }
  },
  devServer: {
    port: port
  },
  devtool: '#eval-source-map',
  module: {
    rules: [{
      test: /\.vue$/,
      use: 'vue-loader',
    }, {
      test: /\.js$/,
      use: 'babel-loader',
      exclude: nodeModules,
    }, {
      test: /\.css$/,
      use: ExtractTextPlugin.extract({
        fallback: 'vue-style-loader',
        use: 'css-loader'
      }),
    }, {
      test: /\.(scss|sass)$/,
      use: ExtractTextPlugin.extract({
        fallback: 'vue-loader',
        use: 'vue-style-loader!css-loader!sass-loader?indentedSyntax'
      }),
    },
    // Load png/jpg/gif/svg files, compress them, and inline images
    // smaller than 10kb into the CSS as base64 data URIs.
    {
      test: /\.(jpe?g|png|gif|svg)$/i,
      use: [
        'image-webpack-loader?{progressive:true, optimizationLevel: 4, ' +
        'interlaced: false, pngquant:{quality: "65-90", speed: 4}}', // compress images
        'url-loader?limit=10000&name=img/[name].[hash:8].[ext]', // inline images under 10kb as base64
      ],
    }, {
      test: /\.(woff2?|eot|ttf|otf)(\?.*)?$/,
      use: 'url-loader?limit=10000&name=fonts/[name].[hash:8].[ext]'
    }]
  },
  plugins: [
    new webpack.DefinePlugin({
      // Define NODE_ENV so libraries ship their production builds
      // (replacing the whole process.env object with a string would break them).
      'process.env.NODE_ENV': JSON.stringify('production')
    }),
    // Bundle all extracted CSS into css/[name].[hash:8].min.css.
    new ExtractTextPlugin({ filename: 'css/[name].[hash:8].min.css', disable: false, allChunks: true }),
    // https://github.com/glenjamin/webpack-hot-middleware#installation--usage
    new webpack.optimize.OccurrenceOrderPlugin(),
    new webpack.NoEmitOnErrorsPlugin(),
    new webpack.ProvidePlugin({ 'Vue': 'vue' }),
    new webpack.optimize.UglifyJsPlugin({
      compress: {
        warnings: false
      }
    })
  ]
};

let htmlPageWebpack = fileNames.map((name) => {
  return new HtmlWebpackPlugin({
    filename: `${name}.html`,
    template: 'index.html',
    inject: true,
    title: 'hello page',
    chunks: ['vendor', name],
    hash: true,
    minify: {
      removeComments: true,     // strip HTML comments
      collapseWhitespace: true  // collapse whitespace
    }
  });
});

webpackConfig.plugins = [].concat(webpackConfig.plugins, htmlPageWebpack);

module.exports = webpackConfig;
ISO 22000:2005 describes the requirements for operating an effective food safety management system, integrating the use of Hazard Analysis and Critical Control Point (HACCP) techniques and defined prerequisites for the safe production of food. This section of the standard is designed to enable top management to establish and maintain commitment to the development and improvement of a food safety management system. The need for measurable objectives is intended to support top management in understanding how the food safety management system is performing, and therefore what improvements and updating may be required to enable the ongoing production of safe food. Food Safety Policy: Establish a policy that is appropriate to the role of the organization in the food chain, ensuring it conforms to both statutory and regulatory requirements and the agreed food safety requirements of customers. Objectives: Establish measurable objectives relating to food safety in support of the food safety policy. Define the scope of the food safety management system in terms of products, activities and sites. Document the food safety management system. Develop internal and external communication on food safety issues with relevant interested parties. Develop a food safety management system that enables all food safety hazards to be identified and controlled. Establish procedures to manage potential emergency situations that can impact food safety. Responsibilities: Define and communicate responsibilities and authorities. Appoint a food safety team leader and establish a food safety team. Review the continued suitability, adequacy and effectiveness of the food safety management system at planned intervals, and identify opportunities for improvement and updating of the system. Resources: Provide adequate resources for the development, maintenance, updating and improvement of the food safety system.
All relevant information needed to conduct the hazard analysis shall be collected, maintained, updated and documented. The food safety team shall conduct a hazard analysis to determine which hazards need to be controlled. A combination of control measures shall be put in place and managed through prerequisite programmes and/or by HACCP plans. Traceability systems will need to be implemented to enable the identification of product lots/batches back through to raw materials and delivery records in the event that recall or withdrawal is warranted. There shall be procedures in place to handle potentially unsafe products, withdrawals, and disposal.
Abstract: We give an overview of recent work on charge degrees of freedom of strongly correlated electrons on geometrically frustrated lattices. Special attention is paid to the checkerboard lattice, i.e., the two-dimensional version of a pyrochlore lattice and to the kagome lattice. For the checkerboard lattice it is shown that at half filling when spin degrees of freedom are neglected and quarter filling when they are included excitations with fractional charges $\pm$e/2 may exist. The same holds true for the three-dimensional pyrochlore lattice. In the former case the fractional charges are confined. The origin of the weak constant confining force is discussed and some similarities to quarks and to string theory are pointed out. For the checkerboard lattice a formulation in terms of a compact U(1) gauge theory is described. Furthermore a new kinetic mechanism for ferromagnetism at special fillings of a kagome lattice is discussed.
Moulton School & Science College is a secondary school with academy status located in the village of Moulton, Northamptonshire. The school was formerly known as Moulton School; its founding headmaster was Leslie Alfred Scott (1914-1999), who was headmaster from 1954 until his retirement in 1979. He established the school motto - "Fill the Unforgiving Minute". He also established the first house system (Hilary, Bannister, Fleming and Whittle) and created the school crest. Two new houses, Scott and Petit, were added after Scott retired in 1979. As of 2019, the school had 1,355 students on roll, including the sixth form, and 135 teachers. It is a school for ages 11–18. The school was granted specialist Science College status in 2002, and this was re-designated in 2007. The acting headteacher, as of March 2021, is Angie Dabbs. The school used to separate the students into four different houses named after the Northamptonshire houses of Holdenby House, Althorp, Rockingham, and Sulgrave Manor. They had different colour ties to represent them: blue for Althorp, green for Holdenby, red for Rockingham and yellow for Sulgrave. Since the 2012–13 academic year, however, the school has had year groups consisting of eight classes per year, instead of houses, with a colour for each year group. As of 2022, the year colours are as follows: Gold (Year 7), Blue (Year 8), Green (Year 9), Black (Year 10), Silver (Year 11). They rotate every year. As of 2022, the houses have been renamed Hunsbury (Green), Ravensthorpe (Red), Stanwick (Yellow) and Barnwell (Blue). The school serves students from Moulton, Pitsford, Boughton, Brixworth, Chapel Brampton, Church Brampton, Old, Kingsthorpe, Walgrave, Harlestone, Rectory Farm, Holcot and Sywell. In June 2013, it received a "Good" report from Ofsted, which was confirmed in 2017.
Feeder Schools Boughton Primary School Brixworth Primary School Harlestone Primary School Moulton Primary School Overstone Primary School Pitsford Primary School Sywell C Of E Primary School The Bramptons Primary School Walgrave Primary School References External links School profile on Ofsted Secondary schools in West Northamptonshire District Academies in West Northamptonshire District Educational institutions established in 1954 1954 establishments in England
\section{Introduction} Neural Architecture Search (NAS) is an automated machine learning technique to design an optimal architecture by searching its building blocks from a collection of candidate structures and operations. Despite the success of NAS in several computer vision tasks \cite{zoph2016neural, zoph2018learning, ghiasi2019fpn, liu2019auto, tan2019mnasnet}, the search process demands huge computational resources. The current search times have come down considerably from as many as 2000 GPU days in early NAS \cite{zoph2018learning}, thanks to subsequent studies \cite{cai2018proxylessnas,liu2018progressive,pham2018efficient,real2019regularized,stamoulis2019single,wu2019fbnet} among others. Differentiable Architecture Search (DARTS) \cite{liu2018darts} is an appealing method that avoids searching over all possible combinations by relaxing the categorical architecture indicators to continuous parameters. The higher level architecture can be learned along with the lower level weights via stochastic gradient descent by approximately solving a bi-level optimization problem. The 2nd order DARTS is more accurate yet involves a mixed second derivative estimation of the loss functions. In spite of the accuracy, it is used less often in practice as it can take a much longer search time than the 1st order DARTS. A single level approach (SNAS) based on sampling and reinforcement learning has been proposed in \cite{xie2018snas}. On CIFAR-10, SNAS is more accurate than the 1st order DARTS yet with 50\% more search time than the 2nd order DARTS. The {\it main contribution of this paper is to introduce a novel Relaxed Architecture Search (RARTS)} based on single-level optimization and only the first-order partial derivatives of the loss functions. RARTS achieves {\it higher accuracies than the 2nd order DARTS with consistently shorter search times on architecture search tasks}.
To demonstrate and understand the capability of RARTS, we carried out both analytical and experimental studies below. \begin{itemize} \item Compare RARTS with DARTS directly on the analytical model with quadratic loss functions, and on CIFAR-10 based architecture search exactly as conducted in \cite{liu2018darts}. In the case of the analytical model, RARTS iterations approach in a robust fashion the true global minimal point missed by the 1st order DARTS. On the CIFAR-10 architecture search task, the model found by RARTS has a smaller size and higher test accuracy than that found by the 2nd order DARTS, with a 65\% saving in search time. \item Transfer the model learned on CIFAR-10 to ImageNet and compare with DARTS and some of its recent variants. \item Prove a convergence theorem for RARTS based on descent of a Lagrangian function, and discover equilibrium equations for the limits. \end{itemize} \section{Related work} \subsection{Differentiable Architecture Search} DARTS \cite{liu2018darts} learns network weights and the architecture parameters simultaneously based on training and validation loss functions. The second order DARTS performs much better than the first order DARTS, though at the considerable overhead of computing mixed second order partial derivatives of the loss functions (see below). A group of DARTS-style methods has been proposed lately, with most improvements gained from modifying the search space and training procedures. FairDARTS \cite{chu2019fair} and P-DARTS \cite{chen2019progressive} improve the search space by reducing the impact of skip-connections. MiLeNAS \cite{he2020milenas} is a mixed level reformulation of NAS. We shall see that MiLeNAS is actually a constrained case of RARTS.
\subsection{Bilevel optimization} DARTS training relies on an iterative algorithm to solve a bilevel optimization problem \cite{Bilevel07}: \begin{align}\label{bilev} \begin{split} & \mathop{\min}_{\alpha} L_{val}(w_0(\alpha), \alpha),\\ \mathrm{where} \ \ \ \ & w_0(\alpha) = \mathop{\arg\min}_w L_{train}(w, \alpha). \end{split} \end{align} Here $w$ denotes the network weights, $\alpha$ is the architecture parameter, $L_{train}$ and $L_{val}$ are the training and validation loss functions. DARTS algorithm proceeds as: \begin{itemize} \item update weight $w$ by descending along $\nabla_w L_{train}$ \item update architecture parameter $\alpha$ by descending along: \[\nabla_{\alpha}\, L_{val}(w - \xi \, \nabla_w L_{train}(w,\alpha),\alpha)\] \end{itemize} where $\xi=0$ ($\xi > 0$ ) gives the first (second) order approximation. The second order method requires computing the mixed derivative $\nabla^{2}_{\alpha,w} L_{train}$, and is observed to optimize better in a solvable model and through experiments. The bilevel optimization problem also arises in hyper-parameter optimization and meta-learning, see \cite{Bilevel18} for convergence result on minimizers and a second order algorithm. \section{Methodology} In this section, we introduce RARTS, its iterative algorithm and convergence properties. We then demonstrate advantages of RARTS over DARTS on various datasets. \subsection{Relaxed Architecture Search}\label{32} As pointed out in \cite{liu2018darts} and \cite{he2020milenas}, when learning architecture parameter $\alpha$, one should take into account the validation dataset to avoid overfitting. The bi-level formulation (\ref{bilev}) is a way to handle both training and validation datasets. However, (\ref{bilev}) is solved only approximately by DARTS algorithms whose convergence is not known theoretically. See Theorem 3.2 in (\cite{Bilevel18}) for convergence of minimizers, if the $\alpha$-minimization is solved exactly. 
Even if the weights $w_0$ are learned optimally on the training dataset (i.e., under assumption (vi) of Theorem 3.2 in \cite{Bilevel18}), it is unclear how optimal $\alpha$ is on the validation dataset. We propose a single level alternative to the bi-level formulation (\ref{bilev}) by jointly training an auxiliary network of the same architecture on the validation dataset. The original and the auxiliary networks are related by having weights of the same tensor shapes, with the difference in weight values controlled by a penalty. This way, the training and validation datasets contribute to the search for the architecture $\alpha$ via the cooperation of two networks. Specifically, we propose a relaxed architecture search framework through the following relaxed Lagrangian $L=L(y,w,\alpha)$: \begin{align}\label{Lag} L := L_{val}(y, \alpha) + \lambda \, L_{train}(w, \alpha) + \frac{1}{2}\beta \, \|y-w\|^2_2, \end{align} where $w$ and $y$ denote the weights of the original and the auxiliary networks respectively, and $\lambda$ and $\beta$ are hyper-parameters controlling the penalty and the learning process. We minimize the relaxed Lagrangian $L(y, w, \alpha)$ in (\ref{Lag}) by iteration on the three variables alternately, because they have different meanings and dynamics. Similar to the Gauss-Seidel method in numerical linear algebra \cite{Gauss}, we use updated variables immediately in each step and obtain the following three-step iteration: \begin{align} \begin{split} w^{t+1} &= w^{t} - \eta^t_w\, \nabla_w L(y^t, w^t, \alpha^t) \\ y^{t+1} &= y^{t} - \eta^t_y\, \nabla_y L(y^t, w^{t+1}, \alpha^t) \\ \alpha^{t+1} &= \alpha^{t} - \eta^t_{\alpha}\, \nabla_{\alpha}L(y^{t+1}, w^{t+1}, \alpha^t).
\end{split} \end{align} With explicit gradient $\nabla_{w,y}\|y-w\|^2_2$, we have: \begin{align} \label{ite} \begin{split} w^{t+1} &= w^{t} - \lambda\, \eta^t_w \nabla_w L_{train}(w^t, \alpha^t)- \beta\eta^t_w(w^t-y^t) \\ y^{t+1} &= y^{t} - \eta^t_y\, \nabla_y L_{val}(y^t, \alpha^t)- \beta\, \eta^t_y(y^t-w^{t+1}) \\ \alpha^{t+1} &=\alpha^{t} - \lambda \, \eta^t_{\alpha}\, \nabla_{\alpha} L_{train}(w^{t+1}, \alpha^t) - \eta^t_{\alpha}\, \nabla_{\alpha} L_{val}(y^{t+1}, \alpha^t). \end{split} \end{align} \medskip Note that the update of $\alpha$ in Eq. (\ref{ite}) involves both the training loss and the validation loss, which is {\it similar to the second order DARTS but without the mixed second derivatives.} The first order DARTS uses $\nabla_{\alpha} L_{val}$ only in this step. \medskip If we set $y=w$, remove the $y$ update and the $\beta $ terms in (\ref{ite}), then we recover the first order algorithm of MiLeNAS \cite{he2020milenas}. \subsection{Convergence analysis} Suppose that $L_{train}:=L_t$ and $L_{val}:=L_v$ both satisfy Lipschitz gradient property, or there exist positive constants $L_1$ and $L_2$ such that ($z=(y,\alpha)$, $z'=(y',\alpha')$): \[ \| \nabla_z L_v (z) - \nabla_z L_v (z') \| \leq L_{1}\|z - z'\|, \; \; \forall (z,z'), \] which implies: \[ L_v(z) -L_v(z') \leq \langle \nabla_{z} L_{v}(z'), (z-z') \rangle + \frac{L_{1}}{2}\|z-z'\|^2, \] for any $(z,z')$; similarly ($\zeta=(w,\alpha)$, $\zeta'=(w',\alpha')$): \[ \| \nabla_\zeta L_t (\zeta) - \nabla_\zeta L_t (\zeta') \| \leq L_{2}\|\zeta - \zeta'\|, \; \; \forall (\zeta,\zeta'), \] which implies: \[ L_t(\zeta) -L_{t}(\zeta') \leq \langle\nabla_{\zeta}L_{t}(\zeta'), (\zeta-\zeta') \rangle +\frac{L_2}{2}\|\zeta-\zeta'\|^2, \] for any $(\zeta,\zeta')$. \begin{theo} Suppose that the loss functions $L_t$ and $L_v$ satisfy Lipschitz gradient property. 
If the learning rates $\eta_{w}^{t}$ , $\eta_{y}^{t}$ and $\eta_{\alpha}^{t}$ are small enough depending only on the Lipschitz constants as well as $(\lambda,\beta)$, and approach nonzero limit at large $t$, the Lagrangian function $L(y,w,\alpha)$ is descending on the iterations of (\ref{ite}). If additionally the Lagrangian $L$ is lower bounded and coercive (its boundedness implies that of its variables), the sequence $(y^t,w^t,\alpha^t)$ converges sub-sequentially to a critical point $(\bar{y},\bar{w},\bar{\alpha})$ of $L(y,w,\alpha)$ obeying the equilibrium equations: \begin{eqnarray} &&\lambda \nabla_{w} L_t (\bar{w},\bar{\alpha}) + \beta (\bar{w} -\bar{y}) = 0, \nonumber \\ && \nabla_y L_v(\bar{y},\bar{\alpha}) +\beta (\bar{y}-\bar{w})=0, \nonumber \\ && \lambda \nabla_\alpha L_t (\bar{w},\bar{\alpha}) + \nabla_{\alpha} L_v(\bar{y},\bar{\alpha}) =0. \label{equil} \end{eqnarray} \end{theo} The proof is given in the Appendix. \subsection{A solvable bilevel model} Consider quadratic $L_{val}=\alpha\,w -2\alpha +1$, $L_{train}=w^2 -2\, \alpha \,w +\alpha^2$ for problem (\ref{bilev}) as in \cite{liu2018darts}. The model helps compare DARTS and RARTS through bi-level optimization, besides an example for Theorem 1. The learning dynamics start from $(\alpha_0,w_0,y_0)=(2,-2,y_0)$. Clearly, $w_{0}(\alpha) = {\rm argmin}_w \, L_{train} = \alpha$. Then $L_{val}(w_0(\alpha), \alpha) = \alpha^2 -2\alpha +1$, the global minimizer of the bilevel problem (\ref{bilev}) is $(\alpha^*,w^*)=(1,1)$, which is approached by the second order DARTS (Fig. 2 of \cite{liu2018darts}). The learning trajectory of the first order DARTS ends at $(2,2)$, a spurious minimal point. This is reproduced here in Fig. \ref{convergence}, along with three learning curves from RARTS as the parameters $(\lambda, \beta)$ and the initial value $y_0$ vary. In Fig. \ref{convergence}a, $\beta = 10$, $y_0=0$. In Fig. \ref{convergence}b, $\lambda=10$, $y_0=0$. In Fig. \ref{convergence}c, $\lambda=\beta =10$. 
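The two dynamics on this quadratic model are simple enough to check numerically; the following sketch (illustrative only, not the authors' released code) runs the first order DARTS updates and the RARTS iteration (\ref{ite}) with $\lambda=\beta=10$, $y_0=0$ and all learning rates set to $0.01$:

```python
# Quadratic bilevel model: L_val(w, a) = a*w - 2*a + 1,
#                          L_train(w, a) = w**2 - 2*a*w + a**2.
eta = 0.01

# First order DARTS: descend w on L_train, then a on L_val (xi = 0).
a, w = 2.0, -2.0
for _ in range(5000):
    w -= eta * (2 * w - 2 * a)   # grad_w L_train
    a -= eta * (w - 2)           # grad_a L_val
darts_point = (a, w)             # spirals into the spurious point (2, 2)

# RARTS three-step iteration, Eq. (4), with lam = beta = 10, y0 = 0.
lam, beta = 10.0, 10.0
a, w, y = 2.0, -2.0, 0.0
for _ in range(5000):
    w = w - lam * eta * (2 * w - 2 * a) - beta * eta * (w - y)
    y = y - eta * a - beta * eta * (y - w)              # grad_y L_val = a
    a = a - lam * eta * (-2 * w + 2 * a) - eta * (y - 2)
rarts_point = (a, w)             # settles close to the global minimizer (1, 1)

print(darts_point, rarts_point)
```

The first order DARTS trajectory ends at the spurious point $(2,2)$, while the RARTS iterate enters a small circle around the global minimizer $(1,1)$, in line with Fig.~\ref{convergence}.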
In all experiments, the learning rates are fixed at $0.01$. For a range of $(\lambda,\beta)$ and $y_0$, we see that our learning curves enter a small circle around $(1,1)$. Both loss functions satisfy the Lipschitz gradient property, implying descent of the Lagrangian $L$ by the proof of Theorem 1. If $\lambda > 1/2$, $\beta > 1$, $L$ is bounded and coercive as long as $\alpha^t$ is bounded, which follows from an eigenvalue analysis of the linear system (\ref{ite}) and is observed in computation. If $\lambda \neq 1/4$, there is a {\it unique} solution to system (\ref{equil}): $(\bar{\alpha},\bar{w}, \bar{y})= (\frac{4\lambda}{4\lambda -1},\frac{4\lambda -2}{4\lambda -1},1-\frac{1}{2\lambda}+\frac{1}{\beta})$. At $\lambda=10$, $(\bar{\alpha},\bar{w})\approx (1.025,0.974)$ where {\it global convergence holds for the whole RARTS sequence.} \begin{figure*} \centering {\includegraphics[width=1\textwidth]{conv2.png}}% \caption{Learning trajectories of RARTS approach the global minimal point $(1,1)$ of the solvable model at suitable values of $\lambda$, $\beta$ and $y_0$ ($\lambda=10$ in middle/right subplots, $\beta=10$ in left/right subplots, $y_0=0$ in left/middle subplots), compared with that of the baseline (first order DARTS). } \label{convergence} \end{figure*} \section{Experiments} We show by a series of experiments how RARTS works efficiently on various datasets. \subsection{Datasets} \textbf{CIFAR-10.} It consists of 50,000 training images and 10,000 test images \cite{krizhevsky2009learning}. Those 3-channel $32\times32$ images are allocated evenly to 10 object classes. The train and val data we have used are standard random half splits of the training data, as in DARTS. The building blocks of the architecture are searched on CIFAR-10. \textbf{ImageNet.} ImageNet \cite{deng2009imagenet,russakovsky2015imagenet} is composed of more than 1.2 million training images and 5,000 test images from 1,000 object classes.
We train on ImageNet a larger network which is built from the blocks learned on CIFAR-10. \subsection{Results and Discussions}\label{as} We run RARTS on the CIFAR-10 architecture search task, under the same search space and number of blocks as \cite{liu2018darts}. We train for $50$ and $600$ epochs in the first and second stages, respectively. The initial learning rate is $0.025$ for both stages. Besides the standard $\ell_2$ regularization of the weights, we also adopt the latency penalty \cite{chen2019fasterseg}, which is widely used in many architecture search tasks \cite{he2018amc, wu2019fbnet, tan2019mnasnet}. The search cost of RARTS is $1.1$ GPU days, far less than that of the second order DARTS. The test error of RARTS is $2.65\%$, outperforming the $3.00\%$ of the first order DARTS and the $2.76\%$ of the second order DARTS. It should also be pointed out that the model found by RARTS has 3.2M parameters, which uses less memory than the 3.3M model found by DARTS. Moreover, RARTS outperforms SNAS in accuracy and search cost at comparable model size. In the Appendix, the architecture found by RARTS is displayed. The learned building blocks are then transferred to ImageNet, producing the results in Table \ref{Ima}. Our 26.2\% top-1 error outperforms those of DARTS and SNAS, and is comparable to those of GDAS and MiLeNAS. It should be noted that if $y=w$ is enforced, e.g. through a multiplier, RARTS essentially reduces to MiLeNAS. The difference is that MiLeNAS trains a single network on both training and validation datasets, while we train two networks on the two datasets for the same architecture. MiLeNAS seeks a group of models by conducting model size tracking during search, which adds complexity to the search process. Though both methods did away with bi-level optimization, the architecture in our search has more generality and robustness as it is optimized in two networks with different weights.
The improvement of FairDARTS comes mainly from the modification of the search space, by reducing the number of paths (skip-connection). Similarly, P-DARTS also makes non-algorithmic improvements, as it divides search into multiple stages and progressively adds more depth than DARTS. These methods are actually complementary to our approach which is {\it a pure algorithmic advance} of DARTS. \begin{table*} \caption{Comparison of DARTS, RARTS and other methods on CIFAR-10 based network search. DARTS-1/DARTS-2 stands for DARTS 1st/2nd order. SNAS-Mi/SNAS-Mo stands for SNAS plus mild/moderate constraints, trained with TITAN Xp GPUs where DARTS-1/2 takes 0.4/1 GPU Day. All our experiments are conducted on GTX 1080 Ti GPUs. Here: $\diamond$ on resp. authors' machines, $\star$ on current authors' machines. Average of 5 runs. } \label{arc} \begin{center} \begin{tabular}{l c c c c} \hline \specialrule{0em}{1pt}{1pt} \multirow{2}{*}{\makecell{Method}} & \multirow{2}{*}{\makecell{Test Error (\%)}} &\multirow{2}{*}{\makecell{Parameters (M)}} & Search & Search \\ & & & GPU Days $\diamond$ & GPU Days $\star$ \\ \specialrule{0em}{1pt}{1pt} \hline \specialrule{0em}{1pt}{1pt} Baseline \cite{liu2018darts} & 3.29 $\pm$ 0.15 & 3.2 & 4 & - \\ \specialrule{0em}{1pt}{1pt} \hline \specialrule{0em}{1pt}{1pt} AmoebaNet-B \cite{real2019regularized} & 2.55 $\pm$ 0.05 & 2.8 & 3150 & - \\ ENAS \cite{pham2018efficient} & 2.89 & 4.6 & 0.5 & - \\ ENAS \cite{pham2018efficient,liu2018darts} & 2.91 & 4.2 & 4 & - \\ SNAS-Mi \cite{xie2018snas} & 2.98 & 2.9 & 1.5 & - \\ SNAS-Mo \cite{xie2018snas} & 2.85 $\pm$ 0.02 & 2.8 & 1.5 & - \\ GDAS \cite{dong2019searching} & 2.82 & 2.5 & 0.2 & - \\ FairDARTS \cite{chu2019fair} & 2.54 $\pm$ 0.05 & 3.32 $\pm$ 0.46 & 0.4 & - \\ P-DARTS \cite{chen2019progressive} & 2.50 & 3.4 & 0.3 & - \\ DARTS-1 \cite{liu2018darts} & 3.00 $\pm$ 0.14 & 3.3 & 1.5 & 0.7 \\ DARTS-2 \cite{liu2018darts} & 2.76 $\pm$ 0.09 & 3.3 & 4 & 3.1 \\ MiLeNAS \cite{he2020milenas} & 2.80 $\pm$ 0.04 & 
2.9 & 0.3 & - \\
\specialrule{0em}{1pt}{1pt} \hline \specialrule{0em}{1pt}{1pt}
RARTS & \textbf{2.65 $\pm$ 0.07} & 3.2 & 1.1 & 1.1 \\
\specialrule{0em}{1pt}{1pt} \hline
\end{tabular}
\label{NAS}
\end{center}
\end{table*}

\begin{table*}
\caption{Comparison of DARTS, RARTS and other methods on ImageNet.}
\begin{center}
\begin{tabular}{l c c c}
\hline \specialrule{0em}{1pt}{1pt}
Method & Top-1 Test Error (\%) & Top-5 Test Error (\%) & Parameters (M)\\
\specialrule{0em}{1pt}{1pt} \hline \specialrule{0em}{1pt}{1pt}
SNAS \cite{xie2018snas} & 27.3 & 9.2 & 4.3 \\
DARTS \cite{liu2018darts} & 26.7 & 8.7 & 4.7 \\
MiLeNAS \cite{he2020milenas} & 25.4 & 7.9 & 4.9 \\
GDAS \cite{dong2019searching} & 26.0 & 8.5 & 5.3 \\
\specialrule{0em}{1pt}{1pt} \hline \specialrule{0em}{1pt}{1pt}
RARTS & \textbf{26.2} & 8.5 & 4.7 \\
\specialrule{0em}{1pt}{1pt} \hline
\end{tabular}
\label{Ima}
\end{center}
\end{table*}

\section{Conclusion}
We developed RARTS, a novel relaxed differentiable method for neural architecture search. We proved a convergence theorem for it and showed how the method works on an analytically solvable model. We demonstrated its higher accuracy and search efficiency relative to state-of-the-art differentiable methods, especially DARTS-style algorithms, on CIFAR-10 and ImageNet classification. RARTS is an algorithmic advance of DARTS and a new search tool for various datasets and deep networks. Additional gains can be achieved with search-space design (e.g. \cite{chu2019fair}) for specific datasets. In future work, we shall extend RARTS to other deep learning applications.

\section{Acknowledgements}
The work was partially supported by NSF grants IIS-1632935 and DMS-1854434. Most of this paper was submitted to ICML in Jan 2020. We thank the anonymous reviewers for their constructive comments.
\newpage
\bibliographystyle{plain}
Happy Prime Day! If you're looking to save money, this is going to be your best opportunity until Black Friday rolls around. Tons of great gear is on sale, and the folks at TechBargains are staying on top of it for the entire event. Be sure to check back regularly – we'll be updating this post with the latest deals. And if you're not already a Prime member, join now!

50-inch Toshiba 4K UltraHD HDR Fire TV Edition Smart HDTV for $289.99 at Amazon (List price: $399.99).
iRobot Roomba 671 Robot Vacuum with Wi-Fi and Alexa for $229.99 at Amazon (List price: $349.99).
Bose QuietComfort 25 Acoustic Noise Cancelling Wired Headphones for $125 at Amazon (List price: $299).
Instant Pot DUO60 6-Quart 1000W Electric Pressure Cooker for $58.99 at Amazon (List price: $99.95).
23andme DNA Ancestry and Genetic Health Kit Report for $99.99 at Amazon (List price: $199).
Blink XT 2 Camera Home Security System for $139.99 (3 Camera for $230, 5 Camera for $350) at Amazon (List price: $229.99).
Ecovacs Deebot N79S Wi-Fi Smart Robotic Vacuum Cleaner for $159.98 at Amazon (Clip $10 Coupon – List price: $299.98).
Amazon Cloud Cam 1080p Wi-Fi Security Camera for $59.99 at Amazon (List price: $119.99).
Fire TV Stick Streaming Media Player with Alexa Voice Remote for $19.99 at Amazon (List price: $39.99).
Crucial MX500 2TB 3D NAND SATA 2.5-inch SSD for $319 at Amazon (List price: $433).
3 Month Subscription for Amazon Kindle Unlimited for $0.99 at Amazon (List price: $9.99/month).
LG 27UD68-W 27-inch 3840×2160 UHD LED Monitor with Freesync for $342.99 at Amazon (List price: $459.99).
SanDisk Ultra 128GB MicroSDXC Flash Card for $23.88 at Amazon (List price: $39.99).
Ring Spotlight Cam 1080p Wire-Free Wi-Fi Smart Security Camera with Rechargeable Battery, 2-Way Talk for $139.99 at Amazon (List price: $199).
Acer G276HL 27-inch 1920×1080 VA Zero Frame Monitor for $114.99 at Amazon (List price: $165).
$5 Amazon Credit Bonus with $25 Amazon Gift Card Purchase at Amazon (Coupon code: GCPRIME18).
Fire TV 4K Streaming Media Player with Alexa Voice Remote for $34.99 at Amazon (List price: $69.99).
Echo Smart Speaker (2nd Gen) for $69.99 at Amazon (List price: $99.99).
Echo Spot 3.5-inch Touchscreen Smart Speaker for $99.99 at Amazon (List price: $129.99).
Echo Dot Bluetooth Speaker for $29.99 at Amazon (List price: $49.99).
Echo Plus Smart Speaker with Smart Home Hub for $99.99 at Amazon (List price: $149.99).
Echo Show 7-inch Touchscreen and Speaker for $129.99 at Amazon (List price: $229.99).
Fire TV Cube 4K Streaming Media Player for $89.99 at Amazon (List price: $119.99).
Fire HD 10 32GB 10.1-inch 1080p Tablet with Special Offers for $99.99 at Amazon (List price: $149.99).
Fire HD 8 16GB 8-inch Tablet with Special Offers for $49.99 at Amazon (List price: $79.99).
Fire 7 8GB 7-inch Tablet with Special Offers for $29.99 at Amazon (List price: $49.99).
Kindle Paperwhite 6-inch 300ppi Wi-Fi E-reader with Special Offers for $79.99 at Amazon (List price: $119.99).
Kindle 6-inch Wi-Fi Touchscreen E-reader with Special Offers for $49.99 at Amazon (List price: $79.99).
Fire HD 8 Kids Edition 32GB 8-inch Tablet with Kid-Proof Case for $89.99 at Amazon (List price: $129.99).
Fire 7 Kids Edition 16GB 7-inch Tablet with Kid-Proof Case for $69.99 at Amazon (List price: $99.99).
Echo Look Hands-Free Camera and Style Assistant for $99.99 at Amazon (List price: $199.99).
4 Months of Amazon Music Unlimited for $0.99 at Amazon (List price: $7.99/month).
66% off 3 Months of Audible Membership for $4.95/month at Amazon (List price: $14.95/month).
$10 off Now and $10 off Future Order with Prime Now at Amazon (Coupon code: 20PRIMENOW).
CyberPowerPC Gamer Xtreme Intel Core i5-8400 6-Core Gaming Tower with 4GB AMD RX 580 for $649 at Amazon (List price: $799).
ASUS Chromebook C302 Flip Intel Core m3 12.5-inch 1920×1080 Touchscreen (4GB/64GB) for $399 at Amazon (List price: $499).
Google Pixelbook i5 2400×1600 12.3-inch (8GB/128GB) for $749 at Amazon (List price: $999).
Google Pixelbook i5 2400×1600 12.3-inch (8GB/256GB) for $949 at Amazon (List price: $1199).
ASUS Vivobook E203MA Intel Celeron N4000 11.6-inch 1366×768 Laptop (4GB/64GB) for $179.99 at Amazon (List price: $229.99).
Acer Chromebook R 13 MediaTek MT8173C Quad-core 13.3-inch 1920×1080 Touch (4GB/32GB) for $314.99 at Amazon (List price: $367).
Acer Spin 1 Intel Celeron N3350 11.6-inch 1920×1080 Touch Laptop (4GB/32GB) for $239 at Amazon (List price: $319).
Samsung Galaxy Tab A 8-inch 32GB Wi-Fi Tablet for $149 at Amazon (List price: $167.99).
Lenovo Tab 4 Plus Octa-Core Snapdragon 4G LTE 1080p 8-inch 16GB Tablet for $169.99 at Amazon (List price: $223.99).
ASUS VivoBook S Intel Core i7-8550U Quad-core 14-inch 1920×1080 Laptop for $699 at Amazon.
Intel NUC NUC7i5BNH Core i5-7260U Mini PC with Intel IRIS Plus Graphics 640 for $339 at Amazon.
LG gram Intel Core i7-8550U Quad-core 1920×1080 IPS Laptop (16GB RAM, 256GB SSD) for $1199 at Amazon.
LG 27UD68-W 27-inch 3840×2160 UHD LED Monitor with Freesync for $344.99 at Amazon (List price: $459.99).
Acer G276HL 27-inch 1920×1080 VA Zero Frame Monitor for $114.99 at Amazon (List price: $165).
Samsung EVO Select 256GB MicroSDXC Memory Card with Adapter for $84.99 at Amazon (List price: $119.99).
Dropbox Plus 1TB Storage 1-Year Subscription for $79 at Amazon (List price: $99).
AMD Ryzen Threadripper 1950X 16-Core/32-Thread Processor for $699.99 at Amazon (List price: $999).
AMD Ryzen Threadripper 1920X 12-Core/24-Thread Processor for $620 at Amazon (List price: $799).
AMD Ryzen 7 1800X 8-Core/16-Thread Processor for $239.99 at Amazon (List price: $349).
WD Black 500GB NVMe PCIe M.2 2280 SSD for $144.99 at Amazon (List price: $229.99).
SanDisk Extreme 1TB Portable SSD for $199.99 at Amazon (List price: $259.99).
SanDisk 128GB microSDXC for Nintendo Switch for $34.99 at Amazon (List price: $59.99).
SanDisk Ultra 64GB USB Type-C Dual Drive for $17.49 (128GB for $27) at Amazon (List price: $32.99).
SanDisk Connect Wireless Stick 128GB Flash Drive for $44.76 (200GB for $57) at Amazon (List price: $99.99).
SanDisk iXpand 128GB Flash Drive for iPhone and iPad for $50.39 at Amazon (List price: $119.99).
SanDisk Ultra Flair 128GB USB 3.0 Flash Drive for $22.99 at Amazon (List price: $30.99).
SanDisk Extreme 64GB SDXC Card for $22.96 (128GB for $40) at Amazon (List price: $33.99).
Adobe Photoshop Elements 2018 (No Subscription Required) for $49.99 at Amazon (List price: $69.99).
SanDisk SSD Plus 480GB Solid State Drive for $79.99 (1TB for $150) at Amazon (List price: $104.99).
Samsung T5 1TB USB 3.1 Portable External Drive for $269.99 at Amazon (List price: $329.99).
SanDisk Ultra 128GB MicroSDXC Card with Adapter for $26.52 (200GB for $45.59) at Amazon (List price: $49.99).
SanDisk Fit 128GB USB 3.1 Flash Drive for $21.99 at Amazon (List price: $37.99).
Seagate FireCuda Gaming 2TB 3.5-inch Solid State Hybrid Drive for $69.99 at Amazon (List price: $99.99).
Seagate Backup Plus 4TB USB 3.0 Portable External Hard Drive and 2mo Adobe CC Photography for $89.99 (5TB for $110) at Amazon (List price: $129.99).
Seagate IronWolf Pro 6TB NAS 3.5-inch HDD for $160.99 at Amazon (List price: $229.99).
Seagate IronWolf 3TB NAS 3.5-inch HDD for $73.99 (12TB for $301) at Amazon (List price: $109.99).
Seagate BarraCuda Pro 6TB 3.5-inch HDD for $157.99 (8TB for $210) at Amazon (List price: $225.99).
Seagate BarraCuda 3TB 3.5-inch HDD for $59.99 (8TB for $170) at Amazon (List price: $84.99).
Seagate Backup Hub Plus 8TB External Desktop Hard Drive and 2mo Adobe CC Photography (Hub for Mac for $140) for $139.99 at Amazon (List price: $199.99).
WD My Passport 2TB USB 3.0 Game Storage for PS4 for $59.99 (4TB for $90) at Amazon (List price: $89.99).
Seagate 2TB Game Drive for PS4 for $62.99 at Amazon (List price: $89.99).
EVGA GeForce GTX 1060 3GB GDDR5 Gaming ACX 2.0 Graphics Card for $179.99 at Amazon (List price: $229).
AMD Ryzen 7 2700 8-Core/16-Thread Processor with Wraith Spire LED Cooler for $224.99 at Amazon (List price: $294.99).
Corsair Hydro Series H100i V2 240mm Liquid CPU Cooler for $79.99 at Amazon (List price: $129.99).
Norton Security Deluxe (5 Devices, 1 Year) for $19.99 at Amazon (List price: $39.99).
Netgear Orbi RBK33 AC2200 Home Mesh Wi-Fi System (Cover up to 5,000sq. ft.) for $219.99 at Amazon (List price: $289.99).
TP-Link Deco Whole Home Mesh Wi-Fi System (3-Pack, Up to 5500 sq ft Coverage) for $177.48 at Amazon (Coupon code: 40PDDECOM5 – List price: $219.99).
Google Wi-Fi AC1200 Router for $99.99 at Amazon (List price: $110).
TP-Link Archer C9 AC1900 Wi-Fi Dual-Band Router for $64.99 at Amazon (List price: $89.99).
TP-Link 8-Port Gigabit Ethernet Metal Switch with Lifetime Replacement for $23.99 at Amazon (List price: $39.99).
NetGear EX3700 AC750 Wi-Fi Range Extender for $22.99 at Amazon (List price: $34.99).
AMD Ryzen Threadripper 1900X 8-Core/16-Thread Processor for $299.99 at Amazon.
Ecobee3 Wi-Fi Smart Thermostat with 3 Room Sensors (Alexa Compatible) for $199.99 at Amazon (List price: $289).
Kwikset Convert Smart Lock Conversion Kit with Amazon Cloud Cam for $139.99 at Amazon (List price: $269.98).
Lutron Caseta Wireless Smart Lighting Starter Kit (2-Pack) for $119.89 at Amazon (List price: $159.89).
Rachio Wi-Fi Smart Sprinkler Controller (2nd Gen, 16-Zone) for $149.99 at Amazon (List price: $249.99).
Philips Hue 2 Color Bulb 2nd Gen Dimmable Starter Kit and Echo Dot for $119.99 at Amazon (List price: $199.98).
eufy RoboVac11 Robotic Vacuum for $159.99 at Amazon (List price: $249.99).
Ecovacs Deebot M80 Pro Wi-Fi Robotic Vacuum Cleaner with Mop and Brushroll for $184.74 at Amazon (Clip $30 Coupon – List price: $229.99).
Ecovacs Deebot N79S Wi-Fi Smart Robotic Vacuum Cleaner for $169.98 at Amazon (Clip $10 Coupon – List price: $299.98).
Netgear Arlo Wireless 720p Security Cam Kit (6-Pack) for $399.99 at Amazon.
Blink Indoor 1 Camera Home Security System for $69 (2 Camera for $100, 3 Camera for $150) at Amazon (List price: $99).
40-inch TCL 40S305 1080p Smart Roku LED HDTV for $194.99 at Amazon (List price: $289.99).
ViewSonic PX747-4K 3500 Lumens 4K Projector for $999.99 at Amazon (List price: $1299.99).
NVIDIA Shield TV Streaming Media Player with Remote and Game Controller for $179.99 at Amazon (List price: $199.99).
Epson Home Cinema 1060 1080p 3100-Lumens Projector for $499.99 at Amazon (List price: $699.99).
Game of Thrones: The Complete Seasons 1-7 Blu-ray Set for $74.99 at Amazon (List price: $119.99).
Harry Potter: Hogwarts Collection 31 Disc Set (Blu-ray and Digital HD) for $84.99 at Amazon (List price: $149.99).
Harry Potter 4K Complete 8 Film Collection (Blu-ray) for $79.98 at Amazon (List price: $178.99).
Harry Potter Complete 8 Film Collection (Blu-ray) for $28.99 at Amazon (List price: $99.99).
Nintendo Switch with Gray Joy-Con and SanDisk 64GB microSDXC and $20 Nintendo GC for $299.99 at Amazon (List price: $340.99).
HTC Vive Pro VR Head Mounted Display and $50 Amazon GC and $50 Viveport Code for $799 at Amazon (Go to Special offers and Add Both to Cart – List price: $899).
Oculus Rift VR Headset with Touch Controllers and 6 Games for $349 at Amazon (List price: $399).
Microsoft Xbox One S 1TB and 3 Month Gold and Rare Replay Video Game for $229.99 at Amazon (List price: $299.99).
Samsung Gear VR with Controller (2017 Edition) for $89.99 at Amazon (List price: $129.99).
Amazon Fire TV Game Controller (Compatible with Fire TV and Fire TV Stick) for $29.99 at Amazon (List price: $49.99).
Nintendo 3DS XL SNES Edition and Super Mario Kart for $149 at Amazon (List price: $199).
HyperX Cloud II 7.1ch Gaming Headset for $69.99 at Amazon (List price: $99).
Corsair HS70 7.1 Surround Sound Wireless Gaming Headset for $69.99 at Amazon (List price: $99.99).
Logitech G920 Dual-Motor Feedback Driving Wheel with Pedals for $199 at Amazon (List price: $399.99).
Elgato Stream Deck Live Content Creation Controller for $99.99 at Amazon (List price: $149.95).
LG V35 ThinQ 64GB 6-inch 2880×1440 QHD+ OLED Octa-Core Unlocked Smartphone for $599.99 at Amazon (List price: $899.99).
Sony Xperia XZ1 64GB 5.2-inch 1920×1080 Unlocked Smartphone for $479.99 at Amazon (List price: $499.99).
Samsung Galaxy Note 8 64GB 6.3-inch Unlocked Smartphone for $649.99 at Amazon (List price: $949.99).
Essential Phone 128GB Unlocked Smartphone for $249.99 at Amazon (List price: $499.99).
Huawei Mate 10 Pro 128GB 6-inch Unlocked Smartphone with Leica Camera for $499.99 at Amazon (List price: $549.99).
Huawei Mate SE 64GB 5.93-inch Octa-Core Processor Smartphone for $219.99 at Amazon (List price: $249.99).
Honor 7X 32GB 5.93-inch Dual-Lens Unlocked Smartphone for $169.99 at Amazon (List price: $199.99).
Motorola Moto X 4th Gen 32GB 5.2-inch Unlocked 4G LTE Smartphone (Project Fi Compatible) for $199.99 at Amazon (List price: $399.99).
Samsung Galaxy S8 64GB Unlocked Smartphone for $499.99 at Amazon (List price: $724.99).
Samsung Galaxy S8+ 64GB Unlocked Smartphone for $589.99 at Amazon (List price: $824.99).
Ultimate Ears BLAST Portable Wi-Fi Bluetooth Wireless Speaker with Alexa for $89.99 at Amazon (List price: $129.05).
Edifier R1280T 42W Powered Bookshelf Speakers for $69.99 at Amazon (List price: $99.99).
Sony WH-CH700N Wireless Noise Cancelling Headphones for $98 at Amazon (List price: $198).
Audio-Technica ATH-M40x Professional Monitor Headphones for $74.25 at Amazon (List price: $99).
Sonos One Smart Speakers with Alexa and Bonus $50 Amazon Gift Card for $199 at Amazon (List price: $249).
Logitech Z906 5.1ch 500W THX-Certified Speaker System for $199.99 at Amazon (List price: $399).
Logitech MX Sound 2.0 24W Bluetooth Stereo Speakers for $69.99 at Amazon (List price: $99.99).
Blue Yeti USB Microphone for $89 at Amazon (List price: $129).
GoPro Hero Session 1440p Action Camera for $99.99 at Amazon (List price: $149).
Garmin vivoactive 3 GPS Smartwatch for $199.99 at Amazon (List price: $269.99).
Canon imageCLASS D570 Wi-Fi Duplex Monochrome Laser Printer for $99.99 at Amazon (List price: $159.99).
Canon imageCLASS MF247dw Wireless Multifunction Duplex Laser Office Printer for $129.99 at Amazon (List price: $193.99).
Garmin Vivofit 4 Activity Tracker and Fitness Band for $59.99 at Amazon (List price: $79.99).
Samsung Gear S3 Frontier Smartwatch with Built-in GPS for $279.99 at Amazon (List price: $349.99).
Samsung Qi Certified Fast Charge Wireless Charger Stand (2018 Edition) for $29.99 at Amazon (List price: $67.99).
Fitbit Alta HR Fitness Activity Tracker for $89.95 at Amazon (List price: $119.95).
Instant Pot Ultra 3qt 10-in-1 1000W Stainless Steel Electric Pressure Cooker for $85.95 at Amazon (List price: $119.95).
FlexiSpot Sit Stand Height Adjustable Electric Desk Base for $286.99 at Amazon (List price: $409.99).
Blackstone 36″ Outdoor 4-Burner Flat Top Gas Griddle Station with Side Shelf for $209.29 at Amazon (List price: $299).
Coleman RoadTrip LXX 22,000 BTU Grill for $112.77 at Amazon (List price: $229.99).
LEGO Technic 42043 Mercedes-Benz Arocs 3245 Building Kit (2793 Pieces) for $183.99 at Amazon (List price: $229.99).
Segway miniPRO Smart Self Balancing Personal Transporter (2018) for $399.99 at Amazon (List price: $549.99).
Bio Bidet Ultimate BB-600 Advanced Dual Nozzle Bidet Elongated Toilet Seat for $221 at Amazon (List price: $320).
Levoit LV-H132 3-in-1 Air Purifier with True HEPA Filter for $58.49 at Amazon (List price: $89.99).
Oral-B Pro 7000 Bluetooth Rechargeable Electric Toothbrush for $64.99 at Amazon (Coupon code: 15OB7000PD – List price: $114.94).
Instant Pot DUO60 6-Quart 1000W Electric Pressure Cooker for $58.49 at Amazon (List price: $99.95).
National Geographic DNA Test Kit: Geno 2.0 Next Generation (Ancestry) for $49.97 at Amazon (List price: $99.95).
Sun Joe SPX3000 14.5Amp 2030PSI 1.76GPM Electric Pressure Washer for $109.49 at Amazon (List price: $199.99).
Greenworks 21-Inch 40V Brushless Cordless Mower for $247 at Amazon (List price: $399).
FlexiSpot M2B Height Adjustable Standing Desk (35-inch Wide) for $224.99 at Amazon (List price: $329.99).
FlexiSpot Deskcise Pro Exercise Bike and Height Adjustable Standing Desk for $349.99 at Amazon (List price: $499.99).
Waterpik WP-660 Aquarius Water Flosser with 7 Tips (ADA Accepted) for $39.99 at Amazon (List price: $79.99).
Fleximounts 4×8-Foot Overhead Garage Storage Rack for $125.99 at Amazon (Coupon code: STORE130 – List price: $179.99).
Google Chromecast Media Streamer for $25 at Walmart (List price: $35).
Google Chromecast Ultra 4K Media Streamer for $49 at Walmart (List price: $69).
Apple Watch Series 1 38mm for $149 at Walmart (List price: $249).
Apple Watch Series 1 42mm for $179 at Walmart (List price: $279).
GoPro HERO 1440p Waterproof Action Camera (2018 Model) and $25 Walmart Gift Card for $178.99 at Walmart (List price: $199.99).
65″ Vizio E65-E1 4K Smart XLED TV for $599.99 at Walmart (List price: $898).
Dell XPS 8930 Intel Core i5 8400 Six-core Tower Gaming Desktop with 16GB RAM, AMD RX 560 GPU for $699.99 at Dell (List price: $899.99).
Dell XPS 13 i7-8550U Quad-core 13.3-inch 1920×1080 Touch Laptop with 16GB RAM for $1299.99 at Dell (List price: $1549.99).
Dell Inspiron 3268 Intel Core i3-7100 Small Desktop (4GB/1TB HDD) for $329.99 at Dell (List price: $449.99).
Dell Inspiron 3668 Intel Core i5-7400 Quad-core Desktop (8GB/1TB HDD) for $449.99 at Dell (List price: $629.99).
Alienware Aurora R7 Intel Core i7-8700 Six-core Gaming Desktop with GTX 1070, Dual Storage for $1399.99 at Dell (List price: $1709.99).
Alienware AW3418HW 34-inch Curved 2560×1080 WFHD 160Hz GSYNC IPS Gaming Monitor for $699.99 at Dell (List price: $1199.99).
Dell S2417DG 24-inch 2560×1440 G-Sync 1ms 165Hz Gaming Monitor and $100 Dell Gift Card for $379.99 at Dell (List price: $569.99).
Dell S2718D 27-inch 2560×1440 IPS HDR USB-C Monitor and $50 Gift Card for $349.99 at Dell (List price: $699.99).
# Chapter 9 Temporal consistency of categorical variables

Unlike continuous variables, for which averages, standard deviations, and ranges convey statistical and socioeconomic meaning, categorical variables are best analyzed by tabulating the frequency of their values. Figure 9.1 plots the relative frequencies of categorical variables over time for a single country. For example, the user may tabulate the absolute and relative frequencies of the values of categorical variables such as relationharm, marital, urban, and educat7 for Pakistan, as presented below.

Figure 9.2 does the same, but presents the results for all eight countries at the same time. For example, the frequency of the values "Yes" and "No" for the harmonized variable ownhouse is presented below.

This tool is useful for evaluating whether categorical variables have been harmonized properly. A large change in the relative frequency of the values of a categorical variable could indicate that the harmonization process has been inconsistent. For example, if someone mistakenly exchanges the value labels for urban (i.e., rural=1 urban=0 instead of rural=0 urban=1), the inconsistency with previous survey rounds can be easily detected in these dashboards. As another example, as of today, Jun/19/2019, the variable computer in Pakistan (in the Assets category) presents an anomalous trend from 2013 to 2015. This clearly indicates an error either in the harmonization or in the raw data.
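The same check can be scripted outside the dashboards. The following is a minimal pandas sketch, assuming a hypothetical harmonized extract with illustrative `year` and `urban` columns (not the actual SARMD schema) and an assumed 30-point jump threshold:

```python
import pandas as pd

def flag_label_jumps(df, group, cat, threshold=0.30):
    """Tabulate the relative frequency of each category per survey round and
    flag rounds where any category's share moves by more than `threshold` --
    a possible sign of swapped value labels during harmonization."""
    freq = (df.groupby(group)[cat]
              .value_counts(normalize=True)
              .unstack(fill_value=0))
    jumps = freq.diff().abs().max(axis=1) > threshold
    return freq, jumps[jumps].index.tolist()

# Toy data: the urban share jumps from 0.5 to 1.0 in 2015, as if the
# rural/urban labels had been exchanged in that round.
df = pd.DataFrame({
    "year":  [2013] * 4 + [2014] * 4 + [2015] * 4,
    "urban": [0, 0, 0, 1,  0, 0, 1, 1,  1, 1, 1, 1],
})
freq, flagged = flag_label_jumps(df, "year", "urban")
```

Here `flagged` contains 2015 only: the 2013-to-2014 shift (25 points) stays under the threshold, while the 2014-to-2015 shift (50 points) is flagged for review.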
Mary I, also known as Mary Tudor, born on 18 February 1516 and died on 17 November 1558, was the first queen regnant of England and Ireland, from 1553 until her death, and, through her husband King Philip II of Spain, queen of Spain, Sicily and Naples, Duchess of Burgundy, Milan, Brabant, Luxembourg and Limburg, Countess of Flanders and Hainaut, and Countess Palatine of Burgundy. Born of the unhappy marriage of King Henry VIII and Catherine of Aragon, Mary was removed from the succession to the throne in 1534 by the First Act of Succession, after her father's remarriage to Anne Boleyn. She only became eligible for the succession again, after her half-brother Edward but before her half-sister Elizabeth, in 1543, with the Third Act of Succession. Because Mary was Catholic, her young half-brother Edward VI, who became king in 1547, tried to remove her from his succession, and on his death in 1553 their relative Jane Grey was proclaimed queen. Mary raised an army in East Anglia and deposed Jane, who was beheaded. Mary thus became the first woman in history to be crowned queen of England and to rule the country in her own name. In 1554 she married Philip of Spain and so became queen consort of Spain when he acceded to the throne in 1556. Mary's reign was marked by her attempts to rebuild Catholicism after the Protestant reigns of her half-brother and her father. More than 280 Protestants and dissenters were burned alive during the Marian persecutions; under her father's reign, the death toll had run to several thousand. This brutal repression earned her the nickname "Bloody Mary". The return to Catholicism was undone after her death in 1558 by her younger half-sister Elizabeth I.

Biography

Birth

Mary was born on 18 February 1516 at the Palace of Placentia in Greenwich, London. She was the daughter of King Henry VIII and his first wife Catherine of Aragon, and their only child to survive to adulthood.
Before Mary, Catherine had been pregnant four times; two of her pregnancies ended in miscarriage, while the two sons she bore, both named Henry, died within weeks of birth. Mary was baptised into the Catholic faith three days after her birth. Her godparents included her great-aunt Catherine of York, Lord Chancellor Thomas Wolsey and Agnes of Norfolk. The following year, Mary herself became a godmother at the christening of her cousin Frances Brandon. In 1520, the Countess of Salisbury, Margaret Pole, who had stood sponsor for Mary at her confirmation, was chosen to be her governess.

Adolescence

Mary was a precocious child. In 1520, when she was only four and a half, she played the harpsichord for a visiting French delegation. The queen was closely involved in her daughter's education and took advice from the Spanish humanist Juan Luis Vives, from whom she commissioned a treatise on the education of girls, De Institutione Feminae Christianae. By the age of nine, Mary could read and write Latin, and she studied French, Spanish, music, dancing and perhaps Greek. The king adored her and boasted about her to the Venetian ambassador Sebastian Giustiniani. Despite his affection for Mary, Henry was deeply disappointed not to have a son. As time passed, it became clear that the royal couple would have no more children and that Henry would have no legitimate male heir. In 1525, Henry sent Mary to Wales to preside, at least nominally, over the council charged with governing the region and the west of England. She was given her own court at Ludlow Castle and royal prerogatives normally reserved for the Prince of Wales. She was sometimes called Princess of Wales, even though she never technically bore the title.
She appears to have remained in the Welsh Marches for three years before returning to the area around London from 1528. Throughout Mary's childhood, Henry negotiated possible marriages for his daughter. When she was only two, she was promised to the son of King Francis I of France, the Dauphin Francis, but the contract was repudiated after three years. In 1522 it was agreed that she would marry her cousin, Emperor Charles V, but the engagement was broken off after a few years. Thomas Wolsey, Henry's chief adviser, then reopened negotiations with the French, and the king suggested that Mary marry Francis I himself, who was seeking an alliance with England. Under a new agreement, Mary would marry either Francis I or his second son Henry, Duke of Orleans, but Wolsey managed to negotiate an alliance with France without the need for a marriage. According to the Venetian diplomat Mario Savorgnano, Mary had become a beautiful and graceful young woman. Meanwhile, her parents' marriage was in jeopardy. Disappointed at having no male heir and impatient to remarry, Henry sought to have the union annulled, but the request was refused by Pope Clement VII. Citing passages of the Bible (Leviticus 20:21), the king argued that his marriage to Catherine was unclean because she was the widow of his brother Arthur, but Catherine maintained that their union had never been consummated. Indeed, that first marriage had been permitted by the previous pope on this very basis. Clement may have been influenced in his decision by Charles V, Catherine of Aragon's nephew, whose troops were occupying Rome in the course of the seventh Italian War. From 1531, Mary was often ill, with irregular menstruation and bouts of depression, though it is unknown whether this was caused by stress, puberty or disease. The king forbade her to see her mother, who was sent to live away from court at Kimbolton Castle.
Early in 1533, Henry secretly married his mistress Anne Boleyn, who was pregnant by him, and in May the Archbishop of Canterbury, Thomas Cranmer, formally annulled the marriage with Catherine. Catherine lost her title of queen and became Dowager Princess of Wales, while Mary was declared illegitimate and therefore unable to claim the throne on her father's death. The heir to the English crown became her half-sister Elizabeth, Anne's daughter. Mary's household was dissolved, and her servants, including the Countess of Salisbury, were dismissed. The refusal of Mary, then aged 17, to recognise Anne as queen and Elizabeth as princess infuriated the king, who in retaliation appointed her, in late 1533, a lady-in-waiting to Elizabeth at Hatfield Palace in Hertfordshire. Restricted in her movements and under strain, Mary was frequently ill. Among these illnesses must be counted the times Mary reported herself "sick" so as not to form part of Elizabeth's retinue when the household changed residence. The imperial ambassador Eustace Chapuys became her close adviser and tried without success to intercede on her behalf at court. Relations between Mary and her father were strained, and they did not speak to each other for three years. Defying her father's disapproval, Mary made a point of ordering court clothes in the colours she had worn as a child and as princess, breaking with the etiquette so dear to her father. Even though she and her mother were both ill, the king prevented her from visiting Catherine, and she was "inconsolable" when Catherine died on 7 January 1536. Catherine was buried in Peterborough Cathedral, while Mary was confined at Hunsdon in Hertfordshire.

Adulthood

In 1536, Anne Boleyn lost the king's favour and was beheaded. Elizabeth lost her title of princess and, like Mary, was removed from the order of succession.
Less than two weeks after Anne's execution, Henry married Jane Seymour, who pressed him to reconcile with his daughter. The king insisted that Mary recognise him as Supreme Head of the Church of England, repudiate papal authority, acknowledge that her parents' marriage had been unlawful, and accept her own illegitimacy. She tried to reconcile with him by submitting to his authority as far as her conscience allowed, but she was finally forced to sign a document accepting all her father's demands, which allowed her to regain her place at court. Henry granted her a household, and the records of her expenses for this period show that Hatfield Palace, the Palace of Beaulieu, Richmond and Hunsdon were among her principal residences, along with her father's palaces of Placentia, Westminster and Hampton Court. Her money went on clothes and on card games, one of her favourite pastimes. In the north of England, rebels, including Lord Hussey, Mary's former chamberlain, rose against Henry's religious reforms, and one of their demands was that Mary be restored to legitimacy. This revolt, known as the Pilgrimage of Grace, was violently suppressed. Lord Hussey and many of the rebels were executed, but nothing indicates that Mary was involved. In 1537, Jane died giving birth to a son, Edward, and Mary was chosen to be his godmother. Mary was courted by Philip of Palatinate-Neuburg from 1539, but he was a Lutheran and his proposals of marriage were rejected. During 1539, the king's chief minister, Thomas Cromwell, negotiated an alliance with the Duchy of Cleves. Proposals for a marriage between Mary and the Duke of Cleves came to nothing, but a union between Henry and the duke's sister Anne was agreed.
When the king met Anne for the first time at the end of the year, a week before the ceremony, he found her unattractive but was unable, for diplomatic reasons and in the absence of a suitable pretext, to call off the marriage. Cromwell lost the royal favour and was arrested for treason in 1540 on the charge that he was plotting to marry Mary. Anne consented to the annulment of the marriage, which had not been consummated, and Cromwell was beheaded. In 1541, Henry had the Countess of Salisbury, Mary's former governess and godmother, beheaded on the pretext of a Catholic plot in which her son Reginald Pole was implicated. Her execution was botched by an inexperienced headsman. In 1542, after the execution of Henry's fifth wife, Catherine Howard, for adultery and treason, the unmarried king invited Mary to attend the Christmas celebrations, where she acted as hostess. In 1543, Henry married his sixth and last wife, Catherine Parr, who managed to reconcile the family. The king restored his daughters to the order of succession, although under the Third Act of Succession they came after Edward; both nevertheless remained legally illegitimate. Henry died in 1547 and Edward VI succeeded him. Mary inherited estates in Norfolk, Suffolk and Essex, and received Hunsdon and the Palace of Beaulieu as personal residences. As Edward was only nine, power was exercised by a regency council dominated by Protestants, who tried to establish their faith throughout England. The Act of Uniformity of 1549 made Protestant rites compulsory, such as the use of Thomas Cranmer's Book of Common Prayer. Mary remained faithful to Catholicism and, in defiance, continued to have the traditional Mass celebrated in her chapel. She asked her cousin Charles V to apply diplomatic pressure so that she could practise her religion. For most of Edward's reign, Mary stayed on her own estates and rarely went to court.
Religion was a source of tension between Marie and her half-brother. During the Christmas celebrations of 1550,  rebuked Marie in front of the whole court for refusing to respect his laws on religion, and both were reduced to tears. Marie continued to refuse to abandon Catholicism and would not give up her demands.

Accession to the throne

On ,  died of a lung infection, probably tuberculosis, at the age of . He did not want Marie to become queen, fearing that she would restore Catholicism and undo his reforms and those of his father; his advisers pointed out, however, that he could not exclude only one of his half-sisters from the order of succession and would also have to set aside Élisabeth, even though she was Anglican. Guided in particular by John Dudley, he decided in his will to exclude both of his sisters. In violation of the Third Act of Succession, which had restored Marie and Élisabeth to the order of succession,  designated Jeanne Grey, by letters patent, to succeed him. She was Dudley's daughter-in-law and the granddaughter of 's younger sister, Marie, while her mother was Frances Brandon, who was Marie's cousin and godmother. Just before 's death, Marie was summoned to London to see her dying half-brother. She was warned, however, that the summons was a pretext to arrest her and thereby ease Jeanne's accession to the throne. Instead of travelling to London from her residence at Hunsdon, Marie fled to East Anglia, where she owned many properties and where Dudley had violently suppressed Kett's rebellion. On , she wrote from Kenninghall to the Privy Council, asking that Édouard's letters patent be recognized as an act of treason under the Treason Act of 1547 and that she be proclaimed queen. On , Jeanne was proclaimed queen by Dudley, and the same day Marie's letter reached the Privy Council in London. 
Two days later, Marie and her supporters had assembled an army at Framlingham castle. Dudley lost his support and Jeanne was deposed on . She and Dudley were imprisoned in the Tower of London. Marie entered the capital in triumph on , alongside Élisabeth and a procession of . One of Marie's first decisions as queen was to free the Catholic councillors Thomas Howard, Étienne Gardiner and Édouard Courtenay, who had been imprisoned in the Tower of London. She understood that Jeanne had been only a pawn in Dudley's scheme, and he was the only person of his rank executed for high treason immediately after her accession to the throne. Jeanne and her husband, Guilford Dudley, though found guilty, were held in the Tower of London, while Jeanne's father, Henry Grey, was released. Marie was in a difficult position, since almost all the members of the Privy Council had been implicated in the conspiracy to place Jeanne on the throne. She appointed Gardiner to the council and made him bishop of Winchester and lord chancellor, offices he held until his death in . On , Gardiner crowned Marie in Westminster Abbey.

Marriage

At ,  began to concentrate on finding a consort in order to produce an heir and so prevent the Protestant Élisabeth from succeeding her to the throne. Édouard Courtenay and Reginald Pole were considered possible suitors, but her cousin Charles Quint suggested that she marry his only son, prince Philippe of Spain. Philippe had a son from a previous marriage to Marie-Manuelle of Portugal, who had died shortly after giving birth. As part of the negotiations, a portrait of Philippe painted by Le Titien was sent to England in . Gardiner and the House of Commons tried without success to persuade her to marry an Englishman, fearing that England would fall under Habsburg control. 
The union was unpopular in England; Gardiner and his allies opposed it out of patriotism, while the Protestants wanted no Catholic monarchy. When the queen insisted on marrying Philippe, Thomas Wyatt the Younger organized an uprising in Kent, involving Henry Grey, to place Élisabeth on the throne.  declared publicly that she would summon Parliament to discuss the marriage and would decline the union if the assembly judged that it was not to the country's advantage. The rebels were defeated on reaching London; Wyatt, Henry Grey, his daughter Jeanne and her husband Guildford Dudley were executed. For his part in the plot, Courtenay was imprisoned and then exiled. Although she protested her innocence in the uprising, Élisabeth was held for two months in the Tower of London, then at the palace of Woodstock, and was confined under house arrest at her palace of Hatfield until the end of Marie's reign. Apart from the contested reigns of Jeanne Grey and Mathilde l'Emperesse,  was the first queen regnant of England. Moreover, under the English custom of the , a woman's properties and titles also became those of her husband, and some feared that whichever man she married would become de facto king of England. Although 's maternal grandparents,  of Castille and  of Aragon, had each retained sovereignty over their own realms during their marriage, there was no precedent of this kind in England. Under the marriage treaty, Philippe would receive the title of "king of England", all official documents would be signed with both their names, and Parliament would be summoned under the joint authority of the couple until 's death. 
England would not be obliged to provide military support for the wars of Philippe's father, and Philippe could not act without his wife's consent or appoint foreigners to the English administration; nor could he claim the throne if  died before him. Philippe was unhappy with these conditions but accepted them so that the marriage could take place. He had no feelings for  and wished to marry only for political and strategic reasons; his adviser, Rui Gomes da Silva, wrote to a correspondent in Brussels that . To raise his son to the rank of his future wife, Charles Quint ceded him the Crown of Naples together with his claims to the kingdom of Jérusalem.  thus became queen of Naples and titular queen of Jérusalem upon her marriage. The ceremony took place in Winchester cathedral on , two days after their first meeting. Philippe spoke no English, and they communicated in Spanish, French and Latin.

False pregnancy

In ,  stopped having her periods. She gained weight and suffered from nausea on waking. Almost the whole court, including her physicians, believed she was pregnant. Parliament passed an act providing that Philippe would become regent should  die in childbirth. In the last week of , Élisabeth was allowed to leave her residence and was summoned to court to attend the birth, which was thought to be imminent. According to the Venetian ambassador Giovanni Michieli, Philippe planned to marry Élisabeth if the queen died, but in a letter to his brother-in-law Maximilien of Austria he expressed doubts as to whether the pregnancy was real. Celebrations were organized by the diocese of London at the end of  after rumours announcing the birth of a son had spread throughout Europe. 
With the birth still not having taken place by  or , the hypothesis of a false pregnancy gained ground;  continued to show the signs of a pregnancy until , when her abdomen lost its volume. It was probably a phantom pregnancy, perhaps brought on by her desire to have a child. In , shortly after the humiliation of this false pregnancy, which  judged to be the  for her , Philippe left England to fight the French in Flanders.  was heartbroken and sank into a deep depression. Michieli was moved by the grief of the queen, who was  and was inconsolable after her husband's departure. Élisabeth, apparently restored to favour, remained at court until . In the absence of children, Philippe feared that after  and Élisabeth the Crown would pass to  of Scotland, who was promised to the dauphin of France. Philippe sought to persuade Élisabeth to marry his cousin Emmanuel-Philibert of Savoie in order to guarantee a Catholic succession and preserve Habsburg interests in England, but she refused, and the consent of Parliament would have been difficult to obtain.

Religious policy

During the first month of her reign,  proclaimed that she would force none of her subjects to follow her religion, but by the end of  several reforming churchmen, such as , , John Hooper, Hugh Latimer and Thomas Cranmer, had been imprisoned. The first Parliament, summoned by the queen in , declared her parents' marriage valid and repealed the religious laws enacted under . Church doctrine reverted to that laid down by the Act of Six Articles of 1539, which forbade the marriage of the clergy; those so married were deprived of their benefices.  had always rejected the break with Rome instituted by her father and the establishment of Anglicanism. 
She and her husband wanted to reconcile England with Rome, and Philippe persuaded Parliament to repeal the religious laws passed under , bringing the Church of England back under the jurisdiction of the Vatican. The negotiations took several months, and the pope had to make a major concession: the properties and goods confiscated under  would not be returned to the Roman Church. In 1554, the Act of Supremacy was repealed. At the end of the year the pope accepted the compromise and the laws against heretics were restored. During the Marian persecutions, many Protestants were executed from  onward. Some of the better-off, such as John Foxe, chose exile, and more than 800 left the country. The archbishop of Canterbury, Thomas Cranmer, was forced to watch the bishops Nicholas Ridley and Hugh Latimer being burned alive on . He recanted, rejected Protestant theology and returned to the Catholic faith. Under the normal application of the law he should have been absolved, but the queen refused to pardon him, and at his execution on  he reasserted his adherence to Protestantism. In all,  were executed during 's reign, most of them by burning. The burnings proved deeply unpopular, and even , one of Philippe's religious advisers, condemned them, while Simon Renard, another of his advisers, warned him that  might .  nevertheless continued her policy of persecution until her death, which sharpened anti-Catholic and anti-Spanish feeling. The victims of this repression were regarded as martyrs by the Protestants, and John Foxe devoted a long section of his Book of Martyrs to them. Reginald Pole, whose mother had been executed by , arrived in England as papal legate in . He was ordained priest and appointed archbishop of Canterbury immediately after Cranmer's execution in . 
Foreign policy

Following the conquest of Ireland, English settlers were planted in the Midlands to protect the Dublin region against Irish attacks. The  and  (today the counties of Laois and Offaly) were founded and their colonization began. Their principal towns were, respectively, Maryborough (now Portlaoise) and Philipstown (now Daingean). In , 's father-in-law abdicated and Philippe became king of Spain, and  thus queen. Philippe was proclaimed king in Brussels but remained in England. A fragile peace was signed with France in , but the following month the French ambassador in England, Antoine de Noailles, was implicated in a plot against the queen when Henry Dudley, a cousin of John Dudley, tried to raise an army in France. Noailles left England and Dudley was exiled to France. Philippe returned to England from  to  to persuade  to support Spain in a new war against France.  was in favour, but her advisers were opposed, arguing that trade with France would be interrupted, that it would contravene the terms of the marriage treaty, and that the economic slump and poor harvests limited England's military capabilities. War was declared only in , after Reginald Pole's nephew, Thomas Stafford, invaded England and seized Scarborough castle with French help in an attempt to overthrow . The war damaged relations between England and the papacy, as the pope was allied with king . In , French forces took Calais, the last English possession on the continent. Defending the territory had been a financial burden, but its loss damaged 's prestige.

Economic policy

's reign was marked by exceptionally heavy rainfall, which damaged the harvests and caused food shortages. Another problem was the decline of the textile trade at Anvers. 
Despite the marriage to Philippe, England gained nothing from Spain's extremely lucrative trade with the New World. The Spanish trade routes were tightly controlled, and  could not support privateering and piracy since she was married to the king of Spain. In an attempt to develop trade and sustain the English economy, the queen's advisers continued Dudley's policy of seeking new commercial outlets. She granted a royal charter to the Compagnie de Moscovie, whose first director was Sébastien Cabot, and commissioned an atlas from the Portuguese cartographer Diogo Homem. In financial matters, the English administration tried to reconcile a modern form of government, with its higher expenditure, with a largely medieval tax system.  kept in office the lord treasurer appointed by , , and asked him to oversee the collection of taxes and duties. In 1558, the government published a revision of the  listing the customs duties applying to all imports; the document remained largely unchanged until 1604. The English currency had been debased under  and , and  prepared a monetary reform which, however, was not implemented before her death.

Death

After Philippe's return in 1557,  believed she was pregnant again and expected to give birth in . She stipulated in her will that Philippe should be regent during the minority of her child. The pregnancy, however, did not exist, and  was forced to accept that Élisabeth would be her successor.  fell ill in  and died on , at the age of , at the palais Saint James, during an influenza epidemic that also carried off Reginald Pole the same day. She had been weakened and may have suffered from an ovarian cyst or uterine cancer. Her half-sister Élisabeth succeeded her. Philippe, who was in Brussels, wrote to his sister Jeanne of Austria: . 
Despite the provisions of her will,  was not buried beside her mother; she was interred in Westminster Abbey on , in a vault that she shares with . After acceding to the English throne in 1603,  had a plaque added to the tomb bearing the inscription in Latin:  ().

Legacy

At the funeral service, the bishop of Winchester, John White, delivered a eulogy of : . She was the first woman to occupy the English throne durably, despite strong opposition, and enjoyed broad popular support at the beginning of her reign, notably among Catholics. Catholic historians, such as John Lingard, have argued that her policies failed not because they were bad but because of the short duration of her reign and weather-related problems beyond her control. Her marriage to Philippe proved particularly unpopular, and her religious policies created a deep resentment that was compounded by the defeats against France. Philippe spent much of his time on the continent, leaving the queen grieving and depressed by his absence and by her inability to bear a child. After 's death, he considered marrying , but she refused. Thirty years later, he sent the Invincible Armada to overthrow her, with no more success. By the , the persecutions of Protestants had earned  the nickname  ("Bloody Mary"). John Knox attacked her in his The First Blast of the Trumpet Against the Monstrous Regiment of Women, published in 1558, and she was violently vilified in John Foxe's Book of Martyrs, published in 1563, five years after her death. Later editions of the work remained popular with Protestants over the following centuries and contributed to the perception of  as a bloodthirsty tyrant. In the middle of the , the historian H. F. M. Prescott attempted to reassess the traditional view of an intolerant and authoritarian queen, and more recent studies treat the older assessments with greater scepticism. Even though 's reign was ultimately unpopular and ineffective, the fiscal reforms and the naval and colonial expansion that were later celebrated as achievements of the Elizabethan era were begun under .

Film and television

 has been played on screen by: 1911: Maria Brioschi in Maria Tudor by Giuseppe De Liguoro; Maria Gasparini in Regina per quindici giorni by Mario Caserini; 1912: Jeanne Delvair in Marie Tudor by Albert Capellani; 1917: Jeanne Delvair in the film adaptation of Victor Hugo's play Marie Tudor (1833); 1920: Ellen Richter in Maria Tudor by Adolf Gärtner and Willi Wolff; 1936: Gwen Ffrangcon-Davies in Marie Tudor by Robert Stevenson; 1937: Yvette Pienne in Les Perles de la couronne by Sacha Guitry; 1940: Zarah Leander in Marie Stuart by Carl Froelich; 1953: Peggy Thorpe-Bates in The Young Elizabeth by Michael Henderson; Ann Tyrrell in La Reine vierge by George Sidney; 1955: Jeanette Nolan in The Last Day of an English Queen; 1960: Julie Sommars in The Prince and the Pauper by David Greene; 1964: Katherine Blake in The Young Elizabeth by Charles Jarrott; 1965: Françoise Christophe (adult) and Caty Fraisse (adolescent) in Marie Tudor by Abel Gance; 1967: Amaro Pamplona in María Tudor; Katharine Blake in The Young Elizabeth by Charles Jarrott; 1969: Nicola Pagett in Anne des mille jours by Charles Jarrott; 1971: Daphne Slater in the series ; 1974: Mireille Delcroix in La Reine galante by Michel Roux; 1975: Nadine Alari in Marie Tudor by Claude Dagues; 1977: Inge Keller in Die Liebe und die Königin by Martin Eckermann; 1986: Jane Lapotaire in the film Lady Jane; 1993: Mary MacDonald in King & Queens by Graham Holloway; 1994: Christèle Wurmser 
in Marie Tudor by Robert Mazoyer; 1998: Kathy Burke in the film Elizabeth; 2005: Emily Smith (child) and Joanne Whalley (adult) in The Virgin Queen by Coky Giedroyc; 2007: Blathnaid McKeown and Sarah Bolger in the series Les Tudors; 2008: Constance Stride in the film Deux Sœurs pour un roi; Miranda French in the film The Twisted Tale of Bloody Mary; 2011: uncredited actress in Marie Tudor by Pascal Faber; 2015: Lily Lesser in Dans l'ombre des Tudors, a series by Peter Kosminsky; 2019: Billie Gadsdon in the series The Spanish Princess; Vanessa Valens in La Guerre des trônes, la véritable histoire de l'Europe: Jeu de dames (1542-1559), a series by Alain Brunard and Vanessa Pontet; uncredited actress in Marie la sanglante sur le trône d'Angleterre, Secret d'histoires; 2022: Romola Garai in the series .

Titles and coat of arms

Full titulature

When  ascended the throne, she was proclaimed queen in the same manner as  and : . The claims to the throne of France were purely symbolic and had been invoked by every king of England since , however much French territory was actually controlled. After her marriage to Philippe, the joint title reflected the possessions of both spouses: . This title, in use from 1554, was modified when  inherited the Spanish Crown in 1556: .

Coat of arms

's arms differed from those of her predecessors since : quarterly, 1 and 3 azure three fleurs-de-lis or (for France), 2 and 4 gules three lions passant guardant or (for England). Her arms were sometimes displayed side by side with those of Philippe. She adopted the motto  ("Truth is the daughter of Time"). 
\section{\Large Supplemental Material} \end{center} \setcounter{equation}{0} \setcounter{figure}{0} \setcounter{table}{0} \setcounter{page}{1} \makeatletter \renewcommand{\theequation}{S\arabic{equation}} \renewcommand{\thefigure}{S\arabic{figure}} \renewcommand{\bibnumfmt}[1]{[S#1]} \renewcommand{\citenumfont}[1]{S#1} \begin{quote} In this Supplemental Material to the article on ``Exponentially-Enhanced Light-Matter Interaction, Cooperativities, and Steady-State Entanglement Using Parametric Amplification'', we first present more details of the elimination of squeezing-induced noises to show an exponential enhancement of the light-matter interaction, as well as of the cooperativity. Then, we derive an effective master equation including an effective Hamiltonian and effective Lindblad operators, and also give a detailed description of our entanglement preparation method. Finally, we discuss, in detail, the effects of counter-rotating terms and show how to remove them. \end{quote} \section{Elimination of squeezing-induced fluctuation noise} To demonstrate more explicitly the elimination of the squeezing-induced noise, we now derive the Lindblad master equation for our atom-cavity system. In addition to an exponential enhancement of the atom-cavity coupling, the squeezing can introduce undesired noise, including thermal noise and two-photon correlations, into the cavity mode. In order to avoid such noises, our approach employs an auxiliary, high-bandwidth squeezed-vacuum field, which can be experimentally generated, e.g., via optical parametric amplification~\cite{Xast2013high,Xserikawa2016creation}. Owing to the bandwidth of the squeezed-vacuum field of up to $\sim$ GHz, the auxiliary field can be thought of as a squeezed-vacuum reservoir for a typical cavity mode with its bandwidth of order of MHz. When being coupled to the cavity mode, the auxiliary field can suppress or even completely eliminate these undesired types of noise of the squeezed-cavity mode. 
The Hamiltonian determining the unitary dynamics of our atom-cavity system, as shown in Fig.~1, is given by Eq.~(1) and, for convenience, is recalled here \begin{align} \label{seq:full_Hamiltonian} H\left(t\right)=\;&\sum_{k}\left[\Delta_{e}|e\rangle_{k}\langle e|+\Delta_{f}|f\rangle_{k}\langle f|\right]+H_{\text{AC}}+H_{\text{NL}}\nonumber\\ &+\frac{1}{2}\Omega_{\text{MW}}\sum_{k}\left(|f\rangle_{k}\langle g|+\text{H.c.}\right)+V\left(t\right),\\ \label{seq:DPA} H_{\text{NL}}=\;&\Delta_{c}a^{\dag}a+\frac{1}{2}\Omega_{p}\left[\exp\left(i\theta_{p}\right)a^{2}+\text{H.c.}\right],\\ \label{seq:atom_cavity_coupling} H_{\text{AC}}=\;&g\sum_{k}\left(a|e\rangle_{k}\langle f|+\text{H.c.}\right),\\ \label{seq:laserdrive} V\left(t\right)=\;&\frac{1}{2}\Omega\exp\left(i\beta t\right)\sum_{k}\left[\left(-1\right)^{k-1}|g\rangle_{k}\langle e|+\text{H.c.}\right]. \end{align} Here $k=1,2$ labels the atoms, $g$ is the atom-cavity coupling, the annihilation operator $a$ corresponds to the cavity mode, $\Omega$ ($\Omega_{\text{MW}}$) is the Rabi frequency of the laser (microwave) drive applied to the atoms, and $\Omega_{p}$ ($\theta_{p}$) is the amplitude (phase) of the strong pump applied to the nonlinear medium. We have defined the following detunings: \begin{align} \Delta_{c}=\;&\omega_{c}-\omega_{p}/2,\\ \Delta_{e}=\;&\omega_{e}-\omega_{g}-\omega_{\text{MW}}-\omega_{p}/2,\\ \Delta_{f}=\;&\omega_{f}-\omega_{g}-\omega_{\text{MW}},\\ \beta=\;&\omega_{L}-\omega_{\text{MW}}-\omega_{p}/2, \end{align} where $\omega_{c}$ is the cavity frequency, $\omega_{L}$ ($\omega_{\text{MW}}$) is the frequency of the laser (microwave) drive applied to the atoms, $\omega_{p}$ is the frequency of the strong pump applied to the nonlinear medium, and $\omega_{z}$ is the frequency associated with level $|z\rangle$ ($z=g,f,e$). 
When the cavity mode is coupled to the squeezed-vacuum reservoir with a squeezing parameter $r_{e}$ and a reference phase $\theta_{e}$, the dynamics of the atom-cavity system is described by the following master equation~\cite{Xscully1997book}: \begin{align}\label{Seq:full_masterequation} \dot{\rho}\left(t\right)=&i\left[\rho\left(t\right),H\left(t\right)\right] -\frac{1}{2}\Bigg\{\sum_{x'}\mathcal{L}\left(L_{x'}\right)\rho\left(t\right) +\left(N+1\right)\mathcal{L}\left(L_{a}\right)\rho\left(t\right)\nonumber\\ &+N\mathcal{L}\left(L_{a}^{\dag}\right)\rho\left(t\right) -M\mathcal{L}^{\prime}\left(L_{a}\right)\rho\left(t\right)-M^{*}\mathcal{L}^{\prime}\left(L_{a}^{\dag}\right)\rho\left(t\right)\Bigg\}, \end{align} where $\rho\left(t\right)$ is the density operator of the system, a Lindblad operator $L_{a}=\sqrt{\kappa}a$ describes the cavity decay with a rate $\kappa$, and \begin{equation} N=\sinh^{2}\left(r_{e}\right)\quad \text{and}\quad M=\cosh\left(r_{e}\right)\sinh\left(r_{e}\right)e^{-i\theta_{e}} \label{N11} \end{equation} describe thermal noise and two-photon correlations caused by the squeezed-vacuum reservoir, respectively. Moreover, \begin{eqnarray} \mathcal{L}\left(o\right)\rho\left(t\right)&=&o^{\dag}o\rho\left(t\right)-2o\rho\left(t\right) o^{\dag}+\rho\left(t\right) o^{\dag}o, \\ \mathcal{L}'\left(o\right)\rho\left(t\right)&=&oo\rho\left(t\right)-2o\rho\left(t\right) o+\rho\left(t\right) oo \label{N12} \end{eqnarray} and the sum runs over all atomic spontaneous emissions, including the Lindblad operators \begin{equation} L_{g1}=\sqrt{\gamma_{g}}|g\rangle_{1}\langle e|,\quad L_{f1}=\sqrt{\gamma_{f}}|f\rangle_{1}\langle e|,\quad L_{g2}=\sqrt{\gamma_{g}}|g\rangle_{2}\langle e|,\quad L_{f2}=\sqrt{\gamma_{f}}|f\rangle_{2}\langle e|. 
\label{N1} \end{equation} Note that, here, we have assumed that the atoms are coupled to a thermal reservoir and that in each atom, $|e\rangle$ decays to $|g\rangle$ and $|f\rangle$, respectively, with rates $\gamma_{g}$ and $\gamma_{f}$. When pumped, the nonlinear medium can squeeze the cavity mode along the axis rotated at an angle $\left(\pi-\theta_{p}\right)/2$, with a squeezing parameter $r_{p}=\left(1/4\right)\ln\left[\left(1+\alpha\right)/\left(1-\alpha\right)\right]$, where $\alpha=\Omega_{p}/\Delta_{c}$. This results in a squeezed-cavity mode, as described by the Bogoliubov transformation $a_{s}=\cosh\left(r_{p}\right)a+\exp\left(-i\theta_{p}\right)\sinh\left(r_{p}\right) a^{\dag}$~\cite{Xscully1997book}, such that \begin{align}\label{seq:freesqueezedmode} H_{\text{NL}}=\omega_{s}a_{s}^{\dag}a_{s}, \end{align} where $\omega_{s}=\Delta_{c}\sqrt{1-\alpha^2}$ is the squeezed-cavity frequency. In terms of the mode $a_{s}$, the atom-cavity interaction Hamiltonian $H_{\text{AC}}$ in Eq.~(\ref{seq:atom_cavity_coupling}) is reexpressed as \begin{align}\label{seq:fullsquzeedmodeandatoms} H_{\text{AC}}=\sum_{k}\left[\left(g_{s}a_{s}-g^{\prime}_{s}a_{s}^{\dag}\right)|e\rangle_{k}\langle f|+\text{H.c.}\right], \end{align} where $g_{s}=g\cosh\left(r_{p}\right)$ and $g_{s}^{\prime}=\exp\left(-i\theta_{p}\right)g\sinh\left(r_{p}\right)$. Under the assumption that $|g^{\prime}_{s}|/\left(\omega_{s}+\Delta_{e}-\Delta_{f}\right)\ll1$, we can make the rotating-wave approximation to neglect the counter-rotating terms, which results in a standard Jaynes-Cummings Hamiltonian \begin{align} H_{\text{ASC}}=g_{s}\sum_{k}\left(a_{s}|e\rangle_{k}\langle f|+\text{H.c.}\right). \end{align} This Hamiltonian describes an interaction between the atoms and the squeezed-cavity mode, and demonstrates that, as long as $r_{p}\geq1$, there is an exponential enhancement in the atom-cavity coupling, \begin{equation} \frac{g_{s}}{g}\sim\frac{1}{2}\exp\left(r_{p}\right). 
\label{N14} \end{equation} Furthermore, the master equation in Eq.~(\ref{Seq:full_masterequation}) can accordingly be reexpressed as \begin{align} \dot{\rho}\left(t\right)=\;&i\left[\rho\left(t\right),H_{s}\left(t\right)\right]\nonumber\\ &-\frac{1}{2}\bigg\{\sum_{x'}\mathcal{L}\left(L_{x'}\right)\rho\left(t\right) +\left(N_{s}+1\right)\mathcal{L}\left(L_{as}\right)\rho\left(t\right)\nonumber\\ &+N_{s}\mathcal{L}\left(L_{as}^{\dag}\right)\rho\left(t\right) -M_{s}\mathcal{L}^{\prime}\left(L_{as}\right)\rho\left(t\right)-M_{s}^{*}\mathcal{L}^{\prime}\left(L_{as}^{\dag}\right)\rho\left(t\right)\bigg\},\\ \label{seq_reducedH} H_{s}\left(t\right)=&\sum_{k}\left[\Delta_{e}|e\rangle_{k}\langle e|+\Delta_{f}|f\rangle_{k}\langle f|\right]+\omega_{s}a_{s}^{\dag}a_{s}+H_{\text{ASC}}\nonumber\\ &+\frac{1}{2}\Omega_{\text{MW}}\sum_{k}\left(|f\rangle_{k}\langle g|+\text{H.c.}\right)+V\left(t\right), \end{align} where $N_{s}$ and $M_{s}$ are given, respectively, by \begin{align} \label{effective-thermal-noise} N_{s}=&\cosh^{2}\left(r_{p}\right)\sinh^{2}\left(r_{e}\right)+\sinh^{2}\left(r_{p}\right)\cosh^{2}\left(r_{e}\right)\nonumber\\ &+\frac{1}{2}\sinh\left(2r_{p}\right)\sinh\left(2r_{e}\right)\cos\left(\theta_{e}+\theta_{p}\right),\\ \label{effective-tow-photon-correlation} M_{s}=&\exp\left(i\theta_{p}\right)\left[\sinh\left(r_{p}\right)\cosh\left(r_{e}\right)+\exp\left[-i\left(\theta_{e}+\theta_{p}\right)\right] \cosh\left(r_{p}\right)\sinh\left(r_{e}\right)\right]\nonumber\\ &\times\left[\cosh\left(r_{p}\right)\cosh\left(r_{e}\right)+\exp\left[i\left(\theta_{p}+\theta_{e}\right)\right]\sinh\left(r_{e}\right) \sinh\left(r_{p}\right)\right], \end{align} corresponding to an effective thermal noise and two-photon correlations of the squeezed-cavity mode, and where $L_{\text{as}}=\sqrt{\kappa}a_{s}$ is a Lindblad operator corresponding to the decay of the squeezed-cavity mode, $g_{s}=g\cosh\left(r_{p}\right)$ is the enhanced, controllable atom-cavity coupling. 
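As a quick numerical sanity check of the coupling enhancement $g_{s}/g=\cosh\left(r_{p}\right)$ and of the large-$r_{p}$ approximation $g_{s}/g\sim\exp\left(r_{p}\right)/2$, the following standalone sketch (the pump ratio $\alpha=\Omega_{p}/\Delta_{c}$ is chosen arbitrarily and is not a value from the text) evaluates both expressions:

```python
import math

# Illustrative pump-to-detuning ratio alpha = Omega_p / Delta_c (must satisfy |alpha| < 1)
alpha = 0.98

# Squeezing parameter r_p = (1/4) ln[(1 + alpha) / (1 - alpha)]
r_p = 0.25 * math.log((1 + alpha) / (1 - alpha))

# Exact enhancement g_s / g = cosh(r_p), and its approximation exp(r_p) / 2
enhancement = math.cosh(r_p)
approx = 0.5 * math.exp(r_p)

print(r_p, enhancement, approx)  # the two enhancement values approach each other as r_p grows
```

Pushing $\alpha$ closer to 1 drives $r_{p}$, and hence the enhancement, up without bound, which is why the coupling is said to be exponentially enhanced.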
We have neglected the counter-rotating terms to obtain the Hamiltonian $H_{s}$. From Eqs.~(\ref{effective-thermal-noise}) and (\ref{effective-tow-photon-correlation}), we can, for $r_{e}=0$, recover the noise induced solely by squeezing the cavity mode. However, when choosing $r_{e}=r_{p}$ and $\theta_{e}+\theta_{p}=\pm n\pi$ ($n=1,3,5,\cdots$), we have \begin{align} N_{s}=M_{s}=0, \end{align} so that the master equation simplifies to the Lindblad form \begin{align}\label{Seq:simplified_masterequation} \dot{\rho}\left(t\right)=i\left[\rho\left(t\right),H_{s}\left(t\right)\right]-\frac{1}{2}\sum_{x}\mathcal{L}\left(L_{x}\right)\rho\left(t\right). \end{align} Here, the sum runs over all dissipative processes, including atomic spontaneous emission and squeezed-cavity decay. From Eq.~(\ref{Seq:simplified_masterequation}), we find that the squeezed-cavity mode is equivalently coupled to a thermal-vacuum reservoir, and the squeezing-induced noises are completely removed, as desired. Therefore, we can define the effective cooperativity $C_{s}=g_{s}^{2}/\left(\kappa\gamma\right)$, and obtain an exponential enhancement of the atom-cavity cooperativity $C=g^{2}/\left(\kappa\gamma\right)$, that is, \begin{align} \frac{C_{s}}{C}=\cosh^{2}\left(r_{p}\right)\sim\frac{1}{4}\exp\left(2r_{p}\right). \end{align} This can be used to improve the quality of dissipative entanglement preparation. The resulting entanglement infidelity is no longer lower-bounded by the cooperativity $C$ of the atom-cavity system and could, in principle, be made very close to zero. Our method uses a squeezed-vacuum field to suppress the noise of the squeezed-cavity mode, including thermal noise and two-photon correlations. This makes the squeezed-cavity mode equivalently coupled to a thermal-vacuum reservoir. Therefore, this method only changes the environment of the squeezed-cavity mode, and cannot cause the cavity mode to violate the Heisenberg uncertainty principle. 
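The cancellation condition can be verified numerically. The sketch below (parameter values are arbitrary illustrations, not values from the text) evaluates $N_{s}$ and $M_{s}$ from Eqs.~(\ref{effective-thermal-noise}) and (\ref{effective-tow-photon-correlation}) for the matched reservoir $r_{e}=r_{p}$, $\theta_{e}+\theta_{p}=\pi$, where both vanish, and for the unmatched case $r_{e}=0$, where the squeezing-induced thermal noise $N_{s}=\sinh^{2}\left(r_{p}\right)$ survives:

```python
import cmath
import math

def noise_terms(r_p, theta_p, r_e, theta_e):
    """Effective thermal noise N_s and two-photon correlation M_s of the squeezed-cavity mode."""
    n_s = (math.cosh(r_p) ** 2 * math.sinh(r_e) ** 2
           + math.sinh(r_p) ** 2 * math.cosh(r_e) ** 2
           + 0.5 * math.sinh(2 * r_p) * math.sinh(2 * r_e) * math.cos(theta_e + theta_p))
    m_s = (cmath.exp(1j * theta_p)
           * (math.sinh(r_p) * math.cosh(r_e)
              + cmath.exp(-1j * (theta_e + theta_p)) * math.cosh(r_p) * math.sinh(r_e))
           * (math.cosh(r_p) * math.cosh(r_e)
              + cmath.exp(1j * (theta_p + theta_e)) * math.sinh(r_e) * math.sinh(r_p)))
    return n_s, m_s

r_p, theta_p = 1.0, 0.3  # illustrative squeezing parameter and pump phase

# Matched reservoir: r_e = r_p and theta_e + theta_p = pi -> both noise terms vanish
N_s, M_s = noise_terms(r_p, theta_p, r_p, math.pi - theta_p)
print(N_s, abs(M_s))  # both ~0 up to floating-point rounding

# Unmatched reservoir (r_e = 0): residual squeezing-induced noise N_s = sinh^2(r_p) remains
N_0, M_0 = noise_terms(r_p, theta_p, 0.0, 0.0)
print(N_0, abs(M_0))
```

The first bracket of $M_{s}$ becomes $\sinh\left(r_{p}\right)\cosh\left(r_{e}\right)-\cosh\left(r_{p}\right)\sinh\left(r_{e}\right)$ under the matched choice, which is identically zero for $r_{e}=r_{p}$; the cancellation is exact, not perturbative.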
To elucidate more explicitly the physics underlying this effect and to obtain an analytical understanding, we consider a simple case when the cavity mode is decoupled from the atoms. In this case, the Hamiltonian only includes the nonlinear term given in Eq.~(\ref{seq:DPA}). The cavity mode is then coupled to the squeezed-vacuum reservoir. Following the same method as before, we can find that the squeezed-cavity mode is equivalently coupled to a thermal vacuum reservoir. The corresponding master equation is \begin{equation}\label{seq:only-cavity-mode-master-equation} \dot{\rho}\left(t\right)=i\left[\rho\left(t\right),\omega_{s}a_{s}^{\dag}a_{s}\right] -\frac{\kappa}{2}\left[a^{\dag}_{s}a_{s}\rho\left(t\right)-2a_{s}\rho\left(t\right) a_{s}^{\dag}+\rho\left(t\right) a_{s}^{\dag}a_{s}\right]. \end{equation} We now calculate the Heisenberg uncertainty relation of the cavity mode $a$ evolving according to the master equation given in Eq.~(\ref{seq:only-cavity-mode-master-equation}). To start, we define two rotated quadratures at an angle $\left(\pi-\theta_{p}\right)/2$, \begin{align} X_{1}&=\frac{1}{2}\left\{a\exp\left[-i\left(\pi-\theta_{p}\right)/2\right]+a^{\dag}\exp\left[i\left(\pi-\theta_{p}\right)/2\right]\right\},\\ X_{2}&=\frac{1}{2i}\left\{a\exp\left[-i\left(\pi-\theta_{p}\right)/2\right]-a^{\dag}\exp\left[i\left(\pi-\theta_{p}\right)/2\right]\right\}. \end{align} In terms of the $a_{s}$ mode, $X_{1}$ and $X_{2}$ can be reexpressed as \begin{align} X_{1}&=x_{1}a_{s}+x_{1}^{\ast}a_{s}^{\dag},\\ X_{2}&=-i\left(x_{2}a_{s}-x_{2}^{\ast}a_{s}^{\dag}\right). \end{align} Here, \begin{align} x_{1}&=\frac{1}{2}\left\{\exp\left[-i\left(\pi-\theta_{p}\right)/2\right]\cosh\left(r_{p}\right) -\exp\left[i\left(\pi+\theta_{p}\right)/2\right]\sinh\left(r_{p}\right)\right\},\\ x_{2}&=\frac{1}{2}\left\{\exp\left[-i\left(\pi-\theta_{p}\right)/2\right]\cosh\left(r_{p}\right) +\exp\left[i\left(\pi+\theta_{p}\right)/2\right]\sinh\left(r_{p}\right)\right\}. 
\end{align} According to the master equation in Eq.~(\ref{seq:only-cavity-mode-master-equation}), a straightforward calculation gives \begin{align} \left(\Delta X_{1}\right)^2&=\langle X_{1}^{2}\rangle-\langle X_{1}\rangle^{2}\nonumber\\ &=\Big\{x_{1}^{2}\exp\left(-i2\omega_{s}t\right)\left[\langle a_{s}a_{s}\rangle\!\left(0\right)-\langle a_{s}\rangle^{2}\!\left(0\right)\right]\nonumber\\ &+2|x_{1}|^{2}\left[\langle a^{\dag}_{s}a_{s}\rangle\!\left(0\right)-\langle a^{\dag}_{s}\rangle\!\left(0\right)\langle a_{s}\rangle\!\left(0\right)\right]\nonumber\\ &+x_{1}^{\ast2}\exp\left(i2\omega_{s}t\right)\left[\langle a^{\dag}_{s}a^{\dag}_{s}\rangle\!\left(0\right)-\langle a^{\dag}_{s}\rangle^{2}\!\left(0\right)\right]\Big\}\exp\left(-\kappa t\right)+\frac{1}{4}\exp\left(2r_{p}\right),\\ \left(\Delta X_{2}\right)^2&=\langle X_{2}^{2}\rangle-\langle X_{2}\rangle^{2}\nonumber\\ &=\Big\{x_{2}^{2}\exp\left(-i2\omega_{s}t\right)\left[\langle a_{s}\rangle^{2}\!\left(0\right)-\langle a_{s}a_{s}\rangle\!\left(0\right)\right]\nonumber\\ &+2|x_{2}|^{2}\left[\langle a^{\dag}_{s}a_{s}\rangle\!\left(0\right)-\langle a^{\dag}_{s}\rangle\!\left(0\right)\langle a_{s}\rangle\!\left(0\right)\right]\nonumber\\ &+x_{2}^{\ast2}\exp\left(i2\omega_{s}t\right)\left[\langle a^{\dag}_{s}\rangle^{2}\!\left(0\right)-\langle a^{\dag}_{s}a^{\dag}_{s}\rangle\!\left(0\right)\right]\Big\}\exp\left(-\kappa t\right)+\frac{1}{4}\exp\left(-2r_{p}\right), \end{align} where $\langle O\rangle\!\left(t\right)$ represents the expectation value of the operator $O$ at the evolution time $t$. For simplicity, and without loss of generality, we assume that the squeezed-cavity mode is initially in a Fock state $|n_{s}\rangle$, with $n_{s}$ being the squeezed-cavity photon number.
In this case, we have \begin{align} \left(\Delta X_{1}\right)^2=&\frac{1}{4}\left[2n_{s}\exp\left(-\kappa t\right)+1\right]\exp\left(2r_{p}\right),\\ \left(\Delta X_{2}\right)^2=&\frac{1}{4}\left[2n_{s}\exp\left(-\kappa t\right)+1\right]\exp\left(-2r_{p}\right), \end{align} and then \begin{equation}\label{req:Heisenberg-uncertainty-relation} \left(\Delta X_{1}\right)\left(\Delta X_{2}\right)=\frac{1}{4}\left[2n_{s}\exp\left(-\kappa t\right)+1\right]\geq\frac{1}{4}. \end{equation} It is found, from Eq.~(\ref{req:Heisenberg-uncertainty-relation}), that the Heisenberg uncertainty relation holds, as expected. We now turn to the discussion of the squeezed vacuum drive. The squeezing strength $r_{e}$ and squeezing phase $\theta_{e}$ are experimentally adjustable quantities. In optics, the squeezed vacuum can be produced by a pumped $\chi^{\left(2\right)}$ nonlinear medium (e.g., a periodically-poled KTiOPO$_4$ (PPKTP) crystal) placed in an optical cavity~\cite{Xvahlbruch2016detection, Xserikawa2016creation, Xast2013high, Xvahlbruch2008observation}. This method is similar to generating cavity-field squeezing of an atom-cavity system. The parameters $r_{e}$ and $\theta_{e}$ can be controlled by the amplitude and phase of the laser that pumps the crystal. To confirm the values of these parameters, one can measure them using balanced homodyne detection~\cite{Xschnabel2017squeezed}. The parameters $r_{p}$ and $\theta_{p}$ can be controlled analogously in such a way as to fulfill the conditions $r_{e}=r_{p}$ and $\theta_{e}+\theta_{p}=\pm n\pi$ ($n=1,3,5,\cdots$). We note that optical squeezing has also been experimentally implemented utilizing a waveguide cavity~\cite{Xstefszky2017waveguide}. Superconducting quantum circuits, due to their tunable nonlinearity and low losses for microwave fields, are other promising devices for producing squeezed states.
The most popular method to generate microwave squeezing is to use a Josephson parametric amplifier (JPA)~\cite{Xclark2017sideband, Xbienfait2017magnetic, Xmurch2013reduction, Xzhong2013squeezing, Xmallet2011quantum}. The JPA is a superconducting {\it LC} resonator, which contains a superconducting quantum interference device (SQUID). This resonator can be pumped not only directly, but also by modulating the magnetic flux in the SQUID. In this case, the parameters $r_{e}$ and $\theta_{e}$ can be controlled by the amplitude and phase of a pump tone used to modulate the magnetic flux. Recent experiments have shown that the squeezed vacuum, generated by a JPA, can be used to reduce the radiative decay of superconducting qubits~\cite{Xmurch2013reduction} and to modify resonance fluorescence~\cite{Xtoyli2016resonance}. The squeezing of quantum noise has also been demonstrated with tunable Josephson metamaterials~\cite{Xcastellanos2008amplification}. \section{Perturbative treatment and maximizing steady-state entanglement} For the preparation of a steady entangled state, e.g., the singlet state $|\psi_{-}\rangle=\left(|gf\rangle-|fg\rangle\right)/\sqrt{2}$, the key element is that the system dynamics must not only drive the population into $|\psi_{-}\rangle$, but also prevent the population from moving out of $|\psi_{-}\rangle$. In our approach, when we choose $\Delta_{e}=\beta=\omega_{s}+\Delta_{f}$, the coherent couplings mediated by the laser drive and by the squeezed-cavity mode are resonant. In addition, the microwave field also resonantly drives the transitions \begin{equation} |\phi_{-}\rangle\leftrightarrow|\phi_{+}\rangle\leftrightarrow|\psi_{+}\rangle. \end{equation} The proposed entanglement preparation can, therefore, be understood via a hopping-like model, as illustrated in Fig.~\ref{Sfighopping}(a).
Note that, here, $\Delta_{f}$ is required to be nonzero; otherwise $|\phi_{-}\rangle$ becomes a dark state of the microwave drive, whose population is trapped and cannot be transferred to $|\psi_{+}\rangle$. In the preparation process, the populations initially in the states $|\phi_{-}\rangle$, $|\phi_{+}\rangle$, and $|\psi_{+}\rangle$ can be coherently driven to the dark state $|D\rangle$ through the microwave and laser drives and, then, decay to the desired state $|\psi_{-}\rangle$ through two atomic decays, respectively, with rates $\gamma_{g1}$ and $\gamma_{g2}$. Indeed, such atomic decays originate, respectively, from the spontaneous emissions, $|e\rangle\rightarrow|g\rangle$, of the two atoms, so we have $\gamma_{g1}=\gamma_{g2}=\gamma_{g}/4$. Furthermore, owing to the laser drive, the state $|\psi_{-}\rangle$ is resonantly excited to $|\varphi_{e}\rangle$. This state is then resonantly coupled to $|ff\rangle|1\rangle_{s}$ by the squeezed-cavity mode. The cavity loss causes the latter state to decay to $|ff\rangle|0\rangle_{s}$, thus giving rise to population leakage from $|\psi_{-}\rangle$. However, because of the exponential enhancement in the atom-cavity coupling [i.e., $g_{s}\sim g\exp\left(r_{p}\right)/2$ in Eq.~(\ref{N14})], the state $|\varphi_{e}\rangle$ is split into a doublet of dressed states, $|e_{\pm}\rangle=\left(|\varphi_{e}\rangle\pm|ff\rangle|1\rangle_{s}\right)/\sqrt{2}$, exponentially separated by \begin{equation} 2\sqrt{2}g_{s}\sim\sqrt{2}g\exp\left(r_{p}\right), \end{equation} which is much larger than the coupling strength $\Omega_{\pm}=\Omega/\left(2\sqrt{2}\right)$, as shown in Fig.~\ref{Sfighopping}(b). Hence, the population leakage from $|\psi_{-}\rangle$ is exponentially suppressed, and we can make the effective decay rate, $\Gamma_{\text{out}}$, out of $|\psi_{-}\rangle$, exponentially smaller than the effective decay rate, $\Gamma_{\text{in}}$, into $|\psi_{-}\rangle$.
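The scale separation invoked here is simple arithmetic and can be checked numerically; the values of $g$, $\gamma$, $\Omega$, and $r_{p}$ below are illustrative assumptions only:

```python
import numpy as np

g, gamma = 1.0, 1.0                  # illustrative units (not fixed by the text)
Omega = 0.5 * gamma                  # weak laser drive, as assumed later in the text
r_p = 3.0                            # squeezing parameter used in the figures

g_s = g * np.cosh(r_p)               # enhanced coupling g_s = g*cosh(r_p)
splitting = 2 * np.sqrt(2) * g_s     # dressed-doublet separation 2*sqrt(2)*g_s
Omega_pm = Omega / (2 * np.sqrt(2))  # coupling strength Omega_pm = Omega/(2*sqrt(2))

# The splitting approaches sqrt(2)*g*exp(r_p) and dwarfs Omega_pm,
# which is why leakage out of |psi_-> is exponentially suppressed.
print(splitting / Omega_pm)
```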
To discuss these decay rates more specifically, we need to give an effective master equation of the system, when the laser drive $\Omega$ is assumed to be much smaller than the interactions inside the excited-state subspace. In this case, the coupling between the ground- and excited-state subspaces is treated as a perturbation, so that both the cavity mode and the excited states of the atoms can be adiabatically eliminated. \begin{figure}[tbph] \centering \includegraphics[width=17cm]{Sfighopping.pdf} \caption{(Color online) (a) Hopping-like model for the proposed steady-state nearly-maximal entanglement preparation. (b) Exponential suppression in the leakage of the population in $|\psi_{-}\rangle$. (c) Effective dynamics after adiabatically eliminating the states $|D\rangle$, $|e_{+}\rangle$, and $|e_{-}\rangle$.}\label{Sfighopping} \end{figure} Specifically, we follow the procedure in Ref.~\cite{Xreiter2012effective}, and begin by considering the Lindblad master equation in Eq.~(\ref{Seq:simplified_masterequation}). For convenience, we rewrite the Hamiltonian $H_{s}\left(t\right)$ as \begin{align} H_{s}\left(t\right)=H_{g}+H_{e}+v\!\left(t\right)+v^{\dag}\!\left(t\right), \end{align} with \begin{align} H_{g}=&\sum_{k=1,2}\left[\Delta_{f}|f\rangle_{k}\langle f|+\frac{\Omega_{\text{MW}}}{2}\left(|f\rangle_{k}\langle g|+\text{H.c.}\right)\right],\\ H_{e}=&\Delta_{e}\sum_{k=1,2}|e\rangle_{k}\langle e|+\omega_{s}a_{s}^{\dag}a_{s}+H_{\text{ASC}}, \end{align} representing the interactions, respectively, inside the ground- and excited-state subspaces, and \begin{align} v\!\left(t\right)=\;&\frac{1}{2}\exp\left(i\beta t\right)\Omega\sum_{k=1,2}\exp\left[i\left(k-1\right)\pi\right]|g\rangle_{k}\langle e| \end{align} being the deexcitation from the excited-state subspace to the ground-state subspace.
Under the assumption that the laser drive $\Omega$ is sufficiently weak compared to the coupling $g_{s}$, the effective Hamiltonian and Lindblad operators read: \begin{align} H_{\text{eff}}=\;&-\frac{1}{2}\left[v\!\left(t\right)\left(H_{\text{NH}}-\beta\right)^{-1}v^{\dag}\!\left(t\right)\right]+H_{g},\\ L_{x, \text{eff}}=\;&L_{x}\left(H_{\text{NH}}-\beta\right)^{-1}v^{\dag}\!\left(t\right), \end{align} where \begin{align} H_{\text{NH}}=H_{e}-\frac{i}{2}\sum_{x}L_{x}^{\dag}L_{x} \end{align} is the no-jump Hamiltonian. The system dynamics is, therefore, determined by an effective master equation \begin{align} \dot{\rho}_{g}\!\left(t\right)=i\left[\rho_{g}\!\left(t\right),H_{\text{eff}}\right]-\frac{1}{2}\sum_{x}\mathcal{L}\left(L_{x,\text{eff}}\right)\rho_{g}\!\left(t\right), \end{align} where $\rho_{g}\!\left(t\right)$ is the reduced density operator associated only with the ground states of the atoms. After a straightforward calculation restricted to the Hilbert space having at most one excitation, we have: \begin{align} H_{\text{eff}}=\; &\Delta_{f}\left(\mathcal{I}/2-|\phi_{+}\rangle\langle\phi_{-}|+\text{H.c.}\right)+\Omega_{\text{MW}}\left(|\psi_{+}\rangle\langle \phi_{+}|+\text{H.c.}\right),\label{Seq:effHamiltonian}\\ L_{g1,\text{eff}}=\;&r_{g}\left[\left(|\psi_{+}\rangle+|\psi_{-}\rangle\right)\left(\gamma_{\text{eff},0}\langle \psi_{+}|+\gamma_{\text{eff},2}\langle \psi_{-}|\right)+\gamma_{\text{eff},1}\left(|\phi_{+}\rangle+|\phi_{-}\rangle\right)\left(\langle\phi_{+}|+\langle\phi_{-}|\right)\right],\label{Seq:efflindbladoperatorg1}\\ L_{g2,\text{eff}}=\;&-r_{g}\left[\left(|\psi_{+}\rangle-|\psi_{-}\rangle\right)\left(\gamma_{\text{eff},0}\langle \psi_{+}|-\gamma_{\text{eff},2}\langle\psi_{-}|\right)+\gamma_{\text{eff},1}\left(|\phi_{+}\rangle+|\phi_{-}\rangle\right)\left(\langle\phi_{+}|+\langle\phi_{-}|\right)\right],\label{Seq:efflindbladoperatorg2}\\
L_{f1,\text{eff}}=\;&r_{f}\left[\left(|\phi_{+}\rangle-|\phi_{-}\rangle\right)\left(\gamma_{\text{eff},0}\langle \psi_{+}|+\gamma_{\text{eff},2}\langle\psi_{-}|\right)+\gamma_{\text{eff},1}\left(|\psi_{+}\rangle-|\psi_{-}\rangle\right)\left(\langle\phi_{+}|+\langle\phi_{-}|\right)\right],\label{Seq:efflindbladoperatorf1}\\ L_{f2,\text{eff}}=\;&-r_{f}\left[\left(|\phi_{+}\rangle-|\phi_{-}\rangle\right)\left(\gamma_{\text{eff},0}\langle \psi_{+}|-\gamma_{\text{eff},2}\langle\psi_{-}|\right)+\gamma_{\text{eff},1}\left(|\psi_{+}\rangle+|\psi_{-}\rangle\right)\left(\langle\phi_{+}|+\langle\phi_{-}|\right)\right],\label{Seq:efflindbladoperatorf2}\\ L_{\text{as},\text{eff}}=\;&r_{\text{as}}\left[\kappa_{\text{eff},1}|\psi_{-}\rangle\left(\langle\phi_{+}|+\langle\phi_{-}|\right)-\frac{1}{\sqrt{2}}\kappa_{\text{eff},2} \left(|\phi_{+}\rangle-|\phi_{-}\rangle\right)\langle \psi_{-}|\right].\label{Seq:efflindbladoperatoras} \end{align} Here, \begin{align} \mathcal{I}=\;&|\phi_{+}\rangle\langle\phi_{+}|+|\phi_{-}\rangle\langle\phi_{-}|+|\psi_{+}\rangle\langle\psi_{+}|+|\psi_{-}\rangle\langle\psi_{-}|,\\ |\phi_{\pm}\rangle=\;&\frac{1}{\sqrt{2}}\left(|gg\rangle\pm|ff\rangle\right),\\ |\psi_{\pm}\rangle=\;&\frac{1}{\sqrt{2}}\left(|gf\rangle\pm|fg\rangle\right), \end{align} and \begin{align} r_{g\left(f\right)}=\;&\exp\left(-i\beta t\right)\frac{\Omega\sqrt{\gamma_{g\left(f\right)}}}{4\gamma},\\ r_{\text{as}}=\;&\exp\left(-i\beta t\right)\frac{\Omega}{2\sqrt{\gamma}},\\ \gamma_{\text{eff},0}=\;&\frac{1}{\widetilde{\Delta}_{e,1}},\\ \gamma_{\text{eff},m}=\;&\frac{\widetilde{\omega}_{s,m}}{\widetilde{\omega}_{s,m}\widetilde{\Delta}_{e,m-1}-mC_{s}},\\ \kappa_{\text{eff},m}=\;&\frac{\sqrt{mC_{s}}}{\widetilde{\omega}_{s,m}\widetilde{\Delta}_{e,m-1}-mC_{s}}, \end{align} where \begin{align} \widetilde{\omega}_{s,m}=&\frac{1}{\kappa}\left(\omega_{s}+m\Delta_{f}-\beta\right)-\frac{i}{2},\\ 
\widetilde{\Delta}_{e,m-1}=&\frac{1}{\gamma}\left[\Delta_{e}+\left(m-1\right)\Delta_{f}-\beta\right]-\frac{i}{2}, \end{align} for $m=1,2$, and where $\gamma=\gamma_{g}+\gamma_{f}$ is the total atomic decay rate. Having obtained the effective master equation, let us now consider the decay rates $\Gamma_{\text{in}}$ and $\Gamma_{\text{out}}$. According to the effective Lindblad operators in Eqs.~(\ref{Seq:efflindbladoperatorg1})-(\ref{Seq:efflindbladoperatoras}), the decay rates of moving into and out of the singlet state $|\psi_{-}\rangle$ are given, respectively, by \begin{align} \Gamma_{\text{in}}=\;&\frac{\Omega^{2}}{4\gamma^{2}}\left(\gamma_{g}|\gamma_{\text{eff},0}|^{2} +2\gamma_{f}|\gamma_{\text{eff},1}|^{2}+4\gamma|\kappa_{\text{eff},1}|^{2}\right),\\ \Gamma_{\text{out}}=\;&\frac{\Omega^{2}}{4\gamma^{2}}\left(\gamma_{g}|\gamma_{\text{eff},2}|^{2} +2\gamma_{f}|\gamma_{\text{eff},2}|^{2}+2\gamma|\kappa_{\text{eff},2}|^{2}\right). \end{align} Let us define the entanglement fidelity as $F=\langle \psi_{-}|\rho_{g}\left(t\right)|\psi_{-}\rangle$ (that is, the probability of the atoms being in $|\psi_{-}\rangle$) and, then, the entanglement infidelity as $\delta=1-F$. In the steady state ($t\rightarrow+\infty$), the entanglement infidelity is found to be \begin{align}\label{seq:infidelity0} \delta\sim\frac{1}{1+\Gamma_{\text{in}}/\left(3\Gamma_{\text{out}}\right)}. \end{align} Note that, here, we have assumed that $|\phi_{+}\rangle$, $|\phi_{-}\rangle$, and $|\psi_{+}\rangle$ have the same population in the steady state. In order to prepare nearly-maximal steady-state entanglement, we choose the detunings to be \begin{align}\label{seq:detuningsforeffmeq} \Delta_{e}=\beta=\omega_{s}+\Delta_{f}, \end{align} such that $\widetilde{\omega}_{s,m}\sim\widetilde{\Delta}_{e,m-1}\sim-i/2$, yielding \begin{align}\label{seq:Gammmainout} \frac{\Gamma_{\text{in}}}{\Gamma_{\text{out}}}\sim\frac{4\gamma_{g}}{\gamma}C_{s}\gg1, \end{align} for $C_{s}\gg1$.
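A short numerical check of the ratio in Eq.~(\ref{seq:Gammmainout}) and of the resulting infidelity (our own addition; the parameter values mirror those used in the figures, and the replacement $\widetilde{\omega}_{s,m}\sim\widetilde{\Delta}_{e,m-1}\sim-i/2$ is taken as given):

```python
import numpy as np

# Illustrative parameters: gamma_g = gamma/2, C = 20, r_p = 3, as in the figures.
gamma_g = 0.5; gamma_f = 0.5; gamma = gamma_g + gamma_f
Omega = 0.5 * gamma
C, r_p = 20.0, 3.0
C_s = C * np.cosh(r_p)**2                     # effective cooperativity C_s = C*cosh^2(r_p)

w = d = -0.5j                                 # tilde-omega_{s,m} ~ tilde-Delta_{e,m-1} ~ -i/2
g_eff0 = 1.0 / d                              # gamma_{eff,0}
g_eff = lambda m: w / (w*d - m*C_s)           # gamma_{eff,m}
k_eff = lambda m: np.sqrt(m*C_s) / (w*d - m*C_s)  # kappa_{eff,m}

pref = Omega**2 / (4*gamma**2)
G_in  = pref*(gamma_g*abs(g_eff0)**2 + 2*gamma_f*abs(g_eff(1))**2 + 4*gamma*abs(k_eff(1))**2)
G_out = pref*(gamma_g*abs(g_eff(2))**2 + 2*gamma_f*abs(g_eff(2))**2 + 2*gamma*abs(k_eff(2))**2)

delta = 1.0 / (1.0 + G_in/(3.0*G_out))        # steady-state infidelity, Eq. (seq:infidelity0)
print(G_in/G_out, 4*gamma_g*C_s/gamma)        # ratio close to 4*gamma_g*C_s/gamma
print(delta, 3*gamma/(4*gamma_g*C_s))         # infidelity close to 3*gamma/(4*gamma_g*C_s)
```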
As shown in Fig.~\ref{Sfighopping}(c), the underlying dynamics is as follows: after adiabatically eliminating the excited states $|D\rangle$, $|e_{+}\rangle$, and $|e_{-}\rangle$, the states $|\psi_{+}\rangle$ and $|\psi_{-}\rangle$ are directly connected by two effective spontaneous emission processes with rates $\gamma^{g1}_{\text{eff}}$ and $\gamma^{g2}_{\text{eff}}$, \begin{equation} \gamma^{g1}_{\text{eff}}=\gamma^{g2}_{\text{eff}} =|r_{g}\gamma_{\text{eff},0}|^2\sim\frac{\gamma_{g}}{4\gamma^{2}}\Omega^{2}, \label{N15} \end{equation} and at the same time, the desired state $|\psi_{-}\rangle$ leaks population through an effective cavity decay with a rate $\kappa_{\text{eff}}$, \begin{equation} \kappa_{\text{eff}}=|r_{\text{as}}\kappa_{\text{eff},2}|^2/2\sim\frac{\Omega^{2}}{16\gamma C_{s}}. \label{N16} \end{equation} Therefore, together with the effective Hamiltonian $H_{\text{eff}}$ driving the populations from both $|\phi_{+}\rangle$ and $|\phi_{-}\rangle$ to $|\psi_{+}\rangle$, the initial populations in the ground-state subspace of the atoms can be transferred to $|\psi_{-}\rangle$ and trapped in this state. By substituting Eq.~(\ref{seq:Gammmainout}) into Eq.~(\ref{seq:infidelity0}), we straightforwardly obtain \begin{align} \delta\sim\frac{3\gamma}{4\gamma_{g}C_{s}}. \end{align} As long as $r_{p}\geq1$, an exponential enhancement of the cooperativity, $C_{s}/C\sim \exp\left(2r_{p}\right)/4$, is obtained, leading to \begin{align} \delta\sim\frac{3\gamma}{\gamma_{g}\exp\left(2r_{p}\right)C}. \end{align} This equation shows that we can increase the squeezing parameter $r_{p}$, so as to exponentially decrease the entanglement infidelity, as seen in Fig.~\ref{Sfigsteadyerror}. Moreover, the result in this figure also reveals that, by decreasing $\Omega$, one can suppress non-adiabatic errors and, thus, cause the steady-state infidelity to approach the theoretical value, as expected.
Hence, as opposed to prior entanglement preparation protocols, which relied on controlled unitary dynamics or engineered dissipation, such an infidelity is no longer lower bounded by the cooperativity $C$ and, in principle, can be made very close to zero. \begin{figure}[tbph] \centering \includegraphics[width=7.0cm]{Sfigsteadyerror.pdf} \caption{(Color online) Steady-state entanglement infidelity versus the squeezing parameter $r_{p}$. We have plotted the numerical infidelity for $\Omega=0.5\gamma$ (dashed curve), $\Omega=1.0\gamma$ (dashed-dotted curve), and $\Omega=1.5\gamma$ (dotted curve) by calculating the effective master equation, and also plotted the theoretical prediction (solid curve). Here, we have assumed that $\gamma_{g}=\gamma/2$, $\kappa=2\gamma/3$, $C=20$, $\Delta_{f}=\Omega/2^{7/4}$, $\Omega_{\text{MW}}=\sqrt{2}\Delta_{f}$, and that with the vacuum cavity, the initial state of the atoms is $\left(\mathcal{I}-|\psi_{-}\rangle\langle\psi_{-}|\right)/3$.}\label{Sfigsteadyerror} \end{figure} \section{Effects of the counter-rotating terms} The counter-rotating terms of the form $a^{\dag}_{s}\sum_{k}|e\rangle_{k}\langle f|$ and $a_{s}\sum_{k}|f\rangle_{k}\langle e|$, which result from optical parametric amplification, do not conserve the excitation number, and can couple the ground- and double-excitation subspaces. Thus, this would give rise to an additional leakage of the population from the desired state $|\psi_{-}\rangle$, and decrease the entanglement fidelity. For example, in the presence of the counter-rotating terms, the state $|\psi_{-}\rangle$ can be excited to a double-excitation state $\left(|ge\rangle-|eg\rangle\right)|1\rangle_{s}/\sqrt{2}$, which, then, de-excites to the ground state $|gg\rangle|0\rangle_{s}$ through cavity decay and spontaneous emission. In general, we can decrease the ratio $|g_{s}^{\prime}|/\left(2\Delta_{e}\right)$ to reduce errors induced by these excitation-number-nonconserving processes.
However, to reduce such errors more efficiently in the limit of small $|g_{s}^{\prime}|/\left(2\Delta_{e}\right)$, we analyze the effects of the counter-rotating terms in detail in this section, and demonstrate that, by modifying external parameters, we can remove these terms, so that the full system can be mapped to the simplified system described above. According to Eqs.~(\ref{seq:freesqueezedmode}) and (\ref{seq:fullsquzeedmodeandatoms}), the full Hamiltonian of the system in terms of the squeezed mode $a_{s}$ is \begin{align} H\left(t\right)=&\sum_{k}\left[\Delta_{e}|e\rangle_{k}\langle e|+\Delta_{f}|f\rangle_{k}\langle f|\right]+\omega_{s}a_{s}^{\dag}a_{s}\nonumber\\ +&\sum_{k}\left[\left(g_{s}a_{s}-g^{\prime}_{s}a_{s}^{\dag}\right)|e\rangle_{k}\langle f|+\text{H.c.}\right]\nonumber\\ +&\frac{1}{2}\Omega_{\text{MW}}\sum_{k}\left(|f\rangle_{k}\langle g|+\text{H.c.}\right)+V\left(t\right),\\ V\left(t\right)=\;&\frac{1}{2}\Omega\exp\left(i\beta t\right)\sum_{k}\left[\left(-1\right)^{k-1}|g\rangle_{k}\langle e|+\text{H.c.}\right]. \end{align} Indeed, the counter-rotating terms can be treated as the high-frequency components of the full Hamiltonian. In order to explicitly show these high-frequency components, we can transform $H\left(t\right)$ into a rotating frame with respect to \begin{align} H_{0}=\Delta_{e}\sum_{k}|e\rangle_{k}\langle e|+\left(\omega_{s}+\Delta_{f}\right)a_{s}^{\dag}a_{s}. \end{align} Thus, $H\left(t\right)$ is transformed to \begin{align}\label{seq:fullHintermediateframe} \mathcal{H}\left(t\right)=\;&\Delta_{f}\left(\sum_{k}|f\rangle_{k}\langle f|-a_{s}^{\dag}a_{s}\right)\nonumber\\ &+\sum_{k}\left(g_{s}a_{s}|e\rangle_{k}\langle f|-e^{i2\Delta_{e}t}g_{s}^{\prime}a_{s}^{\dag}|e\rangle_{k}\langle f|+\text{H.c.}\right)\nonumber\\ &+\frac{1}{2}\Omega_{\text{MW}}\sum_{k}\left(|f\rangle_{k}\langle g|+\text{H.c.}\right)+\mathcal{V},\\ \mathcal{V}=\;&\frac{1}{2}\Omega\sum_{k}\left[\left(-1\right)^{k-1}|g\rangle_{k}\langle e|+\text{H.c.}\right].
\end{align} Here, we have chosen $\Delta_{e}=\beta=\omega_{s}+\Delta_{f}$. Because $\Delta_{f}$ is required to be much smaller than $\Delta_{e}$, $\mathcal{H}\left(t\right)$ can be divided into two parts, $\mathcal{H}\left(t\right)=H_{\text{low}}+H_{\text{high}}$, where \begin{align} H_{\text{low}}=\;&\Delta_{f}\left(\sum_{k}|f\rangle_{k}\langle f|-a_{s}^{\dag}a_{s}\right)+g_{s}\sum_{k}\left(a_{s}|e\rangle_{k}\langle f|+\text{H.c.}\right)\nonumber\\ &+\frac{1}{2}\Omega_{\text{MW}}\sum_{k}\left(|f\rangle_{k}\langle g|+\text{H.c.}\right)+\mathcal{V},\\ H_{\text{high}}=&\sum_{k}\left(-e^{i2\Delta_{e}t}g_{s}^{\prime}a_{s}^{\dag}|e\rangle_{k}\langle f|+\text{H.c.}\right), \end{align} represent the low- and high-frequency components, respectively. Here, we consider the limit $|g_{s}^{\prime}|/\Delta_{e}\ll1$. By using a time-averaging treatment~\cite{Xgamel2010time}, the behavior of $H_{\text{high}}$ can be approximated by a time-averaged Hamiltonian, \begin{align}\label{seq:time-averaged_Hamiltonian} H_{\text{TA}}=\;&\frac{|g_{s}^{\prime}|^2}{2\Delta_{e}}\sum_{k}a_{s}^{\dag}a_{s}\left(|e\rangle_{k}\langle e|-|f\rangle_{k}\langle f|\right)\nonumber\\ &-\frac{|g_{s}^{\prime}|^2}{2\Delta_{e}}\sum_{k,k^{\prime}}\left(|f\rangle_{k}\langle e|\right)\left(|e\rangle_{k^{\prime}}\langle f|\right). \end{align} The first term describes an energy shift depending on the photon number of the squeezed-cavity mode, and the second term describes a direct coupling between the two atoms.
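As a brief consistency sketch (the operator $h$ below is our own shorthand, introduced here only for this check): writing $H_{\text{high}}=h\exp\left(i2\Delta_{e}t\right)+\text{H.c.}$ with $h=-g_{s}^{\prime}\sum_{k}a_{s}^{\dag}|e\rangle_{k}\langle f|$, the second-order time-averaging result of Ref.~\cite{Xgamel2010time} gives $H_{\text{TA}}=\left[h,h^{\dag}\right]/\left(2\Delta_{e}\right)$. A direct evaluation of the commutator yields \begin{align} \left[h,h^{\dag}\right]=\;&|g_{s}^{\prime}|^{2}\sum_{k}\left[a_{s}^{\dag}a_{s}\left(|e\rangle_{k}\langle e|-|f\rangle_{k}\langle f|\right)-|f\rangle_{k}\langle f|\right]\nonumber\\ &-|g_{s}^{\prime}|^{2}\sum_{k\neq k^{\prime}}\left(|f\rangle_{k}\langle e|\right)\left(|e\rangle_{k^{\prime}}\langle f|\right), \end{align} and, since the $k=k^{\prime}$ terms of the double sum in Eq.~(\ref{seq:time-averaged_Hamiltonian}) equal $|f\rangle_{k}\langle f|$, this reproduces Eq.~(\ref{seq:time-averaged_Hamiltonian}) term by term.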
Accordingly, $\mathcal{H}\left(t\right)$ becomes $\mathcal{H}\left(t\right)\simeq H_{\text{low}}+H_{\text{TA}}$, and after transforming back to the original frame, we obtain \begin{align}\label{seq:fullHwithtimeaverageeff} H\left(t\right)\simeq\; &\sum_{k}\left[\Delta_{e}|e\rangle_{k}\langle e|+\Delta_{f}|f\rangle_{k}\langle f|\right]+\omega_{s}a_{s}^{\dag}a_{s}\nonumber\\ \;+\;&g_{s}\sum_{k}\left(a_{s}|e\rangle_{k}\langle f|+\text{H.c.}\right)\nonumber\\ +\;&\frac{1}{2}\Omega_{\text{MW}}\sum_{k}\left(|f\rangle_{k}\langle g|+\text{H.c.}\right)+V\left(t\right)+H_{\text{TA}}. \end{align} We find, from Eq.~(\ref{seq:time-averaged_Hamiltonian}), that the counter-rotating terms effectively conserve the excitation number as long as $|g_{s}^{\prime}|/\Delta_{e}\ll1$. Therefore, we can restrict our discussion to a subspace having at most one excitation, as discussed above. In this subspace, $H_{\text{TA}}$ is expanded as \begin{align}\label{SpannedTAH} H_{\text{TA}}=&-\frac{|g_{s}^{\prime}|^2}{2\Delta_{e}}\left(\mathcal{I}/2+|\varphi_{e}\rangle\langle\varphi_{e}|-|\phi_{+}\rangle\langle\phi_{-}|+\text{H.c.}\right)\nonumber\\ &-\frac{|g_{s}^{\prime}|^2}{\Delta_{e}}\left(\mathcal{I}^{\left(1\right)}/2-|\phi_{+}^{\left(1\right)}\rangle\langle\phi_{-}^{\left(1\right)}|+\text{H.c.}\right), \end{align} where \begin{align} \mathcal{I}^{\left(1\right)}=\;&|\phi_{+}^{\left(1\right)}\rangle\langle\phi_{+}^{\left(1\right)}|+|\phi_{-}^{\left(1\right)}\rangle\langle\phi_{-}^{\left(1\right)}| +|\psi_{+}^{\left(1\right)}\rangle\langle\psi_{+}^{\left(1\right)}|+|\psi_{-}^{\left(1\right)}\rangle\langle\psi_{-}^{\left(1\right)}|,\nonumber\\ |\phi_{\pm}^{\left(1\right)}\rangle=\;&\left(|gg\rangle\pm|ff\rangle\right)|1\rangle_{s}/\sqrt{2},\nonumber\\ |\psi_{\pm}^{\left(1\right)}\rangle=\;&\left(|gf\rangle\pm|fg\rangle\right)|1\rangle_{s}/\sqrt{2}.
\end{align} Equation (\ref{SpannedTAH}) indicates that the counter-rotating terms introduce an energy shift of $|g_{s}^{\prime}|^2/\left(2\Delta_{e}\right)$ imposed upon the ground states, and a coherent coupling, of strength $|g_{s}^{\prime}|^2/\left(2\Delta_{e}\right)$, between the states $|\phi_{+}\rangle$ and $|\phi_{-}\rangle$. From Fig.~\ref{Sfighopping}(a), we find that in the regime where $\Omega/|g_{s}^{\prime}|$ is comparable to $|g_{s}^{\prime}|/\Delta_{e}$, such an energy shift can cause the $|\psi_{+}\rangle\rightarrow|D\rangle$ transition to become far off-resonant and, thus, suppress the population transfer into the desired state $|\psi_{-}\rangle$. Meanwhile, this introduced coupling may increase the entanglement error originating from the microwave dressing of the ground states. For example, if $\Delta_{f}=|g_{s}^{\prime}|^2/\left(2\Delta_{e}\right)$, then the state $|\phi_{-}\rangle$ becomes a dark state of the microwave drive. In this case, the population in $|\phi_{-}\rangle$ is trapped and cannot be transferred to $|\psi_{-}\rangle$. To remove these detrimental effects, it is essential to compensate this energy shift. According to the above analysis, the detunings in Eq.~(\ref{seq:detuningsforeffmeq}) need to be modified as \begin{align}\label{seq:modification} \Delta_{e}=\beta-\frac{|g_{s}^{\prime}|^{2}}{2\Delta_{e}}=\omega_{s}+\Delta_{f}-\frac{|g_{s}^{\prime}|^2}{\Delta_{e}}. \end{align} This modification simplifies the full dynamics to the same hopping-like model, as shown in Fig.~\ref{Sfighopping}(a), with $\Delta_{f}\rightarrow\Delta_{f}^{\prime}=\Delta_{f}-|g_{s}^{\prime}|^{2}/\left(2\Delta_{e}\right)$. Therefore, we can map the full system to a simple system that excludes the counter-rotating terms and has been discussed above.
\begin{figure}[tbph] \centering \includegraphics[width=17.5cm]{Sfigmodified.pdf} \caption{(Color online) Entanglement infidelity $\delta$ as a function of time $t\gamma$ for (a) $\Omega=0.5\gamma$, (b) $\Omega=1.0\gamma$, and (c) $\Omega=1.5\gamma$, assuming a cooperativity of $C=20$. Solid and dashed-dotted curves are obtained, respectively, from integrations of the effective and full master equations, both with detunings $\Delta_{f}=\Omega/2^{7/4}$ and $\Delta_{e}=\beta=\omega_{s}+\Delta_{f}$. Dashed curves are also given by calculating the full master equation but with modified detunings $\Delta_{f}=\Omega/2^{7/4}+|g_{s}^{\prime}|^2/\left(2\Delta_{e}\right)$ and $\Delta_{e}=\beta-|g_{s}^{\prime}|^{2}/\left(2\Delta_{e}\right)=\omega_{s}+\Delta_{f}-|g_{s}^{\prime}|^{2}/\Delta_{e}$. For both full cases, we have assumed $\Delta_{e}=200g_{s}^{\prime}$. In all plots, we have assumed that $\gamma_{g}=\gamma/2$, $\kappa=2\gamma/3$, $\Omega_{\text{MW}}=\sqrt{2}\Delta_{f}$, $r_{p}=3$, and $\theta_{p}=\pi$. Moreover, the initial state of the atoms is $\left(\mathcal{I}-|\psi_{-}\rangle\langle\psi_{-}|\right)/3$ and the cavity is initially in the vacuum state. }\label{Sfigmodified} \end{figure} To understand this process better, we can follow the same method as above, but now with the Hamiltonian in Eq.~(\ref{seq:fullHwithtimeaverageeff}).
Thus, we find the effective Hamiltonian and Lindblad operators as follows: \begin{align} H_{\text{eff}}^{\prime}=\;&\Delta_{f}^{\prime}\left(\mathcal{I}/2-|\phi_{+}\rangle\langle\phi_{-}|+\text{H.c.}\right)+\Omega_{\text{MW}}\left(|\psi_{+}\rangle\langle \phi_{+}|+\text{H.c.}\right),\\ L_{g1,\text{eff}}^{\prime}=\;&r_{g}^{\prime}\left[\left(|\psi_{+}\rangle+|\psi_{-}\rangle\right)\left(\gamma_{\text{eff},0}^{\prime}\langle \psi_{+}|+\gamma_{\text{eff},2}^{\prime}\langle \psi_{-}|\right)+\gamma_{\text{eff},1}^{\prime}\left(|\phi_{+}\rangle+|\phi_{-}\rangle\right)\left(\langle\phi_{+}|+\langle\phi_{-}|\right)\right],\\ L_{g2,\text{eff}}^{\prime}=\;&-r_{g}^{\prime}\left[\left(|\psi_{+}\rangle-|\psi_{-}\rangle\right)\left(\gamma_{\text{eff},0}^{\prime}\langle \psi_{+}|-\gamma_{\text{eff},2}^{\prime}\langle\psi_{-}|\right)+\gamma_{\text{eff},1}^{\prime}\left(|\phi_{+}\rangle+|\phi_{-}\rangle\right)\left(\langle\phi_{+}|+\langle\phi_{-}|\right)\right],\\ L_{f1,\text{eff}}^{\prime}=\;&r_{f}^{\prime}\left[\left(|\phi_{+}\rangle-|\phi_{-}\rangle\right)\left(\gamma_{\text{eff},0}^{\prime}\langle \psi_{+}|+\gamma_{\text{eff},2}^{\prime}\langle\psi_{-}|\right)+\gamma_{\text{eff},1}^{\prime}\left(|\psi_{+}\rangle-|\psi_{-}\rangle\right)\left(\langle\phi_{+}|+\langle\phi_{-}|\right)\right],\\ L_{f2,\text{eff}}^{\prime}=\;&-r_{f}^{\prime}\left[\left(|\phi_{+}\rangle-|\phi_{-}\rangle\right)\left(\gamma_{\text{eff},0}^{\prime}\langle \psi_{+}|-\gamma_{\text{eff},2}^{\prime}\langle\psi_{-}|\right)+\gamma_{\text{eff},1}^{\prime}\left(|\psi_{+}\rangle+|\psi_{-}\rangle\right)\left(\langle\phi_{+}|+\langle\phi_{-}|\right)\right],\\ L_{\text{as},\text{eff}}^{\prime}=\;&r_{\text{as}}^{\prime}\left[\kappa_{\text{eff},1}^{\prime}|\psi_{-}\rangle\left(\langle\phi_{+}|+\langle\phi_{-}|\right)-\frac{1}{\sqrt{2}}\kappa_{\text{eff},2}^{\prime} \left(|\phi_{+}\rangle-|\phi_{-}\rangle\right)\langle \psi_{-}|\right].
\end{align} Here, \begin{align} \Delta_{f}^{\prime}=\;&\Delta_{f}-\frac{|g_{s}^{\prime}|^{2}}{2\Delta_{e}},\\ r_{g\left(f\right)}^{\prime}=\;&\exp(-i\beta t)\frac{\Omega\sqrt{\gamma_{g\left(f\right)}}}{4\gamma},\\ r_{\text{as}}^{\prime}=\;&\exp(-i\beta t)\frac{\Omega}{2\sqrt{\gamma}}, \end{align} and \begin{align} \gamma_{\text{eff},0}^{\prime}=\;&\frac{1}{\widetilde{\Delta}_{e}^{\prime}},\\ \gamma_{\text{eff},m}^{\prime}=\;&\frac{\widetilde{\omega}_{s,m}^{\prime}}{\widetilde{\omega}_{s,m}^{\prime}\widetilde{\Delta}_{e,m-1}^{\prime}-mC_{s}},\\ \kappa_{\text{eff},m}^{\prime}=\;&\frac{\sqrt{mC_{s}}}{\widetilde{\omega}_{s,m}^{\prime}\widetilde{\Delta}_{e,m-1}^{\prime}-mC_{s}}, \end{align} where \begin{align} \widetilde{\Delta}_{e}^{\prime}=&\;\left(\Delta_{e}+\Delta_{f}-\beta\right)/\gamma-i/2,\\ \widetilde{\omega}_{s,m}^{\prime}=&\left[\omega_{s}+m\left(\Delta_{f}-\frac{|g_{s}^{\prime}|^2}{\Delta_{e}}\right)-\beta\right]/\kappa-i/2,\\ \widetilde{\Delta}_{e,m-1}^{\prime}=&\left[\Delta_{e}-\beta+\left(m-1\right)\left(\Delta_{f}-\frac{|g_{s}^{\prime}|^{2}}{\Delta_{e}}\right)\right]/\gamma-i/2, \end{align} for $m=1,2$. Upon using the modified parameters, given in Eq.~(\ref{seq:modification}), we obtain $\widetilde{\Delta}_{e}^{\prime}\sim\widetilde{\omega}_{s,m}^{\prime}\sim\widetilde{\Delta}_{e,m-1}^{\prime}\sim-i/2$. This implies that the dynamics is the same as what we have already described for the simplified system without the counter-rotating terms, thereby leading to the same entanglement infidelity. To confirm this, we perform numerical calculations, as shown in Fig.~\ref{Sfigmodified}. Specifically, we plot the entanglement infidelity as a function of rescaled time. Solid curves indicate the results obtained by integrating the effective master equation, whereas dashed and dashed-dotted curves reveal the predictions of the full master equation, respectively, with modified and unmodified detunings.
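The claim that the modified detunings yield $\widetilde{\Delta}_{e}^{\prime}\sim\widetilde{\omega}_{s,m}^{\prime}\sim\widetilde{\Delta}_{e,m-1}^{\prime}\sim-i/2$ can also be checked by direct substitution; the parameter values below follow the figure caption and are otherwise illustrative:

```python
import numpy as np

# Parameters from the figure caption: gamma = 1, kappa = 2/3, C = 20, r_p = 3,
# Omega = 0.5*gamma, Delta_e = 200*|g_s'|.
gamma, kappa, C, r_p, Omega = 1.0, 2.0/3.0, 20.0, 3.0, 0.5
g = np.sqrt(C * kappa * gamma)           # bare coupling from C = g^2/(kappa*gamma)
gp = g * np.sinh(r_p)                    # |g_s'| = g*sinh(r_p)
Delta_e = 200.0 * gp
shift = gp**2 / (2 * Delta_e)            # counter-rotating energy shift |g_s'|^2/(2*Delta_e)
Delta_f = Omega / 2**1.75 + shift        # modified Delta_f (figure caption)
beta = Delta_e + shift                   # from Delta_e = beta - shift
omega_s = Delta_e - Delta_f + 2 * shift  # from Delta_e = omega_s + Delta_f - |g_s'|^2/Delta_e

tD  = (Delta_e + Delta_f - beta) / gamma - 0.5j
tw  = [(omega_s + m * (Delta_f - 2 * shift) - beta) / kappa - 0.5j for m in (1, 2)]
tDm = [(Delta_e - beta + (m - 1) * (Delta_f - 2 * shift)) / gamma - 0.5j for m in (1, 2)]

for z in [tD] + tw + tDm:
    print(z)   # each is close to -i/2, as claimed
```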
These results demonstrate that the detrimental effects of the counter-rotating terms can be strongly suppressed by modifying the external parameters; in particular, as discussed above, this holds for weak driving strengths $\Omega$, which are necessary for the validity of the perturbative treatment used in our approach.
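As a quick numerical illustration of the claim that the modified detunings drive $\widetilde{\Delta}_{e}^{\prime}$, $\widetilde{\omega}_{s,m}^{\prime}$ and $\widetilde{\Delta}_{e,m-1}^{\prime}$ towards $-i/2$, the following sketch evaluates the three dimensionless quantities defined above. All numerical values are illustrative placeholders chosen so that the numerators (approximately) vanish; they are not taken from the text or from Eq.~(\ref{seq:modification}).

```python
# Sketch: check that, with suitably modified parameters, the dimensionless
# quantities defined above all reduce to approximately -i/2.
# Every parameter value below is an illustrative placeholder.

def tilde_delta_e(delta_e, delta_f, beta, gamma):
    """(Delta_e + Delta_f - beta)/gamma - i/2."""
    return (delta_e + delta_f - beta) / gamma - 0.5j

def tilde_omega_s(m, omega_s, delta_f, g_s, delta_e, beta, kappa):
    """[omega_s + m(Delta_f - |g_s|^2/Delta_e) - beta]/kappa - i/2."""
    shift = delta_f - abs(g_s) ** 2 / delta_e
    return (omega_s + m * shift - beta) / kappa - 0.5j

def tilde_delta_em(m, delta_e, delta_f, g_s, beta, gamma):
    """[Delta_e - beta + (m-1)(Delta_f - |g_s|^2/Delta_e)]/gamma - i/2."""
    shift = delta_f - abs(g_s) ** 2 / delta_e
    return (delta_e - beta + (m - 1) * shift) / gamma - 0.5j

# Placeholder regime: large detuning Delta_e, so that the Stark shift
# |g_s|^2/Delta_e is small compared with the linewidths gamma, kappa.
delta_e, g_s, gamma, kappa = 1.0e4, 10.0, 1.0, 1.0
delta_f = abs(g_s) ** 2 / delta_e   # cancels the m-dependent shift
beta = delta_e + delta_f            # cancels the numerator of tilde_delta_e
omega_s = beta                      # cancels the numerator of tilde_omega_s

for m in (1, 2):
    for val in (tilde_delta_e(delta_e, delta_f, beta, gamma),
                tilde_omega_s(m, omega_s, delta_f, g_s, delta_e, beta, kappa),
                tilde_delta_em(m, delta_e, delta_f, g_s, beta, gamma)):
        assert abs(val - (-0.5j)) < 0.05, val
```

With these placeholder choices the $m$-dependent shift $\Delta_{f}-|g_{s}|^{2}/\Delta_{e}$ vanishes exactly, and the residual deviation of $\widetilde{\Delta}_{e,m-1}^{\prime}$ from $-i/2$ is of order $|g_{s}|^{2}/(\Delta_{e}\gamma)$.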
require_relative 'aggregator/control_segment'
require_relative 'aggregator/isa'
require_relative 'aggregator/gs'
require_relative 'aggregator/st'
require_relative 'aggregator/loops'

module Edi835
  class Aggregator
    attr_reader :sets

    def initialize(segments)
      @segments = segments
    end

    def aggregate
      @sets = ISA.new(@segments).sets
    end
  end
end
Kyuss was a stoner rock band from Palm Desert, California. The band is regarded as a founder of the genre and served as a model for numerous other stoner rock bands. Because their sound fit into no existing category, the band itself described it as "desert rock" (today a synonym for stoner rock). Among their stylistic trademarks were down-tuned guitars, which were in part even played through bass amplifiers.

Band history

The band started in 1988 under the name "Katzenjammer", changed it to "Sons of Kyuss" in 1989 and to "Kyuss" in 1991. The first demo album appeared under the band name Sons of Kyuss. The band named itself after a sinister wizard from the "Greyhawk" campaign setting of the "Dungeons & Dragons" role-playing game. At first they played mainly at the "generator parties" (initiated by the circle of friends around Josh Homme) that took place in the desert outside Palm Desert and quickly became famous. The parties owed their name to the fact that the power for the sound equipment out in the desert had to be supplied by generators. The shows were held outside the city because Palm Desert is home to many elderly residents, and the very loud concerts in town would have led to complaints. The original line-up was John Garcia (vocals), Josh Homme (lead guitar), Nick Oliveri (rhythm guitar), Chris Cockrell (bass) and Brant Bjork (drums). During the recording of the demo EP Sons of Kyuss, Oliveri left the band; after its release he took over Cockrell's position on bass and shortly afterwards recorded the band's first album, Wretch, which consists largely of re-recordings of their demo songs. This early work is characterized above all by a high proportion of doom riffs paired with elements of psychedelic rock and is classified as stoner doom.

Within a short time, the already groove-oriented Kyuss stripped away much of the heavy, raw metal and thus shaped the stoner rock of the Palm Desert scene. In 1992 Kyuss released their second album, Blues for the Red Sun, which was produced by Chris Goss (Masters of Reality) and brought them their breakthrough out of the underground scene. Goss would go on to support the band on all subsequent projects, which in the opinion of many critics is also evident in the sound of the following albums. After the release of Blues for the Red Sun, Oliveri left the band again, this time for good, and was replaced by Scott Reeder. In 1993 the band recorded the follow-up, Welcome to Sky Valley, but could not release it until 1994 because of problems with the record label. It was the last album to feature drummer Brant Bjork, who had already picked Alfredo Hernández as his successor. With this final line-up they toured in support of Welcome to Sky Valley before retreating to the studio to produce a follow-up. In mid-1995 their last album, …And the Circus Leaves Town, was released, before the band split up in October 1995. After the break-up, a split EP with Queens of the Stone Age, Josh Homme's new band, and a best-of album were released. Following the dissolution in 1995, John Garcia took part in countless other stoner projects and released, among other things, a four-track EP (special edition: plus five bonus tracks) of his own project Slo Burn. Garcia then sang in the band Unida, which released one album; a second Unida album was completed but never released. Garcia currently sings in the band Hermano, which released its second album in 2004 and its third in 2007.

Parts of the band (above all songwriter and guitarist Josh Homme) later regrouped as Queens of the Stone Age, who also achieved commercial success. Drummer Brant Bjork likewise devoted himself to further music projects after leaving the band in 1994: he played drums for Fu Manchu from 1997 to 2001 and launched a solo career in 1999 that has so far produced nine studio albums. In February 2010 it was announced that singer John Garcia would perform live with Bruno Fevery (guitar), Jacques de Haard (bass) and Rob Snijders (drums) under the banner Garcia Plays Kyuss. Via a vote on Garcia's personal MySpace page, fans were allowed to decide which songs would be played. In 2011, John Garcia, Nick Oliveri and Brant Bjork, supported by Bruno Fevery on guitar, went on tour under the new band name Kyuss Lives!. Because of a legal dispute with Homme, however, they had to change their name to Vista Chino at the end of 2012.

Discography

Studio albums:
1991: Wretch
1992: Blues for the Red Sun
1994: Welcome to Sky Valley
1995: …And the Circus Leaves Town

Singles:
1992: Thong Song
1993: Green Machine
1994: Demon Cleaner
1995: Gardenia
1995: One Inch Man
1996: Into the Void (Black Sabbath cover)

Other releases:
1990: Sons of Kyuss (demo)
1994: Live at the Marquee-Club (promo live EP)
1996: Shine!/Short Term Memory Loss (split with Wool)
1997: Kyuss/Queens of the Stone Age (split with Queens of the Stone Age)
2000: Muchas Gracias: The Best of Kyuss (best-of album)

External links: Kyuss at Last.fm; Kyuss at MusicMoz (in English)
Meet Michael Toscano, partner at Toscano & Wilson and an expert in consumer bankruptcy law. Michael is a St. Louis native and has been a bankruptcy lawyer in the area since 2009. He has helped clients overcome their financial challenges and get back on their feet. Michael can also work with non-profit organizations and small businesses that are experiencing financial difficulties to either remediate their debts or file for bankruptcy in an ethical way. Much of his work involves representing debtors and creditors in cases under Chapter 7 and Chapter 13 of the Bankruptcy Code.
\section{Introduction} An intriguing question in Differential Geometry is whether two minimal surfaces in a Riemannian three-manifold intersect. The roots of this problem can be traced back to Euclid's fifth postulate, whose negation led to the discovery of non-Euclidean geometries. In \cite{hadamard} J. Hadamard showed that on a complete surface with positive curvature every geodesic must intersect every closed geodesic. T. Frankel in \cite{frankel} extended the intersection result of Hadamard showing that minimal hypersurfaces immersed as closed subsets of a Riemannian manifold with positive Ricci curvature intersect provided one of them is compact, see also \cite{petersen_whilhelm}. A class of minimal hypersurfaces of a Riemannian manifold is said to have the intersection property if any two elements of the class intersect unless they are totally geodesic parallel leaves in a local product structure. It was proved by G.P. Bessa, L.P. Jorge and G. Oliveira in \cite{bessa-jorge-oliveira} that the class of complete minimal surfaces with bounded curvature immersed in three-manifolds $N$ with positive Ricci curvature and bounded geometry has the intersection property, while H. Rosenberg \cite{rosenberg-bounded} proved it for the class of complete minimal surfaces immersed with bounded curvature in compact three-manifolds $N$ with positive Ricci curvature. In the Euclidean space $\mathbb{R}^{3}_{\raisepunct{,}}$ as a consequence of the convex hull theorem due to F. Xavier \cite{xavier}, any complete minimal surface with bounded curvature immersed in an open half-space is a plane. Likewise, in \cite{hoffman-meeks} D. Hoffman and W. Meeks proved that any complete minimal surface properly immersed in an open half-space is also a plane. Furthermore, using a separating plane theorem \cite{meeks-simon-yau}, they proved the intersection property for complete properly immersed minimal surfaces of $\mathbb{R}^3$.
In the same vein, Bessa, Jorge, Oliveira \cite{bessa-jorge-oliveira} and Rosenberg \cite{rosenberg-bounded} established a separating plane theorem for minimal surfaces immersed with bounded curvature, which, combined with Xavier's result, yields the intersection property for the class of complete minimal surfaces of $\mathbb{R}^3$ with bounded curvature. These intersection results are known in the literature as {\em half-space theorems}. Half-space theorems in $\mathbb{R}^{3}$ have been established between the classes of complete properly immersed minimal surfaces and complete minimal surfaces with bounded curvature in \cite[Cor.1.4]{bessa-jorge-oliveira}, as well as between the classes of parabolic minimal surfaces and complete minimal surfaces with bounded curvature in \cite[Thm.1]{bessa-jorge-pessoa}. There are also intersection results for minimal surfaces immersed in homogeneous three-spaces in \cite{daniel2009half,daniel2011half} and for surfaces with constant mean curvature $H>0$ immersed in various ambient spaces, see \cite{M,ros2010properly,RR,rosenberg2013half} and references therein. The purpose of this paper is to extend some of this circle of ideas about intersection properties to $1$-surfaces immersed in a complete Riemannian three-manifold $P$ with Ricci curvature bounded from below ${\rm Ric}_{P}\geq -2$, and in particular, $1$-surfaces immersed in the hyperbolic space $\mathbb{H}^3_{\raisepunct{.}}$ One of our motivations is the pioneering work of R. Bryant \cite{Br}, in which it is shown that the geometry of minimal surfaces immersed in $\mathbb{R}^3$ shares many similarities with the geometry of $1$-surfaces immersed in $\mathbb{H}^3_{\raisepunct{.}}$ This connection has been exploited in several works to provide important contributions to the theory of $1$-surfaces of $\mathbb{H}^3$, see for instance \cite{UY1,CHR,KKMS}. Let $N$ be a complete $H$-surface properly immersed in a complete oriented three-manifold $P$.
Let $\nu$ be the unit normal vector field along $N$ such that $\overrightarrow{H}_{\!\!_N} = H\nu$, $H> 0$. A connected component $\Omega$ of $P\backslash N$ is said to be mean convex if the mean curvature vector field of $N$ points towards $\Omega$. The vector $\overrightarrow{H}_{\!\!_N}$ at $p_0 \in N \cap \partial \Omega$ points towards $\Omega$ if, for any sequence $q_n \in \Omega$ with $q_n \rightarrow p \in V \subset N$, $V$ a neighbourhood of $p_0$, we have $q_n = \exp_p(t_n\nu(p))$ for some $0<t_n<\varepsilon$. In the context of surfaces with positive constant mean curvature, the intersection property means that an immersed $H$-surface can not lie in any mean convex component determined by another disjoint $H$-surface. In our first result we establish the intersection property for complete $1$-surfaces with bounded curvature immersed in three-manifolds with Ricci curvature ${\rm Ric}>-2$. This result corresponds to the intersection property proved in \cite{bessa-jorge-oliveira, frankel, rosenberg-bounded} for minimal surfaces with bounded curvature in three-manifolds with ${\rm Ric}>0$. \begin{theorem}\label{intersect}Let $P$ be a complete Riemannian three-manifold with Ricci curvature bounded below by ${\rm Ric}_{_P}>-2$, and let $M$ and $N$ be two complete immersed $1$-surfaces of $P$. If $M$ has bounded curvature and $N$ is compact, then $M$ can not lie in a mean convex component of $P\backslash N$. \end{theorem} \begin{remark} The Ricci curvature assumption in Theorem \ref{intersect} is essential. Indeed, consider the manifold $P=\mathbb{R} \times \mathbb{T}^{2}$ endowed with the metric $d t^{2}+e^{2 t} g$, where $g$ is the standard flat metric of the torus $\mathbb{T}^{2}_{\raisepunct{.}} $ The manifold $P$ has constant sectional curvature $K=-1$, thus its Ricci curvature is $-2$, and the slices $\mathcal{N}_t = \{t\}\times\mathbb{T}^2$ are compact $1$-surfaces embedded in $P$.
Therefore, the slice $M=\mathcal{N}_{t}$ lies in the mean convex component of $P\setminus \mathcal{N}_{s}$ if $t<s$. \end{remark} The proof of Theorem \ref{intersect} relies on the following stability argument for $1$-surfaces, which is a version of \cite[Cor.3]{S} and can be established using ideas contained in the proof of \cite[Thm.2.13]{MPR}. \begin{proposition}\label{MPR-a} There are no complete strongly stable $H$-surfaces with $H\geq 1$ in a three-manifold with $ Ric>-2$. \end{proposition} As an application of Proposition \ref{MPR-a} we have the following counterpart of \cite[Cor.1.6]{bessa-jorge-oliveira} and \cite{rosenberg-bounded} for $1$-surfaces of three-manifolds with Ricci curvature $Ric>-2$. In what follows a manifold is said to have bounded geometry if the sectional curvature is bounded from above and the injectivity radius is bounded away from zero. \begin{theorem}\label{teo2} Let $P$ be a three-manifold with bounded geometry and Ricci curvature $Ric_{_P} > -2$, and let $M$ be a complete $1$-surface with bounded curvature injectively immersed in $P$. Then, \begin{itemize} \item[a)] $M$ is compact if $P$ is compact. \item[b)] $M$ is proper if $P$ is non-compact. \end{itemize} \end{theorem} \begin{remark}Recently W. Meeks and A. Ramos \cite{meeks2019properly} proved that, under certain assumptions on the injectivity radius along the ends, complete immersed surfaces of finite topology with mean curvature bounded above in a hyperbolic three-manifold $N$ with sectional curvatures $K_{N}\leq -a^2\leq 0$ are proper. \end{remark} The hypothesis on the Ricci curvature in Theorem \ref{intersect} can be relaxed to $Ric_{_P}\geq -2$, as can the curvature assumption on $M$ and the compactness of $N$, if one assumes the existence of a minimizing geodesic realizing the distance ${\rm dist}(M,N)$, and this yields a stronger statement, see Theorem \ref{splitting} below. It can be viewed as the analogue for $1$-surfaces of \cite[Thm.3.1]{galloway1991intersections}.
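Since the warped product metric $dt^{2}+e^{2t}g$ recurs in the remark above and in Theorem \ref{splitting}, it may help to record the routine computation (our own addition, not taken from the references) showing that its slices are $1$-surfaces:

```latex
% Slices N_t = {t} x N of the warped metric dt^2 + e^{2t} g.
% The induced metric is g_t = e^{2t} g, and the second fundamental form
% with respect to nu = \partial_t is
\[
  II \;=\; \tfrac{1}{2}\,\partial_t g_t \;=\; e^{2t} g \;=\; g_t ,
\]
% so the shape operator is the identity; hence every slice is totally
% umbilical with constant mean curvature
\[
  H \;=\; \tfrac{1}{2}\operatorname{tr}(\mathrm{id}) \;=\; 1 .
\]
```

This is exactly the totally umbilical, equidistant structure appearing in part b) of Theorem \ref{splitting}.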
\begin{theorem}\label{splitting} Let $P$ be a complete three-manifold with $Ric_P\geq -2$. Let $M$ and $N$ be complete immersed $1$-surfaces of $P$ that do not intersect and such that the distance ${\rm dist}(M,N)$ is realized. If $N$ is proper and $M$ lies in a mean convex component of $P\backslash N$, then \begin{itemize} \item[a)] $M$ and $N$ are embedded totally umbilical equidistant $1$-surfaces. \item[b)] $M$ and $N$ bound an open connected region in $P$ whose closure is isometric to $[0, l] \times N$, endowed with the metric $dt^2+e^{2t}g$, where $g$ denotes the metric of $N$. If $M$ and $N$ are compact, then each of them is separating, and the mean convex component of $P\backslash N$ is isometric to $[0,+\infty) \times N$ with the same metric as before. In particular, $P$ can not be compact. \end{itemize} \end{theorem} The second goal of this work is to prove versions of the strong half-space theorem for complete $1$-surfaces immersed in hyperbolic space $\mathbb{H}^3$. We start with a version for $1$-surfaces immersed in $\mathbb{H}^3$ of the strong half-space theorem \cite[Thm.1.4]{bessa-jorge-oliveira} between the classes of complete minimal surfaces with bounded curvature and complete minimal surfaces properly immersed in $\mathbb{R}^{3}$. \begin{theorem}\label{half_bc} Let $M$ be a complete $1$-surface immersed in $\mathbb{H}^3$ with bounded curvature and let $N$ be a complete $1$-surface properly immersed in $\mathbb{H}^3$. If $N$ is non-horospherical, then $M$ can not lie in any mean convex component of $\mathbb{H}^3\backslash N$. \end{theorem} \begin{remark} It should be noticed that if $N$ is a non-horospherical properly embedded $1$-surface of $\mathbb{H}^3_{\raisepunct{,}}$ then each annular end is asymptotic to a catenoid cousin \cite{CHR}. This property contrasts with the case of minimal surfaces properly embedded in $\mathbb{R}^3_{\raisepunct{,}}$ where an annular end can be asymptotic to a planar or a catenoid end.
\end{remark} Theorem \ref{half_bc} yields a version of the beautiful Catenoid theorem due to Rodrigues and Rosenberg \cite{RR}, improved by Mazet in \cite{M}, for immersed $1$-surfaces with bounded curvature. \begin{corollary}\label{corollary_catenoid} Let $M$ be a complete $1$-surface immersed in $\mathbb{H}^3$ with bounded curvature, and let $C$ be a Catenoid cousin of $\mathbb{H}^3_{\raisepunct{.}}$ Then, $M$ can not lie on the mean convex side of $\mathbb{H}^3 \backslash C$. \end{corollary} In \cite[Thm.1.1]{bessa-jorge-pessoa} the authors proved a new version of the strong half-space theorem between the classes of complete minimal surfaces with bounded curvature and of parabolic minimal surfaces of $\mathbb{R}^{3}$. Recall that a manifold is said to be parabolic (recurrent) if the standard Brownian motion visits any open set at arbitrarily large moments of time with probability one, and it is transient otherwise. The simplest examples of parabolic $1$-surfaces in $\mathbb{H}^3$ are the immersions conformal to $\mathbb{C}$ or $\mathbb{C}\backslash\{0\}$, for instance horospheres, Enneper and Catenoid cousins (see \cite[Sec.9.3]{GG} for other examples). Our next contribution is a version of \cite[Thm.1.1]{bessa-jorge-pessoa} for $1$-surfaces immersed in $\mathbb{H}^3_{\raisepunct{.}}$ \begin{theorem}\label{parabolic_half} Let $M$ be a parabolic $1$-surface immersed in $\mathbb{H}^3$ and let $N$ be a complete $1$-surface properly immersed in $\mathbb{H}^3$ with bounded curvature. Then, $M$ can not lie in a mean convex component of $\mathbb{H}^3\backslash N$, unless they are parallel $1$-surfaces.\footnote{In this case $N$ is a horosphere and $M$ could be a horosphere minus a set of zero capacity.} \end{theorem} \begin{remark}\label{rmk_mari} E. Gama, J. Lira, L. Mari and A.
Medeiros in \cite{GLM}, generalizing results from \cite{RR, ros2010properly}, proved a theorem related to Theorem \ref{parabolic_half} in the case of a parabolic surface immersed into a region $\Omega$ whose boundary $\partial \Omega$ has bounded bending from outwards, remarkably including the case of a smooth properly embedded $1$-surface. \end{remark} A careful analysis of the proof of Theorem \ref{parabolic_half} shows that we can extend it to surfaces with variable mean curvature provided $\sup H_{M} \geq \inf H_{N}$. This last inequality is sufficient to apply the Liouville theorem for bounded subharmonic functions, which is equivalent to parabolicity \cite[Thm.5.1]{grigoryan}. On the other hand, the strict inequality $\sup H_{M} > \inf H_{N}$ allows us to prove a version of Theorem \ref{parabolic_half} assuming that $M$ is only stochastically complete, compare with \cite[Thm.1.7]{bessa-jorge-pessoa}. A Riemannian manifold $M$ is said to be {\it stochastically complete} if for some $(x, t)\in M\times (0, +\infty)$ it holds that $\int_Mp(x,y,t)dy=1$, where $p(x, y, t)$ is the heat kernel of the Laplace operator. Stochastic completeness is equivalent to the following Liouville property: for all $\lambda>0$, any bounded, non-negative solution of the subequation $\Delta u \geq \lambda u$ is identically zero. In particular, every parabolic manifold is stochastically complete. In the following theorem we also give a version of \cite[Thm.1.9]{bessa-jorge-pessoa} for surfaces immersed in $\mathbb{H}^{3}_{\raisepunct{.}}$ \begin{theorem}\label{stochastic_half-1-surfaces} Let $N$ be a complete surface properly immersed in $\mathbb{H}^3$ with bounded curvature and let $M$ be a stochastically complete surface of $\mathbb{H}^3$. \begin{itemize} \item[i)] If $\sup H_{M} < \inf H_{N}$, then $M$ can not lie in a mean convex component of $\mathbb{H}^3\backslash N$. \item[ii)] If $\sup H_{M} = \inf H_{N} > 1$, then $d(M,N) = 0$.
\end{itemize} \end{theorem} It is well-understood in the literature that strong half-space theorems give rise to a Maximum Principle at Infinity involving surfaces with non-empty boundary. It can be viewed as a generalization of Hopf's Maximum Principle for surfaces with constant mean curvature and it has been investigated in several works, see \cite{langevin-rosenberg,meeks-rosenberg-mp,soret-mp} for minimal surfaces, \cite{lima,meeks-lima} for $H$-surfaces, and \cite{GLM,meeks-rosenberg,ros2010properly} for further generalizations. In \cite[Thm.4.2]{meeks-lima}, under a suitable hypothesis of ideal contact at infinity, a Maximum Principle at Infinity was established for proper surfaces of $\mathbb{H}^3$ with bounded mean curvature, but not both equal to $1$. In \cite[Thm.1]{GLM} a Maximum Principle at Infinity was proved for parabolic $1$-surfaces with boundary immersed into a region $\Omega$ of a Riemannian manifold $P$ with Ricci curvature bounded below, ${\rm Ric}\geq -2$, and whose boundary $\partial \Omega$ has bounded curvature and bounded bending from outwards (see Remark \ref{rmk_mari}). Recall that a surface $M$ with non-empty boundary $\partial M$ is said to be parabolic if the absorbed Brownian motion is recurrent, that is, any Brownian path starting from an interior point of $M$ reaches the boundary (and dies) in a finite time with probability $1$ (see \cite{perez-lopez}). From a potential-theoretic viewpoint \cite[Prop.10]{pessoa-pigola-setti}, parabolicity is equivalent to the following Ahlfors maximum principle: every weak bounded solution $u \in C^0(M)\cap W^{1,2}_{\text{loc}}({\rm int} M)$ of the subequation $\triangle u \geq 0$ in ${\rm int} M$ must satisfy \begin{eqnarray*} \sup_{M} u = \sup_{\partial M} u.
\end{eqnarray*} It should be remarked that the usual definition of parabolicity for surfaces with boundary, for which the Brownian motion reflects at $\partial M$, is stronger than the above notion (see \cite{impera-pigola-setti,pessoa-pigola-setti}). In our last result we provide the hyperbolic version of the Maximum Principle at Infinity proved in \cite[Thm.1.11]{bessa-jorge-pessoa} for parabolic $1$-surfaces. \begin{theorem}\label{maximum-principle-infinity} Let $M$ and $N$ be disjoint immersed surfaces of $\mathbb{H}^3_{\raisepunct{.}}$ Assume $M$ is parabolic with non-empty boundary $\partial M$, and $N$ is a complete surface properly immersed with bounded curvature. If $\sup_{\!_M}\vert H_{\!_M}\vert \leq \inf_{\!_N}H_{\!_N}>0$ and $M$ lies in a mean convex component of $\mathbb{H}^3\backslash N$, then \[ {\rm dist}(M,N) = {\rm dist}(\partial M,N). \] \end{theorem} \noindent \textbf{Acknowledgements.} This work was partially supported by the Alexander von Humboldt Foundation and Capes-Brazil (Finance Code 001), and by CNPq-Brazil, Grants 303057/2018-1, 311803/2019-9 and 306738/2019-8. The third author is grateful to Professor Alexander Grigor'yan and the Faculty of Mathematics at the Universit\"at Bielefeld for their warm hospitality. \section{Strong Stability of $H$-surfaces}\label{preli} Let $P$ be a Riemannian three-manifold and let $\phi \colon M\to P$ be a surface isometrically immersed in $P$. Let $\Phi\colon (-\epsilon,\epsilon)\times M \to P$ be a variation of $M$ with $\Phi_t(p)=\Phi(t,p)$ and $\Phi (0,p) = \phi(p)$, where each $\Phi_t$ is an immersion of $M$ into $P$ for every $0< \vert t\vert <\epsilon$. For each $t\in (-\epsilon, \epsilon) $ we have the area function $A(t) = {\rm Area}(\Phi_t)$ and the volume function $V(t)$ induced by the immersion $\Phi$ given by \[V(t)=\int_{[0,t]\times M}\Phi^*dV, \,\footnote{We agree that $[0,t]=[t,0]$ if $t<0$} \] which measures the signed volume enclosed between $\Phi_0=\phi$ and $\Phi_t$.
Let us define the functional $\mathcal J$ by setting $\mathcal J(t) = A(t) - 2H V(t)$. The variational vector field $X$ associated to $\Phi$ is defined by $X= {\partial_t \Phi}_{|_{t=0}} = \psi \nu$, for some $\psi \in C^{\infty}(M)$. It is not difficult to check that $M$ is a stationary point for $\mathcal{J}$ if and only if it has constant mean curvature $H$. If we assume that $M$ is stationary, then the second variation formula of $\mathcal J(t)$ is given by \begin{eqnarray*} Q(\psi,\psi) &=& -\int_{M}\psi L\psi\;d\sigma \\[0.2cm] &=& \int_{M} [|\nabla\psi|^2-(|II|^2+Ric_{_P}(\nu,\nu))\psi^2]\;d\sigma \quad \forall \psi \in C^{\infty}_0(M), \end{eqnarray*} where $Ric_{_P}$ is the Ricci curvature of $P$, $II$ is the second fundamental form of $M$ and $L=\Delta+|II|^2+Ric_{_P}(\nu,\nu)$ is its Jacobi operator. An $H$-surface $M$ is said to be strongly stable if $Q(\psi,\psi)\geq 0$. This notion is equivalent to the positivity of the first eigenvalue of $L$ and to the existence of a positive smooth solution $u$ for the equation $L u = 0$ (see \cite{FC}). For $H$-surfaces there is also a weaker notion of stability associated to the isoperimetric problem, that is, minimizing the area of $M$ while keeping enclosed a constant volume. An $H$-surface is stable if $Q(\psi,\psi) \geq 0$ for every test $\psi \in C^{\infty}_0(M)$ satisfying $\int_{M}\psi d\sigma = 0$. Hence, strong stability implies stability, but the converse does not hold. In this section we are interested in studying the strong stability of leaves from the limit set of surfaces with bounded curvature. Let $\varphi \colon M \to P$ be a complete surface immersed into a complete three-manifold $P$. The limit set of $\varphi$, denoted by $\mathcal{L}_{\varphi}$, is the set $$ \mathcal{L}_{\varphi}=\{q\in P\colon \exists \{p_k\}\subset M,\mbox{dist}_M(p_0,p_k)\to\infty\mbox{ and } \mbox{dist}_P(q,\varphi(p_k))\to 0\}. $$ It is plain to see that if $M$ is properly immersed, then $\mathcal{L}_{\varphi}=\emptyset$.
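Before proceeding, it may be useful to illustrate strong stability with a standard example (our own addition): a horosphere in $\mathbb{H}^{3}$ is totally umbilical with $H=1$, so $|II|^{2}=2H^{2}=2$, while $\operatorname{Ric}_{\mathbb{H}^{3}}(\nu,\nu)=-2$; hence its Jacobi operator reduces to

```latex
\[
  L \;=\; \Delta + |II|^{2} + \operatorname{Ric}_{\mathbb{H}^{3}}(\nu,\nu)
    \;=\; \Delta + 2 - 2 \;=\; \Delta ,
\]
```

and the positive constant function $u=1$ satisfies $Lu=0$, so the horosphere is strongly stable. This also shows that the strict inequality $Ric>-2$ in Proposition \ref{MPR-a} cannot be weakened to $Ric\geq -2$.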
An important tool in our study is the maximum principle for $H$-surfaces. Suppose $M_1$ and $M_2$ are two smooth oriented surfaces of $P$ which are tangent at a point $p\in M_1\cap M_2$ and have at $p$ the same oriented normal $\nu$. The point $p$ is called a point of common tangency. Around $p$ let us express $M_1$ and $M_2$ as graphs of functions $u_1$ and $u_2$ over the common tangent plane through $p$. We shall say that $M_1$ lies above $M_2$ near $p$ if $u_1 \geq u_2$ in a neighborhood of $p$. We can now state the following \textit{maximum principle} for $H$-surfaces (cf. \cite{coskunuzer}). \begin{lemma}\label{maximum-principle} Let $M_1$ and $M_2$ be oriented surfaces immersed in a complete three-manifold $P$. Assume $M_1$ and $M_2$ have a point of common tangency $p$ and let $H_1$ and $H_2$ be their respective mean curvature functions with respect to the same normal. If $H_1 \leq H_2$ near $p$, then $M_1$ can not lie above $M_2$, unless $M_1$ coincides with $M_2$ in a neighborhood of $p$. \end{lemma} Let $M_1$ and $M_2$ be two $1$-surfaces immersed in a complete three-manifold $P$. A point $p\in M_1\cap M_2$ of common tangency is said to be a kissing point if $M_1$ lies above $M_2$ but they do not coincide in a neighborhood of $p$, that is, the mean curvature vectors of $M_1$ and $M_2$ at $p$ point to opposite sides. Unlike the minimal case, a $1$-surface can have a tangential self-intersection at a kissing point $p$. Such a point is called a self-touching point. This means that the $1$-surface is immersed but does not cross itself. In the following result we generalize \cite[Thm.1.5]{bessa-jorge-oliveira} and, although we state it for dimension three, it holds for any dimension. \begin{theorem} \label{Theobessa} Let $P$ be a complete three-manifold with bounded geometry and Ricci curvature $Ric_{_P}\geq -2$. Let $\varphi \colon M \to P$ be a complete $1$-surface immersed in $P$ with Gaussian curvature bounded from below.
Then, one of the following conditions holds. \begin{itemize} \item[(a)]$\varphi$ is proper; \item[(b)] Every complete leaf $S\subset \mathcal{L}_{\varphi}$ whose intersection with $\varphi(M)$ is either empty or only admits kissing points is strongly stable. \end{itemize} \end{theorem} \begin{proof} Suppose $\varphi$ is a non-proper immersion and let $p\in \mathcal{L}_{\varphi}$. \vspace{0.2cm} \noindent\textbf{Claim 1:} There exists a sequence of disks $\{D_k\}$ in $P$ converging uniformly to a disk $D\subset \mathcal{L}_{\varphi}$ containing $p$. Moreover, the disk $D$ can be extended to a complete $1$-surface $S\subset \mathcal{L}_{\varphi}$ passing through $p$ with bounded curvature and $H=1$. For the sake of completeness we will briefly outline a proof for this claim. Since $\varphi$ is non-proper, we can take a divergent sequence $x_k \in M$ such that $p_k = \varphi(x_k)$ converges to the point $p \in \mathcal{L}_{\varphi}$. Since $\varphi(M)$ is a $1$-surface with bounded curvature, and $P$ has bounded geometry, there is a uniform bound on the second fundamental form of $\varphi(M)$. Therefore, there exists a family of disks $D_k(\delta) \subset T_{p_k} \varphi(M)$, centered at the origin and with uniform radius $\delta > 0$, such that $\varphi(M)$ is locally described as the graph of a function $u_k$ which enjoys a $C^1$ bound independent of $p_k \in \varphi(M)$. We pick a subsequence of $p_k$, still called $p_k$, such that $T_{p_k}\varphi(M)$ converges to a vector subspace $V \subset T_p P$, determined by a fixed unit normal vector field $\nu$, with the property that the sign between the mean curvature vector field $\overrightarrow{H}(x_k)$ and $\nu$ is fixed. For $k$ sufficiently large, the local graphs over $D_k(\delta)$ are also graphs on a small disk $D(\delta/2) \subset V$. The classical quasilinear PDE theory asserts that these graphs converge to a limit $1$-graph $S$ tangent to $V$ at $p$.
Since all boundary points of $S$ are in $\mathcal{L}_{\varphi}$, reasoning as above, we can extend $S$ to a geodesically complete, oriented leaf contained in $\mathcal{L}_{\varphi}$ with bounded curvature, also denoted by $S$ (see also \cite{Ronaldo}). To prove assertion $(b)$ we argue along lines similar to those of \cite[Thm.1.5]{bessa-jorge-oliveira}. The argument is inspired by \cite[Thm.1]{R}. Let $S\subset \mathcal{L}_{\varphi}$ be the complete $1$-surface with bounded curvature passing through $p$ constructed in {\bf Claim 1}. Since the intersection $S\cap \varphi(M)$ is either empty or only admits kissing points, $S$ has no transversal self-intersection. Moreover, at possible tangential self-intersection points the maximum principle (Lemma \ref{maximum-principle}) implies that the mean curvature vector field along $S$ must point in opposite directions, thus $S$ admits only self-touching points. Let $C \subset S$ be a compact proper subset of $S$ and let $T^+_{\varepsilon}(C)$ be the oriented $\varepsilon$-tubular neighborhood of $C$ in $P$, with respect to the mean curvature vector field of $S$. For some $\varepsilon>0$, depending on the curvature bounds of $M$ and $P$, the $\varepsilon$-tube $T^+_{\varepsilon}(C)$ is embedded. Consider a sequence of compact subsets $C_k \subset \varphi(M)$ converging uniformly to $C$. From our assumption on $\varphi(M)\cap S$, even in the case that $C$ contains a kissing point between $\varphi(M)$ and $S$, or a self-touching point of $S$, we can guarantee that, up to a subsequence, $C_k$ converges to $C$ on one side of $C$, that is, inside $T^+_{\varepsilon}(C)$. Let us denote by $\nu$ the continuous unit normal vector field along $S$ pointing towards $C_k$. By the construction of $C$, the mean curvature vector fields of $C_k$ point in the same direction as the mean curvature vector field of $C$, since $C_k$ converges uniformly to $C$ from one side. \vspace{2mm} \noindent\textbf{Claim 2:} $C$ is strongly stable.
To prove the claim, we take a compact $\widetilde C$ properly containing $C$. If the first eigenvalue of the Jacobi operator $L=\Delta+|II|^2+Ric_{\!_P}(\nu,\nu)$ in $\widetilde C$ is non-negative, then $\widetilde{C}$ is strongly stable and we are done. Therefore, we can assume that $\lambda_1^{L}(\widetilde{C})< 0$, and in this case, there exists a smooth function $u$ on $\widetilde C$ satisfying \[ \left\lbrace \begin{array}{rl} Lu =1 & \text { in}\ \ \widetilde C, \\[0.2cm] u =0 & \text { on}\ \ \partial \widetilde C. \end{array}\right. \] Consider the variation $\widetilde{C}(t)=\{\exp_x(tu(x) \nu) \colon x\in \widetilde C\}$ for $-\varepsilon<t<\varepsilon$, and denote by $H(t)$ its mean curvature function. The mean curvature $H(t)$ evolves, according to \cite[Thm.3.2]{HP}, as $$ H'(0)=\frac{1}{2}Lu=\frac{1}{2}\,\,\mbox{ in } \widetilde C. $$ Recalling that $H(0)=1,$ we have $H(t)>1$ for every $t\in(0,\varepsilon')$, for some $0<\varepsilon' < \varepsilon$. If $u$ is positive at some point of $\mbox{int}(\widetilde C)$, then there is a small enough $t$ such that $\widetilde C (t)$ has a tangency point with some $C_k$, which is not possible by the maximum principle (Lemma \ref{maximum-principle}). We may conclude that $u \leq 0$. Similarly, if $u\leq 0$ and attains its maximum at some interior point of $\widetilde C$, then we conclude that $u\equiv 0$, which is impossible since $L u = 1$. Therefore, $u < 0$ in $\mbox{int}(\widetilde C)$ and it vanishes on the boundary $\partial \widetilde C$. Set $w = -u$ and consider $v$ to be a positive first eigenfunction of $C$, that is, $v$ is the solution of the problem \[ \left\lbrace \begin{array}{rl} Lv +\lambda_{1}^{L}(C) v = 0 & \text { in}\ \ C, \\ v = 0 & \text { on}\ \partial C. \end{array}\right. \] Define the function $h=w-\tau v$ in $C$. We can choose $\tau$ such that $h\geq 0$ and $h(p)=0$ for some $p\in\mbox{int}(C) $. Suppose by contradiction that $\lambda_{1}^{L}(C)<0$.
Then, $$ Lh=Lw-\tau Lv < \tau \lambda_{1}^{L}(C) v\leq 0 \ \ \mbox{in } C. $$ Therefore $h$ is non-negative, satisfies $Lh<0$ in $C$, and attains its minimum $h(p)=0$ at an interior point. By the strong maximum principle $h$ vanishes identically, a contradiction. Hence $\lambda_{1}^{L}(C)\geq 0$ for every compact $C\subset S$ and thus $S$ is strongly stable. \end{proof} Keeping the hypotheses of Theorem \ref{Theobessa} we have the following corollary. \begin{corollary}\label{corbessa} If $S$ is a compact leaf of $\mathcal{L}_{\varphi}$, then $S$ is totally umbilical and $Ric_{P}(\nu,\nu)=-2.$ \end{corollary} \begin{proof}Since $S$ is strongly stable, by a result of D. Fischer-Colbrie \cite{FC}, there exists a positive solution $u$ of the Jacobi equation $Lu =0$. Integrating over the closed surface $S$ and using that $\int_S \Delta u=0$, we obtain \begin{eqnarray*} \int_S (|II|^2+Ric_{P}(\nu,\nu))u=0. \end{eqnarray*} Since $Ric_{\!_P}\geq -2$ and $|II|^2\geq 2H^2=2,$ the integrand is non-negative and, as $u>0$, must vanish identically. We conclude that $S$ is totally umbilical and $Ric_{P}(\nu,\nu)=-2.$ \end{proof} We are now going to prove Proposition \ref{MPR-a} stated in the Introduction. \begin{proposition}\label{MPR} There are no complete strongly stable $H$-surfaces with $H\geq 1$ immersed in a three-manifold with Ricci curvature $Ric_{\!_P}> -2$. \end{proposition} \begin{proof} Suppose by contradiction that there exists a complete strongly stable $H$-surface $M$ with $H\geq 1$. We assume that $M$ is non-compact, otherwise the constant function $1$ in the stability inequality would give a contradiction. Let $x_0\in M$ and $R>0$. It follows from the proof of item $4$ in \cite[Thm.2.13]{MPR} that we can find a constant $C>0$ such that \begin{equation}\label{stabmpr} 0<\int_{M}(|II|^2+Ric_{P}(\nu,\nu))f^2\leq \frac{C}{\log R}\raisepunct{,} \end{equation} where $f(q)=\varphi(r)$ is a radial logarithmic cut-off function given by \[ \varphi(r)=\left\{ \begin{array}{ccc}1 & \text { if } & 0 \leq r \leq 1, \\[0.2cm] \displaystyle 1-\frac{\log r}{\log R} & \text { if } & 1 \leq r \leq R, \\[0.3cm] 0 & \text { if } & R \leq r.\end{array}\right.
\] Above $r(q)={\rm dist}(x_0,q)$ denotes the intrinsic distance from $q$ to $x_0$. The right-hand side of \eqref{stabmpr} goes to $0$ as $R$ tends to infinity, while the integrand is strictly positive. This leads to a contradiction. \end{proof} The following result is a straightforward consequence of Proposition \ref{MPR}. \begin{corollary} Let $M$ and $N$ be two disjoint $1$-surfaces properly embedded in a Riemannian three-manifold $P$ with Ricci curvature ${\rm Ric}_P > -2$. Then, $M$ and $N$ cannot bound a mean convex component between them. \end{corollary} \begin{proof} If $\Omega$ is a mean convex component whose boundary is the union of $M$ and $N$, then by the proof of \cite[Thm.3]{RR} there is a strongly stable $1$-surface in $\Omega$, contradicting Proposition \ref{MPR}. \end{proof} \section{Proof of Theorems \ref{intersect}, \ref{teo2} and \ref{splitting}} In this section we present the proofs of the results that are consequences of Theorem \ref{Theobessa} and Proposition \ref{MPR}. Although this ordering may seem unnatural, for reasons that will become clear we leave the proof of Theorem \ref{intersect} to the last part of this section. \subsection{Proof of Theorem \ref{teo2}} To prove item $a)$ we assume by contradiction that $\varphi\colon M\to P$ is a complete non-compact $1$-surface injectively immersed with bounded Gaussian curvature in $P$. First, we observe that since $M$ is non-compact and $P$ is compact, $\mathcal{L}_{\varphi}\neq \emptyset$. Moreover, there exists $\epsilon>0$, depending on the second fundamental form of $M$ and on the bounds of the geometry of $P$, such that $\varphi^{-1}(B_\epsilon^{P}(q))$ is a countable union of disjoint disks $D_i(\delta)\subset M$, with $\delta=\delta(\epsilon)>0$, for every $q\in \mathcal{L}_{\varphi}$, see \cite[Lem.1--3]{jorge-xavier5} and their proofs.
We claim that $\varphi(M)$ cannot lie in $\mathcal{L}_\varphi$. Otherwise, we could pick a point $q\in \varphi(M) \subset \mathcal{L}_\varphi$. As in {\bf Claim 1} of the proof of Theorem \ref{Theobessa} there is a family of disjoint disks $D_j\subset \varphi(M)$ converging to a disk $D_{\infty}\subset \mathcal{L}_\varphi$ with $q \in D_{\infty}$. Take a minimizing geodesic $\gamma \colon [0, \epsilon)\to P$ with $\gamma(0)=q$ and $\gamma'(0)\perp T_qD_{\infty}$ pointing towards $D_j$. Since each point $q_j\in \gamma([0, \epsilon))\cap D_j\neq \emptyset$ belongs to $\mathcal{L}_\varphi$, it must be an accumulation point of $ \Gamma=\gamma([0, \epsilon))\cap \varphi (M)$. Therefore $\Gamma$ is a perfect set and $\varphi^{-1}(B_\epsilon^{P}(q))$ is uncountable, a contradiction. Now, let $S \subset \mathcal{L}_{\varphi}$ be a complete leaf passing through a point $p \in P$. If $S$ intersected $\varphi(M)$ transversally, then $\varphi $ would not be injectively immersed. Thus $S$ can intersect $\varphi (M)$ only tangentially. Furthermore, such an intersection point must be a kissing point, for if the mean curvature vector fields of $S$ and $\varphi(M)$ coincided, then by the maximum principle $\varphi(M)= S \subset \mathcal{L}_{\varphi}$, which gives a contradiction. In any case we conclude that $S$ may intersect $\varphi(M)$ only at kissing points, and by Theorem \ref{Theobessa} it must be strongly stable. This contradicts Proposition \ref{MPR}. For the proof of item $b)$ we let $P$ be a non-compact manifold and assume by contradiction that $M$ is non-proper, that is, $\mathcal{L}_{\varphi}\neq\emptyset$. Arguing as above we obtain a complete leaf $S \subset \mathcal{L}_{\varphi}$ whose intersection with $\varphi(M)$ is either empty or only contains kissing points.
Again, $S$ is strongly stable by Theorem \ref{Theobessa} and this leads to a contradiction with Proposition \ref{MPR}. \subsection{Proof of Theorem \ref{splitting}} Let $\gamma:[0,l] \rightarrow P$ be a minimizing geodesic realizing the distance between $M$ and $N$ with initial data $x_{1}=\gamma(0) \in N$ and $x_{2}=\gamma(l) \in M$. By the first variation formula of arc-length the geodesic $\gamma$ must intersect $M$ and $N$ orthogonally. We define the normal exponential map $\Phi:[0, l] \times U_{1}\to P$ by $\Phi(t, x)=\exp _{x} (t\nu)$, where $U_{1}$ is a neighborhood of $N$ containing $x_1$ and $\nu$ is the normal vector field coinciding with $\gamma^{\prime}(0)$. Since $M$ is a $1$-surface and $\gamma(t)=\Phi\left(t, x_{1}\right)$ realizes the distance between $M$ and $N$, the differential of $\Phi$ at $\left(l, x_{1}\right)$ is non-singular. Then, up to shrinking $U_{1}$, we can produce a foliation in a tubular neighborhood of $N$ given by regularly embedded surfaces $V_{t}=\Phi\left(t, U_{1}\right)$. It is now standard that the unit tangent vector field to the normal geodesics $\nu = \Phi_{*}(\partial_t)$ is parallel and each $V_t$ is equidistant to $U_{1}.$ Recalling that $M$ lies in a mean convex component of $P\backslash N$, the mean curvatures $ H(t)$ of these surfaces satisfy (see Lemma \ref{gray_lemmata}) \begin{eqnarray*} 2H'(t)&=& Ric_P(\nu,\nu)+|II(t)|^{2}, \end{eqnarray*} where $II(t)$ is the second fundamental form of $V_{t}$. By Newton's inequality the function $H(t)$ satisfies $H'(t) \geq H^2(t) - 1$ with $H(0) = 1$. Using the Riccati comparison theorem we conclude that $H(t) \geq 1$ and thus $H'(t) \geq 0$. The surface $V_{l}=\Phi(l,U_{1})$ must be tangent to a neighborhood $U_2 \subset M$ containing $x_{2}$. Since $d(U_{1}, V_{l})=l$, we have that $V_{l}$ stays below $U_{2}$ and by Lemma \ref{maximum-principle} they must coincide near $x_2$. This means that $H(t) \equiv 1$.
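For the reader's convenience, we spell out the two elementary steps used in the preceding comparison argument. By Newton's inequality,
$$ |II(t)|^{2}=\kappa_1^2(t)+\kappa_2^2(t)\geq \frac{\left(\kappa_1(t)+\kappa_2(t)\right)^{2}}{2}=2H^{2}(t), $$
so that, together with $Ric_P\geq -2$, the evolution equation gives $2H'(t)\geq -2+2H^{2}(t)$, that is, $H'(t)\geq H^{2}(t)-1$. Since the constant function $h(t)\equiv 1$ solves the comparison equation $h'=h^{2}-1$ with $h(0)=H(0)=1$, the Riccati comparison theorem yields $H(t)\geq h(t)=1$ for all $t\in[0,l]$.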
Now, if another piece of $M$ touches $U_{2}$, then they must coincide, since $U_{2}\subset V_{l}$. Observing that the distance from $M$ to $N$ is also realized at the boundary points of $V_l$, we can apply the preceding argument to continue $V_l$ parallel to $N$, showing that $M$ and $N$ are equidistant everywhere, unless they intersect and, in this case, they are equal. Since the foliation $V_t$ is given by $1$-surfaces we have $$ Ric_P(\nu,\nu)=-2\quad \mbox{and}\quad|II(t)|^{2}=2. $$ Therefore, $M$ and $N$ are embedded and totally umbilical. To prove item $b)$ we let $\Omega$ be the connected component of $P$ whose boundary contains $M$ and $N$. Then, $\Omega$ is foliated by umbilical surfaces. The induced metric on each leaf evolves as $ g'(t)=2 g(t)$ (see \cite{HP}). Therefore, the metric induced by $\Phi(t, x)$ on $[0,l] \times N$ is given by $d t^{2}+e^{2 t} g$, where $g$ is the metric of $N$. Finally, suppose by contradiction that $N$ is compact, but not separating. Then we argue along similar lines to the proof of \cite[Thm.2.5]{CF} in order to construct a cyclic cover $\hat P$ of $P$. Since $N$ is two-sided, we can define a smooth function on $P\backslash N$ which is equal to $0$ on $N$ and in a neighborhood of one side of $N$, and equal to $1$ in a neighborhood of the other side of $N$. By passing to the quotient $\bmod\; \mathbb{Z}$ we obtain a non-constant smooth function $$ f: P \rightarrow \mathbb{R} / \mathbb{Z}=\mathbb{S}^{1}. $$ Let $\tilde{P}$ be the universal cover of $P$ and $f_{*} \colon \pi_1(P) \to \mathbb{Z}$ be the induced map on the fundamental groups. Then $\hat{P}=\tilde{P} / \mbox{ker} f_{*}$ is a cyclic cover of $P$ and the preimage of $N$ under the projection $\pi: \hat{P} \rightarrow P$ divides $\hat{P}$ into two infinite parts.
Choosing the component with adequate normal direction, say $\Omega$, it follows from \cite[Lemma 1]{schoen1982lower} that $(n-1)\operatorname{Vol}(\Omega) \leq \operatorname{Vol}(\partial \Omega)<\infty,$ which is a contradiction. Thus $N$ is separating. The last assertion follows from the splitting theorem \cite[Thm.2]{croke1992warped}. \subsection{Proof of Theorem \ref{intersect}} Let $M$, $N$ be complete $1$-surfaces immersed in a compact three-manifold $P$ with Ricci curvature ${\rm Ric}_{_P}> -2$. Recall that $N$ is proper and $M$ has bounded curvature. Suppose by contradiction that $M$ lies in a mean convex component $\Omega$ of $P\backslash N$. Given a point $p \in \overline{M}$ let $S \subset \mathcal{L}_\varphi$ be the complete $1$-leaf immersed in $P$ with bounded curvature and passing through $p$, given by {\bf Claim 1} in the proof of Theorem \ref{Theobessa}. \vspace{0.2cm} \noindent \textbf{Case 1:} $ \overline{M} \cap N\neq \emptyset.$ \vspace{0.2cm} Let $p \in \overline{M} \cap N$ and $S \subset \mathcal{L}_\varphi$ be the leaf passing through $p$. Recalling that $S$ is obtained as a uniform limit of disks with radius uniformly bounded from below, we may assume that $S$ intersects $N$ tangentially, otherwise some disk of $M$ would intersect $N$ transversally, contradicting $M\cap N = \emptyset$. Moreover, by the maximum principle (Lemma \ref{maximum-principle}) $S$ must coincide with $N$ in a neighborhood $U$ of $p$. Thus, repeating the above argument with points of $\partial U$ we can show that $S \subset N$. We have proved that $S$ is a $1$-surface, which is a leaf from $\mathcal{L}_\varphi$ satisfying $S\cap \varphi(M) = \emptyset$. Then, by item $(b)$ of Theorem \ref{Theobessa} it must be strongly stable.
However, this contradicts Proposition \ref{MPR}. \vspace{0.2cm} \noindent\textbf{Case 2:} $\overline{M}\cap N= \emptyset.$ \vspace{0.2cm} Since $N$ is compact there exist points $p \in \overline{M}$ and $q \in N$ such that $d(\overline{M}, N)= d(p,q)= l$, for some $l > 0$. Let $S \subset \mathcal{L}_\varphi$ be the $1$-surface immersed with bounded curvature and passing through $p$. In this case, there exists a minimizing geodesic $\sigma \colon [0,l] \rightarrow P$ realizing the distance between $N$ and $S$ such that $\sigma(0) = q \in N$ and $\sigma(l) = p \in S$. Since $S$ lies in a mean convex component of $P\backslash N$, by item $a)$ of Theorem \ref{splitting}, $S$ and $N$ are embedded and totally umbilical parallel $1$-surfaces. However, we know that the mean curvatures $H(t)$ of the parallel surfaces of $N$, in the direction of its mean curvature vector field, satisfy (see Lemma \ref{gray_lemmata}) \begin{eqnarray*} 2H'(t)&=& Ric_P(\nu,\nu)+|II(t)|^{2} \\[0.2cm] &>& -2 + 2H^2(t) \geq 0, \end{eqnarray*} where in the last inequality we have used that $H(t) \geq 1$. Therefore, any neighborhood $U \subset N$ of $q$ evolving in parallel along $\sigma$ must intersect $S$ tangentially at $p$ with mean curvature $H(l) > 1$. Once again, we apply the maximum principle to get a contradiction. \section{Proof of Theorem \ref{half_bc}} The following lemma is the core of the proof of Theorem \ref{half_bc} and it is the counterpart of \cite[Thm.1.2]{bessa-jorge-oliveira}. \begin{lemma}\label{horosphere_lim_set} Let $\Omega\subset\mathbb{H}^3$ be an open domain whose boundary is a union of pieces of regular $1$-surfaces with respect to the normal vector field pointing towards the interior of $\Omega$. Assume $\varphi \colon M \to \Omega \subset \mathbb{H}^3$ is a complete $1$-surface immersed with bounded curvature.
Then there is a horosphere $\mathcal H$ separating $\overline{M}$ from $\partial \Omega$, unless $\partial \Omega$ is a horosphere contained in the limit set $\mathcal{L}_\varphi$. \end{lemma} \begin{proof} We first consider the case where $\mathcal{L}_\varphi \cap \partial \Omega \neq \emptyset$. It then follows from Theorem \ref{Theobessa} and the maximum principle that $\partial \Omega$ is a leaf from $\mathcal{L}_\varphi$ which is a complete strongly stable $1$-surface. Hence, in this case $\partial \Omega$ is a horosphere by \cite[Thm.2.13]{MPR}. Let us assume that $\mathcal{L}_\varphi \cap \partial \Omega = \emptyset$. Following ideas from \cite{bessa-jorge-oliveira} we consider an open ball $B_R$ centered at some point of $\mathbb{H}^3$ and with radius $R>0$ which intersects both $\partial \Omega$ and $\overline{M}$. Since $\overline M$ lies in $\Omega$ and has bounded curvature, there exists $0<r< (\sup \sqrt{K_M})^{-1}$ (depending on $R$) such that the tubular neighborhood $T_r(\overline{M})$ has Lipschitz boundary (see \cite{RZ}), $T_r(\overline{M})\cap \partial\Omega = \emptyset$ and the outside tangent cone of $\partial T_r(\overline{M})$ has no angle bigger than $\pi$ (cf. \cite[Lem.2.1]{bessa-jorge-oliveira}). Let $D_R$ be the connected open region of $(\Omega \backslash T_{r/2}(\overline{M}))\cap B_R $ whose boundary contains $\partial \Omega \cap B_R$. Precisely, $\partial D_R$ is composed of the smooth pieces $\partial B_R \cap \Omega$, and of pieces of $1$-surfaces from $\partial \Omega \cap B_R$ and $\partial T_{r/2}(\overline{M})\cap B_R$. Let $\partial T_{r/2}$ denote the part of $\partial D_R$ contained in $\partial T_{r/2}(\overline{M})$. Let $\mathcal{F}$ be the class of open domains $Q \subset D_R$ with rectifiable boundary such that $\partial T_{r/2} \subset \partial Q$.
We define on $\mathcal{F}$ the functional \begin{equation*} F(Q) = A(\partial Q) - 2V(Q), \end{equation*} where $A(\partial Q)$ gives the area of the boundary of $Q$, and $V(Q)$ gives the volume of $Q$. The idea now is to minimize the functional $F$, following the strategy inspired by \cite{hauswirth_roitman_rosenberg,ros2010properly} and implemented in \cite[Sec.5]{M}. For the sake of completeness we will write down a sketch of the argument. Since $\partial \Omega$ is a piecewise regular $1$-surface whose mean curvature vector field points inward $\Omega$, for $\mu>0$ (depending on $R$) sufficiently small, the subset $\Omega_\mu = \{ x \in \Omega \colon d(x,\partial \Omega) \leq \mu \}$ does not intersect $\partial T_r$, and it is foliated by piecewise smooth surfaces whose mean curvature is greater than $1$ where it is defined. Further, the curvature bounds of $M$ imply that $ T_{r}(\overline{M})$ is also foliated by piecewise smooth surfaces, but in this case the mean curvature evolution along these smooth pieces depends on the direction of the mean curvature vector field of the corresponding limit disks given in Theorem \ref{Theobessa}. \vspace{0.1cm} \noindent{\bf Claim:} Let $Q \in \mathcal{F}$. \begin{itemize} \item[i)] If $Q \cap \Omega_{\mu/2}\neq \emptyset$, then there is $\mu' \in (\mu/2,\mu]$ such that $Q\backslash \Omega_{\mu'} \in \mathcal{F}$ and $F(Q\backslash \Omega_{\mu'}) \leq F(Q)$. \vspace{0.1cm} \item[ii)] If $T_{r}(\overline{M}) \nsubseteq Q $, then there are $r' \in (r/2,r]$ and $Q' \in \mathcal{F}$ such that $T_{r'}(\overline{M}) \subset Q'$ and $F(Q') \leq F(Q)$. \end{itemize} \noindent To prove item $i)$ we denote by $\xi$ the inward unit normal vector field induced by the foliation of $\Omega_\mu$. It is easy to see that ${\rm div\,} \xi \leq -2$ a.e. in $\Omega_\mu$.
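Indeed, the divergence bound can be checked as follows: at a point $x\in \Omega_\mu$ where the leaf $\Sigma_x$ of the foliation through $x$ is smooth, take an orthonormal frame $\{e_1,e_2,\xi\}$ with $e_1,e_2$ tangent to $\Sigma_x$; since $\langle \nabla_{\xi}\,\xi,\xi\rangle = 0$,
$$ {\rm div\,}\xi \;=\; \sum_{i=1}^{2}\langle \nabla_{e_i}\xi, e_i\rangle \;=\; -\,2H_{\Sigma_x}\;\leq\; -2, $$
where $H_{\Sigma_x}\geq 1$ is the mean curvature of $\Sigma_x$, with the sign convention for which the mean curvature vector of the leaves is $H_{\Sigma_x}\,\xi$.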
Using that $\partial Q$ has finite two-dimensional Hausdorff measure, we can apply the coarea formula to find $\mu' \in (\mu/2,\mu]$ such that the one-dimensional Hausdorff measure of $\partial Q\cap \partial \Omega_{\mu'}$ is finite. Hence, it will be a negligible subset in our computations. Then, the non-empty subset $Q\cap \Omega_{\mu'}$ has rectifiable boundary and the Stokes formula (see e.g. \cite[Sec.2.2]{M}) gives \begin{eqnarray*} -2V(Q\cap \Omega_{\mu'}) &\geq & \int_{Q\cap \Omega_{\mu'}} {\rm div\,} \xi\\[0.2cm] &=& \int_{\partial Q\cap \Omega_{\mu'}} \langle \xi, \eta(Q\cap \Omega_{\mu'})\rangle + \int_{\partial\Omega_{\mu'}\cap Q} \langle \xi, \eta(Q\cap \Omega_{\mu'})\rangle, \end{eqnarray*} where $\eta(Q\cap \Omega_{\mu'})$ denotes the outward unit normal vector field along the boundary of $Q\cap \Omega_{\mu'}$. Since $\xi = \eta(Q\cap \Omega_{\mu'})$ on the subset $\partial\Omega_{\mu'}\cap Q$, by the Cauchy-Schwarz inequality we have \begin{eqnarray*} -A(\partial Q\cap \Omega_{\mu'}) + A(Q\cap \partial\Omega_{\mu'}) + 2V(Q\cap \Omega_{\mu'}) \leq 0. \end{eqnarray*} This implies that \begin{eqnarray*} F(Q\backslash \Omega_{\mu'}) &=& A(\partial (Q\backslash \Omega_{\mu'})) - 2V(Q\backslash \Omega_{\mu'}) \\[0.2cm] &=& F(Q) - A(\partial Q \cap \Omega_{\mu'}) + A(Q\cap \partial \Omega_{\mu'}) + 2 V(Q\cap \Omega_{\mu'})\\[0.2cm] &\leq & F(Q). \end{eqnarray*} To prove $ii)$, we first observe that $\partial Q \cap \partial T_{r'}$ has one-dimensional Hausdorff measure zero, for some $r' \in (r/2,r]$, as in the proof of item $i)$. Let $\xi$ be the outward unit normal vector field along the parallel surfaces foliating $T_{r'}(\overline{M})$. Notice that, independently of the mean curvature sign with respect to $\xi$, we must have ${\rm div\,} \xi \leq 2$ a.e. in $T_{r'}(\overline{M})$.
Set $\Gamma = T_{r'}(\overline{M})\backslash Q$ and compute \begin{eqnarray*} 2V(\Gamma) &\geq & \int_{\Gamma} {\rm div\,} \xi\\[0.2cm] &=& \int_{\partial Q\cap T_{r'}(\overline{M})} \langle \xi, \eta(\Gamma)\rangle + \int_{\partial T_{r'}\backslash Q} \langle \xi, \eta(\Gamma)\rangle \\[0.2cm] &\geq & - A(\partial Q\cap T_{r'}(\overline{M})) + A(\partial T_{r'}\backslash Q), \end{eqnarray*} where in the last inequality we have used that $\xi = \eta(\Gamma)$ along $\partial T_{r'}\backslash Q$. Thus, defining $Q' = Q \cup \Gamma$ it is easy to see that $Q' \in \mathcal{F}$ and \begin{eqnarray*} F(Q') &=& A(\partial Q) - A(\partial Q\cap T_{r'}(\overline{M})) + A(\partial T_{r'}\backslash Q) - 2V(Q) - 2V(\Gamma) \\[0.2cm] &\leq & F(Q). \end{eqnarray*} Once the above claim is proved, we can consider a minimizing sequence of subsets $Q_j \in \mathcal{F}$ satisfying $Q_j \cap \Omega_{\mu/2}=\emptyset$ and $T_{r'}(\overline{M}) \subset Q_j$. Thus, via the compactness theorem for integral currents (cf. \cite[Thm.5.5]{Morgan}), the existence of a cluster point $Q_\infty$, obtained as the limit of $Q_j$ in the flat topology, is guaranteed; it still belongs to $\mathcal{F}$ and satisfies $Q_\infty \cap \overline{\Omega}_{\mu/2}=\emptyset$ and $T_{r'}(\overline{M}) \subset Q_\infty$. Since the area functional $A(\partial Q)$ is lower semi-continuous under flat convergence and the volume term is continuous, we see that $Q_\infty$ minimizes $F$. Therefore, the piece of $\partial Q_\infty$ contained in the interior of $D_R$, here denoted by $S_R$, must be a local isoperimetric surface which by regularity theory (see \cite[Cor.3.6]{Morgan16}) is a smooth $1$-surface with mean curvature vector field pointing inward $Q_\infty$. Moreover, $S_R$ is strongly stable by \cite[Prop.2.3]{BC}, and the boundary of $S_R$ lies in the boundary of $B_R$.
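We remark that the value of the constant mean curvature of $S_R$ can be read off from a formal first variation computation: for a compactly supported normal variation $Q_s$ of $\partial Q_\infty$ in the interior of $D_R$ with speed $f$, and with the sign convention for which $H$ denotes the mean curvature with respect to the outward unit normal of $Q_\infty$ (so that round spheres have positive mean curvature),
$$ \frac{d}{ds}\Big|_{s=0} F(Q_s) \;=\; \int_{\partial Q_\infty} 2H f\, dA \;-\; 2\int_{\partial Q_\infty} f \,dA \;=\; 2\int_{\partial Q_\infty} (H-1)f \,dA, $$
so stationarity of $F$ forces $H\equiv 1$ along $S_R$.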
For each positive integer $k$, let $R_k$ be a divergent sequence of radii, and let $S_k$ be the corresponding strongly stable $1$-surfaces constructed as above, that is, $S_k$ lies in the domain $D_k = D_{R_k}\backslash \overline{\Omega}_{\mu_{k}/2}$ and its boundary $\partial S_k$ lies in $\partial B_{R_k}$. If we pick points $p \in \overline{M}$ and $q \in \partial \Omega$ such that the interior of the geodesic segment $[p,q]$ does not meet $\overline{M}$, then by construction, for $k$ sufficiently large, all the surfaces $S_k$ will intersect this segment. As in \cite{M}, for every $n\geq k$ the surfaces $S_n$ satisfy a uniform local area estimate and have second fundamental form bounded in $D_k$, so this sequence admits a subsequence that converges, with multiplicity one, to an embedded strongly stable $1$-surface contained in $D_k$ and whose intersection with the geodesic segment $[p,q]$ is non-empty. Proceeding by a diagonal process we find a smooth limit $S$ which is a complete strongly stable $1$-surface. Therefore, by \cite[Thm.2.13]{MPR} $S$ must be a horosphere. \end{proof} \subsection{Proof of Theorem \ref{half_bc}} We argue by contradiction and assume that $M$ lies in an open mean convex component $\Omega$ of $\mathbb{H}^3\backslash N$. Hence, $N' \doteq \partial \Omega$ is given by a union of regular pieces of $N$ with mean curvature one with respect to the normal vector field pointing towards the interior of $\Omega$, glued by their boundaries with inner angle less than or equal to $\pi$. If $N'\cap \overline{M} \not= \emptyset$, then by Lemma \ref{horosphere_lim_set}, $N'$, and thus $N$, must be a horosphere, which contradicts the non-horospherical assumption on $N$. On the other hand, if $N'\cap \overline{M} = \emptyset$, then Lemma \ref{horosphere_lim_set} gives the existence of a horosphere $S$ separating $\overline{M}$ from $N'$.
Since $N$ is proper and the mean curvature vector field along $N'$ points towards $S$, we can apply \cite[Thm.7]{M} to conclude that $N$ is also a horosphere. This contradiction concludes the proof. \section{Proof of Theorems \ref{parabolic_half}, \ref{stochastic_half-1-surfaces} and \ref{maximum-principle-infinity}} Since the proofs to be presented in this section share a core argument based on the construction of a weak solution to the subequation $\triangle u \geq \lambda u$, for $\lambda\geq 0$, we will first provide a detailed introduction. In general, let $\varphi \colon M \rightarrow P$ be an immersed surface in a manifold $P$ with Ricci curvature bounded from below and with sectional curvature bounded from above, and let $\psi \colon N \rightarrow P$ be a surface properly immersed in $P$ with bounded curvature. Henceforth, we assume that $M$ lies in a mean convex component of $P\backslash N$. Following the approach from \cite{bessa-jorge-pessoa} a key point in the proof will be to consider solutions of the subequation $\Delta u \geq \lambda u$ in a weak sense, more precisely, in the barrier sense. We then recall that a continuous function $u \colon M \rightarrow \mathbb{R}$ is said to satisfy $\triangle u \geq 0$ at a point $p \in M$ in the barrier sense if, for any $\delta > 0$, there exists a smooth support function $\phi_\delta$ defined around $p$ such that \begin{eqnarray*} \begin{array}{cc} \left\lbrace \begin{array}{rl} \phi_\delta = u & \text{at} \ \ p, \\[0.2cm] \phi_\delta \leq u & \text{near } p, \end{array} \right. & \quad \text{and} \qquad \triangle \phi_\delta (p) > -\delta . \end{array} \end{eqnarray*} The function $u$ to be constructed will be given in terms of the composition $u = g\circ t_{_{\!N}}\circ\varphi$ where $t_{_{\!N}}$ denotes the distance function to the surface $N$ (defined precisely below), and $g \colon \mathbb{R} \rightarrow \mathbb{R}$ is a smooth function to be chosen later satisfying $g'(t) < 0$ and $g''(t) >0$.
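A simple observation, which will be used when passing from $v$ to $u=\max\{v,0\}$ below, is that the maximum of a barrier subsolution and a constant is again a barrier subsolution: if $\triangle v \geq 0$ holds in the barrier sense on an open set and $c\in\mathbb{R}$, then $u=\max\{v,c\}$ satisfies $\triangle u \geq 0$ in the barrier sense as well. Indeed, at a point $p$ with $v(p)\geq c$ any support function $\phi_\delta$ for $v$ also supports $u$ from below, since $\phi_\delta\leq v\leq u$ near $p$ and $\phi_\delta(p)=v(p)=u(p)$; at a point with $v(p)< c$ the constant function $c$ is a smooth support function with $\triangle c = 0 > -\delta$.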
Let $\Omega$ be the mean convex connected component of $P\backslash N$ containing $M$, and let $\overrightarrow{H}_{\!_N} = \nu_{_N}$ be the mean curvature vector field along $\partial \Omega \subset N$ pointing towards $\Omega$. The boundary of $\Omega$ is given as a union of smooth pieces of $N$ whose inner angles are not bigger than $\pi$ along an intersection set $\Gamma$. From the curvature bounds of $P$ and $N$, we know that $N$ has uniformly bounded second fundamental form. By the extended Rauch's theorem (see \cite[Cor.4.2]{warner}) there exists a regular tubular neighborhood $V(\varepsilon)$ of $N$, with $\varepsilon>0$ depending only on the curvature bounds, for which there are no focal points along any normal geodesic $\gamma : [0,\varepsilon) \to P$ issuing from a point $\gamma(0) \in N$ (cf. also \cite[Ch.10]{docarmo}). Therefore, along any geodesic minimizing the distance between a fixed point in $V(\varepsilon)$ and the surface $N$, the parallel surfaces along this normal geodesic are well-defined and non-degenerate. Although the neighborhood $V(\varepsilon)$ may not be embedded, the distance function from $N$ restricted to $V_+(\varepsilon) = V(\varepsilon)\cap \Omega$, namely $t_{\!_N} : V_+(\varepsilon) \rightarrow \mathbb{R}$, is a positive Lipschitz function. Moreover, for a fixed point $y \in V_{+}(\varepsilon)$ it is easy to see that the nearest points to $y$ on $N$ cannot lie on the part of $\Gamma$ where the inner angle is less than $\pi$, for otherwise a minimizing segment connecting $y$ to $\partial \Omega$ would be normal to two different tangent planes. Fix a point $y \in V_+(\varepsilon)$, let $z \in N$ be a nearest point to $y$, and consider a simply connected, locally embedded neighborhood $W_z\subset N$ of $z$ that is a graph over an open ball $B_z\subset T_{z}N$ with radius uniformly bounded from below, and such that ${\rm dist}_{\!_{P}}(y,z)\leq {\rm dist}_{\!_{P}}(y,\bar z)$ for all $\bar z \in W_z$.
For each neighborhood $W_z$, the oriented distance function to $W_z$, namely $t_z \colon C_z(\varepsilon) \to \mathbb{R}$, is defined on a regular tubular neighborhood $C_z(\varepsilon)=T_{\varepsilon}(W_z)$ of radius $\varepsilon$, in such a way that $C_z(\varepsilon)=C_z^{+}(\varepsilon)\cup W_z\cup C_z^{-}(\varepsilon)$ and $t_z(y) > 0$. Now, since $y$ could also be in the cut locus of $W_z$, to construct a smooth support function for $t_{\!_N}$, following notations from \cite{GLM} we consider a supporting surface $S_z$ for $C_z^+(\varepsilon)$ at $z \in W_z$, that is, a smooth surface such that $z \in S_z$ and $C_z^+(\varepsilon)\cap S_z = \emptyset$. Indeed, by \cite[Lem.1]{GLM}, for any $\mu>0$, there exists a supporting surface $S_z^{\mu}$ for $C_z^+(\varepsilon)$ at $z \in W_z$ such that \begin{eqnarray} H_{z}^{\mu}(z) > 1-\mu & \text{and} & y \notin \text{cut}(S_z^{\mu}), \end{eqnarray} where $H_{z}^{\mu}$ is the mean curvature of $S_z^{\mu}$. We notice that a way to construct these supporting surfaces is by smoothly deforming the boundary of a small ball $B \subset C_z^-(\varepsilon)$ touching $W_z$ at $z$. Furthermore, recalling that $N$ has bounded curvature, we can take a universal constant $0<c = \sup\{\vert\kappa^1\vert,\vert\kappa^2\vert\}< + \infty$, where $\kappa^1 \leq \kappa^2$ are the ordered principal curvatures of $S_z^{\mu}$, for all $z \in N$ and $\mu>0$ sufficiently small. Therefore, the oriented distance function to $S_z^{\mu}$, here called $t_{z}^{\mu}$, is smooth around $y$ and touches $t_{\!_N}$ from above at $y$. Assuming that $\varphi(M) \cap V(\varepsilon/8) \not=\emptyset$, we can now define a bounded function $u \colon M \rightarrow \mathbb{R}$ by setting $u = \max\{v,0\}$, where $v \colon \varphi^{-1}(V_+(\varepsilon)) \rightarrow \mathbb{R}$ is given by $v(x) = g\circ t_{\!_N}(\varphi(x))$, with \begin{eqnarray} g(t) = \log \left(\frac{2+\varepsilon\, c}{2+ 4\,c \,t}\right).
\end{eqnarray} We first observe that $v(x) > 0$ if and only if $ x \in \varphi^{-1}(V_+(\varepsilon/4))$, since $g(t)>0$ exactly when $2+4ct<2+\varepsilon c$, that is, when $t<\varepsilon/4$. Thus, $u$ will be a weak solution of $\Delta u \geq \lambda u$ on $M$, for $\lambda\geq 0$, once we have proved that $v$ satisfies this subequation on $\varphi^{-1}(V_+(\varepsilon/2))$ in the barrier sense. Up to reducing $\varepsilon$, if necessary, we will assume \begin{eqnarray}\label{varepsilon_restrictions} 0<\varepsilon < \tanh^{-1}\left(\frac{1}{4c}\right) < \frac{1}{2c} \raisepunct{.} \end{eqnarray} For any fixed point $ p \in M$ with $y = \varphi(p) \in V_+(\varepsilon/2)$, we pick a nearest point $z \in N$ and a neighborhood $W_z \subset N$ as described above. Given $\delta>0$, let us consider $\phi_\delta = g\circ t_{z}^{\mu}\circ \varphi$ as a support function to $v$ at $p$, where $t_{z}^{\mu}$ is the oriented distance function to $S_z^{\mu}$ with $t_{z}^{\mu}(y)>0$, and $S_z^{\mu}$ is the supporting surface provided by \cite[Lem.1]{GLM}, for some $\mu = \mu(\delta)>0$ to be chosen later. Since $t_z^{\mu}$ is smooth around $y$ and touches $t_{\!_N}$ from above, by the decreasing property of $g$, we can assert that $\phi_\delta$ is a smooth support function that touches $v$ from below at $p$.
It remains to prove that $$\triangle_{_M} \phi_\delta (p) > -\delta.$$ To compute $\triangle_{_M} \phi_\delta$ we recall that \begin{eqnarray}\label{lapla_u} \triangle_{_M} \phi_\delta &=& \text{Tr}_{TM} \text{Hess}_{P}(g\circ t_z^{\mu}) + 2\langle \nabla_{P}(g\circ t_z^{\mu}),\overrightarrow H_{\!_M}\rangle \nonumber\\[0.2cm] &\geq & \text{Tr}_{TM}\left(g''(t_z^{\mu}) \nabla t_{z}^{\mu} \otimes \nabla t_{z}^{\mu} + g'(t_z^{\mu}) \nabla^2 t_{z}^{\mu} \right) + 2g'(t_z^{\mu})\vert \overrightarrow H_{\!_M}\vert, \end{eqnarray} where \begin{eqnarray*} g'(t) = - \frac{2c}{1+2c\, t}<0 & \text{and} & g''(t) = \frac{4c^2}{(1+2c\, t)^2}\raisepunct{.} \end{eqnarray*} The eigenvalues of $\text{Hess}_{P}(g\circ t_z^{\mu})$ are given by $$ \mu_1 = \frac{2c}{1+2c\, t_{z}^{\mu}}\kappa_{1}^{t} , \quad \mu_2 = \frac{2c}{1+2c\, t_{z}^{\mu}}\kappa_{2}^{t}, \qquad \text{and} \quad \mu_3 = \frac{4c^2}{(1+2c\, t_{z}^{\mu})^2}\raisepunct{,}$$ where $\kappa_1^t, \kappa_2^t$ are the principal curvatures of the parallel surfaces to $S_z^{\mu}$ at $y$, and $\kappa_1 \leq \kappa_2$ are the principal curvatures of $S_z^{\mu}$. The main tool to estimate \eqref{lapla_u} from below is the comparison theorem for the Riccati equation satisfied by the principal curvatures of the parallel surfaces. For a fixed point $z \in N$ let $\xi$ be a unit-speed geodesic normal to $S_z^{\mu}$ at $z$ with $\xi(0) = z$, and let $\{\xi_1,\xi_2\}$ be an orthonormal basis that diagonalizes the Weingarten map on $T_z S_z^{\mu}$. Let $H(t)$ denote the signed mean curvature function of the parallel surfaces satisfying $\overrightarrow{H}_{S_t} = H(t) \xi'(t)$. In the following lemma we summarize Corollaries 3.5 and 3.6 from \cite{gray2012tubes}. \begin{lemma}\label{gray_lemmata} With the above notation, let $\xi_1(t), \xi_2(t)$ denote the corresponding fields of principal directions along $\xi$, differentiable in $t$.
Then, \begin{enumerate} \item[a)] $ \kappa'_i(t) = \kappa_i^2(t) + \text{Sec}_{\!_{P}}(\xi'(t),\xi_i(t)).$ \vspace{0.2cm} \item[b)] $ 2H'(t) = \kappa_1^2(t) + \kappa_2^2(t) + \text{Ric}_{\!_{P}}(\xi'(t)).$ \end{enumerate} \end{lemma} With these preliminaries in hand we can now proceed with the proof of each theorem independently. \subsection{Proof of Theorem \ref{parabolic_half}} The proof follows by contradiction, so we assume that $N$ is a proper $1$-surface immersed in $\mathbb{H}^3$ with bounded curvature and that $M$ is a parabolic $1$-surface contained in a mean convex component of $\mathbb{H}^3\backslash N$ which is not parallel to $N$. As before $\Omega$ denotes this mean convex component and $\overrightarrow{H}_{\!_N} = \nu_{_N}$ is the mean curvature vector field along $\partial \Omega \subset N$ pointing towards $\Omega$. Moreover, since we are working in the hyperbolic space $\mathbb{H}^3$, up to an isometry we may assume $M \cap V(\varepsilon/8) \not= \emptyset$. By hypothesis the function $u$ described above is non-constant, so the parabolicity of $M$ yields a contradiction once we prove that $\triangle u \geq 0$ on $M$ in the weak sense; indeed, applying \cite[Thm.5.1]{grigoryan} we would obtain that $u$ is constant. We first recall that the principal curvatures $\kappa_i^t$, for $i=1,2$, satisfying equation $a)$ of Lemma \ref{gray_lemmata} are solutions of the following Riccati equation \begin{eqnarray*} \left(\kappa_i^t\right)' = \left(\kappa_i^t\right)^2 - 1, \end{eqnarray*} along geodesics orthogonal to $N$. These solutions are explicitly given by \begin{eqnarray*} \kappa_i^t = \frac{\kappa_i \cosh t - \sinh t}{\cosh t - \kappa_i \sinh t}\raisepunct{.} \end{eqnarray*} The restrictions on the value of $\varepsilon$ imposed in \eqref{varepsilon_restrictions} yield the following monotonicity of the eigenvalues \begin{eqnarray}\label{monotonicity_ineq} \mu_1 \leq \mu_2 < \mu_3.
\end{eqnarray} Indeed, the former inequality follows from the monotonicity $\kappa_1 \leq \kappa_2$. For the latter, note that since $0<t< \varepsilon/2 < 1/4c$ we easily deduce \begin{eqnarray}\label{eq_aux_1} \frac{2c}{1+2ct} > \frac{4c}{3}\raisepunct{.} \end{eqnarray} Using $\tanh t \leq 1/4c$ we also obtain \begin{eqnarray*} \tanh t \leq \frac{c}{4c^2 - 3}\raisepunct{,} \end{eqnarray*} which turns out to be equivalent to \begin{eqnarray}\label{eq_aux_2} \frac{c\,\cosh t - \sinh t}{\cosh t - c\, \sinh t} \leq \frac{4c}{3}\raisepunct{.} \end{eqnarray} Putting together \eqref{eq_aux_1} with \eqref{eq_aux_2}, and recalling that $\vert \kappa_i\vert \leq c$, we finally have \begin{eqnarray*} \kappa_2^t = \frac{\kappa_2 \cosh t - \sinh t}{\cosh t - \kappa_2 \sinh t} \leq \frac{c\,\cosh t - \sinh t}{\cosh t - c\, \sinh t} < \frac{2c}{1+2ct}\raisepunct{.} \end{eqnarray*} Thus, the inequality $\mu_2 < \mu_3$ follows. Now, applying \cite[Lem.2.3]{jorge2003barrier} we can estimate \eqref{lapla_u} from below as \begin{eqnarray}\label{lapla_u_ineq} \triangle \phi_\delta &\geq & -g'(t_z^{\mu})\left(\kappa_1^t + \kappa_2^t\right) + 2g'(t_z^{\mu}) \nonumber \\[0.2cm] &=& -2g'(t_z^{\mu})\left(H(t) - 1\right). \end{eqnarray} To estimate \eqref{lapla_u_ineq} we first note that Newton's inequality easily yields \begin{eqnarray*} \left\lbrace \begin{array}{l} H'(t) \geq H^2(t) - 1, \\[0.2cm] H(0) = H \geq 1-\mu , \end{array}\right. \end{eqnarray*} where $H$ is the mean curvature of $S_z^\mu$.
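For the reader's convenience we record a short verification (added here; it is implicit in the original argument) that the comparison function appearing in the next inequality is the exact solution of the limiting Riccati problem $h'(t) = h^{2}(t) - 1$, $h(0) = H$:

```latex
% Added verification: h(t) = (H cosh t - sinh t)/(cosh t - H sinh t)
% solves h' = h^2 - 1 with h(0) = H.
\[
h(t) = \frac{H\cosh t - \sinh t}{\cosh t - H\sinh t}, \qquad h(0) = H .
\]
Writing $h = N/D$ with $N = H\cosh t - \sinh t$ and $D = \cosh t - H\sinh t$,
a direct computation gives $N' = -D$ and $D' = -N$, hence
\[
h' = \frac{N'D - N D'}{D^{2}} = \frac{N^{2}-D^{2}}{D^{2}} = h^{2} - 1 .
\]
```

Comparing the solution of the differential inequality $H'(t) \geq H^2(t) - 1$ with this exact solution is what produces the lower bound below.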
By the Riccati comparison theorem we have \begin{eqnarray}\label{H_ineq_final} H(t) \geq \frac{H \cosh t - \sinh t}{\cosh t - H \sinh t}\raisepunct{.} \end{eqnarray} Substituting \eqref{H_ineq_final} into \eqref{lapla_u_ineq} and recalling $H>0$ and $0< t < \tanh^{-1}(1/4c)$ we obtain \begin{eqnarray*} \triangle \phi_\delta &\geq & -2g'(t_z^{\mu}) \frac{(H-1)(\cosh t + \sinh t)}{\cosh t - H \sinh t} \\[0.2cm] &\geq & 2g'(t_z^{\mu}) \frac{\cosh t + \sinh t}{\cosh t - H \sinh t}\mu \\[0.2cm] &\geq & -\frac{4c}{1+2ct} \frac{1 + \tanh t}{1 - 2c \tanh t}\mu \\[0.2cm] &\geq & -16 c\,\mu . \end{eqnarray*} Now, taking $\mu(\delta) = \delta/16c$ we conclude that $\triangle \phi_\delta \geq -\delta$, and the function $v$ satisfies $\triangle v \geq 0$ in the barrier sense on $\varphi^{-1}(V_+(\varepsilon/2))$. Therefore, the function $u = \max\{v,0\}$ must be constant, a contradiction. \subsection{Proof of Theorem \ref{stochastic_half-1-surfaces}} To prove the stochastic theorem we need to obtain a stronger inequality for the Laplacian of the function $u$, namely, we should prove that $\triangle u \geq \lambda u$, for some $\lambda>0$, in the barrier sense. Following the computations from the proof of Theorem \ref{parabolic_half} it is easy to see that \eqref{monotonicity_ineq} holds. Thus, substituting in \eqref{lapla_u} and using \eqref{H_ineq_final} we have \begin{eqnarray}\label{ineq_phi_stochastic} \triangle \phi_\delta &\geq & -2g'(t_z^{\mu})\left(H(t) - H_{\!_M}\right) \nonumber\\[0.2cm] &\geq & -2g'(t_z^{\mu})\left(\frac{H \cosh t - \sinh t}{\cosh t - H \sinh t} - H + (H - H_{\!_M})\right) \nonumber \\[0.2cm] &\geq & -2g'(t_z^{\mu})\left(\frac{(H^2 - 1)\sinh t}{\cosh t - H \sinh t} + (H_{\!_N} - H_{\!_M}) - \mu \right) .
\end{eqnarray} To prove item $i)$ we use that $H\geq H_{\!_N} \geq 1-\mu$ and $0< t < \tanh^{-1}(1/4c)$ to compute \begin{eqnarray*} \triangle \phi_\delta &\geq & -2g'(t_z^{\mu})\left(\frac{(\mu^2 - 2\mu)\sinh t}{\cosh t - H \sinh t} + (H_{\!_N} - H_{\!_M}) - \mu \right) \\[0.2cm] &\geq & -2g'(t_z^{\mu})\left(\frac{- 2\mu\tanh t}{1 - 2c \tanh t} + (H_{\!_N} - H_{\!_M}) - \mu \right) \\[0.2cm] &\geq & \frac{4c}{1+2c\,t_z^{\mu}}(H_{\!_N} - H_{\!_M}) - \frac{20c}{1+2c\,t_z^{\mu}}\mu \\[0.2cm] &\geq & (\inf_{N} H_{\!_N} - \sup_{M} H_{\!_M})\phi_\delta - 20c\mu . \end{eqnarray*} Taking $\mu(\delta) = \delta/20c > 0$ and $\lambda = (\inf_{N} H_{\!_N} - \sup_{M} H_{\!_M})>0$ we conclude that $\triangle v \geq \lambda v$ holds in the barrier sense. Again, from the Liouville theorem for stochastically complete surfaces \cite[Thm.6.1]{grigoryan} we conclude that the function $u = \max\{v,0\}$ must be identically zero. This finishes the proof. Similarly, to prove item $ii)$ we assume by contradiction the existence of a positive constant $\ell$ with $\ell \leq t \leq \varepsilon/8$. Taking $\mu(\delta)$ sufficiently small such that $H^2 - 1 > (\inf_N H_{\!_N}^2 - 1)/2 > 0$, by \eqref{ineq_phi_stochastic} we obtain \begin{eqnarray*} \triangle \phi_\delta &\geq & -2g'(t_z^{\mu})\left(\frac{(H^2 - 1)}{\coth t - 1} + (H_{\!_N} - H_{\!_M}) - \mu \right) \\[0.2cm] &\geq & \frac{4c}{1+2c\,t_z^{\mu}}\left(\frac{\inf_N H_{\!_N}^2 - 1}{2\coth \ell - 2} + \inf_{N} H_{\!_N} - \sup_{M} H_{\!_M} - \mu \right) \\[0.2cm] &\geq & \frac{\inf_N H_{\!_N}^2 - 1}{2\coth \ell - 2}\phi_\delta - 4c\mu . \end{eqnarray*} Thus, taking $\mu(\delta) = \delta/4c$ and $\lambda = (\inf_N H_{\!_N}^2 - 1)/(2\coth \ell - 2)>0$ we arrive at the same contradiction as above. \subsection{Proof of Theorem \ref{maximum-principle-infinity}} As in the proof of Theorems \ref{parabolic_half} and \ref{stochastic_half-1-surfaces}, from a translation argument we can assume that ${\rm dist}(M,N) = 0$.
Moreover, the selected function $v = g\circ t_{\!_N}$ used before will be a bounded solution of $\triangle v \geq 0$ on $\varphi^{-1}(U(\varepsilon/2)) \cap {\rm int}\, M$, where $\varphi$ denotes the isometric immersion of $(M, \partial M)$ into $\mathbb{H}^3$. Therefore, $u = \max\{v,0\}\in C^0(M)\cap W^{1,2}_{loc}({\rm int}\, M)$ is a bounded weak subharmonic function on ${\rm int}\, M$ and, since $M$ is parabolic, the Ahlfors maximum principle \cite[Prop.10]{pessoa-pigola-setti} says that \begin{eqnarray*} \sup_M u = \sup_{\partial M} u. \end{eqnarray*} The conclusion then follows by noticing that $u(x) \to \sup_M u$ if and only if ${\rm dist} (\varphi(x),N) \to 0$.
var AWS = require('aws-sdk');
var CfnLambda = require('cfn-lambda');

var APIG = new AWS.APIGateway({apiVersion: '2015-07-09'});

// Deletes an API Gateway integration; a missing integration (404) is treated as success.
var Delete = CfnLambda.SDKAlias({
  api: APIG,
  method: 'deleteIntegration',
  keys: ['HttpMethod', 'ResourceId', 'RestApiId'],
  downcase: true,
  ignoreErrorCodes: [404]
});

// putIntegration handles both create and update, so one alias serves both hooks.
// The physical resource ID is derived from the identifying parameters.
var Upsert = CfnLambda.SDKAlias({
  api: APIG,
  method: 'putIntegration',
  downcase: true,
  returnPhysicalId: function(data, params) {
    return [
      params.RestApiId,
      params.ResourceId,
      params.HttpMethod
    ].join(':');
  }
});

exports.handler = CfnLambda({
  Create: Upsert,
  Update: Upsert,
  Delete: Delete,
  SchemaPath: [__dirname, 'schema.json']
});
On this site you can view introductions to the Mac OS X applications listed above and purchase them via the Mac App Store. I am currently selling the tabbed PDF viewer "Dioretsa" and the playlist-style QuickTime movie player "Meteoroid" mentioned above; both are priced at $4.99. Please read the linked application pages for details. The site also has an "Amazon Shopping Link" section introducing the webmaster's favorite merchandise. Please feel free to stop by.
About Del Mar

We Thank You for Being a Part of Breeders' Cup Festival Week in Del Mar Village

For the first time ever, the Breeders' Cup ran where the turf meets the surf in Del Mar, California. Del Mar Village Association would like to cordially thank everyone who took part in the 2017 Breeders' Cup Festival, a week-long celebration from October 28th to November 4th. Still trying to relive the festivities? Take a look at some of the key moments captured in the photos: the Del Mar Racetrack Gallery, the Best at the Barn Gallery, the Jake's Fun Run Gallery, CNNi Winning Post: Inside the Breeders' Cup, and the 2017 Breeders' Cup Highlights.

Del Mar Village Association
Del Mar Village Community & Visitor Center
1104 Camino Del Mar, Suite #1, Del Mar, CA 92014
Email: Info@VisitDelMarVillage.com

Del Mar is a quaint seaside village rich in history, striking natural beauty and European charm. Located just 20 miles north of San Diego on the Southern California coastline, Del Mar Village is unique in its offering of a vibrant small-town atmosphere, sprawling natural playground along the Pacific Ocean and the casual sophistication of a world-class destination. Del Mar Village attracts visitors from across the globe seeking pristine beaches, award-winning cuisine with spectacular ocean views, unique shops, and boutique hotels. Del Mar is home to the renowned Del Mar Racetrack and San Diego County Fair.

Copyright © 2015 - 2020 Visit Del Mar Village
Jörg Stocker (c. 1461 – after 1527) was a German painter.

Further reading
Janez Höfler. "Stocker, Jörg." In Grove Art Online. Oxford Art Online (accessed January 1, 2012; subscription required).
Gerhard Weiland: "Die Ulmer Künstler und ihre Zunft." In: Meisterwerke massenhaft. Die Bildhauerwerkstatt des Niklaus Weckmann und die Malerei in Ulm um 1500. Württembergisches Landesmuseum Stuttgart, 1993, pp. 369–388.
Daniela Gräfin von Pfeil: "Jörg Stocker – ein verkannter Maler aus Ulm." In: Meisterwerke massenhaft. Die Bildhauerwerkstatt des Niklaus Weckmann und die Malerei in Ulm um 1500. Württembergisches Landesmuseum Stuttgart, 1993, pp. 199–210.
Hans Koepf: "Schüchlin, Herlin und Zeitblom." In: Schwäbische Kunstgeschichte, Vol. 3, Jan Thorbecke Verlag, Konstanz 1963, pp. 110–111.

External links
Entry for Jörg Stocker in the Union List of Artist Names
Career

He began racing at a very young age, first with the "Mago" sports club of Barbiano and then with the "Pedale Fusignanese". After moving up to the amateur category, in 1966 he won the Italian championship titles in the 1 km from a standing start and in the tandem sprint, paired with the Bolognese rider Giordano Turrini. He retained the national titles in the same events in 1968 and 1969, this time riding the tandem with Mauro Orlati of Forlì. Riding the tandem with Turrini, he won the bronze medal at the 1966 world championships in Frankfurt and the gold medal in 1968 in Montevideo. That same year, paired with Luigi Borghetti, he finished fourth in the tandem event at the Mexico City Olympic Games. In 1967, at the Forlì Velodrome, he set the flying-lap record (400 metres) with the extraordinary time of 23"2, an average speed of 62.062 km/h. In 1969, riding for G.S. "Leoni di Meldola", he won the international sprint meeting in Budapest.

Palmarès
1968: World championships, Tandem (Montevideo)

Honours
Gold Collar of Sporting Merit (Collare d'oro al merito sportivo) - 2021

Placings
World competitions
Track world championships
Frankfurt 1966 - Tandem: 3rd
Montevideo 1968 - Tandem: winner
Olympic Games
Mexico City 1968 - Tandem: 4th

Notes

Other projects

External links
require 'rails_helper'

describe Api::V1::InvoicesController, type: :controller do
  include Randomness
  include ResponseJson

  before(:all) do
    Invoice.destroy_all
    Transaction.destroy_all
    Revision.destroy_all
    Document.destroy_all
  end

  it 'should list all Invoices for a Transaction' do
    rand_array_of_models(:transaction).each do |tm|
      invoices = rand_array_of_models(:invoice, transact_id: tm.id)
      get(:index, transaction_id: tm.public_id)
      expect(response).to be_success
      expect(response_json).to eql(encode_decode(InvoiceSerializer.many(invoices)))
    end
  end

  it 'should list all Invoices for a User' do
    rand_array_of_models(:user).each do |um|
      ims = rand_array_of_models(:transaction, user: um).inject([]) do |a, trm|
        a + rand_array_of_models(:invoice, transact: trm)
      end
      get(:index, user_id: um.public_id)
      expect(response).to be_success
      expect(response_json).to eql(encode_decode(InvoiceSerializer.many(ims)))
    end
  end

  it 'should fail if there is no such transaction' do
    rand_array_of_numbers.each do |transaction_id|
      get(:index, transaction_id: transaction_id)
      expect(response).to_not be_success
      expect(response).to have_http_status(:not_found)
    end
  end

  it 'should yield the latest associated document' do
    rand_array_of_models(:invoice).each do |im|
      Document.create(invoice: im) # older document, superseded by the one below
      ndm = Document.create(invoice: im)
      get(:latest, invoice_id: im.public_id)
      expect(response).to be_success
      expect(response_json).to eql(encode_decode(DocumentSerializer.serialize(ndm)))
    end
  end

  it 'should yield not found for latest if the invoice has no revisions' do
    rand_array_of_models(:invoice).each do |im|
      get(:latest, invoice_id: im.public_id)
      expect(response).to_not be_success
      expect(response).to have_http_status(:not_found)
    end
  end

  it 'should yield not found for latest if the invoice does not exist' do
    rand_array_of_uuids.each do |public_id|
      get(:latest, invoice_id: public_id)
      expect(response).to_not be_success
      expect(response).to have_http_status(:not_found)
    end
  end

  it 'should show an invoice' do
    rand_array_of_models(:invoice).each do |im|
      get(:show, id: im.public_id)
      expect(response).to be_success
      expect(response_json).to eql(encode_decode(InvoiceSerializer.serialize(im)))
    end
  end

  it 'should yield not found if the invoice shown does not exist' do
    rand_array_of_uuids.each do |id|
      get(:show, id: id)
      expect(response).to_not be_success
      expect(response).to have_http_status(:not_found)
    end
  end
end
\section{Introduction} Work is currently in progress to upgrade the cylindrical Ooty Radio Telescope (ORT\footnote{http://rac.ncra.tifr.res.in/}) so that it functions as a linear interferometric array, the Ooty Wide Field Array (OWFA; Prasad \& Subrahmanya 2011a,b; Ram Marthi \& Chengalur 2014). This telescope operates at a nominal frequency of $\nu_o= 326.5 \,{\rm MHz}$ which corresponds to the neutral hydrogen (HI) $1,420 \, {\rm MHz}$ radiation from a redshift $z=3.35$. Observations of the fluctuations in the contribution from the HI to the diffuse background radiation are a very interesting probe of the large-scale structures in the high-$z$ universe (Bharadwaj, Nath \& Sethi 2001; Bharadwaj \& Sethi 2001). In addition to the power spectrum (Bharadwaj \& Pandey 2003; Bharadwaj \& Srikant 2004), this is also a sensitive probe of the bispectrum (Ali, Bharadwaj \& Pandey 2006; Guha Sarkar \& Hazra 2013). There has been a continued, growing interest in detecting the 21 cm signal from the lower redshifts $(0 < z < 4)$ to probe the post-reionization era (Chang et al. 2008; Visbal et al. 2009; Bharadwaj et al. 2009; Wyithe \& Loeb 2009; Bagla, Khandai \& Datta 2010; Seo et al. 2010; Mao 2012; Ansari et al. 2012; Bull et al. 2014; Villaescusa-Navarro et al. 2014). Recently, Ali \& Bharadwaj (2014) (henceforth, Paper I) have studied the prospects for detecting the HI signal from redshift $z=3.35$ using OWFA. The OWFA provides a unique opportunity to study the large scale structures at $z=3.35$. A number of similar upcoming packed radio interferometers (CHIME\footnote{http://chime.phas.ubc.ca/}, Bandura et al.
2014; BAOBAB\footnote{http://bao.berkeley.edu/} and the KZN array\footnote{A compact array of 1225 dishes with diameter 5 m each, based on BAOBAB and sited in South Africa}) have been proposed to probe the expansion history of the low-redshift universe ($z \le 2.55$) with an unprecedented precision using BAO measurements from the large-scale HI fluctuations. Even more innovative designs are being planned for the future low frequency telescope SKA\footnote{http://www.skatelescope.org/}. This promises to yield highly significant measurements of the HI power spectrum over a large redshift range spanning nearly the entire post-reionization era $(z < 6)$. However, the detection of the faint $21\,\rm cm$ HI signal ($ \sim 1\, {\rm mK}$) is extremely challenging due to the presence of different astrophysical foregrounds. The foregrounds are four to five orders of magnitude brighter than the post-reionization HI signal (Ghosh et al. 2011a, 2011b). In this paper, we have considered the visibility correlation (Bharadwaj \& Sethi 2001; Bharadwaj \& Ali 2005) which essentially is the data covariance matrix that is necessary to calculate the Fisher matrix. We have employed the Fisher matrix technique to predict the expected signal-to-noise ratios (SNR) for detecting the HI signal. In our analysis we have assumed that the HI traces the total matter with a linear bias, and the matter power spectrum is precisely as predicted by the standard $\Lambda$CDM model with the parameter values mentioned later. The HI power spectrum is then completely specified by two parameters: $A_{HI}$, which sets the overall amplitude of the power spectrum, and $\beta$, the redshift distortion parameter. The parameter $A_{HI}$ here is the product of the mean neutral hydrogen fraction ($\bar{x}_{{\rm HI}}$) and the linear bias parameter ($b_{HI}$). For a detection, we focus on measuring the amplitude $A_{HI}$, marginalizing over $\beta$. We also consider the joint estimation of $A_{HI}$ and $\beta$.
Our entire analysis is based on the assumption that the visibility data contains only the signal and the noise, and that the foregrounds and radio-frequency interference have been completely removed from the data. The BAO feature is within the baseline range covered by OWFA (Paper I). However, the frequency coverage ($\sim 30 \, {\rm MHz}$) is rather small. Further, for the present analysis we have only considered observations in a single field of view. All of these result in having very few Fourier modes across the $k$ range relevant for the BAO, and we do not consider this here. The rest of the paper is organized as follows. Section 2 briefly discusses some relevant system parameters for OWFA. In Section 3, we present the theoretical model for calculating the signal and noise covariance, and predict their respective contributions. Here we also estimate the range of k-modes which are probed by OWFA. In Section 4 we use the Fisher matrix analysis to make predictions for the SNR as a function of the observing time. Finally, we present summary and conclusions in Section 5. In this paper, we have used the (Planck $+$ WMAP) best-fit $\Lambda$CDM cosmology with cosmological parameters (Ade et al. 2013): $\Omega_{m}=0.318, \Omega_bh^2=0.022,\Omega_{\Lambda}=0.682, n_s=0.961, \sigma_8=0.834, h=0.67$. We have used the matter transfer function from the fitting formula of Eisenstein \& Hu (1998) incorporating the effect of baryonic features.
\section{Telescope parameters}
\begin{table} \begin{center} \caption{System parameters for Phases I, II, III and IV of OWFA.} \vspace{.2in} \label{tab:array}
\begin{tabular}[scale=.3]{|l|c|c|c|c|} \hline \hline
Parameter & Phase I & Phase II & Phase III & Phase IV\\ \hline
No. of antennas & 40 & 264 & 528 & 1056 \\ ($N_A$)& & & &\\ \hline
No. of dipoles $N_d$ & 24 & 4 & 2 & 1 \\ \hline
Aperture area & $30 \,{\rm m} \times 11.5 \,{\rm m}$ & $ 30 \,{\rm m} \times 1.92 \,{\rm m} $ & $ 30 \,{\rm m} \times 0.96 \,{\rm m} $ & $ 30 \,{\rm m} \times 0.48 \,{\rm m} $\\ ($b \times d$)& & & &\\ \hline
Field of View & $ 1.75^{\circ} \times 4.6^{\circ}$ & $ 1.75^{\circ} \times 27.4^{\circ}$ & $ 1.75^{\circ} \times 54.8^{\circ}$& $ 1.75^{\circ} \times 109.6^{\circ}$ \\ (FoV)& & & &\\ \hline
Smallest baseline & $11.5 \,{\rm m} $ & $1.9 \,{\rm m} $&$0.96 \,{\rm m} $ &$0.48 \,{\rm m} $ \\ ($d_{min}$) & & & &\\ \hline
Largest baseline & $ 448.5 \,{\rm m}$ & $505.0 \,{\rm m}$& $506.0 \,{\rm m}$& $506.5 \,{\rm m}$ \\ ($d_{max}$)& & & &\\ \hline
Total band- & $18 \,{\rm MHz}$ & $30 \,{\rm MHz}$&$60 \,{\rm MHz}$ &$120 \,{\rm MHz}$ \\ width (B) & & & &\\ \hline
Single Visibility & $1.12$ Jy & $6.69 $ Jy & $13.38 $ Jy & $26.76 $ Jy \\ rms noise ($\sigma$) & & & &\\ \hline
\end{tabular} \end{center} \end{table}
The ORT is a 530 m long and 30 m wide parabolic cylindrical reflector placed in the north-south direction on a hill with the same slope as the latitude $(11^{\circ})$ of the station (Swarup et al. 1971; Sarma et al. 1975). It thus becomes possible to observe the same part of the sky by rotating the parabolic cylinder along its long axis. The telescope has 1056 half-wavelength $ (0.5 \lambda_0 \approx 0.5\, {\rm m})$ dipoles placed nearly end to end along the focal line of the cylinder. Work is underway to implement electronics that combines the digitized signals from every $N_d$ successive dipoles so that we have a linear array of $N_A$ antennas located along the length of the cylinder. The OWFA will, at present, have the ability to operate in two different modes, one with $N_d=24$ and another with $N_d=4$, referred to as Phase I and Phase II respectively. For our theoretical analysis we have also considered two hypothetical (possibly future) upgrades, Phases III and IV, with $N_d=2$ and $N_d=1$ respectively.
Table \ref{tab:array} summarizes various parameters for the different phases of the array. The individual antennas get more compact, and the field of view increases, from Phase I to IV. The number of antennas and the frequency bandwidth also increase from Phase I to IV. For any phase, each antenna has a rectangular aperture of dimensions $b\times d$, and the antennas are placed at an interval ${\bf d}=d \, {\bf \hat{i}}$ along the length of the cylinder. The value of $b\,(=30 \,{\rm m})$, which corresponds to the width of the parabolic reflector, remains fixed for all the phases. The value of $d$ varies for the different phases (Table \ref{tab:array}). The baseline $\vec{U}$ quantifies the antenna pair separation projected perpendicular to the line of sight, measured in units of the observing wavelength $\lambda$. Assuming observations vertically overhead, we have the baselines \begin{equation} \vec{U}_a = a \frac{{\bf d}}{\lambda} \hspace{2.0cm} (1 \le a \le N_A-1) \,. \end{equation} In reality $\vec{U}_1,\vec{U}_2,...$ vary across the observing bandwidth as the frequency changes. However, for the present purposes of this paper we keep $\vec{U}_a$ fixed at the value corresponding to the nominal frequency. A schematic view of the OWFA array layout is presented in Paper I. The OWFA has a significant number of redundant baselines: each baseline $\vec{U}_a$ is sampled with a redundancy of $M_a = (N_A - a)$. \section{OWFA visibility covariance and the Fisher matrix} \label{sec:vc} The OWFA measures visibilities $\mathcal{V}(\vec{U}_a,\nu_m)$ at a finite number of baselines $\vec{U}_a$ and frequency channels $\nu_m$ with frequency channel width $\Delta \nu_c$ spanning a frequency bandwidth $B$. The measured visibilities can be expressed as a combination of the HI signal and the noise \begin{equation} \mathcal{V}(\vec{U}_a,\nu_m)={\mathcal S}(\vec{U}_a,\nu_m)+{\mathcal N}(\vec{U}_a,\nu_m) \label{eq:b1} \end{equation} assuming that the foregrounds have been removed.
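As a concrete illustration of the baseline geometry described in the previous section (this numerical sketch is our addition; the Phase II numbers are taken from Table \ref{tab:array}), the physical baseline lengths and their redundancies follow directly from the definitions $\vec{U}_a = a\,{\bf d}/\lambda$ and $M_a = (N_A - a)$:

```python
# Baseline lengths and redundancy for OWFA Phase II (parameters from Table 1).
# Baseline a has physical length a*d and is measured M_a = N_A - a times.

d = 1.92     # antenna spacing in metres (Phase II)
N_A = 264    # number of antennas (Phase II)

baselines_m = [a * d for a in range(1, N_A)]        # physical baseline lengths, metres
redundancy = {a: N_A - a for a in range(1, N_A)}    # times each baseline is sampled

print(min(baselines_m))   # smallest baseline, ~1.92 m
print(max(baselines_m))   # largest baseline, ~505 m
print(redundancy[1])      # the shortest baseline is measured 263 times
```

The smallest and largest values reproduce the Phase II entries of Table \ref{tab:array}.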
The correlation expected between the HI signal at two different baselines and frequencies can be calculated (Paper I and references therein) using \begin{eqnarray} \langle {\mathcal S}(\vec{U}_a,\nu_n) {\mathcal S}^{*}(\vec{U}_b,\nu_m) \rangle &=& \left(\frac{2 k_B}{\lambda^2}\right)^2 \int d^2 U^{'} \tilde{A}(\vec{U}_a-\vec{U}^{'}) \tilde{A}^{*}(\vec{U}_b-\vec{U}^{'}) \nonumber \\ &\times& \frac{1}{2 \pi r_{\nu}^2} \int d k_\parallel \cos(k_\parallel r_{\nu}^{'} \Delta \nu ) P_{\rm HI}(\frac{2 \pi \vec{U}^{'}}{r_{\nu}},k_\parallel) \label{eq:a3} \end{eqnarray} where $P_{\rm HI}({\bf k}_\perp,k_\parallel)$ is the power spectrum of the 21-cm brightness temperature fluctuation in redshift space, $\left(\frac{2 k_B}{\lambda^2}\right)$ is the conversion from brightness temperature to specific intensity, $r_{\nu}$ is the comoving distance from the observer to the region where the HI radiation originated, $r_{\nu}^{'}=dr/d \nu $ is the radial conversion factor from frequency interval to comoving separation ($r_{\nu}=6.85$ Gpc and $r_{\nu}^{'}=11.5$ Mpc MHz$^{-1}$ for OWFA), $\Delta \nu=\nu_m-\nu_n$ and $\tilde{A}(\vec{U})$ is the Fourier transform of the OWFA primary beam pattern. The real and imaginary parts of the noise ${\mathcal N}(\vec{U}_a,\nu_n)$ both have equal variance $\sigma^2$ with \begin{equation} \sigma =\frac{\sqrt{2}k_BT_{sys}}{\eta A\sqrt{\Delta \nu_c t}} \label{eq:a4} \end{equation} where $T_{sys}$ is the system temperature, $\eta$ and $A=b \times d$ are respectively the efficiency and the geometrical area of the antenna aperture and $t$ is the observing time. We have used the values $ T_{sys}=150 \, {\rm K}$, $\eta=0.6$ and $\Delta \nu_c=0.1 \, {\rm MHz}$ which are the same as in Paper I. The noise in the visibilities measured at different baselines and frequency channels is uncorrelated. We then have \begin{equation} \langle {\mathcal N}(\vec{U}_a,\nu_n) {\mathcal N}^{*}(\vec{U}_b,\nu_m) \rangle =\delta_{a,b} \delta_{n,m} 2 \sigma^2 \,.
\label{eq:a5a} \end{equation} Earlier studies (Paper I) have shown that for a fixed baseline ($U_a=U_b$) the HI signal (eq.~\ref{eq:a3}) is correlated out to frequency separations $\mid \nu_n -\nu_m \mid \sim 0.5 \, {\rm MHz}$ which span several frequency channels. This implies that the data covariance matrix $\langle \mathcal{V}(\vec{U}_a,\nu_n) \mathcal{V}^{*}(\vec{U}_a,\nu_m) \rangle$ has considerable off-diagonal terms, a feature that is not very convenient for the Fisher matrix analysis. For the Fisher matrix analysis it is convenient to use the delay channels $\tau_m$ (Morales 2005) instead of the frequency channels $\nu_n$. We define \begin{equation} v(\vec{U}_a,\tau_m)=\Delta \nu_c \sum_n e^{2 \pi i \tau_m (\nu_n-\nu_0)} \mathcal{V}(\vec{U}_a,\nu_n) \label{eq:b4} \end{equation} where $N_c$ is the number of frequency channels, $B= N_c \Delta \nu_c$ and $$\tau_m=\frac{m}{B} \hspace{1.3cm} \frac{-N_c}{2} < m \le \frac{N_c}{2}\, . $$ The covariance matrix $\langle v(\vec{U}_a,\tau_m) v^*(\vec{U}_b,\tau_n) \rangle $ is zero if $n \neq m$, and we need only consider the diagonal terms $n=m$. Defining $C_{ab}(m)=\langle v(\vec{U}_a,\tau_m) v^*(\vec{U}_b,\tau_m) \rangle $ we have \begin{eqnarray} C_{ab}(m) &=& \frac{B}{ r_{\nu}^2 r_{\nu}^{'}} \left(\frac{2 k_B}{\lambda^2}\right)^2 \int d^2 U^{'} \tilde{A}(\vec{U}_a-\vec{U}^{'}) \tilde{A}^{*}(\vec{U}_b-\vec{U}^{'}) P_{\rm HI}(\frac{2 \pi \vec{U}^{'}}{r_{\nu}},\frac{2 \pi \tau_m}{r_{\nu}^{'}}) \nonumber \\ &+& \delta_{a,b} \, 2 \, \Delta \nu_c \, B \, \frac{\sigma^2}{(N_A - a)} \,. \label{eq:a5} \end{eqnarray} The factor $(N_A - a)^{-1}$ in the noise contribution accounts for the redundancy in the baseline distribution. The functions $\tilde{A}(\vec{U}_a-\vec{U}^{'})$ and $\tilde{A}^{*}(\vec{U}_b-\vec{U}^{'})$ have an overlap only if $a=b$ or $a=b \pm 1$ (Paper I).
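As a consistency check (our addition), the single-visibility rms noise values quoted in Table \ref{tab:array} follow from eq.~(\ref{eq:a4}) if one assumes an integration time of $t = 16\,{\rm s}$ per visibility; this integration time is our assumption, as it is not stated explicitly in the text above:

```python
import math

# Single-visibility rms noise, eq. (a4): sigma = sqrt(2) k_B T_sys / (eta A sqrt(dnu_c t))
k_B = 1.38e-23      # Boltzmann constant, J/K
T_sys = 150.0       # system temperature, K
eta = 0.6           # aperture efficiency
dnu_c = 0.1e6       # channel width, Hz
t = 16.0            # integration time per visibility in seconds (assumed)
Jy = 1e-26          # W m^-2 Hz^-1

def sigma_jy(A):
    """rms noise in Jy for an antenna of geometric aperture A (m^2)."""
    return math.sqrt(2) * k_B * T_sys / (eta * A * math.sqrt(dnu_c * t)) / Jy

print(sigma_jy(30 * 11.5))   # Phase I aperture  -> ~1.12 Jy (Table 1)
print(sigma_jy(30 * 1.92))   # Phase II aperture -> ~6.70 Jy (Table 1: 6.69 Jy)
```

The scaling $\sigma \propto 1/A$ is why the noise per visibility grows from Phase I to Phase IV as the individual antennas become smaller.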
The visibilities at two baselines $\vec{U}_a$ and $\vec{U}_b$ are uncorrelated $(C_{ab}(m)=0)$ if $\mid a -b \mid > 1$, {\it i.e.} the visibility at a particular baseline $\vec{U}_a$ is only correlated with the other visibility measurements at the same baseline or the adjacent baselines $\vec{U}_{a \pm 1}$. Thus, for a fixed $m$, $C_{ab}(m)$ is a symmetric, tridiagonal matrix. Further, the noise only contributes to the diagonal terms, and it does not figure in the off-diagonal terms. We use the data covariance $C_{ab}(m)$ to calculate the Fisher matrix using \begin{equation} F_{\alpha \beta}=\frac{1}{2} \sum_m C^{-1}_{ab}(m) [C_{bc}(m)]_{,\alpha} C^{-1}_{cd}(m) [C_{da}(m)]_{,\beta} \label{eq:a6} \end{equation} where the indices $a,b,c,d$ are to be summed over all baselines, and $\alpha,\beta$ refer to the various parameters which are to be estimated from the OWFA data. It is possible to get further insight into the cosmological information contained in the data covariance $C_{ab}(m)$ by considering large baselines $U_a \gg d/\lambda$, where it is reasonable to assume that the function $\tilde{A}(\vec{U}_a-\vec{U}^{'}) \tilde{A}^{*}(\vec{U}_b-\vec{U}^{'}) $ in eq.~(\ref{eq:a5}) falls sharply in comparison to the slowly changing HI power spectrum as $\vec{U}^{'}$ is varied. The integral in eq.~(\ref{eq:a5}) can then be approximated as \begin{equation} \approx P_{\rm HI}({\bf k}) \int d^2 U^{'} \tilde{A}(\vec{U}_a-\vec{U}^{'}) \tilde{A}^{*}(\vec{U}_b-\vec{U}^{'}) \label{eq:a7} \end{equation} where \begin{equation} {\bf k} \equiv ({\bf k}_\perp,k_\parallel) \equiv (\pi[\vec{U}_a+\vec{U}_b]/r_{\nu},2 \pi \tau_m/r_{\nu}^{'})\,.
\label{eq:a8} \end{equation} The integral in eq.~(\ref{eq:a7}) can be evaluated analytically, and we have the approximate formula \begin{eqnarray} C_{ab}(m) = B \left[ \frac{(2 k_B)^2 (4 \delta_{a,b}+\delta_{a,b \pm 1})} { 9 \lambda^2 b d r_{\nu}^2 r_{\nu}^{'}} P_{\rm HI}({\bf k})+ \frac{\delta_{a,b} \, 2 \, \Delta \nu_c \sigma^2}{(N_A - a)} \,\right] \,. \label{eq:a9} \end{eqnarray} \begin{figure}[ht] \begin{center} \psfrag{Signal Covariance}[c][c][1.0][0]{$S_{ab}(m)$\ Jy$^{2}$MHz$^{2}$} \psfrag{p1}[c][c][0.8][0]{$(a=b)$} \psfrag{p2}[c][c][0.8][0]{$(a=b\pm1)$} \psfrag{PHASE II}[c][c][1.1][0]{PHASE II} \includegraphics[scale =.5]{comp_PH2.ps} \caption{This shows the signal contribution to the covariance matrix $C_{ab}(m)$ for $m=1$ calculated using eq. (\ref{eq:a5}) (solid curves) and the approximate formula eq. (\ref{eq:a9}) (dashed curves).} \label{fig:convo2} \end{center} \end{figure} Figure \ref{fig:convo2} shows a comparison of the signal contribution to the covariance matrix calculated using eq. (\ref{eq:a5}) and the approximate formula eq. (\ref{eq:a9}). We find that the results are in reasonably good agreement over the entire $U$ range for $m=1$. The agreement is better at large baselines $U \ge 30$, where the two curves are nearly indistinguishable. The results are indistinguishable over the entire $U$ range for $m>1$ (not shown here). Although we use the approximate expression eq.~(\ref{eq:a9}) to interpret $C_{ab}(m)$ in the subsequent discussion, we have used eq.~(\ref{eq:a5}) to compute $C_{ab}(m)$ throughout the entire analysis. Returning to eq. (\ref{eq:a9}), first, the signal contribution to the diagonal terms is found to be $4$ times larger than that to the off-diagonal terms. Next, we see that each non-zero element of the covariance matrix $C_{ab}(m)$ corresponds to the HI power spectrum at a particular comoving Fourier mode ${\bf k}$ given by eq.~(\ref{eq:a8}).
Each delay channel $\tau_m$ corresponds to a $k_{\parallel m}=2 \pi \tau_m/r_{\nu}^{'}$ which spans the values \begin{equation} k_{\parallel m}=m \left( \frac{2 \pi }{B r_{\nu}^{'}} \right) \hspace{1cm} \frac{-N_c}{2} < m \le \frac{N_c}{2}\, . \end{equation} For a fixed $\tau_m$, the diagonal terms of $C_{ab}(m)$ with $\vec{U}_a=\vec{U}_b$ correspond to $k_{\perp a}= 2 \pi U_a/r_{\nu}$ which spans the values \begin{equation} k_{\perp a}= a \left( \frac{2 \pi d}{\lambda r_{\nu}} \right) \hspace{1cm} 1 \le a \le N_A-1 \,, \end{equation} and the off-diagonal terms of $C_{ab}(m)$ with $\vec{U}_b=\vec{U}_{a+1}$ correspond to $k_{\perp a}= \pi [U_a+U_b]/r_{\nu}$ which spans the values \begin{equation} k_{\perp a}= (a \pm 0.5) \left( \frac{2 \pi d}{\lambda r_{\nu}} \right) \hspace{1cm} 1 \le a \le N_A-2 \,, \end{equation} We see that the $k_{\perp}$ value probed by any off-diagonal term is located mid-way between the $k_{\perp}$ values probed by the two nearest diagonal terms. Considering both the diagonal and the off-diagonal terms, we find that the different $k_{\perp}$ values that will be probed by OWFA are located at an interval of $\Delta k_{\perp}= \pi d/(\lambda r_{\nu})$. 
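The $(k_{\perp}, k_{\parallel})$ ranges collected in Table \ref{tab:kperp} below follow from the relations just given; a short numerical sketch (our addition, using the Phase II parameters of Table \ref{tab:array} and the values $r_\nu = 6.85\,{\rm Gpc}$ and $r'_\nu = 11.5\,{\rm Mpc\,MHz^{-1}}$ quoted earlier):

```python
import math

# OWFA Phase II parameters
lam = 3.0e8 / 326.5e6    # observing wavelength in metres (~0.918 m)
d = 1.92                 # antenna spacing, m
N_A = 264                # number of antennas
B = 30.0                 # bandwidth, MHz
dnu_c = 0.1              # channel width, MHz
r_nu = 6.85e3            # comoving distance to z = 3.35, Mpc
r_nu_p = 11.5            # dr/dnu, Mpc/MHz

k_perp_min = 2 * math.pi * d / (lam * r_nu)    # shortest baseline (a = 1)
k_perp_max = (N_A - 1) * k_perp_min            # longest baseline (a = N_A - 1)
k_par_min = 2 * math.pi / (B * r_nu_p)         # first delay channel (m = 1)
k_par_max = (B / dnu_c / 2) * k_par_min        # last delay channel (m = N_c/2)

print(k_perp_min, k_perp_max)   # ~1.9e-3 and ~0.50 Mpc^-1
print(k_par_min, k_par_max)     # ~1.8e-2 and ~2.73 Mpc^-1
```

These numbers reproduce the Phase II column of Table \ref{tab:kperp}, and make explicit that the $k_\parallel$ coverage starts roughly a decade above the $k_\perp$ coverage.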
\begin{table} \begin{center} \caption{The $k_{\perp}$ and $k_{\parallel}$ range that will be probed by the different Phases of OWFA.} \vspace{.2in} \label{tab:kperp}
\begin{tabular}[scale=.3]{|l|c|c|c|c|} \hline \hline
${\rm Mpc^{-1}}$ & Phase I & Phase II & Phase III & Phase IV\\ \hline
${k_{\perp}[min]}$\, & $1.1\times 10^{-2}$ & $ 1.9\times 10^{-3} $ & $ 9.5\times 10^{-4} $ & $4.8\times 10^{-4} $ \\ \hline
${k_{\perp}[max]}$ & $4.8\times 10^{-1}$ & $ 5.0\times 10^{-1} $ & $ 5.1\times 10^{-1} $ & $5.1\times 10^{-1} $ \\ \hline
${k_{\parallel}[min]}$ & $3.0\times 10^{-2}$ & $ 1.8\times 10^{-2} $ & $ 9.1\times 10^{-3} $ & $4.6\times 10^{-3} $ \\ \hline
${k_{\parallel}[max]} $ & $2.73 $ & $2.73 $ & $ 2.73$ & $2.73 $ \\ \hline
\end{tabular} \end{center} \end{table}
In addition to the HI signal and the noise considered in this paper, the OWFA visibilities will also contain a foreground contribution. For the purpose of this work we make the simplifying assumption that the foregrounds are constant across different frequency channels, and hence they only contribute to the $k_\parallel=0$ mode. In reality the foreground contamination will possibly extend to other modes also. However, in this work we make the most optimistic assumption that the foregrounds will be restricted to the $k_\parallel=0$ mode, and we have excluded this mode in the subsequent analysis. Table~\ref{tab:kperp} shows the $k_{\perp},k_{\parallel}$ range that will be probed by the different Phases of OWFA. We see that for all the phases (except Phase I) the minimum value of $k_\parallel$ is approximately $10$ times larger than the corresponding $k_{\perp}[min]$. The sampling along $k_\parallel$, which is decided by $1/B$, has a spacing $\Delta k_\parallel=k_\parallel[min]$ which also is $\sim 5$ times larger than $\Delta k_{\perp}=k_{\perp}[min]/2$, which is decided by the antenna spacing $d$. The maximum $k_\parallel$ values also are approximately $4$ times larger than the corresponding $k_{\perp}[max]$.
It is thus clear that the sampling in $k_{\perp}$ is quite different from the $k_{\parallel}$ sampling, and the $k_{\parallel}$ values are several times larger than the $k_{\perp}$ values. This disparity in the $k_\parallel$ and $k_{\perp}$ coverage and sampling poses a problem for using OWFA to quantify redshift space distortion. We shall return to this in Section \ref{sec:sum} where we discuss the results of our analysis. \begin{figure}[ht] \begin{center} \vskip.2cm \psfrag{k1}[c][c][1.0][0]{$P_{HI}(k)$\ Mpc$^{3}$ mK$^{2}$} \psfrag{k2}[c][c][1.0][0]{$k$\ Mpc$^{-1}$} \psfrag{PHASE II}[c][c][1.1][0]{PHASE II} \psfrag{z=3.35}[c][c][1.0][0]{z=3.35} \centerline{\includegraphics[scale =.5]{pkHI_PH2.ps}} \caption{The $k$ range that will be probed by $C_{ab}(m)$ for different values of $m$. The curves for different $m$ have been arbitrarily displaced vertically to make them distinguishable. For reference, we have also shown the expected 21-cm brightness temperature fluctuation $P_{HI}(k)$ (dashed curve) where $P_{HI}(k) \equiv P_{HI}(k,\mu=0)$ is the $z=3.35$ HI 21-cm brightness temperature power spectrum (eq. \ref{eq:pk}).} \label{fig:pkHI1} \end{center} \end{figure} Figure \ref{fig:pkHI1} shows the $k=\mid {\bf k} \mid =\sqrt{k_\parallel^2 + k_{\perp}^2}$ range that will be probed by $C_{ab}(m)$ for different values of $m$. We see that the range $k \sim k_{\parallel}[min]$ to $k \sim k_{\perp}[max]$ is probed for $m=1$. The $k$ range shifts to larger $k$ values as $m$ is increased, and the entire $k$ range lies beyond $1 \, {\rm Mpc}^{-1}$ for $m \ge 64$. Figure \ref{fig:pkHI2} shows a histogram of all the different $k$ modes that will be probed by OWFA Phase II. We expect the number of modes $\Delta N_k$ in bins of constant width $\Delta k$ to scale as $\Delta N_k \sim k^2 \, \Delta k$ if the ${\bf k}$ modes are uniformly distributed in three dimensions (3D). 
The modes ${\bf k}$ have a 2D distribution for OWFA, and we expect $\Delta N_k \sim k \, \Delta k$ if the modes are uniformly distributed. However, we have seen that the distribution is not uniform ($\Delta k_{\parallel}$ and $\Delta k_{\perp}$ have different values) and the histogram does not show the expected linear behaviour. The increase in $\Delta N_k$ is faster than linear; it peaks at $k \sim 1 \, {\rm Mpc}^{-1}$ and is nearly constant at $\sim 60 \, \%$ of the peak value for larger modes out to $k \le k_{\parallel}[max] \sim \, 3 \, {\rm Mpc}^{-1}$. It is clear that a very large fraction of the Fourier modes $k$ that will be probed by OWFA are in the range $ 1 -3 \, {\rm Mpc}^{-1}$. We see that the Fourier modes all lie in this range for $m \ge 64$ (Figure \ref{fig:pkHI1}). The range $ k < 1 \, {\rm Mpc}^{-1}$ will be sampled by a relatively small fraction of the modes, and the range $ k < 0.1 \, {\rm Mpc}^{-1}$ will only be sampled for $m \le 5$. \begin{figure}[t] \begin{center} \psfrag{Number of k-modes}[c][c][1.0][0]{$\Delta N_k$} \psfrag{k}[c][c][1.0][0]{$k$\ Mpc$^{-1}$} \psfrag{PHASE II}[c][c][1.1][0]{PHASE II} \psfrag{p1}[c][c][0.9][0]{$\Delta k = 0.03$ Mpc$^{-1}$} \vskip.2cm \centerline{\includegraphics[scale =.5]{hist_PH2.ps}} \caption{The histogram shows the number of $k$ modes, $\Delta N_k$ within bin width $\Delta k$. } \label{fig:pkHI2} \end{center} \end{figure} \begin{figure}[t] \begin{center} \psfrag{Covariance}[c][c][1.0][0]{$C_{ab}(m)$\ Jy$^{2}$ MHZ$^{2}$} \psfrag{PHASE II}[c][c][1.1][0]{PHASE II} \psfrag{10hr}[c][c][1.0][0]{10 hr} \psfrag{100hr}[c][c][1.0][0]{100 hr} \psfrag{1000hr}[c][c][1.0][0]{1000 hr} \includegraphics[scale =.5]{signal_PH2.ps} \caption{This shows the diagonal (thick red curve) and the off-diagonal (thin blue curve) elements of the signal contribution to the covariance matrix $S_{ab}(m)$ for $m= 1$, $8$ and $32$. 
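The shape of this mode histogram can be mimicked with a toy mode count. The grid spacings and extents below are rough placeholders chosen only to reproduce the disparity between $\Delta k_{\perp}$ and $\Delta k_{\parallel}$; they are not the exact OWFA Phase II values:

```python
import math

# Toy (k_perp, k_par) grids with very different spacings (placeholders)
dk_perp, n_perp = 0.001, 520
dk_par, n_par = 0.018, 150
k_perp = [i * dk_perp for i in range(1, n_perp + 1)]
k_par = [j * dk_par for j in range(1, n_par + 1)]

# Histogram of mode magnitudes k = sqrt(k_perp^2 + k_par^2)
bin_width = 0.03
counts = {}
for kp in k_perp:
    for kl in k_par:
        b = int(math.hypot(kp, kl) / bin_width)
        counts[b] = counts.get(b, 0) + 1
```

With these toy numbers the bins above $1 \, {\rm Mpc}^{-1}$ hold far more modes than the bins below $0.1 \, {\rm Mpc}^{-1}$, reflecting the behaviour described above.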
The system noise contribution (thick dashed black curves) to $C_{ab}(m)$ is shown for the different observing times indicated in the figure.} \label{fig:convo1} \end{center} \end{figure} Figure \ref{fig:convo1} shows the diagonal and the off-diagonal elements of the signal contribution to the covariance matrix $C_{ab}(m)$ (eq. \ref{eq:a5}). The noise contribution is also shown for reference. The noise contribution is independent of $m$, and it increases at the larger baselines, which have a lower redundancy $N_{A}-a$. The power spectrum $P_{HI}(k)$ is a decreasing function of $k$ for $k \ge 0.1 \, {\rm Mpc}^{-1}$, and most of the modes that will be probed by OWFA lie in this range. For a fixed $m$, the signal contribution is nearly flat for $U < r_{\nu} m/(B r^{'}_{\nu})$ and then decreases if $U$ is increased further. For $m=1$, the signal at small baselines $U \le 10$ is comparable to the noise for $T=100 \, {\rm hr}$. The signal is smaller than the noise at larger baselines. The overall amplitude of the signal contribution decreases for larger values of $m$: the signal covariance falls by a factor of $\sim 10$ from $m=1$ to $m=8$, where it is comparable to the noise for $T=1,000 \, {\rm hr}$. The signal falls by another factor of $\sim 20$ from $m=8$ to $m=32$. We see that the HI signal is relatively more dominant at the small delay channels and the small baselines. The HI signal is considerably weaker at larger $m$ and $U$, and the noise is also considerably higher at the larger baselines. \section{Results} \label{fma} We have assumed that the HI gas, which is believed to be associated with galaxies, traces the underlying matter distribution with a constant, scale-independent large-scale linear HI bias $b_{HI}$. Incorporating redshift space distortion, we have the HI power spectrum \begin{equation} P_{{\rm HI}}({\bf k})= A_{HI}^2 \, {\bar{T}^2} \, \left[ 1+ \beta\, {\mu^2} \right]^2 \,P(k) \,, 
\label{eq:pk} \end{equation} where $P(k)$ is the matter power spectrum, $\mu= k_\parallel/k$, and \begin{equation} \bar{T}(z) = 4.66 \, {\rm mK} \, (1+z)^2\, \left(\frac{\Omega_b h^2}{0.022} \right) \, \left(\frac{0.67}{h} \right) \, \left( \frac{H_0}{H(z)} \right) \,. \end{equation} The parameter $A_{HI}$ in eq. (\ref{eq:pk}) sets the overall amplitude of the HI power spectrum, and $A_{HI}= \bar{x}_{{\rm HI}} \, b_{HI}$, where $\bar{x}_{{\rm HI}}$ is the mean neutral hydrogen fraction. The parameter $\beta=f(\Omega)/ b_{HI}$ is the linear redshift distortion parameter. Note that the various terms in eq. (\ref{eq:pk}) are all at the redshift where the HI radiation originated, which is $z=3.35$ for OWFA. We have used the value $\bar{x}_{{\rm HI}} =0.02$ which corresponds to $\Omega_{gas}=10^{-3}$ from DLA observations (Prochaska \& Wolfe 2009; Noterdaeme et al. 2012; Zafar et al. 2013) in the redshift range of our interest. N-body simulations (Bagla, Khandai \& Datta 2010; Guha Sarkar et al. 2012) indicate that it is reasonably well justified to assume a constant HI bias $b_{HI}=2$ at wave numbers $k \le 1 \, {\rm Mpc}^{-1}$, and we have used this value for our entire analysis. This is also consistent with the semi-empirical simulations of Mar{\'{\i}}n et al. (2010). Using these values and the cosmological parameter values assumed earlier, we have $A_{HI}=4.0\times 10^{-2}$ and $\beta=4.93 \times 10^{-1}$, which serve as the fiducial values for our analysis. We have assumed that $\bar{T}$ and the $\Lambda$CDM matter power spectrum $P(k)$ are precisely known, and we have used the Fisher matrix analysis to determine the accuracy with which it will be possible to measure the parameters $A_{HI}$ and $\beta$ using OWFA observations. The Fisher matrix analysis (eq. \ref{eq:a6}) was carried out with the two parameters $q_1=\ln(A_{HI})$ and $q_2=\ln(\beta)$. 
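The model of eq. (\ref{eq:pk}) and $\bar{T}(z)$ can be written as a short function. The matter power spectrum $P(k)$ is left as a caller-supplied input, and the flat $\Lambda$CDM parameter values below are assumed placeholders rather than the exact values adopted here:

```python
import math

# Assumed flat LCDM parameter values (placeholders)
Omega_m, Omega_b_h2, h = 0.31, 0.022, 0.67

def hubble_ratio(z):
    """H(z)/H0 for a flat LCDM cosmology."""
    return math.sqrt(Omega_m * (1 + z) ** 3 + (1 - Omega_m))

def T_bar(z):
    """Mean 21-cm brightness temperature factor bar{T}(z), in mK."""
    return (4.66 * (1 + z) ** 2 * (Omega_b_h2 / 0.022)
            * (0.67 / h) / hubble_ratio(z))

def P_HI(k, mu, P_m, A_HI=4.0e-2, beta=4.93e-1, z=3.35):
    """Redshift-space HI power spectrum of eq. (pk); P_m is the
    matter power spectrum value at this k, supplied by the caller."""
    return A_HI ** 2 * T_bar(z) ** 2 * (1 + beta * mu ** 2) ** 2 * P_m
```

The factor $(1+\beta \mu^2)^2$ enhances the line-of-sight ($\mu=1$) power relative to $\mu=0$ by $(1+\beta)^2$ at the fiducial $\beta$.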
\begin{figure} \begin{center} \psfrag{SNR}[c][c][1.0][0]{SNR} \psfrag{T (hours)}[c][c][1.0][0]{t hr} \vskip.2cm \centerline{{\includegraphics[scale =.33]{snr_cnd.ps}} \hskip0.01cm { \includegraphics[scale =.33]{snr_mnd.ps}}} \caption{The Conditional (left) and Marginalized (right) SNR for $A_{HI}$ as a function of the observing time for the different Phases as indicated in the figure. The horizontal dashed and solid lines show SNR $=3$ and $5$ respectively.} \label{fig:snr} \end{center} \end{figure} We first focus on estimating $A_{HI}$, the amplitude of the HI signal. The Fisher matrix element $\sqrt{F_{11}}$ gives the signal-to-noise ratio (SNR) for a detection of the HI signal ($A_{HI}$) provided the value of $\beta$ is precisely known a priori (Conditional SNR). The left panel of Figure \ref{fig:snr} shows the expected Conditional SNR as a function of the observing time, and $t_C$ in Table \ref{tab:error} summarizes the time requirements for $3-\sigma$ and $5-\sigma$ detections. In reality, the value of $\beta$ is not known a priori, and one hopes to measure this from HI observations. While the cosmological parameters which determine $f(\Omega)$ are known to a relatively high level of accuracy, there is no direct observational handle on the value of $b_{HI}$ at present. It is therefore necessary to allow for the possibility that $b_{HI}$ can actually have a value different from the $b_{HI}=2$ assumed here. A recent compilation of the results from several studies (Padmanabhan, Roy Choudhury \& Refregier, 2014) has constrained $b_{HI}$ to be in the range $1.090 \leq b_{HI} \leq 2.06$ in the redshift range $3.25 \leq z \leq 3.4$. In our analysis we have allowed $b_{HI}$ to have a value in a larger interval $1.0 \leq b_{HI} \leq 3.0$, and we have marginalized $\beta $ over the corresponding interval $0.329 \leq \beta \leq 0.986$. 
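Since $\beta = f(\Omega)/b_{HI}$ with $f(\Omega)$ held fixed, the adopted $b_{HI}$ prior maps directly onto the quoted $\beta$ interval. A quick consistency check using the fiducial values:

```python
# Infer f(Omega) from the fiducial pair beta = 0.493 at b_HI = 2
beta_fid, b_fid = 4.93e-1, 2.0
f = beta_fid * b_fid                    # growth-rate factor, ~0.986

# The prior 1.0 <= b_HI <= 3.0 maps onto an interval in beta
b_min, b_max = 1.0, 3.0
beta_min, beta_max = f / b_max, f / b_min
```

This reproduces the marginalization range $0.329 \leq \beta \leq 0.986$ quoted above.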
The right panel of Figure \ref{fig:snr} shows the expected Marginalized SNR as a function of the observing time, and $t_M$ in Table \ref{tab:error} summarizes the time requirements for $3-\sigma$ and $5-\sigma$ detections. \begin{table} \begin{center} \caption{Here $t_C$ ($t_M$) is the observing time required for the Conditional (Marginalized) SNR $=3$ and $5$ as respectively indicated in the Table.} \vspace{.2in} \label{tab:error} \begin{tabular}[scale=.3]{|l|c|c|c|} \hline \hline Phase & SNR & $t_C\,({\rm hr})$ & $ t_M\, ({\rm hr})$\\ \hline Phase I & 5, 3 & $800,350 $ & $ 1190,390 $ \\ \hline Phase II & 5, 3 & $110, 60 $ & $150,70 $ \\ \hline Phase III & 5, 3 & $50, 20 $ & $50,20 $\\ \hline Phase IV & 5, 3 & $20, 10 $ & $ 25,15 $ \\ \hline \end{tabular} \end{center} \end{table} We find (Figure \ref{fig:snr}) that for small observing times $(t \le 50 \, {\rm hr})$, where the visibilities are dominated by the system noise, the Conditional and the Marginalized SNR both increase as ${\rm SNR} \propto t$. The increase in the SNR is slower for larger observing times, and it is expected to subsequently saturate, for very large observing times not shown here, at a limiting value set by the cosmic variance. We see (Table \ref{tab:error}) that $\sim 1190 \, {\rm hr}$ of observation are needed for a $5-\sigma$ detection with Phase I. The corresponding observing time for Phase II falls drastically to $110 \, {\rm hr}$ and $150 \, {\rm hr}$ for the Conditional and the Marginalized cases respectively. For Phase II, the HI signal is largely dominated by the low wave numbers $k\leq 0.2 \, {\rm Mpc}^{-1}$ (discussed later). Phase I, which has a larger antenna spacing and smaller frequency bandwidth, does not cover many of the low $k$ modes which dominate the signal contribution for Phase II. The required observing times are $\sim 0.5$ and $\sim 0.25$ of those for Phase II for Phases III and IV respectively. 
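The distinction between the Conditional and Marginalized SNR corresponds to two ways of reading errors off the $2\times2$ Fisher matrix: $\sigma_{\rm cond}=1/\sqrt{F_{11}}$ when the other parameter is held fixed, and $\sigma_{\rm marg}=\sqrt{(F^{-1})_{11}}$ when it is marginalized. A minimal sketch with made-up matrix entries:

```python
# Illustrative 2x2 Fisher matrix for q1 = ln(A_HI), q2 = ln(beta)
# (made-up numbers, not computed from the OWFA covariances)
F = [[25.0, 10.0],
     [10.0, 9.0]]

# Conditional error on q1: the other parameter held fixed
sigma_cond = 1.0 / F[0][0] ** 0.5

# Marginalized error on q1: sqrt of the (1,1) element of F^{-1}
det = F[0][0] * F[1][1] - F[0][1] * F[1][0]
sigma_marg = (F[1][1] / det) ** 0.5
```

Marginalizing can only inflate the error, so $\sigma_{\rm marg} \ge \sigma_{\rm cond}$, consistent with the Marginalized observing times in Table \ref{tab:error} being at least as long as the Conditional ones.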
The Marginalized SNRs are somewhat smaller than the Conditional ones; the difference, however, is not very large. The required observing time does not differ very much except for Phase II, where it increases from $110 \, {\rm hr}$ to $150 \, {\rm hr}$ for a $5-\sigma$ detection. \begin{figure}[ht] \begin{center} \vskip.2cm \psfrag{k1}[c][c][0.8][0]{$\Delta A_{HI}/A_{HI}$} \psfrag{k2}[c][c][0.8][0]{$\Delta \beta/\beta$} \centerline{{\includegraphics[scale =.33]{contour_II.ps}} { \includegraphics[scale =.33]{contour_III.ps}} { \includegraphics[scale =.33]{contour_IV.ps}}} \caption{This shows the expected $1 \sigma$ contours for $\Delta \beta/\beta $ and $\Delta A_{HI}/A_{HI} $ with observing times of 630\,hr (outer ellipses), 1600\,hr (intermediate ellipses) and 4000\,hr (inner ellipses) for the different Phases indicated in the figures.} \label{fig:contour} \end{center} \end{figure} We have considered the joint estimation of the two parameters $A_{HI}$ and $\beta$ using OWFA. Figure \ref{fig:contour} shows the expected $1 \sigma$ confidence intervals for $\Delta \beta/\beta $ and $\Delta A_{HI}/A_{HI} $ with three different observing times (630, 1600 and 4000 hr) for Phases II, III and IV. For Phase II, a joint estimation of the parameters $A_{HI}$ and $\beta$ is possible with 15\% and 60\% errors respectively using 1600 hr of observation. The errors on the parameters $A_{HI}$ and $\beta$ for 4000 hr are $\sim 2$ times smaller compared to 1600 hr. The constraints are tighter for Phases III and IV. A joint detection of $A_{HI}$ and $\beta$ with 3\% and 15\% errors respectively is feasible with 1600 hr of observation with Phase IV. 
\begin{figure}[h] \begin{center} \psfrag{l1}[c][c][1.0][0]{$F_{ab}$} \psfrag{l2}[c][c][1.0][0]{$k$ Mpc$^{-1}$} \psfrag{p1}[c][c][0.7][0]{$F_{11}$} \psfrag{p2}[c][c][0.7][0]{$F_{12}$} \psfrag{p3}[c][c][0.7][0]{$F_{22}$} \psfrag{PHASE II}[c][c][0.8][0]{PHASE II} \psfrag{T=150 hr}[c][c][0.7][0]{$t$=150 hr} \psfrag{T=800 hr}[c][c][0.7][0]{$t$=800 hr} \centerline{{\includegraphics[scale =.5]{bin_150.ps}}\hskip0.01cm {\includegraphics[scale =.5]{bin_800.ps}}} \caption{The relative contribution to the Fisher matrix components $F_{ab}$ from the different k-modes probed by Phase II for 150 hr and 800 hr of observation respectively.} \label{fig:pkbin} \end{center} \end{figure} It is interesting to investigate the $k$ range that contributes most to the HI signal at OWFA. We have seen that the Fourier modes $k$ sampled by OWFA are predominantly in the range $1 \le k \le 3 \, {\rm Mpc}^{-1}$, and there are relatively few modes in the range $ k \le 1 \, {\rm Mpc}^{-1}$ (Figure \ref{fig:pkHI2}). However, the HI signal (Figure \ref{fig:convo1}) is much stronger at the smaller modes, whereas the larger modes have a weaker HI signal and are dominated by the noise. It is therefore not evident which $k$ range contributes the most to the OWFA HI signal detection. Figure \ref{fig:pkbin} shows the relative contributions to the Fisher matrix from the different $k$ modes. We see that for $t=150 \, {\rm hr}$, which corresponds to a $5-\sigma$ detection, the bulk of the contribution is from the range $ k \le 0.1 \, {\rm Mpc}^{-1}$. The larger modes do not contribute much to the signal. We have also considered $t=800 \, {\rm hr}$; here the contributing range extends slightly to $k \le 0.2 \, {\rm Mpc}^{-1}$, and the contribution peaks around $k \approx 0.1 \, {\rm Mpc}^{-1}$. In a nutshell, the OWFA HI signal is predominantly from the $k$ range $0.018 \le k \le 0.2 \, {\rm Mpc}^{-1}$. The larger modes, though abundant, do not contribute much to the HI signal. 
\section{Summary and conclusions} \label{sec:sum} We have considered four different Phases of OWFA, and studied the prospects of detecting the redshifted 21-cm HI signal at $326.5 \, {\rm MHz}$, which corresponds to redshift $z = 3.35$. Phases I and II are currently under development and are expected to be functional in the near future. Phases III and IV are two hypothetical configurations which have been considered as possible future expansions. We have used the Fisher matrix analysis to predict the accuracy with which it will be possible to estimate the two parameters $A_{HI}$ and $\beta$ using OWFA. Here $A_{HI}$ is the amplitude of the 21-cm brightness temperature power spectrum and $\beta$ is the linear redshift space distortion parameter. For the purpose of this work we make the most optimistic assumption that the foreground contributions do not change across the different frequency channels, and hence only contribute to the $k_\parallel=0$ mode. In reality the foreground contamination will extend to other modes also. Further, the chromatic response of the interferometer, calibration errors, systematics in the receivers and radio-frequency interference (RFI) have not been considered in the paper. Focusing first on just detecting the HI signal, we have marginalized $\beta$ and considered the error estimates on $A_{HI}$ alone. We find that a $5-\sigma$ detection of the HI signal is possible with $1190$ and $150 \, {\rm hr}$ of observation for Phases I and II respectively. The observing time is reduced by factors of $\sim 0.5$ and $\sim 0.25$ relative to Phase II for Phases III and IV respectively. We find that there is a significant improvement in the prospects of a detection using Phase II as compared to Phase I, and we have mainly considered Phase II for much of the discussion in the paper. We have also considered the joint estimation of the parameters $A_{HI}$ and $\beta$. 
For Phase II, a joint estimation of the parameters $A_{HI}$ and $\beta$ is possible with 15\% and 60\% errors respectively using 1600 hr of observation. To estimate $\beta$ it is necessary to sample Fourier modes ${\bf k}$ of a fixed magnitude $k$ which are oriented at different directions to the line of sight. In other words, $\mu=k_{\parallel}/k$ should uniformly span the entire range $-1 \le \mu \le 1$. However, the $k_{\parallel}$ values are much larger than $k_{\perp}$, and the Fourier modes are largely concentrated around $\mu \sim 1$ for Phase II (Section \ref{sec:vc}). The restriction arises from the limited OWFA frequency bandwidth (Table~\ref{tab:array}), which is set by the anti-aliasing filter. Multi-field observations and a larger bandwidth ($> 30 \,{\rm MHz}$) of the OWFA hold the potential to probe the expansion history and constrain cosmological parameters using BAO measurements from the large-scale HI fluctuations at $z = 3.35$. Anisotropies in the clustering pattern in redshifted 21-cm maps at this redshift produced by the Alcock-Paczynski effect also have the potential to probe cosmology and structure formation. It is also possible to constrain neutrino masses using OWFA and compare the results with those from different fields of cosmology (LSS, CMBR, BBN). Thus the OWFA could provide highly complementary constraints on neutrino masses. We leave investigation of such issues for future studies. The present work has assumed that the shape of the HI power spectrum is exactly determined by the $\Lambda$CDM model, and has only focused on estimating the overall amplitude $A_{HI}$ from OWFA observations. The OWFA HI signal is predominantly from the $k$ range $0.02 \le k \le 0.2 \, {\rm Mpc}^{-1}$. It is possible to use OWFA observations to estimate $P_{HI}(k)$ in several separate bins over this $k$ range, without assuming anything about the shape of the HI power spectrum. 
In a forthcoming paper, we plan to calculate Fisher matrix estimates for the binned power spectrum. \section*{Acknowledgment} The authors acknowledge Jayaram N. Chengalur, Jasjeet S. Bagla, Tirthankar Roy Choudhury, C.R. Subrahmanya, P.K. Manoharan and Visweshwar Ram Marthi for useful discussions. AKS would like to acknowledge Rajesh Mondal and Suman Chatterjee for their help. SSA would like to acknowledge CTS, IIT Kharagpur for the use of its facilities and the support by DST, India, under Project No. SR/FTP/PS-088/2010. SSA would also like to thank the authorities of IUCAA, Pune, India for providing the Visiting Associateship programme.
During the next few weeks, we'll be exploring the many facets of electronic slide presentations as we work on the Slide Deck Redesign assignment. We'll use part of each class period for hands-on work, so you'll need to bring your own laptop to class or make sure your project files are accessible on one of the classroom laptops. On Thursday, we'll experiment with color and images in our presentations. Before you come to class, please read pages 63–125 in Presentation Zen Design and be ready to apply the principles in those chapters to your project. If you'd like a second opinion about potential files for your Slide Deck Redesign project, or if you have questions about anything else, feel free to email me this weekend.
\section{Introduction} The application of laser spectroscopic techniques to elucidate the subtle perturbations of atomic energy levels due to the nuclear electromagnetic properties has given rise to the study of fundamental nuclear structure, in particular magnetic dipole moments ($\mu$), electric quadrupole moments ($Q$) and changes in the mean-squared nuclear charge radii $\delta \left\langle r^2 \right\rangle$. These methods, in combination with modern radioactive ion beam (RIB) facilities, offer a powerful probe of changes in the structure of exotic nuclei. They provide information on nuclear shell evolution, nuclear shapes and sizes, and single-particle correlations~\cite{neyens2005,flanagan2009,ruiz2016,marsh2018,ichikawa2018,miller2019}. The majority of the experimental techniques in current use at RIB facilities provide measurements of hyperfine frequency splittings with a precision of the order of 1 MHz~\cite{Campbell2016}. This limitation restricts the sensitivity to higher order terms in the electromagnetic multipole expansion of the nuclear current densities, as well as to higher order radial moments of the charge density distribution. The progress in the development of higher precision methods along with ongoing development of theoretical tools has the potential to provide new perspectives which could help shape our understanding of the atomic nucleus. \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{hfs.pdf} \caption{a) Schematic illustration of the experimental setup. The atom beam is produced from a tantalum oven mounted in the bottom vacuum vessel, intersects with the optical pumping laser beam, and then crosses the RF interaction region in the second vessel. After passing through a collimating slit, the atoms are then ionized using three lasers and subsequently counted using an ion detector. The laser ionization scheme used to study the D$_{5/2}$ state is shown in b), indicating also the hyperfine structure schematically. 
Figures c) and d) show example spectra obtained by scanning the first step in the laser ionization scheme, for the transitions starting from the D$_{3/2}$ and D$_{5/2}$ states respectively.} \label{fig:expt} \end{figure} Recently, high-precision isotope shift measurements combined with improved atomic calculations were proposed for a determination of the fourth-order radial moment of the charge density~\cite{Papoulia2016}, which can in turn be directly linked to the surface thickness of the nuclear density~\cite{reinhard2020}. The hyperfine anomaly, only measured for a handful of radioactive isotopes (see e.g.~\cite{Takamine2014, papuga2014, zhang2015, schmidt2018, Persson2020}), would shed light on the distribution of magnetisation inside the nuclear volume~\cite{stroke2000,karpeshim2015}. In addition to the M1 and E2 moments, $\mu$ and $Q$ respectively, the M3 magnetic octupole moment $\Omega$ is in principle accessible using existing techniques for radioactive isotopes. To our knowledge, this observable has only been measured for 18 stable isotopes~\cite{Childs1971,Daly1954,Brown1966,Faust1963,Kusch1957,Jaccarino1954,Faust1961,Gerginov2003,Lewty2012,Unsworth1969,Singh2013,McDermott1960,Landman1970,fuller1976}. The general features of these values can be understood in terms of the Schwartz limits~\cite{Schwartz1955}. There are a few notable exceptions: the recently measured $\Omega$ of $^{133}$Cs~\cite{Gerginov2003} and $^{173}$Yb~\cite{Singh2013} are significantly larger than expected from shell model theory. For $^{173}$Yb, we recently performed an experiment to validate the earlier measurements, where a value of $\Omega$ consistent with zero within experimental uncertainties was obtained \cite{degroote2021}. In this Letter, we aim to further contribute to this ongoing work with an experimental and theoretical investigation of the hyperfine structure and nuclear electromagnetic moments of $^{45}$Sc. Our approach is threefold. 
Firstly, we describe a measurement protocol which combines the efficiency of resonance laser ionization spectroscopy (RIS) \cite{marsh2018,degroote2019,reponen2021} with the precision of radio-frequency (RF) spectroscopy \cite{Childs1992}. The efficiency provided by the RIS method is vital for future applications on radioactive isotopes due to limited production rates of radioactive ion beams at on-line facilities. The combination with RF spectroscopy offers a dramatic improvement in the precision as compared to conventional optical methods, by at least three orders of magnitude. We demonstrate this with a high-precision measurement of three nuclear electromagnetic moments of $^{45}$Sc, including $\Omega$. Secondly, we combine these measurements with state-of-the-art atomic-structure calculations to evaluate the sensitivity of the $3d4s^2 \ ^2D_{3/2, 5/2}$ states in neutral scandium to the nuclear octupole moment $\Omega$. We evaluate the impact of off-diagonal HFS effects, essential to extract $\Omega$ from the measurements. We note that both the $D_{3/2}$ and meta-stable $D_{5/2}$ states are expected to be well-populated in a fast-beam charge exchange reaction~\cite{Vernon2019}. Therefore, radioactive scandium isotopes could be studied using collinear laser-double resonance methods~\cite{Nielsen1983, Childs1992} in the future. Thirdly, with a single proton outside a doubly-magic calcium ($Z=20$) core, comparison of $\Omega$ for a chain of scandium isotopes provides a first important testing ground for nuclear theory calculations. Furthermore, such measurements could help shed light on the many intriguing nuclear structure phenomena observed in the calcium isotopes~\cite{steppenbeck2013,wienholtz2013,ruiz2016,Tanaka2020}. The proximity to proton and neutron shell closures makes it possible to perform both e.g. shell-model and Density Functional Theory (DFT) calculations. 
As we seek to eventually examine all existing values of $\Omega$ in one consistent framework, with measurements for nuclei scattered throughout the nuclear landscape, developing a reliable global theory for magnetic properties would be highly advantageous. So far, very little is known regarding the overall performance of standard nuclear DFT in describing $\mu$, cf.~Refs.~\cite{(Ach14),(Bon15),(Bor17)}, and nothing is known about the DFT values of $\Omega$. Here, we thus start this investigation with $^{45}$Sc. The comparison to nuclear shell-model calculations, which have a more well-established track record in computing both $\mu$ and $\Omega$ (see e.g.~\cite{brown1980}), serves to benchmark these developments. These three aspects are all required ingredients for a systematic study of $\Omega$ throughout the nuclear chart. The extraction of a higher-order electromagnetic moment from the evaluation of atomic spectra in a nuclear-model independent manner has the potential to provide new insight into the distribution of protons and neutrons within the nuclear volume. $\Omega$ is affected by correlations (core polarization and higher order configuration mixing) differently than the magnetic dipole moment, as was highlighted via calculations of the nuclear magnetization distribution of $^{209}$Bi~\cite{senkov2002}. Measurements of $\Omega$ may thus furthermore help to address open questions related to e.g. effective nucleon $g$-factors and charges. \section{Overview of the experiment} The value of $\Omega$ can be extracted from the first-order shift ($E_F^{(1)}$) in the hyperfine structure (HFS) interval, governed by the hyperfine interaction Hamiltonian: \begin{align} \mathcal{H}_{\text{hyp}} & = A{\bf I \cdot J} + B\frac{{3({\bf I\cdot J})^2 + \frac {3}{2}({\bf I \cdot J})- I(I+1)J(J+1)}}{2I(2I-1)J(2J-1)} \notag \\ + & C \left[ \frac{10({\bf I \cdot J})^3 + 20({\bf I \cdot J})^2 }{I(I-1)(2I-1)J(J-1)(2J-1)} \right. 
\notag \\ + &\left.\frac{2{\bf I \cdot J}\{ I(I+1) + J(J+1) - 3N + 3\} - 5N} {I(I-1)(2I-1)J(J-1)(2J-1)}\right], \label{eq:hfs} \end{align} where $N = I(I+1)J(J+1)$, and noting $\left\langle F, m_F \right| {\bf I \cdot J} \left| F, m_F\right\rangle = \frac12 [F(F+1) - I(I+1) - J(J + 1)]$. In these expressions, $I$, $J$ and $F$ are the nuclear, atomic, and total angular momentum, while $A$, $B$ and $C$ are the magnetic dipole (M1), electric quadrupole (E2) and magnetic octupole (M3) HFS constants, respectively. These are all proportional to their corresponding nuclear moment, in a way which depends on the field distribution generated by the electrons at the site of the nucleus. Thus, accurate atomic structure calculations of $C/ \Omega$ have to be performed to extract $\Omega$ from $C$. \begin{figure*}[t] \centering \includegraphics[width=0.9\textwidth]{lines.pdf} \caption{RF scans of several $(F, m_{F}) \rightarrow (F-1,m_{F})$ transitions in the D$_{5/2}$ and D$_{3/2}$ hyperfine manifolds. Green vertical lines indicate the $\Delta m_F = 0$ resonance locations governed by Eq.\eqref{eq:fullH} with the best-fitting hyperfine constants and magnetic field values. The y-axis represents the ratio of RF-on and RF-off datapoints, as described in the text.} \label{fig:rf_data} \end{figure*} There are three stages in our experiment, schematically illustrated in Fig.~\ref{fig:expt}. First, by tuning a continuous wave (cw) laser into resonance with a transition from one of the hyperfine levels ($F$) of the atomic ground state into a corresponding hyperfine level of an excited $J$ state, population may be optically pumped. Through de-excitation from the excited state into either another level ($F'$) of the ground-state hyperfine manifold, or into other dark states, the population of the state $F$ is depleted. If RIS is subsequently performed starting from the same $F$ state, a reduced ion count rate is observed. 
If now, prior to the laser ionization stage, an RF field is tuned into resonance with a $(F, m_F) \rightarrow (F-1,m_{F})$ transition, the observed ion count rate increases. By scanning the frequency of the RF and recording the ion count rate, the hyperfine spacing between the levels $F$ and $F'$ of the ground-state manifold can thus be measured precisely. Due to the relative orientation of the oscillating magnetic field and the earth's magnetic field, both pointing along the atom beam axis, only $\Delta m_F=0$ resonances are observed. The vacuum chamber used for the experiments is shown in Fig.~\ref{fig:expt}a. It consists of three cylindrical vessels, one to produce the atom beam, a second one for optical pumping and RF spectroscopy, and the third and final one for laser ionization and ion detection. These vessels are separated by metal walls with a thin slit (1x20mm) used to collimate the atom beam. We produced an atomic beam of stable scandium in the bottom chamber by resistively heating a tantalum furnace. In the second chamber, up to 15\,mW of cw laser light crossed the atom beam orthogonally, in order to optically pump the atoms. This light was produced with a frequency-doubled Sirah Matisse Ti:Sapphire laser, focused to a $\sim$1\,mm spot. The laser was tuned to drive either the 25 014.190\,cm$^{-1}$ $3d4s({}^3D)4p \ ^2D^{\circ}_{5/2}$ state or the 24866.172\,cm$^{-1}$ $3d4s({}^3D)4p \ ^2D^{\circ}_{3/2}$, starting from respectively the thermally populated $3d4s^2 \ ^2D_{5/2}$ state at 168.3371\,cm$^{-1}$ and the $3d4s^2 \ ^2D_{3/2}$ ground state. The atomic beam then passed through a loop of wire, 8\,cm above the optical pumping region. This wire was terminated with 50 $\Omega$ in order to ensure good impedance matching, minimizing reflected RF power. The voltage standing wave ratio (VSWR) was measured using a Rhode\&Schwarz ZVL Network analyser, and was found to vary negligibly within the scan range. 
The generator is a DS instruments DS6000 pro PureSine signal generator, referenced to its internal 10\,MHz oscillator with a quoted accuracy of 280 parts per billion. For the measurements, the generator was set to output 5\,mW of RF power. The atoms are exposed to the RF field for an estimated few tens of $\mu$s, which leads to expected linewidths of a few tens of kHz. The atoms are then further collimated and orthogonally overlapped with the ionization lasers, which are focused into a $1\times1$\,mm$^2$ spot, 13\,cm above the RF interaction region, in the third chamber. A three-step resonant laser ionization scheme was used to ionize the scandium atoms, derived from the scheme in~\cite{raeder2013source}, shown in Fig.~\ref{fig:expt}b. The first step is provided using a $\sim$5~$\%$ pick-off from the cw laser beam used for the optical pumping stage. The other two steps were produced by pulsed Ti:Sapphire lasers (10\,kHz repetition rate), tuned to the 25014.190\,cm$^{-1} \rightarrow$ 46989.493\,cm$^{-1}$ transition or the 24866.172\,cm$^{-1} \rightarrow$ 46914.540\,cm$^{-1}$ transition, and to a broad auto-ionizing state at $\sim$58104\,cm$^{-1}$ or 58037\,cm$^{-1}$. The laser powers used for the laser ionization were approximately 0.75\,mW, 50\,mW and 500\,mW for the first, second and third steps, respectively. Prior to performing any double-resonance measurements, an estimate of the HFS constants can be obtained by scanning the frequency of the first laser step, as shown in Fig.~\ref{fig:expt}c,d. During the RF scans, the laser wavelength was kept fixed at a pumping wavelength suitable for the RF line under study, and the RF field was introduced and scanned. Fig.~\ref{fig:rf_data} shows examples of the RF lines which were obtained. The Zeeman splitting observed in wider-range scans can be used to determine the magnetic field strength.
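The quoted relation between interaction time and linewidth can be checked with a back-of-the-envelope transit-time estimate. The furnace temperature and effective RF interaction length used below are illustrative assumptions, not measured values from this work.

```python
import math

k_B = 1.380649e-23           # Boltzmann constant, J/K
m = 45 * 1.66053907e-27      # mass of a 45Sc atom, kg

T = 1800.0                   # assumed furnace temperature, K (illustrative)
L = 0.02                     # assumed effective RF interaction length, m (illustrative)

v = math.sqrt(2 * k_B * T / m)   # characteristic thermal speed of the beam, m/s
tau = L / v                      # transit time through the RF region, s
fwhm = 0.89 / tau                # approximate Fourier-limited FWHM, Hz

print(f"v ~ {v:.0f} m/s, tau ~ {tau * 1e6:.1f} us, FWHM ~ {fwhm / 1e3:.0f} kHz")
```

With these assumed numbers, the transit time comes out at a few tens of microseconds and the Fourier-limited width at a few tens of kHz, consistent with the estimate in the text.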
As the measurements presented in this work were performed over a time scale of two years, different values of this field were obtained for the different datasets: 1.03\,G for the first set of measurements, 80\,mG for a second set, and 0.83\,G for the third. The measurements with a field of 80\,mG were performed only for the $(1,0) \rightarrow (2,0)$ transition of the D$_{5/2}$ state, where the external field was partially shielded with mu-metal foils. This was done in order to evaluate possible systematic errors, since this line is more sensitive to the B-field than the others (e.g. at 1\,G the $m_F = 0 \rightarrow m_{F'}=0$ transition shifts by as much as 39\,kHz). The data from all measurements were found to be consistent, indicating the measurement protocol is reliable and the magnetic field strengths can be accurately assessed from the Zeeman splitting. \section{Analysis} Extracting accurate HFS constants requires atomic structure calculations to estimate the second-order shift ($E_F^{(2)}$) due to M1-M1, M1-E2 and E2-E2 interactions. These calculations will be discussed first. \subsection{Calculation of hyperfine constants and second-order shifts} The relativistic coupled-cluster (RCC) theory, known as the gold-standard of many-body theory~\cite{shavitt2009}, is used to evaluate $C/ \Omega$ and the matrix elements involving the second-order hyperfine interaction Hamiltonians. In this work, we expand on earlier calculations~\cite{sahoo2005} presenting $A/g_I$ (with $g_I= \mu/I$), $B/Q$ and $C/ \Omega$ with a larger set of orbitals, using up to $19s$, $19p$, $19d$, $18f$, $17g$, $16h$ and $15i$ orbitals in the singles- and doubles-excitation approximation in the RCC theory (RCCSD method). Due to limitations in computational resources, we correlate electrons up to $g$-symmetry orbitals in the singles-, doubles- and triples-excitation approximation in the RCC theory (RCCSDT method). We quote the differences in the results from the RCCSD and RCCSDT methods as `$+$Triples'.
Contributions from the Breit and lower-order quantum electrodynamics (QED) interactions are determined using the RCCSD method, and added to the final results as `$+$Breit' and `$+$QED', respectively. Contributions due to the Bohr-Weisskopf (BW) effect are estimated in the RCCSD method considering a Fermi-charge distribution within the nucleus and corrections are quoted as `$+$BW'. We also extrapolated contributions from an infinite set of basis functions and present these as `Extrapolation'. \begin{table*}[tb] \centering \small \caption{Theoretical HFS constants of the $3d4s^2 \ ^2D_{3/2,5/2}$ state. The dominant off-diagonal reduced matrix elements $T_k = \langle 3/2 ||T_e^{(k)}|| 5/2 \rangle = -\langle 5/2 ||T_e^{(k)}|| 3/2 \rangle$ required for the estimation of the second-order corrections to the hyperfine intervals are provided in the last two rows.} \label{tab:theor} \begin{tabular}{r|ccc|ccccc|rl} & Dirac-Fock & RMBPT(2) & RCCSD & + Triples & + QED & + Breit & + BW & Extrapolation & Total & \\ \hline D$_{3/2}$ &&&&&&&&&&\\ $A/\mu$ & 49.520 & 54.008 & 56.172 & 0.656 & 0.013 & 0.153 & -0.002 & 0.055 & 57.0(6)& MHz$/\mu_N$ \\ $B/Q$ & 107.037& 122.824& 126.343& -0.787& 0.000& -0.046& 0.000& 0.000& 125(2) & MHz$/$b \\ $C/ \Omega $ & 1.91 & -5.85 & -4.86 & -0.65 & -0.02 & -0.35 & ~0 & 0.09 & -5.8(3) & $10^{-2}$ kHz/($\mu_N$ b) \\ \hline D$_{5/2}$ &&&&&&&&&&\\ $A/\mu$ & 21.066 & 19.744 & 22.416 & 0.179 & 0.013 & 0.057 & 0.002 & 0.021 & 22.7(4)& MHz$/\mu_N$ \\ $B/Q$ & 151.39 & 173.84 & 176.11 & -1.30 & 0.05 & 0.09 & $\sim 0$ & -0.60 & 175(2)& MHz$/$b \\ $C/ \Omega $ & 0.78 & 2.33 & -17.09 & 0.58 &$\sim 0$& 0.80 &$\sim 0$& - 0.14 & -15.9(2) & $10^{-2}$ kHz/($\mu_N$ b) \\ \hline $T_1$ & 83.52 & 176.18 & 145.03 & 10.15 & -0.1 & 0.61 & & 0.1 & 156(6) & MHz/$\mu_N$ \\ $T_2$ & 311.27 & 355.29 & 376.17 & -11.27& 0.19 & 0.48 & & 0.02 & 366(7) & MHz/b \\ \end{tabular} \end{table*} The $A/\mu$, $B/Q$ and $C/ \Omega$ values of the $3d4s^2 \ ^2D_{3/2, 5/2}$ states are tabulated 
in Table~\ref{tab:theor}. To obtain $A$ and $B$, listed in Table~\ref{tab:hyperfine}, recommended literature values of the moments were used ($\mu = +4.75400(2)\,\mu_N$~\cite{stone2019} and $Q = -0.220(9)$\,b~\cite{stone2016}). Uncertainties are estimated from the neglected higher-level excitations of the RCC theory. The shift $E_F^{(2)}$ due to M1-M1, M1-E2 and E2-E2 interaction terms is given by \cite{sahoo2015}: \begin{eqnarray} E_F^{(2)} &=& E_F^{M1-M1} + E_F^{M1-E2} + E_F^{E2-E2} \nonumber \\ &=& \sum_{J'} \left | \left \{ \begin{matrix} F & J & I \cr 1 & I & J' \cr \end{matrix} \right \} \right |^2 \eta \nonumber \\ &+& \sum_{J'} \left \{ \begin{matrix} F & J & I \cr 1 & I & J' \cr \end{matrix} \right \} \left \{ \begin{matrix} F & J & I \cr 2 & I & J' \cr \end{matrix} \right \} \zeta \nonumber \\ &+& \sum_{J'} \left | \left \{ \begin{matrix} F & J & I \cr 2 & I & J' \cr \end{matrix} \right \} \right |^2 \epsilon \label{eq:seqorder} \end{eqnarray} where \begin{eqnarray} \eta &=&\frac{(I+1)(2I+1)}{I} \mu^2 \frac{|\langle J'||T_e^{(1)}||J \rangle|^2}{E_J -E_{J'}} , \nonumber \\ \zeta &=&\frac{(I+1)(2I+1)}{I} \sqrt{\frac{2I+3}{2I-1}} \mu Q \frac{\langle J'||T_e^{(1)}||J \rangle \langle J'||T_e^{(2)}||J \rangle}{E_J -E_{J'}} \nonumber \\ && \text{and} \nonumber \\ \epsilon &=&\frac{(I+1)(2I+1)(2I+3)}{I(2I-1)} Q^2 \frac{|\langle J'||T_e^{(2)}||J \rangle|^2}{E_J -E_{J'}} . \nonumber \end{eqnarray} In these expressions, ${\bf T}_e^{(k)}$ is the spherical tensor operator of rank $k>0$ in the electronic coordinates. We quote numerical values for these second-order matrix elements in Table~\ref{tab:theor}. We only consider the dominant contributing matrix elements between the $3d4s^2$ $^2D_{5/2}$ state and the $3d4s^2$ $^2D_{3/2}$ state.
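The angular factors in Eq.~\eqref{eq:seqorder} are ordinary $6j$ symbols and can be evaluated with a symbolic library. The sketch below (using sympy, a choice of convenience) takes $\eta$, $\zeta$ and $\epsilon$ as precomputed inputs carrying the moments, matrix elements and energy denominator.

```python
from sympy import S
from sympy.physics.wigner import wigner_6j

def six_j(a, b, c, d, e, f):
    """{a b c; d e f}, returning 0 when a triangle condition fails."""
    try:
        return float(wigner_6j(a, b, c, d, e, f))
    except ValueError:
        return 0.0  # triangle rule violated -> the 6j symbol vanishes

def second_order_shift(F, I, J, Jp, eta, zeta, eps):
    """Second-order shift of the |F> level from mixing with the level J',
    following Eq. (eq:seqorder) for a single intermediate J'."""
    w1 = six_j(F, J, I, 1, I, Jp)
    w2 = six_j(F, J, I, 2, I, Jp)
    return w1**2 * eta + w1 * w2 * zeta + w2**2 * eps
```

For $^{45}$Sc only the mixing between the two $D_J$ states matters, so the sum over $J'$ reduces to a single call with $J'=3/2$ (or $5/2$).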
Intermediate results from the zeroth-order calculation using the Dirac-Fock method and the second-order relativistic many-body perturbation theory (RMBPT(2) method) are presented to demonstrate the propagation of electron correlation effects from lower to all-order RCC methods. \subsection{Analysis of hyperfine resonances} The data are processed and analysed as follows. For each value of the RF frequency, the number of ion counts is recorded for a time interval of typically one second, once with the output of the RF generator on, and once with the output of the generator off. By repeating this procedure for the desired range of frequencies, a spectrum is obtained by taking the ratio of the two measured counts. If needed, a rebinning of the data is performed in order to improve the signal-to-noise ratio. All spectra obtained in this way are then fitted by explicitly diagonalizing the following Hamiltonian: \begin{align} \mathcal{H} = \mathcal{H}_{\text{hyp}} + B_0 \cdot(g_J \mu_B J_z + g \mu_N I_z),\label{eq:fullH} \end{align} with $\mathcal{H}_{\text{hyp}}$ given in Eq.~\eqref{eq:hfs} and $B_0$ the external magnetic field, and then correcting these eigenvalues using the expressions for the second-order shift given in Eq.~\eqref{eq:seqorder}. Resonance locations can then be calculated as differences of these eigenvalues for $\Delta F = \pm1$, $\Delta m_F = 0, \pm 1$. The best-fitting values of the hyperfine constants $A$, $B$ and $C$ are found by comparing the calculated resonance locations with those observed in the experimental data using least-squares minimization. Additional free parameters in the fit are the value of the magnetic field $B_0$ and the heights of the resonances predicted by the above procedure, all of which are allowed to vary from one spectrum to the next in order to obtain the best goodness-of-fit. Note that the influence of the nuclear $g$-factor on the total Zeeman splitting is negligible, but the effect was included explicitly for completeness.
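The diagonalization itself is simple to reproduce. A minimal illustration in the uncoupled $|m_I, m_J\rangle$ basis is given below, with dipole and quadrupole terms only (the octupole term and the small nuclear Zeeman term enter the same way); the numerical inputs in the example are the $^{45}$Sc D$_{3/2}$ constants, while the $g_J$ value is a placeholder.

```python
import numpy as np

def ang_mom(j):
    """Matrices (jz, j+, j-) for angular momentum j, basis |j, m>, m = j..-j."""
    m = np.arange(j, -j - 1, -1)
    jz = np.diag(m)
    jp = np.diag(np.sqrt(j * (j + 1) - m[1:] * (m[1:] + 1)), 1)  # raising operator
    return jz, jp, jp.T

def zeeman_hfs_hamiltonian(I, J, A, B, gJ, B0):
    """A I.J + quadrupole term + electronic Zeeman term, in MHz,
    in the product basis |m_I> x |m_J>.  B0 in gauss."""
    iz, ip, im = ang_mom(I)
    jz, jp, jm = ang_mom(J)
    dim = iz.shape[0] * jz.shape[0]
    IdotJ = np.kron(iz, jz) + 0.5 * (np.kron(ip, jm) + np.kron(im, jp))
    H = A * IdotJ
    if I > 0.5 and J > 0.5:
        N = I * (I + 1) * J * (J + 1)
        H = H + B * (3 * IdotJ @ IdotJ + 1.5 * IdotJ - N * np.eye(dim)) \
                / (2 * I * (2 * I - 1) * J * (2 * J - 1))
    mu_B = 1.3996245  # Bohr magneton, MHz/G
    H = H + gJ * mu_B * B0 * np.kron(np.eye(iz.shape[0]), jz)
    return H

# Example: 45Sc D3/2 manifold (I = 7/2, J = 3/2) at zero field;
# gJ = 0.8 is an illustrative placeholder, not the measured value.
evals = np.linalg.eigvalsh(zeeman_hfs_hamiltonian(3.5, 1.5, 269.558, -26.353, 0.8, 0.0))
```

At $B_0=0$ the eigenvalues collapse into the $F=2\ldots5$ multiplets with degeneracies $2F+1$; a finite field lifts the $m_F$ degeneracy and yields the resonance positions used in the fit.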
The hyperfine constants of $^{45}$Sc, with and without use of second-order shifts, are given in Table~\ref{tab:hyperfine}, and the values extracted for the octupole moment are shown in Fig.~\ref{fig:comparison}. The systematic uncertainty due to the atomic calculations is given in square brackets. This uncertainty was estimated by the change in hyperfine constants obtained by varying the values of $T_1$ and $T_2$ within the theoretical error bar. Our results for $A,B$ and $C$ agree well with literature~\cite{Childs1971}, and are at least an order of magnitude more precise. \section{Results and interpretation} \begin{table*}[ht!] \centering \small \caption{Experimental and theoretical HFS constants and $\Omega$ values, without and with the second-order corrections. HFS constants beyond the octupole term were found to be zero within errors, and were thus not included in the fit.} \label{tab:hyperfine} \begin{tabular}{r|r|c|cc|cc} & & Theory & \multicolumn{2}{c|}{ Expt. Ref.~\cite{Childs1971}} & \multicolumn{2}{c}{Expt. this work} \\ & & This work & Uncorrected & Corrected & Uncorrected & Corrected \\ \hline D$_{3/2}$& A [MHz] & 271(3) & 269.556(1) & 269.558(1) & 269.55817(5) & 269.55844(7)[3] \\ & B [MHz] & -27.5(5) & -26.346(4) & -26.360(8) & -26.3531(9) & -26.3596(5)[5] \\ & C [kHz] & & -- & -- & -0.010(22) & 0.039(28)[2] \\ & $\Omega\,[\mu_N$b]& & -- & -- & 0.17(38) & -0.68(49)[6] \\ \hline D$_{5/2}$ & A [MHz] & 108(2) & 109.032(1) & 109.033(1) & 109.03275(7) & 109.03297(5)[3] \\ & B [MHz] & -38.5(5) & -37.387(12) & -37.373(15) & -37.3954(12) & -37.3745(8)[15] \\ & C [kHz] & & 1.7(10) & 1.5(12) & 0.31(8) & -0.062(59)[17] \\ & $\Omega\,[\mu_N$b]& & -10.7(63) & -9.4(75) & -1.92(51) & 0.39(37)[11] \end{tabular} \end{table*} \begin{table}[ht!] \centering \small \caption{Experimental and theoretical values of $\Omega$. 
The experimental value obtained in this work is the dispersion-corrected weighted mean of the values for the two $D_J$ states, where the total (statistical $+$ systematic) uncertainty was used in the weighting and to compute the combined uncertainty.} \label{tab:omega} \begin{tabular}{c|r|c} & & $\Omega$ [$\mu_N$b] \\ \hline Expt. & Literature \cite{Childs1971} & -9.4(75) \\ & This work & -0.07(53) \\ \hline Theory & Schwartz $g_s=1/0.6$ & 0.65 / 0.46 \\ & SM $g_s=1/0.6$ & 0.45(4) / 0.32(4) \\ & DFT & 0.245(17) \\ \end{tabular} \end{table} Since scandium has a single proton outside of the magic shell of $Z=20$, a single-particle shell-model estimate for $\Omega$~\cite{Schwartz1955} would be expected to be fairly good. We find $\Omega_{\text{sm}} = 0.46\,\mu_N$\,b, using $\left\langle r^2 \right\rangle^{1/2} = 4.139$\,fm as the radius of the $f_{7/2}$ orbit (obtained from the DFT calculations discussed later). This value is in good agreement with the experimental values. As a step towards a more complete understanding of $\Omega$ for $^{45}$Sc, and of this observable in general, we examine it in more detail using more realistic nuclear models. \subsection{Nuclear shell-model} Shell-model calculations were performed using different interactions in a $(sd)pf$-shell model space~\cite{PhysRevC.65.061301, EurPhysJA.25.499, PhysRevLett.66.1134, POVES2001157}. The values of $\Omega$ are calculated with the nuclear shell-model code KSHELL~\cite{SHIMIZU2019372}.
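The dispersion-corrected weighted mean quoted in the caption of Table~\ref{tab:omega} can be reproduced from the per-state results of Table~\ref{tab:hyperfine}. The sketch below uses the rounded corrected $\Omega$ values, so the output agrees with the quoted $-0.07(53)$ only to within the rounding of the inputs.

```python
import math

def dispersion_corrected_mean(values, errors):
    """Inverse-variance weighted mean; the uncertainty is inflated by
    sqrt(chi2/dof) when the scatter exceeds the quoted uncertainties."""
    w = [1.0 / e**2 for e in errors]
    mean = sum(wi * v for wi, v in zip(w, values)) / sum(w)
    err = 1.0 / math.sqrt(sum(w))
    chi2 = sum(wi * (v - mean)**2 for wi, v in zip(w, values))
    dof = len(values) - 1
    if dof > 0 and chi2 > dof:
        err *= math.sqrt(chi2 / dof)
    return mean, err

# Corrected Omega values for the D3/2 and D5/2 states (Table tab:hyperfine),
# with statistical and systematic uncertainties combined in quadrature.
omega, sigma = dispersion_corrected_mean(
    [-0.68, 0.39],
    [math.hypot(0.49, 0.06), math.hypot(0.37, 0.11)])
```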
The expression for $\Omega$ is \begin{eqnarray*} \Omega=-M_{3}&=& -\sqrt{\frac{4\pi}{7}}\left( \begin{array}{ccc} J & 3 & J \\ -J & 0 & J \\ \end{array} \right) \nonumber \\ &&\times(g^{(l)}_{p}l_{p} + g^{(l)}_{n}l_{n} + g^{(s)}_{p}s_{p} + g^{(s)}_{n}s_{n}) \nonumber \end{eqnarray*} where $l_{p(n)}$ and $s_{p(n)}$ are the proton (neutron) orbital angular momentum and spin nuclear matrix elements, respectively, and $g^{(l)}_{p(n)}$ and $g^{(s)}_{p(n)}$ are the corresponding proton (neutron) $g$-factors. The structure of $^{45}$Sc is calculated using seven Hamiltonians, GXPF1~\cite{PhysRevC.65.061301}, GXPF1A~\cite{EurPhysJA.25.499}, KB3~\cite{PhysRevLett.66.1134}, and KB3G~\cite{POVES2001157} for the $pf$-shell model space, and SDPF-M~\cite{PhysRevC.60.054315}, SDPF-MU~\cite{PhysRevC.86.051301}, and SDPFUSI~\cite{PhysRevC.79.014310} for the $sdpf$-shell model space. The $\Omega$ for $^{45}$Sc is dominated by the proton contribution, with the angular momentum and spin contributions having the same sign. We obtain values in the range 0.41--0.49\,$\mu_N$\,b with free $g$-factors and 0.28--0.35\,$\mu_N$\,b with a spin-quenching factor of 0.6 for the different shell-model calculations. The inclusion of cross-shell excitations from the $sd$-shell to the $pf$-shell enhances the correlations beyond the single $f_{7/2}$ proton configuration, which results in small increases in $\Omega$. \subsection{Nuclear Density Functional Theory} We determined values of $\mu$, $Q$, and $\Omega$ for oblate states in $^{45}$Sc. We used constrained intrinsic mass quadrupole moments $Q_{20}=\langle2z^2-x^2-y^2\rangle$ varying between $-$1\,b and 0, with points at $-$1\,b marked by stars, see Fig.~\ref{fig:M1-M3}. The obtained unpaired mean-field solutions were projected on the $I=7/2^-$ ground-state angular momentum. Proton and neutron configurations were fixed at $\pi3^1$ and $\nu3^4$, where $3^n$ represents the $n$ lowest occupied oblate orbitals in the $\ell=3$ $f_{7/2}$ shell.
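Given the matrix elements, the $\Omega$ expression above reduces to a $3j$ symbol and a linear combination of $g$-factors. A sketch of that combination is shown below; the matrix-element inputs are outputs of the KSHELL calculation and are treated here as purely illustrative placeholders.

```python
from sympy import sqrt, pi, S
from sympy.physics.wigner import wigner_3j

def omega_from_matrix_elements(J, lp, ln, sp, sn, quench=1.0):
    """Omega = -M3 from proton/neutron orbital and spin matrix elements
    (lp, ln, sp, sn, in mu_N b), following the expression above.
    Free-nucleon g-factors, with an optional spin-quenching factor."""
    g_lp, g_ln = 1.0, 0.0                       # orbital g-factors
    g_sp, g_sn = 5.586 * quench, -3.826 * quench  # spin g-factors
    pref = -float(sqrt(4 * pi / 7) * wigner_3j(J, 3, J, -J, 0, J))
    return pref * (g_lp * lp + g_ln * ln + g_sp * sp + g_sn * sn)
```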
No effective charges or effective $g$-factors were used. Results of the DFT calculations were obtained using the code {\sc hfodd} (version 2.95j) \cite{(Dob21f)}. To represent single-particle wave functions, we used a basis of $N_0=14$ spherical harmonic oscillator shells. We ran the code in the mode of conserved parity, along with broken simplex and broken time-reversal symmetries. We used an infinitesimal angular frequency of $\hbar\omega=1$\,keV aligned along the $z$ direction. Simultaneously, the nucleus was oriented in space so that the axial-symmetry axis was also aligned along the $z$ direction. This allowed for splitting the single-particle energies according to the projections $K$ of their angular momenta on the symmetry axis, without affecting their wave functions. At the same time, all single-particle wave functions acquired good $K$ quantum numbers. To stabilize the convergence, during the self-consistent iterations the total wave functions were additionally projected on the axial symmetry \cite{(Dob21f)}. Occupied single-particle wave functions were fixed by distributing the neutrons and protons according to the partitions of numbers of occupied states in individual blocks of given $K$ \cite{(Dob21f)}. This defined the specific intrinsic configurations $\pi3^1$ and $\nu3^4$ in $^{45}$Sc. We note here that the configurations fixed for $^{45}$Sc pertain to deformed orbitals; therefore, they represent much richer correlations than the spherical $f_{7/2}$ configurations usually defined in the context of the shell model. Calculations were performed for eight zero-range Skyrme-type functionals, UNEDF0~\protect\cite{(Kor10c)}, UNEDF1~\protect\cite{(Kor12b)}, SkXc~\protect\cite{(Bro98)}, SIII~\protect\cite{(Bei75b)}, SkM*~\protect\cite{(Bar82c)}, SLy4~\protect\cite{(Cha98a)}, SAMi~\protect\cite{(Roc12b)}, and SkO$^\prime$~\protect\cite{(Rei99)}, and for two finite-range functionals, D1S~\protect\cite{(Ber91c)} and N$^3$LO REG6d.190617~\protect\cite{(Ben20)}.
The goal of trying several different variants of functionals was to estimate the order of magnitude and spread of the results. For all functionals, the experimental value of the electric quadrupole moment of $Q=-0.216(9)$\,b was reached near $Q_{20}=-1$\,b. The calculated values of $\mu$ and $\Omega$ strongly depend on several input ingredients of the calculation. First, even at $Q_{20}=0$ these values lie far from the Schmidt~\cite{(Sch37)} and Schwartz~\cite{Schwartz1955} single-particle estimates. This can be attributed to a strong quadrupole coupling to the occupied neutron $f_{7/2}$ orbitals, which decreases both $\mu$ and $\Omega$. Second, the spin polarization, which is active when the Landau spin-spin terms are included, also significantly decreases $\mu$ and $\Omega$. Following Ref.~\cite{(Ben02d)}, we parametrized the spin-spin terms by the standard isoscalar and isovector Landau parameters $g_0=0.4$ and $g'_0=1.2$, respectively. The value of $g'_0$ was confirmed in global adjustments performed in Ref.~\cite{(Sas21)}, which gave $g'_0=1.0(4)$, 1.3(4), and 1.7(4) for the functionals SkO$^\prime$, SLy4, and UNEDF1, respectively. Third, with increasing intrinsic oblate deformation, both $\mu$ and $\Omega$ increase. The latter effect can be removed by pinning down the intrinsic deformation to the experimental value of $Q$, see stars in Fig.~\ref{fig:M1-M3}. The shaded area in Fig.~\ref{fig:M1-M3} covers the range of results given by all starred points, and thus represents a very rough estimate of the averages and rms deviations of the DFT results: $\mu_{\text{DFT}}=+4.74(6)$\,$\mu_N$ and $\Omega_{\text{DFT}}=+0.245(17)$\,$\mu_N$\,b. \begin{figure}[tb] \begin{center} \includegraphics[width=\columnwidth]{sc045-f07-HFX-kann-AMP-N14-s04-UUUU_295j_out-5d.pdf} \caption{Values of $\mu$ and $\Omega$ of the $I=7/2^-$ angular-momentum-projected ground states of $^{45}$Sc.
Panels (a) and (b) show results obtained with Skyrme functionals supplemented by the Landau spin-spin terms and with no spin-spin terms, respectively. Arrows mark the experimental value of $\mu$ and visualize the experimental error bars of $\Omega=-0.07(53)$, which are outside the scale of the figure. \label{fig:M1-M3} } \end{center} \end{figure} \subsection{Interpretation} We summarize our experimental and theoretical results in Table~\ref{tab:omega} and graphically in Fig.~\ref{fig:comparison}. The inclusion of the second-order shifts brings the extracted value of $\Omega$ obtained for the two different $D_J$ states into reasonable agreement, providing a measure of confidence that these second-order shifts and the values of $C/\Omega$ are calculated accurately. The final value, obtained as the dispersion-corrected weighted mean of the two values, is also shown on the figure, alongside the theoretical values, which are shown as shaded bands. The final experimental value agrees well with all theory values. It is interesting to note, however, that the large-scale shell model and DFT calculations yield smaller values of $\Omega$ than the single-particle Schwartz estimate, bringing these more refined models into closer agreement with experiment. A reduction of the experimental error bar by at least one order of magnitude would be required to provide a more stringent test of the different theoretical approaches. \begin{figure}[tb] \begin{center} \includegraphics[width=\columnwidth]{theory_comparison.pdf} \caption{Graphical comparison of experimental values of $\Omega$, with and without second-order corrections, and the theoretical predictions. The coloured bands indicate the theoretical uncertainties. \label{fig:comparison} } \end{center} \end{figure} \section{Conclusion} We have measured the magnetic octupole moment $\Omega$ in $^{45}$Sc, using a high-precision experimental technique and state-of-the-art atomic calculations.
Our shell-model and DFT calculations (with no parameter adjustments) reproduce the value of $\Omega$, that of $Q$ to within about 10\%, and that of $\mu$ to within 3\%. Further work is required to improve the experimental precision in order to stringently test nuclear theory. An increase in precision of about a factor of 10 would likely be required to do so, which is out of reach of our current experimental apparatus. A longer RF-interaction region and finer control of the external magnetic field strength would be required. Future experimental work on extending the measurements to other elements, and also to radioactive isotopes, would be very beneficial. This experimental effort should be matched by accurate atomic structure and nuclear structure calculations. As illustrated in this work, atomic and nuclear theory are capable of producing results with sufficient accuracy for such future programs. As a next experimental step, we are currently designing and constructing a collinear RIS laser-RF apparatus which we will use to perform measurements on radioactive isotopes. Candidates for future studies on radioactive isotopes include In and Bi, both having a single proton (hole) outside (inside) a closed shell and featuring comparatively large values of the hyperfine $C$-constant~\cite{Kusch1957,Landman1970}. \section{Acknowledgements} RPDG received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sk{\l}odowska-Curie grant agreement No 844829. BKS acknowledges use of Vikram-100 HPC cluster of Physical Research Laboratory, Ahmedabad for atomic calculations. CY acknowledges support of National Natural Science Foundation of China (11775316). This work was supported in part by STFC Grant numbers ST/M006433/1 and ST/P003885/1, and by the Polish National Science Centre under Contract No.\ 2018/31/B/ST2/02220. We acknowledge the CSC-IT Center for Science Ltd., Finland, for the allocation of computational resources.
Fruitful discussions with W. Gins and \'A. Koszor\'us are gratefully acknowledged.
Source: https://www.physicsforums.com/threads/describing-singularities.569529/

# Homework Help: Describing Singularities

1. Jan 21, 2012

### Ted123

1. The problem statement, all variables and given/known data

3. The attempt at a solution

Both $\displaystyle \frac{\cos(z)-1}{z^2}$ and $\displaystyle \frac{\sinh(z)}{z^2}$ have 1 singular point at $z=0$.

For (a):

$z=0$ is a removable singularity since defining $f(0)=1$ makes it analytic at all $z\in\mathbb{C}$.

$z=0$ is isolated since $f(z)$ is analytic for $0<|z|<1$. But $z=0$ is not a pole since $\cos(0)-1=0$, and so $z=0$ is an essential singularity.

For (b):

$z=0$ is a removable singularity since defining $f(0)=1$ makes it analytic at all $z\in\mathbb{C}$.

$z=0$ is isolated since $f(z)$ is analytic for $0<|z|<1$. But $z=0$ is not a pole since $\sinh(0)=0$, and so $z=0$ is an essential singularity.

Is this correct?

2. Jan 21, 2012

### Dick

Parts of it might be true. You said basically the same thing about both functions and you didn't prove anything you said. Give some arguments. If $f(z)=\sinh(z)/z^2$, why does defining $f(0)=1$ make it analytic on $\mathbb{C}$?

3. Jan 21, 2012

### Ted123

Probably because I'm not understanding the definitions correctly!

These are my set of definitions:

I think for both (a) and (b), $z=0$ is an isolated singularity but not a pole, so an essential singularity. But they probably aren't removable.

4. Jan 21, 2012

### Dick

The definitions will be clearer to you if you look at a power series expansion of each function around $z=0$.

5. Jan 21, 2012

### Ted123

I don't like how some of these definitions are given, so if I use this definition of pole:

Clearly $z_0=0$ is an isolated singularity since it is the only singularity for both (a) and (b).

(a) $\displaystyle \lim_{z\to 0} \;(z-0)^N f(z) = \lim_{z\to 0} \; z^{N-2} (\cos(z)-1) = 0 \;\; \forall \;N>0$ so $z_0=0$ is not a pole. Hence it is an essential singularity.

(b) If $N=1$ then $\displaystyle \lim_{z\to 0} \;(z-0) f(z) = \lim_{z\to 0} \frac{\sinh(z)}{z} = 1 \neq 0$ so $z_0=0$ is a simple pole (of order 1). What would be the strength of the pole? It is not an essential singularity.

I'm not understanding how to see if 0 is a removable singularity in each case?

6. Jan 21, 2012

### Dick

You know how to expand $\cos(z)$ and $\sinh(z)$ in a power series around $z=0$. Put those expansions into the two functions and simplify. See what you think. Then look back at the definitions.
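The power-series check suggested in the last reply is quick to carry out with a computer algebra system; the sketch below (sympy is our choice of tool, not the thread's) expands both functions about $z=0$ and shows a removable singularity with limiting value $-1/2$ in case (a) and a simple pole in case (b).

```python
import sympy as sp

z = sp.symbols('z')
f = (sp.cos(z) - 1) / z**2     # function (a)
g = sp.sinh(z) / z**2          # function (b)

# Laurent expansions about z = 0
print(sp.series(f, z, 0, 6))   # -1/2 + z**2/24 - ... : removable singularity
print(sp.series(g, z, 0, 6))   # 1/z + z/6 + ...      : simple pole at z = 0
```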
Biography A graduate in Law of the Sapienza University of Rome, he has been a Councillor of State since 1974 and a section president since 1997. He was president of the TAR of Lazio from 2008 to 2012 and deputy president of the Council of State from March 2012 to January 2013, and he has held positions in various ministries: head of cabinet of the Ministry of the Treasury and of the Ministry of State Holdings, and head of cabinet of the President of the Council of Ministers. On 28 January 2013 he officially took office as president of the Council of State, succeeding Giancarlo Coraggio, who had been elected a constitutional judge; he left office on 31 December 2015 and was succeeded by Alessandro Pajno. Honours Notes Related entries Council of State (Italy) Other projects External links Councillors of State (Italy) Knights Grand Cross of the OMRI
5 London acid attacks in 90 mins: Teenager arrested London Acid attacks – Barbaric says Met commissioner A teenager has been arrested after acid was thrown in people's faces in five attacks over one night in London. Two moped riders attacked the victims over 90 minutes in Islington, Stoke Newington and Hackney on Thursday. Police said the attackers pulled up to five people and doused them with corrosive liquid in five separate attacks between 10.25pm and 11.37pm. The acid attacks were carried out by two male suspects and involved the theft of two mopeds, police said The attacks began when a 32-year-old moped rider was approached by the two male suspects as he rode on the Hackney Road junction with Queensbridge Road. The pair tossed the noxious substance into his face before one of them jumped on to his vehicle and drove away. Police said the man had gone to an east London hospital and they were awaiting an update on his injuries. An eyewitness said he heard a victim, who he believed was a delivery driver, "screaming in pain". One victim suffered "life-changing injuries". Little more than 20 minutes later, at about 10.50pm, the pair sprayed another victim with searing liquid on the Upper Street junction with Highbury Corner in Islington. The victim was taken to hospital in north London. At about 11.05pm, a further victim was attacked by two men on a moped on Shoreditch High Street, having liquid thrown in his face. His injuries were not life-threatening, police said. Within 15 minutes the attackers appear to have struck again, launching the corrosive substance at a man on Cazenove Road, causing "life-changing" injuries. The final assault of the night was reported to police at 11.37pm, when another man was confronted as he sat on his moped in traffic on Chatsworth Road. After again spraying the liquid in a victim's face, the moped was stolen and both attackers fled. 
Met Commissioner Cressida Dick said the growing trend of victims being doused with corrosive liquids was concerning. Ms Dick told LBC Thursday night's attacks were "completely barbaric". "The acid can cause horrendous injuries," she said. "The ones last night involved a series of robberies we believe are linked – I am glad to see we have arrested somebody." Assaults involving corrosive substances have more than doubled in England since 2012, with the number of acid attacks in the capital showing the most dramatic rise in recent years. The Met's own figures show there were 261 acid attacks in 2015, rising to 458 last year. So far this year – excluding Thursday night – the Met has recorded 119 such attacks.
Pointe LNG, a newly established company, has sought a permit from the United States Federal Energy Regulatory Commission to start a pre-filing process for an LNG export plant in Louisiana. The project was initially started by Louisiana LNG Energy, and Pointe LNG notes that the site showed no significant environmental impacts during the FERC process in the previous filing. According to the Pointe LNG website, the facility on the east bank of the Mississippi River at mile marker 46 will have the capacity to export 6 mtpa of the chilled fuel. Construction would be carried out in a modular manner, with units assembled on-site, the company said. The facility will also include two LNG storage tanks of 140,000 cbm each. The company also intends to build and operate two pipelines that would feed natural gas to the liquefaction facility. The two pipelines would connect to the Tennessee Gas Pipeline and the High Point Gas Transmission system, 2.4 and 2 miles away from the plant, respectively. Speaking to Platts, Jim Lindsay, co-founder of Pointe LNG, said the project has already secured non-binding letters of intent with customers in Asia for 2 mtpa of the project's capacity. He added the company is in talks not only to secure customers but also investors, in order to raise the $3.2 billion that would cover the estimated project costs. The project is expected to start production in the fourth quarter of 2025. Posted on September 18, 2018 with tags FERC, Pointe LNG.
const { expect } = require('chai')
const chai = require('chai')
const chaiProperties = require('chai-properties')
const chaiThings = require('chai-things')
const db = require('APP/db')
const app = require('./start')
const supertest = require('supertest-as-promised')

const Review = db.model('review')

chai.use(chaiProperties)
chai.use(chaiThings)

describe('Backend', () => {
  before('Await database sync', () => db.didSync)
  afterEach('Clear the tables', () => db.truncate({ cascade: true }))

  let agent
  beforeEach('Set up agent for testing', () => {
    agent = supertest(app)
  })

  describe('routes', () => {
    let firePowerID
    let terriblePowerID

    beforeEach('Seed Reviews', () => {
      const reviews = [
        { title: 'Fire Power', text: "Throw fire from person's palms", stars: 5 },
        { title: 'Ice Power', text: "Throw's ice from person's palms", stars: 1 }
      ]
      return Review.bulkCreate(reviews, { returning: true })
        .then(createdReviews => {
          firePowerID = createdReviews[0].id
          terriblePowerID = createdReviews[1].id
        })
    })

    describe('for reviews', () => {
      it('serves up all reviews on request to GET /', () =>
        agent
          .get('/api/reviews')
          .expect(201)
          .then(res => {
            expect(res.body).to.be.an('array')
            expect(res.body.length).to.be.equal(2)
            expect(res.body).to.contain.a.thing.with('id', firePowerID)
            expect(res.body).to.contain.a.thing.with('id', terriblePowerID)
          }))

      it('updates a review at PUT /{{reviewId}}, sending a 201 response', () =>
        agent
          .put(`/api/reviews/${firePowerID}`)
          .send({ text: 'This spits fire, and does other stuff' })
          .expect(201)
          .then(() => Review.findById(firePowerID))
          .then(review => {
            expect(review.text).to.be.equal('This spits fire, and does other stuff')
          }))

      it('creates a new Review via a POST request', () =>
        agent
          .post('/api/reviews')
          .send({
            title: 'Throw Rocks Power',
            text: 'Individual is able to throw rocks at enemies',
            stars: 3
          })
          .expect(201)
          .then(res => Review.findById(res.body.id))
          .then(foundReview => {
            expect(foundReview.title).to.be.equal('Throw Rocks Power')
            expect(foundReview.text).to.be.equal('Individual is able to throw rocks at enemies')
            expect(foundReview.stars).to.be.equal(3)
          }))
    })
  })
})
\section{Introduction} Both the atmospheric and oceanic flows are influenced by the rotation of the earth. In fact, the fast rotation and small aspect ratio are two main characteristics of the large scale atmospheric and oceanic flows. The small aspect ratio characteristic leads to the primitive equations, and the fast rotation leads to the quasi-geostrophic equations (cf. [2], [6], [7], [9]). A main objective in climate dynamics and in geophysical fluid dynamics is to understand and predict the periodic, quasi-periodic, aperiodic, and fully turbulent characteristics of the large scale atmospheric and oceanic flows (e.g., cf. [4], [5]). The Boussinesq system for incompressible fluid flows in $\mathbb{R}^2$ is $$u_t+uu_x+vu_y-\nu\Delta u=-p_x,\qquad v_t+uv_x+vv_y-\nu\Delta v-\theta=-p_y,\eqno(1.1)$$ $$\theta_t+u\theta_x+v\theta_y-\kappa \Delta\theta=0,\qquad u_x+v_y=0,\eqno(1.2)$$ where $(u,v)$ is the velocity vector field, $p$ is the scalar pressure, $\theta$ is the scalar temperature, $\nu\geq 0$ is the viscosity and $\kappa\geq 0$ is the thermal diffusivity. The above system is a simple model in atmospheric sciences (e.g., cf. [8]). Chae [1] proved the global regularity, and Hou and Li [3] obtained the well-posedness of the above system. Another slightly simplified version of the system of primitive equations is the three-dimensional stratified rotating Boussinesq system (e.g., cf. [7], [9]): $$u_t+uu_x+vu_y+wu_z-\frac{1}{R_0}v=\sigma(\Delta u-p_x),\eqno(1.3)$$ $$v_t+uv_x+vv_y+wv_z+\frac{1}{R_0}u=\sigma(\Delta v-p_y),\eqno(1.4)$$ $$w_t+uw_x+vw_y+ww_z-\sigma R T=\sigma(\Delta w-p_z),\eqno(1.5)$$ $$T_t+uT_x+vT_y+wT_z=\Delta T+w,\eqno(1.6)$$ $$ u_x+v_y+w_z=0,\eqno(1.7)$$ where $(u,v,w)$ is the velocity vector field, $T$ is the temperature function, $p$ is the pressure function, $\sigma$ is the Prandtl number, $R$ is the thermal Rayleigh number and $R_0$ is the Rossby number.
Moreover, the vector $(1/R_0)(-v,u,0)$ represents the Coriolis force and the term $w$ in (1.6) is derived using stratification. So the above equations are extensions of the Navier-Stokes equations obtained by adding the Coriolis force and the stratified temperature equation. Due to the Coriolis force, the two-dimensional system (1.1) and (1.2) is not a special case of the above three-dimensional system. Hsia, Ma and Wang [4] studied the bifurcation and periodic solutions of the above system (1.3)-(1.7). In [10], we used the stable range of the nonlinear term to solve the equation of nonstationary transonic gas flow. Moreover, we [11] solved the three-dimensional Navier-Stokes equations by asymmetric techniques and moving frames. Based on the algebraic characteristics of the above equations, we use in this paper asymmetric ideas and moving frames to solve the above two Boussinesq systems of partial differential equations. New families of explicit exact solutions with multiple parameter functions are obtained. Many of them are periodic, quasi-periodic or aperiodic solutions that may have practical significance. Using Fourier expansion and some of our solutions, one can obtain discontinuous solutions. The symmetry transformations for these equations are used to simplify our arguments. For convenience, we always assume that all the involved partial derivatives of the related functions exist and that the order of taking partial derivatives can be interchanged. The parameter functions are so chosen that the involved expressions make sense. We also use prime $'$ to denote the derivative of any one-variable function.
Observe that the two-dimensional Boussinesq system (1.1) and (1.2) is invariant under the action of the following symmetry transformation: $${\cal T}(u)=a^{-1}\epsilon_1u(a^2(t+b),a\epsilon_1(x+\alpha),a\epsilon_2(y+\beta))-\alpha',\eqno(1.8)$$ $${\cal T}(v)=a^{-1}\epsilon_2v(a^2(t+b),a\epsilon_1(x+\alpha),a\epsilon_2(y+\beta))-\beta',\eqno(1.9)$$ $${\cal T}(p)=a^{-2}p(a^2(t+b),a\epsilon_1(x+\alpha),a\epsilon_2(y+\beta))+{\alpha'}'x+{\beta'}'y+\gamma,\eqno(1.10)$$ $${\cal T}(\theta)=a^{-3}\epsilon_2\theta(a^2(t+b),a\epsilon_1(x+\alpha),a\epsilon_2(y+\beta)),\eqno(1.11)$$ where $a,b\in\mathbb{R}$ with $a\neq 0$, $\epsilon_1,\epsilon_2\in\{1,-1\}$ and $\alpha,\beta,\gamma$ are arbitrary functions of $t$. The above transformation transforms a solution of the equations (1.1) and (1.2) into another solution with three additional parameter functions. Denote $\vec x=(x,y)$. The three-dimensional stratified rotating Boussinesq system is invariant under the following transformations: $${\cal T}_1[(u,v,w)]=((u(t+b,\vec x A,\epsilon z),v(t+b,\vec x A,\epsilon z))A,\epsilon w),\eqno(1.12)$$ $${\cal T}_1(p)=p(t+b,\vec x A,\epsilon z),\qquad {\cal T}_1(T)=T(t+b,\vec x A,\epsilon z);\eqno(1.13)$$ $${\cal T}_2(u)=u(t,x+\alpha,y+\beta,z+\gamma)-\alpha',\qquad {\cal T}_2(v)=v(t,x+\alpha,y+\beta,z+\gamma)-\beta',\eqno(1.14)$$ $${\cal T}_2(w)=w(t,x+\alpha,y+\beta,z+\gamma)-\gamma',\qquad {\cal T}_2(T)=T(t,x+\alpha,y+\beta,z+\gamma)-\gamma,\eqno(1.15)$$ $${\cal T}_2(p)=p(t,x+\alpha,y+\beta,z+\gamma)+\sigma^{-1}({\alpha'}'x+{\beta'}'y+{\gamma'}'z)-R\gamma z+\mu;\eqno(1.16)$$ where $\epsilon=\pm 1$, $b\in\mathbb{R}$, $A\in O(2,\mathbb{R})$, and $\alpha,\beta,\gamma,\mu$ are arbitrary functions of $t$. The above transformations transform a solution of the equations (1.3)-(1.7) into another solution. In particular, applying the transformation ${\cal T}_2$ to any solution in this paper yields another solution with four extra parameter functions.
To simplify problems, we always solve the Boussinesq systems modulo the above corresponding symmetry transformations, which is an idea that geometers and topologists often use. The paper is organized as follows. In Section 2, we solve the two-dimensional Boussinesq equations (1.1)-(1.2) and obtain four families of explicit exact solutions. In Section 3, we present an approach with $u,v,w,T$ linear in $x,y$ to the equations (1.3)-(1.7), and obtain two families of explicit exact solutions. Assuming $u_z=v_z=w_{zz}=T_{zz}=0$ in Section 4, we find another two families of explicit exact solutions of the equations (1.3)-(1.7). In Section 5, we obtain a family of explicit exact solutions of (1.3)-(1.7) that are independent of $x$. Solutions depending on $x$ can then be obtained by applying the transformation in (1.12) and (1.13) to them. \section{Solutions of the 2D Boussinesq Equations} In this section, we solve the two-dimensional Boussinesq equations (1.1)-(1.2) by an asymmetric method and by a moving frame. According to the second equation in (1.2), we take the potential form: $$u=\xi_y,\qquad v=-\xi_x\eqno(2.1)$$ for some function $\xi$ in $t,x,y$. Then the two-dimensional Boussinesq equations become $$\xi_{yt}+\xi_y\xi_{xy}-\xi_x\xi_{yy}-\nu\Delta \xi_y=-p_x,\qquad \xi_{xt}+\xi_y\xi_{xx}-\xi_x\xi_{xy}-\nu\Delta \xi_x+\theta=p_y,\eqno(2.2)$$ $$\theta_t+\xi_y\theta_x-\xi_x\theta_y-\kappa \Delta\theta=0.\eqno(2.3)$$ By our assumption $p_{xy}=p_{yx}$, the compatibility condition of the equations in (2.2) is $$(\Delta \xi)_t+\xi_y(\Delta \xi)_x-\xi_x(\Delta \xi)_y-\nu\Delta^2\xi+\theta_x=0.\eqno(2.4)$$ Now we first solve the system (2.3) and (2.4). Our asymmetric approach is to assume $$\theta=\varepsilon(t,y),\qquad\xi=\phi(t,y)+x\psi(t,y)\eqno(2.5)$$ for some functions $\varepsilon,\phi$ and $\psi$ in $t,y$.
Then (2.3) becomes $$\varepsilon_t-\psi\varepsilon_y-\kappa\varepsilon_{yy}=0.\eqno(2.6)$$ Moreover, (2.4) becomes $$\phi_{yyt}+x\psi_{yyt}+(\phi_y+x\psi_y)\psi_{yy}-\psi(\phi_{yyy}+x\psi_{yyy})-\nu(\phi_{yyyy}+x\psi_{yyyy})=0, \eqno(2.7)$$ equivalently, $$\phi_{yyt}+\phi_y\psi_{yy}-\psi\phi_{yyy}-\nu\phi_{yyyy}=0, \eqno(2.8)$$ $$\psi_{yyt}+\psi_y\psi_{yy}-\psi\psi_{yyy}-\nu\psi_{yyyy}=0. \eqno(2.9)$$ The above two equations are equivalent to: $$\phi_{yt}+\phi_y\psi_y-\psi\phi_{yy}-\nu\phi_{yyy}=\alpha_1, \eqno(2.10)$$ $$\psi_{yt}+\psi_y^2-\psi\psi_{yy}-\nu\psi_{yyy}=\alpha_2 \eqno(2.11)$$ for some functions $\alpha_1$ and $\alpha_2$ of $t$ to be determined. Let $c$ be a fixed real constant and let $\gamma$ be a fixed function of $t$. We define $$\zeta_1(s)=\frac{e^{\gamma s}-ce^{-\gamma s}}{2},\qquad \eta_1=\frac{e^{\gamma s}+ce^{-\gamma s}}{2},\eqno(2.12)$$ $$\zeta_0(s)=\sin\gamma s,\qquad \eta_0(s)=\cos\gamma s.\eqno(2.13)$$ Then $$\eta_r^2(s)+(-1)^r\zeta_r^2(s)=c^r\eqno(2.14)$$ and $$\partial_s(\zeta_r(s))=\gamma\eta_r(s),\qquad \partial_s(\eta_r(s))=-(-1)^r\gamma\zeta_r(s),\eqno(2.15)$$ where we treat $0^0=1$ when $c=r=0$. First we assume $$\psi=\beta_1y+\beta_2\zeta_r(y)\eqno(2.16)$$ for some functions $\beta_1$ and $\beta_2$ of $t$, where $r=0,1$. Then (2.11) becomes \begin{eqnarray*}\hspace{2cm}& &\beta_1'+c^r\beta_2^2\gamma^2+\beta_1^2+[(\beta_2\gamma)'+(-1)^r\nu\beta_2\gamma^3+2\beta_1\beta_2\gamma]\eta_r(y) \\ & &+(-1)^r\beta_2\gamma(\beta_1\gamma-\gamma')y\zeta_r(y)=\alpha_2,\hspace{6.2cm} (2.17)\end{eqnarray*} which is implied by the following equations: $$\beta_1'+c^r\beta_2^2\gamma^2+\beta_1^2=\alpha_2,\qquad\beta_1\gamma-\gamma'=0,\eqno(2.18)$$ $$(\beta_2\gamma)'+(-1)^r\nu\beta_2\gamma^3+2\beta_1\beta_2\gamma=0.\eqno(2.19)$$ For convenience, we assume $\gamma=\sqrt{\alpha'}$ for some function $\alpha$ of $t$. 
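The identities (2.14) and (2.15) are used repeatedly below; they are elementary and can also be confirmed mechanically. The following Python/sympy fragment is our own sanity check, not part of the derivation ($\gamma$ is treated as a constant here, since only $s$-derivatives are involved):

```python
import sympy as sp

s, gamma, c = sp.symbols('s gamma c', real=True)

# r = 1 pair from (2.12) and r = 0 pair from (2.13)
zeta1 = (sp.exp(gamma*s) - c*sp.exp(-gamma*s))/2
eta1  = (sp.exp(gamma*s) + c*sp.exp(-gamma*s))/2
zeta0 = sp.sin(gamma*s)
eta0  = sp.cos(gamma*s)

# (2.14): eta_r^2 + (-1)^r zeta_r^2 = c^r
assert sp.simplify(eta1**2 - zeta1**2 - c) == 0
assert sp.simplify(eta0**2 + zeta0**2 - 1) == 0

# (2.15): zeta_r' = gamma*eta_r and eta_r' = -(-1)^r gamma*zeta_r
assert sp.simplify(zeta1.diff(s) - gamma*eta1) == 0
assert sp.simplify(eta1.diff(s) - gamma*zeta1) == 0   # -(-1)^1 = +1
assert sp.simplify(zeta0.diff(s) - gamma*eta0) == 0
assert sp.simplify(eta0.diff(s) + gamma*zeta0) == 0   # -(-1)^0 = -1
```

All six residuals simplify to zero, matching (2.14)-(2.15) for both parities $r=0,1$.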
Thus we have $$\beta_1=\frac{\gamma'}{\gamma}=\frac{{\alpha'}'}{2\alpha'},\qquad \beta_2=\frac{b_1e^{-(-1)^r\nu\alpha}}{\sqrt{(\alpha')^3}},\qquad b_1\in\mathbb{R}.\eqno(2.20)$$ To solve (2.10), we assume $$\phi=\beta_3\eta_r(y)\eqno(2.21)$$ for some function $\beta_3$, modulo the transformation in (1.8)-(1.11). Now (2.10) becomes $$[(-1)^r((\beta_3\gamma)'+\beta_1\beta_3\gamma)+\nu\beta_3\gamma^3]\zeta_r(y)=-\alpha_1, \eqno(2.22)$$ which is implied by $$(-1)^r((\beta_3\gamma)'+\beta_1\beta_3\gamma)+\nu\beta_3\gamma^3=0.\eqno(2.23)$$ Thus $$\beta_3=\frac{b_2e^{-(-1)^r\nu\alpha}}{\alpha'},\eqno(2.24)$$ where $b_2$ is a real constant. In order to solve (2.6), we assume $$\varepsilon=be^{\gamma_1\eta_r(y)},\eqno(2.25)$$ where $b$ is a real constant and $\gamma_1$ is a function of $t$. Then (2.6) is implied by $$\gamma_1'\eta_r(y)+(-1)^r\beta_2\gamma\gamma_1\zeta_r^2(y)+\kappa\gamma^2\gamma_1((-1)^r\eta_r(y)-\gamma_1\zeta_r^2(y))=0, \eqno(2.26)$$ which is implied by $$\gamma_1'+(-1)^r\kappa\gamma^2\gamma_1=0,\qquad(-1)^r\beta_2-\kappa\gamma\gamma_1=0.\eqno(2.27)$$ Then the first equation implies $$\gamma_1=b_3e^{-(-1)^r\kappa\alpha}\eqno(2.28)$$ for some constant $b_3$. By the second equations in (2.20) and (2.27), we have: $$(-1)^r\frac{b_1e^{-(-1)^r\nu\alpha}}{\sqrt{(\alpha')^3}}=b_3\kappa\sqrt{\alpha'}e^{-(-1)^r\kappa\alpha}.\eqno(2.29)$$ For convenience, we take $$b_1=(-1)^rb_0^2\kappa b_3,\qquad b_0\in\mathbb{R}.\eqno(2.30)$$ Then (2.29) is implied by $$\alpha'e^{(-1)^r(\nu-\kappa)\alpha/2}=b_0.\eqno(2.31)$$ If $\nu=\kappa$, then we have $\alpha=b_0t+c_0$. Modulo the transformation in (1.8)-(1.11), we take $b_0=1$ and $c_0=0$, that is, $\alpha=t$. When $\nu\neq \kappa$, we similarly take $b_0=1$ and $$\alpha=\frac{2(-1)^r}{\nu-\kappa}\ln[(-1)^r(\nu-\kappa)t/2+c_0],\qquad c_0\in\mathbb{R}.\eqno(2.32)$$ Suppose $\nu=\kappa$.
Then $\gamma=1$ and $$\phi=b_2e^{-(-1)^r\nu t}\eta_r(y),\qquad\psi=(-1)^rb_3\nu e^{-(-1)^r\nu t}\zeta_r(y).\eqno(2.33)$$ Moreover, $$\theta=b\exp(b_3e^{-(-1)^r\nu t}\eta_r(y)),\eqno(2.34)$$ $$\xi=b_2e^{-(-1)^r\nu t}\eta_r(y)+(-1)^rb_3\nu e^{-(-1)^r\nu t}x\zeta_r(y)\eqno(2.35)$$ by (2.5). According to (2.1), $$u=\xi_y=(-1)^r[-b_2e^{-(-1)^r\nu t}\zeta_r(y)+b_3\nu e^{-(-1)^r\nu t}x\eta_r(y)],\eqno(2.36)$$ $$ v=-\xi_x=-(-1)^rb_3\nu e^{-(-1)^r\nu t}\zeta_r(y).\eqno(2.37)$$ Note $$u_t+uu_x+vu_y-\nu\Delta u= b_3^2\nu^2c^r e^{-(-1)^r2\nu t}x,\eqno(2.38)$$ $$v_t+uv_x+vv_y-\nu\Delta v-\theta =vv_y-b\exp(b_3e^{-(-1)^r\nu t}\eta_r(y)).\eqno(2.39)$$ By (1.1), we have $$p= b\int\exp(b_3e^{-(-1)^r\nu t}\eta_r(y))dy-\frac{1}{2}b_3^2\nu^2 e^{-(-1)^r2\nu t}(c^rx^2+\zeta_r^2(y))\eqno(2.40)$$ modulo the transformation in (1.8)-(1.11). Consider the case $\nu\neq \kappa$. Then $$\gamma=\sqrt{\alpha'}=\frac{1}{\sqrt{(-1)^r(\nu-\kappa)t/2+c_0}}\eqno(2.41)$$ by (2.32). Moreover, $$\phi=b_2[(-1)^r(\nu-\kappa)t/2+c_0]^{2\nu/(\kappa-\nu)+1}\eta_r(y)\eqno(2.42)$$ by (2.21) and (2.24). Furthermore, $$\psi=\frac{(-1)^r(\kappa-\nu)y}{4[(-1)^r(\nu-\kappa)t/2+c_0]}+ (-1)^rb_3\kappa [(-1)^r(\nu-\kappa)t/2+c_0]^{2\nu/(\kappa-\nu)+3/2}\zeta_r(y)\eqno(2.43)$$ by (2.16), (2.20) and (2.30). 
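The $\nu=\kappa$ family just assembled in (2.34), (2.36), (2.37) and (2.40) can be checked mechanically against (1.1)-(1.2). The following Python/sympy fragment is our own verification sketch for the $r=0$, $c=1$ member (symbol names are ours; the unevaluated integral in (2.40) only ever enters through its $y$-derivative):

```python
import sympy as sp

t, x, y = sp.symbols('t x y', real=True)
nu, b, b2, b3 = sp.symbols('nu b b_2 b_3', positive=True)

# nu = kappa, r = 0, gamma = 1 case: zeta_0(y) = sin y, eta_0(y) = cos y, c^0 = 1
g = b3*sp.exp(-nu*t)                                  # shorthand for b_3 e^{-nu t}
theta = b*sp.exp(g*sp.cos(y))                         # (2.34)
u = -b2*sp.exp(-nu*t)*sp.sin(y) + nu*g*x*sp.cos(y)    # (2.36)
v = -nu*g*sp.sin(y)                                   # (2.37)
p = (b*sp.Integral(sp.exp(g*sp.cos(y)), y)            # (2.40)
     - nu**2*g**2*(x**2 + sp.sin(y)**2)/2)

lap = lambda f: f.diff(x, 2) + f.diff(y, 2)
mom_x = sp.simplify((u.diff(t) + u*u.diff(x) + v*u.diff(y) - nu*lap(u) + p.diff(x)).doit())
mom_y = sp.simplify((v.diff(t) + u*v.diff(x) + v*v.diff(y) - nu*lap(v) - theta + p.diff(y)).doit())
heat  = sp.simplify(theta.diff(t) + u*theta.diff(x) + v*theta.diff(y) - nu*lap(theta))
incomp = sp.simplify(u.diff(x) + v.diff(y))

assert mom_x == 0 and mom_y == 0 and heat == 0 and incomp == 0
```

Both momentum equations in (1.1), the temperature equation in (1.2) (with $\kappa=\nu$) and the incompressibility constraint all reduce to zero residuals.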
According to (2.25), (2.28) and (2.32), $$\theta=be^{b_3[(-1)^r(\nu-\kappa)t/2+c_0]^{2\kappa/(\kappa-\nu)}\eta_r(y)}.\eqno(2.44)$$ Similarly, we have \begin{eqnarray*}\hspace{1cm}u_t+uu_x+vu_y-\nu\Delta u&=&b_3^2c^r\kappa^2 [(-1)^r(\nu-\kappa)t/2+c_0]^{4\nu/(\kappa-\nu)+2}x \\ & &+\frac{3(\nu-\kappa)^2x}{16[(-1)^r(\nu-\kappa)t/2+c_0]^2}, \hspace{3.8cm}(2.45)\end{eqnarray*} \begin{eqnarray*}& &v_t+uv_x+vv_y-\nu\Delta v-\theta =-\psi_t+\psi\psi_y+\nu\psi_{yy}-\theta\\ &=&-be^{b_3[(-1)^r(\nu-\kappa)t/2+c_0]^{2\kappa/(\kappa-\nu)}\eta_r(y)} +\frac{3}{4}b_3\kappa(\kappa-\nu) [(-1)^r(\nu-\kappa)t/2+c_0]^{2\nu/(\kappa-\nu)+1/2}\zeta_r(y) \\ & &+\frac{3(\nu-\kappa)^2y}{16[(-1)^r(\nu-\kappa)t/2+c_0]^2}+ \frac{b_3^2}{2}\kappa^2 [(-1)^r(\nu-\kappa)t/2+c_0]^{4\nu/(\kappa-\nu)+3}\partial_y\zeta_r^2(y).\hspace{0.6cm}(2.46) \end{eqnarray*} According to (1.1), we have \begin{eqnarray*}p&=&b\int e^{b_3[(-1)^r(\nu-\kappa)t/2+c_0]^{2\kappa/(\kappa-\nu)}\eta_r(y)}dy -\frac{b_3^2}{2}c^r\kappa^2 [(-1)^r(\nu-\kappa)t/2+c_0]^{4\nu/(\kappa-\nu)+2}x^2 \\ & &-\frac{3(\nu-\kappa)^2(x^2+y^2)}{32[(-1)^r(\nu-\kappa)t/2+c_0]^2} -\frac{b_3^2}{2}\kappa^2 [(-1)^r(\nu-\kappa)t/2+c_0]^{4\nu/(\kappa-\nu)+3}\zeta_r^2(y) \\ & &+\frac{3}{4}(-1)^rb_3\kappa(\kappa-\nu) [(-1)^r(\nu-\kappa)t/2+c_0]^{2\nu/(\kappa-\nu)+1}\eta_r(y)\hspace{3.6cm}(2.47)\end{eqnarray*} modulo the transformation in (1.8)-(1.11).\vspace{0.4cm} {\bf Theorem 2.1}. {\it Let $b,b_2,b_3,c,c_0\in\mathbb{R}$ and let $r=0,1$. If $\nu=\kappa$, we have the solution (2.34), (2.36), (2.37) and (2.40) of the two-dimensional Boussinesq equations (1.1)-(1.2), where $\zeta_r(y)$ and $\eta_r(y)$ are defined in (2.12)-(2.13) with $\gamma=1$.
When $\nu\neq\kappa$, we have the following solutions of the two-dimensional Boussinesq equations (1.1)-(1.2): \begin{eqnarray*}u&=&\frac{(-1)^r(\kappa-\nu)x}{4[(-1)^r(\nu-\kappa)t/2+c_0]}+ (-1)^rb_3\kappa [(-1)^r(\nu-\kappa)t/2+c_0]^{2\nu/(\kappa-\nu)+1}x\eta_r(y)\\ && -(-1)^rb_2[(-1)^r(\nu-\kappa)t/2+c_0]^{2\nu/(\kappa-\nu)+1/2}\zeta_r(y), \hspace{5cm}(2.48)\end{eqnarray*} $$v=\frac{(-1)^r(\nu-\kappa)y}{4[(-1)^r(\nu-\kappa)t/2+c_0]}- (-1)^rb_3\kappa [(-1)^r(\nu-\kappa)t/2+c_0]^{2\nu/(\kappa-\nu)+3/2}\zeta_r(y),\eqno(2.49)$$ $\theta$ is given in (2.44) and $p$ is given in (2.47), where $\zeta_r(y)$ and $\eta_r(y)$ are defined in (2.12)-(2.13) with $\gamma=[(-1)^r(\nu-\kappa)t/2+c_0]^{-1/2}$.}\vspace{0.4cm} Observe that $$\psi=6\nu y^{-1}\eqno(2.50)$$ is another solution of (2.11). In order to solve (2.10), we assume $$\phi=\sum_{i=1}^\infty\gamma_iy^i\eqno(2.51)$$ modulo the transformation in (1.8)-(1.11), where $\gamma_i$ are functions of $t$ to be determined. Now (2.10) becomes $$-6\nu\gamma_1y^{-2}-24\nu\gamma_2y^{-1}+\sum_{i=1}^\infty[i\gamma_i'-\nu(i+2)(i+3)(i+4)\gamma_{i+2}] y^{i-1}=\alpha_1,\eqno(2.52)$$ equivalently, $$\gamma_1=\gamma_2=0,\qquad \alpha_1=-60\nu\gamma_3,\eqno(2.53)$$ $$i\gamma_i'-\nu(i+2)(i+3)(i+4)\gamma_{i+2}=0,\qquad i> 1.\eqno(2.54)$$ Thus $$\gamma_{2i+2}=\frac{2i\gamma_{2i}'}{\nu(2i+2)(2i+3)(2i+4)}=0,\qquad i\geq 1,\eqno(2.55)$$ $$\gamma_{2i+3}=\frac{(2i+1)\gamma_{2i+1}'}{\nu(2i+3)(2i+4)(2i+5)}=\frac{360\gamma_3^{(i)}}{\nu^i(2i+3)(2i+5)!},\qquad i\geq 1.\eqno(2.56)$$ Hence $$\phi=360\sum_{i=0}^\infty \frac{\alpha^{(i)}y^{2i+3}}{\nu^i(2i+3)(2i+5)!},\eqno(2.57)$$ where $\alpha$ is an arbitrary function of $t$ such that the series converges, say, a polynomial in $t$. To solve (2.6), we also assume $$\varepsilon=\sum_{i=0}^\infty\beta_i y^i,\eqno(2.58)$$ where $\beta_i$ are functions of $t$.
Then (2.6) becomes $$-6\nu\beta_1y^{-1}+\sum_{i=0}^\infty[\beta_i'-(i+2)(6\nu+(i+1)\kappa)\beta_{i+2}]y^i=0,\eqno(2.58)$$ that is, $\beta_1=0$ and $$\beta_i'-(i+2)(6\nu+(i+1)\kappa)\beta_{i+2}=0,\qquad i\geq 0.\eqno(2.59)$$ Hence $$\theta=\beta+\sum_{i=1}^\infty\frac{\beta^{(i)}y^{2i}}{2^ii!\prod_{r=1}^i(6\nu+(2r-1)\kappa)},\eqno(2.60)$$ where $\beta$ is an arbitrary function of $t$ such that the series converges, say, a polynomial in $t$. In this case, $$u_t+uu_x+vu_y-\nu\Delta u=-60\nu\alpha,\eqno(2.61)$$ $$v_t+uv_x+vv_y-\nu\Delta v-\theta =-24\nu^2 y^{-3}-\beta-\sum_{i=1}^\infty\frac{\beta^{(i)}y^{2i}}{2^ii!\prod_{r=1}^i(6\nu+(2r-1)\kappa)}. \eqno(2.62)$$ According to (1.1), we have $$p=60\nu\alpha x-12\nu^2 y^{-2}+\beta y+ \sum_{i=1}^\infty\frac{\beta^{(i)}y^{2i+1}}{2^ii!(2i+1)\prod_{r=1}^i(6\nu+(2r-1)\kappa)}\eqno(2.63)$$ modulo the transformation in (1.8)-(1.11).\vspace{0.4cm} {\bf Theorem 2.2}. {\it We have the following solutions of the two-dimensional Boussinesq equations (1.1)-(1.2): $$u=360\sum_{i=0}^\infty \frac{\alpha^{(i)}y^{2i+2}}{\nu^i(2i+5)!}-6\nu xy^{-2},\qquad v=-6\nu y^{-1},\eqno(2.64)$$ $\theta$ is given in (2.60) and $p$ is given in (2.63), where $\alpha $ and $\beta$ are arbitrary functions of $t$ such that the related series converge, say, polynomials in $t$.}\vspace{0.4cm} Let $\gamma$ be a function of $t$.
Denote the moving frame $$\tilde\varpi=x\cos\gamma+y\sin\gamma,\qquad \hat\varpi=y\cos\gamma-x\sin\gamma.\eqno(2.65)$$ Then $$\partial_t(\tilde\varpi)=\gamma'\hat\varpi,\qquad \partial_t(\hat\varpi)=-\gamma'\tilde\varpi.\eqno(2.66)$$ Moreover, $$\partial_{\tilde\varpi}=\cos\gamma\:\partial_x+\sin\gamma\:\partial_y,\qquad \partial_{\hat\varpi}=-\sin\gamma\:\partial_x+\cos\gamma\:\partial_y.\eqno(2.67)$$ In particular, $$\Delta=\partial_x^2+\partial_y^2=\partial_{\tilde\varpi}^2+\partial_{\hat\varpi}^2.\eqno(2.68)$$ We assume $$\xi=\phi(t,\tilde\varpi)-\frac{\gamma'}{2}(x^2+y^2) ,\qquad\theta=\psi(t,\tilde\varpi),\eqno(2.69)$$ where $\phi$ and $\psi$ are functions in $t,\tilde\varpi$. Then (2.3) becomes $$\psi_t-\kappa\psi_{\tilde\varpi\tilde\varpi}=0\eqno(2.70)$$ and (2.4) becomes $$-2{\gamma'}'+\phi_{t\tilde\varpi\tilde\varpi} -\nu\phi_{\tilde\varpi\tilde\varpi\tilde\varpi\tilde\varpi}+\psi_{\tilde\varpi}\cos\gamma =0.\eqno(2.71)$$ Modulo the transformation in (1.8)-(1.11), the above equation is equivalent to $$-2{\gamma'}'\tilde\varpi+\phi_{t\tilde\varpi} -\nu\phi_{\tilde\varpi\tilde\varpi\tilde\varpi}+\psi\cos\gamma =0.\eqno(2.72)$$ Assume $\nu=\kappa$. We take the following solution of (2.70): $$\psi=\sum_{i=1}^m a_id_ie^{a_i^2\kappa t\cos 2b_i+a_i\tilde\varpi\cos b_i}\sin(a_i^2\kappa t\sin 2b_i+a_i\tilde\varpi\sin b_i+b_i+c_i)\eqno(2.73)$$ where $a_i,b_i,c_i,d_i$ are real numbers. Moreover, (2.72) is equivalent to solving the following equation: \begin{eqnarray*}\hspace{1.5cm}& &2\nu\gamma'-{\gamma'}'\tilde\varpi^2+\phi_t -\nu\phi_{\tilde\varpi\tilde\varpi}+[\sum_{i=1}^md_ie^{a_i^2\kappa t\cos 2b_i+a_i\tilde\varpi\cos b_i}\\ & &\times\sin(a_i^2\kappa t\sin 2b_i+a_i\tilde\varpi\sin b_i+c_i)]\cos\gamma=0\hspace{4.9cm}(2.74)\end{eqnarray*} by (2.1). 
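Each summand of (2.73) is a standard complex-exponential solution of the heat-type equation (2.70). As our own aside (not part of the derivation), a single generic term can be checked with sympy, treating $\tilde\varpi$ as an independent variable $s$ and absorbing the constant prefactor and the extra phase $b_i$ into the free phase $c$:

```python
import sympy as sp

t, s = sp.symbols('t s', real=True)          # s plays the role of tilde-varpi
kappa, a = sp.symbols('kappa a', positive=True)
b, c = sp.symbols('b c', real=True)

# one generic summand of (2.73)
psi = sp.exp(a**2*kappa*t*sp.cos(2*b) + a*s*sp.cos(b)) \
      * sp.sin(a**2*kappa*t*sp.sin(2*b) + a*s*sp.sin(b) + c)

# (2.70): psi_t - kappa * psi_ss = 0
residual = sp.simplify(sp.expand_trig(psi.diff(t) - kappa*psi.diff(s, 2)))
assert residual == 0
```

This is just the real form of $e^{\lambda^2\kappa t+\lambda s}$ with $\lambda=ae^{ib}$, which explains the double angles $2b$ in the $t$-dependence.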
Thus we have the following solution of (2.74): \begin{eqnarray*}\phi&=&-[\sum_{i=1}^md_ie^{a_i^2\kappa t\cos 2b_i+a_i\tilde\varpi\cos b_i}\sin(a_i^2\kappa t\sin 2b_i+a_i\tilde\varpi\sin b_i+c_i)]\int \cos\gamma\:dt\\ & &+\gamma'\tilde\varpi^2+\sum_{s=1}^n\hat d_se^{\hat a_s^2\kappa t\cos 2\hat b_s+\hat a_s\tilde\varpi\cos \hat b_s}\sin(\hat a_s^2\kappa t\sin 2\hat b_s+\hat a_s\tilde\varpi\sin \hat b_s+\hat c_s),\hspace{1.6cm}(2.75)\end{eqnarray*} where $\hat a_s,\hat b_s,\hat c_s,\hat d_s$ are real numbers. Suppose $\nu\neq \kappa$. To make (2.72) solvable, we choose the following solution of (2.70): $$\psi=\sum_{i=1}^m a_id_ie^{a_i^2\kappa t+a_i\tilde\varpi}.\eqno(2.76)$$ Now (2.72) is equivalent to solving the following equation: $$\nu\gamma'-{\gamma'}'\tilde\varpi^2+\phi_t -\nu\phi_{\tilde\varpi\tilde\varpi}+\sum_{i=1}^md_ie^{a_i^2\kappa t+a_i\tilde\varpi}\cos\gamma=0\eqno(2.77)$$ by (2.1). We obtain the following solution of (2.77): \begin{eqnarray*}\hspace{1cm}\phi&=&\gamma'\tilde\varpi^2+\sum_{s=1}^n\hat d_se^{\hat a_s^2\kappa t\cos 2\hat b_s+\hat a_s\tilde\varpi\cos \hat b_s}\sin(\hat a_s^2\kappa t\sin 2\hat b_s+\hat a_s\tilde\varpi\sin \hat b_s+\hat c_s)\\ & &-\sum_{i=1}^md_ie^{a_i^2\nu t+a_i\tilde\varpi}\int e^{a_i^2(\kappa-\nu)t}\cos\gamma\:dt.\hspace{6.2cm}(2.78)\end{eqnarray*} Note $$u=\phi_\varpi\sin\gamma-\gamma'y,\qquad v=\gamma'x -\phi_\varpi\cos\gamma.\eqno(2.79)$$ By (2.72), \begin{eqnarray*} \hspace{1cm}& & u_t+uu_x+vu_y-\nu\Delta u\\&=&(\phi_{\varpi t}-\nu\phi_{\varpi\varpi\varpi})\sin\gamma+2\gamma'\phi_\varpi\cos\gamma-\gamma'^2x-{\gamma'}'y \\ &=&(2{\gamma'}'\tilde\varpi-\psi\cos\gamma)\sin\gamma+2\gamma'\phi_\varpi\cos\gamma-\gamma'^2x-{\gamma'}'y, \\ &=&{\gamma'}'(x\sin 2\gamma-y\cos 2\gamma) +(2\gamma'\phi_\varpi-\psi\sin\gamma) \cos\gamma-\gamma'^2x, \hspace{3cm}(2.80) \end{eqnarray*} \begin{eqnarray*} \hspace{1cm}& &v_t+uv_x+vv_y-\nu\Delta v-\theta\\ &=&(\nu\phi_{\varpi\varpi\varpi}-\phi_{\varpi 
t})\cos\gamma+2\gamma'\phi_\varpi\sin\gamma-\gamma'^2y+{\gamma'}'x -\psi\\ &=&(\psi\cos\gamma-2{\gamma'}'\tilde\varpi)\cos\gamma+2\gamma'\phi_\varpi\sin\gamma-\gamma'^2y+{\gamma'}'x -\psi\\ &=&-{\gamma'}'(x\cos 2\gamma+y\sin 2\gamma)+(2\gamma'\phi_\varpi-\psi\sin\gamma)\sin\gamma-\gamma'^2y.\hspace{2.8cm}(2.81) \end{eqnarray*} According to (1.1), $$p=\frac{{\gamma'}^2-{\gamma'}'\sin 2\gamma}{2}x^2+\frac{{\gamma'}^2+{\gamma'}'\sin 2\gamma}{2}y^2+{\gamma'}'xy\cos2\gamma+\int\psi d\tilde\varpi\:\sin\gamma-2\gamma'\phi \eqno(2.82)$$ modulo the transformation in (1.8)-(1.11).\vspace{0.4cm} {\bf Theorem 2.3}. {\it Let $\gamma$ be any function of $t$ and denote $\tilde\varpi=x\cos\gamma+y\sin\gamma$. Take $$\{a_i,b_i,c_i,d_i,\hat a_s,\hat b_s,\hat c_s,\hat d_s\mid i=1,...,m;s=1,...,n\}\subset\mathbb{R}.\eqno(2.83)$$ If $\nu=\kappa$, we have the following solutions of the two-dimensional Boussinesq equations (1.1)-(1.2): \begin{eqnarray*}u=-\gamma' y+\sin\gamma\{2\gamma'\tilde\varpi+\sum_{s=1}^n\hat a_s\hat d_se^{\hat a_s^2\kappa t\cos 2\hat b_s+\hat a_s\tilde\varpi\cos \hat b_s}\sin(\hat a_s^2\kappa t\sin 2\hat b_s+\hat a_s\tilde\varpi\sin \hat b_s+\hat b_s+\hat c_s)\\ -[\sum_{i=1}^m a_id_ie^{a_i^2\kappa t\cos 2b_i+a_i\tilde\varpi\cos b_i}\sin(a_i^2\kappa t\sin 2b_i+b_i+a_i\tilde\varpi\sin b_i+c_i)]\int \cos\gamma\:dt\},\hspace{0.7cm}(2.84)\end{eqnarray*} \begin{eqnarray*}v=\gamma'x-\cos\gamma\{2\gamma'\tilde\varpi+\sum_{s=1}^n\hat a_s\hat d_se^{\hat a_s^2\kappa t\cos 2\hat b_s+\hat a_s\tilde\varpi\cos \hat b_s}\sin(\hat a_s^2\kappa t\sin 2\hat b_s+\hat a_s\tilde\varpi\sin \hat b_s+\hat b_s+\hat c_s)\\-[\sum_{i=1}^m a_id_ie^{a_i^2\kappa t\cos 2b_i+a_i\tilde\varpi\cos b_i}\sin(a_i^2\kappa t\sin 2b_i+a_i\tilde\varpi\sin b_i+b_i+c_i)]\int \cos\gamma\:dt\},\hspace{1cm}(2.85)\end{eqnarray*} $\theta=\psi$ in (2.73), and \begin{eqnarray*} p&=&(\sin\gamma+2\gamma'\int\cos\gamma\:dt)[\sum_{i=1}^md_ie^{a_i^2\kappa t\cos 2b_i+a_i\tilde\varpi\cos b_i}\sin(a_i^2\kappa t\sin
2b_i+a_i\tilde\varpi\sin b_i+c_i)]\\ & &+\frac{{\gamma'}^2-{\gamma'}'\sin 2\gamma}{2}x^2+\frac{{\gamma'}^2+{\gamma'}'\sin 2\gamma}{2}y^2+{\gamma'}'xy\cos2\gamma-2\gamma'^2\tilde\varpi^2\\ & &-2\gamma'\sum_{s=1}^n\hat d_se^{\hat a_s^2\kappa t\cos 2\hat b_s+\hat a_s\tilde\varpi\cos \hat b_s}\sin(\hat a_s^2\kappa t\sin 2\hat b_s+\hat a_s\tilde\varpi\sin \hat b_s+\hat c_s).\hspace{2.4cm}(2.86)\end{eqnarray*} When $\nu\neq\kappa$, we have the following solutions of the two-dimensional Boussinesq equations (1.1)-(1.2): \begin{eqnarray*}\hspace{1cm}u&=&\{\sum_{s=1}^n\hat a_s\hat d_se^{\hat a_s^2\kappa t\cos 2\hat b_s+\hat a_s\tilde\varpi\cos \hat b_s}\sin(\hat a_s^2\kappa t\sin 2\hat b_s+\hat a_s\tilde\varpi\sin \hat b_s+\hat b_s+\hat c_s)\\ & &+2\gamma'\tilde\varpi-\sum_{i=1}^ma_id_ie^{a_i^2\nu t+a_i\tilde\varpi}\int e^{a_i^2(\kappa-\nu)t}\cos\gamma\:dt \}\sin\gamma-\gamma'y,\hspace{2.3cm}(2.87)\end{eqnarray*} \begin{eqnarray*}\hspace{1cm}v&=&-\{\sum_{s=1}^n\hat a_s\hat d_se^{\hat a_s^2\kappa t\cos 2\hat b_s+\hat a_s\tilde\varpi\cos \hat b_s}\sin(\hat a_s^2\kappa t\sin 2\hat b_s+\hat a_s\tilde\varpi\sin \hat b_s+\hat b_s+\hat c_s)\\ & &+2\gamma'\tilde\varpi-\sum_{i=1}^m a_id_ie^{a_i^2\nu t+a_i\tilde\varpi}\int e^{a_i^2(\kappa-\nu)t}\cos\gamma\:dt \}\cos\gamma+\gamma'x,\hspace{2.3cm}(2.88)\end{eqnarray*} $\theta=\psi$ in (2.76), and \begin{eqnarray*} p&=&\frac{{\gamma'}^2-{\gamma'}'\sin 2\gamma}{2}x^2+\frac{{\gamma'}^2+{\gamma'}'\sin 2\gamma}{2}y^2+{\gamma'}'xy\cos2\gamma-2\gamma'^2\tilde\varpi^2 \\&&-2\gamma'\sum_{s=1}^n\hat d_se^{\hat a_s^2\kappa t\cos 2\hat b_s+\hat a_s\tilde\varpi\cos \hat b_s}\sin(\hat a_s^2\kappa t\sin 2\hat b_s+\hat a_s\tilde\varpi\sin \hat b_s+\hat c_s)\\ & &+\sum_{i=1}^m d_ie^{a_i^2\nu t+a_i\tilde\varpi}\left(e^{a_i^2(\kappa-\nu)t}\sin\gamma+2\gamma'\int e^{a_i^2(\kappa-\nu)t}\cos\gamma\:dt\right).\hspace{5cm}(2.89)\end{eqnarray*} }\vspace{0.2cm} {\bf Remark 2.4}.
By Fourier expansion, we can use the above solutions to obtain solutions depending on two piecewise continuous functions of $\tilde\varpi$. \section{Asymmetric Approach I to the 3D Equations} Starting from this section, we use asymmetric approaches developed in [11] to solve the stratified rotating Boussinesq equations (1.3)-(1.7). For convenience of computation, we denote $$\Phi_1=u_t+uu_x+vu_y+wu_z-\frac{1}{R_0}v-\sigma(u_{xx}+u_{yy}+u_{zz}),\eqno(3.1)$$ $$\Phi_2=v_t+uv_x+vv_y+wv_z+\frac{1}{R_0}u-\sigma(v_{xx}+v_{yy}+v_{zz}),\eqno(3.2)$$ $$\Phi_3=w_t+uw_x+vw_y+ww_z-\sigma R T-\sigma(w_{xx}+w_{yy}+w_{zz}).\eqno(3.3)$$ Then the equations (1.3)-(1.5) become $$\Phi_1+\sigma p_x=0,\qquad \Phi_2+\sigma p_y=0,\qquad \Phi_3+\sigma p_z=0. \eqno(3.4)$$ Our strategy is to solve the following compatibility conditions: $$\partial_y(\Phi_1)=\partial_x(\Phi_2),\qquad \partial_z(\Phi_1)=\partial_x(\Phi_3),\qquad\partial_z(\Phi_2)=\partial_y(\Phi_3). \eqno(3.5)$$ First we assume $$u=\phi_z(t,z) x+\varsigma(t,z) y+\mu(t,z),\qquad v=\tau(t,z) x+\psi_z(t,z) y+\varepsilon(t,z),\eqno(3.6)$$ $$ w=-\phi(t,z)-\psi(t,z),\qquad T=\vartheta(t,z)+z,\eqno(3.7)$$ where $\phi,\psi,\vartheta,\varsigma,\mu,\tau,$ and $\varepsilon$ are functions of $t,z$ to be determined.
Then \begin{eqnarray*}\Phi_1&=&\phi_{tz}x+\varsigma_t y+\mu_t+ \phi_z(\phi_z x+\varsigma y+\mu)+(\varsigma-1/R_0)(\tau x+\psi_zy+\varepsilon)\\ & &-(\phi+\psi)(\phi_{zz}x+\varsigma_z y+\mu_z) -\sigma(\phi_{zzz}x+\varsigma_{zz} y+\mu_{zz})\\ &=&[\phi_{tz}+\phi_z^2+\tau(\varsigma-1/R_0)-\phi_{zz}(\phi+\psi)-\sigma\phi_{zzz}]x\\ & &+[\varsigma_t+\varsigma\phi_z+\psi_z(\varsigma-1/R_0)-\varsigma_z(\phi+\psi)-\sigma \varsigma_{zz}]y\\ & &+\mu_t+ \mu\phi_z+(\varsigma-1/R_0)\varepsilon-\mu_z(\phi+\psi)-\sigma\mu_{zz}, \hspace{5.3cm}(3.8)\end{eqnarray*} \begin{eqnarray*}\Phi_2&=&\tau_tx+\psi_{tz}y+\varepsilon_t+\psi_z(\tau x+\psi_zy+\varepsilon)+ (\tau+1/R_0)(\phi_zx+\varsigma y+\mu)\\ & &-(\phi+\psi)(\tau_zx+\psi_{zz}y+\varepsilon_z) -\sigma(\tau_{zz}x+\psi_{zzz}y+\varepsilon_{zz})\\ &=&[\psi_{tz}+\psi_z^2+\varsigma(\tau+1/R_0)-(\phi+\psi)\psi_{zz}-\sigma\psi_{zzz}]y\\ & &+[\tau_t+\tau\psi_z+(\tau+1/R_0)\phi_z-(\phi+\psi)\tau_z-\sigma \tau_{zz}]x\\ & &+\varepsilon_t+ \varepsilon\psi_z+(\tau+1/R_0)\mu-(\phi+\psi)\varepsilon_z-\sigma\varepsilon_{zz}, \hspace{5.3cm}(3.9)\end{eqnarray*} $$\Phi_3=-\phi_t-\psi_t+(\phi+\psi)(\phi_z+\psi_z)-\sigma R(\vartheta+z)+\sigma(\phi_{zz}+\psi_{zz}).\eqno(3.10) $$ Thus (3.5) is equivalent to the following system of partial differential equations: $$\phi_{tz}+\phi_z^2+\tau(\varsigma-1/R_0)-\phi_{zz}(\phi+\psi)-\sigma\phi_{zzz}=\alpha_1,\eqno(3.11)$$ $$\varsigma_t+\varsigma\phi_z+\psi_z(\varsigma-1/R_0)-\varsigma_z(\phi+\psi)-\sigma \varsigma_{zz}=\alpha,\eqno(3.12)$$ $$\mu_t+ \mu\phi_z+(\varsigma-1/R_0)\varepsilon-\mu_z(\phi+\psi)-\sigma\mu_{zz}=\alpha_2,\eqno(3.13)$$ $$\psi_{tz}+\psi_z^2+\varsigma(\tau+1/R_0)-(\phi+\psi)\psi_{zz}-\sigma\psi_{zzz}=\beta_1,\eqno(3.14)$$ $$\tau_t+\tau\psi_z+(\tau+1/R_0)\phi_z-(\phi+\psi)\tau_z-\sigma \tau_{zz}=\alpha,\eqno(3.15)$$ $$\varepsilon_t+\varepsilon\psi_z+(\tau+1/R_0)\mu-(\phi+\psi)\varepsilon_z-\sigma\varepsilon_{zz}=\beta_2\eqno(3.16)$$ for some $\alpha,\alpha_1,\alpha_2,\beta_1,\beta_2$ are 
functions of $t$. Let $0\neq b$ and $c$ be fixed real constants. Recall the notions in (2.12) and (2.13) with $\gamma=b$. We assume $$\phi=b^{-1}\gamma_1\zeta_r(z),\qquad \psi=b^{-1}(\gamma_2\zeta_r(z)+\gamma_3\eta_r(z)),\eqno(3.17)$$ $$\varsigma=\gamma_4(\gamma_2\eta_r(z)-(-1)^r\gamma_3\zeta_r(z)),\qquad\tau=\gamma_5\gamma_1\eta_r(z),\qquad\gamma_4\gamma_5=1,\eqno(3.18)$$ where $\gamma_i$ are functions of $t$ to be determined. Moreover, (3.11) becomes $$(\gamma_1'+(-1)^rb^2\sigma\gamma_1-\gamma_1\gamma_5/R_0)\eta_r(z)+(\gamma_1+\gamma_2)\gamma_1c^r =\alpha_1,\eqno(3.19)$$ which is implied by $$\alpha_1=(\gamma_1+\gamma_2)\gamma_1c^r,\eqno(3.20)$$ $$\gamma_1'+(-1)^rb^2\sigma\gamma_1-\gamma_1\gamma_5/R_0=0.\eqno(3.21)$$ On the other hand, (3.15) becomes $$[(\gamma_1\gamma_5)'+ \gamma_1/R_0+(-1)^rb^2\sigma\gamma_1\gamma_5]\eta_r+ \gamma_1\gamma_5(\gamma_1+\gamma_2)c^r=\alpha,\eqno(3.22)$$ which gives $$\alpha=\gamma_1\gamma_5(\gamma_1+\gamma_2)c^r,\eqno(3.23)$$ $$(\gamma_1\gamma_5)'+(-1)^rb^2\sigma\gamma_1\gamma_5+ \gamma_1/R_0=0.\eqno(3.24)$$ Solving (3.21) and (3.24) for $\gamma_1$ and $\gamma_1\gamma_5$, we get $$\gamma_1=b_1e^{-(-1)^rb^2\sigma t}\sin\frac{t}{R_0},\qquad\gamma_1\gamma_5= b_1e^{-(-1)^rb^2\sigma t}\cos\frac{t}{R_0},\eqno(3.25)$$ where $b_1$ is a real constant. 
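The pair (3.25) can be confirmed directly against the linear system (3.21) and (3.24), for both values of $(-1)^r$. The following sympy fragment is our own check ($e$ stands for $(-1)^r$):

```python
import sympy as sp

t = sp.symbols('t', real=True)
b, b1, R0, sigma = sp.symbols('b b_1 R_0 sigma', positive=True)

for e in (1, -1):                                     # e plays the role of (-1)^r
    g1  = b1*sp.exp(-e*b**2*sigma*t)*sp.sin(t/R0)     # gamma_1 from (3.25)
    g15 = b1*sp.exp(-e*b**2*sigma*t)*sp.cos(t/R0)     # gamma_1*gamma_5 from (3.25)
    # (3.21): gamma_1' + (-1)^r b^2 sigma gamma_1 - (gamma_1 gamma_5)/R_0 = 0
    assert sp.simplify(g1.diff(t) + e*b**2*sigma*g1 - g15/R0) == 0
    # (3.24): (gamma_1 gamma_5)' + (-1)^r b^2 sigma (gamma_1 gamma_5) + gamma_1/R_0 = 0
    assert sp.simplify(g15.diff(t) + e*b**2*sigma*g15 + g1/R0) == 0
```

The exponential damps (or grows) at rate $b^2\sigma$ while the Coriolis term rotates the pair at frequency $1/R_0$, which is exactly the structure visible in (3.25).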
In particular, we take $$\gamma_5=\cot\frac{t}{R_0}.\eqno(3.26)$$ Observe that (3.12) becomes \begin{eqnarray*}\hspace{1cm}& &[(\gamma_2\gamma_4)'+(-1)^rb^2\sigma\gamma_2\gamma_4-\gamma_2/R_0]\eta_r(z) +\gamma_4(\gamma_1\gamma_2+\gamma_2^2+(-1)^r\gamma_3^2)c^r\\ &&-(-1)^r[(\gamma_3\gamma_4)'+(-1)^rb^2\sigma\gamma_3\gamma_4-\gamma_3/R_0]\zeta_r(z)=\alpha \hspace{4.3cm}(3.27)\end{eqnarray*} and (3.14) becomes \begin{eqnarray*}\hspace{1cm}& &[\gamma_2'+(-1)^rb^2\sigma\gamma_2+\gamma_2\gamma_4/R_0]\eta_r(z) +(\gamma_1\gamma_2+\gamma_2^2+(-1)^r\gamma_3^2)c^r\\ &&-(-1)^r[\gamma_3'+(-1)^rb^2\sigma\gamma_3+\gamma_3\gamma_4/R_0]\zeta_r(z)=\beta_1, \hspace{4.8cm}(3.28)\end{eqnarray*} equivalently, $$\alpha=\gamma_4(\gamma_1\gamma_2+\gamma_2^2+(-1)^r\gamma_3^2)c^r,\eqno(3.29)$$ $$\beta_1=(\gamma_1\gamma_2+\gamma_2^2+(-1)^r\gamma_3^2)c^r,\eqno(3.30)$$ $$(\gamma_2\gamma_4)'+(-1)^rb^2\sigma\gamma_2\gamma_4-\gamma_2/R_0=0,\eqno(3.31)$$ $$\gamma_2'+(-1)^rb^2\sigma\gamma_2+\gamma_2\gamma_4/R_0=0,\eqno(3.32)$$ $$(\gamma_3\gamma_4)'+(-1)^rb^2\sigma\gamma_3\gamma_4-\gamma_3/R_0=0,\eqno(3.33)$$ $$\gamma_3'+(-1)^rb^2\sigma\gamma_3+\gamma_3\gamma_4/R_0=0.\eqno(3.34)$$ Solving (3.31)-(3.34) under the assumption $\gamma_4\gamma_5=1$, we obtain $$\gamma_2\gamma_4=b_2e^{-(-1)^rb^2\sigma t}\sin\frac{t}{R_0},\qquad\gamma_2= b_2e^{-(-1)^rb^2\sigma t}\cos\frac{t}{R_0},\eqno(3.35)$$ $$\gamma_3\gamma_4=b_3e^{-(-1)^rb^2\sigma t}\sin\frac{t}{R_0},\qquad\gamma_3= b_3e^{-(-1)^rb^2\sigma t}\cos\frac{t}{R_0}.\eqno(3.36)$$ In particular, we have: $$\gamma_4=\tan\frac{t}{R_0}.\eqno(3.37)$$ According to (3.23) and (3.29), $$\gamma_1\gamma_5(\gamma_1+\gamma_2)c^r=\gamma_4(\gamma_1\gamma_2+\gamma_2^2+(-1)^r\gamma_3^2)c^r, \eqno(3.38)$$ equivalently $$-2b_1b_2\cos\frac{2t}{R_0}+(b_2^2-b_1^2+(-1)^rb_3^2)\sin\frac{2t}{R_0} =0.\eqno(3.39)$$ Thus $$b_1b_2=0,\qquad b_2^2-b_1^2+(-1)^rb_3^2=0.\eqno(3.40)$$ So $$ r=0,\qquad b_2=0,\qquad b_1=b_3\eqno(3.41)$$ or $$ r=1,\qquad b_1=0,\qquad b_2=b_3.\eqno(3.42)$$ Assume
$r=0$ and $b_1\neq 0$. Then $$\phi=b^{-1}b_1e^{-b^2\sigma t}\sin bz\:\sin\frac{t}{R_0},\qquad \psi= b^{-1}b_1e^{-b^2\sigma t}\cos bz\:\cos\frac{t}{R_0},\eqno(3.43)$$ $$\varsigma=-b_1e^{-b^2\sigma t}\sin bz\:\sin\frac{t}{R_0},\qquad\tau=b_1e^{-b^2\sigma t}\cos bz\:\cos\frac{t}{R_0}.\eqno(3.44)$$ Moreover, we take $\mu=\varepsilon=\vartheta=0$. Then $$\Phi_1=\gamma_1^2(x+\gamma_5y)=b_1^2e^{-2b^2\sigma t}\sin\frac{t}{R_0}\left(x\sin\frac{t}{R_0}+y\cos\frac{t}{R_0}\right)\eqno(3.45)$$ by (3.8), (3.11)-(3.12), (3.20) and (3.23). Similarly $$\Phi_2=b_1^2e^{-2b^2\sigma t}\cos\frac{t}{R_0}\left(x\sin\frac{t}{R_0}+y\cos\frac{t}{R_0}\right).\eqno(3.46)$$ According to (3.10) $$\Phi_3=\left[b^{-1}R_0^{-1}b_1e^{-b^2\sigma t}-b^{-1}b_1^2e^{-2b^2\sigma t}\cos \left(bz-\frac{t}{R_0}\right)\right]\sin\left(bz-\frac{t}{R_0}\right)-R\sigma z.\eqno(3.47)$$ By (3.4), we have \begin{eqnarray*}\hspace{1cm}p&=&\frac{Rz^2}{2}+\frac{b_1e^{-b^2\sigma t}}{b^2\sigma R_0}\cos \left(bz-\frac{t}{R_0}\right)-\frac{b_1^2e^{-2b^2\sigma t}}{2\sigma b^2}\cos ^2\left(bz-\frac{t}{R_0}\right)\\ & &-\frac{b_1^2e^{-2b^2\sigma t}}{2\sigma}\left(y^2\cos^2\frac{t}{R_0}+x^2\sin^2\frac{t}{R_0}+xy\sin\frac{2t}{R_0} \right)\hspace{3.7cm}(3.48)\end{eqnarray*} modulo the transformation in (1.14)-(1.16). Suppose $r=1$ and $b_2\neq 0$. Then $$\phi=\tau=\mu=\varepsilon=\vartheta=0,\;\;\psi=b^{-1}b_2e^{bz+b^2\sigma t}\cos\frac{t}{R_0}, \qquad\varsigma=b_2e^{bz+b^2\sigma t}\sin\frac{t}{R_0}.\eqno(3.49)$$ Moreover, $$\Phi_1=\Phi_2=0,\;\;\Phi_3=b^{-1}b_2R_0^{-1}e^{bz+b^2\sigma t}\sin\frac{t}{R_0}+b^{-1}b_2^2e^{2(bz+b^2\sigma t)}\cos^2\frac{t}{R_0}-R\sigma z.\eqno(3.50)$$ According to (3.4), $$p=\frac{Rz^2}{2}-\frac{b_2e^{bz+b^2\sigma t}}{b^2\sigma R_0}\sin\frac{t}{R_0}-\frac{b_2^2e^{2(bz+b^2\sigma t)}}{2b^2\sigma}\cos^2\frac{t}{R_0}\eqno(3.51)$$ modulo the transformation (1.14)-(1.16).\vspace{0.4cm} {\bf Theorem 3.1}. {\it Let $b,b_1,b_2\in\mathbb{R}$ with $b\neq 0$. 
We have the following solutions of the three-dimensional stratified rotating Boussinesq equations (1.3)-(1.7): (1) $$u=b_1e^{-b^2\sigma t}(x\cos bz-y\sin bz)\sin\frac{t}{R_0},\qquad v= b_1e^{-b^2\sigma t}(x\cos bz-y\sin bz)\cos\frac{t}{R_0},\eqno(3.52)$$ $$w=-b^{-1}b_1e^{-b^2\sigma t}\cos\left(bz-\frac{t}{R_0}\right),\qquad T=z\eqno(3.53)$$ and $p$ is given in (3.48); (2) $$u=b_2e^{bz+b^2\sigma t}y\sin\frac{t}{R_0},\qquad v=b_2e^{bz+b^2\sigma t}y\cos\frac{t}{R_0},\eqno(3.54)$$ $$w=-b^{-1}b_2e^{bz+b^2\sigma t}\cos\frac{t}{R_0}\qquad T=z\eqno(3.55)$$ and $p$ is given in (3.51).}\vspace{0.4cm} Next we assume $\phi=\varsigma=\psi=\tau=0$. Then $$\mu_t-\frac{1}{R_0}\varepsilon-\sigma\mu_{zz}=\alpha_2,\;\; \varepsilon_t+\frac{1}{R_0}\nu-\sigma\varepsilon_{zz}=\beta_2,\;\;\vartheta_t-\vartheta_{zz}=0.\eqno(3.56)$$ Solving them, we get:\vspace{0.4cm} {\bf Theorem 3.2}. {\it Let $a_i,b_i,c_i,d_i,\hat a_r,\hat b_r,\hat c_r,\hat d_r,\tilde a_s,\tilde b_s,\tilde c_s,\tilde d_s$ be real numbers. We have the following solutions of the three-dimensional stratified rotating Boussinesq equations (1.3)-(1.7): \begin{eqnarray*}u&=&\cos\frac{t}{R_0}\;\sum_{i=1}^md_ie^{a_i^2\sigma t\cos 2b_i+ a_iz\cos b_i}\sin(a_i^2\sigma t\sin 2b_i+a_iz\sin b_i+c_i)\\ & &+\sin\frac{t}{R_0}\;\sum_{r=1}^n \hat d_re^{\hat a_r^2\sigma t\cos 2\hat b_r+a_rz\cos\hat b_r}\sin(\hat a_r^2\sigma t\sin 2\hat b_r+\hat a_rz\sin \hat b_r+\hat c_r),\hspace{1.7cm}(3.57)\end{eqnarray*} \begin{eqnarray*}v&=&-\sin\frac{t}{R_0}\;\sum_{i=1}^md_ie^{a_i^2\sigma t\cos 2b_i+ a_iz\cos b_i}\sin(a_i^2\sigma t\sin 2b_i+a_iz\sin b_i+c_i)\\ & &+\cos\frac{t}{R_0}\;\sum_{r=1}^n \hat d_re^{\hat a_r^2\sigma t\cos 2\hat b_r+a_rz\cos\hat b_r}\sin(\hat a_r^2\sigma t\sin 2\hat b_r+\hat a_rz\sin \hat b_r+\hat c_r),\hspace{1.6cm}(3.58)\end{eqnarray*} $$w=0,\;\;T=z+\sum_{s=1}^k\tilde a_s\tilde d_s e^{\tilde a_s^2 t\cos 2\tilde b_s+ \tilde a_sz\cos \tilde b_s}\sin(\tilde a_s^2 t\sin 2\tilde b_s+\tilde a_sz\sin \tilde b_s+\tilde b_s+\tilde 
c_s),\eqno(3.59)$$ $$p=\frac{R z^2}{2}+R\sum_{s=1}^{k}\tilde d_s e^{\tilde a_s^2 t\cos 2\tilde b_s+ \tilde a_sz\cos \tilde b_s}\sin(\tilde a_s^2 t\sin 2\tilde b_s+\tilde a_sz\sin \tilde b_s+\tilde c_s).\eqno(3.60)$$ }\vspace{0.2cm} {\bf Remark 3.3}. By Fourier expansion, we can use the above solution to obtain the one depending on three arbitrary piecewise continuous functions of $z$. \section{Asymmetric Approach II to the 3D Equations} In this section, we solve the stratified rotating Boussinesq equations (1.4)-(1.7) under the assumption $$u_z=v_z=w_{zz}=T_{zz}=0.\eqno(4.1)$$ Let $\gamma$ be a function of $t$ and we use the moving frame $\tilde\varpi$ in (2.65). Assume $$u=f(t,\tilde\varpi)\sin\gamma-\gamma'y,\qquad v=-f(t,\tilde\varpi)\cos\gamma+\gamma'x,\eqno(4.2)$$ and, in view of (4.1), $$w=\phi(t,\tilde\varpi),\qquad T=\psi(t,\tilde\varpi)+z,\eqno(4.3)$$ for some functions $f,\;\phi$ and $\psi$ in $t$ and $\tilde\varpi$. Using (2.66)-(2.68), we get $$\Phi_1=-(\gamma'^2+\gamma'/R_0)x-{\gamma'}'y+f_t\sin\gamma+(2\gamma'+1/R_0)f\cos\gamma-\sigma f_{\tilde\varpi\tilde\varpi}\sin\gamma,\eqno(4.4)$$ $$\Phi_2=-(\gamma'^2+\gamma'/R_0)y+{\gamma'}'x-f_t\cos\gamma+(2\gamma'+1/R_0)f\sin\gamma+\sigma f_{\tilde\varpi\tilde\varpi}\cos\gamma,\eqno(4.5)$$ $$\Phi_3=\phi_t-\sigma\phi_{\tilde\varpi\tilde\varpi}-\sigma R(\psi+z).\eqno(4.6)$$ By (3.5), we have $$-2{\gamma'}'+f_{\tilde\varpi t}-\sigma f_{\tilde\varpi\tilde\varpi\tilde\varpi}=0,\eqno(4.7)$$ $$\phi_t-\sigma\phi_{\tilde\varpi\tilde\varpi}-\sigma R\psi=0.\eqno(4.8)$$ Moreover, (1.6) becomes $$\psi_t-\psi_{\tilde\varpi\tilde\varpi}=0.\eqno(4.9)$$ Solving (4.7), we have: $$f=2\gamma'\tilde\varpi+\sum_{i=1}^m a_id_ie^{a_i^2\kappa t\cos 2b_i+a_i\tilde\varpi\cos b_i}\sin(a_i^2\kappa t\sin 2b_i+a_i\tilde\varpi\sin b_i+b_i+c_i),\eqno(4.10)$$ where $a_i,b_i,c_i,d_i$ are arbitrary real numbers.
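The term $2\gamma'\tilde\varpi$ in (4.10) absorbs the inhomogeneity $-2{\gamma'}'$ in (4.7), while each summand solves the underlying linear diffusion equation. The latter fact can be checked symbolically; the sketch below assumes the Python library sympy, with `a`, `bb`, `cc` standing for $a_i,b_i,c_i$ and `kappa` for the diffusion constant written in (4.10):

```python
# Check (sympy) that a single summand of (4.10) satisfies the linear
# diffusion equation g_t = kappa * g_{varpi varpi}; a, bb, cc stand for
# a_i, b_i, c_i and kappa is the diffusion constant appearing in (4.10).
import sympy as sp

t, w = sp.symbols('t varpi')
a, bb, cc, kappa = sp.symbols('a b c kappa', real=True)

g = sp.exp(a**2*kappa*t*sp.cos(2*bb) + a*w*sp.cos(bb)) \
    * sp.sin(a**2*kappa*t*sp.sin(2*bb) + a*w*sp.sin(bb) + bb + cc)
residual = g.diff(t) - kappa*g.diff(w, 2)
assert sp.simplify(sp.expand_trig(residual)) == 0
```

The cancellation rests on the double-angle identities, since $\partial_\varpi^2$ produces the factor $a^2(\cos^2 b-\sin^2 b)$ against $a^2\kappa\cos 2b$ from $\partial_t$.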
Moreover, (4.8) and (4.9) yield $$\phi=\sum_{r=1}^n \hat d_re^{\hat a_r^2 t\cos 2\hat b_r+\hat a_r\tilde\varpi\cos \hat b_r}\sin(\hat a_r^2 t\sin 2\hat b_r+\hat a_r\tilde\varpi\sin \hat b_r+\hat c_r)+\sigma Rt\psi,\eqno(4.11)$$ $$\psi=\sum_{s=1}^k\tilde d_se^{\tilde a_s^2t\cos 2\tilde b_s+\tilde a_s\tilde\varpi\cos \tilde b_s}\sin(\tilde a_s^2 t\sin 2\tilde b_s+\tilde a_s\tilde\varpi\sin \tilde b_s+\tilde c_s)\eqno(4.12)$$ if $\sigma=1$, and \begin{eqnarray*}\phi&=&\sum_{r=1}^n \hat d_re^{\hat a_r^2\sigma t\cos 2\hat b_r+\hat a_r\tilde\varpi\cos \hat b_r}\sin(\hat a_r^2\sigma t\sin 2\hat b_r+\hat a_r\tilde\varpi\sin \hat b_r+\hat c_r)\\ & &+\frac{\sigma R}{1-\sigma}\sum_{s=1}^k\tilde d_se^{\tilde a_s^2t\cos 2\tilde b_s+\tilde a_s\tilde\varpi\cos \tilde b_s}\sin(\tilde a_s^2 t\sin 2\tilde b_s+\tilde a_s\tilde\varpi\sin \tilde b_s+\tilde c_s),\hspace{2.2cm}(4.13)\end{eqnarray*} $$\psi=\sum_{s=1}^k\tilde a_s^2\tilde d_se^{\tilde a_s^2t\cos 2\tilde b_s+\tilde a_s\tilde\varpi\cos \tilde b_s}\sin(\tilde a_s^2 t\sin 2\tilde b_s+\tilde a_s\tilde\varpi\sin \tilde b_s+2\tilde b_s+\tilde c_s)\eqno(4.14)$$ when $\sigma\neq 1$, where $\hat a_r,\hat b_r,\hat c_r,\hat d_r,\tilde a_s,\tilde b_s,\tilde c_s,\tilde d_s$ are arbitrary real numbers. Now $$\Phi_1=({\gamma'}'\sin2\gamma-\gamma'^2-\gamma'/R_0)x-{\gamma'}'y\cos2\gamma+(2\gamma'+1/R_0)f\cos\gamma,\eqno(4.15)$$ $$\Phi_2=-({\gamma'}'\sin2\gamma+\gamma'^2+\gamma'/R_0)y-{\gamma'}'x\cos2\gamma+(2\gamma'+1/R_0)f\sin\gamma\eqno(4.16)$$ and $\Phi_3=-\sigma Rz$. According to (3.4), we have \begin{eqnarray*}p&=& -\frac{2\gamma'+1/R_0}{\sigma}[\gamma'\tilde\varpi^2+\sum_{i=1}^m d_ie^{a_i^2\kappa t\cos 2b_i+a_i\tilde\varpi\cos b_i}\sin(a_i^2\kappa t\sin 2b_i+a_i\tilde\varpi\sin b_i+c_i)]\\ &&+\frac{R}{2}z^2+\frac{(\gamma'^2+\gamma'/R_0)(x^2+y^2)+{\gamma'}'(y^2-x^2)\sin2\gamma} {2\sigma}+\frac{{\gamma'}'}{\sigma}xy\cos2\gamma\hspace{1.8cm}(4.17)\end{eqnarray*} modulo the transformation in (1.14)-(1.16).\vspace{0.4cm} {\bf Theorem 4.1}.
{\it Let $a_i,b_i,c_i,d_i,\hat a_r,\hat b_r,\hat c_r,\hat d_r,\tilde a_s,\tilde b_s,\tilde c_s,\tilde d_s$ be real numbers and let $\gamma$ be any function of $t$. Denote $\tilde\varpi=x\cos\gamma+y\sin\gamma$. We have the following solutions of the three-dimensional stratified rotating Boussinesq equations (1.3)-(1.7): \begin{eqnarray*}u&=&[\sum_{i=1}^m a_id_ie^{a_i^2\kappa t\cos 2b_i+a_i\tilde\varpi\cos b_i}\sin(a_i^2\kappa t\sin 2b_i+a_i\tilde\varpi\sin b_i+b_i+c_i)\\ & &+2\gamma'\tilde\varpi]\sin\gamma-\gamma' y,\hspace{10.1cm}(4.18)\end{eqnarray*} \begin{eqnarray*}v&=&[-\sum_{i=1}^m a_id_ie^{a_i^2\kappa t\cos 2b_i+a_i\tilde\varpi\cos b_i}\sin(a_i^2\kappa t\sin 2b_i+a_i\tilde\varpi\sin b_i+b_i+c_i)\\ & &+2\gamma'\tilde\varpi]\cos\gamma+\gamma' x,\hspace{10.1cm}(4.19)\end{eqnarray*} $p$ is given in (4.17); \begin{eqnarray*}w&=&\sum_{r=1}^n \hat d_re^{\hat a_r^2 t\cos 2\hat b_r+\hat a_r\tilde\varpi\cos \hat b_r}\sin(\hat a_r^2 t\sin 2\hat b_i+\hat a_r\tilde\varpi\sin \hat b_r+\hat c_r)\\ & &+\sigma Rt\sum_{s=1}^k\tilde d_se^{\tilde a_s^2t\cos 2\tilde b_s+\tilde a_s\tilde\varpi\cos \tilde b_s}\sin(\tilde a_s^2 t\sin 2\tilde b_s+\tilde a_s\tilde\varpi\sin \tilde b_s+\tilde c_s),\hspace{2.5cm}(4.20)\end{eqnarray*} $$T=z+\sum_{s=1}^k\tilde d_se^{\tilde a_s^2t\cos 2\tilde b_s+\tilde a_s\tilde\varpi\cos \tilde b_s}\sin(\tilde a_s^2 t\sin 2\tilde b_s+\tilde a_s\tilde\varpi\sin \tilde b_s+\tilde c_s)\eqno(4.21)$$ if $\sigma=1$, and \begin{eqnarray*}w&=&\sum_{r=1}^n \hat d_re^{\hat a_r^2\sigma t\cos 2\hat b_r+\hat a_r\tilde\varpi\cos \hat b_r}\sin(\hat a_r^2\sigma t\sin 2\hat b_i+\hat a_r\tilde\varpi\sin \hat b_r+\hat c_r)\\ & &+\frac{\sigma R}{1-\sigma}\sum_{s=1}^k\tilde d_se^{\tilde a_s^2t\cos 2\tilde b_s+\tilde a_s\tilde\varpi\cos \tilde b_s}\sin(\tilde a_s^2 t\sin 2\tilde b_s+\tilde a_s\tilde\varpi\sin \tilde b_s+\tilde c_s),\hspace{2.2cm}(4.22)\end{eqnarray*} $$T=z+\sum_{s=1}^k\tilde a_s^2\tilde d_se^{\tilde a_s^2t\cos 2\tilde b_s+\tilde a_s\tilde\varpi\cos \tilde 
b_s}\sin(\tilde a_s^2 t\sin 2\tilde b_s+\tilde a_s\tilde\varpi\sin \tilde b_s+2\tilde b_s+\tilde c_s)\eqno(4.23)$$ when $\sigma\neq 1$. }\vspace{0.4cm} {\bf Remark 4.2}. By Fourier expansion, we can use the above solution to obtain the one depending on three arbitrary piecewise continuous functions of $\tilde\varpi$.\vspace{0.4cm} Next we let $\alpha$ be any fixed function of $t$ and set $$\varpi=\alpha(x^2+y^2).\eqno(4.24)$$ We assume $$u=y\phi(t,\varpi)-\frac{\alpha'}{2\alpha}x,\qquad v=-x\phi(t,\varpi)-\frac{\alpha'}{2\alpha}y,\eqno(4.25)$$ $$w=\psi(t,\varpi)+\frac{\alpha'}{\alpha} z,\qquad T=\vartheta(t,\varpi)+z\eqno(4.26)$$ where $\phi,\psi$ and $\vartheta$ are functions in $t,\varpi$. Note $$\Phi_1=-\frac{{\alpha'}^2+2\alpha{\alpha'}'}{4\alpha^2}x +\frac{\alpha'}{2R_0\alpha}y+y\phi_t+\left(\frac{x}{R_0} -\frac{\alpha'}{\alpha}y \right)\phi-x\phi^2-4\sigma \alpha y(\varpi\phi)_{\varpi \varpi},\eqno(4.27)$$ $$\Phi_2=-\frac{{\alpha'}^2+2\alpha{\alpha'}'}{4\alpha^2}y -\frac{\alpha'}{2R_0\alpha}x-x\phi_t+\left(\frac{y}{R_0} +\frac{\alpha'}{\alpha}x \right)\phi-y\phi^2+4\sigma \alpha x(\varpi\phi)_{\varpi \varpi}.\eqno(4.28)$$ According to the first equation in (3.5), $$\left[\varpi\left(\phi_t-\frac{\alpha'}{\alpha}\phi-4\sigma \alpha (\varpi\phi)_{\varpi \varpi}\right)\right]_\varpi+\frac{\alpha'}{2R_0\alpha}=0,\eqno(4.29)$$ equivalently, $$(\varpi\phi)_t-\frac{\alpha'}{\alpha}\varpi\phi-4\sigma \alpha \varpi(\varpi\phi)_{\varpi \varpi}+\frac{\alpha'\varpi}{2R_0\alpha}=\alpha\beta'\eqno(4.30)$$ for some function $\beta$ of $t$. Write $$\hat\phi=\frac{\varpi\phi}{\alpha}+\frac{\varpi}{2R_0\alpha}-\beta.\eqno(4.31)$$ Then (4.30) becomes $$\hat\phi_t-4\sigma \alpha\varpi\hat\phi_{\varpi \varpi}=0.\eqno(4.32)$$ Suppose $$\hat\phi=\sum_{i=1}^\infty\gamma_i\varpi^i,\eqno(4.33)$$ where $\gamma_i$ are functions of $t$ to be determined. 
Equation (4.32) yields $$(\gamma_i)_t=4i(i+1)\sigma\alpha\gamma_{i+1}.\eqno(4.34)$$ Hence $$\gamma_{i+1}=\frac{(\alpha^{-1}\partial_t)^i(\gamma)}{i!(i+1)!(4\sigma)^i}\eqno(4.35)$$ for some function $\gamma$ of $t$. Thus $$\hat\phi= \sum_{i=0}^\infty\frac{(\alpha^{-1}\partial_t)^i(\gamma)\varpi^{i+1}}{i!(i+1)!(4\sigma)^i}. \eqno(4.36)$$ By (4.31), we get $$\phi=\frac{\alpha\beta}{\varpi}-\frac{1}{2R_0}+ \alpha\sum_{i=0}^\infty\frac{(\alpha^{-1}\partial_t)^i(\gamma)\varpi^i} {i!(i+1)!(4\sigma)^i}.\eqno(4.37)$$ Note $$\Phi_3=\psi_t+\frac{\alpha'}{\alpha}\psi-4\sigma(\varpi\psi_{\varpi})_{\varpi}-\sigma R(\vartheta+z).\eqno(4.38)$$ By the last two equations in (3.5), $$\psi_t+\frac{\alpha'}{\alpha}\psi-4\sigma(\varpi\psi_{\varpi})_{\varpi}-\sigma R\vartheta=0\eqno(4.39)$$ modulo the transformation in (1.14)-(1.16). On the other hand, (1.6) becomes $$\vartheta_t-4(\varpi\vartheta_{\varpi})_{\varpi}=0.\eqno(4.40)$$ Hence $$\vartheta=\sum_{i=0}^\infty\frac{\theta_1^{(i)}\varpi^{i+1}}{4^i((i+1)!)^2}\eqno(4.41)$$ modulo the transformation in (1.14)-(1.16), where $\theta_1$ is an arbitrary function of $t$. Substituting (4.41) into (4.39), we obtain $$\psi= \alpha^{-1}\theta_2\varpi+ \alpha^{-1}\sum_{i=1}^\infty\frac{\theta_2^{(i)}+ R\sum_{r=0}^{i-1}\sigma^{i-r}(\alpha\theta_1^{(i-s-1)})^{(s)}}{(4\sigma)^i((i+1)!)^2} \varpi^{i+1},\eqno(4.42)$$ where $\theta_2$ is another arbitrary function of $t$. Now $$\Phi_1=-\frac{{\alpha'}^2+2\alpha{\alpha'}'}{4\alpha^2}x +\frac{\alpha\beta' y}{\varpi}+\frac{x}{R_0}\phi-x\phi^2,\eqno(4.43)$$ $$\Phi_2=-\frac{{\alpha'}^2+2\alpha{\alpha'}'}{4\alpha^2}y -\frac{\alpha\beta' x}{\varpi}+\frac{y}{R_0}\phi-y\phi^2\eqno(4.44)$$ by (4.27) and (4.28), and $$\Phi_3=(\alpha^{-1}\alpha'-\sigma R)z\eqno(4.45)$$ by (4.38). 
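The closed form (4.35)-(4.36) can be tested directly against (4.32) on a truncated series. The sketch below assumes the Python library sympy and leaves $\alpha$ and $\gamma$ as arbitrary functions of $t$; every power of $\varpi$ up to the truncation order cancels, and only the truncation tail survives:

```python
# Check (sympy) of the recursion (4.34)-(4.36): for the truncated series
# phihat = sum_{i<=N} (alpha^{-1} d/dt)^i(gamma) varpi^{i+1} / (i!(i+1)!(4 sigma)^i),
# the residual of (4.32) contains only the truncation tail varpi^{N+1}.
import sympy as sp

t, w = sp.symbols('t varpi')
sigma = sp.symbols('sigma', positive=True)
alpha = sp.Function('alpha')(t)
gamma = sp.Function('gamma')(t)

N = 3
op, coeffs = gamma, []
for i in range(N + 1):
    coeffs.append(op/(sp.factorial(i)*sp.factorial(i + 1)*(4*sigma)**i))
    op = op.diff(t)/alpha            # one more application of alpha^{-1} d/dt

phihat = sum(ci*w**(i + 1) for i, ci in enumerate(coeffs))
residual = sp.expand(phihat.diff(t) - 4*sigma*alpha*w*phihat.diff(w, 2))
# every power of varpi up to N cancels; only c_N'(t) varpi^{N+1} remains
assert all(sp.simplify(residual.coeff(w, k)) == 0 for k in range(N + 1))
```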
According to (3.4), we obtain \begin{eqnarray*}p&=&\left(\frac{{\alpha'}^2+2\alpha{\alpha'}'}{4\sigma\alpha^2}+\frac{3}{8\sigma R_0^2}\right)(x^2+y^2) +\frac{\beta'}{\sigma}\arctan\frac{y}{x} +\frac{(R_0\alpha\gamma-1)\beta}{\sigma R_0}\ln\alpha(x^2+y^2)\\ & &-\frac{\sigma^{-1}\beta^2}{2(x^2+y^2)}+\frac{\sigma R-\alpha^{-1}\alpha'R}{2\sigma}z^2 -\frac{1}{\sigma R_0}\sum_{i=0}^\infty\frac{(\alpha^{-1}\partial_t)^i(\gamma)\alpha^{i+1}(x^2+y^2)^{i+1}} {((i+1)!)^2(4\sigma)^i}\\ & &+\frac{\alpha}{2\sigma}\sum_{i,j=0}^\infty \frac{(\alpha^{-1}\partial_t)^i(\gamma)(\alpha^{-1}\partial_t)^j(\gamma)(\alpha(x^2+y^2))^{i+j+1}} {i!j!(i+1)!(j+1)!(i+j+1)(4\sigma)^{i+j}}\\ & &+\frac{\alpha\beta}{2\sigma}\sum_{i=1}^\infty\frac{(\alpha^{-1}\partial_t)^i(\gamma)(\alpha(x^2+y^2))^i} {i!(i+1)!i(4\sigma)^i}\hspace{7.6cm}(4.46) \end{eqnarray*} modulo the transformation in (1.14)-(1.16). By (4.25), (4.26), (4.37), (4.41) and (4.42), we have:\vspace{0.4cm} {\bf Theorem 4.3}. {\it Let $\alpha,\beta,\gamma,\theta_1,\theta_2$ be any functions of $t$ such that the involved series converge.
We have the following solutions of the three-dimensional stratified rotating Boussinesq equations (1.3)-(1.7): $$u=\frac{\beta y}{x^2+y^2}-\frac{y}{2R_0}-\frac{\alpha'}{2\alpha}x+ \alpha y\sum_{i=0}^\infty\frac{(\alpha^{-1}\partial_t)^i(\gamma)\alpha^i(x^2+y^2)^i} {i!(i+1)!(4\sigma)^i},\eqno(4.47)$$ $$v=\frac{x}{2R_0}-\frac{\alpha'}{2\alpha}y-\frac{\beta x}{x^2+y^2}+ \alpha x\sum_{i=0}^\infty\frac{(\alpha^{-1}\partial_t)^i(\gamma)\alpha^i(x^2+y^2)^i} {i!(i+1)!(4\sigma)^i},\eqno(4.48)$$ $$w=\theta_2(x^2+y^2)+\frac{\alpha'}{\alpha} z+\frac{1}{\alpha}\sum_{i=1}^\infty\frac{\theta_2^{(i)}+ R\sum_{r=0}^{i-1}\sigma^{i-r}(\alpha\theta_1^{(i-s-1)})^{(s)}}{(4\sigma)^i((i+1)!)^2} \alpha^{i+1}(x^2+y^2)^{i+1},\eqno(4.49)$$ $$T=z+\sum_{i=0}^\infty\frac{\theta_1^{(i)}\alpha^{i+1}(x^2+y^2)^{i+1}}{4^i((i+1)!)^2} \eqno(4.50)$$ and $p$ is given in (4.46).} \section{Asymmetric Approach III to the 3D Equations} In this section, we solve (1.3)-(1.7) with $v_x=w_x=T_x=0$. Let $c$ be a real constant. Set $$\varpi=y\cos c+z\sin c.\eqno(5.1)$$ Suppose $$u=f(t,\varpi),\qquad v=\phi(t,\varpi)\sin c,\eqno(5.2)$$ $$w=-\phi(t,\varpi)\cos c,\qquad T=\psi(t,\varpi)+z,\eqno(5.3)$$ where $f,\;\phi$ and $\psi$ are functions in $t$ and $\varpi$. 
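A quick sanity check, assuming (as is standard for Boussinesq systems) that one of (1.3)-(1.7) is the continuity equation $u_x+v_y+w_z=0$: the ansatz (5.2)-(5.3) over the moving variable (5.1) is automatically divergence-free for any profiles $f$ and $\phi$. A sympy sketch:

```python
# Sanity check (sympy) that the ansatz (5.2)-(5.3) over the moving
# variable (5.1) is divergence-free for arbitrary profiles f and phi,
# assuming the continuity equation u_x + v_y + w_z = 0 is part of the system.
import sympy as sp

t, x, y, z, c = sp.symbols('t x y z c')
varpi = y*sp.cos(c) + z*sp.sin(c)                 # (5.1)
f = sp.Function('f')(t, varpi)
phi = sp.Function('phi')(t, varpi)

u, v, w = f, phi*sp.sin(c), -phi*sp.cos(c)        # (5.2)-(5.3)
div = sp.diff(u, x) + sp.diff(v, y) + sp.diff(w, z)
assert sp.simplify(div) == 0
```

The $y$- and $z$-derivatives each produce the factor $\sin c\,\cos c\,\phi_\varpi$ with opposite signs, so the divergence vanishes identically.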
Then $$\Phi_1=f_t-\sigma f_{\varpi\varpi}-\frac{\sin c}{R_0}\phi,\eqno(5.4)$$ $$\Phi_2=(\phi_t-\sigma\phi_{\varpi\varpi})\sin c+\frac{1}{R_0}f,\eqno(5.5)$$ $$\Phi_3=(\sigma\phi_{\varpi\varpi}-\phi_t)\cos c-\sigma R(\psi+z).\eqno(5.6)$$ By (3.5), $$f_{\varpi t}-\sigma f_{\varpi\varpi\varpi}-\frac{\sin c}{R_0}\phi_{\varpi}=0,\eqno(5.7)$$ $$(\phi_t-\sigma\phi_{\varpi\varpi})_{\varpi}+\frac{\sin c}{R_0}f_{\varpi}+\sigma R\psi_{\varpi}\cos c=0.\eqno(5.8)$$ Modulo (1.14)-(1.16), we have $$f_t-\sigma f_{\varpi\varpi}-\frac{\sin c}{R_0}\phi=0,\eqno(5.9)$$ $$\phi_t-\sigma\phi_{\varpi\varpi}+\frac{\sin c}{R_0}f+\sigma R\psi\cos c=0.\eqno(5.10)$$ Denote $$\left(\begin{array}{c}\hat f\\\hat\phi\end{array}\right)=\left(\begin{array}{cc}\cos\frac{t\sin c}{R_0}&-\sin\frac{t\sin c}{R_0}\\ \sin\frac{t\sin c}{R_0}&\cos\frac{t\sin c}{R_0}\end{array}\right)\left(\begin{array}{c} f\\\phi\end{array}\right).\eqno(5.11)$$ Then (5.9) and (5.10) become $$\hat f_t-\sigma\hat f_{\varpi\varpi}-\sigma R\psi\cos c\;\sin\frac{t\sin c}{R_0}=0,\eqno(5.12)$$ $$\hat\phi_t-\sigma\hat\phi_{\varpi\varpi}+\sigma R\psi\cos c\;\cos\frac{t\sin c}{R_0}=0.\eqno(5.13)$$ On the other hand, (1.6) becomes $$\psi_t-\psi_{\varpi\varpi}=0.\eqno(5.14)$$ Assume $\sigma=1$. 
We have the following solution: $$\psi=\sum_{i=1}^m a_id_ie^{a_i^2t\cos 2b_i+a_i\varpi\cos b_i}\sin(a_i^2t\sin 2b_i+a_i\varpi\sin b_i+b_i+c_i),\eqno(5.15)$$ \begin{eqnarray*}\hat f&=&- RR_0\cot c\;\cos\frac{t\sin c}{R_0}\;\sum_{i=1}^m a_id_ie^{a_i^2 t\cos 2b_i+a_i\varpi\cos b_i}\sin(a_i^2t\sin 2b_i+a_i\varpi\sin b_i+b_i+c_i)\\ & &+\sum_{r=1}^n \hat a_r\hat d_re^{\hat a_r^2 t\cos 2\hat b_r+\hat a_r\varpi\cos \hat b_r}\sin(\hat a_r^2t\sin 2\hat b_i+\hat a_r\varpi\sin \hat b_r+\hat b_r+\hat c_r),\hspace{2.1cm}(5.16)\end{eqnarray*} \begin{eqnarray*}\hat \phi&=&- RR_0\cot c\;\sin\frac{t\sin c}{R_0}\;\sum_{i=1}^m a_id_ie^{a_i^2t\cos 2b_i+a_i\varpi\cos b_i}\sin(a_i^2t\sin 2b_i+a_i\varpi\sin b_i+b_i+c_i)\\ & &+\sum_{s=1}^k\tilde a_s\tilde d_se^{\tilde a_s^2t\cos 2\tilde b_s+\tilde a_s\varpi\cos \tilde b_s}\sin(\tilde a_s^2 t\sin 2\tilde b_s+\tilde a_s\varpi\sin \tilde b_s+\tilde b_s+\tilde c_s),\hspace{2.1cm}(5.17)\end{eqnarray*} where $a_i,b_i,c_i,\hat a_r,\hat b_r,\hat c_r,\hat d_r, \tilde a_s,\tilde b_s,\tilde c_s,\tilde d_s$ are arbitrary real numbers. 
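Before rotating back via (5.11), one can verify symbolically that (5.11) indeed converts the coupled system (5.9)-(5.10) into the decoupled pair (5.12)-(5.13). The sketch below assumes the Python library sympy, with $f,\phi,\psi$ left as arbitrary functions of $(t,\varpi)$:

```python
# Symbolic check (sympy) that the rotation (5.11) converts the coupled
# system (5.9)-(5.10) into the decoupled equations (5.12)-(5.13);
# sigma, R, R_0, c are the constants of this section.
import sympy as sp

t, w = sp.symbols('t varpi')
sigma, R, R0, c = sp.symbols('sigma R R_0 c', positive=True)
f, phi, psi = [sp.Function(n)(t, w) for n in ('f', 'phi', 'psi')]

th = t*sp.sin(c)/R0
fhat = sp.cos(th)*f - sp.sin(th)*phi              # (5.11)
phihat = sp.sin(th)*f + sp.cos(th)*phi

# time derivatives dictated by (5.9) and (5.10)
rules = {f.diff(t): sigma*f.diff(w, 2) + sp.sin(c)/R0*phi,
         phi.diff(t): sigma*phi.diff(w, 2) - sp.sin(c)/R0*f
                      - sigma*R*psi*sp.cos(c)}

lhs512 = (fhat.diff(t) - sigma*fhat.diff(w, 2)
          - sigma*R*psi*sp.cos(c)*sp.sin(th)).subs(rules)
lhs513 = (phihat.diff(t) - sigma*phihat.diff(w, 2)
          + sigma*R*psi*sp.cos(c)*sp.cos(th)).subs(rules)
assert sp.simplify(lhs512) == 0 and sp.simplify(lhs513) == 0
```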
According to (5.11), \begin{eqnarray*}f=-RR_0\cot c\;\cos\frac{2t\sin c}{R_0}\;\sum_{i=1}^m a_id_ie^{a_i^2t\cos 2b_i+a_i\varpi\cos b_i}\sin(a_i^2t\sin 2b_i+a_i\varpi\sin b_i+b_i+c_i) \\ +\cos\frac{t\sin c}{R_0}\;\sum_{r=1}^n \hat a_r\hat d_re^{\hat a_r^2 t\cos 2\hat b_r+\hat a_r\varpi\cos \hat b_r}\sin(\hat a_r^2t\sin 2\hat b_i+\hat a_r\varpi\sin \hat b_r+\hat b_r+\hat c_r)\hspace{2.6cm} \\ +\sin\frac{t\sin c}{R_0}\;\sum_{s=1}^k\tilde a_s\tilde d_se^{\tilde a_s^2t\cos 2\tilde b_s+\tilde a_s\varpi\cos \tilde b_s}\sin(\tilde a_s^2 t\sin 2\tilde b_s+\tilde a_s\varpi\sin \tilde b_s+\tilde b_s+\tilde c_s),\hspace{1.5cm}(5.18)\end{eqnarray*} \begin{eqnarray*}& &\phi=-\sin\frac{t\sin c}{R_0}\;\sum_{r=1}^n \hat a_r\hat d_re^{\hat a_r^2 t\cos 2\hat b_r+\hat a_r\varpi\cos \hat b_r}\sin(\hat a_r^2t\sin 2\hat b_i+\hat a_r\varpi\sin \hat b_r+\hat b_r+\hat c_r)\\ & &+\cos\frac{t\sin c}{R_0}\;\sum_{s=1}^k\tilde a_s\tilde d_se^{\tilde a_s^2t\cos 2\tilde b_s+\tilde a_s\varpi\cos \tilde b_s}\sin(\tilde a_s^2 t\sin 2\tilde b_s+\tilde a_s\varpi\sin \tilde b_s+\tilde b_s+\tilde c_s).\hspace{0.8cm}(5.19)\end{eqnarray*} Suppose $\sigma\neq 1$. 
We take the following solution of (5.11)-(5.14): $$\psi=\sum_{i=1}^m a_id_ie^{a_i^2t+a_i\varpi},\eqno(5.20)$$ \begin{eqnarray*}\hat f&=&\sigma R\sum_{i=1}^m a_id_ie^{a_i^2t+a_i\varpi}\frac{\cos c\:\left[a_i^2(1-\sigma)\sin\frac{t\sin c}{R_0}-R_0^{-1}\sin c\:\cos\frac{t\sin c}{R_0}\right]}{a_i^4(1-\sigma)^2+R_0^{-2}\sin^2c} \\ & &+\sum_{r=1}^n \hat a_r\hat d_re^{\hat a_r^2\sigma t\cos 2\hat b_r+\hat a_r\varpi\cos \hat b_r}\sin(\hat a_r^2\sigma t\sin 2\hat b_i+\hat a_r\varpi\sin \hat b_r+\hat b_r+\hat c_r),\hspace{1.5cm}(5.21)\end{eqnarray*} \begin{eqnarray*}\hat \phi&=&\sigma R\sum_{i=1}^m a_id_ie^{a_i^2t+a_i\varpi}\frac{\cos c\:\left[a_i^2(\sigma-1)\cos\frac{t\sin c}{R_0}-R_0^{-1}\sin c\:\sin\frac{t\sin c}{R_0}\right]}{a_i^4(1-\sigma)^2+R_0^{-2}\sin^2c} \\ & &+\sum_{s=1}^k\tilde a_s\tilde d_se^{\tilde a_s^2\sigma t\cos 2\tilde b_s+\tilde a_s\varpi\cos \tilde b_s}\sin(\tilde a_s^2\sigma t\sin 2\tilde b_s+\tilde a_s\varpi\sin \tilde b_s+\tilde b_s+\tilde c_s),\hspace{1.4cm}(5.22)\end{eqnarray*} where $a_i,b_i,c_i,\hat a_r,\hat b_r,\hat c_r,\hat d_r, \tilde a_s,\tilde b_s,\tilde c_s,\tilde d_s$ are arbitrary real numbers. 
According to (5.11), \begin{eqnarray*}f&=&\cos\frac{t\sin c}{R_0}\;\sum_{r=1}^n \hat a_r\hat d_re^{\hat a_r^2\sigma t\cos 2\hat b_r+\hat a_r\varpi\cos \hat b_r}\sin(\hat a_r^2\sigma t\sin 2\hat b_i+\hat a_r\varpi\sin \hat b_r+\hat b_r+\hat c_r) \\&& +\sin\frac{t\sin c}{R_0}\;\sum_{s=1}^k\tilde a_s\tilde d_se^{\tilde a_s^2\sigma t\cos 2\tilde b_s+\tilde a_s\varpi\cos \tilde b_s}\sin(\tilde a_s^2\sigma t\sin 2\tilde b_s+\tilde a_s\varpi\sin \tilde b_s+\tilde b_s+\tilde c_s)\\ & &-\sigma R\sum_{i=1}^m\frac{a_id_ie^{a_i^2t+a_i\varpi}\sin 2c}{2R_0(a_i^4(1-\sigma)^2+R_0^{-2}\sin^2c)} ,\hspace{6.6cm}(5.23)\end{eqnarray*} \begin{eqnarray*}\phi&=&-\sin\frac{t\sin c}{R_0}\;\sum_{r=1}^n \hat a_r\hat d_re^{\hat a_r^2\sigma t\cos 2\hat b_r+\hat a_r\varpi\cos \hat b_r}\sin(\hat a_r^2\sigma t\sin 2\hat b_i+\hat a_r\varpi\sin \hat b_r+\hat b_r+\hat c_r)\\ & &+\cos\frac{t\sin c}{R_0}\;\sum_{s=1}^k\tilde a_s\tilde d_se^{\tilde a_s^2\sigma t\cos 2\tilde b_s+\tilde a_s\varpi\cos \tilde b_s}\sin(\tilde a_s^2\sigma t\sin 2\tilde b_s+\tilde a_s\varpi\sin \tilde b_s+\tilde b_s+\tilde c_s)\\ & &-\sigma R\sum_{i=1}^m\frac{a_i^3d_i(\sigma-1)e^{a_i^2t+a_i\varpi}\cos c}{a_i^4(1-\sigma)^2+R_0^{-2}\sin^2c} .\hspace{7.5cm}(5.24)\end{eqnarray*} By (5.4)-(5.6), (5.9) and (5.10), $\Phi_1=0$, $$\Phi_2=\left(\frac{\cos c}{R_0}f-\sigma R\psi\sin c\right)\cos c,\eqno(5.25)$$ $$\Phi_3=\left(\frac{\cos c}{R_0}f-\sigma R\psi\sin c\right)\sin c-\sigma Rz.\eqno(5.26)$$ According to (3.4), \begin{eqnarray*}p&=&\frac{R\cos^2 c}{\sin c}\cos\frac{2t\sin c}{R_0}\;\sum_{i=1}^m d_ie^{a_i^2t\cos 2b_i+a_i\varpi\cos b_i}\sin(a_i^2t\sin 2b_i+a_i\varpi\sin b_i+c_i) \\ & &-\frac{\cos c}{ R_0}\cos\frac{t\sin c}{R_0}\;\sum_{r=1}^n \hat d_re^{\hat a_r^2 t\cos 2\hat b_r+\hat a_r\varpi\cos \hat b_r}\sin(\hat a_r^2t\sin 2\hat b_i+\hat a_r\varpi\sin \hat b_r+\hat c_r) \\&& -\frac{\cos c}{ R_0}\sin\frac{t\sin c}{R_0}\;\sum_{s=1}^k\tilde d_se^{\tilde a_s^2t\cos 2\tilde b_s+\tilde a_s\varpi\cos \tilde b_s}\sin(\tilde a_s^2 t\sin 
2\tilde b_s+\tilde a_s\varpi\sin \tilde b_s+\tilde c_s)\\ & &+R\sin c\:\sum_{i=1}^m d_ie^{a_i^2t\cos 2b_i+a_i\varpi\cos b_i}\sin(a_i^2t\sin 2b_i+a_i\varpi\sin b_i+c_i)+\frac{R}{2}z^2\hspace{1.2cm}(5.27)\end{eqnarray*} modulo the transformation in (1.14)-(1.16) if $\sigma=1$, and \begin{eqnarray*}p&=&-\frac{\cos c}{\sigma R_0}\cos\frac{t\sin c}{R_0}\;\sum_{r=1}^n \hat d_re^{\hat a_r^2\sigma t\cos 2\hat b_r+\hat a_r\varpi\cos \hat b_r}\sin(\hat a_r^2\sigma t\sin 2\hat b_r+\hat a_r\varpi\sin \hat b_r+\hat c_r) \\&& -\frac{\cos c}{\sigma R_0}\sin\frac{t\sin c}{R_0}\;\sum_{s=1}^k\tilde d_se^{\tilde a_s^2\sigma t\cos 2\tilde b_s+\tilde a_s\varpi\cos \tilde b_s}\sin(\tilde a_s^2\sigma t\sin 2\tilde b_s+\tilde a_s\varpi\sin \tilde b_s+\tilde c_s)\\ & &+ \sum_{i=1}^m\frac{d_iRe^{a_i^2t+a_i\varpi}\sin 2c\;\cos c}{2R_0^2(a_i^4(1-\sigma)^2+R_0^{-2}\sin^2c)}+R\sin c\;\sum_{i=1}^m d_ie^{a_i^2t+a_i\varpi}+\frac{R}{2}z^2 ,\hspace{1.7cm}(5.28)\end{eqnarray*} modulo the transformation in (1.14)-(1.16). In summary, we have:\vspace{0.4cm} {\bf Theorem 5.1}. {\it Let $a_i,b_i,c_i,\hat a_r,\hat b_r,\hat c_r,\hat d_r, \tilde a_s,\tilde b_s,\tilde c_s,\tilde d_s,c$ be arbitrary real numbers. Denote $\varpi=y\cos c+z\sin c$. We have the following solutions of the three-dimensional stratified rotating Boussinesq equations (1.3)-(1.7): $$u=f,\qquad v=\phi\sin c,\qquad w=-\phi\cos c,\qquad T=\psi+z,\eqno(5.29)$$ where (1) $f$ is given in (5.18), $\phi$ is given in (5.19), $\psi$ is given in (5.15) and $p$ is given in (5.27) if $\sigma=1$; (2) $f$ is given in (5.23), $\phi$ is given in (5.24), $\psi$ is given in (5.20) and $p$ is given in (5.28) when $\sigma\neq 1$.} \vspace{0.4cm} {\bf Remark 5.2}. By Fourier expansion, we can use the above solution to obtain the one depending on three arbitrary piecewise continuous functions of $\varpi$. Applying the transformation ${\cal T}_1$ in (1.12)-(1.13) to the above solution, we get a solution involving all the variables $t,x,y,z$. \bibliographystyle{amsplain}
Environmental aspects in financial market policy: Federal Council is informed of progress made

Bern, 03.03.2017 - Sustainability is also important for the financial markets. During its meeting on 3 March 2017, the Federal Council was informed about the relevant national and international developments in the area of environmental sustainability in financial market policy and about the Confederation's commitment. Information and opinions on this topic are regularly exchanged between the competent authorities and the sector, most recently at the end of February 2017. Owing to the increased demand for sustainability in financial business as well, the financial sector faces a challenge. Environmental issues such as climate change and water scarcity, for example, not only carry possible risks for financial stability but also open up potential for innovation. This in turn creates opportunities for Switzerland's financial centre. In February 2016, the Federal Council established principles for a consistent Swiss policy in this area. These principles have also been integrated into the Federal Council's report on the strategic thrusts of financial market policy of October 2016. In 2016, the competent federal authorities were not only actively involved in the corresponding international bodies, e.g. the G20 Green Finance Study Group (GFSG), but also deepened the dialogue with the sector. The parties discussed in existing forums how transparency could be increased in business operations and how potential risks and opportunities for the financial centre could be identified.
In cooperation with the Federal Office for the Environment (FOEN), the State Secretariat for International Finance Matters (SIF) organised a further round of talks with representatives of the finance industry on 28 February 2017 on the inclusion of environmental criteria in financial business. One of the joint objectives is to harmonise the sustainable investment methodologies developed by the sector and to work towards best practice in implementing them in day-to-day business. Furthermore, the FOEN has prepared a publication, together with experts from the financial sector, academia, non-governmental organisations and other federal offices, which recommends measures for a sustainable Swiss financial system. Finally, with the support of SIF, the FOEN has started to pave the way for measuring the climate compatibility of investments and financing. The competent authorities will continue the dialogue with the financial industry and industry associations and will selectively intensify it further. On the climate side, they will elaborate the fundamentals, based on the goal of the Paris Agreement to make financial flows consistent with a low-carbon pathway. At the international level, SIF, in cooperation with the FOEN, will continue to engage actively with international financial bodies and in particular will support the work of the G20 in the area of sustainability.

Address for enquiries: Anne Césard, SIF Communications, Tel. +41 58 462 62 91, anne.cesard@sif.admin.ch; Eliane Schmid, FOEN Media Section, Tel. +41 58 462 90 00, eliane.schmid@bafu.admin.ch
Joël Zakarias Kabongo (born 5 April 1998 in Albertslund) is a Danish-Zambian footballer. He has played for Brøndby IF since his youth and has made appearances for Danish youth national teams.

Career

Club

Kabongo came through the youth academy of Brøndby IF and received a professional contract in January 2017. He made no appearances for the first team of the Copenhagen suburban club in the 2016/17 season and was loaned out to second-division club Fremad Amager at the end of August 2017. On 3 September 2017, Kabongo made his first senior appearance, coming on for Heini Vatnsdal in the 82nd minute of the 1-0 win over Esbjerg fB on matchday three. He made ten appearances in the Danish second tier before returning to Brøndby IF.

National team

Kabongo made one appearance for the Danish U-16 national team and four for the U-18 national side before making the first of his six appearances for the Danish U-19 national team on 2 September 2016 in the 1-2 friendly defeat against Norway in Nyborg. In August 2018, Kabongo was called up to the Danish U-21 national team for the first time, for the European Championship qualifiers against Finland and Lithuania. On 14 November 2018, Kabongo made his debut for the Danish U-21 side in the 1-4 friendly defeat against Spain in Logroño.

External links

Entry in the database on the website of the Danish Football Association
Profile at transfermarkt.de
The experiments of Rayleigh and Brace (1902, 1904) were aimed to show whether length contraction leads to birefringence or not. They were some of the first optical experiments measuring the relative motion of Earth and the luminiferous aether which were sufficiently precise to detect magnitudes of second order to v/c. The results were negative, which was of great importance for the development of the Lorentz transformation and consequently of the theory of relativity. See also Tests of special relativity.

The experiments

To explain the negative outcome of the Michelson–Morley experiment, George FitzGerald (1889) and Hendrik Lorentz (1892) introduced the contraction hypothesis, according to which a body is contracted during its motion through the stationary aether.

Lord Rayleigh (1902) interpreted this contraction as a mechanical compression which should lead to optical anisotropy of materials, so the different refraction indices would cause birefringence. To measure this effect, he installed a tube of 76 cm length upon a rotatable table. The tube was closed by glass at its ends and was filled with carbon bisulphide or water, and the liquid was between two nicol prisms. Through the liquid, light (produced by an electric lamp and, more importantly, by limelight) was sent to and fro. The experiment was sufficiently precise to measure retardations of $\tfrac{1}{6000}$ of a half wavelength, i.e. of the order 1.2×10⁻¹⁰. Depending on the direction relative to Earth's motion, the expected retardation due to birefringence was of order 10⁻⁸, which was well within the accuracy of the experiment. Therefore, it was, besides the Michelson–Morley experiment and the Trouton–Noble experiment, one of the few experiments by which magnitudes of second order in v/c could be detected. However, the result was completely negative. Rayleigh repeated the experiments with layers of glass plates (although with a precision diminished by a factor of 100), and again obtained a negative result.[1]

However, those experiments were criticized by DeWitt Bristol Brace (1904). He argued that Rayleigh hadn't properly considered the consequences of contraction (0.5×10⁻⁸ instead of 10⁻⁸) as well as of the refraction index, so that the results were in no way conclusive. Therefore, Brace conducted experiments of much higher precision. He employed an apparatus that was 4.13 m long, 15 cm wide, and 27 cm deep, which was filled with water, and which could be rotated (depending on the experiment) about a vertical or a horizontal axis. Sunlight was directed into the water through a system of lenses, mirrors and reflexion prisms, and was reflected 7 times so that it traversed 28.5 m. In this way, a retardation of order 7.8×10⁻¹³ was observable. However, Brace also obtained a negative result. Another experimental installation with glass instead of water (precision: 4.5×10⁻¹¹) also yielded no sign of birefringence.[2]

The absence of birefringence was initially interpreted by Brace as a refutation of length contraction. However, it was shown by Lorentz (1904) and Joseph Larmor (1904) that when the contraction hypothesis is maintained and the complete Lorentz transformation is employed (i.e. including the time transformation), then the negative outcome can be explained. Furthermore, if the relativity principle is considered as valid from the outset, as in Albert Einstein's theory of special relativity (1905), then the result is quite clear, since an observer in uniform translational motion can consider himself as at rest, and consequently won't experience any effect of his own motion. Length contraction is thus not measurable by a comoving observer, and has to be supplemented by time dilation for non-comoving observers, which was subsequently also confirmed by the Trouton–Rankine experiment (1908) and the Kennedy–Thorndike experiment (1932).[3][4][A 1][A 2]

Primary sources

Lord Rayleigh (1902). "Does Motion through the Aether cause Double Refraction?" Philosophical Magazine. 4: 678–683. doi:10.1080/14786440209462891.
Brace, DeWitt Bristol (1904). "On Double Refraction in Matter moving through the Aether". Philosophical Magazine. 7 (40): 317–329. doi:10.1080/14786440409463122.
Lorentz, Hendrik Antoon (1904). "Electromagnetic phenomena in a system moving with any velocity smaller than that of light". Proceedings of the Royal Netherlands Academy of Arts and Sciences. 6: 809–831.
Larmor, Joseph (1904). "On the ascertained Absence of Effects of Motion through the Aether, in relation to the Constitution of Matter, and on the FitzGerald-Lorentz Hypothesis". Philosophical Magazine. 7 (42): 621–625. doi:10.1080/14786440409463156.

Secondary sources

Laub, Jakob (1910). "Über die experimentellen Grundlagen des Relativitätsprinzips". Jahrbuch der Radioaktivität und Elektronik. 7: 405–463.
Whittaker, Edmund Taylor (1910). A History of the Theories of Aether and Electricity (1st ed.). Dublin: Longman, Green and Co.
\"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-17\/segments\/1618038089289.45\/warc\/CC-MAIN-20210416191341-20210416221341-00057.warc.gz\"}"}
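The second-order magnitudes quoted above are easy to verify with a back-of-the-envelope calculation. The sketch below (in Python, and assuming the commonly used round value of about 30 km/s for Earth's orbital speed, which the article itself does not state) compares the expected (v/c)² effect with the retardations each apparatus could resolve:

```python
# Back-of-the-envelope check of the article's orders of magnitude.
# Assumed inputs (not taken from the article): v_earth and c below are
# the usual rounded textbook values.
v_earth = 3.0e4   # Earth's orbital speed, m/s (approximate)
c = 3.0e8         # speed of light, m/s (approximate)

# The predicted birefringence signal is second order in v/c.
expected = (v_earth / c) ** 2          # ~ 1e-8, matching the article

rayleigh_limit = 1.2e-10               # Rayleigh (1902): smallest detectable retardation
brace_limit = 7.8e-13                  # Brace (1904), water apparatus

print(f"second-order effect (v/c)^2 : {expected:.1e}")
print(f"margin over Rayleigh's limit: {expected / rayleigh_limit:.0f}x")
print(f"margin over Brace's limit   : {expected / brace_limit:.0f}x")
```

On these assumed values the expected effect is roughly two orders of magnitude above Rayleigh's sensitivity and about four orders above Brace's, which is why the null results were considered decisive.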
Helmut Cämmerer (Hamburg, Germany, 5 May 1911 – date of death unknown) was a German sportsman who competed in flatwater canoeing. He took part in the 1936 Berlin Olympic Games, winning a silver medal in the K-1 1000 m event. He also won a medal at the 1938 Canoe Sprint World Championships, and two medals at the European Canoe Sprint Championships, in 1933 and 1934. International medal record References External links List of Olympic and world championship medalists in canoeing (1936–2007): part 1, part 2. International Canoe Federation.
Keanu Reeves Claims He Had No Idea the Internet Loves Him "That's, uh…that's wacky." Davey Adesida At this point, failing to notice that Keanu Reeves has been everywhere lately would be something of an accomplishment. Over the past few months, the 54-year-old actor has appeared on-screen in roles as varied as John Wick and a parody of himself; been named the new face of Saint Laurent; and, most recently, been dutifully promoting his upcoming film Toy Story 4. Oh, and he's also gone viral for everything from posing for photos with women to buying ice cream to responding to a question about what happens after death. And yet, someone has failed to notice: none other than Keanu Reeves himself. At least to the point that he appears to have never heard of the term "Internet boyfriend" until a reporter from People asked him how he felt about being "dubbed" one on Tuesday, at the Los Angeles premiere of Toy Story 4. "I've been what?" he asked, leaning to turn his ear closer to the reporter. To be fair to Reeves, the term is a bit antiquated at this point—it hit its peak between 2016 and 2017—but if anyone fits the bill at the moment, it's probably Keanu Reeves. "Everyone is just kind of gushing over you on the Internet," the reporter explained. "You didn't know that yet?" "No. That's, uh," he said, pausing to laugh. "That's wacky." Reeves did seem to appreciate hearing that "it's all good things" that have been said about him online. "Well, the positivity's great," he added. "It's really special how John Wick was embraced." The actor sounded much more excited about getting to voice the character of Duke Caboom, Toy Story 4's daredevil motorcyclist. He was so excited, in fact, that he jumped onto a table at Pixar when he first learned of the role. "I just kind of got inspired," Reeves recalled. "The character's so full of life." Duke Caboom, Keanu Reeves's character in *Toy Story 4* (2019).
© 2019 Disney/Pixar
Paradiclybothrium pacificum is a species of flatworm. It belongs to the genus Paradiclybothrium and the family Diclybothriidae. No subspecies are listed in the Catalogue of Life.
A. Eracleous Holdings is a member of A. Eracleous Electrical Installations Ltd, a company whose activities in Cyprus began in 1983 under Mr. Andreas Eracleous. The company has since become one of the leading firms in the electrical installations sector in Cyprus, well known for quality and consistency. Some of its projects are the New Nicosia General Hospital, Anassa Hotel, Tunnel of Paphos, Nicosia Government Buildings, Famagusta Hospital, Aretaieion Private Hospital, Piraeus Bank, Tiffany Shopping Mall, and various hotels, army facilities, hospitals, banks, shopping centers and housing complexes all over Cyprus. A. Eracleous Holdings Ltd is a land and property development company, founded in 2002. From its early stages, the firm has set the highest possible standards for a successful future based on customers' needs. Our philosophy is to meet our customers' satisfaction and highest expectations by offering the finest residential and commercial properties, individual in character, style and concept, at prime locations with exceptional, stylish architectural and construction quality, focused on the comfort and practical requirements of our customers.
The Vietnam War Reaches Kent State University in a Matter of 13 Seconds
Maria Esquivel

Alison Krause, 19; William Schroeder, 19; Jeffrey Miller, 20; and Sandra Scheuer, 20. These four individuals come from different backgrounds, but they share the same piercing story. What is it that binds them together? Could it be their age? Could it be that they each attended the same university? Or maybe that they shared the last day of their lives together? During the latter half of the 1960s, anti-war rallies were common in the United States; most of them were led by college activists. One reason for this was their opposition to the Vietnam War, and particularly to the draft lottery of 1969, which targeted young men between the ages of nineteen and twenty-five. Students united to fight for one common cause, and that was to stay out of the Vietnam War. Many individuals were angered by President Nixon's decision to continue the United States' involvement in the Vietnam War, which led to a great number of student protests and anti-war rallies nationwide.
A variety of protests took place in Kent, Ohio, during a four-day span in 1970. There was a buildup of anger, violence, and social upheaval, until finally the ticking bomb detonated and four individuals lost their lives. On May 4, 1970, each of their lives was ended by a single bullet. This tragedy divided the nation even more. But what exactly led to the Kent State University shooting? During the 1968 presidential election, Republican nominee Richard Nixon won the presidency. One of his main campaign promises was that he would end the Vietnam War. Nixon started to keep his promise, as the United States' troop commitment stopped increasing; but on April 28, 1970, he broke his promise by sending U.S. military forces to invade Cambodia. This was known as the Cambodian Incursion; the purpose of the invasion was to attack the Viet Cong, a group of North Vietnamese communists who had been using Cambodian territory as a sanctuary.1 The news of the invasion was announced to the American public on April 30, when the president explained his plan on television and radio. The news of the invasion did not sit well with the public, and many became infuriated. Public opinion was divided between those who agreed with the president and those who wanted to get out of the Vietnam War.

Former President Richard Nixon Announcing the Cambodian Attack | Courtesy of WETA

The following day, on May 1, 1970, protests began at Kent State University. The rally was held on the school's Commons by a group known as World Historians Opposed to Racism and Exploitation (WHORE). The Commons was a field located at the center of Kent State, popular for rallies and campus meetings among the students. WHORE, along with the New University Conference, sponsored the anti-war rally. The event at this point was not violent in any way, but rather calm.
About 500 demonstrators attended to protest the Cambodian Incursion. Nonetheless, a group of "rally leaders buried a copy of the United States Constitution, declaring that it had been 'murdered' when troops had been sent into Cambodia without a declaration of war or consultation with Congress."2 As the rally was coming to an end, a new protest was planned for May 4 at the university's Commons. However, the May 1 protests were not completely over yet, as many students began assembling in downtown Kent along North Water Street, a strip that consisted of six bars. It was a well-known spot for students because the "sale of 3.2 beer to person 18 or older, and of liquor to 21 year olds" was legal in Kent.3 Protesting started off rather peacefully; then things turned violent when demonstrators started to taunt police officers and began throwing beer bottles at their vehicles. The mayor of Kent, Leroy M. Satrom, was made aware of what was happening in the late hours, so he ordered all bars to be closed. This only angered the mob more. Protesters vandalized businesses, breaking windows and even stealing store goods. A discussion of Cambodia had turned into a riot, with protesters igniting a bonfire in the road, making it difficult for vehicles to get around.4 Around 2:00 AM deputies were able to clear the crowd from downtown Kent by using tear gas and moving most of them back to the Kent State campus.5

ROTC Building Burned by Protesters | Courtesy of CNN Online

The second wave of protests in Kent became excessively violent; rallies turned into riots and threats persisted across the city. On May 2, Mayor Satrom asked Ohio governor James Rhodes for assistance in sending the Ohio National Guard to Kent. The National Guard was supposed to arrive in Kent during the afternoon but did not arrive until 10:00 PM, because the Ohio National Guard was stationed in Northeast Ohio.
However, a civil emergency was declared for Kent and a curfew was implemented from 8:00 PM until 6:00 AM.6 At Kent State the curfew was ignored, and damage followed when a group of individuals burned down the Reserve Officer Training Corps (ROTC) building. "A large crowd of over five hundred protesters gathered around the burning building to cheer; the crowd slashed the firefighters' hoses, temporarily preventing them from extinguishing the blaze."7 The demonstrators fought against the guardsmen and once again tear gas was used to disperse the crowd. It was the second day of protests in Kent, Ohio, and rallies were becoming increasingly violent as the hours went by. The city was becoming a war zone between anti-war protesters and the Ohio National Guard. Richard Nixon had been sworn in as the 37th president of the United States of America in 1969. He was a prominent and successful president in foreign affairs, and "something of a closet intellectual who read widely and thought deeply about history and diplomacy."8 One of his promises during his first term as president was that he would be the president to end the Vietnam War once and for all; he would "relieve the anti-war and anti-draft pressure at home" by adopting Vietnamization. But regardless of his approach to ending the Vietnam War, he increased turmoil by secretly bombing Cambodia, which he believed would decrease communist control; it only made matters worse. In a way, he believed he was far greater and more powerful than Congress: "Nixon wanted to demonstrate to his 'enemies' that he could operate a secret diplomacy just as they did, and that he would not be pushed around by anti-war mobs in the streets, Congress, and his special enemy, the media."
With his invasion of Cambodia, he created the "greatest violence and instability in history on American campuses, including the killing of four students by National Guardsmen at Kent State University in May 1970 and strong opposition in the Democratic-controlled Congress."9 So did the president now have blood on his hands, having kept Congress and the American people in the dark about the Cambodia invasion? Not only did he cause great distress within the nation, but during his second term as president he was guilty of covering up the break-in at the Democratic National Committee headquarters, which became the scandal that brought down his presidency, known as the Watergate scandal. Richard Nixon was forced to resign as the thirty-seventh president of the United States. The Vietnam War caused great distress to American soldiers and to American civilians seeing the destruction of the war. Many wanted to stay out of Vietnam, but the president had other ideas. Vietnamization was the term first used by Secretary of Defense Melvin Laird to describe Richard Nixon's plan for the Vietnam War. "Vietnamization entailed the progressive withdrawal of U.S. forces from South Vietnam combined with efforts to enhance the training and modernization of all South Vietnamese military forces to enable the government of South Vietnam to assume greater responsibility for the conduct of the war."10 This meant that the United States was slowly withdrawing troops from the war and giving complete responsibility and control to the South Vietnamese. By 1970, 150,000 American troops had been withdrawn from Vietnam. However, although this might have worked in many respects, "Nixon's plan to Vietnamize the war actually increased the number of American casualties.
The American public was traumatized by media coverage of the death and destruction."11

The Most Famous Image of the Kent State Shooting: Mary Ann Vecchio, a 14-Year-Old Runaway, over the Body of One of the Victims, Jeffrey Miller | Courtesy of Slate Magazine

The protests continued at Kent State University; it was now the third day in a row that students and anti-war activists were protesting the United States' involvement in the Vietnam War. Guardsmen surrounded the campus. "Nearly 1,000 Ohio National Guardsmen occupied the campus, making it appear like a military war zone."12 Governor Rhodes was irritated with the events that had been transpiring, so on Sunday morning he held a press conference, where he voiced a harsh statement against the protesters. He stated that these protests were the "most vicious form of campus-orientated violence" he had witnessed and that he would provide everything in his power to have all forms of authority regulate Kent, Ohio; he continued by calling the protesters the worst type of people harbored in America. He said, "We are going to eradicate the problem…we are not going to treat the symptoms."13 However, this did not stop the rallies; it only worsened an already violent atmosphere. Throughout the day, confrontations between the people and the guards continued. "Rocks, tear gas, and arrests characterized a tense campus."14 Following the press conference, 12,000 leaflets were distributed to the public. The leaflets listed "curfew hours; said the governor through the National Guard had assumed legal control of the campus; stated that all outdoor demonstrations and rallies, peaceful or otherwise, were prohibited by the state of emergency; and said that the Guard was empowered to make arrests."15 With these leaflets and the governor's press conference, many believed that the worst was finally over, but they were wrong. The worst was still on its way.
School property, such as windows, was destroyed and quite a few arrests were made that night. The guardsmen were becoming highly outraged by the protesters' unwillingness to cooperate; therefore, a curfew was put in place.16 The following day, May 4, the last day of the violent protests, was about to bring an unexpected end. "Student movements have the potential to generate major social change in the context of underlying economic, demographic, and other social forces. This makes student movements a strategic factor in assessing the nature of some consequential social change developments in society."17 The United States had experienced many types of social change. One movement was triggered by the 1954 Supreme Court decision Brown v. Board of Education, another by the 1960 Greensboro sit-ins, and another by the 1964 Freedom Summer, all of them civil rights movements by African Americans with the goal of equality and respectability. However, the civil rights movements all had one thing in common, and that was to bring about change. The Student Non-violent Coordinating Committee, the Red Power Movement, the Chicano Movement, and the Anti-War Movement were some of the many movements that fought for change, all involving mass protests demanding equality, fighting against racism and police brutality, and fighting for improved labor conditions. These movements later inspired other movements for change, such as the environmentalism of the 1970s and the women's rights and gay rights movements, to name the most prevalent. There were thousands of student protests in the United States in the late 1960s and early 1970s, many going unnoticed, others gaining the attention of the nation. Whatever the cause, student protests "in their manifestos and calls for nonviolent cultural revolution, democratic reform, unity with the oppressed, or violent revolution, students worldwide called into question the system, that is, the entire ordering of modern society.
The protests were often as much a celebration of youth as efforts directed at sharply defined goals."18 The freedom to bring attention to a cause by organizing student protests, strikes, or boycotts has become familiar in the United States, as many students began to realize their potential for bringing change to the nation; all they had to do was speak up and become part of something greater. An example of this is the school shooting that occurred in Parkland, Florida in 2018. High school students around the nation participated in a seventeen-minute school walkout to honor the seventeen lives lost at Marjory Stoneman Douglas High School. The attention these students received was used to bring change to America. Their purpose was not to allow themselves to become just another statistic; they didn't want this to become just another mass shooting in America that people would soon forget and move past. They sought to use this opportunity for young people to spark a change that would make a difference. This led students, activists, believers, parents, teachers, and supporters to Washington, D.C. to plead for stricter gun laws. This movement has become known as March for Our Lives, and its objective is to create regulations for gun owners: not to take away their protection, but to require better universal background checks and to raise the age of gun purchases to 21 rather than 18. Their goal is to stop other students from experiencing the horror that they had to live through on February 14, 2018, and to spare other parents the suffering of losing loved ones to yet another shooting massacre. On the last day of the anti-war rallies, students were prohibited by university officials from protesting on the school's Commons. However, by noon there was already a large crowd of protesters.
"About 500 core demonstrators were gathered around the Victory Bell at one end of the Commons, another 1,000 people were 'cheerleaders' supporting the active demonstrators, and an additional 1,500 people were spectators standing around the perimeter of the Commons. Across the Commons at the burned-out ROTC building stood about 100 Ohio National Guardsmen carrying lethal M-1 military rifles."19 The night before, the Ohio National Guard had had at least three hours of sleep; therefore, many were hoping that the protests would not take place. The rally, however, was quite peaceful at the beginning. It is not clear whether this rally was to protest the National Guard stationed at the university or the Cambodian invasion. Either way, there was a record attendance. Harold E. Rice was a Kent State officer who ordered students to move away from the Commons. Students responded with profanity against the guards, taunting them with "Pigs off campus," and started to throw rocks at them. Guards began throwing tear gas canisters at the crowds, causing many to leave the premises; but others threw the canisters back at the guards. "Some among the crowd came to regard the situation as a game—'a tennis match' one called it—and cheered each exchange of tear gas canisters." Guardsmen began advancing toward the students to clear the Commons. As the students moved away, they headed toward Blanket Hill on the university grounds. The guardsmen headed straight toward an enclosed practice field. The guards then tried to make their way back to Blanket Hill, but some felt fearful for their lives.20 Although there were numerous aggressive individuals, many were only bystanders. It is not clear why one guardsman fired his pistol, but soon other troops began to fire into the air, at the ground, and into the crowd. In a matter of thirteen seconds, between 61 and 67 shots were fired.21 Four students were killed and nine were injured. Two were protesters and the other two were bystanders.
Allison Krause, 19, was killed by a bullet that went through her left upper arm and into her left side. She was protesting and was 110 yards away when she was killed. William Schroeder, 19, was killed by a bullet that went through his left back and seventh rib. He was a bystander and was 130 yards away when he was killed. Jeffrey Miller, 20, was killed by a bullet to the mouth. He was protesting and was the closest to the guards, 85 to 90 yards away, when he was killed. Sandra Scheuer, 20, was killed by a bullet through the left side of her neck. She was a bystander and was 130 yards away when she was killed.22 Many believe it was the fault of both parties; the President's Commission on Campus Unrest reported: "Violence by students on or off the campus can never be justified by any grievance, philosophy, or political idea. There can be no sanctuary or immunity from prosecution on the campus. Criminal acts by students must be treated as such wherever they occur and whatever their purpose. Those who wrought havoc on the town of Kent, those who burned the ROTC building, those who attacked and stoned National Guardsmen, and all those who urged them on and applauded their deeds share the responsibility for the deaths and injuries of May 4."23 The tragedy that happened on May 4, 1970 was a wake-up call to America: what are we doing when our children are being shot? Although the Kent State shooting was violent at times, no individual deserved to be killed for protesting or for being a bystander to the protest. After this tragedy, those in favor of the war fell suddenly silent, while America mourned. Shortly after the event, the tragic day was further memorialized in the famous song "Ohio" by the rock group Crosby, Stills, Nash, and Young.

Notes

Jerry M. Lewis and Thomas R. Hensley, "The May 4 Shootings at Kent State University: The Search for Historical Accuracy," Kent State University (1998). https://www.kent.edu/may-4-historical-accuracy.
"The Report of the President's Commission on Campus Unrest" (U.S. Government Printing Office, Washington, D.C., 1970), 240.
"The Report of the President's Commission on Campus Unrest" (U.S. Government Printing Office, Washington, D.C., 1970), 241.
Government, Politics, and Protest: Essential Primary Sources, 2006, s.v. "Kent State Shootings," by John Filo.
The Scribner Encyclopedia of American Lives, Thematic Series: The 1960s, 2003, s.v. "Nixon, Richard Milhous," by Melvin Small.
The Scribner Encyclopedia of American Lives, Thematic Series: The 1960s, 2003, s.v. "Nixon, Richard Milhous," by Melvin Small.
Dictionary of American History, 2003, s.v. "Vietnamization," by Vincent H. Demma.
Encyclopedia of Modern Asia, 2002, s.v. "Vietnam War," by Richard C. Kagan.
Jerry M. Lewis and Thomas R. Hensley, "The May 4 Shootings at Kent State University: The Search for Historical Accuracy," Kent State University (1998). https://www.kent.edu/may-4-historical-accuracy.
"The Report of the President's Commission on Campus Unrest" (U.S. Government Printing Office, Washington, D.C., 1970), 254.
"The Report of the President's Commission on Campus Unrest" (U.S. Government Printing Office, Washington, D.C., 1970), 258-259.
Encyclopedia of Sociology, 2001, s.v. "Student Movements," by Leonard Gordon.
World History Encyclopedia, 2011, s.v. "Student Protest Movements, 1945-1960," by Alfred J. Andrea and Carolyn Neel.
"The Report of the President's Commission on Campus Unrest" (U.S. Government Printing Office, Washington, D.C., 1970), 259, 260, 263, 267, 268, 274.
Based on the principles agreed in the Combination Agreement, the governance arrangements are set out in the by-laws and the Board internal rules of EssilorLuxottica.

1. The EssilorLuxottica Executive Chairman will have equal powers with the EssilorLuxottica Executive Vice-Chairman, in compliance with the by-laws of EssilorLuxottica as approved by Essilor's general shareholders' meeting on May 11, 2017. As referred to in those by-laws, the Board rules of procedure reflect the equal powers of the EssilorLuxottica Executive Chairman and the EssilorLuxottica Executive Vice-Chairman. In addition, neither the EssilorLuxottica Executive Chairman nor the Chairman of any of the Committees referred to below will have a casting vote.

The EssilorLuxottica Executive Chairman of the Board, together with the EssilorLuxottica Executive Vice-Chairman of the Board, will organize and direct the work and meetings of the Board of Directors of EssilorLuxottica, on which they will report to the EssilorLuxottica shareholders' general meeting. They will ensure the smooth functioning of the Board of Directors and, in particular, that the Directors are able to fulfil their missions. As provided in the Board internal rules, except where the EssilorLuxottica Executive Chairman and/or the EssilorLuxottica Executive Vice-Chairman is absent from a meeting, the EssilorLuxottica Executive Chairman will involve the EssilorLuxottica Executive Vice-Chairman in each mission vested in him in his capacity as Chairman of the Board under applicable laws, regulations and the recommendations of the AFEP/MEDEF Code, including the missions relating to the Company's shareholders' meetings (which he shall organize and direct together with the EssilorLuxottica Executive Vice-Chairman), and all corresponding decisions will be taken jointly with the EssilorLuxottica Executive Vice-Chairman.

2.
The Board of Directors is composed of 16 members: 8 designated by Essilor (including the EssilorLuxottica Executive Vice-Chairman) and 8 designated by Delfin (including the EssilorLuxottica Executive Chairman). They are all appointed for the initial term (i.e., from the completion of the Contribution until the date of the annual general shareholders' meeting called to approve the 2020 annual accounts of EssilorLuxottica). Following the initial term, the Board members of EssilorLuxottica will have a term of office of three years, and any new member of the Board of Directors of EssilorLuxottica will be proposed for election at EssilorLuxottica's general shareholders' meeting by the Board of Directors of EssilorLuxottica, upon recommendation by the compensation and nomination committee of EssilorLuxottica or by any EssilorLuxottica shareholder in accordance with applicable law, without any regard to the provenance of the nominees from Luxottica or Essilor. the only one represented on the Board of Directors of EssilorLuxottica.

3. Four specialised committees (Nomination and Compensation Committee, Audit & Risk Committee, Corporate Social Responsibility (CSR) Committee and Strategy Committee) are set up, each comprising four members (two from the current Board of Directors of Essilor and two designated by Delfin); they are chaired by a representative of Luxottica or by a Director who was a member of the Board of Directors of Essilor in office as of the date of the Combination Agreement and designated by Essilor, as defined by the Board internal rules.
With respect to the Strategy Committee, unless otherwise determined by a joint decision of the EssilorLuxottica Executive Chairman and the EssilorLuxottica Executive Vice-Chairman, the Chairman of that committee shall invite all members of EssilorLuxottica's Board of Directors to attend (but not to vote at) its meetings, except for meetings convened to discuss sensitive and significant acquisition projects.

The chair of the Nomination and Compensation Committee and of the Corporate Social Responsibility Committee is held by a Director who was a member of the Board of Directors of Essilor in office as of the date of the Combination Agreement and designated by Essilor; the chair of the Audit & Risk Committee and of the Strategy Committee is held by a Director designated by Delfin. None of the committee chairpersons has a casting vote.

The management of EssilorLuxottica (and its staff members) will be located at 1-6 rue Paul Cézanne, 75008 Paris.

An Integration Committee will be put in place, co-chaired by the EssilorLuxottica Executive Chairman and the EssilorLuxottica Executive Vice-Chairman, to provide a forum for them and to define the measures required to implement the Integration and the synergies.
//
//  UIImage+Ext.h
//  HHMusic
//
//  Created by liumadu on 14-10-2.
//  Copyright (c) 2014 hengheng. All rights reserved.
//

#import <UIKit/UIKit.h>

@interface UIImage (Ext)

+ (UIImage *)stretchableImage:(NSString *)imagePath left:(NSInteger)leftCapWidth topCapHeight:(NSInteger)topCapHeight;

/**
 *  Load an image
 *
 *  @param name image name
 */
+ (UIImage *)imageWithName:(NSString *)name;

/**
 *  Return a freely stretchable image
 */
+ (UIImage *)resizedImageWithName:(NSString *)name;

+ (UIImage *)resizedImageWithName:(NSString *)name left:(CGFloat)left top:(CGFloat)top;

/**
 *  Create a 100x128 image filled with the given color
 */
+ (UIImage *)imageWithColor:(UIColor *)color;

/**
 *  Create an image filled with the given color, at the given size
 */
+ (UIImage *)imageWithColor:(UIColor *)color size:(CGSize)size;

/**
 *  Scale an image down to a new size
 */
+ (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize;

@end
Dogs will offer UWindsor students "Paws from Stress," today in the International Student Centre.

Therapeutic dogs to help relieve exam stress

Students experiencing stress during final exams can find relief today, Thursday, December 12, in the International Student Centre, with a little canine counselling. The highly trained dogs of Therapeutic Paws of Canada will be on hand from 2 to 3:30 p.m. for any student needing a break from end-of-semester pressures. The centre is located on the second floor of Laurier Hall.

International Student Centre
Strategic Priority: Provide an exceptional undergraduate experience

External Job Postings
Jan 7, 2022 - Online Learning Technologies Analyst "VII" in the Office of Open Learning
Sep 7, 2021 - Major Gift Officer Classification "VII" in the Faculty of Engineering (Substitute vacancy)
Sep 7, 2021 - Major Gift Officer Classification "VII" in the Faculty of Human Kinetics / Advancement Departments

Internal/External Job Postings
Jan 17, 2022 - Digital Media and Production Technologist Classification "VI" in the Department of Public Affairs and Communications
Jan 14, 2022 - Multi-Media Coordinator Classification "VI" in the Department of Public Affairs and Communications
Jan 14, 2022 - Facilities / Equipment / Events Technician Classification "IV" in the Department of Athletics & Recreational Services
Jan 12, 2022 - Sustainability Officer Classification "VII" in the Office of the Provost and Vice-President, Academic
Jan 11, 2022 - Communications Coordinator - Research Classification "IV" in the Department of Public Affairs & Communications (PAC)

Security vulnerability for SPSS users
UWinsite Student reboot
Firewall update Nov 25 6:00-7:30 a.m. (complete)
UPDATED: UWinsite Student performance issues
UWinsite Student: unavailable 6 p.m. Nov 18 to Nov 22 - Resolved

Jan 11, 2022 - Metal Frame Desks For Sale

Project Notifications
Education Building - Access/Classroom 1101 Renovations, January 24-26, 2022, 7:00 a.m. - 11:00 a.m.
Great Lakes Institute of Environmental Research (GLIER) - Hot Water Shutdown/Water Leak Repair, January 20, 2022, 8:00 a.m. - 12:00 p.m.
Alan Wildeman Centre of Creative Arts, Fire Alarm System Testing, January 25, 2022, 8:00 a.m. - 4:00 p.m.
Windsor Hall, Fire Alarm Systems Operations and Verification, January 28, 2022, 8:00 a.m. - 4:00 p.m.
SoCA Armouries, Fire Alarm Systems Operations and Verification, January 25, 2022, 9:00 a.m. - 12:00 p.m.
2 Tone is a music genre created in England in the late 1970s by fusing elements of ska, punk rock, reggae, rocksteady, and new wave. 2 Tone is classified as the second wave of ska and as the precursor of the third-wave ska scene of the 1980s and 1990s.

History
The 2 Tone sound was invented by young musicians from Coventry, West Midlands, England, who grew up on the Jamaican music of the 1960s. They combined the influences of ska, reggae, and rocksteady with punk rock and new wave. Bands considered part of the genre include The Specials, The Selecter, The Beat, Madness, Bad Manners, and The Bodysnatchers. The term was coined by Jerry Dammers, keyboardist of The Specials. The music was popular among skinheads, rude boys, and some fans of the mod revival genre.

Museum
On 1 October 2010, the 2-Tone Central museum opened in a Coventry University building; by August 2011 it had moved to the 2-Tone Village in Stoke. It includes an exhibition space, the Coventry Music Wall of Fame, a café, a gift shop, and a Caribbean restaurant. Many of the museum's items are on loan from members of The Selecter, The Beat, and The Specials.

Further reading
Neville Staple, Original Rude Boy (Aurum Press, 2009) ISBN 978-1-84513-480-8
Paul Williams, You're Wondering Now: The Specials From Conception to Reunion (Cherry Red Books, 2009) ISBN 978-1-901447-51-4
Pete Chambers, Coventry Market in a Round About Way (Tencton Planet Publications, 2009) ISBN 978-0-9544125-7-9
Pete Chambers, The 2-Tone Trail: The Roots of Two-tone Music (Tencton Planet Publications, 2005) ISBN 978-0-9544125-3-1
Dave Thompson, Wheels Out Of Gear: 2-Tone, The Specials and a World In Flame (Soundcheck Books, 2011) ISBN 978-0-9566420-2-8

External links
2 Tone info - history and discography of 2 Tone
2 Tone Tribute - from BBC Coventry & Warwickshire
2 Tone Collection - a collection of 2 Tone recordings
2-Tone Central - museum, café
Source: https://www.physicsforums.com/threads/why-do-photons-allow-doppler-shift.929654/

# B Why do photons allow Doppler shift

1. Oct 25, 2017
### Buckethead
If we (a detector) are moving toward a star that emits a single photon (due to its distance) and that photon hits our detector, it will be blueshifted. My question is why. If the color of a photon is a reflection of its energy level, and since the photon is always coming at us at c irrespective of the speed at which we are traveling toward the star, then why does the color of the photon change if we increase our speed?

2. Oct 25, 2017
### Paul Colby
In current theory, light is composed of electromagnetic waves that have quantized amplitudes. The wave (of which the photon is the energy and momentum absorbed by the detector, changing the wave's state by one amplitude step) is Doppler shifted just like a classical wave would be.

3. Oct 25, 2017
### Buckethead
But why, even if we view the photon as a wave, would it be Doppler shifted if its velocity relative to the detector does not change? It seems to me this is equivalent to being in a plane and talking to someone while the plane is in the air. The velocity of the sound wave does not change relative to you (just like light in my example), and as a result its pitch does not change.

4. Oct 25, 2017
### Staff: Mentor
If you consider the receiver to be at rest, then the source is moving towards the receiver. This does not affect the speed with which the waves approach the receiver, but every successive wave crest has to cover slightly less distance than the one immediately before it. Thus, the time between the reception of two consecutive crests is not the same as the time between their emission; it is somewhat less, because the second crest travels a shorter distance and so arrives a bit sooner than it otherwise would. This will probably be clearer if you try drawing a spacetime diagram showing the paths of successive wave crests through spacetime.

That explains the classical Doppler effect. There is an additional relativistic correction from time dilation because the emitter is moving relative to the receiver.

5. Oct 25, 2017
### Mister T
The speed is the same but the energy is different. It's the energy, not the speed, that determines the frequency (color).

For massive particles the increase in speed with respect to energy approaches zero as the speed approaches $c$. In other words, as you approach speed $c$, huge increases in energy produce negligible increases in speed. My point is that the relationship between speed and energy is strange compared to the non-relativistic relationship, and the photon is a purely relativistic particle.

6. Oct 26, 2017
### Buckethead
I'm specifying a single photon which left the star millennia ago, when the sensor was at rest; the sensor later accelerated to some modest speed toward the star, so the relative speed between the star and the observer couldn't be a factor.

But why is the energy different? If the speed of the photon never changes relative to the observer, then why would its energy change?

7. Oct 26, 2017
### Staff: Mentor
There's no quantum mechanics involved in this problem (aside from statistical effects at the receiver, which are a distraction here), so no photons are involved: you have a flash of light travelling from the emitter to the receiver. This flash of light is an electromagnetic wave.

If the receiver is approaching the source, then the crests of the wave will be closer to one another in the frame in which the receiver is at rest than in the frame in which the emitter is at rest. This will become clear if you draw a diagram.

The light is moving at $c$ in all frames, but the wavelength is shorter in the frame in which the receiver is at rest. Therefore the frequency and the energy are greater. This is just another example of the general fact that kinetic energy is always frame-dependent; the only surprising thing is that the energy carried by a flash of light depends on the frequency and wavelength, not the speed.

8. Oct 26, 2017
### Buckethead
I'm good with this, since the property of light (wave or particle) depends on the experiment you are performing, so wave it is.

I'm not sure how to draw a diagram to reflect this, and I get what you are saying with regard to viewing this with the receiver at rest, but let's look at this another way. Suppose the emitter and receiver are at rest and the star emits a short pulse. Sometime later, before the light reaches the receiver, the emitter accelerates toward the receiver and then coasts at a constant speed. I expect the receiver will detect no color change. Now reverse the experiment and instead accelerate the receiver, which then coasts at a constant speed just before the pulse reaches it. I expect the receiver will see a blueshift. First, is this correct? If so, I find this curious, since when the pulse was emitted the relative speed between the two was 0. And by the time the receiver finished accelerating and was coasting, its speed relative to the emitter is no longer relevant, since the pulse is in space between the two. In other words, the emitter could explode and vanish so that all that's left is the receiver and the pulse of light somewhere in space. The receiver is moving at c relative to the pulse of light, and yet when it arrives it will be blue.

The only difference between the emitter accelerating and the receiver accelerating is the acceleration itself. But if the accelerations occur only while the pulse is between the emitter and receiver, and if both are moving at a constant velocity relative to each other while the pulse is still in between, then in both cases the situation between the pulse and the receiver is identical. The relative speed between the pulse and the receiver remains c in both cases, and in both cases the speed of the emitter should be irrelevant, since what the emitter does is isolated from the pulse already on its way.

9. Oct 26, 2017
### PAllen
Another way to look at this is purely kinematically. Four-momentum is a vector that transforms per the Lorentz transform. If something has a four-momentum in the emission frame, it will have the Lorentz transform of that four-momentum in another frame, e.g. the receiver's. If you Lorentz transform (E, p), then specialize to the case E = p for a massless particle (or light), you get the relativistic Doppler formula. So, given that light must have energy and momentum, it must undergo relativistic Doppler shift between frames due to Lorentz invariance. I should say, you get the Doppler formula applied to E; then the energy of a photon determines its frequency.

Last edited: Oct 26, 2017

10. Oct 26, 2017
### Staff: Mentor
Yes.
If the light is moving to the right in some given frame, and the frequency of the light is $\nu$ as measured by an observer at rest in that frame:
- If the receiver is moving to the left in that frame at the moment of reception, then the frequency in the frame in which the receiver is at rest will be $\nu+a$; the receiver will measure a blueshift.
- If the receiver is moving to the right in that frame at the moment of reception, then the frequency in the frame in which the receiver is at rest will be $\nu-b$; the receiver will measure a redshift.
What happens to the emitter after the light is emitted is irrelevant; all that matters is that the frequency was $\nu$ in the frame in which the emitter was at rest at the moment of emission. Likewise, what happens to the receiver before the light is received is irrelevant; all that matters is the speed of the receiver at the moment of reception.
We get different results in the emitter-accelerates and the receiver-accelerates cases because after the acceleration:
- in the first one, the receiver is at rest in the frame in which the frequency is $\nu$;
- in the second one, the receiver is moving to the left in the frame in which the frequency is $\nu$.

Last edited: Oct 26, 2017

11. Oct 26, 2017
### robphy
Along the lines of @PAllen's comment, here are some energy-momentum diagrams of photons in different frames of reference. (These were taken from my post to a different question on another site: https://physics.stackexchange.com/questions/362125/momentum-conservation-with-photons/363478#363478)

In the rest frame of the source, light signals of the same frequency are emitted in the forward and backward directions. (The original question I responded to asked about the velocity of the source after emission.) The diagram visualizes the conservation-of-four-momentum problem
$$\tilde P_{fin}+\tilde {k_1}+\tilde {k_2}=\tilde P_{init}$$
in the rest frame of the source, and in the lab frame (which observes the source moving).

Note that in the lab frame the forward light signal's four-momentum is increased (compared to that signal's four-momentum in the rest frame) and the backward light signal's four-momentum is decreased (similarly). Thus, in the lab frame, the light signals have different frequencies.

12. Oct 26, 2017
### phyzguy
Yes, this is correct. I'm surprised you find this curious. Suppose I throw a baseball at you at 50 miles per hour. While the baseball is in flight, you accelerate towards it to 200 miles per hour. Is it surprising that the baseball will hit harder than if you stayed stationary? I think not. The analogy is not perfect, since light moves at a constant speed, but it shows that the receiver's speed can matter. As Nugatory has explained, the receiver's speed at the point of absorbing the pulse is what matters, because it causes the wave crests to arrive closer together.

13. Oct 26, 2017
### PAllen
The situation is not symmetric at all. What the emitter does after emission is obviously wholly irrelevant, as is what the receiver does after reception. If the emitter changes speed before emission, that will have an effect. If the receiver changes speed before reception, that will have an effect. Both of these before/after statements are invariant because they involve events along one worldline (timelike).

14. Oct 26, 2017
### jartsa
When you do a small acceleration, everything in the universe changes shape (for you). The change is very small for slow-moving objects, and not so small for fast-moving objects. The Lorentz-contraction formula can be used to calculate the change of shape of all things, except those "things" that move at the speed of light. If we use 0.99999c as the speed the light pulse moves, we get almost correct results using the Lorentz-contraction formula.

And the derivation of a general contraction formula is trivial.

Oh yes, this was about energy. During your acceleration there is a (gravitational) potential difference between every point of the universe (for you), and when you do a small acceleration, everything in the universe changes energy (for you). The amount of change is the original energy of the object multiplied by the potential difference between the starting position and end position of the object.

Typically fast-moving objects, like light pulses, have a large distance between their starting position and end position.

Last edited: Oct 26, 2017

15. Oct 26, 2017
### Mister T
Ask yourself why you expect the energy to depend on the speed. In non-relativistic physics we expect the energy to be proportional to the square of the speed, but that is just the non-relativistic approximation. It's flawed, and the flaws become more and more apparent the closer you get to speed $c$. At speed $c$ there is no dependence at all. Photons of various energies all have the same speed.

16. Oct 26, 2017
### Wes Tausend
Skinnier, but taller photons. :)

17. Oct 27, 2017
### Buckethead
Thanks, everyone, for contributing to the answers. I've read every post carefully (understanding some more than others) and I'm enjoying that it is indeed clearing my head a little. I understand the conclusions and have no reservations that red/blue shift occurs under the stated circumstances, but I am still struggling with this from a strictly logical point of view. Here is my most concise way of putting it:

The emitter and receiver have zero relative velocity. The emitter sends out a pulse, then both emitter and receiver accelerate in the same direction and cease accelerating some time later, all while the pulse of light is somewhere between the two. At all times the emitter and receiver have zero relative velocity. The pulse reaches the receiver and is blueshifted. During the entire time, the pulse should also have had a velocity of c relative to both the emitter and the receiver. Therefore, since the only players in question are the emitter, the pulse, and the receiver, and since their relative velocities never changed, there should not have been a blueshift.

Do I see a flaw in my argument? Yes. From a third frame watching this whole thing from the side, the acceleration caused the relative velocity between the pulse and the receiver to change; hence we have a blueshift. So there is a way out, but I still don't understand why, if the velocity between the receiver and the pulse never changed from the perspective of the receiver, it would see a blueshift. Not sure I'll ever really understand this.

18. Oct 27, 2017
### Ibix
Not true in relativity from the receiver's perspective, due to the relativity of simultaneity; see Bell's spaceship paradox.

19. Oct 27, 2017
### PAllen
I am not sure what the confusion is, but I'll try another clarification. The only thing that matters for Doppler shift in SR is the relative velocity between the emitter at the emission event and the receiver at the corresponding reception event. What the emitter or receiver do before or after these events is wholly irrelevant (for that reception event). Simultaneity is irrelevant. The Doppler shift then results from the Lorentz boost for this relative velocity. The analysis can be done in any frame because the relative velocity described is frame-invariant.

20. Oct 27, 2017
### jartsa
I mean we pick some speed close to c as the speed that the light pulse moves according to the observer before the observer has accelerated. When the observer accelerates some small amount, the speed that the light pulse moves according to the observer changes by a tiny amount.
{"url":"https:\/\/gmatclub.com\/forum\/the-formula-for-calculating-the-final-veloc-ity-of-a-body-initially-260283.html","text":"GMAT Question of the Day - Daily to your Mailbox; hard ones only\n\n It is currently 17 Dec 2018, 09:03\n\n### GMAT Club Daily Prep\n\n#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.\n\nCustomized\nfor You\n\nwe will pick new questions that match your level based on your Timer History\n\nTrack\nYour Progress\n\nevery week, we\u2019ll send you an estimated GMAT score based on your performance\n\nPractice\nPays\n\nwe will pick new questions that match your level based on your Timer History\n\n## Events & Promotions\n\n###### Events & Promotions in December\nPrevNext\nSuMoTuWeThFrSa\n2526272829301\n2345678\n9101112131415\n16171819202122\n23242526272829\n303112345\nOpen Detailed Calendar\n\u2022 ### 10 Keys to nail DS and CR questions\n\nDecember 17, 2018\n\nDecember 17, 2018\n\n06:00 PM PST\n\n07:00 PM PST\n\nJoin our live webinar and learn how to approach Data Sufficiency and Critical Reasoning problems, how to identify the best way to solve each question and what most people do wrong.\n\u2022 ### R1 Admission Decisions: Estimated Decision Timelines and Chat Links for Major BSchools\n\nDecember 17, 2018\n\nDecember 17, 2018\n\n10:00 PM PST\n\n11:00 PM PST\n\nFrom Dec 5th onward, American programs will start releasing R1 decisions. 
# The formula for calculating the final velocity of a body, initially at rest

**Math Expert** (Joined: 02 Sep 2009) — 22 Feb 2018

The formula for calculating the final velocity of a body, initially at rest, that undergoes a constant acceleration is v^2 = 2ad, where v is the final velocity, a is the acceleration, and d is the distance traveled. If a body initially at rest is subjected to a constant acceleration of 10 meters/second^2 until it reaches a velocity of 20 meters/second, how far, expressed in meters, has the body traveled?

(A) 200
(B) 100
(C) 40
(D) 20
(E) 10

**Senior PS Moderator** (Joined: 26 Feb 2016) — 22 Feb 2018

Substituting the given values into the equation yields the answer.

Given: $$v^2 = 2ad \;\Rightarrow\; d = \frac{v^2}{2a}$$

Here v = 20 and a = 10, so the distance is $$d = \frac{v^2}{2a} = \frac{400}{2 \cdot 10} = \frac{400}{20} = 20$$ (Option D).

**examPAL Representative** (Joined: 07 Dec 2017) — 22 Feb 2018

We'll show a complementary approach to the above, useful especially if you get lost in the wordiness of the question. This is an alternative approach that works backward from the answer choices.

Say the median answer, (C), is correct. Then v^2 = 2ad = 2·10·40 = 800, meaning v is the square root of 800, which is more than 20. Too much! Since (C) gave too large an answer, we look for a smaller option. Say (D) is correct. Then v^2 = 2·10·20 = 400 and v = 20, as required.

(D) is our answer.

**BSchool Forum Moderator** (Joined: 07 Jan 2016) — 23 Feb 2018

v^2 = 2ad

Given v = 20 and a = 10, find d:

d = v^2/(2a) = (20·20)/(2·10) = 20

(D) imo
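Both solution styles in the thread can be sanity-checked with a short script. A minimal sketch in Python; the helper name `distance_from_rest` is my own, not from the thread:

```python
# Check the kinematics answer: v^2 = 2*a*d  =>  d = v^2 / (2*a).

def distance_from_rest(v: float, a: float) -> float:
    """Distance traveled from rest under constant acceleration `a`
    when the body reaches final velocity `v`."""
    return v ** 2 / (2 * a)

# Direct substitution, as in the first reply.
d = distance_from_rest(v=20, a=10)
print(d)  # 20.0 meters -> answer (D)

# Backsolving, as in the second reply: plug each choice into v^2 = 2*a*d
# and keep the one that reproduces v = 20.
choices = {"A": 200, "B": 100, "C": 40, "D": 20, "E": 10}
match = [label for label, dist in choices.items()
         if (2 * 10 * dist) ** 0.5 == 20]
print(match)  # ['D']
```

The backsolving loop mirrors the forum's answer-testing trick: choice (C) gives v² = 800 (too large), while (D) gives exactly 400 = 20².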
Copyright © 2010 by Lisa Napoli

All rights reserved. Published in the United States by Crown Publishers, an imprint of the Crown Publishing Group, a division of Random House, Inc., New York.

www.crownpublishing.com

CROWN and the Crown colophon are registered trademarks of Random House, Inc.

Library of Congress Cataloging-in-Publication Data
Napoli, Lisa
Radio Shangri-la: what I learned in the happiest kingdom on earth / Lisa Napoli.
p. cm.
1. Bhutan—Description and travel. 2. Napoli, Lisa, 1963—Travel—Bhutan. I. Title.
DS491.5.N37 2010
954.98—dc22 [B]
2009049176

eISBN: 978-0-307-45304-4

_Title page photo by Lauren Dong_
_Jacket design by Laura Duffy_

v3.1

_For Kinga Norbu and all the children, may they find a happy path in a peaceful world_

_For my friends, may they feel as much love and support as they've given_

_For my parents, who taught me that family doesn't have to be biological_

# CONTENTS

_Cover_
_Title Page_
_Copyright_
_Dedication_
_Epigraph_
_Preface: Three Good Things_
_Chapter 1:_ THE THUNDERBOLT, PART ONE
_Chapter 2:_ "WELCOME, JANE!"
_Chapter 3:_ RADIO SHANGRI-LA
_Chapter 4:_ BEWARE THE _E MADATSE_
_Chapter 5:_ GOD OF THE NIGHT
_Chapter 6:_ BHUTAN ON THE BORDER, OR, THE START-UP COUNTRY
_Chapter 7:_ THE SYMPHONY OF LOVE
_Chapter 8:_ MY BEST FRIENDS IN THE WORLD RIGHT NOW
_Chapter 9:_ THE THUNDERBOLT, PART TWO
_Chapter 10:_ DAWN OF DEMOCRACY
_Chapter 11:_ AMERICA 101: "THAT'S COOL"
_Chapter 12:_ BABY WATCH POSTSCRIPT
_Epilogue:_ LOOSE MOTION
_Acknowledgments_
_Selected Bibliography_
_About the Author_

# _Grant your blessings so that confusion on the path may be eliminated_.

_Grant your blessings so that confusion may dawn as wisdom_.
_Please bless me so that I may liberate myself by attaining realisation_.
_Bless me so that I may liberate others by the strength of compassion_.
_May all connections I develop be meaningful_.

—HIS HOLINESS THE TWELFTH GYALWANG DRUKPA, _The Preliminary Practice of Guru Yoga_

_We are the station that makes you smile_.
_We can help you walk a mile_.
_And even when you stop and think_
_We can make you dance and sing_.
_Always do your thing, on Kuzoo FM_.
_Always do your thing, on Kuzoo FM_.

—KUZOO FM PROMOTIONAL JINGLE

# PREFACE: THREE GOOD THINGS

THE APPROACH TO the most sacred monastery in the Kingdom of Bhutan is steep and winding and, especially as you near the top, treacherous. You are sure with one false step you'll plummet off the edge. Had I been here during certain times over the last few years, I might have hoped I would.

It is a cold winter's Saturday, dark and overcast. Misty gray clouds, pregnant with snow, hug the mountains. My companions are several of the twenty-somethings who staff the new radio station in Bhutan's capital city, where I've come to volunteer. Kuzoo FM 90: The voice of the youth.

Pema is wearing jeans and a sweatshirt and flat white dress shoes, the kind you might put on with a demure frock for a tea party. Ngawang's wearing the same stuff on top, but she's got sneakers on her feet. Each woman carries a satchel stuffed with her _kira_, the official national dress, requisite attire for Bhutanese who reach the summit. Kesang is already wearing his _gho_, the male equivalent. Over it, he's carrying a backpack filled with ten pounds of oil to fuel dozens of butter lamps, offerings to be left for the gods.

Me, I'm twenty years older, and practicality reigns: I've got on my thick-soled boots, an ugly long black down coat with a hood, and six layers of clothing underneath. So much for the strength I've gained from my daily swimming regime; I am huffing and puffing against the altitude and the intensity of the climb. My new friends modulate their sprints to let me keep up. Bhutanese are hearty in many matters—they are used to living off the land, the hard lives of farmers—but they are particularly strong when it involves making the trek to this place called Takshang, built on a sheer cliff that soars ten thousand feet into the sky.
The depth of their devotion becomes abundantly clear when, out of nowhere, a radiant twelve-year-old boy scurries down past us, stark naked, completely unaffected by the temperature and the incline. He's trailed by a solemn entourage of grown men. Not one of them misses a step. Later, we learn this beatific adolescent is a reincarnated lama on pilgrimage from the remote eastern reaches of this tiny country. A pilgrimage to Takshang is the highlight of a trip to Bhutan, but it is commonplace for the Bhutanese. They are carried here from babyhood. Slight, frail seniors navigate the twists and turns and inclines deftly from memory, in a fraction of the time it takes foreigners half their age. Tales are told of people with physical disabilities who labor for twelve hours so they might reach the top, where a cluster of temples awaits. The most sacred of the altar rooms there is open to the general public only once a year. It is believed that meditating for just one minute at Takshang will bring you exponentially greater blessings than meditating for months at any other sacred site. If you travel here on a day the calendars deem to be auspicious, the merit you accumulate will be even more abundant. Ngawang tells us that the first time she remembers visiting was two decades ago, when she was four years old; her mother had died and the monks sent her here to pray. What Takshang promises all who visit is cleansing and renewal. Into this valley in the eighth century a sage named Guru Rinpoche rode in on the back of a tigress. Then he retreated to a cave for three months and, with the most powerful weapon there is—his mind—swashbuckled away evil spirits. In so doing, he persuaded the Bhutanese to adopt Buddhism as their guiding light. Hundreds of years later, to mark the feat, a colony of structures was built in this precarious location—testament to how the people of Bhutan have long revered him, this being they consider the Second Buddha. 
As we climb higher and higher, and as the gold-topped Takshang comes into view, I can feel Guru Rinpoche's strength bolstering my own, diminishing demons, softening my heart. THIS IS THE STORY of my midlife crisis—and how I wrestled with and then transcended it, thanks to a chance encounter that led me to a mysterious kingdom in Asia few have visited. In the march of years leading up to my fortieth birthday, and on the rapid ascent into that menacing decade, I'd found myself Monday-morning quarterbacking every step of my life, haunted by the revisionist history of regret. A near-continuous looping chorus of "what ifs" and "if onlys" became my soundtrack: _Why had I failed to have a family with a man I loved?_ _Why had I squandered my youth so haphazardly?_ _Why had I stuck with a profession that infuriated me so intensely?_ _What could I do with the second half of my life to make it more meaningful than the first?_ _How was I going to grow old gracefully?_ Inhaling the cold, clear air on that trek up to Takshang, on the other side of the world from home, the pain and noise of those questions began, finally, to melt away. To morph into a sense of acceptance and peace. No longer did I feel stuck on a treadmill of emptiness; now my life story read as full, exciting, wondrous—with limitless possibilities for the future. And we hadn't even reached the most sacred spot on the mountain. THE GROUNDWORK FOR this awakening had been laid months earlier, when I had only the vaguest of ideas where this place called Bhutan was on the planet. Every Wednesday evening, I headed west on the clogged I-10 freeway in Los Angeles for an experimental workshop in positive psychology. In classic therapy—where you endlessly review all your personal history—you work to gain a better understanding of why you are the way you are, have done what you've done. But it isn't necessarily designed to help you move forward, much less reframe the way you look at the world. 
By now, with the help of various counselors, I'd navel-gazed a giant gaping hole in my belly button, dissecting my own personal history the way a Proust scholar did _Remembrance of Things Past_. And yet I still found myself swirling in a vortex of despair. I summoned a sense of optimism about this "happiness" class and hoped that it might at least be a salve, if not a cure, for how poorly I'd been handling the approach into middle age. It seemed unlikely, though, that a six-week class could possibly jolt me into contentment, or anything approximating "happiness." Since the workshop wasn't yet fully developed, the teacher, Johnny, asked for our patience. We'd be acting as guinea pigs for this program, he said, and there wouldn't be any charge. A romp into positive thinking—gratis? I couldn't resist. One of the first things Johnny taught us was a Zulu warrior greeting. Everyone paired off, standing an arm's length across from his partner. Then we looked deep into each other's eyes. When you felt a connection, you were supposed to say, "I have come to see you." The other would respond, "I have come to be seen." The gazing would continue until that person felt the click and proclaimed, "I have come to see you." Then, you moved on to the next classmate. Each session began this way, to affirm that we were here for real, pure, honest interaction, heart to heart. You weren't supposed to say you had come to see and be seen until and unless you really meant it. Doing this exercise made me want to stare right into the eyeballs of every single person I encountered, to make up for all the times I'd distractedly engaged in "conversation." Gazing intensely at people I hardly knew reinforced how rarely I looked directly at the people I did know. We were all perennially distracted, attempting to triage the various competing demands on our time, multitasking our days away. 
Almost worse was communicating with the family and longtime friends who lived in other zip codes and time zones altogether. Static-filled cell phone conversations and emails were the hollow tools that connected us, eye contact sorely lacking. I craved meaningful human contact. Other simple assignments from Johnny were crafted to help us discover what we appreciated in ourselves, and what inspired us about others. "Describe in detail a person you love—and why"; "Write a toast to four difficult periods in your life, and how you handled them"; "Summarize your life story as if you were ninety and telling a child." The themes were the same for me each time. I could see that it was a triumph to have made my way in the world, despite various misguided choices with men, the random steps that comprised my career, even an act of violence I'd experienced as a young woman. I saw how fortunate I was to have been cloaked in love and support all along. How, in the absence of creating a traditional family for myself, I'd cobbled together an army of dear friends around the world with whom I enjoyed rich, textured relationships. It moved me how many of the people I knew ended our conversations with the words "I love you." That counted for something. It counted for a lot. No, it wasn't perfect. What was that, anyway? No one I knew would say their choices had yielded an ideal existence. Maybe my life wasn't conventional, by some old-fashioned definition of convention. The years had been flawed, but they had been leavened with much good. Such were the ingredients of a life. At the end of the third session, almost as a throwaway, Johnny assigned an exercise that really started to bring the jumble in my brain to order. It was a simple nightly ritual, and it taught me how to appreciate life in the most basic terms. "I want you to keep a notebook by your bed," he said. "And every night, before you go to sleep, I want you to review your day. 
Make a short list of three things that happened that were good."

"What if three good things didn't happen?" several of us asked in unison. Clearly, we weren't naturally wired with a positive way of looking at the world.

"Well, that's the point," he said. "This exercise challenges you to find three good things in each day. They don't have to be big things. In fact, most of the time they're _not_ going to be big things. Big, important things don't happen to us every day. Winning an award, getting married, starting a new job, going on vacation. It's the big spaces of time in between those monumental events that make up life. Right? The idea here is that little things have power. An interaction with someone in an elevator, or a clerk in a store. Small victories, like fixing something that's been annoying you in the house. Going for a really long run. I want you to see that every single day, three good things do happen. It will help you discover that goodness exists all around us, already."

You could feel the more skeptical of the class participants sigh. They wanted a different formula for happiness than that, the equivalent of a diet pill for the spirit. A "do this, do that, don't do this" list of action points, where they could just fill in the blanks and come out on the other end in a matter of weeks, angst-free—blissful.

But what Johnny said resonated with me, immediately. An image of my UPS man popped into my mind's eye, one of the few consistent characters in my life; each day, he delivered packages to both my apartment building and the office complex just across the street that housed my employer, a public radio show. Even though our exchanges rarely amounted to anything more than pleasantries and chatter about the weather, his kindly presence was often one of the day's highlights. Especially when I was working the graveyard shift and trying to adjust to sleeping during the day, he was often the most pleasant human interaction I'd experience.
People wouldn't be all I'd consider for my three good things. Since I'd moved to Los Angeles, no matter how uneventful or difficult the day had been, I'd marveled at the dance of light at sunset from my apartment on the eighteenth floor. That would undoubtedly make it onto my lists.

One of the more optimistic in our group asked, "What if you have more than three good things?"

"Lucky you!" said Johnny, laughing. "Well, write them all down. But try to pick the top three. Over time, you'll start to see which things make you happiest. It probably isn't what you think."

That night, I couldn't wait to go home and go to bed with my notebook.

THREE GOOD THINGS

1. Chris the sound engineer saying he was looking forward to working with me again
2. My friend Michael, who I never get to see because of work schedules and L.A. geography, taking me to a pre-birthday lunch
3. Happiness class

Throughout the next few days, I found myself taking mental notes of the interactions and experiences that might make my written list each night. There were many bedtimes when I had to search for the good things, when the three items of note might simply be my daily swim, the Goodyear blimp gliding by my building on its way to nearby Dodger Stadium, and the taste of a pork chop I'd fried up. Food items, the glow of the magnificent Southern California "golden hour," and quick, often silent exchanges with strangers frequently made my lists. So did shared meals, especially shared lunches, which I got to have only when I worked those overnight hours. Writing down that the food and the friend I'd enjoyed it with were great countered all that wasn't about having to get up and go to work at one in the morning as often as I did. And, of course, this was the point. Something good actually happened, even on the crummiest, hardest days. And those good things, the simplest things, were the most nourishing.
The ritual of this nightly exercise worked like a gym for the brain; over time, the lists started to strengthen me, to reverse my march of distress. I began to reframe the way I'd been thinking about life the past few long, heavy years, excavating the positive developments that had come out of it: How, after a painful end to an engagement when I was thirty-seven, I'd learned to swim, and now made the activity an essential part of every day. How, during a long period of unemployment, I'd taught myself to make soufflés, so I could continue hosting friends for dinner while not breaking my very tight budget.

Most important, I was learning to slow down, to sit with myself and the uncertainties of the future. To enjoy not knowing what was next, instead of fearing and panicking over what might be. To appreciate the successes I'd had, instead of dwelling on my failure to have accomplished more. As I sat in bed each night with my notebook, I didn't completely understand what was happening. I didn't see that I was making peace with myself, relaxing after a long war.

Then came that day on the mountain in Bhutan, when my fog began to lift, and my life began to focus.

# 1

# THE THUNDERBOLT, PART ONE

HARRIS SAID HE'D BE AT THE COOKBOOK PARTY BY 7:00 p.m., which gave me an hour to hang out with him there before I headed uptown to have dinner with another old friend and his family. The party was a bit out of the way, and I almost skipped it, but since I was only in my hometown, New York City, on rare occasions, I figured I might as well get out and see as many of the people I loved as I could.

What had brought me here from Los Angeles was the chance to fill in for a month at the New York bureau of the radio show where I was on staff as a reporter. I bolstered my energy for a busy evening of flitting around the city in hyper–social butterfly mode—a way of life I rarely indulged in anymore.
The walk from the office on East 47th Street to the party on 66th Street filled me with wonder and made me wistful for this place I loved so dearly. In early autumn, twilight in New York is magical; the sky glimmers and there's energy in the streets. You feel powerful, invincible, as if every gritty bit of the city is yours. I found myself doing a mental trick I hadn't done since I'd moved away: reciting the address of my destination while I walked as if it were the lyrics to a song. _Two-three-four_ / _East Sixty-sixth Street_ , I sang to myself over and over again this September evening, the clunky tune mingling with the click-clack of my bright pink "comfort" high heels. Inevitably, after all that repetition, I would muck up the street number, and I did this time, too. But there was such a crowd in front of one particularly gorgeous old brownstone, I didn't need to check the little slip of paper in my purse to know I'd arrived. Crazy busy. Some swanky food magazine editor was debuting a new cookbook. Harris had long been a foodie, and in the last few years had broken into writing about all things gourmet. Good for him to be mingling in such well-fed company. Now it seemed I'd have to fight a dreaded crowd to find him. How could I be a city person and hate mob scenes? As I made my way to the front door, I took a look up the staircase. It was packed with a crush of people. In the thick of it, facing in my direction, was _the_ most handsome man. He had a shock of brown hair and big brown eyes to match. I know it sounds ridiculous, but in that instant, the mob seemed to disappear. Much to my surprise and delight, I saw him looking right back. Not just in my direction, but at _me_. Our eyes locked, and, even from a distance, I could swear a sort of chemical reaction erupted between us. I'd read about these celebrated _coups de foudre_ , thunderbolts, where people met and fell in love at first sight. 
I knew from experience that an instant attraction could be intoxicating—and dangerous. As was the impulse to imagine that a momentary connection was something larger. But this thunderbolt felt different. This was a beautiful, instant intensity I'd never, ever experienced. Practical me prevailed: I had to find Harris. Time was tight. I peeled my eyes away from the handsome stranger and pushed through the thicket of people. After a series of wrong turns, I spotted him holding court in a corner of the room, smiling and gesturing as if he owned the place. Harris was so good at making people feel welcome, connected. Everyone clutched goblets of wine—no disposable plastic cups for this crowd. My friend did a round of introductions, and as he got to the end of the group, I was happily surprised to see the man from the staircase. "Lisa, this is my friend Sebastian I've been telling you about, who I'm going to Asia with next week. You know, for that story I'm writing for _Gourmet_ magazine. And Sebastian, this is Lisa, my friend who works in public radio out in L.A." He was better looking now that I could see him up close, and there was a warmth about him, an easy friendliness. I felt a bit self-conscious and suddenly a little off-kilter in my pink shoes. Long ago, I'd been one of those kids who hid under her mother's armpit to avoid looking at strangers. Then I went into the news business. Earning my living posing questions to people I didn't know had cured me of my innate shyness. Confidence was a good quality, one I was happy to have cultivated—especially now faced with this handsome man. Right at this instant, though, I found myself feeling unsure about how to proceed. I wanted to say something clever and prophetic, but I couldn't find the words. So I stuck out my hand, and he stuck out his, and we shook. 
Sebastian asked if I wanted a drink, and I said yes, and he said he'd get me one from upstairs, and I said I'd go with him, and there we were, presto, in our own conversational bubble. We talked a bit about public radio—always reliable upscale cocktail-party chitchat. With everyone captive in their cars, and smart programming in short supply thanks to budget cutbacks and media consolidation, the public-radio audience tuned in with almost cultlike devotion. Personally, I was sick of the news, and tried to avoid it as much as possible. At the same time, I appreciated the attention those commuters paid our show, and was grateful to have a job at a news outlet that had such an enormous, attentive audience. Better than having no audience at all. I'd been out of work a number of times, and underemployed, so I knew well what that was like. I also was very aware that in situations like this one, my profession converted into useful social currency. Once we had my wine and a refill for him, I started plying Sebastian with questions about his upcoming trip to Asia. He ticked off the itinerary: a swing through Hong Kong, a few provinces in China I had never heard of, two places in India whose names I knew simply because of their tea—Assam and Darjeeling—and, for a few days, the tiny neighboring Kingdom of Bhutan. "Ahh. The happiest place on earth," I said. I hoped my being dimly familiar with one relatively unknown country in all of Asia—and knowing the factoid that it was purportedly filled with blissfully happy people—might impress him. Although I'd never come anywhere close to the continent. I wasn't even certain just where on the continent Bhutan _was_. "Yes," he said, smiling. "Exactly." "I've always been curious about this happiness thing and Bhutan. It has to have something to do with the fact that television is banned there, right?" I'd now exhausted the extent of my knowledge about the obscure little nation. 
"Right, although His Majesty did let TV in a few years back," Sebastian said, his smile broadening and his eyes intense. "But it's still a very happy place. Hey, get a visa and come with us. Harris and I will be your guides."

What I wanted to say was that I would have driven to the airport and boarded a rocket to another galaxy with this man, whether or not my dear old friend Harris came along as chaperone. We kept talking, but I really don't remember what we said. I was lost in Sebastian. Then, a sort of internal alarm rang and jolted him into remembering he was looking for quarters for the parking meter. After I dug a bunch out of my purse and handed them over, I asked the time and discovered that the clock was ticking for me, too. I needed to head to the other side of town for dinner. A quick good-bye, and off I ran.

The friend I was meeting turned out to be running very late; I sat at the restaurant with his family as he called every five minutes with updates from the traffic jam. Ordinarily this would have annoyed me, but not tonight. Just knowing Sebastian was out there in the world improved my disposition immeasurably.

THE NEXT DAY, I sat in our midtown offices trying to motivate myself to research a story about rich young couples who were trading the plush suburbs surrounding New York City for a new crop of multimillion-dollar kid-friendly condo complexes being built right in the heart of Manhattan. With enough money, you could now have a family without disrupting your metropolitan lifestyle. Among other luxuries, like on-staff dog walkers and a wine cellar, these buildings offered concierges to assist the nannies.

An email popped into my inbox and saved me from my internal rant about conspicuous consumption and the decline of civilization. The very sight of the man's name made my heart beat faster.

Dear Lisa: It was great to meet you last night. I owe you a drink for all that change you dug up for me. When can you get together?
—Sebastian

Sebastian and Harris were leaving on their journey in just a few days, and by the time they returned, I'd be back home in Los Angeles. I could find a way to see him tonight. My calendar was totally open after work. I liked it that way, and this invitation reinforced why: The most interesting experiences seemed to happen spontaneously—just the opposite of how most everything worked in New York City, where every moment had to be planned by the quarter hour, lest you felt as if you might be "wasting" a bit of your precious time.

And yet I found myself hesitating to accept this invitation. I'd witnessed many a friend as they sabotaged or just plain avoided opportunities out of some sort of unexpressed fear that success or happiness might result. They became riddled with anxiety and self-loathing before they'd even sent in that cover letter or gone on that date. Now here I was, similarly paralyzed. The voice of this other me politely declined. It was easy to justify not seeing him. We lived on opposite sides of the country; launching into a relationship that was destined to be long-distance was preposterous, a mistake I'd made in the past that I'd vowed not to repeat. My, I was getting way, way ahead of myself.

Of course, none of this meant I just forgot him. Clicking out of the Web sites about yuppie family-friendly condos, I did what any smart, savvy person in the age of the Internet would do. I Googled him. He appeared, from what I could deduce, to be about my age. He had been in the tea business for a decade. He had been going to Bhutan, it seemed, for twenty years. It looked like he'd started as a guide, leading people there on exotic treks. Exhausting what I could dig up about him, I then searched for "Bhutan," and realized his offhand comment about my tagging along was a joke. There was no just "getting a visa" to this remote Himalayan nation.
Tourism to Bhutan had been permitted only since the 1970s, a time when the nation began to step out of its long-imposed isolation. An airport hadn't been built until 1984, and even now there were many restrictions; the government-run airline owned only two planes. You couldn't just tool around the country unescorted; you had to hire a guide to travel with you, and some areas still remained off-limits. To keep out all but the wealthiest visitors, a $200 per person, per day tourist tax was imposed. Other colorful, curious facts unfolded: Bhutan was considered the last Buddhist kingdom, as others around it like Tibet and Sikkim had been swallowed up in political battles waged by giant neighbors China and India. Little, independent Bhutan had been known as the Land of the Thunder Dragon since the twelfth century, when an important religious man heard a clap of thunder—believed to be the voice of a dragon—as he consecrated a new monastery. The nation had long deflected colonization and outside influence. Christian missionaries had come calling in 1627, but the only lasting legacy of these Jesuit priests from Portugal is a detailed written description of their travels there and the hospitality they enjoyed from the locals, who politely resisted conversion. Today, the majority of the people subsist by farming. There isn't a single traffic light anywhere in the country, not even in the capital city, the only capital in the world without them; instead, a uniformed police officer directs cars at a handful of particularly tricky intersections. As part of a campaign to preserve the culture, citizens are obliged to wear the traditional dress—intricate, colorful hand-woven pieces of cloth called _kira_ and _gho_. The reigning king had married four sisters simultaneously—the queens, they were called. Among them they had had ten children—eight of them born before an official marriage ceremony had taken place in 1988. 
There was a surreal portrait of the women standing shoulder to shoulder, wrapped meticulously in brightly colored _kira_ , perfect as dolls, each one gorgeous and just slightly different from the next. What was _that_ family dynamic like? Multiple simultaneous marriages weren't reserved for royalty, it seemed; this practice was allowed for all the citizens of Bhutan. Men and women, both. An Internet search didn't reveal how common this was. King Jigme Singye Wangchuck and his father before him had been progressive in a variety of ways: They'd been responsible for nudging, then catapulting Bhutan into the modern world after years of seclusion. Hard currency, roads, schools other than that of the monastic variety—all had been introduced in only the past forty years. Since Bhutanese would now need to study abroad to become doctors and lawyers and scientists necessary for the health and measured growth of the nation, the native tongue, Dzongkha, was replaced by English as the language of instruction. The ability to speak English was perceived as a passport to almost anywhere, a vital connection to the outside world as Bhutan moved into an era of progress and relative openness it had previously worked to avoid. Despite its isolationism, Bhutan had been at the vanguard in other ways. Long before the rest of the world started flaunting environmental concerns as a trendy marketing strategy, Bhutan's king had been winning awards for his genuine commitment to conservation. Clear-cutting was not allowed, and if a single tree was chopped, three had to be planted in exchange. By royal covenant, he had committed that 60 percent of Bhutan's forests would always be preserved. Unlike many Asian countries, Bhutan had not been transformed into a giant pollution-generating smokestack, nor was it overpopulated, with only 650,000 citizens. It was poor, but it prided itself on the fact that no beggars were on its streets. 
Babies weren't left on the doorsteps of orphanages; such institutions didn't exist. Everyone had roofs over their heads and something to eat. The people took care of one another. A royal form of welfare called _kidu_ allowed citizens in the most dire circumstances to petition the king for help. Perhaps the most unusual and intriguing aspect of this Land of the Thunder Dragon was its attitude toward development and consumerism—the policy that catapulted Bhutan to the formidable (if unquantifiable) distinction as a place populated with supremely happy people. Instead of measuring its economic progress by calculating the gross national product—a complex matrix detailing the monetary value of what a country churns out—His Majesty created a different scale. He proclaimed this philosophy, ironically, poetically, "Gross National Happiness." Economic progress at any cost, went the thinking, was not progress at all. Any force that threatened Bhutan's traditions or environment was cause for concern—and not worth inviting into the country. The well-being of the people was to be considered before the sheer generation of goods and cash, before rampant growth just for the sake of an upward slope on a graph. Quality of life was to take precedence over financial and material success. Compassion toward and cooperation with your fellow citizens was fundamental, essential, rather than mowing down the other guy with abandon so you could succeed. Social scientists and economists around the globe curiously studied GNH and this place that because of it had been dubbed "the happiest place on earth." What would the New York City couples buying $2.7 million apartments with nannies to assist their nannies think about these ideals? How about the audience and staff of the radio show where I worked, where the theme was money and business? Being, not having. Happiness above wealth. It sounded great to me; Bhutan certainly appeared to have its priorities straight. 
At least, it seemed to have the same priorities I was craving more of in my world. Could it be real? Or was it brilliant sloganeering, a marketing mirage? Maybe I'd figure out a way to get to Bhutan one day, to find out for myself. THREE WEEKS LATER, I'd returned to Los Angeles. One particularly frustrating day at work, I was sitting around, trying to invent some idea for a fifty-second story that would please the editors and fill the news hole in the next morning's show. Once the idea was approved, I'd begin chasing down sources by phone and begging for just five minutes for an interview. At least this wasn't one of the weeks where I had to go to work at 1:00 a.m. That shift required a different sort of madness than wrangling sound bites into radio news blurbs. Sebastian's name in my inbox provided relief once again. It was ridiculous how excited I got just seeing an email from him. I didn't think I was capable of being so smitten. Hi Lisa. How are you? Hope all's well in L.A. Harris is being an excellent sherpa on this trip. How would you like to go work for a start-up radio station in Bhutan? If so, let me know and I'll make an introduction to a friend of mine here who knows someone who needs help. Seems like a good way to get to Bhutan and up your alley, too? —Sebastian Was this for real? He couldn't be making up this kind of offer just to impress me. Could he? Suddenly, an exotic foreign experience seemed the antidote to my malaise; without thinking it through I wrote back and said yes. As soon as I hit Send, the questions surfaced: How would I take more than a week off? I was constantly reminded at work that younger and therefore less expensive talent lurked in the wings; I'd been unemployed for so long before taking this job, I couldn't just frivolously run away. Besides, impetuous work-related decisions weren't my style. And yet, even though I had no idea how it would sort out, I didn't worry for long. 
The possibility that my few skills might be useful to people in this faraway "happiest place on earth" warmed me. Sebastian virtually introduced me to a Mr. Phub Dorji and we began an email correspondence. He asked for my résumé, inquired how soon I could get to Bhutan, and told me that if I paid my own way, the station would cover the cost of my room and board. A plane ticket seemed a small price for this kind of experience; who knew what it might lead to? Mr. Dorji sent along a list of goals he hoped I could achieve: taking the station national, improving the professionalism of the on-air talent, figuring out how to better report on and deliver news, creating and selling radio advertisements. The station was called Kuzoo FM. _Kuzu zampo_ was Dzongkha for hello, which is how in truncated form it became the name of the radio station. The accompanying Web site, Kuzoo.net, looked to be a kind of social-networking hub for Bhutanese kids—as if that would cordon them off from everyplace else on the Net, keep them from interacting beyond Bhutan's borders, I thought cynically. "Kuzoo was started by the crown prince for the young people of Bhutan," Mr. Dorji wrote. Naturally, I thought, in this happy kingdom, the royalty would be in touch with the youth. When I asked him his exact role at Kuzoo, he was elusive: "I will keep that a mystery until you get here." As I worked out the details with this mysterious man on the other side of the world, a steady stream of communication with Sebastian erupted. He became my live human resource for all things Bhutanese. Was there really a radio station? Had he heard it? Were women respected? Was it safe for me to travel to Bhutan alone? While he patiently reviewed my many questions and offered as many answers as he could, I got the sense that he didn't understand what I was worried about. When you've been visiting a place for so long, very little about it seems daunting. 
One query Sebastian didn't (or wouldn't) answer was how he first got involved with Bhutan. Becoming a tour guide in Bhutan twenty years ago wasn't like picking up and heading to Tahoe to be a ski instructor. You had to have an in. "Ask one of these guys to tell you the story when you get there," he said coyly, and he attached to his email a list of people to look up when I arrived. Soon, our trip consultations graduated to the telephone. We were talking practically every day. He'd call with a quick thought or reminder. Like the importance of bringing long black socks as gifts for the men I'd meet; Sebastian said this leg covering was essential not just for warmth in winter but for style. "Buy half a dozen pairs, or more. They prefer the Gold Toe brand, because they stay up better and last longer. Get them in solid black. Bring lip gloss or boxes of tea for women." Not fancy Asian loose tea, he added. Plain old tea bags from America would impress. I trekked to Target and loaded up on a dozen pairs of Gold Toes, boxes of Celestial Seasonings, and various lipsticks. Finally, the most important detail of the trip had been arranged: I had in my hands a faxed copy of my visa from the Royal Government of Bhutan permitting me to enter the country. Now it was official. That's when I marched into my boss's office to propose an unpaid leave of absence of no more than six weeks. I was surprised at how easily he said yes. "Isn't that the place where there's a two-hundred-dollar-a-day tourist tax? And you don't have to pay? Go for it. What an amazing opportunity." Then he muttered something about an old acquaintance who'd visited the place a decade before, and how while I was there I should try to file some stories for our shows, before he swung back around to his mound of paperwork. 
The only not-so-smooth part of the plan came from my father, who couldn't quite grok the adventure I was about to have: _YOUR GOING TO A THIRD WORLD COUNTRY TO DO WHAT FOR FREE?_ he wrote in an email, which, given the block letters and misspelling, conveyed the concern he felt about his dear and only daughter going off to a foreign land he'd initially thought was in Africa. (As did many people, although most were too timid to even venture a geographic guess.) What had happened to me as a young woman years ago weighed heavy on his heart. The fact that he'd read online that the United States didn't have a diplomatic presence in Bhutan made this already faraway place seem even riskier. I assured him I wouldn't be going if I didn't feel safe. But my safety wasn't what I was thinking about. I had absolutely no idea what I would find on the other end—and that was the point. A FEW WEEKS before my departure, I did a routine online check of the government-owned Bhutanese newspaper _Kuensel_. It published in hard copy twice a week, but new stories were added online every day. In anticipation of my trip, I'd taken to looking at the Web site every morning while my editors decided the fate of us reporters for the day. I hadn't read the news in my own country so closely or with such interest in years. Even for a newbie to "Bhutanalia," the enormity of the newly published lead item was evident. "His Majesty Jigme Khesar Namgyel Wangchuck becomes the fifth Druk Gyalpo," read the headline. _Druk Gyalpo_ meant "Dragon King." The tone was so subtle, it read like a whisper. No _New York Post_ –style fanfare trumpeting this news. The matter-of-fact report detailed how the fourth king had announced his abdication during a speech to a group of yak herders in a remote village. By handing over the throne now, he would allow his eldest son to reign for a few years before democratic elections would be held. 
A constitutional monarchy, the king rationalized, was a more modern form of government, one he wanted to gift to his people during a peaceful time. He'd been slowly giving up power over the last two decades, establishing councils of advisors for various matters. Now, he said, was the time for his son to lead, and he was confident that under his guidance, "the Bhutanese people would enjoy a greater level of contentment and happiness." The newspaper described the reaction of his subjects as "stunned." They wanted nothing of this, no dilution of power for their monarchy. They weren't ready for this ruler to step down yet, either. The king was only fifty. The only person I could talk to about this—the only person I knew who would care—was Sebastian. He wasn't a slave to a computer all day and probably hadn't seen the news, so I called him. My hunch that this was big and unexpected was right. _"What?"_ he exclaimed. "Can you read that to me, please? Every word!" And I did. "I just can't believe it," he said. "But it's not that big a surprise, is it?" "Well, yes, in a way. Everyone loves the king." I imagined Sebastian shaking his head, stunned—the same reaction as the people of Bhutan. "But, no, of course we knew this would happen eventually," he said with a sigh. "Now it'll be impossible to see him anymore." "Him, like the new king?" "Yeah." "You know the crown prince?" "Yeah, he's a nice guy. I've known him since he was a kid. But he'll be off-limits now. Wow." My curiosity intensified. Sebastian knew the crown prince. The crown prince had founded the station where I was going to work. Now he would rule as king. Was this who had asked Sebastian for an American radio volunteer? Was Phub Dorji connected to the king? Maybe Phub Dorji was a pseudonym for the king! Of course, that was ridiculous. But who knew? There were so many vagaries, so many dangling threads. These speculations made me even more eager to go. And so, in January 2007, I embarked on my journey to Bhutan. 
Where I would be working with the eager young staff of newly launched radio station Kuzoo FM. Which I took on faith actually existed. To do what exactly wasn't clear. All because of an email introduction from a devastatingly attractive man I'd met once, for twenty minutes, at a party I almost didn't bother attending. It all seemed completely strange, and yet, completely normal, the way huge, life-altering experiences can feel almost like an invention, or a dream. Except that never in your wildest imagination could you have made them up. # 2 # "WELCOME, JANE!" NGAWANG PEM TOOK HER ASSIGNMENT TO FETCH me from the airport in Paro very seriously. If the threat of security guards tackling her hadn't loomed, she probably would have made her way out onto the runway so she could hold my hand and escort me the second I stepped off the plane. Despite the regulations, she got pretty close. As I walked across the tarmac to the terminal entrance, there she stood, _kira_ crisp, her long, thick black hair piled on top of her head, cell phone in hand, neck craned expectantly. "Madam Jane!" she said as I walked past her into the customs area. When she got no response, she tried again. "Lady Napoli?" "You must be from Kuzoo?" I said, reaching out to hug her. I was so happy someone knew who I was, even though we'd never met. I didn't know how to pronounce her name, which had been sent to me in an email a few days before I left on the three-day journey. And I was too exhausted to correct her about mine; I figured she'd assumed my middle name was my first. I liked the mistake. "Yes, and I recognize you from your passport photo," she said, and giggled. She sounded like a teenager, and didn't look much older than one. "I was in charge of getting your visa. Welcome to Bhutan." She said I could call her Ngawang— _Na-wang_ , we practiced saying together. It was much easier to pronounce than I'd feared. Or I could use her second name, Pem—whichever I preferred. 
With the pluck of a New Yorker navigating the subway, Ngawang whisked me into the line marked "diplomat," reserved for those with official visas; tourists used another line. The airport held a handful of Westerners, who emitted that eager and bewildered look vacationers have when they've just arrived at their destinations, and several people who were clearly Bhutanese. Like Ngawang, they wore the official national dress—the kilt/bathrobe-like _gho_ for the men, a belted neck-to-floor swath of beautiful fabric called a _kira_ for the women, accentuated with a bright blouse and color-coordinated silky jacket. _Kira_ are colorful and elegant, simpler than a sari and more practical than a kimono. This formal wear left me feeling underdressed. "They can be difficult here, especially with foreigners, and I didn't want you to get stuck," Ngawang said, smiling with the knowledge of her conspiracy. "So I used my connections." She gestured toward someone behind the stalls where passports were being checked as a way of explaining how she'd talked her way back here. "See him? He's my brother." I wasn't sure if she meant one of several men wearing military gear, or a man who was wearing what looked to be a police uniform, or, for that matter, whether she was pointing toward an older gentleman clad in a dark-colored _gho_. I had arrived safely in Bhutan to a warm welcome, and that was what mattered. Soon I'd learn that Ngawang knew someone everywhere we went, or anywhere that I needed anything. This made her not only an excellent candidate for her job in radio but an indispensable guide for me. The hours of endless travel had addled my brain. Instead of being elated about this adventure, I had succumbed to the perilous trap of feeling sorry for myself as I trekked around the globe alone. What was I doing? Where was I going? Why was I headed to this strange little country most people hadn't heard of and couldn't find on a map? 
Shouldn't a woman in her early forties be doing something normal, like taking her kids to Disneyland? Or enlisting the grandparents to babysit, so she could steal away on romantic trips with her husband? Or, if the husband and kids had been around for a while, plotting spa getaways with her similarly beleaguered girlfriends? This grand adventure seemed, all of a sudden, pathetic and sad and a bit rootless. To be running to the other side of the planet at age forty-three to volunteer with a bunch of people I didn't know, in a country that had fewer people than there were students in public school in Los Angeles—all in the hope that the experience might justify my existence, fill the emptiness in my heart. A normal single woman would have met a handsome man at a party and been whisked off on an exotic whirlwind affair. Wouldn't she? Every step of the long journey here, I was regaled with a chorus of "if onlys" and "what ifs" I thought I'd silenced. My trusty exercise of making a list of three good things only briefly helped halt the noise: _(1) Lunch at the airport with my friends_ _Hal and Phil; (2) Seeing_ The Darjeeling Limited _and_ Into the Wild _on the plane; (3) The surprisingly lovely airport hotel in Bangkok_. Ngawang snapped me back to the present with an offer of a stick of chewing gum. The sweet, fruity taste felt good after days in transit, hours cooped up on a plane. Wasting time wallowing, here, was just dumb. In addition to her work as a radio jockey at Kuzoo FM, Ngawang had also been assigned the important job of watching over me during my stay, helping with whatever I needed. After I presented my visa papers to the customs official, ponied up the $20 fee I'd been told to expect, and officially entered the country, we made our way toward the baggage claim. The airport was so tiny it needed just one carousel. 
Even though she was clacking along in her heels, Ngawang insisted on grabbing my bag, as well as the heavy backpack I was carrying, and wheeling the load outside. Ringed by mountains, Bhutan's only airport has been called the scariest in the world. Only eight pilots are certified to navigate it. The runway is narrow and visibility is often a problem, apt metaphors for the official and cultural barriers that make it difficult for a person to enter Bhutan's borders. Once you're on the ground, peaceful simplicity reigns. With just two planes in the Druk Air fleet, there's little danger of collision when an aircraft, after it lands, does a one-eighty to move closer to the terminal. My travel began with an eighteen-hour flight to Bangkok, a brief overnight layover, a five-hour delay due to fog hovering over the airport in Paro Valley, and a four-hour flight that hopped through India. That allowed for the plane to be stuffed with Indian businessmen, their eyes dark and expressions stoic. Besides being an economic necessity for the airline, this brief layover served another, unintended purpose, at least for me. A glimpse from the tarmac of smoggy, congested Calcutta raised the prospect of the stark, empty landscape of Bhutan to an even higher level of mystique and otherworldliness. By the time we landed, I was so blind from the overstimulation and exhaustion, I couldn't keep track of what hour it was or how long Ngawang must have been hanging around. I apologized for keeping her waiting. "No problem," she said. "We went into town to my sister's place when we heard you would be delayed and ate breakfast." A sister, a brother. I wondered how large Ngawang's family was. As we approached a tinny white passenger van with the orange Kuzoo FM logo painted on the side, it became apparent who she meant by "we." A handsome _gho_ -clad young man with a Kennedyesque square jaw hopped out and bowed slightly. I felt like the newest member of the royal family. 
"Madam Jane," he said shyly, averting his gaze. My eyes were drawn to the black socks that covered his calves. I looked forward to fishing out a few pairs from my Gold Toe stash and presenting him with them. "This is Kesang, the Kuzoo driver," said Ngawang. "But he doesn't understand English. I made him practice your name." _"Kuzu zampo,"_ I said. My first attempt at speaking the only words I knew in Dzongkha was easy. I'd been thinking this word _Kuzu_ for months; now here I was saying it out loud, to someone for whom it actually had meaning. He smiled, whisked my bags into the back of the vehicle, and hopped behind the wheel, as Ngawang installed me in the front seat next to him. It was a British-style vehicle, driver on the right. It had been ages since I had ridden as a passenger on the left side, but I was so disoriented that it didn't feel as off balance as it otherwise might have. "We have to go now or else we'll get stuck on the road," Ngawang said, sliding the van door shut. "If we can get behind Her Royal Highness, we can proceed on to Thimphu. If we do not, we'll have to wait for the go-ahead. Maybe a few hours, even." She explained that construction was under way to widen and smooth the forty-mile stretch between Paro and Thimphu, the capital city. Paved roads were chief among the modernization plans launched forty years ago, yet still only six main arteries traversed the entire country. Though they were called highways, this one resembled a rocky country thoroughfare you hoped lasted only a few miles. Improvements to this essential stretch of road were among the projects in anticipation of the coronation of the new king; the date had still to be determined, as the royal astrologers hadn't yet weighed in on the most auspicious moment for the occasion. But in anticipation, and to meet the country's growing dependence on motor vehicles, traffic was stopped for several hours every afternoon in order for the work to proceed. 
Of course, royalty would not have to be subjected to this inconvenience. "How do you know a queen is on the road?" I asked, bewildered by the thought of being so close to royalty—I, who didn't even notice or care about the inevitable celebrity sightings that occurred at various spots around Los Angeles. "Not one of the queens. A princess. Elder sister to the new king. She was on your plane." Which of the Bhutanese women on that plane could have been a princess? I mentally ticked through the few passengers stuck in the waiting area in Bangkok. Ngawang read my mind. "A lady with a baby, wearing _kira,_ " she said. "And several ladies helping her." I had seen a baby being passed among several bored Bhutanese ladies. We waited at the gate for so long I was surprised I didn't play with the kid myself, much less memorize the face of every single passenger. Much of the delay I'd spent talking to a physical therapist named Beda, who was returning home to her husband and two kids after six months of study in the United States. We'd met before dawn at the Druk Air check-in line at the edge of the vast departure area in Bangkok's brand-new billion-dollar Suvarnabhumi Airport. Both of us were crossing our fingers that our baggage didn't exceed the allotted thirty pounds each. When Beda's did tip the scales, we pretended to be traveling together and I took up her slack. Between that and the cappuccinos I bought us with the little bit of Thai currency I had in my purse, my first attempt at Bhutanese-American relations proved a resounding success. Together we kept each other company in the gleaming glass-enclosed terminal D. The airport was so new that the water fountains and televisions in the waiting area still bore labels. We picked at the free lukewarm Burger King sandwiches the airline had provided as an apology for the inconvenience. And we used my laptop to check for more information about the weather. 
Since I had no idea how long it would be before I got online again, I fired off an email to my nervous family, too. _Almost there_ , I wrote. Though the longest leg of the journey was over, I didn't realize the most dangerous part of it was ahead. THE RICKETY WHITE Kuzoo van was making its way onto the "highway." This thoroughfare was the automotive equivalent of the approach to the airport: simply treacherous. The difference was that the pilot took great care to steady the plane as it wobbled in the wind; the drivers here seemed to fancy themselves participants in a demolition derby. Immediately, the need for widening the road became obvious. Instead of two lanes there was a slightly wider-than-average one-lane sliver of bumpy pavement. Making it all the more precarious was the fact that every other vehicle was a giant brightly colored truck that looked like it had driven out of a Bollywood-style cartoon. Hand-lettered words on the fronts ironically proclaimed, LONG LIFE; the bumpers admonished BLOW HORN. Blowing a horn wasn't going to do a thing to facilitate passing, since the oncoming vehicles were obscured from sight. Still, Kesang impatiently—but expertly—barreled past every vehicle in our path. Cars precipitously hugged the road's edge. And that edge was unprotected by guardrails to keep you from careening off and dropping hundreds of feet, straight into the valley. Just when it seemed the road might go straight for a bit of a reprieve, on came the snakiest S curve. Without exception, all the vehicles were traveling at high speeds. No ride on the autobahn could match this. I was happy the passenger-side safety belt worked. But as we kept moving, it hit me that even being straitjacketed to the seat wouldn't help a bit were the van to slip. I fought the urge to bite my nails; I wanted nothing to dwarf my absorption of the scenery and the feeling that I'd landed on another planet. 
Bhutan's tourism industry sold the place as the last Shangri-la, and it became clear from what I saw out the van windows that this was indeed a land that time and rampant development had forgotten. Rolling hills punctuated by spectacular mountains, vast expanses of meticulously terraced land and the clearest river rushing through, interrupted only occasionally by a cluster of unusual-looking houses. Within the array, a tiny store, marked by a simple blue sign bearing white hand-drawn letters, provided a hint of commerce: KUENGA WANGMO GENERAL SHOP CUM BAR. All the signs were in English, topped with the squiggles of Dzongkha letters, painted in royal blue with white, and they all looked the same. The buildings themselves, too; every structure had sloping roofs and ornately carved orange wooden frames around the windows. They weren't ugly in their uniformity, nothing like a Levittown suburb or subprime development might be, but rustic and charming—like Asian-infused Swiss chalets. The view repeated itself over and over again so that it began to feel like a driving scene from a _Flintstones_ episode, in which an occasional variation pops up every tenth frame to remind you there is indeed forward motion. Every mountain and valley was so picturesque I half expected Julie Andrews to emerge, singing sweetly about the sound of music. Except that there wasn't a blonde as far as the eye could see. To my American eyes, the ethnic homogeneity of the people was as unfamiliar as the houses, an endless chorus line of humans whose heads were topped with thick, shiny jet-black hair, close-cropped even for the women, longer for the children. From babes in arms to barely school-age kids all the way up the generational line to weathered old men and toothless old women. Regardless of their age, each was wrapped in colorful variations of the national dress, in bright blues and oranges, yellows and pinks. 
The erratic madness of the road didn't seem to cause concern; kids sat at the edge of it, older people meandered across it, cows clustered serenely in the middle of it. Even as we whizzed by and stirred up dust and pebbles, both the human beings and the animals went about their business, undisturbed. And then there was a visual punch line, adding a bawdy comic-strip-like touch to the landscape: Houses were adorned with giant, brightly colored paintings, sometimes of a rooster or a lotus flower or, occasionally, a ten-foot giant winged phallus, wrapped sweetly in a bow. When I'd found pictures of these online, they'd appeared humorous; here, they seemed ordinary, just part of the scenery. As we drove, Ngawang was like a windup doll, chattering from the middle row of the van. She narrated the sights: Animals lived on the ground level of a house, she said, and the people one flight above. You could tell we were in Paro, and not Thimphu, because the houses had three rows of windows, not two. From the license plates, you could distinguish whether a vehicle belonged to the government, was a taxi, or was a private car. She had earned a tour guide license, she explained, so if I had any questions, she was equipped to answer them. I had one. "What exactly is the meaning of the giant penises?" There had been various discussions on the Web about their meaning. They weren't fertility symbols, nor were they indicators that prostitutes were available inside, as was the case in other countries. It had something to do with a bawdy mystic named Drukpa Kunley, also known as the Divine Madman, who tamed demons (and just about everyone else he came into contact with) using his abundant sexual powers. But the reason for their prominence on the sides of houses hadn't been properly explained on the Internet. Ngawang deciphered the mystery. "We believe it is wrong to envy what someone else has. 
When you have a phallus painted on the house, people will be too ashamed to look and to covet what they don't have," she said. "In this way, the phallus wards off evil spirits." This had to be the most beautiful circuitous logic I'd ever heard. "I imagine a lot of visitors ask that question, huh?" "Yes," Ngawang said, giggling, lingering on the _s_ for emphasis. "They think it's strange. For us, it's just a part of Bhutan." Then she explained that having to answer the same question over and over showed her that a career as a tour guide was not for her, which is why she was so excited that she'd been hired at Kuzoo. And when she read my résumé, she decided she wanted to learn everything I knew. "I want to be the best radio jockey ever," she said. "Please teach me how, Madam Jane." Kesang interrupted the flattery, the pitch of his voice indicating it was urgent. Ngawang translated that we had indeed gotten stuck in the roadblock. For the next ninety minutes, parked on the side of the very scary, very narrow road, surrounded front and back by several dozen other vehicles, and in between the almost constant trill of her cell phone, I learned a lot more about Ngawang. Her mother had died when she was four. Her father proudly served in the Bhutanese army. It seemed a curiosity that a peaceful Buddhist nation would require a military, but perhaps that was how the country had avoided being annexed by neighboring China or India. Since her father was stationed in the west at a military base on the border, Ngawang had been living with her uncle and aunt in Thimphu. She had attended college in India. In her large extended family, she had many, many "cousin brothers and cousin sisters." She was twenty-three. She dreamed of visiting America. And of having a baby. "Oh!" she exclaimed, interrupting her autobiographical monologue as if she'd remembered something more important. "How did you come to know about Bhutan?" I told the story of that fateful night in New York. 
"Do you know Sebastian? He's in the tea business." "American?" "Yes, yes. Tall, thin man. American," I said hopefully. I imagined Sebastian must be a revered figure here. "Nope," she said, as if she were flipping through mental images of people he might be. "Don't know him." She paused for a moment. "So it was your karma that brought you to Bhutan. That's cool." "No, it..." As I disagreed, I understood. "Yes, it must be because of my karma that I came to Bhutan." "What do people know about Bhutan where you are from?" "Well, I knew about Bhutan because I heard you didn't have television," I said, refraining from launching into a rant against the evils of the boob tube. It didn't seem good form to introduce myself in this way, particularly given the reason for my visit. "But honestly, most people don't know very much about Bhutan." I also didn't mention that several family members were concerned I might be held hostage here simply because they worried about everything. Or that my father had warned me in his bon voyage phone call that I'd likely have to carry my own toilet paper, as if that were the most barbaric proposition. There wasn't any need to tell Ngawang, either, about how one of my more sour coworkers deemed Gross National Happiness "old news," and "a gimmick." Ngawang laughed. "We have television now!" she said. "The fourth king allowed it. I love watching TV!" That's what I was afraid of, I thought. "The fourth king?" I asked. "The father of the new king, who is Bhutan's fifth king," Ngawang explained. "Our monarchy is one hundred years old, and His Majesty is the fifth in his family to serve." "Do you like the new king?" "Oh, yes, very much. Everyone loves our king. He is a man of the people and devoted in his service to Bhutan." The words sounded as if they'd been lifted from a brochure, and yet the tone was heartfelt. Her scattershot line of questioning continued. She brimmed with the energy of a teenager. How big was my family? 
And oh, did I believe in God? My parents were both alive, I told her, and I had a younger brother with whom I was close. We had lots of aunts and uncles and cousins, but none of us saw one another much. As far as what I believed, I wasn't sure, but I liked the idea of believing in something. What little I knew about the Bhutanese faith, Buddhism, seemed to make a lot of sense to me. "What about you?" I asked. "Do you believe there is a God?" "I don't know what I believe, either," Ngawang said. "But being raised Buddhist, I follow what my family tells me. If it's an auspicious day and we have to make offerings, I obey. So have you ever been in love?" _That's complicated_ , I thought, glad she hadn't asked if I'd been married, because I hated answering that question. "In love a thousand times with silly infatuations, but for real, yes, twice. And you?" Ngawang said she'd had a boyfriend who'd taken up with her best friend. She didn't believe in love anymore. What mattered to her was having a baby. "Well," I said, "you've got plenty of time..." At last we were moving again. It seemed that each time Ngawang's cell phone trilled, it played a different tune; now it was playing "Hotel California," and I wondered whose ringtone it was. The interruption saved me from being asked about my interest in children, which was even more complicated to answer than the love question. On the phone Ngawang was speaking in her native language, but every so often I'd make out a reference to me, "Madam Jane." Listening to the cadence of her voice and trying to discern the tone provided further welcome distraction from the terrifying roller coaster that passed for a road. As did more unusual visions punctuating the landscape: groups of monkeys skipping alongside the river that ran below, a gold-topped temple emerging from the side of a mountain, clusters of skinny cows sunning themselves. "We're coming to the checkpoint," Ngawang said. 
The government kept track of who traversed these roads, inspecting whether visitors were in possession of the proper permits. Bhutan had opened up, but that didn't mean total freedom of movement. Thanks to the government's increasing engagement with the outside over the past few decades, including the introduction of modern air travel, and, apparently, my karma, here I was in this faraway kingdom. With each passing kilometer, it became more evident just how distinct this universe was from my own. Gazing out at the foreign landscape dissolved my worries, my failures, my triumphs. I was humbled by all that surrounded me. What difference did it make what I had and hadn't accomplished, under what circumstances I had come here, or that I had come here alone? I was here. I was fortunate to be here, to see how other humans lived in a place unlike anywhere else in the world. I had the chance to interact with people from an entirely different culture, on this planet we all shared. My world had simultaneously become infinitely larger and smaller, and the very anticipation of what lay ahead drowned out my usual litany of concerns and self-criticisms. Worries petty and large began to shrink away. No longer was I some burnt-out career journalist with no idea how to escape the grind. No longer would I see myself as a failure as a woman, either, for not having had a successful long-term romantic partnership that yielded a happy home filled with children. This long-crafted definition of myself, of a nice gal who had made a mess of her life, started to melt away. Replacing it now was a new vision of me: one part proud ambassador from the United States, one part curious anthropologist, 100 percent human. I resolved to be the best person I could be—and to stay alert to the possibilities before me. TWINKLING LIGHTS ACROSS the landscape signaled that we had entered the capital city of Thimphu. It was dusk now. 
A cold gust of wind blew through the valley, as if to welcome us and remind us it was winter. Eight hours later than our anticipated arrival, the Kuzoo van drove through town and up a short hill to the Rabten Apartments, a small two-story building that was to be my home. Ngawang let us in with an enormous brass key, and Kesang hauled my suitcase up the steps and into my apartment, then downstairs to the bedroom. Ngawang plugged in the space heater in the living room, the only source of heat. As the sun set, the air cooled fiercely. She asked how I liked the place, was it okay? As long as there was a bed for me to collapse into, I was sure it was. I did a quick survey of the accommodations. The focal point of the living room was the television set. Pushed up against the wall to provide the perfect line of sight were a worn old wood-framed couch, two matching chairs whose cushions had seen better days, and a wall of mostly empty bookcases, fringed with colorful, ornate Bhutanese woodwork. All that sat on the shelves was a pamphlet from the Center for Bhutan Studies explaining Gross National Happiness and two old Bhutan Telecom phone directories, neither volume thicker than an inch. So accessible was the royal family that the king's private phone number was said to be listed; I'd have to take a look for that later. "I picked out this apartment for you," said Ngawang with pride in her voice. She clunked around in her heels, flipping switches, and determined the TV remote control was dead. "That's bad. We'll get that fixed." Then she hurried out the front door while dialing her mobile phone, explaining that she was off to fetch the landlady to bring me tea. "Sir Phub Dorji asked me to call as soon as we got here," she added. The twin impact of jet lag and bewilderment weighed on me now that we were "home."
As eager as I was to meet my official host, with whom I'd been corresponding almost daily for months, I couldn't imagine having to talk in a professional or formal way at that moment. A young lady carrying a tray filled with steaming cups entered the front door. Ngawang talked to her urgently in Dzongkha as she served us. I could tell the faulty remote was at issue; I didn't confess that I wouldn't need it, certain that when I bothered to turn on the set, I'd keep it glued to the Bhutan Broadcasting Service. The warm cup felt good in my hands and I wandered into the tiny kitchen. No oven, just a two-burner hot plate fueled by a propane canister, the kind you might find attached to a barbecue grill. A worn-looking half-size fridge stood by a window, and a rice cooker sat on the counter. In a plastic dish drainer were a couple of plates, a few forks and bowls, and some unmatched glasses. Ngawang watched me drinking it all in. "You have a geyser in your kitchen—that's very fancy!" She meant the sink, which hardly seemed fancy until she said her place didn't have one. For many Bhutanese families, even in the city, she said, their water source was outside. What seemed very modest to me was very lavish to her, which made the gift of this apartment all the more grand. Down a short flight of stairs were the bedroom and bath, and they were simple, too: A blanket covered twin beds pushed together to make a king. A tired old cabinet resting against the wall served as a closet. A stall shower covered by a moldy plastic curtain, two thin white towels, and a sad wooden shelf made up the bathroom. This is fine, I thought. I can live anywhere—as long as there aren't any mice, or worse, rats. Everything seemed too tidy for that. A tall, slender man in his late thirties appeared in the doorway, a commanding, if solemn, presence. If his dark gray _gho_ had been a suit, he could have stepped out of a bank in midtown Manhattan. It was Sir Phub Dorji, my benefactor.
He had an air about him of a grown-up choirboy. Innocent, sincere, earnest, strictly business. After a flurry of greetings, he asked Ngawang to please make sure the landlady got me something to eat now—and to be certain she brought breakfast in the morning, too. With the plane arriving so late, we hadn't had time to go shopping for kitchen provisions. "We are so grateful that you are here," he said. His tone wasn't warm as much as it was serious and matter-of-fact, the same as in his emails. "I can't quite believe I am." "It's a very exciting time for us right now. Kuzoo FM is causing quite a sensation. But we really could use your professional guidance." "I'm ready to help, however I can." I sat straight up in my chair, thrilled to be of service to these kind people in this unusual place. "I am sure this is quite modest, quite simple, these accommodations, compared to what you have back home," he said. "My wife watches _Desperate Housewives_ , and I've seen the kitchens. I hope this will be suitable." "This place looks to be about the same size as mine in Los Angeles," I said. "Including the kitchen. Trust me—most of us don't live like you see on television." The look on his face suggested he didn't believe me. Or that I didn't quite understand the extent to which daily life here was different from that in my world. A plate of pinkish rice arrived; alongside it was a little bowl of what looked like stewed vegetables. They were fiery hot, and a bit too gloppy for my taste—covered in a runny, oily cheese sauce. So I concentrated on the grains. "Let me tell you the story of Kuzoo," said Phub Dorji. "It was a pet project of His Majesty the fifth king, created when he was crown prince. What happened was this: The youth wanted a radio station, and they approached him. He had been given the gift of a BMW car. He sold it at auction to raise funds, which he donated to start Kuzoo. And that is how the station began, as a gift from His Majesty to the youth of Bhutan." 
He paused. This was all so radically different from the big media universe in which I'd been dwelling for over twenty years. The media here seemed pure, neat, a public service—not another quiver of power for a mogul like Rupert Murdoch. Providing a voice for the people, plumbing the depths of the community, that was what newspapers and radio and television were supposed to do, what had attracted me to the news business in the first place. "Remember you asked my specific role at Kuzoo, and I said it was a surprise I would reveal to you when you got here?" "Oh, yes, I do." I put down my teacup and moved forward a bit in my chair. "Well, the secret is that I have nothing to do with Kuzoo." Phub Dorji smiled a bit, the most I'd seen of his teeth so far. I was intrigued. "I don't understand. You sent me all those documents about what you needed... and you said you oversaw the station." "Yes, I do. But from a distance. It's okay. You are in very excellent hands. Mr. Tenzin Dorji will be here in a few minutes to say hello. He is a former high school principal now in charge of Kuzoo. No relation to me. You won't see me every day, but if you need anything, I insist that you call. Or have Ngawang call me." Phub Dorji motioned to a piece of paper taped on the wall near the phone. "Ngawang, can you write down my mobile number for Lisa? Can you lend her a mobile phone so we can reach her?" "Yes, sir," said Ngawang, with her head bowed respectfully, before I could refuse the offer. I'd been looking forward to not having a digital leash during my time here, but I supposed a cell phone wouldn't be a bad tool to have. Just then, the front door opened, and a cyclone of energy in the form of Mr. Tenzin Dorji entered the room with his eight-year-old son and a little white Maltese in tow. "Welcome, Jane!" he bellowed. # 3 # RADIO SHANGRI-LA A RADIO STATION MAY SEEM QUAINT AND RETRO, AN old-fashioned medium in this age of all things digital and pod. 
But in the last Shangri-la, it proved to be an invention as modern as a spaceship. As soon as Kuzoo FM started broadcasting on September 28, 2006, the entire population of Thimphu tuned in. That's not an exaggeration. The few stores that carried radios promptly sold out their stock. Farmers in the nearby valley twisted the angles of their antennae in order to tune in a signal so Kuzoo could keep them company as they worked the land. Drivers of the kingdom's growing number of motor vehicles (up from just under four thousand in 1999 to well over thirty thousand less than a decade later) were happy to have Kuzoo's radio jockeys entertain them as they cruised the capital city. Many of the cars proudly displayed Kuzoo bumper stickers in enthusiastic support of the new station. Before Kuzoo came along, there wasn't much else to listen to. Recorded music—if you could even get your hands on it—was far more expensive than modest Bhutanese incomes would allow. Up until Kuzoo, the only sounds transmitted over the airwaves had been the dull news and announcements, punctuated by the occasional music program, churned out by the government-launched Bhutan Broadcasting Service. It didn't even broadcast all day. The rest of the dial was filled with static. Suddenly, a radio was a hot item, and Kuzoo FM was a real station, playing all kinds of music that most Bhutanese hadn't heard before: the saccharine epiphanies of pop divas, the aching twang of country music, the interlocking rhythms of rap, rock, hip-hop. All presented by friendly, if inexperienced, Bhutanese radio jockeys, who shyly stumbled through their pronunciation of English words, making it clear this was not some slick feed imported from afar. Even so, the capital city was a world away from the rest of the country. Villagers might visit Thimphu once in their lives—and only then for formal business or to be tended to at the hospital. Travel in Bhutan had long been utilitarian, not for pleasure. 
Leaving home to scratch through forest paths in order to venture to the next town meant time lost working the land, which yielded the food and other necessities that sustained the community. Only in the last few years had the beginnings of a leisure class blossomed. That the citizens of Bhutan could now be connected to one another through radio, without actually _going_ anywhere, was nothing short of magic. This eager audience immediately began to phone in to give thanks for the music, and to chatter on the air—simply because they could. To dedicate songs not just to their friends and family but to fellow callers whose voices they'd found pleasing. When nature intervened and the signal was temporarily disrupted, Kuzoo fans called and begged so plaintively in despair at the interruption that you'd wonder what they'd done before the station existed. "Please, my wife has stopped eating because she's so sad without Kuzoo," implored one man after a storm knocked the station off the air. "You must repair it." Perhaps the most fascinating aspect of the Kuzoo experience—the quality that endeared it to its audience above all others—was that listeners were allowed, even encouraged, to participate on-air. Besides making dedications, they could sing songs and talk to friendly radio jockeys. Or ask questions about Buddhism on a weekly show called _Dharma Bites_ , hosted by two young self-styled "spiritual jihadis." Disappointed that their fellow youth were becoming less engaged with the national religion and showing more of an interest in the trappings of the material world, they'd cooked up the idea for this program and showed up at the Kuzoo studio one day to ask for airtime. Like legions of evangelists around the world, they saw the power of the medium to educate and persuade. The excitement wasn't just because media in Bhutan hadn't been interactive before. 
For generations, the tiny, landlocked Himalayan kingdom had practically no media at all, and very little in the way of modern communications. It had been literally sealed off from the rest of the world, and virtually sequestered, too. TV had long been outlawed, lest the insidious forces of the outside infiltrate and pollute the minds of the people—dilute their unique culture, intoxicate them with images of an outside world to which they'd yearn to belong. Some of the bolder and more affluent citizens hooked contraband televisions up to video recorders, traded taped films they managed to smuggle into the country, or slipped cash to enterprising Indian businessmen who smuggled dismantled satellite dishes across the border and reassembled them, discreetly, around their customers' homes. The number of people who could afford such a luxury was few. Even as the burgeoning World Wide Web had been taking root most everyplace else, it had not been allowed to make inroads into Bhutan. Not that every person in every corner of the nation could have availed themselves of the service even if cost weren't an issue. A quarter of Bhutan's villages still lacked electricity—and half the population had to walk at least four hours just to get to the nearest all-season road. When mobile phones were introduced in 2003, the number of houses with landlines totalled in the hundreds. On the morning of June 2, 1999, King Jigme Singye Wangchuck issued the decree that would change Bhutan indelibly. At the silver jubilee celebration commemorating his twenty-five-year reign, he delivered a speech in which he bestowed several gifts on his constituency. Roads and bridges, he declared, would be built in the remotest corners of Bhutan, to give the people greater mobility. To accommodate a larger plane for the nation's airline, a new airport terminal would be constructed—another enticement to would-be visitors. 
As part of Bhutan's continuing environmental stewardship, the king announced that plastic bags would be banned—for the good of the planet and, ultimately, all people. The bombshell of His Majesty's address that day, the revelation that elicited cheers from the otherwise solemnly reverent assembled guests, was this: After years of self-imposed isolation, of gently dipping its toes into the outside world, of carefully restricting which foreigners entered the country and which Bhutanese left, the almighty superpowers known as television and Internet were to be permitted into the happy kingdom. With that, the switch was thrown on the brand-new televised Bhutan Broadcasting Service, which would run alongside a selection of international channels. From that day forward, the BBS television signal—transmitted, like its radio counterpart, for just a few hours each morning and evening—would mix with the pristine air and low-lying clouds of the Himalayas. Cable television would render those illicit satellite dishes obsolete; now they would perform another function as a handy surface on which to dry the nation's staple food, chili peppers. In his speech, the king acknowledged the Pandora's box he was unlatching, the effect media might have on his carefully crafted policy of Gross National Happiness. Television and the Internet, he reminded his audience, possessed both positive and negative qualities. Their use required good sense and judgment. When it came to their consumption, the king trusted his people to deploy what the Buddhists call the Middle Path: moderation. He believed firm religious footing would ultimately hold more sway than the mesmerizing power of the screen. Whether that made the king an idealist or simply naive would be revealed soon enough. THE NOTION OF Bhutan as Shangri-la has to do with its unspoiled landscapes and striking views of stunning Himalayan mountains, believed by some to be the dwelling place of the gods. 
Adding to the mystique is its fierce commitment to the preservation of age-old traditions that are the mainstay of the country's cultural heritage. Western Buddhists plan once-in-a-lifetime visits to worship at the hallowed historic monasteries scattered throughout the land and soak in the energy of the country's storied religious festivals, called _tsechus_. None of this was what drew me to Bhutan. For me, the prospect of a relatively media-free universe was as close to utopia as I could imagine. No wonder the country was considered the happiest on earth! The promise of a place where life was simpler—unsaturated by the menacing forces of mainstream media, which had kept a roof over my head for years but which I found increasingly noisy and bothersome to consume—appealed to me. That Bhutan was guided by intense spirituality, by connection to home and community, held great allure. I was tired of sleep-deprived, stressed-out, too-busy people who shirked downtime in the service of making money so they could buy more stuff; tired of it taking months to see dear friends who lived across town because traffic and overcommitment made it impossible to coordinate a shared meal. It felt like some people stuffed their calendars full so they could seem important, or at least, not have to face themselves during unplanned moments. In Bhutan, I suspected, human connections were more important than how many digital pals you racked up on Facebook. Rather than passively consuming depictions of the world pumped out to them on various screens, the Bhutanese, I imagined, must savor their lives, really live them, thoughtfully and yet spontaneously. It seemed unlikely that the Bhutanese swirled about busily through the days, never quite triaging their to-do lists, assaulted by the modern scourge known as multitasking. 
Surely life wasn't marred by such jarring interruptions as call waiting, and the disorienting phenomenon of barely audible cell phone calls made to "check in" with friends and family and give the illusion of connection. Thumbs weren't maniacally text-messaging so the humans they were attached to could "keep in touch" while averting their gaze from others in their paths. I longed for a way of life in which people made it a priority to look into each other's eyes and communicate, soul-to-soul, uninterrupted, like in that Zulu warrior greeting we'd practiced in happiness class. I yearned for meandering conversations about all things important, all things banal. Bhutan, I imagined, might be as close as you could get on earth to what I'd been craving—a real, live, actual community, where being wired took a backseat to being present, face-to-face, experiencing the here and now. After more than two decades of reducing even the most complex issues down to a thousand words or less, I was tired of observing life from a distance, of synthesizing and distilling data with little time to process meaning. I'd long considered myself fortunate to have made my way in a cutthroat profession; when I was a kid growing up in Flatbush, Brooklyn, the news business promised glamorous access to a world far different from my own. Being able to ask questions and tell stories, much less getting paid for it, had long felt like a tremendous privilege. And yet, over time, the warping nature of the business was clear: Every experience could be spun as an item, and human relationships existed to serve the demand for news and information. The rushed conversations to facilitate deadlines had left me impatient in normal interactions; why couldn't people get to the point faster? Quick, clipped scripts punctuated by quick, clipped communiqués via email seemed to be impacting my ability to think more deeply, to respond with the slow deliberation necessary for life's bigger decisions.
I felt like I was rushing, reacting, all the time. Most of the jobs in journalism today are like information-factory work—Lucy-on-the-assembly-line style—or eating a steady diet of dim sum. You'd sample many items every day, fast, then gorge on the morsel assigned to you, trying to digest it as quickly as possible. A few hours later, you were expected to spew it out to the world in the form of a "story." And on this particular radio show where I was currently employed, it had to be a very short story: ninety seconds or less, for the most part. Even though you probably couldn't fake your way through a debate on the subject, you were suddenly an authority on it because millions of people had heard you talk about it on their commute home. The aftereffects were the same as an unsatisfying meal. Bloated, hungry—I felt hungry for knowledge, deeper meaning, time to synthesize the world around me. I also worried about the effects of the news on all who consumed it. For all the good the global village had achieved—exposing us to other cultures we'd never visit, making us aware of strife in other parts of the world, educating us about important issues in our communities—it also was making us info-zombies. Just because the images are constantly beamed into our lines of sight in our living rooms, at the grocery store, while we were pumping gas, we live with the delusion that this virtual ringside seat to the action equals a real understanding of the world at large. Stuffed with factoids about the stock market and electoral college and weather patterns, the general public believes they understand these complex systems, when in fact they can't adequately be boiled down to sound bites. Context isn't modern media's strong suit, and the broadcast media in particular; raw data and pictures and sound are. The gray areas, the nuance, rarely get explored. Wars and starving children become images on the screen that you can turn off at will, not real, live complex problems. 
Bad things happen to other people. Thank God that didn't happen to us, honey. Change the channel, please. Now, my media malaise wasn't simply a result of my midlife disillusionment. It had been brewing for ages. Since not long after I'd started working, in fact. When the space shuttle _Challenger_ exploded, I was a young copywriter at CNN Headline News. A tape editor and I were locked in a tiny room for the twelve hours afterward, charged with crafting half-hourly updates that incorporated into our report whatever information ticked off the AP news wire. I watched the shuttle blow up so many times that day, I knew in graphic detail how it had broken apart. The impact of seeing that horror play over and over led me to do exactly what you're not supposed to do in a workplace, particularly in a newsroom: break down and cry. I couldn't believe we were repeatedly watching people dissolve before our very eyes without pausing for a second to think about them. (It turned out later to be even worse: They hadn't exploded; they had sunk to the bottom of the ocean.) Thankfully, my colleague was also a friend, and he didn't blow my cover. He put his hand on my shoulder, sent me to the restroom to wash up, and told me to come right back to the edit room to keep working. Afterward, I got promoted for doing such a fine job turning around first-rate updates on deadline. Years later, during the standoff between cult members and federal agents in Waco, Texas, my dissatisfaction reached a new intensity. I was field producing for another network, and after sitting and staring for weeks at the embattled compound from the safety of the media village a mile away, my colleagues and I watched through high-powered telescopes as federal agents stormed in. Flames shot high, engulfing a building that had people—children—inside. We members of the media diligently trained our cameras on the action, shouting to one another about the logistics of "going live." 
Reporters in the foreground of the action boasted that their network was "at the scene." At the end of the day, when a friendly woman knocked on the door of the news truck to tell me she'd been dispatched so I could take a break, the tears flowed yet again. The presence of a new person jolted me back to the reality of the day. There were humans dying yards away as we frantically broadcast the disaster to the world. September 11, 2001, did me in. Mercifully, I wasn't even working at the time. A month earlier I'd lost my maddening job as "Internet correspondent," which meant reading viewer email aloud on cable TV. I'd been axed in a round of cutbacks after the dot-com bust. As the mayhem of the terrorist attack unfolded several miles south of where I lived, I couldn't bear the thought of watching it on television. The analysts, the experts, the pundits, the din of the speculative commentary were all being doled out before there was actual information to convey. I didn't want to follow along with this packaged-for-TV disaster, as each gruesome detail was revealed. All I wanted to do was be silent, meditate and pray. I wasn't conscious of ever having wanted to do those things much before, but it seemed a more productive course of action than staring zombielike at the television, wallowing in the unfolding chaos with a remote control. It was then I decided to turn off the news, in every medium, and instead deploy my own Human News Network. For weather reports, I stuck my hand out the window or chatted with neighbors. For hard news, I listened to and asked questions of friends; everyone loved to share opinions about stories they'd read, heard, or seen. If an item piqued my interest, I'd seek out more about it. It was like having my own personal clip service. I studied what I needed in order to write freelance news articles to pay my bills while I looked for full-time employment. 
There was room now for books, research on topics that interested me, conversations with friends. Downtime to just think. More time to walk and cook and swim. I felt liberated, smarter than ever, genuinely in touch with the world around me. The real world, not the mediated one. Except, I hadn't really cut the cord, not entirely. Like someone trapped in a bad marriage, I concocted an ironclad excuse for why I wasn't getting out. I needed a job to support myself, and what else was there that I could do after twenty years of little else but the news business? And after being out of work for so long, I considered myself very fortunate when I landed the gig in Los Angeles, in the vaunted medium of public radio. BESIDES BHUTAN'S UNDEVELOPED media culture, my other attraction to the country was its almost institutionalized resistance to conspicuous consumption. That shopping was a pastime for so many people in the United States distressed me. I was hardly an ascetic, or the type who makes compacts not to buy anything but toilet paper and food. I was simply trying not to suffocate under mounds of belongings: trying to live as simple, uncomplicated, and uncluttered a life as possible. There was enough extraneous junk and chaos waiting for you once you stepped outside. Finding pleasure in giving things away was a developing habit for me. I'd taken the once-unimaginable step of donating my twenty-years-in-the-making collection of five hundred books to the public library. This occurred after I'd paid several thousand dollars over two years for a storage closet in New York, indecisively letting them collect dust. ("Should I move them West, or should I hedge my bets that I'll move back East?" I mulled each month as I wrote the check.) Until one day I made the call and begged for the cartons to be hauled away, aware that for many educated people this was an action tantamount to suicide. Or, at least, cause for institutionalization. 
One dissenting alleged friend dubbed my action, simply, "idiotic." My coveted collection of books really wasn't necessary anymore. I felt secure enough in my intellect not to have to flaunt my impressive personal library in my tiny apartment any longer. I could get most anything I wanted to read, reread, or simply fondle for sentimental reasons in practically an instant at the spectacular public library located two blocks from where I lived in downtown Los Angeles. And I could support a vital (if, to some people, arcane) public institution in the process: With the money I'd saved not buying more books, I started making an annual donation to the Los Angeles Public Library, stoking my middle-age ambitions to be a patroness of worthy causes. As for my television, I'd hung on to that only to watch the very occasional DVD or videotape. I'd long ago ditched the cable. Until the rare occasions when I wanted to turn it on, I obscured the screen with a painting. And yet, I continued to be a hypocrite. Not only did I keep working in the news business, I had just arrived in one of the last places on earth to be corrupted by media and consumer cultures. My mission was to teach them how to "professionalize" this new radio station they'd just begun, which I suspected was code for "make us sound like every other radio station in the world," instead of letting it grow organically to become its own sort of Radio Shangri-la. EVER SINCE TELEVISION arrived in Bhutan in 1999, more people have been opting out of the agrarian lifestyle that supported their ancestors and is still the mainstay occupation. Now young people flock to Thimphu for their education and a chance at jobs that promise plush benefits (like working behind a desk, with a computer, and not in the fields). No one has officially drawn the connection between the introduction of mass media and the swelling of the population in Bhutan's capital city. But no one can deny it, either. 
A generation ago, it wouldn't have occurred to young people to leave their families and their villages. Being wired to the outside world, of course, doesn't make Bhutan any less geographically remote, or any less costly to leave. It still requires days of travel to get in or out. Besides, most people in Bhutan don't have much cash, and credit cards don't exist. No matter how rich you might be, there is just one airline and one airport. Permission to travel in either direction is meted out with great deliberation and is typically granted to the lucky few who win scholarships to colleges and universities outside the country. But thanks to the wonder of satellites and a vast network of interconnected servers, you can more clearly see what it's like out there in the world without having to go. And that window to the world changes your perspective. As fiercely, traditionally Bhutanese as you might be, as much as you might vow you'll never leave, those other ways of life depicted on TV look mighty tempting. When television beams a window on the world—the possibility of other—right into your home, it's hard not to become enthralled. Or, at least, intrigued. Those images sure get you thinking about what you have, and what you don't. IN 2006, THE KING allowed media infiltration to reach another milestone. Two private weekly newspapers were licensed to compete with _Kuensel_, the once-government-backed paper that had long been the only game in town. Its founder had been primed for the job with an education at the finest schools. These new rivals, the _Bhutan Observer_ and _Bhutan Times_, hired young staffs with no training or experience who flirted with the promise of press freedom guaranteed in the not-yet-signed new constitution. But only a bit. Free speech, Bhutan-style, still did not extend to criticism or examination of the monarchy—a line few would dare to cross even if it weren't forbidden.
Most stories involving the royal family still had to be run through government censors, no matter how benign. And if they involved His Majesty, it was implicitly understood that they'd receive the most prominent placement, no matter how banal his actions. Nevertheless, the mere existence of competition began to transform _Kuensel_ from a polite, deferential organ to a more solid, inquisitive journalistic news organization. And then came this new entry: Kuzoo FM. Tenzin Dorji was plucked from the ranks of educators to manage the operation. A staff of nine twenty-somethings was hired. Anyone with time and curiosity was encouraged to just show up and volunteer, particularly if they were in high school. Kuzoo was a youth-focused station, after all. At first, the station broadcast from an old closet in a government building that housed the youth sports department. A few months after its debut, Kuzoo moved to another structure on the property, which long before had served as the residence for the foreign minister. It was a two-story building with a narrow staircase, a warren of rooms, and worn burgundy carpet. The building's two most charming features were a front porch on the second level that offered a lovely view of the grounds (although the floorboards wobbled precariously if too many people stood out there) and the sky-blue-tiled kitchen designated as the studio. The space wasn't converted as much as it was _adjusted_. Heavy white cardboard was laid over the sink to discourage anyone from turning on the faucets. A plastic tarp taped over a hole in the ceiling kept a flock of cooing pigeons from landing on whoever sat at the mixing board. Several old computers were set up on battered school desks in the adjoining rooms, and the young staff began waiting their turn to "prepare their shows" (which meant downloading music illegally off the Internet) and adding whatever few CDs they might own or could borrow to the library of on-air music.
Not that the library could ever grow very large. The average middle-class teenager in the United States had an iPod with a bigger hard drive than the one that powered Kuzoo. As local radio stations and newspapers in the West downsized and dissolved in a rapid death-spiral—victims of media consolidation, the Internet, and the bottom line—Bhutan's media landscape was expanding with abandon. Media were seen as a crucial component of the impending democratic elections, and afterward, a force to keep watch over the newly elected government. The king knew that for democracy to take root in this long-standing monarchy, a competitive news landscape was a critical part of the equation. What he didn't factor in was how much the kids loved their music. # 4 # BEWARE THE _EMADATSE_ KESANG, THE KUZOO DRIVER, LOOKED CONCERNED; his lips were pursed and he was shaking his head disapprovingly. After driving Ngawang and me to the store, he was now in charge of holding the basket while I stocked my kitchen, courtesy of Kuzoo. He appeared to be unhappy with my selections. Ngawang had taken us to a little grocery on the lower road, across from Changlimithang Stadium, where the coronation celebrations would be held sometime in 2008. "My auntie owns this place," said Ngawang, waving to the woman behind the counter. The shop was the size of a small convenience store, but more chaotic in its inventory; floor to ceiling, every shelf was covered with all manner of packaged goods, from shampoo to potato chips to tea, side by side, in varying quantities. Two of these, ten of those, all teetering on top of one another. One abrupt move and it could all come crashing down. Nothing was organized in any particular way; it was as if each item, as it arrived in the store, was shoved into whatever sliver of free space was available. A few of the packages looked familiar, like Lay's Potato Chips (although I had never seen the Spicy Indian Masala flavor in Los Angeles) and, not surprisingly, Coca-Cola.
(An attempt to bottle Pepsi in Bhutan had recently gone bust—because of a management issue, the newspaper reported, not a shortage of fans of carbonated beverages.) "Those are from India," said Ngawang, seemingly unaware that the origin of these big brands was, in fact, the United States. Giant next-door neighbor India was the closest and most important source of all things shiny and refined. At least a handful of items in here were unfamiliar to me, and several dry goods like lentils were unlabeled. No matter who was buying, I preferred to consume food that hadn't traveled from so far away. It was my second full day in the Kingdom of Bhutan. I was so hepped up on caffeine and adrenaline, I hadn't had the chance to succumb to jet lag. At every turn, one of the Kuzoo staff was offering, "Tea... coffee, madam?" "Tea... coffee, madam?" (By coffee, they meant Nescafé, which appeared to be the only kind to be had here, and the presumption was that most Westerners preferred that as their hot beverage of choice.) Even after I reached my limit, I kept saying yes—the prospect of a warm mug was so inviting. The message behind the repeated gesture was clear: I was a lady; I was senior to them; I had come from quite a distance to help; and this was the simplest form of welcome. And I did feel welcomed, despite how nervous most people seemed to be about striking up conversation with this curious visitor who'd dropped into their orbit. I hadn't been alone for a minute since I'd arrived, except when I slept. The first day had been a blur, a whirl of introductions, new faces, a busy lunch with Sir Tenzin at Plums Café right across from the traffic circle in town. He knew practically everyone in the restaurant, and they all heartily welcomed his foreign consultant. Kesang and Ngawang had shown up at my door again early this morning, just as I was getting dressed, to squire me to the studio. 
I hoped to persuade them that, really, making it the half mile down the hill and around the corner to the station would be a pleasure, not a hardship. As long as I could fend off the stray dogs, I'd love that walk each day. They seemed to be enjoying the responsibility of taking care of me. On our drive to the store in the late afternoon, Ngawang appeared a bit exasperated, though, by my stream of questions. "What is it Mr. Phub Dorji does, since he doesn't run Kuzoo?" "He works in His Majesty's secretariat." "But what does he _do_ there?" "I'm not sure, really." I didn't want my curiosity to be mistaken for rudeness, so I dropped the topic. I was starting to detect that short, vague answers were typical here. "Is that marijuana growing over there?" From the window of the van, I'd spotted blankets of pot erupting on either side of the street, an occasional flower peeking through. "It grows wild all over," Ngawang said, in the same "visitors are so predictable" tone in which she'd decoded the phalluses. "We feed that to the pigs to make them fat." I hadn't caught sight of any pigs yet, but I sure hoped there were chickens. I could live without meat, but I couldn't live without eggs. As I searched for them in the tiny store, I wondered aloud what was bothering Kesang. "Is everything okay?" I asked. A nervous look had come over him after he'd whisked a half-pound bag of rice out of my hands and into the basket. I hoped I hadn't done something to offend. I wasn't accustomed to food shopping with an entourage. Back home, my neighbor Bernie and I would embark on weekly grocery expeditions together, but we typically maintained our own carts. "Kesang's worried about what you're buying, Lady Jane," Ngawang said. "Why?" "You don't have enough rice." "This is plenty of rice for just one person," I said. I'd picked up rice only because the most prominent appliance in my kitchen was a rice cooker. 
Besides, it was becoming clear that I'd get plenty of the grain whenever I ate outside the apartment. "Oh, that wouldn't be enough for us," she said. "We each eat four plates of rice a day. That bag over there"—she pointed to a twenty-five-pound satchel on the floor—"would last my family about a week." For the exact same reason that people in the richest parts of the world chose to eat little starch, people in the poorest ate a lot of it: It filled you up. Only the wealthiest could indulge in a low-carb fest like the Atkins Diet, because they had the luxury of plentiful, lean meat to fill them up instead. That the Bhutanese smothered their unpolished, pink rice in a yak-cheesy, fiery-hot chili stew called _emadatse_, and savored it three times a day, was testament to how they'd ingeniously discovered a way to live off, indeed enjoy, a low-cost food. "Are there any eggs here, do you think?" I asked. I kept peering under every well-stocked shelf and table, and still hadn't found any. "Eggs are very hard to find right now. Bird flu. No eggs coming in from India," Ngawang said matter-of-factly. "The eggs that are around are _very_ expensive." The prices had shot up to the equivalent of a quarter apiece. Oh no, I thought, as I grabbed several rolls of crepey pink toilet paper tucked beside some beer. I'd noticed with a measure of defeat that my father had been right; this bathroom essential didn't seem to be provided in the Kuzoo restroom, or any of the places Sir Tenzin had taken me to so far. Best to carry my own. "If we can find eggs, I'd be happy to pay for them myself," I said, and then I immediately regretted it. I didn't want to sound like some swaggering rich American, but I was beginning to worry a bit about the food situation. At lunchtime again today, I'd stuck a fork into a bowl of _emadatse_ and thought I might die—both from the spicy heat of it and the runny processed cheese of it.
Perhaps I'd fall in love with Bhutan and its people, but I was fairly certain I would never become a fan of _emadatse_. Locally grown foods, good; local cuisine, beware. I really needed an egg. "Nuts. What about nuts?" I asked, ticking through a mental checklist of my staple foods. Then I spotted another necessity, a couple of dusty cans of tuna fish, and grabbed them, greedily, though there wasn't another shopper in the store. The label declared _Thailand_. I figured I could bring that down to Kuzoo for lunch, once Sir Tenzin tired of trotting his new American volunteer/consultant around town during the midday meal. "Nuts are also _very_ expensive," said Ngawang, and then walked over to a package of cashews that looked pale and crusty, like they'd been sitting around for a couple of years. They were even pricier than they'd be back home; $1.50 for what amounted to a large handful. In a country where per capita income was $3 a day, of course no one had indulged. "Peanut butter?" I asked hopefully, and Ngawang held up a dusty yellow plastic tub of the stuff, from India, and handed it to Kesang. That would be useful. Then she displayed a loaf of white bread whose simple wrapper announced it was from Norbu Bakery in Thimphu. "You'll need this, too. You Westerners like bread. Nobody eats this stuff here." "Someone must," I protested, "if they make it." Ngawang's aunt shouted to her in Dzongkha. "My aunt says the cornflakes are in the back here. She says the Westerners always love cornflakes." I started to say "I'm not a big cereal person," and in truth I wasn't a big milk fan, either, any more than I was of white bread. But then I found myself savoring the idea of nice, mushy cornflakes. Though I hadn't had any in years, the memory of the familiar comforted me. Having a box on the shelf would probably be a good idea. Kesang drove us back up the hill to the Rabten Apartments and carried in our purchases. 
Ngawang immediately began unpacking them all, and while she was organizing the kitchen, managed to turn on the hot water boiler and make us tea. "Don't worry about the eggs," said Ngawang, putting some rice in the cooker so I could have something to eat later on. "I'll find you some." And I had a feeling that if anyone could find me anything in all of Bhutan, it was Ngawang. ALMOST ON CUE, as soon as we'd finished our tea, Sir Tenzin arrived to take me on a promised twilight drive of the city. He'd been enjoying playing tour guide, showing me around and bragging to whoever saw us that he had an American consultant at his disposal. Since we were about the same age, I figured I wasn't obliged to refer to Sir Tenzin as "Sir." But I did so anyway, partly to dispel any notions that Americans were boorish, and equally, perhaps, because of his commanding and intimidating presence. Had he learned this from being a school principal, or was this innate? He was taller than most Bhutanese I'd met so far, about six foot one, and his personality filled up the room. He seemed as quick to anger as he was to laugh or smile, switching from garrulous to silent in an instant and staring into space, as if he'd tuned out you and the world around him so he might collect his thoughts. Except that his demeanor didn't seem meditative as much as distracted. I wondered if this had anything to do with Sir Tenzin's affinity for _doma_, the Bhutanese equivalent to chewing tobacco. _Doma_ was a curious, tacolike packet made by slathering a leaf with lime paste (not the fruit but the caustic residue of boiling limestone) and wrapping it around a small brown nut called an areca. There couldn't possibly be any positive health benefits associated with sucking on this "delicacy." The trio of ingredients emitted a smell worse than the stinkiest cheese.
But what magnetized its users was the warming effect it reportedly had on the insides, the way it lightened the head—the same impact as a shot of whiskey, people said. Those who indulged in _doma_ insisted the high did not impede their ability to work. They also had convinced themselves that the lime paste wasn't eating away at their guts, even as doctors diagnosed a growing number of cases of stomach cancer and gastritis. (The spicy food was believed to be a culprit in those conditions, as well.) On the matter of this addictive substance, there seemed to be two types of people: those who refused to touch the stuff, and those who did so with great frequency and gusto. Sir Tenzin was in the latter category. The two camps were easy to identify; the teeth and lips of _doma_ users were stained, to various degrees, the color of blood. A light sheen of red seemed to be the mildest of the side effects. This scarlet mark did not deter _doma_ users in the slightest; they would congregate the way cigarette smokers might outside an office building in New York, furtively rushing together for a quick hit. The king had banned the sale of tobacco products several years ago, but had he attempted to ban _doma_, his obedient subjects quite likely would have staged a revolt. _Doma_ wasn't to blame for Sir Tenzin's momentary pause at the first stop on our twilight drive. We were pulling up outside a little place in the center of town when he fell silent. The sign outside the shop was unlike any other I'd seen. In pretty cursive script, it announced, THE ART CAFÉ. It was the first shop I'd seen that looked as if it could have been located in other parts of the world. A couple of cute tables were arranged out front, and at one of them, two young men sat talking, with big colorful mugs in their hands. "Is anything wrong, sir?" "No, no, just give me a moment, please," he responded distractedly. I hopped out of the passenger door. After a minute, he stepped out, too, without a word.
And as he approached the café, he swept down in a long, elegant bow. "Your Majesty," Sir Tenzin said to the two young men. They said not a word in response, simply stopped for a moment to receive the greeting, and then kept talking. Unsure what to do, I smiled lamely, curtsied, and followed Sir Tenzin into the store. Pretty photographs of prayer flags and mountains hung on the walls of the shop, a mini-exhibit. I noticed salad advertised on the menu board, and in the corner by a woodstove sat the first other non-Bhutanese I'd encountered, two women chatting away in German. From a small selection of fresh baked goods, we ordered a couple of cupcakes to go. Sir Tenzin asked for a bottle of Coca-Cola. I was dying to know who those two guys were, but I knew I shouldn't ask until we were alone. Back in the car, under my questioning, Sir Tenzin explained that one of the men was a prince, the younger brother of His Majesty the king. After spotting him, Sir Tenzin had paused to prepare himself to give a suitably formal greeting. It was clear he didn't want to talk about this encounter very much; he was far more interested in talking about his plans for Kuzoo. The car radio was tuned to the station. RJ Ngawang introduced a song by Akon, the Senegalese-American hip-hop sensation. It was probably the dozenth time I'd heard "Don't Matter" in the few days I'd been in Bhutan. "Terrible pronunciation," said Sir Tenzin, shaking his head. He had just a bit of a British lilt in his voice, acquired during his days at a Jesuit school in India. "Akon, or the radio jockey?" I hadn't exactly expected Kuzoo, in the hands of the first generation of Bhutanese to grow up with television, to be playing a bunch of traditional Bhutanese music and public-service programs, but I was a bit surprised by just how much pop music from my side of the world cascaded over the Kuzoo air. Because of the cost of production, there was little in the way of modern recorded Bhutanese music. 
Music from neighboring India and Nepal was banned from the airwaves, though it wasn't clear whether that was an official edict issued from on high or a decision made by the radio jockeys themselves. He laughed. "That radio jockey. All of them. They mumble a lot. That girl in particular. They get nervous." "We all do," I said. "I can help you with that, sir." Sir Tenzin's car chugged up a winding road that promised a sweeping view of Thimphu Valley, and led to the broadcast transmission towers for both the Bhutan Broadcasting Service and Kuzoo. The city was cast now in a twilight golden hue, the same light we'd have this time of day in Los Angeles. The insistent longing of Akon was a discordant sound track for this magnificent vista, as discordant as the presence of this city in such an otherwise starkly undeveloped country. From up here, the sprawl of the growing capital was evident. Buildings in various stages of construction emerged in the embryonic skyline, pushing the boundaries of Thimphu farther out from the center—a sure sign development had come to this place where electricity hadn't flowed until a quarter century ago. Off to the left, the majestic structure known as a _dzong_ gleamed in the light from the setting sun. Each district had a _dzong_ that served as central administration for government and clergy; this particular one in Thimphu was called "the fortress of glorious religion." A golf course wrapped around the grounds, and I asked Sir Tenzin how Bhutan came to feature a game associated with rich people and considered an environmental blight. "The third king loved to play golf." Sir Tenzin smiled, as if that was all there was to say about the matter. "It's not my game, though. I haven't got time for sports." He pointed to the spot straight ahead, way across the valley, where land had been cleared for the construction of what was to be one of the world's largest statues of Buddha—170 feet tall.
A band of light shone down on the area where the giant Buddha would eventually sit, almost like a spotlight. I squinted to be sure I wasn't imagining the gleam. "One day, maybe in my lifetime, Thimphu will have a skyscraper," said Sir Tenzin, and he laughed at his own suggestion, for it seemed so impossible. Buildings were forbidden from reaching higher than six stories, for reasons both practical and aesthetic; nothing could tower over a region's _dzong_, for one thing. For another, modern conveniences that made taller buildings possible, like elevators and escalators, were costly uses of power. Regardless of their size or function, all structures were mandated to be built in traditional Bhutanese style; even the most recently constructed buildings—like the airport terminal—were fringed with ornate, colorful woodwork and sloping roofs. This ensured work for a fleet of artisans, and at the same time flaunted the national heritage. "This is the fastest-growing capital in Asia, after all. I still can't quite believe how big we've become in just the last few years." Sir Tenzin waved his hand at all the construction. "Imagine the Kuzoo studio in a glass building. Topped with the traditional Bhutanese structure, of course." He laughed again. Even from high above the city, I could see how unlikely a proposition this was. The thought of modern buildings popping up high above the traditional ones, disturbing the order of the cityscape, was unsettling. Yet if Akon and Christina Aguilera could dominate the airwaves—if cupcakes were being baked and Coca-Cola swigged and a person like me had been allowed in—anything was possible. Just as Bhutan was undergoing a cultural invasion that threatened to erode its unique foundation, its capital city might someday, not very long from now, look like any other. Watching Bhutan change over time would be like watching a baby grow, I thought. I already felt a little maternal and protective of this unusual country, worried for its future.
Sir Tenzin continued. "You know, I wanted Kuzoo FM 90 to be Kuzoo 108," he said. The number 108 was sacred in Buddhism. It was the number of volumes in the Kanjur, the Buddhist scripture. Sir Tenzin's innate prowess for marketing was admirable, given that he was trained as an educator and raised in a country whose very foundation was anti-materialistic. "The Ministry of Information and Communication wouldn't let me have it, though. The planes use the frequency 107 to communicate. Too close on the dial, too much chance for interference." He shook his head, then looked out the windshield. "I love the BBS logo," he declared, crumbling bits of his cupcake into his hand so as not to lose a single bite. "The conch shell." He motioned toward the sign near the transmission tower. Little squiggles denoting "transmission" radiated from the sides of the shell. It was at once cute and elegant. "How did that get to be the symbol of the broadcaster?" A conch shell seemed a peculiar icon for a landlocked country. "The conch is one of Buddhism's eight lucky signs. It is believed to awaken sentient beings from their sleep, their state of ignorance," Sir Tenzin said, licking his fingers. "Also, back in the villages, the conch was how people used to make announcements." "Before BBS, and then Kuzoo, of course." "Yes." Sir Tenzin smiled and washed down his last bite of cupcake with a swig of soda. A combination of the sweets and the excitement of his new mission gave Sir Tenzin's face a look of contentment. It was clear he believed he had the best job in the world. # 5 # GOD OF THE NIGHT IF YOU WALKED INTO ANY VILLAGE IN ALL OF BHUTAN and shouted "Karma," a quarter of the heads would turn. There are only about fifty names in the whole country. A monk blesses a baby with a duo of them shortly after birth. There are no familial surnames, and most names are unisex.
So it is entirely possible that a family could be made up of a mother named Karma Wangdi and a father named Karma Lhamo, a child named Karma Choden, and another named Lhamo Wangdi. It is only in the last quarter century that birth certificates have been kept; many Bhutanese older than that don't know their exact date or even year of birth, another reminder that even today in the bustling city of Thimphu, a simple, rural life isn't so far away. As Bhutan becomes more modern, some of the more daring Bhutanese parents break tradition in order to distinguish themselves, altering the spelling of familiar names or abbreviating them. Or by forgoing the monk and choosing the names themselves. Tsheten Denkar was the monk-given moniker of the Kuzoo radio jockey who'd since adopted a sexier handle: Pink. Her new name had its roots in her work as a DJ in Thimphu's blossoming party scene. Even as she juggled her ever-changing shifts at the station, she continued to work the booth at Club Destiny on party nights: Wednesday, Friday, and Saturday. Her long hair was permed and highlighted with streaks of brown and blond, her lips perennially glossed and pouty. Pink had carefully crafted her image as a Bhutanese disco kitten. The name was an invisible but key part of her transformation into a sensation in the club and on the air. Off air and out of the nightclub, things were not going well for the twenty-five-year-old woman formerly known as Tsheten. After seven years, her marriage was unraveling. Marriage had long been a very casual institution in Bhutan; a couple declared themselves married when they started living together, and unmarried when they stopped. Now, the ways of the West were imposing on this tradition. More elaborate ceremonies were becoming common, as were more acrimonious, complicated divorces. Pink's situation was so strained that her mother had sent for the family's monk for guidance.
The family was also facing another life-changing issue they wanted the monk to address. Pink's sister, Tshering, had prayed and prayed she'd get a job as a flight attendant on Emirates airline. Her fervent appeal had worked. Like Pink, Tshering had one leg in the modern world. But only one. Like many single Bhutanese women, she still slept in the same bed with her mother at their shared apartment. Now she would be moving to Dubai and flying to exotic ports heretofore accessible only in her dreams. No more tediousness of going back and forth, day in and day out, serving on the only flight the Bhutanese airline ran daily: Bangkok to Paro. Paro to Bangkok. As an exotic and costly tourist destination that attracts the famous as well as the rich, Bhutan guaranteed the staff of Druk Air the occasional celebrity sighting—Matt Damon, Orlando Bloom, Bette Midler, and Demi Moore had been among the recent stars to visit. Better still was the honor of serving members of the royal family who might happen to be on board. Relocating to another country and going to work for a foreign airline would expand Tshering's world, and the idea of living away from family for a while was alluring—even if to do so for anything other than education was considered by many to be very un-Bhutanese. The pay would be exponentially better than what she earned now, more than enough to allow her to save money, which would never be possible at home. The monk determined that both sisters needed to be cleansed so they could proceed. A series of _puja_s was in order to ready them for the immediate futures they faced. He'd be in town for several weeks to accomplish his plan. _Puja_s are special prayers performed by holy men to give additional heft to a message you want to transmit to the gods. These holy men have the expertise and wisdom—the divination—to know which gods need to be sought out, which chants are necessary to best help remove whatever obstacles are in the way.
The intensity and length of the _puja_—how many monks are needed, and for how many days—depend on the severity of the situation. Moving into a new home requires _puja_s for the old place and the new. New jobs and the ending of relationships qualify, too. No one in Bhutan questions anyone who misses school or work because they have to attend a _puja_. The ceremonies are considered a normal part of daily life. How prevalent they are is evident as you walk the streets of Thimphu on most any day. The moan of bagpipelike horns floats in the air, punctuating the throaty chants of monks and mingling with and occasionally drowning out the more mundane sounds of honking cars and barking dogs that comprise the aural cityscape. Buddhism, I was learning, was far different in this nation where the religion was dominant than it had appeared back home, far more complex than the yoga, meditation, vegetarianism, and fat smiling statues of Buddha that Westerners typically associate with this peaceful and mystical religion. The Buddhas in Bhutan, in fact, are skinny. A life of overindulgence and grandeur was exactly what the original Buddha, a prince who renounced his fortune and set off in search of enlightenment, opted to leave behind. Here, yoga isn't part of the layperson's spiritual practice; rather, it's an exercise class taught three times a week at the hospital by a German doctor eager to help chubby Bhutanese ladies slim down. As for meditation, most young Bhutanese dismiss it as something their parents or grandparents do, but not them. "Too boring," they say. Culturally, Bhutan-style Buddhism is ubiquitous, embedded in daily life. There is absolutely no separation of church and state in Bhutan; the lower half of the country's flag is orange to represent the religion. The government funds many of the monasteries, and each district's administrative seat, the _dzong_, also houses monks. Virtually every home features an altar, housed in a room of its own if space allows.
A step at the bottom of every doorway trips up unwelcome spirits. Old men and women walk the streets spinning handheld prayer wheels, lost in the murmur of their chants. Even television programming is infused with the religion; every morning at 6:00 a.m., the Bhutan Broadcasting Service kicks off its day with prayers chanted against a backdrop of scenic video clips of the country's spectacular landscape. New construction—schools, residences, a government-sponsored park—isn't inhabited or put to use until it is consecrated by monks. Religious holidays, such as the First Sermon of Lord Buddha or the Birth of Guru Rinpoche, dot the official calendar. Specially trained monastic astrologers are consulted about virtually every aspect of life—illness, marriage, trouble that befalls a family, major decisions. Draped across the landscape of Bhutan are endless ribbons of color, bright prayer flags hoisted as protection against certain gods, to coax others forward, and to repel bad spirits. These fluttering squares, in various states of fade and tatter, are almost as plentiful as the trees the king has enacted laws to protect. They're hung in places that seem impossible for a human to reach; it's believed that the closer they're raised to the heavens, the more effective they will be. The flags fly until they disintegrate, which can take years. Their tattered remnants are lingering reminders of the human call to a higher power—and of how this religion pervades the very air. To the uninitiated, the rules and rituals associated with Buddhism as practiced in Bhutan might seem absurd—elaborate and colorful and rife with inexplicable superstition. Circle a religious structure three times clockwise to accumulate merit. Circle on this auspicious day, and your merit will be doubled. If your family's monk or astrologer advises against travel, but there's no way around making the trip, you pack your suitcase early and leave it outside the door to trick the spirits. 
You might also carry printed prayers in your pocket or your purse, for extra protection. Animals cannot be slaughtered during certain holy months, but it's acceptable to purchase and stockpile meat in advance to consume during that time. If meat is consumed, it is preferable to choose cow over chicken, for chickens feed fewer people, and the bad karma from killing is lessened when more people benefit. Most compelling to me were the underlying principles of the religion: Compassion for all beings, and the interconnectedness of everyone. The ideals of wisdom and knowledge. Self-reliance. Acceptance and forgiveness. What you possessed and achieved wasn't what was important. These were the principles I'd learned in happiness class, writ large. Most religions espoused similar values, but there was something about the Buddhist approach to delivering the message that spoke to me, a decidedly lapsed Catholic. The holy men and women roaming the streets of Bhutan's capital city reinforce the messages of the faith. They wear burgundy robes and sport close-cropped hair; they usually travel in clusters. Some live in the monasteries or the sole nunnery in the hills above the city. Others come from the outlying areas to stock up on supplies. While monks are as common as birds, a special kind of monk always commands particular attention and respect: a Rinpoche, which means "precious one," and most with that name are recognized as reincarnations of lamas. They're distinguished by a ribbon of gold fabric worn around their burgundy robes. So it created quite a stir when the Rinpoche who'd traveled to Thimphu to attend to Pink and Tshering walked into the offices of Kuzoo FM. It was my second week in the office, and I'd been spending my days talking to staff, helping them with simple problems like English pronunciations, and answering questions about my life back in the United States. Rinpoche held himself like a man who had been told from an early age that he was special. 
But for the robes, he looked like a hipster thirty-year-old, with spiky black hair and a confident strut—a guy from town stopping in to guest host a show, or say hi to his friends. When I spotted him in the hallway outside the Kuzoo workroom and studio, I waved hello, then immediately hoped I hadn't offended him with my casual gesture. But he waved back, sweetly, undisturbed, as I was obviously an outsider and not schooled in monastic protocol. Beside him was a dumpling of a lady who, judging by her features, had to be Pink's mother. She beamed proudly, delighted to be seen in the company of someone so holy. The arrival of a person of seniority typically compels Bhutanese to rise and politely but cursorily bow their heads and say, " _Kuzu zampo-la_ , sir." But when the gold-fringed Rinpoche stepped into the room, the young Kuzoo FM staff immediately stopped what they were doing, rose, and approached him, single file, heads bowed. Mechanically he reached inside his robes, produced a handful of thin silk cords, and placed them on each head that paraded by, every gesture accented with a little hum of a prayer. It was a simple blessing, as reflexive as genuflecting and crossing before you entered a church pew. After receiving the blessing, the staff returned to their seats and proceeded to continue whatever they were doing, appearing now completely unaffected. I was rapt. I couldn't take my eyes off Rinpoche. He must have felt my stare, as he soon motioned in my direction, while he addressed Pink in Dzongkha. "Oh, that's Lady Jane, from California and New York," she told him in English. People seemed more familiar with my home state than my adopted one, so I'd started answering the question "Where are you from?" with a dual response. "New York," said Rinpoche. "That is a great country." He paused, as if he was imagining the distance. "My father went to New York once.... Come here." I accepted the summons and rose from my chair. 
As he'd done with the others, he adorned me with the red silk cord. Mimicking the other Kuzooers, I bowed my head, low. But I'd failed to notice the second half of the ritual—pulling the cord forward and tying it around my neck. Rinpoche laughed kindly as Pink intervened, completing the task. With barely a move of his arm, Rinpoche then pulled a three-foot fringed white silk scarf from inside his robes and threw it around my neck with a nearly invisible flick of the wrist. A standard greeting for an honored guest. He looked toward Pink and murmured in his native tongue.

"He says that's because you did many good deeds in a former life," Pink translated.

I smoothed out my hair from under the red silk blessing cord and adjusted my scarf. I found myself considering the idea of past lives: I liked this notion that it wasn't only cats who got nine chances. That whatever goodness we might accumulate in one lifetime would influence where we went in the next. But what about the bad things we'd done? Was it possible to transcend them? Were our next lives determined by an average of this life's actions? Maybe in my next life I'd have a talent for speaking other languages, or a gift for playing guitar. Maybe, in the next life, I would get it right: enter a profession where I did some good for humanity—become a teacher or a scientist or a social worker. Or fall in love with a wonderful man and be a full-time best-mother-in-the-world to our many adopted children. As I dangled in this fleeting daydream about the laments of the past and the unknowable future, something occurred to me: This life right now, as a forty-three-year-old woman from Brooklyn, New York, who had moved to Los Angeles for a job in public radio, who was temporarily residing in a remote kingdom in the Himalayas, with all the strange, wondrous, and sometimes awful chapters that had led me here, was pretty okay. With each passing day, a little more than okay.
I sat on a rickety chair right next to Rinpoche, hoping that a little of his spectral radiance might wash over me. Sir Tenzin entered the room and the staff rose to greet him, as if they were addressing the teacher in grade school. "Hello, sir." Sir Tenzin didn't respond, but started speaking to Rinpoche in Dzongkha, as if he'd been expecting him to be there. Rinpoche didn't engage in formalities, either. "I need some rice," he said in English. Sir Tenzin rushed back out as quickly as he'd rushed in. In moments he returned with a small bowl of uncooked grains, from which sprouted four sticks of incense. Sir Tenzin presented this offering, formally, with both hands. Rinpoche accepted it and rose. He walked to the other side of the room, lit the incense, and sat down near the transmitter that beamed Kuzoo's signal out to Thimphu Valley, the technical heart of the station. Some of the Kuzooers who'd been glued to their desks now stood. A few others remained seated but turned away from their computers to watch. Pink's mother giggled a bit, like a teenager, her cheeks flushed red. Sir Tenzin stood proudly next to Rinpoche, who closed his eyes and began to chant, presumably for the station's well-being and success. It was a murmuring hum; it sounded like a longer version of the blessing that accompanied the cord. The incense burned bright and strong, wafting around the room with its powerful, earthy smell. Every few measures, Rinpoche would toss a few grains of rice in one direction, then the next, as if to scatter his prayers evenly around the room. Though I didn't understand the words, I felt moved by the ritual, by the power of Rinpoche. The prayers continued for about fifteen minutes. A few of the Kuzooers impatiently stroked their keyboards, eager to get back to work. The station phone rang, and Pema scurried over to answer it in a whisper, lest it keep bleating in interruption. At last Rinpoche gave a final bow, indicating he was through. 
Grains of rice were strewn across the tattered burgundy floor covering. I had just witnessed my first _puja_. As everyone resumed their positions at their keyboards and settled back into work, Rinpoche crossed the room and addressed me solemnly. "Is everything okay?" he asked in perfect English. What do you say to a holy man when he asks you that question? Did he want a real answer, or was this the Bhutanese equivalent of the empty American query "How are you?" Could he tell I'd been struggling? Someone with the spiritual powers of a reincarnated lama could likely feel from a distance that I'd had a disastrous few years. I didn't know what to say. We were in a cramped room, surrounded by young people I was supposed to be teaching. How much did I want them to know about me? To answer Rinpoche honestly, I mustered up something to the effect that I was okay, yes, but "searching." I'd never used that word before—I thought it vague and pretentious—but now it felt honest. What exactly I was searching for I couldn't quite say. A plan for the future? Cleansing? Peace? For a moment it wasn't clear if Rinpoche had understood me. Then he called out for a piece of paper and a pen. "Call me if you'd like to talk more," he said. And slowly and deliberately, he scribbled down his cell phone number. In case I forgot whose number it was, he added above it in block letters: _RINPOCHE_. LATER THAT AFTERNOON, Kuzoo still basked in the glow of the blessing. Pema and I sat in front of our respective computers researching information for a new show that would begin that night. It was to be called _The Doctor Is In_. Sir Tenzin had run into one of Bhutan's two psychiatrists the day before and, visions of CNN in his eyes, had corralled him into visiting the Kuzoo studios to cohost a weekly call-in show. "We can be like King Larry," Sir Tenzin said brightly. Or rather, Pema could be. 
Because she was so diligent and productive and interested in being all over the airwaves, I'd taken to calling her Oprah, even though she couldn't have been more physically opposite from the superstar. Pema stood no more than five feet tall and weighed perhaps ninety pounds. Her cheeks were freckled like a midwestern farm girl's, and her long hair hung down past her shoulders with a slight wave. She frequently fiddled it all into a bun as she stared at the computer. Maven of popular culture that she was, she didn't have to ask who I was referring to. Early the previous morning, I'd caught her surfing the Neiman Marcus Web site for Burberry pocketbooks. Even the tiniest one cost triple her monthly salary.

"Where did you learn about Burberry, much less Neiman Marcus?"

Pema turned in her chair and looked at me as if I were a clueless idiot. _"Sex and the City,"_ she said. "A friend brought the DVD from India." She didn't believe I'd seen the show only once; she seemed to assume that on the series' native soil, each home would be blessed with a continuous feed of episodes.

In a country where many preferred to be treated with the holistic and spiritual tradition of Bhutanese medicine, psychiatry was an alien concept. Bhutan was still a place where people actually talked to their families and friends about the trials and tribulations of daily life, and trusted that whatever haunted them would somehow work itself out. Saddled by worries or problems, they'd deploy the monks and Rinpoches. Radio was the perfect medium for the psychiatrist to make his services known—to explain how therapy worked and what kinds of issues it addressed. As was the case with all health care in Bhutan, traditional and Western, visits to doctors were free.

"Let's see," said Pema, orderly and matter-of-fact. She'd already chosen the theme music for the show, Michael Jackson's "Heal the World." (Presumably for the title, and not because she saw the pop singer as a paragon of mental health.)
Now she was plundering the Web for background information to explain the first topic Dr. Chencho wanted to address: anxiety. "This says there are five main types of anxiety disorders." She was reading from the Web site of the National Institute of Mental Health, the first thing to come up on her Google search. "Generalized Anxiety Disorder, Obsessive-Compulsive Disorder, Panic Disorder, Post-Traumatic Stress Disorder, and Social Phobia." None of these seemed to be an affliction with which Pema could have been personally familiar. She was magnificently confident, the type of confidence that could edge into bossiness. "Do you know what any of that means?" I asked. "Yes, it spells it out right here," snipped Pema, who I feared would read on the air the whole list and their descriptions word for word. Across the room, Pink sat absorbed in a musical otherworld, headphones pressing down on her long wavy hair, scanning tunes for her next show. This didn't keep her from hearing her cell phone chirping, a special ringtone I'd not heard it play before. She answered, then walked over to me. "It's for you," she said, as if I got calls on her number all the time. I took the phone, surprised. "Hello?" I said. "Let's have dinner tonight." The gravelly, accented English of Rinpoche was commanding. "Pink will bring you to my hotel." The prospect of a private audience with this monk intrigued me. Maybe he'd had a vision he wanted to share that could shape my future. I wasn't sure I even believed in the idea of a vision, or even of healing prayer. I had no idea what I would ask or say or expect. But I figured you should never refuse the attention of a holy man—especially when he calls you. From the backseat of Pink's little car, I witnessed dusk sweeping over the skies. Ngawang had also come along for the ride. I liked these two women so much, even though I didn't really know them yet. The streets of Thimphu began to bustle as day turned into night, filled with life. 
Shops were full, and business was particularly brisk at the snooker parlor, where players wagered on their games, eagerly leaning across crowded tables. As we navigated the streets on our way to meet the mystic, I allowed myself a moment of pride for my adventurousness. When I was about the age of my companions, something happened to me that could have convinced me never to venture out again. When I stopped to consider what had happened, it astonished me how far I'd come. IN THE SUMMER of 1981, when I was seventeen years old, a chance discussion with a friend on a subway platform tipped me off to the existence of a brand-new cable channel called CNN. The outfit was new and so small-time that simply by making a bold phone call to the number listed in information, I landed myself an internship at the New York bureau. That led to another, and another, and finally, when I got out of college, it was by default the place where I sought full-time employment. For the princely salary of $11,000 a year, I moved to Atlanta to work at the network's world headquarters. I'd never been to the city, so I relocated there sight unseen, as there wasn't time or money enough in the fledgling network's budget, or mine, to first check the place out. Back in its early days, CNN didn't wield enormous influence on the world stage as it does today; then it was disparagingly referred to as Chicken Noodle News. Most everyone at work turned out to be just like me—young, ambitious, from somewhere else, not long out of school. Our jobs and hours were constantly changing, but in spite of the flux, we cobbled together the kind of accelerated support system that develops when you're in an intense and demanding situation. Drinking beer at eight in the morning, after slogging through exhaustion on the overnight shift—both are excellent bonding rituals. One June night, about a year after I'd started working there, I returned home from a birthday party for my friend Michael. 
It was after 1:00 a.m., quite late, considering that I had to report to work at 8:00 a.m. for the day shift. Just a few weeks prior, I'd moved across the building's courtyard to my very first apartment without a roommate. The place wasn't fancy in any way, but it was all mine—a sweet little studio with a claw-foot tub and French doors that separated the living area from where I slept. It was cheap and close to work, even if the neighborhood was a bit so-so. The landlord had yet to unstick the windows, which had been painted just before I moved in, so they were stuck open a few inches. My several attempts to push them down failed, but it was hot enough that I hadn't called again to complain. Those cracks kept me from suffocating in the oppressive summer heat, as a window-unit air conditioner was beyond my means. Exhausted from a long night, I stripped off my clothes, collapsed naked onto the futon on the floor, tucked my eyeglasses underneath the edge, pulled up the top sheet, and fell right to sleep. I've never been able to calculate what time it was when I was wakened by a loud thud. At first I was certain the noise was from a picture falling off the wall. I'd hung a framed poster of chili peppers in the kitchen and it had fallen in the middle of the night the week before, too. I hadn't quite gotten the hook in the wall right. So I shifted my position on the futon and figured I'd rehang it in the morning. Then I heard another noise. In that instant, I became aware of movement across the room. It sounded like a window being forced open. No, that couldn't be. Blind as a bat, I fumbled in the pitch dark under the futon to excavate my eyeglasses so I could confirm that this was my imagination at work. Where had I put those specs? No way, that couldn't be a person. My heart and stomach felt it before my head accepted the fact. An intruder had entered the room and was now headed toward me. 
A scream emerged from my throat, so loud, just a pure scream, no words, no "help me." The paralysis of terror took hold. As I write this, I can feel the wave of adrenaline rushing through me, hear that sound I made so long ago. It was more of a reflex, a reaction, than a cry for help. A hand locked over my mouth to silence me. I felt a pointed object against my neck, and the man reinforced this action with words. "Shut up," he said firmly, quietly, "or I'll kill you." Even if I'd been bold enough to defy him, I didn't have the capacity to continue making any sound. My nakedness, my impaired sight, the fear that I might die all combined to render me silent, terrified, incapable of movement. I wished I could be dead, that this man would kill me, so I wouldn't have to live with this memory. The only act of self-preservation I could muster, as he raped me, was to beg him not to ejaculate inside me. To please not make me pregnant. There was no retribution for my daring to speak; he complied with my request. When he was done, he apologized. As he pulled up his pants and zipped them, he said he was sorry we had to meet this way, sorry if he hurt me, hopeful we might see each other again. Then he left through the front door, as if he'd been an invited guest. WITHIN A YEAR I'd accepted a position at a television station in central North Carolina. Moving to work in local news in a midsize market wasn't exactly the career trajectory I'd intended. Yet a smaller city, a new city, a city that didn't remind me of that night—all seemed like a good idea. My new job involved producing the eleven o'clock nightly newscast, which meant leaving the studio at just after 11:30 p.m. for the drive home. Perhaps it would kick-start me into making peace with the dark. It was easy to walk to the parking lot with my coworkers, without seeming needy or explaining why I didn't want to go outside at night by myself. 
But at midnight as I pulled into the parking space at my apartment building, the stillness, facing the quiet of the night on my own, would make me sweat. I'd dart out of the car, heart beating fast as I sprinted inside. Once I got safely inside, I'd turn on every single light and keep them burning until dawn, as if electricity would shield me. I'd rationalize: Wasn't there a zone of protection offered from above to people who had been through a trauma? Just one trauma per person per lifetime, right? I couldn't fully convince myself that this was the case. As the months passed, I continued on with my double life. Most people around me saw a confident young woman, even if they couldn't understand why I'd left a national network—CNN was starting to gain notoriety, by now—to work in this small city in North Carolina. One weekend night, I needed milk from the grocery store. I grabbed the keys, ran down the steps, and got into the car. On the quiet road that connected my street with the one leading to the shopping center, I stopped for a traffic light. At that instant, the magnitude of this simple act occurred to me: _I did it! I left the house! It's dark out, and I left the house!_ When the light turned green, I was so happy that I was crying. In the aisles of the grocery store, I didn't even try to conceal my tears. I was reclaiming my life, my confidence, that feeling of normal we take for granted before the unexpected turns us inside out. That night before bed, I gleefully switched off every light in my apartment. I fastened the chain locks, but didn't stick chairs under the doors. As I eased into sleep, I breathed steadily, softly: Now I could get back to life. And twenty-two years later, here I was in a car in Thimphu on the way to meet a holy man, the Rinpoche. And in some strange way it is because of that night, not despite it, that I could be here. 
Pink navigated into the rocky parking area, stopped the car, and we emerged into the darkness of the winter evening. THE HOTEL TANDIN is a run-down little place at the top of a five-story building, just up the street from the traffic circle in the center of town. We marched up those five flights of stairs, and Pink knocked on a door just outside the bar. The sounds of chanting seeped into the hallway. "He's meditating," said Pink, shrugging. She quietly cracked open the door and motioned for Ngawang and me to enter. The room was large and in its center were two single beds, side by side. Rinpoche sat on one of them, legs crossed, continuing his chants, seemingly oblivious to the interruption. It felt as invasive as walking in on someone in the bathroom. I sat down in a chair near the door, far across from him, so as not to disrupt his privacy. A very long five minutes later, he snapped out of his trance. He raised his head and offered a cursory greeting, then snapped his fingers and started demanding help. Though he was speaking Dzongkha, his tone translated perfectly: He was ordering Pink to do something. Flustered, she sifted through a bag of supplies on a little shelf outside the toilet, and dug out a little white bottle. Rinpoche held back his head as if he were an injured pony. "The pollution," Rinpoche explained, his chin in the air and his gaze fixed upward as Pink dutifully nursed his eyes with droplets. "It's terrible here in Thimphu. Now move over here, and tell me." He motioned that I should come closer, to the twin bed opposite where he sat. As soon as I was in place, he looked at me deeply, intensely. I didn't know what to say. Now that we were here, I didn't want to tell him anything. Whatever mystical air Rinpoche had around him, whatever I had hoped he might imbue me with or chase away, the mood was broken by his demeanor, the surroundings, this sad little room. 
That finger-snapping incident revealed him as a boor, not the paragon of kindness, compassion, and understanding I'd imagined a monk would be. Here we were, though, across from each other, his expectation building. I spoke, but not as openly as I had planned.

"It's a very difficult time for me. I feel like everything is up in the air, in transition. Like I'm in transition."

Rinpoche spoke in Dzongkha, and Ngawang interpreted. "He says it is clear you have many obstacles facing you back home," she said. "Particularly in your work. There is someone there who is working against you. You cannot move forward with the current situation."

"Okay," I said. What he was saying was true. My job and the nature of the industry were wearing on my soul; several coworkers were obstacles, too, and the on-again, off-again overnight hours were physically exhausting. But those weren't the problems. Time hadn't healed all wounds, but it had smoothed them out a bit. It was up to me to do what I'd done all along, just keep moving forward and being open, aware. Kind. The only thing holding me back was me. I had answered the call of this holy man, seeking salvation, explanation, or a road map for my life. But as I looked into his eyes, I saw a clear message, even if it wasn't the one he was trying to convey. He wasn't the answer. He didn't have the answers.

Ngawang broke the silence. "He says right now there's a _puja_ taking place in Sikkim where there are many monks." The Indian state of Sikkim was hours away. Could a _puja_ be conducted remotely? When you needed to be cleansed, didn't the monks come to you—pray and chant in your presence? Or was it simply enough to trust in their power?

Ngawang's voice was tinged with skepticism as she obediently filtered Rinpoche's words. "He says the intensity of your obstacles requires that you hire seven monks and a lama for three days. You will also have to pay for their meals."
She glanced down at her cell phone, which was buzzing with a text message. Rinpoche scolded her in Dzongkha.

"That sounds pretty expensive," I said. "How much?"

"He says it will be three hundred ngultrum a monk each day, plus eight hundred for the lama." Ngawang seemed to possess a great faculty for numbers, especially when it involved currency conversion. She stopped to calculate, and to glance at Pink. Both girls looked surprised. "Eight thousand seven hundred ngultrum."

I'd just cashed $150 worth of traveler's checks at the Bank of Bhutan in the center of town and had received 6,500 ngultrum. Pink and Ngawang each earned just 5,000 ngultrum every month for the long, erratic hours they logged at Kuzoo. Sir Tenzin counted it out in cash to each employee on the first of the month. Eighty-seven hundred ngultrum was a lot by any standard, even for salvation. What would this _puja_ cost if I were Bhutanese? I wondered. The girls seemed keyed into this—embarrassed, even. Just because Rinpoche smelled a rich American didn't mean they did. I was their coworker, their friend.

"You can think about it," said Pink sweetly.

"Well, I don't really have to think about it. That's quite expensive."

Rinpoche chattered a response in Dzongkha. Ngawang relayed, "He says if you pay a hundred dollars, that will be okay." Her smile told me she was on to the extortionate monk.

For a split second, I worried this Rinpoche might pray to the wrong deities if I refused. Make my obstacles more intense. Then I remembered that I didn't believe in spells. Superstitions weren't the parts of this religion that made sense to me. What I'd been discovering, ever since that happiness class back home, about self-awareness, self-reliance, and compassion did. I looked straight into Rinpoche's eyes, and smiled.

"Thank you very much for the consultation," I said politely. "But I'd like to offer you something for your time and just go have dinner now."

Eyes wide, Rinpoche spat out some words in disgust.
"He says he is not a businessman," Ngawang said, looking up from her cell phone as if to punctuate Rinpoche's displeasure. "He doesn't want any money." The air in the seedy hotel room was thick and tense. I wanted to leave, but out of respect for my friends, I decided to swallow my frustration. I rose tentatively, afraid of seeming too abrupt, and the girls got up, too. Rinpoche collected his cell phone, a sleek Motorola RAZR like the one I had back home. He commanded us to wait for him outside. At the noodle shop just behind the sole movie theater, in the center of town, Rinpoche slurped at his soup while watching cartoons on the television that bleated in the corner of the restaurant. Like an insolent little boy ignoring the grown-ups at the table, he fiddled with his cell phone. I made small talk with the girls. And when I asked the waitress for the check, Rinpoche waved his hand at me dismissively. "You pay next time," he said clearly in English, and reached into his robes for his wallet. A WEEK LATER, Ngawang and I snuck out of the station before the afternoon hip-hop show that she hosted to visit her older sister, a doctor, at the hospital. The biggest workplace problem in Thimphu wasn't yet that people surfed the Net instead of doing their jobs. There weren't enough computers, and online access wasn't reliable enough for that to be an issue. What chomped into the meat of the workday was the inevitable, interminable family visit. All day, every day, mothers, sisters, cousins, and boyfriends would just stream into offices, schools, and shops to say hello, drop off lunch, have a cup of tea, or just hang out. Because of this, I had met more family members of coworkers in Bhutan in a few weeks than I had in three years of working with the crew in Los Angeles. The fact that one of Ngawang's sisters was practicing medicine didn't mean she was off-limits. Someone would bring us tea in the examining room while she was seeing patients, Ngawang assured me. 
She did it all the time. There wasn't a sense of modesty or privacy in Bhutan; everything was communal. And there certainly weren't any doctor-patient confidentiality laws. About three quarters of the way to the hospital, we came to the National Memorial Chorten, at the confluence of the upper road and one leading into town. Every day, hundreds of people flocked here, from dawn till well after dark, to circumambulate this enormous and sacred religious structure—a form of walking prayer that accumulates merit with the gods. Ngawang guided me clockwise around it the requisite three times. Sometimes, she said, she prays simply because she knows it will make her father feel better. "I don't believe in it," she said. "These things are just part of who we are. It's just what we do. My father asks me to pray to the god of the night, so I do. He believes that the soul leaves the body when you sleep, and if you don't pray, it might not return." She paused as we made our way around for the third circle. "That's what happened to my mother." I grasped Ngawang's arm. "Do you miss your mother, Lady Jane?" Ngawang asked. The screen saver on my laptop was set to a smiling and gorgeous thirty-year-old image of my mother that my father had dug up and digitized not long ago. Ngawang loved learning that "Jane," my middle name, was also the name of the lady in the photo, the woman who had given birth to me. The original Jane, I told her. Of her mother, she had just one tiny black-and-white shot, only an inch square—smaller than a passport snapshot. Twenty years ago in Bhutan, photographs had been as unusual, and as dear, as electricity and telephone service. The young woman in the picture looked identical to Ngawang. There wasn't any money in her wallet, just her national ID card and this memento. "She is the same age in this picture as I am now," she said. I could feel my heart tweak for how much she missed the woman she never got to know. 
"I do miss my mother," I said as we made our third round, surrounded by other worshippers. "But I haven't seen her in a long while. I can see her only a couple of times a year." "What do you mean?" "Well, she lives three thousand miles away from where I do." "You don't live with her?" Ngawang sounded very surprised. Though the Bhutanese knew from the movies that many Westerners didn't live with their families, it still surprised them to meet someone in person who was a case study in this curious way of life. "No, I don't. I haven't lived with my family since I was sixteen years old." I explained that in the United States, kids often couldn't wait to leave home, and I realized how foolish and sad that must have sounded. Especially to someone who had lost her beloved mother so young. "You must get very lonely," Ngawang said. "If you ever get lonely here, you call me and I'll come stay with you, okay? I'll keep you company, my sweet Lady Jane." I promised I would, even though it would be impossible for me to convey what a triumph it was that I could not only stay alone but actually enjoy it. The god of the night could have captured my soul, but he lost. Ngawang pointed out a shop I hadn't noticed before, tucked off the street across from the chorten. It was one of the few shops on the upper road, and one of the only places in Thimphu where you could find fresh baked goods—like the ovens they were baked in, a luxury in Bhutan. We darted across the traffic and made our way into the shop to get a snack. A nice-looking cookie for me. An enormous bear claw for Ngawang. And so, two new friends chomped on sweets. It felt like the right time to ask the question I'd wanted to ask for days now. "What did you think of that Rinpoche, Ngawang?" "He was really handsome, wasn't he?" Ngawang said, perking up as if she had a crush. "Handsome? He's a monk!" "Some monks can marry, though." Ngawang smiled, and stuck another piece of pastry into her mouth. 
"Well, I hope you don't marry that monk. Do you think I made a mistake by not letting him do the _puja_ for me?"

"No way," she said, wiping powdered sugar from her mouth with the back of her hand. "You did the right thing. That didn't seem right, what he was asking." She took another bite, and some crumbs fell to the floor. Her phone trilled after being silent for a long while. The James Bond theme song, which meant her father was calling.

"You know. You can't trust all the monks just because they're monks."

# 6 #

BHUTAN ON THE BORDER, OR, THE START-UP COUNTRY

A CITY IS AT ITS BEST, ITS PUREST, AT DAWN. EMPTY, raw. You can see the veins of it. Before the rush of the day begins, it seems more Hollywood set than reality. In the slow pace of morning, Manhattan is almost quaint. Bleary-eyed dog owners, some wearing pajamas, sleepwalk as their charges take care of business. The stillness punctuated by the whine and whir of a steady, slow parade of garbage trucks or the squeal of kneeling buses. Washington, D.C., even at its busiest, is slow by comparison. But its scrubbed-clean, stately charm takes on a special sort of majesty in the early morning light, as if the buildings were preening for admiration. Downtown Los Angeles is more alive at 6:00 a.m. on a weekday than at most other times; cars streaming into parking garages, suited throngs marching to their high-powered jobs in international finance and law. But on weekends, when it's early, the streets are so empty it feels as if there's been an evacuation. At dawn any day of the week, eighteen miles away in Santa Monica, the beach is desolate, too, but for determined joggers and the homeless men still sleeping beside the trees. As the sun rises in Paris, the scent of freshly baked baguettes fills the streets, although no store is yet open so you can enjoy one. The canals of Amsterdam are still and shimmering in the early hours, and the houseboats rarely look so inviting.
Once, as I wandered aimlessly, wired by jet lag, a crazed man chased me through the streets of the Jordaan district, shouting at me in Dutch, the only sign of life I've ever seen in that neighborhood so early. In Greece to attend a very special anniversary party, my mother and I meandered through Athens in search of an early breakfast, the smells from the fishmongers assaulting us as gulls hovered overhead. Bangkok never seems to slow. Even at the airport at 4:00 a.m., the twenty-four-hour massage joint is as busy as a nail salon in New York on Saturday at noon. And then there is Thimphu, the capital city of Bhutan. A city in a country so remote, so undeveloped, that it isn't quite sure what to make of cities. A city whose population has quadrupled in the past ten years. Even as it grows exponentially, and especially in the early light of morning, Thimphu feels a bit like a gangly village, rather than the bustling hub it becomes once the day is under way. It is six thirty in the morning, in the dead of winter. It's chilly, but not as cold as this season could be, high in the Himalayas. Six scraggly stray dogs wander aimlessly in a pack, crisscrossing the street, oblivious to what little traffic there is. A lone luxury SUV rumbles past, probably a dignitary being driven to the airport or his office. As part of their compensation, ministers are given these fancy vehicles, which only a handful of private citizens can afford. A truck barrels by. Twenty Indian men stand close together on the flatbed, being driven to a construction site for the day's labor. A few Nepalese ladies stoop to sweep the street with short-handled brooms made of straw. Their eyes are heavy, as if they've already logged eight hard hours, even though they've just begun the day. A baby strapped to one woman's back somehow manages to doze through the action. The sidewalks are pockmarked with infinite cracks and holes and splats of bright red spit, remnants of the ubiquitous _doma_. 
It's a risk to walk almost anywhere in Thimphu and not fix your gaze downward to watch your step. But if you do, you are robbed of the vista of the most lovely Himalayan sky. Clouds hang low like tufts of cotton candy, hugging the mountains and giving the valley the air of a fairy tale. There are no traffic lights in this city. The starched cop who directs the cars with his tai chi–like moves is not yet on duty. There's no need. This early in the morning, you could walk in the center of this street fearlessly. The snooker parlor, jammed with agitated players well into the evening, is shuttered now, as are the fabric stores, bars-cum-restaurants, beauty "saloons," and the shoe shops all selling the exact same merchandise. In an hour, crossing any of the streets will require patience and skill, like navigating a chase scene in a video game. Children and dogs will clutter the sidewalks then. Weathered ladies peddling bright red chilis and rice will pour them onto the filthy, uneven pavement, an ad hoc display that doesn't seem to deter most buyers. At this time of day, I'll be lucky to find an open shop. The only place where it might be possible is on the main strip, Norzin Lam. Midway down the block, I see a sign of life. There's an open door at the Zeeling Tshongkhag, a place that's becoming one of my usual stops. I poke my head in. "Can I come in?" I ask. I'm always polite to people in stores, but in Bhutan, I am hyperaware that every place I go I am representing not just Kuzoo FM but the United States. The proprietor, Pema, welcomes me with a shy smile. "Yes, madam." I have learned I am madam not by virtue of my age but as a sign of respect. People with less experience using their English nervously call me "sir," aware only that it is an honorific, not one that is gender based. "I need some cookies. To bring to Kuzoo, for the _Early Bird_ show," I say, somewhat apologetically because of the early hour. 
As it would be with most merchants on the planet, why I need what I need isn't as important as the fact that I am here to buy. But I also announce this to remind him that I'm not some random tourist. Not that many tourists are typically on the streets of Thimphu, at any time of day. But here, as in other places around the world, if you look like an outsider, prices can rise quickly. "Of course, madam." I've been stopping in nearly every day for weeks now, and usually buy from his sister, who hasn't yet arrived. There are no fresh baked goods in this store. None of the few venues that do bake are open yet. So I choose a package of peanut-butter-filled cookies from India. The radio jockey hosting the _Early Bird_ show today loves those. I need a snack to soothe my stomach, so I grab some plain digestive biscuits. I buy a box of tea bags, black, just in case the studio's supply of tea is locked up. I hand the man a 100 ngultrum note, a little over $2 with the current exchange rate, and get back 40 ngultrum in change. The streets are coming to life. Some passersby return my smile warmly, if shyly. A few—the minority—don't look so happy that I'm here, and glower a bit. Most children see me as an excuse to practice their English, and they giggle when I respond to their spirited shouts of "hello." After I'd been here a week, a few kids who hang out near this particular store started running after me whenever I walked down the street, shouting "Kuzoo FM, Kuzoo FM." There is a slight hill up the road a bit, and a steeper one when you turn at the Dutch-sponsored dairy station, the only place in town where you can buy fresh milk—if you bring your own container. Men and women and kids leave clutching Coca-Cola bottles filled chalky white. Dogs roam this side street, as if the gates of the pound have been busted down. (Soon I learn there isn't a pound, and that's part of the problem. Stories abound about people being swarmed or bitten by the strays.) 
As I make my way back to the upper road, I'm breathless, both from the altitude and from the steep incline. On the far side of the street, directly across from the Bhutan Chamber of Commerce and Industry, adjacent to the Danish embassy and in front of Bangladesh's, sits the place where I spend my days. By the time I march up the narrow flight of stairs to the studio, I've had quite a workout, and I've walked only about a mile. With the cookies in hand, I feel suddenly aware of the sensation I'm experiencing. I'm falling in love with Bhutan.

THE BUDDHISTS WOULD SAY that everything you need is right here, within you. There's no need to seek outside yourself for the answers. Nothing—no place, no person—can complete you or make you happy. The longer I live, the more I see and experience, the more certain I am that this is true. And yet, occasionally, a shakeup in location, or in the company you keep, can touch you in just the right way, awaken something inside you. At precisely the moment you need it. Timing and circumstance collided to ignite this love affair. It didn't hurt that I had a lifelong fascination with anyplace in the throes of evolution. The college I'd attended had been around for only a decade when I first enrolled. The most exciting work experiences I'd had were with start-up ventures, companies where we made it all up as we went along. In many ways, Bhutan was a start-up, too—an ancient, once-secluded kingdom transitioning now at warp speed. A new king, a new democracy about to dawn, a new constitution. The twin cultural influences of technology and media were spreading rapidly, challenging and eroding Bhutan's very foundation. It would need to quickly adjust to interruptions from the world outside its borders, the world that had been blocked out for so long.
Being in Bhutan today felt like taking a ride back in a time machine, to that transformative era that much of the rest of the developed world had experienced a hundred years prior, before trains and cars and electronic communications changed how we were connected to one another, and so, the very rhythm of life. And yet, this moment for Bhutan was entirely different. This never-colonized kingdom was geographically landlocked, but given its skill at isolation, it might have been a remote island; the changes and developments here, now that they were permitted, were accelerating at a frenzied pace. There was another, more important reason I'd become so enamored. It had to do with the people here: the cadence of their speech, their wry sense of humor, their odd brand of innocence, and their newfound worldliness spurred by the infiltration of their borders. The fierce pride they had in their history, in their kings. The pace of their days. The superstitions and deep spirit of Buddhism that informed everything they did. Everyone here knew everyone, or at least knew how to find the person you were looking for. Each person had a unique role. Karma was the artist and Kinley the newspaperman and Pema the head of the environmental group. There was a feeling of interconnectedness, a sense of community, a camaraderie I'd experienced only on a college campus. The non-Bhutanese I'd been meeting were a key factor in my reaction, too. I'd never met such adventurous souls, people so committed to living life outside the sphere of comfort and routine most aspire to have. I'd become friends with a handful of other outsiders: a divorced nurse from Canada who had hit the road once her daughters were grown, volunteering her skills in countries that needed it. A midwestern couple about the same age as she, committed to doing the same. They and the others I met were active participants in the world. 
People who went out of their way, really out of their way, to meet other humans unlike themselves, and see how they really lived. Not from a hotel or a tour bus, and most certainly not on a screen. No View-Master living for them. As a member of the demographic majority in the United States, I also appreciated the bizarre, humbling wonder of being a minority in Bhutan. A minority most people here had trouble identifying. I liked it, even in the rare times it was uncomfortable. The better-educated citizens know about the United States, and a subset of those know it well enough to identify specific, obscure locations, often places they'd gone to study. But the majority of the people know only that you look different—that you aren't Bhutanese. Being there put the United States in a completely different perspective, the way staring into the vastness of a natural wonder like the Grand Canyon does. The United States may be the superpower, and the center of your universe, but it isn't the center of everyone else's. More reminders that the world does not revolve around one way of eating, thinking, being. Loving Bhutan is, like so many love affairs, complicated. It's not like becoming a Francophile, or a fan of Hawaii, or using every vacation to visit all the national parks or Major League Baseball stadiums or NASCAR racetracks in the United States. Bhutan is not a checklist kind of vacation on which you hit the hot spots, not an easy place, not a luxurious place to adore. It has many flaws, and it is rife with contradictions. It doesn't boast the typical assets common to vacation destinations, such as plentiful sunshine twelve months a year, or expert chefs who avail themselves of the bountiful fields (if you happen to be a fan of chanterelles and fiddlehead ferns, time your visit during the short summer season when they are as plentiful as kudzu). Bhutan may be simple and unspoiled, in the way rural America might have been at the turn of the twentieth century. 
With a tour guide beside you, it may seem like the promised Shangri-la, some sort of fairy tale, with endless miles of untouched land and vistas of mountains and trees so lush it's hard to imagine they're real. But it is poor. It is rough. And the sum total of the good and the bad and the strange combine to render Bhutan captivating and magnificent.

THE APRIL 1914 issue of _National Geographic_ magazine is filled with advertisements for modern innovations that herald an era of convenience and speed and eroding boundaries: motorcars and tires, round-the-world steamship vacations, newfangled conveniences such as vacuum cleaners and electricity-powered refrigerators and canned soup. But the eighty-eight-page photo-essay that is central to this particular issue is the Rosetta stone for an odd chapter in Bhutan's history. It offered the magazine's 330,000 subscribers the first-ever introduction to the kingdom, a place only a handful of outsiders had seen with their own eyes and few others had even considered. The title of the story was dreamily evocative: "Castles in the Air." Its author was a British political officer named John Claude White, commander of the Indian Empire. White had initially been assigned to oversee neighboring Sikkim, and participated in the failed 1903 British invasion of Tibet, euphemistically referred to in the history books as the Younghusband Expedition. It was during that mission that White met Ugyen Wangchuck, a powerful Bhutanese figure who had managed to unite his people after years of internal strife. Wangchuck wanted to ensure his country remained independent of its powerful and enormous neighbors. Keeping the peace with the British agent for India was one way to maintain sovereignty. When Wangchuck was installed as Bhutan's first king in 1907, White was his invited guest dignitary.
In the article, White details how with an entourage of "coolies, elephants, mules, ponies, donkeys, yaks, [and] oxen," he traipsed around the region that India had dismissed as a "tangle of jungle-clad and fever-stricken hills," "a region not sufficiently characteristic to merit special exploration." He found plenty to explore—and document. With a thirteen-by-ten-foot camera rig that took three mules to carry, he exposed dozens of plate-glass images of pristine mountain vistas, as well as of the king and his subjects. Many of these delicate negatives improbably survived the rocky months-long journey around and out of the country. His descriptions of the untouched land before him read like a dispatch from another universe, as if he were observing another species: "The Bhutanese are fine, tall, well-developed men, with an open, honest cast of face, and the women are comely, clean, and well dressed and excellent housekeepers and managers," he wrote. "It is impossible to find words to express adequately the wonderful beauty and variety of scenery I met with during my journeys, the grandeur of the magnificent snow peaks, and the picturesqueness and charm of the many wonderful jongs, or forts, and other buildings I came across; but I hope my photographs may...." No doubt his haunting images of frozen waterfalls and water-propelled prayer wheels; of a smiling, barefoot, _gho_ -clad king; and of an odd, rare goat-antelope called a takin captivated many an imagination. They certainly caught the eye of a _National Geographic_ reader in El Paso, Texas, a woman named Kathleen Worrell. And they might have languished collecting dust on her bookshelf had the Texas State School of Mines and Metallurgy not burned down a few years later. Mrs. Worrell, wife of the school's dean, had noted a resemblance in the photos between the landscape of her city and this country called Bhutan. 
And so in the aftermath of the school's destruction, she convinced her husband that the college should be rebuilt to reflect the style of the kingdom's unique structures. Now, in modern times, no one could possibly mistake the Franklin Mountains for the Himalayas. The rugged rockiness of the El Paso landscape and the lush Kingdom of Bhutan bear almost no resemblance to each other—except for the fact that each shares borders with an enormous neighbor. The mountains that cut through El Paso are grayish brown, rough like elephant skin, and rarely see snow. They're tiny in scope—just twenty-three miles long, with an elevation, at their highest, of just around eight thousand feet. The majestic white peaks of the Himalayas tower to twenty-nine thousand feet and span six countries—nearly fifteen hundred miles. Not to mention they also play a critical role in nature: They feed three major water sources that support more than a billion people. But in the year 1914, with black-and-white still photographs the only window on the world, large-scale commercial air flight but a dream, and no cameras perched on satellites providing vistas accessible from your laptop computer, it is possible to see why Mrs. Worrell saw a connection between these two places. Why she felt compelled to encourage the re-creation of a land she had never seen and would never get to is another matter, a mystery that can never be answered, since little else is known about her, and no heirs exist to convey any legends. The commissioned architects took their cues from the only source they had, all that existed in the world at the time: Mr. White's photographs of the landscape and his accompanying descriptions in "Castles in the Air": "This view gives a very good idea of the sloping architecture of the walls and the projecting roofs made of split pine," he wrote, never realizing this detail would later help when blueprints for replicas would be drawn. "All the walls have a distinct camber, and...
the windows are of a peculiar form, with the sides sloping inwards. Each building is two stories high and is painted... a dull light gray on the lower story, with a broad band of madder red above." The drawings rendered for the creation of the buildings in El Paso look very much like the real thing; they were likely also the first of their kind ever drafted. The Bhutanese themselves, to this day, work from memory, not blueprints. Another major difference in the El Paso construction was the use of nails. No such hardware was used in Bhutan at the time; roofs were kept in place with rocks, doors hung on sturdy wooden hinges. Traditional Bhutanese structures didn't have glass in the windows, either. The reconstructed Texas State School of Mines and Metallurgy debuted in 1917, with sixty students and perhaps the most unusual architecture of any school in the United States. Over the past century, the name of the university has changed twice, and its size has ballooned, but its looks have become increasingly Bhutanese, except for one brief period: After a new building emerged from the wraps of construction in the late sixties to reveal its discordantly modern style, one local wrote to the paper expressing concern about the departure. It was so out of place that in 1971 it was "Bhutanized" to better fit the landscape and ease critics. Never again would the university's administrators allow such a deviation to occur. As a result, the school now known as the University of Texas at El Paso is an acid-trippy agglomeration of ninety buildings spread across a 366-acre campus, each building a slightly different interpretation of Bhutan's architecture. Even the 1,600-car parking garage and the guard shacks at the entryways to campus are housed in Bhutanese-style sloping-orange-roofed structures. So are the automatic teller machines—an unintentional bit of irony, since only a handful of ATMs exist in Bhutan, and most people don't use them. 
A Hilton Garden Inn built on the edge of campus continues the theme. This modern Bhutan is punctuated by occasional flourishes of authenticity. A twenty-foot imported altar and a twelve-by-sixteen-foot religious scroll known as a _thangka_ provide a colorful display behind the espresso bar that greets you in the lobby of the library. Bhutanese prayer flags flutter behind the campus's Centennial Museum; the front of the building is flanked by enormous urns styled to resemble Bhutanese prayer wheels. Right across the street, on off-campus soil, the suburban-strip-mall style that dominates so much of the American landscape resumes. For the most part, if you stop someone in and around the campus and ask why UTEP looks the way it does, they'll look at you as if that's the most spectacularly odd question. As far as they know, it just does, that's all. Some are aware of the university's connection to Bhutan, but most have no idea, really, where that is—or why a tiny Asian kingdom they've never heard of happened to influence and infiltrate the school's architecture. Decades after the death of Mrs. Worrell, one curious member of the university's administration took it upon himself to delve into the aesthetic roots of the campus. It was the sixties, a time when Bhutan was still shuttered to the outside world. Before Internet connections allowed speedy research, this man, Dale Walker, obsessively sleuthed addresses, packed up photographs of the university buildings, and sent them along with queries to people around the world who might authenticate the look of the school. His most important correspondent was the queen to the third king. Her response arrived on stationery as fine as tissue paper, a year after Mr. Walker had first sent his letter. "Dear Mr. Walker," she wrote, in her letter dated December 4, 1967. "It is thrilling and deeply moving to see a great new university built in faraway America inspired by Bhutanese architecture.
Only the topmost windows are unlike Bhutanese windows, as here they are made entirely of carved, painted wood. I think your new university buildings are beautiful, combining modern design so harmoniously with ancient Bhutanese architecture. I wish our new buildings in Bhutan could be so finely built." In later correspondence, the queen expressed hope that Bhutan might someday work more closely with the university. She even asked if they might enroll a student. The answer was yes. Jigme Dorji, nicknamed Jimmy by his classmates, became the first student from Bhutan sent to study in the United States. He graduated with an engineering degree in 1978. Mr. Walker also solicited validation from the few other Westerners who had traveled to the country. Burt Todd was the first American to visit Bhutan, in 1949, after befriending the soon-to-be queen while they were students at Oxford. He confirmed for Mr. Walker that the forts of Bhutan were "almost identical" to the buildings in El Paso. In 1959, journalist Desmond Doig was the first allowed to report from the country. He told Mr. Walker he had believed the pictures he'd received were of new construction in Thimphu: "When I was told they were American campus buildings, I was genuinely amazed." John Claude White most likely never knew what he inspired, for he passed away in 1918—just one year after the first Bhutanese-themed buildings opened in El Paso. Kathleen Worrell could never have imagined just how much she did make her corner of Texas resemble the Himalayan kingdom she admired but never got to see. But because of them both, and the power of the press that invisibly united them, Bhutan and El Paso are forever, indelibly, linked.

THE LINKS GROW ever stronger. Over the past twenty years, the university's president, Dr. Diana Natalicio, has cultivated the school's relationship with the kingdom, welcoming more students from Bhutan each year.
UTEP has even started to market its unique identity with brochures that proclaim it "Bhutan on the Border." The kingdom's own borders have been penetrated by a growing number of characters, and among this diverse constituency, one thing is certain: Anyone who comes close to the kingdom falls under its spell. Some of those who have become enthralled have gone on to play a critical role in Bhutan's history. A Jesuit priest named Father William Mackey created the first high school in Bhutan in the sixties, and began a long friendship and study exchange between Bhutan and his native Canada. The late Michael Aris, who as a young man tutored members of the royal family, published several important books on Bhutanese history at the behest of the third king. Controversial insights into Bhutan were revealed by Nari Rustomji, an Indian advisor to Sikkim. His book was long banned in the kingdom for its frankness about a dark aspect of Bhutan's history: the plot by the mistress of the third king to claim royal power for the children he'd fathered with her. The mystically inclined actress Shirley MacLaine made a trip to Bhutan as a guest of the starstruck acting prime minister in 1968, and wrote about her spirit-seeking adventures in _Don't Fall Off the Mountain_ , another work long prohibited in the kingdom. Since Bhutan opened its borders, stories have abounded about alliances great and small, between outsiders who befriended the powerful of Bhutan and those who connected with regular people. Bhutanese love to tell stories about single, careerist Western women who visit on vacation and fall for their tour guides—suggesting that they couldn't find men back home and couldn't help being swept off their feet by the chivalrous Bhutanese. This kind of forbidden love affair is the basis of a book by a Canadian woman named Jamie Zeppa, who worked twenty years ago as a teacher in eastern Bhutan and fell in love—and had a child—with one of her students.
After a stint as a golf pro at the Royal Thimphu Golf Course, Rick Lipsey returned home to create the Bhutan Youth Golf Association, which for years dispatched two pros a year to the kingdom to teach kids—and adults—the game. So inspired was digital photography expert Michael Hawley by the visual splendor of the country, he published a book of photographs about it that's also the world's largest, at five feet by seven feet and 133 pounds, and costs $10,000 a copy. Among many of the modern fans of Bhutan there is something of a competitive spirit to prove one's devotion to the place—and one's power to access members of the royal family and other VIPs. A kind of "My Bhutan is bigger than your Bhutan, my attachment greater." These boasts have to do with not just the desire to help the Bhutanese but the fierce need to stay connected to the land and its people after leaving it, wanting to attach somehow to Bhutan rather than just love it purely and wholly—allowing that love alone to be enough. Bhutan gives you so much, makes you aware of all you have, that it inspires you to somehow mark the territory, claim it as your own. With each incursion, each friendship, each exchange, though, a tiny bit of the old Bhutan melts away, and a new and different one emerges. Each of us who loves it—no matter how that love is manifested—permanently changes it.

# 7 #

THE SYMPHONY OF LOVE

NGAWANG, PEMA, AND PINK WERE BICKERING and giggling conspiratorially in the center of the Kuzoo work area. Each had a piece of paper in hand; it looked as if they were working on a script. Their idea of sophisticated audio production was to divvy up the text and record tag-team, one line each.

"Herpa-tett-ez B," said Pink, struggling with the words in front of her. "Herps."

Word was out that Kuzoo would run free ads to show potential advertisers the power of radio, in the hope of one day attracting paying customers.
The initial funding from the sale of the king's BMW gave the station a cushion of security, so making more cash to pay the bills wasn't yet an urgent necessity. Which was a good thing because the bulk of the "ads" on the station still came from the government ministries, with announcements about holiday closures and requests for bids on projects. None of the businesses in town were used to advertising, nor did they have the budget. Just putting a sign in front of the store was considered a frivolous investment, and even those signs that were installed were more about function than about competition. Proprietors hoped people stopped in, but if they didn't, no problem. Such was life in a Buddhist kingdom where Gross National Happiness, and not a grab for cash, was the guiding principle.

A messenger appeared at lunchtime from the Health Ministry to deliver the script for the new ad. Valentine's Day was looming and schools were out on their two-month winter break. This meant extra nights of parties for kids with time on their hands, and the potential for hormones gone awry.

" 'Sexually transmitted diseases exist among us,' " Pink read slowly, deliberately.

"Herps," struggled Pink. "Herpah-tet-ahs B. 'Sexually transmitted diseases exist among us.' "

"So romantic!" cackled Pema.

"CONE-dom," Ngawang intoned. " 'These infections are all passed through sex without a CONE-dom.' "

"No, no, no, CAHN-dom," insisted Pema. Her love of reading, or perhaps it was her encyclopedic knowledge of _Sex and the City_, made her proficient with the script: " 'Gonorrhea, syphilis, and HIV/AIDS are all around us.' "

A cell phone rang. The tune was "Summer Nights" from _Grease_.

" 'These infections are all passed through sex without a CONE-dom,' " Ngawang tried again, her voice tentative. Her struggle with the word made rubbers sound very formal.

"CAHN-dom," shouted Pema, even though the girls were sitting right next to one another.
She was impatient, and a much quicker study than just about anyone else at Kuzoo.

"How do you say this word?" Ngawang asked me, pointing to the script.

"Pema's right," I said. "Cahn-dom."

"Herps," repeated Pink.

I walked over and peered over her shoulder. "Okay, no, it's HERP-ees," I said. "And that other word is hep-a-ti-tis." No one asked what these afflictions were, which might have been because they knew, but could also have been mere disinterest.

" 'You cannot tell whether someone has an STD by looking at them. That is why you have to use a condom EVERY TIME,' " proclaimed Pema, her tone that of the smartest girl in the class. " 'Happy Valentine's Day!' "

" 'Happy Valentine's Day,' " Pink and Ngawang echoed the last line of the script, which they would all read in unison. The three of them burst into laughter again. Then they shut the door and shushed me so they could record.

IT DOESN'T TAKE MUCH to create a radio program: a microphone or two, a mixing board, a transmitter, and a relatively quiet room. That was about all Kuzoo had, along with a couple of faded Backstreet Boys and Spice Girls posters taped up on the walls, and an ON-AIR sign mounted outside the studio. Its lighting mechanism worked about half the time, but almost everyone ignored it anyway. They'd just barrel into the studio regardless of whether someone was transmitting live. Kuzoo FM was what most radio stations in the United States hadn't been for a very long time, an anything-goes bullhorn to the people. A core staff of paid radio jockeys kept the station going, but anyone who was interested could just show up and contribute, however they wished. You could bring in music. You could hang out in the studio with whoever was hosting the show (though eventually there were rules prohibiting _that_ ). You could answer phones. Volunteers weren't just welcome; they were encouraged. Even the typical college station in the United States had long ago ceased to be so egalitarian.
A highfalutin promotional campaign hadn't been necessary to clue everyone in on the existence of Kuzoo. People started stopping by the compound that housed the station as soon as it launched, hanging out to see how things worked, answering phones or otherwise inserting themselves into the free-for-all. What might have been transmitted around the town or the country by word-of-mouth just six months earlier could now be communicated instantaneously, thanks to the magic powers of broadcasting. And just as the citizens of Thimphu or any village in Bhutan helped out one another, without question, the people of Kuzoo unfailingly broadcast whatever messages came their way. "Folder with important papers lost by the Hong Kong market," read the message submitted by a worried-looking man who showed up on the grounds one day. "Reward. Call 17-27-15-98." "Floodwaters rising near the _dzong,_ " said the Ministry of Home and Cultural Affairs. "Please leave the area immediately until further notice." "Little boy lost by Changlimithang Stadium," the police called to report. "Please claim him at the satellite police office in town." Twenty minutes later came word that the child had been picked up, and a follow-up on-air announcement ensued. Being comfortable with writing copy quickly wasn't a bad skill to have around here. Doing quick bits of rewriting, helping with the pronunciation of English words, and sharing midday meals with Sir Tenzin were shaping up to be the mainstays of my contribution to Kuzoo. As each day passed, it became more apparent that I was little more than an accessory, not expected to do much in particular, really, besides be exactly who I was: the experienced volunteer consultant from afar. Any hope I might have had of inspiring these young broadcasters to use their new radio station as a tool to prepare for their impending democracy was folly. 
Of far greater interest to them was where to download Destiny's Child and Alicia Keys on the Internet, especially given the slow connection speeds and dearth of computers. To give myself a mission, I'd assumed what I saw as an equally important role: Kuzoo den mother. Lacking any formal duties, I'd sit around the station from very early morning till early evening, observing, offering suggestions, reading what came in over the fax, and making sure everyone ate, even though what I was feeding them was hardly food. Apart from spicy yak-meat pizza and cookies or chips, there wasn't much in the way of take-out, not to mention common culinary ground—the pizza was only a slightly older and less exotic presence in Thimphu than I. I could educate by osmosis. Most of the staff—indeed, many Bhutanese—hadn't interacted much with anyone from so far away. My lunchtime discussions with Sir Tenzin about copyright violation, music royalties, and licensing fees proved futile. He was convinced, no matter what I said, that international laws could not possibly apply to Kuzoo. "Who would bother coming after us here in Bhutan, anyway?" he'd counter dismissively. While I suspected he was right, and that only a chance luxury vacation by a music industry executive who happened to turn on a radio might bust Kuzoo's illegal goings-on, I felt it my obligation to point out the importance of respecting intellectual property. Kuzoo's desire to become more professional and its decision to import an outsider to help make that transition was understandable. It wasn't uncommon for the radio jockeys, none of whom had appeared live on the air before, to forget the basics, such as turning on the microphone. A few, racked by nerves, compensated on the air by sounding a bit comatose when they spoke—a strange contrast to the bright bits of music they loved to play. These kinds of mistakes could be easily solved with a little adult supervision and cheerleading. 
And in addition to providing both in person, I convinced Pema to ration a piece of printer paper from her Fort Knox of a supply cabinet, typed out in giant, bold forty-eight-point type a list of recommendations to get them started, then posted my rules inside the studio: **Before you go on the air, please remember to:** * **Take a deep breath** * **CHECK that the microphone is on** * **Remind listeners that they're tuned in to Kuzoo FM 90** * **Encourage listeners to email us or call in or come volunteer—it's THEIR station** * **Have fun! Enjoy yourself while on the air! (After all, this IS fun, isn't it?)** All of these were reminders I often needed myself back home, as did my beleaguered colleagues there. But at work in Los Angeles, we didn't get the pleasure of playing Foo Fighters and Jay-Z. ONE OF THE GOALS on the list sent to me before my arrival, to inspire staff and volunteers alike to be interested in reporting news, seemed simple enough. Given the impending democratic elections, it also made sense; the intent of this station was to give the youth of Bhutan the tools to examine and monitor their government-to-be. So far, that had translated into a five-minute "newscast" each evening, which amounted to nothing more than the rewriting of items from the newspapers and the Bhutan Broadcasting Service. With Sir Tenzin wrangling everyone he could find into the conference room, I conducted my first—and last—formal workshop at Kuzoo. "WHAT IS NEWS?" I scrawled on the giant whiteboard I'd asked Sir Tenzin to buy for the workroom. It was there to encourage people to share their ideas. (To date, I seemed to be the only person using it.) Twenty participants were dutifully in attendance. Perhaps fearful there might be a quiz later, they wrote down every word I uttered about the Five Ws and H that were the foundation of journalism. "Who. What. When. Where. Why. And How," I explained. "These are the elements of every story. 
Your job is to ask the questions and find out the answers." No one said a word, and even when assigned the task of telling a story, the only person who seemed to understand, or be interested in, the exercise was Sir Pema, Kuzoo's second in command. He was a shy man with a round face and glasses; his bookish demeanor, not his looks, made him seem older than thirty-five. Sir Pema had come from a remote village in the far east of Bhutan, and had been chosen to attend a Jesuit school in India where the best and brightest of Bhutan got sent for their educations. He had gone on to earn a master's degree in education in Canada. His curiosity had been aided and abetted by new technologies that fueled his quest for news and information. He'd wax poetic about James Bond, the X-Men, or Johnny Depp and seemed shocked when I couldn't converse about the latest Hollywood blockbusters. First thing every morning at the office, as soon as he could get online, he read the _New York Times;_ at home at night he was glued to CNN with near-religious fervor. (The fact that I'd worked at both places upped my street cred with him.) He'd quote Maureen Dowd and "King Larry" as if he'd been hanging around at a bar with them the night before. " _Who?_ Stray dogs. _What?_ How many of them there are. _When?_ Right now. _Where?_ Thimphu. _Why?_ No family planning. _How?_ That's the problem," he said. Sir Pema appeared to possess the keen powers of cultural observation necessary to succeed in the news business, powers that were otherwise lacking among the Kuzoo staff. Daily life was something they took for granted; the ongoing battle with strays wasn't really an issue to them. It was just the way things were. This had to do a bit with the Buddhist belief that things simply are the way they are, and a lot to do with the unwavering Bhutanese devotion to authority. Reverence for that seemed embedded in them genetically, just like the adoration of spicy hot food. 
It was also, I was learning, considered rude to ask too many questions. Other than Sir Pema, my pupils appeared utterly uninterested in the idea of taking out a microphone and questioning the world around them. It was clear they were present only because they were told to be. My news session wrapped up with a thud. I decided that during the rest of my days here, I'd resort to subtler teaching methods than the classroom. The next afternoon, the utility of the radio station for something other than the transmission of pop music became very clear to all. Sir Tenzin walked into the station with a thick document tucked under his arm. "Class Ten results," he announced, and he dropped the document in front of the female Pema with a hot-off-the-presses flourish. "Announce on the air that they're here." The day the scores were released was one of the most important of the year, even if it did fall square in the middle of winter recess. Pema, aka Oprah, obediently stepped into the studio, faded out the music that was playing, and, in her best "breaking news" voice, informed the people of Thimphu that the all-important test scores had been released. And that Kuzoo would be happy to give callers their results. Class Ten grades determined whether a student could continue high school for free. Those who failed had to pony up close to $1,000 a year to finish their degrees at one of the handful of private schools in Bhutan, or had to study across the border in India, a more common fate. Given that the average midcareer civil servant took home a salary of $350 a month, coming up with the cash to pay for an education was simply out of the question. Failing Class Ten could decide your fate forever. Those who couldn't complete two more years of education could never hope to get a modern job, behind a desk. Perhaps they'd even have to stay in the village to work in the fields. 
And with a population of children that swelled each year, competition for slots in schools as well as for employment was intensifying. The sweet Kuzoo driver, Kesang, must be from a particularly poor family, I deduced; since he didn't speak English, he must have had to drop out of school very young to work. Instantly, the phones lit up. Students—spared the trek to their schools to find out their scores—jammed the lines. Tiny voices identified themselves and asked to learn their fate: "Wesel Wangmo, _la_. Motithang High School. Did I pass?" For two days, the staff and volunteers of Kuzoo fielded nervous requests for results. Kids even started calling in from outside the broadcast area; word had spread that Kuzoo had a copy of this important document. But that wasn't the only thing keeping Kuzoo busy. AS IN SO many places, February is a dreary time of year in Bhutan. The difference is that the skies, when the gray does melt away, are perfectly blue and framed by snowcapped Himalayan peaks that make you feel as if you're in a heavenly dream sequence. Assuming heaven also features tooting horns and foul emissions from the cars that jam the streets below. And thousands of those scruffy, barking stray dogs. One thing bolstering winter spirits in Thimphu this particular February—aside from the first measurable snowfall in the city in two years—was Kuzoo FM's Valentine's Day contest. The young volunteers behind the effort had dubbed it grandly: the Symphony of Love. Two free dinners for two at the only hamburger joint in town were offered as a prize. Listeners were asked to call in and sing love songs on the air, then vote on their favorites. The meals would be awarded to the two most popular singers. It was the first time a contest of this kind had been held anywhere in the kingdom. To build excitement, a taped announcement, recorded by Pema in the supply closet, ran a dozen times a day. This goes out to all the LOVE-birds who are planning to go out on Valentine's Day. 
Boy, do we have a treat for you! All expense paid for two couples at the Zone. You heard it right! Show your love how much you care. The person who gets voted the most wins. Call us. Text in your votes. Or express in words by writing a poem to us—we will be waiting! Every night at 7:30 p.m. for the three weeks leading up to February 14, listeners transformed Kuzoo into a public karaoke bar, crooning and croaking their renditions of love songs over the airwaves. The irony of a bunch of Buddhists celebrating a holiday that started in honor of a Christian saint was completely lost on the city. Listeners didn't seem to care, either, that each evening's serenade was generally more cacophonous than symphonic. Kuzoo's two phone lines jammed up immediately with singers and voters alike. A ratty old notebook, pages shy of being completely covered in ink, was used for the tabulation. Whoever was hanging around would answer the phone and strike a tally beside the number assigned to each contestant. The question of accuracy wasn't an issue. Everyone trusted their system—and one another. PricewaterhouseCoopers would never have approved. The idea for all this was, of course, firmly rooted in that fine import called _American Idol_. Bhutan was not immune to the fever for _Idol_ that was consuming the rest of the world. People rushed home each night to see the latest installment, beamed into Bhutan on a time delay via satellite on an Indian TV channel. In this country where television was still a relatively new phenomenon, any show that even appeared to be a live broadcast whipped the public into a frenzy. And when it had to do with the glitz of a Hollywood so far away, allowing for discussions over people with very un-Bhutanese names, such as LaKisha Jones and Jordin Sparks, all the better. No less popular this February was the subject of who had sung on Kuzoo. 
In shops and offices and on the main street in town, everyone buzzed about it, bragged in the morning if someone they knew had been heard on the air the night before. As a representative of Kuzoo, I'd be offered unsolicited commentary on the previous evening's callers anywhere I went. "The man from last night, not very good. The girl, excellent," one man told me. "And that fellow who dedicated his song to _both_ his girlfriends—very funny!" The Symphony of Love ratcheted up to another level the already feverish adoration of Kuzoo. THE CONTEST, PREDICTABLY, had its scolding detractors—chief among them one of the elder advisors to the station. Madame Carolyn was a sixty-something-year-old woman from Britain who had been imported decades earlier to tutor members of the royal family. She was one of a handful of outsiders on whom Bhutanese citizenship had been bestowed—an honor of which she was understandably proud, and which she flaunted to each newcomer she encountered. Madame Carolyn declared modestly that she didn't understand why "HM"—cheeky, irreverent shorthand tossed into conversation by those privileged few who had personal access to His Majesty—had asked her, an "old hag," to have anything to do with the station. In fact, she took quickly to the microphone, zealously taping segments about classical music, and little educational bits about English grammar and vocabulary. "Word of the day: unreconstructed," her voice would trill, and then she would go on to recite the dictionary definition. Hers was the "eat your vegetables" portion of Kuzoo. Well-intentioned, definitely; necessary, perhaps; but nowhere near as fun as the Pussycat Dolls and Jack Johnson. When visitors to the station assumed I must be the Madame Carolyn they heard on the air, I was surprised and a bit disturbed to be mistaken for this legend. It wasn't so much that she was twenty years older than me; I didn't want to be associated with such prim behavior. 
What the confusion of identity underscored was the fact that to many Bhutanese, Brits and Americans are indistinguishable—no matter how different we think we sound. The idea that Kuzoo resources were being dedicated to something other than the celebration of Bhutan disturbed Madame Carolyn. "We should be doing a call-in show to honor His Majesty's birthday," she scolded, lips pursed, to no one in particular. This February 21 would be the first time the nation would celebrate the new king's date of birth. "Or to mark Losar." Losar was the Bhutanese New Year, which this year landed just before the royal birthday. Both merited national holidays that would shut down the country for days. Madame Carolyn insisted she could not understand how young people could prefer serenading one another with silly love songs over honoring their king and their national heritage. TWO NIGHTS BEFORE Valentine's Day, the Kuzoo volunteers gathered in the room adjacent to the kitchen-studio to prepare for the finale. They'd taken to shortening the contest's name: "SOL" they called it, unaware of the decidedly unromantic usual meaning of the acronym. A giant bag of Lay's Spicy Indian Masala potato chips was passed like a football around the room. Several people flipped through the notepad, filled to the edges now with strike marks. Close to two hundred people had sung on the air. Scores of text messages carried votes to staffers' personal cell phones—everyone in town knew someone who worked at Kuzoo. Their choices had dutifully been recorded on the pad. This was also the time of day when the newscast got prepared. Since it was a Monday, a day when none of the newspapers published, Ngawang had to rely on the BBS as her sole source of information. This involved transcribing, by hand, what had made the news that evening; the margin for error here was high. 
I had tried to convince Sir Pema, Sir Tenzin's deputy, that if all we were going to do was crib from other media, the way broadcasters back home plucked the majority of the stories _they_ reported from newspapers, we should just halt the effort entirely. This was not a Western convention we should emulate, I said. But the fact that this was how it was done in the West made it all the more enticing. Sir Pema was dedicated to the idea that there needed to be at least the essence of news on Kuzoo. In his mind, it somehow offset the pop music and gave the station at least the illusion of seriousness. But he stopped short of pushing the Kuzooers to go out and report for themselves. The items that Ngawang had transcribed from the BBS newscast at the top of the hour included: a forest fire; the naming of a panel of government officials to oversee next year's elections; a hike in the price of Indian cars of twelve thousand ngultrums, about $250 each; and the revelation that the availability of low-interest auto loans had swelled the number of vehicles in the kingdom to thirty thousand. After Ngawang had transcribed and then reworded these stories, I helped her comb through the copy and do a practice read. Like anyone just starting out in radio, she was self-conscious on the air. Especially since everyone you know becomes your biggest critic when they listen in. One of her uncles had ranted about her recent past performances, saying that her English was poor, while a friend had complained she was sounding "too American." I'd hoped that wasn't a dig at her affection for me. A public service message had been faxed in, too, from the Ministry of Home and Cultural Affairs. It tied into the car stories we had to report. We changed its dreary bureaucratic-speak into zippier language: If you're listening to Kuzoo in your car... hear this: Starting next Tuesday, you might get stopped for a random emissions test. 
The police want to crack down on people who are not getting their cars tested for emissions. So they're going to randomly stop drivers as of February 20. Air pollution is on the rise here in Thimphu. So we hope you'll get your car tested for the sake of our air... and also to avoid having to pay hefty fines. This has been a public service announcement from Kuzoo FM 90. A pretty sixteen-year-old volunteer named Lhaki piped up as she heard Ngawang rehearse. "So glad about that," she said. She was sitting on the floor with her legs crossed, holding a guitar, long hair hanging past her shoulders, bangs brushing into her almond eyes—a Bhutanese Joni Mitchell. "My parents both bribed the emissions inspector to lie about their cars so they could pass. So selfish." Disaffected teenage angst oozed from Lhaki's every pore. The week before she'd shown off a few sketches in her tattered notebook, each as intricate and bizarre as one of the omnipresent religious scrolls—the sacred _thangka_ paintings depicting various Buddhas that hung in virtually every room in Bhutan, not far from the requisite photo of the king. Each had a Hieronymus Bosch–like quality to it, complex and colorful, playful with a tinge of sinister. One of Lhaki's drawings depicted her interpretation of Hell, accompanied by a simple declarative caption in cursive writing: _Life sucks_. On another page, in the same style, was a happy valley of love, filled with flowers and rainbows and suns, and dozens of tiny babies floating gaily around the landscape like cherubic angels in an Italian Renaissance painting. I wondered if she'd ever seen one or if she'd simply pulled the images from the recesses of her dark, complicated teenage brain. "I don't want to get married or have my own kids," she said with such conviction that I wanted to fast-forward into the future to see what Lhaki did, in fact, do with her life. "Just take them in when they have no parents." 
Lhaki and her constant companion, TT, were the nightly hosts of SOL. TT was a gawky, sweet boy who'd collapsed his Bhutanese name, Tshewang Toshi, into the much hipper moniker. He and Lhaki made quite a duo. Without a hint of guile, together on the air they embodied a bizarre fusion of the sweet funniness of George Burns and Gracie Allen, the edgy familiarity of a PG-rated Howard Stern and Robin Quivers, and the goofy sarcasm of Beavis and Butt-Head. "So," I asked, "who is winning SOL at this point?" I hadn't been paying very close attention to the competition. Usually by the time it started, I'd been at the station for a dozen hours and was leaving to have a beer or dinner with one of my other expat volunteer friends. People I'd met literally on the street, simply because we looked the same. We were thrown together by fate in the same strange land at the same time. These new friends gave me a sense of what was going on in the rest of Thimphu. We all faced similar challenges, and it helped to hear that Mark the physical therapist from Nebraska and his wife, Penny, an English tutor; Pam the nurse from Toronto; Ed the golf pro from Nova Scotia; and Mayumi the Web designer from Queensland, Australia, were experiencing their own workplace struggles. We also provided an excellent support system for one another regarding an equally important matter: where to find palatable, chili-free food, which we often consumed together. Mostly we talked about how we felt we were taking more from Bhutan than we could ever possibly give. Lhaki was nonchalant as she ran through the front-runners in the Symphony of Love contest. "Well, there's me, there's TT, and there's a little girl." "Wait," I said. "Run that by me again?" "TT got the most votes, I got second most, and a little girl from Motithang got the third most. So we go on tomorrow night to play again and see who the top two winners are. I'm taking TT out if I win, and if TT wins, he's promised to take me. 
And if we win first and second prize, we've promised to take Thujee and Khandu along." I had to stop for a second to make sure I'd understood Lhaki correctly. "So you and TT have actually been _competing_ in SOL? While you've been hosting it every night?" "Yes," said Lhaki, strumming on her guitar, oblivious to my tone of voice. "Well, the other night Thujee hosted, but mostly it's been me and TT. People seem to like the live guitar." "Does that seem right to you? That you're hosting the show and can play the guitar while you sing? You're on the air all through the evening..." Lhaki looked up at me now, her eyes widening with confusion. It was clear she didn't understand my inquisition. "Well, sure. We've worked hardest on the contest, so it is only fair that we get the hamburgers." This had the makings of a public-relations disaster. Sir Tenzin had convinced the newspapers to cover the overwhelming success of SOL, including the grand finale. When the reporters found out that the winner and runner-up of the contest actually worked at the station, the credibility Kuzoo was establishing would go down the tubes. Or maybe I was projecting. Maybe no one would notice or care. That possibility was even more distressing. My already queasy stomach interrupted this stream of worried thoughts by announcing its distress with a loud burble. This had been a "bad food day." What I needed right now was the comfort of my home bathroom, not the closet-size commode at Kuzoo with the toilet that rarely flushed. Ngawang asked Kesang to give me a ride home in the Kuzoo van. As we drove, I frantically dialed Sir Tenzin's mobile number with the cell phone she had scrounged up to lend me. No answer. There is no voice mail in Bhutan, so leaving a message wasn't possible. Just moments after Kesang dropped me off, Sir Tenzin materialized at the door of my apartment, entering the room with his swirl of urgency and drama. "They tell me you're sick. What do you need? 
Do you need to go to hospital?" I wondered if news of a sick person had traveled as quickly here before the mobile phone. Most likely, yes. A sick person was always cause for concern, particularly when it was a guest from afar. "Thank you, sir. Really, I don't need anything. But that's not what I was calling you about. There's a problem with the Symphony of Love..." "I'm very busy now, Jane," Sir Tenzin said as he carefully began to wrap up an afternoon hit of _doma_. I braced myself for the awful smell that was inevitable once he started sucking on it; I wasn't sure my stomach could stand it just now. "Can't you just take care of it?" Sir Tenzin was like a lot of men I'd met over the course of my career, one of those "big picture" thinkers. Details were not his strong suit; ideas were. So was tossing out those ideas and commanding the staff to execute them. He loved the buzz the Symphony of Love was getting, loved the kudos around town, but had little interest in or worry about its execution. Unless he happened to tune in and hear someone say something he didn't like—at which point he would roar into the station, yelling, correcting someone's grammar or on-air performance. "No, I'm worried about something and I need your help. I just found out that two of the three people in the finals for SOL... work for Kuzoo." It didn't immediately register for Sir Tenzin that this posed any sort of problem. He looked like he was about to shrug and say "So what?" But my demeanor forced him to pause. He nodded for me to go on. I reminded him of the series of anticorruption spots the government had been running in the newspapers and on the BBS, encouraging citizens to report any cronyism or swindling that appeared to be taking place. "Sir Tenzin, they could make an example out of Kuzoo. Maybe the stakes aren't as big, but the principle's the same. And we're going to look awfully stupid if the _Bhutan Times_ takes issue with our own people winning the contest." 
The Symphony of Love, to no one but me, was a living metaphor for the upcoming parliamentary election. (Whose exact date, like the king's coronation, was still under review by royal astrologers.) A time to teach the youth not just about self-expression—creative and democratic—but also about right and wrong. If they thought it was okay to stack the deck in a contest involving love songs and hamburgers, well, who knew what would happen when they went to the voting booth? Sir Tenzin understood. "Okay, okay, let's have a meeting. You can explain." "But I need you to be there and back me up, sir." Since I was a volunteer and not their real boss, my reprimand might not carry as much weight. As soon as Sir Tenzin agreed, I excused myself so I could run down the stairs to the bathroom. LATER THAT EVENING, the SOL team was shuttled into the conference room that had long ago served as a living area. That this building had once housed the foreign minister might suggest that it was once grand. There was, however, no indication that this was true. It was hard to imagine this was ever anything but a rumpus room for a youth-oriented radio station. Taped onto the wood paneling, opposite the portrait of the king, hung a giant yellow poster declaring Kuzoo's mission statement. Its hand lettering added to the dorm-room feel: _We, the youth, must keep ourselves informed_. _We should arm ourselves with the knowledge to make our own futures brighter_. _Youth is temporary and fleeting but the foundations we build today will decide how bright the prospects will be for the rest of our lives_. _Kuzoo aspires to be the voice of the youth of Bhutan_. _Kuzoo aspires to inform the youth of Bhutan_. _Kuzoo is the youth of Bhutan_. _Kuzoo is ours; the future is ours_. A taped recording of that message, read by a sampling of Kuzoo's most youthful volunteers, ran at the start of each broadcast day. No one seemed to be able to name the author of those words. 
I suspected it was the handiwork of Madame Carolyn. It didn't seem likely that anyone else, particularly a young person, could possibly have written it. "Okay, okay, everyone, thank you for coming," Sir Tenzin said, standing before the group. Fearful of his notorious temper, everyone quickly quieted down. "I need your attention to an urgent matter." Watching him work the crowd, I wondered whether he would get the political bug and run for office when the time came for elections. Sir Tenzin was at his most charming when groups of visitors toured the studio. If he could sit still long enough, he'd have made an effusive on-air personality. "Madam Jane here is going to make an important announcement." I stayed seated on one of the frayed upholstered benches in the room, to appear casual, to defuse my message. The room was so silent that when my stomach gurgled loudly, I wondered if everyone in the room could hear it. "First of all, I wanted to say how great it's been to see all of Thimphu get so excited about SOL. The whole town's been talking about it, as you know, and that's a great sign of how successful you've made Kuzoo. So congratulations for that." Thujee led everyone in a cheer: "S-O-L! S-O-L!" He'd just graduated from high school and was anxiously waiting to see what kind of scholarship he might get for college, his sights set on the United States or Europe. It was becoming the vogue for young Bhutanese to save up so they could sit for the Test of English as a Foreign Language, administered in India, and then hope to find a scholarship or private sponsor so they could be educated in the West. For each student who got to study abroad, five more got the fever. To pass the time, Thujee had been hanging around the station a lot lately, picking up slots as a volunteer deejay. With his cheerful demeanor and sense of confidence, he was a born leader as well as radio announcer—more poised and eager than most of the paid staff. 
Besides TT and Lhaki, he was the most popular of the volunteers—on the air, and off. "Quiet, please," Sir Tenzin shouted over the din. Years of being a high school administrator made him deft at teenage crowd control. He nodded for me to continue. "You've all heard me talk about what a huge responsibility it is being on the air. You probably know that Sir Tenzin used to be a principal." I paused for dramatic effect, looking out at the group so they could nod their yeses. "Now, what if he told you that his daughter was going to win a big prize at his school, simply because she was his daughter? Would that seem fair to you?" I knew no one would answer. Bhutanese didn't like to call attention to themselves in a crowd. So I called on Lhaki, who was sitting on the floor up front. She was still clutching her guitar, suppressing her strum. "Umm, well, I guess not." "Right. Even if she was really, really, really great, and really deserved the prize, that wouldn't seem right, would it?" Thujee spoke up. "No, it would seem corrupt." "Exactly, Thujee. And it would be hard for you to trust Sir Tenzin if he told you his daughter had to win, even if he truly believed that she _deserved_ to win." A few heads timidly bobbed in agreement. "Well, that's what I wanted to call your attention to today. You have all worked incredibly hard for the past few weeks on this contest, and you're the reason why it's been such a huge success. But you've also worked hard to gain the trust of the audience. And how do you think they'd react if the people who won the contest also _ran_ the contest?" "But we deserve the hamburgers," chirped Khandu from the back of the room. "It doesn't seem right that we don't get to win. We've worked so hard on SOL." "But don't you see? You _do_ win. You get to be on the radio. You got to make up the contest. That's the most important part about this station, you know. You have this amazing way to communicate with the entire city—soon the entire country. 
People depend on you for information and entertainment. That's your reward." Remorse washed over the faces of the Kuzoo youth. Lhaki, TT, Thujee and some of the others looked as if they were about to cry. "So, knowing all that, what do you think we should do?" I continued. Lhaki shifted her guitar and sat up straight. "I think we have to take ourselves out of the competition. It wouldn't look right if one of us won." Sir Tenzin stood to the side, fidgeting. I was pleasantly surprised I hadn't needed him to back me up. "That's great, Lhaki," he said, and I was even more surprised that he'd been listening closely. "Really great. Thujee, do you have the list so we can call the new finalists?" "Yes, Sir Tenzin," Thujee said, "I've got it. We'll start calling now." The crowd dispersed, launching into action. The evening's Symphony of Love was to begin in just fifteen minutes. Lhaki stood up, smiling to no one in particular, and strummed her way around the room, as if nothing had happened, as if this was the way it was supposed to turn out all along. A few minutes ago she'd been upset at the mere suggestion she shouldn't win the contest, yet now she was fine with having lost her chance at victory and hamburgers. As I watched her, I considered Valentine's Day and loss. The mysteries of the brain and emotion, of letting go, of moving on. Maybe it was her short lifetime of Buddhist education that allowed Lhaki to snap back so quickly. The topic of the last episode of the Buddhist-themed _Dharma Bites_ show had been "impermanence," one of the fundamental beliefs of the religion. I'd listened intently as the hosts explained the concept: Everything is always in a state of flux. Nothing lasts forever—no triumph, love, no happy feeling, no state of sadness. Clinging to a person or place or moment in time was futile and unwise and led to suffering; so did wanting things to be different than they were. 
Maybe it was, in fact, simple teenage rebound and not her religious upbringing that was making the outcome of the contest sit well with Lhaki. Still, I liked considering this concept of impermanence. "Youth is temporary and fleeting," the Kuzoo manifesto declared. Babies didn't stay babies forever. Our bodies changed and grew old. Feelings morphed over time. Impermanence wasn't a word you'd ever want to associate with Valentine's Day, as silly a Hallmark holiday as it was. Love was supposed to last forever. And yet, anyone slightly older than the typical Kuzoo volunteer knew that was a fairy tale. So why did we insist on pretending that it did, and that it should? It seemed to cause us nothing but misery, in exchange for a momentary feeling of pleasure, comfort, the illusion of safety. Once, I'd been proposed to on Valentine's Day by a man I adored; a year later he'd run away, with no explanation. Four years after that, the improbable occurred: As I meandered alone through Central Park on a snowy Valentine's Day, I spotted him on a romantic walk with a beautiful red-haired woman, who I later learned was his new bride. Thrown into turmoil by the emotion of the coincidence, I found myself settling into acceptance, even feeling pleasure for him, over his new life. Right now, as another Valentine's Day approached, I could acknowledge that it was okay that my fiancé had left, fine that he had—maybe, in fact, even better that he had. All these years I had believed that everyone else commanded stability, while I floundered about. That was ridiculous. How many loveless marriages had I witnessed, complicated relationships including children and tangled finances that made it difficult to escape? How many unhappy single people did I know who were waiting around to be rescued by someone, anyone before they allowed themselves to start living? Who did I know who had anything without compromise? Existing involved compromise. 
Life, particularly a love life, was far richer and more complicated than a fairy tale. Sometimes—more often than not—love came to you in a short fit of wonder, warmed you, enthused you, and then vanished as suddenly as it had arrived. And that was okay, too. Sitting here in this faraway radio station, where I'd just won a small victory, I could see now that I had it good in my own unique way. I was living a rich, full life. What more was there, really? KUZOO'S DEVOTED LISTENERS had a soft spot for the youngest contestants in the Symphony of Love. With Lhaki and TT disqualified, two little girls emerged as the top vote-getters: a ten-year-old named Kiba Yangzom captured first prize, and an eleven-year-old named Kinley Choki Dorji placed second. Kinley said she'd bring a friend to the Zone as her "date." Kiba brought her big sister. The _Bhutan Times_ headline that week proclaimed "Two Little Kuzoo Idols Are Here!" Sir Tenzin decided that, under the circumstances, a celebratory party for all of the Kuzoo staff and SOL volunteers was in order. He cut a deal with the owner of the Zone to serve up Cokes and snacks to all Kuzoo volunteers who stopped in on the evening of Valentine's Day. In the end, everyone did win. A culture of dining out was uncommon in Bhutan, particularly among young people, who rarely had much pocket money. The Zone was a treat for all, and it was the only place in town where you could get a real hamburger, with fries. (The Swiss Bakery sold small, spicy meat patties on rolls and called those hamburgers, but that was a Bhutanese interpretation.) At night, the owners of the Zone dimmed the lights and illuminated the disco ball for those who might be inspired to dance. The establishment's crowning feature was its karaoke machine, one of only two in Thimphu. Thujee emceed the evening as if it were a live broadcast on Kuzoo, working the room and sounding a bit like Phil Donahue, though he'd have had no idea who that was. 
He had the hyperbolic phrasing of broadcast down cold. "Let's give a hand to our generous sponsors here at the Zone," he announced. "The Grammy this year shouldn't go to anyone from Hollywood. It has to go to the young ladies who are singing here today. Give them a big hand!" The disco ball sparkled; hoots and hollers and clapping hands egged Thujee on. "And now, everyone, give a warm Kuzoo welcome to Kiba Yangzom, winner of the grand prize of the Symphony of Love. She's going to entertain us with the hit 'Where Is the Love' by the Black Eyed Peas! Take it away, Kiba!" Thujee handed her the microphone. The karaoke machine clicked into gear, and the tiny girl shrank into her chair, overwhelmed at having an audience. Someone behind the bar trained a spotlight on her. Once she started singing, her confidence grew, her voice becoming a bit louder. Soon, everyone at the Zone joined in. From the corner of my eye, I spotted one of my expat friends, Ed, the golf pro from Nova Scotia, searching the darkened room to find me. I waved him over to join me at my table and as I did, as the whole room swayed to the music, my heart swelled. I might never see any of these people again after I left Bhutan and I wasn't likely to accomplish anything grand during my time here. Or anywhere else, for that matter. I might not ever find the big love that had eluded me, and by now, it was pretty clear I wouldn't have kids of my own; raising a child alone was out of the question, financially and logistically. But here, all around me, was love. Nothing mattered more. Not the romantic kind—that was nice, but it wasn't forever. What was important, and abundant, was the love that filled the room right now. No Valentine's Day I'd experienced had been as wondrous as this, so full and beautiful. In a corner, Pema, Ngawang, and Pink sat sipping sugary wine coolers. 
They'd gone to Pink's apartment to change after work, and were heading next to Club Destiny, the nightclub where Pink moonlit as a deejay. This was the first time I'd seen them out of their national garb, dressed up for "party night." They looked like pretty twenty-something gals anywhere who were ready to hit the town: tight T-shirts embellished with bling, low-rise jeans, smears of unnecessary makeup on their pristine skin. Despite their festive getups, they radiated glumness. I knew the syndrome: on this couple-crazy holiday, neither Pema nor Ngawang had boyfriends, and Pink was going through a divorce. The last time they'd gone out dancing, they complained the next day at work that there weren't even boys they wanted to flirt with at the bar. I considered giving them a big-sisterly lecture about how Valentine's Day was just a silly marketing conspiracy to convince you that without a "special someone" you were inferior, incomplete. I thought of doing the "girl power" thing, telling them how lucky they were to be beautiful and young and employed in a profession people clamored to work in back where I was from. How they were lucky to be alive at a time when their country was opening up in new and exciting ways. To please not make the mistake I'd seen so many women make, delaying life while waiting for a man to ride into the picture so they could be half of a couple and not need to find themselves. I even contemplated telling them my own Valentine's Day stories. But all of this was talk, and talk they didn't really want to hear. I myself couldn't have appreciated this when I was their age. You had to suffer through a few Valentine's Days before you could understand what really mattered, anyway. So I ordered the girls up another round of wine coolers, got myself a shot of Bhutan Highland whiskey and Ed a Coca-Cola. Together we swayed to the music of the karaoke machine. A little while later, we would all head over to Club Destiny to go dancing.
# 8

# MY BEST FRIENDS IN THE WORLD RIGHT NOW

THERE ARE A FEW VERY IMPORTANT THINGS to know about the Bhutanese New Year, in addition to the fact that shops, schools, offices—almost everything except Kuzoo FM—shuts down for three days in celebration. For one thing, slaughtering of animals is prohibited. By government decree, the butchers remain closed throughout the first calendar month. Killing is never good for one's karma, but it is seen as a particularly inauspicious way to launch a new year. That doesn't mean _eating_ meat is forbidden; merely the act of slaughter and the _sale_ of meat are illegal. For another: Drinking as much alcohol as possible is considered an essential part of the celebration. The drunker you are, the thinking goes, the happier you are, and the likelier you are to remain that way for the rest of the year. Thankfully, this holiday is traditionally marked at home with the family, so the roads in town aren't necessarily glutted with "happy" drivers. In fact, on this particular New Year, the streets seemed emptier than I'd seen them yet. Even the five or six stray dogs that escorted me down the hill to Kuzoo each day appeared to be taking a holiday. Perhaps the fact that it arrived just before the first-ever celebration of the new king's birthday in February was contributing to the serenity in the streets. The salient points about this holiday called Losar were conveyed to me by Tenzin Choden, the radio jockey at Kuzoo with whom I'd had the least interaction. She was the quietest of the bunch, so soft-spoken that in person as well as on the air she was barely audible; she also had a husband, so she felt less compelled to hang around the studio before and after work. She'd show up at the appointed time to host whatever slot she was responsible for and then leave. Strictly business. Then there was her general suspicion of me, a foreign interloper. This had much to do, I inferred from her comments, with the way I dressed.
Since I had been told that it wasn't disrespectful for a Westerner to wear Western clothing, I opted not to outfit myself in the Bhutanese national dress. As a newbie and an outsider, I felt like a clumsy poseur in the beautiful, brightly colored ensemble. Plus, it was nearly impossible to put it on properly without the aid of a fleet of assistants. The national garb consisted of layers of _wangju_ (shirt), _kira_ (dress), and _tego_ (jacket) cinched with a stiff, wide fringed belt, and with a simple brooch at the breastbone. Like an old Bhutanese house, it seemed to be held together magically, with nary a nail or screw. Struggling to assemble this outfit neatly, I thought, must be what it's like for a man the first hundred times he ties a necktie, only far worse because the entire body is involved. The day I'd tried my hand at draping my body Bhutan-style, total strangers, male and female, rushed up to me on the street to smooth me out. "You look beautiful in _kira_," they'd say disingenuously, their tones making it clear that I _might_ look beautiful in _kira_ if I'd only learn to put it on correctly. Then they'd tug and pull and adjust to restore my honor. I was surprised my regular pack of stray-dog escorts hadn't chimed in with their opinions, too. Tenzin Choden was always meticulously attired in the national dress, including perfectly matched, dainty three-inch heels, which Bhutanese city women wore as evidence they'd successfully moved beyond the farm. The limousine shoes did not stop her from jauntily making her way up the steep hill and narrow staircase to the kitchen studio as casually as if she donned flip-flops. It was hard to imagine the starched twenty-four-year-old Tenzin ever dressing down. I imagined she must have a special _kira_, even, to sleep in.
"I'll come over to Rabten Apartments, and help you get dressed. You will look very nice." For all the self-assuredness I'd detected in the Bhutanese ladies I'd met, Tenzin in particular seemed to possess confidence in abundance. Her family was in the construction business, several Kuzooers had whispered, which was polite shorthand to disclose that she was wealthier than most of the others. This explained the fact that she owned her own personal laptop, a luxury everyone at the station coveted most (having an iPod ranked second). Because of this possession, she had the freedom to work on her reports from home. Given all this, I was surprised when Tenzin asked for help on a special project she'd assigned herself, a taped report on the meaning of the Bhutanese New Year, Losar. Why Tenzin chose to sink her teeth into an additional bit of work wasn't clear; in the back of my head, I allowed myself the fantasy that my failed workshop and ongoing casual talks in the workroom each day might actually have sunk in, but I knew in my heart that that probably wasn't the case. Whatever it was that had inspired Tenzin, I was happy to help. To begin gathering interviews for the report, we walked down to the row of meat shops near the vegetable market, below the lower road where the bloody carcasses that hung in the windows would make even the most devoted carnivore commit to tofu and kale forever. They looked like Upton Sinclair's nightmare incarnate, flesh dangling, flies swarming, the occasional stray dog panting and drooling nearby. Tenzin whipped out a little minidisc recorder that she'd borrowed from the Kuzoo supply closet and interviewed a handful of shoppers (in Dzongkha) and butchers (in Nepali) as if she'd been wielding a microphone for years. Since most Bhutanese didn't want the job of slaughtering animals at any time of year—very bad for the karma—Tenzin explained that the majority of the meat proprietors were immigrants. 
When she was done, she announced, "Okay, we can go now," and marched us back up the half-mile hill through the center of town to the studio. Now it was midway through the holiday, and Tenzin was finally ready for feedback on her edited segment. The only other person in the studio was RJ Kinzang, a sweet, eager young man of twenty-three who came from more modest means than his colleague. He was the eldest of nine children; his family ran a tiny general shop off one of the rattier side streets, and they all lived above it. He was there hosting the country music show. What better way to ring in the Bhutanese New Year than with a little twang? More than anyone, Kinzang loved working at Kuzoo, and loved being on the air. He was the guy everyone asked to cover for them when they didn't want to come to work, which usually had something to do with their staying out too late the night before—or being slotted to host the country show. Kinzang didn't drink, couldn't afford to party, and never complained about doing the country show, the early bird show at 6:00 a.m., or anything else, for that matter. He was the only one of the radio jockeys who had any sense of Kuzoo mania or his growing (and polarizing) celebrity in the community. The audience either loved the way he spoke English, or they mocked it wickedly, but they all knew who he was. Tenzin turned down the Garth Brooks playing on the workroom monitor so I could hear her opus free of distraction. I wobbled in the rickety desk chair, and braced myself. The piece began with a short burst of traditional Bhutanese music, and after a few seconds, Tenzin's voice mixed in underneath. Her read was clear and deliberate; I'd never heard her speak that way. "Losar, the Bhutanese New Year, is considered a very auspicious time, marking the beginning of a very happy and prosperous new year. People believe with the starting of the new year, all things will go well and prosper. 
So to celebrate the special occasion, government offices are closed, shops are closed, and people stay home from work to be with their families." I reckoned I was one of five people who'd tune in who didn't already know everything Tenzin was reporting. "To know more about Losar and the change in the year, which is the Hog year, I interviewed Sir Lugtaen Gyatso, principal of the Institute for Language and Cultural Studies at the Royal University of Bhutan, which is located in Semtokha, Thimphu." Informing listeners that Semtokha was in Thimphu was the Bhutanese version of "Who's buried in Grant's Tomb?" The music swelled a bit, and Tenzin posed a question: "What is the significance of Losar, and could you explain about the origin of Losar, _la_?" The _la_ was the most charming of the Bhutanese verbal tics. It was frequently tacked to the end of a sentence, usually a question, to soften it. Often, it was deployed deferentially, by younger people talking to elders, or anyone to an authority, as a sign of respect. Sir Gyatso responded: "Well, Losar is New Year. Its significance is very straightforward, because it's the beginning of a new calendar, the beginning of a new time that everybody wishes to see unfold. So many wonders. We Bhutanese, being religious, we believe in spiritual power, and therefore we Bhutanese strongly believe that a good beginning is a sign of a good conclusion. I think the significance of Losar is to begin with a good start that will last until the last day of the year." From there, Sir Gyatso got a bit more technical. This New Year, it seemed, wasn't the only one in Bhutan. There were actually nine different starting points to the year. Each had a different name. After letting him ramble on for a few minutes, Tenzin asked what was in store for this year of the Female Fire Hog. Sir Gyatso sounded apologetic, as if it were his fault that the heavens were delivering a clunker: "This is not going to be a great year. 
After every nine years comes a year that's considered to be a bad year. And every tenth year is supposed to be the worst year of that decade." Whether this was the ninth or tenth year in the cycle, I couldn't quite glean from his comments, but the underlying message was clear: Things were not going to be good, not for a while. I understood why the coronation of the newly anointed king and the democratic elections had to wait until 2008. Tenzin asked what could be done given the dire astrological predictions. Sir Gyatso's response was very Buddhist, very wise. Big tasks would have to be tabled out of respect for misaligned stars, but life had to go on, despite the astral challenges. "I think there is no timing for good things. We can always try to be good human beings, do a little bit of soul searching, ask ourselves, 'How good a human being am I?' We can still try to do something good, and avoid doing something bad." Now Tenzin introduced an interview with the proprietor of the Wong Meat Shop, who said he'd been mobbed with customers in the last few days before the meat ban took effect. Excerpts of Tenzin's interviews with the shoppers followed, and she translated that they were saying, basically, they couldn't live without meat. After all, eating well and plentifully at the beginning of the year was another way to ensure it would be a good one. Sounds of a giant vat of liquid sloshing about for a few seconds mixed together with her next narration: "Drinking is part of our celebration for any occasion. As the celebration nears, Karma—name changed—is busy helping her mother make _ara._ " _Ara_ is a clear wine, distilled from rice. A Bhutanese friend plied me with several large glasses one night. It left my head thick and cloudy. It was delicious, but it wasn't something you'd drink if you were hoping to do anything productive the next day. 
In this place with so few names, and where everyone could identify a person with only the vaguest description, I loved that "Karma" had felt compelled to conceal her identity. Whatever her real name, she sounded as sweet as Goldilocks making porridge. "I'm making _ara_ for Losar celebration, _la_. People should be very happy, and in order to make my family intoxicated and happy, I'm making _ara._" "Can you consume all you are making, _la_?" From the background audio, one suspected that Karma either had an enormous family, or was running a distillery. She responded, matter-of-factly: "Somehow we have to make money. We will sell half of what we make." The Bhutanese music mixed in again, and our intrepid reporter returned for the finale: "All will be busy preparing for their Losar. Some will be frying snacks, some will be decorating their altar, some will be practicing for the archery tournament, and so on. That is all for now. I hope you enjoyed listening to the Losar program. Until then, on behalf of Kuzoo FM 90, it's Tenzin Choden. For now it's bye and Happy Losar." As the Bhutanese music trailed to a close, Tenzin looked at me, her eyes wide. It was clear that she was pleased with herself. I wanted to say so many things, helpful suggestions and edits to make it a "better program," at least to my ears. Had she shown me a script beforehand, I would have felt freer to contribute them then. Of the many editorial tweaks I might have suggested, though, none mattered quite so much as the fact that she'd taken the initiative to prepare the report in the first place. Back home, a story tied to a holiday would have been prepared weeks in advance and aired the day before. Though the piece wasn't terribly illuminating, it was perfectly fine. In fact, by the standards of Kuzoo, it was quite an achievement. She had collected sound and edited it, and mixed in music and recorded a narration.
So I resisted the urge to criticize, and said, sincerely, "Good job, Tenzin"—praise she didn't seem a bit surprised to hear. She flashed a big smile and scurried into action, copying the audio onto her thumb drive, marching into the studio without knocking, leaning over Kinzang, and uploading it onto the playback computer. When she was ready, she hit Play, and her Losar segment was beamed out to all of Thimphu, probably halfway through a nice fresh batch of _ara_ now, anyway. A FEW MINUTES LATER, the Kuzoo phone rang. Madam Kunzang Choden (no relation to Tenzin) was calling to tell me that her husband was on his way over to pick me up. I'd almost forgotten I had a Losar lunch date with Bhutan's first female novelist, who also happened to be, along with Madame Carolyn, the other half of the king-appointed Kuzoo board. Madam Choden was a tall woman with a sweet, round face, her thick hair pulled back into a bun. She looked like a cross between an earth mother and a kindly, youthful grandma; there was an air of formality about her that was almost regal. The Kuzoo staffers told me she was a descendant of a prominent religious family from central Bhutan. She was held in such high regard that most people called her "Ashi," an honorific usually reserved for the queens. I myself called her Madam, and she didn't correct me. Long before she started writing, Madam Choden had lived a life that read like fiction. Like a handful of Bhutanese of her generation—though most of them male—she had traveled at age nine to next-door India to get a proper education that could not be had at home. Schools were scarce in Bhutan in the 1960s, mostly reserved for monks, and construction of the roads was just getting under way. To get to the convent school where she'd been enrolled, Madam Choden had ridden on horseback for twelve days with an entourage of attendants. She went on to earn two bachelor's degrees, one in India, the other in Nebraska, and, for a time, had been a teacher.
Now she was one of a handful of authors to have emerged in the kingdom. She had published several books of Bhutanese folktales, as well as a novel, which happened to be about a girl from a remote part of Bhutan who, premodernization in the sixties, yearns to explore the world. Madam Choden had just returned from a visit to the United States to see her children, who were studying there, and had brought home a fever for American public radio. When I told her I worked in radio, I could feel her brightening. And that's when I'd been invited to lunch. Promptly at noon, a little white car sped up the hilly driveway to the Kuzoo studios. Since I'd arrived in Bhutan, it was the first time anyone had shown up at the exact moment they said they would. A gray-haired European gentleman emerged, strode to the other side of the car, and opened the passenger door, all with a flourish of chauffeuresque formality. He introduced himself as Walter, stuck out his hand to shake mine, motioned for me to get in, got back in the car himself, and sped down the hill and onto the road, as efficient as a Swiss timepiece. Madam Choden was so quintessentially Bhutanese that I had forgotten she was married to a native of Switzerland. He was a handsome complement to the good looks of Madam Choden. Several of the more erudite Bhutanese had spouses from outside the country, a fact I'd observed with curiosity. Around a few corners, past my apartment, and left, right, left along a winding road, up a driveway to a house hidden on a hill. We stopped in front of a little country cottage I never would have found on my own. The branches of the barren trees blew in the wind; the air felt empty and quiet, as if the whole city were inside today. It wasn't as cold as a February day might be in New York, but it definitely felt like winter. A yapping little white dog served as welcome committee—a house pet as opposed to a stray—followed by an elegant Madam Choden, who accepted the box of cookies I handed her. 
The bakeries had been closed and I'd had to resort to Indian imports. She announced that Rinchen, her maid, had just pulled the pesto pasta off the stove. "Pesto from my trip to the United States. Trader Joe's," she said proudly. "I also have crackers and anchovy spread from Paris." My stomach danced with pleasure at the prospect of such delicious food. "And since it's a holiday, we must have some of this wine, too," Walter said, pulling a bottle from a cabinet. I squinted to read the label on the sly, as if I were looking at a mirage in the desert. Wine was very hard to come by in Thimphu, except for _ara;_ at the watering hole in town frequented by expats you had to order an entire bottle, because demand for imported wine was so low that it wasn't cost-effective to open one to sell by the glass. Since the local microbrew called Red Panda was available for only twenty-five cents a glass in most bars, spending $15 for a bottle of whatever wine was in stock seemed not just decadent but wasteful. Walter caught me staring and kindly indulged my curiosity by holding up the bottle so I could get a better look. It was my favorite, Shiraz, from Australia. "We get it at the duty-free shop. It's the best they send us here." As he poured three glasses, I heard noise at the door, and Madam Choden left the room to tend to it. She reemerged into the dining area moments later, followed by a very tall, distinguished-looking Western man, fortyish. Her introduction was as formal as if she were debuting him at court. "Presenting Martin Gallatin, Thimphu's newest and most eligible bachelor," she said, though she finished with a giggle. _Dear God, am I being fixed up?_ I wondered as I savored the spiciness of my wine. Was he widowed, divorced; was she a Bhutanese lady; were there kids? Half the generous glass Walter had poured for me was already warming my insides. Martin, whose name isn't really Martin, looked my way and said a lukewarm hello. 
Perhaps I was as much a surprise to him as he was to me. He handed over a pot of jam to Madam Choden. "Here's all that's left of Claudine," he said, smiling, and I supposed this Claudine must not be deceased. If she were, this Martin was pretty crass. Walter led us out to a beautifully set table on a glass-enclosed patio filled with plants. A cat lay wrapped around a lamp, her stomach rising and falling peacefully with each breath. We could have been in the French countryside, or in a cabin in North Carolina, or in a thousand other charming places on a quiet day. But where we happened to be was in a pretty cottage in the capital city of Bhutan at the dawn of twin holidays. "I can't place your accent," I said to Martin, as Walter poured him a glass of wine and refilled mine. I felt a bit greedy, but it was delicious. "You can't? Well, guess." I was trying to be charming, but Martin didn't seem to bite. "Well, hmm, are you German?" Swiss-German, I figured. Maybe that's how he knew Walter. "German, _la_." He looked at Walter. "Dear God, no. I'll try not to be insulted. Germans, _la._" I couldn't tell if Martin was trying to be funny in mimicking the Bhutanese way of talking. I was getting a bit drunk, and I feared I'd made an ugly American mistake by remarking that someone who was likely trilingual had an accent. Unilingual me was the one with an accent. I wished I were back at the station drinking watery instant coffee and listening to the music, helping RJ Kinzang line up country songs. Casting about for a way to save face, I said, "Okay, I think I've got it. Thai! You're from Thailand, that's it!" Walter and Madam Choden were generous enough to laugh at my effort. Martin looked my way and smiled; I hadn't offended him terribly, after all. "Yes, _la!_ That's it!" He sipped some wine and patted the little white dog. "Actually, I am Norwegian, by way of Switzerland." With that settled, the conversation somehow turned to potatoes.
The United Nations had announced that the next year would be "the year of the spud." The tuber was so abundant in central Bhutan that there was talk of opening a potato chip factory. Both Walter and Martin were integrally involved in agricultural development in Bhutan, and they extolled the virtues of the vegetable as an important cash crop for Bhutanese farmers. I shared my anecdotal market research about the preference among the Kuzooers for imported chips in flashy packages over my preference: the tastier, fresher homemade kind. "Packaged goods," chimed in Madam Choden. "There's a fascination here for packaged goods, because people never had access to them before. They're seen as a sign of affluence, worldliness. That's why we have so much litter. No one knows what to do with the wrapping, because few food items before ever came in wrappers." Circuitously, we moved to a discussion of Martin's time in another part of Asia, where Walter and Madam Choden had lived for a while, too. He related a lengthy tale about an American girl with whom he had become smitten. She later published a story in a prestigious literary journal about the circumstances of their meeting. Martin found this very impressive, but it also left him with a measure of defeat. He had written his own interpretation of the events and hadn't managed to get his into print. I shifted uncomfortably. Martin was dominating our Losar lunch, when I'd been hoping to learn more about Madam Choden and her writing, and about the library in town where she was a volunteer, the museum she was building on her ancestral land, what her home district of Bumthang was like, and whether Bhutan was ready for its first democratic elections next year. Not about a strange man with a crush on some younger woman he'd never see again. Eventually, under my persistent questioning, Walter and Madam Choden pulled out pictures of their kids. We feasted on the crackers and anchovy paste, and the pesto pasta was served up. 
When the requisite pot of _emadatse_ arrived, I quickly passed it on without taking any. Another bottle of wine was opened. Martin grabbed my camera and snapped some photos of us all. All of a sudden it was 3:00 p.m. and Martin had to go wrap up some things before the holidays continued with time off to honor the king's birthday. Everyone insisted he must drive me down the hill, even though what I really wanted to do was walk down and breathe in the quiet of the new year, work off some of the wine, savor the most delicious meal I'd had since I'd arrived. But Walter and Madam Choden were emphatic, and they waved gaily as we pulled out of the driveway. In the privacy of his car, one on one, Martin dropped the annoying theatrics, and his vulnerable side emerged. He suddenly became a more enjoyable companion. As the car chugged down Norzin Lam and approached the studio, I popped the question I'd been dying to ask. "So what exactly happened to Claudine? And when?" "She left, with our kids. Seven months ago." "Where did she go?" "To Switzerland. She's at her family's place." "How old are the kids?" "Eleven and nine." "How long have you been here?" "Six years." I imagined, in my limited experience of Bhutan, and without knowing a thing about Claudine, that it might be a particular challenge for outsiders to raise a family in Thimphu. There appeared to be a small community of expats who were here through various nongovernmental agencies, and they were said to be fairly tight-knit. But it must have been hard to be so far away from familiar territory and the creature comforts of home, especially when you had kids. There had to have been other factors contributing to Claudine's departure, though. Marriages didn't just dissolve like that—particularly when there were children involved. "Have you looked up that writer woman you keep talking about? You still seem kind of obsessed with her...." "I'm not obsessed with her." "But she's about all you talked about at lunch. 
Besides potatoes." Martin pulled up in front of the old foreign minister's house-turned–radio station and stopped the car. "Probably because you remind me a bit of her, _la._" He laughed nervously, as if he knew what he was saying could get him into trouble. "But yes, yes, I have looked her up, and she's married. They both are rather well-heeled, if you know what I mean." "Ah. It helps to be, if you're going to be a writer. Maybe she really loves you and is waiting for you to confess." I looked him in the eye and smiled, to make sure he could see I was teasing. "Send me the story. I'm curious." I found myself flattered that Martin had made a comparison between me and this literary prodigy he admired and, suddenly, irrationally jealous that this invisible woman had captivated him. "Her story? Or mine?" "Well, I'm more interested in yours. But sure. Send both." "I will. Let's have dinner this week?" "Yes," I said without hesitation.

THAT NIGHT, I sat in the quiet of my little apartment, tethered online with the dial-up connection and listening to the Bhutan Broadcasting Service news on TV. It was rare for me to be on my own in the evening here, and I found myself liking this self-contained multimedia bubble. It felt different in the Himalayas than it would have back in Los Angeles, less like a lonely way of killing time; television, in particular, was a fun curiosity here rather than a droning nuisance. But why? In the end, news was news and commercials were commercials, weren't they? Even the Indian channels piped in on cable were screeching dreck; the only difference from home was the looks and the names. The BBS still had a rough, homespun edge to it, more utilitarian than glitz, which somehow made it more palatable. "We'll bring you the international news after this short break," announced the _kira_-clad anchorwoman from behind the news desk, which looked like any other news desk except that it was adorned with the signature Bhutanese woodwork.
She had just shown pictures of a roadblock due to a landslide on the highway near Bumthang that had stranded hundreds of people. One road being shut down could immobilize much of the country. An animated clock with a second hand that was rapidly flying around the dial flashed on the screen, and an announcer declared: "Time to file your PIT. Time to file your PIT. Time to file your PIT. Time is running short! File your personal income tax now." Beauty shots of flowers and birds, set to lovely music, flashed on the screen, and after a few seconds, a woman read in Bhutanese-accented English, as if she were reciting poetry: "Wherever you are, whatever you do, when you think of refreshment, think of ice-cold Coca-Cola." A static shot of a shop downtown, crammed with merchandise: "Kidsy, your exclusive kids store, announces Losar sales...." Photos of a newly relocated gym: "Sakten Health Club and Saloon are pleased to announce Relax and Recover and Rejuvenate...." Commercial break over, and back to the studio, where the anchorwoman informed the people of Bhutan about a stone-throwing mob attacking the deposed Nepalese king as he embarked on a pilgrimage. Of a court in southern India sentencing three members of a regional political party to death for killing three female students in 2000. Of the death-by-remote-control bomb blast of a key Pakistani health official, who was leading a polio immunization drive. In comparison to the outside world, little Bhutan lived up to its reputation as the last Shangri-la, peaceful and serene—its reward for being long sequestered. What would a villager in the remote reaches of the country think of these reports? News of a world beyond in strife, bracketed by simple commercials offering consumer goods. It was a good thing the government was committed to Gross National Happiness, because that philosophy seemed a crucial necessity to offset the effects of a few hours in front of the television. 
"Happiness with what you have" wasn't exactly the backbone philosophy of advertising and media and news. An email from Martin popped into my inbox, inviting me to dinner. "I have fresh greens here," he wrote, knowing how enticing that would be to a nutrition-starved visitor; good-looking fresh vegetables were hard to come by in winter in Thimphu. His story was attached, and he insisted I didn't have to read it. I opened it immediately. It was beautiful, and funny, and long—an account of running into this woman and her friends as they toured a village. In the story, Martin confessed to what I suspected might be true: He turned on the sarcasm in the face of attractive women to mask his insecurity. An introspective scientist whose third language was English and who could write very well. This Martin was intriguing.

TUESDAY AROUND 5:30 P.M. I heard a chugging noise in the driveway outside my apartment. Martin had arrived to pick me up. When I got in the car, I noticed he was more relaxed, and more handsome, than I remembered. I didn't usually leave Kuzoo so early, but everything was so quiet, what with the holidays, which would keep schools and offices shut a few more days. The ride to Martin's was comically short, just a quarter of a mile. _All this time I've been here_, I thought, _this man has been around the corner and I had no idea_. The side door of the house opened into a lovely, homey kitchen. A dozen pairs of shoes of various sizes sat neatly lined up just inside the entryway. Martin offered me a pair of floppy women's slippers in exchange for my boots; Claudine lived on. Blue glass bottles of filtered water and colorful Bhutanese thermoses, used to carry food on picnics or to work, lined the windowsills. A shopping list lay on the table: _prayer flags, TP, salt_.
Martin motioned for me to sit and offered me a choice of a Red Panda or a bottle of the same wine we'd had with Walter and Madam Choden; I opted for the latter, since the taste of the red we'd enjoyed was still fresh in my memory. Besides, the birth anniversary of the king seemed to merit a more formal beverage. A cork was popped, two glasses poured, and Martin started cooking. While he fixed the meal, he asked with a keen interest about my work and my life. He refilled our glasses, lit a candle, and served us, never interrupting the flow of questioning or my responses. This was how I'd often prepare dinner for a friend back home, and I was enjoying being the guest. The wine was a truth serum, and I felt comfortable giving in to the gently somber mood of this house and of the holiday. I could tell this man everything, and he wouldn't care or judge me; we were strangers thrown together in an odd place at an interesting moment in time, and we weren't likely to ever cross paths again beyond the borders of Bhutan. And so I poured out the highlights of my life as if I were in a confessional—the good, the bad, the ugly, scenes from my life I hadn't shared even with people I considered good friends. I ended with how now, in my forties, I couldn't help wondering "if" about everything. Martin looked straight at me, the glow of the candle all that was lighting the room. The only light outside the kitchen window came from the waxing moon; everything around us was perfectly still. Every once in a while, one of the house dogs dutifully guarding the front door would yawn, as if to remind us we weren't alone. I had this hunch Martin would understand the notion of loss, since he'd experienced such a profound one himself. He had on his face an expression of pure empathy, and I wasn't ashamed of anything I'd said. Or anything that had happened in my life. "I think everyone, if they're paying attention, asks the 'if' question," Martin said, breaking the silence. 
"Sounds like we're both in a state." "Well, I'm in a different state, of course, than you." My losses were ephemeral. He had a longtime partner and children somewhere else in the world. "Yes. It never occurred to me this might happen." "It must be very hard not to see your kids." "Eight thousand four hundred and seventy miles," he said without a second of calculation. The light from the candle flickered in his eyes; I could feel his sadness and the purity of his emotion. "It is very difficult." I breathed deep and steady. I wanted to thank Sebastian or karma or whoever it was who'd led me here to Bhutan, to this place where the people, the conversations, everyday life was what I hoped it could be. I felt better here, freer than I ever had, more _me_ than ever before. Abruptly, he stood up and cleared the dinner plates, poured two glasses of port, and said, "Let's go to the living room." To do that, you had to walk through a formal dining area; I squinted to see if in the darkness I could make out a picture of his family, but I couldn't. Walking into the living room was like landing in Narnia—a magical, dictionary definition of a family room, hidden, beautiful, warm. I wondered: What had happened here in those six years? The life that had unfolded in this house was still so present. A child's drum set in one corner; several plush couches inviting you to sink in; enough books, CDs, DVDs to stock a small public library—fiction, science, Asian themed, Euro themed—kid's stuff everywhere. The moon shone in from one of the windows, and the stars were bright; I was drunk from the wine and getting drunker just smelling the port. Here I was in Bhutan in this beautiful space with this complicated man, and I never wanted to leave. But of course, I had to. At some point—some point very soon, in fact. The clock was ticking toward the end of my stay. And that was okay. It was more than okay; it was perfect, actually. 
The Himalayan air, the very notion of Gross National Happiness, and the exercise of the three good things—the cocktail of them had convinced me to embrace the moment before me, now, to appreciate it for what it was, but not to hold it so tight that I never let it go. For another moment would occur, and then another. As content as I felt right now, I felt heartbroken, too. Imagining the loss and sadness and confusion of Martin's situation conjured up all the loss and sadness I had ever felt; Martin had managed to have a family, and even though it had fractured, it was sacred. I tried to manage this realization as we relaxed into the room. He positioned some pillows so he could stretch out on the floor, and I chose a particularly inviting chair across from him. Using his computer as a jukebox, he played songs for me as we talked—unusual music: ballads and vocalists I'd never heard before. On the occasion of His Majesty the king's twenty-seventh birthday, I was about to be seduced by a tall, recently separated, clearly heartbroken scientist in the home it seemed his family had evacuated in an instant and which he preserved as if they were coming back the next day. I wanted to know everything that had happened for Martin, everything about him; I wanted to understand everything about Claudine and the children. Even though I couldn't possibly. And then, as he worked his way through his musical collection, Martin said the words that broke the spell, that he had to have known would break the spell. "My French lover adores this song." It hadn't disturbed me to be on the brink of an embrace with a married-but-separated man whose estranged wife's slippers I was wearing while I lounged on Thai silk pillows they'd probably bought together in a fit of long-forgotten domestic bliss. Well, it didn't bother me that much; I had taken it on faith that I wasn't interfering in an active marriage, but you never really know, do you, what's happening beyond what someone reveals?
They themselves might not really know. But this casual mention of the "lover" changed things. Was this lover the reason Claudine left? Did she live here in Thimphu? If she didn't, where could they have met and how could they possibly carry on an affair? I tried not to act surprised at Martin's revelation, as ham-handed as it was. "French lover. Obviously she has good taste. Someone in Bhutan?" "No, no, French lover in France." "Where did you meet?" "I met her in Europe. She's very young." By now I had determined that Martin was a year older than me, forty-four. "How young is very young?" "Twenty-four." "Ah. Ridiculously young. But not quite fifty-fifty young." "What's that?" "Literally half your age." Martin smiled. "You'd better be careful," I said, trying on the role of advisor. "Someone that age is bound to want to have children. Eventually." "I wouldn't mind having more children. But I don't think this will come to that. I don't know why she has anything to do with an old man like me." "Well, I do. Men are tedious until they hit forty." Some sense overtook the wine and the port and the exhaustion and the dashed desire to get closer. Martin had revealed that there was a woman in his life; I had misunderstood the cues. Even though I was unlikely to see him again, I couldn't be a part of this. It was time to go back to my little place in Rabten. "You know, I'm really exhausted. It's probably a good time for me to head home." Now Martin was the one trying to mask surprise, and he started to stand up, as if he were expecting this. "I'll drive you down the hill, of course." "No need to." "Yes, if the dogs don't get you, the rats might. I insist." Small talk was punctuated by dog howls as we wended our way out of the driveway. Had someone pushed the car, he wouldn't have needed to turn the ignition on to get me home just down the hill. None of the houses we passed showed any sign of life. 
As we entered my driveway, the two resident chubby Rabten dogs hovered anxiously around Martin's car. We sat and talked for so long that Martin finally turned off the engine so as not to disturb my neighbors. Without the heat of the engine, I got so cold I folded my coat around my knees, but we just kept on talking. Finally, I said, wow, it's really freezing, and it's late, and thank you for a wonderful meal—God, real green things, delicious, how wonderful—and thanks for all the music. And I leaned over to give him a good-night kiss on the cheek, because I meant it. It had been an amazing night in what had been a month of gorgeous days and evenings halfway around the world from any world I knew. And my attempt at a sweet, polite thank-you morphed into an embrace, and all the tension of the evening was poured into a very long, very beautiful kiss.

MORNING ARRIVES AND I walk into town; the streets are as empty today as they'd been the night before. The ridge that juts out from the mountain, where a line of cypress trees are perched on the edge, grabs me every time. Just the right distance between them, as if nature had intentionally yard-sticked the arrangement. Their simple beauty looms high above the city; just looking at them makes me happy. They would make my list of three good things almost every day. Except I haven't done a list in weeks now. I've been so caught up in the richness and fullness of every minute here, I haven't even remembered to make lists. I haven't felt the need. It's become second nature just to look at each day, even when it was ordinary or even when something was going wrong, and find the goodness. Today, in honor of the twin holidays, three of my expat friends are celebrating with a special lunch. Today's gathering will be much different from our usual appointments to drink Red Panda beer in my apartment or grab enormous plates of fried rice for $1 at the restaurant Chopstix in town. Instead we are heading up to the fancy resort, the Aman.
I spot Ed near the long line of prayer wheels on the square. He's all spiffed up today, wearing a sports jacket. The attire seems perfect on him, but out of context given the surroundings. Too country club. Several Bhutanese stand around him, smiling, grinning. They are wearing woven triangular hats, hats I haven't seen before. "Nomadic yak herders," he explains, gesturing toward them, clearly confused about what to say. "They don't speak any English, but I think they want money. I gave them some...." They also want to stare at him. Likely they've never seen a tall, light-haired Caucasian man dressed in casual business clothing. I step into sight, and they want to examine me, too, a short lady covered in black clothing. I wish more than I have the entire time I've been here that I could speak their language, so I can ask what they're thinking. I'd heard stories from Bhutanese who were my age and had grown up in the villages, without seeing paper—paper!—or wristwatches until they were teenagers. As much as I think I comprehend the magnitude of the changes here in recent years, as compassionate a student as I may be of the impact media will have on the people, I am aware that I don't really have any idea what it's like to be Bhutanese. Pam arrives. She'd come from the hospital, where she's preparing medical equipment to take to the remotest reaches of the country. Ever since her divorce, she's been running around the world to volunteer as a nurse in different countries. Most of the villages here in Bhutan that she'll visit have never seen Western medicine of any kind. "Good day." She nods, even though she knows they don't understand. "Have a good day," she says, and moves us toward the road, politely and firmly. Ed and I obediently trot along behind her. Yet I find myself magnetized and looking back. 
I don't have a nursing degree like Pam, but there must be something useful I can contribute out here in the world, out from behind the desk to which I'd long been tethered. Face-to-face with the most traditional Bhutanese I've met, I want to know more. Not just about Bhutan, although there's no question I haven't had enough time here. These nomads inspire me to want to see more of the world, experience more of it. To become a bit more nomadic myself. To help the world somehow, instead of chronicling its demise from a journalist's perch and a newsroom. I believe I have just made a Losar resolution of a sort. Our friend Mayumi rushes up from the street with a wave. My three companions peel me away and into the taxi, and we start the drive out of the center of town and up, up the winding road. The resort is built on land owned by one of the queens and attracts the kind of tourist any country would love: The price for a room is a thousand dollars a night. And it's as beautiful as it should be for that much money, a Westerner's fantasy interpretation of Bhutan. It evokes the kingdom, but it is nothing like it, with sophisticated muted tones, instead of the local brightness; plush furniture, instead of the hard-backed utilitarian fare in most places. Giant picture windows to showcase the view. We are enjoying lunch here at a reduced price, the local's discount no real local can afford, $50 a head. As nice as it is to eat a lovely meal in luxurious surroundings with good company, I wish we'd given the nomadic people this money and gone to my place, where I'd have made us spaghetti from the Swiss Bakery. "Here's to the new year," we say, hoisting our glasses. "Here's to the king and a long, healthy reign. Here's to all of us in Bhutan." Ed's face twists with tears; he doesn't even try to conceal his emotion. Mayumi is leaving in a week, and that's when Pam is going on the road. And I will be gone not long after. Our shared meals and confidences will soon be history. 
We are like four whirlpools that have run together for a while, Ed declares poetically; these weeks we've had together have felt like years. When time is limited and the environment is so different from your own, relationships skip the "how do you do" phase and go straight to the heart of friendship. He's right, I think, as we toast again, but surely not just any environment could conjure these feelings. There's something about Bhutan, even in the language. _Kuzu zampo_ means hello in Dzongkha, but there isn't a word for good-bye.

# 9

# THE THUNDERBOLT, PART TWO

ON THE MIDDLE FINGER OF MY LEFT HAND IS A circle of hammered gold, topped with a large, tear-shaped oval of turquoise. I got it at the shop across from the bank on Norzin Lam. There are five or six such stores up and down the main strip that sell the few items you can take home to remind you of your visit, as if _things_ could possibly do justice to an experience here. The stores sell beautiful handwoven textiles, many of them purported to be relics from the shopkeepers' family trees; in an age of inexpensive machine-woven copies imported from India, it's better to turn grandmother's handiwork into cash. Little statuettes of various Buddhas, jewel-adorned prayer wheels. Prayer flags that couldn't possibly look the same when displayed in one lonely string anywhere other than in the magnificent open landscape of these mountains. The only souvenir I wanted was this ring. The yellowness and the heft of its matte gold, and the simple, brilliant blue of the stone, drew my eye when I'd seen it on the hands of men and women around town. The question was where to buy one for myself. A tiny, hunched old lady guarded the entrance to one particular shop. Day in, day out, she stood out front with a cat curled by her feet, its stomach rising and falling softly with each breath, the two of them a still and quiet duo.
Occasionally the lady's face would brighten into a smile in response to a passerby's _"Kuzu zampo-la."_ But this wasn't tourist season, so buyers were scarce. After careful consideration, I decided this would be the place where I would spend my money. The shop was a bit run-down, and might not have had the selection of the others, but I preferred it to the shinier outfits on the other side of the traffic circle, over near the tourist hotels. The lady seemed surprised as I made my way through the door. She'd probably long ago given up hope that I might make a purchase, for I passed by the shop at least a few times each day. She motioned for me to stay by the counter as she walked to the back, I supposed, to fetch someone. A woman about my age, who identified herself as the lady's daughter, emerged. She could help me, she said, since she could speak English. She asked where I was from, what had brought me to Bhutan, and when I was leaving. I told her that I was headed home tomorrow. "So sad that you have to go. Will you come back to Bhutan?" "Oh," I answered, "I hope so. I hope so." Her mother returned to the counter and handed me a cup of tea. As I often did in Thimphu, I felt like an honored guest, and not in this case simply because I held the promise of a sale. A visitor from so far away was still a special occasion. "May I see that ring, please?" I pointed to one of three similar rings in the jewelry case, the smallest among them. The lady complied, and slipped it onto my hand without effort. It fit perfectly. I held out my arm, wrist pointed up, and admired the adornment. It looked as pretty as I'd hoped. "So sixty-five hundred ngultrum today is about..." I tried to calculate out loud, looking at the tiny white label of a price tag. The exchange rate changed daily and the ngultrum, pegged to the Indian rupee, had been rising in value lately. There was no need to be approximate. 
The lady whipped out a calculator from under the counter and started pressing the keys. "A hundred forty dollars." "Hmm. I'll have to go get some money across the street..." I didn't even bother to ask about credit cards; only one shop accepted them, and with a steep surcharge, to boot. Plastic was still the domain of visitors. Most Bhutanese used only cash, though printed currency had been introduced only forty-odd years ago. The couple of automatic teller machines at the bank weren't designed to work with anything but a local account. "Traveler's check? We can take a traveler's check." First I'd heard of that here. As I pulled out some checks and endorsed them, I joked that I was "getting married to myself." The old lady smiled broadly, wanting to be agreeable, but not understanding. Her daughter took my words literally. "Yes." She smiled, and then she told me what I already knew. "That is a wedding ring here in Bhutan."

THE NEXT DAY, I admire my Bhutanese jewel while I wait to check in at the Paro Airport ticket counter. I love this portable souvenir, and I love the significance I've ascribed to it, too. I am married to myself. Who else do I really need? It has taken me forty-three years to feel whole, to believe that nothing, really, is missing. Now is what matters; I have all I need. What's important isn't some promised future event, relationship, or achievement. The world is smaller now, and larger, filled with possibilities. Every time I catch a glimpse of my hand, I have a tangible reminder to celebrate. And to thank Bhutan for having cinched my feelings tight. Right this moment, I am exhausted with the emotion of leaving and the prospect of going back to my job and to Los Angeles. Sir Phub Dorji stopped by in the afternoon to bid me farewell; it was the first time I'd seen him since the day I arrived in the kingdom. After our tea together, I stayed up all night dancing at Club Destiny with the Kuzoo gang, and then at Benez, a bar near the traffic circle.
There, a Bhutanese patron told me in his drunken haze how, after finishing graduate school in journalism in the United States ten years ago, he came back to Bhutan and quit his job at the newspaper not long after. "Everything about the job—and me—was different," he said. I already suspected that my reaction to home would be the same. The question was whether I'd allow myself to do something rash. Once we closed down the bar, Andy, one of my younger expat friends, walked me through the empty town and up the hill to Rabten. I wasn't afraid of the dark, desolate streets as much as I was worried about going back into the apartment. Earlier in the week, I'd spotted a rat while I was getting dressed in the morning and couldn't handle the idea of coming face-to-face with this new roommate. "So, you have a Bhutanese boyfriend," Sir Tenzin had joked after I'd rushed into Kuzoo in a panic. Rats, indoors and out, are a part of life there and no one fears them. It was around two in the morning. Since I didn't have an actual alarm clock, Andy said he'd be my human one—the idea being we'd stay up until dawn, when I had to leave for the airport. For the next four hours we drank tea, trying to work off the beer we'd been guzzling. The colorful silliness of a Bollywood movie flickered in the background as we talked about the future. After he got back to the United States, Andy planned to pack up and move to Montana to go to vet school. He'd been adrift since dropping out of college the year before. He had that sweet brand of earnestness men often grow out of as they get older and goals get dashed. As for my intentions, I knew better than to plot them out. This whole Bhutan experience had dropped in my lap; how could you ever force a plan? Life just evolves around you, presents opportunities for you to reject or seize. By 6:00 a.m., we were sprawled across the couch and chairs in the living room, fighting drunken exhaustion. 
I turned on the television to get one last glimpse of the morning prayers that kicked off the broadcasting day. The throaty chants of the monks had soothed me each morning, even though I never learned what they were saying. I would miss them. Now they were being drowned out by the sound of the car that had arrived in the driveway to take me to the airport. I said good-bye to Andy and left him to get some rest. Blinking back sleep on that scary drive to the airport in Paro, I marveled at the sky as it turned from black to gray to blue in the sunrise; I didn't want to miss a sight or a sound in my last few hours in the kingdom. I needed to sear as much of this landscape as I could into memory. There would be plenty of time to rest on the seventeen-hour flight home from Bangkok. There's a tall blond woman ahead of me in the line at the ticket counter. Draped over her right shoulder is one of those tote bags you get when you give money to a public television station. Hers says KCET, which happens to be in Los Angeles. A fair-haired person alone would have caught my eye, much less one carrying a bag with a logo from back home. Friendly chitchat follows, the "where do you live" niceties that happen when you find something in common with a fellow traveler, far away from home. She points to a cluster of people across the ticketing area and tells me she's been here on vacation with her husband's family for the last two weeks. Seven of them, staying at the swanky Aman where I'd had lunch on the king's birthday. I fire up my mental calculator: Four rooms at a thousand dollars each, times fourteen days... $56,000 would construct an entire village. It wasn't till I learned about this family that it occurred to me—really hit me—just how different the experience of being a tourist in Bhutan would be from mine. Particularly if you traveled deluxe. Just to get there you have to be moderately well off. The plane ticket from Bangkok to Paro alone isn't cheap—around $800 round-trip.
Of course, you have to get to Bangkok, too, although flying in through India might shave off a couple of hundred bucks. The $200-a-day minimum "tourist tax" on each person is the kingdom's way of deterring an onslaught of budget tourists and backpackers on spiritual quests, like the people who swarm to neighboring Nepal and India. Bhutan doesn't mind spiritual seekers; it just wants to attract a higher grade, and discourage them from staying too long. The blond woman's sister-in-law winds up sitting across the aisle from me on the plane and she's excited to show off her pictures. She's a good photographer and has a serious professional camera, not one of those point-and-shoot pocket-size digital things I own. For the past two weeks, she'd been squired around to splendid sights, and pulls up evidence of her travels on her state-of-the-art Mac laptop. Monasteries gleaming in the sun, portraits of stately _dzongs_ set against spectacular mountain vistas that I'd gone nowhere near. Gorgeous portraits of smiling Bhutanese of all ages; exotic flowers captured close-up in supersaturated colors. The promised Shangri-la has been served up to this group on a beautiful, hand-carved platter. The pièce de résistance is the family portrait: all seven handsome adults, wrapped up in the finest and brightest hand-woven _kira_ and _gho_ , purchased on the trip—not a tuck of fabric or belt fringe out of place. The lady whispers to me that that was the day they met His Majesty. I wonder to myself: When someone drops that kind of money in Bhutan, is a royal welcome part of the deal? When she finishes, she asks to see my photos. I warn her: My Bhutan is very different from your Bhutan. Yes, please, she says eagerly: I want to see. She has Bhutan fever, that mesmerized glow. She also possesses the breezy self-confidence of a person who has lived a privileged life and who is not used to being told no. Taking pictures has never really been my thing. 
But I had to take photographs during my time in Bhutan—how often would I travel across the world? Though I carried my little camera with me every day, I had to constantly remind myself to remove it from my purse and use it. Once, while I was in college in western Massachusetts, a friend and I stopped to admire a gorgeous sunset, an enormous orange circle descending across the valley. At age seventeen, having hardly been out of New York City, I'd never seen anything like it. I wished out loud that we could preserve it on film for future enjoyment. My friend admonished me. "Just enjoy this moment, the light. Don't feel like you have to take it home with you. You can't." From then on, anytime I saw something beautiful, I resisted the urge to look at it through a lens, challenged myself to just enjoy it. Let the camera in my brain make a record, if need be, and enjoy the feeling that what I was seeing evoked. Fighting back the fatigue of my sleepless night, I decide to indulge my travel companion's request. I lean over the aisle, cradling my laptop, so she can get a good look at my impromptu slide show. Every once in a while, Pink's sister the flight attendant—days away from leaving for her new job on Emirates—interrupts us to make her way up and down the aisle. This is the first time I've looked at my snapshots. Here is the Kuzoo studio, I narrate, perhaps the only radio station in the world that broadcasts from an old kitchen. Here is my apartment, and under that chair in the living room is where I saw the rat I suspected all along was sharing my space. He was this big, so huge, like a cat. Here are the kids at the golf course, I say. I explain to my seat mate that, no, you wouldn't necessarily associate golf with a poor Buddhist kingdom, but an Indian military officer convinced the third king to build the course behind the Thimphu _dzong_ , and a generation of players was born. 
Only nine holes, though, and the canteen at the course serves up pretty good Chinese-style fried rice. Here are Pema and Ngawang of Kuzoo recording sound with a minidisc player. They were taping Ed's demonstration of a game he made up called gol-chery, which fuses golf with archery. Clever, no? The kids loved it. This is Pema getting her hair curled and Pink getting her eyebrows threaded at a beauty "saloon" before a night on the town. That flight attendant who keeps passing us? She's the sister of this girl in the picture. This is our boss at Kuzoo, Sir Tenzin, up by the BBS broadcast tower. This is Sir Pema, the second in command at Kuzoo. What he really wants is to be a philosopher. That's the Kuzoo van. And Kesang, he's the driver. Here's a group shot from our trek up to Takshang, Tiger's Nest. It was right before a naked young Rinpoche ran down the trail. Yes, it's magical there, isn't it? The collective effect of my little slide show is like seeing this latest chapter of my life flash before my eyes. My heart is swelling. Had I really been living in this place that these people spent thousands of dollars to see? Me? NO MATTER HOW dear you are to them, there are two things your friends are unlikely to do for you in Los Angeles. Helping you move is far and away number one. But an easy number two is picking you up at the airport. Which is why the various offers to drop me at LAX when I left, and to fetch me on the evening of my return, surprised and flattered me. They also made me feel a bit ashamed. I hadn't missed a thing about home, with the slight possible exception of the grocery store. And my swimming pool. The sole swimming pool in Thimphu had been under repair for several years, with no opening date in sight. If it ever opened back up, and if I could avoid the Bhutanese cuisine, I believed I could live in Bhutan very happily for a long time. 
The person I chose to welcome me back from this adventure was my friend Sarah, a world traveler who lived in the neighborhood next to mine. Of her many good qualities, her greatest is that she is always game for just about anything. Almost more than anyone I knew, Sarah loved that I had made this trip. She had lived in almost as many exotic places as she had visited, and I knew she would be sympathetic to the jolt experienced by a returning traveler. My suitcase had literally fallen apart as I pulled it off the baggage carousel, so I was wheeling it carefully to keep it from disintegrating before I found her. I considered just leaving the whole mess right there and rushing back upstairs to departures to board the next flight back to Asia. As a testament to the magnitude of this journey I'd just completed, Sarah had actually parked her car and come into the terminal, so she was waiting for me as I spilled out in the exodus from the baggage-claim area. Action-oriented person that she is, and knowing how exhausted I must have been, she insisted on hauling the enormous bag all the way out to the parking lot and hoisting it into her trunk. Then she produced a bottle of water and an apple. "You probably need this," she said. How fortunate I was to come home to the care of such a kind person. A wise older friend had warned me that most people wouldn't know how to ask about my journey. "People will say, 'How was Bhutan?' but they won't really want to know," he had said. "They won't know what kinds of questions to ask." I knew that worldly Sarah wouldn't fall into this category; instead she posed almost too-specific queries about the service on Thai Air and Druk Air and currency exchange and the layout of the new airport in Bangkok. About the food I'd been eating, and the people I'd met. We agreed there'd be plenty of time, after jet lag wore off, to get into that. She wrestled the mangled suitcase out of the car and upstairs, without my help. 
We sat and drank some tea, and I was thankful not to have to come home to an empty apartment. It was a Sunday night, and we both had to go to work the next day. I'd booked the return ticket with as little time as possible between my arrival in Los Angeles and my return to work. My instincts must have told me that if I'd given myself a minute to consider it, I might have thought better of it and not gone back. EVERYTHING AT WORK was exactly the same as I'd left it. There was little to no fanfare at the office. It's not that I expected or wanted any, but I also didn't expect the utter indifference. "You're back already?" asked one of my more harried colleagues, a woman slightly older than me who ended nearly every conversation we had with, "Well, of course you can do that. You're _single._ " Most of the others were too busy to notice I'd been gone, much less returned, with the exception of the few dear people I considered friends. Maybe, I suspected, nobody wanted to hear about an adventure they didn't get to have. The scramble of the day felt even more hurried than usual. I was constantly on the phone, all in the service of getting the sound bite. Everything moved like clockwork, on schedule, not a moment left to chance. Inside the office and out, it all was too fast, too large. The streets were too busy, the pavement too smooth. As grateful as I was to have more plentiful choices of food, the enormous grocery store overwhelmed me. Everyone took the shiny comforts, the ease of everything around them, for granted. Didn't they know how charmed their lives were? Even that magnificent view out my apartment window that I used to treasure—now I just wished those San Gabriel Mountains would morph into the snowcapped Himalayas, and that a pack of stray dogs would crowd my ankles every time I stepped out the front door to walk to work each morning. It's not that I suddenly hated Los Angeles. I could see more clearly than ever that my life was good, really good. 
There was an endless parade of guests through my apartment. I'd built a fine community over the years by hosting a party every Friday night. The pool behind the building was enormous and beautiful and just the right temperature, and every day I got to see my fellow swimmers there. Two blocks away was the gorgeous, well-stocked public library that had been my ultimate inspiration to part with my books for good. A couple of farmers' markets set up downtown each week and allowed me to enjoy California's abundance of fresh produce without getting in the car. The only easier commute than mine would be to work in my living room; I didn't even have to fight traffic to get over to the office, thanks to a pedestrian walkway that connected both sides of the street. I got to sit in climate-controlled splendor all day, in front of a computer, and interview leaders in their fields. My stories got beamed out on a show that was heard by millions of devoted listeners. The rotating overnight hours were like living in a constant state of jet lag, but nothing about my days was very exacting. In fact, life was very pleasant. And that was the problem. It was all _too easy_. Too civilized. Too flat, colorless. I couldn't just go back to a world of work, leisure, and consumption. That would feel like going back in time, living on the obvious, predictable path. The reactionary thing to do would have been to renounce it all, to pack it all up and just leave for somewhere, anywhere—to change my landscape before my mangled suitcase cooled. But my return was my endurance test of all I'd been learning. I fought the tendency to think another location, other people, another world would be better, even as I accepted that the way I'd been living wasn't right for me anymore. There was no better place. There were no better people. All I really needed was here inside me. The answers would follow in due time. And so I settled into my routine. I chatted with my neighbors at the pool. 
I gritted my teeth at the office. I spread the word that the Friday-night parties would resume, and each week, I cooked up the usual big pot of soup to serve. I visited friends in various parts of town and shared meals. My wise mentor had predicted correctly. "How was Bhutan?" most would ask in an obligatory, cursory way. The "How was your day?" of travel. I taught myself to smile and simply say, "Great, just great," until and unless a follow-up question demanded more of an embellishment. The pictures I showed the lady on the airplane came in handy. They were like having a secret friend. They jogged my memory as the vividness of Bhutan began to fade. I had some of the better shots blown up and posted them on the wall at work, in my line of sight. Right on the edge of my cubicle, I hung a giant picture of the king, attached to a calendar from the Bhutan National Bank. My coworkers would walk by and say, laughing, "Wow, is that the Thai Elvis?" And I would laugh back, and then answer, defensively, protectively, as if he were mine, "No, no, that's His Majesty, the king of Bhutan. The youngest monarch in the world. He presides over a land that's going through a sea change, a transformation." What I didn't add was: so had I. THE DRIVEWAY TO Sebastian's off-the-grid cabin in western New England goes on forever. It's rocky and, remarkably, even bumpier than that terrifying Bhutanese highway. After a while, there isn't a pretense of pavement. Dense woods in every direction obscure the sky. Sebastian had promised that when we got to the end, there'd be deer and turtles and a lake. This property has been in his family for decades, so this similarity to Bhutan, this roughness, couldn't be by design. Now I got a hint, though, of how he could feel so at home in the wilds there. 
There were supposed to be four of us here for the weekend, but the patron saint of our friendship, Harris, along with his new girlfriend, bowed out at the last minute—too busy, he said, to get out of the city. It was all very natural and ordinary, as if my going to Sebastian's, much less alone, was no big deal, as if we saw each other all the time. When I'd laid eyes on this man only once before. Since before I'd gone to Bhutan, we had kept in touch by email. Each time we'd communicate, once a week or so, he'd invite me to his cabin, as if he forgot I lived clear across the country. "You're welcome, anytime," he'd say. A visit east for a family reunion meant I'd finally be close enough to take him up on the offer. In the hours before he picked me up on the Upper West Side of Manhattan for the drive north, I felt a strange sense of anticipation and nerves, as if I were a girl with a movie-star crush about to meet her idol. Who was this man, really? What was he like? What would it be like to see him again? There was no way this second meeting could match the intensity of the first. Imagination in these matters could get you into trouble, I knew well from experience. Charming, witty emails and phone calls were not accurate measures of how you would get along face-to-face. I was about to get an intensive crash course in the reality of Sebastian. FOR HOURS WE TALKED and cooked and laughed and ate and drank, starting with tea and graduating to wine and then a special whiskey he'd been given as a present. We moved from the kitchen to the porch to the kitchen and back outside again. We covered everything and nothing in particular, like old friends who'd been reunited after a long separation. Bhutan was a part of the conversation, of course, but it wasn't the only topic. We got done with that at the beginning. He explained the little mystery I'd been unable to solve, that his ties to the kingdom were through an old friend who helped him to become a guide many years ago. 
After he'd led trips for several years, Bhutan had become something of an addiction. The connections continued back in the New York area, particularly with the handful of Bhutanese dignitaries and students who traveled through and temporarily resided there. He felt a deep love for the country and its people, and an attendant sense of obligation to help however possible. I told him about my visits with the friends whose names he'd given me, and how they'd welcomed me warmly. And how every time I gave him credit for connecting me to Bhutan, I was corrected that it wasn't Sebastian, but rather my karma that had united us. We discussed his business at length, for it was at a crossroads, and my own personal crossroads with my work. I teased him that since he'd hooked me up with Kuzoo, perhaps he could come up with what it was I should do next. He teased back, wondering how it was possible for an East Coast person to love Los Angeles. We stared at the lumbering turtles, and even spotted a deer, and as the sun began to set, a chill infused the night air. I excused myself to get a sweater from inside the cabin. And as I stood up, Sebastian reached over and grabbed me, kissed me hard and long, and held me tight. FOR MANY PEOPLE, the happiest end to this part of the story would be to tell you that this magical weekend never ended. That I eloped with Sebastian to Bhutan in a fit of passion, where a venerable lama presided over a ceremony uniting us forever. That we returned to New England and I helped run his business, eventually growing it into a mighty empire as our love deepened and matured. How, with all these riches, we eventually adopted several Bhutanese children and sent piles of money back to build schools for the kids we couldn't take in. A fairy-tale romance that swept across international borders, born out of a single chance encounter. Cue the music; get Hollywood on the line. 
What actually happened was this: On Sunday afternoon, Sebastian drove me back to a town just outside New York City, where he was scheduled to deliver a lecture on tea. My childhood friend Liz, who lived right nearby, picked me up so I could spend the night with her family. When we got to her house a few minutes later, she said I seemed a bit out of it. "I feel dazed," I explained. "I'm not quite sure what just happened." Sebastian had, on second inspection, been as wonderful as I'd imagined he would be. We had gotten along famously. What I knew for certain was that Sebastian and I would be friends, always, that we'd forever be connected because of Bhutan. But in those forty-eight hours, I had also learned enough to know that this time we were spending together could never be anything more. Shouldn't be. Now I had new photographs to add to my collection. One is a shot of a bale of hay on a dairy farm we visited that belonged to Sebastian's friends. It resembled one of those oversize Claes Oldenburg sculptures, like a giant piece of shredded wheat, the gold of the sunset shining brightly in the corner. That magic hour of light in the early summer as it starts to dim, the temperature readjusting to cool. The other picture is of Sebastian, a few minutes later, sleeves rolled up, a big grin on his face, mugging for the camera, eyes wide, the gray flecks in his hair catching the sun, the sky bright blue behind him. Reveling in the beauty of the twilight. I knew I couldn't freeze the clock right at that instant, and even if I could have, it would have been futile. But my resistance to forward motion was for good reason: When this weekend was over, a piece of who I used to be would be finished, too. Now I understood what that thunderbolt I'd experienced when we first spotted each other was all about. I had mistaken it for romance—true love, even. But in Bhutan, the Land of the Thunder Dragon, a thunderbolt literally signals a roar of power. A moment of enlightenment. 
# 10 # DAWN OF DEMOCRACY THE PEOPLE OF BHUTAN WERE NOT SORRY TO SEE the year of the Female Fire Hog come to a close. It had been an astrologically sour one. Arbiters of the skies had warned that it would be a bad time to do just about anything: get married, have a child, start a job or project, begin new construction. Judging from the nests of bamboo scaffolding rising above the streets of the capital city, and the numbers of babies slung over the backs of women as they trundled around doing their chores, activity in the capital had hardly come to a screeching halt. Life had proceeded—but with many extra prayers and cautions. To combat the misaligned stars, the official astrological calendar of Bhutan had recommended "appropriate preventive religious ceremonies." Maybe you went ahead and moved a few things into the home of the person you intended to marry, but delayed the complete union of possessions until 2008. In the meantime, you were well-advised to deploy a bunch of monks to make things right with the gods. Leaving the year of the Female Fire Hog behind didn't mean the Bhutanese were any less wary of the coming one. The dawn of the year of the Male Earth Mouse also meant the dawn of democracy in Bhutan—and with that, the formal diminishment of the all-powerful monarchy that had reigned for a century. Bhutan's king would continue to lead, but no longer would he possess absolute power. A year into the king's tenure, Bhutan was still adjusting to this new, young monarch, and the absence of his beloved father on the throne. No one was terribly keen to elect the parliament he'd insisted they create, or to see him give up power. Getting citizens to the polls on December 31 posed a challenge, and not because the date marked the Western New Year's Eve. It had to do with a higher power. 
In the weeks in advance of the election, official notices had been placed in the newspapers: REQUEST TO TEMPORARILY POSTPONE ANY PILGRIMAGE PLANS Bhutan is a Buddhist country, and annually around this time in winter, everyone wishes to pay homage to holy places in the neighbouring countries. Nevertheless it is essential for every citizen to be present in the country and be able to serve our homeland when such a historic event is taking place. At this time, National Council and National Parliament elections are in progress. Therefore, we suggest that every citizen should by way of voting contribute toward the country's prosperity. When this is done we will have enough opportunity in the future to pay homage to holy places. Nothing less than a royal edict would convince the devout Bhutanese to give up their sacred annual treks. Though voting was not compulsory, the official Bhutan Voter Guide did declare it the moral responsibility of the people to cast ballots. If they did not, it continued, they ran the risk of being "guilty of letting a less competent political party or candidate come to power." If the menace of moral failure didn't compel a proud citizen to the polls, what would? Lest the people be overwhelmed with too many decisions and campaigns at once, the election board had decided to split the daunting responsibility of choosing a parliament into two dates. In the first round, this New Year's Eve, ballots would be cast for a National Council, with each district (or _dzongkhag_ ) electing a single representative. A few months later, at a date still to be determined by astrologers, the Bhutanese would be asked to vote for a slate of candidates affiliated with one of two newly formed political parties. On December 31, 2007, voting commenced across the tiny kingdom. 
IT HAD BEEN just under a year since my first trip to the kingdom, and time had done nothing but intensify my curiosity about the place, my craving for more—the landscape, the people, the feeling of calm that had swept over me while I was there. The serenity had mercifully remained, but I wanted a booster shot. I needed to see this place again. I knew I'd never completely solve the riddle of Bhutan, never really "get" it. I also knew the way I felt during my first visit, the richness of experience, could never be duplicated. All I was sure of was that I needed to return. Back in Los Angeles, I'd entertained any Bhutanese visitors I could and met anyone with even a remote connection to the place. A local couple who had collected photographs of Bhutan by the explorer John Claude White and published a book of them? Let's have tea. The labor minister's wife and children, whom I'd met through our mutual friend Sebastian? You are welcome to stay with me. A friend of a friend who'd returned from a trip? Let's drink some wine so you can show off your photos. My friend's daughter had heard me talk so much about the country, she'd written a report about it for her fourth-grade class. ("Bhutan is the happiest country on the face of the earth...") In the spring, just weeks after I'd arrived back in Los Angeles from that first trip, a twenty-five-member Bhutanese trade delegation mining California for business opportunities jammed into my tiny apartment for a pizza party. (To make the slices more spicy-palatable, the ladies in the group discreetly pulled hot chili sauce from their purses.) I gave them a tour of the studio where I worked, encouraged them to help themselves to the stacks of free books we always had on the shelves to give away, and helped mitigate disappointment over their lack of sales. Their intricate handwoven items just couldn't be mass-produced on the necessary scale for the U.S. 
market. The trade visit had unintended consequences: Because of it, I met the small but close-knit community of Bhutanese living in Southern California. (One of them became the recipient of my dusty old television set, which I was delighted to finally give away.) And my connection to the group motivated them to offer me a visitor visa for a return trip to Bhutan. After I'd put in for vacation time, I bought my plane tickets and stuffed my suitcase with Gold Toe socks and a hand-me-down Burberry purse for Pema, courtesy of my stylish friend Barbara. The royal astrologers had made my return an even more auspicious one: my arrival would coincide with the first stage of the election. KUZOO WAS PREPARING to perform its most important community service yet: beaming out the results to the people as soon as the votes had been tabulated. The news would be revealed simply, by reading a fax from the election board on the air as soon as it chugged over the line to the well-worn machine in the workroom. News coverage had been suspended at Kuzoo not long after I'd left. Someone in His Majesty's secretariat had reportedly yanked the puny daily news report off the air when a newly hired radio jockey had committed a transgression; she chose to lead a newscast with a story about taxi fares, rather than a routine item about the king visiting one of the _dzongkhags_. Even though her news judgment was correct by journalistic standards, her respect for the monarchy as a Bhutanese citizen was in question. So much for the freedom of the press guaranteed in the as-yet-to-be-ratified constitution. I wondered if the kindly, generous king himself would have interpreted her action as a slight. In the fourteen months since the station had launched, there had been other, more sweeping changes for Kuzoo. Sir Tenzin traversed Bhutan's rocky terrain with an engineer, rigging up repeaters so that the signal could reach beyond Thimphu Valley. 
Now all but five of the remotest districts in Bhutan could tune in. Second in command Sir Pema stepped in to run day-to-day operations. There had been another, formative development: Kuzoo's transmission frequency had been moved, and alongside it, a new one had been added, this one broadcasting in Dzongkha. So passionate and devoted was this new crop of listeners that they decided to throw a party to thank the staff. Perhaps for the first time ever in the history of radio, a fan-appreciation day was orchestrated not by a radio station itself but by its listeners. They proudly called themselves the "Kuzoo family." From hours away, from every direction, several dozen of the most ardent audience members trekked to the modest Kuzoo studios, decked out in their finest _kira_ and _gho_ —with their similarly attired children and colorful thermoses of tea and food in tow. The first to arrive, at 6:30 a.m., was the man who'd appointed himself "Kuzoo _gup,_ " Dzongkha for mayor. He proudly greeted partygoers who descended on the grounds as if he owned the place. A crowd of more than a hundred people assembled; it was a chilly winter day, my first back in the country. Sir Pema and I stood on the front steps with Ngawang, marveling at the "family" members who squealed with delight upon meeting one another in person for the first time. Word was a marriage had even occurred as a result of two frequent callers taken with the sound of each other's voice on the air. "I had no idea how many people loved Kuzoo," Ngawang said, shaking her head. One of the fans passed a basket of cookies, and we each grabbed one. "I knew they were enjoying Kuzoo," said Sir Pema, "but I had no idea this much!" The main radio jockey for the new Dzongkha station emerged to make some announcements to the crowd, which obediently hushed at the sight of him. In his native language, he began to speak. Several lines in, I made out my name and felt all eyes turn in my direction. 
"Madam Jane," he said, followed by the words "United States." Having traveled farther than any of even the most geographically remote of the Kuzoo fans, I ranked as the most honored guest and received the first shout-out. I felt a bit embarrassed at being singled out. With a growing awareness of their own impact on their tiny country, the staff of Kuzoo settled in on election night, ready to relay the winners as soon as the electronic voting machines had finished tabulating. No chance for hanging chads here; even in communities not yet wired for electricity, the voting machines that had been deployed had touch screens and were battery powered, imported from India and reputed to be impenetrable. To fill the time, those on duty played a merry stream of New Year's Eve party music. I was torn over where to spend this historic election/New Year's Eve. Despite my allegiance to Kuzoo, I knew my friends at the station would understand why I chose to accept another invitation, one that required my being across town. THE PAIR OF pimply-faced military guards who man the entrance to Villa Italia look barely old enough to carry the machine guns they have strapped to their sides. The property they are securing is an oasis just blocks from the center of Thimphu, but it's one particular resident who warrants their presence. Carefully trained vines soften the concrete walls that separate the property from the noise and imposition of the busy, narrow street just outside. Apricot and apple trees dot the yard. Despite its name, Villa Italia's exterior architecture is typically Bhutanese, trimmed with the colorfully painted, ornately stacked wooden edging that adorns every building across the land. The interior of this apartment building, though, lives up to its title; richly tiled bathrooms and terrazzo floors lend an air of understated European elegance. 
Modern Italian fixtures and appliances distinguish the villa from the handful of other fine homes in Thimphu that are also inhabited by VIPs. The gathering I have decided to attend is on the topmost floor; during this trip, I was staying at a borrowed apartment on the ground level. About half the guests focused on the fact that 2008 was about to begin, their sights set on the clubs they'd visit after a fusion meal of breaded chicken cutlets and _emadatse_. The other half was more concerned with election coverage—including the host himself, for he was to be a candidate in the second, as-yet-unscheduled phase of the election. Lyonpo Ugyen Tshering— _lyonpo_ being the Dzongkha word for minister—had ably served Bhutan in a number of roles. He and seven other king-appointed ministers had resigned their posts earlier in the year in order to form a political party and run for office. Leaving one's job was a prerequisite for tossing one's hat in the ring, to avoid any conflicts of interest and allow the candidates to focus on this important election. This group dubbed their coalition Druk Phuensum Tshogpa, or DPT, which translates loosely to mean Party of Blissful Harmony. The campaign slogan they crafted wisely incorporated an important piece of Bhutan's national identity: "In Pursuit of Gross National Happiness" proclaimed their campaign literature. It almost didn't matter that the rival People's Democratic Party (PDP) had chosen as its theme a phrase that smacked of a high school campaign promise: "Service with Humility. Walk the Talk." Their decided advantage was that the uncle of the king was at the helm. A citizenry that revered its monarch seemed quite likely to choose a person close to him for the crucial job of leading the first democratic government. 
Each party published manifestos that promised lofty improvements for their countrymen: help for the poorest, installation of roads and electricity for the remote areas still unconnected, access to education for all, continued protection of natural resources. Each offered only vague suggestions about how these formidable goals would be accomplished. And each promised, first and foremost, to uphold and further promote the beloved monarchy, ignoring the reality that by their very existence, they would be undermining it. There weren't any major philosophical differences between the two groups, and neither touched on any subject that could be construed as controversial, most notably the refugee camps over the Nepalese border, filled with a hundred thousand people with disputed Bhutanese citizenship. (The refugees claimed to have been tossed out because they weren't ethnically pure; the Bhutanese government maintained they had illegally immigrated.) But issues weren't really the point in this election. In the end, the deciding factor would be the popularity of each party's members, and their tenacity in campaigning. Though the PDP had an edge because of its royal connection, no one presumed that meant they had a lock on a win. For the DPT to boast important civil servants like Lyonpo Ugyen among its roster of stars was a tremendous asset. He was a descendant of a family that had served the king honorably, and he himself had given his career to his country, including nine years as Bhutan's ambassador to the United Nations. Lyonpo Ugyen's personality was as much an asset as was his résumé; he radiated a calm, quiet demeanor, a commanding presence. "Lyonpo Ugyen—he's like a Buddha," said one of our mutual acquaintances. In Western clothing he looked like a handsome college professor, with dark, thick hair that was graying just enough to give him a further sense of gravitas, while his friendly, bright eyes shone with a youthful gleam. 
A serene sliver of a smile graced his face almost always, and he was quick with a joke or turn of phrase. Outfitted in his Bhutanese _gho_ , he was a man you'd comfortably entrust with a giant sack of cash or a newborn baby. Like most of the elite citizens of Bhutan, Lyonpo Ugyen had begun his studies with the Jesuits in India and had gone on to attend fine universities in the United States. He enjoyed a long marriage to a formidable Italian beauty, and the magnificent Villa Italia was her creation. It was a testament to Lyonpo Ugyen's standing in Bhutan, where cultural preservation and ethnic homogeneity are highly regarded, that this "mixed marriage" hadn't become a political liability. As would-be politicians go, Lyonpo Ugyen was as Teflon as Reagan, as charming as Clinton, and as wise as Carter. To the people of Bhutan, he possessed the quintessential qualities of a modern steward: humility, worldliness, and reverence for the homeland. And because of this, he could very likely be Bhutan's first elected foreign minister. The decor of the Tshering family living room was, like the evening's menu, a museum to the merged cultures of its owners: At its center sat a large, brightly decorated Christmas tree, its colorful lights reflecting in the glass covering the prominently placed portrait of His Majesty the king. The living room also boasted several more personal photographs of the revered monarch at various stages of his life, posed with Lyonpo, his wife, and their children. As a young man, His Majesty had been educated in the United States, which had coincided with Lyonpo's service as UN ambassador in New York City. Guests crunched on Bumthang-made potato chips, and the Bhutan Broadcasting Service played in the background. The BBS was doing its best impersonation of CNN, even if it had neither the budget nor the worldly reach. "This is a historic evening for Bhutan," declared the anchorman, Tshewang Dendhup. "You are watching history being made." 
Such a declaration was very un-Bhutanese, but if ever there was a time for drama, this was it. And Tshewang was the perfect person to utter it. He was perhaps Bhutan's best-known export, having appeared as the star of the film _Travellers and Magicians_ , and as the real-life love interest in a book by the young Canadian teacher who had, for a time, become his wife. The coanchor, Dawa Sonam, was Bhutan's answer to Anderson Cooper, his demeanor the perfect broadcast blend of serious and convivial. He chatted with reporters stationed at various polling places across the country, as slides with their pictures and the names of the districts they were calling from appeared on the screen. Only one live video shot was possible, since the BBS owned only a single microwave truck—a donation from Japan. That was parked at national election headquarters, just about a mile up the street from the BBS studio. Even if there had been more trucks available, the ability to beam live pictures across the rocky Bhutanese terrain would have posed a challenge for the most experienced engineers, for mountains would have obstructed the signal. Although the simple phoned-in reports didn't exactly make for lively imagery, viewers were ably regaled with tales of their fellow citizens who had dutifully made their way to the polls. Despite educational initiatives in advance of election day, not everyone understood how the voting machines worked. A seventy-eight-year-old woman confessed, "I pressed the first. Did I vote for the right candidate?" A forty-four-year-old yak herder trekked four hours from his village to his local polling station, and declared that he'd cast his vote for the eldest candidate in his district. "The old are always wise," he said. "Elderly people will know the difficulties and problems we face and be able to raise these issues in meetings." 
National law prohibited knives in polling places, which caused confusion among men in the remotest regions, who were not accustomed to leaving home without them. "We didn't mean to be disrespectful," one man said after having his machete temporarily confiscated. "The knives are just something we men carry with us always." The ballots featured thumbnail photographs of the candidates. This served two purposes: to assist the half of the population that was illiterate, and to help distinguish among contenders with identical names. In one of the races, for instance, four people were vying for the area's National Council seat: Jambay Dorji, Sherab Dorji, and two Ugyen Tsherings—neither the one whose living room I was sitting in. "Look, I won _and_ lost in Paro," joked Lyonpo Ugyen as the results flicked up on the screen. One Ugyen Tshering had received 2,886 votes. The other got 1,883. Sir Dorji and Sir Dorji trailed behind. Of the four female candidates in various races across the country, three had claimed victory—a fact both news anchors mentioned several times, clearly impressed that their fellow citizens were willing to allow the fairer sex to govern. By 10:00 p.m., all victories had been announced. Attention and the television set were then turned to the real CNN, so that the guests could witness fireworks displays and other celebrations that marked the arrival of 2008 around the world. ACROSS TOWN AT KUZOO, the radio jockeys performed their national duty, reading the names of the winners before resuming the music and their on-air party. A little before midnight, exhausted from the twin forces of a long journey and a week of overnight shifts back home, I turned the electric heater in the bedroom of my borrowed apartment on full blast, quickly stripped off my clothes, jumped into my pajamas, and crawled into bed with my little portable radio. 
Three of Kuzoo's new radio jockeys were leading the on-air party in the kitchen studio: Namgay (female), Namgay (male), and Choki. "The phones are jammed with callers," announced Choki. "Who's this on the line, please?" "Hi, guys," the voice said. I recognized it immediately as Ngawang. "Hey, it's Kuzoo RJ Ngawang. Wow, great, how are you?" her colleagues responded. Ngawang sounded as if she were a combination of glum and tired and was a bit hard to believe when she said, "Actually, I feel great, because my friend Lady Jane is here in town and that's so good. I've missed you, Lady Jane, and I'm glad you're back." The three radio jockeys sounded, in contrast, a bit _too_ happy. I wondered if they were tipsy. "Yes, yes, Lady Jane! Welcome back to Bhutan. Yes, yes, Ngawang! Happy New Year, everyone! You are listening to Kuzoo! Is there any other message, Ngawang?" "Umm, well, I hope to see you more in this new year, Lady Jane. And I hope my dreams come true this year. Best wishes to everyone." Ngawang sounded as if she might fall asleep. "What song would you like us to play?" the female Namgay asked. "Your choice," said Ngawang. And without the radio jockeys saying another word—thus a quicker return to their bubbly—the music started. The song they'd chosen was "Complicated" by Carolyn Dawn Johnson, a Kuzoo staple. It had to be the twentieth time it had aired that day. Shortly after midnight, the Kuzoo radio jockeys took a break to run what they described as a very important New Year's announcement. It had been recorded by the shy Sir Pema. The staff adored having him at the helm; he was studious and quiet, the temperamental opposite of Sir Tenzin, as well as a kindly, paternal advocate who listened to their hopes and aspirations. Sir Pema's own long-term dream had just come true: He'd not only been accepted to a philosophy course in Bangkok but he'd been awarded a scholarship from the king's office to subsidize it. 
Perhaps because of his imminent departure, he was willing to boldly transmit his voice across the land. "After the partying and drinking," he said in a quiet, noble voice, "Kuzoo would like to wish you a happy New Year with a poem by Alfred, Lord Tennyson." _Ring out the old, ring in the new,_ _Ring, happy bells, across the snow:_ _The year is going, let him go;_ _Ring out the false, ring in the true_. He continued with a slight modification. _Ring out the grief that saps the mind_ , _For those that here we see no more_ , _Ring out the feud of rich and poor_ , _Ring in redress to all mankind...._ _Ring out false pride in place and blood_ , _The civic slander and the spite;_ _Ring in the love of truth and right_ , _Ring in the common love of good...._ _Ring in the valiant man and free_ , _The larger heart, the kindlier hand;_ _Ring out the darkness of the land_ , _And ring in Gross National Happiness_. With only the slightest wink in his voice to his having modified this great poet's work for the youthful Bhutanese audience, Sir Pema concluded his greeting: "Kuzoo's New Year's mantra: 'Let bygones be bygones.' The year 2007 is behind us, and 2008 beckons us. Warm wishes to all our listeners. May you have a really bright and prosperous New Year." With one election down and one to go sometime this coming spring, soon the new constitution would be adopted and the new king officially installed. For Bhutan, the year of the Male Earth Mouse would indeed be a historic one. THE NEXT DAY, I do something I haven't done much of yet in Bhutan: play tourist. I'm going on a field trip to a place all tourists go, about an hour outside Thimphu. It's called Dochula Pass. It is the site of a memorial built for Bhutanese who died in a conflict with Indian separatists in 2003. It's also a perfect vista for one of the most magnificent confluences of mountain ranges on the planet. My companions are a fresh crop of Bhutanese tour guides whom I am tutoring in English. 
It's sure to be an informative trip, with one visitor (me) and forty accomplices trained to explain the landscape. As a thank-you to the trade department for inviting me here, I offered to help out however I could, and I'd been assigned to help the guides practice talking to foreigners. They are undergoing a special hospitality training course in preparation for the king's coronation. Sitting in a room and asking these shy young Bhutanese to speak English in front of their friends hadn't worked very well. I'd sliced paper into strips and written topics on them I knew tourists, particularly American tourists, would be most likely to ask about. I'd call up a guide and ask him to choose one, and have him riff on it with me: "Why are there penises painted on the houses?" "How come the king has four queens?" "What will the new democracy mean for Bhutan?" "Why is Bhutanese food so spicy hot?" The pupils clammed up under pressure to speak to a chatty foreigner in front of a group. The best way to warm these guys up, I figured—and most of them were guys—was to get them out of the dank, dark conference room. I asked the course leader if we could take a short field trip, to Kuzoo, a long two-block walk on the upper road. They all knew where it was, of course; everyone knew where everything was in Thimphu. They'd all just been too shy to go in for a visit—these same hardy young people who can set up and break down elegant campsites with ease in hours, and deftly navigate the bamboo woods and ancient trails of the most remote locations in their country simply by looking at the sky. "Who wants to go to Kuzoo?" I asked. All hands flew up, and the enthusiasm wasn't just because it meant getting outside. Our mob moved as a unit out to the street, and as we walked I made a couple of cell phone calls to be sure arriving en masse would be okay. "Five at a time," I said, and I took them in, in small groups, doing my own best impersonation of tour guide. 
"You're about to see the world's only radio station in a kitchen." I'd still not confirmed this to be true, but my pride in Kuzoo allowed for this reasonable conjecture. Ngawang happened to be behind the board in the blue-tiled studio, and before I opened the door, I told the guys who we were going to see. "RJ Ngawang? That's Radio Jockey Ngawang?" they exclaimed. And as they crouched against the tile in the studio while she introduced a song, they were all gaping like awestruck teenagers. Ngawang was just as shy as they were. She deferred to me to explain how the studio worked. "Are you really here at the station all night long, _la_?" one of the guides asked shyly. Ngawang giggled; no one at Kuzoo liked to admit that the station ran 24/7 thanks to a computer. It seemed duplicitous somehow, inauthentic. "Virtually," I said, pointing to the computer, and we left it at that. The trip was so successful, I suggested another. "Let's go somewhere in the area, so you can show _me_ something," I said. The course leader saw my point. There was a tour bus out back and a driver at the ready. It's a beautiful, cloudless winter day. The guides are bursting with energy, thrilled to be out of the conference room again. We drive into the parking area at Dochula, and they all gasp in unison. We have hit the equivalent of the view jackpot; it's one of those days when all the mountain ranges are perfectly visible from this spot. In the foreground is the Punakha Valley, and off in the distance at various points are spiky, soaring snowcapped peaks. There isn't a sign of man-made life as far as the eye can see. Everyone rushes off the bus. Suddenly, I'm a schoolteacher with a classful of star pupils. "Madam, that's Gangchenta, the Tiger Mountain." "And that there is the highest peak in Bhutan." "Madam, look over there. That is China, there. Can you see?" "In that direction, see? You can see Gasa, the only district in Bhutan with no motor roads. 
It takes five days trekking and you can get to the most beautiful hot springs...." Now my guides insist we walk to the maze of 108 religious structures, chortens, that form the memorial portion of this stop. Again I am bombarded with factoids. "The number 108 is symbolic, madam, for it is the number of volumes in the Kanjur. That is the Buddhist scripture. And that is why there are 108 chortens here." "They were built thanks to Her Majesty the queen Ashi Dorji Wangmo. This is a memorial for fallen soldiers." "Yes, _la_ , the soldiers died in 2003...." "Up there, over there. See, Her Majesty is giving us a sacred temple. See? Over there? It will be opened soon, madam." A few of us stroll among the statues. They're at different heights, and as beautiful and impressive as they are, as sacred as they may be, they've got nothing on the view, the endless ridges of mountains. The afternoon wind is picking up now and most people aren't wearing overcoats on top of their national dress. Soon we are gathering near the bus, where the coldest of the group are huddled together, a few of them grabbing a smoke, a few others continuing their discussion of the spectacular view. A transistor radio has materialized; it must be Kuzoo playing. The group is being corralled for a picture, and they insist I must stand in the center. Several cameras are passed down to the front. _"Emadatse,"_ shouts the photographer, the Bhutanese equivalent of "Say cheese." And as we smile and pose for the shot, some of the guys are singing along, and I make out the song that's wafting out around the country, and entertaining us here on this fine winter afternoon. It's the old hit by R.E.M. "Losing My Religion." WHEN I HAD only a couple of days left in Bhutan, at the station, Ngawang asked for a favor. "Would you call me there?" she asked. "To the United States?" "What does that mean, exactly? To call you? What do you need to be able to visit?" 
I had learned from this trip that Bhutanese citizens could only "call" foreign visitors if they could explain to immigration authorities the favor that had been performed for them, or establish the existence of a long friendship. This was to avoid skirting the payment of the tourist visa. The challenges of "calling" a visitor to the United States were different, of course. "I think you just have to write a letter." "What about? Work?" "Sir Pema would let me take leave. Especially if I was going to come to work with you. I could do an internship!" I tempered my natural instinct to automatically say yes and considered the proposition. I loved having houseguests; even though my apartment was tiny, people crashed in my living room all the time. I wished I could afford a bigger space so I could offer a bit more comfort and privacy, but for most visitors, the blow-up Aerobed was perfectly suitable for a visit of just a couple of nights. An appearance by Ngawang felt heftier, though; it introduced larger concerns than my usual worries about the size of my bathroom or the lack of a private space in which guests would sleep. The trip would be long and expensive. Could I offer to subsidize it without offending, or would I be expected to subsidize it, which would be offensive to me? Ngawang would be more of a responsibility than a typical visitor, and I'd be working and not able to squire her much. And yet the prospect of showing her around Los Angeles and where I worked was exciting. It was bound to be educational and eye-opening, and for Ngawang, maybe even life-changing—the way visiting Bhutan had been for me. I couldn't stay in Bhutan, I rationalized, so I might as well at least bring a bit of Bhutan to me. So I said yes. And as soon as I got back to Los Angeles, I grabbed a piece of letterhead at the office and composed an official invitation. 
# 11 # AMERICA 101: "THAT'S COOL" THE INTERNATIONAL TERMINAL AT LOS ANGELES International Airport is the winged, modern version of Ellis Island. Of course, the mode of transportation is plane, not boat. And there are just as many people leaving as there are arriving. Here the stew of nationalities that make up the United States is starkly evident: People of every imaginable skin color, mingling, hopefulness in their eyes, some carting boxes and bulging suitcases of items they're transporting to make _there_ a bit more like _here_ , and to replicate _here_ what they've left behind _there_. For every five pieces of luggage, there is one box containing a flat-panel TV. The Friday night Ngawang was to land in Los Angeles, I was so excited that I got to the airport an hour and a half before her plane was due. This was one guest I would, without hesitation, not refer to a taxi or shuttle. I didn't expect her plane to arrive early, but I did want to ensure a prominent place in the receiving line of eager friends and family and bored-looking car-service drivers. On my way, I stopped at the supermarket to buy something I'd never bought before: a corny welcome balloon, visual reinforcement of my excitement over this important visitor. It seemed the least I could do. The trip Ngawang had just made was numbingly long. To shave off a bit of the cost, she had traveled a more arduous route than would a typical visitor to Bhutan: seven hours in a car out of Thimphu, three days on a train across India to Delhi, then nights of bunking at the Bhutanese embassy to wait for a call from the American authorities to see if she'd even be granted a visa. My personal invitation was no guarantee. U.S. customs agents are wary of allowing in another inevitable nanny. "This will be a kind of mini-internship," I had told Ngawang after sending the letter, "for you to see how media companies work here. But it's a closed invitation. You can't stay forever." 
Because my apartment was small, my job and hours ever-changing, I explained, I couldn't house her for more than a few weeks. While it felt rude to emphasize, I knew it was necessary to be clear. As far as the Bhutanese were concerned, it was your obligation, if someone came to visit, to put them up for as long as they needed or wanted to stay. Bhutanese living rooms were lined with couches, bumped up arm-to-arm and pressed against the walls, at the ready for whoever needed to sleep on them. Ngawang assured me that she understood, that she didn't want to leave her family or her country permanently. The day the phone rang with news from Delhi, Ngawang's excitement practically pulsed through the lines: "They said yes! I can come there!" Now the travel details had to be arranged. A day and a half later, another call came through informing me that she would arrive on Friday night. I felt a tiny bit like prospective parents I'd known, waiting for the fateful, life-altering call from the adoption agency. Of course, this was very different. But having Ngawang in my orbit turned out to be more like having a child around than I'd imagined. THE GIANT ELECTRONIC signboard announced arrivals from all kinds of exotic places: Manzanillo. Singapore. Manila. Ngawang's plane from India by way of Frankfurt, the board said, was delayed. Just like my first visit to Bhutan, I thought, as I surveyed the waiting area. Everyone here was expecting someone who had journeyed from another land; most likely none had ventured all the way from Thimphu. More than three hours elapsed before my friend emerged from the bowels of the airport. She looked so much less formal than she would in her Bhutanese dress, more like a typical college student: a sweatshirt, an orange backpack, faux Crocs to match, hair pulled back in a ponytail. Behind her, she dragged a small suitcase. She looked tired but not depleted. The forgiveness of youth, this representative of the modern Bhutan. 
Ngawang now could count herself among the elite few from her country to have actually seen a plane, been on one, not to mention flown halfway around the world. "How was it, how was it?" I asked, hugging her tight, aware that I was completely incapable of imagining what it would be like to land in the United States for the first time. During college a friend from Switzerland had come home to Brooklyn with me for Thanksgiving, her first trip to the New York metropolitan area. When she caught a look at the lower Manhattan skyline, she'd gasped, loudly, at the live vision of a vista she'd seen a thousand times in the movies. From her, I learned the wonder of seeing for real a place you'd long imagined, and how perceptions rarely matched reality. "Okay," said Ngawang, who wasn't nearly as demonstrative. "Long. What day is it?" "It's Friday evening." "I left on Friday evening. Wow. That's cool." The elasticity of time as it relates to world travel was just the beginning of a series of events that would elicit that exclamation. The next phenomenon was the five-story parking garage, crammed with cars of all shapes and colors and sizes, far more varieties than the five types of vehicles that roam the streets of Bhutan. Despite the cool, nighttime desert air, I flipped down the top of my dusty old two-seater convertible; Ngawang had never been in, much less seen, one before. Out of the airport and onto the eight-lane freeway we went, weaving through the balletic tangle of traffic. Cruising sixty-five miles an hour on a straight open road was as much a thrill for her as riding the Cyclone at Coney Island would be. Especially when we ascended the long arc of a ramp to exit the 105 so we could spill out on the even wider 110. I could feel Ngawang gasping for breath with the speed, motion, and height. We were surrounded, in every direction, by twinkling lights. "There aren't this many people in all of Bhutan." Ngawang sighed. She was right. 
If you drew a circle on a map around downtown Los Angeles, it was likely to contain the equivalent of the kingdom's population—around 650,000 people. The Los Angeles school system had more students in it than her entire country had citizens. Ngawang's sensory overload reached a new dimension as we neared the towering buildings of downtown, shimmering in the distance. As we got closer and closer, they commanded the sky, stern and intimidating. The modern, manufactured version of that spectacular range of mountains I'd visited with the tour guides. We drove inside another parking garage, this one with a remote-controlled electric gate. An elevator transported us to the eighteenth floor, a height she'd never experienced from inside a building. The floor-to-ceiling glass windows in my apartment offered Ngawang a panoramic view of twinkling lights flickering in every direction. The centerpiece was the stainless-steel Disney Concert Hall, a structure that resembles a silvery spaceship and dazzles even those who haven't traveled from a place where every building is the same general shape and color. It dazzled me, and I had been gazing at it every day for years now. Ngawang planted the welcome balloon in the bamboo vase that sat where the television I'd given away used to be. Then she slumped onto the sofa, conceding defeat to exhaustion. She had arrived, and soon she would conquer America. After a sip of tea and a glass of water and oohs and aahs at the UFO-like building and the magic of the blow-up mattress unfolding into a comfy bed at the press of a button, she fell off to sleep—exhausted by the jet lag and travel and the overstimulation of the sights and sounds of the country she had longed to see. And she hadn't even seen it yet in the cold light of day. BEFORE HEADING WEST on the freeway the next morning to go to the beach, I detoured to a nearby Jack in the Box. Every young Bhutanese was intrigued by the idea of take-out food. 
They thought Americans ate McDonald's three times a day, the way they ate three plates of rice and chilies and cheese. "Ngawang, what looks good on this menu?" The number of options on the enormous board at the drive-through was too much for her. "Egg kind of thing, or sweet kind of thing?" The only food I'd seen Ngawang eat besides _emadatse_ and mounds of rice was the occasional slice from a pie I'd picked up at Druk Pizza. And sweets. "Umm, sweet?" Her brow furrowed, and I could see she was studying the prices, deploying her internal currency converter to calculate dollar to ngultrum. "This is my treat," I said. "Everything is my treat, okay? You are my guest, and it's my pleasure." I pulled up to the ordering station, and a voice squawked through the display. "Thank you for choosing Jack in the Box. May I take your order, please?" Ngawang burst out laughing. "Who is that? Where is he?" She craned her neck out my window. This had to be a practical joke. "You'll see in just a minute. Two coffees, please, large, cream and sugar, and an order of French toast sticks." The box talked back. "Thank you. Please drive around." The young guy at the window took my cash, and I explained that my friend here had never been to a drive-through before. "She's from Bhutan," I said, forgetting that that wouldn't really mean a thing, since most people I encountered seemed not to have a clue _what_ Bhutan was, much less _where_ it was. Ngawang snapped a photo with my digital camera with the aplomb of a budding paparazzo. "Bhutan, wow. Where's that?" The guy's lip was pierced, and both his arms were covered with sleeves of angry-looking tattoos. His friendly demeanor offset the menacing body art. Ngawang was too shy to respond to this illustrated man. "It's a tiny kingdom between China and India," I said, playing spokeswoman. "My friend is a famous DJ on the radio there." The guy smiled. "Wow, that's so cool! No drive-throughs there?" "No _fast food_ there." 
Surely an even more preposterous concept for this guy. As was this entire transaction for Ngawang. The taste in the tiny container of syrup and the breeze in her hair on the open road soon replaced the oddity of it with sweeter sensations. THE CLOSEST STRETCH of beach accessible to downtown Los Angeles is just north of the famous Santa Monica Pier. The misty, cool early morning didn't deter Ngawang. The second I stopped the car she jumped out onto the sand, dug in her toes, then, without a word, rushed over to the ocean to wet her hands. This citizen of a landlocked country could now say she'd touched the Pacific. "Ahh, the beach!" she shouted, and I'd never heard her sound so happy. "Does it look like you imagined?" Although she hadn't had to imagine it. She'd seen it a thousand times, just not in person. That made her completely different from her father, who was exactly my age. Growing up in an electricity- and television-free Bhutan, not long after formal education in the country had begun, he probably had no idea what a beach looked like when he was the age his daughter was now. TV had provided my friend a glimpse of all manner of places and people that her older family members hadn't seen until much later in life. By the time she was a teenager, with the introduction of broadcasting into the kingdom, the need for imagining almost anything had vanished. This fact seemed to tantalize me far more than it did Ngawang, who was caught up in the vast expanse of ocean before her. "It's even bigger than it seems on television!" Ngawang picked up some sand, gently, as if she were handling a precious flower. "Is this where _Baywatch_ happened? Every teenager in Bhutan knows Pamela Anderson!" This was almost as good for Ngawang as an actual celebrity sighting. She spun around in a circle, soaking it all in. Across the Pacific Coast Highway, she spotted something else, a sleek modern structure, all glass, windows tinted dark. "Up there, look!" she said. 
"That's my first black house. I've never seen a house that was black before." IT WAS 1:00 A.M. Monday, time to report for work. Though these overnight hours were punishing, they had their advantages: They kept me out of the politics and manic deadlines that were the hallmark of working dayside. An added bonus was the occasional turnaround day off to adjust to the new schedule. Working the graveyard shift meant a different sort of rush, putting together and then delivering three seven-minute live newscasts, with several hours' break in between each one. It was pleasant enough, and fun, too, though the dulling effect of the inverted hours was a bit like having a straitjacket on your brain. Whenever my head really hurt from the flip-flopping schedule, I imagined what it was like for my grandmother, who'd logged overnight shifts at the Brooklyn Navy Yard while raising nine kids in a three-bedroom apartment. That put nuisance into perspective. Taking Ngawang to the studio in the dark of night made sense; this way she could ride out her jet lag, the office was less busy and therefore less intimidating, and with fewer people around, there was less chance she'd get in the way. There wasn't really anything for her to do but observe. Just seeing how we worked, the pace, the intensity, the fixed deadlines, would be shocking, and unlike Kuzoo FM in every way. We entered the back door using the special after-hours code, and snaked around the maze of cubes to my show's area of the building. She thrilled when the automatic lights popped on as they sensed our presence. Imagining it through Ngawang's eyes, I felt embarrassed by the largesse, the plushness of it all. Even the oldest computers were three years newer than anything in Thimphu, and there was one on every desk. Without people behind them at this hour they seemed like even more of a luxury, machines standing by, just in case. 
There were other, even more wondrous visions: the office kitchen, with its dueling microwaves, toaster oven, dishwasher, and a cabinet stuffed full with mugs. The plentiful supply of tea and coffee and sugar and little coffee creamers in different flavors. Milk was far too dear back in Bhutan to waste in tea; instead, caffeine was whitened with milk powder. And at Kuzoo, the small stash of beverages was kept under lock and key, reserved for special guests. "Free?" she asked, amazed, and then she saw the pharmacy cabinet, which momentarily trumped the drinks supply. "Free meds, too?" The five coworkers who shared these miserable hours welcomed Ngawang warmly, and sleepily volunteered to help explain their particular jobs on the show. While I went about putting together the first newscast of the evening, Ngawang meandered around the office, snooping at the colorful pictures and artifacts that decorated each cubicle. That each person had his own dedicated desk was another curious luxury. As we walked to the studio to go live, Ngawang asked, very loudly, the name of one of my coworkers whom she'd been talking with earlier. "That fat girl," she explained. She was having as hard a time with our names as I had, at first, in Bhutan. I told her she must not, under any circumstances, refer to that woman that way within earshot of anyone else. "But why not? She _is_ fat." Ngawang wasn't being rude. She was just being descriptive. "Well, I understand that she's, well, overweight. But... it's just not polite. It would hurt her feelings." "Back in Bhutan, they call me Bunny." Ngawang smiled and pointed to the gap between her teeth. "Look, just like a rabbit. It doesn't bother me that people say that, because I do look like one!" Then she patted her belly. "Plus people make fun of my weight, too. I'm not exactly skinny. Doesn't—how do you say her name again?—doesn't she know she's fat?" "No, I get the honesty of it," I said. "I appreciate the honesty. 
It's just that here, that kind of honesty can hurt a person's feelings. It's just not cool." By dawn, Ngawang had bonded with the woman whose name she couldn't remember, and with Jeff, the friendly, patient engineer, who generously offered to mix taped promos for Kuzoo that Ngawang could take home. Another engineer, Erin, invited Ngawang to speak at an audio production class she taught at a community college. When the IT crew arrived that morning, I asked them to show Ngawang their work area, since she loved computers so much. She charmed them, too. Everyone seemed to want to be friends with the woman from the exotic place they knew only from the pictures on my desk. WHILE MUCH ABOUT life and work in Los Angeles dazzled Ngawang, there were many things she didn't like or understand. How I could not have a television set, for one. The self-flushing toilet in the office bathroom "freaked her out." So did the size of my apartment. Though I had shown her pictures of my lovely but compact one-bedroom, it didn't compute that I had so little space, and no lawn. She'd imagined, she said, that everyone lived surrounded by the flora seen in one of her favorite movies, _Edward Scissorhands_, with a grand, sprawling house alongside lush, plentiful greenery. We had discussed at length the fact that I didn't live with my family, yet Ngawang kept wondering where they were. That the other cities where I'd told her they lived were in other states and the states were on the other side of the country confounded her; a young Bhutanese woman could no more comprehend the distance between California and Florida than I could have understood how far Haa was from Trongsa. Showing her on a map hadn't helped. While Ngawang absolutely loved the dishwasher, she was disturbed that I didn't have a live person to load it, or to cook and clean for me, as she and most of her friends did. Why didn't we rich Americans have at least the same—or better?
As for the absence of a television, I explained that was a lifestyle choice, but why I would want to make that choice made no sense to her. The idea that we had to call friends before showing up at their houses was also unsettling. "In Bhutan, you would just stop by," she said, "and if they weren't there, the maid would give you tea while you waited." Over the years, I had entertained dozens of guests of different ages and nationalities, but never like this, a visitor bewildered and enthralled by the simplest experiences. Her very frame of reference was fundamentally different. With every step and around every corner, I felt Ngawang exclaiming, pulsing with surprise, even if she didn't say a word. Often, she didn't; but almost every waking minute, she wore on her face an expression of pure astonishment, a combination of overwhelmed and startled and thrilled. And it was different from my own startling exhaustion at processing Bhutan for the first time—exactly the opposite, in fact. There, you react to the absence of development, the quiet of the landscape. In the United States, you are assaulted by how everything is enormous and paved and polished. The culture shock I'd experienced when I returned from Bhutan was dwarfed by watching Ngawang react to the overdeveloped world for the first time. I felt a larger sense of investment in her exposure to the world beyond her own, a responsibility, even. A collision of sensations, the big sisterly and maternal, overwhelmed me. I didn't have a sister, and I wasn't likely to have a child, but I had this woman in my life now who filled those roles in her own way. She just happened to be young, and Bhutanese. OF ALL THE good and bad and strange and wonderful things Ngawang was observing, it was a knock at the door one afternoon that undid her. There stood the uniformed UPS man, wielding a package and a wireless tracking device.
After I signed my name and closed the door, Ngawang literally fell to the floor in the hallway in astonishment, shaking her head. There are no street addresses in Bhutan. Mail, if you get it, is delivered to a central postbox in town. To have a package appear at your door—that was pure magic. "In my home village, I am modern and learned because I now work in the city," she said. "I can explain technology to them that they don't understand. Here, I am seeing so many things I did not know about. Here in America, I am a dumbo." "You are not a dumbo," I said, crouching down to hug her. "You're from a different world." "But you have so much more," she parried, accusingly. "You're all rich!" I couldn't dispute that, on balance, most Americans had more stuff or more money than the average Bhutanese. But beyond the material, were any of us richer, really? Everything we owned, the way we lived, came with a price. "Okay, Ngawang, so I have more cash, and don't forget, I'm twenty years older than you. But look at what you have, at your age. Your family owns a house, and several plots of land, free and clear. You are all close by and help each other out. My family lives across the country, and my parents still have a mortgage. It costs half of what I earn every month to pay my rent, and then there's everything else." I picked up the pile of that month's bills: home phone, cell phone, Internet access, YMCA. My car, I explained, was older, and owned by me outright, so I had no car payments, but then there was the cost of gas and insurance. "And medical care. If I get really sick or have an accident, I could go broke. And I'm luckier than most people, since I have some health coverage." Ngawang's eyes widened as I explained our medical system. Health care in Bhutan was free, and so were medications. And because of that, Bhutanese in Thimphu went to the hospital for the slightest cough or bruise. 
My standard of living, I explained, was far better than that of many of my family members, of many Americans. That had something to do with the job I had, with being cautious with money, and with not spending what I didn't have. It also, I said, had to do with the fact that I didn't have kids. "Do you get it, Ngawang? Yes, we make more money than you do, but as you can see, we spend almost all of it, too. And everything costs more, too." Ngawang patiently listened, but I knew she wasn't hearing most of what I said. She was intoxicated by the land of plenty, even if the land of plenty had proven to be more complicated and confusing than she'd anticipated. For my young friend, the view from the eighteenth floor, and that snazzy little car, was pretty enthralling. We shared a craving for worldliness; our birthplace and generation altered our vision of it. I COULD SEE subtle signs that homesickness was encroaching on her spectacular odyssey. "The next time I come to the United States, I want to bring my family," she declared as we meandered around downtown's Broadway after work one day. Mine was a household more subdued than any she'd inhabited, absent a crowd of people. And she missed her cell phone trilling twenty-four hours a day. Not that she was without callers here. They included a young Indian-American student at UCLA named Milloni, who had stayed with Ngawang's family in Thimphu the year before while she did field research. A young Bhutanese woman living with her aunt in Queens, New York, who had married one of the American golf pros and found marriage and the United States not to her liking. And a Bhutanese man who was finishing his master's degree at a university just outside Tokyo. He'd never met Ngawang in person; they were virtual friends, from Kuzoo.net. He liked her enough that he called her almost every day in Bhutan, where the cost was a steep fifty cents a minute.
With the per-minute charge to the United States far cheaper, he'd been calling twice a day. "Mr. Japan is on the line for you," I would announce before handing over the phone. Each time they talked, I'd tease her about her boyfriend. "He's not my boyfriend. I'm not even sure I'll ever meet him in person," Ngawang would protest. "Aren't you curious, after talking to him all this time?" "We'll see if he _deserves_ to meet me," she'd tease back. The one person who wasn't calling was Ngawang's sister in Nebraska. Which was where she was supposed to be headed next. FRIDAY ARRIVED, and our week of overnights was through. To celebrate, we headed to the hot tub behind the apartment building, hoping to beat the intense sun before it broke through in earnest. Ngawang sat perched on the edge, too modest to wear even the one-piece bathing suit I'd loaned her; her feet dangled in the water, her pants legs rolled up modestly to mid-calf. "So, Ngawang, I heard you tell someone at the office last night that your visa lasts for three months." It was curious that she hadn't mentioned that to me. "What are we going to do about your sister in Nebraska, and your going to see her?" "She hasn't gotten back in touch with me." Ngawang looked down at her toes in the tub. "She knows for sure that you're here?" "Yes, I wrote to her. And I sent her your phone numbers." "I don't get why she hasn't been back in touch. Wasn't she expecting you?" "She must be very busy." Bhutanese often explained away a lack of communication with "I am very busy." But what could this lady be so busy doing that she was avoiding her sister? "So let's talk about what you want to do. Remember I said it was great for you to visit me here for two weeks." "Well," Ngawang said, splashing her feet in the water, trying to play down what she was about to say. "I was thinking I would get a job." For a second I thought the splashing had mangled her words. 
But I could tell by the look on Ngawang's face that I'd heard absolutely correctly. "Ngawang, getting a job in radio is hard for people who have lived in this country forever." "Not in radio. I was thinking maybe at a hotel or something." "You're going to give up working at the radio station, and leave your family, so you can clean rooms at a hotel?" The tone of my voice was that of a mother whose fifteen-year-old daughter had just come home with a tattoo. "Not clean rooms. I don't want to do that. I figured I could work in accounting or something." A blinding glimpse of what should have been obvious struck me. When Ngawang said I'd made her dreams come true, I thought she was talking about her dream of _visiting_ America. Now I understood that she had meant her dream of _stowing away_ in America. How naive she sounded. How naive I had been to issue the invitation for her to come here. She continued. "I know some Bhutanese girls in New York are babysitters, but I don't want to be a nanny. I want my own kids, but I don't want to take care of someone else's." I took a deep breath. "Ngawang. Do you understand why they're nannies? It's not because they _want_ to do that. It's because _they can't get other jobs._" My voice was very loud, and I stopped to compose myself, even though no one else was around. It was, after all, only ten in the morning, and most normal people were at work. "You can't just walk into a hotel in Los Angeles and get a job as a bookkeeper. You have to get a work permit before you can even think of trying to get a job, and that's incredibly hard. Not to mention expensive. Maybe even harder than if I were to try to stay in Bhutan and get a job." Assuming I'd have the temerity to overstay my visa there, I could live very nicely for half a year on just a few thousand dollars. Ngawang had three hundred bucks in her wallet. And, like most Bhutanese, no credit card. "What about that man at the airport when I got here?
I spoke English better than he did." "What man at the airport?" "The guy who asked me questions when I got here. He was Chinese, an old guy. I spoke English better than him! He can't be American. How did he get that job? And that guy in your office, David; he's Chinese. What about Milloni? She's Indian, isn't she?" The fundamentals of the melting pot that make up these United States had completely eluded Ngawang. That kind of information didn't get transmitted on _Baywatch_, or _Sex and the City_, or _Friends_. "I don't know anything about the Chinese guy at the airport, but trust me, if he was working in customs, he's a U.S. citizen. David is Korean, not Chinese—and he's Korean-American. He happens to have been born in Texas. That's in this country! And Milloni, I'm not sure where she was born, but she was raised here, too. America is made up of people from all different places." "That's not how it is in Bhutan. I thought everyone in America would look like you," she said, eyes glued to her feet, the splashes intensifying. During all those years Bhutan had sealed itself off, people from virtually every other part of the globe had been migrating from their hometowns, establishing roots in new places that promised greater potential, intermarrying. In Bhutan, a mixed marriage was when someone from the east of the country married someone from the west, a fairly new phenomenon given that for years few had even left their villages. Cross-cultural unions like the few I'd encountered in Thimphu were rarer still. "I know it's not like that in Bhutan. I'm very aware of that. The United States is a very different place than Bhutan." Bhutan wasn't perfect, any more than any human was perfect. It was greatly imperfect. And it had produced legions of young people like Ngawang, modern Bhutanese who loved their country but, unlike their parents before them, yearned for more. Bhutan's pride and joy, its unadulterated culture, was in danger.
The connection to the world beyond was to blame. The minute tourists came in and students went out and television took hold of the people's brains, there went the Buddhist precepts and the cultural tradition and the status quo. Everything in Ngawang's line of sight conspired to make her feel that if you could only get your feet onto American soil, piles and piles of money could be excavated from the streets or would fall from the heavens. And that money would buy things, items that were the keys to happiness. That message was conveyed in television shows and movies, which Ngawang watched in a near-continual feed at home. And it was enhanced by the tales of the few Bhutanese who made their way to the United States and sent back stacks of cash. Somehow, they managed not to explain how hard they had to work to earn that money, what kinds of jobs they had to do, the cramped dorm-room-style living conditions they had to endure to be able to save even the few dollars a month they wired back home. How they lived with the constant fear of being found out and deported. Then there were those lucky, supersmart people who won scholarships. They lived under the nurturing gaze of an academic institution, and while life might not have been cushy, it was rarely the grind of an illegal immigrant. (Economics were not the only category warped by the media. At that training session I'd done for the tour guides over the winter in Thimphu, one of the young men had whispered to me: "Are the women in your country really as, umm, sexy as they are in the movies?") There was another enormous problem: Most of the foreign tourists Bhutanese do come into contact with, if they come into contact with any, are fantastically rich. They're spending thousands of dollars to get to Bhutan, and at least several thousand once they're there. Everyone had a story about some guide who'd been "adopted" by a wealthy Western visitor who'd "sponsored" him for a trip or even through college. 
Or a visitor who'd fallen in love with Bhutan and decided to, oh, subsidize the building of a school or a dormitory or to help support an entire family. The families for which Bhutanese served as nannies in New York or elsewhere in the world, well, they were wealthy enough to have nannies! And I was part of the problem. Breezing into Thimphu, twice over the course of a year. It was not that much different from wearing enormous diamonds and chartering a helicopter into a famine-stricken area. I might think I lived simply, with my one-bedroom apartment and six-year-old car, but the very idea that I had taken the planes to get there, that I possessed what I did and had paid for it by myself, was utterly amazing and lavish—even if it was also a little bizarre. What Bhutanese saw as evidence of the outside world was not representative of the day-to-day life of the average American citizen. Much less a stowaway. A plane flew overhead, and appeared to skim the fifty-five-story Bank of America office building two blocks away. When I'd first moved to Los Angeles, I'd sat in this very spot and worried about the flight pattern before I realized it was an optical illusion. I wondered where this jet was going, and wished I could pack Ngawang and me on the next flight out, so I could march her safely back to Bhutan, where she belonged. Where she could flourish, if she put her mind to it. Where she could be surrounded by her loving family, always. "So have I made it clear how it's not going to be possible to get a job here, or at least the kind of job you think you'd want? How living here is not what you think? And how if you take one of those other jobs, and work illegally, how much trouble you can get into?" Ngawang looked right at me, steely, and didn't say a word as I continued. This is what I'd been spared in not having children, I thought: denying a person you loved something they wanted dearly but that you knew wasn't in their best interest. 
"Let's discuss what we're going to do. I'm not sure I understand why your sister hasn't written back to you. Why don't we try calling her?" Upstairs on the eighteenth floor, Ngawang's virtual entourage came calling—her girlfriend in New York on the cell phone, Mr. Japan on the house phone. When both conversations ended, Ngawang tried reaching Nebraska again, with no luck. We agreed that who she really needed to talk to now was her family back in Bhutan. With the fifteen-hour time difference and the absence of voice mail, it took us two days to reach them. Though the conversation was conducted entirely in Dzongkha, I didn't have to understand the language to grok that the discussion was contentious. She revealed no details except that her family was angry with her. But she said, conceding defeat, "Okay. I will go home." THE TICKET ON Air India was confirmed. Since I had to be at work, Ngawang's friend Milloni agreed to transport my charge to the airport the next day. I had met her and felt I could trust her. She appeared at the appointed time in my driveway in a late-model Audi, a far fancier car than mine. We loaded Ngawang's luggage into the back; with all the presents she'd been given on her visits to my friends' various newsrooms, her luggage had doubled to two bags. We said our good-byes quickly, because we were blocking traffic. I felt relieved as much as I did sad to send her on her way. The next morning, Ngawang called me from Milloni's cell phone to wish me well. By that night, I assumed she was in the friendly skies, well on her way home. By Monday night, I figured, she'd at least be back in Delhi. It was only about three weeks later, when I hadn't heard from her online, and no one at Kuzoo had reported seeing her, that I started to worry. # 12 # BABY WATCH A MONTH AFTER NGAWANG WENT MISSING, THE Druk Phuensum Tshogpa (DPT) swept the National Assembly election with a landslide victory. Even the founding members of the DPT party were surprised. 
Just days before ballots were cast, polls showed it to be running neck and neck with the People's Democratic Party, the coalition led by the king's uncle. Because of that royal association, it was assumed the PDP would easily emerge victorious. In the end, though, with 80 percent of the population voting, the DPT swept forty-four of the forty-seven seats. This was not in any way interpreted as a condemnation of the beloved monarchy. It was seen as a rejection of the king's _family_. Since His Majesty had insisted that democratic rule was the best form of government, armchair political analysts interpreted the landslide in favor of the DPT as evidence his subjects understood the importance of electing a slate of candidates who were less tied to him. The members would serve for five years. The DPT chief Jigme Thinley was appointed prime minister, and my Villa Italia host Lyonpo Ugyen was indeed installed as foreign minister. Immediately, the new lawmakers got down to work. Viewers who tuned into the BBS in the middle of the day no longer saw a placeholder camera trained on Thimphu Valley; they could witness the nation's first-ever parliamentary proceedings, live as they happened. The democracy that had been foisted upon them was to be as transparent as possible, and there was no better way to do that than to telecast the governmental sessions each day. One of the first acts of business of the new parliament was the ratification of the years-in-the-making constitution. Monks chanted as Bhutan's fifth Dragon King signed three copies of the historic document—one in English, two in Dzongkha, one of those inscribed in gold—to enact it officially. The forty-seven National Assembly members and twenty-five representatives of the National Council then each solemnly approached the throne and added their signatures, a long process broadcast to the people to remind them that they now had a voice in their governance. (Even if it was a responsibility they hadn't requested.) 
The sacred documents were immediately put on display for the people to behold. All who visited were given bound copies to take home, as well as little juice boxes and a coin as tokens of appreciation. Heavy rain and a flood marked the day, which some interpreted as a sign that even nature was shaken up by the occasion. This unprecedented document spelled out equality for all; the rights to vote, to life, liberty, and security; and to freedom of speech and religion—although Buddhism was still classified as the official religion, and criticism of the royal family was still verboten. The majority of the assembled citizens who braved the rain to attend the ceremony were actually there for another reason: to gaze at an enormous, sacred, century-old religious scroll called a _thongdrel_, unfurled in commemoration of the historic event for a rare public display. It is believed that those who stand in the presence of this tapestry will accumulate much merit. One woman said she had trekked for hours from a faraway village in order to see it and soak up the karma it offered. She hadn't been aware—until a reporter told her—that it was being exhibited because of the new constitution. Another woman, a farmer, had asked her husband to accompany her into Thimphu to see the king because she had heard that his powers had been diffused. She was very sad about this development, and wanted the chance to see her beloved monarch in person. In that summer of 2008, constitutional freedoms were still less important to many Bhutanese than their devotion to Buddhism and their king. IN THE MIDST of these changes in Bhutan, which I followed online from afar, Ngawang resurfaced. For months I'd been unsuccessfully looking for clues to her whereabouts. I felt a combination of guilt and annoyance and intrigue over her disappearance. No one at Kuzoo FM back in Thimphu had seen her or heard from her. There was no activity on her Facebook page.
Calls to her friend Milloni went unreturned. No one on the other end of the numbers she'd dialed in New York returned my calls, either, nor had the mythic sister in Nebraska ever materialized. I had no number for the faraway Mr. Japan. I considered calling Ngawang's father, since his number was recorded in my Skype, but decided against it. After all, Ngawang was a grown woman—checking in with her friends was one thing, but calling her parents was another. My instinct told me she was somewhere safe; the mystery was _where_ she was. And then one day, her name appeared in my inbox: hey hi sweet lady jane well am fine n am with my cousins out here at new york sorry i didnt check my mails will be going home may be after two weeks how is everything going out wit you keep in touch "New York?" I wrote back. "What the hell are you doing in New York? How did you get there? What's your phone number so I can call you? Would you please call me? Does your family know where you are? Do Sir Pema or Pema?" They had both just written again that she still hadn't been seen. My mind started racing: Just how long did Ngawang intend to stay in the United States? Forever, or till the expiration of her visa? Where was she staying? What was she doing all day? Had she planned this all along? Frantic, I considered hopping a plane and scouring Jackson Heights, Queens, the neighborhood with the largest population of Bhutanese in the United States, but wrote that off as folly. I didn't hear back from her. The month of May came and went. By now, Ngawang's visa would have expired. I had succeeded in liberating myself from my job in Los Angeles and was making plans to go back to Kuzoo to volunteer for the summer. But before that, I'd visit Washington, D.C., as a volunteer at the Smithsonian Folklife Festival, where a faux Bhutan was being raised.
The largest delegation of Bhutanese ever to leave the country at one time, 144 people, was headed to the American nation's capital to set up a living museum exhibit about their culture. Monks would bless visitors in an authentic Bhutanese temple that had been constructed on the National Mall; archers, including the king's brother Prince Jigyel, were to demonstrate the national sport; a weaver would sit at a loom creating a colorful cloth and wrapping those who wished in _kira_ and _gho_. One afternoon in the days before the festival began, I spotted Madam Kunzang Choden, the writer and Kuzoo advisor, over near the Metro with her husband, Walter. (I wondered if they knew about my dinner with Martin, who had emailed me earlier in the year to report that his work had relocated him to Thailand.) Madam Choden was to be one of the main speakers in the foodways tent, to discuss the curious culinary customs of her people, including and especially their devotion and addiction to fiery hot chilies. She had just published a book on the subject. We said our _Kuzu zampos_, and commented on the magnificent location and the weather, and Madam Choden then said, quietly, as if she'd been waiting to clue me in, "I hear you sponsored Ngawang to come to the United States. She's always been dying to go. I thought you knew she'd been trying to get here for a long time...." And then she shook her head, disapprovingly, too polite to say another word. She didn't need to. Later, another Bhutanese friend revealed more details. He said Ngawang had boasted that she was working in New York in a shop belonging to some friend. Right after he started spilling the story, he clammed up and refused to say any more. And then Sir Pema let slip in an email that he had known all along Ngawang was in New York. He'd been sworn to secrecy, but eventually caved to my persistent concern.
Now, I thought darkly, I had another dubious connection to the Kingdom of Bhutan, besides going to help start a radio station whose mainstay turned out to be illegally downloaded Western pop music. I'd been an unwitting accomplice to the illicit plan of a young Bhutanese to explore her American dream. IT WASN'T UNTIL the end of June that word came from my wayward friend, via email, informing me that she had finally returned home. I was relieved to hear it. She refused to discuss the who, what, when, where, why, and how of the whole New York adventure, but before she'd returned home, she had consented to meet in person the man we'd taken to calling Mr. Japan. He'd just wrapped up his graduate studies outside Tokyo and was himself returning to Thimphu. Once they were back on Bhutanese soil, their yearlong phone friendship morphed quickly into romance. Both families had blessed the union. They were waiting for the monks to give the go-ahead about which day was most auspicious so they might officially move in together and consider themselves married. And when I read that, I knew she must be pregnant. I KNEW WHAT it was like to have a whirlwind courtship that led to marriage after a life-altering experience. Mine had happened exactly twenty years earlier, back when I was Ngawang's age, in fact. My husband-to-be and I had met when he came to work at the television station in North Carolina. Not long after, I'd had that epiphany on the way to the grocery store. This liberation from my fear of the dark, I assumed, meant I was now healed and ready to join with this wonderful man. United from a point of strength, not weakness. Running off and getting married in a fit of passion, after just several months, felt like my way of declaring to the world, "That rapist didn't ruin me. See?" It was the first impetuous act of my short life thus far. And then, my new husband lost his job at the station in a management shakeup. He got an even better one—in, of all places, Atlanta. 
The scene of the crime was the last place I ever would have chosen to move, but in the media business, you went where the work took you. I rationalized: It wasn't Atlanta's fault that I was attacked; it could have happened anywhere. If I could survive returning there, I believed, I'd be demonstrating my capacity to adapt to anything, to show how unflappable I was. It would be an illustration of my boundless ability to forgive—to transcend geography. People who refused to leave a certain location or go to another were histrionic, inflexible, oversensitive. I was not going to be one of those people who let history get in the way of progress. Even the best intentions can be fraught with delusion. I had greatly underestimated how hard it would be to settle into the place where this horrible thing had happened, perhaps even more than I'd underestimated how complicated it was to be married. Within a year, another job offer rescued us, and back we went to North Carolina. I found myself felled by a mysterious lethargy that I first attributed to moving. In the course of trying to determine what was wrong, the doctor I visited offered a prescription: counseling. Everything is fine, I insisted. That's all in the past. Still, I obeyed and found a therapist. I was tired of being tired all the time. As we talked, a diagnosis emerged. It wasn't fatigue; it was depression. I was consumed with doubt and anger. Here I was, barely a quarter of a century old, connected to this man, moving around for his work, depending on him for financial and personal security. How did I get here? This wasn't how I imagined my life to be. I believed I loved him, but what did love mean, exactly? My problem was this sinking feeling that I was ill-equipped and unprepared to be a wife. Almost as if to prove that I wasn't really an adult, I found myself stymied in how to confess my misgivings to my poor, sweet husband. 
To say "I've made a mistake, there is something wrong, I don't quite understand, please forgive me. Would you mind, please, if I exile myself to a room, and let some time pass, so that when I emerge I'll be healed, recalibrated—and then we could just skip happily forward into the future?" If I had had the courage to ask, he would likely have said yes. But instead of speaking my doubts, I shared my deepest fears with a notebook. I wrote down every bit of it. How I felt trapped, how much I doubted it all, how I hated everything, including my poor husband, how I wished we both were dead. I didn't mean it. I didn't know what I meant, how I felt. I put my confessional notebook in a drawer in my desk and left town for a few days for work. And when I'd returned, my husband was waiting for me at the door, his face washed with devastation. In his search for clues to my distress, he'd read what I had written. Not long after, my whirlwind marriage was through. IT WASN'T UNTIL I was forty—thirteen years after the split, in the throes of my life review—that I began considering what a mistake it had been to let that marriage go. Over the years, I'd rarely discussed it, even with people who knew me back then. The whys and particulars of how it came to be and how it came apart were easier not to reveal to newer people in my life. To say, "I got married in reaction to having been raped," required the disclosure of layers of personal history, and now I wasn't even sure if the way I'd interpreted my history was true. What a good and patient partner he was; what an excellent father he would have been. I was wiser than I'd realized in having chosen him. But I understood now the youthful haze of mistaking desire for love and the foundations of a lasting partnership. That eagerness to move on with life, to rush into the future, lock things down, the false sense of feeling that by moving quickly, you're taking control of your destiny. 
And so I understood as I watched Ngawang rush into marriage. She'd gotten her American adventure out of her system; it hadn't quite worked out the way she'd planned. Perhaps in uniting with Mr. Japan, she was reacting to the disappointment of not having achieved one of her dreams. Now she was ready for the next thing. I hoped her impetuousness would yield a different outcome than mine had. NGAWANG HAS COMMANDEERED her fellow radio jockey Kencho to pick me up at the airport in Paro. Kesang the Kuzoo driver is busy, and Mr. Japan is at work. I know I must be an honored guest because Kencho and Ngawang hate each other, so I'm flattered that they've made peace long enough to take this drive together, all because of me. Arriving in Bhutan for the third time in a year and a half now seems ridiculously normal. I remind myself not to mistake the familiarity for true understanding. I have a much better sense now of Bhutan, but I still know better than to presume I really understand this place. The mystery keeps me coming back. Kencho's happy I'm here to help out again with Kuzoo, because he doesn't care for the new boss who has replaced Sir Pema, now that he's gone off to his studies in Thailand. Ngawang doesn't like the new lady, either, but she's preoccupied with the details of moving into Mr. Japan's house. The monks have given the okay for this to take place later in the week. It's all very matter-of-fact. Ngawang getting married after disappearing for a while. Me breezing in from halfway around the globe, resolved not to demand answers about the past. I'm more concerned with what is happening with her right now; I ask Ngawang point-blank as we start to drive. "Are you expecting a baby?" Her response is calm and measured. "Maybe," she says, as if it's no big deal. "We're going to the doctor tomorrow." 
On the ride from Paro back to Thimphu, my two friends compete like little kids for my attention, talking over each other with their stories, but I can't help tuning them both out. The roadwork is complete, the drive shorter and far less death-defying than before. Or is it just that I'm more comfortable here? It's mid-July, and monsoon season, and it's just started to rain. Kencho could have a new career as a New York City taxi driver; he's barreling down the highway at great speed, navigating past the other traffic and the cows and crouching humans. The magnificent palette of green on view in the summertime distracts me, and Kencho interrupts his monologue to turn Kuzoo on full-blast. A taped report he prepared about Gross National Happiness is about to air. To help quantify the concept to the outside world, an index has been released that aims to allow other nations to measure the happiness of their citizenry. Who needs such a scale? I think. By now, I've learned that the ingredients for happiness are simple: giving, loving, and contentment with who you are. IT ISN'T UNTIL the next day that I meet the mysterious Mr. Japan, whose monk-given name is Sonam Penjor. He's a tall man, big, with a sweet, solid personality; he greets me as if I am a long-lost relation. My immediate reaction is to like him. He and Ngawang have come by Kuzoo after having visited the doctor, who confirmed the news I suspected. I offer congratulations and hug them both. Ngawang has in her hands proof of their baby, a snapshot from the ultrasound. A tiny little sprout of new life inside my young friend. Her excitement feels almost childlike, while Mr. Japan has the demeanor of a mature father-to-be, not shocked by the immediacy of the prospect at all. He says, with confidence, that he is very ready to start a family. One day, maybe I'll get the details on Ngawang's New York adventures from him—assuming he even knows. 
"I hope it's twins," Ngawang says hopefully, the way you might say you wished for snow. "I hope it's _not_ twins," I say, remembering the time I was in the delivery room to assist a single friend, almost forty years old, who'd visited the sperm bank. Just seeing two babies delivered at one time gives you a glimpse into what caring for them each day might be like. One baby at a time is all a girl Ngawang's age should have to handle. "Well, I hope it's a boy," she says. "As long as the baby's healthy," says Mr. Japan, patiently. "That's what matters." This guy is all right by me. For a minute, I try to imagine how different my life would have played out had I stayed married. Would we have had children? Would we have lasted these twenty years? The only thing that's certain is that I would not have had the experiences I've had, and most definitely not this connection to Bhutan. In my forties, I understand how each decision has consequences. I also see the preposterousness of thinking you can have it all, much less trying to. WHEN NGAWANG ASKED if I would be her baby's godmother, I didn't hesitate to say yes. I understood my being so anointed had nothing to do with my perceived competence to care for this impending child should anything terrible befall her or the baby's father. After all, between their extended families, there are enough guardians to care for all the students in a small school. And as much time as I'd spent in Bhutan over the last two years, I still technically lived too far away to do much practical good. Besides the love that propelled this gesture, I knew this honor had been bestowed on me for two other reasons, reasons that were inextricably linked: I was American, and because of that, I offered access in the future to opportunities the baby's blood relatives could never provide. Charged with the responsibility of godmotherhood, I could one day invite Ngawang's child to the United States. I'd feel invested enough to pay, perhaps, for his education. 
In a modern Bhutan, it was simply better for a new baby and his family to know they had a friend on the other side of the world. Even without the formality, I'd be happy to be there for this child, to help however I could. By the time he was old enough to travel by himself, maybe I'd have more than a blow-up mattress in the living room to offer. But in giving me a title, Ngawang knew she was giving me a responsibility—and she knew me well enough to know I wouldn't take that responsibility lightly. As many pregnant friends and new babies as I'd been around, I'd never been offered an official role. (I'd mercifully never even been a bridesmaid.) The closest I'd come was that time I'd witnessed the birth of those twins born to my single friend. But that was more about attending to her than to the babies. I liked the idea of being a godmother. Even if, in Bhutan, virtually any woman who comes into contact with a child is called "auntie." It filled me with delight to know that across the planet, in this country I loved, there would be a little Bhutanese baby who would grow up learning he could count on me. IS IT POSSIBLE that the royal astrologers in Bhutan knew, when they had divined November 6 as the most auspicious day for the coronation of the fifth king, how important a week it would be for the rest of the world? Astrologers may not possess crystal balls, but perhaps a scan of the heavens foretold a volatile time of unprecedented change. Or perhaps it was all a happy accident that the world's most envied and reviled democracy and the world's newest happened to close out 2008 with back-to-back milestones. And that at the helm of those two democracies were two men who, while they had had vastly different upbringings, shared oddly similar philosophies. King Jigme Khesar Namgyel Wangchuck was formally crowned as king of Bhutan just hours after the landslide election of Barack Obama, America's new president-elect. 
Though King Jigme had attained his position purely by dint of birth, he was, in his own way, a living symbol of global interconnectedness. He'd attended prep school (Phillips and Cushing) and college (Wheaton) in the United States, then studied for a master's in England (Oxford), and preferred basketball to the national sport of archery. (Teams of tall strapping players would be summoned to the court in Thimphu when His Majesty wanted a game.) The fifth king of Bhutan might not achieve the greatness of his father and grandfather and great-grandfather before him, for history would show they had brought their country into the modern age. But the mandate before this king, during a time of unprecedented change, was no less critical. While Obama's election was still dominating world headlines, Bhutan's fifth king addressed a crowd at the Changlimithang Stadium, where thousands had waited in line all night to ensure they got inside: Ultimately, without peace, security and happiness, we have nothing. My deepest concern is that as the world changes, we may lose these fundamental values on which rest our character as a nation and people. The Bhutan we see is vastly different—unrecognizable, even—when compared to the Bhutan in the time of our first king. As long as we continue to pursue the simple and timeless goal of being good human beings, and as long as we strive to build a nation that stands for everything that is good, we can ensure that our future generations for hundreds of years will live in happiness and peace. THREE MONTHS LATER, we are on baby watch in Thimphu, awaiting the arrival of one of the next generation of Bhutanese. The ultrasound long ago revealed the baby would be a boy. To bide the time, Ngawang visits monks, drinks vanilla milkshakes from Karma Café, overdoes it on the chilies because once the baby comes she'll have to dial down the culinary heat for a while. 
Uncomfortably large and hoping to move things along, she takes long walks over bridges, which the Bhutanese believe will induce labor. She and Pema and I ride into town to the fabric store to buy a petticoat for the delivery, a garment designed to protect her modesty given the inevitable crowd of family who will gather when the baby debuts. In the altar room of her in-laws' house, right next to the bedroom where Mr. Japan and Ngawang sleep, monks are on duty, chanting prayers for a healthy baby. Out back, a special tub has been constructed where the new mother will soak off the pain daily in a hot stone bath. After tea one morning, Ngawang's mother-in-law fills it up and tests it out herself. One Saturday we sit for hours in the hospital waiting for the only one of the three gynecologists in the city who Ngawang trusts to examine her. He sends her home to wait some more. As the due date comes and goes, and Ngawang becomes larger and more impatient and uncomfortable, I invite her over for a sneak peek at the baby gifts I've brought; Ngawang's not allowed to take them into her home until a healthy baby has arrived. No showers in this culture, at least not yet; purchasing items in advance is seen as a jinx. Ngawang particularly loves the stuffed bear sent by friends of mine she'd met in Santa Monica. "This is for me," she says, and I know she isn't kidding. She oohs and aahs at the little hats and bibs and onesies, all from the Gap, a very exotic American brand, though much of their stuff was made in factories closer to here than to the United States. I've brought along a stack of kiddie books, too, about things this baby isn't likely to see much of until he leaves his homeland: One's about trains, another about boats, another astronauts, none of which exist in Bhutan. Maybe Ngawang is done with adventure, but that doesn't mean her baby has to be. 
While we wait for his arrival, I fill the time by stopping by Kuzoo; Radio Jockey Kinzang loves having guest hosts with foreign accents, and by now the listening audience has heard me yap on the air on many occasions over the past two years. I also find myself enjoying long lunches with friends, where other friends happen by and hours elapse, filled with meandering conversation. Thimphu is changing, but the rhythm of life here for me is still a delight. One afternoon my dining companion is Phuntsho, a woman about my age who is divorced with three children, two studying in India and one at the Bhutan on the Border, the University of Texas at El Paso. We are discussing how, in the second half of our lives, we want to change professions. Phuntsho says she's thinking about becoming a nurse. I reveal my recent, very peculiar recurring dream. "It is a gigantic room, filled with babies. There is a long line of bassinets, all in a row. It is up to me to take care of all of them, just me. And I work my way down the line, pick up each baby, and kiss him on the head. And all of a sudden they are all cooing and crying, and I am falling asleep...." Phuntsho smiles slyly. She knows how outrageous a fantasy this is, and I do, too. I take this dream as a sign of my morphing ambitions, to figure out how to help children who have no one else. To do this in Bhutan would be impossible; there are no orphanages here. There isn't even a formal adoption policy. The maternity nurse at the hospital keeps a list of prospective parents, and calls them when a mother wants to give up her child. There just aren't that many unwanted babies, Phuntsho says. After spending time here, I wonder if this is denial speaking, or a matter of resources and culture. Close to 40 percent of the population is under fourteen years old. And a growing problem is that kids born "without fathers" are lost in the system. 
Without proof of both parents' citizenship, the child is forever in limbo, denied the rights of a full-fledged Bhutanese—meaning, most notably, that the child can't enroll in school. Many of these children—no one knows just how many there are—are the result of a long-standing village ritual called "night hunting," where a man crawls through a window into a woman's bedroom and sleeps with her. When I first heard about the custom, I was startled by its similarity to my own experience. Only now is anyone in Bhutan openly beginning to call this rape, but it happens with enough frequency that the parliament has been grappling with a law to ban it. When a "fatherless" baby was born on the farm in the days before modernization, the additional hands were welcome; but as subsistence farming becomes less common, a whole new class of children is growing up who have no prospects for the future. One of the four queens of the fourth king has been an active proponent of family planning, but it's believed that only 30 percent of Bhutanese use birth control. Among young Bhutanese, the issues are becoming the same as anyplace else: a growing demand for the morning-after pill, available over the counter for a little over $1.50; a spate of botched abortions in clinics across the southern border in India; paternity tests to determine the father of the child. The ways of the Western world encroach. Like it or not, perhaps an orphanage in Bhutan is inevitable. The day after we share this discussion, Phuntsho calls me on the phone. "Actually, I thought about it and there is something here that is close to an orphanage, where they could really use your help. You will find it interesting," she says. "We leave tomorrow at ten a.m. Prepare to fall in love." She picks me up near the traffic circle in the center of town, and we make the trek toward Paro on the upgraded highway. After an hour or so of driving from city to city, we turn onto a rough unpaved road that climbs high up the mountain. 
Up we go through a small community of a dozen or so houses called Shaba. Monkeys are swinging in the trees, and our vehicle kicks up dust as it rumbles over the rocky terrain. It feels a bit like an Asian safari. "This is much improved since Rinpoche," says my friend reverently, as if we were driving on smooth glass. I'm happy to have missed what it was like before. This particular Rinpoche, she explains, is the ninth reincarnation of a manifestation of the mighty Guru Rinpoche, the saint credited with introducing Buddhism in Bhutan. Just twenty-eight years old, Rinpoche is revered for his tremendous compassion. For three years now, this good holy man has been taking in boys whose families can't care for them, or who have no families at all, providing them food and shelter, and educating them in the monastic tradition. After the last hairpin turn we emerge at a magnificent spot. The crystal silence of isolation; an unseasonably warm winter day in the age of global warming. Right next to where we park sits a small chorten, a row of tall prayer flags, and the Neyphug Monastery, a large structure built in 1550 that has seen better days. Its exterior is literally crumbling. Phuntsho points to two hermitages that can be reached only by walking through the forest, a three-hour trek from here. Off to the side, a dozen tiny boys wash themselves in a makeshift outdoor shower. A few larger kids are tending to chores; the rest are crowded in a small room to watch a cartoon—thanks to the one evident luxury: satellite TV. Forty children clad in red monks' robes gaze intently at an old television set. Absent neighbors, this is their lifeline to the world. If I hadn't just been told they were orphans, I probably could have guessed. These kids may be lucky to be here, but these are hardly deluxe accommodations. Filthy, threadbare bedding lines the floors of several packed sleeping rooms; shabby posters of handwritten ABCs are taped up for decoration and instruction. 
A fraction of the belongings in the average American kid's room would go very far here. "You can come teach the boys English," says Phuntsho. "They don't learn much of it, for their monastic studies are in Dzongkha, but these days, everyone needs English." A more immediate and urgent need is for futons, pillows, blankets, so the kids get more cushioned sleep. I make a mental note to investigate how to send them some. Rinpoche is away, trying to raise money from supporters in Taiwan to build a dormitory for the kids. His eighty-year-old father, Lopen, greets us warmly. He wears the red robes of a lay monk, a _gomchen_ , and he's as round as a teddy bear. A sweet boy with a slightly crossed eye smiles at us as he serves us tea and lunch; it's an honor to be so close to the visitors. I eat my red rice, politely, and pass my bowl of _emadatse_ to Lopen, who appears only a bit surprised that I won't eat it. Then he digs in himself. Afterward, he gives us a tour. We walk the dark, dusty, cold rooms of the monastery. All the religious relics have been restored, Phuntsho says, but the building, like the living quarters, direly needs attention. A life-size statue of Guru Rinpoche fills one shrine; an enormous Buddha sits in another. There are wall paintings by a revered lama, images of a thousand Buddhas; the largest of them is believed to once have spoken. There's also a sacred eighth-century statue of the guru that we don't see. It's opened to the public just once a year. All these relics, Phuntsho tells me, possess great power and many blessings. At each stop, she prostrates herself three times, and we tuck money on each altar as offerings. I say a silent prayer for a healthy and happy life for Ngawang, Mr. Japan, and their baby. Back inside the small residence we have another round of tea; Lopen doesn't want us to go just yet. He fumbles for his reading glasses and settles in to study the thick, bound astrological calendars Phuntsho has brought. 
The start of the Female Earth Ox year is a week away. Phuntsho pulls up her purse from the floor and takes out a four-inch-thick stack of cash. It's money that's been sent for a series of cleansing prayers, a _puja_ , to be conducted for a boy back in the United States who is ill. After consulting the astrological tables for information about the boy's birth year, Lopen asks when I was born. Squinting through his lopsided spectacles, he nearly presses the forecast book to his face. Phuntsho relays his prediction in a serious tone: "It is not going to be a good year for you. Watch your health. Wear red every day, preferably under your clothes." I take a mental inventory of my closet at home, scanning for red clothing. "They really need to do a wind horse _puja_ for you. You have to be very careful," she says. This _puja_ will rebalance my energy and ward off illness. In my purse, I happen to have a crisp $100 bill. A friend in Thimphu has repaid me for books I'd brought from the States. I fish out the other cash I have on hand, dollars and ngultrum. All told, the sum that other Rinpoche wanted from me two years ago. I press the money into Lopen's hands. "It's for the boys," I tell Phuntsho. "For the orphanage." "They will say prayers," she says, and she translates after he begins to speak. "Lopen says they will do a _puja._ " I don't argue. If they feel the need to do a _puja_ , so be it. The mental power of sixty young monks, with Lopen at the helm, can't be a bad idea. I'm happy to accept the prayers. Yet happier still not to be looking for answers anymore. TEN DAYS LATER, Ngawang gives birth to a healthy baby boy. Eight pounds, eight ounces. Right out of the womb, he resembles a tiny version of Mr. Japan, with the same sweet face and a generous pile of hair. His entrance into this world is thirteen days after his due date, which happens to be a year to the day after Ngawang arrived in Los Angeles, and two days into the year of the Female Earth Ox. 
The monks anoint him with a beautiful name. Kinga Norbu: precious jewel, loved by all. The astrological forecast for the new year is terrible in almost all regards. Rain will be scarce, food in short supply—economically difficult, all the way around. There's one exception. They say it's a particularly lucky year in which to be born. # POSTSCRIPT NOT LONG AFTER I RETURNED FROM MY INITIAL trip to the happiest place on earth, I had lunch with an acquaintance who posed a question I'd never asked myself, much less another person. "What would you like to be doing five years from now?" she said, leaning in, Oprah-style, across the basket of breadsticks. _My God_ , I thought. _Who knows?_ Five years. Five years ago I could never have imagined jetting off to a Himalayan kingdom and finding my entire perspective of the world, and myself, turned upside down. Why would anyone want to imagine the future, much less plan it? I didn't say that, though. I didn't want to make her feel bad, since she seemed to be one of those people who liked plotting her life. I just knew you couldn't do that. So I gave her as vague an answer as possible: "I really have no idea." "But," she insisted, "do you want to be married? Do you want to move out of Los Angeles? Do you want to keep doing the work you're doing?" There was a sense of urgency to her line of questioning, as if my entire future rested on the words I now uttered. "I really, honestly have no idea," I said. "All I know for sure is this: In five years, I would love to feel as great as I do, as strong as I do, right this minute." My lunch companion looked at me expectantly. She seized on the word "great," and she wanted to know more, the specific things that were causing me to feel so good. I told her there wasn't anything specific to add. That was the triumph, I told her. I didn't feel good because I had a new romance, or a new job that paid tons of money, or anything visible or measurable. 
None of those things that usually set people to the "Yes" gauge on the happiness scale had happened to me. I didn't feel good because I expected nothing bad ever to befall me again; instead, I trusted that I could handle whatever came my way. Best of all, I told my dining companion, was that what I wanted from life had changed. I wasn't waiting for something to fall into place so that life could get started. Life was brimming all around me. And now I understood that what I gave was more important than what I got. It was Bhutan and the three good things that helped me arrive at these conclusions, I told her, and I explained how the exercise worked. I could see the skepticism in her eyes. "Try it, if you'd like," I said. "Maybe then you'll see." MY PERSONAL PERSPECTIVE isn't all that's changed since my first trip to Bhutan in the winter of 2007; so has much in the kingdom. In advance of the coronation in November 2008, a dog pound was temporarily erected and a new sterilization program launched. Of the estimated 1,200 strays in the city, 360 were spayed or neutered. As a consequence, the canine population has been seriously reduced, and it is now possible to walk the street without being mobbed. Although at night it's still common to be serenaded by the howls of both strays and house pets. Despite the cries of the existing papers that there was not enough advertising to go around, three new papers were granted licenses to operate. The kingdom's first daily, _Bhutan Today_ , began publishing in time for the coronation in November 2008. One eight-page issue in early 2009 was crammed full of reports that illustrated the impact of Bhutan's association with the outside world. A cover story, in English, lamented the continuing demise of the main language, Dzongkha. One student was quoted as saying the influence of Western movies and fashion and the "coolness" of speaking English leads people to be uninterested in speaking the native Bhutanese tongue. 
On page two, an editorial lamented the same concern raised by the Kuzoo advisor Madame Carolyn the year before, that more Bhutanese celebrate Valentine's Day than Losar. The same piece railed against the hypocrisy of the national ban on the sale of tobacco and how it made smoking seem even more alluring. In an editorial titled "Professional Fools," the writer deemed "weird" the recent announcement by the government that there was room for only 40 percent of the Class Ten students to continue their studies, and wondered, "Where will the rest go?" In 2008, an enterprising Nepali tailor began an alteration service to make it easier for foreigners to wear the Bhutanese national dress without an army of assistants. For about three dollars, she'll custom alter a half _kira_ so that it wraps easily around the middle and is fastened with Velcro and hooks that adjust to your inevitably changing waistline. It falls exactly like a half _kira_ would if you put it on the traditional way. There are now infinitely more karaoke machines in the capital city than the two that existed in early 2007. The most interesting is at the Tiger Bar on Norzin Lam, for it allows you to sing along in Dzongkha. Toilet paper is now a far more common amenity in various public restrooms than it was during my first visit. Evidence of a growing leisure class abounds. Shades of Starbucks are evident in coffee shops that debuted on opposite sides of Thimphu in the fall of 2009 and have created a market for what had once been a rare find in the capital city: brewed, takeaway caffeine. There's also now a wine shop, which features a small selection of French, Australian, and even a couple of California vintages in addition to several from India. The first fast food to come to the city is also an import from Bhutan's giant neighbor; it's a franchise called Hot Dog. (Which happens to have been launched by Pema from Kuzoo, along with her entrepreneurial Indian boyfriend.) 
A second, called Tsab Tsab, Dzongkha for "fast fast," is modeled after McDonald's. Not long after the two-lane Thunder Bowl opened for business in a subterranean location in the center of Thimphu, a second movie theater debuted in a rapidly growing area at the edge of the city. This one boasts a state-of-the-art sound system. (The run-down theater in the center of town is still mobbed, despite the fact that it hasn't been cleaned in thirty years.) Each theater plays only Bhutanese films. The number of movies being produced is declining, though, as filmmakers encounter the difficulties of recouping their investments. The grand $450-a-night Taj Tashi hotel debuted in the center of town in the spring of 2008, and though it mirrors the Bhutanese-style architecture around it, its stateliness is a bizarre contrast to the rest of the buildings nearby. It caters mostly to visiting dignitaries and, at mealtime, an occasional sampling of the city's small expat crowd. It features a spa, a pricey bar called Ara where you can order martinis, and a swank Sunday brunch buffet, not unlike any other elegant tourist hotel in the world. Critics have wondered if it is sacrilegious for the hotel to have used common Buddhist religious artifacts as hardware throughout the grounds, such as horns for door handles. Sometime in 2010, the Bhutan Post intended to begin delivering mail residentially, after assigning the city's first-ever street addresses. A postal code was also in the works; the absence of one can make sending packages from outside the country a challenge. FedEx now delivers to Bhutan, but posts a long list of restricted items. There continues to be some confusion over how to refer to the four queens now that their husband is no longer king. Specifically, should the honorific Queen Mother be used for all four wives, or should that be reserved only for Ashi Tshering Yangdon Wangchuck, since she is the one who actually gave birth to the man who now reigns over Bhutan? 
Rumors that the new king ordered the removal of painted phalluses from the sides of buildings throughout the country because some tourists found them offensive turned out to be unfounded. The phalluses remain. And the authentic Bhutanese temple built on the National Mall in Washington, D.C., in the summer of 2008 for the Smithsonian Folklife Festival was acquired by the University of Texas at El Paso. UTEP's president has been advised to check with astrologers about the most appropriate location on the grounds, as well as the most prudent date for the work to begin. As for Bhutan's new democracy: Their first session saw heated internal debate about sitting fees for members, what they should be paid over and above their salaries for the days they must appear in session. After criticism in the media, the idea of the fees was dropped, but the elected officials awarded themselves a car allowance, as well as a subsidy to pay for a driver. A year into the first term, the National Assembly banned television cameras from covering their deliberations live. They argued that the complexity of the discussions confused the citizens. One critic said the parliamentary members' real concern was that they themselves don't really understand the issues and didn't want their constituents to see. The National Council immediately reversed course and said they would continue to allow live coverage of their proceedings. Even in its infancy, democratic governance appears to be disappointing, if not polarizing, the people. "I didn't expect they'd do anything except look out for themselves," one young man said to me in the winter of 2009. "Only His Majesty has our best interest in mind." # EPILOGUE: LOOSE MOTION THE BRIGHT SUN DOESN'T QUITE OFFSET THE CHILL in the air, but the incline of the walk up to the giant Buddha statue makes it feel a bit more temperate. It is a few weeks before Kinga Norbu's first birthday. 
He's home with Ngawang, sick with a ferocious cold; Pema and I decide to take this Saturday jaunt anyway, even though they can't come along. She's wearing sweats and carrying a backpack, and we're both wearing sunglasses and jabbering our way up the hill like the two old friends we are by now. We discuss Pema's new job at an organic products manufacturer, men, the state of the world, and gossip involving our various mutual friends. Pema's thrilled with the latest gift I've brought, a little Louis Vuitton purse. It's another hand-me-down from my friend Barbara. Pema can't believe how kind this woman she's never met has been to her. I can't believe I keep aiding and abetting Pema's brand-goods addiction, even if all it involves is being a Sherpa. A group of boys passes us; they make a disparaging remark about the two _chilips_ , which is rude slang for foreigners. Pema is flattered to be so incognito that she's unidentifiable as a Bhutanese, but also wants to put the boys in their place, so she starts nonchalantly speaking to them in their native tongue about the weather and where they're going. Turns out they're headed to the Buddha, too. Though they climb up through the woods and we stay on the paved road, built in anticipation of the carloads of visitors who will one day make this voyage, we arrive at the construction site at about the same time. A caretaker allows us in, breaking the rules since there's no work taking place today. When I was here last year, there was just a clearing; now the Buddha has begun to take shape, and the distinct gleam of the 140-foot structure peeks dramatically through the mask of scaffolding. I wonder if when it's complete it'll look majestic or Vegasesque or a bit of both. The boys are taking pictures. We do, too. "I love you, Lisa Jane," shouts Pema, as she snaps away at the Buddha; then she trains her point-and-shoot camera on the ever-widening footprint of the Thimphu Valley below. 
There are so many cranes and construction sites mirroring the one before us that it looks like a giant game of SimCity. "I love you back, Pema Lhamo," I say, and for a minute I forget how odd it is that I have this lovely friend, all these lovely friends on the other side of the world from my home. It feels less odd, really, than it does lucky. IN THE WINTER of 2010, three big things were occupying the minds of the people in Thimphu. The first was the premier occupant of a new six-story structure next to the clock tower. As weary Indian laborers frantically tacked on the roof, curious customers packed the ground floor of Druk Punjab, Bhutan's first commercial bank, sipping the free tea and eagerly signing up for new accounts. The bank was an outpost of an Indian concern, and it promised a critical link to the outside world that neither the established government-owned banks, the television, nor the Internet could: ATM cards that would work in India and in Bhutan, making it possible to travel for business or pilgrimage without wads of cash. There were other enticements, too: lower interest rates on loans that made it easier to build a new house or to buy one of the new cars from the fancy showrooms cropping up on the outskirts of the city. All this was big news in a place where just forty years ago there wasn't any cash money and where the idea of institutional lending to the masses was still a cutting-edge concept. The pièce de résistance was the bank's promise to introduce in a few months the ultimate trapping of capitalism: credit cards. As if to tamp down the encroaching acquisitive spirit just a bit, and remind the Bhutanese of their Buddhist roots, a series of meetings was being held by educators during the annual winter break to discuss exactly how to introduce the fundamentals of Gross National Happiness into the school curriculum. 
Lately the Bhutanese elite had begun to feel that outsiders were doing a better job of examining and practicing the national philosophy than they were themselves. A lama was deployed to teach meditation skills to the assembled principals of the schools; by taming their minds, the thinking went, they'd be better equipped to help their students tame theirs. The prime minister graced one of the sessions with a four-hour speech that summed up his concerns: Our challenge is that schools [should] not produce selfish economic animals who are only motivated to succeed at the cost of relationships, environment, and family. We have to convince the children that what parents have has nothing to do with who they are. Our little country, once so blissfully isolated in a remote corner of the Himalayas, seemingly protected by high mountain peaks, wisely and peacefully governed by a lineage of great enlightened monarchs, is now buffeted by powerful forces we could not have imagined or conceived just a generation ago. Though some have brought benefit, those powerful forces are not always benign, and some of them threaten not only our profound heritage but even our lives and land. What the prime minister _didn't_ mention was that the latest powerful force to ruffle Bhutanese prayer flags had come at his invitation, in the guise of the international consulting firm McKinsey & Company. This merry band of bright-eyed young MBAs, dispatched from McKinsey's offices in next-door India, had been hired for $9.1 million. Their mission was to evaluate the nation's inner workings and mine them for greater efficiencies and value. After a months-long inquisition at each of the ministries, the McKinseyites had handed down a wide-ranging series of observations and recommendations about how to better "brand" Bhutan. Chief among them was a push to monetize the GNH thing, for GNH was seen as Bhutan's most alluring (and therefore its most marketable) asset. 
To achieve this, the McKinsey team proposed nixing the tourist tariff and allowing guests to book directly with hotels; there would be no more wiring thousands of dollars to tour operators you'd never met in order to secure your visa, guides, drivers, and Druk Air tickets. The idea was to make it as easy as possible for tourists to enter Bhutan and ramp up the number of annual visitors to the country from 27,000—the high to date—to 100,000 a year. Those in the travel industry expressed fear that by eliminating the barriers to entry, the mystique of Bhutan as an exclusive, elite destination would be damaged. What they really feared was that the government was trying to put all but the best-established travel professionals out of business. Others worried that Bhutan might someday soon resemble Nepal, jammed with spiritual-seeking backpackers. Still others snarked that what McKinsey was recommending was simply not possible: If filled to capacity for 365 days, the two jets owned by Druk Air would hold only 93,000 humans. It seemed delusional at best and irresponsible at worst to imagine that a place that had worked feverishly for so long to keep the world out could possibly consider allowing so many people in. And more practically, they lacked the infrastructure to accommodate them all. Besides, how would those who did venture to Bhutan deal with the cult of spicy hot chilies? Bhutan was on the brink of yet more change, this time at the hands of highly paid advisors who wouldn't have to live with the consequences of their recommendations. I'M NOT SURE I've ever been so cold; I don't remember it being this frigid here in the winter. I'm in bed under the covers in a not-shabby hotel, far better than the guesthouse where my hosts first stuck me when I arrived last week, where the only thing covering the window in the bathroom was a sheet of newspaper and fleas danced off my suitcase. 
This room's a suite, and boasts a modern convenience not widely available in Bhutan: a wall-mounted electric heater. Not that it chugs out any measurable warmth—the room's too big, and the draft seeping in from the half-inch crack in the patio door is too ferocious. Central heat and insulation are inventions that haven't quite made their way here, much less the luxury of a warm bathroom. Even locals grit their teeth through their wintertime ablutions. All the clothing I've got on isn't offering much protection. Two layers on the bottom, four on top, a huge scarf from Bumthang that Ngawang gave me when I arrived, and a Yankees World Championship 2009 skullcap covering my head and ears. Oh, and fluffy pink chenille socks. I just took my hands out from under the three blankets to grab my cell phone and text in my vote for singer number 6 in the Druk Star contest the Bhutan Broadcasting Service has been running. Her shy smile and sweet trill captured my attention. From my woefully inadequate understanding of Dzongkha, I deduced that she's from a tiny district in the eastern part of the country, and that made me root for her all the more. For twenty seconds, number 6 diverted my distress over my frozen nose; plus she drowned out the cacophony of howling dogs—soprano, alto, and baritone—outside my window. At least this time, by overdosing on Cipro and carefully policing what I eat, I've managed to stave off that awful affliction of the stomach, euphemistically known as "loose motion." Which is, come to think of it, the best way I can think of to describe the changes that are unfolding so rapidly here in the so-called last Shangri-la. What I'm wondering at this moment (besides how I'm going to scurry to the toilet in this cold for the inevitable middle-of-the-night pee) is what to make of this McKinsey recommendation. My predisposition was to mistrust the suits. But maybe something positive would emerge from this odd marriage. 
For one thing, what purpose was that long-standing $200-a-day charge for all outsiders serving, really? The government took $65 off the top; the leftover $135 had given a generation of Bhutanese the illusion that they could become rich as tour operators, even if they had no interest in that kind of work. Even without the tariff, because of Bhutan's size and location, it would be difficult for it to become as jammed with visitors as Nepal or India. Wouldn't it? Besides, the Bhutanese were doing a fine job of polluting and littering the landscape themselves. An animated commercial on TV implored youngsters to stop tossing trash on the streets, and a recent report showed ten new cars a day entering Bhutan's roads. Just the other day I spotted a woman drinking from the first disposable cup I'd ever seen here. The government argued that expanded tourism would lead to more jobs, which were necessary to employ the growing population. Sixty percent of the residents were under the age of twenty-nine. They were better educated than the generations before them and had been watching TV for a decade, acquiring sophisticated wants and desires that could no longer be fulfilled on the farms and in the villages. Every day I heard another tale of another nanny crammed into a New York City apartment and supporting four Bhutanese back home. These incomes funded the purchase of apartments, cars, and siblings' educations back on the other side of the world, and fueled the aspirations of an alarming number of young people in Bhutan, too. Buddha, schmudda: In the front window of the public library hang children's drawings of Santa Claus. "I wish we had Christmas in Bhutan," reads a caption, "so we could get presents." Bhutan is facing a dilemma that belies the premise of Buddhism and of Gross National Happiness: It's human nature to want an easier way of life. And more stuff. Maybe the MBAs from McKinsey could impose some order on the chaos of Bhutan. 
Case study: The circumstances under which I'd come here this time would make even the most junior of strategic consultants' heads spin. The woman who invited me to make this trip, a high-level official at the Tourism Council—arguably the most powerful agency in Bhutan given that it brings in tens of millions of dollars to the country each year, which is second only to the revenue from hydropower exports to India—has gone missing. A friend of hers had told her I was looking for another volunteer gig, and she had written to say she urgently needed my help, for six months if possible. To my surprise, I found myself balking at that kind of commitment but agreeing to a two-week trip so that we might at least get acquainted. I confirmed the dates of my stay with her, and her assistant arranged the visa and local travel. When I arrived after my long journey—familiar, but no easier now that I've done it five times—my hostess was mysteriously absent. And no one who works for her quite knew what to do with me. Almost a week after I arrived in the country, the mystery was solved. A friend in the States wrote to tell me of a report he heard on public radio that mentioned that the queen of Bhutan (the reporter didn't get into the minutiae of how this particular queen was actually one of four) was appearing at the renowned Jaipur Literature Festival in India to promote her book. My would-be hostess, a woman charged with helping revamp the entire nation's future, is also expected to attend to her queen; when the queen goes on a trip, this woman is expected to accompany her. No one considers that her two roles might create a conflict. No one thinks it strange that I schlepped around the world, at my own expense, to find no instructions have been left for me. (I wonder what the McKinseyites, who preach the language of efficiency, must think of the idea that the royal family still takes precedence in this democracy. 
I also wonder how long it is before anyone—at the newspapers, in the private sector—openly starts investigating the finances of the royal family. Even in the burgeoning age of investigative journalism, that would still be unthinkable now.) Royalty has back-burnered my purportedly urgent mission. I know better by now than to be surprised by this. And yet, this time it annoys me. A lot. I don't want to be here, a _chilip_ in the land of chilies. I want to go home. I love this city, even if it is becoming ever grittier and more congested; the streetscape of Thimphu has infiltrated my dreams. For so much, I owe Bhutan an enormous debt of gratitude. But this is not where I belong. The view of the valley out my hotel room window is the same angle as that of the live shot on the BBS after-hours filler. I squint and wish I could transform the twinkling lights of the city before me into what I can see from my apartment window in Los Angeles. It occurs to me that if I shift the bed away from the wall and in front of the heater, I might soak up a bit more of its warmth. FOR A FEW days now—without much else to do and after hanging out with the staff of the new weekly newspaper, _Business Bhutan_ , and running into old friends—I've been trading text messages and emails with the opposition leader, Tshering Tobgay. He's statesmanlike in an Obamaesque way, supersmart, direct, striking, and accessible to the people through various channels: his blog, Facebook, Twitter. I met him briefly two summers ago in a shop in Thimphu, where I was buying a cell phone recharge voucher and potato chips on my way back to Kuzoo, and he breezed in to pick up a snack for his daughter, who was waiting in the car. "Hey, I just saw you on TV," I said. This was before the National Council voted to prohibit televised proceedings of its debates. "Yes." He smiled. "And who are you?" I explained that I'd been volunteering at Kuzoo. 
It turned out he'd heard my reports from the Smithsonian Folklife Festival and was a fan of public radio from his time at Harvard. That radio job may have driven me a bit batty, but it sure offered street cred. Now, a couple of years later, here we are on a windy winter afternoon at Karma's Coffee, sitting with our dueling MacBooks over a series of hot brewed coffees, commiserating about Bhutan's future like two old friends. I share what I've managed to glean about the McKinsey plans, how the MBAs are making lists of sacred sites and landmarks around the country and figuring out how to rank them as "products." The various districts of the country are being broken down into "circuits." "The Eastern circuit," as the young, smiling McKinsey lady describes it, will be where meditation centers are developed for Westerners who wish to travel to the "spiritual heartland" of Bhutan, as if there is such an actual location, and pay thousands of dollars to live authentically through homestays with the locals. Meditation centers in the last Buddhist kingdom? Isn't that like building igloos for Eskimos? Land is currently being claimed under rules of eminent domain and regional airports are being developed so that future visitors won't have to suffer through the rocky, undulating, death-defying twenty-four-hour drive across the country on the national "highway." Soon the previously underexplored side of Shangri-la, home to unspoiled natural wonders and simple farm people, will be more easily accessible to the outside world. How simple and unspoiled would they remain as a consequence? Tshering Tobgay is worried. He also knows that the widely held belief by Bhutanese about how rich outsiders are is a myth. "People here think you all have weeks and weeks of vacation time, too," he says. "As if you can just dash around the world for a few weeks with kids." Another thing that disgusts him is the business of "teaching" GNH in the schools.
"Not everything is demand and supply," Tshering Tobgay says, lifting his fourth cup of coffee. "You can't teach Gross National Happiness and inner peace. Next thing you know, McKinsey will be recommending that we buy an ad in the Super Bowl, like we're Coca-Cola." With that, he lifts up the lid of his MacBook and whips out his cell phone. "Have you been reading the blogs? People are so angry about all of this," he says. "Now, if you will please excuse me for a few minutes while I conduct a little business." I rise to leave, and he motions for me to sit down. First, he calls his webmaster, and then makes a series of calls to the media outlets to alert them to a press release that has just this moment been posted to his blog. In between, he directs me with a grin to his site, where he says what practically everyone I've been talking to, including my tourism department hostess, has been saying—but won't say publicly. The new constitution offers freedom of speech, but that document doesn't trump the hidebound tradition of loyalty to the state: The Opposition Leader called on the Minister of Economic Affairs... yesterday to express the Opposition Party's concerns on the Royal Government's recent policy decisions on tourism.... Liberalizing the tourist tariff will undermine the positive brand image that our country has carefully cultivated and enjoyed over the last three decades.... A target of 100,000 tourists per year by 2012 may be unsustainable and undesirable, given the country's existing absorptive capacity and small population base of barely 600,000 people, most of who still live in scattered communities. The opposition leader takes his title very seriously. As he works the phones to drum up a little media attention, I savor my ringside seat to the new democracy in action. Maybe next time I see Tshering Tobgay he'll be Bhutan's second elected prime minister. 
THE NEXT DAY, I'm waiting outside the offices of the Tourism Council of Bhutan for a friend to pick me up for lunch. A man passes by and asks where I'm from; even in Thimphu, it's still not common to see foreigners, especially at this time of year. "Los Angeles," I say. "California. United States. And you?" "Luentse," he says wearily. It's a remote district in Bhutan's untrafficked northeastern corner. I had just read about this corner of the country in the travel materials last night; there are hardly any roads there, the residents are unfettered by modernization, and few outsiders have trod the pristine, undeveloped terrain, which offers spectacular flowers and natural beauty. Just the kind of place the McKinseyites are working to commoditize. "I would love to see that," I said solicitously, although I wonder how my stomach would handle the food. "I hear it's very beautiful." "To you it may sound beautiful." He smiles as he gets into a Toyota Land Cruiser. "To us it is backward." ON THE LAST NIGHT of my stay, Ngawang and Mr. Japan stop in to say good-bye. They are coming from the hospital; the baby is better, home with the mother-in-law, but now it's his father's turn to be ill. He's availed himself of the free health care, as many people do, to make sure his flu is really just a flu. No wonder the McKinsey report has declared this system financially untenable in the long term. As we sip ginger tea made by Pema's new organic-products employer, we discuss the frustrations of my visit, Ngawang's displeasure at the ratty guesthouse where I was first housed, and how upset she is that while I volunteer and pay my way here, $700-a-day consultants are deployed for events like the GNH conference, which she'd covered for her new weekly show on the topic. I shrug and say it's okay, really. "Money is important, especially now, so I can buy things for Kinga Norbu," Ngawang says, "because if he's happy, I'm happy." I tsk-tsk as Mr. Japan looks at her sideways. 
He smiles his thoughtful smile, amused by his wife but not annoyed by her. "I used to think money mattered more than GNH," he says. "But now I'm not so sure. I see those poor African people on television, and they seem very happy, even if they don't have much." He looks weary; I know it's far colder in their house than it is here in my hotel room. No wonder they've all been sick. I ask him if he'd like my Yankees cap, and as he takes it, he gives me a warm hug. So does Ngawang, apologizing that we've not been able to spend more time together on this trip. They both remind me of their standing offer to visit their family in the countryside. After they head out into the darkness, I turn on the BBS and slip under the covers. I want to fall asleep to the sound of my friend Namgay reading the newscast, and wake up in the morning with the chanting monk who begins each broadcasting day. THREE WEEKS LATER, I'm back in Los Angeles, recovered from the jet lag and the bone-chilling cold, and I happen to check Facebook in the middle of the day. There's a recent posting by Aby, one of the editors at _Business Bhutan: Tourist tariff to be $250 a day 365/7 as of 2011_. The simplicity of the sentence urgently conveys "breaking news." I start searching the Net for more information but can't find any. Then I call up the opposition leader's blog. At about the same time as Aby's post, Tshering Tobgay has tweeted this: _Travelling back to Thimphu. Heard the good news that govt has decided to scrap its plans to liberalize tourist tariff_. It isn't until the next day that I get more information from another Bhutanese friend, in the form of an email. He tells me that the buzz of dissatisfaction on the streets led the prime minister to sit down with members of the tourism industry; as a result of the talks, the tourism tariff liberalization has been declared dead. Indeed, the government decided it shall be raised. 
That doesn't mean the hope of attracting more tourists is diminished; that goal remains the same, but a commitment to tourism that also preserves Bhutan has been made. A victory for free speech and public outcry. In the same batch of email, I receive two other Bhutan-related messages. One is from the friend of a friend of a friend, a retired high school principal in Canada who wants to "step out of her comfort zone" and volunteer in the kingdom, a dream she's long had. Can I help her find a way? The other is from a Bhutanese friend's eighteen-year-old daughter, who has been granted a scholarship to a college I've never heard of in remote Minnesota and needs to come up with $12,000 in boarding fees. Can I help her get a job? "An old-age home, a nanny, anywhere," she implores, and I know she really doesn't understand what Anywhere, USA, means or how hard it is to make and save that kind of money. I find myself inclined to help the first lady and to question and lecture the girl. But first I decide to go for a walk outside in the California sunshine. # ACKNOWLEDGMENTS LIFE IS A SERIES of random events that thread together in ways that lead to sometimes sweet, often spectacular, perhaps transformative, experiences. This book could not have come to be were it not for myriad fortuitous meetings over the course of my decades, several of which collided over the last several years to lead me to Bhutan. To that end, I must first thank my old dear friend Harris Salat, who introduced me to my new dear friend Sebastian Beckwith, who introduced me to the country that captured my heart. (That chain of connections traces back to when I was a teenager, when our family friend Adam Cohen introduced me to Hampshire College, where I met Mary Batts, who later insisted I meet Harris.) 
Jeffrey Tuchman introduced me to Barbara Osborn, who told me about her husband, John Drimmer, who led the happiness class that helped me begin to see the world more positively even before I ventured to Bhutan. Had Merrill Brown not gotten me hired at MSNBC a decade ago, I might not have met Bob Sullivan there, who introduced me to Jill Schwartzman at Random House, who then connected me to Dan Conaway at Writers House, who prodded my unformed musings about Bhutan into this book and became a dear and trusted advisor and friend, whom I can never adequately thank. Dan's assistant, Stephen Barr, is the epitome of a positively wired human with whom it is a delight to interact. Tina Constable, Kristin Kiser, and Heather Jackson at Crown invested in the project, and in me, for which I am eternally grateful; Lucinda Bartley, and ultimately, Sydny Miner, deftly shepherded the project to its end state and out into the world. Thanks to the entire Crown team for their enthusiasm and support. To the many people who make me feel welcome in Thimphu, among them: Ngawang Pem and Sonam Penjor; Pema Lhamo; Phub Dorji; Sherab Tenzin; and the original staff of Kuzoo FM, particularly Sir Pema and RJ Kinzang; Choki Wangchuk; Ian Alexander-Bell; Patrizia Franceschinis and Lyonpo Ugyen Tshering; Choeki and Ugyen at Rabten Apartments. Hans Keller, Penny Siekfer, Mark, Kat and Andy Schiffler; Ed Hanzcaryck; Pam Maruoka; Mayumi Futamura; Kunzang Choden and Walter Roder; Peter Hansen. Sandee at Seasons fed me when David Havens wasn't hosting meat night at his apartment behind Villa Italia. Ugen Choden, Kuenga Gyaltsen, Dawa Sherpa, and Bruce Bunting at the Bhutan Foundation, along with Preston Scott and everyone involved in and around the Smithsonian Folklife Festival. A special thanks to KB Lama for being such a dear and frank friend. In the United States, I am grateful for Rev. 
Kusala Bhikshu at the IBMC, all my teachers at the Ketchum YMCA, and to the Bunker Hill swimming pool (in particular, my neighbor and fellow swimmer George Moore). And for my friends and former colleagues at the public radio show _Marketplace_. A special hug to Bill Slemering for his support. The resources and support of the Library of Congress (in particular the Asian Reading Room), the University of Texas at El Paso (Special Collections Library), and the Los Angeles Public Library proved invaluable. Willie Quinn, thank you, and the same to Dr. Diana Natalicio. Pam and Kurt Meyer clued me in to the very existence of UTEP. I am blessed with an abundance of dear friends, but there are several in particular who slogged alongside me patiently during these last several years: Matthew Mirapaul, Bernie Woodall, Liz Dubelman, Paul Slansky and Grace Slansky, Alistrone Berger, Katherine Stern, Preston Wiles, Elizabeth Kaplan, Brian Averna, Jimmy Suskin, Barbara Rybka, and Maggie Curran. Joe Hutsko has long been my chief writing cheerleader. And to all my family, including and especially my parents, Vince and Jane, Aunt Kay, and my dear brother, James. Last, thanks to everyone who has ever graced my home on a Friday night, particularly the regulars... but especially to the greatest surprise, who appeared at just the right moment, namely Ted Habte-Gabr, and the Wagner family conspiracy to connect us. Speaking of which, here's to believing that the next person you meet could very well be a source of adventure, if not an agent of change. # SELECTED BIBLIOGRAPHY: IF YOU ARE INTERESTED IN LEARNING MORE ABOUT BHUTAN... THIS IS A SELECTION of books, articles, and Web sites about Bhutan. On the list, I've included an indispensable volume about Buddhism written by a Bhutanese, along with a movie made by that book's author that was filmed in the kingdom.
It is by no means an exhaustive bibliography of all that has been written about Bhutan—there are certainly others in the relatively small collection of published material about the place. And over the last two years, in conjunction with the centennial of the monarchy, many photographic and illustrated books have been released by small presses. What is listed here, however, would provide the curious student of Bhutan as much as he or she could possibly learn without moving there. Links to these materials can also be found on my personal Web site, www.lisanapoli.com. Aris, Michael. _Bhutan: The Early History of a Himalayan Kingdom_. Warminster, England: Aris & Phillips, 1979. ———. _The Raven Crown: The Origins of Buddhist Monarchy in Bhutan_. London: Serindia Publications, 1994. Choden, Kunzang. _Bhutanese Tales of the Yeti_. Bangkok, Thailand: White Lotus Press, 1997. ———. _Chilli and Cheese: Food and Society in Bhutan_. Bangkok, Thailand: White Lotus Press, 2008. ———. _The Circle of Karma_. New York: Penguin Group, 2005. ———. _Folktales of Bhutan_. Bangkok, Thailand: White Lotus Press, 1994. Collister, Peter. _Bhutan and the British_. London: Serindia Publications with Belitha Press, 1987. Davis, Samuel. _Views of Medieval Bhutan: The Diary and Drawings of Samuel Davis, 1783_. London: Serindia Publications; Washington, D.C.: Smithsonian Institution Press, 1982. Doig, Desmond. "Bhutan: The Mountain Kingdom," _National Geographic_ 120, no. 3 (September 1961). Dorji, Kinley. _Within the Realm of Happiness_. Thimphu, Bhutan: Produced by Siok Sian Pek Dorjee, 2008. Dowman, Keith, and Sonam Paljor, trans. _The Divine Madman: The Sublime Life and Songs of Drukpa Kinley_. 2nd ed. Middletown, CA: Dawn Horse Press, 1998. Khyentse, Dzongsar Jamyang. _What Makes You Not a Buddhist_. Boston: Shambhala, 2007. Lipsey, Rick. _Golfing on the Roof of the World: In Pursuit of Gross National Happiness_. New York: Bloomsbury, 2007. MacLaine, Shirley. _Don't Fall Off the Mountain_. 
New York: Norton, 1970. Meyer, Kurt, and Pamela Deuel Meyer. _In the Shadow of the Himalayas: Tibet, Bhutan, Nepal, Sikkim; A Photographic Record by John Claude White 1883–1908_. Ocean City, NJ: Grantha Corporation, 2003. Pommaret, Françoise. _Bhutan: Himalayan Mountain Kingdom_. 5th ed. Translated by Elizabeth B. Booz and Howard Solverson. Hong Kong: Odyssey Books & Guides, 2005. Rustomji, Nari. _Bhutan: The Dragon Kingdom in Crisis_. Delhi; New York: Oxford University Press, 1978. ———. _Enchanted Frontiers: Sikkim, Bhutan, and India's Northeastern Borderlands_. Bombay: Oxford University Press, 1971. Scofield, John. "Bhutan Crowns a Dragon King," _National Geographic_ 146, no. 4 (October 1974). Solverson, Howard. _The Jesuit and the Dragon: The Life of Father William Mackey in the Himalayan Kingdom of Bhutan_. Montreal, Canada: R. Davies Pub., 1995. Todd, Burt Kerr. "Bhutan: Land of the Thunder Dragon," _National Geographic_ 102, no. 6 (December 1952). _Travellers and Magicians_. Directed by Khyentse Norbu. New York: Zeitgeist Films, 2005. Wangchuck, Ashi Dorji Wangmo. _Treasures of the Thunder Dragon: A Portrait of Bhutan_. New Delhi; New York: Viking, 2006. White, John Claude. "Castles in the Air," _National Geographic_ 25, no. 9 (April 1914). ———. _Sikhim and Bhutan: Twenty-one Years on the North-East Frontier, 1887–1908_. London: E. Arnold, 1909. Williamson, Margaret D. _Memoirs of a Political Officer's Wife in Tibet, Sikkim, Bhutan_. London: Wisdom, 1987. Yadav, Lal Babu. _Indo-Bhutan Relations and China Interventions_. New Delhi, India: Anmol Publications, 1996. Zeppa, Jamie. _Beyond the Sky and the Earth: A Journey into Bhutan_. New York: Riverhead Books, 1999. NOT ONLY HAS Bhutan never been colonized but Christian missionaries have not had much luck there, either. In the seventeenth century, two Jesuits made their way in. The account of their interactions with Bhutan's then leader, the Shabdrung Ngawang Namgyal, is on the Web, here: Baillie, Luiza Maria. 
"Father Estevao Cacella's Report on Bhutan in 1627." _Journal of Bhutan Studies_ 1, no. 1 (Autumn 1999). <http://www.digitalhimalaya.com/collections/journals/jbs>. ## WEB SITES _Media_ KUZOO FM WWW.KUZOO.NET CENTENNIAL RADIO WWW.THIMPHU101.BLOGSPOT.COM BHUTAN BROADCASTING SERVICE WWW.BBS.COM/BT KUENSEL WWW.KUENSELONLINE.COM BUSINESS BHUTAN WWW.BUSINESSBHUTAN.BT BHUTAN OBSERVER WWW.BHUTANOBSERVER.BT BHUTAN TIMES WWW.BHUTANTIMES.COM BHUTAN TODAY WWW.BHUTANTODAY.BT _Nonprofit Organizations_ BHUTAN CENTRE FOR MEDIA AND DEMOCRACY WWW.BHUTANCMD.ORG.BT BHUTAN FOUNDATION WWW.BHUTANFOUND.ORG CENTRE FOR BHUTAN STUDIES WWW.BHUTANSTUDIES.ORG.BT TÂRÂYANA FOUNDATION WWW.TARAYANAFOUNDATION.ORG VAST: VOLUNTARY ARTISTS' STUDIO, THIMPHU WWW.VAST-BHUTAN.ORG _Government_ NATIONAL PORTAL OF BHUTAN (OFFICIAL GOVERNMENT SITE) WWW.BHUTAN.GOV.BT THE CONSTITUTION OF THE KINGDOM OF BHUTAN WWW.CONSTITUTION.BT TOURISM COUNCIL OF BHUTAN WWW.TOURISM.GOV.BT # ABOUT THE AUTHOR BEFORE LISA NAPOLI fell in love with Bhutan, she chronicled the dawn of the Internet era as a reporter and columnist for the first Web site of the _New York Times_ , and later as a correspondent and columnist for MSNBC. She most recently worked as a reporter and fill-in host for the public-radio show _Marketplace_. Earlier in her career, Napoli directed two short documentaries about southern culture and field produced coverage of the 1992 Clinton presidential campaign and the Waco standoffs. A native of Brooklyn, she is a graduate of Hampshire College in Amherst, Massachusetts. You can learn more about Bhutan and the author at www.lisanapoli.com.
Fly past the competition with yard signs from SpeedySignsUSA! We'll make sure your County Board yard signs stand out from the competition and help you secure the victory! Political signs are crucial to your campaign for County Board. That's why we make sure our yard signs will withstand harsh weather conditions and time. When you buy yard signs from SpeedySignsUSA you're buying quality signs made right here in the USA.
Okay. So if you have a garden--or even if you don't--now is the time to seize summer's bounty. Whether you grow it yourself or purchase it at the store, the time is ripe for summer vegetables. And when the vegetables are as perfectly ripe as they are right now, eating them raw (or some lightly cooked) with the simplest preparation is the way to go. The below recipe is just a guide. Use whatever vegetables and herbs that your garden or local market has. But here's how I made mine. Dice a perfectly ripe tomato--or two if you're eating with someone else--and as much cucumber as you think you'll eat. Combine it in a bowl with a few slices of raw onion, a minced garlic clove, a sliced hot pepper, a handful of chopped parsley, and also basil. Sprinkle the salad with sea salt, then drizzle it with a tablespoon or two of extra virgin olive oil and good quality wine vinegar. Gently toss together and taste summer.
Color Finish | SeaStone Precast, Inc. In any design, it's the little details that are the most important, the ones that show the thought and sophistication put into an idea. This is no less important in planning your residential and commercial project. Thankfully, SeaStone Precast, Inc. offers superior handmade quality products. Color selection must be made from actual precast samples, not from printed material or the website. Color variation is to be expected.
Morrison brings tax hikes for most and a new war against the unemployed
Image from 7NEWS
Posted By: Editor | 3 November 2020 | Contributed by Jim Hayes
Modelling provided by the Bankwest Curtin Economics Centre has revealed that millions of Australians will be left worse off by the Morrison government's tax cuts for the richest, and that the winding down of the JobSeeker supplement and the return to a tough line against the unemployed will force many into poverty. This is the real face of the government's economic policy, and understanding this is the key to an alternative vision of what could be. The tax cuts will leave most people about $1,000 out of pocket on average. The lower down the income scale, the higher the proportion of the cut. This is being brought about by the ending of the Low and Middle Income Tax Offset (LMITO), the amount that can currently be earned before tax has to be paid on it. It means less money in people's hands to pay their bills or spend down at the shops. Hardly what is needed in the present economic climate. Only those with incomes over $90,000 a year will be getting a tax cut. The JobSeeker supplement has been halved from $550 a fortnight to $250. Although the government seems for now to be backtracking from the political fallout and postponing the rest of the cut, originally due in December, the end game of cutting it off completely remains intact. This has been made clear. There will be a return to $40 a day. 
Other negative shifts are a return to the hard-line punitive methods, bureaucratic hoops, and incentives provided to private agencies, which have seen 27,000 Australians have their JobSeeker payments cut off since September. Another 4,000 have had their payments stopped pending further investigation. The debt recovery program carried out under the infamous Robo Debt system was renewed on 2 November. Assumed debts will be chased again from February next year. Together, the tax increase for the majority and the cut to payments to the unemployed mark a significant attack on living standards. It isn't stopping here. The share going to age pensioners, sole parents, and those with disabilities continues its downward trajectory. This is not a plan for economic recovery. Even if it were, do we really want a type of recovery that leaves all but a few worse off? The Morrison government's moves also fail if judged in terms of justice. The policy is to restore profitability to some businesses by taking income out of the pockets of others. This will undermine the conditions which most other businesses, especially smaller ones, need to get by. By recognising this wrong direction and rejecting it, Australia can move to a different vision: one bringing Australians together for a shared effort, where no one gets left behind. This is what we must talk about and come up with practical measures to implement. 
On March 31 the DEA published a proposal to allow electronic prescriptions for narcotics (Docket No. DEA-218I). The effective date for this is June 1, 2010 pending congressional review. The RFC section gives insight into how they plan to implement (bold text added by yours truly): Identity proofing, access control, authentication, biometric subsystems and testing of those subsystems, internal audit trails for electronic prescription applications, and third-party auditors and certification organizations. It looks like there will be a requirement to be "certified" to perform electronic fill of narcotic prescriptions, but is that really enough (think Heartland)? 1. "The responsibility for the proper prescribing and dispensing of controlled substances is upon the prescribing practitioner, but a corresponding responsibility rests with the pharmacist who fills the prescription." – This makes sense, but also indicates that they will likely follow a path where the responsible parties determine the means by which they accomplish an outline of requirements surrounding security related to narcotics prescription. Ask yourself this: Did HIPAA end internal patient record theft? 2. "[M]ost electronic prescriptions are routed from the electronic prescription or EHR application through intermediaries, at least one of which determines whether the prescription file needs to be converted from one software version to another so that the receiving pharmacy application can correctly import the data. There are generally three to five intermediaries that route prescriptions between practitioners and pharmacies." – This points to the lack of standards, potential for screw ups and also multiple points of potential abuse. I am still reviewing the text document (it is long) but I am also preparing and educating myself in this area – I feel some cases coming.
\section{Preliminaries} A graph is said to be \textit{(2-cell-)embedded} in a surface $M$ if it is ``drawn'' in $M$ such that edges intersect only at their common vertices and deleting the graph from $M$ yields a disjoint union of disks. A graph is said to be \textit{planar} if it can be embedded in the plane. By the \textit{genus} of a graph $X$ we mean the minimum genus among all surfaces in which $X$ can be embedded. So if $X$ is planar then the genus of $X$ is zero. If a non-planar graph can be embedded on the torus, that is, on the orientable surface of genus 1, it is called \textit{toroidal}. A graph is said to be \textit{outer planar} if it has an embedding such that one face is incident to every vertex. It is known that each group can be defined in terms of generators and relations, and that corresponding to each such (non-unique) presentation there is a unique graph, called the Cayley graph of the presentation. A ``drawing'' of this graph gives a ``picture'' of the group from which certain properties of the group can be determined. The same principle can be used for other algebraic systems. So algebraic systems with a given system of generators will be called \textit{planar} or \textit{toroidal} if the respective Cayley graphs can be embedded on the plane or on the torus. Finite planar groups have been cataloged by Maschke~\cite{M}. On the basis of Maschke's Theorem, in this work we investigate embeddings of certain completely regular semigroups (unions of groups), namely of right groups. This is a continuation of the investigations from~\cite{Xia} where Clifford semigroups were in focus. Here our attention is restricted to a special class of presentations of right groups for which we classify the toroidal right groups. Note that this generally only gives upper bounds on the genus of right groups. The full determination of the genus will be studied in a subsequent paper~\cite{kk}. 
We use $K_n$ for the complete graph on $n$ vertices, $C_n$ for the cycle on $n$ vertices, and $K_{n,n}$ for the respective complete bipartite graph. We denote the cyclic group of order $n$ by $\mathbb{Z}_n=\{0,\dots,n-1\}$, and the dihedral, symmetric and alternating groups by $D_n$, $S_n$ and $A_n$, respectively. We recall that a \textit{right group} is a semigroup of the form $G\times R_r$ where $G$ is a group and $R_r$ is a right zero semigroup, i. e., $R_r=\{r_1,\dots,r_r\}$ with the multiplication $r_ir_j=r_j$ for $r_i,r_j\in R_r$. Every semigroup presentation is associated with a \textit{Cayley color graph}: the vertices correspond to the elements of the semigroup; next, imagine the generators of the semigroup to be associated with distinct colors. If vertices $v_{1}$ and $v_{2}$ correspond to semigroup elements $s_{1}$ and $s_{2}$ respectively, then there is a directed edge (of the color of the generator $e$) from $v_{1}$ to $v_{2}$ if and only if $s_{1}e=s_{2}$. It is also possible to construct a Cayley color graph by action from the left. It is clear that for semigroups the structure of this graph may change heavily when changing the side of the action. In this note we consider the graph obtained from the Cayley color graph by suppressing all edge directions and all edge colors, deleting loops and multiple edges, that is, the uncolored Cayley graph. It is clear that in passing from the Cayley color graph to the corresponding uncolored graph algebraic information is lost, but the genus is not changed. We call this graph \textit{Cayley graph} and denote it by ${\it Cay}(S,C)$ for the semigroup $S$ with the set of generators $C\subseteq S$. The reader is referred to \cite{GT}, \cite{IK}, \cite{Kilp}, \cite{Pe}, \cite{White} and \cite{Xia} for the terminology and notations which are not given in this paper. We need the following results. 
\begin{resu}\label{Euler}{\em (Euler, Poincar{\'e} 1758)} A finite graph with $n$ vertices, $m$ edges, which is $2$-cell embedded on an orientable surface $M$ of genus $g$ with $f$ faces fulfills the Euler-Poincar{\'e} formula: $n-m+f=2-2g$. \end{resu} \begin{resu}\label{genuslemm1}{\em (Maschke 1896)} The finite group $G$ is planar if and only if $G=G_{1} \times G_{2}$, where $G_{1}=\mathbb{Z}_{1}$ or $\mathbb{Z}_{2}$ and $G_{2}=\mathbb{Z}_{n}$, $D_{n}$, $S_{4}$, $A_{4}$ or $A_{5}$. \end{resu} \begin{rema} \label{generatorrema}\rm{It is clear that planarity depends on the set of generators $C$ chosen for the Cayley graph. For example ${\it Cay}(\mathbb{Z}_6,\{1\})=C_6$ and also ${\it Cay}(\mathbb{Z}_6,\{2,3\})$, which is the box product $C_3\Box K_2$, is planar, but ${\it Cay}(\mathbb{Z}_6,\{1,2,3\})=K_6$ is not. For the planar groups $D_{n}$, $S_{4}$, $A_{4}$ or $A_{5}$ we get various Archimedean solids as Cayley graph representations, with two or three generators \cite{www}}. \end{rema} \begin{resu}\label{genuslemm2}{\em (Kuratowski 1930)} A finite graph is planar if and only if it does not contain a subgraph that is a subdivision of $K_{5}$ or $K_{3,3}$. \end{resu} \begin{resu}\label{genuslemm3}{\em (Chartrand, Harary 1967)} A finite graph is outer planar if and only if it does not contain a subgraph that is a subdivision of $K_4$ or $K_{2,3}$. \end{resu} \section{The $Cay$-functor and right groups} For most of the considerations we can use the following two results which we take from \cite{UX}. However, as far as we know, there do not exist general formulas which relate the genus of a cross product or a lexicographic product of two graphs to the genera of the factors, compare for example \cite{GT}, \cite{IK} or \cite{White}. Some of the difficulties with respect to the lexicographic product can be seen in Example \ref{prop:D3}. We denote by $\times$ the \textit{cross product} for graphs and also the direct product for semigroups and sets. 
By $X[Y]$ we denote the \textit{lexicographic product} of the graph $X$ with the graph $Y$. \begin{prop}\label{theo2} For semigroups $S$ and $T$ with subsets $C$ and $D$, respectively, we have ${\it Cay}(S\times T, C\times D)={\it Cay}(S,C)\times {\it Cay}(T,D)$. \end{prop} Note that if in the above formula the semigroup $T$ is $R_r$ its graph ${\it Cay}(R_r,R_r)$ has to be considered as $K_r^{(r)}$, i. e., the complete graph with $r$ loops. \begin{prop}\label{theo2a} Let $S$ be a monoid with identity $1_S$, $T$ a semigroup, $C$ and $D$ subsets of $S$ and $T$ respectively. Then $${\it Cay}(S\times T, (C\times T) \cup (\{1_S\}\times D))={\it Cay}(S,C)[{\it Cay}(T,D)]$$ if and only if $tT=T$ for any $t\in T$, that is if and only if $T$ is a right group. \end{prop} \begin{rema} \rm{A formal description of the relation between graphs and subgraphs which are subdivisions with the help of the $Cay$-functor on semigroups with generators seems to be difficult. In ${\it Cay}(\mathbb{Z}_6,\{1\})$ we find a subdivision of $K_3$ corresponding to ${\it Cay}(\{0, 2, 4\}, \{2\})$, as a subgraph. But subdivision is not a categorical concept. And there is no inclusion between $\{0, 2, 4\}\times \{2\}$ and $\mathbb{Z}_6\times \{1\}$. } \end{rema} \section{The embeddings} Now we determine the minimal genus among the Cayley graphs ${\it Cay}(G\times R_r, C\times R_r)$ taken over all minimum generating sets $C$ of the group $G$. We do not claim that an embedding of this graph gives the (minimal) genus of the right group considered. Generally $G\times R_r$ may have a generating system $C'\neq C\times R_r$ which yields a Cayley graph with fewer edges and consequently tends to have a smaller genus. A straight-forward calculation yields the following lemma. Note that the first equality can also be obtained by applying Proposition \ref{theo2a} in the form ${\it Cay}(G\times R_r, (C\times R_r) \cup (\{1_G\}\times \emptyset))={\it Cay}(G,C)[{\it Cay}(R_r,\emptyset)]$. 
\begin{lemm} Denote by ${\it Cay}(G,C)[\overline K_r]$ the lexicographic product of ${\it Cay}(G,C)$ with $r$ isolated vertices. We have ${\it Cay}(G\times R_r, C\times R_r)={\it Cay}(G,C)[\overline K_r]$. \end{lemm} Note that this product can be seen as replacing every vertex of ${\it Cay}(G,C)$ by $r$ independent vertices and every edge by a $K_{r,r}$. In particular $K_{k,k}[\overline K_r]=K_{kr,kr}$. \begin{prop} If ${\it Cay}(G,C)$ is not planar then ${\it Cay}(G\times R_r, C\times R_r)$ cannot be embedded on the torus. \end{prop} \begin{proof} Already $K_{3,3}[\overline K_{2}]\cong K_{6,6}$ has genus 4. Moreover, the graph $K_5[\overline K_{2}]$ has 10 vertices and 40 edges. An embedding on the torus would have 30 faces by the formula of Euler-Poincar{\'e}. Even if all faces were triangles in this graph, this would require 45 edges. So the graphs are not toroidal. \end{proof} \begin{prop} If $r\geq 5$ then ${\it Cay}(G\times R_r, C\times R_r)$ cannot be embedded on the torus. \end{prop} \begin{proof} The resulting graph contains $K_{5,5}$, which has genus 3, compare~\cite{White}. \end{proof} \begin{prop}\label{prop:K22} If ${\it Cay}(G,C)$ contains a $K_{2,2}$ subdivision and $r\geq 3$ then ${\it Cay}(G\times R_r, C\times R_r)$ cannot be embedded on the torus. \end{prop} \begin{proof} The resulting graph contains $K_{6,6}$, which has genus 4, compare~\cite{White}. \end{proof} Hence, for the rest of the paper we will check all planar groups $G$ and $1\leq r\leq 4$ for ${\it Cay}(G\times R_r, C\times R_r)$ having genus 1. \begin{lemm}\label{lem:threereg} If the vertex degree of a planar ${\it Cay}(G,C)$ is at least $3$ then ${\it Cay}(G\times R_2,C\times R_2)$ cannot be embedded on the torus. \end{lemm} \begin{proof} Since ${\it Cay}(G,C)$ is at least $3$-regular, ${\it Cay}(G\times R_2,C\times R_2)$ is at least $6$-regular. 
Assume that ${\it Cay}(G\times R_2,C\times R_2)$ is embedded on the torus; then the formula of Euler-Poincar{\'e} yields that all faces are triangular. This implies that every edge of ${\it Cay}(G\times R_2,C\times R_2)$ lies in at least two triangles, hence every edge of ${\it Cay}(G,C)$ lies in at least one triangle. Let $c_1,c_2,c_3\in C$ be the generators corresponding to a triangle $a_1,a_2,a_3$. Then $c_1^{\pm 1}c_2^{\pm 1}c_3^{\pm 1}=1_G$ for some signing, where $1_G$ is the identity in $G$. If any two of the $c_i$ are distinct then one of the two is redundant, hence $C$ was not inclusion minimal. Thus every $c\in C$ must be of order $3$. Since $G$ is not cyclic we obtain that ${\it Cay}(G,C)$ is at least $4$-regular. The formula of Euler-Poincar{\'e} yields that the at least $8$-regular ${\it Cay}(G\times R_2,C\times R_2)$ cannot be embedded on the torus. \end{proof} \begin{figure}[ht] \psfrag{a}[cc][cc]{$0$} \psfrag{b}[cc][cc]{$1$} \psfrag{c}[cc][cc]{$2$} \psfrag{d}[cc][cc]{$3$} \psfrag{a'}[cc][cc]{$0'$} \psfrag{b'}[cc][cc]{$1'$} \psfrag{c'}[cc][cc]{$2'$} \psfrag{d'}[cc][cc]{$3'$} \psfrag{a''}[cc][cc]{$0''$} \psfrag{b''}[cc][cc]{$1''$} \psfrag{c''}[cc][cc]{$2''$} \psfrag{d''}[cc][cc]{$3''$} \begin{center} \includegraphics[width = \textwidth]{pics/Z34R23.eps} \caption{The planar ${\it Cay}(\mathbb{Z}_3\times R_2, \{1\}\times R_2)$, the toroidal ${\it Cay}(\mathbb{Z}_3\times R_3, \{1\}\times R_3)$ and ${{\it Cay}}(\mathbb{Z}_4\times R_2, \{1\}\times R_2)\cong K_{4,4}$} \label{fig:Z34R23} \end{center} \end{figure} \begin{prop}\label{prop:main} The minimum genus of ${\it Cay}(\mathbb{Z}_n\times R_r, C\times R_r)$ among all generating systems $C$ is $1$ iff $(n,r)\in\{(2,3),(2,4),(3,3),(i,2)\}$ for $i\geq 4$. \end{prop} \begin{proof} By Lemma~\ref{lem:threereg} we can assume $C=\{1\}$. For $n=2$ we have ${\it Cay}(\mathbb{Z}_2\times R_r, C\times R_r)=K_{r,r}$, which exactly for $r\in\{3,4\}$ has genus 1. Take $n=3$. 
If $r=2$ we obtain the planar graph ${\it Cay}(\mathbb{Z}_3\times R_2, \{1\}\times R_2)$ shown in Figure~\ref{fig:Z34R23}. If $r=3$ the resulting graph contains $K_{3,3}$, so it cannot be planar. Figure~\ref{fig:Z34R23} shows an embedding as a triangular grid on the torus. If $r=4$ we have the complete tripartite graph $K_{4,4,4}$. Delete the entire set of $16$ edges between two of the partitioning sets. The remaining (non-planar) graph has $12$ vertices, $32$ edges and, assuming a toroidal embedding, $20$ faces. A simple count shows that this cannot be realized without triangular faces. So for $r\geq 4$ the graph ${\it Cay}(\mathbb{Z}_3\times R_r, C\times R_r)$ is not toroidal. Take $n\geq 4$. Now the graph ${\it Cay}(\mathbb{Z}_n,\{1\})$ contains a $C_4=K_{2,2}$ subdivision. If $r\geq 3$ then ${\it Cay}(\mathbb{Z}_n\times R_r, \{1\}\times R_r)$ is not toroidal by Proposition~\ref{prop:K22}. If $r=2$ an embedding of ${\it Cay}(\mathbb{Z}_4\times R_2, \{1\}\times R_2)$ as a square grid in the torus is shown in Figure~\ref{fig:Z34R23}. This is instructive for the cases $n\geq 5$. Moreover we see that the vertices $\{0,0',2\}$ and $\{1,1',3\}$ induce a $K_{3,3}$ subgraph of ${\it Cay}(\mathbb{Z}_4\times R_2, \{1\}\times R_2)$. Generally for $n\geq 4$ we have that ${\it Cay}(\mathbb{Z}_n\times R_2, \{1\}\times R_2)$ contains a $K_{3,3}$ subdivision; hence it is not planar. \end{proof} \begin{theo} \label{maintheo} Let $G\times R_r$ be a finite right group. 
The minimal genus of ${\it Cay}(G\times R_r,C\times R_r)$ among all generating sets $C\subseteq G$ of $G$ is $1$ iff $G\times R_r$ is one of the following right groups: \begin{itemize} \item $\mathbb{Z}_n\times R_r$ with $(n,r)\in\{(2,3),(2,4),(3,3),(i,2)\}$ for $i\geq4$ \item $\mathbb{Z}_2\times\mathbb{Z}_{2n+1}\times R_2$ for $n\geq 1$ \item $D_n\times R_2$ for all $n\geq 2$ \item $\mathbb{Z}_2\times D_n\times R_2$ for all $n\geq 2$ \end{itemize} \end{theo} \begin{proof} Since $\mathbb{Z}_2\times\mathbb{Z}_{2n+1}\cong \mathbb{Z}_{4n+2}$, Proposition~\ref{prop:main} proves the first two sets of right groups to have the desired property. Observe that ${\it Cay}(D_n,C)$, where $C$ consists of two generators $g_1, g_2$ of order 2, is isomorphic to ${\it Cay}(\mathbb{Z}_{2n}, \{1\})$. Thus it is planar, and by Proposition~\ref{prop:main} ${\it Cay}(D_n\times R_2,\{g_1,g_2\}\times R_2)$ can be embedded on the torus. Any other generating system for $D_n$ yields ${\it Cay}(D_n,C)$ with degree at least $3$, hence by Lemma~\ref{lem:threereg} it cannot be embedded on the torus and in particular is non-planar. The only generating system for $\mathbb{Z}_2\times D_n$ which escapes the preconditions of Lemma~\ref{lem:threereg} is $C=\{(1,g_1),(0,g_2)\}$ and indeed ${\it Cay}(\mathbb{Z}_2\times D_n,C)\cong C_{4n}\cong {\it Cay}(\mathbb{Z}_{4n},\{1\})$. Thus ${\it Cay}(\mathbb{Z}_2\times D_n\times R_2,C\times R_2)$ is toroidal by Proposition~\ref{prop:main}. Let $G\in\{A_4,S_4,A_5, \mathbb{Z}_2\times A_4, \mathbb{Z}_2\times S_4, \mathbb{Z}_2\times A_5, \mathbb{Z}_2\times \mathbb{Z}_{2n}\}$ for $n\geq 2$. It can be checked that $G$ cannot be generated by two elements of order two. Since $G$ is not cyclic we have $|\{g\in C\mid \textmd{ord}(g)=2\}|+2|\{g\in C\mid \textmd{ord}(g)\geq 3\}|\geq 3$ for every generating system $C$ of $G$. Thus, by Lemma~\ref{lem:threereg} we know that ${\it Cay}(G\times R_2,C\times R_2)$ cannot be embedded on the torus. 
\end{proof} In the above proofs we make strong use of Lemma~\ref{lem:threereg}, which tells us that $3$-regular planar Cayley graphs will not be embeddable on the torus after taking the Cartesian product with $R_2$. In fact, this operation can increase the genus from $0$ to $3$ already in the following small example. \begin{figure}[ht] \psfrag{0}[cc][cc]{$0$} \psfrag{1}[cc][cc]{$1$} \psfrag{2}[cc][cc]{$2$} \psfrag{3}[cc][cc]{$3$} \psfrag{4}[cc][cc]{$4$} \psfrag{5}[cc][cc]{$5$} \psfrag{0'}[cc][cc]{$0'$} \psfrag{1'}[cc][cc]{$1'$} \psfrag{2'}[cc][cc]{$2'$} \psfrag{3'}[cc][cc]{$3'$} \psfrag{4'}[cc][cc]{$4'$} \psfrag{5'}[cc][cc]{$5'$} \psfrag{A}[cc][cc]{$X$} \psfrag{B}[cc][cc]{$Y$} \psfrag{C}[cc][cc]{$Z$} \begin{center} \includegraphics[width = .6\textwidth]{pics/D3R2.eps} \caption{${\it Cay}(\mathbb{Z}_6\times R_2, \{2,3\}\times R_2)$ in the triple torus with handles $X$, $Y$, $Z$.} \label{fig:D3R2} \end{center} \end{figure} \begin{exam}\label{prop:D3} The genus of ${\it Cay}(\mathbb{Z}_6\times R_2, \{2,3\}\times R_2)$ is $3$. Note that ${\it Cay}(\mathbb{Z}_6\times R_2, \{2,3\}\times R_2)\cong(C_3\Box K_2)[\overline K_2]$. \end{exam} \begin{proof} To see this we observe that ${\it Cay}(\mathbb{Z}_6\times R_2, \{2,3\}\times R_2)$ consists of two disjoint copies $C_3\Box K_2$ and $(C_3\Box K_2)'$ of ${\it Cay}(\mathbb{Z}_6,\{2,3\})$ with vertex sets $\{0,1,2,3,4,5\}$ and $\{0',1',2',3',4',5'\}$, respectively. Every vertex $v$ of $C_3\Box K_2$ is adjacent to every neighbor of its copy $v'$ in $(C_3\Box K_2)'$. Figure~\ref{fig:D3R2} shows an embedding of ${\it Cay}(\mathbb{Z}_6\times R_2, \{2,3\}\times R_2)$ into the orientable surface of genus $3$ -- \textit{the triple torus}. This graph is $6$-regular with $12$ vertices, so it has $36$ edges. By Lemma~\ref{lem:threereg} ${\it Cay}(\mathbb{Z}_6\times R_2, \{2,3\}\times R_2)$ cannot be embedded on the torus. So assume that ${\it Cay}(\mathbb{Z}_6\times R_2, \{2,3\}\times R_2)$ is 2-cell-embedded on the double torus. 
Delete the $4$ edges between $1,1'$ and $5,5'$ and the $4$ edges between $0,0'$ and $4,4'$. The resulting graph $H$ has $28$ edges. It consists of two graphs $A$ and $B$, which are copies of $K_{4,4}$, where $A$ has the bipartition $(\{0,0',5,5'\}$, $\{2,2',3,3'\})$ and $B$ has $(\{0,0',1,1'\}$, $\{3,3',4,4'\})$. They are glued at the four vertices with the same numbers and the corresponding $4$ edges are identified. Although $H$ is no longer bipartite, it is still triangle-free. Hence by our assumption it is 2-cell-embedded on the double torus. By the formula of Euler-Poincar{\'e} this gives 14 faces and consequently all of them are quadrangular. So the edges between $1,1'$ and $5,5'$ and between $0,0'$ and $4,4'$, which we have to put back in, have to be diagonals of these quadrangular faces. But then $\{2',4,2,0\}$ and $\{2',4,2,0'\}$ are the only 4-cycles in $H$ which contain the vertices $4,0$ and $4,0'$, respectively; hence they form faces of $H$. Since they have the common edges $\{2',4\}$ and $\{2,4\}$ we obtain a $K_{2,3}$ with bipartition $(\{2,2'\},\{0,0',4\})$. It is folklore that $K_{2,3}$ is not outer planar. Thus the region consisting of the glued $4$-cycles $\{2',4,2,0\}$ and $\{2',4,2,0'\}$ must contain one of the vertices $0,0'$ or $4$ in its interior. Hence this vertex has only degree $2$ -- a contradiction. \end{proof} \noindent We thank Xia Zhang for many helpful comments as well as Srichan Arworn, Nirutt Pipattanajinda and several graduate students with whom one of the authors discussed the topic extensively on a research stay at Chiangmai University, Thailand -- supported by Deutsche Forschungsgemeinschaft.
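The identity ${\it Cay}(G\times R_r, C\times R_r)={\it Cay}(G,C)[\overline K_r]$ and the edge counts used in the proofs above can also be checked mechanically. The following Python sketch is our own illustration, not part of the paper; the function names and the restriction to $G=\mathbb{Z}_n$, $C=\{1\}$ are choices made here for brevity.

```python
from itertools import product

def cayley_edges(elems, mult, gens):
    # Undirected Cayley graph: an edge {s, s*c} for every element s and
    # generator c; loops are dropped, parallel edges collapse in the set.
    edges = set()
    for s in elems:
        for c in gens:
            t = mult(s, c)
            if t != s:
                edges.add(frozenset((s, t)))
    return edges

def right_group_cayley(n, r):
    # Cay(Z_n x R_r, {1} x R_r) with right-group multiplication
    # (g, i)(h, j) = (g + h mod n, j).
    elems = list(product(range(n), range(r)))
    gens = [(1, j) for j in range(r)]
    mult = lambda s, c: ((s[0] + c[0]) % n, c[1])
    return cayley_edges(elems, mult, gens)

# Edge counts match Cay(Z_n, {1})[r isolated vertices]: each edge of the
# underlying graph becomes a K_{r,r}, so n >= 3 gives n * r^2 edges and
# n = 2 (a single edge) gives r^2 edges, i.e. K_{r,r}.
print(len(right_group_cayley(2, 3)))  # 9
print(len(right_group_cayley(3, 2)))  # 12
```

For instance, $n=4$, $r=2$ yields $16$ edges, matching the $K_{4,4}$ of Figure~\ref{fig:Z34R23}.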
Q: embedding fonts Fonte_Nexa-TTF and Fonte_Nexa in css I am working on a website on which I want to apply the following fonts: 1. Fonte_Nexa-TTF 2. Fonte_Nexa I have placed the above fonts in the css/fonts folder as shown below in an image: Inside Fonte_Nexa-TTF directory, I have the following fonts: In the CSS, I have placed the following codes on the top but it doesn't seem to work. @font-face { font-family:"Fonte Nexea"; src: url("fonts/Fonte_Nexa-TTF") format("truetype"), url("fonts/Fonte_Nexa") format("opentype"); } Problem Statement: I am wondering what changes I need to make in the CSS codes above so that it starts working. Also, in the console I am getting the following error: A: It looks like your references are for directories instead of the fonts themselves. Try adding the file name for the specific font file within the directory and see if that makes a difference. @font-face { font-family:"Fonte Nexea"; src: url("fonts/Fonte_Nexa-TTF/fontname.ttf") format("truetype"), url("fonts/Fonte_Nexa/fontname.otf") format("opentype"); }
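For completeness, here is a usage sketch in the same vein: the file names are placeholders exactly as in the answer, and the family name in the rule that applies the font must match the `font-family` string in the `@font-face` declaration character for character. A generic fallback family keeps text readable if the font files fail to load.

```css
@font-face {
  font-family: "Fonte Nexea";
  /* placeholder file names -- point these at the actual .ttf/.otf files */
  src: url("fonts/Fonte_Nexa-TTF/fontname.ttf") format("truetype"),
       url("fonts/Fonte_Nexa/fontname.otf") format("opentype");
}

/* apply the declared family with a fallback */
body {
  font-family: "Fonte Nexea", sans-serif;
}
```

Note that the `url()` paths are resolved relative to the stylesheet's location, not the page's, which is a common reason a rule like this silently falls back to the default font.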
Produced by Nick Hodson of London, England Among the Red-skins, by W.H.G. Kingston. ________________________________________________________________________ ________________________________________________________________________ AMONG THE RED-SKINS, BY W.H.G. KINGSTON. CHAPTER ONE. MISSING. AN UNEXPECTED RETURN--HUGH IS ABSENT--NO KNOWLEDGE OF HIS WHEREABOUTS-- UNCLE DONALD'S APPREHENSIONS--A HURRIED SUPPER, AND PREPARATIONS FOR A SEARCH. "Hugh, my lad! Hugh, run and tell Madge we have come back," cried Uncle Donald, as he and I entered the house on our return, one summer's evening, from a hunting excursion in search of deer or any other game we could come across, accompanied by three of our dogs, Whiskey, Pilot, and Muskymote. As he spoke, he unstrapped from his shoulders a heavy load of caribou meat. I, having a similar load, did the same--mine was lighter than his--and, Hugh not appearing, I went to the door and again called. No answer came. "Rose, my bonnie Rose! Madge, I say! Madge! Where are you all?" shouted Uncle Donald, while he hung his rifle, with his powder-horn and shot-pouch, in their accustomed places on the wall. On glancing round the room he seemed somewhat vexed to perceive that no preparations had been made for supper, which we expected to have found ready for us. It was seldom, however, that he allowed himself to be put out. I think I can see him now--his countenance, though weather-beaten and furrowed by age, wearing its usual placid and benignant expression; while his long silvery beard and the white locks which escaped from beneath his Highland bonnet gave him an especially venerable appearance. His dress was a plaid shooting-coat, and high leggings of well-tanned leather, ornamented with fringe after the fashion of the Indians. Upright as an arrow, with broad shoulders and wiry frame, he stood upwards of six feet in his mocassins, nor did he appear to have lost anything of the strength and energy of youth. 
As no one appeared, I ran round to the back of the house, thinking that Rose and Madge, accompanied by Hugh, had gone to bring in the milk, which it was the duty of Sandy McTavish to draw from our cows, and that he, for some cause or other, being later than usual, they had been delayed. I was not mistaken. I presently met them, Madge carrying the pails, and Rose, a fair-haired, blue-eyed little maiden, tripping lightly beside her. She certainly presented a great contrast in appearance to the gaunt, dark-skinned Indian woman, whose features, through sorrow and hardship, had become prematurely old. I inquired for Hugh. "Is he not with you?" asked Rose, in a tone of some little alarm. "He went off two hours ago, saying that he should be sure to fall in with you, and would assist in bringing home the game you might have killed." "Yes, Hugh would go. What he will he do," said the Indian woman, in the peculiar way of speaking used by most of her people. "He felt so much better in the afternoon that he was eager to go out and help you," said Rose. "He thought that Uncle Donald would not be angry with him, though he had told him to remain at home." We soon got back to the house. When Uncle Donald heard where Hugh had gone, though he expressed no anger, he looked somewhat troubled. He waited until Rose had gone out of the room, then he said to me-- "I noticed, about four miles from home, as we went out in the morning, the marks of a `grizzly,' which had been busy grubbing up a rotten log, but as his trail appeared to lead away up the mountains to the eastward I did not think it worth my while to chase him; and you having just before separated from me, I forgot to mention the fact when you came back. But vexed would I be if Hugh should have fallen in with the brute. He's too venturesome at times; and if he fired and only wounded it, I doubt it would be a bad job for him. 
Don't you let Rose hear a word about the `grizzly,' Archie," he hastily added, as she re-entered the room. Both Madge and Rose were, however, very anxious when they found that Hugh had not returned with us. There was still an hour or so of daylight, and we did not therefore abandon the hope that he would return before dark. Uncle Donald and I were both very hungry, for we had been in active exercise the whole of the day, and had eaten nothing. Madge knowing this set about preparing supper with all haste. She could not, however, help running to the door every now and then to ascertain if Hugh were coming. At length Sandy McTavish came in. He was something like Uncle Donald in figure, but though not so old, even more wiry and gaunt, looking as if he were made of bone and sinews covered with parchment. He at once volunteered to set out and look for Hugh. "Wait till we get our supper, and Archie and I will go too. What's the use of man or boy with an empty stomach?" said Uncle Donald. "'Deed an' that's true," observed Sandy, helping himself from the trencher which stood in the centre of the table. "It's a peety young Red Squirrel isna' here; he would ha' been a grand help if Maister Hugh's missin'. But I'm thinkin' he's no far off, sir. He'll have shot some beast likely, and be trying to trail it hame; it wud be a shame to him to hae lost his way! I canna believe that o' Maister Hugh." Sandy said this while we were finishing our supper, when, taking down our rifles, with fresh ammunition, and bidding Rose and Madge "cheer up," we three set out in search of Hugh. Fortunately the days were long, and we might still hope to discover his track before darkness closed upon the world. CHAPTER TWO. AN INDIAN RAID. SCENE OF THE STORY--HISTORY OF ARCHIE AND HUGH--A JOURNEY ACROSS THE PRAIRIE--A VILLAGE BURNT BY THE INDIANS--UNCLE DONALD PURSUES THE BLACKFEET--ARRIVAL AT THE INDIAN CAMP. But where did the scene just described occur? And who were the actors? 
Take a map of the world, run your eye over the broad Atlantic, up the mighty St. Lawrence, across the great lakes of Canada, then along well-nigh a thousand miles of prairie, until the Rocky Mountains are reached, beyond which lies British Columbia, a region of lakes, rivers, and streams, of lofty, rugged, and precipitous heights, the further shores washed by the Pacific Ocean. On the bank of one of the many affluents of its chief river--the Fraser--Uncle Donald had established a location, called Clearwater, far removed from the haunts of civilised man. In front of the house flowed the ever-bright current (hence the name of the farm), on the opposite side of which rose rugged pine-crowned heights; to the left were others of similar altitude, a sparkling torrent running amid them into the main stream. Directly behind, extending some way back, was a level prairie, interspersed with trees and bordered by a forest extending up the sides of the variously shaped hills; while eastward, when lighted by the rays of the declining sun, numberless snow-capped peaks, tinged with a roseate hue, could be seen in the far distance. Horses and cattle fed on the rich grass of the well-watered meadows, and a few acres brought under cultivation produced wheat, Indian corn, barley, and oats sufficient for the wants of the establishment. Such was the spot which Uncle Donald, who had won the friendship of the Sushwap tribe inhabiting the district, had some years ago fixed on as his abode. He had formerly been an officer in the Hudson's Bay Company, but had, for some reason or other, left their service. Loving the country in which he had spent the best years of his life, and where he had met with the most strange and romantic adventures, he had determined to make it his home. 
He had not, however, lost all affection for the land of his birth, or for his relatives and friends, and two years before the time I speak of he had unexpectedly appeared at the Highland village from which, when a young man, more than a quarter of a century before, he had set out to seek his fortune. Many of his relatives and the friends of his youth were dead, and he seemed, in consequence, to set greater value on those who remained, who gave him an affectionate reception. Among them was my mother, his niece, who had been a little blooming girl when he went away, but was now a staid matron, with a large family. My father, Mr Morton, was a minister, but having placed himself under the directions of a Missionary Society, he was now waiting in London until it was decided in what part of the world he should commence his labours among the heathen. My two elder brothers were already out in the world--one as a surgeon, the other in business--and I had a fancy for going to sea. "Let Archie come with me," said Uncle Donald. "I will put him in the way of doing far better than he ever can knocking about on salt water; and as for adventures, he'll meet with ten times as many as he would if he becomes a sailor." He used some other arguments, probably relating to my future advantage, which I did not hear. They, at all events, decided my mother; and my father, hearing of the offer, without hesitation gave his consent to my going. It was arranged, therefore, that I should accompany Uncle Donald back to his far-off home, of which he had left his faithful follower, Sandy McTavish, in charge during his absence. "I want to have you with me for your own benefit, Archie; but there is another reason. I have under my care a boy of about your own age, Hugh McLellan, the son of an old comrade, who died and left him to my charge, begging me to act the part of a father to him. I have done so hitherto, and hope to do so as long as I live; you two must be friends. 
Hugh is a fine, frank laddie, and you are sure to like one another. As Sandy was not likely to prove a good tutor to him, I left him at Fort Edmonton when I came away, and we will call for him as we return." I must pass over the parting with the dear ones at home, the voyage across the Atlantic, and the journey through the United States, which Uncle Donald took from its being in those days the quickest route to the part of the country for which we were bound. After descending the Ohio, we ascended the Mississippi to its very source, several hundred miles, by steamboat; leaving which, we struck westward, passing the head waters of the Red River of the north, on which Fort Garry, the principal post of the Hudson's Bay Company, is situated, but which Uncle Donald did not wish to visit. We had purchased good saddle-horses and baggage animals to carry our goods, and had engaged two men--a French Canadian, Pierre Le Clerc, and an Irishman, Cornelius Crolly, or "Corney," as he was generally called. Both men were known to Uncle Donald, and were considered trustworthy fellows, who would stick by us at a pinch. The route Uncle Donald proposed taking was looked upon as a dangerous one, but he was so well acquainted with all the Indian tribes of the north that he believed, even should we encounter a party of Blackfeet, they would not molest us. We had been riding over the prairie for some hours, with here and there, widely scattered, farms seen in the distance, and were approaching the last frontier settlement, a village or hamlet on the very outskirts of civilisation, when we caught sight of a column of smoke ascending some way off, directly ahead of us. "Can it be the prairie on fire?" I asked, with a feeling of alarm; for I had heard of the fearful way in which prairie fires sometimes extend for miles and miles, destroying everything in their course. Uncle Donald stood up in his stirrups that he might obtain a better view before us. "No; that's not the smoke of burning grass.
It looks more like that from a building, or may be from more than one. I fear the village itself is on fire," he answered. Scarcely had he spoken when several horsemen appeared galloping towards us, their countenances as they came near exhibiting the utmost terror. They were passing on, when Uncle Donald shouted out, "Hi! where are you going? What has happened?" On hearing the question, one of the men replied, "The Indians have surprised us. They have killed most of our people, set fire to our houses, and carried off the women and children." "And you running away without so much as trying to recover them? Shame upon ye!" exclaimed Uncle Donald. "Come on with me, and let's see what can be done!" The men, however, who had scarcely pulled rein, were galloping forward. Uncle Donald shouted to them to come back, but, terror-stricken, they continued their course, perhaps mistaking his shouts for the cries of the Indians. "We must try and save some of the poor creatures," said Uncle Donald, turning to our men. "Come on, lads! You are not afraid of a gang of howling red-skins!" and we rode on, making our baggage horses move much faster than they were wont to do under ordinary circumstances. Before reaching the village we came to a clump of trees. Here Uncle Donald, thinking it prudent not to expose his property to the greedy eyes of the Indians, should we overtake them, ordered Corney and Pierre to halt and remain concealed, while he and I rode forward. By the time we had got up to the hamlet every farm and log-house was burning, and the greater part reduced to ashes. No Indians were to be seen. According to their custom, after they had performed their work they had retreated. I will pass over the dreadful sights we witnessed. Finding no one alive to whom we could render assistance, we pushed on, Uncle Donald being anxious to come up with the enemy before they had put their captives to death. Though darkness was approaching, we still rode forward. 
"It's likely they will move on all night, but, you see, they are loaded, and we can travel faster than they will. They are sure to camp before morning, and then we'll get up with them," observed Uncle Donald. "But what will become of our baggage?" I asked. "Oh, that will be safe enough. Pierre and Corney will remain where we left them until we get back," he answered. I was certain that Uncle Donald knew what he was about, or I should have been far from easy, I confess. We went on and on, the Indians keeping ahead of us. From this circumstance, Uncle Donald was of opinion that they had not taken many prisoners. At length we came to a stream running northward, bordered by willows, poplars, and other trees. Instead of crossing directly in front of us, where it was somewhat deep, we kept up along its banks. We had not got far when we saw the light of a fire, kindled, apparently, at the bottom of the hollow through which the stream passed. "If I'm not far wrong, that fire is in the camp of their rear guard. Their main body cannot be far off," observed Uncle Donald. "Dismount here, Archie, and you hold the horses behind these trees, while I walk boldly up to them. They won't disturb themselves much for a single man." I dismounted as he desired, and he proceeded toward the fire. I felt very anxious, for I feared that the Blackfeet might fire and kill him without stopping to learn who he was. CHAPTER THREE. WITH THE RED-SKINS. UNCLE DONALD AND THE BLACKFEET--THE CHIEF'S SPEECH--A FORTUNATE RECOGNITION--PONOKO GIVES UP A LITTLE GIRL TO UNCLE DONALD--IMPOSSIBLE TO DO ANY MORE--PONOKO URGES DEPARTURE--ROSE IS ADOPTED BY UNCLE DONALD--HUGH MCLELLAN--MADGE--STORY OF A BRAVE INDIAN MOTHER--RED SQUIRREL--THE HOUSEHOLD AT CLEARWATER. I waited with intense anxiety for Uncle Donald, who appeared to have been a long time absent. I dared not disobey his orders by moving from the spot, yet I felt eager to creep up and try and ascertain what had happened.
I thought that by securing the horses to the trees, I might manage to get near the Indian camp without being perceived, but I overcame the temptation. At length I heard footsteps approaching, when, greatly to my relief, I saw Uncle Donald coming towards me, carrying some object wrapped up in a buffalo-robe in his arms. I will now mention what occurred to him. He advanced, as he told me afterwards, without uttering a word, until he was close up to the fire round which the braves were collected, then seating himself opposite the chief, whom he recognised by his dress and ornaments, said, "I have come as a friend to visit my red brothers; they must listen to what I have to say." The chief nodded and passed the pipe he was smoking round to him, to show that he was welcome as a friend. Uncle Donald then told them that he was aware of their attack upon the village, which was not only unjustifiable, but very unwise, as they would be certain to bring down on their heads the vengeance of the "Long-knives"--so the Indians call the people of the United States. That wide as was the country, the arm of the Long-knives could stretch over it; that they had fleet horses, and guns which could kill when their figures appeared no larger than musk rats; and he urged them, now that the harm was done, to avert the punishment which would overtake them by restoring the white people they had captured. When he had finished, the chief rose and made a long speech, excusing himself and his tribe on the plea that the Long-knives had been the aggressors; that they had killed their people, driven them from their hunting-grounds, and destroyed the buffalo on which they lived. No sooner did the chief begin to speak than Uncle Donald recognised him as a Sioux whose life he had saved some years before. He therefore addressed him by his name of Ponoko, or the Red Deer, reminding him of the circumstance.
On this the chief, advancing, embraced him; and though unwilling to acknowledge that he had acted wrongly, he expressed his readiness to follow the advice of his white friend. He confessed, however, that his band had only one captive, a little girl, whom he was carrying off as a present to his wife, to replace a child she had lost. "She would be as a daughter to me; but if my white father desires it, I will forego the pleasure I expected, and give her up to him. As for what the rest of my people may determine I cannot be answerable; but I fear that they will not give up their captives, should they have taken any alive," he added. "It would have been a terrible thing to have left the little innocent to be brought up among the savages and taught all their heathen ways, though they, no doubt, would have made much of her, and treated her like a little queen," said Uncle Donald to me; "so I at once closed with the chief's offer." Forthwith, a little girl, some five years of age, was brought out from a small hut built of boughs, close to where the party was sitting. She appeared almost paralysed with terror; but when, looking up, she saw that Uncle Donald was a white man, and that he was gazing compassionately at her, clinging to his hand, she entreated him by her looks to save her from the savages. She had been so overcome by the terrible scenes she had witnessed that she was unable to speak. Uncle Donald, lifting her up in his arms, endeavoured to calm her fears, promising that he would take care of her until he had restored her to her friends. He now expressed his intention of proceeding to the larger camp, but Ponoko urged him on no account to make the attempt, declaring that his life would not be safe, as several of their fiercest warriors were in command, who had vowed the destruction of all the Long-knives or others they should encounter. "But the prisoners! What will they do with them?" asked Uncle Donald.
"Am I to allow them to perish without attempting their rescue?" "My white father must be satisfied with what I've done for him. I saw no other prisoners taken. All the pale-faces in the village were killed," answered Ponoko. "For his own sake I cannot allow him to go forward; let him return to his own country, and he will there be safe. I know his wishes, and will, when the sun rises, go to my brother chiefs and tell them what my white father desires." Ponoko spoke so earnestly that Uncle Donald, seeing that it would be useless to make the attempt, and fearing that even the little girl might be taken from him, judged that it would be wise to get out of the power of the savages; and carrying the child, who clung round his neck, he bade the other braves farewell, and commenced his return to where he had left me. He had not got far when Ponoko overtook him, and again urged him to get to a distance as soon as possible. "Even my own braves cannot be trusted," he said. "I much fear that several who would not smoke the pipe may steal out from the camp, and try to kill my white father if he remains longer in the neighbourhood." Brave as Uncle Donald was, he had me to look after as well as the little girl. Parting with the chief, therefore, he hurried on, and told me instantly to mount. I was very much astonished to see the little girl, but there was no time to ask questions; so putting spurs to our horses, we galloped back to where we had left our men and the baggage. As both we and our horses required rest, we camped on the spot, Pierre and Corney being directed to keep a vigilant watch. The little girl lay in Uncle Donald's arms, but she had not yet recovered sufficiently to tell us her name, and it was with difficulty that we could induce her to take any food. Late in the day we met a party going out to attack the Indians; but, as Uncle Donald observed, "they might just as well have tried to catch the east wind." We waited to see the result of the expedition.
They at length returned, not having come near the enemy. The few men who had escaped the massacre were unable to give any information about the little girl or her friends, nor could we learn to whom she belonged. All we could ascertain from her was that her name was Rose, for her mind had sustained so fearful a shock that, even after several days had passed, she was unable to speak intelligibly. "Her fate among the Indians would have been terrible, but it would be almost as bad were we to leave her among the rough characters hereabouts," observed Uncle Donald. "As none of her friends can be found, I will be her guardian, and, if God spares my life, will bring her up as a Christian child." It was many a long day, however, before Rose recovered her spirits. Her mind, indeed, seemed to be a blank as to the past, and Uncle Donald, afraid of reviving the recollection of the fearful scenes she must have witnessed, forbore to say anything which might recall them. However, by the time we reached Fort Edmonton, where Hugh McLellan had been left, she was able to prattle away right merrily. The officers at the fort offered to take charge of her, but Uncle Donald would not consent to part with his little "Prairie Rose," as he called her; and after a short stay we set out again, with Hugh added to our party, across the Rocky Mountains, and at length arrived safely at Clearwater. Corney and Pierre remained with us, and took the places of two other men who had left. Hugh McLellan was a fine, bold little fellow, not quite two years my junior; and he and I--as Uncle Donald had hoped we should--soon became fast friends. He had not much book learning, though he had been instructed in the rudiments of reading and writing by one of the clerks in the fort, but he rode fearlessly, and could manage many a horse which grown men would fear to mount. "I want you, Archie, to help Hugh with his books," said Uncle Donald. 
"I believe, if you set wisely about it, that he will be ready to learn from you. I would not like for him to grow up as ignorant as most of the people about us. It is the knowledge we of the old country possess which gives us the influence over these untutored savages; without it we should be their inferiors." I promised to do my best in fulfilling his wishes, though I took good care not to assert any superiority over my companion, who, indeed, though I was better acquainted with literature than he was, knew far more about the country than I did. But there was another person in the household whose history is worthy of narration--the poor Indian woman--"Madge," as we called her for shortness, though her real name was Okenmadgelika. She also owed her life to Uncle Donald. Several years before this, she, with her two children, had accompanied her husband and some other men on an expedition to trap beavers, at the end of autumn, towards the head waters of the Columbia. While she was seated in her hut late in the evening, one of the men staggered in desperately wounded, and had just time to tell her that her husband and the rest were murdered, when he fell dead at her feet. She, instantly taking up her children--one a boy of six years of age, the other a little girl, an infant in arms--fled from the spot, with a horse and such articles as she could throw on its back, narrowly escaping from the savages searching for her. She passed the winter with her two young ones, no human aid at hand. On the return of spring she set off, intending to rejoin her husband's people far away to the westward. After enduring incredible hardships, she had been compelled to kill her horse for food. She had made good some days' journey, when, almost sinking from hunger, and fearing to see her children perish, she caught sight of her relentless foes, the Blackfeet. In vain she endeavoured to conceal herself. 
They saw her and were approaching, when, close to the spot where she was standing, a tall white man and several Indians suddenly emerged from behind some rocks. The Blackfeet came on, fancying that against so few they could gain an easy victory; but the rifles of the white man and his party drove them back, and Uncle Donald--for he was the white man--conveyed the apparently dying woman and her little ones to his camp. The house at Clearwater had not yet been built. By being well cared for the Indian woman and her children recovered; but though the boy flourished, the little girl seemed like a withered flower, and never regained her strength. Grateful for her preservation, the poor woman, when she found that Uncle Donald was about to settle at Clearwater, entreated that she might remain with her children and labour for him, and a faithful servant she had ever since proved. Her little girl at length died. She was for a time inconsolable, until the arrival of Rose, to whom she transferred all her maternal feelings, and who warmly returned her affection. But her son, whose Indian name translated was Red Squirrel, by which appellation he was always known, had grown up into a fine lad, versed in all Indian ways, and possessing a considerable amount of knowledge gained from his white companions, without the vices of civilisation. He was a great favourite with Uncle Donald, who placed much confidence in his intelligence, courage, and faithfulness. Nearly two years had passed since Rose, Hugh, and I had been brought to Clearwater, and by this time we were all much attached to each other. We had also learned to love the place which had become our home; but we loved Uncle Donald far more. CHAPTER FOUR. THREE GRIZZLIES. THE START AFTER HUGH--A FOOT-PRINT--FOLLOWING THE TRAIL--ARCHIE MEETS A GRIZZLY--A MISS-FIRE--DISCRETION THE BETTER PART OF VALOUR--TWO MORE BEARS--HELP, AND A JOINT ATTACK--HUGH UP IN A TREE--THE RESULT OF DISOBEDIENCE.
I must now continue my narrative from the evening Hugh was missing. The moment we had finished our hurried meal we set out. Sandy, in case we should be benighted, had procured a number of pine torches, which he strapped on his back; and Uncle Donald directed Corney and Pierre, who came in as we were starting, to follow, keeping to the right by the side of the torrent, in case Hugh should have taken that direction. Whiskey, Pilot, and Muskymote followed closely at our heels--faithful animals, ready to drag our sleighs in winter, or, as now, to assist us in our search. We walked on at a rapid rate, and were soon in a wild region of forests, rugged hills, and foaming streams. As we went along we shouted out Hugh's name, and searched about for any signs of his having passed that way. At length we discovered in some soft ground a foot-print, which there could be no doubt was his, the toe pointing in the direction we were going. "Now we have found the laddie's trail we must take care not to lose it," observed Uncle Donald. "It leads towards the very spot where I saw the grizzly this morning." On and on we went. Soon another foot-print, and then a mark on some fallen leaves, and here and there a twig bent or broken off, showed that we were on Hugh's trail. But the sun had now sunk beneath the western range of mountains, and the gloom of evening coming on would prevent us from tracing our young companion much further. Still, as we should have met him had he turned back, we followed the only track he was likely to have taken. We were approaching the spot where Uncle Donald had seen the bear, near a clump of trees with a thick undergrowth, a rugged hill rising beyond. We were somewhat scattered, hunting about for any traces the waning light would enable us to discover. I half feared that I should come upon his mangled remains, or some part of his dress which might show his fate.
I had my rifle, but was encumbered with no other weight, and in my eagerness, I ran on faster than my companions. I was making my way among some fallen timber blown down by a storm, when suddenly I saw rise up, just before me, a huge form. I stopped, having, fortunately, the presence of mind not to run away, for I at once recognised the animal as a huge grizzly, which had been engaged in tearing open a rotten trunk in search of insects. I remembered that Uncle Donald had told me, should I ever find myself face to face with a grizzly, to throw up my arms and stand stock still. The savage brute, desisting from its employment, came towards me, growling terribly, and displaying its huge teeth and enormous mouth. I was afraid to shout, lest it might excite the animal's rage; but I acted as Uncle Donald had advised me. As I lifted up my rifle and flourished it over my head, the creature stopped for a moment and got up on its hind legs. Now or never was my time to fire, for I could not expect to have a better opportunity, and bringing my rifle, into which I had put a bullet, to my shoulder, I took a steady aim and pulled the trigger. To my dismay, the cap snapped. It had never before played me such a trick. Still the bear kept looking at me, apparently wondering what I was about. Mastering all my nerve, and still keeping my eye fixed on the shaggy monster in front of me, I lowered my rifle, took out another cap, and placed it on the nipple. I well knew that should I only wound the bear my fate would be sealed, for it would be upon me in an instant. I felt doubly anxious to kill it, under the belief that it had destroyed my friend Hugh; but still it was sufficiently far off to make it possible for me to miss, should my nerves for a moment fail me. As long as it remained motionless I was unwilling to fire, in the hope that before I did so Uncle Donald and Sandy might come to my assistance. Having re-capped my rifle, I again lifted it to my shoulder.
At that moment Bruin, who had grown tired of watching me, went down on all fours. The favourable opportunity was lost; for although I might still lodge a bullet in its head, I might not kill it at once, and I should probably be torn to pieces. I stood steady as before, though sorely tempted to run. Instead, however, of coming towards me, to my surprise, the bear returned to the log, and recommenced its occupation of scratching for insects. Had it been broad daylight I might have had a fair chance of shooting it; but in the obscurity, as it scratched away among the fallen timber, from which several gnarled and twisted limbs projected upwards, I was uncertain as to the exact position of its head. Under the circumstances, I considered that discretion was the better part of valour; and feeling sure that Uncle Donald and Sandy would soon come up and settle the bear more effectually than I should, I began slowly to retreat, hoping to get away unperceived. I stepped back very cautiously, scarcely more than a foot at a time, then stopped. As I did so I observed a movement a little distance off beyond the big bear, and presently, as I again retreated, two other bears came up, growling, to the big one, and, to my horror, all three moved towards me. Though smaller than their mother, each bear was large enough to kill me with a pat of its paw; and should I even shoot her they would probably be upon me. Again, however, they stopped, unwilling apparently to leave their dainty feast. How earnestly I prayed for the arrival of Uncle Donald and Sandy! I had time, too, to think of poor Hugh, and felt more convinced than ever that he had fallen a victim to the ferocious grizzlies. I still dared not cry out, but seeing them again turn to the logs, I began, as before, to step back, hoping at length to get to such a distance that I might take to my heels without the risk of being pursued. In doing as I proposed I very nearly tumbled over a log, but recovering myself, I got round it. 
When I stopped to see what the bears were about they were still feeding, having apparently forgotten me. I accordingly turned round and ran as fast as I could venture to go among the trees and fallen trunks, till at length I made out the indistinct figures of Uncle Donald and Sandy, with the dogs, coming towards me. "I have just seen three bears," I shouted. "Come on quickly, and we may be in time to kill them!" "It's a mercy they did not catch you, laddie," said Uncle Donald, when he got up to me. "With the help of the dogs we'll try to kill them, however. Can you find the spot where you saw them?" "I have no doubt about that," I answered. "Well, then, before we go further we'll just look to our rifles, and make sure that there's no chance of their missing fire." Doing as he suggested, we moved on, he in the centre and somewhat in advance, Sandy and I on either side of him, the dogs following and waiting for the word of command to rush forward. The bears did not discover us until we were within twenty yards of them, when Uncle Donald shouted to make them show themselves. I fancied that directly afterwards I heard a cry, but it might only have been the echo of Uncle Donald's voice. Presently a loud growl from the rotten log showed us that the bears were still there, and we soon saw all three sitting up and looking about them. "Sandy, do you take the small bear on the right; I will aim at the big fellow, and leave the other to you, Archie; but do not fire until you are sure of your aim," said Uncle Donald. "Now, are you ready?" We all fired at the same moment. Sandy's bear dropped immediately, but the big one, with a savage growl, sprang over the logs and came towards us, followed by the one at which I had fired. Uncle Donald now ordered the dogs, which had been barking loudly, to advance to the fight; but before they reached the larger bear she fell over on her side, and giving some convulsive struggles, lay apparently dead. 
The dogs, on this, attacked the other bear, which, made furious by its wound, was coming towards us, growling loudly. On seeing the dogs, however, the brute stopped, and sat up on its hind legs, ready with its huge paws to defend itself from their attacks. We all three, meantime, were rapidly reloading, and just as the bear had knocked over Whiskey and seized Muskymote in its paws, Uncle Donald and Sandy again fired and brought it to the ground, enabling Muskymote, sorely mauled, to escape from its deadly embrace. I instinctively gave a shout, and was running on, when Uncle Donald stopped me. "Stay!" he said; "those brutes play `possum' sometimes, and are not to be trusted. If they are not shamming, they may suddenly revive and try to avenge themselves." "We'll soon settle that," said Sandy, and quickly reloading, he fired his rifle into the head of the fallen bear. "Have you killed them all?" I heard a voice exclaim, which seemed to come from the branches of a tree some little distance off. I recognised it as Hugh's. "Hurrah!" I shouted; "are you all right?" "Yes, yes," answered Hugh, "only very hungry and stiff." We quickly made our way to the tree, where I found Hugh safe and sound, and assisted him to descend. He told us that he had fallen in with the bears on his way out, and had just time to escape from them by climbing up the tree, where they had kept him a prisoner all day. "I am thankful to get ye back, Hugh. You disobeyed orders, and have been punished pretty severely. I hope it will be a lesson to you," was the only remark Uncle Donald made as he grasped Hugh's hand. I judged, by the tone of his voice, that he was not inclined to be very angry. Having flayed the bears by the light of Sandy's torches, we packed up as much of the meat as we could carry, and hung up the remainder with the skins, intending to send for it in the morning. 
We then, having met the other two men, hastened homewards with Hugh; and I need not say how rejoiced Rose and Madge were to see him back safe. CHAPTER FIVE. AN EXPEDITION. WAITING FOR THE MESSENGERS--TWO TIRED INDIANS--BAD NEWS OF ARCHIE'S FATHER--UNCLE DONALD DETERMINES TO CROSS THE ROCKY MOUNTAINS--PREPARATIONS--NEWS OF THE BLACKFEET--INDIAN CANOES--THE EXPEDITION STARTS. Summer was advancing, and we had for some time been expecting the return of Red Squirrel and Kondiarak, another Indian, who had been sent in the spring to Fort Edmonton with letters, and directions to bring any which might have come for us. At length we became somewhat anxious at their non-appearance, fearing that some serious accident might have happened to them, or that they might have fallen into the hands of the savage Blackfeet, the chief predatory tribe in the country through which they had to pass. Hugh and I were one evening returning from trapping beaver, several of which we carried on our backs. Though the skins are the most valued, the meat of the animal serves as food. We were skirting the edge of the prairie, when we caught sight of two figures descending the hills to the east by the pass which led from Clearwater towards the Rocky Mountains. "They are Indians," cried Hugh. "What if they should be enemies?" "It is more likely that they are friends," I answered. "If they were enemies they would take care not to show themselves. Let us go to meet them." The two men made their way slowly down the mountains and had got almost up to us before we recognised Red Squirrel and his companion Kondiarak ("the rat"), so travel-stained, wan, and haggard did they look. They had lost their horses, they said, after our first greetings were over. One had strayed, the other had been stolen by the Blackfeet, so that they had been compelled to perform the greater part of the journey on foot; and having exhausted their ammunition, they had been almost starved.
They had succeeded, however, in preserving the letters confided to them, and they had brought a packet, for Uncle Donald, from a white stranger at whose hut they had stopped on the way. On seeing the beavers we carried they entreated that we would give them some meat without delay, saying that they had had no food for a couple of days. Their countenances and the difficulty with which they dragged their feet along corroborated their assertions. We, therefore, at once collecting some fuel, lighted a fire, and having skinned and opened one of the beavers, extended it, spread-eagle fashion, on some sticks to cook. They watched our proceedings with eager eyes; but before there was time to warm the animal through their hunger made them seize it, when, tearing off the still uncooked flesh, they began to gobble it up with the greatest avidity. I was afraid they would suffer from over-eating, but nothing Hugh or I could say would induce them to stop until they had consumed the greater part of the beaver. They would then, had we allowed them, have thrown themselves on the ground and gone to sleep; but anxious to know the contents of the packets they had brought, relieving them of their guns, we urged them to lean upon us, and come at once to the farm. It was almost dark before we reached home. Madge embraced her son affectionately, and almost wept when she observed the melancholy condition to which he was reduced. He would not, however, go to sleep, as she wanted him to do, until he had delivered the packets to Uncle Donald, who was still out about the farm. He in the meantime squatted down near the fire, where he remained with true Indian patience till Uncle Donald came in, when, rising to his feet, he gave a brief account of his adventures, and produced the packets, carefully wrapped up in a piece of leather. To those which came by way of Edmonton I need not further refer, as they were chiefly about business.
One, however, was of great interest; it was in answer to inquiries which Uncle Donald had instituted to discover any relatives or friends of little Rose. To his secret satisfaction he was informed that none could be found, and that he need have no fear of being deprived of her. As he read the last packet his countenance exhibited astonishment and much concern. "This letter is from your mother, Archie," he said, at length, when he had twice read it through. "Your father has brought her and the rest of the family to a mission station which has been established for the benefit of the Sercies, on the other side of the Rocky Mountains. Scarcely had they been settled for a few months, and your father had begun to win the confidence of the tribe among whom he had come to labour, when the small-pox broke out in their village, brought by the Blackfeet from the south; and their medicine-men, who had from the first regarded him with jealous eyes, persuaded the people that the scourge had been sent in consequence of their having given a friendly reception to the Christian missionary. Some few, whose good will he had gained, warned him that his life was in danger, and urged him to make his escape from the district. Though unwilling himself to leave his post, he had proposed sending your mother and the children away, when he was attacked by a severe illness. She thus, even had she wished it, could not have left him, and they have remained on at the station, notwithstanding that she fears they may at any time be destroyed by the savages, while the medicine-men have been using all their arts to win over the few Indians who continue faithful. These have promised to protect them to the best of their power, but how long they will be able to do so is doubtful. Their cattle and horses have been stolen, and they have for some time been short of provisions; thus, even should your father regain his health, they will be unable to travel.
He, like a true missionary of the Gospel, puts his confidence in God, and endeavours, your mother says, ever to wear a cheerful countenance. She does not actually implore me to come to her assistance, for she knows the length and difficulties of the journey; and she expresses her thankfulness that you are safe on this side of the mountains, but I see clearly that she would be very grateful if I could pay her a visit; and I fear, indeed, unless help reaches your family, that the consequences may be serious. I have, therefore, made up my mind to set off at once. We may manage to get across the mountains before the winter sets in, though there is no time to be lost. I will take Pierre and Corney, with Red Squirrel and a party of our own Indians, and leave Sandy, with Hugh and you, in charge of Clearwater." "May I not go, also?" I asked, in a tone of disappointment. "Surely I may be able to help my father and mother, and Hugh would be very sorry to be left behind." "It is but natural that you should wish to go; and Hugh, too, may be of assistance, for I can always trust to your discretion and judgment should any difficulty occur," he observed. "Then you will take us, won't you?" we both cried at once. "Yes," he answered. "I would not take one without the other, so Hugh may go if he wishes it." "Thank you, thank you!" I exclaimed, gratified at Uncle Donald's remark; "we will try to deserve your confidence. What shall we do first?" "We must have the canoes got ready, and lay in a stock of provisions so that we may not be delayed by having to hunt; indeed, except some big-horns, and perhaps a grizzly, we shall not find much game on the mountains," he remarked. That evening all our plans were completed, and Sandy and the other men received their directions.
Saddle and pack horses were at once to be started off by a circuitous route, carrying only light loads, however, and were to meet us at the head of the river navigation, while we were to go as far up the stream as we could in canoes, with as large a supply of provisions as they could convey. The very next morning at daybreak, while we were engaged in preparing the birch bark canoes by covering the seams with gum, and sewing on some fresh pieces of bark with wattap, which is formed of the flexible roots of the young spruce tree, an Indian was seen on the opposite side of the river making a signal to us that he desired to cross. One of the canoes which was ready for launching was sent for him and brought him over. "He had come," he said, "to bring us information that a large body of Blackfeet were on the war-path, having crossed the Rocky Mountains at one of the southern passes, and that having attacked the Sinapools, their old enemies on the Columbia, they were now bending their steps northward in search of plunder and scalps. He came to tell his white friends to be prepared should they come so far north." On hearing this I was afraid that Uncle Donald would give up the expedition and remain to defend Clearwater, but on cross-questioning the Indian, he came to the conclusion that the Blackfeet were not at all likely to come so far, and Sandy declared that if they did he would give a very good account of them. Still, as it was possible that they might make their appearance, Uncle Donald considered that it was safer to take Rose with us notwithstanding the hardships to which she might be exposed. "Then Madge will go too," exclaimed Rose; "poor Madge would be very unhappy at being left alone without me." "Madge shall go with us," said Uncle Donald; and Rose, highly delighted, ran off to tell her to get ready.
The horses had been sent off at dawn, but we were not able to start until the following morning as it took us the whole day to prepare the packages of dried fish, pemmican, and smoked venison and pork, which were to serve us as provisions. On a bright clear morning, just before the sun rose over the hills to the east, we pushed off from the bank in four canoes. In each were five people, one to steer and the others to paddle. Uncle Donald took Rose in his as a passenger. Hugh and I went together with Red Squirrel to steer for us, and Corney and Pierre had each charge of another canoe. I will describe our canoes, which were light, elegant, and wonderfully strong, considering the materials of which they were formed. They were constructed of the bark of the white birch-tree. This had been peeled from the tree in large sheets, which were bent over a slender frame of cedar ribs, confined by gunwales, and kept apart by thin bars of the same wood. The ends were alike, forming wedge-like points, and turned over from the extremities towards the centre so as to look somewhat like the handle of a violin. The sheets of bark were then fastened round the gunwales by wattap, and sewn together with the same materials at the joinings. These were afterwards covered by a coat of pine pitch, called gum. The seats for the paddlers were made by suspending a strip of board with cords from the gunwales in such a manner that they did not press against the sides of the canoe. At the second cross-bar from the bow a hole was cut for a mast, so that a sail could be hoisted when the wind proved favourable. Each canoe carried a quantity of spare bark, wattap, gum, a pan for heating the gum, and some smaller articles necessary for repairs. The canoes were about eighteen feet long, yet so light that two men could carry one with ease a considerable distance when we had to make a "portage." 
A "portage," I should say, is the term used when a canoe has to be carried over the land, in consequence of any obstruction in the river, such as rapids, falls, or shallows. As soon as we were fairly off Pierre struck up a cheerful song, in which we, Corney, and the Indians joined, and lustily plying our paddles we urged our little fleet up the river. CHAPTER SIX. PADDLING UP STREAM. THE FIRST CAMP--RAPIDS--A PORTAGE--INDIANS ATTACK THE CANOES--A RACE FOR LIFE--IT'S WON JUST IN TIME--MORE RAPIDS IN AN AWKWARD PLACE--THE CANOES POLED UP STREAM--AN UPSET--THE INDIANS AGAIN, AND HUGH IN DANGER--OTHER CANOES TO THE RESCUE. For the first day we made good progress, stopping only a short time to land and cook our provisions. We then paddled on until nearly dark, when we went on shore, unloaded our canoes, hauled them up, lighted a fire for cooking, and pitched a small tent for Rose, in front of which Madge, as she always afterwards did, took up her post to be ready to guard her in case of danger. As soon as supper was over, two men were placed on watch, and the rest of the party lay down round the fire with our buffalo-robes spread on fresh spruce or pine boughs as beds. Before dawn we were aroused by Uncle Donald. The morning was calm, the stars were slightly paling, and a cold yellow light began to show itself. Above the river floated a light mist through which objects on the opposite bank were dimly seen, while on the land side a wall of forest rose up impenetrable to the eye. From the dying embers of the camp fire a thin column of smoke rose high above the trees, while round it were the silent forms of the Indians, lying motionless at full length on their backs, enveloped in their blankets. To stretch my legs I walked a few paces from the camp, when I was startled by a sudden rush through the underbrush. For a moment I thought of the Blackfeet, but the movement proved to be made by a mink or marten, which had been attracted to the spot by the remains of last night's meal.
On hearing Uncle Donald's voice the Indians started to their feet, and after a hurried breakfast, the canoes being launched and the baggage stowed on board, we proceeded on our voyage. The mist by degrees cleared away, the sun mounting over the hills, lighted up the scenery, and our crews burst into one of the songs with which they were wont to beguile the time while plying their paddles. Having stopped as before to dine we were paddling on, when we heard a low ceaseless roar coming down between the high banks. In a short time we saw the waters rushing and foaming ahead of us, as they fell over a broad ledge of rocks. "Can we get over there?" asked Hugh. "No," I answered; "see, Uncle Donald is steering in for the shore." We soon landed, the canoes were unloaded, and being hauled up the bank, each was placed on the shoulders of two men, who trotted off with them by a path parallel to the river; the rest loaded themselves with the bales. Hugh and I imitated their example, Madge carried as heavy a package as any of the men, and Rose begged that she might take charge of a small bundle, with which she trotted merrily off, but did not refuse to let Madge have it before she had gone half-way. After proceeding for nearly a mile among rocks and trees, the canoes were placed on the banks where the river flowed calmly by, and the men returned for the remainder of the baggage. Three trips had to be made to convey the whole of the cargoes above the falls. This is what is called "making a portage." Re-embarking, on we went until nightfall. During the next few days we had several such portages to make. We were at times able to hoist our sails, but when the stream became more rapid and shallow, we took to poling, a less pleasant way of progressing, though under these circumstances the only one available. Occasionally the river opened out, and we were able to resume our paddles. 
We had just taken them in hand and were passing along the east bank when Hugh exclaimed, "I see some one moving on shore among the trees! Yes, I thought so; he's an Indian," and he immediately added, "there are several more." I shouted to Uncle Donald to tell him, and then turned to warn Pierre and Corney. Scarcely had I spoken when well-nigh fifty savages appeared on the banks, and, yelling loudly, let fly a cloud of arrows towards us, while one of them shouted to us to come to shore. "Very likely we'll be after doin' that, Mister Red-skins," cried Corney. And we all, following Uncle Donald's example, turning the heads of our canoes, paddled towards the opposite bank. We were safe for the present, and might, had we chosen, have picked off several of the savages with our rifles; Corney and Pierre had lifted theirs for the purpose, but Uncle Donald ordered them not to fire. "Should we kill any of them we should only find it more difficult to make peace afterwards," he observed. The river was here wide enough to enable us to keep beyond range of their arrows, and we continued our course paddling along close to the western bank. After going a short distance we saw ahead of us a lake, which we should have to cross. The Indians had disappeared, and I hoped we had seen the last of them, when Corney shouted out that he had caught sight of them running along the shore of the lake to double round it. Their object in so doing was evident, for on the opposite side the upper river entered the lake, rounding a point by a narrow passage, and this point they hoped to gain before we could get through, so that they might stop our progress. "Paddle, lads--paddle for your lives!" cried Uncle Donald. "We must keep ahead of the red-skins if we wish to save our scalps." We did paddle with might and main, making the calm water bubble round the bows of our canoes. Looking to our right, we every now and then caught a glimpse of the Blackfeet, for such we knew they were by their dress.
They were bounding along in single file among the trees, led apparently by one of their most nimble warriors. It seemed very doubtful whether we could pass the point before they could reach it. We persevered, for otherwise we should be compelled either to turn back, or to run the risk of being attacked at one of the portages, or to land at the western side of the lake, and to throw up a fort in which we could defend ourselves should the Blackfeet make their way across the river. It was not likely, however, that they would do this. They had already ventured much farther to the north than it was their custom to make a raid; and should they be discovered, they would run the risk of being set upon by the Shoushwaps, the chief tribe inhabiting that part of the country, and their retreat cut off. Still it was of the greatest importance to lose no time, and we redoubled our efforts to get by the point. The Indians had a greater distance to go; but then they ran much faster than we could paddle our canoes. As we neared the point, I kept looking to the right to see how far our enemies had got. Again I caught a glimpse of their figures moving among the trees, but whether or not they were those of the leaders I could not distinguish. Uncle Donald reached the point, and his canoe disappeared behind it. Hugh and I next came up, closely followed by the other two. We could hear the savage shouts and cries of the red-skins; but there was now a good chance of getting beyond their reach. "There goes the captain's canoe," I heard Corney sing out; "paddle, boys, paddle, and we'll give them the go-by!" We had entered the upper branch of the river; the current ran smoothly. Still we were obliged to exert ourselves to force our canoes up against it. Looking back for a moment over my shoulder, I could see the leading Indians as they reached the point we had just rounded. 
Enraged at being too late to stop us, they expended another flight of arrows, several of which struck the water close to us, and two went through the after end of Pierre's canoe, but fortunately above water. Though we had escaped for the present, they might continue along the eastern bank of the river, and meet us at the next portage we should have to make. The day was wearing on, and ere long we should have to look out for a spot on which to camp, on the west bank, opposite to that where we had seen the Indians. We had got four or five miles up the river when the roaring sound of rushing waters struck our ears, and we knew that we should have to make another portage. The only practicable one was on the east bank, and as it would occupy us the greater part of an hour, we could scarcely hope to escape the Indians, even should they not already have arrived at the spot. On the left rose a line of precipitous rocks, over which we should be unable to force our way. At length we got up to the foot of the rapids. Uncle Donald took a survey of them. I observed on the west side a sheet of water flowing down smoother and freer from rocks than the rest. "We must pole up the rapids, but it will need caution; follow me," said Uncle Donald. We got out our long poles, and Uncle Donald leading the way, we commenced the ascent. While resting on our paddles Corney and Pierre had overtaken us, and now followed astern of Uncle Donald, so that our canoe was the last. We had got nearly half-way up, the navigation becoming more difficult as we proceeded. The rocks extended farther and farther across the channel, the water leaping and hissing and foaming as it rushed by them. One of our Indians sat in the bows with a rope ready to jump out on the rocks and tow the canoe should the current prove too strong for us. Red Squirrel stood aft with pole in hand guiding the canoe, while Hugh and I worked our poles on either side. 
Corney and Pierre were at some little distance before us, while Uncle Donald, having a stronger crew, got well ahead. "We shall soon be through this, I hope," cried Hugh; "pretty tough work though." As he spoke he thrust down his pole, which must have been jammed in a hole, and, his weight being thrown upon it, it broke before he could recover it, and over he went; I, in my eagerness, leaning on one side, attempted to grasp at him; the consequence was that the canoe, swinging round, was driven by the current against the rock. I heard a crash, the foaming water washed over us, and I found myself struggling in its midst. My first impulse was to strike out, for I had been a swimmer from childhood. Notwithstanding, I found myself carried down. I looked out for Hugh, but the bubbling water blinded my eyes, and I could nowhere see him nor my Indian companions; still I instinctively struggled for life. Suddenly I found myself close to a rugged rock, whose sides afforded the means of holding on to it. By a violent effort I drew myself out of the water and climbed to the top. I looked round to see what had become of the rest of the crew; my eye first fell on the canoe, to which Hugh was clinging. It was being whirled hurriedly down the rapids; and some distance from it, indeed, almost close to where I now was, I saw the head of an Indian. His hands and feet were moving; but instead of trying to save himself by swimming towards the rock on which I was seated, he was evidently endeavouring to overtake the canoe. I could nowhere see our other companion; he had, I feared, sunk, sucked under by the current. A momentary glance showed me what I have described. Directly I had recovered breath I shouted to Pierre and Corney, but the roar of the waters prevented them from hearing my voice; and they and their companions were so completely occupied in poling on their canoes that they did not observe what had occurred.
Again and again I shouted; then I turned round, anxiously looking to see how it fared with Hugh and the Indian. The canoe had almost reached the foot of the rapids, but it went much faster than the Indian, who was still bravely following it. He had caught hold of one of the paddles, which assisted to support him. I was now sure that his object was to assist Hugh, for he might, as I have said, by swimming to the rock and clutching it, have secured his own life until he could be taken off by Corney or Pierre. Hugh still held tight hold of the canoe, which, however, the moment it reached the foot of the rapids, began to drift over to the eastern shore. Just then what was my dismay to see a number of red-skins rush out from the forest towards the bank. They were those, I had no doubt, from whom we were endeavouring to escape. They must have seen the canoe, and were rejoicing in the thoughts of the capture they were about to make. Hugh's youth would not save him from the cruel sufferings to which they were wont to put their prisoners, should they get hold of him, and that they would do this seemed too probable. I almost wished, rather than he should have had to endure so cruel a fate, that he had sunk to the bottom. Even now the Indian might come up with the canoe, but would it be possible for him to tow it to the west bank, or support Hugh while swimming in the same direction? Though the rock was slippery I at length managed to stand up on it, and as I did so I gave as shrill a shout as I could utter. One of the Indians in Corney's canoe glanced at me for a moment. He at once saw what had happened, and I guessed from his gestures was telling Pierre as well as Corney of the accident. In an instant the poles were thrown in, and the Indians seizing their paddles, the canoes, their heads turned round, were gliding like air bubbles down the torrent. CHAPTER SEVEN. A NARROW ESCAPE.
HUGH'S CANOE ARRESTED BY RED SQUIRREL JUST IN TIME--THE CANOE SAVED--ALL GOT UP THE RAPIDS AT LAST--CAMP AT THE TOP--THE BLACKFEET REACH THE CAMP TO FIND THE PARTY GONE--THE INDIANS PURSUE, AND UNCLE DONALD LIES BY FOR TWO DAYS ON AN ISLAND--END OF THE WATER PASSAGE--THE HORSES DO NOT APPEAR. As Corney and Pierre approached I waved to them to go on, pointing to the canoe to which Hugh was clinging. They saw the necessity of at once going to his rescue, and so left me on the rock, where I was perfectly safe for the present. There was need, in truth, for them to make haste, for already Hugh was drifting within range of the Indians' arrows, and they might shoot him in revenge for the long run we had given them. The overturned canoe seemed to be gliding more and more rapidly towards them, when I saw its progress arrested. The brave Indian had seized it, and was attempting to tow it away from the spot where the savages were collected. But all his efforts could scarcely do more than stop its way, and he apparently made but little progress towards the west shore. Corney and Pierre were, however, quickly getting up to it. I shouted with joy when I saw Hugh lifted into Corney's canoe, and the Indian with some assistance clambering into that of Pierre. Not satisfied with this success they got hold of the canoe itself, determined to prevent it from falling into the hands of the enemy. This done, they quickly paddled over to the west shore, where a level spot enabled them to land. They had not forgotten me; and presently I saw Corney's canoe, with three people in her, poling up towards the rock on which I stood, while Pierre's was engaged in picking up such of the articles of baggage as had floated. It was not without some difficulty that I got on board. My first inquiry was to ascertain which of the Indians had assisted to save Hugh, and I was thankful to hear, as I had expected, that it was Red Squirrel who had behaved so gallantly.
We then had to decide what to do--whether to continue our course upwards, to let Uncle Donald know what had happened, or to rejoin Pierre. Though I had managed to cling on to the rock I found my strength so much exhausted that I could afford but little help in poling up the canoe. While we were discussing the matter, what was my dismay to see an Indian on the top of the western cliff. "Our enemies must have crossed, and we shall be attacked," I exclaimed. "Sure no, it's one of Mr Donald's men who has been sent to see what has become of us," answered Corney. Such I saw was the case. We could not hear his voice, but getting closer to us he made signs which his own people understood, that he would go back to Uncle Donald and learn what we were to do. In reply our two Indians pointed down to where Pierre's party were now on shore, letting him understand exactly what had happened. He quickly disappeared, and we had to wait some time, hanging on to a rock by a rope, until he returned with two other men. They then pointed up the stream as a sign to us that we were to proceed. We accordingly did so, poling up as before. By the time we got to the head of the rapids we saw that Pierre was coming after us, apparently towing the shattered canoe. Above the rapids we discovered a small bay, towards which Uncle Donald's voice summoned us. As we landed he grasped my hand, showing his joy at my escape. It was some time before Pierre arrived. Hugh came in his canoe, while the rest of the men had arrived over land with the luggage which had been saved, as also with our rifles, which, having been slung under the thwarts, had fortunately not slipped out. We immediately began our preparations for camping, but had, besides doing what was usual, to collect materials for a stockade, which might enable us to resist a sudden onslaught of the Blackfeet should they cross the river. One of the men was also placed on watch all the time to prevent surprise. 
While most of the party were thus engaged, Red Squirrel and Jock, who were the best canoe builders, were employed in repairing the shattered canoe, and making some fresh paddles and poles; indeed there was so much work to be done, that none of us got more than a few hours' rest. We had also to keep a vigilant watch, and two of the men were constantly scouting outside the camp, to guard us more effectually from being taken by surprise. All was ready for a start some time before daylight, when Uncle Donald, awakening the sleepers, ordered every one to get on board as noiselessly as possible. He, as usual, led the way, the other canoes following close astern. The last man was told to make up the fire, which was left burning to deceive the enemy, who would suppose that we were still encamped. We had got some distance, the wind being up stream, when just at dawn I fancied that I heard a faint though prolonged yell. We stopped paddling for a moment, and I asked Red Squirrel if he thought that the Blackfeet had got across to our camp. He nodded, and uttered a low laugh, significant of his satisfaction that we had deceived them. Daylight increasing, we put up our masts and hoisted the light cotton sails, which sent our canoes skimming over the water at a far greater speed than we had hitherto been able to move. Another lake appeared before us. By crossing it we should be far ahead of the Blackfeet. We had brought some cooked provisions, so that we were able to breakfast in the canoes. It was long past noon before, the river having again narrowed, we ventured on shore for a brief time only to dine. The next portage we came to was on the east bank. It was fortunately a short one, and Uncle Donald kept some of the men under arms, a portion only being engaged in carrying the canoes and their cargoes. No Indians, however, appeared. "I hope that we have given them the go-by," said Hugh, "and shall not again see their ugly faces."
"We must not be too certain; I'll ask Red Squirrel what he thinks," I replied. "Never trust a Blackfoot," was the answer. "They are as cunning as serpents, and, like serpents, they strike their enemies from among the grass." We expected in the course of two or three days more to come to an end of the river navigation at a spot where Uncle Donald had directed that the horses should meet us. We were not without fear, however, that some, if not the whole of the animals, might have been stolen by the Blackfeet should they by any means have discovered them. Occasionally sailing, sometimes paddling and poling, and now and then towing the canoes along the banks, we continued our progress. As we went along we kept a look-out for the Blackfeet, as it was more than possible that they might pursue us. We accordingly, in preference to landing on either bank, selected an island in the centre of the stream for our camping-ground. We had just drawn up the canoes among the bushes and formed our camp in an open spot near the middle of the island, when one of the men who was on the lookout brought word that he saw a large number of savages passing on the east bank. We were, however, perfectly concealed from their keen eyes. Watching them attentively, we guessed by their gestures that they were looking for us, and not seeing our canoes, fancied that we had passed on. Night was now approaching. We were afraid of lighting a fire, lest its glare might betray our position to our pursuers. They would, however, on not discovering us, turn back, so that we should thus meet them, and Uncle Donald resolved, therefore, to remain where we were, until they had retreated to the southward. Even should they discover us we might defend the island more easily than any other spot we could select. We had plenty of provisions, so that we could remain there without inconvenience for several days, except that we should thus delay our passage over the mountains. 
Hugh and I were, much to our satisfaction, appointed by Uncle Donald to keep watch, Hugh on one side of the island and I on the other, for fear lest, should the red-skins find out where we were, they might attempt, by swimming across, to take us by surprise. None appeared, however, and two more days went by. At last Uncle Donald began to hope that they, supposing we had taken another route, were on their way back. We accordingly, seeing no one the next morning, embarked, and the river here expanding into a lake, we were able to paddle on without impediment across it, and a short distance up another stream, when we came to a fall of several feet, beyond which our canoes could not proceed. This was the spot where we had expected to find the horses, but they had not arrived. We were greatly disappointed, for, having been much longer than we had calculated on coming up, we naturally expected that they would have been ready for us. Winter was rapidly approaching, and in the autumn before the streams are thoroughly frozen the dangers of crossing the mountains are greater than at any other period. As the canoes could go no higher we took them up the stream and placed them "en cache," where there was little chance of their being discovered. They were to remain there until the return of our men, who would accompany us to the foot of the mountains and go back again that autumn. On not finding the horses Uncle Donald went to the highest hill in the neighbourhood, overlooking the country through which they had to pass, in the hopes of seeing them approach. He came back saying that he could perceive no signs of them, and he ordered us forthwith to camp in such a position that we might defend ourselves against any sudden attack of hostile Indians. CHAPTER EIGHT. AMONG THE MOUNTAINS. 
THE HORSE PARTY ARRIVES AT LAST, BUT WITH HALF THE HORSES STOLEN--THE START ACROSS THE MOUNTAINS--MORE BLACKFEET IN THE WAY OBLIGE THE PARTY TO TAKE A STRANGE PASS--IT BECOMES COLDER--SNOW COMES ON--A PACK OF WOLVES--SLEIGHS AND SNOW-SHOES--IN THE HEART OF THE "ROCKIES"--CORNEY HAS A NARROW ESCAPE AND A COLD BATH--SNOW IN THE CANOES--DIFFICULTIES OF THE WAY--THE PASS AT LAST--A FEARFUL AVALANCHE. Several days passed by. We were not molested by the Indians, but the horses did not arrive. Uncle Donald never fretted or fumed, though it was enough to try his temper. I asked him to allow me to set off with Corney and Pierre to ascertain if they had gone by mistake to any other place. We were on the point of starting when we saw a party of horses and men approaching. They proved to be those we were expecting, but there were only eight horses, less than half the number we had sent off. The men in charge had a sad account to give. The rest had been stolen by Indians, and one of their party had been killed, while they had to make a long round to escape from the thieves, who would otherwise very likely have carried off the remainder. The men also had brought a dozen dogs--our three especial favourites being among them--to be used in dragging our sleighs in case the horses should be unable to get through. We had carried the materials for forming sleighs with us in the canoes, while the harness had been transported thus far with the other packages by the horses. The poor beasts, though very thin, were better than no horses at all. There were a sufficient number to convey our stores and provisions, one for Uncle Donald, who carried Rose on his saddle, and two others for Hugh and me. The rest of the party had to proceed on foot. I offered mine to Madge, but she declared that she could walk better than I could. We made a short day's journey, but the poor animals were so weak that we were compelled to camp again at a spot where there was plenty of grass. 
It was here absolutely necessary to remain three days to enable them to regain their strength. While we were in camp Uncle Donald sent out Pierre and one of our Indians to try and ascertain if any of the Blackfeet were still hovering in the direction we proposed taking across the mountains. We did not wait for the return of our scouts, but started at the time proposed, expecting to meet them on the road we should travel. We were engaged in forming our camp, collecting wood for the fires, and putting up rough huts, or rather arbours of boughs, as a protection from the wind--which here coming off the snowy mountains was exceedingly cold at night--while the gloom of evening was coming on, when one of the men on watch shouted-- "The enemy! the enemy are upon us!" While some of our people ran out intending to bring in the horses, the rest of us flew to our arms. Uncle Donald, taking his rifle, at once went out in the direction in which the sentry declared he had seen the band of savages coming over the hill. Our alarm was put an end to when, shortly afterwards, he came back accompanied by Pierre and his companion, who brought the unsatisfactory intelligence that a large body of Blackfeet were encamped near the pass by which we had intended to descend into the plains of the Saskatchewan. Ever prompt in action, Uncle Donald decided at once to take a more northerly pass. The country through which we were travelling was wild and rugged in the extreme; frequently we had to cross the same stream over and over again to find a practicable road. Now we had to proceed along the bottom of a deep valley among lofty trees, then to climb up a steep height by a zigzag course, and once more to descend into another valley. Heavily laden as were both horses and men, our progress was of necessity slow. Sometimes after travelling a whole day we found that we had not made good in a straight line more than eight or ten miles. 
The weather hitherto had been remarkably fine, and Hugh and Rose and I agreed that we enjoyed our journey amazingly. Our hunters went out every day after we had camped, and sometimes before we started in the morning, or while we were moving along, and never failed to bring in several deer, so that we were well supplied with food. The cold at night was very considerable; but with good fires blazing, and wrapped up in buffalo-robes, we did not feel it; and when the sun shone brightly the air was so pure and fresh that we were scarcely aware how rapidly winter was approaching. It should be understood that there are several passes through the lofty range it was our object to cross. These passes had been formed by the mountains being rent asunder by some mighty convulsion of nature. All of them are many miles in length, and in some places several in width; now the pass presents a narrow gorge, now expands into a wide valley. The highest point is called the watershed, where there is either a single small lake, or a succession of lakelets, from which the water flows either eastward through the Saskatchewan or Athabasca rivers, to find its way ultimately into the Arctic Ocean, or westward, by numberless tributaries, into the Fraser or Columbia rivers, which fall, after making numerous bends, into the Pacific. We had voyaged in our canoes up one of the larger tributaries of the Fraser, and had now to follow to its source at the watershed one of the smaller streams which flowed, twisting and turning, through the dense forests and wild and rugged hills rising on every side. The country had become more and more difficult as we advanced, and frequently we had to wind our way in single file round the mountains by a narrow path scarcely affording foothold to our horses. Sometimes on one side, sometimes on the other were steep precipices, over which, by a false step, either we or our animals might be whirled into the roaring torrent below. 
Now we had to force a road through the tangled forest to cut off an angle of the stream, and then to pass along narrow gorges, beetling cliffs frowning above our heads, and almost shutting out the light of day. At length we camped on higher ground than any we had yet reached. On one side was a forest, on the other a rapid stream came foaming by. The sky was overcast, so that, expecting rain, we put up all the shelter we could command. The hunters having brought in a good supply of meat, our people were in good spirits, and seemed to have forgotten the dangers we had gone through, while they did not trouble themselves by thinking of those we might have to encounter. We had no longer hostile Indians to fear; but we still kept a watch at night in case a prowling grizzly or pack of hungry wolves might pay the camp a visit. The wind blew cold; not a star was visible. The light from our fire threw a lurid glare on the stems and boughs of the trees and the tops of the rugged rocks which rose beyond. Having said good night to Rose, whom we saw stowed away in her snug little bower, Hugh and I lay down a short distance from the fire, sheltered by some of the packages piled up at our heads. Uncle Donald was not far from us. On the other side were Pierre and Corney and Red Squirrel, while Madge took her post, disdaining more shelter than the men, close to Rose's hut. Two of the men kept awake, one watching the camp, the other the horses, and the rest lay in a row on the opposite side of the fire. Such was the scene I looked on till, completely covering my head up in a buffalo-robe, I closed my eyes. I was awakened by finding an unusual weight above me. I threw my arms about, when down came a cold shower on my face and clearing my eyes I could just see the snow on every side, while my body was completely covered up. I was perfectly warm, however, and felt no inclination to get out of my cosy bed to brush the snow away. 
I drew my robe again over my head; being well assured that Uncle Donald would arouse us if there was any risk of our being completely covered up. How much longer I had slept I could not tell, when I was once more awakened by a terrific howling, yelping, the barking of dogs, the trampling and snorting of horses, followed by the shouts and shrieks of our men. I speedily drew myself out of my snowy burrow, and through the gloom I caught sight of our horses endeavouring to defend themselves by kicking out with their heels against a pack of wolves which had followed them up to the camp, and Uncle Donald with the men engaged, some with their rifles and others with sticks, in endeavouring to drive off the savage brutes, but they were afraid of firing, for fear of wounding the horses. I felt about for Hugh, who being covered up by the snow, had not been awakened by the din. "What is happening?" he exclaimed, sitting up. "Are the Indians upon us?" "Only some hungry wolves, and we are all right," I said. "Why, I fancy it has been snowing!" he exclaimed. "I should think so," I answered. "Come, jump up, we'll help put those brutes to flight." When the wolves found themselves encountered by human beings, they quickly turned tail, but we had some difficulty in catching the frightened horses, and I was just in time to seize one which was on the point of dashing into Rose's hut. As it was almost daylight, no one again turned in; the fires were made up, and we began cooking our morning meal. The snow continued to fall so heavily, that Uncle Donald decided to remain where we were, or rather to form another camp more under shelter of the trees. To proceed with the horses would have been almost impossible, and he therefore settled to send them back and to prepare the sleighs and snow-shoes for the rest of our journey. A sleigh is simply a thin board, ten feet long and about a foot broad, turned up at one end. The baggage is secured to it by leathern thongs. 
To form a cariole, a cradle or framework like the body of a small carriage is fixed on a sleigh such as I have just described, and is covered with buffalo skin parchment, the inside being lined with a buffalo-robe. When the traveller is seated in a cariole with outstretched legs, he is only separated from the snow by the thin plank which forms the floor. The dogs which drag the sleighs are attached to them by leathern thongs and collars generally decorated with bead work and tassels, surmounted by arches, to which are suspended strings of small bells. We had brought a supply of snow-shoes and moccasins for all the party. The snow-shoe is an oval frame five or six feet in length, about one in width, the intermediate space being filled with network, except a hole in the centre for the heel of the wearer. It is attached to the foot by leathern thongs. All hands were busily engaged in putting the sleighs together, fitting the harness to the dogs, and arranging the cargoes. The horses were sent back. The canoe men had taken their departure, and our party now consisted of Uncle Donald, Rose, Hugh and I, Pierre, Corney, Madge, Red Squirrel, and four Indians. We had to wait until the snow had somewhat hardened, and the stream up which we were to proceed had been frozen over. Uncle Donald had made for Rose to sleep in a bag of buffalo-robes lined with softer furs, which kept her perfectly warm. She was the only person who was to enjoy the privilege of a sleigh, drawn by Whiskey and Pilot, and guided by Uncle Donald. The rest of us were to travel on snow-shoes, a mode of proceeding which, though fatiguing, kept us warm. The last night of our stay in camp arrived. We were to start, should the weather be propitious, the next morning. Soon after we turned in for the night, before I had fallen asleep, I was greatly surprised to hear the sound of chopping in a wood at no great distance off. I called to Hugh, he heard it also, as did Uncle Donald. 
One after the other the men expressed their wonder at the sound. Corney, who was on guard, walked a few paces in the direction from whence it came, evidently thinking that something was wrong, but he soon returned, declaring that he could see no one. Suddenly there came the crash of a falling tree. After this mysterious occurrence, nothing could induce him to go up to the spot, though it could not have been more than two hundred yards off. No one had been seen on the previous evening, and had Indians been there, they would have observed our fire, and would long ere this have gathered round it. What Uncle Donald thought I could not tell, he certainly did not get up to try and solve the mystery, nor did any of the Indians. Night passed away without disturbance, and the next morning, though Hugh, and Pierre, and I made a circuit of the camp, we could discover no footsteps to indicate that any one had been in the neighbourhood, nor signs of chopping, nor a fallen tree, so that the mystery remained unexplained. Breakfast over, our four Indians were sent ahead to trample down the snow with their snow-shoes, the loaded sleighs following, driven by the other men and Madge, who was as good a driver as any of them, Uncle Donald in charge of Rose bringing up the rear with Hugh and me. Such was to be our proceeding for many a day, until we were over the mountains. We were now in the heart of the "Rockies." The valley of the river we were following was about a mile wide, and on either side rose high rocky peaks, covered with perpetual snow, among which big-horns could be seen watching us, the intruders into their domains, and daring us, as it were, to scale the glaciers and meet them on their own ground. We several times met with moose, one of which was shot nearly every day to supply our camp with meat. 
We were anticipating getting through the pass without difficulty, when we found ourselves at the bottom of a fall a hundred feet in height, with thickly timbered hills on each side, which, rising abruptly from the water's edge, seemed to offer no footing even for a snow-shoe, much less a practicable trail for dog-sleighs. Uncle Donald was not to be defeated, however, and at once ordered a regular track, graded round the face of the bluffs, to be formed. By using snow-shoes as shovels, and poles and brush for bridges, we crossed the intervening gullies and reached the edge of the first fall. Going on a mile further, we found the river confined between perpendicular walls of rock, up which there was no climbing. We had to form another path, carrying it over ledges of rock, banks of ice and snow, making bridges from one huge boulder to another with the dark water boiling at our feet ready to engulf any one who might make a false step. To our joy, the formidable obstacle being surmounted, the good ice was reached at last, when we pushed on, the dogs trotting gaily along, and we following behind. But ere long another fall barred our progress. Before attempting to surmount it, we halted for dinner. As I was looking up I espied a big-horn, or mountain goat, and believing that we could get near enough to shoot it, Hugh and I set off with our guns. The animal is about the size of a common sheep, with conical horns, nearly three feet long, and forming a complete circle, but so thick is the wool which covers its head and body that their full length is not seen. "Sure, you'll not be gettin' up after that baste!" I heard Corney say, he having followed us. "We'll try," I answered, and began ascending the steep rocks. The difficulties were greater than we expected, but still we did not like to be defeated. We had been deceived by the clearness of the atmosphere, and after climbing up and up, the goat appeared as far off as ever. 
Presently he saw us, and off he bounded, springing along places where it would have been madness to follow. "I tould ye so!" cried Corney from below, for he had still followed us. "Ye must git above one of those gentlemen if you want to shoot him. Now dinner will be cooked, and we had better be after getting down to eat it." We accordingly descended to where we had left our snow-shoes. "Stop a moment!" cried Corney. "Just let me get a drink of water, for I see a rill dripping over a rock there." Corney accordingly made his way up to the perpendicular bank, but scarcely had he reached it, when, to our horror, there was a crash, and he suddenly disappeared, leaving, however, his long pole behind him. I knew that the river was running like a mill sluice down below, so rushing forward I shoved the pole across the opening, and holding it in one hand, as I threw myself flat on the ice, I thrust down my arm. To my relief, I felt Corney's head as he came to the surface, and seizing his hair, hauled away with might and main. Hugh now assisted me, and we managed to drag up the Irishman from his fearfully perilous position. It required caution, however, to get him on the ice, as that at any moment might give way, and we should have to share the fate from which we were trying to rescue him. "Arrah! the spalpeens! why don't they help us?" cried Corney. "Shout, Mr Archie! shout, Mr Hugh!" Our cries brought Pierre, who was nearest at hand, carrying a long rope and a pole. By resting on the poles, and lowering the rope with a bowline knot at the end, we got it under his arms, and soon hauling him upon the ice, we hurried away from the dangerous spot. He was none the worse for his dip, though it was no joke to be plunged head over ears in that icy cold water. Several of the other men fell in at different times, for although it was freezing hard the rapidity of the current prevented the ice forming securely in many places. 
We had occasionally, therefore, to leave the river and to make our way through the forest--no easy undertaking. But we could get through any places, provided they were more than two feet wide. When camping, we shovelled away the snow until we reached the moss on which we formed our beds; then we made our fire in the centre of the hole, and took our places round it. When we went to sleep it was pretty deep, but in the morning, on getting up, I found that I could not see over the wall of snow. By beating down the edges, however, we managed to climb out. In spite of the depth of the snow, we travelled on, though as our snow-shoes sank in places nearly a foot deep, the fatigue was very great. Rose laughed heartily as she saw us trudging on, and wanted Hugh to take her place in the sleigh and let her go on foot while he rested. Again we came to a more mighty canyon than any we had yet encountered. This necessitated a detour, to avoid it, of about three miles overland. A canyon, from the Spanish, is a deep gully or gorge, either with a river or stream flowing through the bottom or not, but the canyons in this part of the Rockies nearly always have a stream at the bottom. We had again reached the river where it flowed on a more even course. It was entirely frozen over, but we were high above it, and the difficulty was to get down. Pierre was the first to start. Away went the dogs with the sleigh, Pierre hauling it back and trying to stop its way. But all would not do, and presently he, dogs, and sleigh, went rolling over and over, until they plunged into the snow at the bottom, to a considerable depth. "Och sure I'll be wiser," cried Corney; and he made fast a tail rope to a tree, thus enabling him to lower it gently for a short distance at a time. In slipping it, however, from one tree to another, the sleigh gathered way, but scarcely had it got abreast of the dogs than it sheered off on one side of a small tree, the dogs rolling on the other. 
The tree--a mere sapling--bent, and the impetus carried the whole train nearly twenty feet out towards its end--the dogs hanging by their traces on one side, counterbalancing the sleigh on the other, where they swayed to and fro in the most ludicrous fashion, yelping, barking, and struggling to get free, and running a great risk of being hanged. "Surely I'll be afther losin' me dogs, and the sleigh will be dashed to pieces!" cried Corney, wringing his hands in his despair. Uncle Donald told us to take charge of Rose; then springing down the bank with the agility of a young man, axe in hand, with a few blows he cut the traces and set the poor dogs free, while the sleigh bounded down the hill into the snow at the bottom, where Pierre was trying to put his train to rights, the new arrival adding not a little to his difficulties. Fearing that Rose might meet with a similar accident, Uncle Donald, taking her in his arms, carried her down, while Hugh and I managed the sleigh. As soon as we were all to rights, we had the satisfaction of seeing before us a clear "glare" of ice. The dogs, entering into our feelings, set off at a scamper to cross it. In less than an hour we had got over a greater distance than we had the whole of the previous day. We had now reached the entrance to the pass. On either side rose pyramidical peaks, covered with perpetual snow, three thousand feet above the valley. Shortly afterwards we came to the foot of a magnificent glacier, which must have been scarcely less than a mile in length and several hundred feet in height. As we had made a good day's journey, and evening was approaching, Uncle Donald was looking out for a place at which to camp. We had just fixed on a spot on the bank of the river at the edge of a thick belt of trees, which here intervened between it and the cliffs, when a roar as of distant thunder reached our ears. "Look out! look out!" cried the Indians in chorus, and they pointed upwards. 
We did look, and there we saw the whole side of the mountain, as it seemed, in movement. Huge rocks and vast masses of ice came rolling down towards the spot we were passing over, threatening to overwhelm us. Down rushed the fearful avalanche. One huge rock was so directing its course that our destruction seemed certain, when it crashed in among the trees, tearing several up by the roots, but meeting with one of a larger size, just before it reached us, it was turned aside, and forcing its way through the remainder, it plunged into the river, not many feet from where we stood. As may be supposed, we did not camp at that spot, but, thankful for our preservation, pushed on to where, the valley slightly widening out, we ran less risk of being overwhelmed by an avalanche. CHAPTER NINE. LOST IN THE SNOW. THE DIVIDING RIDGE--A MISHAP--MORE DIFFICULTY WITH THE SNOW--THE PROVISIONS RUN SHORT--THE DOGS BEGIN TO SUCCUMB--HUGH, ARCHIE, AND RED SQUIRREL ARE LOST IN A SNOW-STORM--DONE UP, AND NO SHELTER. "The first part of our difficulties is approaching an end," said Uncle Donald the next morning, as we were starting. "It is possible that we may reach the dividing ridge by nightfall." The news caused every countenance to assume a cheerful expression. We pushed on in high spirits. The river, which had been growing less and less as we proceeded, at length became a small stream, fed by a fall down a steep slope, up which we had, as before, to make our way by a zigzag path. On reaching the summit we found ourselves in an elevated valley, with mountain peaks on each side towering magnificently to the sky, the rays of the rising sun glancing on their snow-clad sides. The surface of the lakes afforded a level and easy road. Away went the dogs at a brisk trot, the men shouting with glee as they thought our difficulties were over. Climbing up the banks of one lake, we crossed over the ground to another, and then went on again as before. 
We quickly got over seven or eight miles, when we saw a stream, which, issuing from the eastern end of the last lake, ran down a gentle incline. The bright rivulet was a feeder of one of the vast rivers which flow towards the Arctic Ocean. A joyous shout was raised; we had crossed the dividing ridge, and the vast plain through which flow the Saskatchewan and Athabasca lay below us. Several trees which grew by the lakelet were marked, to show the boundary of the North West Territory, into which we had now entered. Having quenched our thirst from the little stream, we again set out, the ground sloping perceptibly towards the east. The rivulet widened as we advanced, and after we had gone a short way we found it completely frozen over. The ice being of sufficient thickness to bear our weight, we at once descended on to it, and away we went at a greater speed than we had hitherto gone, every one being in the highest spirits. We had now to make a long circuit through a dense forest, keeping away from the river, for fear of slipping down over the precipices which formed the side. Hugh and I, while sitting on our snow-shoes, were gliding downwards, fancying that we should reach the bottom of a hill without difficulty, when presently I saw him, on coming to some object concealed by the snow, give an unintentional jump, and over he went, head first, clutching at the shrubs and trying to stop himself. I was laughing at his mishap, when I felt myself jerked forwards, and then away I went in the same fashion. After some tumbling and rolling, with arms and legs outstretched, we were both pitched into a deep snow reef at the foot of the hill. One of the loaded sleighs, driven by Corney, before he had time to unharness the dogs, as he was about to do, broke away from him, and away it went, the poor dogs, terribly frightened, endeavouring to keep ahead of it, but it went faster than they could. In vain Corney and Red Squirrel tried to stop it. 
Had it kept clear of all impediments no great harm would have happened; but, unfortunately, it came in contact with a log, turning the poor dog who had the leader's place into a pancake, while the front part of the sleigh itself was shattered to fragments. We hurried to the spot. The poor dog lay dead, with its head and limbs fractured. We were some time occupied in repairing the broken sledge and harness. Continuing our journey, the river level was at last reached, when, on looking up, we saw that we had stood on a projecting ledge of ice not more than two feet in thickness, which might have given way beneath our weight and carried us down to destruction. Hitherto, when not travelling on the ice, we had to make our way over snow seldom less than two feet deep, but as we reached the base of the mountains it suddenly disappeared. As far as we could see to the eastward, not a patch was visible. Had it not been for the frozen rivers and the leafless trees, we might have fancied that summer was returning. This phenomenon occurs along the whole base of the Rocky Mountains, where there is a belt of nearly twenty miles in width perfectly free from snow. The ground being hard, we made good way over it, directing our course about south-east towards a stream running into the Saskatchewan. The stream we were steering for was reached. Travelling over the ice, we were soon again in a region where the snow lay thicker than ever, and it became very trying to our dogs. Our special favourites, Whiskey, Pilot, and Muskymote, went on bravely, in spite of their hard labour by day and the intense cold to which they were exposed by night. They, knowing fellows, whenever they stopped, carefully picked out the snow which, getting between their toes, would have cut them severely; but some of the younger ones, not understanding the necessity of so doing, allowed it to accumulate, and became lame. 
The snow now lay two feet in thickness over the whole surface of the country, making it fearfully heavy work to get along. We frequently had to go ahead to form a track; and even then, so soft was the snow, that the poor dogs would wallow through it up to their bodies, until they were well-nigh worn out with their incessant labour. We, however, pushed on, for had we ventured to stop our whole party might have succumbed. Our provisions were well-nigh exhausted, and neither buffalo, nor deer, nor smaller game appeared to enable us to replenish our stock of food. Our object was to get on a stream with a southerly or south-easterly course, on which we could travel until we could strike a line across the country leading to the missionary station. We made short journeys between sunrise and sunset. At the end of each day our first task was to clear away the snow, so as to have a space for our camp fire and room for the party to stretch themselves round it. The most sheltered spot was selected for Rose's hut, which, when wood was wanting, was formed of buffalo-robes. She seemed to enjoy the journey, and was as blooming and merry as ever. The poor dogs were the greatest sufferers. They had hard work and scanty food. First one stretched out its legs and died, and then another did the same; and one morning, when we were starting, even Pilot could not be coaxed away from the camp fire. No one had the heart to kill him, but stand on his legs he either could not or would not, so he was left to his fate in the faint hope that in an hour or so he might recover his strength and overtake us. As we pushed forward, on one side rose the lofty peaks of the Rocky Mountains, and on the other stretched out a vast extent of comparatively level land, in some parts open prairie, in others dense forest. The boughs of the trees were thickly laden with snow, the whole country, indeed, was wrapped in a white wintry mantle. The scenery was dreary in the extreme. 
Our spirits sank; it seemed that we should never come to an end of our long journey. The sky, hitherto bright, became overcast with clouds about the time that we had got over about two-thirds of the day's journey. Hugh and Red Squirrel and I were at some distance in the rear of the party, when snow began to fall and the wind to blow with unusual violence. The snow came down so thickly that it seemed as if the contents of a huge feather-bed had suddenly been emptied upon us. Thicker and thicker it fell; so great was the obscurity that we could scarcely see a yard ahead, while the tracks of our companions were almost instantly obliterated. We shouted, expecting that they would reply, and that we should be guided by their voices, but no sound came in return. We tried to run on, hoping to overtake them, when Hugh fell and broke one of his snow-shoes. We, of course, stopped to help him up, and in so doing must have turned slightly about. Red Squirrel, ever fertile in resources, set to work to mend the shoe. This he did very rapidly; but even that short delay was serious. As soon as Hugh was on his legs we again hurried on, supposing that we were following close behind the rest of the party. We shouted and shouted, but still there was no reply. I asked Red Squirrel if he thought we were going right. He did not answer. It is seldom that an Indian loses his way, but at length I began to fear that he was at fault. He acknowledged, indeed, that he was so. We unslung our guns, hoping that if we fired our friends would hear the report, and fire theirs in return, but neither Hugh's nor mine would go off. We put on fresh caps, and both again snapped. I felt in my pouch for my pricker, to try and clear out the nipple, but could not find it. I asked Hugh for his. "I'm afraid that I dropped it yesterday evening in the camp, and I thought that I would look for it in the morning, but forgot to do so," he answered. At last we gave up the attempt in despair. 
More valuable time had thus been lost. Red Squirrel urged us to go on, saying that he thought he could guide us by the wind. On and on we went. The snow fell as thickly as ever. At last Hugh declared that he could go no further. We were both suffering from fearful pains in our ankles--the _mal de raquette_, as the French Canadians call it, produced by the pressure of the snow-shoe straps. I looked anxiously about, hoping to discover some trees or shrubs which might afford us shelter and enable us to light a fire, but a thick veil of falling snow shrouded us on every side. I consulted Red Squirrel as to what we should do. One thing was certain--that if we remained in the open, exposed to the biting blast, we should perish. I feared that such would be our fate. Poor Hugh gave way altogether, and, casting off the straps from his ankles, threw himself down on the snow, and begged us to leave him. CHAPTER TEN. SNOWED UP. RED SQUIRREL AND ARCHIE DIG A HOLE IN THE SNOW--THE SNOW SHELTER-- SLEEP--NO FOOD, AND BURIED IN SNOW--EFFORTS TO DIG OUT--SOME ANIMAL SCRATCHES AT THE HOLE--LAST EFFORTS AT DEFENCE. To leave Hugh was not to be thought of. "Oh, say what we must do!" I exclaimed, addressing Red Squirrel. "Make haste," he answered, taking off his snow-shoes. I took off mine also, and using them as spades, we energetically set to work to shovel up the snow until we had got down to the ground, building up a wall with what we had thrown out. There was just sufficient space to hold three. We then placed Red Squirrel's shoes on the top, for they were the longest, and Hugh's above them, while with mine we threw up more snow to form a roof. As soon as we had got thus far, we lowered Hugh into our burrow, that he might be sheltered from the wind, placing the guns beside him. We then continued throwing up the snow until we had completely surrounded the hole, leaving only a small aperture through which we could crawl in on hands and knees. 
We next covered one of my snow-shoes with snow, patted it down until it was like a board, and this served as the door of our burrow. We had just space sufficient to sit up, or lie down packed close together, for we knew that the smaller its size the warmer it would be, or, rather, the less we should feel the cold. The change from the outer biting air made us feel tolerably comfortable, and we had no great fear of being frozen to death. Hugh, from not having exerted himself in building the hut, suffered more than Red Squirrel or I, and as soon as the door was closed I set to work to rub his hands and feet to restore circulation, for I was afraid that they might have been frost-bitten. A very faint light at first came in through the snowy walls, but this lessened, until we could not see our hands held close to our faces. Night we knew must have at length come on. We were very hungry, but as we had not a particle of food, there was no use in complaining. For a long time neither Hugh nor I could go to sleep. At last Red Squirrel set us the example, and when, some time afterwards, I addressed Hugh, he did not answer, so that I knew he had forgotten his troubles, and I hoped that perfect rest would enable him to recover from the pain he had been suffering. I at last also dropped off to sleep. When I awoke the darkness was as complete as ever, though supposing it was still night, I once more went to sleep. The next time I opened my eyes it was still dark as before. I felt warmer than I had expected, but I was desperately hungry. From this I fancied that another day must have begun. In a short time my companions awoke. Hugh said the pain in his instep had gone, but that he would give much for something to eat. Red Squirrel did not suffer as much as we did, for Indians are able to endure hunger and pain a much longer time than can white people. "Surely it must be day," said Hugh. "We ought to try and get out, and find our friends.
Rose and Uncle Donald will be dreadfully frightened at having lost us." "I hope that no accident has happened to them," I could not help saying, for the recollection came upon me that they also had been exposed to the snow-storm; but then I reflected that they were a large party, and might have reached the shelter of a wood. This was some consolation. "Oh, how hungry I am!" cried Hugh. "We must get out." I took up my rifle and tried to open the door with the barrel, but, although I ran it up to the lock, on again withdrawing it I could not see daylight through the hole. "I am afraid that the snow must be very thick," I said. The dreadful idea now occurred to me that we were buried alive in a snow tomb. Such had happened to other people, I knew, and it might be our fate, for if the snow once froze over us we might be unable to force our way out. I asked Red Squirrel what he thought. He answered with an ominous "Very bad!" "Try," he added, and I found that he was groping about to find the door. He did not speak again, but I heard him scraping away with his hands, just as a terrier does at the entrance of a rabbit burrow, with a vehemence which showed how much he feared that we were completely buried. I could feel the snow which he dug up coming down on my legs. At last he asked for my gun. He thrust it into the hole he had formed, but still no light streamed through it. We must, however, by some means or other, force our way out or perish. "We had better try to work upwards," I observed. "The falling snow has surrounded the walls of our hut, and though we made the roof pretty thick, we are more likely to reach the open air through it than by working at the sides." The Indian followed my suggestion. Of course, we could all work together, but then we might have pulled a mass of snow down on our heads. Our object was simply to make a hole through which we could look out and ascertain if it were daylight, and if so to try and find out whereabouts we were.
We might all the time be close to our party. I earnestly hoped that we were, so that we might satisfy the cravings of hunger without delay. The Indian tried to force off the snow-shoe which formed the door, but found that impossible. He then worked away above it. The snow he brought down considerably decreased the size of our hut. Still he persevered in working away, until I thought that he would never get through the roof. At last he asked me again to hand him up my gun, and having forced the barrel upwards, as he withdrew it we could feel the cold air coming down, while a gleam of daylight entered our burrow. But it would still require much labour before we could enlarge the hole sufficiently to enable us to force our bodies through it. At last, by dint of hard work, standing on the snow he had brought down, Red Squirrel got out his head. The report he gave was unsatisfactory. Scarcely, however, listening to what he said, I jumped up and thrust out my head, eager to ascertain the state of affairs. I could see nothing but a vast plain of snow on every side without a single object to direct our steps. Snow was still falling and had already reached above the level of our hut. We could not make our way over the vast plain without our snow-shoes, and it would take a considerable time before we could dig them out; and in the meantime we should be well-nigh frozen. I drew in my head again, my face chilled by the cold air, and, sinking down to the bottom of the hut, consulted with Red Squirrel and Hugh as to what was to be done. Hunger made us all anxious to go on; but then arose the question, In what direction should we go? We might perish in the attempt to reach our friends. We accordingly agreed to wait until the snow had ceased. Red Squirrel had, in the meantime, stopped up the hole to prevent the cold from getting in. Hunger and darkness soon caused us again to drop off to sleep, and thus we must have remained some hours. 
When at length I awoke, I had neither the inclination nor power to move. I called to Hugh. He answered faintly. I had, however, my senses sufficiently about me to be aware of our perilous position. The acute sensation of hunger had gone off, and my only wish was to be left alone. I tried to rouse myself, and endeavour to get up, but sank again to the ground. I then asked Red Squirrel to take a look out. He at once rose and scrambled up to the hole. It was some time before he could force off the snow. He then told us that the snow had ceased, and that it was night, for he could see the stars shining overhead. "We must wait until morning, then," I said, thankful that I should not have to move. Once more we all dropped off into a state of stupor rather than sleep. I don't know how long we had thus remained, when I was aroused by a noise which came down the funnel. It seemed as if some animal were scratching away at the entrance. The idea seized me that it was a bear, and I thought how unable we were to defend ourselves. I felt about for my gun, forgetting that it had refused to go off. Just as I grasped it I remembered this, and desperately plunged my hand into my pouch, when at the bottom I discovered my pricker, which my numbed fingers had before failed to feel. Clearing out the nipple as well as I could in the dark, I put on a fresh cap. While doing so, I awoke my companions. Hugh answered faintly. Red Squirrel immediately got up, and together we managed to crawl to the opening through which I thrust my rifle, ready to fire should the bear show himself. The scratching continued more vehemently than before. "He'll be upon us presently," I whispered to Red Squirrel, as a gleam of light came down through the aperture. "Do you take the gun; I haven't strength enough to fire;" and I sank back quite exhausted.

CHAPTER ELEVEN. RESCUED.
THE ANIMAL PROVES TO BE ONE OF THE DOGS--WHO GOES OFF FOR RESCUE--HELP COMES AT LAST--HOW THE DOG HAD FOUND THE PARTY--EFFECTS OF THE ADVENTURE--THE PARTY REACH THE BLOCK-HOUSE AT LAST, TO FIND ARCHIE'S FAMILY ALL SAFE AND WELCOME REST.

I fully expected the next moment to see the huge claws of a monstrous grizzly as he worked his way down to us, when, instead of a growl, I heard the whine and sharp bark of a dog. It was the voice, I felt sure, of our faithful Pilot, whom we had left at our last camp, as we supposed, on the point of death. I called out his name, and he answered with a joyous bark. Presently we saw him looking down upon us, when, satisfied that we were really there, he gave another bark, and then Red Squirrel, who had clambered up to the surface, told me that he was scampering away to the southward. I tried to get out to watch him, but was utterly unable to accomplish the task, and Red Squirrel himself was too weak to help me. I felt sure, however, that the dog had gone to summon our friends. I tried to cheer up poor Hugh with the news. He seemed scarcely able to understand what had occurred, and I became greatly alarmed at his condition. We waited and waited; it seemed as if several hours had elapsed. At last Red Squirrel, who had gone to the hole, exclaimed that he saw some dark objects moving over the snow. They came nearer and nearer. I cannot describe the joy I felt when I heard Uncle Donald's voice, and presently I saw Red Squirrel's legs disappear as he was drawn up through the hole. Directly afterwards another person came slipping down. "Arrah! we've found ye at last, sure!" exclaimed Corney, lifting me in his arms. "Take up Hugh," I said, "he is in a worse state than I am." He did as I requested, but he was down again in a minute, and carrying me up, wrapped me in buffalo-robes and placed me in one of the sleighs which Uncle Donald, who was engaged in feeding Hugh from a can of broth, had brought to convey us.
Some of the broth was immediately given to me. I could have gobbled up the whole of it, for the moment I felt the fresh air the keenness of my appetite returned. "I feared, my dear lads, that you were lost!" exclaimed Uncle Donald, as he ran backwards and forwards between Hugh and me, giving us each alternately a mouthful of the food. "But through the mercy of Heaven, as I will tell you by-and-by, we were led to this spot, and now the sooner we get back to camp the better, for you require careful nursing, I suspect. It is a wonder that you have escaped." Red Squirrel came in for a portion of the broth, and, not suffering so much from hunger as we were, he was soon able, after he had swallowed the food, to move about and assist Corney in digging out our snow-shoes. As soon as they had been recovered, we set out for the camp, which we found under the shelter of a wood about two miles off. How Pilot, who had been left, as we supposed, dying in the camp, had found us out, we were curious to know. It appeared that one of the Indians had left, as he confessed, a load of pemmican behind. This the dog must have scented out after we had gone, and having eaten it, had remained sheltered during the storm under the snow. His provisions exhausted, he had set out to rejoin his companions, and on his way had providentially been led to the mouth of our burrow. Finding that he could not get us out, he had gone on, and on coming up with the party, by his extraordinary behaviour attracted attention. The moment he had had some buffalo meat, he rushed back towards where he had left us, and then pulled at Corney's and Uncle Donald's leggings, thus leading them to believe that he knew where we were to be found. The cold was intense, but as it had hardened the snow, and the dogs had greatly recovered by having had plenty of buffalo meat to eat, we made rapid progress. 
Hugh was placed in Rose's sleigh, and I had one to myself, with some of the cargo stowed at my back, for even after two days' rest we were unable to walk; Red Squirrel, however, was soon himself again, and was able to keep up with the rest of the men. More than a week had passed, when, as evening was approaching, we caught sight of a flagstaff, above a block-house, and a circle of palisades rising out of the snow on the banks of a stream backed by a lofty range of mountains, spurs of the Rockies. Though there were no trees in the immediate neighbourhood, a thick forest was seen on either side, extending backwards, and rising up the steep slopes. It was the station to reach which we had travelled many hundred miles. Descending to the river, which was frozen over, we dashed across it, and were met on the other side by a party who issued from the stockade as we approached. At first we could only make out a number of Indians, but presently a lady and five young people appeared among them. To my joy, I recognised the lady as my mother; the others were my two sisters and three younger brothers, but they had all grown so much that I should not have known them; and certainly they did not know me, for they looked greatly surprised at the affectionate greeting my mother gave me. "I am grateful, most grateful to you, Uncle Donald, for having come to our assistance," she said, as she kissed his weather-beaten cheek. "Your appearance will revive my poor husband, who is still suffering from sickness. He has not got over the fearful scenes we witnessed, and is still anxious about our safety, as the savage Indians have vowed that they will return in the spring and put us and those of their tribe who have become Christians to death, should the pest again break out among them, and I much fear, in consequence of their careless and dirty habits, that it will do so."
"Cheer up, my good niece, we will now go into the house, and then arrange what is best to be done," answered Uncle Donald. I, in the meantime, was receiving the embraces of my brothers and sisters, the latter of whom immediately rushed towards Rose, and conducted her to the house. My brothers also gave a warm greeting to Hugh. My poor father had risen to receive us. He looked fearfully thin and careworn, though our arrival, it was evident, cheered him. Very soon we were all assembled round a roaring fire in the sitting-room, thankful for our preservation from the dangers of our journey, and not a little pleased to be able to throw off our heavy clothing. The Indians took good care of Madge, Corney, and Pierre, and the rest of the party, not neglecting the poor dogs, honest Pilot especially, when the service he had rendered us was told, coming in for a large share of their favour.

CHAPTER TWELVE. ON THE ALERT.

AT THE STATION--AFTER BUFFALO--RETURN OF RED SQUIRREL FROM A SCOUT WITH NEWS OF THE BLACKFEET--A PARTY RETURN--A PARTY SENT OUT TO BRING BACK THE HUNTERS TO THE FORT--A STRANGE FIRE--RED SQUIRREL GOES OFF AGAIN ON THE SCOUT.

My brothers and sisters, Hugh, Rose, and I were very happy. The former fancied that, now we had come, all their troubles would be over. They had, however, passed a sad and anxious time; the missionary who had accompanied my father, with his wife and two children, had died, as had several of the Christian Indians, while some hundreds of the wild Indians had been swept off by the fearful pestilence. The latter had gone away south during the winter, and it was supposed that they would not return till the spring. Hugh and I occasionally went out with Uncle Donald, or Pierre and Corney, in search of buffalo or deer. We were generally fortunate enough to kill either the one or the other. Uncle Donald had lost no time in sending out trusty scouts to try and ascertain the whereabouts of the Blackfeet.
Red Squirrel, from being one of the most active and intelligent of our Indians, was thus constantly employed. The duty was a hazardous one, for, as he well knew, should the enemy catch him, they would to a certainty take his scalp. As neither buffalo nor deer had for several days appeared near the station, the hunters had to go a considerable distance in search of them. As soon as an animal was killed one of the dog-sleighs was sent out to bring in the meat. I have not described the station. It was in some respects like a fort, being entirely surrounded by palisades, both that it might be defended from an hostile attack, and for the purpose of protecting the buildings in the interior from the cold winds in winter, and to prevent the snow from drifting round them. There was a strong gate on one side which could be securely closed with bars, and a narrow platform with a parapet ran round the upper part of the palisades, from which its defenders could fire down on their assailants. It was in this respect very different from the usual missionary stations, which are entirely without defence. It had been built as a fort by the fur traders, and being in the neighbourhood of a savage and warlike tribe, it was considered prudent to repair it in the fashion I have described. When existing as a fort, it had been more than once captured and plundered by the Indians, and on one occasion the whole of the defenders had been put to death. I had one morning gone up to the platform to take a look out, when I espied far off to the southward a small herd of buffalo. Our hunters had, on the previous evening, gone off to the eastward, and, unless they should find game near, were not likely to return for some days. I hurried down to Uncle Donald to tell him what I had seen, and request permission to set off to try and kill a buffalo. "I will go with you," he said; and Hugh begged that he might accompany us. 
So we set off with our guns, hoping that, by keeping among the woods, we might get to leeward of the herd, and sufficiently near to shoot one or more beasts. My brother Alec, who was nearly as old as Hugh, went also. We hurried along on our snow-shoes, eager to get up to the herd before they should move off. This they were not likely to do, as they had found a spot where the snow was less deep than in other places, and they had got down to the grass by pawing with their feet. They did not perceive us, and the wind being north-east, we succeeded in getting round to the south of them. We then crept carefully up, and Uncle Donald, firing, brought a fat cow to the ground. Hugh and I aimed at another, which we badly wounded; but instead of running off with its head lowered, ploughing up the snow as a ship turns up the foaming water, it came charging towards us. "Now, Alec, see what you can do!" exclaimed Hugh and I, as we rapidly re-loaded; "but run aside as soon as you have fired, or the brute may kill you." I heard Alec's shot, when, looking up, to my dismay, I saw that he had missed. The buffalo was within twenty paces of us. Alec did his best to make off on one side, which, however, could not be done very rapidly with snow-shoes on. In another instant the buffalo would have reached us, when a shot which came from behind a tree laid him low, and looking round, I saw an Indian, whom I directly recognised as Red Squirrel. The rest of the herd being thus disturbed had made off. Uncle Donald now came up and thanked Red Squirrel for his timely aid. He reported that he was on his return to the fort with somewhat alarming intelligence. He had got up one night, he said, close to the Blackfeet lodges, where he observed the chiefs seated in council.
He caught the meaning of some of their speeches, from which he gathered that it was their intention, before long, to come north and avenge themselves on the white medicine man--so they called my father--for the pestilence which they asserted he had inflicted on them because they had refused to become his proselytes. Red Squirrel also stated that he had seen among them a white man, who had spoken, and tried to dissuade them from prosecuting their design. He was clothed, like them, in a dress of buffalo-robes, from which Red Squirrel argued that he had been some time among them. They seemed, however, in no way inclined to listen to the advice of the white stranger, and expressed their intention of setting out as soon as their medicine man should pronounce the time to be propitious. "We must return at once and put the station in a state of defence," said Uncle Donald, on hearing this. "The savages may be upon us in the course of two or three days, and will give us but a short time to prepare for them. It is unfortunate that the hunters are away, for we require their assistance; and should the Blackfeet fall in with them they will lose their scalps to a certainty." "I would willingly go out and try and find them," I said. "As no snow has fallen since they started, I can easily find their tracks." "I would much rather send Red Squirrel or Corney; but I'll think about it as we go along," said Uncle Donald. Pierre had gone with the hunters, so that only the Irishman and young Indian were available for the purpose. We at once turned our faces homewards, going on as fast as we could move on our snow-shoes. We thought it possible that we might find on our arrival that some of the hunters had returned, but none had made their appearance. My father looked very anxious when he heard the information brought by Red Squirrel. "We might repulse them should they attack the place, but if any are killed, what hope can I afterwards have of winning them over to the Gospel?" he said. 
"I talk to them of peace, and urge them to enlist under the banner of the Prince of Peace, and yet they find me and my friends allied in arms against them." "But if we don't defend ourselves, they will knock us on the head and carry off our scalps," answered Uncle Donald. "I will do all I can to preserve peace, and induce them to go back without fighting, should I be able to hold any communication with them. In the meantime, we must prepare to defend the fort. Archie has volunteered to go out in search of the hunters, who must be forthwith called in, but without your permission I do not like to let him go." "As it is in the path of duty, I will not forbid him," answered my father. "If Archie goes, let me go too," cried Alec. "I can run as fast as he does on snow-shoes." After some demur, Alec got leave to accompany me, for Hugh, not being quite well, was unable to go. We were in good spirits, pleased at the confidence placed in us, and only regretting that Hugh had not been able to come. The trail of the hunters was perfectly clear, leading away to the south-east. They had taken a couple of sleighs to bring in the meat, so that we had no difficulty in directing our course. We had made good nearly ten miles, and had not met any buffalo tracks, which showed us that the hunters must still be some way ahead, when we heard a voice shouting to us, and, looking back, we saw an Indian running towards us over the snow. As he was alone, we had no doubt that he was a friend, and as he came nearer we recognised Red Squirrel. He could not, he said, allow us to go without him, and as soon as he had taken some food he had set off. He had left Uncle Donald busily engaged, assisted by my father and the remaining men in the fort, in strengthening the palisades. "If the Blackfeet come expecting to get in and plunder the fort, they will find themselves mistaken," he added. 
We were very glad to have Red Squirrel with us; although, accustomed as we were to travel over the snow-covered plains, and having the mountains with whose forms we were well acquainted to the eastward, we had no fear about finding our way back, provided that the weather should remain clear. There was, of course, the possibility of a snow-storm coming on, and then we might have been greatly puzzled. Notwithstanding the fatigue Red Squirrel had gone through during the last few days, he was as active as ever, and kept us moving as fast as we could go. Before sunset we came upon the tracks of buffalo, though the animals themselves were nowhere to be seen. "We'll soon find them," observed the Indian; but though we went on some distance, neither buffalo nor hunters could we discover, and we were glad, just as night fell, to take shelter under the lee of a thick clump of poplars and spruce pine. To cut sufficient wood for our fire and clear away the snow was the work of a few minutes, and, with our pot boiling, we were soon sitting round a cheerful blaze discussing our supper. We continued sitting round the fire, wrapped in our buffalo-robes, with our feet close to the embers, every now and then throwing on a stick, while we talked and Red Squirrel smoked his pipe. I proposed that two of us should lie down and go to sleep, while the third kept watch, when Red Squirrel, getting up, said he would take a look out. Climbing up the bank, he went to the top of a knoll a short distance off. We could see his figure against the sky. In a short time he came back. "See fire out there," he said, pointing to the southward. "May be friends, may be enemies, may be Blackfeet. If Blackfeet, sooner we get 'way better." "But how are we to find out whether they are friends or foes?" I asked. "Red Squirrel go and see," he answered. "You stay here;" and taking up his gun, he quickly disappeared in the darkness, leaving us seated at our camp fire.

CHAPTER THIRTEEN. ATTACKED BY THE RED-SKINS.
PROLONGED ABSENCE OF RED SQUIRREL--FLIGHT--THE STRANGERS PROVE TO BE FRIENDS--RETURN TO THE FORT--UNCLE DONALD OPPOSES THE DOCTRINE OF NON-RESISTANCE--THE GUARD OVER THE FORT--THE INDIANS ATTACK THE FORT.

We felt very uneasy at the strangely prolonged absence of Red Squirrel. He could have anticipated no danger, or he would have advised us what course to pursue should he not return. At last, telling Alec to sit quiet, I got up, and made my way to the top of the knoll, whence I could see over the country to the southward, in the direction I supposed Red Squirrel had gone. I looked and looked in vain through the gloom of night, though I could see in the far distance the light of the fire of which he had spoken. Could he have been captured? If so, what should Alec and I do? It would be impossible to rescue him--indeed, it was too probable that he had been immediately put to death by the Blackfeet, and that we might ourselves, should we remain in the neighbourhood, be killed. I came therefore to the conclusion that we must continue our search for the hunters to the eastward, keeping at the same time a watchful eye in the direction in which we had seen the fire of our supposed enemies. I say supposed enemies, because I still had a lingering hope that, after all, the fire might be at the hunters' camp. Such were the thoughts which passed through my mind as I stood on the top of the knoll. I had not been there many minutes before I recollected how clearly I had seen Red Squirrel in the same position against the sky. Instead, therefore, of remaining upright, I stooped down until I reached a thick bush, behind which I crouched, as well able as before to see any objects moving in the plain below. At last I thought that it was time to go back to Alec, and was on the point of descending the knoll, when I fancied that I saw some objects moving along the ground. I remained stock still, scarcely daring to breathe, with my eyes fixed on the spot.
They were human beings--Indians I felt sure; if so, they would soon see our fire, and we should be discovered. While there was time I hurried down the knoll and flew to Alec. I made a sign to him to take up his rifle and buffalo-robe, with a few other articles left on the ground, and led the way through the wood. Here we might remain concealed until the savages had gone away, and then try to get back to the fort. I had no great hopes of success, still, it was the only thing to be done. We had reached the spot, and it was some way from the fire, but we were still able to see it by raising our heads over the bushes. We had both knelt down behind the bush, with our rifles ready to raise to our shoulders at any minute. Alec, only the moment before I returned, had thrown some wood on the fire, so that it was now blazing up brightly, and we could see all the objects round it. Just then three figures appeared. Two were Indians--there could be no doubt about it; but the other we could not make out clearly. They advanced, looking eagerly around, but as they came more into the light, instead of savages, with scalping knives in hand ready to kill us, great was our joy to discover that one was Pierre, and the others Red Squirrel and Kondiarak. They looked very much astonished at not seeing us. We did not keep them long in suspense, and Pierre then told us that they had come on purpose to advise that we should at once return to the fort, without waiting for daylight. They had been successful in hunting, having killed three buffalo cows, with the meat of which the sleighs were already packed, and as the track was formed, the dogs would find their way without the slightest difficulty. We reached the fort without having seen the enemy, and, as may be supposed, were heartily welcomed. Our arrival restored the spirits of my poor father and mother, who were very anxious, not so much for themselves as for my younger brothers and sisters.
They were prepared to die, if God so willed it, in the path of their duty. My father was still very unwilling to resort to force, and proposed going out himself to meet the enemy to try and induce them to turn back. Uncle Donald, however, told him that as he was the object of their vengeance they would, to a certainty, seize and torture him, and then probably come on and endeavour to destroy the fort. Thus no object would have been gained, as we should do our utmost to defend ourselves, and his life would be uselessly sacrificed. "But I should have done my duty in attempting to soften the hearts of the poor savages," answered my father, meekly. "My good nephew, it's just this, I'm not going to let ye have your scalp taken off," said Uncle Donald, bluntly. "I am commander here for the time being, and no man, not e'en yourself, shall leave the fort without my leave. If the savages come they must take the consequences." My father did not reply, but I am very sure that, had he been left to act by himself, he would have carried out his intentions, and would most probably have perished. From Pierre's report we fully expected every minute to see the Blackfeet appear. To each man under Uncle Donald's directions a post was assigned, which he was charged to defend with his life. Orders were, however, given that no one was to fire until the word of command was received. Hugh, Alec, and I were stationed together, and highly proud we were at the confidence placed in us, as the post we had to maintain was one of the most important. The day wore on, but we were still unmolested, and at last darkness came down upon us. The winter, it will be remembered, was not yet over. To defend ourselves from the intense cold we all put on as many buffalo-robes and bear-skins as we could wear, and Hugh declared that we looked like a garrison of grizzlies.
It was cold enough during the day, but it was still colder at night; notwithstanding this, as Alec and I had had no sleep for many hours, we found it difficult to keep awake. We, therefore, rolling ourselves up in our wraps, lay down, while Hugh stood ready to call us at a moment's notice. There were, however, sentries enough to keep a look-out, and Uncle Donald continued going round and round the fort, seeing that they were watchful. The dawn was approaching; it was the time the Red-skins often make their attacks, as they expect to find their enemies buried in sleep. When morning at last came, and no enemy had appeared, we began to hope that no Blackfeet had as yet reached the neighbourhood. Another day was drawing on. Except a few men who remained on guard, the rest of the garrison lay down to sleep, that they might be more watchful the following night. I spent a short time with my mother and sisters and Rose, and did my best to encourage them, but I could not help feeling that possibly it might be the last time we should be together on earth. By Red Squirrel's report, the Blackfeet were very numerous, and they are noted for being the most savage and warlike of all the northern tribes. The next night was almost a repetition of the former, except that Alec and I kept watch, while Hugh lay down to sleep. Uncle Donald, as before, went his rounds, and there seemed but little risk of our being taken by surprise. He had just left us, when Hugh, who had got up and was standing near me, whispered-- "I see something moving over the snow. There! there are others. Yes, they must be Indians." "Wait until we are certain," I answered, in the same low voice; "and then, Alec, run round and tell Uncle Donald." We were not left long in doubt before we all three were certain that the objects we saw were Indians, and that they were trying to keep themselves concealed. Alec set off to find Uncle Donald. He had not been gone many seconds, when fearful yells rent the air. 
Before us up started hundreds of dark forms, and a shower of bullets and arrows came flying above our heads. CHAPTER FOURTEEN. AN OLD FRIEND. THE BLACKFEET MEET A WARM RECEPTION--AND RETREAT--A WOUNDED INDIAN-- PROVES TO BE PONOKO, WHO TELLS OF A WHITE MAN IN THE INDIAN CAMP--A FRIENDLY CONFERENCE. The moment the war-whoop of the Blackfeet had ceased Uncle Donald's voice was heard, ordering us to fire. We obeyed with right good will, and must have greatly astonished the savages, who, not aware of the increased number of our garrison, had probably expected to gain quite an easy victory. Many of them had muskets, but the larger number could only have been armed with bows and arrows. After they had shot five or six showers of arrows and fired their guns--fortunately, without hitting any of us, though we could hear their missiles pinging against the thick palisades--they suddenly ceased, and began to retreat, when Uncle Donald shouted to them in their own language, inquiring why they had attacked people who had done them no harm, but were anxious to benefit them. No reply came. Our men uttered a shout of triumph. Uncle Donald stopped them. "The Blackfeet have retired, but I know their cunning ways, and I deem it more than likely that they will be down upon us again when they think to catch us off our guard or maybe they have devised some treacherous plot to entrap us." We waited, but, as far as we could judge by the sounds which reached our ears, the savages had really retreated, and did not intend to attack us again that night. 
That they would give up their object was not to be expected, and my father proposed, should we find they had gone to a distance, that, rather than cause more bloodshed, we should abandon the station and retreat to one of the company's forts to the northward. "We have sleighs sufficient to convey the women and children," he added; "and when the anger of the misguided people has subsided, I will return by myself, and endeavour to win them over by gentle means, for such only should be employed to spread the Gospel among the heathen." "You are very right in that respect, but though we may get to some distance, when the Blackfeet find that we have gone, they will to a certainty follow on our trail and quickly overtake us," answered Uncle Donald. "I cannot consent to such a plan; we must show them that we are able to defend ourselves, and let their blood be upon their own heads if they persist in attacking us. We will, however, try how negotiation will succeed. I used to be well-known among them, and I propose to-morrow, should they not again attack the fort, to go singly into their camp and invite them to smoke the calumet of peace. Should I be detained, you must promise to hold out to the last, and not on any account trust to what they may say. We will, in the meantime, send a messenger to Rocky Mountain House, entreating for assistance. I feel sure that the officer in charge will send as many men and horses as he can spare to enable you to escape, or defend the fort, if necessary." My father and mother entreated Uncle Donald not thus to risk his life; but he was firm in his resolution. My father then proposed going with him, but to this Uncle Donald would not consent. A considerable portion of the night was consumed in these discussions. A vigilant watch was of course kept, but no one could be seen stirring outside the fort. Having taken a brief nap, just before dawn I returned to my post on the ramparts. 
As daylight increased I fancied that I saw the body of a man lying under a bush some distance from the fort. Yes, I was certain of it. I pointed him out to Hugh, and we both fancied that we saw an arm move. "He is one of the savages who was shot in the attack last night, and, unperceived by his companions, he must have fallen where we see him," observed Hugh. While we were speaking, some of the Indians we had brought with us--who, though faithful servants, were still heathens--caught sight of the body. Lowering themselves down without asking leave, they were rushing, with their scalping knives in their hands, towards the hapless being. Uncle Donald at that instant coming up on the ramparts saw them, and guessed their object. "Come back, you rascals!" he shouted. "Whether that man be alive or dead, don't touch a hair of his head!" As they did not stop he fired his rifle, the bullet passing just in front of the leading Indian, who now thought it time to come to a standstill. "Archie and Hugh, you go and look after that poor fellow, and make our people bring him in," continued Uncle Donald. We instantly obeyed, for although the height was considerable we could manage to drop to the bottom without injuring ourselves. We then ran as fast as our legs could carry us to overtake our Indians. Having delivered Uncle Donald's orders, we then hurried on to where the Indian lay. At a glance I saw that he was desperately wounded from the blood which flowed from both his legs, while another shot had rendered his right arm powerless. His eyes still wore a defiant expression, and he appeared to fancy that we were about to kill him. By signs and such words of his language as we could speak, we endeavoured to make him understand that we had come to carry him into the fort to try and save his life. 
As there was not a moment to be lost, we first bound up his wounds, and then ordering our people to assist us we lifted him from the ground and hurried towards the fort, meeting on our way Uncle Donald, who had the gate open to admit us. Without stopping we carried the wounded man into the house, where my father, who had risen, was ready with bandages and salves to attend to him. My mother, meantime, was preparing some strong broth, which our prisoner eagerly swallowed. It had an almost instantaneous effect in reviving him. Uncle Donald, who had in the meantime been going round the fort to ascertain if more wounded had been left in its neighbourhood, now entered the room, and as his eye fell on the countenance of our captive, he exclaimed, "Ponoko! Do you remember your white friend?" The Indian made a sign that he was the person supposed, though he was too weak to speak. Uncle Donald then told him that although he had come as an enemy he should be well cared for. In a short time the judicious treatment he was receiving enabled him to utter a few words. He seemed grateful for the care taken of him, and his eyes brightened when my young sisters and Rose brought him the soup, which he received almost every hour. He especially noticed Rose, and when Uncle Donald came to see him, inquired, in a tone of evident interest, who she was. "You are right if you think you remember her, for she is the little girl you saved when your people attacked the village in the territory of the Long-knives some years ago," answered Uncle Donald. "Will you now let me take her back?" asked Ponoko. "Do you think it likely that I should consent?" said Uncle Donald. "Her ways are not the ways of your people. She would pine and die were she to be treated as your women are treated." "But there is one who has long lived with us whose heart would be rejoiced to see her," said Ponoko. "You may remember when I parted from you I promised to try and save the lives of any of our pale-faced prisoners. 
I succeeded in saving that of one man just as he was about to be tortured and killed, but it was on condition that he would swear to remain with us, and never betray us to our enemies. He was a great hunter, and brave as the bravest among us. He also, we found, was not one of the Long-knives, but was a subject of the Queen of the Pale-faces. He has kept his promise, though he might often have made his escape. He had been many months with us, before I found how sorely his heart yearned to get away, and I would have set him free, but the other chiefs would not consent. He looked upon me as his friend. He told me that his child and all his household had died by the hands of our people, except his wife, who was away in one of the big cities in the east at the time we attacked the place. I was thus led to tell him of the little girl I had saved and given over to you, and he has ever since been hoping that she might prove to be one of his children. He has hoped and hoped until he has persuaded himself that such she is. Thus I know how it would rejoice his heart to see her." "I have strong doubts about that," answered Uncle Donald. "He would rejoice to see her, but not to have her among your people, from whom she differs so greatly. The only way truly to benefit him would be to set him at liberty and allow him to return among the Pale-faces to whom he belongs." "But how can that be while I am sick and a prisoner with you?" asked Ponoko. "You'll recover, I hope, ere long, and as you have fulfilled your promise on one occasion, I feel confident that you will not disappoint us if we set you at liberty on your undertaking to restore this white stranger to his people." "Ponoko always keeps his word," answered the Indian in a proud tone. "But should the Blackfeet, in the meantime, attack us, we may be destroyed, and they may take you away with them," observed Uncle Donald. 
"If my people come, you shall carry me out on a litter; I will tell them how well the Pale-faces have treated me, and will urge them, instead of fighting, to make a lasting peace with my white father and his friends," said Ponoko. "I will trust you, my brother," said Uncle Donald, pressing Ponoko's hand. "I pray that you may soon be restored to health, and that you will teach your people that it is to their true interests to be at peace with the white men, and to trade honestly with them." CHAPTER FIFTEEN. A HAPPY ENDING. PONOKO RECOVERS--TIME PASSES WITHOUT FURTHER ATTACK--AND MEAT HAS TO BE PROCURED--RED SQUIRREL AGAIN SENT ON SCOUT--RETURNS PURSUED BY SIX BLACKFEET--TIMELY RESCUE--POOR RED SQUIRREL IS QUITE EXHAUSTED--THE BLACKFEET RETURN IN LARGE NUMBERS--PONOKO GOES OUT TO MEET THEM--EFFECT OF HIS APPEARANCE ON THE TRIBE--HE RETURNS WITH A WHITE MAN--ROSE FINDS A FATHER--AND BOTH FIND A WIFE AND MOTHER--ALL ENDS HAPPILY AT LAST. Day after day went by, and the Blackfeet did not appear. Ponoko, never having indulged in the pernicious fire-water, was rapidly recovering under my father's judicious care and the attention he received from Rose and the rest of the family. We had not yet told her of the possibility that her father had escaped and might be restored to her. I suspect that she would not have understood us had we done so, for she looked upon Uncle Donald as her father, though she called him "Uncle" as Hugh and I did. Indeed, all the events of her life which had occurred before the fearful night of the massacre appeared to have faded from her memory. At length, as the Blackfeet had not shown themselves, we began to hope that they would allow us to remain at peace, and Uncle Donald already talked of returning home. 
He proposed that my mother and father and the rest of the family should accompany him, but my father replied that nothing should induce him to quit his post, unless driven away by the savages, and that he would then retire, with his converts, to some spot among more friendly tribes further north. Among other signs of returning spring was the appearance of a herd of buffalo passing in the far distance, and as our provisions were again running short, Uncle Donald was compelled to allow the hunters to set off for the purpose of killing some of the animals. Hugh and I wanted to accompany them, but he would only allow Pierre, and Corney, and four of the most active red men to go on the expedition. As soon as they set out, he sent off Red Squirrel to try and ascertain the whereabouts of the Blackfeet camp, with directions to come back should he discover that they were on the move. We waited day after day for Red Squirrel's expected return, but he did not appear, and we began to have serious apprehensions that he had been captured. The hunters, however, had come back with a good supply of buffalo meat, so that we should be well prepared in case we should be besieged. At last, one evening as I was looking out towards the south, I saw several objects moving across the prairie. At first I thought that they might be deer or wolves, or even smaller game. One was leading considerably ahead of the rest. They were coming towards the fort. Besides the first I counted six others. I called the attention of my companion to them. "They are men!" exclaimed Ponoko. "Those six are of my tribe; they are in pursuit of the first! He must run fast, or before he can reach the fort they will overtake him. Already I see by his movements that he is fatigued." I had little doubt but that the leader was Red Squirrel. I asked Ponoko, whose keen eyes could distinguish his dress better than the rest of us could do. "Yes, he is your young friend," he answered. "See, see! 
he is increasing his speed, he may still escape, and my people will go back disappointed. They will not dare to come within range of your rifles." "Then we will go out and meet them!" I exclaimed, hurrying down. I told Uncle Donald what Ponoko had said. Taking our rifles, and buckling on our snow-shoes, Hugh, Alec, Pierre, Corney, and I hurried out of the fort, and set off running faster, I think, than we had ever run before, to meet the hard-pressed fugitive. Once more his pursuers were gaining on him; before long their scalping knives might be about his head. He was the first to perceive us approaching, and it seemed to add fresh nerve to his legs. Soon afterwards the Blackfeet caught sight of us. The instant they did so they sprang forward, making a last desperate effort to overtake our friend; but perceiving that we had rifles ready, they well knew that, even should they succeed, we should make them pay dearly for the act. Giving up the chase, therefore, they stopped, and turning round, ran off at a rate which soon placed them beyond our reach. In a few moments Red Squirrel was up to us, but so hard-pressed had he been that he was unable to tell us what had happened. We supported him, not without difficulty, to the fort, when his snow-shoes being taken off, had he not been resting in our arms, he would have sunk fainting to the ground. We delivered him over to his mother, who chafed his limbs, and used every other means she could devise for restoring his strength. It was some time before he could speak. He had ably fulfilled his mission, having watched the enemy's camp until the previous day, when finding that they were about to move northward, he had set off to bring us tidings of their approach. He was, however, observed, and six of their fleetest runners had pursued him. Hour after hour he had continued his flight, though he confessed that, had we not come to his assistance, he should, he believed, have fallen even in sight of the fort. 
That night was an anxious one. Frequent alarms were raised that the enemy were upon us. At length the morning broke, and as the sun rose above the eastern prairie his beams fell on the plumed heads and trappings of several hundred warriors, who came on, confident in their numbers, and believing that our small garrison would easily become their prey. They halted when considerably beyond range of our weapons, and having sung a war-song, gave utterance to one of those terrible whoops which are said to paralyse even horses and cattle. Ponoko had, in the meantime, dressed himself in the costume in which he had been discovered when lying wounded, and the gate being opened, he sallied forth with feeble steps, very different from his once elastic tread. The gates of the fort were closed behind him, and he proceeded towards the warriors drawn up in battle array. We watched him as he approached them. At length he stopped, and stretching out his arms, addressed his people. The effect on his tribe of what he said was almost electrical. They looked upon him as one restored from the dead, for they had long mourned him as lost. We watched him until he was among them, when, after some time, he reappeared, leading by the hand a person who, though dressed in Indian costume, we saw was a white man. Together they approached the fort, when the gate was opened to receive them. The stranger gazed round with looks of astonishment, evidently endeavouring to find the words to express himself. At last he said-- "I can scarcely believe my senses. A few minutes ago I was a prisoner, and threatened by the Indians with a cruel death should they again be defeated." "We are truly thankful that you have escaped," answered Uncle Donald, advancing and taking his hand. "You owe your preservation to our friend Ponoko here." "I am indeed grateful to him," said the stranger. "He preserved my life when so many of my companions were massacred. 
He has ever since continued my protector, but when it was supposed that he was killed, his people threatened to avenge his death by murdering me. Grateful as I am to him and to you, I am restored to liberty a ruined and a childless man, while I know not what has become of my poor wife, who was providentially absent from the settlement at the time of the massacre, but will have supposed that I, as well as our little girl, shared the common fate," answered Mr Kennedy, for such he told us was his name. "Should your child have escaped, do you believe you would recognise her?" asked Uncle Donald. "Among a hundred!" answered the stranger. "I should know her, however much grown, from her likeness to her mother." As he spoke my sisters and Rose approached. The stranger glanced at the group, then rushing forward, gazed earnestly into Rose's countenance. "You would not deceive me!" he exclaimed. "Say, how did this young girl come to be with you? Rose, do you recollect me? Speak, my child, are you not Rose Kennedy?" "Kennedy! Kennedy!" murmured Rose, looking greatly astonished and somewhat frightened. "Kennedy! Yes, that was my papa's name." "You are my own child!" he exclaimed, kissing her brow and cheeks again and again while he held her in his arms. The lookers-on were greatly moved. It was some time, however, before Rose could fully comprehend that the stranger was her father, and that she belonged to him rather than to Uncle Donald. Mr Kennedy now eagerly inquired whether we could give him any tidings of his wife. "Extraordinary as it may seem, I think I am able to do so," said my father. "On stopping at the Red River settlement on our way hither, I met a Mrs Kennedy, whose husband and child had, I heard, been murdered by the Indians." I should like to prolong my history, but I must be brief. Ponoko, after remaining a day or two with us, went among his tribe, and persuaded them that it would be to their advantage to live peaceably with their neighbours. 
Not many years after they entered into a treaty with the Canadian Government, and the fearful state of warfare which for so long a period had existed in that fair northern region almost entirely ceased. We were very, very sorry to lose Rose, but Mr Kennedy was, of course, most anxious to join his wife. As soon as he could travel he set off for the Red River. He promised to return and bring his wife and Rose with him, having accepted an invitation from Uncle Donald to settle at Clearwater. In course of time, Hugh, Alec, and I established in its neighbourhood several fairly flourishing farms, of one of which Hugh, with Rose as its mistress, became the owner. My father laboured for many years among the heathen, greatly aided by Ponoko. The entire country, including the Rocky Mountains over which we passed, now forms part of the great Canadian dominion, and probably, before another generation has passed away, the whole region, from east to west, will be the home of happy and flourishing communities. THE END. End of the Project Gutenberg EBook of Among the Red-skins, by W.H.G. Kingston ***
Q: jquery Object Iteration with Alert Can someone explain how to iterate through all my object values with an alert? // get a single audio url echonest.artist("Prince").audio(function(audioCollection) { $('#artistAudioURL').append(audioCollection.data.audio[0].url); }); A: Which object do you need to iterate over? jQuery's method for iteration is $(object).each( function() {} )
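A minimal sketch in plain JavaScript of walking every property of every returned item; the response shape below is hypothetical, mirroring the snippet in the question. In a browser you would swap `messages.push(...)` for `alert(...)`, or use jQuery's `$.each(audioCollection.data.audio, function(i, item) { ... })` equivalently:

```javascript
// Hypothetical stand-in for the echonest response shown in the question.
const audioCollection = {
  data: {
    audio: [
      { url: "http://example.com/a.mp3", title: "Track A" },
      { url: "http://example.com/b.mp3", title: "Track B" }
    ]
  }
};

// Collect one line per property of each item; each line could instead be
// shown with alert() in a browser.
const messages = [];
audioCollection.data.audio.forEach(function (item, i) {
  Object.keys(item).forEach(function (key) {
    messages.push(i + ": " + key + " = " + item[key]);
  });
});

console.log(messages.join("\n"));
```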
function z = Y(R)
%Y  Forms a 6x6 rotation matrix from a 3x3 rotation matrix R.
z = [[R, zeros(3)]; [zeros(3), R]];
<?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8"/> <meta name="keywords" content="W3C SVG 1.1 Test Suite testsuite mobile"/> <meta name="description" content="W3C SVG 1.2 Tiny Test Suite"/> <title> SVG 1.2 Tiny test:animate-elem-13-t </title> <style type="text/css"> <!-- .bodytext { font-family:verdana, helvetica, sans-serif; font-size: 12pt; line-height: 125%; text-align: Left; margin-top: 0; margin-bottom: 0 } .pageTitle { line-height: 150%; font-size: 20pt; font-weight : 900; margin-bottom: 20pt } .pageSubTitle { color : blue; line-height: 100%; font-size: 24pt; font-weight : 900 } .openChapter { color : blue; line-height: 125%; font-weight : 900 } .openSection { color : blue; line-height: 125%; font-weight : 900 } .info { color : black; line-height: 110%; font-size: 10pt; font-weight : 100 } p { margin-top:0; margin-bottom:0; padding-top:0; padding-bottom:0 } blockquote { margin-top:0; margin-bottom:0; padding-top:0; padding-bottom:0 } .opscript {margin-left: 3%; margin-right: 3%; } .opscript p { margin-top: 0.7em} .navbar {background: black; color: white; font-weight: bold} a,a:visited { color: blue } --> </style> </head> <body class="bodytext"> <div class="linkbar"> <p> <a href="animate-elem-13-t.html">Tiny version</a></p> <p>Specification link: <a target="spec" href="http://www.w3.org/TR/SVGMobile12/animate.html">16.2 Animation elements</a></p> <p> <a href="animate-elem-12-t.html">animate-elem-12-t ←</a> <a href="index.html">index</a> <a href="animate-elem-14-t.html">→ animate-elem-14-t</a> </p></div> <table align="center" border="0" cellspacing="0" cellpadding="10"> <tr> <td align="center" colspan="3"> <table border="0" cellpadding="8"> <tr> <td align="center" colspan="2" class="pageTitle"> <h1>animate-elem-13-t</h1> </td> </tr> 
<tr class="navbar"> <td align="center"> SVG Image </td> <td align="center"> PNG Image </td> </tr> <tr> <td align="right"> <object data="../svggen/animate-elem-13-t.svg" width="480" height="360" type="image/svg+xml"><p style="font-size:300%;color:red">FAIL</p></object> </td> <td align="left"> <img alt="raster image of animate-elem-13-t" src="../png/animate-elem-13-t.png" width="480" height="360"/> </td> </tr> </table> </td> </tr> </table> <div class="opscript"> <p>Test 'from', 'by', 'to' and 'values'</p> <p> Six animations have been defined. All six animations define the same simultaneous behavior, but use different combinations of attributes 'from', 'by', 'to' and 'values'. In all cases, from time 2 seconds to time 5 seconds, the rectangle should change from a width of 30 to a width of 300. </p> <p>The red text shows the attributes that were used for that particular animation.</p> </div> <div class="linkbar"> <p> <a href="animate-elem-12-t.html">animate-elem-12-t ←</a> <a href="index.html">index</a> <a href="animate-elem-14-t.html">→ animate-elem-14-t</a> </p></div> </body> </html>
Celebrate spring with Baseball! Enjoy a special rate that includes a cold beer (with ID) or beverage! Columbia Alumni will meet up at a pregame happy hour and will all be seated together as a group. Great opportunity to hang out like the good old days in NYC! • On game day please meet the ColumbiaDC board member Eun Joo Yi at the Right Field gate between 2:00-3:00 pm to get your ticket/s. Her contact is listed below. RF gate is located on first street, about 200 feet from the Will Call booths. Happy Hour at the Miller Lite Scoreboard Walk bar features $6 beers before the game. This pregame happy hour is held at the Miller Lite Scoreboard Walk bar prior to the game and until the first pitch. The Miller Lite Scoreboard Walk opens 1 ½ hours before game time on Thursdays and 2 ½ hours before game time on Fridays & Saturdays. In the event of a rain delayed start, happy hour will end two and a half hours after the ballpark gates open. Each Miller Lite Party Night ticket pack purchased includes the choice of 1 Coca-Cola product, Dasani water or beer (with valid ID). Please proceed to the Miller Lite Scoreboard Walk Bar, and present your ticket at the register where the barcode of your ticket will be scanned for verification of redemption. Drinks can only be redeemed at the Miller Lite Scoreboard Walk bar on day of game. Must be over 21 to redeem for beer and valid ID is required. Alcohol sales in Nationals Park cease at the end of the 7th inning. The Miller Lite Scoreboard Walk bar is located behind sections 240-243.
Q: Is there an equivalent of a file's atime for a MySQL row? This might seem like a weird comparison at first glance, but anyway: every time you read a file on Linux, its atime is updated. Is there an equivalent behaviour in MySQL, such as: every time a row is targeted by a SELECT, a last access time is updated? That could prove very useful when storing sessions in the database: you don't want to perform an UPDATE after every SELECT just to say "this row is fresh". Any thoughts?
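MySQL has no row-level counterpart to atime: a plain SELECT never modifies the row it reads, so a "last access" column must be maintained explicitly with the UPDATE-after-SELECT the question hopes to avoid (a `TIMESTAMP ... ON UPDATE CURRENT_TIMESTAMP` column only fires on writes, not reads). A minimal sketch of that touch-on-read pattern, using Python's standard sqlite3 module as a lightweight stand-in for MySQL; the table and column names are illustrative:

```python
# Sketch of the explicit "touch on read" pattern for session rows, using
# the standard-library sqlite3 module as a stand-in for MySQL.
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE sessions (id TEXT PRIMARY KEY, data TEXT, last_access REAL)"
)
conn.execute("INSERT INTO sessions VALUES ('abc123', 'payload', 0)")

def read_session(session_id):
    """Fetch the row, then record the access time in the same transaction."""
    with conn:  # commits the SELECT + UPDATE together
        row = conn.execute(
            "SELECT data FROM sessions WHERE id = ?", (session_id,)
        ).fetchone()
        if row is not None:
            conn.execute(
                "UPDATE sessions SET last_access = ? WHERE id = ?",
                (time.time(), session_id),
            )
    return row[0] if row else None

print(read_session("abc123"))  # prints "payload"
```

Batching these touches (e.g. updating last_access at most once per minute per session) keeps the write amplification manageable.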
Modernized Museo nina Juan at Antonio Luna in Ilocos The National Historical Commission of the Philippines (NHCP) will open the modernized Museo nina Juan at Antonio Luna at Rizal Street, Badoc, Ilocos Norte on April 27 at 11:00 a.m. The Luna brothers engaged in the reform and the revolutionary movements aimed at securing Philippine independence. Juan Luna became renowned for his excellence in painting. Many of his works, such as the Spoliarium, the Blood Compact, and the Death of Cleopatra, won awards in various expositions abroad. In 1898, he became a diplomat of the Philippines. At the request of his brother Antonio, he designed the uniforms of the Philippine army. Antonio Luna, an accomplished pharmacist, was active in the Propaganda movement in Spain. He was arrested for suspected complicity to the revolution in 1896. After his release, he joined the Philippine revolution as editor-in-chief of the revolutionary newspaper La Independencia, as a member of the Malolos Congress, and a general of the Philippine army. His 150th birth anniversary will be commemorated on October 29. The museum, located in the reconstructed brick house of the Luna family, relates the life and career of the brothers through various displays and interactive exhibits. A reconstruction of Juan's Paris studio and an augmented-reality simulation of trench warfare are among the museum's new features. The museum also has an e-learning facility where students can access the NHCP's online lessons on Philippine history. Maria Isabel Ongpin, a descendant of Juan and Antonio's brother Jose, is guest of honor. 
The NHCP is the national government agency mandated to promote Philippine history through its museums, research and publication, and to preserve the nation's historical heritage through conservation and the marking of historic sites and structures. Source: Tribune
{"url":"http:\/\/www-nature-com-s.caas.cn\/articles\/s41893-022-00849-0?error=cookies_not_supported&code=87f1ae7b-b6e1-4049-ac42-b9d8b5a9aed0","text":"Thank you for visiting nature.com. You are using a browser version with limited support for CSS. To obtain the best experience, we recommend you use a more up to date browser (or turn off compatibility mode in Internet Explorer). In the meantime, to ensure continued support, we are displaying the site without styles and JavaScript.\n\n# Co-benefits of forest carbon projects in Southeast Asia\n\n## Abstract\n\nForest carbon projects can deliver multiple benefits to society. Within Southeast Asia, 58% of forests threatened by loss could be protected as financially viable carbon projects, which would avoid 835\u2009MtCO2e of emissions per year from deforestation, support dietary needs for an equivalent of 323,739 people annually from pollinator-dependent agriculture, retain 78% of the volume of nitrogen pollutants in watersheds yearly and safeguard 25\u2009Mha of Key Biodiversity Areas.\n\n## Main\n\nForest conservation is an important nature-based solution for achieving the goal of the Paris Climate Agreement to limit global warming to below 2\u2009\u00b0C (refs. 1,2). Growing demand for high-quality, nature-based carbon credits from the private and public sectors to meet their climate and sustainability goals presents new opportunities for carbon projects to deliver climate and other benefits to society2,3.\n\nSoutheast Asia consists of approximately 196 million ha of tropical forests, many of which are under threat from agricultural expansion and other economic activities4. It was estimated that deforestation in the region contributed to 2.56\u2009GtCO2e\u2009yr\u22121 of emissions between 2005 and 20105, and further losses will probably exacerbate climate change impacts. 
There is thus great potential to implement large-scale carbon projects in the region that focus on avoided deforestation as a nature-based climate solution, with countries such as Cambodia already committing to a 59.1% reduction of their emissions from forestry in their Nationally Determined Contributions by 20306.\n\nImportantly, forests within carbon projects also provide essential contributions to people, including pollination service for pollinator-dependent agriculture and water quality regulation, as well as biodiversity conservation1,4. Certification bodies and standards typically account for the climate mitigation potential of forest carbon projects, with the abovementioned co-benefits increasingly recognized through standards such as the Gold Standard (goldstandard.org) and Verra\u2019s Climate, Community and Biodiversity Standard (verra.org\/project\/ccb-program)3,7.\n\nYet, these co-benefits are typically documented at the project level and are typically only measured qualitatively rather than being systematically considered or prioritized during earlier stages of policy and decision-making processes. A robust accounting and recognition of these co-benefits as potential socio-economic and environmental returns on investment can help inform climate policies, strategies and decisions at national, regional and global levels.\n\nHere we assessed the co-benefits of establishing carbon projects that focus on avoided deforestation across Southeast Asia. First, we mapped the locations of standing forests that could be protected as financially viable carbon projects based on net present values (NPVs) and considering additionality over a 30-year time frame2 (see the Methods for the details). 
We then modelled the extent to which carbon projects would (1) mitigate climate change from the avoided emissions from deforestation2, (2) support crop pollination services for pollinator-dependent agriculture8,9, (3) maintain water quality regulation services for downstream rivers and lakes by retaining nitrogen in watersheds8,9, and (4) safeguard Key Biodiversity Areas (KBAs)10.\n\nWe find that 114 million ha of forests in Southeast Asia could be protected as viable carbon projects (NPV\u2009>\u20090) on the basis of our conservative starting carbon pricing scenario of US$5.80 per tCO2e (refs. 2,3,11). Protecting forests through carbon projects would thereby avoid 835 \u00b1 348 MtCO2e of emissions from deforestation across the region per year (Fig. 1a and Supplementary Table 2). Forests in the Indonesian provinces of Riau and West Kalimantan have the greatest climate mitigation potential at up to 49 tCO2e ha\u22121 yr\u22121. Forest carbon projects in proximity to agricultural lands also provide important foraging and nesting habitats for wild pollinators4,8,9. These pollinators not only ensure the ecosystem health of adjoining forest patches but also support pollinator-dependent agricultural production and nutritional services within the immediate vicinity. We find that this benefit can serve the dietary needs of an equivalent of 323,739 \u00b1 18,725 people across the region every year, on the basis of pollinated micronutrient production and dietary intake requirements (Fig. 1b and Supplementary Table 3). This service is particularly important in the Malaysian state of Sabah, where pollination service supported by each hectare of protected forest provides enough micronutrient production to fully meet the needs of up to 42 people, with more people potentially benefiting from having their nutritional needs even partially supported by pollination. Forests are also known to absorb nutrients such as nitrogen from the environment for biomass growth and metabolism. 
This uptake would in turn reduce the amount of nutrients that flow into freshwater habitats within the area's watersheds and thereby improve the quality of water flowing downstream, reducing the need for added treatment of potable water8,9. On the basis of an InVEST Nutrient Delivery Ratio model8,9, we find that 2.86 ± 0.03 Mt of nitrogen pollutants (representing an estimated 78% of potential nitrogen pollutants across Southeast Asia) per year would be avoided through the establishment of carbon projects (Fig. 1c and Supplementary Table 3). This is particularly important for people who rely on the Mekong River, where nutrient loads from surrounding agriculture may impact livelihoods and access to clean drinking water. KBAs are sites that contribute greatly to the global persistence of biodiversity10. Protecting forests through carbon projects would thus conserve 25 ± 3 Mha of KBAs in Southeast Asia, which represents half of all terrestrial forest KBAs in the region (Fig. 1d and Supplementary Table 3). We also identify hotspots where the establishment of carbon projects could deliver multiple co-benefits. We find that there are an estimated 6.6 Mha of forests in Southeast Asia that deliver some level of all four assessed benefits (Fig. 1e). Most of these hotspots are located in Thailand (1.7 Mha) and Indonesia (1.6 Mha). We also find that approximately 107 Mha of forests in the region would deliver at least one co-benefit in addition to climate change mitigation. Our findings are based on a conservative starting carbon price of US$5.80 per tCO2e (refs. 2,3,11). If carbon prices increased in the future, we would expect an increase in the regional extent of forests that could be protected as financially viable carbon projects.
This in turn implies an increase in the quantity of co-benefits that could be delivered to society7.

We thus performed an additional analysis to assess the effects of carbon pricing on the delivery of co-benefits from forest carbon projects in the region (Fig. 2). We find that an increase in the carbon price to US$25 per tCO2e (the average price of carbon adopted by Western nations11) would result in corresponding increases in climate mitigation potential (from 835 ± 348 MtCO2e yr⁻¹ to 875 ± 364 MtCO2e yr⁻¹; a 5% increase), crop pollination (from 323,739 ± 18,725 to 372,390 ± 17,225 equivalent people fed; a 15% increase), water quality regulation (from 2.86 ± 0.03 Mt to 3.76 ± 0.02 Mt of nitrogen retained; a 24% increase) and biodiversity conservation (from 25 ± 3 Mha to 35 ± 3 Mha of KBAs protected; a 29% increase). Further increases in the carbon price would result in diminishing returns in benefits (Fig. 2 and Supplementary Tables 4 and 5). Consequently, such increases in carbon prices could promote the financial viability of carbon projects, allowing them to compete with other potentially lucrative land-use alternatives (such as palm oil production12). Alternatively, mechanisms such as payments for ecosystem services and other conservation strategies could complement the establishment of carbon projects to further incentivize landholders to invest in protection and potentially increase the likelihood of the permanence of protections2,12. Importantly, the realization of co-benefits from forest carbon projects is essential to the alignment of climate policies such as the Paris Agreement with key global policy frameworks such as the Post-2020 Global Biodiversity Framework and the United Nations Sustainable Development Goals (SDGs).
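As a quick arithmetic check (not part of the original analysis), the quoted increases for climate mitigation and crop pollination can be reproduced from the central estimates above, taking the lower-price scenario as the baseline and omitting the ± uncertainties:

```python
# Central estimates at US$5.80 versus US$25 per tCO2e, from the text.
mitigation_low, mitigation_high = 835, 875            # MtCO2e per year
pollination_low, pollination_high = 323_739, 372_390  # equivalent people fed

pct_mitigation = (mitigation_high - mitigation_low) / mitigation_low * 100
pct_pollination = (pollination_high - pollination_low) / pollination_low * 100

print(f"climate mitigation: +{pct_mitigation:.0f}%")  # +5%
print(f"crop pollination: +{pct_pollination:.0f}%")   # +15%
```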
For example, forest carbon projects directly address the conservation of terrestrial ecosystems, enabling countries to better meet the targets of SDG 15.1 as well as Goal A of the Global Biodiversity Framework13. They also allow for the synergistic achievement of other goals and targets across the SDGs, such as food security (SDG 2), clean water (SDG 6) and biodiversity, as well as other co-benefits not assessed in this study, such as terrestrial surface cooling (SDG 13 on climate action)13. Quantifying forest services further exemplifies the interconnections and importance of forest ecosystems for biodiversity and people. Particularly for communities in Southeast Asia engaged in subsistence and/or smallholder agriculture, forests support the production of food and contribute to livelihoods, as well as provide clean water for drinking and household use across the region4,14. Naturally, forests in carbon projects can also provide many other socio-economic benefits, such as recreation as well as cultural, gender and economic empowerment for local communities15. While these benefits are typically measured qualitatively and are important in addressing human development goals, quantifying them would require a more nuanced understanding of the interrelations between forest services and the realized benefits to people, as well as of the socio-political ecology at the local scale15. Other types of carbon projects, namely those focusing on reforestation and improved land management, can also contribute to mitigating climate change and provide a variety of co-benefits, though their potential may be limited by specific economic and social constraints16. The investment in the protection of forests, their natural capital and their ongoing provision of services through carbon projects enables a financially viable and sustainable means of addressing other socio-economic and environmental issues beyond climate change.
By assessing this potential in Southeast Asia, we demonstrate the capacity of carbon finance to meet global climate and human development ambitions.

## Methods

First, we mapped areas of standing forests that could be protected as financially viable carbon projects over a 30-year time frame. Second, we modelled the extent to which carbon projects would (1) mitigate climate change from the avoided emissions from deforestation2, (2) support crop pollination services for pollinator-dependent agriculture8,9, (3) maintain water quality regulation services for downstream rivers and lakes by retaining nitrogen in watersheds8,9, and (4) safeguard KBAs10. Third, we assessed the effects of carbon pricing on the delivery of co-benefits from forest carbon projects in the region. Standing forests in Southeast Asia were mapped on the basis of the European Space Agency's Climate Change Initiative 2015 land cover classification17 (Supplementary Table 1). We updated these forest areas to exclude recently deforested areas up to 201818 and existing human settlements19. To harmonize our spatial data layers with Avitabile et al.'s20 carbon data (1-km resolution), we resampled (nearest neighbour) higher-resolution data where necessary. Profitable forest carbon was determined on the basis of Koh et al.2, which estimated profitability on the basis of key carbon finance requirements such as additionality and NPV. Specifically, NPV was calculated on the basis of several simplifying assumptions, including a project establishment cost of US$25 ha⁻¹, an annual maintenance cost of US$10 ha⁻¹ and a carbon price of US$5.80 per tCO2e for the first five years, followed by a 5% annual appreciation for the subsequent years over a 30-year project time frame.
We also applied a 10% risk-adjusted discount rate and considered profitable those areas where NPV > 0.

The regional estimates for pollination service (measured as the equivalent number of people fed), water quality regulation (in tons of nitrogen retained in the watershed) and KBAs (in hectares) within profitable forest carbon areas were then extracted from the respective spatial layers (see the Supplementary Information for the details, especially Supplementary Tables 2 and 3). We also determined locations across the region where carbon projects would deliver multiple co-benefits through a spatial overlay. Areas identified as contributing any level of co-benefit were coded 1 to 3, indicating the number of co-benefits that could be attained in addition to climate change mitigation from avoided deforestation (Fig. 1e).

We then explored the potential for carbon prices to affect the delivery of co-benefits from forest carbon projects in the region. The carbon prices assessed included US$1, US$2, US$3, US$4, US$5, US$10, US$15, US$20, US$25 and US$50 per tCO2e, with US$100 per tCO2e set as the maximum on the basis of Griscom et al.'s1 cost-effective climate change mitigation threshold, with the same project establishment and annual maintenance costs, price appreciation, discount rate and time frame as assumed in the earlier analyses. The respective co-benefit estimates within profitable forest carbon areas at each price point were then extracted from the respective spatial layers (Supplementary Figs. 1–5).

We used uncertainties reported as standard deviations that were inherent to Avitabile et al.'s20 carbon dataset for all spatial layers.
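As an illustration, the per-hectare NPV screen described above can be sketched in a few lines of Python. This is a simplified reading of the Methods, not the authors' actual implementation, and the avoided-emissions rates passed in below are hypothetical site values:

```python
def project_npv(avoided_tco2e_per_ha_yr,
                establishment_cost=25.0,  # US$ per ha, paid in year 0 (from Methods)
                maintenance_cost=10.0,    # US$ per ha per year (from Methods)
                base_price=5.80,          # US$ per tCO2e for years 1-5 (from Methods)
                appreciation=0.05,        # 5% annual price growth after year 5
                discount_rate=0.10,       # 10% risk-adjusted discount rate
                years=30):
    """Discounted net present value (US$ per ha) of protecting one hectare
    that would otherwise be deforested."""
    npv = -establishment_cost
    price = base_price
    for t in range(1, years + 1):
        if t > 5:
            price *= 1 + appreciation  # price appreciates after the first five years
        cash_flow = price * avoided_tco2e_per_ha_yr - maintenance_cost
        npv += cash_flow / (1 + discount_rate) ** t
    return npv

# A site avoiding 10 tCO2e per ha per year (hypothetical) clears the
# NPV > 0 viability threshold; a near-zero rate does not.
print(project_npv(10.0) > 0, project_npv(0.1) > 0)  # prints: True False
```

Areas passing this screen would then be overlaid with the co-benefit layers, as described in the text.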
Uncertainties associated with the price of carbon, and in turn the associated co-benefit uncertainty estimates, were also based on an assumed uniform distribution of the minimum and maximum prices of carbon between 2006 and 2018 (ref. 3), and are reported as standard deviations.

### Reporting Summary

Further information on research design is available in the Nature Research Reporting Summary linked to this article.

## Data availability

All maps generated are available in Zenodo at https://doi.org/10.5281/zenodo.5572600.

## Code availability

All R and Python scripts used to process the data are available from the corresponding authors upon request.

## References

1. Griscom, B. W. et al. National mitigation potential from natural climate solutions in the tropics. Phil. Trans. R. Soc. B https://doi.org/10.1098/rstb.2019.0126 (2020).

2. Koh, L. P., Zeng, Y., Sarira, T. V. & Siman, K. Carbon prospecting in tropical forests for climate change mitigation. Nat. Commun. 12, 1271 (2021).

3. Forest Trends' Ecosystem Marketplace. Financing Emission Reductions for the Future: State of Voluntary Carbon Markets 2019 (Forest Trends, 2019).

4. Sodhi, N. S. et al. Conserving Southeast Asian forest biodiversity in human-modified landscapes. Biol. Conserv. 143, 2375–2384 (2010).

5. Pearson, T. R., Brown, S., Murray, L. & Sidman, G. Greenhouse gas emissions from tropical forest degradation: an underestimated source. Carbon Balance Manage. 12, 3 (2017).

6. Yurnaidi, Z. et al. ASEAN Climate Action: A Review of Nationally Determined Contributions Updated in 2020 (ASEAN Centre for Energy, 2021); https://aseanenergy.org/asean-climate-action-a-review-of-nationally-determined-contributions-ndcs-updated-in-2020/

7. Goldstein, A. Not So Niche: Co-Benefits at the Intersection of Forest Carbon and Sustainable Development (Forest Trends' Ecosystem Marketplace, 2016); https://www.forest-trends.org/publications/not-so-niche/

8.
Chaplin-Kramer, R. et al. Mapping the planet's critical natural assets for people. Preprint at bioRxiv https://doi.org/10.1101/2020.11.08.361014 (2021).

9. Chaplin-Kramer, R. et al. Global modeling of nature's contributions to people. Science 366, 255–258 (2019).

10. BirdLife International. The World Database of Key Biodiversity Areas (KBA Partnership, accessed 3 September 2020); www.keybiodiversityareas.org

11. Carbon Pricing Dashboard (World Bank, 2021); https://carbonpricingdashboard.worldbank.org/

12. Butler, R. A., Koh, L. P. & Ghazoul, J. REDD in the red: palm oil could undermine carbon payment schemes. Conserv. Lett. 2, 67–73 (2009).

13. Wood, S. L. R. et al. Distilling the role of ecosystem services in the Sustainable Development Goals. Ecosyst. Serv. 29, 70–82 (2018).

14. van Noordwijk, M. et al. Tree cover transitions and food security in Southeast Asia. Glob. Food Sec. 3, 200–208 (2014).

15. Ojea, E., Loureiro, M. L., Alló, M. & Barrio, M. Ecosystem services and REDD: estimating the benefits of non-carbon services in worldwide forests. World Dev. 78, 246–261 (2016).

16. Zeng, Y. et al. Economic and social constraints on reforestation for climate mitigation in Southeast Asia. Nat. Clim. Change 10, 842–844 (2020).

17. Bontemps, S. et al. Consistent global land cover maps for climate modelling communities: current achievements of the ESA's land cover CCI. In Proc. ESA Living Planet Symposium ESA SP-722, 2–13 (2013).

18. Hansen, M. C. et al. High-resolution global maps of 21st-century forest cover change. Science 342, 850–853 (2013).

19. Pesaresi, M., Florczyk, A., Schiavina, M., Melchiorri, M. & Maffenini, L.
GHS-SMOD R2019A – GHS Settlement Layers, Updated and Refined REGIO Model 2014 in Application to GHS-BUILT R2018A and GHS-POP R2019A, Multitemporal (1975-1990-2000-2015) (European Commission, Joint Research Centre (JRC), 2019); http://doi.org/10.2905/42E8BE89-54FF-464E-BE7B-BF9E64DA5218

20. Avitabile, V. et al. An integrated pan-tropical biomass map using multiple reference datasets. Glob. Change Biol. 22, 1406–1420 (2016).

## Acknowledgements

We thank Z. Burivalova for her contributions to improving this paper. R.N. thanks Gordon and Betty Moore for their generous gift in supporting this study. L.P.K. is supported by the National Research Foundation (NRF) Singapore under its NRF Returning Singaporean Scientists Scheme (grant no. NRF-RSS2019-007).

## Author information

### Contributions

T.V.S., Y.Z., R.N. and L.P.K. conceived the study. T.V.S. carried out the analyses. Y.Z., R.N. and R.C.-K. contributed to the data. T.V.S., Y.Z., R.N., R.C.-K. and L.P.K. contributed discussions and modelling insights. T.V.S., Y.Z., R.N., R.C.-K. and L.P.K. wrote the article.

### Corresponding authors

Correspondence to Tasya Vadya Sarira, Yiwen Zeng or Lian Pin Koh.

## Ethics declarations

### Competing interests

The authors declare no competing interests.

## Peer review

### Peer review information

Nature Sustainability thanks Christa Anderson, David Ellison and Kei Uchida for their contribution to the peer review of this work.

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Supplementary information

### Supplementary Information

Supplementary Figs. 1–5 and Tables 1–5.

## Rights and permissions

Sarira, T.V., Zeng, Y., Neugarten, R. et al. Co-benefits of forest carbon projects in Southeast Asia. Nat Sustain 5, 393–396 (2022).
https://doi.org/10.1038/s41893-022-00849-0
1. In an 8-quart pot, cover the chicken breasts with water and bring to a boil. Reduce the heat and simmer until the chicken is cooked. Once the chicken is cooked through, remove it and allow it to cool.
2. Remove and discard the bones and skin. Shred the chicken, or strip it into small pieces. Length does not matter, but width does: to dry the chicken correctly, the pieces must be thin.
3. Heat enough olive oil to generously cover the bottom of a large skillet. Slice the squash about 1/4 inch thick. Dice the tomatoes and the onion. Place the vegetables in the heated oil and add generous amounts of spices. Sauté the vegetables until they are tender but still slightly crisp. Add the shredded chicken and mix.
4. Meanwhile, prepare your instant rice per the package instructions.
5. Serve the chicken and vegetables on a bed of rice.
Q: How to specify a WPF theme in C#?

I found out that I can use a different theme in a C# WPF application by adding the theme's .xaml file to the project and adding it to App.xaml as a resource (see http://wpf.codeplex.com/wikipage?title=WPF%20Themes for a more detailed description). Can I do this at runtime in C# as well? Is it possible to specify a different theme for different WPF windows?

Best regards
Marc

A: You can share themes between applications and have each one use a different one. I didn't think you could mix themes within the same application, but marc40000 has found out you can:

    <Button Height="23" Margin="81,65,122,0" Name="button1" VerticalAlignment="Top">
        <Button.Resources>
            <ResourceDictionary Source="ShinyBlue.xaml"/>
        </Button.Resources>
        Button
    </Button>
    <Button Height="23" HorizontalAlignment="Right" Margin="0,0,38,35" Name="button2" VerticalAlignment="Bottom" Width="75">Button</Button>

Attach the ResourceDictionary to the controls you want to theme instead of declaring it globally in App.xaml. There's even more information from this MSDN page and this one.
Chuwi LapBook Air now available for pre-order

Image courtesy: Chuwi.com.

Chuwi is a well-known company that brings a budget range of laptops and tablets to market, and it is now adding the LapBook Air to its line-up. The LapBook Air is powered by an Intel Celeron N3450 processor at 1.1GHz and comes with 8GB of DDR3 RAM and 128GB of storage, with a 2MP webcam above the 14.1-inch 1080p IPS display. It also offers an SD card slot, two USB 3.0 ports, a single Micro HDMI port and a standard 3.5mm headphone jack. The LapBook Air starts at $429 and is currently available for pre-order on Amazon as well as on Chuwi's website.
Q: How can I create a new account or address with web3.js?

I'm trying to interact with geth, and I need to create a new deposit address (and be able to control it). How can I implement this with web3.js?

A: You can use the web3.eth.accounts.create() function. It will return an account object which you'll be able to control.

https://web3js.readthedocs.io/en/1.0/web3-eth-accounts.html

Example:

    var Web3 = require("web3");
    var web3 = new Web3('http://localhost:8545'); // your geth node
    var account = web3.eth.accounts.create();

You can also use the web3.eth.personal.newAccount() function:

http://web3js.readthedocs.io/en/1.0/web3-eth-personal.html#newaccount

Check this blog post to help determine which is right for you:

https://medium.com/@andthentherewere0/should-i-use-web3-eth-accounts-or-web3-eth-personal-for-account-creation-15eded74d0eb

Note: This answer is for web3.js 1.0.
Klonowo may refer to the following places:

- Kłonowo, Radziejów County, Kuyavian-Pomeranian Voivodeship (north-central Poland)
- Klonowo, Golub-Dobrzyń County in Kuyavian-Pomeranian Voivodeship (north-central Poland)
- Klonowo, Tuchola County in Kuyavian-Pomeranian Voivodeship (north-central Poland)
- Klonowo, Podlaskie Voivodeship (north-east Poland)
- Klonowo, Masovian Voivodeship (east-central Poland)
- Klonowo, Działdowo County in Warmian-Masurian Voivodeship (north Poland)
- Klonowo, Ostróda County in Warmian-Masurian Voivodeship (north Poland)