###### What is already known on this topic

- Adverse growth outcomes within the first two years of life may result in significant consequences in later life.
- Prevalence of stunting in Asia is high, and the key maternal and early-life factors, and their interactions, leading to impaired infant growth remain unclear.

###### What this study adds

- Maternal nutritional status plays an important role in predicting infant growth at 6 months of age.
- Elevated antenatal iron stores may be deleterious to infant growth in this setting.
- Caution with antenatal iron supplementation should be taken in populations with low rates of iron deficiency.

Introduction {#s1}
============

A child\'s growth and development are largely determined by conditions experienced in utero and during the first two years of life. Chronic undernutrition during this period may lead to irreversible adverse outcomes, including impaired growth, reduced cognitive development, impaired immune function and increased risk of chronic diseases, with long-term consequences for health and productivity in adult life.[@R1] [@R2] Chronic undernutrition is a major global public health issue, and stunting (the best indicator of chronic undernutrition) affects an estimated 165 million children worldwide. Impaired growth is rarely caused by a single determinant; rather, it is the cumulative result of many different biological, cultural and socio-economic influences occurring during the antenatal and early infancy periods. Patterns of growth faltering show that length-for-age decreases dramatically from birth until 24 months of age,[@R3] and fetal growth restriction is an important contributor to childhood stunting.[@R4] A recent review identified 10 key interventions for improving maternal and child undernutrition, including antenatal folic acid or multiple micronutrient (MMN) supplementation, and promotion of exclusive breast feeding and complementary feeding.
However, modelling has shown that at 90% coverage, these evidence-based nutrition interventions would only reduce stunting by 20% in children under 5 years of age, at a cost of Int\$9.6 billion per year.[@R5] Thus, further clarification of the critical maternal and early-life factors that influence infant growth, and exploration of how these factors interact, is urgently required. In Vietnam, stunting affects between 15% and 30% of children, with the highest prevalence among children residing in rural areas and those from ethnic minority groups.[@R6] [@R7] Potential factors contributing to the high rates of stunting in Vietnam include poor maternal health and nutrition, inadequate infant nutrition in early life, and other socio-economic and cultural influences.[@R7] Our overall objective was to determine which factors occurring during the antenatal period and the first six months of life were associated with infant growth (length-for-age z scores) at 6 months of age, and to clarify whether these associations were direct or indirectly mediated via infant birth weight. Using this information, we aimed to develop a comprehensive explanatory model for factors impacting growth in infants residing in rural Vietnam.

Methods {#s2}
=======

Study design, setting and participants {#s2a}
--------------------------------------

This prospective cohort study was conducted in Ha Nam province in northern Vietnam between September 2010 and January 2012. Ha Nam has a population of approximately 820 100 people, with most residents still working in subsistence agriculture. The original study protocol was approved by the Melbourne Health and Ha Nam Provincial Human Research Ethics Committees. All women and infants enrolled in the original cluster randomised trial (ACTRN 12610000944033) were eligible for enrolment in the study if length-for-age z scores were available at 6 months of age.
In the original trial, women received either (1) one tablet of iron-folic acid (IFA) taken daily (60 mg elemental iron/0.4 mg folic acid per tablet, seven tablets per week), (2) one capsule of IFA taken twice a week (60 mg elemental iron/1.5 mg folic acid per capsule; two capsules per week) or (3) one capsule of MMNs taken twice a week (60 mg elemental iron/1.5 mg folic acid per capsule; two capsules per week, as well as a variation of the dose of the micronutrients in the United Nations International Multiple Micronutrient Preparation supplement).[@R8] Maternal information was collected at enrolment (mean gestational age 12.2 weeks) and at 32 weeks gestation, and infant anthropometric measurements were performed at birth, 6 weeks and 6 months of age. Detailed information on the methodology used, including a table describing the composition of the supplements, has been published previously.[@R9] The wealth index was used to measure the socio-economic status of the household and was constructed from three component indices: housing quality (four items, response 0 or 1), consumer durables (nine items, response 0 or 1) and services (four items, response 0 or 1). A simple average of these three components was calculated to produce a value between 0 and 1 (scale from poorest to better-off).[@R10] Infant crown-heel length was measured using a portable Shorr Board (Shorr Productions, Olney, Maryland, USA). Infant length-for-age z scores were calculated using WHO Anthro (V.3.2.2, January 2011).[@R11] Stunting was defined as a length-for-age z score more than 2 SDs below the median of the WHO Child Growth Standards.[@R12]

Statistical methods {#s2b}
-------------------

Data were analysed using Stata, V.12 (StataCorp, College Station, Texas, USA). Categorical data are presented as frequencies with percentages, and continuous data as mean and SD.
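The arithmetic of the wealth index described above can be illustrated with a minimal sketch. This is not code from the study; the function names and household item values are hypothetical, and the snippet only shows the averaging of three component indices, each itself the mean of binary (0/1) items:

```python
# Sketch of the wealth index construction: each component index is the mean of
# its binary items, and the wealth index is the simple average of the three
# components (0 = poorest, 1 = better-off). All values are hypothetical.
def component_score(items):
    """Mean of binary (0/1) items, giving a component index between 0 and 1."""
    return sum(items) / len(items)

def wealth_index(housing, durables, services):
    """Simple average of the three component indices."""
    return (component_score(housing)
            + component_score(durables)
            + component_score(services)) / 3

# Hypothetical household: 2/4 housing items, 3/9 durables, 4/4 services
idx = wealth_index([1, 1, 0, 0], [1, 1, 1, 0, 0, 0, 0, 0, 0], [1, 1, 1, 1])
print(round(idx, 3))  # 0.611
```

Note that the simple average weights the three components equally regardless of how many items each contains.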
Skewed data are presented as median with IQR (25th--75th centile) and were log transformed for the regression analyses. The assumption of a linear association between continuous exposure measures and infant length-for-age z scores was tested by comparing regression models with categorical (quartile groupings) and pseudo-continuous variables using likelihood ratio tests. Variables with no evidence of non-linearity were retained as continuous variables. To enhance clinical interpretability, maternal ferritin concentration was also categorised into quartiles (lowest to highest). We tested whether exposure--outcome associations (exposures: ferritin, folate, B12, vitamin D and iodine concentration) were modified by trial intervention arm using interaction terms and the likelihood ratio test, and found no evidence of interaction. The rationale for using a structural equation model was based on the hypothesis that maternal and early infant factors have a complex and inter-related influence on early infant growth. We initially constructed a hypothesised causal diagram for how these factors and infant length-for-age z scores may be connected. Univariable and multivariable linear regression was then performed to examine the associations of maternal (early and late pregnancy) and early infant factors with infant birth weight and length-for-age z scores at 6 months of age. Separate multivariable linear regression models for maternal and early infant factors were developed using backward elimination stepwise regression to select a subset of variables statistically significantly associated with infant birth weight and length-for-age z scores.
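The linearity check described above can be sketched as follows. This is not the study's Stata code; the data are simulated and all names are hypothetical. It compares a model with the exposure scored as a pseudo-continuous quartile variable (0, 1, 2, 3) against one with unconstrained quartile means, via a likelihood ratio test:

```python
# Likelihood ratio test for non-linearity: pseudo-continuous quartile score vs
# free quartile means. Simulated data, for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 500
ferritin = rng.lognormal(3.5, 0.6, n)                      # hypothetical exposure
laz = -0.5 - 0.05 * np.log2(ferritin) + rng.normal(0, 0.9, n)

# Assign each observation to an exposure quartile (0..3)
quartile = np.searchsorted(np.quantile(ferritin, [0.25, 0.5, 0.75]), ferritin)

def ols_loglik(X, y):
    """Gaussian log-likelihood of an OLS fit (maximum-likelihood variance)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return -n / 2 * (np.log(2 * np.pi) + np.log(rss / n) + 1)

ones = np.ones(n)
ll_linear = ols_loglik(np.column_stack([ones, quartile]), laz)       # 2 parameters
ll_categorical = ols_loglik(                                          # 4 parameters
    np.column_stack([ones] + [(quartile == q).astype(float) for q in (1, 2, 3)]),
    laz)

# Twice the log-likelihood difference; a small statistic (large p value against
# a chi-squared with 2 df) gives no evidence of non-linearity.
lr = 2 * (ll_categorical - ll_linear)
print(f"LR statistic = {lr:.2f} (compare to chi-squared, 2 df)")
```

Because the pseudo-continuous model is nested within the categorical one, the statistic is always non-negative and, under linearity, is approximately chi-squared distributed with 2 degrees of freedom.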
The models obtained in this way were then refined by including or excluding variables with borderline p values, clinically important confounding factors identified a priori from the literature, and variables with statistically significant associations with length-for-age z scores at 6 months of age in the univariable analysis. Proceeding in this way, the optimal models were selected with the aid of likelihood ratio tests and adjusted R^2^ values.[@R13] Using the results of the univariable and multivariable regression analyses, the structural equation model was built and iteratively tested. Model fit was assessed using the χ^2^ test comparing the fitted model with a saturated model; the comparative fit index (CFI), which compares the fitted model with a baseline model assuming no relationships among the variables; and the root mean squared error of approximation (RMSEA), which penalises the model for excessive complexity.[@R13] [@R14] A good model should have a non-significant p value for the χ^2^ test (≥0.05), a CFI close to one (≥0.95) and a low RMSEA (≤0.05).[@R13] [@R14]

Results {#s3}
=======

At the 6-month visit, length-for-age z scores were available for 1046 infants. Baseline maternal and infant socio-economic, demographic, nutritional, biochemical, anthropometric and morbidity outcomes are presented in [tables 1](#ARCHDISCHILD2014306328TB1){ref-type="table"} and [2](#ARCHDISCHILD2014306328TB2){ref-type="table"}. There were no clinically significant differences in baseline characteristics between infants with available length-for-age z scores and those in whom measurements were unavailable (see online [supplementary table](http://adc.bmj.com/lookup/suppl/doi:10.1136/archdischild-2014-306328/-/DC1) S1). Mean length-for-age z score at 6 months of age was −0.58 (SD 0.94), and the prevalence of stunting at 6 months of age was 6.4% (95% CI 5.0% to 7.9%). The prevalence of underweight was 3.3% (95% CI 2.2% to 4.4%) and of wasting 1.6% (95% CI 0.71% to 2.16%).
A flow diagram of the study is presented in [figure 1](#ARCHDISCHILD2014306328F1){ref-type="fig"}.

###### Baseline maternal and infant socio-economic, demographic, nutritional, biochemical and anthropometric factors

| Maternal factors | Values |
| --- | --- |
| *Demographic factors* | |
| Wealth index\* | 66.3 (0.09) |
| Maternal age (years)\* (n=1046) | 26.7 (4.9) |
| Educational level† | |
| Primary school | 159/1046 (15.2) |
| Secondary school | 705/1046 (67.4) |
| University/college | 182/1046 (17.4) |
| Occupation† | |
| Farmer/housewife | 560/1046 (53.5) |
| Factory worker/trader | 350/1046 (33.5) |
| Government official/clerk | 136/1046 (13.0) |
| *Anthropometric factors* | |
| Height (cm)\* (n=1045) | 153.6 (4.7) |
| Body mass index enrolment (kg/m^2^)\* | 19.9 (2.0) |
| Body mass index group enrolment† | |
| Underweight (\<18.5 kg/m^2^) | 271/1045 (25.9) |
| Normal (18.5--25 kg/m^2^) | 759/1045 (72.6) |
| Overweight (\>25 kg/m^2^) | 15/1045 (1.4) |
| Mid upper arm circumference enrolment (cm)\* (n=1045) | 23.8 (2.1) |
| Weight gain during pregnancy (kg)\* (n=958) | 8.19 (2.6) |
| *Antenatal factors* | |
| Gravidity† | |
| Primigravida | 326/1046 (31.2) |
| Multigravida | 720/1046 (68.8) |
| Type of supplement taken during pregnancy† | |
| Daily IFA supplements | 350/1046 (33.5) |
| Twice weekly IFA supplements | 363/1046 (34.7) |
| MMN supplements | 333/1046 (31.8) |
| Change of diet when pregnant† | |
| No | 259/1046 (24.8) |
| Yes | 787/1046 (75.2) |
| Meat intake during pregnancy at enrolment (number of times per week)\* (n=1046) | 3.85 (2.26) |
| Persistent depression EPDS† | |
| No | 909/1046 (94.9) |
| Yes | 49/1046 (5.1) |
| *Biochemical factors* | |
| Haemoglobin enrolment (g/dL)\* (n=1046) | 12.3 (1.2) |
| Haemoglobin 32 weeks (g/dL)\* (n=948) | 12.4 (1.2) |
| Ferritin enrolment (μg/L)‡ (n=1042) | 77 (50 to 127) |
| Ferritin 32 weeks (μg/L)‡ (n=945) | 28 (17 to 42) |
| Iodine (μg/L)‡ (n=954) | 53 (30.6 to 87.3) |
| B12 enrolment (pmol/L)‡ (n=1043) | 394 (317 to 499) |
| B12 at 32 weeks (pmol/L)‡ (n=945) | 232 (187 to 285) |
| Folate enrolment (nmol/L)‡ (n=1041) | 28 (21.6 to 34.4) |
| Folate at 32 weeks (nmol/L)‡ (n=944) | 28.7 (22.4 to 33.5) |
| 25-(OH) vitamin D (nmol/L)\* (n=891) | 70.6 (22.2) |

\*Values are mean (SD). †Values are number (%). ‡Values are median (25th--75th percentile). IFA, iron-folic acid; MMN, multiple micronutrient; EPDS, Edinburgh Postnatal Depression Scale.

###### Baseline infant nutritional, biochemical and anthropometric factors

| Infant factors | Values |
| --- | --- |
| Demographic factors | |
| Male sex\* | 557/1045 (53.3) |
| Neonatal outcomes | |
| Birth weight (g)† | 3155 (393.7) |
| Birth length (cm)† | 49.2 (2.9) |
| Birth head circumference (cm)† | 32.7 (2.1) |
| Gestational age at delivery (weeks)† | 39.1 (2.0) |
| 6-week outcomes | |
| Infant weight (g)† | 3154 (396.0) |
| Infant length (cm)† | 56.5 (3.7) |
| Infant head circumference (cm)† | 37.4 (2.1) |
| Dietary factors | |
| Continued breast feeding at 6 months of age\* | 1045/1046 (99.9) |
| Exclusively breast fed at 6 months of age\* | 191/1045 (18.3) |
| First introduction of complementary food (weeks)† | 17.2 (4.01) |
| Infant morbidity 6 weeks | |
| Infant diarrhoea\* | 48/1038 (4.6) |
| Infant cough\* | 123/1038 (11.9) |
| Infant fever\* | 12/1038 (1.2) |
| Infant hospitalisation\* | 75/1038 (7.2) |
| Infant morbidity 6 months | |
| Infant diarrhoea\* | 421/1046 (40.3) |
| Infant cough\* | 593/1046 (56.7) |
| Infant fever\* | 265/1046 (25.3) |
| Infant hospitalisation\* | 213/1046 (20.4) |
| Biochemical factors | |
| Infant haemoglobin (g/dL)† | 11.0 (1.1) |
| Infant ferritin (μg/L)‡ | 31 (17 to 53) |

\*Values are number (%). †Values are mean (SD). ‡Values are median (25th--75th percentile).

![Study flow diagram.](archdischild-2014-306328f01){#ARCHDISCHILD2014306328F1}

Univariable analyses {#s3a}
--------------------

The results are presented in [tables 3](#ARCHDISCHILD2014306328TB3){ref-type="table"}[](#ARCHDISCHILD2014306328TB4){ref-type="table"}--[5](#ARCHDISCHILD2014306328TB5){ref-type="table"}.
###### Associations between maternal factors in early pregnancy and infant length-for-age z scores at 6 months of age (univariable and multivariable regression)

| Factor | Univariable coefficient (95% CI) | p value | Multivariable coefficient (95% CI)\* | p value |
| --- | --- | --- | --- | --- |
| *Demographic factors* | | | | |
| Maternal age (years) | 0.01 (−0.01 to 0.02) | 0.18 | | |
| Education | | | | |
| Primary school | Reference | | | |
| Secondary school | 0.05 (−0.11 to 0.21) | 0.55 | 0.04 (−0.12 to 0.20) | 0.63 |
| University | 0.23 (0.03 to 0.43) | 0.03 | 0.18 (−0.12 to 0.20) | 0.07 |
| Gravidity | | | | |
| Primigravida | Reference | -- | | |
| Multigravida | 0.01 (−0.12 to 0.13) | 0.93 | | |
| *Nutritional and health status* | | | | |
| Height (per 5 cm) | 0.25 (0.20 to 0.35) | \<0.001 | 0.25 (0.20 to 0.35) | \<0.001 |
| Body mass index at enrolment (kg/m^2^) | 0.03 (0.01 to 0.06) | 0.02 | 0.04 (0.01 to 0.07) | 0.01 |
| Mid upper arm circumference enrolment (cm) | 0.04 (0.01 to 0.07) | 0.01 | | |
| Depression on enrolment (EPDS) | | | | |
| No | Reference | -- | | |
| Yes | −0.12 (−0.26 to 0.03) | 0.11 | | |
| *Antenatal practices* | | | | |
| Change of diet when pregnant | | | | |
| No | Reference | -- | | |
| Yes | 0.02 (−0.11 to 0.15) | 0.74 | | |
| Meat intake during pregnancy at enrolment (number of times per week) | 0.01 (−0.02 to 0.03) | 0.47 | | |
| Use of traditional supplements during pregnancy | −0.21 (−0.30 to 0.26) | 0.88 | | |
| *Micronutrient status* | | | | |
| Haemoglobin enrolment (per 10 g/dL) | −0.10 (−0.60 to 0.40) | 0.63 | | |
| Ferritin enrolment (log~2~ μg/L)† | −0.04 (−0.12 to 0.04) | 0.33 | | |
| B12 enrolment (log~2~ pmol/L)† | 0.01 (−0.16 to 0.16) | 0.99 | | |
| Folate enrolment (log~2~ nmol/L)† | 0.13 (−0.01 to 0.27) | 0.07 | | |

\*Model adjusted for maternal age, gravidity, gestational age at enrolment and trial intervention. †Log~2~ transformed---regression coefficient represents the mean change in infant length-for-age z score associated with a twofold change in ferritin, B12 or folate. EPDS, Edinburgh Postnatal Depression Scale.
###### Associations between maternal factors in late pregnancy and infant length-for-age z scores at 6 months of age (univariable and multivariable regression)

| Factor | Univariable coefficient (95% CI) | p value | Multivariable coefficient (95% CI)\* | p value |
| --- | --- | --- | --- | --- |
| *Nutritional and health status* | | | | |
| Body mass index (kg/m^2^) at 32 weeks gestation | 0.06 (0.03 to 0.09) | \<0.001 | 0.04 (0.01 to 0.07) | 0.01 |
| Weight gain during pregnancy (kg) | 0.05 (0.03 to 0.07) | \<0.001 | 0.04 (0.01 to 0.06) | 0.004 |
| Depression at 32 weeks\' gestation (EPDS) | | | | |
| No | Reference | -- | | |
| Yes | −0.03 (−0.21 to 0.15) | 0.75 | | |
| Persistent depression (enrolment and 32 weeks) (EPDS) | | | | |
| No | Reference | -- | | |
| Yes | −0.22 (−0.49 to 0.04) | 0.10 | | |
| Change of diet at 32 weeks gestation | | | | |
| No | Reference | -- | | |
| Yes | −0.04 (−0.18 to 0.11) | 0.63 | | |
| Meat intake during pregnancy at 32 weeks gestation (no. of times per week) | −0.01 (−0.04 to 0.02) | 0.54 | | |
| Use of traditional supplements during pregnancy | | | | |
| No | Reference | -- | | |
| Yes | −0.25 (−0.70 to 0.21) | 0.29 | | |
| *Micronutrient status at 32 weeks* | | | | |
| Haemoglobin (per 10 g/dL) | −0.30 (−0.80 to 0.20) | 0.25 | | |
| Ferritin (log~2~ μg/L)† | −0.07 (−0.17 to 0.02) | 0.11 | | |
| B12 (log~2~ pmol/L)† | −0.16 (−0.34 to 0.02) | 0.08 | | |
| Folate (log~2~ nmol/L)† | −0.03 (−0.19 to 0.12) | 0.69 | | |
| Vitamin D (per 20 nmol/L) | −0.07 (−0.12 to −0.01) | 0.02 | −0.06 (−0.11 to −0.001) | 0.04 |
| Urinary iodine (log~2~ μg/L)† | −0.02 (−0.09 to 0.05) | 0.56 | | |

\*Model adjusted for maternal age, gravidity, gestational age at enrolment, infant sex and trial intervention. †Log~2~ transformed---regression coefficient represents the mean change in infant length-for-age z score associated with a twofold change in ferritin, B12, folate or iodine. EPDS, Edinburgh Postnatal Depression Scale.
###### Associations between early infant factors and infant length-for-age z scores at 6 months of age (univariable and multivariable regression)

| Factor | Univariable coefficient (95% CI) | p value | Multivariable coefficient (95% CI)\* | p value |
| --- | --- | --- | --- | --- |
| Neonatal factors | | | | |
| Birth weight (per 100 g) | 0.09 (0.07 to 0.10) | \<0.001 | 0.10 (0.09 to 0.12) | \<0.001 |
| Gestational age at delivery (weeks) | 0.11 (0.08 to 0.14) | \<0.001 | 0.04 (0.01 to 0.07) | 0.02 |
| Male sex | −0.19 (−0.31 to −0.08) | 0.001 | −0.31 (−0.43 to −0.20) | \<0.001 |
| Six-week anthropometric measurements† | | | | |
| Infant length (cm) | 0.09 (0.08 to 0.11) | \<0.001 | 0.09 (0.08 to 0.11) | \<0.001 |
| Infant weight (kg) | 0.90 (0.77 to 1.03) | \<0.001 | | |
| Infant head circumference (cm) | 0.07 (0.04 to 0.10) | \<0.001 | | |
| Infant health status 6 weeks of age† | | | | |
| Respiratory illness | −0.22 (−0.40 to −0.04) | 0.02 | −0.20 (−0.38 to −0.02) | 0.02 |
| Fever | −0.35 (−0.89 to 0.18) | 0.20 | | |
| Diarrhoea | 0.03 (−0.25 to 0.30) | 0.85 | | |
| Hospitalisation | −0.40 (−0.62 to −0.18) | \<0.001 | −0.25 (−0.47 to −0.03) | 0.03 |
| Infant health status 6 months of age | | | | |
| Respiratory illness | −0.10 (−0.21 to 0.02) | 0.10 | | |
| Fever | −0.22 (−0.35 to −0.09) | 0.001 | | |
| Diarrhoea | −0.05 (−0.17 to 0.07) | 0.39 | | |
| Hospitalisation | −0.28 (−0.42 to −0.14) | \<0.001 | −0.22 (−0.41 to −0.04) | 0.02 |
| Child care practices | | | | |
| Exclusive breast feeding at 6 weeks of age | 0.03 (−0.09 to 0.15) | 0.60 | | |
| Exclusive breast feeding at 6 months of age | −0.08 (−0.23 to 0.06) | 0.26 | | |
| Timing of introduction of complementary food (weeks) | 0.01 (−0.009 to 0.03) | 0.07 | | |
| Use of formula at 6 weeks of age | −0.03 (−0.15 to 0.09) | 0.62 | | |
| Use of formula at 6 months of age | −0.11 (−0.30 to 0.08) | 0.27 | | |
| Use of dietary supplements for child in the first 6 months | 0.29 (0.11 to 0.47) | 0.001 | 0.25 (0.07 to 0.43) | 0.01 |
| Micronutrient status at 6 months of age | | | | |
| Haemoglobin (per 10 g/dL) | 0.10 (−0.40 to 0.60) | 0.75 | | |
| Ferritin (log~2~ μg/L)‡ | −0.08 (−0.15 to −0.01) | 0.02 | −0.19 (−0.25 to −0.12) | \<0.001 |

\*Model adjusted for maternal age, gravidity, gestational age at enrolment and trial intervention. †Variables at the 6-week time point have been included in separate multivariable regression models as they are on the causal pathway between birth weight and length-for-age z scores at 6 months of age. ‡Log~2~ transformed---regression coefficient represents the mean change in infant length-for-age z score associated with a twofold change in ferritin.

Multivariable analyses {#s3b}
----------------------

The results of adjusted models are presented in [tables 3](#ARCHDISCHILD2014306328TB3){ref-type="table"}[](#ARCHDISCHILD2014306328TB4){ref-type="table"}--[5](#ARCHDISCHILD2014306328TB5){ref-type="table"}.

### Maternal factors {#s3b1}

Maternal body mass index (BMI) at enrolment (estimated coefficient 0.04/kg/m^2^, 95% CI 0.01 to 0.07) and weight gain during pregnancy (0.04/kg, 95% CI 0.01 to 0.06) were positively associated with infant length-for-age z scores at 6 months of age. There was an inverse association with 25-(OH) vitamin D concentration in late pregnancy (−0.06 per 20 nmol/L, 95% CI −0.11 to −0.001). No association between maternal iodine and infant length-for-age z scores was demonstrated (estimated coefficient −0.02 per twofold increase in iodine, 95% CI −0.09 to 0.05). Maternal risk factors associated with infant birth weight are presented in online supplementary tables S2 and S3. Maternal haemoglobin (estimated coefficient −268 g per 10 g/dL, 95% CI −459 to −76) and ferritin (−66.7 g per twofold increase in ferritin, 95% CI −104.1 to −29.2) levels at 32 weeks gestation were inversely associated with infant birth weight. Mean birth weight was significantly lower in infants born to women with serum ferritin concentrations in the highest quartile (43--273 μg/L) than in those born to women with ferritin concentrations in the lowest quartile (4--17 μg/L) (estimated coefficient −106.4 g, 95% CI −174.9 to −38.0).
### Early infant factors {#s3b2}

Birth weight (estimated coefficient 0.10 per 100 g increase in birth weight, 95% CI 0.09 to 0.12), gestational age at birth (0.04 per 1-week increase in gestational age, 95% CI 0.01 to 0.07) and use of dietary supplements in the child (0.25, 95% CI 0.07 to 0.43) were positively associated with infant length-for-age z scores at 6 months of age. Male sex (males vs females; −0.31, 95% CI −0.43 to −0.20), infant ferritin concentration (−0.19 per twofold increase in ferritin, 95% CI −0.25 to −0.12) and hospitalisation within the first six months of life (−0.22, 95% CI −0.41 to −0.04) were inversely associated with length-for-age z scores at 6 months of age.

Structural equation model {#s3c}
-------------------------

A structural equation model predicting infant length-for-age z scores is shown in [figure 2](#ARCHDISCHILD2014306328F2){ref-type="fig"} and [table 6](#ARCHDISCHILD2014306328TB6){ref-type="table"}. This model presents a theoretical causal path between maternal socio-economic factors and nutritional and micronutrient status during pregnancy, and highlights the role of infant birth weight as a predictor of infant growth in the first six months of life. The model demonstrates that maternal BMI, weight gain during pregnancy, gestational age at delivery and maternal ferritin concentration at 32 weeks gestation were indirectly associated with length-for-age z scores via infant birth weight, whereas maternal height, 25-(OH) vitamin D concentration in late pregnancy, infant sex and hospitalisation in the first six months of life were directly associated. The model fitted the data well (χ^2^ p value 0.16, CFI=0.990, RMSEA=0.02, with a 0.96 probability of RMSEA being ≤0.05).
###### Structural equation model for maternal (early and late pregnancy) and infant factors associated with infant length-for-age z scores at 6 months of age

| Indirectly associated with infant length-for-age z scores through birth weight (g) | Coefficient (95% CI)\* | p value |
| --- | --- | --- |
| *Maternal factors* | | |
| *Demographic factors* | | |
| Gravidity | | |
| Primigravida | Reference | |
| Multigravida | 124.8 (76.0 to 173.5) | \<0.001 |
| *Nutritional and health status* | | |
| Height at enrolment (per 5 cm) | 68.5 (44 to 93) | \<0.001 |
| Body mass index at enrolment (kg/m^2^) | 45.6 (34.2 to 57.1) | \<0.001 |
| Gestational weight gain (kg) | 21.4 (12.6 to 30.1) | \<0.001 |
| *Micronutrient factors* | | |
| Ferritin at 32 weeks (log~2~ μg/L)† | −41.5 (−78.0 to −5.0) | 0.03 |
| *Infant factors* | | |
| Male sex | 65.6 (21.1 to 110.1) | 0.004 |
| Gestational age at delivery (weeks) | 58.8 (46.1 to 71.4) | \<0.001 |

| Directly associated with infant length-for-age z scores | Coefficient (95% CI)‡ | p value |
| --- | --- | --- |
| *Maternal factors* | | |
| *Demographic factors* | | |
| Wealth index | 0.66 (0.01 to 1.31) | 0.05 |
| *Nutritional factors* | | |
| Height at enrolment (cm) | 0.04 (0.03 to 0.06) | \<0.001 |
| *Micronutrient factors* | | |
| Vitamin D at 32 weeks (per 20 nmol/L) | −0.06 (−0.11 to −0.01) | 0.03 |
| *Infant factors* | | |
| Birth weight (per 100 g) | 0.07 (0.05 to 0.09) | \<0.001 |
| Infant hospitalisation | −0.17 (−0.31 to −0.03) | 0.02 |
| Male sex | −0.20 (−0.32 to −0.09) | \<0.001 |

\*Regression coefficient represents the estimated mean change in birth weight (g) associated with the maternal or infant factor (note: for ferritin, this is for a twofold increase in ferritin levels). †Log~2~ transformed---regression coefficient represents the mean change in infant birth weight associated with a twofold change in ferritin. ‡Regression coefficient represents the estimated mean change in length-for-age z score associated with the maternal or infant factor.
![Structural equation model of factors occurring during pregnancy and early infancy influencing infants' length-for-age z scores at 6 months of age. All of the variables in the diagram are observed. Single-headed solid arrows represent statistically significant directional paths at a significance level of 0.05. Dotted lines indicate hypothesised but non-significant paths. Path coefficients are linear regression coefficients and 95% CIs representing the variables with direct relationships with infant birth weight or length-for-age z scores at 6 months of age.](archdischild-2014-306328f02){#ARCHDISCHILD2014306328F2}

Discussion {#s4}
==========

To our knowledge, this is the largest study to present a comprehensive overview of maternal and early infant predictive factors for infant growth in Southeast Asia. Using structural equation modelling, we were able to identify factors that were directly associated with infant length-for-age z scores at 6 months of age and those that were indirectly associated through infant birth weight. Notably, we found that maternal antenatal ferritin levels were inversely associated with infant growth at 6 months of age and that this association was mediated through infant birth weight.
Physiologically normal maternal iron status has been shown to play an important role in reducing the risk of preterm delivery and low-birthweight infants.[@R15] However, recent findings indicate that adverse pregnancy outcomes, including fetal growth restriction, preterm delivery, low birth weight and pre-eclampsia, may also occur in association with high haemoglobin and serum ferritin concentrations.[@R16] This may be explained by increased oxidative stress, failure of expansion of the maternal plasma volume or an increased risk of intrauterine infection.[@R17] Our finding that higher late-gestational ferritin stores were indirectly associated, through birth weight, with reduced length-for-age z scores at 6 months of age extends that of Lao *et al*,[@R21] who demonstrated an inverse association between serum ferritin and infant birth weight in an observational study of 488 pregnant women with baseline haemoglobin ≥10 g/dL. We also found a negative association between infant ferritin and length-for-age z scores. Although a recent meta-analysis concluded that infant iron supplementation had no effect on growth,[@R22] several studies have documented a negative impact on the linear growth of children during or following iron supplementation.[@R23] [@R24] Our findings require further exploration and highlight the need for caution in administering daily iron to non-anaemic pregnant women and to infants who already have sufficient iron stores.
This is particularly important in the many countries where rapid economic development has been associated with a reduction in the prevalence of anaemia and iron deficiency in pregnant women.[@R25] We observed an inverse relationship between length-for-age z scores at 6 months of age and maternal 25-(OH) vitamin D, although the estimated magnitude of change associated with an increase in 25-(OH) vitamin D of 20 nmol/L was small (−0.06 per 20 nmol/L).[@R26] Leffelaar *et al*[@R27] demonstrated accelerated growth in length during the first year of life in infants born to mothers with 25-(OH) vitamin D \<30 nmol/L, and postulated that this may be due to increasing 25-(OH) vitamin D levels postnatally, either through micronutrient supplementation or fortified bottle feeds. Other studies have shown no differences in weight or height across quartiles of 25-(OH) vitamin D status during infancy.[@R28] [@R29] The positive association between maternal BMI/gestational weight gain and infant growth is likely to reflect the adverse effects of low maternal BMI and poor gestational weight gain, which restrict intrauterine blood flow, leading to reduced uterine and placental growth and an increased risk of intrauterine growth retardation and low birth weight,[@R30] both of which have been shown to be important contributors to stunting in childhood.[@R4] We also found that hospitalisation had a negative effect on early infant growth. In addition to the adverse effects of disease, hospitalisation may interfere with a mother\'s ability to breast feed or provide other care-giving practices.[@R4] Strengths of our study include the large sample size, the rigorous design of the original cluster randomised controlled trial and the use of structural equation modelling to determine whether variables were directly or indirectly associated with infant growth. Our study was conducted in a rapidly developing rural area, representative of many areas of Vietnam, and thus our findings are likely to be generalisable to other parts of the country.
Although our study was set in the context of a clinical trial of micronutrient supplementation, we found no evidence that associations were modified by trial intervention arm. A limitation of studying predictors of growth within a clinical trial is that trial participants may not be representative of the wider population. A further limitation was the restricted volume of blood that could be acceptably collected from infants, so that infant ferritin results at 6 months of age were available for only 88% of infants. In addition, the passive method used to collect information on infant illness and hospitalisation may have introduced recall bias, although mothers are likely to have been able to recall periods of hospitalisation. There is mounting evidence that fetal undernutrition in middle-to-late gestation leads to disproportionate fetal growth and persisting changes in blood pressure, cholesterol metabolism, insulin responses to glucose and other metabolic parameters, resulting in the programming of chronic diseases, such as hypertension, coronary heart disease and high cholesterol, later in life.[@R33] The pathways identified in this study will assist with appropriate targeting of future maternal and infant interventions and provide a framework to inform policy measures for the early prevention of chronic undernutrition in children in rural Vietnam.

Conclusion {#s5}
==========

Maternal nutritional status is an important predictor of early infant growth. Our finding of a potential deleterious effect of higher maternal and infant iron stores on infant growth requires further exploration and suggests a cautious approach to iron supplementation during the antenatal and early infancy periods in populations with low rates of iron deficiency. Future research should also explore the role of maternal 25-(OH) vitamin D in child growth and development.
Supplementary Material
======================

###### Web supplement

We thank the participants and health workers in Ha Nam Province; the Ha Nam Provincial Centre of Preventive Medicine; the Viet Nam Ministry of Health; the Research and Training Centre for Community Development (RTCCD); those involved in the original cluster randomised trial study design[@R9]; Beth Hilton-Thorp (LLB) and Christalla Hajisava for departmental support; and Alfred Pathology.

**Contributors:** SH, B-AB, JF and TT conceived the study idea and designed the study. TTH, NCK and DDT coordinated and supervised data collection at all sites. SH, TTH and TDT designed the data collection instruments. TTH, TTT and NCK collected the data. SH reviewed the literature. JAS directed the analyses, which were carried out by SH and AMdL. All authors participated in the discussion and interpretation of the results. SH organised the writing and wrote the initial drafts. All authors critically revised the manuscript for intellectual content and approved the final version.

**Funding:** The original cluster randomised trial was funded by a grant from the Australian National Health and Medical Research Council (grant number 628751).

**Competing interests:** None.

**Ethics approval:** Melbourne Health Human Research Ethics Committee and the Ha Nam Provincial Human Research Ethics Committee.

**Provenance and peer review:** Not commissioned; externally peer reviewed.

**Data sharing statement:** Data from the study are available from the authors on request, subject to agreements within the ethical approvals for the study.
The newspaper reported two hypotheses: that it is a "bad joke made by someone insensitive to the social reality of the country" or "some kind of warning message" related to the attack on Lagos and Cerrato on Aug. 24. C-Libre also said in a statement that "Honduras is considered one of the most dangerous countries in the world for journalism," and that more than 60 journalists and media workers have been murdered in the country since 2003.
Breakdown of different herpes strains and oral sex. Results will usually come back within 7 to 10 days, but some clinics may take longer. When HSV-1 infects the genitals, it usually got there as a result of oral sex. That makes it more likely for a cold sore to get through your usual defenses, she says. There is no way of knowing if, or how often, a person will have future outbreaks. Most blood tests are accurate 12 to 16 weeks after possible exposure to HSV. While there's no magic bullet solution for oral sex with herpes, with open communication, safer sex strategies, and a problem-solving spirit, you and your partners can "come" to a happy ending! Additionally, for information on herpes testing, treatment, and safer sex, check out the American Sexual Health Association's Herpes Resource Center. If you have oral HSV-1 and your partner doesn't, you can spread it through any type of sexual contact where the mouth comes into contact with their mouth or their genitals — and sometimes the buttocks and legs as well. Sometimes people feel bad about themselves and worry about how to talk to partners. Both viruses cause a lifelong infection. Either can be passed to the genitals through oral sex. If we were to have oral sex, then kiss again, symptom free, are we risking his currently unaffected mouth and my currently unaffected genital area? So if you have cold sores, is it possible to give your partner genital herpes?
Suppose your partner has genital HSV-2 and you perform oral sex on him or her. As a rule, HSV-1 and HSV-2 behave nearly identically. This might sound like terrible news, since most of us are infected with HSV-1 and many of us have oral sex. HSV-2 is usually acquired through unprotected vaginal or anal sex. A first severe infection could also cause high fever or swollen lymph nodes, and in young adults a first oral HSV-1 infection might be mistaken for tonsillitis, possibly leading to unnecessary tonsillectomies. So if you have cold sores, is it possible to give your partner genital herpes? Maybe you got one right before your wedding, the first day at a new job, or, perhaps the most trying, the night when you were planning to seal the deal with a new love interest. Physical symptoms may be accompanied by painful feelings. After the initial infection, the virus will lie dormant for a time before it becomes active once again.
Because genital HSV-1 infections have milder recurrences and are associated with less viral shedding, the genital-to-genital spread of HSV-1 is not as common. You might as well take this downtime to learn some oral sex dos and don'ts for when you're eventually in the clear. Still, as the mouth is lined with mucous membranes, it is possible to transmit the HSV-2 virus through oral sex. HSV-2 is typically passed along from one person to another through vaginal or anal intercourse. Sometimes people feel bad about themselves and worry about how to talk to partners. From there it tends to recur on the lip or face. Sometimes people have severe flu-like symptoms, such as fever, headache and muscle aches. Because both cold sores and genital herpes are caused by herpes simplex viruses, and because oral herpes is so common, many people are concerned that they might be more vulnerable to acquiring a genital herpes infection than they previously thought. These are all habits that can help your sores heal and prevent more from coming back. Herpes is one of the most common sexually transmitted infections.
Reduction of Culiseta melanura fitness by eastern equine encephalomyelitis virus. The traditional view of interactions between arboviruses and their arthropod vectors is that vector hosts become increasingly resistant to parasites; that parasite attenuation occurs; or that, through the process of coevolution, resistance and attenuation occur in concert. Detrimental effects from arboviruses are thought to be seen only when vector and virus are not yet well adapted. Results from this study indicate that eastern equine encephalomyelitis (EEE) virus reduces the survival and reproduction (fitness) of the mosquito Culiseta melanura, which is required for transmission of EEE virus in North America. Mosquito virulence was not measurably attenuated in virus isolates recovered 55 years apart. The virus did not affect the ability of mosquitoes to obtain a blood meal or the rate of mosquito oocyte development. Results from this study support those from earlier investigations of other mosquito-virus interactions and suggest that reproductively successful arboviruses can have detrimental effects on their mosquito vectors.
When Mary Putman began deploying employment as a tool to end homelessness in metro Denver, helping someone get and keep a job usually led to housing. That was almost a decade ago. In more recent years, a place to live has remained out of reach for people experiencing homelessness who have undergone training and found work through Putman’s Reciprocity Collective, she said. These days, Putman ends up pointing some people to rec centers where they can grab a shower before heading to work after a night in their cars. Others couch surf. Putman has even occasionally arranged an Airbnb room. “That’s how the gaps are being filled: very creatively,” Putman said. On Thursday, the National Low Income Housing Coalition released its latest annual report on the challenge low-income renters face, “The Gap: A Shortage of Affordable Homes.” Nationally, the coalition found just 37 affordable and available rental homes for every 100 families who were either below the poverty level or earning less than 30 percent of the area median income. The situation was even more dire in metro Denver, where just 26 homes were available for every 100 of the 80,368 poorest households, up from 25 last year. The coalition uses a common computation to determine affordability: rent should not take up more than 30 percent of a household’s income. The coalition also looked at how many units were actually available — not rented to higher-earning households — to the poorest households. At the other end of the spectrum in the Denver area, people earning 100 percent of the area median income had a surplus of units. The National Low Income Housing Coalition determined 103 units were affordable and available for every 100 households earning 100 percent of AMI. “So many of our folks don’t even hit 30 percent of AMI,” Putman said of people who come to The Reciprocity Collective because they want to work and change their lives.
“Most of the developments they call affordable are not.” Putman, who lives in Five Points, has seen the boom in luxury apartment towers in the Denver area. The increase in supply is bringing down rents, but only slightly. At the start of this year, the Apartment Association of Metro Denver said rents had dropped less than 2 percent over the previous six months. Vacancy rates, meanwhile, are rising, to 5.8 percent in the last quarter of 2018 from 5.5 percent the previous quarter. According to the apartment association’s Denver Metro Area Apartment Vacancy and Rent survey, the region started the year with more than 20,000 vacant apartments. “Denver’s just become completely unaffordable for too many people,” said Cathy Alderman, vice president of communications and public policy for the Colorado Coalition for the Homeless. As a result, “we’re seeing an increase in homelessness, particularly family homelessness.” Alderman noted that according to the latest results of the annual point-in-time survey, some 11,000 people across the state experience homelessness on a typical night. The Denver area’s vacant apartments “could literally house the entire population of homelessness in the state,” Alderman said. She called for creative thinking. Perhaps landlords could be persuaded to make some of their vacant apartments available to an organization like hers to house people experiencing homelessness for a few months while longer-term solutions are found. The city’s pilot, the Lower Income Voucher Equity Program, or LIVE Denver, seeks to provide subsidies to 125 families earning between 40 and 80 percent of the area median income to get them into market-rate apartments. The first three families to benefit from LIVE Denver were able to move into apartments earlier this year. “It can’t be just us. It can’t be just the city,” Alderman said. “It has to be the community coming together.
It also has to be the community coming together and saying it’s not acceptable that we have these luxury apartments and that they’re sitting empty while people are forced to sleep outside and dying outside.” Alderman said some type of intervention is necessary because developers who have high building and operating costs can’t rely only on the rents that low-income tenants can afford. Diane Yentel, the National Low Income Housing Coalition’s president and CEO, called in a statement for action at the national level, saying Congress must protect and expand public housing and programs such as rent vouchers. Max Reedy, digital marketing manager for Affordable Housing Online, said the Trump administration has consistently tried to cut government housing help for low-income Americans, but that Congress has pushed back. Affordable Housing Online researchers reviewed the administration’s budget unveiled this week and found proposed cuts in, for instance, vouchers and funds to maintain public housing. Reedy has seen the challenge that low-income families face. His digital service links house-seekers to public housing websites and other sources of housing information. “Some people (living in homelessness) have contacted us and say, ‘I’m on a library computer and this is the one hour I have before I have to get off,’” Reedy said. “We want to make sure that everyone gets the best opportunity to find housing so that they can avoid being on the streets or in a shelter,” he said. “We’re finding it’s getting harder and harder to find affordable housing.” More leadership is needed, said Putman, who runs the employment project. “We can’t see any consistent investment to have housing for folks at all levels of employment,” she said. “It just never feels like the folks in power are really working creatively toward solutions.” In the meantime, Putman tries to stay positive.
She has a background in the hospitality industry and once managed a pizza restaurant that the Colorado Coalition for the Homeless opened and that hired people who’d recently been homeless. “We can get people work. We coach them. We see them every week,” Putman said. “But it’s discouraging. We still have a few fall by the wayside because they’re not housed. That has so much stress and trauma to it, it’s really challenging to keep them employed.”
1. Field of the Invention

This invention relates to a coordinate measuring machine for the measurement of workpieces, and more particularly to a coordinate measuring machine with a movable measuring arm that receives at least one interchangeable measuring sensor system; the measuring arm includes a collision protector that is deflectable transversely of the longitudinal axis of the measuring arm when the sensor system collides with an object, and which thereby prevents damage to the sensor system.

2. Discussion of Prior Art

Such coordinate measuring machines have been known in the art for a long time. These measuring machines are usually constructed such that the measuring arm, and thus the measuring sensor system attached to the measuring arm, can be moved in three mutually orthogonal directions, and such that the measuring sensor system has either a mechanical measuring pin or an optical pickup for measuring the workpieces. Collisions can easily occur due to operating error or deviation of the position of the workpiece to be measured. Such collisions can damage the measuring sensor system, which is often relatively expensive, or can damage other parts of the coordinate measuring system. As a result, the measuring arm of commercial coordinate measuring machines has been equipped with a collision protector. The collision protector deflects transversely of the longitudinal axis of the measuring arm when the sensor system collides with an object. Additionally, further motion of the coordinate measuring machine is stopped by deflection of the collision protector, preventing damage to the measuring sensor system. The collision protector of conventional coordinate measuring machines is usually constructed such that the measuring sensor system must meet certain preconditions regarding weight and dimensioning in order to ensure satisfactory operation of the collision protector.
However, efforts have been under way to build more flexible coordinate measuring machines that may be used for different tasks. For example, it has been proposed that coordinate measuring machines also be used with milling tools and/or scribing tools for machining. One such application, for example, would be in scribing clay models. In such applications, the measuring sensor system would need to be interchanged with the machining units. Considerably greater torques and forces have to be taken up by the measuring arm to carry out machining processes. Heretofore, interchanging measuring sensor systems and machining units in coordinate measuring machines with a collision protector of the kind described could be carried out only with large retooling costs, or not at all. This was because the collision protector could not accept, without deflecting, the torques that can arise with machining units.
--- abstract: 'We show that a mixed state $\rho=\sum_{mn}a_{mn}|m\rangle \langle n|$ can be realized by an ensemble of pure states $\{p_{k}, |\phi_{k} \rangle \}$ where $|\phi_{k}\rangle=\sum_{m}\sqrt{a_{mm}}e^{i\theta_{m}^{k}}|m\rangle$. Employing this form, we discuss the relative entropy of entanglement of Schmidt correlated states. We also calculate the distillable entanglement of a class of mixed states.' address: 'Zhejiang Institute of Modern Physics and Department of Physics, Zhejiang University, Hangzhou 310027, P.R. China' author: - 'Yi-Xin Chen and Dong Yang' title: The relative entropy of entanglement of Schmidt correlated states and distillation --- Quantum entanglement is at the heart of many aspects of quantum information theory and is responsible for many quantum tasks such as teleportation [@Bennett1], dense coding [@Bennett2] and quantum error correction [@Bennett3]. In this sense, it is nowadays viewed as a quantum resource. Intensive theoretical efforts have been made to understand the mathematical structure of entanglement, qualitatively and quantitatively. Among the entanglement measures, the relative entropy of entanglement [@VP1; @VP2] plays an important role. For general mixed states, the relative entropy of entanglement is hard to calculate. However, it can be calculated explicitly for Schmidt correlated states [@Rains; @Wu]. In this note, employing the particular realization form of mixed states, we easily deduce the properties of Schmidt correlated states. Generally, orthogonal states may be distinguished perfectly only by means of global measurements, since the global information of orthogonality may be encoded in entanglement that cannot be extracted by local operations and classical communication (LOCC). Bennett et al. [@Bennett4] showed that a set of nine pairwise orthogonal product states in $3\otimes 3$ cannot be reliably distinguished by LOCC. Recently, Walgate et al.
[@Walgate] demonstrated that any two orthogonal multipartite pure states could be distinguished with certainty by LOCC operations alone. In general, more than two orthogonal entangled states cannot be discriminated if only LOCC operations are allowed. Ghosh et al. [@Ghosh] related the distinction process to the distillation one and showed that any three Bell states cannot be discriminated by LOCC operations. They also calculated the distillable entanglement of a certain class of mixed states whose distillable entanglement attains the relative entropy of entanglement. We will generalize their result to Schmidt correlated states of high dimensions. Before we discuss the Schmidt correlated states, we give a particular realization form of a mixed state, which is very helpful. [*Lemma*]{}: For a general mixed state $\rho=\sum_{mn}a_{mn}|m\rangle \langle n|$, there exists an ensemble of pure states $\{p_{k}, |\phi_{k}\rangle\}$ realizing $\rho$, where $|\phi_{k}\rangle$ is of the form $|\phi_{k}\rangle=\sum_{m}\sqrt{a_{mm}}e^{i\theta_{m}^{k}}|m\rangle$. Proof: If $\rho$ could be realized by the ensemble $\{p_{k}, |\phi_{k}\rangle\}$, then $$\sum_{k}p_{k}|\phi_{k}\rangle \langle \phi_{k}|=\sum_{mn}\sqrt{a_{mm}a_{nn}}\sum_{k}p_{k}e^{i(\theta_{m}^{k}-\theta_{n}^{k})}|m\rangle \langle n|=\sum_{mn}a_{mn}|m\rangle \langle n|.$$ That is, $$\sqrt{a_{mm}a_{nn}}\sum_{k}p_{k}e^{i(\theta_{m}^{k}-\theta_{n}^{k})}=a_{mn}$$ should hold. From the positivity of $\rho$, we know $\sqrt{a_{mm}a_{nn}}\ge |a_{mn}|$. When $\rho$ is a $2\times 2$ matrix, it is sufficient to prove that there exist $\{p_{k}, \theta_{1}^{k}, \theta_{2}^{k}\}$ satisfying $\sqrt{a_{11}a_{22}}\sum_{k}p_{k}e^{i(\theta_{1}^{k}-\theta_{2}^{k})}=a_{12}$. Regarding each term in the sum as a vector in the complex plane, it is easy to see that we can always choose $\{p_{k}, \theta_{12}^{k}\}$ satisfying $\sum_{k}p_{k}e^{i\theta_{12}^{k}}=\frac{a_{12}}{\sqrt{a_{11}a_{22}}}$, since the right-hand side has modulus at most one. In fact, there exist infinitely many solutions. Note also that the number of terms $K$ in $p_{k}, k=1, \cdots, K$, could be any large number.
This freedom is important for our induction. Suppose that the lemma is true for any $(L-1)\times (L-1)$ matrix. Then consider the case of an $L\times L$ matrix, $\rho_{L}=\sum_{mn}^{L}a_{mn}|m\rangle \langle n|$. From the positivity of $\rho_{L}$, $\tilde{\rho}_{L-1}=\sum_{mn}^{L-1}a_{mn}|m\rangle \langle n|$ is also positive and can be normalized as a density matrix $\rho_{L-1}=\frac{1}{N}\sum_{mn}^{L-1}a_{mn}|m\rangle \langle n|$, where $N=\sum_{m=1}^{L-1}a_{mm}$. According to the supposition, it can be realized by $\{p_{k}, |\phi_{k}\rangle\}$, where $|\phi_{k}\rangle=\frac{1}{\sqrt{N}}\sum_{m=1}^{L-1}\sqrt{a_{mm}}e^{i\theta_{m}^{k}}|m\rangle$. Now suppose $\rho_{L}$ could be expressed as $\rho_{L}=\sum_{k=1}^{K}p_{k}|\psi_{k}\rangle \langle \psi_{k}|$, where the $|\psi_{k}\rangle$ are chosen of the following form $$|\psi_{k}\rangle=\sqrt{N}|\phi_{k}\rangle+\sqrt{a_{LL}}e^{i\theta_{L}^{k}}|L\rangle.$$ If there exist solutions for $\theta_{L}^{k}$, then the proof is completed. The $\theta_{L}^{k}$ must satisfy the following equations, $$\sum_{k=1}^{K}p_{k}e^{i(\theta_{m}^{k}-\theta_{L}^{k})}=\frac{a_{mL}}{\sqrt{a_{mm}a_{LL}}}, \qquad (m=1, \cdots, L-1).$$ It is clear that there always exist solutions of $\theta_{L}^{k}$ for $K > L$; indeed, $K$ could be any large number. So the proof is completed. Employing the lemma, a particular realization form can be obtained for Schmidt correlated states. A bipartite mixed state $\rho$ is called a Schmidt correlated state if it can be expressed as $\rho=\sum_{mn}a_{mn}|mm\rangle \langle nn|$. Now we see immediately that a Schmidt correlated state can be realized by an ensemble $\{p_{k}, |\phi_{k}\rangle\}$, where $|\phi_{k}\rangle=\sum_{m}\sqrt{a_{mm}}e^{i\theta_{m}^{k}}|mm\rangle$. Each $|\phi_{k}\rangle$ has the same Schmidt coefficients. Using this form, we discuss the properties of the Schmidt correlated state which were investigated in the papers [@Rains; @Wu; @Virmani].
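To see the lemma at work, here is a worked $2\times 2$ instance (this example is ours, added for illustration): take $a_{11}=a_{22}=\frac{1}{2}$ and $a_{12}=a_{21}=\frac{r}{2}$ with $0\le r\le 1$. Choosing $K=2$, $p_{1}=p_{2}=\frac{1}{2}$ and a phase $\alpha$ with $\cos\alpha=r$, the ensemble members $$|\phi_{1,2}\rangle=\frac{1}{\sqrt{2}}\left(|1\rangle+e^{\pm i\alpha}|2\rangle\right)$$ reproduce the off-diagonal element, $$\frac{1}{2}\cdot\frac{1}{2}\left(e^{i\alpha}+e^{-i\alpha}\right)=\frac{\cos\alpha}{2}=\frac{r}{2}=a_{12},$$ while each diagonal element is $\frac{1}{2}$, as required.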
The relative entropy of entanglement [@VP1] for a bipartite quantum state $\rho$ is defined by $$E_{r}(\rho)=\min_{\sigma\in {\cal D}}S(\rho\|\sigma),$$ where $\cal{D}$ is the set of all separable states, and $S(\rho\|\sigma)={\rm tr}\rho(\log\rho-\log\sigma)$. Vedral and Plenio [@VP2] proved that $E_{r}(\rho)$ reduces to the von Neumann entropy of the reduced state of either side for a bipartite pure state $|\phi_{k}\rangle=\sum_{m}\sqrt{a_{mm}}|mm\rangle$, with the optimal separable matrix $\sigma^{*}=\sum_{m}a_{mm}|mm\rangle\langle mm|$. Also, they showed that if $\sigma^{*}$ is optimal for $\rho$, then it is also optimal for $\rho^{'}=p_{1}\rho+p_{2}\sigma^{*}$. Now we extend their theorem further. [*Extended Vedral-Plenio theorem*]{}: If the $\rho_{i}$ have the same $\sigma^{*}\in \cal{D}$ which minimizes $S(\rho_{i}\|\sigma^{*})$, then $\sigma^{*}$ is also the optimal operator for $\rho=\sum_{i}p_{i}\rho_{i}$. Proof: From the definition of $S(\rho\|\sigma)$, it is sufficient to minimize $-{\rm tr}\rho \log\sigma=-\sum_{i}p_{i}{\rm tr}\rho_{i}\log\sigma$. As $\sigma^{*}$ is optimal for each $\rho_{i}$, we know $$-{\rm tr}\rho_{i}\log\sigma^{*}=\min_{\sigma\in{\cal D}}(-{\rm tr}\rho_{i}\log\sigma)\le -{\rm tr}\rho_{i}\log\sigma.$$ So $\sigma^{*}$ is optimal for $\rho$. The physical meaning of $E_{r}(\rho)$ is clear after explicitly writing out the expression $$E_{r}(\sum_{i}p_{i}\rho_{i})=\sum_{i}p_{i}S(\rho_{i}\|\sigma^{*})-\left[S(\rho)-\sum_{i}p_{i}S(\rho_{i})\right].$$ The first term is the average quantum entanglement, while the second term is the classical information lost in the mixing process. From the particular realization form of Schmidt correlated states and the extended Vedral-Plenio theorem, we immediately obtain the relative entropy of entanglement of Schmidt correlated states. [*Corollary 1*]{}: For a Schmidt correlated state $\rho=\sum_{mn}a_{mn}|mm\rangle \langle nn|$, the relative entropy of entanglement is $E_{r}(\rho)=S(\rho\|\sigma^{*})$, where $\sigma^{*}=\sum_{m}a_{mm}|mm\rangle \langle mm|$. Note that the same result has been obtained by Rains [@Rains] and Wu et al. [@Wu].
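As a simple check of Corollary 1 (this example is ours, added for illustration), consider the two-qubit Schmidt correlated state $$\rho=\lambda|\Phi^{+}\rangle\langle \Phi^{+}|+(1-\lambda)|\Phi^{-}\rangle\langle \Phi^{-}|, \qquad |\Phi^{\pm}\rangle=\frac{1}{\sqrt{2}}(|00\rangle\pm|11\rangle).$$ Here $\sigma^{*}=\frac{1}{2}(|00\rangle\langle 00|+|11\rangle\langle 11|)$, and since $\rho$ is supported on the span of $\{|00\rangle, |11\rangle\}$, where $\sigma^{*}$ has eigenvalues $\frac{1}{2}$, $$E_{r}(\rho)=S(\rho\|\sigma^{*})=-S(\rho)-{\rm tr}\rho\log\sigma^{*}=1-H(\lambda),$$ with $H$ the binary entropy (logarithms base 2). This interpolates between one ebit at $\lambda=0,1$ and zero at $\lambda=\frac{1}{2}$, where $\rho$ coincides with the separable state $\sigma^{*}$ itself.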
Furthermore, we investigate the additivity of the relative entropy of entanglement of Schmidt correlated states. [*Corollary 2*]{}: The relative entropy of entanglement for Schmidt correlated states is additive, i.e. for any two Schmidt correlated states $\rho_{1}, \rho_{2}$, $E_{r}(\rho_{1}\otimes \rho_{2})=E_{r}(\rho_{1})+E_{r}(\rho_{2})$. Proof: The Schmidt correlated states $\rho_{1}$ and $\rho_{2}$ can be realized by $\{p_{i}, |\phi_{i}\rangle\}$ and $\{q_{j}, |\psi_{j}\rangle\}$ respectively, $$|\phi_{i}\rangle=\sum_{m}\sqrt{a_{mm}}e^{i\theta_{m}^{i}}|mm\rangle, \qquad |\psi_{j}\rangle=\sum_{n}\sqrt{b_{nn}}e^{i\theta_{n}^{j}}|nn\rangle,$$ where $a_{mm}$ and $b_{nn}$ are the diagonal elements of $\rho_{1}$ and $\rho_{2}$, and $|m\rangle, |n\rangle$ are two bases, not necessarily the same. All the $|\phi_{i}\rangle$ have the same optimal separable state $\sigma_{1}^{*}=\sum_{m}a_{mm}|mm\rangle\langle mm|$ as $\rho_{1}$, and the $|\psi_{j}\rangle$ have $\sigma_{2}^{*}=\sum_{n}b_{nn}|nn\rangle\langle nn|$ as $\rho_{2}$. Then $$\rho_{1}\otimes \rho_{2}=\sum_{ij}p_{i}q_{j}|\phi_{i}\rangle\langle \phi_{i}|\otimes|\psi_{j}\rangle\langle \psi_{j}|$$ is also a Schmidt correlated state. And $\sigma_{1}^{*}\otimes\sigma_{2}^{*}$ is the optimal separable state for each of the product pure states $|\phi_{i}\rangle|\psi_{j}\rangle$, so it is the optimal one for $\rho_{1}\otimes \rho_{2}$. Therefore $$\begin{aligned} E_{r}(\rho_{1}\otimes \rho_{2})&=&S(\rho_{1}\otimes \rho_{2}\|\sigma_{1}^{*}\otimes\sigma_{2}^{*}),\\ &=&{\rm tr}\,\rho_{1}\otimes \rho_{2}\log\rho_{1}\otimes \rho_{2}-\sum_{ij}p_{i}q_{j}\,{\rm tr}\,|\phi_{i}\rangle\langle \phi_{i}|\otimes|\psi_{j}\rangle\langle \psi_{j}|\log\sigma_{1}^{*}\otimes\sigma_{2}^{*},\\ &=&{\rm tr}\,\rho_{1}(\log\rho_{1}-\log\sigma_{1}^{*})+{\rm tr}\,\rho_{2}(\log\rho_{2}-\log\sigma_{2}^{*}),\\ &=&E_{r}(\rho_{1})+E_{r}(\rho_{2}).\end{aligned}$$ So the relative entropy of entanglement is additive for Schmidt correlated states. This conclusion has been proved by Rains [@Rains]. The subspace spanned by all the states in the range of any Schmidt correlated state $\rho$ is called the Schmidt correlated subspace of the state $\rho$.
It is clear that all the pure states in the Schmidt correlated subspace have the same Schmidt basis in their Schmidt decomposition form. A set of pure states is called Schmidt correlated if they lie in the same Schmidt correlated subspace. Virmani et al. [@Virmani] discussed the notion of Schmidt correlated pure states, which was introduced by Rains as maximally correlated states. A set of Schmidt correlated pure states can always be discriminated locally as well as globally, regardless of which figure of merit is chosen. They investigated the conditions under which two pure states can be written in Schmidt correlated form and showed that any two maximally entangled states can always be expressed in Schmidt correlated form, thus showing that two maximally entangled states can always be discriminated locally according to any figure of merit. In the following, we will discuss a class of mixed states whose distillable entanglement can be calculated explicitly. This generalizes the result of Ghosh et al. In [@Ghosh], the distillable entanglement of the mixed state of the form $$\rho=\frac{1}{2}\left[\left(|\Phi_{i}\rangle\langle \Phi_{i}|\right)^{\otimes 2}+\left(|\Phi_{j}\rangle\langle \Phi_{j}|\right)^{\otimes 2}\right]$$ is shown to be one ebit, where $|\Phi_{i}\rangle, |\Phi_{j}\rangle$ are two different Bell states. We know any two Bell states are Schmidt correlated. If a set of Schmidt correlated pure states are orthogonal, they can be distinguished perfectly by local operations and classical communication even if they are entangled. We give the example in $3\times 3$; it is easy to generalize to higher dimensions. Three orthogonal states in the Schmidt correlated subspace $V$ spanned by $\{|00\rangle, |11\rangle, |22\rangle\}$ are generally of the form $$|e_{i}\rangle=\sum_{j}u_{ij}|jj\rangle, \qquad i,j=0,1,2,$$ where $u_{ij}$ are the elements of a unitary matrix.
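The one-ebit value quoted above can also be checked directly from Corollary 1 (this computation is ours, added for illustration). Across the cut $A_{1}A_{2}:B_{1}B_{2}$, each $\left(|\Phi\rangle\langle\Phi|\right)^{\otimes 2}$ is the projector onto a state that is Schmidt correlated in the basis $\{|jk\rangle\}$, with four equal Schmidt coefficients $\frac{1}{4}$, so $$\sigma^{*}=\frac{1}{4}\sum_{j,k=0}^{1}|jk\rangle\langle jk|\otimes|jk\rangle\langle jk|,$$ and, since $\rho$ is an equal mixture of two orthogonal pure states with $S(\rho)=1$, $$E_{r}(\rho)=-{\rm tr}\rho\log\sigma^{*}-S(\rho)=\log 4-1=1 \ {\rm ebit},$$ in agreement with the distillable entanglement found in [@Ghosh].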
Under another basis $\{|i^{'}\rangle\}$, with $$\begin{aligned} |0\rangle&=&\frac{1}{\sqrt{3}}(|0^{'}\rangle+|1^{'}\rangle+|2^{'}\rangle),\\ |1\rangle&=&\frac{1}{\sqrt{3}}(|0^{'}\rangle+e^{i2\pi/3}|1^{'}\rangle+e^{i4\pi/3}|2^{'}\rangle),\\ |2\rangle&=&\frac{1}{\sqrt{3}}(|0^{'}\rangle+e^{i4\pi/3}|1^{'}\rangle+e^{i8\pi/3}|2^{'}\rangle),\end{aligned}$$ $|e_{i}\rangle$ can be expressed as $$\begin{aligned} |e_{i}\rangle&=&\frac{1}{\sqrt{3}}\big[|0^{'}\rangle(u_{i0}|0\rangle+u_{i1}|1\rangle+u_{i2}|2\rangle)+|1^{'}\rangle(u_{i0}|0\rangle+u_{i1}e^{i2\pi/3}|1\rangle+u_{i2}e^{i4\pi/3}|2\rangle)\\ &&+|2^{'}\rangle(u_{i0}|0\rangle+u_{i1}e^{i4\pi/3}|1\rangle+u_{i2}e^{i8\pi/3}|2\rangle)\big], \qquad i=0,1,2.\end{aligned}$$ The first party performs a measurement in the basis $\{|i^{'}\rangle\}$; any outcome projects the second side onto an orthogonal basis. So the second party measures in the corresponding basis and tells which the state is. Now we calculate the distillable entanglement of a class of mixed states that are classically correlated between two sets of Schmidt correlated pure states, $$\rho_{A_{1}A_{2}B_{1}B_{2}}=\frac{1}{N}\sum_{i=1}^{N}|e_{i}\rangle_{A_{1}B_{1}}\langle e_{i}|\otimes|\phi_{i}\rangle_{A_{2}B_{2}}\langle \phi_{i}|,$$ where the $|e_{i}\rangle=\sum_{j}u_{ij}|jj\rangle$ form an orthogonal basis in a Schmidt correlated subspace and $|\phi_{i}\rangle=\sum_{k}\sqrt{\lambda_{k}}e^{i\theta_{k}^{i}}|kk\rangle$. All the $|\phi_{i}\rangle$ have the same optimal separable state $\sigma^{*}=\sum_{k}\lambda_{k}|kk\rangle\langle kk|$. Employing the distinction process as the distillation protocol, the distillable entanglement of $\rho$ in the bipartite cut $A_{1}A_{2}:B_{1}B_{2}$ is at least $S(|\phi\rangle)$, where $S(|\phi\rangle)=-\sum_{k}\lambda_{k}\log\lambda_{k}$ is the entanglement of any of the pure states $|\phi_{i}\rangle$. So we have the inequality $$E_{d}(\rho)\ge S(|\phi\rangle)=-\sum_{k}\lambda_{k}\log\lambda_{k}.$$ The relative entropy of entanglement of $\rho$ can be explicitly calculated since $\rho$ is a Schmidt correlated state: $$E_{r}(\rho)=S(|\phi\rangle)=-\sum_{k}\lambda_{k}\log\lambda_{k}.$$ We know that the relative entropy of entanglement is an upper bound on the distillable entanglement, that is $$E_{r}(\rho)\ge E_{d}(\rho).$$ So $E_{d}(\rho)=E_{r}(\rho)$ is obtained. The same reasoning has also been employed in [@Ghosh].
Notice that in order to discriminate the orthogonal pure states in the Schmidt correlated subspace, the entanglement has to be destroyed completely. This gives us a clue for calculating the distillable entanglement of a general Schmidt correlated state. We will investigate this problem further. In summary, we find a particular realization form for a mixed state and employ it to discuss the relative entropy of entanglement of Schmidt correlated states. We also provide a class of mixed states whose distillable entanglement can be calculated. D. Yang thanks S. J. Gu and H. W. Wang for helpful discussions. The work is supported by the NNSF of China, the Special NSF of Zhejiang Province (Grant No. RC98022) and the Guang-Biao Cao Foundation of Zhejiang University. C. H. Bennett, G. Brassard, C. Crepeau, R. Jozsa, A. Peres and W. K. Wootters, Phys. Rev. Lett. [**70**]{}, 1895 (1993). C. H. Bennett and S. J. Wiesner, Phys. Rev. Lett. [**69**]{}, 2881 (1992). C. H. Bennett, D. P. DiVincenzo, J. A. Smolin and W. K. Wootters, Phys. Rev. A [**54**]{}, 3824 (1996). V. Vedral, M. B. Plenio, M. A. Rippin, and P. L. Knight, Phys. Rev. Lett. [**78**]{}, 2275 (1997). V. Vedral and M. B. Plenio, Phys. Rev. A [**57**]{}, 1619 (1998). E. M. Rains, Phys. Rev. A [**60**]{}, 179 (1999). S. Wu and Y. Zhang, Calculating the relative entropy of entanglement, quant-ph/0004020. C. H. Bennett, D. P. DiVincenzo, C. A. Fuchs, T. Mor, E. Rains, P. W. Shor, and J. A. Smolin, Phys. Rev. A [**59**]{}, 1070 (1999). J. Walgate, A. Short, L. Hardy and V. Vedral, Phys. Rev. Lett. [**85**]{}, 4972 (2000). S. Ghosh, G. Kar, A. Roy, A. Sen(De) and U. Sen, Phys. Rev. Lett. [**87**]{}, 277902 (2001), quant-ph/0106148. S. Virmani, M. F. Sacchi, M. B. Plenio and D. Markham, Phys. Lett. A [**288**]{}, 62 (2001).
The Perfect LBD × London 11:48 PM Carmen Alexandra 0 Comments So last week I had my first experience as a corporate-level buyer at Magic in Las Vegas, and it was an amazing, uplifting, and motivating experience! I connected with so many designers, entrepreneurs, and fashionistas on the business side of things! I got so many ideas, design-wise as well as OOTDs! If you're wondering what Magic is, it's a HUGE three-day trade show full of wholesale vendors where boutiques and stores go to purchase clothing for the current or following season. My job was to shop for the cutest Fall wear and boy was I hyped, Fall is my favorite season! Fur vests, cardigans, sweaters, boots, and my favorite....LEATHER! Luckily, with my amazing boss and coworkers, Vegas was as much work as it was play. To say we worked hard and played hard is completely accurate! My scenery in today's post brought something out of me; something about the clock in the background reminded my cousin and me of London. For those who don't know, I'm obsessed with London and can't wait to visit! I'm also loving my hat, I got it from Urban, it's perfection for bad hair days....but not so much on windy days ;)
#include "screen.h"
#include "view.h"
#include "uchar.h"
#include "obuf.h"
#include "selection.h"
#include "hl.h"

struct line_info {
	struct view *view;
	long line_nr;
	long offset;
	long sel_so;
	long sel_eo;
	const unsigned char *line;
	long size;
	long pos;
	long indent_size;
	long trailing_ws_offset;
	struct hl_color **colors;
};

static bool is_default_bg_color(int color)
{
	return color == builtin_colors[BC_DEFAULT]->bg || color < 0;
}

// like mask_color() but can change bg color only if it has not been changed yet
static void mask_color2(struct term_color *color, const struct term_color *over)
{
	if (over->fg != -2)
		color->fg = over->fg;
	if (over->bg != -2 && is_default_bg_color(color->bg))
		color->bg = over->bg;
	if (!(over->attr & ATTR_KEEP))
		color->attr = over->attr;
}

static void mask_selection_and_current_line(struct line_info *info, struct term_color *color)
{
	if (info->offset >= info->sel_so && info->offset < info->sel_eo) {
		mask_color(color, builtin_colors[BC_SELECTION]);
	} else if (info->line_nr == info->view->cy) {
		mask_color2(color, builtin_colors[BC_CURRENTLINE]);
	}
}

static bool is_non_text(unsigned int u)
{
	if (u < 0x20)
		return u != '\t' || options.display_special;
	if (u == 0x7f)
		return true;
	return u_is_unprintable(u);
}

static int get_ws_error_option(struct buffer *b)
{
	int flags = b->options.ws_error;

	if (flags & WSE_AUTO_INDENT) {
		if (b->options.expand_tab) {
			flags |= WSE_TAB_AFTER_INDENT | WSE_TAB_INDENT;
		} else {
			flags |= WSE_SPACE_INDENT;
		}
	}
	return flags;
}

static bool whitespace_error(struct line_info *info, unsigned int u, long i)
{
	struct view *v = info->view;
	int flags = get_ws_error_option(v->buffer);

	if (i >= info->trailing_ws_offset && flags & WSE_TRAILING) {
		// Trailing whitespace.
		if (info->line_nr != v->cy || v->cx < info->trailing_ws_offset)
			return true;
		// Cursor is on this line and on the whitespace or at eol. It would
		// be annoying if the line you are editing displays trailing
		// whitespace as an error.
	}

	if (u == '\t') {
		if (i < info->indent_size) {
			// in indentation
			if (flags & WSE_TAB_INDENT)
				return true;
		} else {
			if (flags & WSE_TAB_AFTER_INDENT)
				return true;
		}
	} else if (i < info->indent_size) {
		// space in indentation
		const char *line = info->line;
		int count = 0, pos = i;

		while (pos > 0 && line[pos - 1] == ' ')
			pos--;
		while (pos < info->size && line[pos] == ' ') {
			pos++;
			count++;
		}

		if (count >= v->buffer->options.tab_width) {
			// spaces used instead of tab
			if (flags & WSE_SPACE_INDENT)
				return true;
		} else if (pos < info->size && line[pos] == '\t') {
			// space before tab
			if (flags & WSE_SPACE_INDENT)
				return true;
		} else {
			// less than tab width spaces at end of indentation
			if (flags & WSE_SPACE_ALIGN)
				return true;
		}
	}
	return false;
}

static unsigned int screen_next_char(struct line_info *info)
{
	long count, pos = info->pos;
	unsigned int u = info->line[pos];
	struct term_color color;
	bool ws_error = false;

	if (likely(u < 0x80)) {
		info->pos++;
		count = 1;
		if (u == '\t' || u == ' ')
			ws_error = whitespace_error(info, u, pos);
	} else {
		u = u_get_nonascii(info->line, info->size, &info->pos);
		count = info->pos - pos;

		// highly annoying no-break space etc.?
		if (u_is_special_whitespace(u) && (info->view->buffer->options.ws_error & WSE_SPECIAL))
			ws_error = true;
	}

	if (info->colors && info->colors[pos]) {
		color = info->colors[pos]->color;
	} else {
		color = *builtin_colors[BC_DEFAULT];
	}
	if (is_non_text(u))
		mask_color(&color, builtin_colors[BC_NONTEXT]);
	if (ws_error)
		mask_color(&color, builtin_colors[BC_WSERROR]);
	mask_selection_and_current_line(info, &color);
	set_color(&color);

	info->offset += count;
	return u;
}

static void screen_skip_char(struct line_info *info)
{
	unsigned int u = info->line[info->pos++];

	info->offset++;
	if (likely(u < 0x80)) {
		if (likely(!u_is_ctrl(u))) {
			obuf.x++;
		} else if (u == '\t' && obuf.tab != TAB_CONTROL) {
			obuf.x += (obuf.x + obuf.tab_width) / obuf.tab_width * obuf.tab_width - obuf.x;
		} else {
			// control
			obuf.x += 2;
		}
	} else {
		long pos = info->pos;

		info->pos--;
		u = u_get_nonascii(info->line, info->size, &info->pos);
		obuf.x += u_char_width(u);
		info->offset += info->pos - pos;
	}
}

static bool is_notice(const char *word, int len)
{
	static const char * const words[] = { "fixme", "todo", "xxx" };
	int i;

	for (i = 0; i < ARRAY_COUNT(words); i++) {
		const char *w = words[i];
		if (strlen(w) == len && !strncasecmp(w, word, len))
			return true;
	}
	return false;
}

// highlight certain words inside comments
static void hl_words(struct line_info *info)
{
	struct hl_color *cc = find_color("comment");
	struct hl_color *nc = find_color("notice");
	int i, j, si, max;

	if (info->colors == NULL || cc == NULL || nc == NULL)
		return;

	i = info->pos;
	if (i >= info->size)
		return;

	// go to beginning of partially visible word inside comment
	while (i > 0 && info->colors[i] == cc && is_word_byte(info->line[i]))
		i--;

	// This should be more than enough. I'm too lazy to iterate characters
	// instead of bytes and calculate text width.
	max = info->pos + screen_w * 4 + 8;

	while (i < info->size) {
		if (info->colors[i] != cc || !is_word_byte(info->line[i])) {
			if (i > max)
				break;
			i++;
		} else {
			// beginning of a word inside comment
			si = i++;
			while (i < info->size && info->colors[i] == cc && is_word_byte(info->line[i]))
				i++;
			if (is_notice(info->line + si, i - si)) {
				for (j = si; j < i; j++)
					info->colors[j] = nc;
			}
		}
	}
}

static void line_info_init(struct line_info *info, struct view *v, struct block_iter *bi, long line_nr)
{
	memset(info, 0, sizeof(*info));
	info->view = v;
	info->line_nr = line_nr;
	info->offset = block_iter_get_offset(bi);

	if (!v->selection) {
		info->sel_so = -1;
		info->sel_eo = -1;
	} else if (v->sel_eo != UINT_MAX) {
		/* already calculated */
		info->sel_so = v->sel_so;
		info->sel_eo = v->sel_eo;
		BUG_ON(info->sel_so > info->sel_eo);
	} else {
		struct selection_info sel;

		init_selection(v, &sel);
		info->sel_so = sel.so;
		info->sel_eo = sel.eo;
	}
}

static void line_info_set_line(struct line_info *info, struct lineref *lr, struct hl_color **colors)
{
	int i;

	BUG_ON(lr->size == 0);
	BUG_ON(lr->line[lr->size - 1] != '\n');

	info->line = lr->line;
	info->size = lr->size - 1;
	info->pos = 0;
	info->colors = colors;

	for (i = 0; i < info->size; i++) {
		char ch = info->line[i];
		if (ch != '\t' && ch != ' ')
			break;
	}
	info->indent_size = i;

	info->trailing_ws_offset = INT_MAX;
	for (i = info->size - 1; i >= 0; i--) {
		char ch = info->line[i];
		if (ch != '\t' && ch != ' ')
			break;
		info->trailing_ws_offset = i;
	}
}

static void print_line(struct line_info *info)
{
	struct term_color color;
	unsigned int u;

	// Screen might be scrolled horizontally. Skip most invisible
	// characters using screen_skip_char() which is much faster than
	// buf_skip(screen_next_char(info)).
	//
	// There can be a wide character (tab, control code etc.) which is
	// partially visible and can't be skipped using screen_skip_char().
	while (obuf.x + 8 < obuf.scroll_x && info->pos < info->size)
		screen_skip_char(info);

	hl_words(info);

	while (info->pos < info->size) {
		BUG_ON(obuf.x > obuf.scroll_x + obuf.width);
		u = screen_next_char(info);
		if (!buf_put_char(u)) {
			// +1 for newline
			info->offset += info->size - info->pos + 1;
			return;
		}
	}

	if (options.display_special && obuf.x >= obuf.scroll_x) {
		// syntax highlighter highlights \n but use default color anyway
		color = *builtin_colors[BC_DEFAULT];
		mask_color(&color, builtin_colors[BC_NONTEXT]);
		mask_selection_and_current_line(info, &color);
		set_color(&color);
		buf_put_char('$');
	}
	color = *builtin_colors[BC_DEFAULT];
	mask_selection_and_current_line(info, &color);
	set_color(&color);
	info->offset++;
	buf_clear_eol();
}

void update_range(struct view *v, int y1, int y2)
{
	struct line_info info;
	struct block_iter bi = v->cursor;
	int i, got_line;

	buf_reset(v->window->edit_x, v->window->edit_w, v->vx);
	obuf.tab_width = v->buffer->options.tab_width;
	obuf.tab = options.display_special ? TAB_SPECIAL : TAB_NORMAL;

	for (i = 0; i < v->cy - y1; i++)
		block_iter_prev_line(&bi);
	for (i = 0; i < y1 - v->cy; i++)
		block_iter_eat_line(&bi);
	block_iter_bol(&bi);

	line_info_init(&info, v, &bi, y1);

	y1 -= v->vy;
	y2 -= v->vy;

	got_line = !block_iter_is_eof(&bi);
	hl_fill_start_states(v->buffer, info.line_nr);
	for (i = y1; got_line && i < y2; i++) {
		struct lineref lr;
		struct hl_color **colors;
		int next_changed;

		obuf.x = 0;
		buf_move_cursor(v->window->edit_x, v->window->edit_y + i);

		fill_line_nl_ref(&bi, &lr);
		colors = hl_line(v->buffer, lr.line, lr.size, info.line_nr, &next_changed);
		line_info_set_line(&info, &lr, colors);
		print_line(&info);

		got_line = block_iter_next_line(&bi);
		info.line_nr++;

		if (next_changed && i + 1 == y2 && y2 < v->window->edit_h) {
			// more lines need to be updated not because their
			// contents have changed but because their highlight
			// state has
			y2++;
		}
	}

	if (i < y2 && info.line_nr == v->cy) {
		// dummy empty line is shown only if cursor is on it
		struct term_color color = *builtin_colors[BC_DEFAULT];

		obuf.x = 0;
		mask_color2(&color, builtin_colors[BC_CURRENTLINE]);
		set_color(&color);
		buf_move_cursor(v->window->edit_x, v->window->edit_y + i++);
		buf_clear_eol();
	}

	if (i < y2)
		set_builtin_color(BC_NOLINE);
	for (; i < y2; i++) {
		obuf.x = 0;
		buf_move_cursor(v->window->edit_x, v->window->edit_y + i);
		buf_put_char('~');
		buf_clear_eol();
	}
}
LONDON -- Caroline Wozniacki complained about the flying insects at Wimbledon, demanding that bug spray be brought to the court. She wondered aloud whether play should be halted because of a brief drizzle. And the No. 2-seeded Wozniacki was not exactly gracious in defeat after staving off five match points, but not the sixth, in what became a 6-4, 1-6, 7-5 loss to 35th-ranked Ekaterina Makarova of Russia in the second round at the All England Club on Wednesday. The reigning Australian Open champion's latest lackluster showing at Wimbledon made her the fifth top-eight seeded woman to exit by the end of Day 3, which was prematurely ended by rain. Wozniacki said Makarova "got a little lucky" and added, "I would be very surprised if you saw her go far." Asked what she thought of those remarks, Makarova laughed and replied: "Well, I don't know what to say. Yeah, maybe I was lucky today. Good for me. Thanks, God." Serena Williams continued her Wimbledon return by moving into the third round with an emphatic 6-1, 6-4 victory over Viktoriya Tomova. The seven-time champion, seeded 25th this year, lost just five of 32 points on her first serve, as she took a little over an hour to triumph against her 135th-ranked opponent. Five-time Wimbledon champion Venus Williams once again dropped the opening set and once again dominated the rest of the way to win. The No. 9 seed, at 38 the oldest woman in the draw, came back to beat 141st-ranked qualifier Alexandra Dulgheru of Romania 4-6, 6-0, 6-1. "I mean, it's just about winning the match. And so, if that's your best or not, your best doesn't matter," Venus said, "as long as you win." However, it was Wozniacki's departure that counted as the closest thing to big news on Wednesday. She is a former No. 1 who recently claimed her first Grand Slam title.
She won a grass-court tuneup tournament last weekend. Wozniacki had convinced herself this was going to be her year to shine at the All England Club, the only major where she has never been past the fourth round. In addition to her title on the hard courts in Australia, she has twice been the runner-up on that surface at the US Open, and she has twice been a quarterfinalist on the French Open's red clay. But a game principally predicated on defense can be harder to make work on the speedy grass, where Makarova produced twice as many winners Wednesday, 46-23. "It's frustrating," Wozniacki said, "because I feel like I could have gone and done something really great here." Instead, it's the fourth time in the past seven years that she is out in the first or second round. She almost put together quite a comeback, though. After trailing 5-1 in the third set, Wozniacki broke twice when Makarova served for the match. The second time, at 5-3, Makarova was within a point of victory four times, but she was unable to convert, wasting one of those opportunities with a double fault. Once Wozniacki pulled even in the last set by holding at love, Makarova gave herself a bit of a talking-to. "At 5-all, I said to myself, 'OK, calm down. Start over,'" recounted Makarova, a former top-10 player who twice has been a major semifinalist and got to the Wimbledon quarterfinals in 2014. From deuce in that game, Makarova picked up six of the last seven points. Earlier in the match, Wozniacki was irritated by the bugs that also showed up at last year's tournament, insisting that something needed to be done; organizers used bug spray after she complained to the chair umpire. That word also described how Makarova's left-handed game made Wozniacki feel. "I had a chance today. I fought all I had. I'm out. That's it," said Wozniacki, who actually won more total points, 94-91. "It's life sometimes. You just have to keep working and come back.
And hopefully next time, luck will be on my side."
Tuesday, October 6, 2009

I am hoping the migratory birds make it my way. It seems like I've been in such a birding slump lately! I took some photos of the Downy Woodpecker, but none were worth posting. At least I can always count on my little chickadees! How is autumn looking in your area? We're experiencing a late autumn here in northern Michigan.

34 comments:

...your little Black-capped Chickadee is a sweetheart--they are always such troopers; all winter long, through those freezing temps, they keep me entertained with their antics. Your fall colors are beautiful--a little brighter than ours. We're a few days to a week or so behind you, I guess.

I was in your state this weekend. My first time to bird there...I LOVED it. It was gorgeous....I want to live there!

Hi Shelley, lucky you. We are experiencing a very early WINTER here in Iceland. The snow has been around Reykjavík for weeks, especially on the mountain, which made the temperatures drop from 10°C to -5°C in no time. And finally, yesterday the snow reached us, with 3-4 cm on the ground. Indeed we are happy, because the days feel brighter when we have snow! So temperatures are rather low now and it is difficult to go birding, but I still try... Beautiful chickadee shot, as usual.

If it were not for the 'regulars' like chickadees and Blue Jays, we would have almost no birds. Hoping things will get better soon. The colors are gorgeous in your area; we're still waiting for that here too. Blessings, Ruth

Chickadees are always fun! Hehe ;) We're having a late autumn in NB, Canada as well. There's still a lot of green outside, with a little bit of yellow, orange and red. I'm guessing by the end of this week there won't be much green left. Have a good week :)

I love your little chickie photo. So photogenic! Our colors have not yet started to pop here in SE Ohio... I'm surprised things aren't farther along there in northern Michigan. Take care! Hugs to you all.

Love that little chickadee!!

It seems like the leaves are dropping so fast that, from where I can see, the colors aren't that good. But I did see some nice colors on my way to the doctor's this morning. I will have to take my camera and go for a ride soon.

They aren't quite at peak here, but will be very soon.

I think we are also experiencing a late fall. Usually by now our backyard leaves are falling and calling our names to rake, but this year they are still clinging to the trees and are still green!!! Love your little chickadee; they are the cutest little birds, and always quite peppy to have around! How's your little puppy doing with your cooler temps? Usually dogs love colder weather; it makes them more active!!

Hi Shelley, sorry you're having a birding blip... I'm sure it will pass. I'm having a great autumn so far. I have just taken a birding trip to the Isles of Scilly, off the SW tip of England. A fabulous place!

That last photo is lovely. I can't tell whether it is underwater or not, though??? Maybe it's my computer screen! (-:
Electrical company convicted after apprentice killed

The death of an unsupervised third-year apprentice electrician has led to a conviction and a $300,000 fine for the company. The apprentice died after being electrocuted while he was laying cables at a Camberwell property.

3 Point Electrics Pty Ltd pleaded guilty in the Melbourne Magistrates' Court to two charges under section 21 of the OHS Act: failing to ensure, so far as reasonably practicable, that the workplace was safe and without risk to health; and failing to provide the supervision necessary for employees to perform their work safely and without risk to health. The company was also ordered to pay $6067 in costs.

The court heard how the 26-year-old apprentice was sent to the property alone in August 2016 to install cables in preparation for a new smoke alarm. He was working on the roof when his hand contacted an exposed live wire, electrocuting him. A firefighter who attended the scene found that all the circuit-breakers on the switchboard were in the 'on' position. WorkSafe Victoria's investigation also found the company had failed to prepare a Safe Work Method Statement for the work.

WorkSafe Head of Hazardous Industries and Industry Practice Michael Coffey said it was unacceptable for apprentice electricians to undertake electrical work without being effectively supervised by qualified electricians. "Mature-aged apprentices are becoming more common, so employers need to remember that age does not necessarily relate to experience or competency," he said. "It is vital all inexperienced workers are effectively supervised, trained to perform their tasks safely and encouraged to speak up or ask questions if they are unsure about something. This is a tragic reminder of what can happen when electrical circuits are not isolated as they should be."

Tips for electricians to work safely:
- Always de-energise and lock out the switchboard or circuit to be worked on.
- Always test for live to ensure all parts are de-energised before starting or restarting work.
- If working on or near an energised installation, ensure a Safe Work Method Statement is developed and adhered to.
- Ensure apprentices are effectively supervised.
- If the power cannot be turned off, reschedule the work to a time when the power can be isolated.
Stop the Republican War on Women!
by: Democratic Congressional Campaign Committee
recipient: Congressional Republicans

The Republican War on Women is real, and it's extremely dangerous. In the last year, Republicans in Congress have:
*Proposed redefining rape to cover only cases of "forcible rape" in order to deny access to women's health services.
*Voted repeatedly to defund Planned Parenthood.
*Held a hearing on women's health with five men and no women.
*Voted to give corporations the power to deny women access to contraception.

Sign our petition today and join over one million strong telling Republicans to end their radical War on Women.

We, the undersigned, are writing to express our deep concern about the state of women's rights and recent Republican efforts to limit them. This has got to stop. Republicans' treatment of women as second-class citizens has no place in the modern-day United States.

[Your comments here]

We demand that Congressional Republicans immediately end their War on Women and give women a seat at the table, especially when discussing women's health. Thank you for your time and consideration.

Sincerely,
[Your name here]
Prolific Japanese auteur Sion Sono is having quite a busy year. His slaphappy hip-hop flick from 2014, Tokyo Tribe, will finally be released theatrically in the U.S. next month. Back in July, his gory girl thriller, Tag, premiered at the Bucheon fest just weeks after his commercially minded Yakuza melodrama, Shinjuku Swan, grabbed the number one slot at the domestic box office. Now in Toronto, he's unveiling the third of six -- count them, six -- features shot in 2015, trying to break a record previously held by the likes of Raul Ruiz and Rainer Werner Fassbinder, or fellow countryman Takashi Miike. This latest effort, entitled The Whispering Star (Hiso Hiso Boshi), is a slickly minimalist sci-fi story that plays like a cross between an interstellar Jeanne Dielman and a post-apocalyptic vision of Japan after the 2011 earthquake. Way too slim in the plot department to achieve widescale art house play, it's still an arrestingly made effort that should entice Sono completists -- who certainly have their work cut out for them these days. Revisiting locations from his 2012 Fukushima drama, The Land of Hope, the film is set between the abundant ruins of seaside towns destroyed by the tsunami and the cabin of a ragtag spaceship which, with its rusty old appliances and traditional Japanese exterior, looks like something Mel Brooks could have cooked up for Spaceballs. Onboard lives Yoko (Sono muse Megumi Kagurazaka), a beautiful 30ish cyborg powered by AA batteries, whose mission is to deliver packages across the universe for a company called SPS. Sono slyly introduces Yoko through a series of short vignettes in which we see her doing basic domestic chores: making tea, cleaning the floor, running a laundry machine, etc. Only after some time do we realize she's on a space shuttle operated by a HAL-like computer in the shape of an old vacuum tube radio.
There are lots of other 20th century gadgets that Yoko fiddles with, including an analog tape recorder she occasionally speaks into, explaining how "mankind went on to make devastating mistakes," resulting in a universe where "humans are now an endangered species." Indeed, when Yoko touches down on various planets to hand-deliver packages, she lands each time in a barren wasteland that can be no other place than contemporary Japan, where several regions were devastated by the earthquake, tsunami and nuclear disaster that took place over four years ago. Like many science-fiction films, Star slowly but surely reveals itself as a parable of our self-destructive times -- an artsy Interstellar with a threadbare narrative rather than one that's forever running on hyperdrive. There's no real plot development in a classic sense, and alongside lots of deliberately repetitive action, the only truly dramatic moment involves Yoko tearing up her flight computer, causing it to ooze foam like some sort of Cronenbergian CPU from the future. Such a constant fusion of organic and robotic, digital and handmade, puts the viewer in a strange retro time warp, turning Yoko -- very much like Pixar's Wall-E -- into an intermediary between a world that's been lost and one that could perhaps be rebuilt. Captured through the exquisite black-and-white images of regular DP Hideo Yamamoto (The Grudge), the Japan portrayed in Star ultimately reminds one of the original Planet of the Apes: a land filled with artifacts of our own annihilation.
--- ./releng/setup-env.sh.orig	2020-07-22 19:22:28.247685514 +0000
+++ ./releng/setup-env.sh	2020-07-22 19:23:37.730108226 +0000
@@ -527,26 +527,26 @@
 host_cflags=""
 case $host_arch in
   x86)
-    android_api=18
+    android_api=24
     host_compiler_triplet="i686-linux-android"
     host_arch_flags="-march=pentium4"
     host_cflags="-mfpmath=sse -mstackrealign"
     host_ldflags="-fuse-ld=gold"
     ;;
   x86_64)
-    android_api=21
+    android_api=24
     host_compiler_triplet="x86_64-linux-android"
     host_ldflags="-fuse-ld=gold -Wl,--icf=all"
     ;;
   arm)
-    android_api=18
+    android_api=24
     host_compiler_triplet="armv7a-linux-androideabi"
     host_tooltriplet="arm-linux-androideabi"
     host_arch_flags="-march=armv7-a -mfloat-abi=softfp -mfpu=vfpv3-d16"
     host_ldflags="-fuse-ld=gold -Wl,--icf=all -Wl,--fix-cortex-a8"
     ;;
   arm64)
-    android_api=21
+    android_api=24
     host_compiler_triplet="aarch64-linux-android"
     host_ldflags="-fuse-ld=gold -Wl,--icf=all"
     ;;
---
abstract: 'The parabolic Allen–Cahn equation is a semilinear partial differential equation linked to the mean curvature flow by a singular perturbation. We show an improved convergence property of the parabolic Allen–Cahn equation to the mean curvature flow, the parabolic analogue of the improved convergence of the elliptic Allen–Cahn equation to minimal surfaces by Wang-Wei [@Wang2019a] and Chodosh-Mantoulidis [@Chodosh2018]. More precisely, we show that if the phase-transition level sets converge in $C^2$, then they converge in $C^{2,\theta}$. As an application, we obtain a curvature estimate for the parabolic Allen–Cahn equation, which can be viewed as a diffused version of the regularity theorems of Brakke [@brakke2015motion] and White [@white2005local] for mean curvature flow.'
address:
- |
  School of Mathematical Sciences\
  Queen Mary University of London\
  Mile End Road\
  London E1 4NS
- |
  School of Mathematical Sciences\
  Queen Mary University of London\
  Mile End Road\
  London E1 4NS
author:
- Huy The Nguyen
- Shengwen Wang
title: 'Second order estimates for transition layers and a curvature estimate for the parabolic Allen–Cahn'
---

Introduction
============

The parabolic Allen–Cahn equation
$$\label{ACF}
\frac{\partial}{\partial t}u=\Delta u-W'(u)$$
is an evolution equation that models the diffusion-reaction dynamics of phase transition. It is the gradient flow of the Allen–Cahn phase separation energy
$$E(u)=\int\frac{1}{2}|\nabla u|^2+W(u)$$
where $W(u):\mathbb R\rightarrow\mathbb R$ is a double-well shaped potential function. Geometrically, the Allen–Cahn equation has a close relationship with mean curvature flow through its singularly perturbed version
$$\label{AFCEpsilon}
\frac{\partial}{\partial t}u^{\varepsilon}=\Delta u^{\varepsilon}-\frac{W'(u^{\varepsilon})}{{\varepsilon}^2}.$$
The two equations are related by the parabolic scaling $u^{\varepsilon}(x,t)=u(\frac{x}{{\varepsilon}},\frac{t}{{\varepsilon}^2})$.
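For completeness, the scaling relation can be verified directly by the chain rule (a routine check, not spelled out in the text): if $u$ solves the unscaled equation, then
$$\frac{\partial}{\partial t}u^{\varepsilon}(x,t)=\frac{1}{{\varepsilon}^2}(\partial_t u)\left(\frac{x}{{\varepsilon}},\frac{t}{{\varepsilon}^2}\right),\qquad
\Delta u^{\varepsilon}(x,t)=\frac{1}{{\varepsilon}^2}(\Delta u)\left(\frac{x}{{\varepsilon}},\frac{t}{{\varepsilon}^2}\right),$$
so that
$$\frac{\partial}{\partial t}u^{\varepsilon}=\Delta u^{\varepsilon}-\frac{W'(u^{\varepsilon})}{{\varepsilon}^2},$$
which is exactly the ${\varepsilon}$-equation above.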
In particular, equation (\[ACF\]) is not scale invariant, but $u^{\varepsilon}$ satisfies an ${\varepsilon}$-equation of the same form with a different parameter. It was shown by Ilmanen [@Ilmanen1993] that as the parameter ${\varepsilon}\rightarrow0$, the energy measure
$$\begin{aligned}
d\mu^{\varepsilon}(u)=\left [\frac{1}{2}{\varepsilon}|\nabla u^{\varepsilon}|^2+\frac{W(u^{\varepsilon})}{{\varepsilon}}\right ]\,dx\end{aligned}$$
of the ${\varepsilon}$-solution converges in the sense of varifolds to Brakke's weak mean curvature flow. Moreover, the limit Brakke flow has integer multiplicity a.e. by Tonegawa [@tonegawa2003integrality]. Hence the parabolic Allen–Cahn equation is a model for mean curvature flow through singularities. In particular, note that the equation is a subcritical semilinear equation and hence does not form singularities as $t\rightarrow \infty$. This property makes it an appealing candidate for a weak mean curvature flow. For geometric applications, it is necessary to obtain higher regularity for the convergence. In the elliptic setting, Caffarelli-Cordoba [@caffarelli2006phase] showed that the transition layers of stable phase transitions have uniform $C^{1,\theta}$ regularity (independent of ${\varepsilon}$), and Wang-Wei [@Wang2019a; @Wang2019] proved that stable transition layers converge in the stronger $C^{2,\theta}$ sense to the limit minimal surfaces. Using an improvement of the convergence in dimension $3$, Chodosh-Mantoulidis [@Chodosh2018] proved that the min-max minimal surfaces obtained from the Allen–Cahn construction in a generic 3-manifold have multiplicity $1$ and expected index. This gives an alternative proof of Yau's conjecture on the existence of infinitely many minimal surfaces. These results differ in method from [@Ilmanen1993] and [@tonegawa2003integrality].
They do not use geometric measure theoretic techniques, but instead use a Lyapunov–Schmidt reduction developed in Pacard-Ritoré [@Pacard2003], except that whereas Pacard-Ritoré use the reduction to construct solutions of the Allen–Cahn equation from a given minimal surface, the above papers infer results about the limiting minimal surface from the Allen–Cahn equation. Motivated by the work of [@Wang2019a; @Wang2019] and [@Chodosh2018] in the elliptic setting, we initiate the corresponding regularity theory for the parabolic Allen–Cahn equation. In particular, we prove that for low entropy parabolic Allen–Cahn flows, the transition layers converge in an improved sense to mean curvature flow. The motivation in the elliptic setting was minimal surfaces, in particular a proof of Yau's conjecture; in the parabolic setting, the corresponding problem is Ilmanen's multiplicity $1$ conjecture for mean curvature flow. It is expected that the parabolic Allen–Cahn equation and its improved convergence properties will have applications in understanding mean curvature flow and its singularities. The key idea in this paper is a parabolic analogue of the Lyapunov–Schmidt reduction. In the parabolic case, this was first used in [@Pino2018] and [@Pino2018a]. Using this approximation, we prove the following theorem, which improves the regularity of the level sets. \[ImprovementRegularity\] Let $u_{\varepsilon}$ be a solution of (\[EAC\]) in a space-time open set $B_2(0)\times[-2,2]\subset\mathbb R^n\times\mathbb R$ such that $\nabla_xu_{\varepsilon}\neq0$ and $\{u=0\}\neq\emptyset$ for $t\in[-2,2]$. Furthermore, let us assume that the entropy satisfies $\lambda_{\varepsilon}(u)<2\alpha$ and that the enhanced second fundamental form is uniformly bounded by $\mathcal A(u_{\varepsilon})\leq C$ (see section 2 for the definition), where $C$ is a uniform constant independent of ${\varepsilon}$.
Then the nodal sets $\{u_{\varepsilon}=0\}$ converge in $C^{2,\theta}$ to a smooth mean curvature flow in $B_1(0)\times[-1,1]\subset\mathbb R^n\times\mathbb R$. In particular, the spatial $C^\theta$ Hölder norm of the second fundamental form of the nodal sets and the $C^{1,\frac{\theta}{2}}$ norm of the time derivatives are uniformly bounded on compact subsets. This theorem is the parabolic analogue of Theorem 1 in [@Wang2019], with the stability condition of the elliptic setting replaced by a low entropy condition. We note that the low entropy condition ensures that we only have one transition layer, which substantially simplifies the analysis. In particular, we are not required to model interactions between separate layers and hence do not need Toda systems. We will remove this restriction in future work. The hypotheses of the theorem implicitly imply that the limit mean curvature flow is smooth, because $C^2$ bounds imply $C^{1,\theta}$ convergence of the transition layers, and standard regularity theory for quasilinear parabolic partial differential equations allows us to bootstrap $C^{1,\theta}$ bounds to $C^\infty$ smoothness of the limit flow. \[Graphical\] Suppose $u_{\varepsilon}$ is a solution of (\[EAC\]) in a space-time open set $B_2(0)\times[-2,2]\subset\mathbb R^n\times\mathbb R$ such that the entropy satisfies $\lambda_{\varepsilon}(u)<2\alpha$ and the nodal sets $\Gamma_{{\varepsilon},t}=\{u_{\varepsilon}(x,t)=0\}$ of $u_{\varepsilon}$ in $B_2(0)\times[-2,2]$ can be represented as Lipschitz graphs over the limit mean curvature flow $\Sigma_t$, $$\Gamma_{{\varepsilon},t}=\mathrm{Graph}_{\Sigma_t}f_{{\varepsilon},t},$$ with the $C^{1,\theta}$ norm of $f_{{\varepsilon},t}$ uniformly bounded in $B_2(0)\times[-2,2]$. Then the same conclusion as in Theorem \[ImprovementRegularity\] holds. This is the analogue of Corollary 1.2 in [@Wang2019].
In the elliptic case, stability guarantees flatness of the limit; since we cannot guarantee here that the limit is flat, the nodal sets are graphical over the limit flow instead of over a hyperplane as in [@Wang2019]. We will require this theorem in a subsequent paper, where we will prove a Brakke type regularity theorem for the parabolic Allen–Cahn equation. \[CurvatureEstimates\] For any $\delta_0>0$ there exists a $C_0>0$ so that if $u_{\tilde{\varepsilon}}$ is a solution of (\[EAC\]) with ${\varepsilon}=\tilde{\varepsilon}$ defined on $B_r(0)\times[-r^2,r^2]\subset\mathbb R^2\times\mathbb R$, $u_{\tilde{\varepsilon}}(0,0)=0$ and entropy $\lambda_{\tilde{\varepsilon}}(u_{\tilde{\varepsilon}})\leq2\alpha-\delta_0$, then the enhanced second fundamental form satisfies $$\mathcal A(u_{\tilde{\varepsilon}}(0,0))\leq\frac{C_0}{r}$$ with $C_0$ independent of $\tilde{\varepsilon}$. As a consequence, we also obtain a gap theorem: the only eternal solution to the parabolic Allen–Cahn equation that represents a phase transition in $\mathbb R^2$ for every $t\in(-\infty,\infty)$ and has entropy below $2\alpha$ is the trivial static solution with flat level sets. We also obtain an improvement of the convergence result of [@trumper2008relaxation]: we show that the level sets of parabolic Allen–Cahn converge in $C^{2,\theta}$ to the curve shortening flow in $\mathbb R^2$ when the entropy is below $2\alpha$. This can be viewed as a relaxation of the curvature estimates of Brakke [@brakke2015motion] and White [@white2005local] in the case of curve shortening flow. In higher dimensions, the ingredient we still lack to prove this is the rigidity of eternal solutions to the parabolic Allen–Cahn equation with unit density at infinity; such rigidity theorems are usually an ingredient in the blow-up argument for curvature estimates. 
The entropy bound is sharp in the sense that, for a limit curve shortening flow, the Grim Reaper provides a counterexample: it is an eternal solution whose entropy is $2$, but it can be rescaled to have arbitrarily large curvature. This paper is organised as follows: in section \[Preliminaries\] we standardise our notation and provide background material on the parabolic Allen–Cahn equation. We also prove rigidity of the entropy minimizing solution for eternal solutions of Allen–Cahn in dimension $2$ in section \[RigiditySection\], which will be necessary in subsequent blow-up arguments. In section \[ApproximateSolution\] and section \[DerivationEquation\] we carry out the main estimates and prove the main theorems. Finally, in section \[ProofCurvature\] we prove the curvature estimates for low entropy solutions, Corollary \[CurvatureEstimates\]. $\textbf{Acknowledgements.}$ This research was supported by EPSRC grant EP/S012907/1. Preliminaries and notations {#Preliminaries} =========================== Preliminaries about Allen–Cahn and the explicit $1$-d heteroclinic solution --------------------------------------------------------------------------- We fix some notation that will be used for the rest of the paper. The solutions to the Allen–Cahn equation with parameter ${\varepsilon}$ are not invariant under standard parabolic rescaling. However, a rescaled solution satisfies an equation of the same form but with a different parameter. 
We say $u^{\varepsilon}:\mathbb R^{n}\times\mathbb R\rightarrow\mathbb R$ satisfies the ${\varepsilon}$-equation if $$\label{EAC} \frac{\partial}{\partial t}u^{\varepsilon}=\Delta u^{\varepsilon}-\frac{W'(u^{\varepsilon})}{{\varepsilon}^2}$$ and say $u:\mathbb R^{n}\times\mathbb R\rightarrow\mathbb R$ satisfies the $1$-equation if $$\label{AC} \frac{\partial}{\partial t}u=\Delta u-W'(u).$$ We also fix the double-well potential to be the standard one, $W(u)=\frac{1}{4}(1-u^2)^2$, but the analysis in this paper generalises automatically to any other double-well potential with similar asymptotics and well shape. The two global minima $\pm1$ of the potential represent the two phases, and $u$ describes the continuous transition from one phase to the other. Under parabolic rescaling, it is not hard to see that if $u(x,t)$ satisfies equation (\[AC\]), then $u^{\varepsilon}=u\left (\frac{x}{{\varepsilon}},\frac{t}{{\varepsilon}^2}\right )$ satisfies equation (\[EAC\]). A static solution to the Allen–Cahn equation (\[AC\]) is a function $u:\mathbb R^n\rightarrow\mathbb R$ that satisfies the elliptic equation $$\label{EllipticAC} \Delta u-W'(u)=0$$ and represents an equilibrium state of phase transition in $\mathbb R^n$. Two trivial solutions to the equation that do not represent phase transitions are $$u(x)=\pm1,$$ which are the equilibrium states when only one of the two phases $\pm1$ occupies the whole of $\mathbb R^n$. In dimension 1, one can find explicitly the next simplest solution, which represents a phase transition, by solving the ordinary differential equation $$g''(x)-W'(g)=0$$ where $g:\mathbb R\rightarrow\mathbb R$. 
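As a sanity check on the normalization (the heteroclinic profile depends on the constants in $W$: for $W(u)=\frac14(1-u^2)^2$ the profile involves $\tanh(x/\sqrt2)$, while for $W(u)=\frac12(1-u^2)^2$ it would be $\tanh(x)$), the ODE and its static ${\varepsilon}$-rescaled version can be verified symbolically; the snippet below is an illustration only, not part of the argument.

```python
import sympy as sp

x = sp.symbols('x', real=True)
eps = sp.symbols('epsilon', positive=True)

# W(u) = (1 - u^2)^2 / 4, so W'(u) = u^3 - u
Wp = lambda u: u**3 - u

# heteroclinic profile for this normalization of W
g = sp.tanh(x / sp.sqrt(2))
assert sp.simplify(sp.diff(g, x, 2) - Wp(g)) == 0          # g'' = W'(g)

# the static epsilon-rescaled standing wave solves Delta u = W'(u)/eps^2
g_eps = sp.tanh(x / (sp.sqrt(2) * eps))
assert sp.simplify(sp.diff(g_eps, x, 2) - Wp(g_eps) / eps**2) == 0
```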
The explicit solution when $W(u)=\frac{1}{4}(1-u^2)^2$ is $$g(x)=\tanh\left(\frac{x}{\sqrt2}\right).$$ By scaling and crossing with $\mathbb R^{n-1}$, we also obtain standing wave solutions to (\[EAC\]) with any ${\varepsilon}$ parameter and in any dimension $$\label{1d} g^{\varepsilon}(x,t)=g^{\varepsilon}(x_1,...,x_n,t)=\tanh\left(\frac{x_n}{\sqrt2\,{\varepsilon}}\right).$$ We denote the total energy of the standing wave by $$\alpha=E(g)=\int_{-\infty}^\infty g'^2(x)\,dx.$$ By [@Ilmanen1993], the energy measure $$\begin{aligned} d\mu^{\varepsilon}(u)=\left [\frac{1}{2}{\varepsilon}|\nabla u^{\varepsilon}|^2+\frac{W(u^{\varepsilon})}{{\varepsilon}}\right ]\,dx\end{aligned}$$ converges as ${\varepsilon}\rightarrow0$ to an $(n-1)$-rectifiable varifold $\mu_t$ for a.e. $t$, and $\mu_t$ is a mean curvature flow in the sense of Brakke. Moreover, by [@tonegawa2003integrality], the limit varifold has integer multiplicity in the sense that its density is an integer multiple of $\alpha$ a.e. Similar to Brakke's integral form of mean curvature flow, there is an ${\varepsilon}$ version of the integral form of the parabolic Allen–Cahn equation $$\label{EBrakke} \begin{split} \frac{d}{dt}\int\phi\,d\mu^{\varepsilon}_t=-\int{\varepsilon}\phi\left (\Delta u^{\varepsilon}-\frac{W'(u^{\varepsilon})}{{\varepsilon}^2}\right )^2\,dx-\delta V_t^{\varepsilon}(D\phi)-\int\nu\otimes\nu:D^2\phi\,d\xi^{\varepsilon}_t .\\ \end{split}$$ The measure $$d\xi_{{\varepsilon},t}=\left [\frac{{\varepsilon}}{2}|\nabla u_{\varepsilon}|^2-\frac{W(u_{\varepsilon})}{{\varepsilon}}\right ]dx$$ is called the discrepancy measure; it is shown to converge to $0$ in $L^1$ as ${\varepsilon}\rightarrow0$ in [@Ilmanen1993; @soner1997ginzburg; @tonegawa2003integrality], and $\delta V_t^{\varepsilon}$ is the first variation of the corresponding varifold (see [@Ilmanen1993] for details on how to regard $u(x,t)$ as a general moving varifold whose density is the energy density $d\mu_t^{\varepsilon}$). 
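For this $W$ the constant $\alpha$ can be computed explicitly: equipartition of energy gives $\alpha=\int g'^2=\frac{2\sqrt2}{3}$ for the $\tanh(x/\sqrt2)$ profile, and the total mass of $d\mu^{\varepsilon}(g^{\varepsilon})$ is independent of ${\varepsilon}$. A quick numerical illustration (not part of the argument; the profile normalization follows the quartic well fixed above):

```python
import numpy as np
from scipy.integrate import quad

def W(u):                      # double-well potential W(u) = (1 - u^2)^2 / 4
    return (1 - u**2)**2 / 4

def g(x):                      # 1-d heteroclinic for this W
    return np.tanh(x / np.sqrt(2))

def gp(x):                     # g'(x)
    return (1 - np.tanh(x / np.sqrt(2))**2) / np.sqrt(2)

# total energy alpha = int g'^2 dx (tails beyond |x| = 50 are negligible)
alpha = quad(lambda x: gp(x)**2, -50, 50)[0]
assert abs(alpha - 2 * np.sqrt(2) / 3) < 1e-6

# the energy measure of g^eps has total mass alpha for every eps
for eps in (1.0, 0.1, 0.01):
    mass = quad(lambda x: 0.5 * eps * (gp(x / eps) / eps)**2
                + W(g(x / eps)) / eps, -50 * eps, 50 * eps, limit=200)[0]
    assert abs(mass - alpha) < 1e-6
```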
Based on Huisken’s monotonicity formula in mean curvature flow, Ilmanen in [@Ilmanen1993] found an almost monotonicity formula for the ${\varepsilon}$-parabolic Allen–Cahn equation (\[EAC\]) $$\label{Monotonicity} \begin{split} &\frac{d}{dt}\int_{\mathbb R^n}\Psi_{y,s}d\mu_{\varepsilon}(u_{\varepsilon})\\ =&-\int_{\mathbb R^n}{\varepsilon}\Psi_{y,s}\left (\Delta u_{\varepsilon}-\frac{W'(u_{\varepsilon})}{{\varepsilon}^2}+\frac{\nabla u_{\varepsilon}\cdot\nabla\Psi_{y,s}}{\Psi_{y,s}}\right )^2dx+\int_{\mathbb R^n}\frac{1}{2(s-t)}\Psi_{y,s}\left [\frac{{\varepsilon}}{2}|\nabla u_{\varepsilon}|^2-\frac{W(u_{\varepsilon})}{{\varepsilon}}\right ]dx\\ =&-\int_{\mathbb R^n}{\varepsilon}\Psi_{y,s}\left (\Delta u_{\varepsilon}-\frac{W'(u_{\varepsilon})}{{\varepsilon}^2}+\frac{\nabla u_{\varepsilon}\cdot\nabla\Psi_{y,s}}{\Psi_{y,s}}\right )^2dx+\int_{\mathbb R^n}\frac{1}{2(s-t)}\Psi_{y,s}\,d\xi_{{\varepsilon},t}\\ \end{split}$$ where $\Psi_{y,s}(x,t)=\frac{1}{(4\pi (s-t))^{\frac{n-1}{2}}}e^{-\frac{|x-y|^2}{4(s-t)}}$ is the $(n-1)$-dimensional backward heat kernel centered at $y\in\mathbb R^n$ with scale $s>t$. It is also computed in [@Ilmanen1993] (section 4) that non-positivity of the discrepancy is preserved in time, and thus the almost monotonicity formula becomes genuinely monotone for initial data with non-positive discrepancy. 
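For the one-dimensional standing wave the discrepancy vanishes identically (equipartition of energy, $\frac{{\varepsilon}}{2}|(g^{\varepsilon})'|^2=W(g^{\varepsilon})/{\varepsilon}$), which is the model case for the non-positive discrepancy condition above. A symbolic check (illustration only, using the $\tanh(x/\sqrt2)$ profile for the quartic well):

```python
import sympy as sp

x = sp.symbols('x', real=True)
eps = sp.symbols('epsilon', positive=True)

g_eps = sp.tanh(x / (sp.sqrt(2) * eps))   # standing wave profile
W = (1 - g_eps**2)**2 / 4                 # W(g_eps)

# discrepancy density: eps/2 * |g_eps'|^2 - W(g_eps)/eps
disc = eps / 2 * sp.diff(g_eps, x)**2 - W / eps
assert sp.simplify(disc) == 0             # equipartition: discrepancy vanishes
```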
Entropy ------- Motivated by the Colding-Minicozzi entropy in mean curvature flow [@colding2012generic], we introduce the Allen–Cahn entropy functional $\lambda_{\varepsilon}$ associated to the energy $E_{\varepsilon}$ on the space of functions on $\mathbb R^n$ by $$\begin{split} \lambda_{\varepsilon}(u)&=\sup_{s,y,\rho}\int\Phi_{y,s}(x,0)\,d\mu_{\rho{\varepsilon}}(u_\rho)\\ &=\sup_{y,s,\rho}\int\frac{1}{(4\pi s)^{\frac{n-1}{2}}}e^{-\frac{|x-y|^2}{4s}}\,d\mu_{\rho{\varepsilon}}(u_\rho)\\ \end{split}$$ where $u_\rho(x)=u(\frac{x}{\rho})$ and $d\mu_{\rho{\varepsilon}}(u)=\left [\frac{{\varepsilon}\rho|\nabla u|^2}{2}+\frac{W(u)}{{\varepsilon}\rho}\right ]\,dx.$ By definition, this entropy is invariant under the scaling $u_\rho(x)=u(\frac{x}{\rho})$. We also note that, by an observation of Sun [@sun2018entropy], if the entropy $\lambda$ is below $2\alpha$, then the limit mean curvature flow has unit density. Geometry of Allen–Cahn and level sets ------------------------------------- For a non-degenerate point $x\in\mathbb R^n$ with $|\nabla u|\neq0$, the normal vector of the level set is given by $\nu(x)=\frac{\nabla u}{|\nabla u|}$. The enhanced second fundamental form of $u$ is defined by $$\begin{split} \mathcal A(u)&=\nabla\left(\frac{\nabla u}{|\nabla u|}\right)\\ \end{split}$$ and $$\begin{split} |\mathcal A(u)|&=\left |\nabla\left (\frac{\nabla u}{|\nabla u|}\right)\right|\\ &=\frac{\sqrt{|\nabla^2u|^2-|\nabla|\nabla u||^2}}{|\nabla u|}. \end{split}$$ The enhanced second fundamental form bounds the second fundamental form of the level sets, and it is not hard to see that $|\mathcal A(u)|=0$ implies that $\frac{\nabla u}{|\nabla u|}$ is a parallel vector field and thus $u$ has flat level sets. If the enhanced second fundamental form is bounded as in the hypothesis of Theorem \[ImprovementRegularity\], then the level sets can be written locally as $C^{1,\theta}$ graphs. 
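The last identity for $|\mathcal A(u)|$ can be verified directly: writing $\partial_i\nu_j=(u_{ij}-\nu_j\partial_i|\nabla u|)/|\nabla u|$ and contracting gives $|\mathcal A|^2=(|\nabla^2u|^2-|\nabla|\nabla u||^2)/|\nabla u|^2$. A symbolic spot check on an arbitrary test function (a hypothetical example, for illustration only):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
u = x**2 + 3*y**2 + x*y + x           # arbitrary test function

grad = sp.Matrix([sp.diff(u, v) for v in (x, y)])
norm = sp.sqrt(grad.dot(grad))
nu = grad / norm                       # unit normal of the level sets
A = nu.jacobian([x, y])                # enhanced second fundamental form
lhs = sum(a**2 for a in A)             # |A|^2

hess = sp.hessian(u, (x, y))
hess_sq = sum(h**2 for h in hess)      # |Hess u|^2
grad_norm = sp.Matrix([sp.diff(norm, v) for v in (x, y)])
rhs = (hess_sq - grad_norm.dot(grad_norm)) / norm**2

assert sp.simplify(lhs - rhs) == 0     # the identity holds
```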
Fermi Coordinates ----------------- In this subsection we introduce the Fermi coordinates in a neighbourhood of a family of moving hypersurfaces $\Sigma_t\subset\mathbb R^n$. For $\delta>0$ small enough, we consider the $\delta$-neighbourhood $N_\delta(\Sigma_t)\times(-\delta,\delta)\subset\mathbb R^n\times\mathbb R$ of $\Sigma_t$ on which the nearest point projection is well defined. We can parametrise $N_\delta(\Sigma_t)\times(-\delta,\delta)$ by $(x,t)=(y,w,t)$, where $y$ are local coordinates on $\Sigma_t$ (for a sufficiently small neighbourhood, one can use the same $y$ coordinates for every $t$) and $w=\mathrm{dist}_{\Sigma_t}(x)$. By the conditions in Theorem \[ImprovementRegularity\] and Theorem \[Graphical\], the nodal sets $\Gamma_{t}$ of the solution $u(x,t)$ of equation (\[AC\]) can be written as local Lipschitz graphs $\Gamma_t=\{w=f(y,t)\}$ over $\Sigma_t$; here $\Sigma_t$ is chosen to be the limit mean curvature flow. For the nodal set $\Gamma_{t}$, we denote its upper normal vector field by $N_{t}$, and we use the same coordinates on $\Sigma_t$ to parametrise $\Gamma_{t}$ via the nearest point projection. We denote by $d_{t}$ the signed distance function to $\Gamma_{t}$, which is positive on the upper side, and we set $\Gamma_{z,t}=\{d_t=z\}$, which is well defined for small $z$, so that $(y,z)$ are local Fermi coordinates in a neighbourhood of $\Gamma_t$. Let $\nabla_\Sigma, \nabla_{\Gamma_t}, \nabla_{\Gamma_{z,t}}$ denote the covariant derivatives with respect to the induced metrics on $\Sigma, \Gamma_t, \Gamma_{z,t}$ respectively. The second fundamental form and mean curvature of the nodal sets are given by $$\begin{split} A_{\Gamma_t}&=A_{\mathrm{Graph}_f}\\ &=\nabla_\Sigma \left (\frac{\nabla_\Sigma f}{\sqrt{1+|\nabla_\Sigma f|^2}}\right ) \end{split}$$ and $$\begin{split} H_{\Gamma_t}&=H_{\mathrm{Graph}_f}\\ &=\mathrm{div}_\Sigma\left (\frac{\nabla_\Sigma f}{\sqrt{1+|\nabla_\Sigma f|^2}}\right ). 
\end{split}$$ The normal velocity of the nodal sets is given by $\frac{1}{\sqrt{1+|\nabla_\Sigma f|^2}}\frac{\partial f}{\partial t}$. In the Fermi coordinates, the Laplacian operator is $$\label{FermiLaplacian} \Delta_x=\Delta_y+\partial^2_z+H_{\Gamma_{z,t}}\partial_z$$ where $H_{\Gamma_{z,t}}=\mathrm{div}(\partial_z)$ is the mean curvature scalar of $\Gamma_{z,t}$ with respect to the normal $\partial_z$. Here we adopt the sign convention that the mean curvature vector of the sets $\Gamma_{z,t}$ is $\vec{H}=-H\partial_z=-\mathrm{div}(\partial_z)\partial_z$. Rigidity of entropy minimizing ancient/eternal solutions to (\[AC\]) {#RigiditySection} ==================================================================== The $1$-d standing wave solution (\[1d\]) is the analogue of the static plane solution in mean curvature flow, and the rigidity of such $1$-d standing wave solutions is an ingredient in the proof of curvature estimates via blow up arguments. We first recall the following rigidity theorem due to Wang [@wang2014new] (c.f. [@guaraco2019multiplicity]) for the $1$-d standing wave solution of the elliptic Allen–Cahn equation in any dimension (we state here an equivalent form in terms of entropy). \[WangRigidity\] There exists $\delta>0$ such that if $u$ is a solution of the static equation (\[EllipticAC\]) with $$\lambda_1(u)\leq\alpha+\delta$$ then $u$ has to be the $1$-d standing wave solution (\[1d\]) with flat level sets, up to a rotation and translation. Next we obtain a rigidity result in the parabolic setting for eternal solutions with entropy $\alpha$. \[ParabolicRigidity2D\] Suppose $u:\mathbb R^2\times\mathbb R\rightarrow\mathbb R$ is a solution of (\[AC\]) in $\mathbb R^2$ with $$\sup_{t\in\mathbb R}\lambda_1(u(\cdot,t))\leq\alpha$$ and that $u$ represents a phase transition for every $t\in\mathbb R$. Then $u$ is the static $1$-d standing wave solution (\[1d\]) with flat level sets (up to a rotation and translation). 
From the eternal solution $u$ we construct a family of functions $\tilde u^{\varepsilon}(x,t)=u(\frac{x}{{\varepsilon}},\frac{t}{{\varepsilon}^2})$ satisfying the ${\varepsilon}$-equation (\[EAC\]). We know that the energy measures $\frac{1}{\alpha}d\tilde\mu^{\varepsilon}=\frac{1}{\alpha}\left (\frac{1}{2}{\varepsilon}|\nabla\tilde u^{\varepsilon}|^2+\frac{W(\tilde u^{\varepsilon})}{{\varepsilon}}\right )dx$ of the sequence subconverge to an integral Brakke flow $\{\mu_t\}$ with integer multiplicity by [@Ilmanen1993; @tonegawa2003integrality]. Moreover, the discrepancy measure $|\xi^{\varepsilon}|=\left |\frac{1}{2}{\varepsilon}|\nabla\tilde u^{\varepsilon}|^2-\frac{W(\tilde u^{\varepsilon})}{{\varepsilon}}\right |dx\rightarrow0$ in $L^1$. Furthermore, by choosing a subsequence, the convergence $\frac{1}{\alpha}d\tilde\mu^{\varepsilon}\rightarrow d\mu_t$ holds for every $t$. Since the entropy is lower semicontinuous, the limit Brakke flow $\{\mu_t\}$ has entropy at most $\frac{\alpha}{\alpha}=1$, and furthermore it is non-empty. Hence it is a static planar solution. By section $9$ in [@Ilmanen1993], the transport term converges: $$\lim_{{\varepsilon}\rightarrow0}\int_{\mathbb R^2}-{\varepsilon}D\phi\cdot D\tilde u^{\varepsilon}\left (-\Delta\tilde u^{\varepsilon}+\frac{W'(\tilde u^{\varepsilon})}{{\varepsilon}^2}\right )\,dx=\delta V(D\phi)=0,$$ because the limit Brakke flow is flat and has mean curvature zero. The convergence of the discrepancy to zero for that particular time slice is guaranteed by [@roger2006modified] (note that in [@Ilmanen1993] one only gets vanishing discrepancy for a.e. time, and we need it to vanish for this particular time). Now since the limit Brakke flow is obtained from blowing down the original eternal solution $u$ to the parabolic equation, each element in the sequence is a rescaling of $u$. 
The integral $\int_{\mathbb R^2}-{\varepsilon}D\phi\cdot D\tilde u^{\varepsilon}\left (-\Delta\tilde u^{\varepsilon}+\frac{W'(\tilde u^{\varepsilon})}{{\varepsilon}^2}\right )\,dx$ is scale invariant in dimension $2$ and converges to $0$ in the limit because the limit flow is a static solution. This forces the integral to vanish at the original scale for a particular time slice, so that $u_t=\Delta u-W'(u)=0$; by the rigidity in the elliptic case, Theorem \[WangRigidity\], $u$ is the $1$-d standing wave solution with flat level sets at that particular time, and thus the whole eternal solution is the static $1$-d standing wave solution by uniqueness of the Cauchy problem. The Approximate Solution {#ApproximateSolution} ======================== We construct an approximate solution out of the zero sets of a parabolic Allen–Cahn solution (\[AC\]) by composing the local distance function to $\Gamma$ with the $1$-d standing wave solution, and we show that the approximation is well controlled if the zero sets are sufficiently non-collapsed. For nodal sets $\Gamma^{\varepsilon}$ of solutions $u^{\varepsilon}$ to the equations (\[EAC\]) with different ${\varepsilon}$, we rescale $u_{\varepsilon}(x,t)=u^{\varepsilon}({\varepsilon}x,{\varepsilon}^2t)$ so that $u_{\varepsilon}$ satisfies (\[AC\]). Correspondingly, we denote by $\Gamma_{{\varepsilon},t}$ the nodal sets of $u_{\varepsilon}(\cdot,t)$, by $f_{{\varepsilon}}(\cdot,t)=f_{{\varepsilon},t}$ the graph function of $\Gamma_{{\varepsilon},t}$ as a graph over $\Sigma_t$, and by $d_{{\varepsilon},t}$ the signed distance to $\Gamma_{{\varepsilon},t}$. Moreover, we denote $\Gamma_{{\varepsilon},z,t}=\{d_{{\varepsilon},t}=z\}$ for $z$ small enough that $d_{{\varepsilon},t}$ is well defined. 
Similar to section 9 of [@Wang2019a], we choose $\bar g$ to be a smooth cutoff approximation at infinity of the $1$-d standing wave solution $g$ with well controlled errors $$\bar g(x)=\zeta\left (\frac{x}{3|\log {\varepsilon}|}\right )g(x)+\left (1-\zeta\left (\frac{x}{3|\log {\varepsilon}|}\right )\right )\mathrm{sgn}(x)$$ where $\zeta$ is a smooth cutoff function supported in $(-2,2)$ with $\zeta\equiv1$ in $(-1,1)$ and $|\zeta'|+|\zeta''|\leq16$, and $\mathrm{sgn}(x)=\frac{x}{|x|}$ is the sign function. We have $$\bar g''=W'(\bar g)+\bar\eta$$ with $$\begin{split}\label{CutoffError} &{{\ensuremath{\mathop{\mathrm{spt}}} }}(\bar\eta)\subset\{3|\log{\varepsilon}|\leq|x|\leq6|\log{\varepsilon}|\}\\ &|\bar\eta|+|\bar\eta'|+|\bar\eta''|\leq O({\varepsilon}^3)\\ &\int\bar g'^2=\alpha+O({\varepsilon}^3). \end{split}$$ We define for each $h\in C^2(\Gamma_0)$ $$\label{eqn_best_approx} g^*_{\varepsilon}(y,z,t)=\bar g(d_{{\varepsilon},t}-h(y,t)).$$ Here $h(y,t):\Gamma_t\rightarrow \mathbb R$ is used to obtain an optimal approximation that offsets the effect of the mean curvature of the nodal sets $\Gamma_t$. The existence of $h$ is guaranteed by Proposition 9.1 of [@Wang2019a], which states that there exists a function $h$ with $|h|\ll1$ such that $$\int_{-\infty}^\infty (u-g^*)(g^*)' \,dz=0.$$ We denote $\phi_{\varepsilon}=u_{\varepsilon}-g^*_{\varepsilon}$. We compute in the $(y,z,t)$ coordinates $$\begin{split} &\frac{\partial g_{\varepsilon}^*}{\partial t}-\Delta g_{\varepsilon}^*\\ =&\bar g'(z-h)\cdot \left(-\frac{1}{\sqrt{1+|\nabla_\Sigma f|^2}}\cdot\frac{\partial f_{\varepsilon}}{\partial t}-\frac{\partial h}{\partial t}\right )-\bar g''(z-h)-H_{\Gamma_{{\varepsilon},z,t}}\bar g'(z-h)+\bar g'(z-h)\Delta_{\Gamma_{{\varepsilon},z,t}} h\\ &-\bar g''(z-h)|\nabla h|^2.\\ \end{split}$$ Here $\frac{1}{\sqrt{1+|\nabla_\Sigma f_{\varepsilon}|^2}}\cdot\frac{\partial f_{\varepsilon}}{\partial t}$ is the normal velocity of the nodal sets $\Gamma_{0,t}=\mathrm{Graph}_\Sigma f_{{\varepsilon},t}$. 
It cancels the mean curvature term of the nodal sets up to a small error as ${\varepsilon}\rightarrow0$, by the convergence to the mean curvature flow for the unscaled equation as ${\varepsilon}\rightarrow0$. We compute the equation for $\phi_{\varepsilon}$ as follows $$\label{Difference} \begin{split} &\left(\frac{\partial}{\partial t}-\Delta\right)\phi_{\varepsilon}\\ =&\left(\frac{\partial}{\partial t}-\Delta_{\Gamma_{{\varepsilon},z,t}}-\partial^2_z-H_{\Gamma_{{\varepsilon},z,t}}\partial_z\right)\phi_{\varepsilon}\\ =&-W'(\phi_{\varepsilon}+g_{\varepsilon}^*)+W'(g_{\varepsilon}^*)+\bar\eta-\bar g'(z-h)\cdot\left (-\frac{1}{\sqrt{1+|\nabla_\Sigma f_{\varepsilon}|^2}}\cdot\frac{\partial f_{\varepsilon}}{\partial t}-\frac{\partial h}{\partial t}\right )+H_{\Gamma_{{\varepsilon},z,t}}\bar g'(z-h)\\ &-\bar g'(z-h)\Delta_{\Gamma_{z,t}} h+\bar g''(z-h)|\nabla h|^2\\ =&-[W'(\phi_{\varepsilon}+g_{\varepsilon}^*)-W'(g_{\varepsilon}^*)]+\left [\bar g'\left ( \frac{\partial h}{\partial t}-\Delta_{\Gamma_{{\varepsilon},z,t}}h+H_{\Gamma_{{\varepsilon},z,t}}\right)\right]+[\bar g''|\nabla h|^2]\\ &+\left [\bar g'\frac{1}{\sqrt{1+|\nabla_\Sigma f|^2}}\cdot\frac{\partial f}{\partial t}\right ]+\bar\eta\\ =&I+II+III+IV+\bar\eta. \end{split}$$ Term I is $-W'(\phi_{\varepsilon}+g_{\varepsilon}^*)+W'(g_{\varepsilon}^*)=-W''(g_{\varepsilon}^*)\phi_{\varepsilon}+\mathcal R(\phi_{\varepsilon})$, whose Hölder norm is bounded by the Hölder norm of $\phi_{\varepsilon}$. Term III is bounded by the $C^{2,\theta}$ norm of $h$, which in turn is bounded by the $C^{2,\theta}$ norm of $\phi_{\varepsilon}$ by an interpolation inequality (cf. [@Wang2019a, p. 58 (4)]). We will estimate the Hölder norm of Term II + Term IV and show that they are sufficiently small. 
$$\begin{split} &\bar g'\left [\frac{\partial h}{\partial t}-\Delta_{\Gamma_{{\varepsilon},z,t}}h+H_{\Gamma_{{\varepsilon},z,t}}+\frac{1}{\sqrt{1+|\nabla_\Sigma f|^2}}\cdot\frac{\partial f}{\partial t}\right ]\\ =&\bar g'\left [\frac{\partial h}{\partial t}-\Delta_{\Gamma_{{\varepsilon},z,t}}h+\left (\frac{1}{\sqrt{1+|\nabla_\Sigma f|^2}}\cdot\frac{\partial f}{\partial t}+H_{\Gamma_{{\varepsilon},z,t}}\right )\right ].\\ \end{split}$$ The term $\left (\frac{\partial}{\partial t}-\Delta_{\Gamma_{{\varepsilon},z,t}}\right )h$ will be estimated by deriving the corresponding equation and proving parabolic Schauder estimates as in Appendix B of Wang-Wei [@Wang2019a]. The term $\left (\frac{1}{\sqrt{1+|\nabla_\Sigma f|^2}}\cdot\frac{\partial f}{\partial t}+H_{\Gamma_{{\varepsilon},z,t}}\right)$ will be estimated using the fact that the nodal sets converge to the mean curvature flow in the $C^{1,\theta}$ sense when there is a uniform enhanced second fundamental form bound independent of ${\varepsilon}$, as assumed in Theorem \[ImprovementRegularity\] (as ${\varepsilon}\rightarrow0$). We will also need later the error estimates comparing the geometries of $\Gamma_{{\varepsilon},z,t}$ and $\Gamma_{{\varepsilon},0,t}=\Gamma_{{\varepsilon},t}$. First, by the assumption of a uniform enhanced second fundamental form bound $\mathcal A^{\varepsilon}=\nabla\left(\frac{\nabla u^{\varepsilon}}{|\nabla u^{\varepsilon}|}\right)\leq C_0$, we have by rescaling $$\frac{\mathcal A_{\varepsilon}}{{\varepsilon}}\leq C_0.$$ This shows that the level sets of the rescaled solution $u_{\varepsilon}$ are $C_0\cdot{\varepsilon}$ close in $C^2$ norm to a hyperplane in the spatial directions, and thus $u_{\varepsilon}$ is $C_0\cdot{\varepsilon}$ close in $C^2$ to the flat solution $g(x_n)$ in the spatial directions. 
By the equation $\frac{\partial}{\partial t}u_{\varepsilon}=\Delta u_{\varepsilon}-W'(u_{\varepsilon})$ and the uniform boundedness of the derivatives of the potential function $W$, we have that $u_{\varepsilon}$ is $\bar C_0\cdot{\varepsilon}$ close in $C^1$ norm in the time direction to the flat solution $g(x_n)$. Thus for the equation $\frac{\partial}{\partial t}u_{\varepsilon}-\Delta u_{\varepsilon}=-W'(u_{\varepsilon})$, the non-homogeneous term satisfies $\|W'(u_{\varepsilon})-W'(g(x_n))\|_\theta\leq\bar C_0{\varepsilon}$, and the standard regularity theory for semi-linear parabolic equations gives $$\label{HolderSmallnessRescaled} \|u_{\varepsilon}-g(x_n)\|_{C^{2,\theta}}\leq\bar C_0{\varepsilon}$$ where $C^{2,\theta}$ is the parabolic Hölder norm (see (\[HolderNorm\]) for the definition). Also recall that in our notation a superscript ${\varepsilon}$ denotes quantities at the original scale, where the function satisfies equation (\[EAC\]), and a subscript denotes quantities for the functions rescaled to satisfy equation (\[AC\]). Since the second fundamental form satisfies $A_{\Gamma_{{\varepsilon},z,t}(y)}=(I-zA_{\Gamma_{{\varepsilon},0,t}(y)})^{-1}A_{\Gamma_{{\varepsilon},0,t}(y)}$, we have $$\label{CurvatureError} |A_{\Gamma_{{\varepsilon},z,t}(y)}-A_{\Gamma_{{\varepsilon},0,t}(y)}|\leq|z||A_{\Gamma_{{\varepsilon},0,t}(y)}|^2=O({\varepsilon}^2).$$ Similarly one computes the error of the operators $\Delta_{\Gamma_{{\varepsilon},z,t}}$ and obtains $$\label{LaplacianError} |\Delta_{\Gamma_{{\varepsilon},z,t}}\phi(y)-\Delta_{\Gamma_{{\varepsilon},0,t}}\phi(y)|\leq{\varepsilon}|z| (|\nabla\phi|+|\nabla^2\phi|)=O({\varepsilon}^2)\|\phi\|_{C^{2,\theta}}.$$ Derivation of the equation and estimate for term II+IV {#DerivationEquation} ====================================================== In this section, we derive the parabolic analogue of the Toda system obtained in section 10 of Wang-Wei [@Wang2019a]. 
Here we assume single layer convergence of the nodal sets, which comes from our entropy bound condition and substantially simplifies the equation. From now on we will drop the subscript ${\varepsilon}$ when there is no risk of confusion: $u(x,t)=u_{\varepsilon}(x,t)=u^{\varepsilon}({\varepsilon}x,{\varepsilon}^2t)$ are solutions of (\[AC\]) obtained by rescaling a solution of (\[EAC\]), and the ${\varepsilon}$ in $\Gamma_{{\varepsilon},z,t}, f_{{\varepsilon},t}=f_{{\varepsilon}}(\cdot,t), \Delta_{{\varepsilon},z,t}$ etc. will be dropped when it is clear from the context. Multiplying (\[Difference\]) by $\bar g'$ and integrating in the spatial direction normal to the nodal sets, we get $$\label{IntegralEquation} \begin{split} &\int_{-\infty}^\infty \bar g'(z)\left (\frac{\partial}{\partial t}-\Delta\right )\phi\,dz\\ =&\int_{-\infty}^\infty \bar g'\left (\frac{\partial}{\partial t}-\Delta_{\Gamma_{z,t}}-\partial^2_z-H_{\Gamma_{z,t}}\partial_z\right )\phi\,dz\\ =&-\int \bar g'[W'(\phi+g^*)-W'(g^*)]+\int \bar g'^2\left [\frac{\partial h}{\partial t}-\Delta_{\Gamma_{z,t}}h+H_{\Gamma_{z,t}}+\frac{1}{\sqrt{1+|\nabla_\Sigma f|^2}}\cdot\frac{\partial f}{\partial t}\right ]\\ &+\int \bar g'\left [\bar g''|\nabla_{z,t} h|^2\right ]+\int \bar g'\bar\eta. 
\end{split}$$ To obtain improved estimates, we impose the following orthogonality condition to offset the error of the approximate solution in the vertical direction $$\label{Orthogonality} \begin{split} &\int_{-\infty}^\infty [u(y,z,t)-g^*(y,z,t)]\bar g'(z-h(y,t))\,dz=\int\phi \bar g' \,dz=0.\\ \end{split}$$ Differentiating (\[Orthogonality\]) once in the tangential direction (the coordinate $y$ of the Fermi coordinates), we get $$\int \phi_{y_i}\bar g'dz-h_{y_i}\int\phi \bar g''dz=0.$$ Differentiating again, we have $$\begin{split} &\int \frac{\partial^2\phi}{\partial y_i\partial y_j}\bar g'-\frac{\partial \phi}{\partial y_i}\bar g''\frac{\partial h}{\partial y_j}-\frac{\partial \phi}{\partial y_j}\bar g''\frac{\partial h}{\partial y_i}-\phi \bar g''\frac{\partial^2 h}{\partial y_i\partial y_j}+\phi \bar g'''\frac{\partial h}{\partial y_i}\frac{\partial h}{\partial y_j}=0.\\ \end{split}$$ Thus $$\label{LeftLaplacian} \begin{split} &\int\Delta_{\Gamma_{0,t}}\phi \bar g'\\ =&\Delta_{\Gamma_{0,t}}h\int\phi \bar g''-2\int \left \langle\frac{\partial\phi}{\partial y_i}, \frac{\partial h}{\partial y_j}\right \rangle_{\bar g_{\Gamma_{y,0}}}\bar g''-|\nabla_{\Gamma_{0,t}} h|^2\int\phi \bar g'''. \end{split}$$ Moreover, differentiating the orthogonality condition with respect to time $t$ and integrating by parts, we get $$\label{LeftTime} \begin{split} \int\phi_t\bar g'=&-\int\phi \bar g''\left (-\frac{1}{\sqrt{1+|\nabla_\Sigma f|^2}}\cdot\frac{\partial f}{\partial t}-\frac{\partial h}{\partial t}\right )\\ =&\left (\frac{1}{\sqrt{1+|\nabla_\Sigma f|^2}}\cdot\frac{\partial f}{\partial t}+\frac{\partial h}{\partial t}\right )\int\phi \bar g''.\\ \end{split}$$ Finally, we also have $$\label{PhiBoundsh} \|h\|_{C^{k,\theta}}\leq O(\|\phi\|_{C^{k,\theta}}).$$ The term II+IV comes out of the integral by the error estimates (\[CurvatureError\]), (\[LaplacianError\]) and (\[PhiBoundsh\]). 
$$\begin{split} &\int \bar g'^2\left [\frac{\partial h}{\partial t}-\Delta_{\Gamma_{z,t}}h+H_{\Gamma_{z,t}}+\frac{1}{\sqrt{1+|\nabla_\Sigma f|^2}}\cdot\frac{\partial f}{\partial t}\right ]\\ =&\alpha\left [\frac{\partial h}{\partial t}-\Delta_{\Gamma_{0,t}}h+H_{\Gamma_{0,t}}+\frac{1}{\sqrt{1+|\nabla_\Sigma f|^2}}\cdot\frac{\partial f}{\partial t}\right ]+O({\varepsilon}^2)+O({\varepsilon}||h||_{C^{2,\theta}})\\ =&\alpha\left [\frac{\partial h}{\partial t}-\Delta_{\Gamma_{0,t}}h+H_{\Gamma_{0,t}}+\frac{1}{\sqrt{1+|\nabla_\Sigma f|^2}}\cdot\frac{\partial f}{\partial t}\right ]+O({\varepsilon}^2)+O(||\phi||^2_{C^{2,\theta}}) \end{split}$$ as estimated in Appendix B of Wang-Wei [@Wang2019a], where $\alpha$ is the total energy of the $1$-d standing wave. The additional terms not appearing in Wang-Wei are $$\begin{split} &\int \bar g'^2\left (\frac{\partial h}{\partial t}+\frac{1}{\sqrt{1+|\nabla_\Sigma f|^2}}\cdot\frac{\partial f}{\partial t}\right )\\ =&\left (\frac{\partial h}{\partial t}+\frac{1}{\sqrt{1+|\nabla_\Sigma f|^2}}\cdot\frac{\partial f}{\partial t}\right )\int \bar g'^2\\ =&\alpha\cdot\frac{\partial h}{\partial t}+\alpha \frac{1}{\sqrt{1+|\nabla_\Sigma f|^2}}\cdot\frac{\partial f}{\partial t}+O({\varepsilon}^3)\\ \end{split}$$ by the error control (\[CutoffError\]) in the cutoff $\bar g$ of $g$. 
Sup norm of II+IV ----------------- By integration by parts and (\[LeftLaplacian\]), (\[LeftTime\]), the equation (\[IntegralEquation\]) can be written as $$\begin{split} &\int_{-\infty}^\infty \bar g'(z)\left (\frac{\partial}{\partial t}-\Delta\right)\phi\,dz\\ =&-\int \bar g'[W'(\phi+g^*)-W'(g^*)]+\int \bar g'\bar g''|\nabla_{z,t} h|^2\\&+\int \bar g'^2\left [\frac{\partial h}{\partial t}-\Delta_{\Gamma_{z,t}}h+H_{\Gamma_{z,t}}+\frac{1}{\sqrt{1+|\nabla_\Sigma f|^2}}\cdot\frac{\partial f}{\partial t}\right ]\,dz.\\ \end{split}$$ We have $$\begin{split} &LHS\\ =&\left (\frac{1}{\sqrt{1+|\nabla_\Sigma f|^2}}\cdot\frac{\partial f}{\partial t}+\frac{\partial h}{\partial t}\right )\int\phi \bar g''-\Delta_{\Gamma_{0,t}}h\int\phi \bar g''\,dz+O(||\phi||^2_{C^{2,\theta}})\\&+O({\varepsilon}^2)-\int \phi_{zz}\bar g'\,dz-\int H_{\Gamma_{y,z,t}}\phi' \bar g'\,dz\\ =&\left (\frac{1}{\sqrt{1+|\nabla_\Sigma f|^2}}\cdot\frac{\partial f}{\partial t}+\frac{\partial h}{\partial t}\right )\int\phi \bar g''-\Delta_{\Gamma_{0,t}}h\int\phi \bar g''\,dz\\&+O({\varepsilon}^2)-\int W''(\bar g)\bar g'\phi\,dz+\int H_{\Gamma_{y,z,t}}\phi \bar g''\,dz+O(||\phi||^2_{C^{2,\theta}})\\ =&\left (\frac{1}{\sqrt{1+|\nabla_\Sigma f|^2}}\cdot\frac{\partial f}{\partial t}+\frac{\partial h}{\partial t}\right )\int\phi \bar g''-\Delta_{\Gamma_{0,t}}h\int\phi \bar g''\,dz\\&+O({\varepsilon}^2)-\int W''(\bar g)\bar g'\phi\,dz+H_{\Gamma_{y,0,t}}\int \phi \bar g''\,dz+O(||\phi||^2_{C^{2,\theta}})\\ =&\left (H_{\Gamma_{y,0,t}}+\frac{1}{\sqrt{1+|\nabla_\Sigma f|^2}}\cdot\frac{\partial f}{\partial t}+\frac{\partial h}{\partial t}-\Delta_{\Gamma_{0,t}}h\right )\int\phi \bar g''\\&+O({\varepsilon}^2)-\int W''(\bar g)\bar g'\phi\,dz+O(||\phi||^2_{C^{2,\theta}}) \end{split}$$ by the expression of the Laplacian in Fermi coordinates (\[FermiLaplacian\]). 
The change of sign of the mean curvature term is due to integration by parts, and we are able to take the mean curvature term out of the integral by the error estimates (\[CurvatureError\]). We compute $$\begin{split} &RHS\\ =&-\int W''(g^*)\bar g'\phi\,dz+O({\varepsilon}^2)+\int \bar g'\bar g''|\nabla_{z,t} h|^2\\&+\int \bar g'^2\left [\frac{\partial h}{\partial t}-\Delta_{\Gamma_{z,t}}h+H_{\Gamma_{z,t}}+\frac{1}{\sqrt{1+|\nabla_\Sigma f|^2}}\cdot\frac{\partial f}{\partial t}\right ]\,dz\\ =&-\int W''(g^*)\bar g'\phi\,dz+O({\varepsilon}^2)+\int \bar g'\bar g''|\nabla_{z,t} h|^2\\&+\alpha\left [\frac{\partial h}{\partial t}-\Delta_{\Gamma_{z,t}}h+H_{\Gamma_{z,t}}+\frac{1}{\sqrt{1+|\nabla_\Sigma f|^2}}\cdot\frac{\partial f}{\partial t}\right ]+O(||\phi||^2_{C^{2,\theta}}). \end{split}$$ Combining the above, we obtain $$\label{KeyTerm} \begin{split} &\left [\frac{\partial h}{\partial t}-\Delta_{\Gamma_{0,t}}h+H_{\Gamma_{z,t}}+\frac{1}{\sqrt{1+|\nabla_\Sigma f|^2}}\cdot\frac{\partial f}{\partial t}\right ]\left [\int\phi \bar g''\,dz-\alpha\right ]+O({\varepsilon}^2)\\ =&O({\varepsilon}^2)+\int \bar g'\bar g''|\nabla_{z,t} h|^2\,dz+\|h\|^2_{C^{2,\theta}}\\ =&O\left ({\varepsilon}^2+\|\phi\|^2_{C^{2,\theta}}\right ). \end{split}$$ In the last equality above we used that $\|h\|_{C^{2,\theta}}$ is controlled by $\|\phi\|_{C^{2,\theta}}$ by (\[PhiBoundsh\]), together with the Cauchy inequality on the middle term. The sup norm estimate is obtained by integration by parts, together with the fact that the integrals of $\bar g$ and its derivatives are uniformly bounded independently of ${\varepsilon}$. 
$$\begin{split} &|II+IV|\\ =&\left |\left [\frac{\partial h}{\partial t}-\Delta_{\Gamma_{z,t}}h+H_{\Gamma_{z,t}}+\frac{1}{\sqrt{1+|\nabla_\Sigma f|^2}}\cdot\frac{\partial f}{\partial t}\right ]\right |\\ \leq& O({\varepsilon}^2)+\|\phi\|^2_{C^{2,\theta}}.\\ \end{split}$$ Hölder norm of II+IV -------------------- The parabolic distance between two points $X_1=(x_1,t_1),X_2=(x_2,t_2)\in\mathbb R^n\times\mathbb R$ is defined by $\mathrm{dist}_p(X_1,X_2)=\max(|x_1-x_2|,\sqrt{|t_1-t_2|})$. For a function $u:\mathbb R^n\times\mathbb R \rightarrow\mathbb R$ and an open set $W\subset\mathbb R^n\times\mathbb R$, we will use the parabolic Hölder norm defined by $$\label{HolderNorm} \begin{split} &[u]_{\theta;W}=\sup_{X_1\neq X_2,\ X_1,X_2\in W}\frac{|u(X_1)-u(X_2)|}{\mathrm{dist}_p(X_1,X_2)^\theta},\\ &\|u\|_{C^{0,\theta}(W)}=\sup_{X\in W} |u(X)|+[u]_{\theta;W},\\ &\|u(x,t)\|_{C^{k,\theta}(W)}=\sum_{i+2j\leq k}\|\partial_x^i\partial_t^ju\|_{0,\theta}.\\ \end{split}$$ In particular $$\begin{split} &\|u(x,t)\|_{C^{2,\theta}(W)}=\sum_{i=0}^2\sup|\partial_x^iu|+\sup|\partial_t u|+\|\partial_x^2u\|_{0,\theta}+\|\partial_t u\|_{0,\theta}.\\ \end{split}$$ Again by rewriting equation (\[IntegralEquation\]) using the orthogonality conditions (\[LeftLaplacian\]) and (\[LeftTime\]), we have $$\begin{split} &\int_{-\infty}^\infty \bar g'\left (\frac{\partial}{\partial t}-\Delta_{\Gamma_{z,t}}-\partial^2_z-H_{\Gamma_{z,t}}\partial_z\right )\phi\,dz\\ =&\int[\Delta_{\Gamma_{y,0,t}}-\Delta_{\Gamma_{y,z,t}}]\phi \bar g'\\ &+\left (\frac{1}{\sqrt{1+|\nabla_\Sigma f|^2}}\cdot\frac{\partial f}{\partial t}+\frac{\partial h}{\partial t}\right )\int\phi \bar g''\,dz-\Delta_{\Gamma_{0,t}}h\int\phi \bar g''\,dz\\ &+2\int \left \langle\frac{\partial\phi}{\partial y_i}, \frac{\partial h}{\partial y_j}\right\rangle_{g_{\Gamma_{y,0}}}\bar g''\,dz+|\nabla_{\Gamma_{0,t}}h|^2\int\phi \bar g'''\,dz-\int\bar g'(\partial_{zz}\phi+H_{\Gamma_{z,t}}\partial_z\phi)\,dz\\ =&-\int \bar g'[W'(\phi+g^*)-W'(g^*)]\,dz+\int \bar g'^2\left
[\frac{\partial h}{\partial t}-\Delta_{\Gamma_{z,t}}h+H_{\Gamma_{z,t}}+\frac{1}{\sqrt{1+|\nabla_\Sigma f|^2}}\cdot\frac{\partial f}{\partial t}\right ]\,dz\\ &+\int \bar g'[\bar g''|\nabla_{z,t} h|^2]\,dz.\\ \end{split}$$ Further simplification and some integration by parts gives $$\label{EquationForHolder} \begin{split} &\int[\Delta_{\Gamma_{y,0,t}}-\Delta_{\Gamma_{y,z,t}}]\phi \bar g'\\ &+\left (\frac{1}{\sqrt{1+|\nabla_\Sigma f|^2}}\cdot\frac{\partial f}{\partial t}+\frac{\partial h}{\partial t}\right )\int\phi \bar g''\,dz-\Delta_{\Gamma_{0,t}}h\int\phi \bar g''\,dz\\ &+2\int \left \langle \frac{\partial\phi}{\partial y_i}, \frac{\partial h}{\partial y_j}\right\rangle_{g_{\Gamma_{y,0}}}\bar g''\,dz+|\nabla_{\Gamma_{0,t}}h|^2\int\phi \bar g'''\,dz-\int \bar g'H_{\Gamma_{z,t}}\partial_z\phi\,dz\\ =&-\int \bar g'[W''(g^*)-W''(\bar g)]\phi\,dz+\left [\frac{\partial h}{\partial t}-\Delta_{\Gamma_{0,t}}h+H_{\Gamma_{0,t}}+\frac{1}{\sqrt{1+|\nabla_\Sigma f|^2}}\cdot\frac{\partial f}{\partial t}\right ]\left (\int \bar g'^2\,dz\right )\\ &-\int[H_{\Gamma_{z,t}}-H_{\Gamma_{0,t}}] \bar g'^2\,dz+\int[\Delta_{\Gamma_{z,t}}h-\Delta_{\Gamma_{0,t}}h] \bar g'^2\,dz+\int \bar g'\bar g''|\nabla_{z,t} h|^2\,dz.\\ \end{split}$$ ### Hölder estimate in space We will estimate the spatial Hölder norms in (\[EquationForHolder\]) term by term.
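As a concrete illustration of the parabolic Hölder quantities defined above, the following minimal Python sketch (our own illustration with made-up sample points; not part of the proof) approximates the seminorm $[u]_{\theta;W}$ on a finite set of space-time points:

```python
import itertools
import math

def parabolic_dist(X1, X2):
    # dist_p((x1,t1),(x2,t2)) = max(|x1-x2|, sqrt(|t1-t2|))
    (x1, t1), (x2, t2) = X1, X2
    return max(abs(x1 - x2), math.sqrt(abs(t1 - t2)))

def holder_seminorm(u, points, theta):
    # sup over pairs of |u(X1)-u(X2)| / dist_p(X1,X2)^theta,
    # restricted to the finite sample `points`
    return max(
        abs(u(*X1) - u(*X2)) / parabolic_dist(X1, X2) ** theta
        for X1, X2 in itertools.combinations(points, 2)
    )

# u(x,t) = x is 1-Lipschitz with respect to the parabolic distance,
# so its theta = 1 seminorm over any sample is at most 1.
pts = [(0.1 * i, 0.1 * j) for i in range(4) for j in range(4)]
assert holder_seminorm(lambda x, t: x, pts, theta=1.0) <= 1.0 + 1e-12
```

On a genuine domain the supremum runs over all pairs of points, and the $C^{k,\theta}$ norms add the corresponding derivative seminorms as in (\[HolderNorm\]).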
By the error estimates (\[LaplacianError\]) $$\begin{split} &\sup_{t\in I}\left \|\int [\Delta_{\Gamma_{y,0,t}}-\Delta_{\Gamma_{y,z,t}}]\phi(\cdot, t)\bar g'\,dz\right \|_{C^\theta(B_r(y))}\\ =&O({\varepsilon}|z|\sup_{t\in I}\|\phi(\cdot,t)\|_{C^{2,\theta}(B_r(y))})\\ =&O({\varepsilon}^2)+O(\sup_{t\in I}\|\phi(\cdot,t)\|^2_{C^{2,\theta}(B_r(y))})\\ \leq&O({\varepsilon}^2)+O(\|\phi\|^2_{C^{2,\theta}(B_r(y)\times I)}).\\ \end{split}$$ Since the $C^{k,\theta}$ norms of $\phi$ control the $C^{k,\theta}$ norms for $h$ by , we have $$\begin{split} &\sup_{t\in I}\left \|\Delta_{\Gamma_{0,t}}h(\cdot,t)\left (\int\phi(\cdot,t) \bar g''\,dz\right )\right \|_{C^\theta(B_r(y))}\\ \leq&O(\sup_{t\in I}\|h(\cdot,t)\|_{C^{2,\theta}(B_r(t))})\cdot O(\sup_{t\in I}\|\phi(\cdot,t)\|_{C^{0,\theta}(B_r(t))})\\ \leq&O(\sup_{t\in I}\|\phi(\cdot,t)\|^2_{C^{2,\theta}(B_r(t))})\\ \leq&O(\|\phi\|^2_{C^{2,\theta}(B_r(y)\times I)}).\\ \end{split}$$ By (\[HolderSmallnessRescaled\]) $$\begin{split} &\sup_{t\in I}\left \|\left (\frac{1}{\sqrt{1+|\nabla_\Sigma f|^2}}\cdot\frac{\partial f}{\partial t}+\frac{\partial h}{\partial t}\right )(\cdot,t)\left (\int\phi(\cdot,t) \bar g''\,dz\right )\right \|_{C^\theta(B_r(y))}\\ \leq&O\left (\sup_{t\in I}\sup_{t\in I}\left \|(\frac{1}{\sqrt{1+|\nabla_\Sigma f|^2}}\cdot\frac{\partial f}{\partial t}+\frac{\partial h}{\partial t})(\cdot,t)\right \|^2_{C^{\theta}(B_r(y))}\right )+O\left (\sup_{t\in I}\sup_{t\in I}\left \|\int\phi (\cdot, t)\bar g''\,dz\right \|^2_{C^{\theta}(B_r(y))}\right )\\ \leq&O\left (\sup_{t\in I}\left \|\frac{1}{\sqrt{1+|\nabla_\Sigma f|^2}}\cdot\frac{\partial f}{\partial t}(\cdot, t)\right \|^2_{C^{\theta}(B_r(y))}\right )+O\left (\sup_{t\in I}\left \|\frac{\partial h}{\partial t}(\cdot, t)\right \|^2_{C^{\theta}(B_r(y))}\right )+O\left (\sup_{t\in I}\left \|\int\phi (\cdot, t)\bar g''\,dz\right \|^2_{C^{\theta}(B_r(y))}\right )\\ \leq&O\left (\sup_{t\in I}\left \|\frac{u_t}{|\nabla u|}(\cdot,t)\right \|^2_{C^{\theta}(B_r(y))}\right 
)+O\left (\sup_{t\in I}\left \|\frac{\partial h}{\partial t}(\cdot,t)\right \|^2_{C^{\theta}(B_r(y))}\right )+O\left (\sup_{t\in I}\left \|\int\phi(\cdot,t) \bar g''\,dz\right \|^2_{C^{\theta}(B_r(y))}\right )\\ \leq&O({\varepsilon}^2)+O(\|h\|^2_{C^{2,\theta}(B_r(y)\times I)})+O(\sup_{t\in I}\|\phi(\cdot, t)\|^2_{C^{\theta}(B_r(y))})\\ \leq&O({\varepsilon}^2)+O(\|\phi\|^2_{C^{2,\theta}(B_r(y)\times I)}).\\ \end{split}$$ Similarly, using , we get $$\begin{split} &\sup_t\left \|\int \left \langle\frac{\partial\phi}{\partial y_i}, \frac{\partial h}{\partial y_j}\right\rangle_{\bar g_{\Gamma_{y,0}}}g''\,dz\right \|_{C^\theta(\cdot,t)(B_r(y))}\\ \leq&O(\sup_t \|h\|_{C^{1,\theta}(\cdot, t)(B_r(y))}\|\phi\|_{C^{1,\theta}(\cdot, t)(B_r(y))})\\ \leq&O(\sup_t\|\phi\|^2_{C^{1,\theta}(\cdot, t)(B_r(y))})\\ \leq&O(\|\phi\|^2_{C^{2,\theta}(B_r(y)\times I)}).\\ \end{split}$$ $$\begin{split} &\sup_t\left \||\nabla_{\Gamma_{0,t}}h|\int\phi \bar g'''\,dz\right \|_{C^\theta(\cdot,t)(B_r(y))}\\ \leq&O(\sup_t\|h\|^2_{C^{1,\theta}(\cdot,t)(B_r(y))}\cdot\sup_t\|\phi\|^2_{C^{\theta}(\cdot,t)(B_r(y))})\\ \leq&O(\sup_t\|h\|^2_{C^{1,\theta}(\cdot,t)(B_r(y))})\\ \leq&O(\|\phi\|^2_{C^{2,\theta}(B_r(y)\times I)}).\\ \end{split}$$ Using (\[CurvatureError\]), we have $$\begin{split} &\sup_t\left \|\int H_{\Gamma_{z,t}}\partial_z\phi\,dz\right \|_{C^\theta(\cdot,t)(B_r(y))}\\ \leq&O({\varepsilon}\sup_t\|\phi\|_{C^{2,\theta}(\cdot,t)(B_r(y))})\\ \leq&O({\varepsilon}^2)+O(\|\phi\|^2_{C^{2,\theta}(B_r(y)\times I)}).\\ \end{split}$$ By the uniform bounds on derivatives of the potential function for values between $(-1,1)$ $$\begin{split} &\sup_t\left \|\int g'[W''(g^*)-W''(\bar g)]\phi\,dz\right \|_{C^\theta(\cdot,t)(B_r(y))}\\ \leq&C\|\sup_t\phi(\cdot,t)(B_r(y))\|_\theta\cdot{\varepsilon}\\ \leq&O(\sup_t\|\phi(\cdot,t)\|^2_{C^{0,\theta}(B_r(y))})+O({\varepsilon}^2)\\ \leq&O({\varepsilon}^2)+O(\sup_t\|\phi\|^2_{C^{2,\theta}(\cdot,t)(B_r(y)\times I)}).\\ \end{split}$$ $$\begin{split} &\sup_t\left 
\|\int \bar g'\bar g''|\nabla_{z,t} h|^2\,dz\right \|_{C^\theta(\cdot,t)}\\ \leq&O(\sup_t\|\phi(\cdot,t)\|^2_{C^{1,\theta}(B_r(y))})\\ \leq&O(\|\phi\|^2_{C^{2,\theta}(B_r(y)\times I)}).\\ \end{split}$$ Again by (\[LaplacianError\]) and (\[CurvatureError\]) $$\begin{split} &\sup_t\left \|\int[\Delta_{\Gamma_{z,t}}h-\Delta_{\Gamma_{0,t}}h] \bar g'^2\,dz\right \|_{C^\theta(\cdot,t)}\\ \leq&O({\varepsilon}\sup_t\|\phi(\cdot,t)\|_{C^{2,\theta}})\\ \leq&O({\varepsilon}^2)+O(\|\phi\|^2_{C^{2,\theta}(B_r(y)\times I)}).\\ \end{split}$$ $$\begin{split} &\sup_t\left \|\int[H_{\Gamma_{z,t}}-H_{\Gamma_{0,t}}] \bar g'^2\,dz\right \|_{C^\theta(\cdot,t)}\\ \leq&O({\varepsilon}^2).\\ \end{split}$$ Combining all of the above estimates with the identity $$\begin{split} &\left \|\left [\frac{\partial h}{\partial t}-\Delta_{\Gamma_{0,t}}h+H_{\Gamma_{0,t}}+\frac{1}{\sqrt{1+|\nabla_\Sigma f|^2}}\cdot\frac{\partial f}{\partial t}\right ]\left (\int \bar g'^2\,dz\right )(\cdot,t)\right \|_\theta\\ =&\alpha\left \|\frac{\partial h}{\partial t}-\Delta_{\Gamma_{0,t}}h+H_{\Gamma_{0,t}}+\frac{1}{\sqrt{1+|\nabla_\Sigma f|^2}}\cdot\frac{\partial f}{\partial t}(\cdot,t)\right\|_{C^\theta(B_r(y))}\\ \end{split}$$ we obtain from (\[EquationForHolder\]) $$\label{SpaceHolder} \begin{split} &\sup_{t\in I}\left \|\left [\frac{\partial h}{\partial t}-\Delta_{\Gamma_{0,t}}h+H_{\Gamma_{0,t}}+\frac{1}{\sqrt{1+|\nabla_\Sigma f|^2}}\cdot\frac{\partial f}{\partial t}\right ](\cdot,t)\right\|_{C^\theta(B_r(y))}\\ \leq& O({\varepsilon}^2)+O(\|\phi\|^2_{C^{2,\theta}(B_r(y)\times I)}).\\ \end{split}$$ ### Hölder estimate in time Again we estimate the Hölder norm in time term by term.
$$\begin{split} &\sup_{x\in B_r(y)}\|[\Delta_{\Gamma_{y,0,t}}-\Delta_{\Gamma_{y,z,t}}]\phi (y,0) \bar g' \|_{C^\frac{\theta}{2}(I)}\\ \leq&O({\varepsilon}\|\phi\|_{C^{2,\theta}(B_r(y)\times I)})\\ \leq&O({\varepsilon}^2)+O(\|\phi\|^2_{C^{2,\theta}(B_r(y)\times I)}).\\ \end{split}$$ By (\[HolderSmallnessRescaled\]) $$\begin{split} &\sup_{x\in B_r(y)}\left \| \left (\frac{1}{\sqrt{1+|\nabla_\Sigma f|^2}}\cdot\frac{\partial f}{\partial t}+\frac{\partial h}{\partial t}\right )(y,\cdot)\int\phi(y,\cdot) \bar g''\,dz \right \|_{C^\frac{\theta}{2}(I)}\\ \leq&O\left (\sup_{x\in B_r(y)}\left \|\left (\frac{1}{\sqrt{1+|\nabla_\Sigma f|^2}}\cdot\frac{\partial f}{\partial t}+\frac{\partial h}{\partial t}\right )(y,\cdot)\right \|^2_{\theta}\right )+O\left (\sup_{x\in B_r(y)}\left \|\int\phi \bar g''\,dz \right \|^2_{\theta}\right )\\ \leq&O\left (\sup_{x\in B_r(y)}\left \|\frac{1}{\sqrt{1+|\nabla_\Sigma f|^2}}\cdot\frac{\partial f}{\partial t}(y,\cdot)\right \|^2_{\theta}\right )+O\left (\sup_{x\in B_r(y)}\left \|\frac{\partial h}{\partial t}(y,\cdot)\right \|^2_{\frac{\theta}{2}}\right )+O(\sup_{x\in B_r(y)}\|\phi(y,\cdot)\|^2_\theta)\\ \leq&O\left (\sup_{x\in B_r(y)}\left \|\frac{u_t}{|\nabla u|}(y,\cdot)\right \|^2_{\theta}\right )+O\left (\sup_{x\in B_r(y)}\left \|\frac{\partial h}{\partial t}(y,\cdot)\right \|^2_{\frac{\theta}{2}}\right )+O(\sup_{x\in B_r(y)}\|\phi(y,\cdot)\|^2_\theta)\\ \leq&O({\varepsilon}^2)+O(\|h\|^2_{C^{2,\theta}(B_r(y)\times I)})+O(\sup_{x\in B_r(y)}\|\phi(y,\cdot)\|^2_\theta)\\ \leq&O({\varepsilon}^2)+O(\|h\|^2_{C^{2,\theta}(B_r(y)\times I)})+O(\|\phi\|^2_{C^{2,\theta}(B_r(y)\times I)})\\ \leq&O({\varepsilon}^2)+O(\|\phi\|^2_{C^{2,\theta}(B_r(y)\times I)}).\\ \end{split}$$ $$\begin{split} &\sup_{x\in B_r(y)}\left \|\Delta_{\Gamma_{y,\cdot}}h\int\phi g''\,dz\right \|_{C^\frac{\theta}{2}(y,\cdot)}\\ \leq&\sup_{x\in B_r(y)}\|\Delta_{\Gamma_{y,\cdot}}h\|^2_{C^\frac{\theta}{2}(y,\cdot)}+\sup_{x\in B_r(y)}\left \|\int\phi \bar g''\,dz\right 
\|^2_{C^\frac{\theta}{2}(y,\cdot)}\\ \leq&\|h\|^2_{C^{2,\theta}(B_r(y)\times I)}+O(\|\phi\|^2_{C^{2,\theta}(B_r(y)\times I)})\\ \leq&O(\|\phi\|^2_{C^{2,\theta}(B_r(y)\times I)}),\\ \end{split}$$ $$\begin{split} &\sup_{x\in B_r(y)}\left \|\int \left\langle\frac{\partial\phi}{\partial y_i}, \frac{\partial h}{\partial y_j}\right\rangle_{g_{\Gamma_{y,0}}}\bar g''\,dz\right \|_{C^\frac{\theta}{2}(y,\cdot)(I)}\\ \leq&O(\|\phi\|_{C^{1,\theta}(B_r(y)\times I)}\cdot \|h\|_{C^{1,\theta}(B_r(y)\times I)})\\ \leq&O(\|\phi\|^2_{C^{2,\theta}(B_r(y)\times I)}),\\ \end{split}$$ $$\begin{split} &\sup_{x\in B_r(y)}\left \| |\nabla_{\Gamma_{0,t}}h|\int\phi \bar g'''\,dz\right \|_{C^\frac{\theta}{2}(y,\cdot)(I)}\\ \leq&O(\|h\|_{C^{1,\theta}(B_r(y)\times I)}\cdot\sup_{x\in B_r(y)}\|\phi(y,\cdot)\|_{C^\frac{\theta}{2}(I)})\\ \leq&O(\|\phi\|^2_{C^{2,\theta}(B_r(y)\times I)}),\\ \end{split}$$ and $$\begin{split} &\sup_{x\in B_r(y)}\left \|\int H_{\Gamma_{z,t}}\partial_z\phi\,dz\right \|_{C^\frac{\theta}{2}(y,\cdot)(I)}\\ \leq&O({\varepsilon}\cdot\|\phi \|_{C^{2,\theta}(B_r(y)\times I)})\\ \leq&O({\varepsilon}^2)+O(\|\phi \|^2_{C^{2,\theta}(B_r(y)\times I)}).\\ \end{split}$$ By the uniform smallness of deviation in $z$ coordinate (\[CurvatureError\]) and (\[LaplacianError\]) $$\begin{split} &\sup_{x\in B_r(y)}\left \|\int[H_{\Gamma_{z,t}}-H_{\Gamma_{0,t}}]\bar g'^2\,dz\right \|_{C^\frac{\theta}{2}(y,\cdot)(I)}\\ \leq&O({\varepsilon}^2),\\ \end{split}$$ and $$\begin{split} &\sup_{x\in B_r(y)}\left \|\int[\Delta_{\Gamma_{z,t}}h-\Delta_{\Gamma_{0,t}}h]\bar g'^2\,dz\right\|_{C^\frac{\theta}{2}(y,\cdot)(I)}\\ \leq&O({\varepsilon}^2),\\ \end{split}$$ $$\begin{split} &\sup_{x\in B_r(y)}\left \|\int g'g''|\nabla_{z,t}h|^2\,dz\right \|_{C^\frac{\theta}{2}(y,\cdot)(I)}\\ \leq&O({\varepsilon}^2).\\ \end{split}$$ And thus similar to (\[SpaceHolder\]), we obtain $$\label{TimeHolder} \sup_{x\in B_r(y)}\left \| \left [\frac{\partial h}{\partial 
t}-\Delta_{\Gamma_{0,t}}h+H_{\Gamma_{0,t}}+\frac{1}{\sqrt{1+|\nabla_\Sigma f|^2}}\cdot\frac{\partial f}{\partial t}\right ](y,\cdot)\right \|_{C^{\frac{\theta}{2}}(B_r(y))}\leq O({\varepsilon}^2)+O(\|\phi\|^2_{C^{2,\theta}(B_r(y)\times I)}).$$ ### Hölder estimate in space-time From (\[SpaceHolder\]) and (\[TimeHolder\]), we get Hölder estimates for the term II+IV in space-time: $$\label{HolderEstimate} \begin{split} &\left \|\left [\frac{\partial h}{\partial t}-\Delta_{\Gamma_{0,t}}h+H_{\Gamma_{0,t}}+\frac{1}{\sqrt{1+|\nabla_\Sigma f|^2}}\cdot\frac{\partial f}{\partial t}\right ]\right \|_{C^{0,\theta}(B_r(y)\times I)}\\ \leq&O({\varepsilon}^2)+O(\|\phi\|^2_{C^{2,\theta}(B_r(y)\times I)})\\ \leq&O({\varepsilon}^2)+\sigma\|\phi\|_{C^{2,\theta}(B_r(y)\times I)}, \end{split}$$ where $\sigma=o(1)$ is a small coefficient, since the norm of $\phi$ is small. This will be used later for an iteration argument.
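The iteration alluded to is the standard absorption trick; schematically (our own outline, with constants suppressed), once the parabolic Schauder estimate bounds $\|\phi\|_{C^{2,\theta}}$ by the Hölder norm of II+IV up to $O({\varepsilon}^2)$, one has $$\|\phi\|_{C^{2,\theta}}\leq O({\varepsilon}^2)+\sigma\|\phi\|_{C^{2,\theta}} \quad\Longrightarrow\quad \|\phi\|_{C^{2,\theta}}\leq\frac{1}{1-\sigma}\,O({\varepsilon}^2)=O({\varepsilon}^2),$$ since $\sigma<1$.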
Parabolic Schauder estimates for $\phi$ and regularity of the level sets, the proof of main theorems ==================================================================================================== Rewriting the equation (\[Difference\]), we get $$\begin{aligned} \left (\frac{\partial}{\partial t}-\Delta\right )\phi+W''(g)\phi=g'\left (\frac{\partial h}{\partial t}-\Delta_{\Gamma_{0,t}}h+H_{\Gamma_{0,t}}+\frac{1}{\sqrt{1+|\nabla_\Sigma f|^2}}\cdot\frac{\partial f}{\partial t}\right )+g''|\nabla h|^2+\bar\eta.\end{aligned}$$ By applying standard parabolic Schauder estimates (see chapter 4 of [@lieberman1996second] for reference) to the above equation, we get $$\begin{split} &\|\phi\|_{C^{2,\theta}(B(r)\times I)}\\ \leq&\|\frac{\partial h}{\partial t}-\Delta_{\Gamma_{0,t}}h+H_{\Gamma_{0,t}}+\frac{1}{\sqrt{1+|\nabla_\Sigma f|^2}}\cdot\frac{\partial f}{\partial t}\|_{C^{\theta}(B(2r)\times 2I)}+\|h\|^2_{C^{2,\theta}(B(2r)\times 2I)}+O(\varepsilon^2)\\ \leq&\|\frac{\partial h}{\partial t}-\Delta_{\Gamma_{0,t}}h+H_{\Gamma_{0,t}}+\frac{1}{\sqrt{1+|\nabla_\Sigma f|^2}}\cdot\frac{\partial f}{\partial t}\|_{C^{\theta}(B(2r)\times 2I)}+\sigma\|\phi\|^2_{C^{2,\theta}(B(2r)\times 2I)}+O(\varepsilon^2)\\ \end{split}$$ where $\sigma<1$ is a small constant. Combining this with the estimate (\[HolderEstimate\]) and using an iteration argument, we get $$\begin{split} \left \|\frac{\partial h}{\partial t}-\Delta_{\Gamma_{0,t}}h+H_{\Gamma_{0,t}}+\frac{1}{\sqrt{1+|\nabla_\Sigma f|^2}}\cdot\frac{\partial f}{\partial t} \right \|_{C^{\theta}(B(r)\times I)}+\|\phi\|_{C^{2,\theta}(B(r)\times I)}\leq O(\varepsilon^2). 
\end{split}$$ Since the mean curvature satisfies $$\begin{split} H_{\Gamma_{0,t}} &=H_{\Gamma_{\mathrm{Graph}f}}=-\mathrm{div_\Sigma}\left (\frac{\nabla_\Sigma f}{\sqrt{1+|\nabla_\Sigma f|^2}}\right ),\\ \end{split}$$ we have $$\left \|\frac{1}{\sqrt{1+|\nabla_\Sigma f|^2}}\cdot\frac{\partial f}{\partial t}-\mathrm{div_\Sigma}\left (\frac{\nabla_\Sigma f}{\sqrt{1+|\nabla_\Sigma f|^2}}\right )\right \|_{C^\theta}\leq O(\varepsilon^2).$$ After rescaling back to the original scale we have $$\left \|\frac{1}{\sqrt{1+|\nabla_\Sigma f^{\varepsilon}|^2}}\cdot\frac{\partial f^{\varepsilon}}{\partial t}-\mathrm{div_\Sigma}\left (\frac{\nabla_\Sigma f^{\varepsilon}}{\sqrt{1+|\nabla_\Sigma f^{\varepsilon}|^2}}\right )\right \|_{C^\theta}\leq O(\varepsilon).$$ Thus by parabolic Schauder estimates, we have $$\label{UniformEstimate} \|f_{\varepsilon}\|_{C^{2,\theta}}\leq C$$ at the original scale, and the convergence of $f_{\varepsilon}$ to the limit $f$ is in the $C^{2,\theta}$ sense. Proof (of Theorem \[ImprovementRegularity\]). We essentially already completed the proof of Theorem \[ImprovementRegularity\] with the last estimate (\[UniformEstimate\]), obtaining a uniform $C^{2,\theta}$ estimate from a uniform enhanced second fundamental form bound and an entropy bound. This gives a uniform $C^{0,\theta}$ norm for the curvature of the level sets. Proof (of Theorem \[Graphical\]). This is the same argument as in the proof of Corollary 1.2 in [@Wang2019]: the conditions in Theorem \[Graphical\] imply the conditions (uniform enhanced second fundamental form bounds) of Theorem \[ImprovementRegularity\] by a blow-up argument. Proof of curvature estimates {#ProofCurvature} ============================ In this section we prove the a priori bound on enhanced second fundamental forms for low entropy Allen–Cahn flows. We begin with an a priori lower bound for the gradients of parabolic Allen–Cahn solutions at points of phase transition.
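As a quick sanity check on the graph mean curvature formula used above, the following self-contained Python sketch (central finite differences; the helper names are ours, not from the paper) verifies numerically that for a one-variable graph $y=f(x)$ the divergence expression reduces to the familiar $-f''/(1+f'^2)^{3/2}$:

```python
import math

H = 1e-4  # finite-difference step

def mean_curvature_graph(f, x, h=H):
    # -d/dx( f'/sqrt(1 + f'^2) ), evaluated by central differences
    def slope_term(t):
        fp = (f(t + h) - f(t - h)) / (2 * h)
        return fp / math.sqrt(1 + fp * fp)
    return -(slope_term(x + h) - slope_term(x - h)) / (2 * h)

def curvature_closed_form(f, x, h=H):
    # -f'' / (1 + f'^2)^(3/2), evaluated by central differences
    fp = (f(x + h) - f(x - h)) / (2 * h)
    fpp = (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)
    return -fpp / (1 + fp * fp) ** 1.5

# The two expressions agree up to discretization error:
for x0 in (0.3, 1.0, 2.0):
    a = mean_curvature_graph(math.sin, x0)
    b = curvature_closed_form(math.sin, x0)
    assert abs(a - b) < 1e-4, (a, b)
```

The same identity is what the divergence formula encodes for graphs over a general base $\Sigma$; the one-dimensional case simply makes it checkable by hand.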
The enhanced second fundamental form $\mathcal A(u)=\frac{\sqrt{|\nabla^2 u|^2-|\nabla|\nabla u||^2}}{|\nabla u|}$ makes sense only if the gradient does not vanish. There exists $C>0$ such that if $u^{\varepsilon}$ is a solution of equation (\[EAC\]) and the energy density of $u^{\varepsilon}$ converges with multiplicity $\alpha$ to a smooth mean curvature flow in $U\times I\subset\mathbb R^2\times\mathbb R$, then $${\varepsilon}|\nabla u^{\varepsilon}(0,t)|\geq C$$ for sufficiently small ${\varepsilon}$ on compact subsets of $U\times I$. Suppose not; then there exist a sequence ${\varepsilon}_i\rightarrow0$ and a sequence of solutions $u^{{\varepsilon}_i}$ to the equation (\[EAC\]) with ${\varepsilon}={\varepsilon}_i$ such that $${\varepsilon}_i|\nabla u^{{\varepsilon}_i}(0,0)|\rightarrow0$$ in $U\times I\subset\mathbb R^2\times\mathbb R$. By scaling, we obtain a sequence of solutions $u_{{\varepsilon}_i}=u^{{\varepsilon}_i}(\frac{x}{{\varepsilon}_i},\frac{t}{{\varepsilon}_i^2})$ satisfying equation (\[AC\]) in $\frac{U}{{\varepsilon}_i}\times\frac{I}{{\varepsilon}_i^2}\subset\mathbb R^2\times\mathbb R$ with $$|\nabla u_{{\varepsilon}_i}(0,0)|\rightarrow0.$$ After passing to a limit, we obtain a solution $u_\infty$ of (\[AC\]) defined on the whole space-time $\mathbb R^2\times\mathbb R$, but with $$|\nabla u_{\infty}(0,0)|=0.$$ But this contradicts the rigidity of eternal solutions to the Allen–Cahn flow, Theorem \[ParabolicRigidity2D\], and thus we must have the gradient lower bound. Now we can prove the curvature estimates in Corollary \[CurvatureEstimates\]. We again argue by contradiction.
Suppose there exist a sequence ${\varepsilon}_i\rightarrow0$ and a sequence of solutions $u^{{\varepsilon}_i}$ to the equations (\[EAC\]) in $B_{r_i}(0)\times[-r_i^2,r_i^2]\subset\mathbb R^2\times\mathbb R$ with ${\varepsilon}={\varepsilon}_i$ such that $u^{{\varepsilon}_i}(0,0)=0$ and $$|\mathcal A(u^{{\varepsilon}_i}(0,0))|\cdot r_i=C_i\rightarrow\infty,$$ and, by a point picking argument, that $$|\mathcal A(u^{{\varepsilon}_i}(x,t))|\leq 2|\mathcal A(u^{{\varepsilon}_i}(0,0))|$$ for $(x,t)\in B_{r_i}(0)\times[-r_i^2,r_i^2]$. We rescale the sequence of solutions by $|\mathcal A(u^{{\varepsilon}_i}(0,0))|$ and obtain a new sequence $$\begin{aligned} \tilde u_i(x,t)=u^{{\varepsilon}_i}\left (\frac{x}{|\mathcal A(u^{{\varepsilon}_i}(0,0))|},\frac{t}{|\mathcal A(u^{{\varepsilon}_i}(0,0))|^2}\right ),\end{aligned}$$ where $\tilde u_i$ satisfies (\[EAC\]) with ${\varepsilon}={\varepsilon}_i|\mathcal A(u^{{\varepsilon}_i}(0,0))|$. Each $\tilde u_i$ is defined on $B_{C_i}\times[-C_i^2,C_i^2]\subset\mathbb R^2\times\mathbb R$ and $$\begin{split} &|\mathcal A(\tilde u_i(0,0))|=1,\\ &|\mathcal A(\tilde u_i(x,t))|\leq2,\quad (x,t)\in B_{C_i}\times[-C_i^2,C_i^2]. \end{split}$$ We have $\limsup_{i\rightarrow \infty}{\varepsilon}_i|\mathcal A(u^{{\varepsilon}_i}(0,0))|<\infty$ by Liouville's theorem for linear heat equations. So there are two cases. If $\lim_{i\rightarrow \infty}{\varepsilon}_i|\mathcal A(u^{{\varepsilon}_i}(0,0))|=0$ after passing to a subsequence, then by the improvement of estimates, Theorem \[ImprovementRegularity\], for sufficiently large $i$ we have uniform $C^{2,\theta}$ bounds, so the second fundamental form is preserved in the limit. Therefore the nodal sets of $\tilde u_i$ converge in $C^{2,\theta}$ to a limit eternal curve shortening flow whose second fundamental form has norm $1$ at $(0,0)$. This contradicts the fact that the only eternal curve shortening flow with entropy below $2$ is the static flat line.
If $\lim_{i\rightarrow \infty}{\varepsilon}_i|\mathcal A(u^{{\varepsilon}_i}(0,0))|= \bar C\neq0$ after passing to a subsequence, then we get a limit eternal solution to the parabolic Allen–Cahn equation (\[EAC\]) with ${\varepsilon}=\bar C$ whose second fundamental form has norm $1$ at $(0,0)$. This contradicts the rigidity result, Theorem \[ParabolicRigidity2D\]. So we must have a uniform curvature bound. Kenneth A. Brakke. , volume 20 of *Mathematical Notes*. Princeton University Press, Princeton, N.J., 1978. Luis A. Caffarelli and Antonio Córdoba. Phase transitions: uniform regularity of the intermediate layers. , 593:209–235, 2006. Tobias H. Colding and William P. Minicozzi, II. Generic mean curvature flow I: generic singularities. , 175(2):755–833, 2012. Otis Chodosh and Christos Mantoulidis. Minimal surfaces and the Allen–Cahn equation on 3-manifolds: index, multiplicity, and curvature estimates. , 191(1):213–328, 2020. Manuel del Pino and Konstantinos T. Gkikas. Ancient multiple-layer solutions to the Allen–Cahn equation. , 148(6):1165–1199, 2018. Manuel del Pino and Konstantinos T. Gkikas. Ancient shrinking spherical interfaces in the Allen–Cahn flow. , 35(1):187–215, 2018. Marco A. M. Guaraco, Fernando C. Marques, and André Neves. Multiplicity one and strictly stable Allen–Cahn minimal hypersurfaces. arXiv:1912.08997 \[math.DG\]. Tom Ilmanen. Convergence of the Allen–Cahn equation to Brakke's motion by mean curvature. , 38(2):417–461, 1993. Gary M. Lieberman. . World Scientific Publishing Co., Inc., River Edge, NJ, 1996. Frank Pacard and Manuel Ritoré. From constant mean curvature hypersurfaces to the gradient theory of phase transitions. , 64(3):359–423, 2003. Matthias Röger and Reiner Schätzle. On a modified conjecture of De Giorgi. , 254(4):675–714, 2006. Halil Mete Soner. Ginzburg–Landau equation and motion by mean curvature, II: development of the initial interface.
, 7(3):477–491, 1997. Ao Sun. On the entropy of parabolic Allen–Cahn equation. , 2018. Yoshihiro Tonegawa et al. Integrality of varifolds in the singular limit of reaction-diffusion equations. , 33(3):323–341, 2003. Mariel Sáez Trumper. Relaxation of the curve shortening flow via the parabolic Ginzburg–Landau equation. , 31(3):359–386, 2008. Kelei Wang. A new proof of Savin's theorem on Allen–Cahn equations. , 19(10):2997–3051, 2017. Brian White. A local regularity theorem for mean curvature flow. , 161(3):1487–1519, 2005. Kelei Wang and Juncheng Wei. Finite Morse index implies finite ends. , 72(5):1044–1119, 2019. Kelei Wang and Juncheng Wei. Second order estimate on transition layers. , 358:106856, 2019.
Conservatism Lives! Shelby Is Wrong SHELBY BLOCKS ALL NOMINATIONS: Republicans don’t need 41 senators – one blanket hold on nominees can stop everything. From CongressDaily: ‘Sen. Richard Shelby, R-Ala., has placed a blanket hold on all executive nominations on the Senate calendar in an effort to win concessions from the Obama administration and Pentagon on a variety of fronts affecting his home state, according to aides to Senate Majority Leader Reid. Reid spokeswoman Regan Lachapelle said Shelby is blocking more than 70 pending nominations. Reid can force a vote on any nomination by filing cloture.’
Electricity games Tarneem Hammad | 29-12-2016 It’s winter time in Gaza, so go get your blanket and a big cup of cappuccino, turn on your heater and let’s watch a movie. Knock…knock…knock. Wait, someone is knocking. Who is it? It’s electricity saying bye bye—like it has repeatedly over the last several days, on for only about four hours a day and less in some areas of Gaza. There’s no real pattern right now; it just comes and goes. So bye bye warmth, movie and joy. That’s okay; I have my cappuccino. Fortunately, we fuel our ovens with gas, not electricity, so I can still boil water. It’s cold and dark and I’m bored. How do I entertain myself? Watch a football [soccer] match, a fashion show or the news? No, there’s no TV. The power is off. Visit some friends? But they will be as annoyed as I am that all of their plans have had to be dropped, and we’d just complain to each other. Plus, most don’t have generators or fuel for them, so we’d sit in the dark. Check social media? Nope. No power means no wifi. Play some sports? It’s freezing outside (and in), so no. Men warm themselves by an outdoor stove. I’ve got it: read a book? That’s it. Yes, I’ll read a book. It’s not very comfortable, but I can use the flashlight on my mobile phone. Five hours later, the book is finished but still no power. (And for how many hours straight can one read anyway?) My sister Rou, who is a “social mediaholic,” is complaining that she knows nothing about her friends outside the country. My brother Ali jokes that the funny thing about this crisis is that you can walk around wearing a half-ironed T-shirt and people will excuse you because they know the power went off in the middle. Most others simply sleep a lot, or gather in the streets in front of a brazier for warmth. And those who live above the fifth floor of buildings with elevators are stuck. That’s a lot of stairs to expect an old person to climb down.
Impact of constant power cuts Power is the “mother of services”; if power stops, all other services stop. For example, we no longer put perishable food in the refrigerator because we fear it will spoil. Everyone feels the electricity crisis, from the little children to the elderly men. No one is spared, although a few have enough money to run a generator nonstop. It’s been 10 years of pretty much nonstop electricity crisis, although it’s particularly bad right now. Every year, we have one or two months when we receive only three or four hours of electricity a day, and this month is one of them. (And then we return to the “good” days, when we get six or eight hours a day!) Why is this happening? The official excuses fluctuate. It happens to be winter right now, so the bad weather is cited as one culprit. But there are three causes that are present all the time: the Israeli blockade, which restricts the amount of fuel and machinery parts we can import; past Israeli assaults, which have damaged our infrastructure; and our own internal political divisions. “The crisis is political par excellence and the political parties must stay away from the Palestinian arena to ease the suffering of the citizens,” Jamal al-Dardasawy, spokesman for the Gaza Electricity Distribution Company, said in the media. Sometimes the frustration erupts into protests. Gaza is supplied with electricity from three primary sources: Israel, Egypt and the Palestine Electric Company, which relies on both of those countries and the Palestinian political parties for its fuel. Conflict with or between any of those means a shortage of electricity for ordinary citizens. (Gaza’s only power plant was bombed in 2006. It began operating again at reduced capacity but stopped because of a tax dispute between Hamas and Fatah.) 
Excuses abound Many officials, including Fatah member and Palestinian Authority President Mahmoud Abbas and Hamas member and Prime Minister Ismail Haniyeh, have promised that the latest and most extreme shortage will be solved soon. They are “working on it,” they say. But we’ve heard that before and are not holding our breath. Actually, officials rarely bother talking about electricity these days because shortages have become the norm. When they do say something, Hamas officials blame Fatah officials and vice versa. No one comes up with a solution. No one can explain why we suddenly transition from eight hours of power a day to six or just four. It just happens. And we have to keep on keeping on. Some people can afford the cost of a back-up power generator. But most can’t, and instead many rely on candles. But you have to be very careful because if one candle drops, it can burn a whole house. In fact, three children died in Gaza that way in May. Other people buy a unit that stores power while it’s on, but it needs a constant current for at least 10 hours a day—so that’s mostly useless. No matter what you do, you’ll have hours at a time with no power at all. Women usually use the daylight to order their children to do their homework, before the late afternoon when our houses become completely dark and there is not much to do except sleep. But guess what? We have our own version of “rush hours” at odd hours of the day. It’s not when there is a traffic jam, but when electricity shows up without warning. Last Friday, for example, we finally got some electricity for two hours at 4 a.m.! So my mum woke up quickly to switch on the washing machine. Even though it was so noisy, the neighbors didn’t complain because they were awake too, baking, washing and charging their cellphones or laptops. Sometimes, it becomes just too much to bear and people erupt into protests, like last January. 
Demonstrators protested against the continuous power cuts in several areas of the Gaza Strip, including camps like Bureij, Maghazi, Rafah and Khan Younis. But they ended up simply dispersing when the director of public relations and information for the Electricity Distribution Company insisted the severe shortage was due to the bad weather and there was nothing to be done about Mother Nature. I’m not convinced though. I don’t know or care who is responsible; what I know is that something ought to be done to solve this crisis before it reaches a new high—which means no power at all.
How much physical activity is recommended? The WHO recommends that children and adolescents aged five to 17 years: Should do at least 60 minutes of moderate to vigorous-intensity physical activity daily; physical activity of amounts greater than 60 minutes daily will provide additional health benefits; and should include activities that strengthen muscle and bone, at least three times per week. It recommends that adults aged 18 to 64 years: Should do at least 150 minutes of moderate-intensity physical activity throughout the week, or do at least 75 minutes of vigorous-intensity physical activity throughout the week, or an equivalent combination of moderate- and vigorous-intensity activity; for additional health benefits, adults should increase their moderate-intensity physical activity to 300 minutes per week, or equivalent; and muscle-strengthening activities should be done involving major muscle groups on two or more days a week. For adults aged 65 years and above, the WHO said they should do at least 150 minutes of moderate-intensity physical activity throughout the week, or at least 75 minutes of vigorous-intensity physical activity throughout the week, or an equivalent combination of moderate- and vigorous-intensity activity. For additional health benefits, the WHO said they should increase moderate-intensity physical activity to 300 minutes per week, or equivalent; those with poor mobility should perform physical activity to enhance balance and prevent falls, three or more days per week; and muscle-strengthening activities should be done involving major muscle groups, two or more days a week. According to the WHO, the intensity of different forms of physical activity varies between people, and in order to be beneficial for cardiorespiratory health, all activity should be performed in bouts of at least 10 minutes’ duration.
The United Nations apex health body said regular physical activity of moderate intensity – such as walking, cycling, or doing sports – has significant benefits for health. “At all ages, the benefits of being physically active outweigh potential harm, for example through accidents. Some physical activity is better than doing none. By becoming more active throughout the day in relatively simple ways, people can quite easily achieve the recommended activity levels.” According to the WHO, regular and adequate levels of physical activity: improve muscular and cardiorespiratory fitness; improve bone and functional health; reduce the risk of hypertension, coronary heart disease, stroke, diabetes, breast and colon cancer and depression; reduce the risk of falls as well as hip or vertebral fractures; and are fundamental to energy balance and weight control. Culled From Vanguard.
Sunday, January 13, 2013 Greetings, Oh Faithful Readers!- The Sheriff’s Office in Orlando is experimenting with surveillance drones they hope to launch in the spring. If they are looking for crime, how about starting with $89 for a ticket to get into Walt Disney World? Maryland and Pennsylvania are using computers to predict future crimes. The biggest victims are season ticket holders for the Pirates and Orioles. Jerry Springer says he is “the father of the destruction of Western Civilization.” What’s even worse is that the test results were given on “Maury”. The Social Security Administration is withdrawing disciplinary action against an employee who is overly flatulent at work. Government experts were shocked. People in the Social Security Administration actually work? An Ohio school district is planning to arm its janitors. The question is, how are they going to use a gun when they aren’t even sure which end of the mop to use? An Ohio school district is planning to give weapons to its janitors. You can’t even find one when there is a spill to clean up. How are you going to find them when there’s an armed intruder? A proposed bill in Texas would force sex offenders to identify themselves online. Don’t we already have that? It’s called the Myspace member database. President Obama says the U.S. has fallen “short of the ideal” in Afghanistan. Unless you consider the ideal fighting an unnecessary war for more than a decade and losing. The Treasury says a $1 Trillion coin would not be legal. Unless instead of “In God We Trust” it is inscribed with “We Don’t Trust Congress To Spend This Responsibly”. Several hundred thousand people marched through Paris to protest against the planned legalization of same-sex marriage. Which means the city will officially have to give up being called “Gay Paree”. The University of Arizona is offering a minor focusing on hip hop music. Even philosophy majors are asking students what they are going to do with that when they graduate. 
The University of Arizona is offering a minor focusing on hip hop music. It’s available for students who are majoring in Ho’s, Bitches and Pimps. The world’s oldest woman has died in Japan at age 115. Never have so many people lived so long to win a title and then give it up after only a few weeks. A cannon in Central Park in New York from the Revolutionary War was found to be still loaded with gunpowder and a cannonball. No one in the park seemed to notice since they had a better chance against that than anyone in the park carrying an AK-47. A cannon in Central Park in New York from the Revolutionary War was found to be still loaded with gunpowder and a cannonball which were immediately removed. Wayne LaPierre immediately condemned the Obama Administration for trying to take away every gun in America. The Cuban Government has eased the country’s travel restrictions. In fact, they are trying to help people leave by giving away free instructions on how to build a raft. A Goodwill worker in Virginia discovered a donated painting that has since been appraised at $12,000. Apparently the worker knew it was special because it was the only one ever donated that wasn’t featuring a unicorn and a rainbow or dogs playing poker. A Washington, D.C. restaurant is serving checks to diners that feature news headlines. Apparently that way Congressmen who eat there can see if they have been indicted on their lunch break. Chevrolet is redesigning the Corvette to re-establish the “cool factor” of owning one. Of course, it has always been a status symbol to own a Corvette because people know how much money you need to afford to fill its tank. Chevrolet is redesigning the Corvette to re-establish the “cool factor” of owning one. When people see a Corvette now, the question they ask is which is higher, the speed it will go or the age of the guy driving it. China’s auto sales are predicted to rise 7% in 2013.
Ironically, more people can afford to drive a car there than pay to walk around in a pair of Nikes their kids made for 37 cents a day. A study says the sweatiest part of the body is the upper back. Fortunately, it’s not as big of a problem for Rosie O’Donnell ever since she started shaving hers. A study says the least sweaty parts of the body are the hands, fingers and feet. Unless you are working on the production crew of a Christian Bale movie. Government health officials say this year’s flu shot is 62% effective. Which is 61% more effective than the government. The American Cancer Society is recommending that older heavy smokers should be screened for lung cancer. The only problem is finding any heavy smokers who are older. The American Cancer Society is recommending that older heavy smokers should be screened for lung cancer. That’s for people with a good health care plan. People without health insurance can just book a flight and use the TSA airport security X-Rays. A hospital in Texas is allowing premature babies to bond with their parents with webcams. The next step after having their own webcam is a pole to practice dancing in their bedroom. IBM is developing a computer system that could customize recipes based on people’s taste buds. Which for most Americans is pretty much the menu at McDonald’s. New York City public hospitals are planning to tie doctors’ pay to the quality of care they give. Or as they would be known as if HMOs did that, “volunteers”. A Colorado company is planning to market a marijuana infused skin care line. Apparently they are selling it to women who want the same creamy complexion as the Zig-Zag man. Beijing’s air pollution actually went beyond the measuring index last week. It was so bad that people who usually knew what factory they worked at from the color smoke it was billowing couldn’t find their way to work. Beijing’s air pollution actually went beyond the measuring index last week.
It was so bad that doctors had trouble reading chest X-Rays right in front of their face. Britney Spears has reportedly called off her engagement to Jason Trawick, her former agent. Apparently she feels better about giving him 15% now than the 50% he would be asking for in a divorce. Britney Spears has reportedly called off her engagement to Jason Trawick, her former agent. Apparently she was mad that all the wedding singers he was trying to book for the ceremony were better than her. Britney Spears has reportedly called off her engagement to Jason Trawick, her former agent. Apparently that now makes him the ex-factor. A Sony executive says that “Zero Dark Thirty” doesn’t advocate torture. If any Sony movie does, it is making people watch Adam Sandler’s “Jack and Jill”. A new book says that Tom Cruise believes he is on the planet to fight aliens. Which is exactly the same platform that Mitt Romney was running on. A new book says that Tom Cruise has signed a billion year contract with the Church of Scientology. Although apparently the Scientologists have been looking for an out ever since “Vanilla Sky” came out. Miss South Carolina is defending Brent Musberger’s right to comment on the looks of Miss Alabama. Apparently she is just glad she doesn’t have to worry about South Carolina getting to the BCS Championship Game anytime soon. Dozens of fans were arrested and 92 were ejected in the San Francisco 49ers win over the Green Bay Packers. Hey, Raiders fans have to watch someone play in the post season. Scientists say they have discovered the largest structure in the universe. Amazingly enough, it is a galaxy and not a sign saying “Trump”. A NASA flight director and his family lived for 30 days on Mars time. He says the hardest part was the eight month daily commute. That’s it for now, Oh Faithful Readers! The NFL playoffs are winding down. Or as we Raiders fans know it, closing in on baseball season. I know you’re out there.
Make sure you let me know it by sending the love!
Kabobs When it comes to tailgating, grilling is a must. As a simple crowd pleaser, kabobs have infinite variations. This version combines a rich, smoky and sweet bourbon glaze with the sweet-tart flavor of pineapples and […]
Hello, kids! Today we have a Pikachu card. Do you want Pikachu on your table right now? PI-KA-CHU! All right, let's make one: a paper Pikachu model. First, draw it on paper. 1st: draw Pikachu's head. 2nd: draw Pikachu's hands. 3rd: draw Pikachu's eyes. Then draw Pikachu's nose and mouth, and draw the feet. This is Pikachu's tail; he can use it to discharge electricity during combat. Use a red brush to paint Pikachu's cheeks. Pikachu's dominant color is yellow, so use plenty of yellow for the body. Then cut the paper along the drawn border and paste Pikachu onto another sheet of paper. Remember to paste the leg section too. Finally, create a stand so Pikachu can stay upright.
Demodicidosis simulating acute graft-versus-host disease after allogeneic stem cell transplantation in one patient with acute lymphoblastic leukemia. One important differential diagnosis of facial erythema in a patient receiving an allogeneic bone marrow transplant (BMT) is acute graft-versus-host disease (GVHD). Demodex folliculorum has been rarely implicated in the development of facial rashes in immunosuppressed patients, including BMT recipients. We report the case of a patient, suffering from acute lymphoblastic leukemia, who after bone marrow transplantation developed a facial rash due to D. folliculorum mimicking GVHD. Differential diagnosis of facial rashes and demodicidosis after BMT is reviewed.
This invention relates to composite polyurethane foams and methods of manufacture thereof. In particular, this invention relates to a layered polyurethane foamed composite. Composite, i.e., multiple-layer foamed polyurethane materials are known, and are often manufactured in order to obtain a combination of desirable properties. Such composites have primarily been manufactured by laminating the individual foam layers together, generally by using adhesives. These methods require additional manufacturing steps, and may result in materials with degraded, rather than improved properties. A composite, dual layer polyurethane foam is disclosed in U.S. Pat. No. 5,859,081 to Duffy, wherein the composite is manufactured by casting an uncured (wet) polyurethane foam composition onto a cured (dry) polyurethane foam. It is admitted in Duffy that the dual layer foam has a "fine and uniform interface between the two layers" (col. 2, lines 59-60). However, such an interface line can interfere with air flow through the multilayer foam structure, reducing the breathability, porosity and water vapor transmission that are important for many applications. In addition, this interface can produce a location for delamination of the layers because its strength depends on the adhesion of the second cast layer to the first. It has also been found by the inventors that casting uncured wet polyurethane foam compositions onto a cured (dry) polyurethane open celled foam results in large voids and bubbles in the second layer, due to expansion of the gas in the cells of the first layer during the curing process. In addition to the cosmetic deficiencies of these voids and bubbles, the physical properties can also be negatively affected during production and/or use. Thus, there is a need for a product and a process to make a multilayer foam structure with no structural interface line between the layers.
A method for the manufacture of multiple layer polyurethane foams, which comprises: casting an uncured (wet) polyurethane foam onto a cast, uncured (wet) polyurethane foam; and curing the resulting layers of foam such that the final product possesses no discernible structural interface line between the layers when examined at a magnification of 50× using a scanning electron microscope (SEM). If the different layers have different colors then visible microscopy can be used to visualize the border region between layers. Multilayered foams of the present invention show evidence of polymer diffusion between the layers but the cells of the foam are distributed uniformly from one surface to the other without regard to the transition zone between the layers. Such multilayered foams cannot be delaminated, in that the tear strength of one of the layers is exceeded before any evidence of delamination is seen.
Riseten Pass The Riseten Pass () is a mountain pass of the Glarus Alps, located on the border between the Swiss cantons of St. Gallen and Glarus, at an elevation of . It crosses the col between the peaks of the Wissgandstöckli and Foostock. The pass is traversed by a trail, which connects the village of Weisstannen, in the canton of St. Gallen at an elevation of , with the valley of the Krauch stream and thence with the village of Matt, in the canton of Glarus at an elevation of . See also List of mountain passes in Switzerland References External links Risetenpass on Hikr web site Category:Mountain passes of Switzerland Category:Mountain passes of the Alps Category:Mountain passes of the canton of Glarus Category:Mountain passes of the canton of St. Gallen Category:Glarus–St. Gallen border
Jessica Simpson might have started a feud with Natalie Portman on Twitter In one of the biggest plot twists to hit 2018 (and there have been pleeeeeeenty), Jessica Simpson is now beefing with Natalie Portman on Twitter. Yep! It's a thing! Here's how this went down. Natalie gave an interview with USA Today about her new movie "Vox Lux," which tells the story of an international pop singer. During the interview, Natalie talked about her own relationship with stardom and how impressionable she was as a young girl. "I remember being a teenager, and there was Jessica Simpson on the cover of a magazine saying ‘I’m a virgin’ while wearing a bikini, and I was confused. Like, I don’t know what this is trying to tell me as a woman, as a girl,” she said. While it looks like Natalie was just using Jessica as an example of a larger theme, Jessica took it as a personal call-out and posted a message about it on Twitter. She wrote, "As public figures, we both know that our image is not totally in our control at all times, and that the industry we work in often tries to define us and box us in. However, I was taught to be myself and honour the different ways all women express themselves, which is why I believed then - and I believe now - that being sexy in a bikini and being proud of my body are not synonymous with having sex." She went on to say that considering Natalie's work with Time's Up, she was surprised that she would make comments like these. Natalie has now issued a response to Jessica's claims, saying that she didn't mean to cause any offence.
Beard House Highlight: Braised Short Rib and Bone Marrow with Ratatouille, Mascarpone Polenta, and Hunter’s Sauce Maggie Borden Search Recipes The dog days of summer may seem like an odd time for braised short ribs and bone marrow, but in the deft hands of chef Richard Arbaugh at his August Beard House dinner, this hearty dish became a vehicle for the vibrant bounty of the summer harvest. Assembled in distinct sections, the plate featured an array of textures, from the fork-tender short rib to the melting velvet of the bone marrow to the crispy crust of the griddled polenta. I’ll never turn down a good short rib, but at the end of the evening I found myself longing for the dish’s accompaniments: the polenta powered by the punch of sweet corn, and the ratatouille enlivened by the bright pop of summer tomatoes. With the A/C on, I almost forgot what season it was. Take this recipe with you as you cook your way from summer into fall.
Agmatine is derived from arginine via arginine decarboxylase (ADC), and is produced principally and constitutively by the kidney. It is a novel endogenous inhibitor of cell proliferation whose effects are attributed, at least in part, to regulation of polyamines. Polyamines are required components of cell cycle progression. The rate-limiting enzyme of polyamine biosynthesis is ornithine decarboxylase (ODC), a proto-oncogene required for growth and significantly elevated in tumors. Intracellular polyamine levels are autoregulated by induction of antizyme, a protein that inhibits both ODC and cellular polyamine import. Agmatine lowers intracellular polyamine levels by inducing antizyme and SSAT, an enzyme involved in the metabolism of polyamines. In transformed NIH/3T3 cells agmatine inhibits proliferation via a G1 cell cycle arrest with induction of cyclin kinase inhibitors in a senescent-like manner, in effect reverting a transformed phenotype to a senescent one. Agmatine inhibits proliferation in all cell lines evaluated, even those deficient in cyclin kinase inhibitors, suggesting redundant modes of arrest. Finally, agmatine initiates a coordinated network of antiproliferative effects involving Akt pathways (linked with survival and growth) and angiogenic factors, which could also contribute to this arrest. Since the agmatine system may provide a new therapeutic avenue, we first have to understand its actions in more detail. OBJECTIVES: To define the mechanisms of agmatine's antiproliferative effects. Here we will develop tools vital for this and future work. We will combine siRNA and lentiviral vector technology to establish stable knock-down cell lines of candidate proteins induced by agmatine (cyclin kinase inhibitors, antizyme and SSAT) and delineate the mechanisms of agmatine-mediated senescence and growth arrest. The respective siRNA lentiviral vectors also provide tools for future assessment in animal models.
Understanding the mechanisms involved in this network of antiproliferative responses elicited by agmatine would allow us to define, target and exploit critical pathways by molecular or pharmacologic approaches. These pathways will have particular application to diabetes and IRI in kidney, models we plan to pursue.
In 1930, famed economist John Maynard Keynes predicted that within his lifetime, the future economy would be powered with a quarter of the effort. In a hundred years, he wrote, humanity would actually be confronted with the problem of too much leisure time, and what to do with it. Technological innovation meant that we could accomplish whatever needed doing in a 15-hour workweek, and we’d be endeavoring “to spread the bread thin on the butter,” distributing what little work was necessary as equally as possible. Today, despite massive gains in productivity, and thanks to unrepentant consumerism, Keynes’s prediction couldn’t have been further off. In 1991, sociologist Juliet Schor found that Americans in the early ‘90s were working 163 more hours than they were in 1973. But now, economists (including Schor) are considering a perception of time that actually makes sense for a post-industrial clock. In a recent book published by the New Economics Foundation (NEF), called Time on Our Side, they examine why a 30-hour workweek would be a more rational, efficient, and sustainable approach to the modern, developed economy. Most importantly, they say it’s totally doable–and big companies could even play a key part. Three years ago, NEF’s head of social policy Anna Coote proposed the 21-hour workweek during the Ghent TEDx conference. As a “rallying cry” to suggest radical change, the idea earned a fair share of controversy. Meanwhile, “Time on Our Side is something that we want to get people to talk about in academic circles and policy formation circles,” she told me over the phone. As a result, the book includes contributions from 16 economists and thinkers discussing ways in which the current set-up drives carbon emissions, socio-economic and gender-based inequality, and stress.
For example, the NEF included work from researcher Martin Pullinger, who found that longer working hours have a direct link to increased household greenhouse gas emissions in the United Kingdom. Other essays examine how an economy that overemphasizes constant growth and material consumption devalues professions that don’t necessarily benefit from increased output per unit of time–like teaching, for instance, or nursing. One piece included in the collection proposes that the U.K. implement something called “National Gardening Leave,” which would mandate a four-day workweek, but distribute the remainder of the time to both leisurely and productive agriculture. “Wouldn’t people prefer to spend more time doing things other than working?” Coote asks. “And if other economies are just as successful as the American economy and have markedly shorter hours–just look at Germany, for example–isn’t there an argument there that you could do things differently?” As Coote mentions, several European economies operate pretty closely to the 30-hour ideal. The average worker in Germany puts in 35 hours, but the German economy remains the fourth-largest in the world. The country also largely excludes work on Sundays, and the unemployment rate sits at a healthy 5%, compared to the United States’ 7%. But while a 30-hour workweek might appear distant from a present that prizes long working hours and constant connectedness to work through iPhone push notifications, Coote suggests that it wouldn’t be all that difficult to accomplish. In fact, she thinks that highly paid women (think along Lean In lines) could play a major role in pushing the shift.
Goal gives you the complete look at Manchester City's full fixtures list for the upcoming season, including all pre-season and Premier League games. Defending Premier League champions Manchester City will be looking to retain their title this season, and they prepared with a number of clashes against high-quality opposition. Just five days after the end of the World Cup, City kicked off their pre-season tour in the United States, taking part in the International Champions Cup alongside the likes of Barcelona, Manchester United and Liverpool. Goal brings you your complete guide to City's 2018 pre-season tour as well as their complete 2018-19 Premier League schedule with dates, venues and all you need to know about the preparations. Pep Guardiola's side joined a host of other European clubs in participating in the International Champions Cup (ICC) this summer. A Mario Gotze goal saw them lose their opening game against Borussia Dortmund 1-0, with Liverpool coming from behind to take a 2-1 win in New York. City pulled off a comeback of their own against German giants Bayern Munich, storming back from two goals down to take a 3-2 win. The Community Shield was last up, pitting the Premier League champions against FA Cup winners Chelsea, with City ultimately coming out on top with a 2-0 victory.
Man City 2018-19 squad numbers

1 GK Claudio Bravo
2 DF Kyle Walker
3 DF Danilo
4 DF Vincent Kompany
5 DF John Stones
7 FW Raheem Sterling
8 MF Ilkay Gundogan
10 FW Sergio Aguero
14 DF Aymeric Laporte
15 DF Eliaquim Mangala
17 MF Kevin De Bruyne
18 DF Fabian Delph
19 FW Leroy Sane
20 MF Bernardo Silva
21 MF David Silva
22 DF Benjamin Mendy
25 MF Fernandinho
26 MF Riyad Mahrez
27 FW Patrick Roberts
28 DF Jason Denayer
30 DF Nicolas Otamendi
31 GK Ederson
32 GK Daniel Grimshaw
33 FW Gabriel Jesus
35 MF Oleksandr Zinchenko
47 MF Phil Foden
55 MF Brahim Diaz
81 MF Claudio Gomes
- FW Daniel Arzani

Man City's 2018-19 transfer activity

Manchester City completed a club-record move for Leicester City midfielder Riyad Mahrez for £60 million ($77m), after having expressed interest in signing the forward in the January transfer window. Guardiola is expected to make more signings before the end of the transfer window on August 9, and released former Ivory Coast international Yaya Toure from the club after eight years at the Etihad.

Transfer activity in:
MF Riyad Mahrez (from Leicester, £60m, July 10)
MF Claudio Gomes (from PSG, free, July 25)
FW Daniel Arzani (from Melbourne City, undisclosed, August 9)

Transfer activity out:
DF Angelino (to PSV, £5m, June 15)
MF Yaya Toure (released, July 1)
DF Pablo Maffeo (to VfB Stuttgart, £9m, July 1)
MF Manu Garcia (to Toulouse, loan, July 7)
MF Aleix Garcia (to Girona, loan, July 9)
GK Angus Gunn (to Southampton, £13.5m, July 10)
GK Joe Hart (to Burnley, £3.5m, August 7)

Man City 2018-19 Premier League fixtures

The Premier League fixtures were officially released on June 14 and Guardiola's men will begin the defence of their title with a trip to the Emirates Stadium to face Unai Emery's Arsenal. City then host Huddersfield Town at the Etihad Stadium the following week, before travelling to Molineux to face newly promoted Wolves.
Home games against Newcastle United and Fulham - another promoted team - complete their opening five matches. The first Manchester derby of the campaign against rivals United will be played at the Etihad on November 10, with the corresponding fixture taking place at Old Trafford on March 16, 2019.
Prosthetic fitting immediately after below-knee amputation. Frequently surgical amputation of a lower extremity is required when gangrene develops as a result of peripheral vascular disease. This is particularly true in geriatric patients. A below-knee amputation, with refinements in the surgical procedure, and immediate rigid-cast prosthetic fitting are strongly advocated by our group. The progress of two patients treated in this manner is described. Preservation of the knee joint improves the amputee's prognosis for ambulation with a below-knee prosthesis. The rigid-cast dressing on the below-knee amputation reduces edema and postoperative pain, is of psychologic value to the patient, and permits him to stand at one to two days postoperatively.
By Choi Sung-jin Indebted households' disposable income increased 13 percent over the past three years but their payment burden of principal and interest surged nearly 60 percent, a Bank of Korea analysis shows. If the lending rate rises 1 percentage point, it would drive nearly 90,000 more households into marginal situations, the central bank warned. According to the analysis submitted to Rep. Shim Sang-jeong of the splinter opposition Justice Party, the debt service ratio (DSR) rose sharply from 16.3 percent in 2012 to 23.2 percent last year. DSR is a figure that indicates debt-repaying ability by comparing households' loan principal and interest payments with their disposable income. This means if the indebted households had spent 163,000 won on repaying debt out of their disposable income of 1 million won in 2012, they spent 232,000 won for the same purpose last year, the BOK analysis said. That in turn was due to a far faster pace of debt growth than income increase, it said, adding that the disposable income of indebted households increased 13.6 percent but their loan payments soared 59.7 percent. Particularly, low-income households and self-employed people saw their debts increase more steeply during the period, it said. The disposable income of the bottom 40 percent of households on the income ladder increased 17.8 percent over the period but their repayment of principal and interest surged 99.2 percent. The DSR of self-employed rose from 21.9 percent to 28.9 percent, meaning they spent nearly 300,000 won on debt repayment out of disposable income of 1 million won. Low-income self-employed people's DSR amounted to 33.8 percent. The impact of any interest rate increase will also likely be considerable, the BOK said. If the interest rate rises 1 percentage point, the number of marginal households -- families whose debts are larger than their assets and show DSR of 40 percent or higher -- is estimated to increase 88,000, from 1,342,000 to 1,430,000. 
If nominal per capita income also declines 1 percent, the number of marginal households will rise further to 1,543,000, or 14.4 percent of the total. "All this shows the urgent need for the government to come up with measures to deal with household debts in preparation for an expected interest rate hike," Rep. Shim said.
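The DSR arithmetic quoted in the article is straightforward to reproduce. The sketch below is only an illustration of the definitions given above (it is not Bank of Korea code, and the function names are invented for this example):

```python
# Debt service ratio (DSR): loan principal-and-interest payments divided
# by disposable income, per the definition in the article.

def debt_service_ratio(debt_payments, disposable_income):
    return debt_payments / disposable_income

def is_marginal(debt_payments, disposable_income, debts, assets):
    """The article's 'marginal household': debts larger than assets
    and a DSR of 40 percent or higher."""
    return debts > assets and debt_service_ratio(debt_payments, disposable_income) >= 0.40

# The per-million-won figures quoted above:
print(debt_service_ratio(163_000, 1_000_000))  # 0.163 (2012)
print(debt_service_ratio(232_000, 1_000_000))  # 0.232 (2015)
```

Both criteria must hold, which is why the BOK can project the marginal-household count separately for a rate rise (which lifts DSR) and an income decline (which lifts it further).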
No. 71 Elis Edged by Loyola Marymount, 4-3 March 11, 2004 Carson, CA - Yale dropped the second match of its West Coast swing, falling 4-3 at Loyola Marymount. After falling behind 1-0, the Bulldogs won the next three matches but could not hold on for the victory. Yale's wins came in singles play from Ryan Murphy at No. 1, Brandon Wai at No. 3 and Milosz Gudzowski at No. 5. The Lions pulled out three-set victories at No. 2, No. 4 and No. 6 singles to win.
.class public final Lcom/tencent/mm/plugin/freewifi/d/k;
.super Lcom/tencent/mm/plugin/freewifi/d/c;
.source "SourceFile"


# direct methods
.method public constructor <init>(Ljava/lang/String;Lcom/tencent/mm/protocal/b/amo;ILjava/lang/String;)V
    .locals 5

    .prologue
    .line 34
    invoke-direct {p0}, Lcom/tencent/mm/plugin/freewifi/d/c;-><init>()V

    .line 35
    invoke-virtual {p0}, Lcom/tencent/mm/plugin/freewifi/d/k;->abb()V

    .line 36
    const/16 v0, 0x8
    if-ne v0, p3, :cond_0
    const/16 v0, 0x9
    if-eq v0, p3, :cond_1

    .line 38
    :cond_0
    const-string/jumbo v0, "MicroMsg.FreeWifi.NetSceneScanAndReportNearFieldFreeWifi"
    const-string/jumbo v1, "invalid channel, channel is :%d"
    const/4 v2, 0x1
    new-array v2, v2, [Ljava/lang/Object;
    const/4 v3, 0x0
    invoke-static {p3}, Ljava/lang/Integer;->valueOf(I)Ljava/lang/Integer;
    move-result-object v4
    aput-object v4, v2, v3
    invoke-static {v0, v1, v2}, Lcom/tencent/mm/sdk/platformtools/v;->e(Ljava/lang/String;Ljava/lang/String;[Ljava/lang/Object;)V

    .line 40
    :cond_1
    iget-object v0, p0, Lcom/tencent/mm/plugin/freewifi/d/k;->bkQ:Lcom/tencent/mm/t/a;
    iget-object v0, v0, Lcom/tencent/mm/t/a;->byh:Lcom/tencent/mm/t/a$b;
    iget-object v0, v0, Lcom/tencent/mm/t/a$b;->byq:Lcom/tencent/mm/ax/a;
    check-cast v0, Lcom/tencent/mm/protocal/b/fv;

    .line 41
    iput-object p1, v0, Lcom/tencent/mm/protocal/b/fv;->jzX:Ljava/lang/String;

    .line 42
    iput-object p2, v0, Lcom/tencent/mm/protocal/b/fv;->jAQ:Lcom/tencent/mm/protocal/b/amo;

    .line 43
    iput p3, v0, Lcom/tencent/mm/protocal/b/fv;->jsW:I

    .line 44
    iput-object p4, v0, Lcom/tencent/mm/protocal/b/fv;->jsX:Ljava/lang/String;

    .line 45
    return-void
.end method


# virtual methods
.method protected final abb()V
    .locals 3

    .prologue
    const/4 v2, 0x0

    .line 19
    new-instance v0, Lcom/tencent/mm/t/a$a;
    invoke-direct {v0}, Lcom/tencent/mm/t/a$a;-><init>()V

    .line 20
    new-instance v1, Lcom/tencent/mm/protocal/b/fv;
    invoke-direct {v1}, Lcom/tencent/mm/protocal/b/fv;-><init>()V
    iput-object v1, v0, Lcom/tencent/mm/t/a$a;->byl:Lcom/tencent/mm/ax/a;

    .line 21
    new-instance v1, Lcom/tencent/mm/protocal/b/fw;
    invoke-direct {v1}, Lcom/tencent/mm/protocal/b/fw;-><init>()V
    iput-object v1, v0, Lcom/tencent/mm/t/a$a;->bym:Lcom/tencent/mm/ax/a;

    .line 22
    const-string/jumbo v1, "/cgi-bin/mmo2o-bin/bizwificonnect"
    iput-object v1, v0, Lcom/tencent/mm/t/a$a;->uri:Ljava/lang/String;

    .line 23
    const/16 v1, 0x6a9
    iput v1, v0, Lcom/tencent/mm/t/a$a;->byj:I

    .line 24
    iput v2, v0, Lcom/tencent/mm/t/a$a;->byn:I

    .line 25
    iput v2, v0, Lcom/tencent/mm/t/a$a;->byo:I

    .line 26
    invoke-virtual {v0}, Lcom/tencent/mm/t/a$a;->vA()Lcom/tencent/mm/t/a;
    move-result-object v0
    iput-object v0, p0, Lcom/tencent/mm/plugin/freewifi/d/k;->bkQ:Lcom/tencent/mm/t/a;

    .line 27
    return-void
.end method

.method public final abp()Ljava/lang/String;
    .locals 3

    .prologue
    .line 48
    iget-object v0, p0, Lcom/tencent/mm/plugin/freewifi/d/k;->bkQ:Lcom/tencent/mm/t/a;
    iget-object v0, v0, Lcom/tencent/mm/t/a;->byi:Lcom/tencent/mm/t/a$c;
    iget-object v0, v0, Lcom/tencent/mm/t/a$c;->byq:Lcom/tencent/mm/ax/a;
    check-cast v0, Lcom/tencent/mm/protocal/b/fw;

    .line 49
    iget-object v0, v0, Lcom/tencent/mm/protocal/b/fw;->jAR:Ljava/util/LinkedList;

    .line 50
    if-eqz v0, :cond_0
    invoke-virtual {v0}, Ljava/util/LinkedList;->size()I
    move-result v1
    const/4 v2, 0x1
    if-ne v1, v2, :cond_0

    .line 51
    const/4 v1, 0x0
    invoke-virtual {v0, v1}, Ljava/util/LinkedList;->get(I)Ljava/lang/Object;
    move-result-object v0
    check-cast v0, Lcom/tencent/mm/protocal/b/az;

    .line 52
    iget-object v0, v0, Lcom/tencent/mm/protocal/b/az;->jvv:Ljava/lang/String;

    .line 54
    :goto_0
    return-object v0

    :cond_0
    const/4 v0, 0x0
    goto :goto_0
.end method

.method public final getType()I
    .locals 1

    .prologue
    .line 31
    const/16 v0, 0x6a9
    return v0
.end method
Würznerhorn Würznerhorn is a mountain on the border of Liechtenstein and Switzerland in the Rätikon range of the Eastern Alps close to the town of Balzers, with a height of . See also Mittlerspitz Mittagspitz References Category:Mountains of the Alps Category:Mountains of Liechtenstein Category:Mountains of Switzerland Category:Mountains of Graubünden Category:Liechtenstein–Switzerland border Category:International mountains of Europe Category:One-thousanders of Switzerland
/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ #include "paddle/fluid/operators/lrn_op.h" #include <memory> #include <string> #include <vector> #include "paddle/fluid/operators/math/blas.h" #include "paddle/fluid/operators/math/math_function.h" #ifdef PADDLE_WITH_MKLDNN #include "paddle/fluid/platform/mkldnn_helper.h" #endif namespace paddle { namespace operators { using framework::Tensor; using DataLayout = framework::DataLayout; template <typename T> struct LRNFunctor<platform::CPUDeviceContext, T> { void operator()(const framework::ExecutionContext& ctx, const framework::Tensor& input, framework::Tensor* out, framework::Tensor* mid, int N, int C, int H, int W, int n, T k, T alpha, T beta, const DataLayout data_layout) { auto place = ctx.GetPlace(); auto blas = math::GetBlas<platform::CPUDeviceContext, T>(ctx); math::Transpose<platform::CPUDeviceContext, T, 4> transpose; auto& dev_ctx = ctx.template device_context<platform::CPUDeviceContext>(); Tensor in_transpose, mid_transpose, out_transpose; // if channel_last, transpose to channel_first if (data_layout == DataLayout::kNHWC) { auto in_dims = input.dims(); std::vector<int64_t> shape( {in_dims[0], in_dims[3], in_dims[1], in_dims[2]}); in_transpose.mutable_data<T>(framework::make_ddim(shape), place); mid_transpose.mutable_data<T>(framework::make_ddim(shape), place); out_transpose.mutable_data<T>(framework::make_ddim(shape), place); std::vector<int> axis = {0, 3, 1, 2}; 
transpose(dev_ctx, input, &in_transpose, axis); } else { in_transpose = input; mid_transpose = *mid; out_transpose = *out; mid_transpose.mutable_data<T>(mid->dims(), place); out_transpose.mutable_data<T>(out->dims(), place); } const T* idata = in_transpose.data<T>(); T* odata = out_transpose.data<T>(); T* mdata = mid_transpose.data<T>(); Tensor squared; T* sdata = squared.mutable_data<T>({1, C + n - 1, H, W}, place); std::memset(sdata, 0, sizeof(T) * squared.numel()); for (int i = 0; i < mid->numel(); ++i) { mdata[i] = k; } int img_size = H * W; int fea_size = C * img_size; int pre_pad = (n - 1) / 2; // compute batches one by one for (int i = 0; i < N; ++i) { blas.VSQUARE(fea_size, idata + i * fea_size, sdata + pre_pad * img_size); // init the first channel of mid for (int c = 0; c < n; ++c) { blas.AXPY(img_size, alpha, sdata + c * img_size, mdata + i * fea_size); } for (int c = 1; c < C; ++c) { // copy previous scale int mid_offset = i * fea_size + c * img_size; std::memcpy(mdata + mid_offset, mdata + mid_offset - img_size, img_size * sizeof(T)); // add last blas.AXPY(img_size, alpha, sdata + (c + n - 1) * img_size, mdata + mid_offset); // sub rest blas.AXPY(img_size, -alpha, sdata + (c - 1) * img_size, mdata + mid_offset); } } // compute the final output blas.VPOW(mid->numel(), mdata, -beta, odata); blas.VMUL(mid->numel(), odata, idata, odata); // if channel_last, transpose the output(NCHW) to channel_last if (data_layout == DataLayout::kNHWC) { std::vector<int> axis = {0, 2, 3, 1}; transpose(dev_ctx, mid_transpose, mid, axis); transpose(dev_ctx, out_transpose, out, axis); } } }; template struct LRNFunctor<platform::CPUDeviceContext, float>; template struct LRNFunctor<platform::CPUDeviceContext, double>; template <typename T> struct LRNGradFunctor<platform::CPUDeviceContext, T> { void operator()(const framework::ExecutionContext& ctx, const framework::Tensor& x, const framework::Tensor& out, const framework::Tensor& mid, framework::Tensor* x_g, const 
framework::Tensor& out_g, int N, int C, int H, int W, int n, T alpha, T beta, const DataLayout data_layout) { T ratio = -2 * alpha * beta; auto x_g_e = framework::EigenVector<T>::Flatten(*x_g); x_g_e = x_g_e.constant(0.0); auto e_x = framework::EigenTensor<T, 4>::From(x); auto e_x_g = framework::EigenTensor<T, 4>::From(*x_g); auto e_out = framework::EigenTensor<T, 4>::From(out); auto e_out_g = framework::EigenTensor<T, 4>::From(out_g); auto e_mid = framework::EigenTensor<T, 4>::From(mid); const int start = -(n - 1) / 2; const int end = start + n; for (int m = 0; m < N; m++) { for (int i = 0; i < C; i++) { auto offsets = Eigen::array<int, 4>({{m, i, 0, 0}}); auto extents = Eigen::array<int, 4>({{1, 1, H, W}}); if (data_layout == DataLayout::kNHWC) { offsets = Eigen::array<int, 4>({{m, 0, 0, i}}); extents = Eigen::array<int, 4>({{1, H, W, 1}}); } auto i_x = e_x.slice(offsets, extents); auto i_x_g = e_x_g.slice(offsets, extents); auto i_out_g = e_out_g.slice(offsets, extents); auto i_mid = e_mid.slice(offsets, extents); i_x_g = i_mid.pow(-beta) * i_out_g; for (int c = start; c < end; c++) { int ch = i + c; if (ch < 0 || ch >= C) { continue; } if (data_layout != DataLayout::kNHWC) { offsets = Eigen::array<int, 4>({{m, ch, 0, 0}}); } else { offsets = Eigen::array<int, 4>({{m, 0, 0, ch}}); } auto c_out = e_out.slice(offsets, extents); auto c_mid = e_mid.slice(offsets, extents); auto c_out_g = e_out_g.slice(offsets, extents); i_x_g += ratio * c_out_g * c_out * i_x / c_mid; } } } } }; template struct LRNGradFunctor<platform::CPUDeviceContext, float>; template struct LRNGradFunctor<platform::CPUDeviceContext, double>; class LRNOp : public framework::OperatorWithKernel { public: using framework::OperatorWithKernel::OperatorWithKernel; protected: void InferShape(framework::InferShapeContext* ctx) const override { OP_INOUT_CHECK(ctx->HasInput("X"), "Input", "X", "LRN"); OP_INOUT_CHECK(ctx->HasOutput("Out"), "Output", "Out", "LRN"); OP_INOUT_CHECK(ctx->HasOutput("MidOut"), 
"Output", "MidOut", "LRN"); auto x_dim = ctx->GetInputDim("X"); PADDLE_ENFORCE_EQ(x_dim.size(), 4, platform::errors::InvalidArgument( "Input(input) rank should be 4, " "but received input rank (%d) != 4", x_dim.size())); int n = ctx->Attrs().Get<int>("n"); PADDLE_ENFORCE_GT(n, 0UL, platform::errors::InvalidArgument( "Argument(n) should be positive, " "but received n(%d) not greater than 0", n)); PADDLE_ENFORCE_EQ(n % 2, 1UL, platform::errors::InvalidArgument( "Argument(n) should be odd value, " "but received n(%d) is not an odd value", n)); ctx->SetOutputDim("Out", x_dim); ctx->ShareLoD("X", /*->*/ "Out"); ctx->SetOutputDim("MidOut", x_dim); } framework::OpKernelType GetExpectedKernelType( const framework::ExecutionContext& ctx) const override { framework::LibraryType library_{framework::LibraryType::kPlain}; // TODO(pzelazko-intel): enable MKLDNN layout when it's ready framework::DataLayout layout_ = framework::DataLayout::kAnyLayout; #ifdef PADDLE_WITH_MKLDNN if (library_ == framework::LibraryType::kPlain && platform::CanMKLDNNBeUsed(ctx)) { library_ = framework::LibraryType::kMKLDNN; layout_ = framework::DataLayout::kMKLDNN; } #endif return framework::OpKernelType( OperatorWithKernel::IndicateVarDataType(ctx, "X"), ctx.GetPlace(), layout_, library_); } framework::OpKernelType GetKernelTypeForVar( const std::string& var_name, const Tensor& tensor, const framework::OpKernelType& expected_kernel_type) const override { #ifdef PADDLE_WITH_MKLDNN if ((expected_kernel_type.data_layout_ == framework::DataLayout::kMKLDNN) && (tensor.layout() != framework::DataLayout::kMKLDNN)) { auto attrs = Attrs(); auto ar = paddle::framework::AttrReader(attrs); const std::string data_format = ar.Get<std::string>("data_format"); auto dl = framework::StringToDataLayout(data_format); // Some models may have intentionally set "AnyLayout" for pool // op. 
Treat this as NCHW (default data_format value) if (dl != framework::DataLayout::kAnyLayout) { return framework::OpKernelType(expected_kernel_type.data_type_, tensor.place(), dl); } } #endif return framework::OpKernelType(expected_kernel_type.data_type_, tensor.place(), tensor.layout()); } }; template <typename T> class LRNOpMaker : public framework::OpProtoAndCheckerMaker { public: void Make() override { AddInput("X", "(Tensor) The input of LRN operator. " "It must be a 4D tensor with NCHW format."); AddOutput("Out", "(Tensor) The output of LRN operator, which is also the 4D " "tensor with NCHW format."); AddOutput("MidOut", "(Tensor) Middle result of LRN operator. It's computed in " "forward process and also used in backward process."); AddAttr<int>("n", "(int, default 5) " "n is the \"adjacent\" kernel that maps " "at the same spatial position.") .SetDefault(5) .GreaterThan(0); AddAttr<T>("k", "(float, default 2.0) " "k is the bias.") .SetDefault(2.0) .GreaterThan(0.0); AddAttr<T>("alpha", "(float, default 0.0001) " "alpha is the scale number.") .SetDefault(0.0001) .GreaterThan(0.0); AddAttr<T>("beta", "(float, default 0.75) " "beta is the power number.") .SetDefault(0.75) .GreaterThan(0.0); AddAttr<bool>("use_mkldnn", "(bool, default false) Only used in mkldnn kernel") .SetDefault(false); AddAttr<std::string>( "data_format", "(string, default \"AnyLayout\") " "An optional string from: \"NHWC\", \"NCHW\". " "Specify the data format of the input data; " "the input will be transformed automatically. ") .SetDefault("AnyLayout"); AddAttr<bool>("is_test", "(bool, default false) Set to true for inference only, false " "for training. Some layers may run faster when this is true.") .SetDefault(false); AddComment(R"DOC( Local Response Normalization Operator. This operator comes from the paper: <<ImageNet Classification with Deep Convolutional Neural Networks>>. 
The original formula is: $$ Output(i, x, y) = Input(i, x, y) / \left( k + \alpha \sum\limits^{\min(C-1, i + n/2)}_{j = \max(0, i - n/2)} (Input(j, x, y))^2 \right)^{\beta} $$ Function implementation: Inputs and outputs are in NCHW or NHWC format, while input.shape.ndims() equals 4. If NCHW, the dimensions 0 ~ 3 represent batch size, feature maps, rows, and columns, respectively. Input and Output in the formula above are for each map(i) of one image, and Input(i, x, y), Output(i, x, y) represent an element in an image. C is the number of feature maps of one image. n is a hyper-parameter configured when operator is initialized. The sum in the denominator is the sum of the same positions in the neighboring maps. )DOC"); } }; class LRNOpGrad : public framework::OperatorWithKernel { public: using framework::OperatorWithKernel::OperatorWithKernel; protected: void InferShape(framework::InferShapeContext* ctx) const override { OP_INOUT_CHECK(ctx->HasInput("X"), "Input", "X", "LRNGrad"); OP_INOUT_CHECK(ctx->HasInput("MidOut"), "Input", "MidOut", "LRNGrad"); OP_INOUT_CHECK(ctx->HasInput(framework::GradVarName("Out")), "Input", "Out@GRAD", "LRNGrad"); auto x_dims = ctx->GetInputDim("X"); ctx->SetOutputDim(framework::GradVarName("X"), x_dims); } framework::OpKernelType GetExpectedKernelType( const framework::ExecutionContext& ctx) const override { framework::LibraryType library_{framework::LibraryType::kPlain}; // TODO(pzelazko-intel): enable MKLDNN layout when it's ready framework::DataLayout layout_ = framework::DataLayout::kAnyLayout; #ifdef PADDLE_WITH_MKLDNN if (library_ == framework::LibraryType::kPlain && platform::CanMKLDNNBeUsed(ctx)) { library_ = framework::LibraryType::kMKLDNN; layout_ = framework::DataLayout::kMKLDNN; } #endif return framework::OpKernelType( OperatorWithKernel::IndicateVarDataType(ctx, "X"), ctx.GetPlace(), layout_, library_); } framework::OpKernelType GetKernelTypeForVar( const std::string& var_name, const Tensor& tensor, const 
framework::OpKernelType& expected_kernel_type) const override { #ifdef PADDLE_WITH_MKLDNN if ((expected_kernel_type.data_layout_ == framework::DataLayout::kMKLDNN) && (tensor.layout() != framework::DataLayout::kMKLDNN)) { auto attrs = Attrs(); auto ar = paddle::framework::AttrReader(attrs); const std::string data_format = ar.Get<std::string>("data_format"); auto dl = framework::StringToDataLayout(data_format); // Some models may have intentionally set "AnyLayout" for lrn // op. Treat this as NCHW (default data_format value) if (dl != framework::DataLayout::kAnyLayout) { return framework::OpKernelType(expected_kernel_type.data_type_, tensor.place(), dl); } } #endif return framework::OpKernelType(expected_kernel_type.data_type_, tensor.place(), tensor.layout()); } }; template <typename T> class LRNGradOpMaker : public framework::SingleGradOpMaker<T> { public: using framework::SingleGradOpMaker<T>::SingleGradOpMaker; void Apply(GradOpPtr<T> op) const override { op->SetType(this->ForwardOpType() + "_grad"); op->SetInput("X", this->Input("X")); op->SetInput("Out", this->Output("Out")); op->SetInput("MidOut", this->Output("MidOut")); op->SetInput(framework::GradVarName("Out"), this->OutputGrad("Out")); op->SetOutput(framework::GradVarName("X"), this->InputGrad("X")); op->SetAttrMap(this->Attrs()); } }; } // namespace operators } // namespace paddle namespace ops = paddle::operators; REGISTER_OPERATOR(lrn, ops::LRNOp, ops::LRNOpMaker<float>, ops::LRNGradOpMaker<paddle::framework::OpDesc>, ops::LRNGradOpMaker<paddle::imperative::OpBase>); REGISTER_OPERATOR(lrn_grad, ops::LRNOpGrad); REGISTER_OP_CPU_KERNEL( lrn, ops::LRNKernel<paddle::platform::CPUDeviceContext, float>); REGISTER_OP_CPU_KERNEL( lrn_grad, ops::LRNGradKernel<paddle::platform::CPUDeviceContext, float>);
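The LRN forward pass implemented by the operator above can be cross-checked with a short NumPy reference. This sketch is our own illustration, not part of the Paddle sources; it applies the formula from the operator's doc comment (note that, as in the C++ kernel, `alpha` is applied to the raw channel-window sum without division by `n`):

```python
import numpy as np

def lrn_forward(x, n=5, k=2.0, alpha=1e-4, beta=0.75):
    """NumPy reference for the NCHW LRN forward pass:
    out[i] = x[i] / (k + alpha * sum_{j in window(i)} x[j]^2)^beta,
    with a window of n adjacent channels centred on channel i."""
    N, C, H, W = x.shape
    half = (n - 1) // 2  # n is required to be odd, as the InferShape check enforces
    mid = np.full_like(x, k)
    for i in range(C):
        lo, hi = max(0, i - half), min(C - 1, i + half)
        mid[:, i] += alpha * (x[:, lo:hi + 1] ** 2).sum(axis=1)
    return x * mid ** (-beta), mid
```

The returned `mid` tensor corresponds to the operator's `MidOut` output, which the backward pass reuses.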
--- abstract: 'Cardiovascular disease (CVD) is the global leading cause of death. A strong risk factor for CVD events is the amount of coronary artery calcium (CAC). To meet demands of the increasing interest in quantification of CAC, i.e. coronary calcium scoring, especially as an unrequested finding for screening and research, automatic methods have been proposed. Current automatic calcium scoring methods are relatively computationally expensive and only provide scores for one type of CT. To address this, we propose a computationally efficient method that employs two ConvNets: the first performs registration to align the fields of view of input CTs and the second performs direct regression of the calcium score, thereby circumventing time-consuming intermediate CAC segmentation. Optional decision feedback provides insight in the regions that contributed to the calcium score. Experiments were performed using 903 cardiac CT and 1,687 chest CT scans. The method predicted calcium scores in less than 0.3s. Intra-class correlation coefficient between predicted and manual calcium scores was 0.98 for both cardiac and chest CT. The method showed almost perfect agreement between automatic and manual CVD risk categorization in both datasets, with a linearly weighted Cohen’s kappa of 0.95 in cardiac CT and 0.93 in chest CT. Performance is similar to that of state-of-the-art methods, but the proposed method is hundreds of times faster. By providing visual feedback, insight is given in the decision process, making it readily implementable in clinical and research settings.' author: - 'Bob D. de Vos, Jelmer M. Wolterink, Tim Leiner, Pim A. de Jong, Nikolas Lessmann, Ivana Išgum [^1]' title: Direct Automatic Coronary Calcium Scoring in Cardiac and Chest CT --- Calcium scoring, Cardiac CT, Chest CT, Deep Learning, Convolutional Neural Network, Atlas-Registration, Regression. Introduction ============ Cardiovascular disease (CVD) is the global leading cause of death[@gbd2016]. 
To reduce the burden of cardiovascular disease, the World Health Organization underlines the need for early detection and treatment of individuals with CVD or those who are at high cardiovascular risk due to the presence of one or more risk factors [@whofactsheet]. A strong and independent risk factor for CVD events, e.g. myocardial infarction, is the quantity of coronary artery calcium (CAC) [@yeboah2012; @hecht2015; @hecht2017]. Quantification of CAC, i.e. calcium scoring, is typically performed in dedicated non-contrast-enhanced ECG-synchronized cardiac CT scans [@hecht2015]. Alternatively, calcium scoring can be performed in other non-contrast-enhanced CTs that visualize the heart; e.g. in low-dose CT attenuation correction scans acquired in hybrid PET/CT and SPECT/CT [@einstein2010; @mylonas2012], or in radiation therapy planning CTs of breast cancer patients [@gernaat2016]. Furthermore, it has been shown that calcium scoring in lung screening low-dose chest CT scans is a predictor for all-cause mortality [@jacobs2010; @chiles2015]. In fact, in the National Lung Screening Trial (NLST) CVD was the leading cause of mortality [@nlst2011b]. Thus, CAC quantification, especially as an unrequested finding, has garnered much attention. Clinically, calcium scoring is performed by experts who manually identify CAC in CT image slices. This is a tedious process of finding and selecting high density voxels in the coronary arteries, commonly defined as two or more connected voxels above 130 Hounsfield Units (HU). In scans not dedicated to calcium scoring this can be particularly cumbersome because of high noise, low resolution, and motion artifacts. Subsequently, when lesions are identified, region growing is used to fully segment the calcified lesions. Finally, after all CAC lesions have been segmented, CAC is quantified using the Agatston score [@agatston1990]. The Agatston score takes into account the lesion area and the weighted maximum density of the lesion. 
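As a rough illustration of the per-lesion computation just described (our own sketch, not the authors' implementation), the Agatston score of a lesion in one axial slice is its area multiplied by a weight derived from its peak attenuation; the 130/200/300/400 HU thresholds are the standard Agatston definitions:

```python
def agatston_weight(max_hu):
    """Standard Agatston density weight from a lesion's peak attenuation (HU)."""
    if max_hu < 130:
        return 0  # below the CAC threshold, not scored
    if max_hu < 200:
        return 1
    if max_hu < 300:
        return 2
    if max_hu < 400:
        return 3
    return 4

def agatston_lesion_score(area_mm2, max_hu):
    """Score of one lesion in one axial slice: area (mm^2) times density weight."""
    return area_mm2 * agatston_weight(max_hu)

def total_agatston(lesions):
    """Total score of a scan; lesions is an iterable of (area_mm2, max_hu)
    tuples collected over all axial slices."""
    return sum(agatston_lesion_score(a, h) for a, h in lesions)
```

Note that for slice thicknesses other than the standard 3 mm, a slice-increment correction is commonly applied; that detail is omitted in this sketch.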
This score can be used to stratify patients into risk categories [@rumberger1999]. The additional cost involved with manual calcium scoring makes the process prohibitive in settings where it is not the primary request. To simplify the task, qualitative stratification into CVD risk groups was proposed [@shemesh2010; @chiles2015]. Qualitative calcium scoring is faster and it demonstrates good inter-rater agreement. However, such an analysis still demands experts who closely inspect the scans. With the ever-increasing number of scans and the increasing interest in calcium scoring, especially as an unrequested finding, the use of fully-automatic methods might be the preferred direction. Several automatic methods have been introduced for calcium scoring in non-contrast-enhanced CT, ranging from rule-based approaches [@gonzalez2016; @xie2017], to the better-performing conventional machine learning approaches [@isgum2012; @shahzad2013; @wolterink2015; @durlak2017] and recent deep learning approaches [@wolterink2015miccai; @wolterink2016; @lessmann2016; @lessmann2018]. The main difficulty in automatic calcium scoring is to differentiate CAC from other dense structures. Since CAC exclusively resides in the walls of the coronary arteries, most of the automatic methods exploit this prior knowledge. Išgum et al. [@isgum2012] introduced the first method for automatic calcium scoring in chest CT. CAC lesions were described with features and subsequently classified using a two-stage classification approach of k-nearest neighbor and support vector classification. Alongside texture, size, and shape features, location features proved highly important for CAC identification. Location features were determined by registering an input image to an atlas image and by extracting the location features from a map of a priori spatial probabilities of CAC. 
The probability map was created from known CAC locations in 237 chest CTs that were registered to a single previously chosen atlas image. Shahzad et al. [@shahzad2013] used a similar machine learning approach for calcium scoring in cardiac CT, but they employed pair-wise deformable image registration to ten atlases that encoded the coronary arteries. The atlases were made from 85 contrast-enhanced CT angiography scans with annotated coronary arteries. The methods of Išgum et al. [@isgum2012] and Shahzad et al. [@shahzad2013] relied on feature selection methods to reduce dimensionality. Wolterink et al. [@wolterink2015] circumvented feature selection by using an extremely randomized trees classifier. Their method also depended on location features that were obtained by deformable image registration of ten atlases with encoded coronary arteries, but these were obtained from non-contrast-enhanced CTs. Durlak et al. [@durlak2017] combined the principles of the aforementioned methods: they employed a random forest and an a priori probability map of coronary artery locations, built from coronary arteries automatically extracted from cardiac CT angiography images. Instead of using time-consuming deformable image registration to align input images and atlas images, they achieved a speed-up by using affine registration. Similarly, other methods employed information from CTA to aid calcium scoring in cardiac CT. These methods were specifically designed for the coronary calcium score (orCaScore) challenge, and employed rule-based image analysis or conventional machine learning [@wolterink2016orcascore]. The most recently proposed methods employ deep learning for automatic calcium scoring, in particular convolutional neural networks (ConvNets). ConvNets are known for their automatic feature extraction capabilities and alleviate the need for handcrafting features. Wolterink et al. [@wolterink2016] used ConvNets to classify CAC in cardiac CT angiography scans. 
All voxels were classified using a pair of ConvNets. One ConvNet identified voxels likely to be CAC and discarded the majority of non-CAC-like voxels such as lung and fatty tissue. The other ConvNet more precisely discriminated between CAC and CAC-like negatives. In the method of Lessmann et al.  [@lessmann2016] a single ConvNet was used that classified candidate CAC lesions in lung screening chest CTs. To simplify the classification tasks, both these deep learning methods used an additional ConvNet that localized the heart with a bounding box [@devos2017localization]. More recently, the method of Lessmann et al.[@lessmann2018] fully exploited the feature extraction capabilities of ConvNets without dedicated localization methods. They employed two sequential ConvNets to classify CAC as well as aortic valve, mitral valve, and aorta calcifications in chest CT. The first ConvNet identified candidate calcifications based on their location, and the second ConvNet refined the classification results by reducing false positive errors. ![In a typical automatic calcium scoring workflow, CAC is first identified and subsequently quantified. The proposed method uses ConvNet regression to quantify CAC in image slices directly.[]{data-label="fig:workflows"}](Workflow.pdf){width="\linewidth"} While all aforementioned methods use different strategies, they all follow a workflow similar to current clinical calcium scoring: CAC is first identified and thereafter quantified. The automatic methods show high accuracy, but often at considerable computational cost. Employing these methods on large datasets would require dedicated servers. To alleviate computational cost, we propose a workflow that circumvents intermediate identification and that performs direct quantification (see Figure \[fig:workflows\]). Direct quantification has proven to be useful for atrial and ventricle volume quantification [@hussain2017; @zhen2017; @xue2018]. 
Furthermore, attempts are being made to use it for calcium scoring. In our preliminary study we presented a direct calcium scoring method that uses 2-D ConvNet regression [@devos2017rsna; @devos2017arxiv]. The method performs direct calcium scoring in extracted image slices from bounding boxes cropped around the heart. In a recently proposed method, Cano-Espinosa et al. used a 3-D regression ConvNet for direct calcium scoring in downsampled CT volumes also cropped around the heart. However, their method could not be used in 14% of the scans, because heart localization failed. Furthermore, previously proposed automatic calcium scoring methods are dedicated to either cardiac CT or chest CT. These methods required retraining for application in other types of CT [@gernaat2016; @isgum2017]. We present an automatic method that performs real-time direct calcium scoring in different types of non-contrast-enhanced CT. Unlike previous methods that focused on a single type of CT, the proposed method is able to perform calcium scoring directly in multiple types of CT by using an unsupervised deep learning atlas-registration method to align their fields of view (FOVs). For this we employ two ConvNets: one for atlas-registration and one for calcium scoring, as shown in Figure \[fig:pipeline\]. The atlas-registration ConvNet makes the FOV of input CT images alike using Deep Learning Image Registration (DLIR) [@devos2017registration; @devos2018media] further developed to facilitate atlas-registration. Subsequently, a calcium scoring ConvNet predicts the calcium score in image slices mimicking clinical calcium scoring with the Agatston score. When desired, decision feedback can be queried for every slice with a predicted calcium score. For this purpose, a visual attention heatmap accurately reveals the regions that contributed to the calcium score. 
The method provides robust and accurate predictions of calcium scores and it is computationally efficient, obtaining an Agatston score in less than 0.3s in cardiac and chest CT.

Data
====

This study included two datasets used in previous studies that presented automatic coronary calcium scoring in cardiac CT [@wolterink2015] and in chest CT [@lessmann2018]. To allow a direct comparison of methods, the original training, validation, and test set distributions were used.

Cardiac CT
----------

The set of 903 cardiac CT scans (age range: 18 to 88 years, 31% women) originates from a set of routinely acquired scans for clinical calcium scoring of the University Medical Center Utrecht, Utrecht, The Netherlands. The need for informed consent was waived by the local Medical Research Ethics Committee. Scans were acquired with a 256-detector row Philips Brilliance iCT scanner (tube voltage 120kVp, tube current 55mAs) during a single breath-hold, with ECG-triggering and without contrast enhancement. The images were reconstructed to 3mm slice thickness and slice increment with in-plane resolution ranging from 0.29mm to 0.49mm, depending on patient size. The dataset was divided into 237 scans for training, 136 scans for validation, and a hold-out test set of 530 scans used only for final evaluation.

Chest CT
--------

The set of 1,687 chest CT scans (age range: 43 to 74 years, 39% women) originates from a set of 6,000 available baseline scans from the National Lung Screening Trial (NLST) [@nlst2011b]. All scans were acquired during inspiratory breath-hold without contrast enhancement. Scans were acquired in 31 different hospitals with 120 or 140kVp tube voltage and 30-160mAs tube current. Axial image slices were reconstructed with varying kernels, varying slice thickness (1.00-3.00mm), varying slice increments (0.63-3.00mm), and with varying in-plane resolutions (0.49-0.98mm per voxel). 
In our study, scans with fewer than 100 slices or slices thicker than 3.00mm were not considered, because they were not adequate for calcium scoring. Furthermore, the scans were resampled to 3.00mm slice thickness and 1.50mm slice increment to make the scans suitable for calcium scoring [@rutten2011]. The dataset was divided into 1,012 scans for training, 169 scans for validation, and a hold-out test set of 506 scans used only for final evaluation.

  ------------ ------------ ----- ---- ----- ----- -----
  Cardiac CT   Training      120   14    33    29    41
               Validation     68   14    28    15    11
               Test          260   49    89    70    62
  Chest CT     Training      272   76   207   205   252
               Validation     39   14    46    30    40
               Test          128   42    99   112   125
  ------------ ------------ ----- ---- ----- ----- -----

  : Number of scans per CVD risk category for training, validation, and test sets. CVD risk categorization is based on the total Agatston score per scan: very low $<1$, low $[1, 10)$, moderate $[10, 100)$, moderately high $[100, 400)$, high $\geq400$[]{data-label="tab:riskcatsalldata"}

Reference standard {#sec:refstandard}
------------------

The reference standard was defined by experts who manually identified CAC lesions in the scans. CAC lesions were segmented following a standard procedure: region growing was used to select 26-connected voxels $\geq$130HU. In the chest CTs with low radiation dose this procedure could lead to faulty segmentations (i.e. leakage) because of excessive noise. In such cases annotations were manually corrected by voxel painting [@lessmann2018]. Agatston scores were calculated in each axial slice for training. Total Agatston scores for each scan were calculated for final evaluation. Additionally, each subject was assigned to one of five CVD risk categories [@rumberger1999] based on the Agatston score: very low: $<$1; low: \[1, 10); moderate: \[10, 100); moderately high: \[100, 400); high: $\geq$400. Table \[tab:riskcatsalldata\] provides an overview of the number of scans per risk category per dataset. 
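The score-to-category mapping used for evaluation can be written down directly. This is our own sketch; the interval boundaries are taken from the text, and the category labels follow the table caption:

```python
def cvd_risk_category(agatston):
    """Map a total Agatston score to one of the five CVD risk
    categories (Rumberger et al.) used for evaluation."""
    if agatston < 1:
        return "very low"
    if agatston < 10:
        return "low"
    if agatston < 100:
        return "moderate"
    if agatston < 400:
        return "moderately high"
    return "high"
```

Agreement between automatic and manual categorization is then measured on these five labels, e.g. with a linearly weighted Cohen's kappa.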
Methods
=======

The method employs two ConvNets in sequence (Figure \[fig:pipeline\]). The first ConvNet registers input CTs to a cardiac CT atlas image. The second ConvNet performs calcium scoring. When desired, visual feedback can be queried for image slices with a score. For this purpose an attention heatmap reveals the regions that contributed to the calcium score.

![Schematics of the proposed method. Input CTs of varying FOV are first aligned using an atlas-registration ConvNet. Subsequently, a calcium scoring ConvNet is used for direct calcium scoring in image slices. Finally, decision feedback can be visualized when desired.[]{data-label="fig:pipeline"}](Pipeline_1.pdf){width="\linewidth"}

Atlas-registration strategy {#sec:method:registration}
---------------------------

An atlas-registration ConvNet ensures that all input images have a similar FOV and resemble a cardiac CT. The ConvNet is trained with a modified version of our framework for Deep Learning Image Registration (DLIR) [@devos2017registration]. The DLIR framework uses an end-to-end unsupervised approach that trains a ConvNet for image registration. Similar to a conventional intensity-based image registration framework, it exploits optimization of an image similarity metric. Figure \[fig:dlirframework\] shows the schematics of training an atlas-registration ConvNet using the atlas image as a static fixed image. The task of the ConvNet is to analyze moving images and predict the transformation parameters that warp the moving images to the atlas image. Image similarity between the atlas and the warped image is used for backpropagation during training. By optimizing image similarity (e.g. minimizing negative cross correlation) with gradient descent, the atlas-registration ConvNet learns the registration task in an unsupervised manner. After training, the ConvNet can register unseen moving images in one shot. 
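The similarity loss that drives this unsupervised training can be illustrated with a minimal NumPy sketch. This is our own simplification (the negative cross-correlation mentioned above, written as a standalone function; in practice it would be the differentiable loss of the registration ConvNet):

```python
import numpy as np

def ncc_loss(warped, atlas):
    """Negative normalized cross-correlation between a warped moving image
    and the fixed atlas image; lower means more similar (-1 is a perfect match)."""
    w = warped - warped.mean()
    a = atlas - atlas.mean()
    denom = np.sqrt((w * w).sum() * (a * a).sum()) + 1e-8  # guard against zero variance
    return -float((w * a).sum() / denom)
```

During training, this loss would be evaluated on the warped moving image produced by the interpolator and backpropagated through the ConvNet; at test time only a single forward pass is needed.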
A cardiac atlas-image is created using an iterative inter-subject registration strategy [@jongen2004]. With this strategy an initial atlas image is made by averaging multiple images. The atlas image is iteratively refined by registering the individual images to the atlas. Subsequently, the final atlas image is used to train the atlas-registration ConvNets that align cardiac and chest CTs for subsequent calcium scoring.

![DLIR framework used to train a registration ConvNet. During a forward pass (indicated by the thick blue arrow) the registration ConvNet analyzes moving images and outputs transformation parameters. The transformation parameters are used by the interpolator to warp the moving image. During a backward pass (indicated by the thick red arrow) an image similarity loss (i.e. dissimilarity) is determined between the warped image and a fixed template image, and the resulting loss is backpropagated through the ConvNet. The ConvNet is trained in multiple iterations of forward and backward passes, with mini-batch stochastic gradient descent. Once the ConvNet has been trained for registration it can take a moving image as its input and output registration parameters in one pass, thus non-iteratively.[]{data-label="fig:dlirframework"}](DLIR.pdf){width="\linewidth"}

Atlas-registration ConvNet training
-----------------------------------

For registration we propose a global 3-D rigid registration model with six degrees of freedom (shown in Figure \[fig:degrees\_of\_freedom\]). The model allows translations in any direction, but rotations are restricted to the axial ($z$) axis. Furthermore, scaling in the axial plane is isotropic and independent from scaling along the axial axis. These restrictions preserve the relation of reference Agatston scores that are defined on the original (unregistered) axial image slices. This facilitates training of the subsequent calcium scoring ConvNet.
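The six-parameter model described above can be made concrete by composing translation, axial rotation, and anisotropic scaling into one homogeneous matrix. This is a hedged numpy sketch; the parameter names follow the paper, but the helper itself is ours:

```python
import numpy as np

# Hedged sketch of the six-degree-of-freedom transformation model:
# translation (tx, ty, tz), rotation about the axial axis (theta_z),
# isotropic in-plane scaling (sxy), and independent axial scaling (sz),
# composed into a single 4x4 homogeneous matrix T @ R @ S.
def rigid_matrix(tx, ty, tz, theta_z, sxy, sz) -> np.ndarray:
    T = np.eye(4)
    T[:3, 3] = (tx, ty, tz)                       # translation
    R = np.eye(4)
    c, s = np.cos(theta_z), np.sin(theta_z)
    R[0, 0], R[0, 1], R[1, 0], R[1, 1] = c, -s, s, c  # rotation about z
    S = np.diag([sxy, sxy, sz, 1.0])              # anisotropic scaling
    return T @ R @ S
```

With identity parameters the matrix reduces to the identity, and a pure translation moves the origin to $(t_x, t_y, t_z)$.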
![Rigid transformation model used to train the registration ConvNet. The six degrees of freedom ($t_x$, $t_y$, $t_z$, $\theta_z$, $s_{xy}$, $s_z$) allow translation in any direction, rotation around the axial axis, and uniform scaling in the axial plane independent from scaling along the axial direction. By constraining the registration to the proposed transformation model, we can trivially exploit the model parameters for selection and warping of axial slices that are presented to the calcium scoring ConvNet.[]{data-label="fig:degrees_of_freedom"}](DegreesOfFreedom.pdf){width=".35\linewidth"}

We use a computationally efficient ConvNet architecture that is listed in Table \[tab:convnetdesigns\]. For fast analysis, images are downsampled close to 3mm isotropic voxel dimensions; i.e. 6$\times$6$\times$1 downsampling for cardiac CT, and 6$\times$6$\times$2 downsampling for chest CT, using average pooling. The ConvNet has three alternating layers of 3$\times$3$\times$3 convolutions and 2$\times$2$\times$2 average pooling, followed by two layers of 3$\times$3$\times$3 convolution. To obtain a fixed-size output, global average pooling is applied before two fully connected layers. The final output layer has six nodes, one for each transformation parameter. Throughout the network exponential linear units are used for activation, except in the output nodes. Three output nodes are unconstrained translation parameters ($t_x$, $t_y$, $t_z$), the rotation parameter ($\theta_z$) is constrained with a hyperbolic tangent between $-\pi$ and $\pi$, and the two scaling parameters ($s_{xy}$, $s_z$) are constrained with a hyperbolic tangent to scaling factors between $0.25$ and $4$.
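One plausible realization of these output constraints is sketched below. The text does not specify the exact range mappings, so the log-symmetric mapping for the scaling factors is an assumption:

```python
import numpy as np

# Hedged sketch of the output-node constraints. The rotation output is
# squashed by tanh and scaled into [-pi, pi]. For the scaling factors, a
# log-symmetric mapping onto [0.25, 4] is one plausible choice (an
# assumption, not stated in the paper): tanh output in (-1, 1) is mapped
# through exp(t * ln 4), so 0 maps to a neutral scale of 1.
def constrain(theta_raw: float, scale_raw: float) -> tuple:
    theta_z = float(np.pi * np.tanh(theta_raw))
    scale = float(np.exp(np.tanh(scale_raw) * np.log(4.0)))
    return theta_z, scale
```

A raw output of zero yields no rotation and unit scaling, which is a sensible initialization point for the registration network.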
These output parameters are used to constitute the following 3-D transformation matrix: $$T_\textrm{3D} = \setlength\arraycolsep{2pt} \begin{bmatrix} 1 & 0 & 0 & t_x \\ 0 & 1 & 0 & t_y \\ 0 & 0 & 1 & t_z \\ 0 & 0 & 0 & 1 \\ \end{bmatrix} \begin{bmatrix} \cos\theta_z & -\sin\theta_z & 0 & 0 \\ \sin\theta_z & \phantom{-}\cos\theta_z & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix} \begin{bmatrix} s_{xy} & 0 & 0 & 0 \\ 0 & s_{xy} & 0 & 0 \\ 0 & 0 & s_z & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix}$$

Atlas-registration ConvNet inference
------------------------------------

We train an atlas-registration ConvNet for 3-D registration, but we use it for slice selection and 2-D warping. As a consequence, correspondence is guaranteed between warped axial slices and the per-slice calcium scores. Axial image slices are extracted from the original image from $t_z$ to $t_z + d_z/s_z$, where $d_z$ is the depth of the atlas image along the axial axis. These slices are resampled using bi-linear interpolation to a 256$\times$256 grid with the following 2-D transformation matrix: $$T_\textrm{2D} = \begin{bmatrix*} 1 & 0 & t_x \\ 0 & 1 & t_y \\ 0 & 0 & 1 \\ \end{bmatrix*} \begin{bmatrix*} \cos\theta_z & -\sin\theta_z & 0 \\ \sin\theta_z & \phantom{-}\cos\theta_z & 0 \\ 0 & 0 & 1 \\ \end{bmatrix*} \begin{bmatrix*} s_{xy} & 0 & 0 \\ 0 & s_{xy} & 0 \\ 0 & 0 & 1 \\ \end{bmatrix*}$$

  **Atlas-Registration ConvNet**         **Calcium Scoring ConvNet**
  -------------------------------------- -----------------------------
  512$\times$512$\times$N 3-D input      256$\times$256 2-D input
  6$\times$6$\times${1,2} Avg. Pooling   224$\times$224 cropping
  32\*3$\times$3$\times$3 Convolutions   32\*3$\times$3 Convolutions
  2$\times$2$\times$2 Avg. Pooling       2$\times$2 Max Pooling
  32\*3$\times$3$\times$3 Convolutions   32\*3$\times$3 Convolutions
  2$\times$2$\times$2 Avg. Pooling       2$\times$2 Max Pooling
  32\*3$\times$3$\times$3 Convolutions   32\*3$\times$3 Convolutions
  2$\times$2$\times$2 Avg. Pooling       2$\times$2 Max Pooling
  32\*3$\times$3$\times$3 Convolutions   32\*3$\times$3 Convolutions
  32\*3$\times$3$\times$3 Convolutions   2$\times$2 Max Pooling
  Global Avg. Pooling                    32\*3$\times$3 Convolutions
                                         2$\times$2 Max Pooling
                                         32\*3$\times$3 Convolutions
                                         2$\times$2 Max Pooling
  64 Fully Connected Nodes               64 Fully Connected Nodes
  64 Fully Connected Nodes               64 Fully Connected Nodes
  6 Output Nodes                         1 Output Node

  : Efficient ConvNet architectures were used for atlas-registration as well as calcium scoring.[]{data-label="tab:convnetdesigns"}

Calcium scoring ConvNet {#sec:cacscoremethod}
-----------------------

The calcium scoring ConvNet employs direct regression to predict an Agatston score from input axial image slices. The choice of 2-D ConvNets, in favor of 3-D ConvNets, is based on the number of samples that are available for training: there are more image slices available than image volumes. Furthermore, 2-D image analysis mimics clinical calculation of the Agatston calcium score, which is performed in 2-D axial slices: $$\textrm{Agatston Score} = \sum_{S \in V}\sum_{l \in S}{A_l\, w_l\,\frac{i_S}{t_S}}\,,$$ where $l$ is a 2-D CAC lesion in a slice $S$ of a CT volume $V$, and $A_l$ is the area of the lesion. The intensity weight $w_l$ is based on the maximum radio-density in HU of a 2-D lesion in the following manner: 1 = \[130, 200), 2 = \[200, 300), 3 = \[300, 400), and 4 = $\geq$400. The Agatston score is corrected when image slices are overlapping, i.e. when slice increment $i_S$ is not equal to slice thickness $t_S$ [@ohnesorg2002]. Agatston scores are dependent on the CAC lesion area. Given that input images have different voxel sizes, we chose to simplify the prediction task by determining a pseudo-Agatston score. This score is obtained by cancelling out the axial pixel dimensions, the slice increment, and the slice thickness of the original Agatston score.
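The clinical per-slice Agatston computation just described can be sketched as follows. This is a hedged illustration: the helper names are ours, and the example inputs in the comments are invented:

```python
import numpy as np

# Hedged sketch of per-slice Agatston scoring as defined above.
def density_weight(max_hu: float) -> int:
    """Intensity weight w_l from the maximum radio-density (HU) of a lesion."""
    if max_hu < 130:
        return 0
    if max_hu < 200:
        return 1
    if max_hu < 300:
        return 2
    if max_hu < 400:
        return 3
    return 4

def slice_agatston(hu, lesion_masks, pixel_area, incr, thick) -> float:
    """Agatston score of one axial slice.

    hu: 2-D array of the slice in HU; lesion_masks: one boolean mask per
    2-D CAC lesion; pixel_area: in-plane pixel area in mm^2; incr/thick:
    slice increment i_S and thickness t_S for the overlap correction.
    """
    score = 0.0
    for mask in lesion_masks:
        area = mask.sum() * pixel_area        # A_l in mm^2
        w = density_weight(hu[mask].max())    # w_l from maximum HU
        score += area * w
    return score * incr / thick               # overlap correction i_S / t_S
```

For example, a lesion of four 1mm^2 pixels whose maximum density is 250HU receives weight 2 and contributes a score of 8 when slice increment equals slice thickness.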
The resulting target is the product of the number of voxels in a lesion $n_l$, the predicted slice scaling factor $s_{xy}$, and the intensity weight $w_l$: $$\textrm{Pseudo-Agatston Score} = \sum_{l \in S} n_l \cdot s_{xy} \cdot w_l\,.$$ The calcium scoring ConvNet uses an efficient architecture that is listed in Table \[tab:convnetdesigns\]. It analyzes random image crops of 224$\times$224 pixels during training and center crops during application. It has alternating layers of 3$\times$3 convolutions and $2\times2$ max pooling, followed by two fully connected layers and an output layer of one node. Throughout the network batch normalization [@ioffe2015] is used and exponential linear units are used for activation [@clevert2016]. The final output node has a linear output to facilitate continuous prediction. However, given that clinically used CVD risk categories are exponentially increasing, the task of the calcium scoring ConvNet was modified to learn a log-transform of the pseudo-Agatston score: $$L = |\hat{y} - \ln(y + 1)|\,,$$ where $\hat{y}$ is the predicted score, and $y$ is the reference pseudo-Agatston score. The log-transform induces relatively high penalties for erroneous low calcium score predictions, and relatively low penalties for erroneous high calcium score predictions. Consequently, higher precision is enforced for lower calcium burden, which is favorable for CVD risk stratification. During application of the calcium scoring ConvNet, the predicted outputs are converted back to the original Agatston scores.

Decision feedback
-----------------

By employing regression of calcium scores, we circumvent time-consuming intermediate segmentation. On the other hand, it may be desirable to visualize the regions in an image slice that contributed to the calcium score. Inspired by the study of Zeiler and Fergus [@zeilerfergus2014], we provide such visualization by using a deConvNet.
The deConvNet uses the same operations of filtering and pooling as a ConvNet, but in reverse order from output to input. The reverse operations map the activities back to the input pixel space, showing which input patterns originally contributed to the activations in the feature maps. To obtain a smooth visual attention heatmap, the deConvNet is applied up to the third convolutional layer, taking the absolute value per feature of this layer, and summing these features along the feature map dimension to get a 2-D matrix. Using third-order interpolation, we obtain a smooth map that can be superimposed on the image slice as a heatmap. The resulting heatmap visualizes attention by highlighting the regions that contributed to the Agatston score.

Evaluation
==========

Automatically predicted per-subject Agatston scores were compared with manually determined reference scores. Evaluations were performed on the hold-out test sets, which were not used during method development. The two-way mixed intra-class correlation coefficient (ICC) for absolute agreement was computed and Bland-Altman analysis was performed to evaluate bias between predicted and reference Agatston scores. In addition, for each subject, the CVD risk category was determined based on the Agatston score as defined in Section \[sec:refstandard\]. Agreement between predicted and reference CVD risk categories was determined using accuracy and Cohen’s linearly weighted kappa ($\kappa$).

Experiments and results
=======================

In this section we evaluate the atlas-registration ConvNet, the calcium scoring ConvNet, and the quality of decision feedback. In addition, we evaluate whether the calcium scoring ConvNet needs to be trained on all data, or whether it can be trained on one dataset and applied to the other. Finally, we compare the proposed method with state-of-the-art automatic calcium scoring methods.
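The Bland-Altman analysis mentioned in the Evaluation section can be sketched in a few lines. This is a hedged illustration of the standard bias and limits-of-agreement computation; the example scores are invented:

```python
import numpy as np

# Hedged sketch of Bland-Altman analysis: the bias is the mean difference
# between predicted and reference scores, and the limits of agreement lie
# at bias +/- 1.96 sample standard deviations of the differences.
def bland_altman(predicted, reference):
    d = np.asarray(predicted, float) - np.asarray(reference, float)
    bias = float(d.mean())
    sd = float(d.std(ddof=1))
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

In the paper, the per-subject differences are additionally plotted against the mean of the two scores, which the sketch omits.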
All experiments were performed with Theano [@theano2016], Lasagne [@lasagne2015], and OpenCV [@opencv] on an Intel Xeon E5-1620 3.60GHz CPU with an NVIDIA Titan X GPU.

Atlas-registration ConvNet
--------------------------

Figure \[fig:atlas:atlas1\] shows the initial atlas image that was created by aligning all cardiac training images using their geometric centroids. We chose the median dimensions and voxel sizes of all the cardiac training images to define the atlas image space. The atlas can be iteratively refined, but given the constraints of the global registration model used here, only one update was sufficient. The final atlas image, shown in Figure \[fig:atlas:atlas2\], was used to train the atlas-registration ConvNets for cardiac and chest CT alignment. Thus, in total three ConvNet instances were trained: one to create an atlas image, one for cardiac CT alignment, and one for chest CT alignment. All ConvNets were trained in 15,000 iterations with mini-batches containing 32 randomly selected images. Training took about 40 hours per ConvNet. Adam [@kingma2014] was used with a learning rate of 0.001 for mini-batch gradient descent. To illustrate performance of the atlas-registration ConvNets, Figure \[fig:atlas\] shows images before and after registration. Figure \[fig:atlas:cardiac1\] shows the average image of the 530 cardiac CT images from the test set before registration and Figure \[fig:atlas:cardiac2\] shows these images after registration. Similarly, Figure \[fig:atlas:chest1\] shows an average image of the 506 chest CTs before registration and Figure \[fig:atlas:chest2\] shows these after registration. Note the similarity of the registered images with the refined atlas image shown in Figure \[fig:atlas:atlas2\]. Quantitative evaluation of registration results revealed that registration erroneously cropped CAC out of the selected slices.
Between one and four image slices containing CAC were not selected in three cardiac CTs and three chest CTs. Upon closer inspection, two of the chest CTs had calcifications in the aortic arch and descending aorta incorrectly labeled as CAC in the reference, thereby affecting CVD risk categorization. Nevertheless, these annotations were left uncorrected in further analysis to facilitate a fair comparison with previously developed methods. The registration errors did not have an adverse effect on CVD risk categorization in the other cases.

Calcium scoring ConvNet {#sec:calciumscoringconvnet}
-----------------------

The calcium scoring ConvNet was trained in 150,000 iterations using Adam [@kingma2014]. Training took 21 hours with 100 image slices per mini-batch randomly selected from the registered image slices taken from the cardiac and chest CT training sets. High imbalance between the minority of slices with a calcium score and the majority of slices with zero calcium score prevented convergence during ConvNet training. To ensure convergence, the numbers of image slices with CAC (Agatston score $>0$) and without CAC (Agatston score $=0$) were balanced during training. To prevent bias, training continued on the full imbalanced training set after 10,000 iterations. Additionally, we ensured stable convergence by decreasing the learning rate to 10% of its previous value every 50,000 iterations.

  Reference / Automatic   very low   low      moderate   moderately high   high
  ----------------------- ---------- -------- ---------- ----------------- --------
  very low                **259**    0        1          0                 0
  low                     9          **36**   4          0                 0
  moderate                2          3        **82**     2                 0
  moderately high         0          1        2          **65**            2
  high                    0          0        0          11                **51**

  : Confusion matrix showing agreement in CVD risk categorization on the cardiac CT test set, based on the total Agatston scores: very low $<1$, low $[1, 10)$, moderate $[10, 100)$, moderately high $[100, 400)$, high $\geq400$. Rows list the reference categories and columns the automatically determined categories.
  Reference / Automatic   very low   low      moderate   moderately high   high
  ----------------------- ---------- -------- ---------- ----------------- ---------
  very low                **118**    6        4          0                 0
  low                     8          **29**   5          0                 0
  moderate                3          8        **85**     3                 0
  moderately high         1          1        7          **99**            4
  high                    0          0        0          3                 **122**

  : Confusion matrix showing agreement in CVD risk categorization on the chest CT test set, based on the total Agatston scores: very low $<1$, low $[1, 10)$, moderate $[10, 100)$, moderately high $[100, 400)$, high $\geq400$. Rows list the reference categories and columns the automatically determined categories. The corresponding linearly weighted $\kappa$ is reported in the text.[]{data-label="fig:confusion:fullset"}

After training, the test sets were used to evaluate the calcium scoring ConvNet. Per-subject scores show high intraclass correlation coefficients (ICC); the ICCs for cardiac CT and chest CT were both 0.98, with 95% confidence intervals of 0.98 to 0.99. A slight positive bias in cardiac and chest CT is visualized in the Bland-Altman plots shown in Figure \[fig:blandaltman\]. This was mainly caused by overestimations of the higher Agatston scores. However, this was not noticeable in CVD risk stratification. Table \[fig:confusion:fullset\] shows confusion matrices of predicted risk categories vs. the manual reference standard. In cardiac CT calcium scoring only four scans were two categories off, and in chest CT calcium scoring eight scans were two categories off. The scan that was three categories off was a scan with incorrectly annotated aorta calcium, as discussed in the previous section. Nonetheless, overall agreement was *almost perfect* [@mchugh2012] with Cohen’s linearly weighted $\kappa$s of 0.95 in cardiac CT and 0.93 in chest CT. Accuracy in CVD risk categorization was 0.93 for cardiac CT and 0.90 for chest CT.
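These agreement figures can be recomputed directly from the cardiac CT confusion matrix. The sketch below (hedged; the helper is ours) reproduces the reported accuracy of 0.93 and linearly weighted $\kappa$ of 0.95:

```python
import numpy as np

# Cardiac CT confusion matrix from the table above
# (rows = reference categories, columns = automatic categories).
cm = np.array([[259, 0, 1, 0, 0],
               [9, 36, 4, 0, 0],
               [2, 3, 82, 2, 0],
               [0, 1, 2, 65, 2],
               [0, 0, 0, 11, 51]], dtype=float)

def weighted_kappa(cm: np.ndarray) -> float:
    """Cohen's linearly weighted kappa with weights 1 - |i - j| / (k - 1)."""
    k = cm.shape[0]
    i, j = np.indices((k, k))
    w = 1.0 - np.abs(i - j) / (k - 1)
    n = cm.sum()
    expected = np.outer(cm.sum(axis=1), cm.sum(axis=0)) / n
    po = (w * cm).sum() / n          # weighted observed agreement
    pe = (w * expected).sum() / n    # weighted chance agreement
    return float((po - pe) / (1.0 - pe))

accuracy = float(np.trace(cm) / cm.sum())  # fraction on the diagonal
```

Running the same computation on the chest CT matrix yields the values reported for that test set.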
Because efficient network architectures are used, the method achieves high speed even on a single CPU core: a calcium score is obtained within 5s for cardiac CT and within 11s for chest CT. When using a GPU, calcium scoring can be performed in real-time. Including image registration and image resampling, a calcium score for cardiac CT is obtained in less than 0.15s and for chest CT in less than 0.30s.

Decision feedback
-----------------

Decision feedback visualizes attention of the calcium scoring ConvNet. This feedback informs an end-user about the regions that contributed to the calcium score. Figure \[fig:feedback\] shows examples of such feedback. The feedback helps an expert to quickly navigate to and evaluate the image slices containing CAC.

We propose visual feedback as an optional qualitative tool, but we have performed a quantitative analysis to provide insight into its accuracy. To obtain quantitative results we analyzed heatmaps for slices with predicted calcium scores. The heatmaps were warped to the original image spaces by using the inverse transformation matrices. The values of the heatmaps were scaled between 0 and 1 to mimic probability maps for CAC candidate voxels. CAC candidates were defined as high-density 26-connected voxels with a volume between 1.5 and 1,500mm^3^ [@wolterink2015]. For evaluation of these maps we performed precision-recall analysis (Figure \[fig:precisionrecall\]). We defined an optimal threshold by selecting the maximum F1 (i.e. Dice) score on the validation set. Table \[tab:feedback\_evaluation\] shows the obtained scores using the selected threshold on the test sets. The results show that detection performance is very accurate on the validation set as well as the test set.

![Precision-recall curve of CAC segmentation using the obtained visual feedback heatmaps. The analysis is performed on the validation set to obtain an optimal threshold for evaluation.
Optimal F1 score was 0.81 at a threshold of 0.27. Final results for quantitative evaluation of visualization feedback are shown in Table \[tab:feedback\_evaluation\].[]{data-label="fig:precisionrecall"}](precision_recall.pdf){width=".8\linewidth"}

Additionally, decision feedback aided our analysis by clarifying incorrect calcium scores. Decision feedback revealed that the largest CVD miscategorizations were not caused by incorrect quantification but by incorrect recognition of CAC. Figure \[fig:incorrect\] shows six examples of the largest miscategorizations made by the calcium scoring ConvNet. The majority of errors were made in identification of calcifications near the coronary artery ostia. Calcifications near the ostia can be partly in the aorta and partly in the coronary artery. These calcifications are difficult to distinguish, especially when no information of neighboring slices is available.

                    Cardiac CT   Chest CT
  ----------------- ------------ ----------
  Precision         0.77         0.78
  Recall            0.85         0.86
  Accuracy          0.99         0.99
  F1 (Dice) score   0.81         0.82

  : Quantitative evaluation of visual feedback. Evaluation was performed by segmenting CAC lesions with the visualization feedback. An optimal threshold was selected using precision-recall analysis on the validation data shown in Figure \[fig:precisionrecall\]. Final results show that visualization by the heatmap is as accurate on the validation set as on the test set.[]{data-label="tab:feedback_evaluation"}

Influence of training data and registration
-------------------------------------------

For clinical application it would be useful to know whether the method needs training data from both datasets or whether data from one set suffices, and whether atlas-registration is required. Thus, we performed experiments using different combinations of training data with and without atlas-registration, as listed in Table \[tab:allexperiments\].
The calcium scoring ConvNets were trained with either cardiac CT images, chest CT images, or a combination thereof. To balance cardiac and chest CT data, a subset of chest CT images was created by taking images from 237 randomly selected subjects and by removing every other slice in the chest CT images. Additionally, the histograms shown in Figure \[fig:histograms\] provide insight into the distribution of calcium amount in the training data. Note that the chest CT subset has a very similar distribution compared to the cardiac CT training set.

![Histograms of per-slice Agatston scores of the registered training datasets. Note that Agatston scores shown here are not corrected by the factor $\frac{i_S}{t_S}$. Please see Section \[sec:cacscoremethod\] for application of this correction factor in the Agatston score.[]{data-label="fig:histograms"}](histogram.pdf){width="\columnwidth"}

The best performance was achieved using atlas-registration with a calcium scoring ConvNet trained on all cardiac and chest CT images. Lower scores are found when a calcium scoring ConvNet is only trained with cardiac CT or the subset of chest CTs. However, combining the two datasets increased the scores notably, giving a performance close to the ConvNet trained with all images. Furthermore, the results show that atlas-registration facilitated training on one type of data and high performance on the other: the ConvNet trained with the full set of chest CTs achieved a high performance on the cardiac CT test images that was very close to the best results.

  Training data                  CTs     Slices    Fraction CAC   $\kappa$   Acc.       ICC        $\kappa$   Acc.       ICC
  ------------------------------ ------- --------- -------------- ---------- ---------- ---------- ---------- ---------- ----------
  *Without atlas-registration*
  Cardiac CT                     237     10,468    10.4%          0.92       0.89       0.89       0.46       0.41       0.24
  Chest CT                       1,012   211,353   6.6%           0.48       0.59       0.24       0.91       0.86       0.93
  Cardiac + Chest CT             1,239   221,821   6.7%           0.90       0.86       0.87       0.92       0.88       0.94
  *With atlas-registration*
  Cardiac CT                     237     10,016    10.9%          0.92       0.88       0.97       0.86       0.79       0.90
  Chest CT subset                237     11,716    14.8%          0.91       0.86       0.95       0.90       0.85       0.93
  Cardiac + Chest CT subset      574     21,732    13.0%          0.94       0.92       **0.99**   0.91       0.88       0.97
  Chest CT                       1,012   100,379   13.8%          0.94       0.91       0.98       **0.93**   0.89       **0.98**
  Cardiac + Chest CT             1,239   110,395   13.5%          **0.95**   **0.93**   0.98       **0.93**   **0.90**   **0.98**

  : Performance of calcium scoring ConvNets trained with different combinations of training data, with and without atlas-registration. The first group of $\kappa$, accuracy (Acc.), and ICC columns refers to the cardiac CT test set, and the second group to the chest CT test set.[]{data-label="tab:allexperiments"}

Comparison with other methods
-----------------------------

Table \[tab:comparison\] shows a comparison with other state-of-the-art calcium scoring methods by Wolterink et al. [@wolterink2015] and Lessmann et al. [@lessmann2018] using the same datasets. The proposed method achieves similar performance compared to these methods, but it is hundreds of times faster. Even when run on a single core of a CPU, the method achieves high speed. Additionally, we list results from other direct calcium scoring methods by González et al. [@gonzalez2016] and Cano-Espinosa et al. [@cano2018] using chest CT data from the COPDGene study [@regan2010]. We provide similar performance metrics to give an indication, but please note that a direct comparison between these methods and ours was not possible.

  Method                              Source     Number   ICC    $\rho$   $\kappa$   acc.   $\kappa$   acc.   $\kappa$   acc.   CPU     GPU
  ----------------------------------- ---------- -------- ------ -------- ---------- ------ ---------- ------ ---------- ------ ------- -------
  Wolterink et al. [@wolterink2015]   UMCU       530      0.96   –        0.95       0.91   –          –      –          –      20min   –
  Proposed method                     UMCU       530      0.97   0.99     0.95       0.93   0.95       0.96   0.94       0.93   5s      0.15s
  Cano-Espinosa et al. [@cano2018]    COPDGene   1,000    –      0.93     –          –      –          –      0.80       0.76   –       –
  Lessmann et al. [@lessmann2018]     NLST       506      –      –        –          –      0.91       0.91   –          –      –       7min
  Proposed method                     NLST       506      0.98   0.97     0.93       0.90   0.92       0.91   0.93       0.90   11s     0.30s

  : Comparison with other automatic calcium scoring methods. ICC and correlation ($\rho$) refer to agreement of Agatston scores, the $\kappa$ and accuracy (acc.) pairs refer to agreement in CVD risk categorization, and the CPU and GPU columns list computation times per scan.[]{data-label="tab:comparison"}

Performance on orCaScore data
-----------------------------

We evaluated our method on data from the orCaScore challenge [@wolterink2016orcascore]. This challenge provides data to evaluate methods for coronary calcium scoring. The data consist of non-contrast-enhanced ECG-triggered cardiac CTs acquired on CT scanners from four different vendors in four different hospitals. Training data is provided, but we evaluated our method on the test set of 40 patients without retraining. Table \[tab:orcascoreresult\] shows the obtained confusion matrix and lists the results of dedicated cardiac CT calcium scoring methods that competed in the challenge. Given that our method does not differentiate between CAC locations, we only provide total calcium scoring results.

  Reference / Automatic   I       II       III     IV
  ----------------------- ------- -------- ------- --------
  I                       **8**   0        0       0
  II                      0       **12**   0       0
  III                     0       0        **8**   0
  IV                      0       0        1       **11**

  : Results of the proposed method on orCaScore challenge data. The confusion matrix shows agreement in CVD risk categorization based on the total Agatston scores: I: $0$, II: $[1, 100)$, III: $[100, 300)$, IV: $>300$.[]{data-label="tab:orcascoreresult"}

  Method                        $\kappa$   Acc.   ICC
  ----------------------------- ---------- ------ ------
  A [@shahzad2013]              0.88       0.85   0.97
  B [@wolterink2016orcascore]   0.98       0.98   0.99
  C [@wolterink2016orcascore]   0.96       0.95   0.98
  D [@wolterink2016orcascore]   0.80       0.80   0.60
  E [@wolterink2015]            1.00       1.00   0.99
  Ours                          0.98       0.98   0.98

  : Comparison with other methods evaluated in the orCaScore challenge [@wolterink2016orcascore].

Per-artery calcium scores
-------------------------

Routine coronary artery calcium scoring is typically performed per artery. Currently, only total coronary calcium scores are reported and used for CVD risk prediction.
For research purposes, per-artery calcium scores might provide interesting additional information. Hence, we evaluated performance of the proposed method for per-artery calcium scoring, i.e. scoring in the LAD, LCX, and RCA. We chose to combine CAC scores in the LM and LAD, since it is difficult, if not impossible, to differentiate them in chest CT scans. The direct scoring ConvNet was adapted by changing the number of output nodes from one to three. Similar to the experiment described in Section \[sec:calciumscoringconvnet\], training started with a balanced set of image slices with and without calcium scores for the first 10,000 iterations and continued with the full set of image slices thereafter. Additionally, each mini-batch had at least three image slices containing each type of arterial calcification. Risk categories are clinically not defined for per-artery calcium scores, but they can be obtained for total calcium scores computed by summation of the per-artery scores. The results are listed in Table \[tab:perarteryscores\].

               LAD    LCX    RCA    $\kappa$   Acc.   ICC
  ------------ ------ ------ ------ ---------- ------ ------
  Cardiac CT   0.93   0.88   0.97   0.94       0.91   0.97
  Chest CT     0.91   0.80   0.98   0.92       0.88   0.96

  : Intraclass correlation coefficients (ICC) for per-artery calcium scores (LAD, LCX, RCA). Since CVD risk categories are not defined for per-artery scores, CVD risk categorization was evaluated with linearly weighted $\kappa$ and accuracy (Acc.) on the total calcium scores obtained by summation; the last column lists the ICC of these total scores.[]{data-label="tab:perarteryscores"}

Discussion
==========

We have presented a method for automatic coronary calcium scoring in cardiac CT and chest CT. The method uses an atlas-registration ConvNet to align FOVs, making input images alike. The atlas-registration ConvNet is trained for 3-D registration, but its rigid model is constrained to enable 2-D slice selection and 2-D image warping.
Selected and warped input image slices are presented to a calcium scoring ConvNet that directly predicts the Agatston score in these slices. The method circumvents time-consuming CAC segmentation. To provide decision feedback, a visual attention heatmap can be generated that shows the regions in an image contributing to the calcium score. The method achieves excellent agreement for calcium score prediction and for CVD risk categorization compared to manual calcium scoring. The method achieves similar performance compared to state-of-the-art methods, but achieves it hundreds of times faster.

In preliminary experiments we found that only a small ConvNet architecture was able to learn direct calcium scoring. Large ConvNet architectures were unstable and failed to converge during training. By limiting the degrees of freedom of a ConvNet, i.e. by using a small architecture, we were able to train a ConvNet that learned to differentiate coronary calcification from other types of calcification, e.g. aorta calcification, pericardium calcification, and heart valve calcification. To simplify the problem we extracted bounding boxes around the heart in our preliminary work [@devos2017rsna; @devos2017arxiv]. However, this was a supervised method that classified presence of the heart in image slices. In case of noisy images, consecutive image slices could have discontinuous predictions. Discontinuous predictions resulted in an incorrect bounding box extracting a partial heart. For the atlas-registration used in our current work this is not an issue. The atlas-registration ConvNets were highly successful in pre-alignment of input CTs, i.e. in slice selection and image warping. Only 4 out of 1,036 test images had slices containing CAC that were missed by erroneous slice selection. Erroneous slice selection was likely caused by incorrect focus of the atlas-registration ConvNet on high-contrast areas like the diaphragm.
A mask drawn around the heart might steer focus of the ConvNet and might increase registration performance. Alternatively, a simple adjustment could be made by padding slice selection with some extra slices. Nonetheless, the errors caused by registration had negligible impact on calcium scoring and did not affect CVD risk categorization. Calcium scoring is better with atlas-registration than without it. Moreover, registration allows training and application of direct calcium scoring on datasets with different FOVs.

In general, accuracy of predicted Agatston scores was high, although Bland-Altman analysis showed that the method underestimated scores for subjects with high Agatston scores. In fact, this was by design, because the method estimates a log-transformed Agatston score, which induces relatively low precision for higher scores, and high precision for lower scores. Because the clinically used CVD risk categories are based on exponentially increasing Agatston scores, it is more important to differentiate between subjects at low to moderate risk than to differentiate between subjects at high risk. Thus, we imposed this higher precision on lower Agatston scores. Still, the largest CVD miscategorizations were found in the lower risk categories. Miscategorization was predominantly caused by incorrect identification of CAC and aortic calcifications near the coronary artery ostia. Even manual classification of these calcifications can be very difficult when they spread from the aorta through an ostium into the coronary artery. It often involves inspecting multiple adjacent slices in 3-D. Thus, performance of the method might be improved by exploiting additional 3-D information in future work. Additionally, performance might improve by increasing input image resolution. The current resolution was chosen based on the majority of chest CT images, being roughly half the resolution of cardiac CTs.
Nevertheless, even though all cardiac CTs were downsampled, high performance was obtained in these scans. The proposed method shows near perfect agreement in CVD risk categorization compared to manual calcium scoring, even when trained with a relatively low number of scans from a single dataset. Interestingly, training on one type of data allowed the model to be applied to the other type of data without any modifications or transfer learning. However, we found that a model trained on only chest CT led to better results than a model trained on only cardiac CT. One potential reason may be the distribution of CAC in the datasets: the population of ex-heavy smokers typically has more CAC [@jacobs2010] than the population undergoing calcium scoring cardiac CT. However, Figure \[fig:histograms\] shows that the distribution of CAC in equally sized datasets of cardiac CT and chest CT is similar. An alternative reason could be the presence of motion artifacts, which are nearly absent in ECG-synchronized cardiac CT but abundant in non-ECG-synchronized chest CT; a model trained on chest CT may therefore be more robust to such artifacts. While our experiments indicated that the cardiac and chest CT datasets complement each other, a calcium scoring ConvNet trained with only chest CTs almost matched the performance of the best performing ConvNet.

Additionally, we have shown that the method obtained near perfect CVD risk categorization results on cardiac CTs from the orCaScore challenge, without requiring retraining on representative data from the different hospitals and vendors. Having a single system that can handle potentially any CT scan that visualizes the heart would be very practical in a routine radiology setting. In future work we will investigate whether the method can be readily applied to other types of CT scans without retraining or fine-tuning.
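To make the risk categorization step discussed above concrete, the sketch below maps an Agatston score to a CVD risk category and shows why a log-transformed regression target concentrates precision on low scores. The band thresholds (10/100/400) and the log(1+x) transform are common choices used here purely for illustration; the paper's exact categorization scheme and transform are not restated in this section.

```python
import math

# Commonly used Agatston risk bands (illustrative; the paper's exact
# categorization scheme may differ).
BANDS = [(0, "I"), (10, "II"), (100, "III"), (400, "IV")]

def risk_category(agatston):
    """Map an Agatston score to a CVD risk category."""
    for upper, cat in BANDS:
        if agatston <= upper:
            return cat
    return "V"

def to_regression_target(agatston):
    """log(1 + x) compresses high scores: a fixed absolute error in the
    target corresponds to a larger Agatston error at the high end."""
    return math.log1p(agatston)

def from_regression_target(t):
    return math.expm1(t)

# The same +0.1 error in log space is ~1 Agatston point near a score of
# 10, but ~40 points near a score of 400, matching the observed
# underestimation for high scores.
for score in (10, 400):
    err = from_regression_target(to_regression_target(score) + 0.1) - score
    print(round(err, 1))  # prints 1.2, then 42.2
```

Because the category boundaries grow roughly exponentially, the large absolute errors at high scores rarely cross a category boundary, which is consistent with the near perfect risk categorization reported above.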
Additionally, we have shown that the method can provide per-artery calcium scores. While this is not required for CVD risk categorization, it might be of interest for clinical research. In terms of ICC [@koo2016], per-artery calcium scoring achieved *good reliability* ($>0.75$) in the LCX, and *excellent reliability* ($>0.90$) in the LAD and the RCA. In addition, determination of CVD risk using combined per-artery scores led to *almost perfect agreement* (${\kappa>0.90}$) [@mchugh2012]. Nevertheless, performance was slightly better when total calcium was determined directly. This difference in performance may be a consequence of the increased complexity of the per-artery scoring task while using the same number of training samples.

The proposed method can compute a calcium score hundreds of times faster than previously proposed methods, mainly owing to one-shot (i.e. non-iterative) registration and direct quantification using regression. The direct calcium scoring method circumvents time-consuming intermediate segmentation. The method might also be suitable for, e.g., determination of the volume, (pseudo-)mass, or number of CAC lesions, and for quantification of other lesions or diverse anatomical structures. The benefit of a segmentation approach over direct scoring is that it provides immediate insight to the end-user. We mitigate this shortcoming of direct scoring by providing decision feedback with a visual attention heatmap; in this way valuable feedback is still provided whenever an end-user requires it.

Conclusion
==========

We have presented an automatic method for direct calcium scoring in cardiac CT and chest CT. The method employs two ConvNets: one for atlas-registration, to align the FOV of input images to an atlas image made from cardiac CTs, and one for direct calcium scoring of input image slices using regression. The method achieves robust and accurate predictions of calcium scores in real-time.
By providing visual feedback, insight is given into the decision process, making the method readily implementable in clinical and research settings.

References
==========

“Global, regional, and national life expectancy, all-cause mortality, and cause-specific mortality for 249 causes of death, 1980-2015: a systematic analysis for the global burden of disease study 2015,” *Lancet*, vol. 388, no. 10053, pp. 1459–1544, Oct 2016.

“Cardiovascular diseases (CVDs) [fact sheet].”

J. Yeboah, R. McClelland, T. Polonsky, et al., “Comparison of novel risk markers for improvement in cardiovascular risk assessment in intermediate-risk individuals,” *JAMA*, vol. 308, no. 8, pp. 788–795, 2012.

H. S. Hecht, “Coronary artery calcium scanning: Past, present, and future,” *JACC: Cardiovascular Imaging*, vol. 8, no. 5, pp. 579–596, 2015.

H. S. Hecht, P. Cronin, M. J. Blaha, M. J. Budoff, E. A. Kazerooni, J. Narula, D. Yankelevitz, and S. Abbara, “2016 SCCT/STR guidelines for coronary artery calcium scoring of noncontrast noncardiac chest CT scans: A report of the Society of Cardiovascular Computed Tomography and Society of Thoracic Radiology,” *Journal of Thoracic Imaging*, vol. 32, no. 5, pp. W54–W66, 2017.

A. J. Einstein, L. L. Johnson, S. Bokhari, J. Son, R. C. Thompson, T. M. Bateman, S. W. Hayes, and D. S. Berman, “Agreement of visual estimation of coronary artery calcium from low-dose CT attenuation correction scans in hybrid PET/CT and SPECT/CT with standard Agatston score,” *Journal of the American College of Cardiology*, vol. 56, no. 23, pp. 1914–1921, Nov 2010.

I. Mylonas, M. Kazmi, L. Fuller, R. A. deKemp, Y. Yam, L. Chen, R. S. Beanlands, and B. J. W. Chow, “Measuring coronary artery calcification using positron emission tomography-computed tomography attenuation correction images,” *European Heart Journal Cardiovascular Imaging*, vol. 13, no. 9, pp. 786–792, Sep 2012.

S. A. M. Gernaat, I. Išgum, B. D. de Vos, R. A. P. Takx, D. A. Young-Afat, N. Rijnberg, D. E. Grobbee, Y. van der Graaf, P. A. de Jong, T. Leiner, et al., “Automatic coronary artery calcium scoring on radiotherapy planning CT scans of breast cancer patients: Reproducibility and association with traditional cardiovascular risk factors,” *PLOS ONE*, vol. 11, no. 12, p. e0167925, Dec 2016.

P. C. Jacobs, M. Prokop, Y. van der Graaf, M. J. Gondrie, K. J. Janssen, H. J. de Koning, I. Išgum, R. J. van Klaveren, M. Oudkerk, B. van Ginneken, and W. P. Mali, “Comparing coronary artery calcium and thoracic aorta calcium for prediction of all-cause mortality and cardiovascular events on low-dose non-gated computed tomography in a high-risk population of heavy smokers,” *Atherosclerosis*, vol. 209, no. 2, pp. 455–462, 2010.

C. Chiles, F. Duan, G. W. Gladish, J. G. Ravenel, S. G. Baginski, B. S. Snyder, S. DeMello, S. S. Desjardins, R. F. Munden, and NLST Study Team, “Association of coronary artery calcification and mortality in the national lung screening trial: A comparison of three scoring methods,” *Radiology*, vol. 276, no. 1, pp. 82–90, 2015.

“Reduced lung-cancer mortality with low-dose computed tomographic screening,” *New England Journal of Medicine*, vol. 365, no. 5, pp. 395–409, 2011.

A. S. Agatston, W. R. Janowitz, F. J. Hildner, N. R. Zusmer, M. Viamonte, and R. Detrano, “Quantification of coronary artery calcium using ultrafast computed tomography,” *Journal of the American College of Cardiology*, vol. 15, no. 4, pp. 827–832, 1990.

J. A. Rumberger, B. H. Brundage, D. J. Rader, and G. Kondos, “Electron beam computed tomographic coronary calcium scanning: A review and guidelines for use in asymptomatic persons,” *Mayo Clinic Proceedings*, vol. 74, no. 3, pp. 243–252, Mar 1999.

J. Shemesh, C. I. Henschke, D. Shaham, R. Yip, A. O. Farooqi, M. D. Cham, D. I. McCauley, M. Chen, J. P. Smith, D. M. Libby, et al., “Ordinal scoring of coronary artery calcifications on low-dose CT scans of the chest is predictive of death from cardiovascular disease,” *Radiology*, vol. 257, no. 2, pp. 541–548, Nov 2010.

G. González, G. R. Washko, and R. S. J. Estépar, “Automated Agatston score computation in a large dataset of non ECG-gated chest computed tomography,” in *2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI)*, Apr 2016, pp. 53–57.

Y. Xie, S. Liu, A. Miller, J. A. Miller, S. Markowitz, A. Akhund, and A. P. Reeves, “Coronary artery calcification identification and labeling in low-dose chest CT images,” in *Proceedings of SPIE*, vol. 10134, 2017.

I. Išgum, M. Prokop, M. Niemeijer, M. A. Viergever, and B. van Ginneken, “Automatic coronary calcium scoring in low-dose chest computed tomography,” *IEEE Transactions on Medical Imaging*, vol. 31, no. 12, pp. 2322–2334, 2012.

R. Shahzad, T. van Walsum, M. Schaap, A. Rossi, S. Klein, A. C. Weustink, P. J. de Feyter, L. J. van Vliet, and W. J. Niessen, “Vessel specific coronary artery calcium scoring: an automatic system,” *Academic Radiology*, vol. 20, no. 1, pp. 1–9, Jan 2013.

J. M. Wolterink, T. Leiner, R. A. P. Takx, M. A. Viergever, and I. Išgum, “Automatic coronary calcium scoring in non-contrast-enhanced ECG-triggered cardiac CT with ambiguity detection,” *IEEE Transactions on Medical Imaging*, vol. 34, no. 9, pp. 1867–1878, Sep 2015.

F. Durlak, M. Wels, C. Schwemmer, M. Sühling, S. Steidl, and A. Maier, *Growing a Random Forest with Fuzzy Spatial Features for Fully Automatic Artery-Specific Coronary Calcium Scoring*, ser. Lecture Notes in Computer Science. Springer, Cham, Sep 2017, pp. 27–35.

J. M. Wolterink, T. Leiner, M. A. Viergever, and I. Išgum, “Automatic coronary calcium scoring in cardiac CT angiography using convolutional neural networks,” in *Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015*, N. Navab, J. Hornegger, W. M. Wells, and A. Frangi, Eds. Cham: Springer International Publishing, 2015, pp. 589–596.

J. M. Wolterink, T. Leiner, B. D. de Vos, R. W. van Hamersvelt, M. A. Viergever, and I. Išgum, “Automatic coronary artery calcium scoring in cardiac CT angiography using paired convolutional neural networks,” *Medical Image Analysis*, vol. 34, pp. 123–136, Dec 2016.

N. Lessmann, I. Išgum, A. A. A. Setio, B. D. de Vos, F. Ciompi, P. A. de Jong, M. Oudkerk, W. P. T. M. Mali, M. A. Viergever, and B. van Ginneken, “Deep convolutional neural networks for automatic coronary calcium scoring in a screening study with low-dose chest CT,” in *Proceedings of SPIE*, G. D. Tourassi and S. G. Armato, Eds., vol. 9785, Mar 2016, p. 978511.

N. Lessmann, B. van Ginneken, M. Zreik, P. A. de Jong, B. D. de Vos, M. A. Viergever, and I. Išgum, “Automatic calcium scoring in low-dose chest CT using deep neural networks with dilated convolutions,” *IEEE Transactions on Medical Imaging*, vol. 37, no. 2, pp. 615–625, Feb 2018.

J. M. Wolterink, T. Leiner, B. D. de Vos, J.-L. Coatrieux, B. M. Kelm, S. Kondo, R. A. Salgado, R. Shahzad, H. Shu, M. Snoeren, R. A. P. Takx, L. J. van Vliet, T. van Walsum, T. P. Willems, G. Yang, Y. Zheng, M. A. Viergever, and I. Išgum, “An evaluation of automatic coronary artery calcium scoring methods with cardiac CT using the orCaScore framework,” *Medical Physics*, vol. 43, no. 5, pp. 2361–2373, 2016.

B. D. de Vos, J. M. Wolterink, P. A. de Jong, T. Leiner, M. A. Viergever, and I. Išgum, “ConvNet-based localization of anatomical structures in 3-D medical images,” *IEEE Transactions on Medical Imaging*, vol. 36, no. 7, pp. 1470–1481, July 2017.

M. A. Hussain, A. Amir-Khalili, G. Hamarneh, and R. Abugharbieh, “Segmentation-free kidney localization and volume estimation using aggregated orthogonal decision CNNs,” in *Medical Image Computing and Computer-Assisted Intervention – MICCAI 2017*, M. Descoteaux, L. Maier-Hein, A. Franz, P. Jannin, D. L. Collins, and S. Duchesne, Eds. Cham: Springer International Publishing, 2017, pp. 612–620.

X. Zhen, H. Zhang, A. Islam, M. Bhaduri, I. Chan, and S. Li, “Direct and simultaneous estimation of cardiac four chamber volumes by multioutput sparse regression,” *Medical Image Analysis*, vol. 36, pp. 184–196, 2017.

W. Xue, G. Brahm, S. Pandey, S. Leung, and S. Li, “Full left ventricle quantification via deep multitask relationships learning,” *Medical Image Analysis*, vol. 43, pp. 54–65, 2018.

B. D. de Vos, N. Lessmann, P. A. de Jong, M. A. Viergever, and I. Išgum, “Direct coronary artery calcium scoring in low-dose chest CT using deep learning analysis,” The Radiological Society of North America’s Annual Meeting, 2017.

B. D. de Vos, N. Lessmann, P. A. de Jong, M. A. Viergever, and I. Išgum, “Direct and real-time cardiovascular risk prediction,” *arXiv:1712.02982 [cs]*, 2017.

I. Išgum, B. D. de Vos, J. M. Wolterink, D. Dey, D. S. Berman, M. Rubeaux, T. Leiner, and P. J. Slomka, “Automatic determination of cardiovascular risk by CT attenuation correction maps in Rb-82 PET/CT,” *Journal of Nuclear Cardiology*, Apr 2017.

B. D. de Vos, F. F. Berendsen, M. A. Viergever, M. Staring, and I. Išgum, “End-to-end unsupervised deformable image registration with a convolutional neural network,” in *Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support: Third International Workshop, DLMIA 2017, and 7th International Workshop, ML-CDS 2017, Held in Conjunction with MICCAI 2017, Québec City, QC, Canada, September 14, Proceedings*. Cham: Springer International Publishing, 2017, pp. 204–212.

B. D. de Vos, F. F. Berendsen, M. A. Viergever, H. Sokooti, M. Staring, and I. Išgum, “A deep learning framework for unsupervised affine and deformable image registration,” *Medical Image Analysis*, 2018.

A. Rutten, I. Išgum, and M. Prokop, “Calcium scoring with prospectively ECG-triggered CT: using overlapping datasets generated with MPR decreases inter-scan variability,” *European Journal of Radiology*, vol. 80, no. 1, pp. 83–88, Oct 2011.

C. Jongen, J. P. W. Pluim, P. J. Nederkoorn, M. A. Viergever, and W. J. Niessen, “Construction and evaluation of an average CT brain image for inter-subject registration,” *Computers in Biology and Medicine*, vol. 34, no. 8, pp. 647–662, Dec 2004.

B. Ohnesorge, T. Flohr, R. Fischbach, A. Kopp, A. Knez, S. Schröder, U. Schöpf, A. Crispin, E. Klotz, M. Reiser, et al., “Reproducibility of coronary calcium quantification in repeat examinations with retrospectively ECG-gated multisection spiral CT,” *European Radiology*, vol. 12, no. 6, pp. 1532–1540, Jun 2002.

S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” in *International Conference on Machine Learning*, Jun 2015, pp. 448–456.

D.-A. Clevert, T. Unterthiner, and S. Hochreiter, “Fast and accurate deep network learning by exponential linear units (ELUs),” in *International Conference on Learning Representations*, 2016, arXiv:1511.07289.

M. D. Zeiler and R. Fergus, “Visualizing and understanding convolutional networks,” in *Computer Vision - European Conference on Computer Vision 2014*, D. Fleet, T. Pajdla, B. Schiele, and T. Tuytelaars, Eds., vol. 8689. Springer International Publishing, 2014, pp. 818–833.

“Theano: A Python framework for fast computation of mathematical expressions,” *arXiv e-prints*, vol. abs/1605.02688, 2016.

S. Dieleman, J. Schlüter, C. Raffel, E. Olson, S. K. Sønderby, D. Nouri, D. Maturana, M. Thoma, E. Battenberg, J. Kelly, J. D. Fauw, M. Heilman, D. M. de Almeida, B. McFee, H. Weideman, G. Takács, P. de Rivaz, J. Crall, G. Sanders, K. Rasul, C. Liu, G. French, and J. Degrave, “Lasagne: First release,” 2015.

G. Bradski, “The OpenCV Library,” *Dr. Dobb’s Journal of Software Tools*, 2000.

D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” in *International Conference on Learning Representations*, 2015.

M. L. McHugh, “Interrater reliability: the kappa statistic,” *Biochem Med (Zagreb)*, vol. 22, no. 3, pp. 276–282, 2012.

C. Cano-Espinosa, G. González, G. R. Washko, M. Cazorla, and R. S. J. Estépar, “Automated Agatston score computation in non-ECG gated CT scans using deep learning,” in *Proceedings of SPIE*, vol. 10574, 2018.

E. A. Regan, J. E. Hokanson, J. R. Murphy, B. Make, D. A. Lynch, T. H. Beaty, D. Curran-Everett, E. K. Silverman, and J. D. Crapo, “Genetic epidemiology of COPD (COPDGene) study design,” *COPD*, vol. 7, no. 1, pp. 32–43, Feb 2010.

T. K. Koo and M. Y. Li, “A guideline of selecting and reporting intraclass correlation coefficients for reliability research,” *J Chiropr Med*, vol. 15, no. 2, pp. 155–163, 2016.

[^1]: Copyright © 2019 IEEE. Personal use of this material is permitted. However, permission to use this material for any other purposes must be obtained from the IEEE by sending a request to pubs-permissions@ieee.org. Bob D. de Vos, Jelmer M. Wolterink, Nikolas Lessmann, and Ivana Išgum are with the Image Sciences Institute of the University Medical Center Utrecht and Utrecht University, Utrecht, The Netherlands. Tim Leiner and Pim A. de Jong are with the Department of Radiology, University Medical Center Utrecht and Utrecht University, Utrecht, the Netherlands. This work is part of the research programme ImaGene with project number 12726, which is partly financed by the Netherlands Organisation for Scientific Research (NWO).
The authors thank the National Cancer Institute for access to NCI’s data collected by the National Lung Screening Trial. The statements contained herein are solely those of the authors and do not represent or imply concurrence or endorsement by NCI.
Q: C# getting strings until a certain element

I need help regarding string manipulation in C#. I have a string in the format [text1|text2|text3|...]. What I want is to extract each of the strings between the separators and possibly save them into a list or something similar. Thanks in advance.

A: What you need is String.Split:

    string[] result = inputString.Split(new Char[] {'|'});

Though

    string[] result = inputString.Split('|');

will work just as well, as there's a single-character overload not shown in the MSDN.

This will give you an array of strings "text1", "text2", "text3" etc. If your string really is bookended by "[" and "]" you will need to remove these as well. If these characters don't appear anywhere else in your string, you can do that in a single call:

    string[] result = inputString.Split(new Char[] {'|', '[', ']'}, StringSplitOptions.RemoveEmptyEntries);

Otherwise you'll have to trim the text:

    string[] result = inputString.Trim('[',']').Split('|');

A: You can use String.Trim (to remove the [ and ]) and String.Split to create the array:

    string[] result = text.Trim('[',']').Split('|');
As racing tracks close due to coronavirus, more than one thousand greyhounds are now looking for homes.

After a vote in 2018, the tracks were set to stop dog racing in December 2020. The outbreak has pushed up the need for homes for the greyhounds.

Heather Smith has two new guests in her home, Mike and Abel. They are two-year-old greyhounds. Heather is a foster volunteer.

"Mike and Abel, they are the best. They are littermates. Wherever I am they go. They are attached to me. I can call them by their names and they come right to me. They give kisses, they love to be hugged," said Heather.

The organization Awesome Greyhound Adoptions says dog tracks in central and North Florida were planning to close in a couple of months. But due to the coronavirus, the facilities are closed early and will not re-open. That's how Mike and Abel ended up in Heather's home.

About 1,500 dogs need to find permanent homes. Until then, the organization is hoping that foster volunteers will step up and help.

Carolee Ellison is with the organization. "We need some place for them to go. If we can get people to foster."

Awesome Greyhound Adoptions will provide foster volunteers with everything they need, including a large dog crate. "We give you bedding, we give you food, supplements, a leash, a collar, their heartworm medication, some toys. All you have to do is to provide the love," said Carolee.

RELATED: Voters say goodbye to dog racing in Florida

Want to help? Awesome Greyhound Adoptions, Inc. is looking for both temporary and permanent homes for the dogs. You can find more about how to help here.
Pre-school and Extra-curricular Pedagogy

The 'Pre-school and Extra-curricular Pedagogy' course has undergone a rigorous accreditation process with the Accreditation Board of the Ministry of Education, Youth and Sports and has been entered in the school register. The graduates receive the DiS. associate degree and will meet the criteria of the Teaching Staff Act. As qualified teachers, they can work as preschool teachers, preschool headmasters, after-school carers and leisure-time teachers, as well as in all other professions which require meeting the criteria of the Teaching Staff Act. The graduates of the course are equipped with a wide range of knowledge and skills needed to find employment in the jobs described above, according to the subjects completed. With regard to our course's specialisation in social work with children, the graduates can also find employment in social work and social care.

Let's go to PRIGO

PRIGO is an exceptional group of schools, which has an irreplaceable position at all levels of the educational system, starting with the kindergarten and ending with the university. It focuses on the development of students' language competencies and on global international cooperation with foreign universities and other institutions. For this purpose, PRIGO integrates a team of teachers and students from around the world. It operates in the European Union, in several cities in the Czech Republic. One of the main goals of PRIGO is to work with exceptionally gifted students. Its university centre CEMNAS searches for gifted children and youths and helps them to succeed in their studies at home and abroad.
Q: Convert decimal dollar amount from string to scaled integer

"35.28" is stored as a char*. I need to turn it into an integer (35280). I want to avoid floats. How can I do this?

A: Minimal basic code:

    #include <algorithm>  // std::remove
    #include <iostream>
    #include <sstream>
    #include <string>

    std::string s = "35.28";
    s.erase(std::remove(s.begin(), s.end(), '.'), s.end()); // removing the dot
    std::stringstream ss(s);
    int value;
    ss >> value;
    value *= 10;
    std::cout << value;

Output:

    35280

Online demo: http://ideone.com/apRNP

That is the basic idea. You can work on the above code to make it more flexible so that it can be used for other numbers as well.

EDIT: Here is one flexible solution:

    int Convert(std::string s, int multiplier)
    {
        size_t pos = s.find('.');
        if (pos != std::string::npos)
        {
            pos = s.size() - (pos + 1);
            s.erase(std::remove(s.begin(), s.end(), '.'), s.end());
            while (pos)
            {
                multiplier /= 10;
                pos--;
            }
        }
        else
            multiplier = 1;

        std::stringstream ss(s);
        int value;
        ss >> value;
        return value * multiplier;
    }

Test code:

    int main()
    {
        std::cout << Convert("35.28", 1000) << std::endl; // 35.28 -> 35280
        std::cout << Convert("3.28", 1000) << std::endl;  // 3.28  -> 3280
        std::cout << Convert("352.8", 1000) << std::endl; // 352.8 -> 352800
        std::cout << Convert("35.20", 1000) << std::endl; // 35.20 -> 35200
        std::cout << Convert("3528", 1000) << std::endl;  // no change
        return 0;
    }

Output:

    35280
    3280
    352800
    35200
    3528

Online Demo: http://ideone.com/uCujP

A: Remove the dot from the char array and then convert; the simplest (but not the best) way is to use atoi. See my answer here for other possible ways.
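For readers who want to sanity-check the scaling logic outside C++, the same strip-the-dot-and-scale idea ports directly to Python. This is an editor's sketch mirroring the C++ `Convert` above, not part of the original answers:

```python
def convert(s: str, multiplier: int) -> int:
    """Scale a decimal string to an integer without touching floats.

    Each digit after the dot consumes one factor of 10 from the
    multiplier; a dotless string is returned unscaled, matching the
    C++ version's behaviour.
    """
    if '.' in s:
        digits_after_dot = len(s) - s.index('.') - 1
        s = s.replace('.', '')
        multiplier //= 10 ** digits_after_dot
    else:
        multiplier = 1
    return int(s) * multiplier

print(convert("35.28", 1000))  # 35280
print(convert("352.8", 1000))  # 352800
print(convert("3528", 1000))   # 3528
```

As in the C++ answer, this assumes the multiplier has at least as many factors of 10 as there are digits after the dot.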
Monthly Archives: November 2017

“The light of the eyes rejoices the heart, and good news refreshes the bones.” Proverbs 15:30

“I think I’m only considering medication because I’m writing a book and I need to be able to get it done. If I weren’t writing, I would just live like this.”

“But Sarah, maybe God is saying you don’t have to live like this.”

Maybe you don’t have to live like this. Maybe I could live in the light. Maybe I don’t have to suffer in the dark. Maybe, just maybe, something is a little whack with my brain chemistry but I don’t have to live with it. …

I’ve been on medication for over a month now, and I feel normal again, like myself again. I was the frog in the boiling water. Slowly, slowly, insidiously, this sadness filled me up and then one day I couldn’t tell you anymore whether I was an introvert or extrovert. I couldn’t tell you what I liked to do or the last time I enjoyed going somewhere. The boiling happened so slowly that I don’t know when it began or how long I’ve lived in the hot, dark water. I don’t know when I started to lose who I was. It was like I was living in a dream. But now I’m awake, and to mix all the metaphors, I feel like I’m in the sun, like I’m out of the boiling water, like I can see clearly, and most wonderfully, I know who I am again. I didn’t lose my personality. I am still me.

I wrote a love letter to myself this morning to help me understand again who I am. The beginning of the letter goes like this:

Dear Sarah,

You’re struggling to figure out who you are lately. If someone asked you, “Who are you?”, you’re not sure how you would answer. That’s okay. Let’s see if I can help. First, you are loved and chosen and seen and known by God, who is your Father and who loves you with a faithful, steadfast, pure love. You are His daughter and He knows every intricate piece of your heart and soul and mind. What you don’t know, He knows. What you don’t see, He sees. Where you feel lost and confused, He is sure.
So the first thing, dear Sarah, is that you are a loved and known daughter of the God of the universe.

I’m not advocating medication on a whim, I’m just telling you that I am better. Something was wrong, but now it’s right. That’s all I know. I also know that my mother struggled with depression, so maybe there is some genetic stuff going on. Maybe it’s the fact that I’m writing a book about the redemption of humanity and the thread through it is the story of my mom and I and the crazy, complicated, hardness of it all. Our story is messy and sad and confusing and nothing short of miraculous. Maybe it’s that my hormones adjusted my brain or that this human body is just not perfect here on earth. All I know is that I was blind but now I see. And I give God all the glory.

If you’re boiling, or if you don’t even know you’re boiling but you know something is off, I want to encourage you in two specific ways that two different friends encouraged me:

1.) It’s okay to put everything on the table. Anything can be put on the table for discussion, whether it’s homes to buy, educating our children, any big life decision; there is nothing wrong with putting it on the table. And in that putting it on the table, we lay our hands open knowing and trusting and believing that He is faithful and He guides.

2.) Maybe you don’t have to live like this. Maybe you don’t have to boil. Maybe God is calling you out to the light. He wants obedience over sacrifice.

But the real point I’m trying to make here is this: God loves you and He wants you to live in the light. This doesn’t mean you won’t ever suffer or be in hellish circumstances. What it means is that there’s an inner peace, an inner light, an inner joy that can never be taken away, and sometimes we need outside help to find that joy again. And that’s okay. The paradox for the Christian is that while we may suffer, we can also experience great joy. And we are free to get help.
The Scripture at the top of this post affirms that God understands our human hearts, and how we need light and joy and goodness to keep on. Here’s another version of that verse that I just love:

The light of the eyes rejoices the inner man, the heart, and good news takes away the ashes.

We have a God who sees us and loves us and helps us and takes away the ashes.

Lord, would you search our hearts and know our hearts; test us and know our anxious thoughts. Show us anything in us that is offensive or hurtful. Unfold freedom for us, bind up our wounds and heal our broken hearts, and lead us in the everlasting way. Amen. (Psalm 139:23,24, Psalm 147:3)

Love, Sarah Mae

P.S. Try writing a love letter to yourself. I know it’s weird, I acknowledge that, but it’s also helpful and kind.

Last month I didn’t share my box with you, one, because I was in the depths of despair (or something like that), and two, it was only an okay box. BUT THIS MONTH, my stylist nailed it. I asked for fall shirts and she delivered! Take a look (My daughter took the pics)!

It’s soooooooo cozy! I love this shirt! It is so comfortable and flattering and COMFORTABLE! This jacket. Be still my heart. Also, how about that lovely shot of my gray hair? 🙂

Yep, I know it’s expensive. But I’m trying to get better at picking clothes that look nice and will last, and I’m willing to spend (if and when I can) a little more to meet that end. I’m a T-shirt and jeans kind of girl, but it’s nice to have more…adult clothes. Also, I do not like to go shopping, not my thing (unless we’re shopping for books or food). This is why I like Stitch Fix; someone else picks nice, fashionable clothes for me (for only $20) and I get to avoid shopping. And if I don’t like something, I just send it back in their PRE-PAID envelope. So easy.

If you are like me, then maybe Stitch Fix is for you. If so, you can use my referral link HERE and they will waive the $20 styling fee for your first Fix. Another win! Now one last thing!
If you ever want to see behind the scenes pics and vids, head over to my Instagram where I share things like this (this is an IG story):

The wind blows wherever it pleases. You hear its sound, but you cannot tell where it comes from or where it is going. So it is with everyone born of the Spirit. John 3:8

My kids started public school a few weeks ago. (I know, weird and surprising.) To put this in perspective of the randomness and weirdness and unlikeliness of us putting our kids in public school, my husband reminded me that I told him that if I ever died to never put the kids in public school. (No pressure or anything). I don’t know why I was so adamant, but apparently I was. It’s all a blur now. All I know is that last month I found myself thinking about it all, and then I found myself driving to the district office for enrollment papers, and then I just casually filled them out “just in case” and all I can really say is, I guess the Spirit moved.

I’m not being silly. There is no way to explain why we all of a sudden put our kids in school. But we did, and we follow God and are in His will, so, so be it. We follow the Spirit even when we don’t know where He is going, or why this is happening. It’s all a faith walk.

Now a few things about the decision I want to share with you: It felt agonizing at first to even consider putting our kids in public school. I wrestled with feelings of failure and selfishness. I wondered, was I sacrificing my kids? Why was I even considering this? It felt, and feels, surreal. We’ve never considered public school, at least not since they were babies and my husband and I first discussed schooling options. We keep looking at each other and saying, “This is so weird.” I was so scared that my anxiety would keep me up at night if we put them in school and I begged God for peace if this was from Him.

Now let me dive into the things I just mentioned:

Am I a Selfish Failure?
Shame runs deep, and when you believe you are selfish and a failure, especially as a mom, it’s gutting. I kept telling a friend of mine, “Is this selfish? I just feel so selfish if I put my kids in school, because if they’re in school I’ll write, and I like writing and working, and that’s just selfish of me.” She said, “Is that selfish though?” And that question got me thinking. And thinking. And praying. No, it’s not selfish to put my kids in school and it’s not selfish to like working, because we are following God. Also, my husband thought the whole selfish thing was messed up. He said, “If you put the kids in school, don’t be a martyr about it, enjoy what God has next.” Yes. I like that. And it’s true.

I spoke with another friend over my questions and feelings and she reminded me, “God is good, and God is faithful, and God is gracious. Ultimately, it’s the Lord that goes before you and it’s Him who’s going to fight for your children. This is not law, and we don’t find our righteousness in how we school. Our righteousness is in Christ and in Christ alone. Nothing at all changes in who you are and how you are viewed by what you choose to do with your children and school. This is not a sin issue, a righteousness issue, and this is not a law issue. You are under grace. So be free.”

Basically, what I’m saying is, I’m not a selfish failure.

It is Weird

At first, it was super weird to not have the kids with me. I cried for two weeks and then I went to a doctor and got on anti-depressants. There is more to this story of course, but I can see now that having the kids with me all day was covering up something inside of me and God, in His kindness, was going to be peeling back the layers of my heart. Now that it has been a month since the kids have been in school, I can see more of why God led us the way He did.

Peace AND Joy

After I made the agonizing decision to put the kids in school, I got the peace. The peace came after the obedience, as it usually does.
And not only do I have peace about the decision, I have joy. I feel grateful and joyful and confident in what the Lord is doing with our family. “Not only is it to the Father’s glory that we get to bear fruit, but we actually get to find joy in it!” -Beth Moore As an aside, some of you read this post on my depression, and I want you to know I’m doing well. The meds seem to be working and I’m feeling like myself again, light shining in the darkness.
Features:
- Made of acrylic material; sturdy, fade-resistant, and finely crafted.
- Perfect for supermarket, bar, cafe, clothing store, living room, etc.
- Delicate flower design to make your house shine with a beautiful sense of layers.
- Easy to use and install, no worries.
Q: Multiple returned values of SQL sub-query

I am using PostgreSQL 9.1, and I wrote the following SQL statement:

INSERT INTO "Tracking"
VALUES ((SELECT "studentID" FROM "Student" WHERE "studentClass" = '2'), false, 4, false);

The issue is that the sub-query:

SELECT "studentID" FROM "Student" WHERE "studentClass" = '2'

returns more than one value, and it is supposed to do that (I want the main query to execute once for each value the sub-query returns), but written this way the query fails. Any ideas?

A: Try this:

INSERT INTO "Tracking"
SELECT "studentID", false, 4, false
FROM "Student"
WHERE "studentClass" = '2';
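A minimal sketch of why INSERT ... SELECT solves this: it inserts one row per row the SELECT returns, which a scalar sub-query inside VALUES cannot do. This uses Python's built-in sqlite3 instead of PostgreSQL for portability, with the extra Tracking columns given hypothetical names (flag1, code, flag2) since the question does not name them:

```python
import sqlite3

# In-memory schema loosely mirroring the question's tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Student  (studentID INTEGER, studentClass TEXT);
    CREATE TABLE Tracking (studentID INTEGER, flag1 INTEGER, code INTEGER, flag2 INTEGER);
    INSERT INTO Student VALUES (1, '2'), (2, '2'), (3, '1');
""")

# INSERT ... SELECT: one Tracking row is inserted per matching Student row.
# (SQLite has no boolean type, so false is modeled as 0.)
conn.execute("""
    INSERT INTO Tracking
    SELECT studentID, 0, 4, 0 FROM Student WHERE studentClass = '2'
""")

rows = conn.execute("SELECT studentID, code FROM Tracking ORDER BY studentID").fetchall()
print(rows)  # → [(1, 4), (2, 4)] — two students in class '2', two rows inserted
```

The same INSERT ... SELECT shape works unchanged in PostgreSQL with the quoted identifiers and real booleans from the answer above.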
Expect some medals from the delegation of 30 visually and physically challenged athletes representing Israel in 11 sports. Two-time quad tennis Paralympic medalist Shraga Weinberg will bear the Israeli flag aloft as he leads a delegation of 30 physically and visually disabled athletes into the 2016 Paralympic Games in Rio. Running from September 7 to 18, the Paralympics will include 4,350 athletes from 178 countries competing in 23 disciplines. Israel’s delegation of 15 men and 15 women will compete in 11 categories.

Paralympic shooter Doron Shaziri. Photo by Raz Livnat

“For half the athletes in this delegation it’s their first Paralympics,” says Dr. Ron Bolotin, professional manager of the Israel Paralympic Committee and the Israel Sports Association for the Disabled in Tel Aviv. This will be Bolotin’s 10th Paralympic experience. He participated in swimming as a competitor and coach six times from 1980 to 2000, winning 11 medals, and headed Israel’s Paralympic delegation to Athens (2004), Beijing (2008) and London (2012). The 2016 team is larger than the 2012 delegation of 25, mostly because of the addition of Israel’s first-ever women’s goalball team. A male goalball team competed for the blue-and-white in Barcelona in 1992.

Israel’s first women’s goalball team. Photo by Keren Isacson

“The captain is a very special young Muslim woman from Umm al-Fahm, and the team has a great future,” Bolotin tells ISRAEL21c. “They won fourth place in the European championship and first in the World Games.” While all other Paralympic sports were adapted for people with disabilities, goalball was developed as a rehab activity for the blind in Europe after World War II. Participants compete in teams of three, throwing a bell-embedded ball toward the opponents’ goal. Ilham Mahamid, the 25-year-old captain, has been playing goalball for about 10 years and has been a member of the team from its inception.
An undergraduate student of education and theater, she has a visual impairment due to a genetic disease. Of the other teammates — Lihi Ben David, 20; Gal Hamrani, 23; Roni Ohayon, 16; Sivan Abrababya, 25; and Yarden Adika, 15 — only Abrababya is blind as the result of injury; she was wounded in her eyes during her military service.

Likely medalists

Asked to predict which Paralympic athletes are most likely to medal in Rio, Bolotin named Moran Samuel, Paralympic rowing world champion in 2015; swimmer Inbal Pezaro, who won three medals in London; Kobi Leon in handcycling; Doron Shaziri, who medaled in shooting at six previous Paralympics; and tennis doubles duo Itay Erenlib and Shraga Weinberg.

Rowing standout Moran Samuel. Photo by Datliv Saeiv

This will be the fourth Paralympics for Weinberg, 50, and Pezaro, 29. Both have won medals in past Paralympic Games but never a gold. Israel has won 380 medals since the first Paralympic Games in Rome in 1960, including 124 gold medals. At the 2012 London Games, the team came home with one gold, two silver and five bronze medals. In contrast, Israeli Olympic athletes have scored just nine medals since 1952, and came home empty-handed in 2012. “I think the main reason is that when the Paralympic movement was in its infancy we were one of the pioneer nations and therefore had an advantage. Now the level is getting closer to Olympic and it’s getting tougher every time as Paralympic sports become more elite and competitive and countries invest more money in them,” says Bolotin. “We probably won’t match our eight medals in London, yet we still hope to win some.” While Israel’s 47-member Olympic delegation is the largest ever, the Paralympic delegation is about half the size of its largest in history, which took 69 medals at the 1976 Toronto Games. Swimmer Inbal Pezaro has a good chance of medaling in Rio.
Photo by Karen Leibivitz Swet

“The final delegation is decided by the criteria of the International Olympic and Paralympic committees and the Israeli Olympic and Paralympic committees, and this time only 30 athletes made the criteria,” says Bolotin, noting that the Olympic Games have more than twice the number of competitors overall. Four years ago, Israel’s Culture and Sport Ministry launched an initiative to identify and promote more potential Olympic and Paralympic athletes in childhood. This is significant because world-class athletes generally begin training in childhood, but most of Israel’s Paralympic competitors entered athletics only after becoming disabled in their teens or 20s.

Tennis player Shraga Weinberg, heading to his fourth Paralympics. Photo by Nimrod Glockman

“The initiative is starting in Jerusalem, Ashdod, Beersheva and in the north, including Arab and Bedouin towns. We’ll see the outcome perhaps in 2024 as we have 250 new children starting to train in Paralympic sports,” says Bolotin, who is unusual in that he was already a competitive swimmer before losing his leg to a landmine in 1975. Another exception is Weinberg, born with a rare bone disorder. Paralympic athletes train at the IDF Disabled Veterans Organization’s Beit Halochem centers in Tel Aviv, Haifa and Jerusalem; and in the Haifa and Ramat Gan sports centers run by ILAN – Israeli Association for Children with Disabilities.

Blind marathon runner Gadi Yarkoni. Photo by Photosdelux.com

Here are the Israeli athletes going to the 2016 Rio Paralympic Games. Repeat Paralympians are in italics. The delegation will also include 25 coaches, escorts and medical personnel.

Abigail Klein Leichman is a writer and associate editor at ISRAEL21c. Prior to moving to Israel in 2007, she was a specialty writer and copy editor at a daily newspaper in New Jersey and has freelanced for a variety of newspapers and periodicals since 1984.
Arranging your environment

Node Installation

Running Fronthack on your machine requires Node installed on your system, along with its package manager, npm. If you are not new to frontend development, you probably already have this set up. If you have not installed Node yet, the best way to do it is with NVM (Node Version Manager). As the name suggests, this is a pretty useful program for installing any version of Node and switching between versions quickly, which helps when you have to work on various projects. Additionally, I have found it more stable than a standard Node installation, which is why I recommend using it.
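As a rough sketch (assuming a Unix-like shell with curl; the installer version number below is an example — check nvm's README for the current one), installing nvm and a Node version looks like:

```shell
# Download and run the nvm install script (v0.39.7 is an example version).
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash

# Reload your shell profile so the `nvm` command becomes available.
source ~/.bashrc

# Install and activate a Node version (npm comes bundled with it).
nvm install 18    # install the latest Node 18.x
nvm use 18        # switch the current shell to it
node --version    # verify the active version
```

After this, `nvm use` lets you switch Node versions per project without reinstalling anything.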
A-Plus

APlus is an American convenience store chain owned and operated by 7-Eleven, which licenses the name from the energy company Energy Transfer Partners. The chain began life in 1985 as the convenience store chain for Atlantic Petroleum, which was spun off from ARCO, Inc. (ARCO itself was formed from the 1966 merger of Atlantic and Richfield Oil.) The first APlus stores were rebranded ampm locations. In 1988, Sunoco bought Atlantic, and since Sunoco didn't have its own convenience store chain, APlus became Sunoco's chain by default. While Atlantic stations were converted to Sunoco by the mid-1990s, APlus grew, with Sunoco rebranding many company-owned stations in the Northeast with convenience stores into APlus. Sunoco even converted some garages into convenience stores under the APlus brand. Sunoco still used the APlus logo from the Atlantic days until 1999, when Sunoco updated its own logo and completely redesigned APlus's logo, giving it a more Sunoco look. The split between company-owned and franchised locations is about 50/50. Originally a Northeastern US brand, Sunoco rapidly expanded the APlus moniker. In 2001, Sunoco expanded APlus into the Southeastern United States by purchasing 193 of Marathon Oil's Speedway SuperAmerica convenience stores—115 in Florida, 62 in South Carolina, 13 in North Carolina, and 3 in Georgia. Further expansion is being pushed as part of Sunoco's NASCAR sponsorship, where APlus is known as the "Official Pit Stop of NASCAR". In October 2013, Sunoco purchased Mid-Atlantic Convenience Stores, a Richmond, Virginia-based Circle K franchisee with over 300 stores in the mid-Atlantic region. These locations continued to operate as Circle K until early 2016, when they were converted to the APlus brand. Though operated by Sunoco, the majority of the locations continue to sell Exxon fuels. On January 23, 2018, Dallas, Texas-based 7-Eleven bought 1,030 APlus convenience stores located in 17 states.
The acquisition, which is the largest in the company's history, brings the total number of stores to approximately 9,700 in the U.S. and Canada. Many APlus stores are expected to be rebranded as 7-Eleven stores.

Products

Products offered at APlus locations include:
Gulliver's Coffee — APlus's gourmet coffee. Competes with Circle K's Millstone coffee and Sheetz's Sheetz Bros. Coffee.
City Deli — APlus's hot foods area. Likely conceived to compete with Sheetz's MTO's, which overlap APlus in several areas in Pennsylvania, though it's more similar to Giant Eagle's GetGo Kitchen at their GetGo chain. Currently available only at newer, larger locations.

References

External links
Sunoco's corporate website

Category:Companies based in Philadelphia
Category:American companies established in 1985
Category:Retail companies established in 1985
Category:Convenience stores of the United States
Category:Economy of the Eastern United States
Category:Sunoco
Category:1985 establishments in Pennsylvania
Clearing her mind as she had been taught, Roivan started at the first engine room configuration, then visualised each in sequence, searching for a match somewhere out there. Anywhere. Anywhere at all. But there was no one, no hint of intelligence beyond the confines of the ship. She couldn't hear anything. They were alone. For the first time since she had left, Roivan had nowhere else to go. Qirl had said that she must stay no more than fifteen days on any one ship. If she stayed more than fifteen days she would get caught. It had now been a lot longer than that. She mustn't get caught. She mustn't, and yet the crew knew she was here. The-Ginger-One had set laser traps and was monitoring the sustomat logs. They knew she was aboard, not exactly where, but in the engine room somewhere. She had to leave. She had to leave. Again Roivan checked for other space traffic and again, heard nothing. At a loss as to what else to do, she contacted the life form that lived beneath the engines in the central core of the ship. It was many, many ships, an age ago, that Roivan had first encountered these beings. Ever since that time, every ship she travelled on had such a being aboard. She did not know what they were, only that they assisted in some way with the transition between sub and exponential light speed. She could feel their mental energy. It rippled about the engine casing like the cool mist that used to alight upon the waters with the fall of the sun back home. She had never tried to talk to one before, but in the past they had always seemed to know she was aboard and had never raised an alarm or displayed any form of concern at her presence. Roivan had become accustomed to their energies and each time she changed ship she touched that energy first, just to reassure herself. Now she reached out with her mind, forming her introduction and question.

Hullo. I am Roivan. May I ask you a question please?

Hullo Roivan. I am Shaval.

Are there any other ships here?

No.
My analysis shows this sector to be without traffic. We are stationary in deepspace.

Will we stay here?

No Roivan. We will return to base.

When please?

It is not my concern. I do not count time.

What do you do at base?

I wait, until the ship travels again.

Unsure as to what else to say, Roivan withdrew her mind. Deepspace? What was deepspace? She had to go to human-space. Was that far from deepspace? Was it the same? Unable to answer her own questions, she concentrated instead upon the fact that there would be other ships. There would be, Shaval said so, but when?
Q: Bootstrap, WordPress & NavWalker - Top Level Nav Links not working

I realize this issue has been addressed in other postings; however, I am having trouble with the top nav links not working on this site. This is a WordPress site built on Bootstrap 3 and using NavWalker to integrate the WordPress navigation into the Bootstrap structure. Here is the navigation code:

<div class="navbar navbar-default col-md-9" role="navigation">
  <div class="container">
    <div class="navbar-header">
      <button type="button" class="navbar-toggle collapsed" data-toggle="collapse" data-target=".navbar-collapse">
        <span class="sr-only">Toggle navigation</span>
        <span class="icon-bar"></span>
        <span class="icon-bar"></span>
        <span class="icon-bar"></span>
      </button>
    </div>
    <?php wp_nav_menu( array(
      'menu'            => 'Primary',
      'theme_location'  => 'Primary',
      'depth'           => 2,
      'container'       => 'div',
      'container_class' => 'collapse navbar-collapse col-md-9',
      'container_id'    => 'none',
      'menu_class'      => 'nav navbar-nav',
      'fallback_cb'     => 'wp_bootstrap_navwalker::fallback',
      'walker'          => new wp_bootstrap_navwalker()
    ) ); ?>
  </div><!-- /.container -->
</div><!-- /.navbar -->

This inherently lacks the hover feature that is nice to have on drop-down menus. I have addressed this with the following solution from wpeden:

( function( $ ) {
  jQuery(function($) {
    $('.navbar .dropdown').hover(function() {
      $(this).find('.dropdown-menu').first().stop(true, true).delay(250).slideDown();
    }, function() {
      $(this).find('.dropdown-menu').first().stop(true, true).delay(100).slideUp();
    });
    $('.navbar .dropdown > a').click(function(){
      location.href = this.href;
    });
  });
} )( jQuery );

This does a very nice job of gracefully displaying the drop-down navigation, but there are no active links on parent menu items.
I have confirmed that the parents actually have active links by moving them out of the navigation hierarchy with no children, where they display links correctly. So there is something missing that I can't identify, and I would appreciate a keen eye or two to help spot it.

A: NavWalker seems to be designed like that. You need to edit the source code in wp_bootstrap_navwalker.php at line #85. Make the parent keep the href even if it has children:

if ( $args->has_children && $depth === 0 ) {
  // $atts['href'] = '#'; // old line
  $atts['href'] = ! empty( $item->url ) ? $item->url : ''; // new line
  $atts['data-toggle']   = 'dropdown';
  $atts['class']         = 'dropdown-toggle';
  $atts['aria-haspopup'] = 'true';
} else {
  $atts['href'] = ! empty( $item->url ) ? $item->url : '';
}
234 Va. 430 (1987)
362 S.E.2d 900
4 VLR 1269

KENSINGTON ASSOCIATES v. HARRY W. WEST

Record No. 841524

Supreme Court of Virginia
November 25, 1987

J. Alvernon Smith, Jr. (Samuel Baronian, Jr.; Smith, Blank, Isaacs & Hinton, on briefs), for appellant.

William G. Barkley (Pickford and Barkley, on brief), for appellee.

Present: All the Justices

Judgment for plaintiff in a tort action is reversed because the evidence establishes that an employee was not acting within the scope of his employment when he accidentally shot another employee.

Defendant contractor was renovating a hospital building and hired a security guard to patrol the site. While on duty the guard carried a pistol which he was instructed not to use except in a "life-or-death" situation. He was also instructed not to "bother" other employees, but he had engaged in horseplay with another employee on various occasions and had drawn the pistol during those incidents. One evening while he was on duty he managed to shoot another employee in the foot while drawing his gun to have "fun." The injured employee brought this action and a jury returned a verdict for $200,000 against the contractor and the security guard. The defendant contractor appeals.

1. Under the doctrine of respondeat superior, an employer is liable for the tortious act of his employee if the employee was performing his employer's business and acting within the scope of his employment.

2. An act is within the scope of employment if directed by the employer or a natural incident to the business, if performed with the intent to further the employer's interest, and if it did not arise wholly from some external or personal motive on the part of the employee.

3.
When an employer-employee relationship has been established, the burden is on the employer to prove that the employee was not acting within the scope of his employment when he committed the act complained of, and if the evidence leaves the question in doubt it becomes an issue to be determined by the jury.

4. When the evidence places the case between the extreme of a slight deviation from business to a great and unusual deviation, the issue of whether an employee was acting within the scope of his employment is for a jury.

5. The present case falls within the ambit of cases where the employee was held to have been acting outside of the scope of his employment because the undisputed evidence shows that the employee was engaging in "horseplay" in direct violation of his employer's orders when he injured the plaintiff.

6. Neither the "horseplay" nor the resulting shooting was done to further the defendant contractor's interests, but arose wholly from an external, independent, and personal motive on the part of the security guard and his actions were such a great and unusual deviation from the interests of his employer that the question whether he acted outside the scope of his employment was one of law for the court rather than one of fact for the jury.

Appeal from a judgment of the Circuit Court of Albemarle County. Hon. David F. Berry, judge presiding.

STEPHENSON, J., delivered the opinion of the Court.

Harry W. West sued Kensington Associates (Kensington) and its employee, Willis Chittum, to recover damages for personal injuries West incurred when he was accidentally shot by Chittum. A jury returned a verdict for West in the amount of $200,000 against both Kensington and Chittum, and the trial court entered judgment on the verdict. [1] Kensington alone appeals, contending that Chittum, as a matter of law, acted outside the scope of his employment when he shot West. [2] West was employed by United Services Industries (United Services).
Kensington, owner of the former Johnston-Willis Hospital building in Richmond, contracted with United Services to renovate the hospital building. United Services provided on-site living quarters for its construction workers, including West. Kensington employed Chittum as a security guard at the site. Chittum was responsible for protecting Kensington's property, securing the building, and preventing vandalism. While on duty, Chittum carried a pistol in a holster. Kensington's officials knew that Chittum carried a pistol and acknowledged that he was armed for Kensington's benefit. Kensington's officials had told Chittum to call the police if any trouble arose. On various occasions before West was shot, Chittum had engaged in horseplay with Willie Archie, another United Services construction worker. On those occasions, Chittum had removed the pistol from the holster and waved it around to scare Archie. West was shot on the night of May 13, 1981. That night, while on duty and after completing his rounds through the building, Chittum stopped in the hallway outside the workers' recreation room. He believed that Archie was in the room; Chittum, however, did not know that West was there. As Chittum was in the act of removing his pistol from the holster, the pistol discharged and the bullet struck West in the foot. Chittum testified that the shooting resulted from "horseplay." He said he pulled the pistol to have "fun" with Archie, not to protect Kensington's property. Chittum also stated that he had drunk a "couple of beers" at the time, although drinking while on duty was prohibited by Kensington. Kensington's officials had instructed Chittum not to bother the construction workers and not to go into the recreation room. Under the doctrine of respondeat superior, an employer is liable for the tortious act of his employee if the employee was performing his employer's business and acting within the scope of his employment. McNeill v. Spindler, 191 Va.
685, 694, 62 S.E.2d 13, 17 (1950). Generally, an act is within the scope of the employment if (1) it was expressly or impliedly directed by the employer, or is naturally incident to the business, and (2) it was performed, although mistakenly or ill-advisedly, with the intent to further the employer's interest, or from some impulse or emotion that was the natural consequence of an attempt to do the employer's business, "and did not arise wholly from some external, independent, and personal motive on the part of the [employee] to do the act upon his own account." Broaddus v. Standard Drug Co., 211 Va. 645, 653, 179 S.E.2d 497, 503-04 (1971); Cary v. Hotel Rueger, Inc., 195 Va. 980, 984, 81 S.E.2d 421, 423 (1954); Tri-State Coach Corp. v. Walsh, 188 Va. 299, 307, 49 S.E.2d 363, 367 (1948); Davis v. Merrill, 133 Va. 69, 77, 112 S.E. 628, 630-31 (1922). When an employer-employee relationship has been established, "the burden is on the [employer] to prove that the [employee] was not acting within the scope of his employment when he committed the act complained of, and . . . if the evidence leaves the question in doubt it becomes an issue to be determined by the jury." Broaddus, 211 Va. at 653-54, 179 S.E.2d at 504 (emphasis added); Alvey v. Butchkavitz, 196 Va. 447, 453, 84 S.E.2d 535, 539 (1954); McNeill, 191 Va. at 695, 62 S.E.2d at 18. Accord Bivens v. Manhattan Car Corp., 156 Va. 483, 159 S.E. 395 (1931); Crowell v. Duncan, 145 Va. 489, 134 S.E. 576 (1926). Moreover, when the undisputed evidence shows that an employee's deviation from his employer's business is slight and not unusual, or, on the other hand, great and unusual, a court shall determine, as a matter of law, whether the employee was acting in the scope of his employment. When, however, the evidence places the case between these two extremes, the issue is for a jury. E.g., Broaddus, 211 Va. at 653-54, 179 S.E.2d at 504; Alvey, 196 Va. at 454, 84 S.E.2d at 539; McNeill, 191 Va. at 695, 62 S.E.2d at 18; Bivens, 156 Va.
at 695, 159 S.E. at 399; Drake v. Laundry Corp., 135 Va. 354, 363-64, 116 S.E. 668, 671 (1923). Applying the foregoing principles, we held in Broaddus that the trial court properly submitted to the jury the issue of whether a security guard had acted within the scope of his employment when he shot a person whom a policeman was attempting to subdue. 211 Va. at 655-56, 179 S.E.2d at 505-06. The evidence presented in Broaddus did not show as a matter of law that the guard's deviation from his assigned duties was either slight or marked and unusual. Id. at 655, 179 S.E.2d at 505. We said the jury reasonably could have found that the guard's shooting of the person was either an independent venture of his own or done from some impulse or emotion that naturally grew out of or was incident to an attempt to perform his master's business. Id. at 656, 179 S.E.2d at 506. Accord United Brotherhood v. Humphreys, 203 Va. 781, 787-88, 127 S.E.2d 98, 102-03 (1962), cert. denied, 371 U.S. 954 (1963) (question whether assaults committed by striking union members were personally motivated or incident to performance of strike activities directed by international union properly left to jury's resolution); Slaughter v. Valleydale Packers, 198 Va. 339, 345, 94 S.E.2d 260, 265 (1956) (reversing and remanding on ground, inter alia, that instruction did not permit jury to consider whether defamatory statements were made out of impulse or emotion that naturally grew out of or was incident to attempt to perform master's business); Tri-State Coach Corp., 188 Va. at 308-09, 49 S.E.2d at 368 (question whether bus driver's use of "vocal insistence and physical force" to clear a path to move his bus resulted from impulse or emotion arising out of prosecution of master's business properly submitted to jury). Cf. Davis, 133 Va. at 77-78, 112 S.E.
at 630-32 (jury could reasonably conclude that railroad gateman was acting within scope of employment when he shot plaintiff following dispute over raising gates at late hour of night). In a similar vein, we held in Alvey that conflicts in the evidence presented a jury question about whether the night manager of a service station was engaged in the owner's business when the manager accidentally shot the plaintiff while cleaning a loaded pistol. 196 Va. at 454, 84 S.E.2d at 539. The evidence concerning ownership of the gun was in direct conflict, and we said the jury reasonably could have inferred that the owner provided the manager with the gun for "protection." Id. at 455, 84 S.E.2d at 540. See also Bryant v. Bare, 192 Va. 238, 247, 64 S.E.2d 741, 747 (1951) (question whether employee permitted to use employer's truck for both personal benefit and employer's benefit had abandoned employer's business at time of accident properly submitted to jury); Crowell, 145 Va. at 505, 134 S.E. at 580 (question whether taxi driver engaged in personal or master's business one for jury's resolution where evidence showed driver, who had complete discretion in operation of owner's taxi, was operating marked taxi during business hours in usual field of operations at high rate of speed when he ran into plaintiff). On the other hand, we upheld a trial court's ruling in McNeill that the undisputed evidence established a deliveryman's deviation from his employer's business so great that, as a matter of law, the deliveryman was not acting within his scope of employment at the time he collided with another vehicle. 191 Va. at 695-96, 62 S.E.2d at 18. There, the uncontradicted evidence showed that the employer had entrusted the deliveryman with a truck to run a specific errand and had given him specific instructions where to return it. The employee had disobeyed the instructions and driven the truck to another part of town to carry out a personal matter. Id.
We there noted that a marked deviation was shown. There was a complete stepping aside from the employer's business that was in no way related to the employer's affairs and was completely contrary to the employer's instructions. Id. Accord Master Auto Serv. Corp. v. Bowden, 179 Va. 507, 511, 19 S.E.2d 679, 680-81 (1942); Kavanaugh v. Wheeling, 175 Va. 105, 117, 7 S.E.2d 125, 130 (1940); Western Union Tel. Co. v. Phelps, 160 Va. 674, 682, 169 S.E. 574, 577 (1933); Bivens, 156 Va. at 487, 159 S.E. at 396; Kidd v. DeWitt, Jr., 128 Va. 438, 448, 105 S.E. 124, 127 (1920). In Cary, we held as a matter of law that an argument between a hotel bellboy and two hotel guests that resulted in the bellboy's fatally shooting one of the guests did not arise out of anything connected with the hotel's business. 195 Va. at 986-87, 81 S.E.2d at 424. The undisputed evidence in Cary established that the argument arose over whether the bellboy owed money to the guests for activities involving the trafficking of women and liquor -- conduct that was illegal and prohibited by the hotel. Thus, we held the bellboy's shooting of one of the guests "arose from an independent and personal motive on [his part] to do the act upon his own account." Id. at 987, 81 S.E.2d at 424. Similarly, in Abernathy v. Romaczyk, 202 Va. 328, 334, 117 S.E.2d 88, 92-93 (1960), we reversed a court-approved jury verdict and held as a matter of law that a deliveryman was not acting within the scope of his employment when he participated in a scuffle over who had caused a traffic accident. We drew a distinction between the facts of Abernathy and Tri-State Coach Corp. on the basis that the altercation in Tri-State Coach Corp. arose over the manner in which the bus driver was operating the bus and over who had the right to proceed. In Tri-State Coach Corp., the turn of the bus had not been negotiated and both vehicles stood close together, resulting in a stalemate about which vehicle should move first.
In Abernathy, however, the undisputed evidence showed that after the accident occurred and as the other driver was returning to his vehicle following an inspection of the damage, the deliveryman alighted from his truck, approached the other driver before he had entered his car, and engaged in an argument that resulted in the scuffle. Thus, we held that the deliveryman's participation in the fracas was "an independent venture of his own to gratify his personal feelings, and the relation of master and servant was for a time suspended." 202 Va. at 334, 117 S.E.2d at 92. We are of opinion that the present case falls within the ambit of McNeill, Cary, and Abernathy. Kensington's officials had given Chittum specific instructions not to "bother" the construction workers. The undisputed evidence established, however, that Chittum engaged in horseplay in an attempt to scare Archie when he injured West. In addition, Chittum had been drinking at the time, which Kensington officials strictly prohibited. The shooting occurred immediately after Chittum had completed his security check of the building, during which time he found no evidence of vandals or trespassers. Following the completion of the security check, Chittum's next duty was to return to his desk and let employees and construction workers in and out of the building. Instead, he tarried, intending to have a little "fun." Neither the "horseplay" nor the resulting shooting was done to further Kensington's interests, but arose wholly from an independent, external, and personal motive on Chittum's part to perform an act upon his own account. When Chittum undertook to draw his pistol, he embarked upon an independent venture to satisfy his own personal desire to have "fun" and "play" around, thus suspending for a time the employer-employee relationship. 
We hold, therefore, that his reckless act was such a great and unusual deviation from Kensington's business that the question whether he acted outside the scope of his employment was one of law for the court rather than one of fact for the jury. Deciding that question against West, we will reverse the judgment of the trial court and enter final judgment here for Kensington.

Reversed and final judgment.

NOTES

[1] West also sued United Services Industries. The jury, however, returned a verdict in favor of this defendant, which the trial court affirmed. West did not assign cross-error to this ruling.

[2] Although West's pleadings contain allegations of Kensington's primary negligence, the case was tried and appealed on only the respondeat superior theory.
And because the race is so close and crowded — the previous Des Moines Register/CNN/Mediacom poll showed the top four candidates separated by only 6 points — the planned release at 8 p.m. Central Time will have political operatives and observers glued to their TV screens and frantically refreshing their web browsers. But not so fast, pollsters say: Even Selzer’s gold-plated poll shouldn’t be taken as gospel for what is going to happen on Monday night. That’s because, in addition to all the pitfalls that face pollsters around turnout and the composition of a primary electorate, caucuses present their own unique challenges. The American Association for Public Opinion Research, the leading organization of pollsters, sent a press release to reporters this week urging “pundits and journalists not to rush to judgment on the performance of polls in the aftermath of the Iowa Democratic [c]aucuses.” “The Democratic caucuses are not 'just like' a primary election,” said Nora Cate Shaeffer, a University of Wisconsin professor and pollster who is serving as the association’s president. “The results of the caucuses are more complex than a simple vote count.” Even Selzer, the dean of Iowa polling, warns that the caucus process — from the elimination of candidates who aren’t “viable” in the initial vote, to the public expressions of support and persuasion on the floor of the precinct — means that her polls can only measure voters’ initial preferences. Those raw votes will be made available by the state Democratic Party for the first time, but the winner will be declared based on a calculation of how many delegates each candidate will win to the state convention later this year. “The caucuses are designed for people to change their mind in the room on caucus night,” she told POLITICO. 
“That has always been true, and our polls can only hope to show what they intend to do.” Going into Saturday’s release, polls generally show a close race among the top four candidates, with either Joe Biden or Bernie Sanders leading the reliable public surveys. But Pete Buttigieg and Elizabeth Warren are still well within striking distance — though they are hovering around the 15 percent viability threshold in individual precinct caucuses. The first challenge for pollsters isn’t trying to discern what’s going to happen inside the room — it’s who’s going to show up in the first place. There are 615,000 registered Democrats on Iowa’s voter rolls. But even by the most bullish of estimates, more than half of them won’t venture out on Monday night. Democrats in Iowa have been debating for weeks whether turnout will challenge the roughly 240,000 caucus-goers in Jan. 2008, when then-Sen. Barack Obama finished first. How many people will turn out is only part of the challenge. Pollsters are also grappling with who, specifically, will show up. Polls show Sanders is the top choice among younger voters, who have been less likely to participate in past caucuses. Biden, meanwhile, leads among older voters. According to entrance polls conducted before the 2016 Democratic caucuses, just under six in 10 caucus-goers were aged 50 or older. Age isn’t the only factor. Some pollsters use past voting history to determine whom they survey, even though records aren’t available for who has participated in caucuses. Monmouth University — which found Biden narrowly ahead of Sanders in a recent survey — restricts its universe of participants to those who have voted in recent primaries or the 2018 general election (or who have registered since the last election). 
A New York Times analysis found that Biden has an advantage among voters who have cast ballots in primaries, but were less likely to go to caucuses. Sanders, meanwhile, is running stronger among those who say they are likely to caucus, regardless of their past vote history. “Caucus electorates are the most difficult to model in polling,” Monmouth University pollster Patrick Murray said in releasing the school’s latest poll this week. “The smartest takeaway from this, or any Iowa poll for that matter, is to be prepared for anything on Monday.” The Des Moines Register/CNN/Mediacom poll doesn’t take into account past voting history: It includes all registered voters who say they are very likely or will probably go to their caucus. “We work with the Iowa secretary of state’s voter list, so you need to be registered to vote and not be considered an inactive voter,” Selzer said. “We make no other presumptions about who might show up on caucus night beyond what they tell us.” Selzer’s track record is part of her mystique among many political professionals and reporters alike. Her final poll in 2016 showed Hillary Clinton 3 points ahead of Sanders, and now-President Donald Trump leading Sen. Ted Cruz on the Republican side. It turned out that Clinton edged out Sanders as the poll predicted, but Cruz defeated Trump. The 2008 caucuses are perhaps her crowning achievement. Her final poll showed Barack Obama opening up a lead over Clinton and then-Sen. John Edwards — and also correctly foretold a turnout surge that Clinton’s campaign dismissed. Selzer’s most recent poll, conducted Jan. 
2-8, showed Sanders at 20 percent, Warren at 17 percent, Buttigieg at 16 percent and Biden at 15 percent. The next-closest candidate was Sen. Amy Klobuchar of Minnesota, at 6 percent, though other surveys have shown Klobuchar with more support. That survey did not point to record-breaking turnout, Selzer has said, though she warned that a surge of new caucus-goers sometimes isn’t apparent until the final days. Between the crowded field and the looming viability threshold, there is little precedent for how Monday night could play out, said Joe Lenski, an executive vice president at Edison Research, which will conduct entrance polls on Monday night for a consortium of news organizations, including ABC News, CBS News, CNN and NBC News. “We have never had a situation where you’ve had four, maybe even five, candidates all bunched up, sniffing the threshold,” Lenski said. The closest analogy, Lenski said, is the 2004 Democratic caucuses, when there were also four top candidates: Edwards, John Kerry, Howard Dean and Dick Gephardt. Selzer’s final poll nailed the order: Kerry first, followed by Edwards, Dean and Gephardt. But, notably, the race wasn’t as close as it looked in the poll, which had the top four candidates within 8 percentage points. Kerry pulled away, while Edwards claimed a solid second place. But Dean and Gephardt sagged — and Gephardt, the congressman from neighboring Missouri, finished with less than 12 percent of delegates. How much of that movement was late momentum and how much was voters rearranging during the caucus process isn’t clear — the state Democratic Party didn’t release an initial, raw vote count in 2004. But Lenski said he doesn’t expect any dramatic swings in the closing days before Monday’s caucuses. “I don’t think there’s going to be a lot of late movement here,” he said. 
“This is different than ’04 in that I think all these candidates are much better known.”
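The viability mechanics the article describes can be sketched in a few lines of Python. This is a hypothetical illustration only: the candidate names and vote counts are invented, the 15 percent threshold is applied at the precinct level as described above, and real caucus math (delegate rounding, second-choice realignment) is more involved than this.

```python
# Toy model of the precinct viability rule: candidates below the
# threshold share of attendees are eliminated, and their supporters
# must realign behind a viable choice (which polls can only guess at).

def viable(votes, attendees, threshold=0.15):
    """True if a candidate clears the viability threshold."""
    return votes / attendees >= threshold

def eliminate_nonviable(first_preferences, threshold=0.15):
    """Drop candidates who fail the threshold on the first count."""
    total = sum(first_preferences.values())
    return {
        name: votes
        for name, votes in first_preferences.items()
        if viable(votes, total, threshold)
    }

room = {"A": 52, "B": 41, "C": 30, "D": 14}  # 137 invented attendees
print(eliminate_nonviable(room))  # D (about 10%) falls short and is cut
```

This is also why a pre-caucus poll can only measure first preferences: where candidate D's 14 supporters go after elimination is decided in the room on caucus night.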
Groupon Raising Up to $950 million Series G — Fresh off the heels of Groupon's widely publicized Google rebuff, the discount e-commerce company has filed a certificate to authorize a $950 million Series G round of preferred stock. The certificate, which gives Groupon the capacity to raise $950 million, was filed on December 17. Exclusive: 12 Photos of the HTC Thunderbolt on Verizon — Take a look at that beauty. Say hello to the HTC Thunderbolt, which will be the first 4G LTE device to land on Verizon after its unveiling next week at CES. As you can see, it really just looks exactly like the Desire HD … The Unbearable Inevitability of Being Android, 1995 — According to soldiers of the Android Crusade, 2011 is the year Google will crush iOS to declare its inevitable suzerainty over mobile territories. — Let's meet this week's crusaders: Seth Weintraub (2011 will be the year Android explodes) … Disney Tackles Major Theme Park Problem: Lines — ORLANDO, Fla. — Deep in the bowels of Walt Disney World, inside an underground bunker called the Disney Operational Command Center, technicians know that you are standing in line and that you are most likely annoyed about it. New Android Market Stats Out, Over 200k Apps Available — Google's Android operating system has already attracted plenty of developers, who have produced applications in excess of 200,000, the latest stats on the Android Market show. — Apple's App Store for the iPhone … RapidShare Shows MPAA/RIAA: We Can Lobby Lawmakers Too — Last month RapidShare discovered that they had been reported by the MPAA and RIAA to the US Government for being one of the world's “most notorious pirate markets”. Now, on the heels of reports that the entertainment industries spent … SecondMarket: The SEC has not asked us for anything — The SEC “wants to learn more” about secondary stock trades in private companies like Facebook, LinkedIn, Twitter and Zynga, according to a report in today's New York Times. 
No specifics as to the inquiry's primary goal … Clooney, Google, U.N. watch Sudan using satellites — NEW YORK (Reuters) - Groups including the United Nations, Harvard University, Google Inc and an organization co-founded by actor George Clooney are launching a project using satellites to “watch” Sudan for war crimes before a vote that could split the African country in two. Canadians spend more time online than any other country — Canadians were curating their Facebook profiles long before the rest of the world got hooked, many were experimenting with the limitations of writing in 140-character snippets at Twitter's launch in 2006, and we watch online video more than any other web surfers. Samsung to Unveil New Galaxy S at Mobile World Congress — The successor to Samsung Electronics' hit Galaxy S smartphone will be released in February. — Samsung announced Monday that it will unveil the device at the 2011 Mobile World Congress, the world's largest mobile communications exhibition … Groupon to enter Australia as daily deals sites explode — THE world's largest daily deals website, Groupon, which Google tried to buy this month for $US6 billion, has confirmed it is entering the Australian market. — The company is recruiting people to sign up to its email database … Delete Older Facebook Apps — or Risk Everyone's Privacy — If you have a Facebook page, you've probably added quite a few apps. If you've linked your YouTube account, New York Times account, or just about any mobile app to your Facebook profile, you've also installed their app …
Q: Does a gen_server restart copy its state? The Erlang world doesn't use try-catch the way mainstream languages do. I want to know how the performance of restarting a process compares with try-catch in mainstream languages. An Erlang process has its own small stack and heap, which are actually allocated from the OS heap. Why is it effective to restart it? I hope someone can give me a deeper insight into what BEAM does when a restart operation is invoked on a process. Besides, what about a gen_server that maintains state in its process? Will the state be copied when the gen_server restarts? Thanks A: I recommend having a read of https://ferd.ca/the-zen-of-erlang.html Here's my understanding: restarting is effective for fixing a "Heisenbug", which only happens when the (Erlang) process is in some weird state and/or trying to handle a "weird" message. The presumption is that you revert to a known good state (by restarting), which should handle all normal messages correctly. Restarting is not meant to "fix all the problems", and certainly not things like bad configuration or a missing internet connection. By this definition we can see that it would be very dangerous to copy the state at the moment of the crash and try to recover from it, because doing so defeats the whole point of going back to a known state. The second point is: say this process only crashes when handling an action that only 0.001% (or whatever percentage is considered negligible) of all your users actually use, and it's not really important (e.g. a minor UI detail); then it's totally fine to just let it crash and restart, and you don't need to fix it. I think this can be a productivity enabler in such cases. Regarding your questions in the OP comment: the state after a restart is just whatever your init callback returns; you can either build the entire starting state there or source it from other places, depending entirely on the use case.
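A tiny Python model may make the "restart, don't recover" point above concrete. This is an illustrative sketch only, not BEAM internals: the class names are invented, and Python's try/except merely stands in for the supervisor monitoring its child's exit. What it demonstrates is that the post-restart state comes from the init callback, not from a copy of the crashed state.

```python
class ToyGenServer:
    """Toy stand-in for a gen_server: state comes only from init()."""

    def init(self):
        return {"counter": 0}  # known good starting state

    def __init__(self):
        self.state = self.init()

    def handle(self, msg):
        if msg == "bad":
            raise RuntimeError("crash")  # simulate a Heisenbug
        self.state["counter"] += 1


class ToySupervisor:
    """On a crash, start a fresh child via init(); no state is copied."""

    def __init__(self, child_cls):
        self.child_cls = child_cls
        self.child = child_cls()

    def send(self, msg):
        try:  # stands in for the supervisor noticing the child's exit
            self.child.handle(msg)
        except RuntimeError:
            self.child = self.child_cls()  # restart: back to init() state


sup = ToySupervisor(ToyGenServer)
sup.send("ok")
sup.send("ok")          # counter is now 2
sup.send("bad")         # crash -> restart
print(sup.child.state)  # {'counter': 0}: rebuilt from init, not copied
```

If the crashed state (counter 2) were carried over, the process might still be in the "weird" state that caused the crash, which is exactly what restarting is meant to avoid.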
Perfect Game is kicking off its high-school baseball coverage this week. In addition to identifying the top 50 teams in the country, we will be posting lists of the top prospects in the high-school senior class, by position, per the following schedule:
Release and do not charge Richard Osborn-Brooks On 4th April 2018, Richard Osborn-Brooks (78 years old) was arrested for attacking two burglars he found in his home. He was arrested on suspicion of murder, as unfortunately one of the burglars died. The incident happened in a crime-stricken area of London, where residents have been experiencing numerous burglaries. Although no one should be allowed to take the law into their own hands, he should be allowed to protect his property, and no charges should be pressed against him. Please sign this petition to give Richard your support and show that his arrest is a ridiculous decision.
229 E Pedregosa St, Santa Barbara, CA 93101$2,620,000 Status: Closed MLS# 18-22 3 Bedrooms 3 Bathrooms REDUCED $75,000 Upper East single level home with an artistic renovation that features a bold use of architectural materials. This is a 3 bedroom 3 bath home plus den. There is a large living room, open floor plan with large kitchen and dining, beautiful outdoor space for entertaining which features a majestic oak. Located in a prime upper east neighborhood. Enjoy all of downtown Santa Barbara, restaurants, museum, theaters, shopping. This information is being provided for your personal, non-commercial use and may not be used for any purpose other than to identify prospective properties that you may be interested in purchasing. Data relating to real estate for sale on this Website comes from the Internet Data Exchange Program of the Santa Barbara Multiple Listing Service. All information is deemed reliable, but not guaranteed. All properties are subject to prior sale, change or withdrawal. Neither the Santa Barbara Multiple Listing Service nor the listing broker(s) shall be responsible for any typographical errors, misinformation, or misprints.
With some friends we have had a discussion about visual gags in the background of a scene that have nothing to do with the main action going on (there are a lot of examples in ZAZ's movies). And I simply couldn't come up with a particular gag I saw only once, many years ago (possibly at the end of the '80s or in the early '90s). Was it a movie? A TV show? I have the impression it's from that sitcom titled DOCTOR DOCTOR, but I'm not sure at all about it: Two or three people are talking in the foreground while in the background a short man is using a wall phone; but when he wants to hang it up, well, he can't. So he's trying many ways to hang up that phone, in vain. Meanwhile, of course, the camera keeps filming the guys in the foreground having a discussion we don't care about at all, because we keep focus on that poor guy in the background! Does it ring a bell to anyone, and is there a clip of it on the Internet? Thanks in advance! Edit: And thanks to Adam B., the answer is YOUNG DOCTORS IN LOVE! Many thanks, Adam! I was even able to locate the video clip on YouTube. It's not Lubitsch's touch, yet effective! The scene you were describing was the only one I could really remember from the film. I saw this in 1982 by mistake. I went to my local theater to see Star Trek II and bought my ticket. When I walked to the doors I must have turned right when I should have turned left, because I unintentionally went into the wrong auditorium. When the film started up I was expecting to see the Paramount logo, but it was something else, and I realized I was in the wrong theater. I decided to stay and watch it anyway because I had already seen Trek II a couple of times. I didn't regret it. There were a few chuckles in the film. It tries to copy some of the gags from the ZAZ films.
func b() { return }
func n() { { return } }
Stromal cell regulation of lymphoid and myeloid differentiation. In vitro microenvironmental influences seem to be critical for both B lymphocyte and myeloid differentiation. Studies on murine Dexter cultures and Whitlock-Witte lymphocyte cultures suggest the presence of two critical stromal regulatory cells: an alkaline-phosphatase-positive epithelioid cell and a macrophage. Further data suggest that these cells are capable of producing colony stimulating factor-1, granulocyte-macrophage CSF, a myeloid synergizing activity, and probably separate B cell growth factors. Isolation of a cell line from Dexter stroma was accomplished and this line produced CSF-1, GM-CSF, a pre-B cell and myeloid synergizing activity, and an activity acting on differentiated B cells. We speculate that the Dexter and Whitlock-Witte in vitro culture systems are regulated by factors produced by the two adherent cell types. A lineage nonspecific factor capable of inducing cells into the B lineage or synergizing with interleukin-3, GM-CSF, and CSF-1 is produced, which presumably acts on early stem cells. In addition, the cell line produces GM-CSF, CSF-1, and a factor acting on differentiated B cells. We speculate that in these culture systems, these "terminal differentiating hormones" regulate the final pathway of differentiation, whereas the pre-B-synergizing activity supports early stem cells that can then respond to the other differentiating hormones.
Alan Cumming Will Star In ‘Dr. Death’ At first glance it looked like Pee-wee Herman had landed a role on The Good Wife. After years of his being on the show, it became easy to distinguish the two. Even after The Good Wife ended, Alan Cumming remained one of the best characters on the show by far. His character, Eli Gold, was like an upgraded version of Cyrus Beene from Scandal, and they both stole the show whenever they got the chance. With all the frantic yelling and running on The Good Wife, you’d figure he’d want some time off, right? After a short layoff he is already back in action, and will star in and executive produce the drama pilot Dr. Death. If it sounds familiar, that’s because, according to The Hollywood Reporter, it is based on an upcoming James Patterson novel. There is already a good amount of work that will go into creating the series, and they already have a lot of material to pull from. It’s not often that a novel that has yet to be released gets a television adaptation, but remember: this is James freaking Patterson. Dr. Death is about a former CIA operative who makes his living as a professor and writer. He was involved in thousands of cases, both personally and professionally, when he was in the CIA, and now that he has settled down to the quiet life, things begin to heat up again. The New York Police Department seems to be having trouble catching one of the worst serial killers in the state. Guess who they call to help them solve the crime? The show is set for release in June 2017 and will also feature Michael Rauch and Alex Kurtzman. Does the show sound similar to other retired-agent premises? Will it do well over at CBS for more than a few seasons? Comment from Yale (9 days ago): This article is not up to date. 
If you look up the new show “Instinct” on CBS starting March 18th, you will see it is the same show. The plot is “Professor Dylan Reinhart (Alan Cumming), an author and former C.I.A. Agent, gets back into the game when the N.Y.P.D. asks for his help to stop a serial killer.”
22 Ill.2d 498 (1961) 177 N.E.2d 100 THE PEOPLE OF THE STATE OF ILLINOIS, Defendant in Error, v. FLOYD WILLIAMS, Plaintiff in Error. No. 35345. Supreme Court of Illinois. Opinion filed September 22, 1961. *499 *500 IRWIN D. BLOCH, of Chicago, for plaintiff in error. WILLIAM G. CLARK, Attorney General, of Springfield, and DANIEL P. WARD, State's Attorney, of Chicago, (FRED G. LEACH, Assistant Attorney General, and JOHN T. GALLAGHER and JAMES R. THOMPSON, Assistant State's Attorneys, of counsel,) for the People. Judgment affirmed. Mr. JUSTICE KLINGBIEL delivered the opinion of the court: This case is here upon a writ of error brought by the defendant, Floyd Williams, to review a judgment of the criminal court of Cook County finding defendant and one Joseph Calhoun guilty of the crime of unlawfully selling narcotic drugs. Calhoun's conviction was affirmed by us in People v. Calhoun, 22 Ill.2d 31, and on the present writ of error we are concerned only with the defendant Williams's conviction. The defendant first contends that the evidence was insufficient to establish his guilt beyond a reasonable doubt. A complete statement of the evidence is contained in the Calhoun case and we will not repeat it here, except insofar as it is necessary for a proper understanding of the issues in this case. The evidence showed that one Jacqueline Hill, a narcotics addict, was given a sum of money with which to buy narcotics in an attempt to obtain the arrest and conviction of the sellers of the narcotic drugs. Before attempting to buy the narcotics her clothing was searched by police officers and her person was searched by a police matron. No narcotics were found. After making an unsuccessful attempt to purchase narcotics at one address she proceeded in the company of the officers to 28 North Ogden Avenue. She entered the building and went up the stairs and a few minutes after she had entered the officers followed her. 
*501 They met her on the stairs as she was coming down from the third floor and she handed the officers a package which was later found to contain heroin. At about this time the defendant Williams came down from the fourth floor to the third floor and in the presence of the defendant, the officers asked Jacqueline if he was the man from whom she had bought the stuff and she replied that he was. The defendant said nothing in the face of this accusation. After Williams was arrested the officers saw Calhoun standing near a window in the bathroom and saw him throw some keys and currency out the window. The officers later recovered both the keys and the money and found that the money which Calhoun had thrown out the window was the same money which had been given to Jacqueline by the officers for the purpose of purchasing narcotics. The keys were found to open a door to an apartment on the fourth floor in which narcotics were found. Jacqueline made and signed a statement in the presence of the defendant and Calhoun at the police station in which she said that she gave the defendant $10 and he left the room and returned in a few minutes with the "stuff." Although the defendant was present when she made this statement, he did not deny the charge. At the trial Jacqueline was called as a witness for the prosecution, after receiving assurance of immunity by the Federal and State authorities. In her testimony she repudiated her written statement and said that she did not make a purchase from Williams; that the drugs which she delivered to the officers had been secreted in her coat and had not been discovered by the officers at the time they searched her clothing; and that she took the currency and threw it out the window. The State promptly claimed surprise as a result of this testimony and the witness was made a court's witness and the State was permitted to cross-examine her. 
On cross-examination she still testified that she had not purchased the drugs from defendant, but she admitted that she had previously signed a *502 statement in which she said she made such a purchase and admitted that she had testified before the grand jury that she bought drugs from defendant. For the defense the defendant testified that Jacqueline Hill had come to his apartment at the time in question but claimed that she said nothing about the purchase of narcotics and did not give him any money. The defendant said that he went up to the fourth floor from his third floor apartment to see a friend of his. He testified that he never heard Jacqueline tell any person that he had sold narcotics to her. In the Calhoun case we commented upon Williams's testimony. In that case we said, 22 Ill.2d 31, 34, "Williams was not convincing in his explanation of the unexpected visit to his apartment by Jacqueline. A reading of the cold record of his testimony reveals many earmarks of falsity. There was equivocation, evasiveness, vagueness on critical issues and failure to present a plausible, coherent story. The trial judge, additionally, had the advantage of observing Williams while he was testifying. So much is revealed by the tone of voice, facial expressions and general demeanor of a witness. The trial judge, in his search for the truth, ruled against both defendants. We are convinced that his determination was correct." Our review of the record in the present case satisfies us that our evaluation of Williams's testimony in the Calhoun case was entirely correct. We are of the opinion that the evidence in the present case was sufficient to establish the defendant's guilt beyond a reasonable doubt. 
The evidence showed that Jacqueline Hill went to the defendant's apartment, that Williams went to the fourth floor where a quantity of narcotics was later found, that Jacqueline Hill returned shortly thereafter with some narcotics, that Jacqueline twice accused the defendant of selling her narcotics under circumstances which would normally prompt a denial by an innocent person; and that the defendant when confronted with these accusations did *503 not deny them. This evidence was sufficient to establish defendant's guilt. The defendant contends that the trial court erred in making Jacqueline Hill a court's witness. The contention here is that the State was not surprised by Jacqueline's testimony since she had testified at a preliminary hearing that the defendant did not sell her narcotics. In establishing a foundation for examining Hill as a court's witness the State brought out that when the witness had testified before the grand jury she testified that the defendant had sold her narcotics. Therefore, at the time of the trial the State knew that Jacqueline had originally accused the defendant, both verbally and in a written statement, that she later retracted this accusation at the preliminary hearing, but that her last testimony before the trial, which was given before the grand jury, did accuse the defendant of selling her narcotics. Under these circumstances we think that the State had the right to call her as a prosecution witness originally, and grant her immunity, since they could reasonably assume that her testimony at the trial would be the same as her testimony before the grand jury. When she retracted her accusations of the defendant at the trial, the State had the right to claim surprise and to request that she be called as court's witness. We are of the opinion that the action of the trial court in calling Jacqueline Hill as court's witness was within the court's discretion and was not error. 
The defendant also contends that the trial court improperly admitted certain evidence. It is first argued that the trial court permitted the prosecutor to conduct an improper cross-examination of Jacqueline Hill in laying the foundation for her testimony as a court's witness. Specifically, it is contended that the prosecutor should not have been permitted to paraphrase Jacqueline's testimony before the grand jury but should have inquired as to whether she made specific answers to specific questions. The method of cross-examination which defendant claims the court should *504 have required is the method prescribed when a party seeks to impeach a witness. However, when a witness for the State gives surprise testimony the court may permit the State to examine the witness to show that he is giving unexpected testimony, and such procedure does not amount to impeachment. (Cf. People v. Quevreaux, 407 Ill. 176.) It is, therefore, not necessary in such cases to follow the procedure prescribed for impeaching a witness, and the method of cross-examination in such cases rests largely in the discretion of the trial judge. (People v. Wesley, 18 Ill.2d 138.) We find no error in the procedure followed by the State and permitted by the court. The defendant's next contention is that since Jacqueline Hill admitted that she had made contradictory statements in a written statement, the State had no right to introduce the statement itself. The argument finds some support in Illinois Central Railroad Co. v. Wade, 206 Ill. 523, 532, in which we stated that if a witness admitted making a former inconsistent statement, further proof of that fact might be unnecessary. However, in the Wade case, the witness did not make such a statement and the court held that the court should have permitted further proof of the former inconsistent statements. Our holding in the Wade case, therefore, can not be said to be square authority for the proposition now advanced by the defendant. 
The better view would seem to be that a party is not foreclosed from making further proof of the former inconsistent statements even when the witness admits having made such statements, for the party may prefer to have these statements clearly brought out and emphasized. (3 Wigmore on Evidence, sec. 1037; Hapke v. Brandon, 343 Ill. App. 524.) We are of the opinion that the trial court properly admitted the written statement in evidence. It is also urged that the trial court erred in permitting evidence of Jacqueline's conversation with the officers on the second floor landing after she had returned from the *505 defendant's apartment. As we have previously pointed out, her conversation at that time was admissible because it was made in the presence of the defendant and his silence could have been construed as an implied admission of guilt. Finally, it is contended that the trial court erred in permitting the keys to the fourth floor apartment to be admitted in evidence. The abstract shows that when these keys were first offered in evidence both of the defendants objected and the court sustained the objection. After further proof the keys were again offered in evidence and the abstract shows that at this time only the defendant Calhoun objected. The contention now advanced by the defendant Williams is, therefore, not open for consideration upon this writ of error, no objection having been made at the trial. The judgment of the criminal court of Cook County is, therefore, affirmed. Judgment affirmed.
def test():
    assert len(pattern1) == 2, "pattern1 should describe two tokens."
    assert len(pattern2) == 2, "pattern2 should describe two tokens."
    assert (
        len(pattern1[0]) == 1
    ), "The first token of pattern1 only needs one attribute."
    assert any(
        pattern1[0].get(l) == "adidas" for l in ("LOWER", "lower")
    ), "The first token of pattern1 should match 'adidas' in lowercase."
    assert (
        len(pattern1[1]) == 1
    ), "The second token of pattern1 only needs one attribute."
    assert any(
        pattern1[1].get(l) == "zx" for l in ("LOWER", "lower")
    ), "The second token of pattern1 should match 'zx' in lowercase."
    assert (
        len(pattern2[0]) == 1
    ), "The first token of pattern2 only needs one attribute."
    assert any(
        pattern2[0].get(l) == "adidas" for l in ("LOWER", "lower")
    ), "The first token of pattern2 should match 'adidas' in lowercase."
    assert (
        len(pattern2[1]) == 1
    ), "The second token of pattern2 should have one attribute."
    assert any(
        pattern2[1].get(l) == True for l in ("IS_DIGIT", "is_digit")
    ), "The second token of pattern2 should match a digit."
    __msg__.good(
        "Well done! Now let's use these patterns to quickly create some "
        "training data for our model."
    )
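For context, the assertions above would pass for patterns like the following. This is one possible answer to the exercise, not the only one: pattern1 and pattern2 are the names the test expects, and the attribute choices here are an assumption based on what the assertions check.

```python
# spaCy Matcher token patterns: pattern1 matches "adidas zx"
# case-insensitively; pattern2 matches "adidas" followed by any
# digit token (e.g. "adidas 8000").
pattern1 = [{"LOWER": "adidas"}, {"LOWER": "zx"}]
pattern2 = [{"LOWER": "adidas"}, {"IS_DIGIT": True}]
```

With a spaCy Matcher these would be registered via something like matcher.add("PATTERN1", [pattern1]); the test itself only inspects the dictionaries, so no model is needed to run it.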
Vermont’s new victim-offender dialogue program offers a unique opportunity to heal after violent crime

Stories about heartbreaking violent crimes have dominated the local media this fall, prompting many Vermonters to wonder how the families of the victims will ever be able to move on. Linda White, whose daughter was raped and murdered in Texas 20 years ago, knows from experience that it’s never easy. White, an adjunct professor of psychology and philosophy at Sam Houston State University, visited Vermont recently to talk about her own family’s incredible journey of healing. In a recent interview before her presentation at the Barre Opera House, White recalls the tragic beginning of her ordeal. It started on Nov. 18, 1986, when her 26-year-old daughter, Cathy, went missing. White, a down-to-earth “60-something” Louisiana native, describes her late daughter as a kind-hearted, vivacious woman. She had a 5-year-old daughter, Ami, and had recently discovered she was two months pregnant; she and the baby’s father had just announced their engagement. When her daughter didn’t return that night, White hoped that maybe she just needed some time alone. Maybe she was getting cold feet about the wedding, or had decided not to have the baby. “Of course,” she says, “if you have the choice of believing that something horrible has happened to your child, or believing that she’s away thinking about something for a while, you’re going to take option B and not option A.” Several days later, the waiting ended. On Nov. 22, a police officer knocked on White’s door to tell her and her husband that their daughter’s body had been found in a field. Linda White wasn’t home to hear the news. She was out doing errands with Ami. “By the time I got home, four hours later,” she recalls, “everybody was there: our friends, her friends.” As White approached the house, she could see cars through the trees along the family’s winding driveway. 
“That’s how I knew,” she says quietly, “when I saw all the cars. I knew there wouldn’t be all those cars there if they didn’t know anything.” White drove up to the house to let her granddaughter out, but couldn’t bring herself to leave the car. “I knew as soon as she got into the house, everybody would know I was there, and somebody would come out and tell me,” she explains. “I couldn’t go in. I couldn’t move. It just was crushing.” White’s husband and one of their two sons emerged from the house to tell her. “My husband said, ‘It’s the worst you could possibly imagine,’ ” she recalls. “I don’t remember the next words. If he said, ‘She was raped,’ I don’t remember how he said it.” Cathy had met two 15-year-old boys at a gas station and offered them a ride. Once in the car, the boys, who were armed, led her to a remote area, where they each raped her. Hoping to cover up what they had done, they then shot her four times, and tried to set her hair on fire with the car’s cigarette lighter. It’s still hard for White to talk about this—she pauses frequently, tears in her eyes—but she says discussing the crime has helped her come to terms with it. In fact, one of the most important conversations she’s had on the difficult subject was with Gary Brown, one of Cathy’s killers. In 2001, White and her granddaughter Ami traveled to a prison in Wichita Falls, Texas, where they met with Brown and a trained facilitator. The meeting, known as a “victim-offender mediation”—or, more accurately, a “victim-offender dialogue”—lasted eight hours. At the end, remarkably, all three exchanged hugs and posed for a photo. A film crew taped the emotional encounter, as well as interviews with the participants before and after the process, for a documentary called Meeting With a Killer: One Family’s Journey. The film aired on Court TV in the fall of 2001, and later received an Emmy nomination. Several of Vermont’s 11 community-justice centers sponsored screenings of the film last week in St. 
Johnsbury, Barre, Brattleboro and White River Junction, and they brought White to Vermont for post-screening discussions. The events commemorated National Restorative Justice Week, which promotes ways in which victims of crime can heal and offenders can repair some of the damage they’ve done. Coincidentally, the screenings took place as Vermont is launching its own victim-offender dialogue program for people involved in violent crimes, including armed robbery, rape, arson and murder. The service has been occasionally available through consultants, but for the first time, the Department of Corrections will actually train facilitators—nine of them—during a six-day session at the end of the month. Amy Holloway, Vermont’s director of victim services, says the formal victim-offender dialogue program should be operational by January. White and other advocates of these encounters stress they’re not appropriate for—or desired by—all victims. But they say that confronting offenders in person can be cathartic. The American Bar Association endorsed the practice in 1994. Printed on the website of the Victim-Offender Reconciliation Program, the ABA blessing states that the practice “humanizes” the criminal justice system for offenders and victims alike. “By bringing the criminal offenders together face-to-face with their victims,” the ABA says, “it becomes more difficult for the offenders to rationalize their criminal behavior. . . . During such sessions, victims may gain a better understanding of who the offenders are, and of the circumstances that may have contributed to their criminal behavior.” Victims of lower-level crimes have had this opportunity for nearly a decade through Vermont’s many reparative boards. The panels of volunteer community members see vandals, bullies and noisemakers who may have been referred by the police. 
They also deal with offenders on reparative probation, who are required to meet with the board and complete community service as part of their release. Victims are invited, but not required, to attend these proceedings and offer input. But the formal, facilitator-led victim-offender dialogues take the process to a new level. Kathleen Patten, a consultant for the DOC who has facilitated four of these victim-offender dialogues in Vermont, is eager to see the state offer the service. She believes conversations between victims and offenders can offset the sometimes-harmful separation the two sides experience during legal proceedings. “The court experience is really tough for people,” she says. “A man-to-man isn’t allowed. A victim takes that as, ‘If he cared, he wouldn’t have pled for this, or pled for that.’ They’re protected so well—and you can understand why.” Patten describes the meetings she’s facilitated as “profound.” “I think it’s valuable because when a victim is moving forward in their life, or trying to move forward, oftentimes there is one last thing,” she says. “They feel this hole in their whole body until they can ask certain questions—‘What has happened since then?’ or it might be, ‘What were her last words?’ ‘What happened those last few moments before?’ ‘What could I have done that might have prevented it?’ ” “Who knows what it is that they feel,” she adds, “but they feel it so strongly, that until they get that answer, they’re hanging off the edge of a cliff. They need that answer.” Linda White did not feel compelled to talk with Cathy’s killers until nearly 15 years after her daughter’s death. In the late 1980s, she returned to school to become a grief counselor. She earned her bachelor’s degree in psychology in 1990 and her master’s in 1994, and then started teaching college classes. She took those lessons, in psychology and philosophy, into prisons and became an anti-death-penalty activist. 
But she rarely thought about her daughter’s murderers until she began researching victim-offender dialogues for her doctoral dissertation. After interviewing several participants, White realized this was an experience she desired herself. “I wanted it so bad I could taste it,” she told an audience of 50 Vermonters after the film screening Nov. 16 at the Barre Opera House. “One of the reasons I wanted it was because I had taught so many offenders, and had been able to look at so many offenders as human beings. There was a part of me that wanted to know if I could do that with Gary. You know, ‘Am I who I think I am?’ ” For years she had found it too painful to consider her daughter’s final hours, but now she was ready—she realized she wanted to know more. White’s granddaughter, then 20, had a similar question. She wanted to know if her mother had spoken with the killers, and if so, what she had said. Ami also wanted to make sure Brown understood how difficult it had been for her to lose her mother at such a young age. When White began exploring the possibility of doing a victim-offender dialogue, she learned that one of Cathy’s killers was in a mental ward. Offenders who are mentally incompetent are unable to participate in the process. People involved in domestic disputes—even violent ones—may not qualify, either. “In a domestic homicide,” Holloway says, “the surviving family members would be able to have” a dialogue. But, she clarifies, “It’s not to help people get together to improve a relationship. It’s to answer questions that a victim might have about a crime.” Not every offender is willing to take responsibility for his or her actions by facing a harmed person. Facilitators won’t force it. It turns out that Brown was willing to meet with the two women, and to apologize for what he’d done. All three of them prepared for several months beforehand; each met twice with the facilitator. 
White and her granddaughter completed a “grief inventory” in which they clarified their objectives and discussed their expectations. The day before the meeting, the two women toured the prison and saw a sample cell. When the parties finally met around a table in a quiet room at the prison, everyone immediately began to cry. The 15-year-old boy had become a childlike, baby-faced man of 30. He talked with the women about his life—cocaine and crystal meth use by age 9; a foster father who abused him sexually; 10 suicide attempts, the first of which occurred when he was just 8 years old. The Whites talked with Brown about Cathy, and showed him photos. They brought pictures of Ami’s newborn son, the grandson Cathy would never see. Brown apologized repeatedly for his crime, and answered the Whites’ questions. The most powerful moment of the film comes when Brown reveals Cathy’s last words — “I forgive you, and God will, too.” It seems difficult for him to say the words, as if they make it harder for him to ever forgive himself. After the screening, White told the Barre audience that hearing her daughter’s final statement was difficult, but ultimately comforting. “If she could say that in the last few moments of her life,” White reasoned, “then she wasn’t in the kind of terror I had imagined she was.” The dialogue with Brown “was so hard,” she says, “the hardest thing I ever had to do.” But, she adds, “It was amazing.” She felt compassion for Brown, and says she has even forgiven him. She hopes that when he is released—he didn’t make parole in 2004, but could soon—their conversation will have helped him to reform. That’s not to say that she thinks everyone should experience victim-offender mediation. White notes that her own husband and Cathy’s two brothers would not take part in the dialogue. None of them has even been able to watch the entire film. And that’s fine, she says. Who could ever blame them? 
According to White, the only negative comment she’s heard from people is about the hug. “It just never occurred to me not to,” she says with a shrug. “That’s the only way I can answer it.” Victim Services director Amy Holloway calls White’s experience extraordinary. “If those things happen, like compassion, forgiveness and understanding, that’s a gift,” she cautions, “but that’s not necessarily what’s going to happen. There’s no presumption of forgiveness, there’s no presumption of anything. It’s just ‘I’m a victim and I have certain needs that need to get met, and this person who is sitting across the table is the person who can meet them.’ ” Holloway adds that it’s rare for participants in these dialogues to talk about them publicly, much less to film them. The conversations are painful, and not many people are willing or able to speak openly about them. Given the sensitive nature of the process, officials are reluctant to contact past participants to talk with the media, and the ones contacted for this story did not respond. That includes two Vermonters involved in a victim-offender dialogue Patten facilitated: She tells the story of a drunk driver who killed a father and son several years ago. The offender recently was released from prison. Before he left, he met with the wife and sister of his older victim. Patten says both conversations went well, but the one with the sister was the more productive of the two. “She asked, ‘How are you going to continue this process when no one’s hanging over your head? After you’re out of prison? Because I’m going to need you to prove it to me for the rest of both of our lives that you’re not going to be back to who you were that day,’ ” Patten recalls. He told her that he intended to speak to young people about the perils of drunk driving and what it did to his life. Coincidentally, she had the same goal. 
Patten says the pair has made multiple joint visits to Vermont schools and correctional facilities over the past few years. The parole board initially was involved in helping them establish their relationship outside of prison. “We can’t take that kind of connection lightly,” she explains. “We have to be careful. But it worked beautifully.” Patten says the man has managed to integrate back into the community successfully, and speculates that the victim-offender dialogue had something to do with it. David Peebles, DOC restorative and community justice director, says a new study, to be released before the end of the year, will likely show that the Community Justice Center reparative boards—which bring lower-level offenders together with victims and community members—are working well. He says they’re reducing recidivism, and making participants feel better about their communities. But he doesn’t want to rush into dialogues about more serious crimes. “The concern is that it becomes sort of a novelty, and people all want to try and do this,” he says. “I think one size does not fit all. It’s important for people to really do a lot of assessment here to see when and if it’s appropriate.” He points out that in the wrong hands, these dialogues could easily become “volatile.” “I want to make sure that people are well-trained, well-skilled, and have developed the right assessment tools, and that we go about this in a very cautious way,” he says. Still, Peebles expects these dialogues to be a powerful new tool to help Vermonters deal with violent crime. Holloway says it might be valuable even to those who don’t choose to go through with it. “Whether we get a hundred people to do it, or whether we get five people,” she says, “the fact that victims know they could do this if they wanted to is very empowering.” She has already spoken with one woman whose family member was murdered years ago in Rutland. 
Says Holloway, “She said to me, ‘I don’t know if I’ll ever do it, but knowing that I can makes me feel like less of a victim.’ ” Cathy Resmer is a staff writer for Burlington, Vt., newsweekly Seven Days, where this article first appeared.
Pulmonary hypertension in pregnancy. Pulmonary hypertension is a contraindication to pregnancy, as there is a very high mortality in the puerperium, when fluid balance is most tenuous. There is a fine line between sufficient fluid to maintain pulmonary perfusion and too much fluid, triggering pulmonary overload. Surgical sterilization is strongly advised in this patient population.
One of the most exciting announcements for VR fans at E3 this year was that, not only was the new Sega title Alien: Isolation going to officially support the Oculus Rift, but that it was playable as part of Oculus’ demo set too! We of course made a beeline to the booth to try it out. Above is some off-screen gameplay footage showing some really interesting mechanics that utilise the positional tracking capabilities of the Oculus Rift DK2 (which is due to ship next month). In particular, after pulling out the franchise’s signature motion tracker, the player can lean in to take a better look at the device. Enjoy the video; we’ll have detailed impressions on all the demos at Oculus VR’s booth later on.
import pytest

from dagster_k8s.job import (
    K8S_RESOURCE_REQUIREMENTS_KEY,
    USER_DEFINED_K8S_CONFIG_KEY,
    UserDefinedDagsterK8sConfig,
    get_user_defined_k8s_config,
)

from dagster import pipeline, solid
from dagster.core.errors import DagsterInvalidConfigError

# CPU units are millicpu
# Memory units are MiB


def test_backcompat_resource_tags():
    @solid(
        tags={
            K8S_RESOURCE_REQUIREMENTS_KEY: {
                "requests": {"cpu": "250m", "memory": "64Mi"},
                "limits": {"cpu": "500m", "memory": "2560Mi"},
            }
        }
    )
    def resource_tags_solid(_):
        pass

    user_defined_k8s_config = get_user_defined_k8s_config(resource_tags_solid.tags)

    assert user_defined_k8s_config.container_config
    assert user_defined_k8s_config.container_config["resources"]

    resources = user_defined_k8s_config.container_config["resources"]

    assert resources["requests"]["cpu"] == "250m"
    assert resources["requests"]["memory"] == "64Mi"
    assert resources["limits"]["cpu"] == "500m"
    assert resources["limits"]["memory"] == "2560Mi"


def test_bad_deprecated_resource_tags():
    @pipeline(tags={K8S_RESOURCE_REQUIREMENTS_KEY: {"other": {"cpu": "250m", "memory": "64Mi"}}})
    def resource_tags_pipeline():
        pass

    with pytest.raises(DagsterInvalidConfigError):
        get_user_defined_k8s_config(resource_tags_pipeline.tags)


def test_user_defined_k8s_config_tags():
    @solid(
        tags={
            USER_DEFINED_K8S_CONFIG_KEY: {
                "container_config": {
                    "resources": {
                        "requests": {"cpu": "250m", "memory": "64Mi"},
                        "limits": {"cpu": "500m", "memory": "2560Mi"},
                    }
                }
            }
        }
    )
    def my_solid(_):
        pass

    user_defined_k8s_config = get_user_defined_k8s_config(my_solid.tags)

    assert user_defined_k8s_config.container_config
    assert user_defined_k8s_config.container_config["resources"]

    resources = user_defined_k8s_config.container_config["resources"]

    assert resources["requests"]["cpu"] == "250m"
    assert resources["requests"]["memory"] == "64Mi"
    assert resources["limits"]["cpu"] == "500m"
    assert resources["limits"]["memory"] == "2560Mi"

    @solid
    def no_resource_tags_solid(_):
        pass

    user_defined_k8s_config = get_user_defined_k8s_config(no_resource_tags_solid.tags)
    assert user_defined_k8s_config == UserDefinedDagsterK8sConfig()


def test_bad_user_defined_k8s_config_tags():
    @pipeline(tags={USER_DEFINED_K8S_CONFIG_KEY: {"other": {}}})
    def my_solid():
        pass

    with pytest.raises(DagsterInvalidConfigError):
        get_user_defined_k8s_config(my_solid.tags)
Dose- and time-related quantitative and qualitative alterations in the granulocyte/macrophage progenitor cell (GM-CFC) compartment of dogs after total-body irradiation. The effects of single-dose total-body X irradiation (TBI) on the granulocyte/macrophage progenitor cell (GM-CFC) population in bone marrow and blood of dogs were studied for dose levels of 0.78 and 1.57 Gy up to 164 days after irradiation. The blood GM-CFC concentration per milliliter was depressed in the first 7 days in a dose-dependent fashion to 5-16% of normal after 0.78 Gy and to between 0.7 and 5% after 1.57 Gy. The bone marrow GM-CFC concentration per 10^5 mononuclear cells, on the other hand, was initially reduced to about 45% of the average pre-irradiation value after 0.78 Gy and to 23% after 1.57 Gy. The regeneration within the first 30 to 40 days after TBI of the blood granulocyte values and the repopulation of the bone marrow GM-CFC compartment were associated with both a dose-dependent increase in the S-phase fraction of the bone marrow GM-CFC and a dose-dependent increase in colony-stimulating activity (CSA) in the serum. The slow repopulation of circulating blood GM-CFC to only about 50% of normal even between days 157 and 164 after TBI could be related to a correspondingly delayed reconstitution of the mobilizable GM-CFC subpopulation in the bone marrow.
Alcohol was a factor in a deadly crash in Albion, according to Orleans County Sheriff's Deputies. Terry Moyer, 17, of Albion was riding in the back seat early Sunday morning while a friend was driving and another teenager was in the front seat. Deputies said the car was operated by 18-year-old Joseph Pearl of Albion and was traveling at a high rate of speed on Culvert Road where it tunnels under the Erie Canal. The car went out of control as it came out of the tunnel and slammed into a tree. Moyer was ejected from the back seat. He was rushed to Medina Memorial, where he died later in the morning. The driver, Pearl, and the front seat passenger, 18-year-old Randall Vanhouten, were injured. Pearl was taken to Strong Memorial in Rochester while Vanhouten was taken to Medina Memorial. Conditions of Pearl and Vanhouten were not available. An Orleans County pilot escaped serious injury over the weekend when he was forced to make an emergency landing after his engine failed. 83-year-old Glenn Woolston was alone in his single-engine experimental aircraft on Friday night. He was forced to set down in a bean field in the Town of Carlton. He was treated at the scene for minor injuries. A LeRoy man has been arrested for allegedly targeting women at Batavia’s Target store. 39-year-old Douglas Uberty was just arrested, but the incident dates back to April. Deputies said Uberty hid in a women’s fitting room and peered over the wall into an adjacent room where one woman was trying on clothes. Lawmen also said Uberty was following women around the store trying to look up their skirts. Uberty was charged with disorderly conduct. Can a guy in a clown suit, driving a golf cart while allegedly drunk, get away from police? Apparently not. James Straub, 37, of Stoneham, Massachusetts, was spotted last night on Clinton Street driving a golf cart west from Terry Hills Golf Course. 
Deputies say Straub was dressed as a clown when they stopped him and arrested him for DWI and refusing to take a breath test.
Q: How does the proxy mechanism work with proxy settings in a browser?

We often find columns like Address and Port in web browser proxy settings. I know that when we use a proxy to visit a page, the web browser requests the web page from the proxy server, but what I want to know is how the whole mechanism works. I have observed that many ISPs allow access only to a single IP (of their website) after we have exhausted our free data usage. But when we enter the site we want to browse in the proxy URL field and then type in the allowed IP, the site gets loaded. How does this work?

A: In general, your browser simply connects to the proxy address & port instead of whatever IP address the DNS name resolved to. It then makes the web request as per normal. The web proxy reads the headers, uses the "Host" header of HTTP/1.1 to determine where the request is supposed to go, and then makes that request itself, relaying all remaining data in both directions. Proxies will typically also do caching, so if another person requests the same page from that proxy, it can just return the previous result. (This is simplified -- caching is a complex topic.) Since the proxy is in complete control of the connection, it can choose to route the request elsewhere, scrape request and reply data, inject other things (like ads), or block you altogether. Use SSL to protect against this. Some web proxies are "transparent". They reside on a gateway through which all IP traffic must pass and use the machine's networking stack to redirect outgoing connections to port 80 to a local port instead. It then behaves the same as though a proxy was defined in the browser. Other proxies, like SOCKS, have a dedicated protocol that allows non-HTTP requests to be made as well.
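Concretely, the main thing that changes on the wire is the shape of the browser's request: with a proxy configured, the request line carries the absolute URL rather than just the path. A minimal sketch of this (no network I/O is performed; the proxy address `proxy.example.net:3128` is purely illustrative):

```python
# Sketch of the request a browser sends when an HTTP proxy is configured.

def build_proxy_request(origin_host, path="/"):
    # With a proxy, the request line carries the FULL URL (absolute form),
    # not just the path, and the Host header still names the origin server.
    # The proxy parses these to decide where to relay the request.
    return (
        f"GET http://{origin_host}{path} HTTP/1.1\r\n"
        f"Host: {origin_host}\r\n"
        "Connection: close\r\n"
        "\r\n"
    )

request = build_proxy_request("www.example.com", "/index.html")
# The browser would open a TCP connection to the proxy (e.g. proxy.example.net:3128)
# and write these bytes; the proxy relays the request to www.example.com and
# pipes the response back.
print(request.splitlines()[0])  # GET http://www.example.com/index.html HTTP/1.1
```

Compare this with a direct (non-proxied) request, whose first line would be just `GET /index.html HTTP/1.1` sent straight to `www.example.com`.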
Explicit correlation treatment of the potential energy surface of CO2 dimer. We present an extensive study of the four-dimensional potential energy surface (4D-PES) of the carbon dioxide dimer, (CO2)2. This PES is developed over the set of intermolecular coordinates. The electronic computations are carried out at the explicitly correlated coupled cluster method with single, double, and perturbative triple excitations [CCSD(T)-F12] level of theory in connection with the augmented correlation-consistent aug-cc-pVTZ basis set. An analytic representation of the 4D-PES is derived. Our extensive calculations confirm that "Slipped Parallel" is the most stable form and that the T-shaped structure corresponds to a transition state. Later on, this PES is employed for the calculations of the vibrational energy levels of the dimer. Moreover, the temperature dependence of the dimer second virial coefficient and of the first spectral moment of rototranslational collision-induced absorption spectrum is derived. For both quantities, a good agreement is found between our values and the experimental data for a wide range of temperatures. This attests to the high quality of our PES. Generally, our PES and results can be used for modeling CO2 supercritical fluidity and examination of its role in planetary atmospheres. It can be also incorporated into dynamical computations of CO2 capture and sequestration. This allows deep understanding, at the microscopic level, of these processes.
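For readers unfamiliar with the second virial coefficient mentioned above: classically it is obtained by integrating a Boltzmann factor of the pair potential over separation. The sketch below uses a generic isotropic Lennard-Jones model with assumed, roughly CO2-like parameters (epsilon/k_B ~ 245 K, sigma ~ 3.75 Å), not the anisotropic 4D-PES of the paper:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def second_virial_classical(u, T, r_min=1e-11, r_max=5e-9, n=20000):
    """Classical B2(T) = -2*pi * Int_0^inf (exp(-u(r)/(k_B T)) - 1) r^2 dr,
    in m^3 per molecule, via trapezoidal quadrature (u in J, r in m)."""
    dr = (r_max - r_min) / n
    total = 0.0
    for i in range(n + 1):
        r = r_min + i * dr
        w = 0.5 if i in (0, n) else 1.0
        total += w * (math.exp(-u(r) / (K_B * T)) - 1.0) * r * r
    return -2.0 * math.pi * total * dr

# Illustrative Lennard-Jones model (assumed parameters, NOT the paper's PES).
EPS = 245.0 * K_B     # well depth, J
SIGMA = 3.75e-10      # size parameter, m

def u_lj(r):
    x = (SIGMA / r) ** 6
    return 4.0 * EPS * (x * x - x)

# Below the Boyle temperature attraction dominates and B2 is negative;
# well above it the repulsive core wins and B2 turns positive.
print(second_virial_classical(u_lj, 300.0) < 0)   # True
print(second_virial_classical(u_lj, 2000.0) > 0)  # True
```

For the anisotropic dimer potential of the paper, the same Boltzmann average is taken over the relative orientations as well as the separation, but the temperature dependence follows this same structure.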
// Distributed under the terms of the MIT license // Test case submitted to project by https://github.com/practicalswift (practicalswift) // Test case found by fuzzing class b: P { struct A { { } enum b { init<T { protocol a : b<T { class case c,
Q: Should domain objects implement IXmlSerializable?

I'm building a REST API that exposes data as XML. I've got a whole bunch of domain classes in my domain layer that are intended for consumption by both the service layer behind the API, and the client API that we will be providing to customers. (Customers do have the option of interacting directly with the REST API, but the client API simplifies things.) I want to keep my domain classes clean of any data persistence logic, but I'm struggling to figure out whether it's OK for the domain classes to implement IXmlSerializable to help simplify the process of serializing the data that is exposed through and retrieved from the API. I started out thinking that I'd keep the domain classes free of any serialization logic and instead decorate them with serialization behaviors, e.g. wrap the domain object inside an object that handles the serialization. Am I making things more complicated than they need to be? Any thoughts on how I should approach this? Thanks!

A: Domain classes should be concerned with business logic only, not with persistence or serialization. You should create a set of Data Transfer Object (DTO) classes, each corresponding to one of the domain classes. These classes would only contain the properties, from the domain classes, that you have decided to expose. This permits the domain classes to have properties which are not exposed through your persistence or serialization layers. Only the DTO objects would be serialized and deserialized. You may then find it convenient to create static "translate" methods to translate between the domain and DTO objects.
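The translate-method idea in the answer can be sketched generically. The sketch below uses Python dataclasses standing in for the C# classes, with illustrative names (`Customer`, `CustomerDto`) that are not from any specific project:

```python
from dataclasses import dataclass

# Domain class: business logic plus internal state we do NOT want on the wire.
@dataclass
class Customer:
    id: int
    name: str
    credit_limit: float  # internal detail, deliberately not exposed

    def can_order(self, amount: float) -> bool:
        return amount <= self.credit_limit

# DTO: only the properties chosen for exposure, and no behavior.
@dataclass
class CustomerDto:
    id: int
    name: str

    @staticmethod
    def from_domain(customer: "Customer") -> "CustomerDto":
        # The static "translate" method the answer describes.
        return CustomerDto(id=customer.id, name=customer.name)

dto = CustomerDto.from_domain(Customer(id=7, name="Acme", credit_limit=500.0))
# Only the exposed fields survive translation; credit_limit never
# leaves the domain layer, and only the DTO is ever serialized.
```

The same shape applies in C#: the DTO (not the domain class) would carry the XML serialization attributes or implement IXmlSerializable, keeping the domain model clean.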
.class final synthetic Lcom/tencent/mm/plugin/wallet_index/ui/WalletOpenFingerprintPayRedirectUI$3;
.super Ljava/lang/Object;
.source "SourceFile"


# annotations
.annotation system Ldalvik/annotation/EnclosingClass;
    value = Lcom/tencent/mm/plugin/wallet_index/ui/WalletOpenFingerprintPayRedirectUI;
.end annotation

.annotation system Ldalvik/annotation/InnerClass;
    accessFlags = 0x1008
    name = null
.end annotation


# static fields
.field static final synthetic cxZ:[I


# direct methods
.method static constructor <clinit>()V
    .locals 3

    .prologue
    .line 73
    invoke-static {}, Lcom/tencent/mm/pluginsdk/ui/AutoLoginActivity$a;->values()[Lcom/tencent/mm/pluginsdk/ui/AutoLoginActivity$a;
    move-result-object v0
    array-length v0, v0
    new-array v0, v0, [I
    sput-object v0, Lcom/tencent/mm/plugin/wallet_index/ui/WalletOpenFingerprintPayRedirectUI$3;->cxZ:[I

    :try_start_0
    sget-object v0, Lcom/tencent/mm/plugin/wallet_index/ui/WalletOpenFingerprintPayRedirectUI$3;->cxZ:[I
    sget-object v1, Lcom/tencent/mm/pluginsdk/ui/AutoLoginActivity$a;->jcM:Lcom/tencent/mm/pluginsdk/ui/AutoLoginActivity$a;
    invoke-virtual {v1}, Lcom/tencent/mm/pluginsdk/ui/AutoLoginActivity$a;->ordinal()I
    move-result v1
    const/4 v2, 0x1
    aput v2, v0, v1
    :try_end_0
    .catch Ljava/lang/NoSuchFieldError; {:try_start_0 .. :try_end_0} :catch_2

    :goto_0
    :try_start_1
    sget-object v0, Lcom/tencent/mm/plugin/wallet_index/ui/WalletOpenFingerprintPayRedirectUI$3;->cxZ:[I
    sget-object v1, Lcom/tencent/mm/pluginsdk/ui/AutoLoginActivity$a;->jcO:Lcom/tencent/mm/pluginsdk/ui/AutoLoginActivity$a;
    invoke-virtual {v1}, Lcom/tencent/mm/pluginsdk/ui/AutoLoginActivity$a;->ordinal()I
    move-result v1
    const/4 v2, 0x2
    aput v2, v0, v1
    :try_end_1
    .catch Ljava/lang/NoSuchFieldError; {:try_start_1 .. :try_end_1} :catch_1

    :goto_1
    :try_start_2
    sget-object v0, Lcom/tencent/mm/plugin/wallet_index/ui/WalletOpenFingerprintPayRedirectUI$3;->cxZ:[I
    sget-object v1, Lcom/tencent/mm/pluginsdk/ui/AutoLoginActivity$a;->jcN:Lcom/tencent/mm/pluginsdk/ui/AutoLoginActivity$a;
    invoke-virtual {v1}, Lcom/tencent/mm/pluginsdk/ui/AutoLoginActivity$a;->ordinal()I
    move-result v1
    const/4 v2, 0x3
    aput v2, v0, v1
    :try_end_2
    .catch Ljava/lang/NoSuchFieldError; {:try_start_2 .. :try_end_2} :catch_0

    :goto_2
    return-void

    :catch_0
    move-exception v0
    goto :goto_2

    :catch_1
    move-exception v0
    goto :goto_1

    :catch_2
    move-exception v0
    goto :goto_0
.end method
San Antonio, Florida

Detailed Job Description

Department: Graduate/Weekend Admissions

The Re-Enrollment Advisor is responsible for creating a supportive environment that promotes student satisfaction and retention through the development of ongoing relationships with new and enrolled students.

Basic Function: Responsible for maintaining a student database and meeting retention goals. Assist students with pre-registration and respond to questions regarding degree requirements, programs, policies and procedures, financial assistance programs, transfer credit assessment, and general university services. Make appropriate referrals to campus officials as may be necessary throughout the educational experience. Contact enrolled students in advance of the term start date and throughout the academic program, to ensure that each student’s initial experience with registration, financial aid, ordering books and technology setup has prepared the student to start as planned. Serve as the principal point of contact for students and address student issues, referring students to appropriate academic advisors as needed. Serve as a student liaison with other functional areas of the organization, and support university practices which may contribute to increased student satisfaction and retention.

Required Education/Experience/Skills: This position requires strong communication skills via phone and email to ensure effective communication and service for our student populations.
Cool crisp evenings are the hallmark of the fall season. And with that comes the season’s harvest. See to the needs of your garden–whether you simply grew a pot of mint, or an entire field full of fruits and vegetables. You will find a very useful October gardening calendar based on your location [HERE], which will list for you the main garden chores, including planting, fertilizing, controlling pests, and maintenance. Also, take notes on this year’s garden. And remember to read your garden journal from last year. What went well and what did not? Which pests and diseases were about? What hints and reminders did you leave for your future self? Above all, enjoy the fruits of your labor! Devotedly, .♥ والدة – walidah ♥. P.S. October is a great month to plant garlic and other bulbs such as tulips. TV - Yes or No? [PDF] The Permanent Committee of Scholars has stated (in a fatwa) that the television is an instrument that in and of itself has no ruling regarding it; rather, the ruling applies to its use -- Dr. Saleh as-Saleh (rahimahullaah)
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.superbiz;

import org.apache.commons.lang3.StringUtils;
import org.apache.openejb.resource.jdbc.cipher.PasswordCipher;
import org.junit.BeforeClass;
import org.junit.Test;

import javax.annotation.Resource;
import javax.ejb.embeddable.EJBContainer;
import javax.naming.Context;
import javax.sql.DataSource;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import java.util.Properties;

import static org.junit.Assert.assertNotNull;

public class DataSourceCipheredExampleTest {

    private static final String USER = DataSourceCipheredExampleTest.class.getSimpleName().toUpperCase();
    private static final String PASSWORD = "YouLLN3v3rFindM3";
    private static final String DATASOURCE_URL = "jdbc:hsqldb:mem:protected";

    @Resource
    private DataSource dataSource;

    @BeforeClass
    public static void addDatabaseUserWithPassword() throws Exception {
        Class.forName("org.hsqldb.jdbcDriver");
        Connection conn = DriverManager.getConnection(DATASOURCE_URL, "sa", "");
        conn.setAutoCommit(true);
        Statement st = conn.createStatement();
        st.executeUpdate("CREATE USER " + USER + " PASSWORD '" + PASSWORD + "';");
        st.close();
        conn.commit();
        conn.close();
    }

    @Test
    public void accessDatasource() throws Exception {
        // define the datasource
        Properties properties = new Properties();
        properties.setProperty("ProtectedDatasource", "new://Resource?type=DataSource");
        properties.setProperty("ProtectedDatasource.JdbcDriver", "org.hsqldb.jdbcDriver");
        properties.setProperty("ProtectedDatasource.JdbcUrl", DATASOURCE_URL);
        properties.setProperty("ProtectedDatasource.UserName", USER);
        properties.setProperty("ProtectedDatasource.Password", "fEroTNXjaL5SOTyRQ92x3DNVS/ksbtgs");
        properties.setProperty("ProtectedDatasource.PasswordCipher", "Static3DES");
        properties.setProperty("ProtectedDatasource.JtaManaged", "true");

        // start the context and make the junit test injections
        EJBContainer container = EJBContainer.createEJBContainer(properties);
        Context context = container.getContext();
        context.bind("inject", this);

        // test the datasource
        assertNotNull(dataSource);
        assertNotNull(dataSource.getConnection());

        // close the context
        container.close();
    }

    @Test
    public void accessDatasourceWithMyImplementation() throws Exception {
        // define the datasource
        Properties properties = new Properties();
        properties.setProperty("ProtectedDatasource", "new://Resource?type=DataSource");
        properties.setProperty("ProtectedDatasource.JdbcDriver", "org.hsqldb.jdbcDriver");
        properties.setProperty("ProtectedDatasource.JdbcUrl", "jdbc:hsqldb:mem:protected");
        properties.setProperty("ProtectedDatasource.UserName", USER);
        properties.setProperty("ProtectedDatasource.Password", "3MdniFr3v3NLLuoY");
        properties.setProperty("ProtectedDatasource.PasswordCipher", "reverse");
        properties.setProperty("ProtectedDatasource.JtaManaged", "true");

        // start the context and make the junit test injections
        EJBContainer container = EJBContainer.createEJBContainer(properties);
        Context context = container.getContext();
        context.bind("inject", this);

        // test the datasource
        assertNotNull(dataSource);
        assertNotNull(dataSource.getConnection());

        // close the context
        container.close();
    }

    public static class ReverseEncryption implements PasswordCipher {

        @Override
        public char[] encrypt(String plainPassword) {
            return StringUtils.reverse(plainPassword).toCharArray();
        }

        @Override
        public String decrypt(char[] encryptedPassword) {
            // reversing twice restores the original password
            return new String(encrypt(new String(encryptedPassword)));
        }
    }
}
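For the "reverse" alias in the second test to resolve to ReverseEncryption, OpenEJB has to be able to discover the implementation by name. In the OpenEJB examples this is typically done with a service file on the test classpath whose directory is the PasswordCipher interface name and whose file name is the alias, containing the implementing class. The exact path below is an assumption based on the interface imported above; older OpenEJB versions used the package without ".cipher", so check the version in use:

```
# src/main/resources/META-INF/org.apache.openejb.resource.jdbc.cipher.PasswordCipher/reverse
org.superbiz.DataSourceCipheredExampleTest$ReverseEncryption
```

The built-in "Static3DES" cipher used in the first test needs no such file, since it ships with OpenEJB.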
[26.12.2003] Kyrgyzstan to Approve Privatization Program for 2004-2006

The Kyrgyz government has developed a program for privatizing state property between 2004 and 2006, deputy chairman of the Committee for State Property and Direct Investments Anatoly Makarov told a government conference on Friday. He said the program includes a list of eight strategic facilities in which the government stakes can be privatized only if the parliament approves individual privatization programs. These include four electricity-distributing companies, Kyrgyzgaz, the Kyrgyz national airlines, Manas international airport and the Bishkek heating network. The government plans to privatize 141 facilities between 2004 and 2006 and collect 400 million soms in receipts.
It’s possible to live in one of the nicest apartments in New York City and still have a view of a brick wall. The subway runs 24 hours, which means there’s never a time of day that someone isn’t hot-stepping pathetically down the stairs, about to just miss a train. If you love a bodega, it will turn into an ATM lobby; if you get along with a neighbor, she will move uptown. There are hundreds of restaurants and umpteen sports teams, and at any given moment a lot of them are not doing so hot. New York has a true embarrassment of riches; right now, just about every one of its major professional athletic franchises is just plain embarrassing in one way or another. The Mets outdo any attempts at performance art. The Jets have gone through quarterbacks the way the Knicks cycle through general managers. Even if you extend the metropolitan borders to encompass the New Jersey Devils and both teams from Buffalo, the situation is equally grim. Since New Yorkers are a famously competitive and status-obsessed people, I figured I’d determine which team can be crowned Bleecker Street’s Bleakest. This is not a historical survey, but rather a current-day, snapshot-in-time power ranking of the powerless. Here, a look at the biggest Drags of New York, ranked in descending order from delight to despair. Bruce Bennett/Getty Images Good Enough to Break Your Heart 8. New York Islanders Record: 19-8-0, second in Metropolitan Division Hovering near last place was a familiar position for the dear, sweet Islanders for so long that it’s surprising to find them at the bottom spot of this particular ranking. But as things currently stand, they’re easily the least depressing team in the tri-state area. What’s especially thrilling to long-suffering Islanders fans is the team’s concentration of youth. Three of New York’s top four scorers, John Tavares, Brock Nelson, and Ryan Strome, were born in the 1990s. 
This season’s success has been bittersweet, however, as it’s the Isles’ final year in their much-maligned but secretly beloved Nassau Coliseum. (They move into the trendy — but non-hockey-optimized — Barclays Center next season.) Already the Nassau reminiscences have begun to flow. And that’s not the only cloud on the horizon; in the past week, a number of key players, including three defensemen, have been sidelined with injuries. Still, none of New York’s other teams present any challenge to the Islanders’ status as the city’s worst worst team. Bleakest moment: When a happy Islanders fan rang up Mike Francesa the other day to celebrate and was told to call back in a few months. 7. Brooklyn Nets Record: 8-11, second in Atlantic Division It’s true! If the playoffs began today, the 8-11 Brooklyn Nets would squeeze their way in as the eighth and final seed! This isn’t exactly an impressive thing in a woeful Eastern Conference, but it’s also more than any other NYC team, save for the Islanders, can say about its current postseason chances. The Nets also played host to the duke and duchess of Cambridge on Monday evening, an exciting thing, to be sure. In the days leading up to the couple’s arrival in NYC, the royal handlers made it known that slovenly American attire would not be tolerated in their presence. It’s a good thing this edict was softened days later, though, because (a) have you ever seen sportswriters? and (b) LeBron James totally didn’t wear the prescribed “smart attire” of “jacket and tie” upon meeting Will-n-Kate. I’m sad he changed out of comic sans, though. Bleakest moment: Losing last night by 22 to Cleveland in front of British and American royalty. (Jay and Bey were there too.) 6. New York Rangers Record: 12-10-4, fourth in Metropolitan Division The Rangers made it to the Stanley Cup final last season, but this year the team has gotten off to a start that’s seen squandered leads, serious injuries, and nearly as many goals surrendered as scored. 
For every positive, like Rick Nash’s productive season or Marty St. Louis’s setup plays, there’s an offsetting setback: Chris Kreider, with a hurt neck, is the latest to be sidelined. Monday was typical of their state of affairs. They blew a 3-1 lead to Pittsburgh and won the game in overtime when Kevin Klein scored the game-winning goal. Of course, he also almost lost an ear in the process. It’s been that kind of season. Bleakest moment: Being on the unfortunate end of this highlight reel goal. Rich Schultz/Getty Images Blissfully Idle Baseball Teams 5. New York Mets Last season: 79-83, second in NL East This is not a contest of the most historically upsetting New York franchises, alas, or else the Mets would have a real shot at glory. “The Mets are bleak in a different sphere,” Bill Barnwell recently sighed to me. “They’re bleak on an institutional level.” At the moment, everyone is waiting for Matt Harvey to return — but in his absence a number of young pitchers, like Jacob deGrom, stepped up. In a campaign where they were expected to outright crumble, the Mets finished with a respectable record. As the Yankees continue to fall to pieces, the Mets seem to be actually building something interesting. Bleakest moment: Waiting for the inevitable chronic and/or career-ending injury to Matt Harvey. (Is it just us or is he looking, uh, rounder?) 4. The New York Yankees Last season: 84-78, second in AL East Dark days for you Yankees fans out there. Derek Jeter is gone, A-Rod might be back, the closer position is now open, and no one is remotely interested in sympathizing with your plight. Bleakest moment: This New York Daily News headline from a few days ago: “Robinson Cano reflects on bolting Yankees to join Mariners: ‘I’m super happy.’” Jim McIsaac/Getty Images Testing the Limits of Unconditional Love 3. 
New York Giants Record: 4-9, third in NFC East It’s weird to know you’re witnessing the beginning of the end of an era, the sunset of a coach-quarterback combo that has been in place for a decade. It’s unlikely at this point that Tom Coughlin will captain the New York Giants next season, and people got super jumpy on Sunday when, following a rare win, Eli Manning was slow to emerge from the trainer’s room after the game. This Giants season has been the kind that necessitates contemplation of an uncertain future. It has been the very picture of bleak. On the other hand: ODELL BECKHAM JUNIOR, BITCHES!!!!!!!!!!!!!!!!!!!!!! [Editor’s note: The Baker-Barnwell Axis of Odell Homerism is not to be trifled with.] Bleakest moment: Eli’s evening of elegance. 2. New York Jets Record: 2-11, fourth in AFC East I know what you’re thinking: The Jets oughta be no. 1! Can’t they at least have this one victory? Indeed, you could be forgiven for having Rex Ryan levels of misplaced confidence in the team’s potential to come out on top. The Jets are 2-11 this season after this weekend’s overtime loss. They are a living, breathing embodiment of the ol’ “if you have two starting quarterbacks, you don’t have one” chestnut. Geno Smith is regressing all the way back to childhood before our very eyes. I just went to the team’s “NEWS” page on NFL.com and the four most recent headlines are: Rex defends Woody Johnson as criticism mounts Rex, OC Mornhinweg deny ‘toxic’ atmosphere in Jets locker room Jets veteran: Management has quit on us Vick: I wasn’t benched Another glorious season for Gang Green! But still: There are only a couple of more games before someone makes all the bad men go away and the Jets are bad enough that they have a legit shot to draft first overall, which at least provides a welcome distraction. I won’t go so far as to say there’s a light at the end of the tunnel, but at least the dirt walls have temporarily ceased caving in. 
Bleakest moment: Every word of this press conference. 1. New York Knicks Record: 4-18, fourth in Atlantic Division Look, everyone knew this was gonna be one of those seasons. When Carmelo Anthony contemplated bouncing this summer — and he sure has taken great care to make sure we didn’t forget — it became clear that, with or without him, the cap-strapped Knicks would be enduring quite a year of transition until they gained better flexibility at the end of the season. But between the NBA’s coaching Whac-A-Mole and the arrival in New York of Phil Jackson (and everyone’s endless chatter about his triangle offense), we seemed to forget just how bad the Knicks were expected to be. It’s now easy to remember. This has been a dismal season for the Knicks, one so bad that the Philadelphia 76ers, who took more than a month to earn their first win, now stand only a game back of the blue-and-orange. (The only good news for the Knicks is that, by some miracle, they actually have a first-round draft pick this season.) And while Jets fans are just a few weeks away from being able to wash their hands of their team, Knicks fans have the added drudgery of knowing there’s still a long, bad season ahead. “It’s about a loser’s mentality,” Phil Jackson said on Monday. “It’s not about the skill or the talent level.” Well, when you put it that way … Bleakest moment: MSG officials forcibly removing a Nets fan with a prosthetic leg from the arena; Melo wearing this hat.
--TEST--
gmp_prob_prime() basic tests
--SKIPIF--
<?php if (!extension_loaded("gmp")) print "skip"; ?>
--FILE--
<?php
var_dump(gmp_prob_prime(10));
var_dump(gmp_prob_prime("7"));
var_dump(gmp_prob_prime(17));
var_dump(gmp_prob_prime(-31));
var_dump(gmp_prob_prime("172368715471481723"));

var_dump(gmp_prob_prime(10));
var_dump(gmp_prob_prime("7"));
var_dump(gmp_prob_prime(17));
var_dump(gmp_prob_prime(-31));
var_dump(gmp_prob_prime("172368715471481723"));

for ($i = -1; $i < 12; $i++) {
    var_dump(gmp_prob_prime((773*$i)-($i*7)-1, $i));
    $n = gmp_init("23476812735411");
    var_dump(gmp_prob_prime(gmp_add($n, $i-1), $i));
}

$n = gmp_init("19481923");
var_dump(gmp_prob_prime($n));
$n = gmp_init(0);
var_dump(gmp_prob_prime($n));

try {
    var_dump(gmp_prob_prime(array()));
} catch (\TypeError $e) {
    echo $e->getMessage() . \PHP_EOL;
}

echo "Done\n";
?>
--EXPECT--
int(0)
int(2)
int(2)
int(2)
int(0)
int(0)
int(2)
int(2)
int(2)
int(0)
int(0)
int(0)
int(0)
int(0)
int(0)
int(0)
int(2)
int(0)
int(2)
int(0)
int(0)
int(0)
int(0)
int(0)
int(0)
int(0)
int(0)
int(0)
int(0)
int(0)
int(0)
int(0)
int(0)
int(0)
int(0)
int(0)
int(0)
int(0)
gmp_prob_prime(): Argument #1 ($a) must be of type GMP|string|int, array given
Done
The billionaire businessman David H. Koch has left his position on the board of trustees of the American Museum of Natural History (AMNH) after serving on it for 23 years. A spokesperson for the museum told the New York Times that Koch’s departure, which became official on December 9, was not a response to the demands of many scientists and activists that Koch be kicked off the boards of the AMNH and the Smithsonian’s National Museum of Natural History (NMNH). Koch had served on the AMNH’s board since 1992, donating about $23 million in as many years. In recognition for his support, the museum’s dinosaur wing was renamed the David H. Koch Dinosaur Wing. In recent years, however, Koch’s positions at the AMNH and its Smithsonian counterpart have increasingly drawn criticism for the apparent incongruity of having the executive vice president of a company, Koch Industries, responsible for enormous greenhouse gas emissions and hundreds of oil spills on the boards of major science institutions. Another cause for concern was Koch’s support, to the tune of over $67 million according to some, of groups and organizations that deny climate change. Last March a group of 39 scientists released an open letter demanding that Koch be kicked off the two natural history museums’ boards. In June protesters marched in Washington, DC, to deliver a petition signed by 430,000 people to representatives of the Smithsonian Institution, demanding that it kick Koch off the advisory board of the NMNH. All that had nothing to do with his decision, according to his spokesperson, Cristyne Nicholas. “He was not swayed by that at all and it absolutely did not factor into his decision,” Nicholas told the Times. “He remains supportive of the museum. … It is just that he does not have time to attend the board meetings.” Nevertheless, activists who’d demanded his departure are heralding this as a victory. 
“Regardless of official explanations, it is undeniable that Koch’s board position was the cause of great controversy over the last year,” said Beka Economopoulos, director of the art and activism collective Not An Alternative and co-founder of The Natural History Museum, a group scrutinizing the information disseminated by science museums. “Koch’s departure is an important step forward for the rapidly growing museum liberation movement that aims to free cultural institutions from ties to fossil fuel companies and other private interests,” Economopoulos told Hyperallergic. When asked which museums are the most attentive to the issues of the liberation movement, she cited the California Academy of Sciences in San Francisco. “They have one of the greenest buildings in the world — it generates more electricity than it takes in. They do a ton of programming out in schools and in the community. They don’t serve any junk food at the museum. They have a ton of cool initiatives,” she said. In terms of art museums, Economopoulos says no institutions have been receptive to the ideas yet, but added: “Hopefully the Brooklyn Museum will implement a new rental policy that precludes them from hosting developers / the real estate summit. I know they’re considering it — they could demonstrate leadership in the museum sector by pushing back against these problematic entanglements. Fossil fuel companies are to science museums what real estate interests are to art museums.” Koch still serves on the boards of some 20 institutions, including the Smithsonian’s NMNH. Last year the unveiling of the Metropolitan Museum of Art’s David H. Koch Plaza was met with protests. The AMNH sent Hyperallergic the following statement regarding Koch’s resignation from the board of trustees:
Think, Act and Invest Like Warren Buffett, with Larry Swedroe
Episode: 174
Guest: Larry Swedroe
The Lange Money Hour: Where Smart Money Talks
James Lange, CPA/Attorney

Listen every first and third of each month on KQV 1410 AM or at our radio show archives. Note: Some events referenced in our archives have already passed.

Welcome to The Lange Money Hour: Where Smart Money Talks with expert advice from Jim Lange, Pittsburgh-based CPA, attorney, and retirement and estate planning expert. Jim is also the author of Retire Secure! Pay Taxes Later. To find out more about his book, his practice, Lange Financial Group, and how to secure Jim as a speaker for your next event, visit his website at paytaxeslater.com. Now get ready to talk smart money.

1. Guest Introduction: Larry Swedroe, Author and Wealth Manager

Dan Weinberg: And welcome to The Lange Money Hour. I’m Dan Weinberg along with CPA and attorney Jim Lange. Who wouldn’t want to invest like Warren Buffett? He’s one of the most successful investors in the world, and yet most people don’t follow his recommendations about where they should be putting their money. Our guest tonight, nationally recognized investment expert Larry Swedroe, is the author of the book Think, Act and Invest Like Warren Buffett: The Winning Strategy to Help You Achieve Your Financial and Life Goals. Like Warren Buffett, Larry believes that passive investing through index funds is the best path to prosperity, and he’s done the research to back it up. Larry is director of research for Buckingham Asset Management. Including the Warren Buffett book we’ll be talking about tonight, he’s authored fourteen books, among them his latest, The Incredible Shrinking Alpha, and he’s working on book number 15 as well, called Your Complete Guide to Factor-Based Investing. Over the next hour, you’ll learn a great deal about passive and active investing and how to get into that Warren Buffett mind-set.
So, let’s get started by saying good evening to Jim Lange and Larry Swedroe. Jim Lange: Welcome, Larry! Larry Swedroe: Hey, Jim, how you doing? Jim Lange: Good! It’s always a pleasure to have you on because not only do you have a great, let’s say, deal of wisdom, but it’s in so many areas, you know, like Dan mentioned, the author of 14 books. We’re probably going to concentrate a little bit on Think, Act and Invest Like Warren Buffett, but your book on alpha and your book on mistakes are also wonderful books, and if people are interested in your books, I would probably just go to Amazon. We’re probably going to be spending the most time on Think, Act and Invest Like Warren Buffett of the 14 books, and I know you’re working on another one, although I guess that’s a constant state. My favorites are the Warren Buffett, the mistake book, and the alpha book. But anyway, Warren Buffett is probably one of the most successful active investors of our time, very durable. People who bought in, if you will, 20, 30, 40 years ago are doing very, very well today. So, a lot of people are interested in investing more like Warren Buffett. On the other hand, Warren Buffett is an active manager, and you usually recommend passive investment strategies. What could somebody who is, let’s say, trying to be like Warren Buffett, but they are also buying into the passive investments, what should they be thinking about as they’re developing their portfolio? 2. Most Investors Do the Opposite of What Warren Buffett Advises Larry Swedroe: Well, I think, Jim, a good place to start that discussion is to note what I’ve found to be probably the biggest anomaly in all of finance, which is if you ask people who they think the greatest investor of all time is, I think you and I could agree that probably 98 percent of them would say Warren Buffett. 
Maybe a sprinkling might throw in Peter Lynch and maybe somebody else, and yet despite that fact, the vast majority of people not only ignore Warren Buffett’s advice, which he hands out liberally on national television and in his annual letters to the Berkshire Hathaway shareholders, but they tend to do exactly the opposite of what he recommends, and in my book, I touch on three key areas that investors, if they just followed those three, would be served so much better and end up with much better results. Jim Lange: Well, maybe that’s because people are listening to (Jim) Cramer on TV? Well, to be fair, if you listen to Cramer and you follow all of his advice, today, you would have one million dollars … if you started with two million. By the way, that’s a joke, before I get sued! Larry Swedroe: Yeah. There’s actually a website that has tracked Cramer’s recommendations, and there are several academic papers that have also done so, and they found that basically, there is zero alpha, even before expenses, in his recommendations, and the only alpha that tends to occur is to the institutional investors, and here’s what happens. Cramer comes out with a recommendation, say, to buy a stock. The next morning, the stock jumps up because (I’ll use the phrase) dumb retail money thinks Cramer knows something. The institutions sit on the sideline and wait, then they come in and short the stocks, driving the price back down, and it ends up right back where it started. So, the institutions make money and the individuals lose. Jim Lange: Well, that almost sounds like a conspiracy, if you will. I hope Cramer isn’t benefitting from that information. I’m hoping that, even if he’s wrong, he, like you, is a fiduciary adviser that has nothing but the best interests of his audience in mind. Larry Swedroe: I think he certainly does. Although I don’t know him personally, I think he’s a very smart guy, but he’s become an entertainer rather than an adviser.
Jim Lange: All right, well, let’s get back to what you had said earlier about Warren Buffett. So first, we know that Warren Buffett gives his advice freely. Very frankly, the letters to the shareholders that he writes in the annual reports for Berkshire Hathaway are almost considered a great financial resource in and of themselves, both for their content and their excellent prose. So, as you said, he isn’t shy about giving people advice. What advice does he give that most people do the opposite of? Larry Swedroe: Right, so I broke this down in my book into three big issues that I thought would help people, and the first one is: Should you use active or passive funds? Now, we know that while there’s a trend towards more passive investing, with the key leader in that effort of course being John Bogle, and I hope I’ve done my part to contribute, the fact is, even today, probably 80 percent or more of individuals have their money invested in actively managed funds. Now, that’s way down from the 99 percent of twenty years ago. So, there is a trend there. So, Buffett, specifically on this issue, here’s what he has said: ‘By periodically investing in an index fund, the know-nothing investor can actually outperform most investment professionals.’ And then, he added: ‘Most investors, both institutional and individual, will find that the best way to own common stocks is through an index fund that charges minimal fees, and those following this path are sure to beat the net results, after fees and expenses, delivered by the vast majority of professionals.’ So, his advice is very clear there: Do not use active funds. Jim Lange: So, basically, he’s saying ‘Do as I say, not as I do?’ Because, of course, he’s running … I don’t know if it’s the biggest, but it’s certainly one of the huge actively managed companies in the world.
Larry Swedroe: Well, I would say it this way: Buffett might be saying this: ‘If you look in the mirror and you see Warren Buffett, you can go ahead and try to pick stocks and beat the market. But I’m the only one who does that. Maybe me and Charlie Munger.’ So, unless you think you’ve got the skill set and access to information that Buffett has, as well as his discipline, you are far more likely to produce better results by just building broadly diversified portfolios using index funds, and then staying the course, and very importantly, avoiding panic selling. Jim Lange: And I think, isn’t that one of the main focuses of a different book? And I don’t like to plug more than one book because then people might buy none, but to me, the most fun and, let’s say, the most readable of your books is Think, Act and Invest Like Warren Buffett, by Larry Swedroe, and that, and maybe about 10 others, or 13 others, are available on Amazon. But what you’re describing there sounds like another one of your books, which is The Incredible Shrinking Alpha, and I think there, that book and Warren Buffett are saying the same thing, which is it’s just too difficult, particularly after fees and expenses, for an active money manager to beat the index. 4. It’s Possible, but Very Few Win the Loser’s Game of Active Management Larry Swedroe: Yeah, it’s not that it’s impossible, and, of course, that slim odds of outperforming provide that hope, but here’s the important message in my book, The Incredible Shrinking Alpha. Twenty years ago, a fellow named Charles Ellis, one of the most respected men in our industry, he wrote a book called Winning the Loser’s Game. Now, a loser’s game is one where it’s possible to win, but the odds of doing so are so poor that the prudent thing is not to try at all and just don’t play. So, I’m sure we can all think of loser’s games like roulette, or buying lottery tickets, or any game at the Las Vegas casinos where they have the odds. Sure, you can win. 
It’s possible, and you might even be willing to lose some as an entertainment account, but you wouldn’t take your retirement account there. And when Ellis wrote his book almost 20 years ago, he noted that about 20 percent of actively managed funds were generating statistically significant alpha. So, you could say it wasn’t luck. That still meant that 80 percent were failing, and that’s even before the impact of taxes that individuals had to pay. That number would rise to about 90 percent on an after-tax basis. Today, that 80 percent failure rate is now up to 98 percent before taxes. So, only about 2 percent of actively managed funds are generating statistically significant alpha even if you’re investing in your IRA, and maybe 1 percent if it’s in a taxable account. Now, I don’t know about you, Jim, but I don’t like playing a game where I have roughly a 1 in 50 chance of winning pre-tax and a 1 in 100 chance of winning after tax. Jim Lange: So, for the Missouri-type people out there: show me. Can you give me a source for that statistic? Because there’s going to be a lot of active money managers who are going to want to argue with that, and I want to arm our listeners with that information and where they could find it. 5. With the Right Value Stocks, You Can Outperform Warren Buffett Larry Swedroe: Well, the best place is to go pick up a copy of my book, which cites the academic research, and one of the papers was written by a Nobel Prize winner, Gene Fama, and his colleague Ken French, called “Luck Versus Skill in Mutual Fund Performance,” and that paper is several years old, and a more recent paper also came to the same conclusion. But I can add this: This may surprise most of your listeners. Now, we all know that Warren Buffett has had a tremendous track record, but what most people don’t know is that almost all of his success occurred prior to the last 15 years, when the markets have gotten, call it,
smarter, as the academic research has uncovered the type of stocks that Warren Buffett bought, and fund companies have now built mutual funds that gain exposure to them. So, I know you and I both use funds of Dimensional Fund Advisors to gain access to the type of value stocks that Warren Buffett bought. So, if we look at Berkshire Hathaway’s returns, and I wrote this up recently, for the 15-year period ending March 2016, Berkshire returned 8.2 percent. Now, DFA runs two domestic value funds. One of them is large value. It returned 7.4 percent. The other is small value, which returned 10.1. The average of those two is 8.8, and Berkshire returned 8.2. So, it’s hard to argue even now that Berkshire is able to outperform similarly risky investments. That’s a pretty good example of the point we made that if you look back 30, 40 years ago, Buffett was swamping comparable funds, and today, for the last 15 years, he’s had a very difficult time generating outperformance on a risk-adjusted basis. Jim Lange: All right, so is it fair to say that you’re saying that for the last 15 years, one major source of his outperformance, if you’re even going to call it that, would actually be an asset-allocation issue, in which he was just much smarter than, say, a typical mutual fund or even a standard recommendation by Vanguard that would not have as high an exposure to small value or large value? Larry Swedroe: Yeah. So, let me say it this way: We now know today the secret sauce that Buffett used. This is, I think, a good way for investors to think about it. Jim Lange: We all want to know the secret sauce. So everybody’s perking up now: “Oh boy! Secret sauce!” All right, Larry, tell us the secret sauce. 6. ‘Gross Profitability’ and the ‘Quality Factor’ Are Different from Earnings Larry Swedroe: Right, and he got that secret sauce from his mentors, David Dodd and Benjamin Graham, who authored the book Security Analysis.
And Buffett, this wasn’t so secret, he would tell people for decades, ‘Here are the types of stocks I buy.’ Now, academics often uncover these sources of excess returns by studying the performance of great investors, people who had high returns, to figure out whether their secret sauce could be replicated, meaning: was there a common characteristic in a stock that anyone could identify, so that you could just buy all the stocks with that characteristic, or was their secret sauce not replicable, a skill set only they had, with no common trait that could be copied? So, in the 1980s, academic research began to be published showing that value stocks, stocks that had low prices relative to earnings or book value or cash flow, outperformed the market by roughly 5 percent a year. Eventually, mutual funds like Vanguard would create a value index fund, and DFA (the fund family we use) mostly created more sophisticated versions, and that captured much of Buffett’s alpha, because they would just buy all of the stocks; but still, Buffett outperformed those value funds. There was another missing ingredient that the academics hadn’t uncovered, and then in 2006, Ken French and Gene Fama, whom I mentioned earlier, wrote a paper showing that more profitable firms were generating higher returns even though you paid a higher price for them. So, they had higher price/earnings ratios, but that still did not prevent them from providing higher returns, and over time, more research came out on that. In 2012, a fellow named Robert Novy-Marx wrote a paper showing that if you focus on what he called ‘gross profitability,’ not earnings, but revenue minus cost of goods sold, you actually generated about a 3½ percent a year premium, and it enhanced the value strategy.
So, in other words, if you bought value stocks, stocks with low prices to earnings, but also bought companies that had higher return on equity, higher gross profitability, you would do better. The academics expanded on that, and funds like DFA began to incorporate it, and today, that research has been further expanded to include what’s called the ‘quality factor.’ So, not only was Buffett buying stocks with a higher return on equity, for example, or higher earnings, but he tended to buy higher-quality companies. As many of your listeners may know, Buffett has often talked about companies that have moats around them, which gives them some protection. These companies tend to have more stable earnings, they tend to have higher margins, and they tend to use less leverage. Now we know there was a paper called Buffett’s Alpha that identified these common characteristics, and we know that if you simply bought all of the types of stocks that Buffett bought, not just the ones he bought, but all of the stocks that had these common characteristics, you would have had basically the same return as Buffett if you also had his famous discipline. Now, very importantly, Jim, I don’t want your listeners to get the wrong message. This takes nothing away from Buffett’s accomplishments. He figured this stuff out 50 years before the academics. But today, you don’t need to be Warren Buffett to buy the same types of stocks. Mutual funds like those of DFA (the fund family we use), AQR (another fund family we use), Bridgeway and others are incorporating these factors. We now know that it’s important to buy the right types of stocks, not to pick which particular ones, and that’s what’s showing up: Berkshire has not outperformed, as I mentioned, a combination of DFA large and small value in the last 15 years. Jim Lange: Larry is the author of Think, Act and Invest Like Warren Buffett, available on Amazon. I am a big fan of Larry’s books. I like that one.
I like his The Incredible Shrinking Alpha, and I like his book Investment Mistakes Even Smart Investors Make (and How to Avoid Them). So Larry, you had mentioned French and Fama several times as authors of an important study, and they are, let’s say, the founding members of, and are still very, very involved with, the group of index funds that we are advocates of (that is, both your firm and ours), which is called Dimensional Fund Advisors. And you talked about an additional premium. I think a lot of people know about the equity premium, meaning that if you are willing to buy companies instead of lending money to companies, you can, over a longer period of time, expect a higher return. And we have spent the first portion talking about some of the advantages of a value and a small value premium, meaning that if you invest in smaller companies, and if you invest in value, that is, companies that have a lower price/earnings ratio, then over a longer period of time, you can expect a higher return, and that is a premium. And I think that a lot of people know about that, but French and Fama actually identified another premium that you call profitability. Can you tell our listeners about the profitability premium and what the impact of that is on investors and DFA, and how that might compare to, let’s say, an excellent set of index funds that doesn’t use that? Perhaps like Vanguard? 7. The Importance of the ‘Profitability Factor’ in Index Funds Larry Swedroe: Well, Professors Fama and French are best known for creating what was called the ‘three-factor model.’ It added to our understanding of how markets work. Prior to their paper, which was published in 1993, the only premium that investors tended to focus on (because it was the only one that was documented in the literature) was this equity-risk premium, which has been about 8 percent a year.
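The premiums discussed here are the inputs to Fama and French's three-factor model, which expresses a portfolio's expected return as the risk-free rate plus a weighted sum of the market, size, and value premiums. A sketch with hypothetical factor loadings; the 8 percent equity premium is the figure quoted above, and 3 and 5 percent are the rough historical size and value premiums cited in this conversation:

```python
def three_factor_expected_return(rf, beta_mkt, beta_smb, beta_hml,
                                 erp=8.0, smb=3.0, hml=5.0):
    """Expected annual return (percent) under a Fama-French three-factor model.

    rf       -- risk-free rate
    beta_mkt -- loading on the equity-risk premium (erp)
    beta_smb -- loading on the size premium (small minus big)
    beta_hml -- loading on the value premium (high minus low book-to-market)
    The default premiums (8, 3, 5 percent) are the rough historical averages
    cited here; real estimates vary by sample period.
    """
    return rf + beta_mkt * erp + beta_smb * smb + beta_hml * hml

# A hypothetical small-value fund with meaningful size and value tilts...
small_value = three_factor_expected_return(2.0, 1.0, 0.5, 0.6)
# ...versus a plain total-market fund with no tilts.
market = three_factor_expected_return(2.0, 1.0, 0.0, 0.0)
```

The tilted portfolio's higher expected return is exactly the "premium capture" the conversation keeps returning to; the loadings (0.5, 0.6) are made-up illustrations, not estimates for any actual fund.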
Fama and French summarized prior research which showed that small companies tended to outperform large companies by about 3 percent a year, and value companies tended to outperform growth companies by about 5 percent a year, and that led to the development of index funds, like Vanguard’s, to capture these premiums as indexes were created by companies such as Standard & Poor’s, MSCI (Morgan Stanley Capital International) and Russell. They all published these smaller-cap indices and value indices, and then Vanguard created mutual funds to replicate them. DFA operates a little differently. They don’t actually create index funds, which purely replicate some popular index; instead, they use academic definitions of these factors, as they’re called, which can be somewhat different from index funds, and we believe that they deliver superior results, and that’s why we use them. As we discussed earlier, in 2006, Fama and French, in their research, uncovered this other, newer factor on profitability and found that companies with higher return on equity, higher cash flows and higher gross profitability (sales minus the cost of those sales, so a higher gross margin) actually tend to look like growth stocks. They’re growing faster, but they still outperform. So, DFA started to screen for this profitability factor in their funds and added that to their construction model. They would buy value stocks but then add exposure to these companies and focus on ones that had greater profitability. The index funds that Vanguard had, for example, at least currently, don’t do that. So, that’s one of the benefits, because these retail indices don’t add that profitability factor in. Jim Lange: Okay. So, let’s take a look at that. So, let’s say, for example, that French and Fama said, ‘Hey, there’s this profitability premium that we have identified.
We’re not going to use a traditional definition of an index, but we’re going to have, call it an enhanced index or, to use your term, an academic version of an index, and we’re going to include a profitability portion, or weighting, in the equities or the stocks that we pick.’ And then, if they are right, then theoretically, and let’s forget about expenses for the moment, they should, either looking back or in what they actually did, outperform a straight index fund that doesn’t have a profitability premium built in. Is that correct? Larry Swedroe: That’s correct, because the profitable stocks, if you look at them in isolation and don’t look at their other factors, have outperformed the low-profitability stocks by about 3½ percent a year. But the value premium captures some of that. If you add in profitability, you probably can pick up somewhere in the area of about 50 basis points a year. So, that’s what DFA thinks they will add over the long term over just a pure value fund. That’s one of the ways they can differentiate: by screening for this profitability factor, more heavily weighting the stocks in their fund that are more profitable, and providing that benefit. One other thing, Jim, is this: Vanguard uses popular indices, and I’m just going to use a simple example for your audience. So, let’s say there’s an index that splits stocks in half when you rank them by price-to-earnings ratios. The ones that have the highest PEs are called growth, and the bottom half of stocks are value, and Vanguard’s value fund buys that bottom 50 percent. DFA, using an academic definition, the way academics tend to split things up, would take the bottom 30 percent of stocks in that index when you rank them. So, if you buy the stocks in the bottom 30 percent as ranked by price-to-earnings ratio, you’ll end up with stocks that have, on average, a lower PE ratio than if you buy the bottom half.
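The bottom-half-versus-bottom-30-percent point can be illustrated directly: the deeper cut necessarily holds stocks with an average P/E no higher than the shallower one. A sketch with a made-up universe of tickers and P/E ratios:

```python
# Hypothetical universe of (ticker, P/E) pairs -- made-up numbers.
universe = [("AAA", 8), ("BBB", 10), ("CCC", 12), ("DDD", 14), ("EEE", 16),
            ("FFF", 18), ("GGG", 21), ("HHH", 25), ("III", 30), ("JJJ", 40)]

def value_cut(stocks, fraction):
    """Return the cheapest `fraction` of stocks, ranked by P/E ascending."""
    ranked = sorted(stocks, key=lambda s: s[1])
    n = max(1, int(len(ranked) * fraction))  # truncate to whole stocks
    return ranked[:n]

def avg_pe(stocks):
    return sum(pe for _, pe in stocks) / len(stocks)

bottom_half = value_cut(universe, 0.50)   # the index-style "value half"
bottom_third = value_cut(universe, 0.30)  # the academic-style deep cut
# The deeper cut is cheaper on average, hence a larger expected value premium.
```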
That’s simple math, right? So, if you look at DFA’s value funds and compare them to Vanguard’s value funds, you will note that they tend to have lower price/earnings ratios, lower price-to-book ratios, and lower price-to-cash-flow ratios, which means, based on the evidence, that you also had higher returns in the past and should expect them in the future as well. So, those are some of the differences. Jim Lange: Okay, and then there’s also another filter of profitability. Larry Swedroe: Right. 8. Small-Value Funds from the Bottom Third Capture More Premium Jim Lange: All right. So now, I’m going to walk without a net for a moment. Okay, so we have some of these theoretical differences, and I guess that we could expand that theoretical difference beyond value, that is, not just picking the lower half but picking the lower third. They might do the same thing with size. So, they’re not necessarily picking the lower half to go into their small indexes, but maybe the lower third. Larry Swedroe: That’s exactly right, and if you look at, for example, Vanguard’s small-value fund, it has an average market capitalization, the last time I looked, of about $2.8 billion. DFA’s fund had an average market capitalization of about half of that. The evidence shows that the smaller the company, the higher the historical return. It doesn’t mean Vanguard’s fund is bad. It does exactly what it’s supposed to and does it at very low cost. It just doesn’t give you as much exposure to that size factor, or the value factor, so you capture less of the premium than the DFA funds do. Jim Lange: All right, so let me see if I am correctly paraphrasing you: DFA’s value funds actually have a lower price/earnings ratio than, say, the equivalent value fund in Vanguard, for example, and their small companies are actually smaller.
That is, they have a lower total capitalization than the small-cap holdings in, say, Vanguard, and, at least historically, the smaller ones, that is, the, let’s call it, very small or micro, have outperformed small, and the very low price/earnings stocks have still outperformed what might traditionally be called value, but value with a higher price/earnings ratio. And if you combine those two, you can expect better performance. Is that a fair characterization? Larry Swedroe: Yeah, and the logic is simple as well: the research shows these companies are riskier, and therefore investors require a larger premium to invest in them, and that means they will only buy them if their prices are lower. So, here’s a good example for your audience I’m just pulling up while we’re chatting. The DFA small-value fund currently has an average market capitalization of $1.4 billion, and the Vanguard fund has an average market capitalization of $2.9 billion. The DFA fund has a weighted average price/earnings ratio of 15.6. The Vanguard fund has a P/E ratio much closer to 17. So, the higher the price you pay, the lower the expected returns. That’s a very good example, and here’s another one. DFA looks at the price-to-book ratio; the lower the price relative to book value, the higher the expected return. DFA’s fund has a price-to-book ratio of 1.1. Vanguard’s fund is at 1.6. So, it’s almost 50 percent more expensive relative to book value. Of course, Buffett likes to buy stocks that trade at low prices, and DFA funds look more like the kind of company that Warren Buffett buys. So, that’s the difference. Both funds do exactly what they’re supposed to do at relatively low cost, but DFA funds give you more exposure to these factors, and as we mentioned, DFA is now adding a screen for profitability as well, and as one example, that should show up in lower prices to cash flow. DFA’s fund is at 4.7, and Vanguard’s fund is higher at over 6. 9.
Beyond the Investments, You Need the Discipline to Withstand Bear Markets Jim Lange: One of the things that Warren Buffett talks about, and I think is an important point, is that it’s not just about the investments. So, you know, in the early part of the show, we were talking about index funds and some of the advantages of value and small value and some of the differences, say, for example, between Dimensional Fund Advisors and Vanguard, but I think Warren Buffett’s point is, hey, it’s not just about that. It’s about other areas. So, Larry, if you could tell us what you think Warren Buffett means by that, and where do you get that type of advice? Whether you can read it, or whether that is advice that an adviser would give, but could you tell us a little bit about the idea that it’s not just about investments? Larry Swedroe: Well, the first thing is that I always tell people you should never work with an investment adviser, which may sound strange, but you should only work with someone who is a true wealth adviser looking out for your entire financial picture. But before we delve into that portion of it, what Buffett is talking about, I believe, is this: you could have a great investment plan in terms of your asset allocation (if there is even such a thing as a perfect asset allocation), but it does you no good unless you have the stomach-acid ability to deal with bear markets, which we know occur with great regularity. In fact, Jim, when the 2008 crisis hit, I did a little research and went back 40 years and found that we had had a major crisis, maybe not quite as big as that one, but a major crisis, about once every 2½ years.
So, even at 65, you’ve probably got a 30-year horizon you have to plan for, which means you should be planning to deal with 14 or so crises, and boy, you’d better have the discipline. That’s really what separates Warren Buffett, I think, from other investors: not just that he had identified these stock characteristics to buy, but that he never in his career engaged in panic selling, and that’s a big problem. So, the job of a good adviser is not only to make sure you have the right strategy and you haven’t taken too much risk (because if you do, you will panic and sell), but to enforce the discipline: not only to buy when panics happen and markets crash, not because you’re predicting anything, but because you’re simply rebalancing the portfolio to its target, but also to tax manage the portfolio, harvesting losses in taxable accounts. For example, in 2008, we had many clients who owned small businesses and operated at a loss, and we were engaged in lots of Roth conversions, because they could convert from a traditional IRA to a Roth and not have to pay any taxes, because their business provided the losses to shield the income that year, and then all future withdrawals would be tax-free. So, you want somebody who is looking at all of those issues and providing the discipline. Returning to this issue about why you want to work with a true wealth manager, someone like yourself: you could have that perfect investment plan, but that plan can fail for reasons that have nothing to do with investing. Great example: I worked with a young adviser about 20 years ago and helped him review his plan, a pretty good one. I made some minor suggestions, but after that, we did a needs analysis. He was a young guy, married with a couple of kids, and he didn’t have enough insurance.
We recommended he buy a couple-million-dollar declining-term policy, because that would be the cheapest way to cover the need, and as he lived and worked, he would save and invest, and that would grow, and with each year that passed by, there’d be one less year to support. The good news was he took our advice and bought that policy; the bad news, unfortunately, was that he was dead a year later of cancer. So you could have had that perfect investment plan, but it could still have failed if somebody wasn’t looking out for situations that had nothing to do with investing: the loss of a life, liability insurance, making sure you have an umbrella policy, having a disability policy, needs for long-term health care (which many of our clients are now looking at), and integrating the estate planning. Things like when to take Social Security (and on that I recommend your book to all of your listeners; it’s an important part of the story here). And making sure of your estate plan, your will, having durable powers of attorney for health and medical care: these are all extremely important issues, and they change over time as life events occur. Jim Lange: Well, you did happen to mention the two areas that we love to focus on, and we call it running the numbers. What you did in 2008 just makes all the sense in the world. We typically work more with IRA and retirement-plan owners and business owners, but I always tell people that usually the best years to make a Roth IRA conversion, and typically, you want to do it at the lowest tax rates like you did in the year 2008, for business owners, are the years that they do not have wages, so they’re retired.
On the other hand, they are less than 70, so they don’t have their minimum required distributions from their IRA, and we actually run mathematical models and determine literally the ideal year and amount to convert, and then we integrate that strategy with Social Security and come up with, ‘Well, this is what we think that you should do for Social Security. This is what we think your spouse should do for Social Security. This is what we think you should do for Roth IRA conversions.’ And then we show, let’s say, the differences between what we might come up with, which might be a combination of holding off on Social Security and a series of Roth conversions, versus, say, taking it at 62 and not making Roth conversions, and the difference over time can literally be hundreds of thousands of dollars. So, I think your point that it’s not just about investments is very important, and then you had mentioned estate planning and insurance and disability and all these factors that probably should be taken into consideration. Larry Swedroe: And Jim, one quick thing I want to make clear: There’s no robo adviser that will be looking out for all of these issues for you. 10. Why the New Rules on Fiduciary Responsibility are Critical Jim Lange: All right, good. I like a little slam on the robo advisers. But the other thing that is, let’s say, perhaps one difference between a wealth adviser and a financial adviser, or even a stockbroker, and now we have some new legislation on it, is the relatively new requirement that if you are going to invest somebody’s rollover 401(k) plan, it must be invested under a fiduciary standard. I was wondering if you could tell our audience what that means, what the implications of that are, and how that actually might affect their choice of whom they work with. Larry Swedroe: This is really, unfortunately, one of the great tragedies in our country: the politicians have been overrun, if you will, by the lobbyists.
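The "running the numbers" logic behind Roth-conversion timing is ultimately arithmetic: converting in a year when losses or low income put you in a low bracket beats withdrawing the same dollars later at a higher rate. A toy illustration with made-up flat tax rates (real planning models brackets, RMDs, and growth, as the conversation describes):

```python
def roth_conversion_tax_saved(amount, rate_now, rate_later):
    """Tax saved by converting `amount` at today's rate instead of
    withdrawing it later at a higher rate.

    Hypothetical flat rates for illustration only; an actual analysis
    works bracket by bracket and models future growth.
    """
    return amount * (rate_later - rate_now)

# 2008-style scenario from the conversation: business losses shield the
# conversion entirely (0% now), versus a later withdrawal at a 25% rate.
saved = roth_conversion_tax_saved(100_000, 0.00, 0.25)  # $25,000 kept
```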
Imagine if you went to a doctor, and the doctor was not required to give advice that was in your best interest, or an attorney, whatever. They all have fiduciary responsibilities. A fiduciary, under the law, is required to give advice that considers only the client’s or the patient’s interest. So you can’t recommend anything because you or your firm will benefit from it. Unfortunately, stockbrokers, insurance agents, almost all of them, and many other advisers, anyone who’s on commissions, for example, operate under a much weaker standard called the suitability standard, and the simple example I like to provide is this: Let’s say we decide to recommend somebody put money in an S&P 500 index fund because that’s appropriate. You and I would have to recommend the lowest-cost vehicle, because that’s in their interest, since all S&P 500 index funds are identical except for expenses. So, you might recommend a Vanguard fund or an ETF, whichever is more appropriate for the situation. Both are going to be exceptionally low cost. On the other hand, if an insurance salesman has got your 401(k) plan and his company’s funds are in there, he might recommend that you buy XYZ insurance company’s S&P 500 fund, which might have an expense ratio of 50 or 75 basis points instead of Vanguard’s maybe seven, and you and I legally couldn’t do it, but they can. So, the simple question I would ask your listeners to consider is this: Why would you ever (I can’t think of a single reason why) choose to work with somebody who is not required under the law to give you advice that’s solely in your interest? I think the answer is obviously that there are no reasons, and the only reason people choose to do so is because they’re unaware of the difference. Jim Lange: I would agree with that. But I noticed, in your answer, you took a little shot at Washington, saying that you can’t understand why Washington doesn’t make that a strict rule. Larry Swedroe: I understand why.
It’s the lobbyists who put pressure on these people, and every one of them who voted against the Obama program requiring fiduciary responsibilities, in my opinion, should be tarred and feathered and run out of town with no holds barred! Jim Lange: Well, it sounds like you don’t have a very strong opinion about this, so … Larry Swedroe: It’s a disgrace, actually, that anyone should be allowed to offer advice that isn’t in their client’s interest. I’m sure none of your listeners can come up with a single reason why they would ever choose to work with somebody who isn’t giving them advice that’s solely in their interest. Jim Lange: Well, is it fair to say that you also don’t have any strong feelings about the election? I’m not even going to ask you whom you support, but maybe what should listeners be thinking about as the election nears? And by the way, you have about two minutes. 11. Whoever Wins the White House in November Will Have an Effect on Investors Larry Swedroe: All right. So, all of the advice I give, like you do, Jim, is based upon academic research, not our opinions. So, here’s what I can tell people. We know the research shows the following: when Democratic voters have a Democratic president in office, they are much better investors than Republican voters, and the reverse is true. When Republican voters have a Republican president in office, they become much better investors than their Democratic counterparts, and the reason is simple: When the party you favor is in power, you tend to be more confident that problems will be resolved in a favorable way, so you tend to do nothing. Maybe, at most, you rebalance your portfolio, but you don’t engage in panic selling. So, in 2000 through 2002, after the 9/11 events and the market crash, Republican investors were much better; they were much more likely to stay the course than Democrats. When the 2008-2009 financial crisis hit, the reverse was true.
All the calls I was getting about worries about the next Great Depression came from Republicans, and Democrats tended to be more willing to stay the course. So, the advice that I have for your listeners is to be like Warren Buffett and not look at your political views, and don’t let them bias your investment decisions in any way. You’re likely to make a mistake if you do. Jim Lange: Well, thank you so much. Again, we are here with Larry Swedroe, author of Think, Act and Invest Like Warren Buffett. Thank you so much, Larry.
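The expense-ratio gap from the fiduciary discussion earlier (50 to 75 basis points for an insurance-company S&P 500 clone versus Vanguard's roughly seven) compounds dramatically over a retirement horizon. A sketch assuming an identical hypothetical 8 percent gross return on both funds over 30 years:

```python
def ending_balance(principal, gross_return, expense_ratio, years):
    """Grow principal at the gross return net of the annual expense ratio."""
    return principal * (1 + gross_return - expense_ratio) ** years

# Same index, same gross return; only the fee differs (assumed figures).
cheap = ending_balance(100_000, 0.08, 0.0007, 30)   # ~7 bps index fund
pricey = ending_balance(100_000, 0.08, 0.0075, 30)  # 75 bps clone of it

drag = cheap - pricey  # identical holdings, very different outcomes
print(f"Fee drag over 30 years: ${drag:,.0f}")
```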
Q: Was authorial intent ever taken seriously in academic literary theory? “What does the author mean?” and “What does the author want to say/convey/express/...?” are questions we heard countless times during literature classes at school. In other words, asking about authorial intent is a common didactic device. However, in the 1940s, W.K. Wimsatt and Monroe Beardsley, both representatives of the New Criticism, formulated their theory of the intentional fallacy, claiming that the design or intention of the author is neither available nor desirable as a standard for judging the success of a work of literary art. (See also the older question What is the “Intentional Fallacy”?.) But did older theories actually posit that the author's intention was the standard for interpreting and judging literature? Or have questions about authorial intent always been a teaching device that had little value in (academic) literary theory? A: Wimsatt and Beardsley's essay The Intentional Fallacy wasn't flogging a dead horse, nor did it bury the concept of authorial intent. One of the most influential statements of intentionalism is E. D. Hirsch's book Validity in Interpretation (1967). In an essay entitled "Why Intentionalism Won’t Go Away", Denis Dutton describes Hirsch's stance as follows (my emphasis): Hirsch’s intentionalism stands apart from that of someone like Tolstoy because it is not so much a particular conception of art which motivates him to adopt it as it is a strongly held view of criticism. For Hirsch, unless we have a standard of interpretive correctness, criticism loses its status as a cognitive discipline. Without a notion of the author’s meaning as a guide — almost a regulative ideal, it would seem — criticism would be unable to decide between competing interpretations of works of literature (or art). The result, for Hirsch, would be chaos: anybody’s interpretation as good as anybody else’s.
Hirsch does not deny, of course, that works of art may mean different things to critics or to audiences in different historical epochs. This is in fact how it is that works of art can have different significances to people. But the meaning of a text is always one and the same thing: it is a meaning that the work had for its maker, the artist or writer. Hirsch's brand of intentionalism isn't the only one; there is also a weaker form known as hypothetical intentionalism. One representative of this type of intentionalism, Alexander Nehamas, argues that interpretation is a matter of attributing an intended meaning to a hypothetical author, distinct from the historical writer. This view allows the interpreter to find meaning even in features of the work that may have been mere accidents on the part of the historical writer. (Quoted from Teaching and Learning Guide for: Authors, Intentions and Literary Meaning by Sherri Irvin.) In summary, one can say that intentionalism is not dead but is the subject of theoretical debate.
Altered Morphology and Immunohistochemical Characteristics in Metastatic Malignant Melanoma After Therapy With Vemurafenib. Metastatic melanoma is traditionally diagnosed using classic morphologic features in addition to immunohistochemical studies. The authors report a case of metastatic malignant melanoma where both morphology and immunohistochemistry were altered after treatment. This 51-year-old patient presented with metastatic melanoma to the brain and axilla. Initially, both metastases showed classic morphology and diffuse staining with the pan-melanoma antibody cocktail. This cocktail is a combination of 3 antibodies commonly used to diagnose melanocytic neoplasms: Melan-A (MART-1), tyrosinase, and HMB-45. In combination, the cocktail is highly sensitive for detecting melanocytic neoplasms and is commonly used to diagnose metastatic melanoma. Her tumor was positive for the BRAF 1799T>A (V600E) mutation, and she was treated with BRAF inhibitor therapy (vemurafenib). However, the axillary tumor recurred after treatment with vemurafenib. The recurrent tumor showed a markedly different morphology and complete loss of staining with the pan-melanoma antibody cocktail. This loss of staining accompanied by the change in morphology was an observation not previously documented after therapy with vemurafenib. This case demonstrates a potential pitfall in the diagnosis of metastatic or recurrent malignant melanoma.
Total Locking Plate Casters Choose from 2-3/8" x 3-5/8" or 3-1/8" x 4-1/8" Mounting Plate Sizes Total locking casters are very common in applications where a standard brake just doesn't cut it. Caster HQ's total lock casters lock the wheel from spinning and the swivel raceway from turning, all with a simple push of a pedal. Total locking casters let you turn your swivel caster into a completely stationary caster, keeping your equipment secure and preventing it from rolling. This page covers all of our total locking plate casters. The link below shows the same casters in a threaded-stem version if a plate won't work for your application. We offer multiple sizes, from 3-inch total lock casters all the way to 5-inch total lock casters. If you have a specific size, give us a call or send us an email and we will help you find exactly what you're looking for. Click Here For a Stem Mount Version
Everything about this place was filthy and the air conditioner did not work. There was construction mess left everywhere, the bathroom was gross, the toilet paper holder was even broken, and the shower doors were falling off too. If I wasn't with my kid, I would have rather slept in my car. Peter (US), 6.7, June 30, 2017 Scott (US), 7.5, June 11, 2017 Charles (US), 2.5, March 21, 2017: Stay away!!!! No coffee, food, or ice machine! Children playing in the parking lot, and that's OK with the manager! Couldn't take a shower, the bathroom was off, the full sink was scary; went to a gas station to brush my teeth, and it was cleaner!!! Nothing!!! No coffee. GEORGE D (CA), 2.5, April 6, 2016: It was an awful place. The room was very, very dirty. It looked like they hadn't swept or cleaned the floors and furniture for months. The room had a bad smell. I would never go there again or recommend anybody ever go there. It said no pets, but there were several dogs there.
Q: Return from a for loop but keep loop running I have a for loop that needs to return something on every iteration: for (var i = 0; i < 100; ++i) { return i; } but return breaks the loop. How can I return but keep the loop running? A: Store it inside an array. var array = []; for (var i = 0; i < 100; ++i) { array.push(i); } A: The context here is unclear. If you are trying to execute a function for each iteration of the loop, why not something simple like calling a function from within the loop? for (var i = 0; i < 100; ++i) { processIteration(i); } In processIteration you could simply handle what you need to happen with that value. A: Store the values you need to return in an array, then after the loop, return the array.
--- abstract: | A metric space $M$ is said to have the fibered approximation property in dimension $n$ (br., $M\in \mathrm{FAP}(n)$) if for any $\epsilon>0$, $m\geq 0$ and any map $g\colon{\mathbb I}^m\times{\mathbb I}^n\to M$ there exists a map $g'\colon{\mathbb I}^m\times{\mathbb I}^n\to M$ such that $g'$ is $\epsilon$-homotopic to $g$ and $\dim g'\big(\{z\}\times{\mathbb I}^n\big)\leq n$ for all $z\in{\mathbb I}^m$. The class of spaces having the $\mathrm{FAP}(n)$-property is investigated in this paper. The main theorems are applied to obtain generalizations of some results due to Uspenskij [@vu] and Tuncali-Valov [@tv1]. address: - 'Uniwersytet Humanistyczno-Przyrodniczy Jana Kochanowskiego w Kielcach (Poland), and Ivan Franko National University of Lviv (Ukraine)' - 'Department of Computer Science and Mathematics, Nipissing University, 100 College Drive, P.O. Box 5002, North Bay, ON, P1B 8L7, Canada' author: - Taras Banakh - Vesko Valov title: Spaces with fibered approximation property in dimension $n$ --- [^1] Introduction ============ All spaces in the paper are assumed to be Tychonoff and all maps continuous. By $C(X,M)$ we denote the set of all maps from $X$ into $M$. We say that a metric space $M$ has the fibered approximation property in dimension $n$ (br., $M\in \mathrm{FAP}(n)$), where $n\geq 0$, if for any $\epsilon>0$, any $m\geq 0$ and any map $g\colon{\mathbb I}^m\times{\mathbb I}^n\to M$ there exists a map $g'\colon{\mathbb I}^m\times{\mathbb I}^n\to M$ such that $g'$ is $\epsilon$-homotopic to $g$ and $\dim g'(\{z\}\times{\mathbb I}^n)\leq n$ for all $z\in{\mathbb I}^m$. In this paper we investigate the class of spaces having the $\mathrm{FAP}(n)$-property, where $n\geq 0$. According to [@tv1], this class contains all Euclidean spaces. It is shown in Theorem 2.7 below that a complete metric space has the $\mathrm{FAP}(n)$-property if and only if it has the same property locally. So, any Euclidean manifold also has the $\mathrm{FAP}(n)$-property, $n\geq 0$.
Other $\mathrm{FAP}(n)$-spaces are described in the last section. For example, if $M$ is a manifold modeled on the $n$-dimensional Menger cube, or $M={\mathbb I}^n$, then $M\times Z$ has the $\mathrm{FAP}(n)$-property for any completely metrizable space $Z$. We also introduce a subclass of $\mathrm{FAP}(n)$-spaces, the [*strong $\mathrm{FAP}(n)$-spaces*]{}, see Section 4. For example, any product of finitely many 1-dimensional completely metrizable $\mathrm{LC}^{0}$-spaces without isolated points is a strong $\mathrm{FAP}(n)$-space for all $n\geq 0$ (Corollary 4.5). The next theorem is the main result of this paper. \[ap\] Let $f\colon X\to Y$ be a perfect map with ${\dim_\triangle}(f)\leq n$, where $X$ and $Y$ are paracompact spaces. If $M\in \mathrm{FAP}(n)$ is completely metrizable, then $\mathcal{R}_n^f(Y,M)=\{g\in C(X,M):\dim g(f^{-1}(y))\leq n\mbox{ for all } y\in Y\}$ is a $G_\delta$-subset of $C(X,M)$ and every simplicially factorizable map in $C(X,M)$ can be homotopically approximated by maps from $\mathcal{R}_n^f(Y,M)$. Let $f\colon X\to Y$ be a perfect $0$-dimensional surjection between paracompact spaces and $M$ a completely metrizable $\mathrm{ANR}$. Then the maps $g\in C(X,M)$ such that $\dim g(f^{-1}(y))=0$ for all $y\in Y$ form a dense $G_\delta$-subset of $C(X,M)$. Corollary 1.2 was obtained in [@tv1] in the particular case when $Y$ is a $C$-space and $M$ a Euclidean space (see also [@vu] for the case $X$ compact, $Y$ a $C$-space and $M={\mathbb I}$). Let $M\in \mathrm{FAP}(n)$ be a completely metrizable $\mathrm{ANR}$ and $f\colon X\to Y$ a perfect $n$-dimensional surjection between paracompact spaces with $Y$ being a $C$-space. Then the maps $g\in C(X,M)$ such that $\dim g(f^{-1}(y))\leq n$ for all $y\in Y$ form a dense $G_\delta$-subset of $C(X,M)$. The version of Corollary 1.3 with $M$ being a Euclidean space was established in [@tv1]. Let us explain the notions appearing in Theorem 1.1. 
That a map $g\in C(X,M)$ can be homotopically approximated by maps from $\mathcal{H}$ means that for every function ${\varepsilon}\in C(X,(0,1])$ there exists $g'\in\mathcal{H}$ which is ${\varepsilon}$-homotopic to $g$. Here, the maps $g$ and $g'$ are said to be ${\varepsilon}$-homotopic if there is a homotopy $h\colon X\times{\mathbb I}\to M$ connecting $g$ and $g'$ such that each set $h(\{x\}\times{\mathbb I})$ has diameter $<{\varepsilon}(x)$, $x\in X$. The function space $C(X,M)$ appearing in this theorem is endowed with the source limitation topology whose neighborhood base at a given function $f\in C(X,M)$ consists of the sets $$B_\rho(f,{\varepsilon})=\{g\in C(X,M):\rho(g(x),f(x))<{\varepsilon}(x)\mbox{ for all }x\in X\},$$ where $\rho$ is a fixed compatible metric on $M$ and ${\varepsilon}:X\to(0,1]$ runs over continuous positive functions on $X$. If $X$ is paracompact, the source limitation topology does not depend on the metric $\rho$, and $C(X,M)$ has the Baire property provided $M$ is completely metrizable. We say that a map $g\colon X\to M$ is simplicially factorizable [@bv] if there exist a simplicial complex $L$ and maps $g_1\colon X\to L$ and $g_2\colon L\to M$ such that $g=g_2\circ g_1$. In each of the following cases the set of simplicially factorizable maps is dense in $C(X,M)$ (see [@bv Proposition 4]): (i) $M$ is an $\mathrm{ANR}$; (ii) $\dim X\leq k$ and $M$ is $\mathrm{LC}^{k-1}$; (iii) $X$ is a $C$-space and $M$ is locally contractible. The dimension ${\dim_\triangle}(f)$ was defined in [@bv]: for a map $f:X\to Y$, ${\dim_\triangle}(f)$ is the smallest cardinal number $\tau$ with the following property: for every open cover ${\mathcal U}$ of $X$ there is a map $g:X\to \mathbb I^\tau$ such that the diagonal product $f\Delta g:X\to Y\times \mathbb I^\tau$ is a ${\mathcal U}$-disjoint map. 
The last one means that every $z\in (f\Delta g)(X)$ has a neighborhood $V$ such that $(f\Delta g)^{-1}(V)$ is the union of a disjoint family of open subsets of $X$ refining ${\mathcal U}$. According to results from [@lev], [@bp:98] and [@tv], for any perfect map $f:X\to Y$ between paracompact spaces we have: (i) $\dim f\leq {\dim_\triangle}(f)$; (ii) ${\dim_\triangle}(f)=0$ iff $\dim f=0$; (iii) ${\dim_\triangle}(f)=\dim f$ if $Y$ is a $C$-space; (iv) ${\dim_\triangle}(f)\leq\dim f+1$ if the spaces $X,Y$ are compact. Some properties of $\mathrm{FAP}(n)$-spaces =========================================== Suppose that $(M,\rho)$ is a complete metric space and $Z\subset M$ a closed set. If $f\colon X\to Y$ is a perfect surjective map such that $X$ and $Y$ are paracompact and $\dim f\leq n$, let $$\mathcal{R}_n^f(H,Z)=\{g\in C(X,M):\dim\big(g(f^{-1}(y))\cap Z\big)\leq n\mbox{ for all }y\in H\}$$ with $H\subset Y$. Let also $\mathcal{R}_n^f(H,Z,k)$, where $H\subset Y$ and $k\geq 1$, denote the set of all maps $g\in C(X,M)$ satisfying the following condition: - Each set $\Gamma(g,y)=g(f^{-1}(y))\cap Z$, $y\in H$, can be covered by an open family $\gamma(g,y)$ in $M$ of mesh $\leq 1/k$ and order $\leq n$. Recall that the order of $\gamma(g,y)$ is $\leq n$ provided any point of $M$ is contained in at most $n+1$ elements of $\gamma(g,y)$. \[intersection\] $\mathcal{R}_n^f(H,Z)$ is the intersection of all $\mathcal{R}_n^f(H,Z,k)$, $k\geq 1$, for any $H\subset Y$. Let $g\in\mathcal R_n^f(H,Z)$. Then $\dim \Gamma(g,y)\leq n$ for all $y\in H$. Hence, $\Gamma(g,y)$ admits an open in $M$ cover of mesh $\leq 1/k$ and order $\leq n$ for any $k\geq 1$ and $y\in H$. Therefore, $\mathcal R_n^f(H,Z)$ is contained in the intersection of all $\mathcal{R}_n^f(H,Z,k)$, $k\geq 1$. On the other hand, if $g\in C(X,M)$ belongs to this intersection and $y\in H$ is fixed, then each $\Gamma(g,y)$ admits open covers of arbitrarily small mesh and order $\leq n$. 
So, $\dim\Gamma(g,y)\leq n$ and $g\in\mathcal R_n^f(H,Z)$. \[nbd\] Suppose $X$ and $Y$ are metric spaces and $g\in\mathcal{R}_n^f(y,Z,k)$ for some $y\in Y$ and $k\geq 1$. Then there exist a neighborhood $V_y$ of $y$ in $Y$ and $\delta_y>0$ such that $g'\in\mathcal{R}_n^f(y',Z,k)$ provided $y'\in V_y$ and $g'\in C(X,M)$ with $\rho(g'(x),g(x))<\delta_y$ for all $x\in f^{-1}(y')$. The same conclusion remains true if $Z=M$ and $X,Y$ are paracompact. Assume first that $X$ and $Y$ are metric spaces. In case $\Gamma(g,y)\neq\varnothing$, it can be covered by an open in $M$ family $\gamma(g,y)$ of mesh $\leq 1/k$ and order $\leq n$. Let $G=\cup\gamma(g,y)$ and $\Pi=M\backslash G$. If $\Gamma(g,y)=\varnothing$, let $\Pi=Z$. Hence, in both cases we have $$Z\cap\Pi\cap g(f^{-1}(y))=\varnothing.\leqno{(1)}$$ It suffices to show there exists a neighborhood $V_y$ of $y$ in $Y$ such that $$\delta_y=\rho\big(g(f^{-1}(V_y)),Z\cap\Pi\big)>0.$$ Indeed, otherwise there would be a sequence $\{x_i\}_{i\geq 1}\subset X$ such that $\{f(x_i)\}_{i\geq 1}$ converges to $y$ and $\rho\big(g(x_i),Z\cap\Pi\big)\leq 1/i$, $i\geq 1$. Passing to a subsequence, we may assume that $\{x_i\}_{i\geq 1}$ also converges to a point $x\in f^{-1}(y)$. So, $g(x)\in Z\cap\Pi\cap g(f^{-1}(y))$, which contradicts $(1)$. If $Z=M$, we let $G=\cup\gamma(g,y)$ and $\displaystyle\delta_y=\frac{1}{2}\rho\big(g(f^{-1}(y)),M\backslash G\big)$, where $\gamma(g,y)$ is as above. Using that $f$ is perfect, we can find a neighborhood $V_y$ of $y$ in $Y$ such that $\rho\big(g(f^{-1}(V_y)),M\backslash G\big)\geq\delta_y$. Then $V_y$ and $\delta_y$ are as required. \[open\] Let $H\subset Y$ be closed. Then every $\mathcal{R}_n^f(H,Z,k)$ is open in $C(X,M)$ in each of the following two cases: $(i)$ $Z\subset M$ is closed and both $X$ and $Y$ are metric spaces; $(ii)$ $Z=M$ and $X,Y$ are paracompact. The lemma follows from the proof of [@bv1 Proposition 3.3]. For completeness, we provide the arguments. 
We consider only the first case; the second one is similar. Suppose $g_0\in\mathcal{R}_n^f(H,Z,k)$. Then, by Lemma \[nbd\], for every $y\in H$ there exist a neighborhood $V_y$ and a positive $\delta_y\leq 1$ such that $g\in\mathcal{R}_n^f(y',Z,k)$ for any $y'\in V_y$ provided $g|f^{-1}(y')$ is $\delta_y$-close to $g_0|f^{-1}(y')$. The family $\{V_y\cap H:y\in H\}$ can be supposed to be locally finite in $H$. Then the set-valued map $\varphi\colon H\to (0,1]$, $\varphi(y)=\bigcup\{(0,\delta_z]:y\in V_z\}$, is lower semi-continuous. By [@rs Theorem 6.2, p.116], $\varphi$ admits a continuous selection $\beta\colon H\to (0,1]$. Let $\overline{\beta}:Y\to (0,1]$ be a continuous extension of $\beta$ and $\alpha=\overline{\beta}\circ f$. It suffices to show that if $g\in C(X,M)$ with $\rho\big(g_0(x),g(x)\big)<\alpha(x)$ for all $x\in X$, then $g\in\mathcal{R}_n^f(y,Z,k)$ for every $y\in H$. So, we take such a $g$ and fix $y\in H$. Then there exists $z\in H$ with $y\in V_{z}$ and $\alpha(x)\leq\delta_{z}$ for all $x\in f^{-1}(y)$. Hence, $\rho\big(g_0(x),g(x)\big)<\delta_z$, $x\in f^{-1}(y)$. Therefore, according to the choice of $V_z$ and $\delta_z$, $g\in\mathcal{R}_n^f(y,Z,k)$. Lemmas 2.1 and 2.3 imply the following proposition. \[gdelta\] Let $H\subset Y$ be a closed set. Then $\mathcal R_n^f(H,Z)$ is a $G_\delta$-subset of $C(X,M)$ in any of the cases $(i)$ and $(ii)$ from Lemma $2.3$. The next lemma is very useful when dealing with homotopically dense subsets of function spaces. Here, a set $U\subset C(X,M)$ is said to be homotopically dense in $C(X,M)$ if for every $g\in C(X,M)$ and ${\varepsilon}\in C(X,(0,1])$ there exists $g'\in U$ which is ${\varepsilon}$-homotopic to $g$. [@bv Lemma 2.2] Let $X$ be a metric space and $G\subset C(X,M)$. Suppose $\{U(i)\}_{i\geq 1}$ is a sequence of open subsets of $C(X,M)$ such that - for any $h\in G$, $i\geq 1$ and any function $\eta\in C(X,(0,1])$ there exists $g_i\in B_\rho(h,\eta)\cap U(i)\cap G$ which is $\eta$-homotopic to $h$. 
Then, for any $g\in G$ and ${\varepsilon}\colon X\to (0,1]$ there exist $g'\in\bigcap_{i=1}^{\infty} U(i)$ and an ${\varepsilon}$-homotopy connecting $g$ and $g'$. Moreover, $g'|A=g_0|A$ for some $g_0\in C(X,M)$ and $A\subset X$ provided $g_i|A=g_0|A$ for all $i$. Let $X$ be a metric space and $\{G_i\}_{i\geq 1}$ a sequence of homotopically dense $G_\delta$-subsets of $C(X,M)$. Then the set $\bigcap_{i=1}^{\infty}G_i$ is also homotopically dense in $C(X,M)$. Each $G_i$ is the intersection of a sequence $\{G_{ij}\}_{j\geq 1}$ of open sets in $C(X,M)$. Since $G_i$ is homotopically dense in $C(X,M)$, so are all $G_{ij}$, $j\geq 1$. Then we apply Lemma 2.5 to the sequence $\{G_{ij}\}_{i,j\geq 1}$ with $G$ being the whole space $C(X,M)$. We now establish the local nature of the $\mathrm{FAP}(n)$-property. \[local\] A complete metric space $M$ possesses the $\mathrm{FAP}(n)$-property if and only if every $z\in M$ has a neighborhood $U_z\in \mathrm{FAP}(n)$. It is easily seen that if $M\in \mathrm{FAP}(n)$, then every open set $U\subset M$ also has the $\mathrm{FAP}(n)$-property. Suppose every $z\in M$ has an open neighborhood $U_z\in \mathrm{FAP}(n)$. Fix an integer $m\geq 0$ and consider the projection $\pi\colon {\mathbb I}^m\times{\mathbb I}^n\to{\mathbb I}^m$. We need to prove that the set $\mathcal R_n^\pi({\mathbb I}^m,M)$ is homotopically dense in $C({\mathbb I}^m\times{\mathbb I}^n,M)$. To this end, using an idea from the proof of [@mv Theorem 3.6], for every $z\in M$ choose a positive $\epsilon_z$ such that $U_z$ contains the closed ball $\overline{B(z,3\epsilon_z)}$ with center $z$ and radius $3\epsilon_z$. Following the notations from the beginning of this section (with $X$ replaced by ${\mathbb I}^m\times{\mathbb I}^n$ and $Y$ by ${\mathbb I}^m$), we consider the sets $\mathcal R(z)=\mathcal R_n^\pi({\mathbb I}^m,\overline{B(z,\epsilon_z)})$, $z\in M$. *Claim $1$. 
Every $\mathcal{R}(z)$, $z\in M$, is a homotopically dense $G_\delta$-subset of $C({\mathbb I}^m\times{\mathbb I}^n,M)$.* All $\mathcal{R}(z)$ are $G_\delta$-subsets of $C({\mathbb I}^m\times{\mathbb I}^n,M)$ by Proposition 2.4. To show their homotopical density in $C({\mathbb I}^m\times{\mathbb I}^n,M)$, fix $z_0\in M$, $g_0\in C({\mathbb I}^m\times{\mathbb I}^n,M)$ and $\epsilon>0$ with $\epsilon<\epsilon_{z_0}$. Let $A_{z_0}=g_0^{-1}\big(\overline{B(z_0,2\epsilon_{z_0})}\big)$ and $W_{z_0}=g_0^{-1}\big(B(z_0,3\epsilon_{z_0})\big)$. Choose finitely many sets $K_i=A_i\times B_i$, $i=1,2,\ldots,k$, such that $A_i\subset{\mathbb I}^m$ and $B_i\subset{\mathbb I}^n$ are homeomorphic to ${\mathbb I}^m$ and ${\mathbb I}^n$, respectively, and $A_{z_0}\subset K=\bigcup_{i=1}^{k}K_i\subset W_{z_0}$. We can also suppose that there exists a polyhedron $L$ such that $A_{z_0}\subset L\subset K$. For every $i$ consider the set $$\mathcal R_i=\{h\in C(K_i,U_{z_0}):\dim h(\{y\}\times B_i)\leq n\mbox{ for all }y\in A_i\}\leqno{(2)}$$ and let $p_i\colon C(K,U_{z_0})\to C(K_i,U_{z_0})$ be the restriction map $g\mapsto g|K_i$, $g\in C(K,U_{z_0})$. Obviously, each $p_i$ is continuous. By Proposition 2.4, each $\mathcal R_i$ is a $G_\delta$-subset of $C(K_i,U_{z_0})$. Hence, all $p_i^{-1}(\mathcal R_i)$ are $G_\delta$-subsets of $C(K,U_{z_0})$. Moreover, each $\mathcal R_i$ is homotopically dense in $C(K_i,U_{z_0})$ because $U_{z_0}\in \mathrm{FAP}(n)$. This, according to the Homotopy Extension Theorem, implies that $p_i^{-1}(\mathcal R_i)$ are also homotopically dense in $C(K,U_{z_0})$. So, by Corollary 2.6, $\mathcal H=\bigcap_{i=1}^{k}p_i^{-1}(\mathcal R_i)$ is homotopically dense in $C(K,U_{z_0})$. Then there exists a map $h\in\mathcal H$ which is $\epsilon$-homotopic to $g_0|K$. 
Applying again the Homotopy Extension Theorem for the maps $h|L$ and $g_0$, we obtain a map $g^*\in C({\mathbb I}^m\times{\mathbb I}^n,M)$ such that $g^*|L=h|L$ and $g^*$ is $\epsilon$-homotopic to $g_0$. Let us show that $g^*\in\mathcal{R}(z_0)$, or equivalently, $\dim\big(g^*(\{y\}\times{\mathbb I}^n)\cap\overline{B(z_0,\epsilon_{z_0})}\big)\leq n$ for every $y\in{\mathbb I}^m$. It is easily seen that $(g^*)^{-1}(z)\subset A_{z_0}$ for every $z\in\overline{B(z_0,\epsilon_{z_0})}$. The last inclusion yields that $g^*(\{y\}\times{\mathbb I}^n)\cap\overline{B(z_0,\epsilon_{z_0})}\subset h((\{y\}\times{\mathbb I}^n)\cap A_{z_0})$ for any $y\in\pi(A_{z_0})$ and $g^*(\{y\}\times{\mathbb I}^n)\cap\overline{B(z_0,\epsilon_{z_0})}=\varnothing$ if $y\not\in\pi(A_{z_0})$. Therefore, the proof of the claim reduces to showing that $\dim h((\{y\}\times{\mathbb I}^n)\cap A_{z_0})\leq n$ for any $y\in\pi(A_{z_0})$. Indeed, for any such $y$ let $\Lambda(y)=\{i\leq k:y\in A_i\}$. Then $(\{y\}\times{\mathbb I}^n)\cap A_{z_0}=\bigcup_{i\in\Lambda(y)}(\{y\}\times B_i)\cap A_{z_0}$. Since $h|K_i\in\mathcal R_i$, by $(2)$ we have $\dim h((\{y\}\times B_i)\cap A_{z_0})\leq n$ for every $i\in\Lambda(y)$. Hence, $h((\{y\}\times{\mathbb I}^n)\cap A_{z_0})$ is the union of its closed sets $h((\{y\}\times B_i)\cap A_{z_0})$, $i\in\Lambda(y)$, each of dimension $\leq n$. So, $\dim h((\{y\}\times{\mathbb I}^n)\cap A_{z_0})\leq n$, which completes the proof of the claim. Now, we can show that $\mathcal R_n^\pi({\mathbb I}^m,M)$ is homotopically dense in $C({\mathbb I}^m\times{\mathbb I}^n,M)$. To this end, fix $g\in C({\mathbb I}^m\times{\mathbb I}^n,M)$ and $\eta>0$, and choose finitely many points $z_i\in M$, $i=1,\ldots,q$, such that $g({\mathbb I}^m\times{\mathbb I}^n)\subset\bigcup_{i=1}^{q}B(z_i,\epsilon_{z_i}/2)$. Let $\delta=\min\{\eta,\epsilon_{z_i}/2:i\leq q\}$. By the above claim, each $\mathcal{R}(z_i)$ is a homotopically dense $G_\delta$-subset of $C({\mathbb I}^m\times{\mathbb I}^n,M)$. 
Therefore, so is the set $\bigcap_{i\leq q}\mathcal{R}(z_i)$ according to Corollary 2.6. Hence, there exists $g'\in\bigcap_{i\leq q}\mathcal{R}(z_i)$ which is $\delta$-homotopic to $g$. It is easily seen that $g'({\mathbb I}^m\times{\mathbb I}^n)\subset\bigcup_{i=1}^{q}B(z_i,\epsilon_{z_i})$, so $g'(\{y\}\times{\mathbb I}^n)\subset\bigcup_{i\leq q}g'(\{y\}\times{\mathbb I}^n)\cap\overline{B(z_i,\epsilon_{z_i})}$ for any $y\in{\mathbb I}^m$. Observe that each set $g'(\{y\}\times{\mathbb I}^n)\cap\overline{B(z_i,\epsilon_{z_i})}$, $i\leq q$, is of dimension $\leq n$ because $g'\in\mathcal{R}(z_i)$. Hence, $\dim g'(\{y\}\times{\mathbb I}^n)\leq n$ for all $y\in{\mathbb I}^m$. Thus, $g'\in\mathcal R_n^\pi({\mathbb I}^m,M)$. This completes the proof. The next proposition shows that in the definition of $\mathrm{FAP}(n)$-spaces we can consider any product ${\mathbb I}^m\times{\mathbb I}^k$, $m\geq 0$ and $k\leq n$. If a metrizable space $M$ has the $\mathrm{FAP}(n)$-property, then any map $g\colon{\mathbb I}^m\times{\mathbb I}^k\to M$, where $m\geq 0$ and $k\leq n$, can be homotopically approximated by a map $g'\colon{\mathbb I}^m\times{\mathbb I}^k\to M$ such that $\dim g'(\{z\}\times{\mathbb I}^k)\leq n$ for all $z\in{\mathbb I}^m$. Suppose $M$ has the $\mathrm{FAP}(n)$-property. Let $\epsilon>0$ and $g\colon{\mathbb I}^m\times{\mathbb I}^k\to M$ with $k\leq n$. Take a retraction $r\colon{\mathbb I}^n\to{\mathbb I}^k$ and consider the maps $\pi_1\colon{\mathbb I}^m\times{\mathbb I}^n\to{\mathbb I}^m\times{\mathbb I}^k$ and $\pi_2\colon{\mathbb I}^m\times{\mathbb I}^k\to{\mathbb I}^m$ defined, respectively, by $\pi_1(z,x)=(z,r(x))$ and $\pi_2(z,y)=z$. Then $\pi=\pi_2\circ\pi_1\colon{\mathbb I}^m\times{\mathbb I}^n\to{\mathbb I}^m$ is the natural projection. Since $M\in \mathrm{FAP}(n)$, there exists $h\in C({\mathbb I}^m\times{\mathbb I}^n, M)$ which is $\epsilon$-homotopic to the map $g\circ\pi_1$ and $\dim h(\{z\}\times{\mathbb I}^n)\leq n$ for all $z\in{\mathbb I}^m$. 
Consequently, the map $g'=h|({\mathbb I}^m\times{\mathbb I}^k)$ is $\epsilon$-homotopic to $g$ and $\dim g'(\{z\}\times{\mathbb I}^k)\leq n$, $z\in{\mathbb I}^m$. The next theorem provides a characterization of $\mathrm{FAP}(n)$-spaces in terms of simplicial maps. For a complete metric space $M$ the following conditions are equivalent: 1. $M$ possesses the $\mathrm{FAP}(n)$-property; 2. If $p\colon K\to L$ is an at most $n$-dimensional simplicial map between finite simplicial complexes, then the set $\mathcal R_n^p(L,M)$ is homotopically dense in $C(K,M)$; $(i)\Rightarrow (ii)$ Suppose $M\in \mathrm{FAP}(n)$ and $p\colon K\to L$ is a simplicial map between finite simplicial complexes with $\dim p=k\leq n$. Let $K^{(0)}$, $L^{(0)}$ be the sets of vertices of $K$ and $L$, respectively, and fix $g_0\in C(K,M)$ and $\epsilon>0$. First, we assume that $K$ is a simplex. Then $L$ is also a simplex and, since $\dim p=k$, $p^{-1}(z)\cap K^{(0)}$ contains at most $k+1$ points for every vertex $z\in L^{(0)}$. Consequently, we can find a map $e^{(0)}\colon K^{(0)}\to\sigma_k^{(0)}$ which is injective on each set $p^{-1}(z)\cap K^{(0)}$, $z\in L^{(0)}$. Here, $\sigma_k$ is a $k$-dimensional simplex. This map induces an affine map $e\colon K \to\sigma_k$. Then the diagonal map $h=p\triangle e\colon K\to L\times\sigma_k$ is an affine embedding. So, there exists a retraction $r\colon L\times\sigma_k\to K$ such that $h\circ r$ is the identity on $h(K)$. Consider the projection $\pi\colon L\times\sigma_k\to L$. By Proposition 2.8, there exists a map $\overline{g}\colon L\times\sigma_k\to M$ $\epsilon$-homotopic to $g_0\circ r$ such that $\dim\overline{g}(\{z\}\times\sigma_k)\leq n$ for every $z\in L$. Then for the map $g'=\overline{g}\circ h$ we have $\dim g'(p^{-1}(z))\leq n$ because $h(p^{-1}(z))$ is homeomorphic to a subset of $\{z\}\times\sigma_k$. Moreover, it follows that $g'$ is $\epsilon$-homotopic to $g_0$. 
Therefore, in this case $\mathcal R_n^p(L,M)$ is homotopically dense in $C(K,M)$. Now, we can prove the general case. Let $\{K_i:i\leq s\}$ be all simplexes of $K$ and for each $i\leq s$ denote $$\mathcal H_i=\{g\in C(K,M):\dim g(p^{-1}(z)\cap K_i)\leq n\mbox{ for all }z\in p(K_i)\}.$$ According to Proposition 2.4, each $\mathcal H_i$ is $G_\delta$ in $C(K,M)$. It is easily seen that $\mathcal R_n^p(L,M)=\bigcap_{i=1}^{s}\mathcal H_i$. So, by Corollary 2.6, it suffices to show that each $\mathcal H_i$ is homotopically dense in $C(K,M)$. Using the previous case, each set $\mathcal K_i=\{g\in C(K_i,M):\dim g(p^{-1}(z)\cap K_i)\leq n\mbox{ for all }z\in p(K_i)\}$ is homotopically dense in $C(K_i,M)$. Therefore, there exists a map $g_i\in\mathcal K_i$ which is $\epsilon$-homotopic to $g_0|K_i$. Then, by the Homotopy Extension Theorem, $g_i$ can be extended to a map $\overline{g}_i\in C(K,M)$ $\epsilon$-homotopic to $g_0$. Obviously, $\overline{g}_i\in\mathcal H_i$. So, each $\mathcal H_i$ is homotopically dense in $C(K,M)$, which completes the proof. $(ii)\Rightarrow (i)$ This implication is trivial because any projection $\pi\colon{\mathbb I}^m\times{\mathbb I}^n\to{\mathbb I}^m$ is a simplicial map with respect to suitable triangulations of ${\mathbb I}^m$ and ${\mathbb I}^m\times{\mathbb I}^n$. Proof of Theorem 1.1 and Corollaries 1.2 - 1.3 ============================================== In this section, following the notations from Section 2, we assume that $(M,\rho)$ is a completely metrizable $\mathrm{FAP}(n)$-space. As we already observed, the simplicially factorizable maps are dense in $C(X,M)$ provided $M$ is an $\mathrm{ANR}$. Moreover, if $f\colon X\to Y$ is a perfect map between paracompact spaces, then $\dim f={\dim_\triangle}(f)$ when either $\dim f=0$ or $Y$ is a $C$-space [@bv]. Let us also note that every $\mathrm{ANR}$ has the $\mathrm{FAP}(0)$-property. Hence, Corollaries 1.2 and 1.3 follow from Theorem 1.1. 
By Proposition 2.4, $\mathcal R_n^f(Y,M)$ is a $G_\delta$-subset of $C(X,M)$. So, to prove Theorem 1.1 it suffices to show that any simplicially factorizable map in $C(X,M)$ can be homotopically approximated by maps from $\mathcal R_n^f(Y,M)$. This will be done in Proposition 3.3 below. Recall that a map $p\colon K\to L$ between two simplicial complexes is a $PL$-map if $p(\sigma)$ is contained in a simplex of $L$ and $p$ is linear on $\sigma$ for every simplex $\sigma\in K$. \[compact\] Let $p\colon K\to\sigma$ be a $PL$-map between a finite simplicial complex $K$ and a simplex $\sigma$ with $\dim p\leq n$. Suppose $g_0\in C(K,M)$ is such that $\dim g_0(p^{-1}(y))\leq n$ for all $y\in\partial\sigma$, where $\partial\sigma$ is the boundary of $\sigma$. Then, for every $\epsilon>0$ there exists a map $g\in\mathcal R_n^p(\sigma,M)$ which is $\epsilon$-homotopic to $g_0$ and $g|p^{-1}(\partial\sigma)=g_0|p^{-1}(\partial\sigma)$. We may assume that $p$ is simplicial because any $PL$-map between finite simplicial complexes is simplicial with respect to some triangulations of the complexes. Let $\Omega=p^{-1}(\partial\sigma)$ and $G=\{g\in C(K,M):g|\Omega=g_0|\Omega\}$. All sets $U(k)=\mathcal R_n^p(\sigma,M,k)$, $k\geq 1$, are open in $C(K,M)$ and their intersection is $\mathcal R_n^p(\sigma,M)$, see Lemmas 2.1 and 2.3. So, by Lemma 2.5, it suffices to show that each $U(k)$ has the following property: any $g\in G$ can be homotopically approximated by maps from $U(k)\cap G$. So, fix $g\in G$, $k\geq 1$ and $\delta>0$. We are going to find $h\in U(k)\cap G$ which is $\delta$-homotopic to $g$. Since $g|\Omega=g_0|\Omega$, $g\in\mathcal R_n^p(y,M,k)$ for every $y\in\partial\sigma$. Consequently, each $y\in\partial\sigma$ has a neighborhood $V_y$ in $\sigma$ with a corresponding $\delta_y>0$, both satisfying the hypotheses of Lemma \[nbd\]. Choose finitely many $y_i\in\partial\sigma$, $i\leq s$, such that $\displaystyle V=\bigcup_{i\leq s}V_{y_i}$ covers $\partial\sigma$. 
Let $F=\sigma\backslash V$ and $\displaystyle\eta=\min\{\delta,\delta_{y_i}:i\leq s\}$. We choose a triangulation $T$ of $\sigma$ such that the complex $L=\{\tau\in T:\tau\cap F\neq\varnothing\}$ is disjoint from $\partial\sigma$. Because $K$ and $\sigma$ are finite complexes, they both admit triangulations $T_K$ and $T_\sigma$ such that $T_\sigma$ is a subdivision of $T$ and $p$ remains simplicial with respect to $T_K$ and $T_\sigma$. So, we can apply Theorem 2.9 to find a map $g_1\in \mathcal R_n^p(\sigma,M)$ which is $\eta$-homotopic to $g$. Then the map $g_2\colon \Omega\cup p^{-1}(L)\to M$, $g_2|\Omega=g|\Omega$ and $g_2|p^{-1}(L)=g_1|p^{-1}(L)$, is $\eta$-homotopic to $g|\Omega\cup p^{-1}(L)$. Since $\Omega\cup p^{-1}(L)$ is a subcomplex of $K$, by the Homotopy Extension Theorem, $g_2$ can be extended to a map $h\colon K\to M$ which is $\eta$-homotopic to $g$. We have $h\in\mathcal R_n^p(y,M,k)$ for all $y\in\sigma$. Indeed, this follows from the choice of $V_{y_i}$ and $\delta_{y_i}$, $i\leq s$ (when $y\in V$), and from $g_1\in\mathcal R_n^p(\sigma,M)$ (when $y\in L$). Hence, $h\in U(k)\cap G$, which completes the proof. The next step is to prove that the set $\mathcal R_n^f(L,M)$ is homotopically dense in $C(N,M)$ for any perfect $PL$-map $f\colon N\to L$ between simplicial complexes with $\dim f\leq n$. \[simcomplex\] Let $N,L$ be simplicial complexes and $f\colon N\to L$ a perfect $PL$-map with $\dim f\leq n$. Then $\mathcal R_n^f(L,M)$ is a homotopically dense subset of $C(N,M)$. We follow the arguments from the proof of [@bv Lemma 11.3]. Fix $g\in C(N,M)$ and $\alpha\in C(N,(0,1])$. We are going to find $h\in\mathcal R_n^f(L,M)$ which is $\alpha$-homotopic to $g$. Let $L^{(i)}$, $i\geq 0$, be the $i$-dimensional skeleton of $L$ and put $L^{(-1)}=\varnothing$ and $h_{-1}=g$. 
Construct inductively a sequence $(h_i:N\to M)_{i\geq 0}$ of maps such that - $h_{i}|f^{-1}(L^{(i-1)})=h_{i-1}|f^{-1}(L^{(i-1)})$; - $\displaystyle h_{i}$ is $\displaystyle\frac{\alpha}{2^{i+2}}$-homotopic to $h_{i-1}$; - $\dim h_i(f^{-1}(y))\leq n$ for every $y\in L^{(i)}$. Assuming that the map $h_{i-1}:N\to M$ has been constructed, consider the complement $L^{(i)}\setminus L^{(i-1)}=\sqcup_{j\in J_i}\overset{\circ}\sigma_j$, which is the discrete union of open $i$-dimensional simplexes. Since, by [@bv Lemma 4.1], each $f^{-1}(\sigma_j)$ is a finite subcomplex of $N$, and $\dim h_{i-1}(f^{-1}(y))\leq n$ for every $y\in L^{(i-1)}$, we can apply Lemma \[compact\] to find a map $g_j:f^{-1}(\sigma_j)\to M$, $j\in J_i$, such that - $g_j$ coincides with $h_{i-1}$ on the set $f^{-1}(\sigma^{(i-1)}_j)$; - $g_j$ is $\displaystyle\frac{\alpha}{2^{i+2}}$-homotopic to $h_{i-1}$; - $\dim g_j(f^{-1}(y))\leq n$ for every $y\in\sigma_j$. Define a map $\varphi_i:f^{-1}(L^{(i)})\to M$ by the formula $$\varphi_i(x)=\begin{cases} h_{i-1}(x)&\mbox{if $x\in f^{-1}(L^{(i-1)})$;}\\ g_j(x)&\mbox{if $x\in f^{-1}(\sigma_j)$.} \end{cases}$$ It can be shown that $\varphi_i$ is $\displaystyle\frac{\alpha}{2^{i+2}}$-homotopic to $h_{i-1}|f^{-1}(L^{(i)})$. Moreover, $f^{-1}(L^{(i)})$ is a subcomplex of $N$ (according to [@bv Lemma 4.1]). So, by the Homotopy Extension Theorem, there exists a continuous extension $h_i:N\to M$ of the map $\varphi_i$ which is $\displaystyle\frac{\alpha}{2^{i+2}}$-homotopic to $h_{i-1}$. The map $h_i$ satisfies the inductive conditions. Then the limit map $h=\lim_{i\to\infty}h_i:N\to M$ is well-defined, continuous and $\alpha$-homotopic to $g$. Finally, since $h|f^{-1}(L^{(i)})=h_i|f^{-1}(L^{(i)})$ for every $i\geq 0$, $h\in\mathcal R_n^f(L,M)$. Now, we can complete the proof of Theorem 1.1. \[general reg\] Let $f\colon X\to Y$ be a perfect map between paracompact spaces with ${\dim_\triangle}(f)\leq n$. 
Then every simplicially factorizable map $g\in C(X,M)$ can be homotopically approximated by simplicially factorizable maps $h\in C(X,M)$ such that $\dim h(f^{-1}(y))\leq n$ for every $y\in Y$. We follow the construction from the proof of [@bv1 Proposition 3.4]. Fix a simplicially factorizable map $g\in C(X,M)$ and $\epsilon\in C(X,(0,1])$. Then there exist a simplicial complex $D$ and maps $g_D\colon X\to D$, $g^D\colon D\to M$ with $g=g^D\circ g_D$. The metric $\rho$ induces a continuous pseudometric $\rho_D$ on $D$, $\rho_D(x,y)=\rho(g^D(x),g^D(y))$. Since $D$ is a neighborhood retract of a locally convex space (see [@ca] and [@si]) and any two sufficiently close maps from a given space into $D$ are homotopic, we apply [@bv Lemma 8.1] to find an open cover ${\mathcal U}$ of $X$ satisfying the following condition: if $\alpha\colon X\to K$ is a ${\mathcal U}$-map into a paracompact space $K$ (i.e., $\alpha^{-1}(\omega)$ refines ${\mathcal U}$ for some open cover $\omega$ of $K$), then there exists a map $q'\colon G\to D$, where $G$ is an open neighborhood of $\overline{\alpha(X)}$ in $K$, such that $g_D$ and $q'\circ\alpha$ are $\epsilon/2$-homotopic with respect to the pseudometric $\rho_D$. Let ${\mathcal U}_1$ be an open cover of $X$ refining ${\mathcal U}$ with $\inf\{\epsilon(x):x\in U\}>0$ for all $U\in{\mathcal U}_1$. Next, according to [@bv Theorem 6], there exists a locally finite open cover ${\mathcal V}$ of $Y$ such that: for any ${\mathcal V}$-map $\beta\colon Y\to L$ into a simplicial complex $L$ we can find a ${\mathcal U}_1$-map $\alpha\colon X\to K$ into a simplicial complex $K$ and a perfect $PL$-map $p\colon K\to L$ with $\beta\circ f=p\circ\alpha$ and $\dim p\leq{\dim_\triangle}f$. Take $L$ to be the nerve of the cover ${\mathcal V}$ and $\beta\colon Y\to L$ the corresponding natural map. Then there are a simplicial complex $K$ and maps $p$ and $\alpha$ satisfying the above conditions. 
Hence, the following diagram is commutative: $$\xymatrix{ X\ar[d]_f\ar[r]^\alpha& K\ar[d]^p\\ Y\ar[r]_\beta&L }$$ The choice of the cover ${\mathcal U}$ guarantees the existence of a map $\varphi_D\colon G\to D$, where $G\subset K$ is an open neighborhood of $\overline{\alpha(X)}$, such that $g_D$ and $h_D=\varphi_D\circ\alpha$ are $\epsilon/2$-homotopic with respect to $\rho_D$. Then, according to the definition of $\rho_D$, $h'=g^D\circ \varphi_D\circ\alpha$ is $\epsilon/2$-homotopic to $g$ with respect to $\rho$. Replacing the triangulation of $K$ by a suitable subdivision, we may additionally assume that no simplex of $K$ meets both $\overline{\alpha(X)}$ and $K\backslash G$. So, the union $N$ of all simplexes $\sigma\in K$ with $\sigma\cap\overline{\alpha(X)}\neq\varnothing$ is a subcomplex of $K$ and $N\subset G$. Moreover, since $N$ is closed in $K$, $p_N=p|N\colon N\to L$ is a perfect map and $\dim p_N\leq{\dim_\triangle}f$. Therefore, we have the following commutative diagram, where $N$ and $L$ are simplicial complexes, $p_N$ is a $PL$-map and $\varphi=g^D\circ \varphi_D$: $$\xymatrix{ N\ar[d]_{p_N}&X\ar[d]^f\ar[l]_{\alpha}\ar[r]^{\varphi\circ\alpha}&M\\ L&Y\ar[l]^\beta\ar[ru]_{\varphi} }$$ Using that $\alpha$ is a ${\mathcal U}_1$-map and $\inf\{\epsilon(x):x\in U\}>0$ for all $U\in{\mathcal U}_1$, we can construct a continuous function $\epsilon_1:N\to(0,1]$ with $\epsilon_1\circ\alpha\leq\epsilon$. Then, by Lemma \[simcomplex\], there exists a map $\varphi_1\in C(N,M)$ which is $\epsilon_1/2$-homotopic to $\varphi$ and $\dim\varphi_1(p_N^{-1}(z))\leq n$ for every $z\in L$. Let $g'=\varphi_1\circ\alpha$. Obviously, $g'$ is simplicially factorizable. It is easily seen that $g'$ and $g$ are $\epsilon$-homotopic and $g'(f^{-1}(y))\subset \varphi_1(p_N^{-1}(\beta(y)))$ for all $y\in Y$. So, $\dim g'(f^{-1}(y))\leq\dim\varphi_1(p_N^{-1}(\beta(y)))\leq n$. This completes the proof. 
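Let us illustrate the hypothesis ${\dim_\triangle}(f)\leq n$ of Theorem 1.1 by the simplest example (a folklore observation, included only for the reader's convenience): for the projection $\pi\colon Y\times{\mathbb I}^n\to Y$ we have ${\dim_\triangle}(\pi)\leq n$. Indeed, given an open cover ${\mathcal U}$ of $Y\times{\mathbb I}^n$, take $g=\mathrm{pr}_{{\mathbb I}^n}\colon Y\times{\mathbb I}^n\to{\mathbb I}^n$. Then $$\pi\Delta g\colon Y\times{\mathbb I}^n\to Y\times{\mathbb I}^n$$ is the identity map, so every point has a neighborhood $V$ contained in an element of ${\mathcal U}$, and $(\pi\Delta g)^{-1}(V)=V$ is a one-element open family refining ${\mathcal U}$; hence $\pi\Delta g$ is ${\mathcal U}$-disjoint.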
Some more examples of $\mathrm{FAP}(n)$-spaces ============================================== The class of $\mathrm{AP}(n,0)$-spaces was introduced by the authors in [@bv1]: we say that a metrizable space $M$ has the [*$\mathrm{AP}(n,0)$-approximation property*]{} (br., $M\in\mathrm{AP}(n,0)$) if for every ${\varepsilon}>0$ and every map $g\colon{\mathbb I}^n\to M$ there exists a $0$-dimensional map $g'\colon{\mathbb I}^n\to M$ which is ${\varepsilon}$-homotopic to $g$. The next proposition provides a wide class of spaces with the $\mathrm{FAP}(n)$-property. Let $M_1\in\mathrm{AP}(n,0)$ be a completely metrizable $n$-dimensional space, $n\geq 0$. Then $M_1\times M_2$ has the $\mathrm{FAP}(n)$-property for any completely metrizable space $M_2$. We are going to show that every map $g=(g_1,g_2)\colon{\mathbb I}^m\times{\mathbb I}^n\to M_1\times M_2$, where $m\geq 0$, can be homotopically approximated by a map $h\in C({\mathbb I}^m\times{\mathbb I}^n, M_1\times M_2)$ with $\dim h(\{z\}\times{\mathbb I}^n)\leq n$ for all $z\in{\mathbb I}^m$. Denote by $\pi$ the projection $\pi\colon{\mathbb I}^m\times{\mathbb I}^n\to {\mathbb I}^m$. Since $M_1$ has the $\mathrm{AP}(n,0)$-property, the map $g_1\colon{\mathbb I}^m\times{\mathbb I}^n\to M_1$ can be homotopically approximated by a map $h_1\colon{\mathbb I}^m\times{\mathbb I}^n\to M_1$ such that all restrictions $h_1|(\{z\}\times{\mathbb I}^n)$, $z\in{\mathbb I}^m$, have $0$-dimensional fibers, see [@bv1 Theorem 1.1]. This means that the diagonal product $\pi\triangle h_1\colon{\mathbb I}^m\times{\mathbb I}^n\to{\mathbb I}^m\times M_1$ is a $0$-dimensional map. It follows from our definition that every metrizable space has the $\mathrm{FAP}(0)$-property. 
So, by Theorem 1.1, the map $g_2\colon{\mathbb I}^m\times{\mathbb I}^n\to M_2$ can be homotopically approximated by a map $h_2\colon{\mathbb I}^m\times{\mathbb I}^n\to M_2$ such that all images $h_2\big((\{z\}\times{\mathbb I}^n)\cap h_1^{-1}(z_1)\big)$, $(z,z_1)\in{\mathbb I}^m\times M_1$, are $0$-dimensional. Then $h=(h_1,h_2)\colon{\mathbb I}^m\times{\mathbb I}^n\to M_1\times M_2$ approximates $g$. For any $z\in{\mathbb I}^m$ consider the map $p_z:h(\{z\}\times{\mathbb I}^n)\to h_1(\{z\}\times{\mathbb I}^n)$, $p_z(h(z,t))=h_1((z,t))$, $t\in{\mathbb I}^n$. Observe that $\dim h_1(\{z\}\times{\mathbb I}^n)\leq n$ (recall that $\dim M_1=n$) and $p_z^{-1}(z_1)=h_2\big((\{z\}\times{\mathbb I}^n)\cap h_1^{-1}(z_1)\big)$ for any $z_1\in h_1(\{z\}\times{\mathbb I}^n)$. So, $\dim p_z=0$. According to the dimension-lowering Hurewicz theorem, $\dim h(\{z\}\times{\mathbb I}^n)\leq\dim h_1(\{z\}\times{\mathbb I}^n)+\dim p_z\leq n$. This completes the proof. Since every space with the disjoint $(n-1)$-disks property $\mathrm{DD^{n-1}P}$, in particular, every manifold modeled on the $n$-dimensional Menger cube or the $n$-dimensional Nöbeling space, is an $\mathrm{AP}(n,0)$-space, see [@bv1 Corollary 6.5], we have the following. Let $X$ be a completely metrizable $n$-dimensional space with the disjoint $(n-1)$-disks property. Then $X\times M$ has the $\mathrm{FAP}(n)$-property for any completely metrizable space $M$. We now introduce a subclass of the $\mathrm{FAP}(n)$-spaces: a metric space $M$ is said to be a [*strong $\mathrm{FAP}(n)$-space*]{} if $M\in\mathrm{FAP}(k)$ for all $k\leq n$. This is equivalent to the following condition: any map $g\in C({\mathbb I}^m\times{\mathbb I}^k,M)$, where $m\geq 0$ and $k\leq n$, can be homotopically approximated by a map $g'\in C({\mathbb I}^m\times{\mathbb I}^k,M)$ with $\dim g'(\{z\}\times{\mathbb I}^k)\leq k$ for all $z\in{\mathbb I}^m$. The local nature of strong $\mathrm{FAP}(n)$-spaces follows from Theorem 2.7. 
A complete metric space $M$ has the strong $\mathrm{FAP}(n)$-property iff every $z\in M$ has a neighborhood with the same property. By [@tv1], any Euclidean space possesses the strong $\mathrm{FAP}(n)$-property for all $n\geq 0$. More general examples of strong $\mathrm{FAP}(n)$-spaces are provided by the next proposition. Let each $M_i$, $i=1,2,\ldots,n$, be a completely metrizable $\mathrm{LC}^{0}$-space without isolated points. Then the product $\prod_{i=1}^{n}M_i$ is a strong $\mathrm{FAP}(n)$-space. According to [@bv1 Corollary 6.3], any product of $k$ completely metrizable $\mathrm{LC}^{0}$-spaces without isolated points has the $\mathrm{AP}(k,0)$-property. Then Proposition 4.1 completes the proof. Any product of finitely many completely metrizable $1$-dimensional $\mathrm{LC}^{0}$-spaces without isolated points has the $\mathrm{FAP}(n)$-property for all $n\geq 0$. [999]{} T. Banakh and V. Valov, *General position properties in Fiberwise Geometric Topology*, book in progress (ArXiv:1001.2494). T. Banakh and V. Valov, *Approximation by light maps and parametric Lelek maps*, Topology Appl. (to appear). R. Cauty, [*Convexité topologique et prolongement des fonctions continues*]{}, Compos. Math. [**27**]{} (1973), 233–273. E. Matsuhashi and V. Valov, *Krasinkiewicz spaces and parametric Krasinkiewicz maps*, preprint (arXiv:0802.4436v2). M. Levin, [*Bing maps and finite-dimensional maps*]{}, Fund. Math. **151, 1** (1996), 47–52. B. Pasynkov, *On geometry of continuous maps of countable functional weight*, Fundam. Prikl. Matematika [**4, 1**]{} (1998), 155–164 (in Russian). D. Repovš and P. Semenov, [*Continuous selections of multivalued mappings*]{}, Math. and its Appl. [**455**]{}, Kluwer, Dordrecht (1998). O. Sipachëva, [*On a class of free locally convex spaces*]{}, Mat. Sb. [**194:3**]{} (2003), 25–52 (in Russian); translation in: Sb. Math. [**194:3-4**]{} (2003), 333–360. M. Tuncali and V. Valov, [*On dimensionally restricted maps*]{}, Fund. Math. 
[**175**]{}, (2002), no. 1, 35–52. M. Tuncali and V. Valov, [*On finite-dimensional maps II*]{}, Topology and Appl. **132** (2003), 81–87. V. Uspenskij, *A remark on a question of R. Pol concerning light maps*, Topology Appl. **103, 3** (2000), 291–293. [^1]: The second author was partially supported by NSERC Grant 261914-08.
Dancing on a Dime Dancing on a Dime is a 1940 Paramount Pictures film directed by Joseph Santley about five actors and dancers putting on a show while living in a theatre. It is adapted from a novel of the same name by Dorothy Young, which itself is based loosely on her own life. It starred Robert Paige, Peter Lind Hayes, Eddie Quillan, Frank Jenks, and Grace McDonald. It is known for its song "I Hear Music". References External links Category:1940 films Category:American films Category:American black-and-white films Category:Films directed by Joseph Santley Category:American musical films Category:1940s musical films
Precision medicine in the age of big data: The present and future role of large-scale unbiased sequencing in drug discovery and development. High-throughput molecular and functional profiling of patients is a key driver of precision medicine. DNA and RNA characterization has been enabled at unprecedented cost and scale through rapid, disruptive progress in sequencing technology, but challenges persist in data management and interpretation. We analyze the state of the art of large-scale unbiased sequencing (LUS) in drug discovery and development, including technology, application, ethical, regulatory, policy and commercial considerations, and discuss issues of LUS implementation in clinical and regulatory practice.
A biopic of Madonna, titled Blond Ambition, has been named the most admired unproduced film script currently in the Hollywood system. Written by Elyse Hollander, Blond Ambition has topped the newly published Black List, which invites film executives to vote on their favourite scripts that will not have started shooting before the end of 2016. Blond Ambition topped the pile of 73 scripts, taking 48 votes; its description is: “In 1980s New York, Madonna struggles to get her first album released while navigating fame, romance, and a music industry that views women as disposal [sic].” Hollander appears to be a genuine industry newcomer, having written and directed a series of short films and, according to IMDb, acted as an uncredited director’s assistant on Birdman. Three scripts were in joint second place, taking 35 votes each. Life Itself, from experienced writer Dan Fogelman (Crazy, Stupid, Love; Tangled) is a drama “that weaves together a number of characters whose lives intersect over the course of decades from the streets of New York to the Spanish countryside and back”. The Olympian, by Tony Tost, is the “true story of an underdog rower trying to make it into the 1984 Olympics”. The Post, by Liz Hannah, tells the story of the Washington Post and publication of the Pentagon Papers. The Black List, which has been published annually since 2005, stresses that it is a “most-liked” list, but it has become increasingly influential over the years, with several recent awards contenders, including Manchester By the Sea, Miss Sloane and The Founder, featuring on past lists.
Farmer Boys Farmer Boys is a quick service restaurant chain based primarily in California, with headquarters in Riverside, California. History Farmer Boys was founded by the Havadjias brothers in 1981. The Havadjias family were immigrants from Cyprus. The brothers originally owned Astro Burgers in Torrance, California in 1979 and Theodore's Restaurant in Hollywood, California in 1981 prior to owning McCoy's Restaurant in Perris, California. McCoy's Restaurant became the first Farmer Boys restaurant following the name change in August 1981. About 16 years later, Farmer Boys had grown to eight restaurants. A year later, Farmer Boys was granted franchise status and later in the year, a Farmer Boys restaurant opened in Temecula, California. To date, there are over 90 Farmer Boys restaurants in California and Nevada. Today, thanks to an ever-expanding franchise operation, the Farmer Boys family continues to grow. Awards Farmer Boys was voted Best Burger in the Inland Empire by MyFoxLA Hotlist Inland Empire, KTTV-TV. KCBS-TV (CBS 2) has named Farmer Boys as the best burger in Southern California, as well as Inland Empire Magazine's best burger. Farmer Boys has also won awards for their deep-fried zucchini and monster-sized onion rings, described as "'monster truck tires' compared to competitors' 'puny little training wheels'". See also List of hamburger restaurants References External links Farmer Boys Category:Fast-food chains of the United States Category:Fast-food franchises Category:Fast-food hamburger restaurants Category:Regional restaurant chains in the United States Category:Companies based in Riverside, California Category:Restaurants established in 1981 Category:Restaurants in Riverside County, California Category:1981 establishments in California
# Exercise one model configuration against the ozone test harness:
# a Box-Cox transformation, a Lag1Trend trend, day-of-week seasonality,
# and an autoregressive (AR) cycle component.
import tests.model_control.test_ozone_custom_models_enabled as testmod

testmod.build_model(['BoxCox'], ['Lag1Trend'], ['Seasonal_DayOfWeek'], ['AR'])
Patrick Coolican, the Las Vegas Sun Anjeanette Damon, the Reno Gazette-Journal Ian Mylchreest, KNPR's State of Nevada ... on revelations concerning Sarah Palin from day one of the RNC, and what to expect today.
Your opinion about developer salaries and getting a raise - wildmXranat Location: Toronto, Canada<p>Job: PHP, Symfony, LAMP<p>From the outset, the job began rather plain. I joined a team that took over a Symfony framework application. After about a year, the company took on a new project and I became one of the three developers tasked with making it happen.<p>Fast-forward 18 months; the project is doing great and I have amassed tons of experience. Despite not being lead, I took charge submitting ideas that pushed the project ahead. It sort of became my trademark and I saw my value rise. With my hiring anniversary just passing in February, I asked to have a meeting concerning my compensation.<p>I'm asking for feedback and your opinion regarding salary negotiation. Do you feel that salary guides are accurate or inflated? One comes to mind: http://www.roberthalftechnology.com/External_Sites/content/DM-FreeResources/RHT/downloads/RHT-SG-2011.pdf<p>Is it realistic to assume that a base salary of 63k x 18% or about 75k is an amount to shoot for ? What is your experience in these types of situations ? ====== freddealmeida In the past, what I normally did was to work from contribution to revenue. What % of revenue do you contribute with your work? Work from there. If you contribute 400k a year, asking for 100K is more than acceptable. Also factor in the cost for them losing you. Will they require someone immediately? pay for headhunters? or will they hardly need you. Use this as a calibration of the number. Also find out what everyone else is making. Especially your boss. and your colleagues. Some transparency is required on your part and some hustling. Also, how will you add more value once the raise is given. what will it allow you to do? what is the value to the company in addition to keeping a seasoned developer. ie. what is your action plan if you get a raise. vs not getting it. you should also have a plan for not getting the raise. 
Consider this your black swan event insurance. If you do not get what you want or need, what will you do? No extra money? How about extra time, or new tech? Or will you quit? You should also have a plan for not getting the raise. If you are highly valued and are prepared to quit unless you get what you are actually worth to the firm, you are in a better position to shift the negotiation.
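The arithmetic in the thread can be sketched quickly. This is a back-of-the-envelope helper, not financial advice; the 63k base and 18% uplift come from the original post, while the 400k revenue contribution and the 25% revenue share are purely illustrative assumptions from the reply above.

```python
# Rough salary-negotiation arithmetic for the numbers discussed above.
# All figures are illustrative; the revenue share is an assumed anchor.

def raise_target(base_salary: float, uplift_pct: float) -> float:
    """Target salary: current base plus a percentage uplift."""
    return base_salary * (1 + uplift_pct / 100)

def revenue_anchored_ask(revenue_contribution: float, share: float = 0.25) -> float:
    """Anchor an ask to a (hypothetical) share of the revenue you help generate."""
    return revenue_contribution * share

if __name__ == "__main__":
    # 63k base with the salary guide's ~18% uplift:
    print(round(raise_target(63_000, 18)))        # 74340
    # Hypothetical 400k/yr revenue contribution, 25% share:
    print(round(revenue_anchored_ask(400_000)))   # 100000
```

Either number is only a starting anchor; the reply's larger point is that replacement cost (headhunter fees, ramp-up time) also belongs in the calculation.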
Q: How can a Desktop Environment be one layer under a shell (kernel-DE-shell instead of kernel-shell-DE)? I ask the following as a follow-up to this question. How can a Desktop Environment sit one layer under a shell (kernel-DE-shell instead of kernel-shell-DE)? Why I ask this question: In Ubuntu, for example, GNOME Shell and the Unity GUI each sit two layers above the GNOME Desktop Environment (DE). My assumption: Maybe the order differs between CLI-only and CLI+GUI systems; that is, maybe in a CLI-only system it is, for example: kernel-shell (sh, Bash)-utilities, and in a CLI+GUI system it is, for example: kernel-primary shell (sh, Bash)-DE-secondary shell (GNOME Shell)-GUI (Unity). A: There is no primary shell. If you’re running the default GNOME 3 desktop, then the stack is Kernel → X.org or Wayland → GNOME session manager (which starts a number of GNOME helper applications) → GNOME Shell (which uses a number of GNOME libraries). If you’re running Unity, then the stack is Kernel → X.org or Mir or Wayland → GNOME session manager → Unity (which also uses a number of GNOME libraries). If you’re running a command-line shell in a virtual console or an old-school terminal, then the stack is Kernel → login → shell. A desktop environment is a whole set of applications working together to provide a consistent experience to the user. The “shell” is one of those applications (the one which acts as the last layer in the interface to the user, i.e. the one which has first dibs on user-initiated events such as keystrokes).
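To make the answer's layering concrete, here is a minimal Python sketch that just encodes the three stacks described above as data, bottom (kernel) first. It is an illustration of the answer, not an inspection of a live system, and the component names are taken verbatim from the answer.

```python
# The three userspace stacks from the answer above, listed bottom-up.
# This only models the layering for illustration; it does not probe a running session.

STACKS = {
    "gnome3":  ["kernel", "X.org/Wayland", "GNOME session manager", "GNOME Shell"],
    "unity":   ["kernel", "X.org/Mir/Wayland", "GNOME session manager", "Unity"],
    "console": ["kernel", "login", "shell"],
}

def layer_above(stack_name: str, component: str):
    """Return the component sitting directly above `component`, or None at the top."""
    stack = STACKS[stack_name]
    i = stack.index(component)
    return stack[i + 1] if i + 1 < len(stack) else None

if __name__ == "__main__":
    # In the console stack, the shell sits directly on top of login:
    print(layer_above("console", "login"))   # shell
    # Note that no stack contains a "primary shell" beneath the DE:
    print("bash" in STACKS["gnome3"])        # False
```

The point the data makes visible: a command-line shell only appears in the console stack, so there is no kernel-shell-DE ordering to invert in the graphical sessions.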
Great Coconut Vape! Posted by Mitch Clarke on 13th May 2015 This definitely caught me by surprise. I'm not usually a fan of coconut vapes, but this is making me doubt my stance. The coconut is neither overpowering nor underwhelming; just a soft, creamy base elevated by the rich pretzel accent. Definitely a must-try, if not a must-buy.
Need a Pro? For most, a garage door repair is NOT a "do it yourself" project. FREE Estimates Overhead Door Opener In Northbrook Need garage door service help outside of Northbrook? Use the search tool below: Trying to find the best-priced garage door opener in Northbrook? Here's some information to help the do-it-yourselfer; or, if you're like most people and need a certified professional garage opener installer, you can always call 866-334-9930 ext 1. Many garage doors are heavy enough to inflict serious injury or even death, so safety features on garage door openers are essential. We looked mainly at garage door openers designed to be installed by a professional door company, so it's important that the product you choose comes with its own layer of customer support. When looking for information about various garage door openers, or about those who can repair or replace them in Northbrook, call us for a quote. Overhead Door Opener Help In Northbrook 866-334-9930 ext 1 Garage door openers have improved considerably over the past 9 years, with the biggest improvements in safety and noise reduction. The garage door openers found in Northbrook homes with the power to raise the heaviest doors, such as those with a 1/2 hp motor, are best suited to conventional aluminum sectional doors; these openers can handle a standard seven-foot door, or taller doors with an extension kit. A good garage door opener in Northbrook is the one you install and then never think about again. There are four kinds of garage door openers: chain drive, screw drive, belt drive, and torsion drive. Ultra-quiet belt drive models exceed the budget garage door opener price point entirely. GarageDoorsofAmerica.com makes it easy for you to get garage door opener help when you need it. 
A quick call describing what is wrong with your garage opener can get us into action to resolve your problem, the same day in most cases. With extremely competitive pricing on both new sales and service of garage door openers in various price ranges, you'll be glad you made the call.
"use strict"; this.name = "Null AI"; // does nothing
Preparation: In a pot, add the milk, cardamom (ilaichi) seeds, rice flour and sugar. Cook on a very low flame, stirring continuously. When the mixture thickens, add the kewra essence and remove from heat. Garnish with dry fruits.
Campus Construction Update: July 26, 2011 “It’s the windows,” said Doris Vincent, administrative assistant for the Off-Campus Study Program. “There are so many big, bright windows.” It was also the wall colors that glowed so pleasantly in the light from all those windows: sage green, warm cream, caramel, a rich medium blue. And it was the generous space in the new offices. And the long views from the big windows. And the cozy lounges. And, of course, the “tokonoma” in the Asian studies common space on the second floor — tokonoma being the Japanese term for a recessed display area for artistic valuables. In short, the few staffers and faculty from Off-Campus Study and the foreign language departments who toured the renovated Roger Williams Hall on July 7 were glowing right along with the wall paint, delighted by the many improvements, aesthetic and functional, over their previous quarters. The tour was a prelude to moving day on July 25, when offices were turned over to the Bill’s 27 occupants. In an arrangement similar to what Bates and general contractor Wright-Ryan Construction worked out for Hedge Hall, which opened to occupants on June 27, Bates has assumed operational control of the building while the contractor continues its work. Of which there ain’t much, at least inside. “The places where furniture goes are done,” project manager Paul Farnsworth reported on July 21. “The last thing that we’re doing is touch-up paint on the handrails in Stair Two,” the stairway in the new section of the building. The end game for the Bill differs in significant ways from Hedge’s. Both buildings are undergoing punch-list inspections and corrections. But at Hedge, in order to shift workers over to the Bill as soon as possible, that process began only after most other work was complete. At Roger Williams, they’ve been working on punch lists as they’ve gone along, so they’re ahead of the game. The Bill has also been faster because of its configuration. 
Its ceilings are higher, which makes some tasks easier to do. And while its footprint is close to square, Hedge’s is long and skinny, which both constricts the space for people to work in and, crucially, enforces stricter adherence to the building plans. Don’t take that wrong. Because architects can’t foresee every condition in a building, Farnsworth explains that sometimes you need to move a wall slightly, for instance, to make things fit. “In Hedge, we didn’t really have that option as much, where in Roger Williams we were like, ‘Yeah, we can move that out an inch — no one will know the difference.’ ” Outside the Bill, workers poured a substantial set of steps leading from the stair tower to the Library Quad, and graded the area near the Evelyn Minard Phillips Remembrance Garden. With new grass already sprouting on the side of Roger Bill facing Pettengill Hall, the remaining loam and grass seed should be spread by the end of July. Finally, 17 months after it appeared, the fence around the construction site was taken down on July 20. We walked by one day and it took a few minutes to figure out what was missing. And when we finally did realize that the thing we were missing was an obstructed view, we took it as a signal from the heavens. We began our reporting on the Hedge Hall and Roger Williams renovation with the arrival of the construction fence — and as the fence goes, so does Campus Construction Update, at least for now. Thanks for reading.