# Novel diffuse white matter abnormality biomarker at term-equivalent age enhances prediction of long-term motor development in very preterm children

## Abstract

Our objective was to evaluate the independent prognostic value of a novel MRI biomarker—objectively diagnosed diffuse white matter abnormality volume (DWMA; diffuse excessive high signal intensity)—for prediction of motor outcomes in very preterm infants. We prospectively enrolled a geographically-based cohort of very preterm infants without severe brain injury and born before 32 weeks gestational age. Structural brain MRI was obtained at term-equivalent age and DWMA volume was objectively quantified using a published, validated algorithm. These results were compared with visually classified DWMA. We used multivariable linear regression to assess the value of DWMA volume, independent of known predictors, to predict motor development as assessed using the Bayley Scales of Infant & Toddler Development, Third Edition at 3 years of age. The mean (SD) gestational age of the cohort was 28.3 (2.4) weeks. In multivariable analyses, controlling for gestational age, sex, and abnormality on structural MRI, DWMA volume was an independent prognostic biomarker of Bayley Motor scores ($$\beta$$ = −12.59 [95% CI −18.70, −6.48]; R2 = 0.41). Conversely, visually classified DWMA was not predictive of motor development. In conclusion, objectively quantified DWMA is an independent prognostic biomarker of long-term motor development in very preterm infants and warrants further study.

Cerebral palsy (CP) describes a spectrum of life-long disorders of movement and posture that affect 800,000 Americans1.
CP is the most common physical disability in children, with annual healthcare costs of \$15 billion2. Up to 10% of very preterm infants develop CP and 32−42% develop minor motor abnormalities3,4,5. Despite our understanding that these motor abnormalities are the result of abnormal development or brain injury during the fetal or neonatal period, children typically do not receive a diagnosis until 1 to 2 years of age6. There is wide consensus that earlier diagnosis, soon after birth, is urgently needed to take full advantage of critical windows of early neuroplasticity, particularly during the first two years7,8. Earlier diagnosis would facilitate targeted delivery of early interventions9 and novel habilitative therapies during this optimal period for brain development10. Structural MRI (sMRI) at term-equivalent age is normal in up to 30% of diagnosed CP cases11,12,13,14. New advanced quantitative MRI measures may hold the greatest promise for enhancing prediction accuracy15, with quantitative cerebral morphometric analyses representing the most clinically feasible approach. Of these morphometric measures, objective assessment of diffuse excessive high signal intensity (DEHSI) abnormality is of particular importance due to its high prevalence in preterm infants14,16,17,18,19, association with in-vivo microstructural18,20, metabolic21, and postmortem pathology22, and early evidence suggesting a correlation with neurodevelopmental impairments (NDI)17,23,24,25,26. While most studies that diagnosed DEHSI visually/qualitatively have not reported a significant association with NDI18,19,27,28,29,30,31,32, when quantified objectively, DEHSI appears to significantly predict cognitive and language development in extremely preterm infants24,25. To better reflect its pathologic nature, we will henceforth use the label diffuse white matter abnormality (DWMA) in place of DEHSI.
The goal of this study was to examine the prognostic value of objectively quantified DWMA volume at term-equivalent age for prediction of motor development in a prospective cohort of very preterm infants. We hypothesized that objectively quantified DWMA volume would be an independent predictor of motor development at 3 years of age.

## Methods

### Population

All very preterm infants born at 31 weeks completed gestation or earlier and admitted to any of the four level III neonatal intensive care units (NICUs) in Columbus, Ohio from November 2014 to March 2016 were eligible for inclusion25. We prospectively enrolled 110 very preterm infants from a consecutively eligible sample of infants during this period. The four NICUs were Nationwide Children’s Hospital (NCH), Ohio State University Medical Center, Riverside Hospital, and Mount Carmel St. Ann’s Hospital. These NICUs care for approximately 80% of all very preterm infants in the Columbus, Ohio region. We excluded any infants with congenital or chromosomal anomalies that affect the central nervous system and would likely result in a poor outcome. Data collection occurred between January 2015 and July 2018. The NCH Institutional Review Board approved the study at NCH and at the other study sites through established reciprocity agreements. Written informed consent was obtained from a parent or guardian of each very preterm infant after they were given sufficient time to decide whether they wished to participate. All methods/research activities were carried out in accordance with the NCH Institutional Review Board guidelines and regulations. All study infants were invited for routine developmental follow-up in the NCH High-Risk Follow-up Clinic up to 3 years corrected age.

### Magnetic resonance imaging acquisition

We performed brain structural MRI scans on all 110 study infants at NCH on a 3T Siemens Skyra MRI scanner with a 32-channel pediatric head coil between 39 and 44 weeks post-menstrual age (PMA).
Infants from NCH were typically imaged while still inpatients, whereas all infants cared for at the other three NICUs were imaged as outpatients after discharge. All inpatient MRI scans were attended by a skilled neonatal nurse and a neonatologist. Heart rate and oxygen saturation of all infants were monitored continuously during all scans. We performed all imaging without sedation by feeding the infants 30 min prior to the scan, applying silicone earplugs, and swaddling the infants in a blanket and a vacuum immobilization device (MedVac, CFI Medical Solutions, Fenton, MI) to promote natural sleep. There were no adverse events. The following structural MRI sequence parameters were used for all infants: axial T2-weighted: echo time 147 ms, repetition time 9,500 ms, echo train length 16, flip angle 150°, resolution 0.93 × 0.93 × 1.0 mm3, scan time 4:09 min; axial SWI: echo time 20 ms, repetition time 27 ms, flip angle 15°, resolution 0.7 × 0.7 × 1.6 mm3, scan time 3:11 min; 3-dimensional magnetization-prepared rapid gradient echo: echo time 2.9 ms, repetition time 2,270 ms, inversion recovery time 1,600 ms, echo spacing time 8.5 ms, flip angle 13°, resolution 1.0 × 1.0 × 1.0 mm3, scan time 3:32 min.

### Image post-processing

We applied our previously published algorithm to objectively detect and quantify DWMA on T2-weighted MRI (Fig. 1; for in-depth methods and additional examples of DWMA segmentation, see He et al.25). To summarize, we first conducted bias field correction (removal of signal intensity inhomogeneity caused mainly by the radiofrequency coils) and intensity normalization (reducing the variations in signal intensity and contrast across slices and across subjects). Next, we conducted brain tissue segmentation using a neonatal probabilistic brain atlas as a guide and defined DWMA as any voxels with signal intensity values greater than $$\alpha$$ standard deviations above the mean for all cerebral tissues (white and gray matter).
We refer to $$\alpha$$ as our cut-off threshold. For this study, we first examined a cut-off threshold of 2.0. However, this threshold was too restrictive and defined only very small regions as DWMA; therefore, we chose a lower threshold of 1.8 SD. We controlled for partial volume artifacts by only labeling voxels with high gray and white matter membership probability (≥ 95%) as cerebral tissues. We manually removed the few isolated false positive voxels detected by the algorithm. Total DWMA volume was calculated as the product of a single voxel volume (determined by the imaging resolution) and the total number of voxels in the detected DWMA region. This algorithm was first validated on simulated preterm infant brains with manually drawn DWMA representing the ground truth, demonstrating that our DWMA algorithm exhibited strong agreement with this ground truth, both qualitatively and quantitatively25. We limited DWMA detection to the centrum semiovale because we have found this to be the most predictive white matter region and it is not confounded by the normal high signal intensity of the periventricular crossroads24,25,26. We defined the centrum semiovale as the central white matter in the two slices immediately above the lateral ventricles on axial view. We calculated a normalized DWMA volume by dividing DWMA volume by total cerebral white matter volume. All analyses were performed masked to clinical and outcome data.

### MRI imaging assessment

All brain structural MRI readings were performed by pediatric neuroradiologists who used a standardized scoring system graded for degree of brain injury/maturation, and the objective quantitative biometric measurements were performed separately by a trained expert, per Kidokoro et al.33. This approach yielded a global brain abnormality score, which was categorized as normal (total score 0–3), mild (total score 4–7), moderate (total score 8–11), or severe abnormality (total score $$\ge$$ 12).
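The threshold-and-count procedure described in the Image post-processing section above can be sketched in a few lines of NumPy. This is a simplified illustration under stated assumptions (a precomputed combined tissue-probability map and white matter mask are taken as inputs), not the authors' published implementation25:

```python
import numpy as np

def dwma_volumes(intensity, tissue_prob, wm_mask, voxel_vol_mm3,
                 alpha=1.8, prob_cutoff=0.95):
    """Sketch of threshold-based DWMA quantification.

    DWMA voxels are cerebral-tissue voxels (gray/white matter membership
    probability >= prob_cutoff, which limits partial-volume artifacts)
    whose signal intensity exceeds the cerebral-tissue mean by more than
    `alpha` standard deviations. Returns (DWMA volume in mm^3,
    DWMA volume normalized by total white matter volume).
    """
    tissue = tissue_prob >= prob_cutoff
    mu, sigma = intensity[tissue].mean(), intensity[tissue].std()
    dwma = tissue & (intensity > mu + alpha * sigma)
    dwma_vol = dwma.sum() * voxel_vol_mm3   # voxel count x single-voxel volume
    wm_vol = wm_mask.sum() * voxel_vol_mm3
    return dwma_vol, dwma_vol / wm_vol
```

In practice, detection would additionally be restricted to the centrum semiovale slices and followed by removal of isolated false positives, as described above.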
A single reader (NAP) with more than 10 years of experience interpreting neonatal MRI scans performed visual qualitative classification of DWMA, masked to clinical and outcome data. The DWMA score was based on severity and extent as described by Kidokoro et al.18. Infants were assigned grade 0 if there was no DWMA or if high signal intensity was present only in the periventricular crossroads, grade 1 if DWMA was visible in only one region, grade 2 if DWMA was visible in two regions, and grade 3 if three or more regions were involved in addition to the normal signal intensity observed in the crossroads. While DWMA was observed in all white matter regions, the centrum semiovale was the most commonly identified region with qualitatively defined DWMA. The reader also assessed whether the margins of the posterior crossroads were invisible (defined as invisible posterior crossroads). The same reader reevaluated 20 randomly chosen MRI scans three weeks later, and we used kappa ($$\kappa$$) statistics to assess intra-rater agreement for DWMA grade. Of the 20 subjects, complete agreement was seen in 60% of cases (expected agreement 31.0%), for a $$\kappa$$ of 0.42. This represents fair to moderate agreement34,35.

### Neurodevelopmental assessment

Participating infants underwent a comprehensive neurodevelopmental evaluation at a median age of 36.1 (IQR 35.3–37.5) months in the NCH High-Risk Follow-up Clinic. We assessed overall motor development using the standardized Bayley Scales of Infant and Toddler Development, Third Edition (Bayley-III). A Motor composite score (a composite of fine and gross motor development scores) 3 SD below the normative mean was assigned to children who could not complete the test due to likely severe disability. The Bayley-III Motor composite score is scaled to a mean of 100 (SD 15) with a range of 40–160.
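As a numerical check, the intra-rater kappa reported above follows directly from the observed and expected agreement proportions; the values below are taken from the text:

```python
def cohens_kappa(p_observed, p_expected):
    """Chance-corrected agreement: kappa = (p_o - p_e) / (1 - p_e)."""
    return (p_observed - p_expected) / (1 - p_expected)

# Observed agreement 60%, expected (chance) agreement 31.0%
print(round(cohens_kappa(0.60, 0.31), 2))  # → 0.42
```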
Examiners performed the standardized Amiel-Tison neurologic exam36, which included evaluation of tone, reflexes, posture, and strength; gross motor function was classified using the Gross Motor Function Classification System37. Cerebral palsy was defined as abnormal muscle tone in at least one extremity and abnormal control of movement and posture. All assessments were performed by assessors who were masked to the quantitative DWMA diagnosis but not masked to clinical information.

### Statistical analyses

In univariate analyses, we examined the relationship between normalized DWMA volume and the Bayley-III Motor composite score using linear regression. To evaluate the independent prognostic value of DWMA volume, we performed multivariable regression by adding known perinatal predictors of Bayley scores, including sex, gestational age, and global brain abnormality score. We also added center/NICU and PMA at MRI to the multivariable model to control for their potential confounding effects. In addition, we tested a model that substituted global brain abnormality with a composite variable that included sMRI injury variables known to be strong predictors of motor impairment: cystic white matter abnormalities, hemorrhage (intraventricular, parenchymal, and/or cerebellar), and punctate white matter lesions7. The internal validity of our final model was tested by estimating a bias-corrected confidence interval derived from a bootstrap procedure involving 10,000 resamples38. In secondary analyses, to assess prediction accuracy for CP, we used Fisher’s exact test to evaluate prognostic properties, including sensitivity, specificity, and positive and negative likelihood ratios, for: (1) objectively quantified severe DWMA (normalized DWMA volume dichotomized at > 90th percentile [pre-specified cut-off]), (2) global brain abnormality (moderate or greater), and (3) visually-classified severe DWMA (grade 3).
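The bootstrap idea behind the internal validation above can be illustrated with a plain percentile interval for a regression slope. The study used a bias-corrected interval38; this NumPy-only sketch uses synthetic stand-in data (all numbers illustrative, not study data):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in: outcome = 100 - 12 * predictor + noise (illustrative values only).
n = 200
x = rng.normal(size=n)
y = 100.0 - 12.0 * x + rng.normal(scale=5.0, size=n)
X = np.column_stack([np.ones(n), x])

def ols_slope(X, y):
    # Ordinary least squares via lstsq; coefficient index 1 is the slope.
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

point = ols_slope(X, y)

# Resample subjects with replacement and refit the model each time.
boot = []
for _ in range(5000):
    idx = rng.integers(0, n, size=n)
    boot.append(ols_slope(X[idx], y[idx]))

lo, hi = np.percentile(boot, [2.5, 97.5])  # percentile 95% CI for the slope
```

A bias-corrected (or BCa) interval additionally adjusts the percentile cut-points for the bias and skew of the bootstrap distribution.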
We used logistic regression to evaluate the relationship between DWMA and CP. Last, we used Pearson’s correlation and multivariable linear regression to assess the relationships between (1) DWMA volume and global brain abnormality score and (2) DWMA volume and visually defined DWMA. We used the traditional two-sided P value of < 0.05 to indicate statistical significance. All analyses were performed using STATA 16.0 (Stata Corp., College Station, TX).

## Results

Of the original cohort of 110 very preterm infants, we excluded one infant due to excessive motion artifacts and all 11 infants with severe brain injury, since severe injury interfered with accurate DWMA segmentation (e.g. severe ventriculomegaly resulting in loss of centrum semiovale white matter). Structural MRI was performed at a mean (SD) PMA of 40.3 (0.5) weeks. By 3 years of age, 77 infants (79%) returned for Bayley motor testing. The baseline characteristics of infants who returned for follow-up were not significantly different from those who did not (Table 1). The mean (SD) Bayley-III raw Gross Motor, raw Fine Motor, and Composite Motor scores were 60.1 (5.6), 44.0 (5.0), and 90.8 (12.2), respectively. Cerebral palsy was diagnosed in six infants (7.3%). Four infants were diagnosed with spastic diplegia, one had spastic left hemiplegia, and one had spastic quadriplegia. The latter infant was classified as GMFCS level 4, while the other five infants were classified as GMFCS level 1. Based on the global brain abnormality score, five infants were classified as having moderate injury (6.5%), 19 had mild injury (24.7%), and 53 had no injury (68.8%) on their sMRI at term-equivalent age. Moderate injury was noted on sMRI in three of the six infants (50%) with CP. As stated above, all infants with severe injury were excluded from the study.
Visually/qualitatively classified DWMA was diagnosed as severe (grade 3) in 11 infants (13.4%), moderate (grade 2) in 22 infants (26.8%), and no/mild (grade 0/1) in 49 infants (59.8%). Only one infant was diagnosed with invisible posterior crossroads. In univariate analyses, DWMA volume was significantly predictive of Bayley-III Motor scores, explaining 26% of the variance in motor development (Table 2; Fig. 2). This association remained significant even when raw DWMA volume was tested in the regression analyses, suggesting that normalization by total white matter volume did not have a significant effect on the association with Bayley Motor scores. In multivariable analyses, controlling for other known predictors of Bayley scores, including sex, gestational age, and global brain abnormality, normalized DWMA volume ($$\beta$$ = −12.59 [95% CI −18.70, −6.48]) remained a significant predictor of Bayley Motor development at age 3 (Table 2). Replacing global brain abnormality score with cystic abnormalities, hemorrhage, and punctate white matter lesion variables reduced the model adjusted R2 (38.9%) and enhanced the predictive power of DWMA ($$\beta$$ = −14.33). The bootstrap bias-corrected confidence intervals were comparable ($$\beta$$ 95% CI −18.60, −4.31), supporting the internal validity of the final model. To confirm that the significant predictive relationship we observed between normalized volume of DWMA and Motor scores (t = −4.11; p < 0.001) was not a function of white matter volume loss, we replaced normalized DWMA volume with DWMA volume that was uncorrected for white matter volume. This replacement did not result in a meaningful difference in the model parameters (t = −3.92; p < 0.001).
When we controlled for the effects of different head sizes by including total white matter volume in the multivariable model (with uncorrected DWMA volume), the model remained comparable (t = −3.89; p < 0.001); when we controlled for total intracranial volume, the results again remained comparable (t = −3.95; p < 0.001). The total explained variance in Bayley Motor scores was 39% and 40%, respectively (compared to 41% for normalized DWMA volume). These analyses suggest that DWMA is independent of other white matter pathology. Visual, qualitative diagnosis of DWMA was not significantly predictive of Motor scores in univariate analyses (P = 0.23). Inclusion of known predictors and confounders in the model did not substantially change this relationship (P = 0.70; Table 2). Global brain abnormality score was also a significant predictor of Motor scores, even after controlling for other predictors; however, it was not as predictive as DWMA volume. The addition of DWMA volume increased the explained variance in Bayley Motor scores by another 13.2% (41.1 vs. 27.9%; Table 2). Finally, normalized DWMA volume was significantly correlated with global brain abnormality score (r = 0.30; p = 0.003) but not with visually defined qualitative DWMA (r = 0.09; p = 0.23) in univariate analyses. In multivariable linear regression models, controlling for gestational age, sex, PMA at MRI, and center, we observed a significant relationship between objectively defined DWMA volume and global brain abnormality score ($$\beta$$ = 0.032; 95% CI 0.003, 0.062; p = 0.032). In similarly controlled multivariable linear regression models, we did not observe a significant relationship between objectively defined DWMA volume and visually defined DWMA ($$\beta$$ = 0.044; 95% CI −0.064, 0.152; p = 0.418). In secondary logistic regression analyses, a 10% increase in DWMA volume was associated with an odds ratio of 31.64 (95% CI 3.96, 253.03) for developing CP (p < 0.001).
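Because logistic regression is linear on the log-odds scale, the odds ratio for a k-unit change is exp(k · β), i.e. the per-unit odds ratio raised to the k-th power. A quick illustrative check using the 10% figure reported above:

```python
from math import exp, log

or_per_10pct = 31.64           # reported odds ratio for a 10% increase in DWMA volume
beta = log(or_per_10pct) / 10  # implied logistic coefficient per 1% increase
print(round(exp(beta), 2))     # odds ratio per 1% increase → 1.41
```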
A one-point increase in global brain abnormality score was associated with an odds ratio of 2.25 (95% CI 1.44, 3.51) for developing CP (p < 0.001). The relationship between DWMA volume and CP remained significant in a multivariable model after controlling for global brain abnormality, gestational age, and center (OR 12.64, 95% CI 1.41, 113.43). Conversely, we did not find a significant relationship between qualitatively diagnosed DWMA and CP (OR 2.29; 95% CI 0.81, 6.46; p = 0.118). Objectively quantified severe DWMA (P < 0.001) and global brain abnormality on structural MRI (P = 0.004) were both significantly predictive of CP, while visually-classified severe DWMA (grade 3) was a poor predictor of CP (P = 1.00) (Table 3).

## Discussion

We demonstrated that objectively quantified DWMA is a significant and independent prognostic biomarker of motor development at 3 years of age in very preterm infants. Normalized DWMA volume remained a prominent predictor of standardized Motor scores even after controlling for other known predictors of motor development such as sMRI abnormalities, gestational age, and sex. This is notable because we excluded all infants with severe brain injury, which is the most prominent risk factor for the development of CP and motor impairments. Of the six infants who developed CP in this cohort, five had mild CP. Two of the three infants with a normal sMRI developed spastic diplegia. A recent CP registry study found that infants with these CP subtypes were twice as likely to have normal sMRI scans39. Our results suggest that DWMA is pathologic and deserves further testing in larger studies to externally validate its prognostic value for the early detection of minor motor impairments and CP. In two previous independent cohorts, we have shown that DWMA volume quantified using our objective algorithm is predictive of cognitive and language development at 2 years25,26.
For motor development, it is difficult to compare our objective DWMA biomarker results with other studies because no prior study has attempted to predict Motor outcomes using objectively quantified DWMA. However, similar to our findings, multiple studies have examined the link between visually classified DWMA and motor development and found no correlation17,18,19,23,27,28,29,30,32. There are likely several confounding factors that contributed to a lack of association. First, visually diagnosed DWMA is subjective and exhibits suboptimal retest reliability, as demonstrated in our study and several prior studies14,31,40,41. This may be due to the signal inhomogeneity and the occurrence of developmental crossroads that are routinely present on all MRI scans at term-equivalent age. This is especially true for periventricular white matter regions, which potentially explains why our previous study showed lower predictive value for this white matter region as compared to the centrum semiovale24. Therefore, we limited assessment of DWMA to only the centrum semiovale white matter. Lower diagnostic reliability increases measurement error, which can reduce the likelihood of finding a significant association42. Our intra-rater reliability for visual diagnosis of DWMA was only fair to moderate ($$\kappa$$ 0.42). This reliability was comparable to our prior study where a pediatric neuroradiologist diagnosed DWMA ($$\kappa$$ 0.46)14. Second, for a given sample size, a qualitative diagnosis is categorical and will therefore exhibit lower study power than a quantitative diagnosis (continuous measure)43,44. Lastly, even when qualitative DWMA diagnostic reliability is excellent18, DWMA diagnosis may still be inaccurate because there is no gold standard test to confirm true DWMA pathology. Our DWMA segmentation algorithm is automated, easy to use, and can generate DWMA and whole-brain tissue volumes within 5 min. 
For about half of the MRI scans, the algorithm requires no further manual correction; for the other half, it incorrectly labels between 2 and 8 voxels, most commonly in the interhemispheric fissure or peripheral white matter. While we currently remove these mislabeled voxels manually, we have recently developed a fully-automated approach using machine learning to overcome this limitation45. Such a tool could facilitate clinical translation, as it can be readily integrated into clinical MRI platforms to generate DWMA volumes immediately at the point of care, following sMRI acquisition at term-equivalent age. At term-equivalent age, sMRI is the most accurate test for early detection of CP. Although its predictive accuracy has been touted as approximately 90%7, when more robust evidence is considered, its sensitivity is closer to 70% and its predictive probability is only 35%11,12,13,14,46. This leaves a substantial gap in our ability to accurately detect CP at term-equivalent age in order to take advantage of the early critical window for neuroplasticity in the first two years. This is the period during which proven (re)habilitative interventions could restore motor function and thus improve quality of life. Prognostic tests such as the general movements assessment and the Hammersmith Infant Neurological Exam are increasingly being performed in many centers at 3 months corrected age7; however, more research is needed to determine their accuracy, in combination with injury on sMRI, in predicting CP and minor motor abnormalities47. Other advanced quantitative MRI measures such as diffusion MRI and structural and functional sensorimotor tract connectivity48 could potentially fill this gap, as highlighted in a recent systematic review15. For example, several studies have reported abnormal microstructural properties of DWMA using diffusion MRI18,20.
However, when these measures were compared to quantitative DWMA volumes determined via signal intensity, the addition of microstructural measures did not improve outcome prediction24. The incremental predictive value of other promising advanced MRI biomarkers, independent of sMRI, remains to be validated in larger, population-based cohort studies. Our study has several limitations. Our follow-up rate was 79%, which may have introduced ascertainment bias. However, the baseline characteristics of infants with and without follow-up testing were comparable, suggesting a low risk of bias. Only six infants developed CP, so our secondary CP prognostic analyses will need to be validated in larger studies. Also, we were unable to determine the predictive ability of DWMA volume over and above the general movements assessment or an early standardized motor exam because these tests were not part of the research study or clinical care during study enrollment. This limitation is being addressed in our newer and larger cohort study. Our Bayley assessors were blinded to DWMA results but not to clinical or structural MRI information, which may have biased our findings. However, this bias would reduce the independent association of DWMA volume with Bayley Motor scores by potentially strengthening the association of clinical and global brain abnormality scores with Motor scores. Strengths of our study include a geographically-based cohort, objective quantification of DWMA on sMRI that can be readily translated clinically, comparison with the current standard method of visual classification, standardized assessments of motor outcomes up to 3 years of age when minor motor abnormalities are more evident, and independent validation of DWMA volume as a new prognostic biomarker, over and above existing predictors.
In conclusion, in this multicenter prospective cohort study, we were able to demonstrate for the first time that objectively quantified DWMA is an independent predictor of motor development in very preterm infants. We also validated prior research showing that visually classified DWMA is not predictive of neurodevelopmental outcomes and is therefore suboptimal for use in clinical practice. Additional studies are needed to externally validate the use of DWMA volume as an early prognostic biomarker for cerebral palsy and minor motor impairments and to enable clinical translation of our DWMA algorithm. If externally validated, our findings could be applied to improve risk stratification at hospital discharge for targeted, aggressive early intervention therapies. ## Data availability All data, software, and code from this study are being submitted for publication and can be accessed from the lead author in the meantime. ## References 1. 1. Shevell, M. Cerebral palsy to cerebral palsy spectrum disorder: Time for a name change?. Neurology https://doi.org/10.1212/WNL.0000000000006747 (2018). 2. 2. Honeycutt, A. A. et al. in Using survey data to study disability: results from the national health interview survey on disability, Vol. 3 (eds B.M. Altman, S.N. Barnartt, G.E. Hendershot, & S.A. Larson) 207–228 (Emerald Group Publishing Limited, 2003). 3. 3. Sellier, E. et al. Decreasing prevalence in cerebral palsy: a multi-site European population-based study, 1980 to 2003. Dev. Med. Child. Neurol. 58, 85–92. https://doi.org/10.1111/dmcn.12865 (2016). 4. 4. Williams, J., Lee, K. J. & Anderson, P. J. Prevalence of motor-skill impairment in preterm children who do not develop cerebral palsy: a systematic review. Dev. Med. Child. Neurol. 52, 232–237. https://doi.org/10.1111/j.1469-8749.2009.03544.x (2010). 5. 5. Van Hus, J. W., Potharst, E. S., Jeukens-Visser, M., Kok, J. H. & Van Wassenaer-Leemhuis, A. G. 
Motor impairment in very preterm-born children: links with other developmental deficits at 5 years of age. Dev. Med. Child. Neurol. 56, 587–594 (2014). 6. 6. McIntyre, S., Morgan, C., Walker, K. & Novak, I. Cerebral palsy–don’t delay. Dev. Disabil. Res. Rev. 17, 114–129. https://doi.org/10.1002/ddrr.1106 (2011). 7. 7. Novak, I. et al. Early, accurate diagnosis and early intervention in cerebral palsy: advances in diagnosis and treatment. JAMA Pediatr. 171, 897–907. https://doi.org/10.1001/jamapediatrics.2017.1689 (2017). 8. 8. Johnston, M. V. Plasticity in the developing brain: implications for rehabilitation. Dev. Disabil. Res. Rev. 15, 94–101. https://doi.org/10.1002/ddrr.64 (2009). 9. 9. Spittle, A., Orton, J., Anderson, P. J., Boyd, R. & Doyle, L. W. Early developmental intervention programmes provided post hospital discharge to prevent motor and cognitive impairment in preterm infants. Cochrane Database Syst. Rev. https://doi.org/10.1002/14651858.CD005495.pub4 (2015). 10. 10. Morgan, C., Novak, I., Dale, R. C., Guzzetta, A. & Badawi, N. Single blind randomised controlled trial of GAME (Goals–Activity–Motor–Enrichment) in infants at high risk of cerebral palsy. Res. Dev. Disabil. 55, 256–267. https://doi.org/10.1016/j.ridd.2016.04.005 (2016). 11. 11. Benini, R., Dagenais, L., Shevell, M. I., de la Paralysie, R. & Cerebrale au Quebec, C, ,. Normal imaging in patients with cerebral palsy: what does it tell us?. J. Pediatr. 162, 369–374. https://doi.org/10.1016/j.jpeds.2012.07.044 (2013). 12. 12. Van’t Hooft, J. et al. Predicting developmental outcomes in premature infants by term equivalent MRI: systematic review and meta-analysis. Syst. Rev. 4, 71. https://doi.org/10.1186/s13643-015-0058-7 (2015). 13. 13. Hintz, S. R. et al. Neuroimaging and neurodevelopmental outcome in extremely preterm infants. Pediatrics 135, e32-42. https://doi.org/10.1542/peds.2014-0898 (2015). 14. 14. Slaughter, L. A., Bonfante-Mejia, E., Hintz, S. R., Dvorchik, I. & Parikh, N. A. 
Early conventional MRI for prediction of neurodevelopmental impairment in extremely-low-birth-weight infants. Neonatology 110, 47–54. https://doi.org/10.1159/000444179 (2016). 15. 15. Parikh, N. A. Advanced neuroimaging and its role in predicting neurodevelopmental outcomes in very preterm infants. Semin. Perinatol. 40, 530–541. https://doi.org/10.1053/j.semperi.2016.09.005 (2016). 16. 16. Maalouf, E. F. et al. Magnetic resonance imaging of the brain in a cohort of extremely preterm infants. J. Pediatr. 135, 351–357 (1999). 17. 17. Dyet, L. E. et al. Natural history of brain lesions in extremely preterm infants studied with serial magnetic resonance imaging from birth and neurodevelopmental assessment. Pediatrics 118, 536–548. https://doi.org/10.1542/peds.2005-1866 (2006). 18. 18. Kidokoro, H., Anderson, P. J., Doyle, L. W., Neil, J. J. & Inder, T. E. High signal intensity on T2-weighted MR imaging at term-equivalent age in preterm infants does not predict 2-year neurodevelopmental outcomes. AJNR Am. J. Neuroradiol. 32, 2005–2010. https://doi.org/10.3174/ajnr.A2703 (2011). 19. 19. Jeon, T. Y. et al. Neurodevelopmental outcomes in preterm infants: comparison of infants with and without diffuse excessive high signal intensity on MR images at near-term-equivalent age. Radiology 263, 518–526. https://doi.org/10.1148/radiol.12111615 (2012). 20. 20. Counsell, S. J. et al. Axial and radial diffusivity in preterm infants who have diffuse white matter changes on magnetic resonance imaging at term-equivalent age. Pediatrics 117, 376–386. https://doi.org/10.1542/peds.2005-0820 (2006). 21. 21. Wisnowski, J. L. et al. Altered glutamatergic metabolism associated with punctate white matter lesions in preterm infants. PLoS ONE 8, e56880. https://doi.org/10.1371/journal.pone.0056880 (2013). 22. 22. Parikh, N. A., Pierson, C. R. & Rusin, J. A. Neuropathology associated with diffuse excessive high signal intensity abnormalities on magnetic resonance imaging in very preterm infants. 
Pediatr. Neurol. 65, 78–85. https://doi.org/10.1016/j.pediatrneurol.2016.07.006 (2016). 23. Iwata, S. et al. Qualitative brain MRI at term and cognitive outcomes at 9 years after very preterm birth. Pediatrics 129, e1138–e1147. https://doi.org/10.1542/peds.2011-1735 (2012). 24. Parikh, N. A. et al. Automatically quantified diffuse excessive high signal intensity on MRI predicts cognitive development in preterm infants. Pediatr. Neurol. 49, 424–430. https://doi.org/10.1016/j.pediatrneurol.2013.08.026 (2013). 25. He, L. & Parikh, N. A. Atlas-guided quantification of white matter signal abnormalities on term-equivalent age MRI in very preterm infants: findings predict language and cognitive development at two years of age. PLoS ONE 8, e85475. https://doi.org/10.1371/journal.pone.0085475 (2013). 26. Parikh, N. A. et al. Objectively-quantified diffuse white matter abnormality at term-equivalent age is an independent predictor of neurodevelopmental outcomes in very preterm infants. J. Pediatr. 220, 56–63. https://doi.org/10.1016/j.jpeds.2020.01.034 (2020). 27. Brostrom, L. et al. Clinical implications of diffuse excessive high signal intensity (DEHSI) on neonatal MRI in school age children born extremely preterm. PLoS ONE 11, e0149578. https://doi.org/10.1371/journal.pone.0149578 (2016). 28. Hart, A. et al. Neuro-developmental outcome at 18 months in premature infants with diffuse excessive high signal intensity on MR imaging of the brain. Pediatr. Radiol. 41, 1284–1292. https://doi.org/10.1007/s00247-011-2155-7 (2011). 29. de Bruine, F. T. et al. Clinical implications of MR imaging findings in the white matter in very preterm infants: a 2-year follow-up study. Radiology 261, 899–906. https://doi.org/10.1148/radiol.11110797 (2011). 30. Skiold, B. et al. Neonatal magnetic resonance imaging and outcome at age 30 months in extremely preterm infants. J. Pediatr. 160, 559–566. https://doi.org/10.1016/j.jpeds.2011.09.053 (2012). 31.
Calloni, S. F. et al. Neurodevelopmental outcome at 36 months in very low birth weight premature infants with MR diffuse excessive high signal intensity (DEHSI) of cerebral white matter. Radiol. Med. 120, 1056–1063. https://doi.org/10.1007/s11547-015-0540-2 (2015). 32. Murner-Lavanchy, I. M. et al. Thirteen-year outcomes in very preterm children associated with diffuse excessive high signal intensity on neonatal magnetic resonance imaging. J. Pediatr. https://doi.org/10.1016/j.jpeds.2018.10.016 (2018). 33. Kidokoro, H., Neil, J. J. & Inder, T. E. New MR imaging assessment tool to define brain abnormalities in very preterm infants at term. AJNR Am. J. Neuroradiol. 34, 2208–2214. https://doi.org/10.3174/ajnr.A3521 (2013). 34. Landis, J. R. & Koch, G. G. The measurement of observer agreement for categorical data. Biometrics 33, 159–174 (1977). 35. McHugh, M. L. Interrater reliability: the kappa statistic. Biochem. Med. (Zagreb) 22, 276–282 (2012). 36. Amiel-Tison, C. & Gosselin, J. Neurological Development from Birth to Six Years (The Johns Hopkins University Press, 1998). 37. Palisano, R. et al. Development and reliability of a system to classify gross motor function in children with cerebral palsy. Dev. Med. Child. Neurol. 39, 214–223 (1997). 38. Moons, K. G. et al. Risk prediction models: I. Development, internal validation, and assessing the incremental value of a new (bio)marker. Heart 98, 683–690. https://doi.org/10.1136/heartjnl-2011-301246 (2012). 39. Springer, A. et al. Profile of children with cerebral palsy spectrum disorder and a normal MRI study. Neurology https://doi.org/10.1212/WNL.0000000000007726 (2019). 40. Morel, B., Antoni, G., Teglas, J. P., Bloch, I. & Adamsbaum, C. Neonatal brain MRI: how reliable is the radiologist’s eye? Neuroradiology 58, 189–193. https://doi.org/10.1007/s00234-015-1609-2 (2016). 41. Hart, A. R., Smith, M. F., Rigby, A. S., Wallis, L. I. & Whitby, E. H.
Appearances of diffuse excessive high signal intensity (DEHSI) on MR imaging following preterm birth. Pediatr. Radiol. 40, 1390–1396. https://doi.org/10.1007/s00247-010-1633-7 (2010). 42. Coggon, D. Epidemiology for the Uninitiated (BMJ Publishing Group, 1997). 43. Campbell, M. J., Julious, S. A. & Altman, D. G. Estimating sample sizes for binary, ordered categorical, and continuous outcomes in two group comparisons. BMJ 311, 1145–1148 (1995). 44. Altman, D. G. & Royston, P. The cost of dichotomising continuous variables. BMJ 332, 1080. https://doi.org/10.1136/bmj.332.7549.1080 (2006). 45. Li, H. et al. Objective and automated detection of diffuse white matter abnormality in preterm infants using deep convolutional neural networks. Front. Neurosci. 13, 610. https://doi.org/10.3389/fnins.2019.00610 (2019). 46. Nongena, P., Ederies, A., Azzopardi, D. V. & Edwards, A. D. Confidence in the prediction of neurodevelopmental outcome by cranial ultrasound and MRI in preterm infants. Arch. Dis. Child. Fetal Neonatal Ed. 95, F388–F390. https://doi.org/10.1136/adc.2009.168997 (2010). 47. Parikh, N. A. Are structural magnetic resonance imaging and general movements assessment sufficient for early, accurate diagnosis of cerebral palsy? JAMA Pediatr. 172, 198–199. https://doi.org/10.1001/jamapediatrics.2017.4812 (2018). 48. Parikh, N. A., Hershey, A. & Altaye, M. Early detection of cerebral palsy using sensorimotor tract biomarkers in very preterm infants. Pediatr. Neurol. https://doi.org/10.1016/j.pediatrneurol.2019.05.001 (2019).

## Acknowledgements

This study was supported by the National Institutes of Health (Grants R01-NS094200 and R01-NS096037 [to NAP]; and Grant R21-HD094085 and a Trustee grant from Cincinnati Children’s Hospital Medical Center [to LH]). The funders played no role in the design, analysis, or presentation of the findings.
We sincerely thank Jennifer Notestine, RN and Valerie Marburger, NNP for serving as the study coordinators, Josh Goldberg, MD for assisting with recruitment, and Mark Smith, MS, for serving as the study MR technologist. We are also grateful to the families, NICU personnel, and High-Risk clinic staff who made this study possible.

## Funding

Supported by the National Institutes of Health (grants R01-NS094200 and R01-NS096037 [to NAP] and grant R21-HD094085 [to LH]).

## Author information

### Contributions

N.A.P. conceived the experiments and wrote the manuscript; V.S.P.I. and L.H. conducted the experiments; N.A.P. and M.A. performed the statistical analyses; N.A.P., M.K., and M.A. analyzed the results. T.M.O., K.H., and F.C.K. provided critical feedback on the manuscript. All authors reviewed and provided critical feedback on the manuscript.

### Corresponding author

Correspondence to Nehal A. Parikh.

## Ethics declarations

### Competing interests

The authors declare no competing interests.

### Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Rights and permissions

Parikh, N.A., Harpster, K., He, L. et al. Novel diffuse white matter abnormality biomarker at term-equivalent age enhances prediction of long-term motor development in very preterm children. Sci Rep 10, 15920 (2020). https://doi.org/10.1038/s41598-020-72632-0
# Simple algebra

Solve the following equation:

(x − a)/(x − b) + (x − b)/(x − a) = a/b + b/a.

Find the possible integral value of x in this expression.
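Assuming the second term was meant to be the reciprocal fraction (x − b)/(x − a), which is the symmetric form matching the right-hand side a/b + b/a, the candidate solutions are x = 0 and x = a + b. A quick exact-arithmetic check, with a = 2 and b = 3 chosen arbitrarily:

```python
from fractions import Fraction

# Check x = 0 and x = a + b against (x-a)/(x-b) + (x-b)/(x-a) = a/b + b/a
# using exact rational arithmetic. a = 2, b = 3 are arbitrary test values.
a, b = Fraction(2), Fraction(3)
lhs = lambda x: (x - a) / (x - b) + (x - b) / (x - a)
rhs = a / b + b / a
for x in (Fraction(0), a + b):
    assert lhs(x) == rhs
print("both roots check out for a=2, b=3")
```

Of the two roots, x = 0 is the one guaranteed to be an integer for arbitrary a and b.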
# Nontotient

In number theory, a positive integer N is called a nontotient if the equation $\varphi(X) = N$, with unknown X, has no solution, where $\varphi$ denotes Euler's totient function. Every odd integer is a nontotient except 1, since in that case X = 1 and X = 2 are solutions of the equation above.

The first even nontotients are: 14, 26, 34, 38, 50, 62, 68, 74, 76, 86, 90, 94, 98, 114, 118, 122, 124, 134, 142, 146, 152, 154, 158, 170, 174, 182, 186, 188, 194, 202, 206, 214, 218, 230, 234, 236, 242, 244, 246, 248, 254, 258, 266, 274, 278, 284, 286, 290, 298, 302, 304, 308, 314, 318.

An even nontotient can be of the form $p + 1$, where $p$ is a prime number, but never of the form $p - 1$, since $p - 1 = \varphi(p)$ when $p$ is prime (the positive integers smaller than a given prime are all coprime to it). In the same way, an oblong number $N(N - 1)$ cannot be a nontotient when $N$ is prime, since $\varphi(p^2) = p(p - 1)$ for any prime number $p$.

## See also

• Noncototient
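The even nontotients listed above can be verified computationally. The sketch below relies on the standard bound $\varphi(x) \ge \sqrt{x/2}$, so any x with $\varphi(x) \le 100$ must satisfy $x \le 2 \cdot 100^2$; function names are mine:

```python
# Even nontotients: even n for which phi(x) = n has no solution.
# Search bound: phi(x) >= sqrt(x/2), so phi(x) <= limit forces x <= 2*limit**2.
def totient(n):
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p   # multiply result by (1 - 1/p)
        p += 1
    if m > 1:                       # leftover prime factor
        result -= result // m
    return result

def even_nontotients(limit):
    attained = {totient(x) for x in range(1, 2 * limit * limit + 1)}
    return [n for n in range(2, limit + 1, 2) if n not in attained]

print(even_nontotients(100))
```

This reproduces exactly the values up to 100 in the list above (14, 26, 34, ..., 98).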
# PCl3(g) + Cl2(g) ⇌ PCl5(g), Kc = 0.18. In an initial mixture of 0.300 M PCl3(g) and 0.400 M Cl2(g), what are the equilibrium concentrations of all gases?

Dec 20, 2014

The equilibrium concentrations are $[PCl_3] = 0.280\ M$, $[Cl_2] = 0.380\ M$, $[PCl_5] = 0.020\ M$.

One can approach this problem by using the ICE method (more here: http://en.wikipedia.org/wiki/RICE_chart). Even before doing any calculations, the value of the equilibrium constant gives us an idea of what the result will look like: since $K_c < 1$, the reaction favors the reactants, so we'd expect larger equilibrium concentrations for $PCl_3$ and $Cl_2$ than for $PCl_5$.

         $PCl_3(g) + Cl_2(g) \rightleftharpoons PCl_5(g)$
I:       0.300        0.400        0
C:       −x           −x           +x
E:       0.300 − x    0.400 − x    x

So,

$K_c = \frac{[PCl_5]}{[PCl_3] \cdot [Cl_2]} = \frac{x}{(0.300 - x)(0.400 - x)} = 0.18$

The resulting quadratic, $0.18x^2 - 1.126x + 0.0216 = 0$, produces two values for $x$, $x_1 = 6.23$ and $x_2 = 0.020$; since concentrations cannot be negative, the correct value is $x = 0.020$ ($x = 6.23$ would have produced negative concentrations for $PCl_3$ and $Cl_2$).

Therefore, the equilibrium concentrations of all the gases are

$[PCl_3] = 0.300 - 0.020 = 0.280\ M$
$[Cl_2] = 0.400 - 0.020 = 0.380\ M$
$[PCl_5] = 0.020\ M$

Notice that the initial prediction about the relative concentrations was correct: the concentrations of the reactants are larger than the concentration of the product.
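As a numeric cross-check (a sketch; the variable names are mine), the quadratic from the ICE table can be solved directly with the quadratic formula:

```python
import math

# Solve Kc = x / ((c_pcl3 - x) * (c_cl2 - x)) for x via the quadratic formula.
Kc, c_pcl3, c_cl2 = 0.18, 0.300, 0.400
a = Kc
b = -(Kc * (c_pcl3 + c_cl2) + 1.0)   # = -1.126
c = Kc * c_pcl3 * c_cl2              # = 0.0216
disc = math.sqrt(b * b - 4 * a * c)
roots = [(-b - disc) / (2 * a), (-b + disc) / (2 * a)]
# Keep the physically meaningful root (0 < x < both initial concentrations).
x = min(r for r in roots if 0 < r < min(c_pcl3, c_cl2))
print(round(c_pcl3 - x, 3), round(c_cl2 - x, 3), round(x, 3))
```

The exact root is x ≈ 0.0192, i.e. about 0.02 M, consistent with the answer above at the precision quoted.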
# Plasmons in Holographic Graphene

### Submission summary

Contributors: Ulf Gran · Marcus Tornsö · Tobias Zingg
Arxiv Link: https://arxiv.org/abs/1804.02284v5 (pdf)
Date accepted: 2020-06-11
Date submitted: 2020-05-27
Submitted by: Gran, Ulf
Submitted to: SciPost Physics
Discipline: Physics
Subject area: High-Energy Physics - Theory
Approaches: Theoretical, Computational

### Abstract

We demonstrate how self-sourced collective modes - of which the plasmon is a prominent example due to its relevance in modern technological applications - are identified in strongly correlated systems described by holographic Maxwell theories. The characteristic $\omega \propto \sqrt{k}$ plasmon dispersion for 2D materials, such as graphene, naturally emerges from this formalism. We also demonstrate this by constructing the first holographic model containing this feature. This provides new insight into modeling such systems from a holographic point of view, bottom-up and top-down alike. Beyond that, this method provides a general framework to compute the dynamical charge response of strange metals, which has recently become experimentally accessible due to the novel technique of momentum-resolved electron energy-loss spectroscopy (M-EELS). This framework therefore opens up the exciting possibility of testing holographic models for strange metals against actual experimental data.

Published as SciPost Phys. 8, 093 (2020)

We thank the referee for his/her positive comments regarding the question addressed in the paper and for the constructive criticism. To address the lack of clarity regarding technical details, we have extended appendix A by adding technical details regarding how we perform the linear response analysis, and in the new appendix B we have listed the equations of motion for the perturbations that we solve.
We considered this linear response analysis to be part of the standard lore, but adding the details clearly makes the paper more self-contained. We have also added a discussion regarding holographic renormalization at the beginning of appendix A, where the action is introduced, and added the two standard counterterms explicitly in the action (they were of course used in the previous computations). Note that no counterterm is necessary for the Maxwell part of the action. We hope that after these additions, addressing the concerns of the referee, the paper will be judged ready for publication.

### List of changes

- Appendix A: extended to include technical details regarding how we perform the linear response analysis.
- Appendix A: discussion regarding holographic renormalization added after eq. (24), where the action is introduced, and the two standard counterterms have been written out explicitly in the action.
- New appendix B added containing all the equations of motion for the perturbations that we solve.

### Submission & Refereeing History

Resubmission 1804.02284v5 on 27 May 2020
Submission 1804.02284v4 on 9 July 2019

## Reports on this Submission

### Report

After reading the new manuscript and the response of the authors, I am happy to recommend its publication without further delay.

- validity: good
- significance: high
- originality: good
- clarity: high
- formatting: excellent
- grammar: excellent
My Math Forum Converting degrees to radians and finding an area of a sector of a circle

Trigonometry Math Forum

March 31st, 2015, 06:43 AM #1
Member. Joined: Feb 2015. From: Planet Zorg. Posts: 67. Thanks: 3

Converting degrees to radians and finding an area of a sector of a circle

Hi guys, Ok so 52 degrees in radians is 0.91 (to 2 s.f.), but what does it mean to leave my answer in terms of pi? And then to use my answer from above to find the area of a sector of a circle of radius 16 cm and angle 52 degrees?! Now I'm really out of my depth, lol..

March 31st, 2015, 06:46 AM #2
Senior Member. Joined: Jan 2012. From: Erewhon. Posts: 245. Thanks: 112

$180$ degrees is $\pi$ radians, so $52 \mbox{ degrees} = 52 \times \pi/180 \mbox{ radians}$.

The area of a sector of radius $r \mbox{ cm}$ and angle $\theta \mbox{ radians}$ is $\theta r^2/2 \mbox{ cm}^2$.

Last edited by CaptainBlack; March 31st, 2015 at 06:50 AM.
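Putting the two replies together: in terms of pi, 52° = 52π/180 = 13π/45 radians, and the sector area follows from A = θr²/2. A quick check:

```python
import math
from fractions import Fraction

# 52 degrees as an exact multiple of pi, then the sector area for r = 16 cm.
coeff = Fraction(52, 180)        # reduces to 13/45, i.e. theta = 13*pi/45
theta = float(coeff) * math.pi   # ~0.91 rad to 2 s.f.
area = 0.5 * 16**2 * theta       # A = theta * r^2 / 2, in cm^2
print(coeff, round(theta, 2), round(area, 1))
```

This prints 13/45 for the coefficient of pi, 0.91 for the decimal value, and a sector area of about 116.2 cm².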
Article

# Big brains stabilize populations and facilitate colonization of variable habitats in birds

Nature Ecology & Evolution 1, 1706–1715 (2017). doi:10.1038/s41559-017-0316-2

## Abstract

The cognitive buffer hypothesis posits that environmental variability can be a major driver of the evolution of cognition because an enhanced ability to produce flexible behavioural responses facilitates coping with the unexpected. Although comparative evidence supports different aspects of this hypothesis, a direct connection between cognition and the ability to survive a variable and unpredictable environment has yet to be demonstrated. Here, we use complementary demographic and evolutionary analyses to show that among birds, the mechanistic premise of this hypothesis is well supported but the implied direction of causality is not. Specifically, we show that although population dynamics are more stable and less affected by environmental variation in birds with larger relative brain sizes, the evolution of larger brains often pre-dated and facilitated the colonization of variable habitats rather than the other way around. Our findings highlight the importance of investigating the timeline of evolutionary events when interpreting patterns of phylogenetic correlation.

## Introduction

Enhanced encephalization, that is, a greater than expected brain mass for a given body size1, has evolved independently in numerous groups of animals despite its stringent energetic demands and potential developmental costs2,3,4. The cognitive buffer hypothesis posits that the repeated evolution of relatively large brains was driven primarily by the adaptive benefits of being able to mount quick, flexible behavioural responses to frequent or unexpected environmental change5,6.
In line with this view, comparative studies have shown that more highly encephalized birds have greater potential for behavioural innovation7,8, lower mortality rates9,10 and a greater capacity to thrive in human-altered environments11,12. In addition, highly encephalized birds have been shown to preferentially occupy environments with more variable climates13,14, where biotic and abiotic conditions change considerably within and across years. Although these findings are consistent with the cognitive buffer hypothesis, questions remain regarding its validity as a general explanation for the evolution of cognition. In particular, it is currently unclear whether the observed link between survival and encephalization is specifically driven by an enhanced ability to cope with environmental change, or driven instead by other adaptive benefits. In addition, a direction of causality in the relationship between encephalization and environmental variation has not yet been established. Specifically, the cognitive buffer hypothesis predicts that relatively large brains evolved in situ as a result of selection for coping with environmental variation5. However, large brains could have also evolved elsewhere and may have subsequently facilitated the colonization of variable habitats, as suggested by recent reports that anthropogenic introductions of highly encephalized vertebrates to new habitats tend to have higher success rates15,16,17. Here, we leverage the power of modern evolutionary analyses, broad-scale comparative datasets and citizen science to clarify these fundamental issues regarding the role of ecological variation in the evolution of cognition. We begin by applying current state-of-the-art demographic analyses to test directly the mechanistic assumption that enhanced encephalization improves survival in variable habitats. 
We then apply models of correlated trait evolution to formally assess the direction of causality in the observed correlation between the occupancy of variable habitats and high encephalization in birds. ## Results ### Estimating cognitive ability In line with prior large-scale comparative studies on the evolution of cognition, we use relative brain size as a proxy for cognitive ability1. This metric acknowledges that absolute brain size increases naturally in larger species, and estimates instead a species’ cognitive ability as the extent to which its brain is larger (or smaller) than expected for its body size. The relative brain sizes used in our analyses were computed as residuals from a phylogenetic generalized least squares (PGLS) regression of ln brain on ln body size (slope = 0.59 ± 0.00 s.d.; intercept = −2.48 ± 0.05 s.d.; λ = 0.87 ± 0.01 s.d.), including the 2,062 bird species for which brain size is currently available (see Methods and Supplementary Data 2). While such a proxy for cognition is clearly indirect, we note that there is a growing body of experimental and correlative evidence linking relative brain size with cognitive ability18,19,20, and more specifically with behavioural innovation21,22. ### Does greater cognition improve survival in more variable environments? One way to evaluate directly whether enhanced cognition increases survival in more variable environments is to test explicitly whether the interaction between encephalization and environmental variability has a significant effect on population dynamics. If behavioural flexibility facilitates coping with unexpected ecological challenges, then we predict that population dynamics in highly encephalized species should be buffered from environmental extremes and should therefore be less affected by increased environmental variability as compared with those of small-brained species. 
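The relative brain size metric used throughout (residuals of ln brain mass on ln body mass) can be illustrated with a minimal sketch. The paper fits a phylogenetic GLS; plain ordinary least squares is shown here for simplicity, and the four "species" values are invented:

```python
import math

# Toy data (hypothetical): body and brain masses in grams for four species.
body = [10.0, 50.0, 200.0, 1000.0]
brain = [0.5, 1.8, 4.0, 12.0]
xs = [math.log(v) for v in body]
ys = [math.log(v) for v in brain]

# Ordinary least-squares fit of ln(brain) = intercept + slope * ln(body).
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

# Relative brain size = residual; positive means larger-brained than expected.
relative_brain_size = [y - (intercept + slope * x) for x, y in zip(xs, ys)]
print([round(r, 3) for r in relative_brain_size])
```

As with any OLS residuals, the values sum to zero across the sample, so "large-brained" is always relative to the fitted allometric expectation.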
We tested this prediction in a sample of North American land birds for which brain size is known and time series data are sufficient to properly estimate year-to-year variation in breeding population numbers23 (N = 126 species; Supplementary Data 1). Demographic data for this analysis were obtained from the North American Breeding Bird Survey (BBS)24, a yearly standardized assessment of breeding bird abundances conducted since 1966 at thousands of locations across the continent. Following the current community standards25, we used hierarchical Bayesian models to estimate regional population dynamics for each species in each North American bird conservation region (BCR; Fig. 1a). BCRs are ecologically distinct regions26 and are widely regarded as suitable biogeographic units for the quantification of population dynamics23. The hierarchical models implemented here estimate yearly fluctuations in abundance while accounting for long-term population trends, route-to-route variation in abundance and imperfect detection by observers (Fig. 1; see Methods). By explicitly separating the sources of error in reported bird counts, these models allow us to estimate the extent to which year-to-year fluctuations in true population size are a product of ecologically relevant processes such as the mortality induced by environmental extremes (also known as ‘process error’ or $\sigma_\gamma$; Fig. 1). Species-specific abundance-weighted averages of the process error, $\bar{\sigma}_\gamma$ (see Methods), were subsequently used to test the hypothesis that population stability is less affected by environmental variability in larger-brained species. To better align our metrics with the narrative of this hypothesis, the dependent variable in these downstream analyses was the negative of $\bar{\sigma}_\gamma$, hereafter ‘population stability’, such that higher stability scores reflect cases with less pronounced year-to-year fluctuation in population size.
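Concretely, the species-level stability score is just the negative of an abundance-weighted mean of the per-region process errors; a sketch with made-up numbers for a single species across three regions:

```python
# Species-level "population stability" as the negative of the
# abundance-weighted mean process error across regions (BCRs).
# The per-region numbers below are hypothetical.
process_error = [0.30, 0.15, 0.45]   # sigma_gamma estimated in each BCR
abundance = [120.0, 300.0, 30.0]     # species' mean abundance in each BCR
weighted_sigma = sum(s * w for s, w in zip(process_error, abundance)) / sum(abundance)
stability = -weighted_sigma          # higher = less year-to-year fluctuation
print(round(stability, 2))
```

Weighting by abundance means the regions where most of the species' population actually lives dominate its stability score.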
We used PGLS regression models estimated across a sample of 1,000 tree topologies from ref. 27 to investigate the potential effects of environmental variability and encephalization on population stability. Environmental predictors for these models included the mean, within-year variance and predictability of temperature, precipitation and net primary productivity (see Methods). Predictability was estimated through Colwell’s P, an index that captures variation among years in the onset, intensity and duration of periodic phenomena28. Given the strong spatial covariance that is typically observed among environmental parameters29, all environmental variables were first extracted globally at a spatial resolution of 0.5 × 0.5° and subsequently reduced to composite variables at the same resolution using principal component analysis (PCA; Table 1; Supplementary Fig. 1a,b; Methods). As environmental correlations are often region-specific30, the PCA for this regional analysis included only map cells located within our North American study region. The first principal component recovered from this analysis showed a clear latitudinal trend, where lower scores occurred primarily in northern, more seasonal climates with colder and less predictable temperatures, and high scores occurred in southwestern sites with hotter temperatures and more variable, unpredictable precipitation patterns (Supplementary Fig. 1a). The second component of the North American environmental PCA captured differences in mean precipitation as well as in mean, variance and predictability of net primary productivity. In this case, higher scores indicated wetter environments with higher, but more seasonal and unpredictable, productivity including those found along the Pacific coast of the northern USA and Canada, boreal forests, and much of the eastern USA. Low scores for PC2 were found in southwestern deserts and in the far north (Supplementary Fig. 1b). 
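The dimensionality-reduction step can be sketched as a standard PCA on standardized per-cell environmental variables. The toy data below are three strongly correlated columns; in the paper the inputs are the mean, within-year variance and predictability of temperature, precipitation and productivity:

```python
import numpy as np

# PCA sketch: reduce correlated environmental variables (one row per map
# cell) to composite principal-component scores. Toy data, 500 cells.
rng = np.random.default_rng(0)
base = rng.normal(size=(500, 1))
env = np.hstack([base + 0.1 * rng.normal(size=(500, 1)) for _ in range(3)])
env = (env - env.mean(axis=0)) / env.std(axis=0)   # standardize each variable

cov = np.cov(env, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)             # eigh returns ascending order
order = np.argsort(eigvals)[::-1]                  # sort components by variance
scores = env @ eigvecs[:, order]                   # PC scores per map cell
explained = eigvals[order] / eigvals.sum()
print(np.round(explained, 2))
```

Because the three toy variables are nearly collinear, PC1 absorbs almost all the variance; with real environmental layers the loading pattern of each retained component is what gives it an ecological interpretation.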
When characterizing the typical habitats of each species in our sample, we considered both spatial distribution and geographic variation in abundance. We first calculated mean environmental components for every North American BCR ($\overline{PC1}_{BCR_i}$ and $\overline{PC2}_{BCR_i}$). Then, we estimated species-specific habitat values, hereafter H1 and H2, by computing the weighted averages of $\overline{PC1}_{BCR}$ and $\overline{PC2}_{BCR}$, where weights were proportional to the relative abundance of the species in each BCR. Correlation between H1 and H2 was high (r = −0.56; Supplementary Fig. 1c), so we excluded the latter from our list of predictors to prevent possible multicollinearity and unnecessary variance inflation. The decision to keep H1 rather than H2 was based on the fact that H1 most directly captures the measures of variability that are relevant for testing the mechanism behind the cognitive buffer hypothesis. We note that both high and low values of H1 reflect increasingly variable and unpredictable conditions. Specifically, low H1 scores indicate variable temperatures, whereas high scores indicate variable precipitation. Thus, to explore the general effects of environmental variability on population dynamics, we included H1 as a quadratic term (H1²) in our models of population stability. As H1 is centred at zero, this quadratic term captures the potential effects of both variable temperatures and variable precipitation, and is therefore labelled ‘environmental variability’ hereafter. We also took into account the possibility that population stability is influenced by a variety of life history and ecological traits. First, we accounted for potential relationships between relative population variability and population size31 by including log-transformed mean abundance as a covariate in our models. Additionally, we considered that environmental variability could affect population dynamics through interactions with traits other than brain size.
For example, we considered that lifespan could be a predictor of population stability because longer-lived species tend to exhibit higher adult survival32, and we included an interaction with environmental variability (H1²) because highly unpredictable conditions may prevent individuals from realizing their maximum lifespan potential. Similarly, we considered the fact that species with higher annual reproductive output may experience more intense year-to-year population oscillations33 and that this effect could potentially be amplified in more variable habitats. Additionally, we explored the possibility that variable conditions have weaker effects on the population dynamics of large-bodied species because those species tend to be more resilient to periods of resource scarcity34. The same may be true for cooperative breeders—which seem to be able to buffer the effects of harsh years through helping at the nest35—for species with generalist habits—which are typically able to exploit a wider variety of environmental conditions36—and for migrants—which typically avoid the harshest conditions of their breeding grounds by temporarily leaving the area29. Further details on how these traits were defined and quantified can be found in the Methods. All of our data on population stability, brain size, ecology and life history are available in Supplementary Data 1. Our demographic analysis revealed that a number of ecological traits are significantly associated with population variability (adjusted R² for PGLS model = 0.22; Table 2). We found that while populations of resident species are less stable in increasingly variable environments, migratory species maintain relatively stable populations across all types of environment ($\bar{P} \ll 0.001$; Fig. 2a). Similarly, long-lived species were found to exhibit more stable dynamics than short-lived ones in only the most mild, predictable environments ($\bar{P} \ll 0.001$; Fig.
2b), indicating that the potential benefits of long life spans may diminish when conditions are uncertain. Consistent with the idea that cognitive ability improves survival in variable environments, we found a significant interaction between encephalization and H1². Specifically, while species with high encephalization were found to maintain relatively stable populations in both stable and variable environments, those with low encephalization showed a significant decline in population stability as environmental variability increased ($\bar{P} \ll 0.001$; Fig. 2c). Our findings are qualitatively similar when phylogenetic relationships are estimated from a consensus tree rather than across a sample of tree topologies (Supplementary Table 1). Although these initial results support the basic mechanistic premise of the cognitive buffer hypothesis, the hierarchical models described above do not account for the fact that variation in population size can be driven not only by exogenous (environmental) factors, but also by internal, or density-dependent factors. In the context of hierarchical modelling, density-dependent processes can be investigated by modelling an explicit demographic process that assumes that true population sizes oscillate around a demographic equilibrium value that does not change over time37 (for example, the Gompertz function38). This assumption is nevertheless clearly violated whenever populations undergo long-term changes in mean abundance, as is the case in many North American land birds39 and nearly 80% of the species in our dataset. As models with density dependence are known to perform poorly in such species40, we explored the effects of density dependence exclusively on the subset of species that did not show any evidence of long-term changes in mean abundance in our initial set of demographic analyses.
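The Gompertz form mentioned above amounts to density dependence on the log scale: ln N at one time step is pulled back toward a fixed equilibrium. A minimal simulation sketch (all parameter values are made up for illustration):

```python
import math
import random

# Gompertz-type density dependence on the log scale:
#   ln N[t+1] = a + b * ln N[t] + noise,  with |b| < 1,
# which pulls the population back toward the equilibrium ln N* = a / (1 - b).
random.seed(1)
a, b, sigma = 1.0, 0.8, 0.1
logN = [math.log(5.0)]
for _ in range(200):
    logN.append(a + b * logN[-1] + random.gauss(0.0, sigma))

equilibrium = a / (1 - b)   # = 5.0 on the log scale
tail = logN[100:]           # discard the initial transient
print(round(equilibrium, 2), round(sum(tail) / len(tail), 2))
```

The simulated log-abundances hover around the equilibrium rather than trending, which is exactly why such models misbehave for species whose mean abundance changes over the long term.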
Given the relatively small number of species in this category (N = 27), these confirmatory analyses could not meaningfully explore the entire set of initial predictors and were therefore focused on evaluating only the potential effects of relative brain size, H1² and their interaction. These more narrowly defined analyses indicate that accounting for density dependence does not change our main finding. That is, the interaction between relative brain size and environmental variability is significant in PGLS models based on the consensus tree (relative brain size × H1²: β = 0.63, P = 0.04; relative brain size: β = −0.05, P = 0.84; H1²: β = −0.35, P = 0.01), and marginally significant across the entire sample of 1,000 tree topologies (relative brain size × H1²: $\bar{\beta}$ = 0.61, $\bar{P}$ = 0.06, f = 0.41; relative brain size: $\bar{\beta}$ = −0.04, $\bar{P}$ = 0.88, f = 0; H1²: $\bar{\beta}$ = −0.32, $\bar{P}$ = 0.02, f = 1.00). The marginal significance observed in the latter case highlights the greater effect of phylogenetic uncertainty and the generally low statistical power of comparative tests that are based on only a small number of species.

### Did larger brains evolve in more variable environments?

Our demographic analyses lend support to the underlying mechanistic premise of the cognitive buffer hypothesis, which is that higher encephalization can improve survival, specifically when environmental conditions are increasingly unstable. However, to evaluate the extent to which this mechanism provides a general explanation for the evolution of cognition in birds, it is critical to explore the direction of causality in the correlation between an enhanced potential for cognition and the occupancy of variable environments.
A clear understanding of the sequence of evolutionary events is particularly necessary in this context because the adaptive benefits invoked by the cognitive buffer hypothesis may just as well promote the evolution of cognition in variable habitats, or facilitate instead the secondary colonization of variable habitats by already highly encephalized species41. We evaluated the support for these two non-mutually exclusive evolutionary scenarios by using reversible-jump Markov chain Monte Carlo (rjMCMC) to estimate models of correlated trait evolution42 fitted to an exhaustive global sample of non-migratory birds for which brain size is known (N = 1,288 species; Supplementary Data 2). These models allow inference into potential evolutionary timelines by assessing the likelihood that rates of evolutionary transitions between states of a binary trait (for example, moderate to large encephalization) are dependent on the state of a second binary trait (for example, stable versus variable environmental habitats). In the context of the cognitive buffer hypothesis, these models allow us to test whether the transition from small to large brains is indeed more likely in variable than in stable environments (that is, whether variable environments tend to pre-date large brains). Similarly, these models allow us to evaluate the likelihood of alternative, yet non-mutually exclusive, timelines such as the ‘colonization advantage’ scenario, which predicts that the transition from stable to variable environments should be more likely in large- than in small-brained species. As in our demographic analysis, environmental variables were first extracted for the relevant study region (here, the entire globe) and subsequently reduced to composite variables through PCA (Supplementary Table 2). 
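The reduction of correlated climate variables into composite axes via PCA, as described above, can be sketched as follows. The variable names and toy data here are illustrative stand-ins, not the study's actual climate layers:

```python
import numpy as np

def pca_scores(X, n_components=2):
    """Centre and scale the columns of X (rows = grid cells or species,
    columns = environmental variables), then project onto the leading
    principal components obtained via the SVD."""
    X = np.asarray(X, dtype=float)
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)  # rows of Vt = loadings
    return Z @ Vt[:n_components].T

# Toy environmental table: mean, within-year variance and predictability
# of temperature for 300 hypothetical grid cells.
rng = np.random.default_rng(7)
temp_mean = rng.normal(15, 5, 300)
temp_var = 0.5 * temp_mean + rng.normal(0, 1, 300)  # correlated with the mean
temp_pred = rng.uniform(0, 1, 300)
scores = pca_scores(np.column_stack([temp_mean, temp_var, temp_pred]))
```

Because the first two toy variables are correlated, most of their shared variation loads onto a single component, which is the sense in which each composite axis (for example, 'temperature variability') summarizes several raw predictors.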
The first component of this global PCA, hereafter ‘temperature variability’, captured a gradient of increasing exposure to colder, more seasonally variable and less predictable temperatures (Supplementary Fig. 1d). The second component, hereafter ‘xeric variability’, captured a gradient of increasing exposure to drier and less productive environments with more unpredictable precipitation (Supplementary Fig. 1e). Species-specific habitats were characterized in this case by computing the mean values of local temperature and xeric variability across entire breeding distributions (see Methods). Because transition rate analyses require discrete trait states, we explored a reasonable range of thresholds for classifying species as having either small or large encephalization, and as being exposed to highly variable or fairly stable environments (30th, 50th, 75th and 90th percentile; see Methods). Encephalization categorizations were based on whether a species’ relative brain size was above or below the predefined threshold. Similarly, exposure to environmental variability was considered high for a given species if either or both environmental principal component scores belonged in a percentile above the predefined threshold. Considering information from both principal components when characterizing exposure to environmental variability allowed us to maintain consistency with our demographic analyses (see Table 1) and to explore the general effects of environmental variability rather than the specific effects of temperature or precipitation variation. Our models of correlated trait evolution do not support the main prediction of the cognitive buffer hypothesis under any combination of thresholds. 
Specifically, the evolution of larger relative brain sizes was generally found to be equally likely for species occurring in stable environments and in harsher, more variable ones (that is, there was no support for a difference in transition rate from moderate to large encephalization between environment types; Bayes factor (BF) < 3; Fig. 3d,f; Supplementary Table 3). Furthermore, under certain classification criteria, we even find evidence that advanced encephalization could be more likely to evolve in stable than in highly variable habitats (for example, highly variable environments: >50th percentile; large encephalization: >50th percentile; BF = 3.15; Fig. 3a,c; Supplementary Table 3). Collectively, these results indicate that while environmental variability can theoretically select for enhanced cognition, it is in fact unlikely to have driven many of the major transitions towards large brains in birds. In stark contrast, we found that the evidence of an improved colonization ability of variable habitats in highly encephalized avian lineages is both general and strong (Fig. 3b,c,e,f; Supplementary Table 3). Such colonization advantage seems to be specifically linked to an improved ability to deal with environmental variability, because we did not find support for a difference in transition rate from variable to stable habitats between species with small and large encephalization values (Supplementary Table 3). Additionally, our results indicate that even moderate enhancements in cognitive ability and/or moderate increases in environmental variability can help accrue such advantages: when thresholds for classification are too conservative (for example, variable environments: >90th percentile; large encephalization: >75th percentile), differences in transition rates from stable to variable environments are no longer detectable between very large- and moderately large-brained species. 
## Discussion

Our demographic analysis broadly supports the notion that enhanced cognition can lead to more stable population dynamics. Furthermore, the significant interaction between H12 and encephalization is consistent with the idea that these benefits can be generally accrued under different types of environmental variability and unpredictability (see Table 1). We therefore conclude that there is general support for the proposed mechanism underlying the cognitive buffer hypothesis, which is that bigger-than-expected brains improve survival when environmental change is frequent and unexpected. Despite this finding, our transition rate analyses strongly indicate that the general timeline of evolutionary events suggested by the cognitive buffer hypothesis is not broadly supported across the avian phylogeny. Specifically, our results unambiguously indicate that evolutionary transitions towards occupancy of more variable habitats did not generally precede the evolution of enhanced encephalization in birds. Ancestral state reconstructions facilitate the visualization of this result (Fig. 4): several of the most highly encephalized clades in the bird phylogeny (for example, parrots, bowerbirds and hornbills) evolved big brains without any apparent exposure to particularly harsh or variable habitats throughout their evolutionary history (Fig. 4b,c,e). Furthermore, even in clades that currently occupy variable habitats (for example, corvids or woodpeckers), it is unclear that exposure to relatively high ecological variability preceded the evolution of larger brains (Fig. 4d,f). Why, then, do we see a correlation between variable habitats and encephalization today? Our analyses suggest that this correlation results from either the preferential colonization of variable and unpredictable habitats by highly encephalized species, or the preferential persistence of these highly encephalized species in habitats that underwent major environmental change and became more variable.
One possible reason for this pattern is that highly encephalized birds have lower risk of extirpation during the early stages of colonization (that is, when abundances are low43) because of their enhanced ability to withstand environmental change. Similar links between cognition and range expansion have been made in studies documenting the success of highly encephalized species in colonizing new habitats16,17,41, and are the basis of our current understanding of the process of human expansion out of Africa8,44. Overall, our results suggest that even though environmental variability can be a viable agent of selection in the evolution of cognition (as also concluded by refs 14,45), this particular mechanism is unlikely to have driven many of the most striking cases of encephalization among birds. It is nevertheless possible that other types of ecological variability not included in this study can explain such transitions. For example, although many parrots and hornbills tend to occupy habitats with fairly stable climates, these species must typically cope with high levels of variation in the location and timing of fruiting trees (a similar situation is likely to occur in other species with complex feeding ecologies45). While we acknowledge that a broad interpretation of ‘variability’ can increase the scope and generality of the cognitive buffer hypothesis5, we note that overgeneralization may lead to the inadvertent mischaracterization of very different types of selection (for example, problem solving, long-term memory, or spatial awareness) as different but equivalent forms of a single process. 
A perhaps more fruitful approach would therefore be to explore the possibility that there is no single primary driver in the evolution of relatively large brains, and that this process is instead driven by the combined effects of both the constraints2,3,4 and the various potential adaptive benefits of increased processing capacity, including the ability to respond more quickly to new challenges46,47, navigate more complex social interactions48,49, process more intricate sensory information50, and cope with greater spatial and/or temporal variability15,22. As data on these different processes become more readily available, we are confident that future comparative studies will be able to disentangle the relative extent to which these different forces have shaped the evolution of cognition on different taxonomic scales. In the meantime, we hope that the realization that variation in brain size more likely shaped the distribution of bird species across the globe than the other way around can help inform more immediate research agendas.

## Methods

### Quantification of relative brain size

Our estimates of relative brain size were based on body size data from ref. 51 and brain size data either from published accounts (N = 1,949 species; cited in Supplementary Data 2) or measured directly from museum specimens (N = 113 species). Our total brain dataset includes several species that are not used in either our demographic or correlated trait evolution analyses. Specifically, pelagic species (orders Sphenisciformes, Suliformes, Procellariiformes and Phaethontiformes; families Pelecanidae, Laridae, Stercorariidae and Alcidae) were initially included when computing encephalization values but were subsequently excluded from downstream analyses because land surface temperature and precipitation values are unlikely to be indicative of the actual conditions experienced by species that spend most of their time at sea.
All brain size measurements from museum specimens were obtained following the procedures outlined in refs 3,52. Briefly, the foraminae of the cranial nerves are sealed with masking tape and lead shot is poured into the foramen magnum. To prevent the formation of lacunae, the skull is lightly tapped throughout this procedure. Once the shot has risen to the foramen magnum, the contents are decanted into modified syringes or graduated cylinders to determine volume. This method is highly repeatable and provides an accurate estimate of brain size in birds52,53. Brain sizes that were originally reported as volumes in the literature were converted to mass by multiplying millilitres by the average density of fresh brain tissue (1.036 g ml⁻¹)52. To account for phylogenetic uncertainty, the log-log regression of brain size on body size was independently run on 1,000 randomly selected tree topologies with the Hackett backbone in ref. 27 (www.birdtree.org; downloaded 14 July 2016). The encephalization values used in all of our downstream analyses were computed as the median residuals for each species across these 1,000 models.

### Characterization of environmental variability

The environmental variables we consider here include the mean, within-year variance and predictability of temperature, precipitation and net primary productivity. Monthly raster maps of temperature and precipitation values were obtained for years 1900–2005 from ecoClimate.org (provided at 1° resolution, resampled to 0.5° resolution; downloaded 25 July 2016)54. Monthly net primary productivity data for years 2000–2016 were obtained from the MODIS dataset downloaded from NASA Earth Observations (provided at 0.5° resolution; http://neo.sci.gsfc.nasa.gov; accessed 18 March 2016).
Predictability was measured as Colwell's P28, an information-theory-based index that captures variation in the onset, intensity and duration of periodic phenomena and ranges from 0 (completely unpredictable) to 1 (completely predictable). As environmental variables tend to be strongly correlated29, we reduced the original set of environmental predictors (transformed when required55, centred and scaled) through PCA. Separate analyses were conducted to reduce the dimensionality of environmental data in the demographic and correlated trait evolution sections to account for the fact that environmental correlations are often region-specific30. In the demographic analyses, the environmental PCA was based only on North American data, including all cell values north of the US–Mexico border (that is, only the geographic region where breeding bird survey data are available). In the correlated trait evolution analyses, the environmental PCA included all global terrestrial habitats, excluding Antarctica. Both environmental PCAs recovered similar components (see main text, Table 1 and Supplementary Table 2 for details). In the demographic analysis, the average score for each principal component was initially computed for every BCR and these regional averages were subsequently used to characterize species-typical habitats. Specifically, variables H1 and H2 were computed as weighted averages of the corresponding environmental components (PC1 and PC2), where weights were determined by the species’ relative abundance in each conservation region. Species-typical environmental values for the global analysis of correlated trait evolution were estimated directly by averaging all local (0.5 × 0.5° cell) PCA scores across the species’ entire breeding distribution.

### Bird population data

Abundance data for our population dynamics analyses were collected between 1966 and 2014 by the North American BBS (available through www.pwrc.usgs.gov/bbs/; downloaded 28 August 2015)24.
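Colwell's P, used above to quantify environmental predictability, is computed from a contingency table of environmental states across time steps. The following is a minimal sketch using natural-log entropies, with no smoothing of zero cells:

```python
import numpy as np

def colwell_P(N):
    """Colwell's (1974) predictability from a states-by-time contingency
    table N, where N[i, j] counts occurrences of state i (e.g. a binned
    rainfall level) at time step j (e.g. a calendar month).
    P = 1 - (H(XY) - H(X)) / log(s): 0 = unpredictable, 1 = predictable."""
    N = np.asarray(N, dtype=float)
    Z = N.sum()
    s = N.shape[0]                          # number of states
    col = N.sum(axis=0) / Z                 # marginal distribution over time
    joint = N[N > 0] / Z
    HX = -np.sum(col[col > 0] * np.log(col[col > 0]))  # uncertainty w.r.t. time
    HXY = -np.sum(joint * np.log(joint))               # joint uncertainty
    return 1.0 - (HXY - HX) / np.log(s)

# Perfectly seasonal regime: each month always produces the same state.
perfect = [[5, 0, 0], [0, 5, 0], [0, 0, 5]]
# Completely aseasonal regime: every state equally likely in every month.
noisy = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
```

Here `colwell_P(perfect)` returns 1.0 and `colwell_P(noisy)` returns 0.0, matching the bounds described in the text.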
The BBS is coordinated by the US Geological Survey (USGS) and the Canadian Wildlife Service and conducted annually by trained volunteers during the height of the breeding season. Participants travel along 24.5-mile roadside routes, conducting 3-min point count surveys at 0.5-mile intervals and recording every bird seen or heard within a 0.25-mile radius. Each BBS survey route was assigned to a single BCR based on route starting coordinates23. BCR maps were provided by the USGS Patuxent Wildlife Research Center (www.pwrc.usgs.gov; downloaded 15 September 2015). Only surveys fulfilling BBS quality criteria (that is, runtype = 1) were included in our analyses.

### Quantification of population dynamics

We characterized the temporal dynamics of bird populations within BCRs across North America using hierarchical Bayesian models following ref. 25. The log of abundance, $x_{j,i,t}$, for a given species at survey route $j$ within BCR $i$ in year $t$ is modelled as:

$$\log x_{j,i,t} = S_i + \beta_i \times t + \gamma_{i,t} + \omega_{i,j} + \eta I_{j,t} + \varepsilon_{i,j,t}$$

where $S_i$ is the average abundance within BCR $i$, $\beta_i$ is the temporal trend in abundance within BCR $i$, and $\eta$ is the first-year observer effect, where $I_{j,t}$ is 1 if the survey at year $t$ is an observer's first record at route $j$ and 0 otherwise. Year effects, $\gamma_{i,t}$, and route-observer effects, $\omega_{i,j}$, are modelled as BCR-specific random effects, whereas $\varepsilon_{i,j,t}$ is modelled as a general random effect of count overdispersion. Given the potential for differences in observer ability, a separate value of $\omega$ is given to each unique route-observer combination. To account for imperfect detection during surveys, the observed count on route $j$ within BCR $i$ during year $t$ is assumed to have a Poisson distribution with mean $x_{j,i,t}$. Abundances are allowed to vary among survey routes within a BCR, but all routes are assumed to follow the same relative temporal trend ($\beta_i$) and to undergo the same yearly fluctuations around this trend ($\gamma_{i,t}$).
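To make the structure of this model concrete, the sketch below simulates counts from it rather than fitting it (the study fits the model to BBS data with MCMC); all parameter values are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_bcr_counts(S, beta, sigma_gamma, sigma_omega, sigma_eps, eta,
                        n_routes=5, n_years=30):
    """Draw counts for one BCR from the trend model above:
    log-abundance = S + beta*t + year effect + route-observer effect
    + first-year observer effect + overdispersion, with observed counts
    Poisson-distributed around the implied abundance."""
    gamma = rng.normal(0, sigma_gamma, n_years)   # shared year effects
    omega = rng.normal(0, sigma_omega, n_routes)  # route-observer effects
    counts = np.empty((n_routes, n_years), dtype=int)
    for j in range(n_routes):
        for t in range(n_years):
            first = 1.0 if t == 0 else 0.0        # observer's first record
            log_x = (S + beta * t + gamma[t] + omega[j]
                     + eta * first + rng.normal(0, sigma_eps))
            counts[j, t] = rng.poisson(np.exp(log_x))
    return counts, gamma

counts, gamma = simulate_bcr_counts(S=2.0, beta=0.01, sigma_gamma=0.1,
                                    sigma_omega=0.2, sigma_eps=0.05, eta=-0.1)
```

All routes share the same trend and year effects, so tightening `sigma_gamma` produces the synchronized, stable year-to-year dynamics that the stability metric rewards.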
The variance of route-observer effects within a BCR, $\sigma^2_{\omega_i}$, is drawn from a global hyperdistribution. To conform with the assumption of normality of residuals in general linear models, we use the negative of the standard deviation in annual fluctuations ($-1 \times \sqrt{\sigma^2_{\gamma_i}}$) as our dependent variable in subsequent analyses of population stability. The sign inversion is simply done to facilitate interpretation of our results, such that higher values reflect more stable populations. As hierarchical models tend to underestimate the magnitude of annual fluctuations when the number of missing survey years is high56, we estimated trends for a period when survey data are relatively consistent, namely from 1985 onwards. In addition, we improved data quality by including only route-observer combinations with 10 or more years of survey data and estimating only parameters for BCRs with at least 20 years of survey data and a minimum of 14 survey routes39. Model parameters were estimated with MCMC analysis using package ‘rjags’57. Four independent chains were run for each model, each of which included a burn-in of 25,000 steps, an additional chain length of 25,000 steps and a thinning interval of 10. Priors for $S_i$, $\beta_i$ and $\eta$ were normal distributions with mean of 0 and variance of $10^6$. Prior distributions for variances were inverse gamma distributions with scale and shape equal to 0.001. Our assessment of chain convergence was done through the ‘coda’ package in R58 and included both a visual inspection of the traces of posterior estimates and an estimation of potential scale reduction factors (PSRF) via Gelman and Rubin’s convergence diagnostic59. Only estimates obtained from BCRs in which PSRF values were under 1.1 for all parameters (that is, chains with proper convergence) were included in our subsequent analyses of population stability. We considered positive support for temporal trends when the 95% credible interval of $\beta_i$ did not include zero.
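The PSRF convergence check can be illustrated with a small self-contained implementation of the classic Gelman–Rubin diagnostic (without the refinements of more recent split-chain variants):

```python
import numpy as np

def psrf(chains):
    """Gelman-Rubin potential scale reduction factor for one parameter.
    `chains` is an (m, n) array: m independent chains of n post-burn-in
    samples. Values near 1 (conventionally < 1.1, as in the text)
    indicate convergence."""
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    W = chains.var(axis=1, ddof=1).mean()   # within-chain variance
    B = n * chain_means.var(ddof=1)         # between-chain variance
    var_hat = (n - 1) / n * W + B / n       # pooled variance estimate
    return np.sqrt(var_hat / W)

rng = np.random.default_rng(3)
# Four well-mixed chains sampling the same distribution: PSRF near 1.
good = rng.normal(0, 1, size=(4, 5000))
# Chains stuck around different values: PSRF far above the 1.1 cutoff.
bad = good + np.array([[0.0], [5.0], [10.0], [15.0]])
```

Well-mixed chains give values near 1, whereas the offset chains inflate the between-chain variance and push the PSRF well above 1.1.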
Hierarchical models with density dependence were also fitted to all species that did not exhibit evidence of linear trends in our initial analysis (n = 27). Specifically, we re-estimated population stability for these species using a discrete-time, stochastic Gompertz model following ref. 38. These models estimate density-dependent population change at the route level while allowing random environmentally driven fluctuations and accounting for observer error in reported abundances. The log of abundance at time $t$, $\log(x_t)$, is modelled here as a function of $\log(x_{t-1})$:

$$\log(x_t) = a + b \times \log(x_{t-1}) + E_t$$

where $a$ is the intrinsic rate of increase and $b$ is the strength of density dependence. Values of $b$ were allowed to range from −1 (strong) to 1 (no density dependence)37. Relative annual fluctuations, $E_t$, have a normal distribution with mean zero and variance $\sigma^2_E$. Similarly, the log of observed counts in year $t$ is assumed to have a distribution with mean of $\log(x_t)$ and a variance of $\tau^2$. To conform with the assumption of normality of residuals in general linear models, we used the negative log of the estimated year-to-year variance (that is, $-1 \times \log(\sigma^2_E)$) as our dependent variable in subsequent analyses of population stability. As above, the sign inversion here is simply done to facilitate interpretation of our results, such that higher values reflect more stable populations. Data quality checks for hierarchical models with density dependence included estimating only models for routes with at least 20 years of survey data from 1985 onwards and no more than three consecutive years of missing data. Parameters were estimated using MCMC analysis with four independent chains, each run with a burn-in period of 100,000 steps, an additional chain length of 50,000 steps and a thinning interval of 10 steps.
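As a concrete illustration, the following sketch simulates the state-space structure described above (the study instead fits it to route-level counts via MCMC); parameter values are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_gompertz(a, b, sigma_E, tau, n_years=50, log_x0=2.0):
    """Discrete-time stochastic Gompertz process:
    log(x_t) = a + b*log(x_{t-1}) + E_t, with process noise
    E_t ~ N(0, sigma_E^2); observed log-counts add observation
    noise with variance tau^2."""
    log_x = np.empty(n_years)
    log_x[0] = log_x0
    for t in range(1, n_years):
        log_x[t] = a + b * log_x[t - 1] + rng.normal(0, sigma_E)
    observed = log_x + rng.normal(0, tau, n_years)  # observer error
    return log_x, observed

# With |b| < 1 the process is stationary around a / (1 - b).
log_x, observed = simulate_gompertz(a=1.0, b=0.5, sigma_E=0.1, tau=0.05)

# Stability score as defined in the text: -1 * log of the process
# variance, so smaller year-to-year fluctuations give higher scores.
stability = -1.0 * np.log(0.1 ** 2)
```

Separating the process noise ($\sigma^2_E$) from the observation noise ($\tau^2$) is what lets the fitted model attribute sampling error to observers rather than to genuine population fluctuations.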
Priors for $a$ were drawn from a non-informative uniform distribution from 0 to $10^6$, for $b$ from a uniform distribution from −1 to 1, and for $\sigma^2_E$ and $\tau^2$ from an inverse gamma distribution with scale and shape equal to 0.001. As with our linear trend models, chain convergence diagnostics were performed through visual inspection and the Gelman and Rubin convergence diagnostic59. Data for downstream analyses of population stability only included estimates for routes that reached proper convergence. For both linear trend and density-dependence hierarchical models, we excluded species that typically pose clear challenges to detection, such as aquatic (families Gaviidae, Podicipedidae, Pelecanidae, Phalacrocoracidae, Anhingidae, Anatidae, Rallidae, Ardeidae, Threskiornithidae and Ciconiidae), nocturnal (families Tytonidae, Strigidae and Caprimulgidae) and primarily aerial species (families Apodidae and Hirundinidae). For all other species, we summarized regional measures of population stability into a single species-specific value by computing density-weighted averages across BCRs (linear trend models) or routes (density-dependence models). Thus, our measures of population variability account for differences in population dynamics across a species’ range60, but place greater importance on the population dynamics that occur in regions or sites where the species is better represented.

### Estimating correlates of population stability

Data on longevity and annual reproductive output were obtained from ref. 51 (the latter was calculated as the product of clutch size and clutches per year). Social systems were classified as either cooperative or non-cooperative breeding based on ref. 61. Habitat generalism was measured as the number of different BCRs in which a species was reported throughout the BBS dataset. Migratory status was determined from range maps by BirdLife International (birdlife.org; downloaded 18 March 2016).
Specifically, a species was considered resident if there was complete overlap between winter and breeding portions of its range, and considered migratory otherwise. To test the effects of putative predictor variables on population stability scores we used PGLS regression models estimated with the ‘geiger’62 and ‘nlme’63 packages in R64. All regression models (including the one used to estimate relative brain sizes) were computed using Pagel’s λ transformation. To account for uncertainty in phylogenetic relationships, every regression model reported here was independently run with 1,000 different tree topologies from ref. 27. Model fit was assessed through adjusted R2 (ref. 65). In the main text we report the average estimated coefficient for each parameter and the proportion of trees in which such estimates were significant (that is, the f statistic). Body size, longevity, annual reproductive output and estimated mean abundance were log-transformed prior to analysis. Our fully parameterized models included all main effects as well as interactions of longevity, annual reproductive output, habitat generalism, body size, relative brain size, sociality and migration with H12. Models were subsequently reduced by iteratively removing, one at a time, terms with the highest P value (removing interactions prior to main effects) and assessing whether removal led to a significant improvement of Akaike information criterion (AIC) values (that is, ΔAIC > 2). We also computed variance inflation factors (VIF) for all of our reduced models to confirm low potential for multicollinearity (all VIF values were <2).

### Estimating evolutionary rates of transition between character states

We investigated the potential timeline of evolution of encephalization and climatic niche in birds using models of correlated trait evolution42, implemented through the discrete function of BayesTraits v2 on a global sample of species (Supplementary Data 2).
Pelagic and migratory species were excluded from these analyses, resulting in a total sample of 1,288 resident terrestrial species. BayesTraits estimates the eight possible transition rates between potential character states (see Fig. 3c or f), assuming that simultaneous transitions in both brain size and environment are so unlikely that they can be ignored42. As both brain size and environmental variability are continuous variables, we explored a number of different cutoff values to convert them into binary traits suitable for this kind of analysis. Specifically, we classified species as having large encephalization values when they occurred above the 30th, 50th, 75th and 90th percentile of brain size distribution. While a 30th percentile cutoff for encephalization may seem too permissive at first glance, we note that this was the minimum possible threshold at which all ‘large-brained’ species had a positive brain residual (that is, bigger brain than expected from body size) and the number of observed transitions between different states was sufficient for the proper estimation of transition rates66. We note that the skewed distribution towards more highly encephalized species in our sample is due to the effects of phylogenetic correction in the estimation of relative brain size, as well as to the subsampling of species from our much larger global brain dataset. Exposure to environmental variability was classified as high for species above the 50th, 75th and 90th percentiles in either ‘temperature variability’ or ‘xeric variability’. Because models of correlated trait evolution have the potential to identify spurious correlations when the number of transitions between states is low66, we began by confirming that all of our thresholds yielded a reasonable number of transitions between states using ancestral character state estimation via the R package ‘phytools’67 and averaging the detected number of transitions across 1,000 tree topologies. 
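The percentile-based dichotomization described above can be sketched directly; the function names here are illustrative:

```python
import numpy as np

def above_percentile(values, pct):
    """Binary trait state: 1 if the species falls above the given
    percentile of the trait distribution, 0 otherwise (mirroring the
    30th/50th/75th/90th-percentile thresholds used in the text)."""
    values = np.asarray(values, dtype=float)
    return (values > np.percentile(values, pct)).astype(int)

def variable_environment(pc1, pc2, pct):
    """Exposure to environmental variability counts as 'high' when a
    species exceeds the threshold on either composite axis (temperature
    variability or xeric variability), per the criterion in the text."""
    return above_percentile(pc1, pct) | above_percentile(pc2, pct)

brain_residuals = [-0.3, -0.1, 0.05, 0.2, 0.4]
large_brained = above_percentile(brain_residuals, 50)  # -> [0, 0, 0, 1, 1]
```

The either-axis rule means a species exposed to extreme drought variability but stable temperatures is still classed as occupying a variable environment.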
At the 30th percentile threshold we detected an average of 29 transitions from small to large encephalization and 65 transitions from large to small encephalization. At the 50th percentile threshold we detected an average of 102 transitions from small to large encephalization, 112 transitions from large to small encephalization, 253 transitions from stable to variable environments and 414 transitions from variable to stable environments. At the 75th percentile threshold we detected an average of 64 transitions from small to large encephalization, 36 transitions from large to small encephalization, 265 transitions from stable to variable environments and 195 transitions from variable to stable environments. Finally, at the 90th percentile threshold we detected an average of 46 transitions from small to large encephalization, 15 transitions from large to small encephalization, 237 transitions from stable to variable environments and 127 transitions from variable to stable environments. The 90th percentile threshold was therefore ultimately dropped as a criterion for dichotomizing encephalization because the low number of transitions it yielded would preclude any meaningful estimates of transition rates66. Rates of evolutionary transition were estimated using rjMCMC analyses. Parameter values were first estimated using maximum likelihood analysis to inform our choice of priors. For all six combinations of cutoff, we calculated mean values of transition rates across our sample of 1,000 trees. Maximum likelihood estimates of each parameter value were of a similar magnitude regardless of cutoffs and ranged from 0.00002 to 0.34. Next, rjMCMC analyses were performed for 200,000,000 iterations with a burn-in of 5,000,000, a thinning interval of 1,000 iterations and an exponential prior whose mean is seeded from a uniform hyperprior ranging between 0 and 0.5. 
Reversible-jump helps avoid model over-parameterization by exploring alternative models that can differ in parameter number68. Because reversible-jump analyses estimate the posterior probability of all possible model configurations along with individual parameter values, this algorithm offers the additional advantage of enabling tests of very specific hypotheses. Specifically, the posterior distribution of model types obtained through rjMCMC can be used to assess the strength of evidence that two particular transitions are different or not by comparing the relative sampling frequency of models in which the two transition types were constrained to be the same with that of models in which these two rates were allowed to vary independently of each other69. Statistically, these comparisons are made via BFs, which are calculated as:

$$BF_{ij} = \frac{P(M_i \mid D)}{P(M_j \mid D)} \times \frac{P(M_j)}{P(M_i)}$$

where $i$ is the model set where rates are allowed to vary independently, $j$ is a reduced model set in which the two rates are constrained to be the same, $P(M_n \mid D)$ is the posterior probability of model set $n$ (computed as the proportion of steps in which the chain visited model $n$) and $P(M_n)$ is the prior probability of model set $n$68,69. For example, when testing the cognitive buffer hypothesis, $P(M_i \mid D)$ is the frequency of all model configurations within the posterior distribution in which the transition rate from moderate to large encephalization varied between stable and variable environments, whereas $j$ includes all model configurations in the posterior distribution where these rates were constrained to be equal in both environments. Similarly, when testing the colonization advantage scenario, $P(M_i \mid D)$ is the frequency of all model configurations in which the transition rate from stable to variable environments varied between moderate and large encephalization, while $j$ includes all configurations where these rates were constrained to be equal in both brain size classes.
$P(M_n)$ values for this formula are computed by exploring all possible model combinations via expanded Stirling numbers69: $P(M_j)$ = 0.9592 and $P(M_i)$ = 0.0408. Overall, resulting BF values from 3 to 12 suggest positive support for model set $i$ and values above 12 suggest that model set $i$ is strongly supported when compared with model set $j$68. We also report the proportion of steps in our model chains ($P$) in which the difference between two rates of interest was equal to zero (that is, the transition rate for the character of interest was independent of the state of the second trait). In this case, values of $P$ < 0.014 indicate positive support for a difference between rates (that is, BF > 3)69. Because hypothesis testing directly assesses the proportion of steps in the posterior distribution where transition rates of interest are constrained to be equal, we visualize these results by plotting the distribution of ‘rate differences’ calculated across the posterior distribution. These rate differences were calculated at each step of the chain as either the difference in estimated transition rate from moderate to large brain sizes in variable versus stable environments (when testing the cognitive buffer hypothesis), or the difference in estimated transition rates from stable to variable environments in species with large versus moderate brain sizes (when testing the colonization advantage hypothesis). Plotting the distributions of rate differences (Fig. 3) allows us to assess both the support for a particular hypothesis (the proportion of steps where rate difference = 0) and the directionality of these potential differences. Besides explicitly testing the cognitive buffer and colonization advantage scenarios as indicated above, we also tested for differences in the rates of colonization of stable environments between brain size classes as well as for differences in the rate of evolution of small to moderate brain sizes in stable versus variable habitats.
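The BF calculation can be sketched from raw rjMCMC output as follows; the visit counts are invented for illustration, while the two default prior probabilities are the Stirling-number values reported in the text, and the posterior-odds-over-prior-odds form follows the standard BF definition:

```python
def bayes_factor(n_independent, n_constrained, prior_i=0.0408, prior_j=0.9592):
    """BF_ij for model set i (the two transition rates of interest vary
    independently) versus set j (the rates are constrained to be equal):
    the ratio of posterior odds to prior odds, with posterior
    probabilities estimated as the proportion of chain steps visiting
    each model set."""
    total = n_independent + n_constrained
    posterior_odds = (n_independent / total) / (n_constrained / total)
    prior_odds = prior_i / prior_j
    return posterior_odds / prior_odds

# Hypothetical chain: 900 of 1,000 steps visit configurations where the
# two rates differ, giving strong support (BF > 12) for a rate difference.
bf = bayes_factor(900, 100)
```

Because the prior strongly favours the constrained set, even a modest posterior preference for rate independence translates into a large BF under this formulation.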
We ran each rjMCMC analysis three times to ensure chain convergence and assess the consistency of our results. These checks were performed with the ‘coda’ package in R58 and included visually inspecting the traces of all of our posterior estimates, assuring effective sample sizes were greater than 1,000, and estimating potential scale reduction factors (PSRF) using Gelman and Rubin’s convergence diagnostic59. PSRF values were below 1.1 for all parameter estimates, indicating proper chain convergence properties. Effective sample sizes over 1,000 were obtained for all runs, except for analyses using the combination of the 50th percentile encephalization threshold and the 75th percentile environment threshold. To ensure consistent results for this cutoff, we performed three additional runs for 619,000,000 iterations (the upper limit of our current computational resources). While four rate parameters in these models still failed to reach target effective sample sizes of 1,000 during the extended runs, their effective sample sizes were nevertheless fairly high (range: 371–997). Furthermore, the plots of running values across iterations for BFs testing the cognitive buffer and colonization advantage hypotheses in these models indicate that these results are also highly stable (Supplementary Fig. 2). Posterior distributions of parameter estimates from the different chains produced for each threshold were subsequently pooled to calculate both the mean values and standard deviations for each transition rate (Supplementary Fig. 3).

### Ancestral trait reconstruction

The ancestral states reported in Fig. 4 were reconstructed for visualization purposes only and estimated with the ‘phytools’67 package in R. Reconstructions of continuous trait data were based on maximum likelihood and a randomly chosen tree within our candidate set. Colour coding in Fig. 4b–g is based on results from separate ancestral trait reconstructions for the different environmental variables.
### Data availability

All data generated or analysed during this study are either available through cited sources or included in this published article and its Supplementary Information files.

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## References

1. Bennett, P. M. & Harvey, P. H. Relative brain size and ecology in birds. J. Zool. 207, 151–169 (1985).
2. Isler, K. & van Schaik, C. P. Metabolic costs of brain size evolution. Biol. Lett. 2, 557–560 (2006).
3. Iwaniuk, A. N. & Nelson, J. E. Developmental differences are correlated with relative brain size in birds: a comparative analysis. Can. J. Zool. 81, 1913–1928 (2003).
4. Barton, R. A. & Capellini, I. Maternal investment, life histories, and the costs of brain growth in mammals. Proc. Natl Acad. Sci. USA 108, 6169–6174 (2011).
5. Sol, D. in Cognitive Ecology II (eds Dukas, R. & Ratcliffe, J. M.) 111–134 (Univ. Chicago Press, Chicago, 2009).
6. Potts, R. Variability selection in hominid evolution. Evol. Anthropol. 7, 81–96 (1998).
7. Reader, S. M. & Laland, K. N. Social intelligence, innovation, and enhanced brain size in primates. Proc. Natl Acad. Sci. USA 99, 4436–4441 (2002).
8. Lefebvre, L. Brains, innovations, tools and cultural transmission in birds, non-human primates, and fossil hominins. Front. Hum. Neurosci. 7, 245 (2013).
9. Sol, D., Székely, T., Liker, A. & Lefebvre, L. Big-brained birds survive better in nature. Proc. R. Soc. B 274, 763–769 (2007).
10. Maille, A. & Schradin, C. Survival is linked with reaction time and spatial memory in African striped mice. Biol. Lett. 12, 20160346 (2016).
11. Shultz, S., Bradbury, R. B., Evans, K. L., Gregory, R. D. & Blackburn, T. M. Brain size and resource specialization predict long-term population trends in British birds. Proc. R. Soc. B 272, 2305–2311 (2005).
12. Maklakov, A. A., Immler, S., Gonzalez-Voyer, A., Rönn, J. & Kolm, N. Brains and the city: big-brained passerine birds succeed in urban environments. Biol. Lett. 7, 730–732 (2011).
13. Vincze, O. Light enough to travel or wise enough to stay? Brain size evolution and migratory behavior in birds. Evolution 70, 2123–2133 (2016).
14. Sayol, F. et al. Environmental variation and the evolution of large brains in birds. Nat. Commun. 7, 13971 (2016).
15. Sol, D., Bacher, S., Reader, S. M., Lefebvre, L. & Price, S. E. T. D. Brain size predicts the success of mammal species introduced into novel environments. Am. Nat. 172, S63–S71 (2008).
16. Sol, D. et al. Unraveling the life history of successful invaders. Science 337, 580–583 (2012).
17. Amiel, J. J., Tingley, R. & Shine, R. Smart moves: effects of relative brain size on establishment success of invasive amphibians and reptiles. PLoS ONE 6, e18277 (2011).
18. Lefebvre, L. & Sol, D. Brains, lifestyles and cognition: are there general trends? Brain Behav. Evol. 72, 135–144 (2008).
19. Kotrschal, A., Corral-Lopez, A., Amcoff, M. & Kolm, N. A larger brain confers a benefit in a spatial mate search learning task in male guppies. Behav. Ecol. 26, 527–532 (2015).
20. Kotrschal, A. et al. Artificial selection on relative brain size in the guppy reveals costs and benefits of evolving a larger brain. Curr. Biol. 23, 168–171 (2013).
21. Lefebvre, L., Reader, S. M. & Sol, D. Brains, innovations and evolution in birds and primates. Brain Behav. Evol. 63, 233–246 (2004).
22. Sol, D., Lefebvre, L. & Rodríguez-Teijeiro, J. D. Brain size, innovative propensity and migratory behaviour in temperate Palaearctic birds. Proc. R. Soc. B 272, 1433–1441 (2005).
23. Sauer, J. R., Fallon, J. E. & Johnson, R. Use of North American Breeding Bird Survey data to estimate population change for bird conservation regions. J. Wildlife Manage. 67, 372–389 (2003).
24. Sauer, J. R. et al. The North American Breeding Bird Survey: Results and Analysis 1966–2015 Version 2.07.2017 (USGS Patuxent Wildlife Research Center, 2017); http://www.mbr-pwrc.usgs.gov/bbs/.
25. Smith, A. C., Hudson, M.-A. R., Downes, C. & Francis, C. M. Estimating breeding bird survey trends and annual indices for Canada: how do the new hierarchical Bayesian estimates differ from previous estimates? Can. Field Nat. 128, 119–134 (2014).
26. Clark, J. R. et al. North American Bird Conservation Initiative: Bird Conservation Region Descriptions, a Supplement to the North American Bird Conservation Initiative Bird Conservation Regions Map (US NABCI Committee, Washington DC, 2000).
27. Jetz, W., Thomas, G. H., Joy, J. B., Hartmann, K. & Mooers, A. O. The global diversity of birds in space and time. Nature 491, 444–448 (2012).
28. Colwell, R. K. Predictability, constancy, and contingency of periodic phenomena. Ecology 55, 1148–1153 (1974).
29. Botero, C. A., Dor, R., McCain, C. M. & Safran, R. J. Environmental harshness is positively correlated with intraspecific divergence in mammals and birds. Mol. Ecol. 23, 259–268 (2014).
30. Sheehan, M. J. et al. Different axes of environmental variation explain the presence vs. extent of cooperative nest founding associations in Polistes paper wasps. Ecol. Lett. 18, 1057–1067 (2015).
31. Bjørnstad, O. N. & Grenfell, B. T. Noisy clockwork: time series analysis of population fluctuations in animals. Science 293, 638–643 (2001).
32. Ricklefs, R. E. & Scheuerlein, A. Comparison of aging-related mortality among birds and mammals. Exp. Gerontol. 36, 845–857 (2001).
33. McNab, B. K. Food habits, energetics, and the population biology of mammals. Am. Nat. 116, 106–124 (1980).
34. Lindstedt, S. L. & Boyce, M. S. Seasonality, fasting endurance, and body size in mammals. Am. Nat. 125, 873–878 (1985).
35. Rubenstein, D. R. & Lovette, I. J. Temporal environmental variability drives the evolution of cooperative breeding in birds. Curr. Biol. 17, 1414–1419 (2007).
36. Devictor, V., Julliard, R. & Jiguet, F. Distribution of specialist and generalist species along spatial gradients of habitat disturbance and fragmentation. Oikos 117, 507–514 (2008).
37. Ives, A., Dennis, B., Cottingham, K. & Carpenter, S. Estimating community stability and ecological interactions from time-series data. Ecol. Monogr. 73, 301–330 (2003).
38. Dennis, B., Ponciano, J. M., Lele, S. R., Taper, M. L. & Staples, D. F. Estimating density dependence, process noise, and observation error. Ecol. Monogr. 76, 323–341 (2006).
39. Sauer, J. R. & Link, W. A. Analysis of the North American Breeding Bird Survey using hierarchical models. Auk 128, 87–98 (2011).
40. Brook, B. W. & Bradshaw, C. J. A. Strength of evidence for density dependence in abundance time series of 1198 species. Ecology 87, 1445–1451 (2006).
41. Ishida, Y. et al. Genetic connectivity across marginal habitats: the elephants of the Namib Desert. Ecol. Evol. 6, 6189–6201 (2016).
42. Pagel, M. Detecting correlated evolution on phylogenies: a general method for the comparative analysis of discrete characters. Proc. R. Soc. B 255, 37–45 (1994).
43. Green, D. M. The ecology of extinction: population fluctuation and decline in amphibians. Biol. Conserv. 111, 331–343 (2003).
44. Wells, J. C. K. & Stock, J. T. The biology of the colonizing ape. Am. J. Phys. Anthropol. 134, 191–222 (2007).
45. Roth, T. C., LaDage, L. D., Freas, C. A. & Pravosudov, V. V. Variation in memory and the hippocampus across populations from different climates: a common garden approach. Proc. R. Soc. B 279, 402–410 (2012).
46. Kozlovsky, D. Y., Branch, C. L. & Pravosudov, V. V. Problem-solving ability and response to novelty in mountain chickadees (Poecile gambeli) from different elevations. Behav. Ecol. Sociobiol. 69, 635–643 (2015).
47. Benson-Amram, S., Dantzer, B., Stricker, G., Swanson, E. M. & Holekamp, K. E. Brain size predicts problem-solving ability in mammalian carnivores. Proc. Natl Acad. Sci. USA 113, 2532–2537 (2016).
48. Dunbar, R. I. M. & Shultz, S. Evolution in the social brain. Science 317, 1344–1347 (2007).
49. Emery, N. J., Seed, A. M., von Bayern, A. M. P. & Clayton, N. S. Cognitive adaptations of social bonding in birds. Phil. Trans. R. Soc. B 362, 489–505 (2007).
50. Garamszegi, L. Z., Møller, A. P. & Erritzøe, J. Coevolving avian eye size and brain size in relation to prey capture and nocturnality. Proc. R. Soc. B 269, 961–967 (2002).
51. Myhrvold, N. P. et al. An amniote life-history database to perform comparative analyses with birds, mammals, and reptiles. Ecology 96, 3109 (2015).
52. Iwaniuk, A. N. & Nelson, J. E. Can endocranial volume be used as an estimate of brain size in birds? Can. J. Zool. 80, 16–23 (2002).
53. Sol, D. et al. Evolutionary divergence in brain size between migratory and resident birds. PLoS ONE 5, e9617 (2010).
54. Lima-Ribeiro, M. S. et al. EcoClimate: a database of climate data from multiple models for past, present, and future for macroecologists and biogeographers. Biodivers. Informatics 10, 1–21 (2015).
55. Osborne, J. Notes on the use of data transformations. Pract. Assess. Res. Eval. 8, 1–7 (2002).
56. Smith, A. C., Hudson, M.-A. R., Downes, C. M. & Francis, C. M. Change points in the population trends of aerial-insectivorous birds in North America: synchronized in time across species and regions. PLoS ONE 10, e0130768 (2015).
57. Plummer, M. rjags: Bayesian Graphical Models Using MCMC (R Foundation for Statistical Computing, 2013); https://cran.r-project.org/web/packages/rjags/index.html.
58. Plummer, M., Best, N., Cowles, K. & Vines, K. CODA: convergence diagnosis and output analysis for MCMC. R News 6, 7–11 (2006).
59. Gelman, A. & Rubin, D. B. Inference from iterative simulation using multiple sequences. Stat. Sci. 7, 457–472 (1992).
60. Gaston, K. J. & McArdle, B. H. The temporal variability of animal abundances: measures, methods and patterns. Phil. Trans. R. Soc. B 345, 335–358 (1994).
61. Jetz, W. & Rubenstein, D. R. Environmental uncertainty and the global biogeography of cooperative breeding in birds. Curr. Biol. 21, 72–78 (2011).
62. Harmon, L. J., Weir, J. T., Brock, C. D., Glor, R. E. & Challenger, W. GEIGER: investigating evolutionary radiations. Bioinformatics 24, 129–131 (2008).
63. Pinheiro, J. et al. nlme: Linear and Nonlinear Mixed Effects Models (R Foundation for Statistical Computing, Vienna, 2016); https://CRAN.R-project.org/package=nlme.
64. R Development Core Team. R: A Language and Environment for Statistical Computing (R Foundation for Statistical Computing, Vienna, 2008).
65. Orme, D. et al. The caper Package: Comparative Analysis of Phylogenetics and Evolution in R v0.5.2 (R Foundation for Statistical Computing, Vienna, 2013); http://cran.r-project.org/web/packages/caper/index.html.
66. Maddison, W. P. & FitzJohn, R. G. The unsolved challenge to phylogenetic correlation tests for categorical characters. Syst. Biol. 64, 127–136 (2015).
67. Revell, L. J. phytools: an R package for phylogenetic comparative biology (and other things). Methods Ecol. Evol. 3, 217–223 (2012).
68. Pagel, M. & Meade, A. Bayesian analysis of correlated evolution of discrete characters by reversible-jump Markov chain Monte Carlo. Am. Nat. 167, 808–825 (2006).
69. Barbeitos, M. S., Romano, S. L. & Lasker, H. R. Repeated loss of coloniality and symbiosis in scleractinian corals. Proc. Natl Acad. Sci. USA 107, 11877–11882 (2010).

## Acknowledgements

We thank B. Carlson for invaluable feedback on an earlier draft of this manuscript.
We are also grateful to the BBS and the countless volunteers who participate in this yearly survey. Bayesian analyses were run in the Washington University Center for High Performance Computing (CHPC), which is partially funded by NIH grants 1S10RR022984-01A1 and 1S10OD018091-01. We thank M. Tobias for his helpful advice on HPC.

## Author information

### Affiliations

1. Department of Biology, Washington University in St. Louis, St. Louis, MO 63130, USA
   - Trevor S. Fristoe
   - Carlos A. Botero
2. Canadian Centre for Behavioural Neuroscience, University of Lethbridge, Lethbridge, Alberta T1K 6T5, Canada
   - Andrew N. Iwaniuk

### Contributions

T.S.F. and C.A.B. designed analyses, compiled data and wrote the manuscript. T.S.F. additionally performed analyses and prepared figures. A.N.I. collected and compiled data, and contributed to writing.

### Competing interests

The authors declare no competing financial interests.

### Corresponding author

Correspondence to Trevor S. Fristoe.

## Electronic supplementary material

### Supplementary Information

Supplementary Tables 1–3, Supplementary Figures 1–3

### Supplementary Data 1

Data for 126 species used in population analyses. Variable descriptions can be found in the methods section of the manuscript.

### Supplementary Data 2

Data for 2,062 species used in estimating relative brain sizes, including 1,288 species included in global evolutionary analyses. Variable descriptions can be found in the methods section of the manuscript.
# What's the most efficient way to find barycentric coordinates?

In my profiler, finding barycentric coordinates is apparently somewhat of a bottleneck. I am looking to make it more efficient. It follows the method in Shirley, where you compute the areas of the triangles formed by embedding the point P inside the triangle. Code:

```cpp
Vector Triangle::getBarycentricCoordinatesAt( const Vector & P ) const
{
    Vector bary ;

    // The area of a triangle is
    real areaABC = DOT( normal, CROSS( (b - a), (c - a) ) ) ;
    real areaPBC = DOT( normal, CROSS( (b - P), (c - P) ) ) ;
    real areaPCA = DOT( normal, CROSS( (c - P), (a - P) ) ) ;

    bary.x = areaPBC / areaABC ; // alpha
    bary.y = areaPCA / areaABC ; // beta
    bary.z = 1.0f - bary.x - bary.y ; // gamma

    return bary ;
}
```

This method works, but I'm looking for a more efficient one!

• Beware that the most efficient solutions may be the least accurate. – Peter Taylor Feb 12 '12 at 13:51
• I suggest you make a unit test to call this method ~100k times (or something similar) and measure the performance. You can write a test that ensures it's less than some value (e.g. 10s), or you can use it simply to benchmark the old vs. new implementation. – ashes999 Feb 13 '12 at 3:08

Transcribed from Christer Ericson's Real-Time Collision Detection (which, incidentally, is an excellent book):

```cpp
// Compute barycentric coordinates (u, v, w) for
// point p with respect to triangle (a, b, c)
void Barycentric(Point p, Point a, Point b, Point c, float &u, float &v, float &w)
{
    Vector v0 = b - a, v1 = c - a, v2 = p - a;
    float d00 = Dot(v0, v0);
    float d01 = Dot(v0, v1);
    float d11 = Dot(v1, v1);
    float d20 = Dot(v2, v0);
    float d21 = Dot(v2, v1);
    float denom = d00 * d11 - d01 * d01;
    v = (d11 * d20 - d01 * d21) / denom;
    w = (d00 * d21 - d01 * d20) / denom;
    u = 1.0f - v - w;
}
```

This is effectively Cramer's rule for solving a linear system.
You will not get much more efficient than this—if this is still a bottleneck (and it might be: it doesn't look like it's much different computation-wise than your current algorithm), you'll probably need to find some other place to gain a speedup. Note that a decent number of values here are independent of p—they can be cached with the triangle if necessary.

• # of operations can be a red herring. How they're dependent and scheduled matters a lot on modern CPUs. Always test assumptions and performance "improvements." – Sean Middleditch Feb 14 '13 at 22:16
• The two versions in question have almost identical latency on the critical path, if you're only looking at scalar math ops. The thing I like about this one is that by paying space for merely two floats, you can shave one subtract and one division from the critical path. Is that worth it? Only a performance test knows for sure… – John Calsbeek Feb 15 '13 at 4:08
• He describes how he got this on pages 137–138 in the section on "closest point on triangle to point" – bobobobo Aug 9 '13 at 14:11
• Minor note: there is no argument p to this function. – Bart Aug 22 '14 at 8:05
• Minor implementation note: If all 3 points are on top of each other, you'll get a "divide by 0" error, so be sure to check for that case in the actual code. – frodo2975 Feb 8 '19 at 20:02

Cramer's rule should be the best way to solve it.
I am not a graphics guy, but I was wondering why, in the book Real-Time Collision Detection, they don't do the following simpler thing:

```cpp
// Compute barycentric coordinates (u, v, w) for
// point p with respect to triangle (a, b, c)
void Barycentric(Point p, Point a, Point b, Point c, float &u, float &v, float &w)
{
    Vector v0 = b - a, v1 = c - a, v2 = p - a;
    float den = v0.x * v1.y - v1.x * v0.y;
    v = (v2.x * v1.y - v1.x * v2.y) / den;
    w = (v0.x * v2.y - v2.x * v0.y) / den;
    u = 1.0f - v - w;
}
```

This directly solves the 2×2 linear system

v v0 + w v1 = v2

while the method from the book solves the system

(v v0 + w v1) dot v0 = v2 dot v0
(v v0 + w v1) dot v1 = v2 dot v1

• Doesn't your proposed solution make assumptions about the third (.z) dimension (specifically, that it doesn't exist)? – Cornstalks Sep 29 '14 at 20:15
• This is the best method here if one's working in 2D. Just a minor improvement: one should compute the reciprocal of the denominator in order to use two multiplications and one division instead of two divisions. – rubik May 25 '16 at 15:20

Slightly faster: precompute the denominator, and multiply instead of dividing. Divisions are much more expensive than multiplications.

```cpp
// Compute barycentric coordinates (u, v, w) for
// point p with respect to triangle (a, b, c)
void Barycentric(Point p, Point a, Point b, Point c, float &u, float &v, float &w)
{
    Vector v0 = b - a, v1 = c - a, v2 = p - a;
    float d00 = Dot(v0, v0);
    float d01 = Dot(v0, v1);
    float d11 = Dot(v1, v1);
    float d20 = Dot(v2, v0);
    float d21 = Dot(v2, v1);
    float invDenom = 1.0f / (d00 * d11 - d01 * d01);
    v = (d11 * d20 - d01 * d21) * invDenom;
    w = (d00 * d21 - d01 * d20) * invDenom;
    u = 1.0f - v - w;
}
```

In my implementation, however, I cached all of the independent variables.
I pre-calc the following in the constructor:

```cpp
Vector v0;
Vector v1;
float d00;
float d01;
float d11;
float invDenom;
```

So the final code looks like this:

```cpp
// Compute barycentric coordinates (u, v, w) for point p with
// respect to triangle (a, b, c), using values cached per triangle
void Barycentric(Point p, float &u, float &v, float &w)
{
    Vector v2 = p - a;
    float d20 = Dot(v2, v0);
    float d21 = Dot(v2, v1);
    v = (d11 * d20 - d01 * d21) * invDenom;
    w = (d00 * d21 - d01 * d20) * invDenom;
    u = 1.0f - v - w;
}
```

I would use the solution that John posted, but I would use the SSE 4.2 dot-product intrinsic and the SSE rcpss intrinsic for the divide, assuming you are OK restricting yourself to Nehalem and newer processors and limited precision. Alternatively, you could compute several barycentric coordinates at once using SSE or AVX for a 4× or 8× speedup.

You can convert your 3D problem into a 2D problem by projecting onto one of the axis-aligned planes and using the method proposed by user5302. This will result in exactly the same barycentric coordinates as long as you make sure your triangle does not project into a line. It is best to project onto the axis-aligned plane that is closest to the orientation of your triangle. This avoids co-linearity problems and ensures maximum accuracy. Secondly, you can pre-compute the denominator and store it for each triangle. This saves computations afterwards.

I tried to copy @NielW's code to C++, but I didn't get correct results. It was easier to read https://en.wikipedia.org/wiki/Barycentric_coordinate_system#Barycentric_coordinates_on_triangles and calculate lambda1/2/3 as given there (no vector functions needed).
If p(0..2) are the points of the triangle with x/y/z coordinates:

Precalc for the triangle:

```cpp
double invDET = 1. / ((p(1).y - p(2).y) * (p(0).x - p(2).x) +
                      (p(2).x - p(1).x) * (p(0).y - p(2).y));
```

then the lambdas for a point `point` are:

```cpp
double l1 = ((p(1).y - p(2).y) * (point.x - p(2).x) +
             (p(2).x - p(1).x) * (point.y - p(2).y)) * invDET;
double l2 = ((p(2).y - p(0).y) * (point.x - p(2).x) +
             (p(0).x - p(2).x) * (point.y - p(2).y)) * invDET;
double l3 = 1. - l1 - l2;
```

For a given point N inside triangle A B C, you can get the barycentric weight of point C by dividing the area of subtriangle A B N by the total area of triangle A B C.
## Achievement First Mathematics v1.5

## Report for 4th Grade

### Overall Summary

The materials reviewed for Achievement First Mathematics Grade 4 meet expectations for Alignment to the CCSSM. In Gateway 1, the materials meet expectations for focus and coherence, and in Gateway 2, the materials meet expectations for rigor and practice-content connections.

##### 4th Grade

- Alignment: Meets Expectations
- Usability: Does Not Meet Expectations

### Focus & Coherence

The materials reviewed for Achievement First Mathematics Grade 4 meet expectations for focus and coherence. For focus, the materials assess grade-level content and provide all students extensive work with grade-level problems to meet the full intent of grade-level standards. For coherence, the materials are coherent and consistent with the CCSSM.

##### Gateway 1: Meets Expectations

#### Criterion 1.1: Focus

Materials assess grade-level content and give all students extensive work with grade-level problems to meet the full intent of grade-level standards. The materials reviewed for Achievement First Mathematics Grade 4 meet expectations for focus as they assess grade-level content and provide all students extensive work with grade-level problems to meet the full intent of grade-level standards.

##### Indicator 1a

Materials assess the grade-level content and, if applicable, content from earlier grades.

The materials reviewed for Achievement First Mathematics Grade 4 meet expectations for assessing grade-level content and, if applicable, content from earlier grades. Each unit contains a Post-Assessment, which is a summative assessment based on the standards designated in that unit. Examples of assessment items aligned to grade-level standards include:

• Unit 1, Post-Assessment, Item 5, “The cost of buying a movie is 4 times the cost of renting a movie.
It costs $30 to buy a movie. Write two equations that can be used to determine the cost, r, of renting a movie.” (4.OA.1)
• Unit 4, Post-Assessment, Item 16, “Divide. 7,285 ÷ 4. Answer choices include: A. 1,801, B. 1,801 R1, C. 1,821, D. 1,821 R1.” (4.OA.3)
• Unit 9, Post-Assessment, Item 15, “Explain why a square is also a rectangle and a rhombus.” (4.G.2)
• Unit 10, Post-Assessment, Item 13, “A circular pizza was cut into 5 equal slices from the vertex at the center of the pizza. 3 of the slices of pizza get eaten. What is the measurement of the angle formed at the vertex of the slices that are left?” (4.MD.5)

Reviewers noted that in the Achievement First Mathematics Grade 4 materials, there was not a Unit 2 Overview; therefore, an assessment was not available to be reviewed.

Examples of above-grade-level assessments or assessment items which can be omitted or modified:

• Unit 6, Post-Assessment, Item 2, “For each of the following sums, decide which ones are equal to $$\frac{22}{17}$$. For the sums that are equal to $$\frac{22}{17}$$, circle YES. For the sums that are not equal to $$\frac{22}{17}$$, circle NO. ...; YES or NO $$\frac{1}{17}+\frac{1}{17}+\frac{9}{17}+\frac{3}{17}$$.” (4.NF.2, expectations in this domain are limited to fractions with denominators 2, 3, 4, 5, 6, 8, 10, 12, 100.)
• Unit 6, Post-Assessment, Item 14, “Each day Milo reads $$\frac{1}{8}$$ of his new book. Which number sentence best represents the fraction of his book that Milo has read after 7 days? Answer choices include: A. $$\frac{1}{8}×\frac{1}{7}=\frac{1}{56}$$, B. $$\frac{1}{8}×\frac{1}{7}=\frac{7}{8}$$, C. $$\frac{1}{8}×7=\frac{1}{56}$$, D. $$\frac{1}{8}×7=\frac{7}{8}$$.” (4.NF.4, students are expected to “solve word problems involving multiplication of a fraction by a whole number.” Answer choices A and B do not meet the criteria outlined by the standard.)
Achievement First Mathematics Grade 4 has assessments linked to external resources in some Unit Overviews; however, there is no clear delineation as to whether the assessment is used for formative, interim, cumulative or summative purposes.

##### Indicator 1b

Materials give all students extensive work with grade-level problems to meet the full intent of grade-level standards.

The materials reviewed for Achievement First Mathematics Grade 4 meet expectations for giving all students extensive work with grade-level problems to meet the full intent of grade-level standards. Each unit consists of lessons that are broken into four components: Introduction, Workshop/Discussion, Independent Practice, and Exit Ticket. In addition to lessons, there are Math Stories “to enable students to make connections, identify and practice representation and calculation strategies, and develop deep conceptual understanding through the introduction of a specific story problem type in a clear and focused fashion with deliberate questioning and independent work time,” and Math Practice (Practice Workbook) for students “to build procedural skill and fluency.” Examples include:

• Unit 3, Lessons 2 through 9, students fluently add and subtract multi-digit whole numbers using the standard algorithm, as they solve more than 100 problems in independent practice opportunities (Independent Practice, Exit Ticket, and Practice Workbook) and explain their use of the algorithm (4.NBT.4). Lesson 6, Independent Practice, Problem 5, “$$50,019-12,877$$” and Problem 6, “In the last problem, what place values did you need to regroup and how did you do it?
Explain on the lines below.”
• Unit 5, Lessons 1 through 10, students solve multi-step word problems posed with whole numbers and having whole-number answers using the four operations, including problems in which the remainders must be interpreted, and represent these problems using equations with a letter standing for the unknown quantity (4.OA.3). There are 83 Independent Practice problems and 15 Exit Tickets that require students to solve multi-step word problems. Lesson 6, Independent Practice, Problem 4, “Mia represented the above problem like this: (437 pies x 9 wards) x 14 = Total pie Pieces,” and asks students, “Is this representation reasonable? Tell why or why not on the lines below.” While there are 98 opportunities for students to engage with this standard, there are fewer than 10 opportunities for students to represent problems with a letter standing for the unknown quantity.
• Unit 10, Lesson 4, students engage with 4.MD.6, measure angles in whole-number degrees using a protractor, as they solve problems requiring them to use a protractor to accurately identify the angle measurement. Exit Ticket 1, “For numbers 1–2, use your protractor to create an angle of the given size. Be sure to check if your drawn angle matches the type of angle indicated by the measurement. 1. 67°.”

#### Criterion 1.2: Coherence

Each grade’s materials are coherent and consistent with the Standards. The materials reviewed for Achievement First Mathematics Grade 4 meet expectations for coherence. The materials: address the major clusters of the grade, have supporting content connected to major work, make connections between clusters and domains, and have content from prior and future grades connected to grade-level work.

##### Indicator 1c

When implemented as designed, the majority of the materials address the major clusters of each grade.
The materials reviewed for Achievement First Mathematics Grade 4 meet expectations that, when implemented as designed, the majority of the materials address the major clusters of each grade.

• The number of lessons devoted to major work of the grade (including assessments and supporting work connected to the major work) is 114 out of 125, which is approximately 91%.
• The number of days devoted to major work (including assessments and supporting work connected to the major work) is 121 out of 132, which is approximately 92%.
• The instructional minutes were calculated by taking the number of minutes devoted to the major work of the grade (10,425) and dividing it by the total number of instructional minutes (11,475), which is approximately 91%.

A minute-level analysis is most representative of the materials because the units and lessons do not include all of the components included in the math instructional time. The instructional block includes a math lesson, math stories, and math practice components. As a result, approximately 91% of the materials focus on major work of the grade.

##### Indicator 1d

Supporting content enhances focus and coherence simultaneously by engaging students in the major work of the grade.

The materials reviewed for Achievement First Mathematics Grade 4 meet expectations that supporting content enhances focus and coherence simultaneously by engaging students in the major work of the grade. There are opportunities in which supporting standards/clusters are used to support major work of the grade and are connected to the major standards/clusters of the grade. Examples include:

• Unit 4, Lesson 25, Independent Practice, Problem 2, “Leonard bought 4 liters of orange juice.
How many milliliters of juice does he have?” This problem connects the major work of 4.NBT.5, multiply a whole number of up to four digits by a one-digit whole number, to the supporting work of 4.MD.A, as students solve a problem involving measurement conversion from a larger unit to a smaller unit.
• Unit 8, Lesson 7, Independent Practice, Problem 2, “Julio starts school at 7:45 am and finishes school at 3:30 pm. He has 25 minutes of recess, 32 minutes of lunch, and he has a 46-minute free period in the afternoon. The rest of the time he is in classes. How many hours and minutes does Julio spend in class?” This problem connects the major work standard 4.OA.3 and the supporting work standard 4.MD.2, as students solve multi-step word problems involving intervals of time.
• Unit 10, Lesson 2, Independent Practice, Problem 2, “Megan has a very large round table. In order for her to seat her guests, she divided it into 10 equal sections. What is the angle measure of each section of the table?” This problem connects the major work of 4.NF.3 and the supporting work of 4.MD.5, as students find the measurement of angles when given a fraction of a circle.

##### Indicator 1e

Materials include problems and activities that serve to connect two or more clusters in a domain or two or more domains in a grade.

The materials reviewed for Achievement First Mathematics Grade 4 meet expectations for including problems and activities that serve to connect two or more clusters in a domain or two or more domains in a grade. Examples include:

• Unit 1, Assessment, students connect the work of 4.OA.B, gain familiarity with factors and multiples, to 4.OA.C, generate and analyze patterns, as they complete a pattern involving multiples. Item 1, “Alfonzo applies numbers on the back of football jerseys. Below are the first five numbers he applies. If the pattern continues, what are the next three numbers he will apply? 9, 18, 27, 36, 45, ___, ___, ___ a. 54, 63, 72 b.
54, 63, 71 c. 63, 64, 72 d. 63, 72, 8.” • Unit 5, Lesson 9, students connect the work of 4.OA.A, use the four operations with whole numbers, to solve problems to 4.NBT.B, use place value understanding and properties of operations to perform multi-digit arithmetic, as students use estimation strategies and a visual model to solve multi-step problems. Exit Ticket, Problem 1, “Katie and her sister are saving up money to build a tree house. Each month they have saved 46 each from their allowance. In the last month, they each did some extra jobs in order to get the total amount they needed for the house. They spend 13 months saving and an extra month doing more jobs. If the cost of the tree house was 1299, how much money did they earn in the last extra month? Represent, estimate and solve.” • Unit 7, Assessment, students connect the work of 4.NF.A, extend understanding of fraction equivalence, and ordering to 4.NF.C, understand decimal notation for fractions, and compare decimal fractions, as students locate and label points on a number line. Item 19, “Locate and label the following points on the number line below: \frac{130}{10},13.21, 13\frac{12}{100}.” ##### Indicator {{'1f' | indicatorName}} Content from future grades is identified and related to grade-level work, and materials relate grade-level concepts explicitly to prior knowledge from earlier grades. The materials reviewed for Achievement First Mathematics Grade 4 meet expectations that content from future grades is identified and related to grade-level work, and materials relate grade-level concepts explicitly to prior knowledge from earlier grades. Each unit has a Unit Overview and a section labeled “Identify Desired Results” where the standards for the unit are provided as well as a correlating section “Previous Grade Level Standards/Previously Taught & Related Standards” where prior grade-level standards are identified. 
Examples include:

• Unit 3, Overview, Identify Desired Results: Identify the Standards, identifies 3.NBT.2 under Previous Grade Level Standards/Previously Taught and Related Standards for 4.NBT.5. In Enduring Understandings - What it Looks Like in This Unit, a connection is made between the addition and subtraction work and the place value skills of prior grades. “In previous grades, students used place value blocks and pictures of place value blocks to add and subtract numbers. Place value relationships help them regroup. When they need to take away more than they have of a certain place value, they regroup one of a greater place value to ten of that place value.”
• Unit 6, Unit Overview, Identify Desired Results: Identify the Standards, identifies 4.NF.1 as being addressed in this unit and 3.NF.1 as a Previous Grade Level Standards/Previously Taught & Related Standards connection. In Identify the Narrative, a description is provided: “In third grade they recognized equivalent fractions using visual models and number lines in ‘special cases’ such as $$\frac{2}{4}=\frac{1}{2}$$ and $$\frac{1}{3}=\frac{2}{6}$$. In fourth grade, they use visual models of equivalent fractions to understand how to use the identity property to find equivalent fractions.”

The materials develop according to the grade-by-grade progressions in the Standards. Content from future grades is clearly identified and related to grade-level work within each Unit Overview. Each Unit Overview contains a narrative that includes a “Linking” section that describes in detail the progression of the standards within the unit.
Examples include:

• Unit 3, Unit Overview, Linking, “Later in the year students will add and subtract mixed units of measurement which will again call upon regrouping concepts -- in this case, from one unit of measurement to another.” An additional reference is made to fifth grade: “When students move to fifth grade, they will continue to solve multi-step word problems with all four operations, so they will be relying on their abilities to add and subtract with the standard algorithm.”
• Unit 6, Unit Overview, Identify the Narrative, refers to prior work students engaged in with fractions. “The fourth grade unit on fractions combines students’ prior knowledge on fractions, the meaning of operations, logical reasoning and new learning experiences to elaborate their understanding of fractions and allow them to operate with fractions.”
• Unit 10, Unit Overview, Linking, “Angle measurements will not come up as a formal part of the math curriculum again until seventh grade. Although there is a large gap in time between this unit and seventh grade, the seventh grade standards rely heavily on students’ skill and knowledge from this unit in fourth grade.”

##### Indicator 1g

In order to foster coherence between grades, materials can be completed within a regular school year with little to no modification.

The instructional materials reviewed for Achievement First Mathematics Grade 4 foster coherence between grades and can be completed within a regular school year with little to no modification. The Guide to Implementing AF, Grade 4 includes a scope and sequence. “Not every lesson is entirely focused on grade level standards, and, therefore, some lessons can be used for either remediation or enrichment.” As designed, the instructional materials can be completed in 135 days. One day is provided for each lesson and one day is allotted for each unit assessment.

• There are 10 units with 131 lessons in total.
• The Guide to Implementing Achievement First Mathematics Grade 4 identifies lessons as either R (remediation), E (enrichment), or O (on grade level). There is one lesson identified as R (remediation), zero lessons identified as E (enrichment), and 130 lessons identified as O (on grade level).
• There are 4 days for Post-Assessments.

According to The Guide to Implementing Achievement First Mathematics Grade 4, each lesson is designed to be completed in 90 minutes. Each lesson consists of three parts:

• Math Lesson (60 min)
• Math Stories (20 min)
• Practice/Cumulative Review (10 min)

###### Overview of Gateway 2

### Rigor & the Mathematical Practices

The materials reviewed for Achievement First Mathematics Grade 4 meet expectations for rigor and balance and practice-content connections. The materials reflect the balances in the Standards and help students develop conceptual understanding, procedural skill and fluency, and application. The materials make meaningful connections between the Standards for Mathematical Content and the Standards for Mathematical Practice (MPs).

##### Gateway 2 Meets Expectations

#### Criterion 2.1: Rigor and Balance

Materials reflect the balances in the Standards and help students meet the Standards’ rigorous expectations, by giving appropriate attention to: developing students’ conceptual understanding; procedural skill and fluency; and engaging applications.

The materials reviewed for Achievement First Mathematics Grade 4 meet expectations for rigor. The materials develop conceptual understanding of key mathematical concepts, give attention throughout the year to procedural skill and fluency, spend sufficient time working with engaging applications of mathematics, and do not always treat the three aspects of rigor together or separately.

##### Indicator 2a

Materials develop conceptual understanding of key mathematical concepts, especially where called for in specific content standards or cluster headings.
The instructional materials for Achievement First Mathematics Grade 4 meet expectations for developing conceptual understanding of key mathematical concepts, especially where called for in specific content standards or cluster headings. The materials include problems and questions that develop conceptual understanding throughout the grade level. Examples include:

• Unit 4, Lesson 8, students develop conceptual understanding of 4.NBT.5, as they use place value blocks to help them solve multi-digit multiplication problems. In Problem of the Day, “Problem: A video store display shelf has DVDs stacked in 3 rows. There are 246 videos in each row. How many videos can the shelves hold? TT: How can we represent this problem with an equation? We can write 246 videos x 3 rows = K total videos. Add to VA. Why does that work? It works because this is a problem about equal groups. In this problem, we have 3 groups—the rows—with 246 DVDs in each. We need to figure out the total number of DVDs. We can do this with multiplication. Today we’re going to solve 2, 3, and 4-digit multiplication with place value blocks. Work with your partner to solve this equation with place value blocks.”
• Unit 6, Lesson 6, students develop conceptual understanding of 4.NF.1, as they use tape diagrams and number lines to find equivalent fractions. In Workshop, Problem 2, “Markette is using a number line to figure out how many sixths are equal to $$\frac{2}{3}$$. She tells her partner, ‘We should partition each interval on our number line into 3 new parts, because 3 + 3 is 6.’ Is Markette’s strategy reasonable? Explain on the lines below. You may use pictures, number sentences, or number lines to help you.”
• Unit 7, Lesson 6, students develop conceptual understanding of 4.NF.7, compare two decimals to the hundredths by reasoning about their size. In the Workshop, Problem 1, students use visual models and number lines to support their reasoning about comparisons.
“For each problem, shade each decimal amount on the given grids and plot them on the number line. Then use those models to compare the decimals using <, >, or =. 0.2 ___ 0.19.”

The materials provide opportunities for students to independently demonstrate conceptual understanding throughout the grade. Examples include:

• Unit 2, Lesson 4, students demonstrate conceptual understanding of 4.NBT.2, as they use a provided place value chart and their knowledge of place value to determine the reasonableness of a provided answer. Independent Practice, Problem 2, “Kate used the place value chart to write the number below in standard form. Is Kate’s work correct? Explain why or why not on the lines below.”
• Unit 6, Lesson 1, students demonstrate conceptual understanding of 4.NF.3, as they draw a visual model and write an equation to solve a problem. Independent Practice, Problem 1, “Terrell is keeping track of his running for the week. Draw a visual model and write an addition equation to model Terrell’s running plan. How far will he have to run at the end of the week?”
• Unit 10, Lesson 3, students demonstrate conceptual understanding of 4.MD.5, as they use manipulatives to find the measure of a given angle. In the Exit Ticket, Problem 3, “Using pattern blocks, how can you find the measure of the angle below? Use pictures, words and numbers to show how you found your answer.”

##### Indicator 2b

Materials give attention throughout the year to individual standards that set an expectation for procedural skill and fluency.

The materials for Achievement First Mathematics Grade 4 meet expectations for giving attention throughout the year to individual standards that set an expectation of procedural skill and fluency. The materials include opportunities for students to build procedural skill and fluency in both Math Practice and Cumulative Review worksheets.
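As an aside, the place-value regrouping that Workbook C targets (4.NBT.4) can be sketched mechanically. The short Python sketch below (the function name is ours, not the curriculum's) mirrors the description quoted from the Unit 3 narrative: when a place value has too few to take away, regroup one of the next greater place value into ten of the current one.

```python
def subtract_with_regrouping(a: int, b: int) -> int:
    """Standard-algorithm subtraction (4.NBT.4): work right to left;
    when a place has too few to take away, regroup one unit of the
    next greater place value into ten of the current place."""
    assert a >= b >= 0
    top = [int(d) for d in str(a)][::-1]      # digits, ones place first
    bottom = [int(d) for d in str(b)][::-1]
    bottom += [0] * (len(top) - len(bottom))  # pad the smaller number
    out = []
    for i in range(len(top)):
        if top[i] < bottom[i]:                # regroup ("borrow")
            top[i] += 10
            top[i + 1] -= 1
        out.append(top[i] - bottom[i])
    while len(out) > 1 and out[-1] == 0:      # drop leading zeros
        out.pop()
    return int("".join(str(d) for d in reversed(out)))

# Workbook C, Problem 1: the difference of 51,348 and 22,122
print(subtract_with_regrouping(51348, 22122))  # 29226
```

On Problem 1 the regrouping step fires once, in the thousands place (1 < 2), which is exactly the case the narrative describes.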
The materials do not include collaborative or independent games, math center activities, or non-paper/pencil activities to develop procedural skill and fluency. Math Practice is intended to “build procedural skill and fluency” and occurs four days a week for 10 minutes. There are eight Practice Workbooks in Achievement First Mathematics, Grade 4. One workbook, C, contains resources to support the procedural skill and fluency standard 4.NBT.4: Fluently add and subtract multi-digit whole numbers using the standard algorithm. In the Guide to Implementing Achievement First Mathematics Grade 4, teachers are provided with guidance for which workbook to use based on the unit of instruction. Examples include:

• Practice Workbook C, Problem 1, students solve subtraction problems. “Find the difference. 51,348 and 22,122. Use the standard algorithm to solve.” (4.NBT.4)
• Practice Workbook C, Problem 6, students solve subtraction problems. “Use a strategy that makes sense to you to solve. 59,637 – 34,721 = ___.” (4.NBT.4)
• Practice Workbook C, Problem 9, students practice subtraction. “56,432 – 33,224 = _____.” (4.NBT.4)

Cumulative Reviews are intended to “facilitate the making of connections and build fluency or solidify understandings of the skills and concepts students have acquired throughout the week to strategically revisit concepts, mostly focused on major work of the grade.” Cumulative Review occurs every Friday for 20 minutes. Examples include:

• Unit 4, Cumulative Review 4.5, Problem 4, students solve subtraction problems. “Find the difference. Show your work. 6,241 - 1,366 = _____.” (4.NBT.4)
• Unit 4, Cumulative Review 4.5, Problem 2, students solve addition problems using the standard algorithm. “Camden Yard sold 5,864 tickets and 2,549 student tickets to last Friday's Baltimore Orioles baseball game. How many total tickets were sold for last Friday’s game?” (4.NBT.4)
• Unit 6, Cumulative Review 6.4, Problem 5, students solve a subtraction problem.
“Find the difference. 2,301 - 1,976 = ___” (4.NBT.4)

##### Indicator 2c

Materials are designed so that teachers and students spend sufficient time working with engaging applications of the mathematics.

The materials reviewed for Achievement First Mathematics Grade 4 meet expectations for being designed so that teachers and students spend sufficient time working with engaging applications of the mathematics. Students are given multiple opportunities to engage in real-world applications, especially during Math Stories, which include both guided questioning and independent work time, and Exit Tickets to independently show their understanding.

Materials include multiple routine and non-routine applications of the mathematics throughout the grade level. Examples include:

• Unit 2, Guide to Implementing AF, Math Stories, October, students engage with 4.OA.3 as they solve a multi-step word problem posed with whole numbers and having whole-number answers using the four operations in a non-routine format. Sample Problem 13, “Jamal and Sarah are playing a game with 5 counters. On each person’s turn they can take either 1 or 2 counters from the pile. The player with the last turn loses. If Jamal starts the game and takes 1 counter away, what are two possible outcomes for the game?”
• Unit 7, Lesson 11, Problem of the Day, students engage in a routine problem with 4.NF.5 as they apply their understanding of fractions. “Victoria finds a multicolored quilt that exactly matches the colors in her bedroom. Victoria is so excited that she phones her mom to tell her about the quilt. This is what Victoria tells her mom: The quilt is a rectangle with one hundred squares; $$\frac{36}{100}$$ of the quilt is made of red and yellow squares, $$\frac{5}{10}$$ of the quilt is blue squares, $$\frac{14}{100}$$ of the quilt is green squares. Victoria's mom is very excited about the new quilt. She asks Victoria what total fraction of the quilt is made of blue and green squares.
What fraction should Victoria tell her mom is the total fraction of the quilt made of blue and green squares? Show all your mathematical thinking.”
• Unit 9, Guide to Implementing AF, Math Stories, May, students engage with 4.NF.1 as they apply the use of a visual fraction model to generate equivalent fractions and solve a non-routine problem. “Akilah draws two rectangles of the same size, and divides them into a different number of total parts. In the first rectangle, she colors in 4 parts. In the second rectangle, she colors in 5 parts. The colored areas are equal. What fractions could she have divided her rectangles up into?”
• Unit 10, Guide to Implementing AF, Math Stories, June, students engage with 4.MD.2 as they use the four operations to solve a routine word problem involving money. Sample Problem 1, “There are 3,418 students at Brookside Elementary and 2,192 students at La Playa Elementary. All of the students are going on a field trip to the Natural History Museum, where tickets for children are $3 each. The schools have a budget of $20,000 to spend on field trips. How much money will they have left over?”

Materials provide opportunities for students to independently demonstrate routine and non-routine applications of the mathematics throughout the grade level. Examples include:

• Unit 1, Lesson 11, Independent Practice, students engage with 4.OA.2 as they solve a routine word problem involving multiplicative comparisons. Problem 2, “Kenny is 56 years old. His sister is 7 years old. How many times younger is Kenny’s sister than him?”
• Unit 2, Cumulative Review 2.2, Problem 4, students engage in non-routine application of 4.OA.4 as they use their knowledge of factor pairs to solve a problem in more than one way. “Yvette is making bracelets for her friends. Each bracelet will have an equal number of charms. She has 24 charms and she wants each bracelet to have at least 2 charms, but no more than 8 charms.
Part A: Which is NOT a way that Yvette can make her bracelets? a) 8 bracelets with 3 charms on each bracelet. b) 6 bracelets with 4 charms on each bracelet. c) 4 bracelets with 6 charms on each bracelet. d) 4 bracelets with 8 charms on each bracelet. Part B: Are there any other ways that work for Yvette to make her bracelets? Show your work below.”
• Unit 6, Lesson 22, Independent Practice, students engage with 4.NF.3 as they solve routine word problems involving addition and subtraction of fractions. Problem 1, “A cabinet has shelves that are $$11\frac{1}{4}$$ inches tall. Mike stacked a speaker that is $$4\frac{3}{4}$$ inches tall on top of a DVD player that is $$3\frac{2}{4}$$ inches tall. How much space is left between the objects and the top of the shelf?”

##### Indicator 2d

The three aspects of rigor are not always treated together and are not always treated separately. There is a balance of the three aspects of rigor within the grade.

The materials reviewed for Achievement First Mathematics Grade 4 meet expectations in that the three aspects of rigor are not always treated together and are not always treated separately. There is a balance of the three aspects of rigor within the grade. The instructional materials include opportunities for students to independently demonstrate the three aspects of rigor. Examples include:

• Unit 2, Cumulative Review 2.2, students demonstrate conceptual understanding of factors by determining whether there are additional factors for a number within 100. Problem 7, “Marco and Desiree made 56 cookies for a bake sale. They will put an equal amount of cookies into bags. Marco and Desiree want to put more than 2 cookies but fewer than 10 cookies into each bag. Desiree says that they can only put 7 cookies into 8 bags or 8 cookies into 7 bags. Marco thinks there are more ways to put an equal number of cookies into bags. Who is right?
Why are they right?” (4.OA.4)
• Practice Workbook D, students develop procedural skill and fluency as they multiply whole numbers. Problem 14, “Calculate the product of 64 × 35.” (4.NBT.5)
• Unit 6, Lesson 23, Exit Ticket, students apply their understanding of fraction multiplication as they solve word problems. “Edwin uses $$\frac{3}{4}$$ of a teaspoon of baking powder for each batch of muffins he makes. He needs to make 3 batches for his Cub Scout meeting and 4 batches for his study group. How many teaspoons of baking powder will Edwin need?” (4.NF.4)

Multiple aspects of rigor are engaged simultaneously to develop students’ mathematical understanding of a single topic/unit of study throughout the materials. Examples include:

• Unit 7, Lesson 11, Independent Practice, students apply their conceptual understanding of adding decimals to solve a real-world problem. Part 1, “Mrs. Evans, the physical education teacher, is forming relay teams to help raise money for cancer research. There must be two students on each relay team. To determine the teams, Ms. Evans uses the students’ practice times from the last physical education class. Ms. Evans wants the teams to be as evenly matched as possible so they have a fair chance to win the race. What would the best combination of students be for each of the relay teams? Show all your mathematical thinking.” (4.NF.5, 4.NF.7)
• Unit 5, Lesson 7, Exit Ticket, students apply their conceptual understanding of multiplication to solve a two-step word problem using tape diagrams and equations. Problem 1, “Draw a tape diagram to model the following equation. Create a word problem. Solve for the value of the variable. (A × 2) + 4,892 = 6,392.” (4.OA.3)
• Unit 8, Lesson 4, Independent Practice, students apply their conceptual understanding of place value to solve a problem involving the value of coins. Problem 6, “Which is more, 68 dimes or 679 pennies?
Prove with a place value chart and then explain on the lines below.” (4.MD.2)

#### Criterion 2.2: Math Practices

Materials meaningfully connect the Standards for Mathematical Content and Standards for Mathematical Practice (MPs).

The materials reviewed for Achievement First Mathematics Grade 4 meet expectations for practice-content connections. The materials meaningfully connect the Standards for Mathematical Content and the Standards for Mathematical Practice (MPs).

##### Indicator 2e

Materials support the intentional development of MP1: Make sense of problems and persevere in solving them; and MP2: Reason abstractly and quantitatively, for students, in connection to the grade-level content standards, as expected by the mathematical practice standards.

The materials reviewed for Achievement First Mathematics Grade 4 meet expectations for supporting the intentional development of MP1: Make sense of problems and persevere in solving them; and MP2: Reason abstractly and quantitatively, for students, in connection to the grade-level content standards, as expected by the mathematical practice standards. The Standards for Mathematical Practice are identified and incorporated within mathematics content throughout the grade level. The Mathematical Practices are listed in the Unit Overviews as well as at the beginning of each lesson.

There is intentional development of MP1 to meet its full intent in connection to grade-level content. Examples include:

• Unit 3, Lesson 3, Independent Work, Question 8, students engage with MP1 as they make sense of a problem involving multi-digit addition of whole numbers. “Milos’s family is keeping track of their steps each week. So far his sister has walked 15,678 steps, his father has walked 123,098 steps, and his mother has walked 435,607 steps. If Milos has walked twice as many steps as his sister, how many total steps has the family walked altogether?”
• The Unit 6 Overview outlines the intentional development of MP1.
“Students apply the meaning of area and perimeter in order to interpret word problems in which area and perimeter are implicitly stated. Students practice division and multiplication strategies in the context of word problems. Calculations can be tedious and long, and students must continue to persevere through many steps in order to solve problems. Students employ a variety of problem solving skills in order to solve conversion problems. They must use ratios and calculations, but also determine which operation to use to convert.”
• Unit 5, Lesson 4, “How will embedded MPs support and deepen the learning?”, teachers are provided with explanations of connections between content and practices: “Students continue to practice SMP 1 as they plan to represent and solve multi-step problems by identifying all of the values and relationships between the values in the word problem, paying close attention to the questions asked in the word problem.”
• Unit 8, Lesson 6, Workshop, Problem 3, students engage with MP1 as they work through multi-step word problems that require them to apply the concept of elapsed time. “Ms. Johnson has 20 minute meetings with students during the school day. She has a five-minute break between meetings. She does not have a break before her first meeting or after her last meeting. If she starts meetings at 7:45am, and has 4 meetings scheduled, what time will she be finished?”

There is intentional development of MP2 to meet its full intent in connection to grade-level content. Examples include:

• The Unit 2 Overview outlines the intentional development of MP2. “When students start to work with numbers in greater place values such as the hundred thousands and ten thousands, they use abstract reasoning to understand quantitative meanings.
Since there are no place value blocks big enough to show a hundred thousand and they often can’t draw quantities this large, they must apply patterns of the place value system to logically understand the magnitude of larger numbers. SMP 2 is developed in lessons 2, 3, 5, & 9-11.”
• Unit 2, Lesson 3, Independent Practice, Question 5, students engage with MP2 as they read and write multi-digit whole numbers using base-ten numerals, number names, and expanded form. “Lee and Gary visited South Korea. They exchanged their dollars for South Korean bills. Lee received 5 thousand dollar bills, 6 hundred dollar bills, 9 ten dollar bills, and 5 one dollar bills. What was Lee’s total amount of money in standard, written, and expanded form?”
• The Unit 6 Overview describes the development of MP2. “The true understanding of the fundamental meaning of a fraction is abstract reasoning. Students must learn to take an abstract representation of two numbers (a numerator and a denominator) and give it a new meaning referring to a part of a whole – a value less than one. Through visual models and many examples, students should begin to understand that a fraction is a quantity in itself that has a position on a number line. This is extremely abstract quantitative reasoning. Students reason abstractly when they compare fractions of different wholes. The idea that a fraction can have a different value based on the size of its whole, but the same whole is implied when it is not specified, is a challenging abstract concept for students to grasp.”

##### Indicator 2f

Materials support the intentional development of MP3: Construct viable arguments and critique the reasoning of others, for students, in connection to the grade-level content standards, as expected by the mathematical practice standards.
The materials reviewed for Achievement First Mathematics Grade 4 meet expectations for supporting the intentional development of MP3: Construct viable arguments and critique the reasoning of others, for students, in connection to the grade-level content standards, as expected by the mathematical practice standards. There is intentional development of MP3 to meet its full intent in connection to grade-level content. Examples include:

• Unit 2, Lesson 16, Workshop, students critique the reasoning of others and construct a viable argument as they evaluate the estimation strategies used by other students to determine who is correct. Problem 2, “Patricia said the best way to estimate the solution to 8,421 - 462 is to round each number to the nearest hundred. Matthew said the best way to estimate is to round each number to the nearest thousand. Who is correct? Explain your answer.”
• Unit 7, Lesson 4, Exit Ticket, Problem 3, “Patrice is measuring the rainfall for December. On Monday there was 0.09 of an inch of rainfall. On Tuesday there was 0.9 of an inch of rainfall. Patrice tells his sister that it rained the same amount on Monday and Tuesday. Tell whether or not Patrice is correct on the lines below.”
• Unit 7, Lesson 8, Discussion, teachers are provided with guidance and questions to engage students in critiquing the reasoning of others. “Last year I had a scholar who told me that when you compare decimals it's like the opposite of comparing whole numbers. What do you think they meant by that? How are comparing whole numbers and comparing decimals similar?”
• Implementation Guide, Unit 7, Math Stories, February/March, students are provided with an opportunity to share their math thinking as they solve problems involving fractions. Problem 12, “Yanira ran $$\frac{3}{4}$$ miles each day for 6 days.
How many miles did she run over the course of 6 days?” Teachers are provided with three specific protocols to assist them in helping students represent and/or solve the problem, including sentence stems, for example: “First I put ____ because the story ____. Then I put ____ because in the story ____. Finally, I put ____ because in the story/we need to figure out ____.”
• Unit 10, Lesson 4, Exit Ticket, students construct an argument and critique the reasoning of others based on their knowledge of shapes. Problem 3, “Carlos is helping his brother with homework. He tells his brother that if you want to draw an obtuse angle, you should always use the bottom set of degrees on the protractor arc, and if you want to draw an acute angle you should always use the top set. Is Carlos’s reasoning accurate? Explain why or why not on the lines below.”

##### Indicator 2g

Materials support the intentional development of MP4: Model with mathematics; and MP5: Use appropriate tools strategically, for students, in connection to the grade-level content standards, as expected by the mathematical practice standards.

The materials reviewed for Achievement First Mathematics Grade 4 meet expectations for supporting the intentional development of MP4: Model with mathematics; and MP5: Use appropriate tools strategically, for students, in connection to the grade-level content standards, as expected by the mathematical practice standards. There is intentional development of MP4 to meet its full intent in connection to grade-level content. Examples include:

• Math Stories Guide, Promoting Reasoning through the Standards for Mathematical Practice, MP4, “Math Stories help elementary students develop the tools that will be essential to modeling with mathematics.
In early elementary, students become familiar with how representations like equations, manipulatives, and drawings can represent real-life situations.” Within the K-4 Math Stories Representations and Solutions Agenda, students are given time to represent, retell, and solve the problem on their own.
• The Unit 1 Overview describes the intentional development of MP4. “Students model real-world mathematical situations using equations and tables with patterns. Students use tables, pictures, and mathematical formulas to solve problems involving patterns, multiplicative comparisons, and to determine factors, and classify numbers as prime or composite. SMP4 is developed in lessons 2 and 5 - 9.”
• The Unit 4 Overview provides guidance for connecting MP4 with area, perimeter, and solving conversion problems. “Students interpret word problems referring to area and perimeter and represent the information. When students solve conversion problems, they use many different types of models to determine how to convert, or show how they converted. They use the ratio to draw appropriate pictures, create tables, and write equations that use mathematics to model how to convert from one unit of measurement to another. Students also use benchmarks to understand units of measurement, which is a way of modeling a mathematical concept with real-world objects.”
• Unit 6, Lesson 18, Pose the Problem, students subtract mixed numbers by using fraction tiles and drawings. “Moira ordered 6 pizzas for the Student Council meeting. At the end of the meeting there were $$2\frac{3}{4}$$ pizzas left. How many pizzas did the student council eat during the meeting?”

There is intentional development of MP5 to meet its full intent in connection to grade-level content. Examples include:

• The Unit 2 Overview describes the intentional development of MP5. “Students choose between many methods when working with place value.
In almost every aspect of this unit (place value relationships, expanding, reading, writing, comparing and rounding numbers, and non-standard partitioning) students have a variety of tools that could assist them. They can use place value charts, place value blocks, pictures of dots in each place value, pictures of place value blocks, organized lists, etc. to solve these types of problems. They must determine which tools are most effective for certain tasks.”

• Unit 6, Lesson 6, Exit Ticket Question 2, students use various tools, models and representations to show the meaning of fractions and different ways of showing fractional quantities. “Delilah is using a number line to figure out how many eighths are equal to 3/4. She tells her partner, ‘We should partition each interval on our number line into 4 new parts, because 4 + 4 is 8.’ Is Delilah’s strategy reasonable? Explain on the lines below. You may use pictures, number sentences, or number lines to help you.”

• Unit 9, Lesson 4, Independent Practice Question 6, students use square corners and rulers to determine types of lines and angles. “Can a triangle have two right angles? Explain and draw an example to prove your thinking.”

At times, the materials are inconsistent. The Unit and Lesson Overview narratives describe explicit connections between the MPs and content, but lessons do not always align to the stated purpose. The materials do not provide students with opportunities or guidance to identify and use relevant external mathematical tools and resources, such as digital content located on a website.

##### Indicator 2h

Materials attend to the intentional development of MP6: Attend to precision; and attend to the specialized language of mathematics for students, in connection to the grade-level content standards, as expected by the mathematical practice standards.
The materials reviewed for Achievement First Mathematics Grade 4 meet expectations for supporting the intentional development of MP6: Attend to precision; and attend to the specialized language of mathematics, for students, in connection to the grade-level content standards, as expected by the mathematical practice standards.

There is intentional development of MP6 to meet its full intent in connection to grade-level content. Many problems present students with the opportunity to attend to precision within the mathematics and the reasoning of the answer. Examples include:

• Unit 2, Lesson 1, Exit Ticket, students solve problems requiring them to demonstrate an understanding of the term expanded form. Problem 3, “What two hundreds is seven hundred twelve between? Write seven hundred twelve in standard form. Write seven hundred twelve in expanded form.”

• Unit 8, Lesson 2, Independent Practice Question 1 (Bachelor Level), students use precision to solve problems related to volume and capacity. “The capacity of each pitcher in the teacher work room is 3 quarts. Right now, each pitcher contains 1 quart 3 cups of liquid. If there are 3 pitchers in the room, how much more total liquid can the pitchers hold?”

• In the Unit 9 Overview, “Students attend to precision when naming and identifying lines, angles and triangles based on names using points.”

The instructional materials attend to the specialized language of mathematics. The materials use precise and accurate mathematical terminology. Examples include:

• Unit 4, Lesson 15, Try One More, teachers are provided with instructions to explicitly teach the term remainder. “We do have 1 leftover in this problem. This is called our remainder. A remainder is the amount leftover after dividing a number when one number does not divide evenly into another number. What was our answer before the remainder and why?” Students might say, “114 because we have 1 hundred + 1 ten + 4 ones.” The teacher replies, “Yes.
Now our answer becomes 114 R1 because our answer is 114 with a remainder of 1.”

• Unit 6, Lesson 1, teachers are provided with guidance in reviewing vocabulary related to fractions. The introduction, “Before you get started, let’s review some key fraction vocabulary. In a fraction, what does the denominator tell us? The denominator tells us the total number of parts in a whole. In a fraction, what does the numerator tell us? The numerator tells us the total number of parts being referred to.”

• Unit 9, Lesson 4, the introduction provides teachers with guidance in introducing terminology related to triangles through a series of questions. “We have learned about different angle types, which will help us in our work today. What are the different types of angles? Today you will use the different types of angles to help you classify triangles!” Students work to observe and note information about triangles. The teacher prompts, “What did you observe about triangles? Triangles are classified by two names, kind of like how you have a first name and a last name. One name tells us about their angles, and one name tells us about their sides.”

##### Indicator 2i

Materials support the intentional development of MP7: Look for and make use of structure; and MP8: Look for and express regularity in repeated reasoning, for students, in connection to the grade-level content standards, as expected by the mathematical practice standards.

The materials reviewed for Achievement First Mathematics Grade 4 meet expectations for supporting the intentional development of MP7: Look for and make use of structure; and MP8: Look for and express regularity in repeated reasoning, for students, in connection to grade-level content standards, as expected by the mathematical practice standards.

There is intentional development of MP7 to meet its full intent in connection to grade-level content.
Examples include:

• Unit 2, Lesson 5, Independent Practice, students solve problems by looking for structures based on place value. Problem 4, “Tiana drew 12 hundreds blocks on her paper. How many tens is that equal to? a. 1,200 b. 120 c. 12 d. 12,000.”

• Unit 5, Lesson 4, Independent Practice, Question 2, students look for structure as they solve problems involving parts adding up to a whole by using tape diagrams to represent the situations. “Malia is keeping track of the subway riders on Saturday. At the first stop, some people got on the train. At the second stop, three times more people got on the train than at the first stop. At the last stop some people got off the train. How many people are on the train now?”

• Unit 8, Lesson 5, Exit Ticket, Question 2, students solve word problems involving adding enough of a smaller unit in order to regroup in the context of money amounts and determining change. “Meiling needed $5.35 to buy a ticket to a show. In her wallet, she found 2 dollar bills, 11 dimes, and 5 pennies. How much more money does Meiling need to buy the ticket?”

There is intentional development of MP8 to meet its full intent in connection to grade-level content. Examples include:

• Unit 5, Lesson 8, Exit Ticket, Question 1, students look for regularity in repeated reasoning as they solve problems by interpreting and labeling a representation such as a tape diagram. “Draw a tape diagram to model the following equation. Create a word problem. Solve for the value of the variable. (A × 2) + 4,892 = 6,392.”

• Unit 6, Lesson 16, Independent Practice, Question 2, students see regularity in regrouping as they add and subtract mixed numbers. “Khalia and Jermaine are in a pie-eating contest. After 5 minutes, Khalia ate 3 3/4 pies and Jermaine ate 4 3/4.
How much total pie did they consume altogether?”

• Unit 10, Lesson 2, Independent Practice, students look for repeated calculations as they solve a problem involving a circle divided into angles with given measurements. Problem 3, “Joanne cut a round pizza into equal wedges with angles measuring 30 degrees. How many pieces of pizza does she have?”

### Usability

The materials reviewed for Achievement First Mathematics Grade 4 do not meet expectations for Usability. The materials partially meet expectations for Criterion 3.1, Teacher Supports, partially meet expectations for Criterion 3.2, Assessment, and do not meet expectations for Criterion 3.3, Student Supports.

##### Gateway 3 Does Not Meet Expectations

#### Criterion 3.1: Teacher Supports

The program includes opportunities for teachers to effectively plan and utilize materials with integrity and to further develop their own understanding of the content.

The materials reviewed for Achievement First Mathematics Grade 4 partially meet expectations for Teacher Supports. The materials provide teacher guidance with useful annotations and suggestions for enacting the materials, include standards correlation information that explains the role of the standards in the context of the overall series, and provide a comprehensive list of supplies needed to support instructional activities. The materials contain adult-level explanations and examples of the more complex grade-level concepts, but do not contain adult-level explanations and examples of concepts beyond the current grade so that teachers can improve their own knowledge of the subject. The materials provide explanations of the instructional approaches of the program but do not contain identification of the research-based strategies.
##### Indicator 3a

Materials provide teacher guidance with useful annotations and suggestions for how to enact the student materials and ancillary materials, with specific attention to engaging students in order to guide their mathematical development.

The materials reviewed for Achievement First Mathematics Grade 4 meet expectations for providing teacher guidance with useful annotations and suggestions for how to enact the student materials and ancillary materials, with specific attention to engaging students in order to guide their mathematical development. Teacher guidance is found throughout the materials in the Implementation Guides, Unit Overviews, and individual lessons.

Materials provide comprehensive guidance that will assist teachers in presenting the student and ancillary materials. Examples include:

• The Guide to Implementing AF Math provides a Program Overview for the teacher with information on the program components and scope and sequence. This includes descriptions of the types of lessons, Math Stories, Math Practice, and Cumulative Review.

• The Math Stories Guide (K-4) provides a framework for problem solving.

• Each Unit Overview includes a section called “Key Strategies” that describes strategies that will be utilized during the unit.

• The Teacher’s Guide supports whole group/partner discussion, ask/listen fors, common misconceptions and errors, etc.

• In the narrative information for each lesson, there is information such as “What do students have to get better at today? Where will time be focused/funneled?”

Materials include sufficient and useful annotations and suggestions that are presented within the context of the specific learning objectives. Each lesson includes anticipated challenges, misconceptions, key points, sample dialogue, and exemplar student responses. Examples from Unit 7, Decimals, Lesson 10 include:

• “What do students have to get better at today?
Students add fractions with denominators of 10 and 100 by changing both fractions to have like denominators or by relating them to decimals in expanded form. Students may still be relying on visual models today, though some may be able to ‘just know’ or use place value knowledge to add.”

• “What is new and/or hard about that? This is challenging because scholars may not have solidified adding or subtracting fractions from the previous unit. Scholars may struggle to regroup accurately, and when converting from mixed numbers to decimals may struggle to accurately change from fractions to decimals, particularly when needing to account for whole numbers. Lastly, scholars must be able to convert fractional tenths to hundredths (or understand how to accurately combine with tenths and hundredths) which could be another area for misunderstanding, as scholars can forget to multiply or divide the numerator by ten.”

• “Exemplar Student Response: ‘When I added 9/10 and 18/100, I converted 9/10 to hundredths because we can only add fractions if they have the same denominator. I converted 9/10 to hundredths by multiplying the top and the bottom of the fraction by 10; I got 90/100. Then, I added the numerators together and got 108/100. I regrouped because 108/100 is a fraction greater than 1. I thought about how many wholes and how many fractional parts I have; I know that 100/100 makes 1 whole which leaves 8/100 leftover. I wrote my answer first as a fraction, 108/100, and then as a decimal, 1.08.’”

• “Introduction: State the aim. Connect it to their lives and prior knowledge. Discuss how they will be working on it today.
Plan a problem and questions to uncover key points and address common errors and misconceptions.”

• “Mid-Workshop Interruption: Share a strategy you’d like more students to use OR clarify a major misconception.”

• “Discussion: Discuss a major misconception OR have students share their work in CPA order OR ask students to apply their learning in a new way OR direct students to complete a pre-planned written response followed by a share from 1-2 students.”

• “Closing & Exit Ticket: A quick debrief to clear up confusion OR cement a key point or big idea from the lesson.”

Each lesson includes both “What” and “How” Key Point sections that describe what students should know and be able to do and how they will do it. Examples from Unit 7, Decimals, Lesson 10 include:

• “What Key Points: What should students know and be able to do? I can accurately add tenths and hundredths as fractions. When adding fractions, they must have the same denominator because we can only combine fractions with the same-sized pieces. I can accurately convert between fractions and decimals.”

• “How Key Points: How will they do it? I can accurately add tenths and hundredths by converting tenths to hundredths (if needed) and then adding the numerators (number of pieces) of each fraction and writing my sum over the given denominator.”

##### Indicator 3b

Materials contain adult-level explanations and examples of the more complex grade-level/course-level concepts and concepts beyond the current course so that teachers can improve their own knowledge of the subject.

The materials reviewed for Achievement First Mathematics Grade 4 partially meet expectations for containing adult-level explanations and examples of the more complex grade/course-level concepts and concepts beyond the current course so that teachers can improve their own knowledge of the subject. There is very little reference or support for content in future courses.
Materials contain adult-level explanations and examples of the more complex grade/course-level concepts so that teachers can improve their own knowledge of the subject. Examples include:

• Unit Overviews provide thorough information about the content of the unit, which often includes definitions of terminology, explanations of strategies, and the rationale for incorporating a process. Unit 3 Overview, “Research and practice in the field of mathematics education have shown that there are alternative algorithms and strategies that students develop, that help them maintain a focus on understanding place value and the operations and, at the same time, are easily generalized and efficient. Although each student may primarily use one strategy for each operation, in Investigations (and now in Common Core), students are expected to study more than one algorithm or strategy for each operation. Students study a variety of approaches for the following three reasons: Different algorithms and strategies provide access to analysis of different mathematical relationships. Access to different algorithms and strategies leads to flexibility in solving problems. One method may be better suited to a particular problem. Students learn that algorithms are ‘made objects’ that can be compared, analyzed, and critiqued according to a number of criteria.”

• The Unit Overview includes an Appendix titled “Teacher Background Knowledge” which includes a copy of the relevant pages from the Common Core Math Progression documents, which include on-grade-level information.

Materials do not contain adult-level explanations and examples of concepts beyond the current course so that teachers can improve their own knowledge of the subject. Examples include:

• The Common Core Math Progression documents in the Appendix are truncated to the current grade level and do not go beyond the current course.
##### Indicator 3c

Materials include standards correlation information that explains the role of the standards in the context of the overall series.

The materials reviewed for Achievement First Mathematics Grade 4 meet expectations for including standards correlation information that explains the role of the standards in the context of the overall series. Correlation information is present for the mathematics standards addressed throughout the grade level/series. Examples include:

• Guide to Implementing AF Grade 4, Program Overview, “Scope and Sequence Detail is designed to help teachers identify the standards on which each lesson within a unit is focused, whether on grade level or not. You will find the daily lesson aims within each unit and the content standards addressed within that lesson. A list of the focus MPs for each lesson and unit and details about how they connect to the content standards can be found in the Unit Overviews and daily lesson plans.”

• The Program Overview informs teachers “about how to ensure scholars have sufficient practice with all of the Common Core State Standards. Standards or parts thereof that are bolded are addressed within a lesson but with limited exposure. It is recommended that teachers supplement the lessons addressing these standards by using the AF Practice Workbooks to ensure mastery for all students. Recommendations for when to revisit these standards during Math Practice and Friday Cumulative Review are noted in the Practice section of each unit.”

• The Unit Overview includes a section called Identify Desired Results: Identify the Standards, which lists the standards addressed within the unit and previously addressed standards that relate to the content of the unit.

• In the Unit Overview, the Identify the Narrative section provides rationale about the unit’s connections to previous standards for each of the lessons. Future grade-level content is also identified.
• The Unit Overview provides a table listing Mathematical Practices connected to the lessons and identifies whether the MP is a major focus of the unit.

• At the beginning of each lesson, each standard is identified.

• In the lesson overview, prior knowledge is identified, so teachers know what standards are linked to prior work.

Explanations of the role of the specific grade-level/course-level mathematics are present in the context of the series. In the Unit Overview, the Identify the Narrative section provides the teacher with information to unpack the learning progressions and make connections between key concepts. Lesson Support includes information about connections to previous lessons and identifies the important concepts within those lessons. Examples include:

• Unit 4 Overview, “In 5th grade, students must explain patterns in the number of zeroes of the product when multiplying by multiples of 10. This directly relates to strategies and understanding students use in 4th grade to multiply multiples of 10. Students learn rules with zeroes based on place value. These rules also extend to multiplying and dividing decimals by multiples of 10 in 5th grade. They must use this same pattern of zeroes and changes in place value to explain why decimal points move certain amounts of places in certain directions when multiplying and dividing by multiples of 10.”

• Unit 10 Overview, “Angle measurements will not come up as a formal part of the math curriculum again until seventh grade. Although there is a large gap in time between this unit and seventh grade, the seventh grade standards rely heavily on students’ skills and knowledge from this unit in fourth grade. Understanding of the concept of angles, the additive property of angles, and measurements of benchmark angles are foundational for seventh grade angle content.
In 7th grade, students use supplementary angles, complementary angles and adjacent angles in problems and write and solve equations to find measurements of unknown angles. They also start to work with vertical angles and use them to find angle measurements. The strong ties between the 4th grade and 7th grade angle measurement standards, and the fact that students will not revisit this content for three years, make it crucial for students to deeply understand the fourth grade angle measurement material.”

##### Indicator 3d

Materials provide strategies for informing all stakeholders, including students, parents, or caregivers about the program and suggestions for how they can help support student progress and achievement.

The materials reviewed for Achievement First Mathematics Grade 4 provide strategies for informing all stakeholders, including students, parents, or caregivers about the program and suggestions for how they can help support student progress and achievement.

• The Unit Overview includes a parent letter in both English and Spanish for each unit that includes information around what the students are working on and example strategies students will use. The letter includes information about common mistakes that parents can watch for as well as links to websites that can provide assistance.

• There is also a suggestion related to the Unit Overview, “This guide can be printed and sent home to families so that parents/guardians can better support their scholars with homework.”

##### Indicator 3e

Materials provide explanations of the instructional approaches of the program and identification of the research-based strategies.

The materials reviewed for Achievement First Mathematics Grade 4 partially meet expectations for providing explanations of the instructional approaches of the program and identification of the research-based strategies.

Materials explain the instructional approaches of the program.
• The Implementation Guide states, "Our program aims to see the mathematical practices come to life through the shifts (focus, coherence, rigor) called for by the standards. For students to engage at equal intensities weekly with all 3 tenets, we structured our program into three main daily components Monday-Thursday: Math Lesson, Math Stories and Math Practice. Additionally, students engage in Math Cumulative Review each Friday in order for scholars to achieve the fluencies and procedural skills required."

• The Implementation Guide includes descriptions of “Math Lesson Types.” Descriptions are included for Game Introduction Lesson, Task Based Lesson, Math Stories, and Math Practice. Each description includes a purpose and a table that includes the lesson components, purpose, and timing.

Materials do not include and reference research-based strategies.

• The materials do not explicitly name any strategies as research-based strategies.

##### Indicator 3f

Materials provide a comprehensive list of supplies needed to support instructional activities.

The materials reviewed for Achievement First Mathematics Grade 4 meet expectations for providing a comprehensive list of supplies needed to support instructional activities. Each lesson includes a list of materials specific to the lesson. Examples include:

• Unit 2, Lesson 2, the Lesson Overview includes, “Materials: all pages of this packet, 10 hundreds chart pages per student or per partner pair, colored pencils, scissors, staples, VA.”

• Unit 9, Lesson 1, the Lesson Overview includes, “Materials: All pages of this packet, VA (stands for visual aid).”

##### Indicator 3g

This is not an assessed indicator in Mathematics.

##### Indicator 3h

This is not an assessed indicator in Mathematics.
#### Criterion 3.2: Assessment

The program includes a system of assessments identifying how materials provide tools, guidance, and support for teachers to collect, interpret, and act on data about student progress towards the standards.

The materials reviewed for Achievement First Mathematics Grade 4 partially meet expectations for Assessment. The materials identify the standards, but do not identify the practices assessed for the formal assessments. The materials provide multiple opportunities to determine students' learning and sufficient guidance to teachers for interpreting student performance, but do not provide suggestions for following up with students. The materials include opportunities for students to demonstrate the full intent of grade-level/course-level standards and practices across the series.

##### Indicator 3i

Assessment information is included in the materials to indicate which standards are assessed.

The materials reviewed for Achievement First Mathematics Grade 4 partially meet expectations for having assessment information included in the materials to indicate which standards are assessed. There are connections identified for standards, but not the mathematical practices.

Materials identify the standards assessed for the formal assessments. Examples include:

• Each Unit Overview provides a chart that identifies CCSS Math Content standards for each item on the Unit Assessment. Occasionally, an individual item on the assessment identifies the standard, but in general, student-facing assessments do not include the standards.

• Each lesson includes an Exit Ticket that aligns with the standard of the lesson.

Materials do not identify the practices assessed for the formal assessments.
##### Indicator 3j

Assessment system provides multiple opportunities throughout the grade, course, and/or series to determine students' learning and sufficient guidance to teachers for interpreting student performance and suggestions for follow-up.

The materials reviewed for Achievement First Mathematics Grade 4 partially meet expectations for including an assessment system that provides multiple opportunities throughout the grade, course, and/or series to determine students' learning and sufficient guidance to teachers for interpreting student performance and suggestions for follow-up. The assessment system provides multiple opportunities to determine students' learning and sufficient guidance to teachers for interpreting student performance, but does not provide suggestions for following up with students. Examples include:

• Assessments include an informal Exit Ticket in each lesson and a formal Unit Assessment for every unit.

• There is guidance, or “look-fors,” for teachers about what the student should be able to do on the assessments.

• All Unit Assessments include an answer key with exemplar student responses.

• Some units include a section titled “Evaluating Student Learning Outcomes” which provides a detailed description of what the student is expected to do.

• There are no strategies or suggestions if students do not demonstrate understanding of the concept, and no next steps based on the results of the assessment.

##### Indicator 3k

Assessments include opportunities for students to demonstrate the full intent of grade-level/course-level standards and practices across the series.

The materials reviewed for Achievement First Mathematics Grade 4 meet expectations for providing assessments that include opportunities for students to demonstrate the full intent of grade-level standards and practices across the series.
There are a variety of question types including multiple choice, short answer, and constructed response. Mathematical practices are embedded within the problems.

Assessments include opportunities for students to demonstrate the full intent of grade-level standards across the series. Examples include:

• Unit 1, Lesson 13, Exit Ticket contributes to the full intent of 4.OA.5 (Generate a number or shape pattern that follows a given rule). “1) Look at the towers and table below and complete the table. 2) Explain how you figured out how many blocks are in tower 4.”

• The Unit 7 Assessment contributes to the full intent of 4.NF.5 (Express a fraction with denominator 10 as an equivalent fraction with denominator 100). Item 10, “Jenny split one cake into 10 equal pieces and ate 3 of the pieces. Then she split another cake into 100 equal pieces and ate 13 of the pieces. Write a sentence telling the total amount of cake(s) Jenny ate as a decimal.”

• The Unit 10 Assessment contributes to the full intent of 4.MD.6 (Measure angles in whole-number degrees using a protractor). Item 4 includes an image of a kite, “Use your protractor to answer the question below. Which measure is closest to the measure of angle H? A) 30° B) 35° C) 150° D) 155°” Item 7, “Which angle has a measure of 65°?” Item 11, “Draw an angle with a measure of exactly 147°.”

Assessments include opportunities for students to demonstrate the full intent of grade-level practices across the series. Examples include:

• Unit 4 Assessment, Item 15, supports the full development of MP2 as students convert between units of measurement, using abstract reasoning to determine which operations to use. “A race is 5 km long. How many meters long is the race? How many centimeters long is the race? a. 500 meters; 5,000 centimeters; b. 5,000 meters; 50,000 centimeters; c. 5,000 meters; 500,000 centimeters; d. 500 meters; 50,000 centimeters.”

• Unit 1 Assessment, Item 6, supports the full development of MP4 as students write and solve an equation to represent a situation.
“The length of the kitchen counter at Sal’s house is 9 times the length of a book. The length of a book is 8 inches. What is the length of the kitchen counter? Write an equation and solve.”

• Unit 10 Assessment, Item 2, supports the full development of MP7 as students look for and make use of structure. “Which statement about angles is true? A) An angle is formed by two rays that do not have the same endpoint; B) An angle that turns through 1/360 of a circle has a measure of 360 degrees; C) An angle that turns through 5 one-degree angles has a measure of 5 degrees; D) An angle measure is equal to the total length of the two rays for the angle.”

##### Indicator 3l

Assessments offer accommodations that allow students to demonstrate their knowledge and skills without changing the content of the assessment.

The materials reviewed for Achievement First Mathematics Grade 4 do not provide assessments which offer accommodations that allow students to demonstrate their knowledge and skills without changing the content of the assessment. This is true for both formal unit assessments and informal exit tickets.

#### Criterion 3.3: Student Supports

The program includes materials designed for each child’s regular and active participation in grade-level/grade-band/series content.

The materials reviewed for Achievement First Mathematics Grade 4 do not meet expectations for Student Supports. The materials do not provide strategies and supports for students in special populations or for students who read, write, and/or speak in a language other than English to support their regular and active participation in learning grade-level/series mathematics. The materials provide some extensions and/or opportunities for students to engage with grade-level/course-level mathematics at higher levels of complexity.
The materials do provide manipulatives, both virtual and physical, that are accurate representations of the mathematical objects they represent and, when appropriate, are connected to written methods.

##### Indicator 3m

Materials provide strategies and supports for students in special populations to support their regular and active participation in learning grade-level/series mathematics.

The materials reviewed for Achievement First Mathematics Grade 4 do not meet expectations for providing strategies and supports for students in special populations to support their regular and active participation in learning grade-level/series mathematics. The materials do not provide specific strategies and supports for differentiating instruction to meet the needs of students in special populations.

##### Indicator 3n

Materials provide extensions and/or opportunities for students to engage with grade-level/course-level mathematics at higher levels of complexity.

The materials reviewed for Achievement First Mathematics Grade 4 partially meet expectations for providing extensions and/or opportunities for students to engage with grade-level mathematics at higher levels of complexity. Some lessons include an Extension/Application Problem that allows teachers to extend the thinking of the lesson during discussion. However, extension problems are intended for all students, as there is no identified differentiation for advanced students. Examples include:

• Unit 1, Lesson 11, Extension problem, students extend their understanding of multiplication equations as a comparison. “Encourage students to pick a number and try and come up with a multiplicative clue for that number. Have them switch with a partner and try and figure out the solutions.”

• Unit 9, Lesson 3, Extension problem, students extend their understanding of right angles.
“Challenge students to identify types of angles in figures or around the room, have students create their own figures with the different angle sizes, can have students start to experiment with protractors.” ##### Indicator {{'3o' | indicatorName}} Materials provide varied approaches to learning tasks over time and variety in how students are expected to demonstrate their learning with opportunities for students to monitor their learning. The materials reviewed for Achievement First Mathematics Grade 4 provide varied approaches to learning tasks over time and variety in how students are expected to demonstrate their learning; however, there are no opportunities for students to monitor their learning. The program uses a variety of formats and methods over time to deepen student understanding and ability to explain and apply mathematics ideas. These include: Exercise Based Lessons, Task Based Lessons, Math Stories, Math Practice, and Cumulative Review. In the lesson introduction, the teacher states the aim and connects it to prior knowledge. In Pose the Problem, the students work with a partner to represent and solve the problem. Then the class discusses student work. The teacher highlights correct work and common misconceptions. Then students work on the Workshop problems, Independent Practice, and the Exit Ticket. Students have opportunities to share their thinking as they work with their partner and as the teacher prompts student responses during Pose the Problem and Workshop discussions. Math Stories provide opportunities for students to question, investigate, sense-make, and problem-solve using a variety of formats and methods. ##### Indicator {{'3p' | indicatorName}} Materials provide opportunities for teachers to use a variety of grouping strategies. The materials reviewed for Achievement First Mathematics Grade 4 provide some opportunities for teachers to use a variety of grouping strategies. 
Grouping strategies within lessons are not consistently present or specific to the needs of particular students. There is no specific guidance to teachers on grouping students. The majority of lessons are whole group and independent practice; however, the structure of some lessons include grouping strategies, such as working in a pair for games, turn-and-talk, and partner practice. Examples include: • Unit 4, Lesson 3, Discussion, “Turn & Talk: What are we working on today? We are working on finding the perimeter of rectilinear figures by solving for unknown sides and adding them all up.” • Unit 5, Lesson 7, Narrative, “Students continue to practice SMP 7 by considering "what smaller questions do I need to answer first" and SMP 3 by discussing the answer to this question as they discuss as a class or in partners which steps need to be completed and in what order these questions must be answered to arrive at a final answer.” ##### Indicator {{'3q' | indicatorName}} Materials provide strategies and supports for students who read, write, and/or speak in a language other than English to regularly participate in learning grade-level mathematics. The materials reviewed for Achievement First Mathematics Grade 4 do not meet expectations for providing strategies and supports for students who read, write, and/or speak in a language other than English to regularly participate in learning grade-level mathematics. Materials do not provide any resources for students who read, write, and/or speak in a language other than English to meet or exceed grade-level standards through regular and active participation in grade-level mathematics. ##### Indicator {{'3r' | indicatorName}} Materials provide a balance of images or information about people, representing various demographic and physical characteristics. 
The materials reviewed for Achievement First Mathematics Grade 4 provide a balance of images or information about people, representing various demographic and physical characteristics. Examples include: • Lessons portray people from many ethnicities in a positive, respectful manner. • There is no demographic bias seen in various problems. • Names in the problems include multi-cultural references such as Mario, Tanya, Kemoni, Jiang, Paige, and Tomi. • The materials are text-based and do not contain images of people. Therefore, there is no visual depiction of demographics or physical characteristics. • The materials avoid language that might be offensive to particular groups. ##### Indicator {{'3s' | indicatorName}} Materials provide guidance to encourage teachers to draw upon student home language to facilitate learning. The materials reviewed for Achievement First Mathematics Grade 4 do not provide guidance to encourage teachers to draw upon student home language to facilitate learning. The materials do not provide suggestions or strategies to use the home language to support students in learning mathematics. There are no suggestions for teachers to facilitate daily learning that builds on a student’s multilingualism as an asset, nor are students explicitly encouraged to develop home language literacy. Teacher materials do not provide guidance on how to garner information that will aid in learning, including the family’s preferred language of communication, schooling experiences in other languages, literacy abilities in other languages, and previous exposure to academic and everyday English. ##### Indicator {{'3t' | indicatorName}} Materials provide guidance to encourage teachers to draw upon student cultural and social backgrounds to facilitate learning. The materials reviewed for Achievement First Mathematics Grade 4 do not provide guidance to encourage teachers to draw upon student cultural and social backgrounds to facilitate learning.
The materials do not make connections to linguistic and cultural diversity to facilitate learning. There is no teacher guidance on equity or how to engage culturally diverse students in the learning of mathematics. ##### Indicator {{'3u' | indicatorName}} Materials provide supports for different reading levels to ensure accessibility for students. The materials reviewed for Achievement First Mathematics Grade 4 do not provide supports for different reading levels to ensure accessibility for students. The materials do not include strategies to engage students in reading and accessing grade-level mathematics. There are no multiple entry points that present a variety of representations to help struggling readers access and engage in grade-level mathematics. ##### Indicator {{'3v' | indicatorName}} Manipulatives, both virtual and physical, are accurate representations of the mathematical objects they represent and, when appropriate, are connected to written methods. The materials reviewed for Achievement First Mathematics Grade 4 meet expectations for providing manipulatives, both virtual and physical, that are accurate representations of the mathematical objects they represent and, when appropriate, are connected to written methods. Manipulatives are most commonly found in the Intervention suggestion at the end of Workshop time. However, there is little teacher guidance to explain how and when the intervention is intended to be used. Examples include: • Unit 4, Lesson 4, Narrative, “students will find the area of rectilinear figures by decomposing them into rectangles.” The Intervention includes the use of manipulatives, “Have scholars use tiles or counters to fill the shape, counting as they go to find area.
Have scholars count squares inside and around shapes to determine area.” • Unit 6, Lesson 1, Narrative, students will use “fraction tiles and visual models to break fractions down into their unit fractions.” Students use manipulatives to solve the Problem of the Day, “Park Ranger O’Hara is putting trail markers down on Mount Bonnell. On the Mockingbird trail, he puts down a trail marker at every 1/8 of a mile. If the trail is 7/8 of a mile long, how many trail markers will he use? Use your fraction tiles to help you.” • The Unit 8 Overview, “Students use many different strategies to solve different types of measurement problems such as timelines (or clock manipulatives) to solve elapsed time problems.” The Lesson 6 materials list includes clocks and the Intervention suggests “students just use clocks to solve” but there is no additional guidance. Manipulatives are not always connected to written methods. There are several instances where manipulatives are listed as materials but not incorporated into the lesson. Examples include: • Unit 8, Lesson 4, Materials list, “Coin/dollar manipulatives for intervention.” “Use a place value chart to represent each money/decimal/coin amount. In each column on the place value chart, write the decimal place value and the coin associated with that place value. Then have students put each problem from workshop into the place value chart.” However, the Intervention does not include the use of such manipulatives. #### Criterion 3.4: Intentional Design The program includes a visual design that is engaging and references or integrates digital technology, when applicable, with guidance for teachers.
The materials reviewed for Achievement First Mathematics Grade 4 do not integrate technology such as interactive tools, virtual manipulatives/objects, and/or dynamic mathematics software in ways that engage students in the grade-level standards, include or reference digital technology that provides opportunities for teachers and/or students to collaborate with each other, or provide teacher guidance for the use of embedded technology to support and enhance student learning. The materials have a visual design that supports students in engaging thoughtfully with the subject, and is neither distracting nor chaotic. ##### Indicator {{'3w' | indicatorName}} Materials integrate technology such as interactive tools, virtual manipulatives/objects, and/or dynamic mathematics software in ways that engage students in the grade-level/series standards, when applicable. The materials reviewed for Achievement First Mathematics Grade 4 do not integrate technology such as interactive tools, virtual manipulatives/objects, and/or dynamic mathematics software in ways that engage students in the grade-level/series standards, when applicable. The materials do not contain digital technology or interactive tools such as data collection tools, simulations, virtual manipulatives, and/or modeling tools. There is no technology utilized in this program. ##### Indicator {{'3x' | indicatorName}} Materials include or reference digital technology that provides opportunities for teachers and/or students to collaborate with each other, when applicable. The materials reviewed for Achievement First Mathematics Grade 4 do not include or reference digital technology that provides opportunities for teachers and/or students to collaborate with each other, when applicable. The materials do not provide any online or digital opportunities for students to collaborate with the teacher and/or with other students. There is no technology utilized in this program. 
##### Indicator {{'3y' | indicatorName}} The visual design (whether in print or digital) supports students in engaging thoughtfully with the subject, and is neither distracting nor chaotic. The materials reviewed for Achievement First Mathematics Grade 4 have a visual design that supports students in engaging thoughtfully with the subject, and is neither distracting nor chaotic. The student-facing printable materials follow a consistent format. The lesson materials are printed in black and white without any distracting visuals or an overabundance of graphic features. In fact, images, graphics, and models are limited within the materials, but they do support student learning when present. The materials are primarily text with white space for students to answer by hand to demonstrate their learning. Student materials are clearly labeled and provide consistent numbering for problem sets. There are several spelling and/or grammatical errors within the materials. ##### Indicator {{'3z' | indicatorName}} Materials provide teacher guidance for the use of embedded technology to support and enhance student learning, when applicable. The materials reviewed for Achievement First Mathematics Grade 4 do not provide teacher guidance for the use of embedded technology to support and enhance student learning, when applicable. There is no technology utilized in this program. ## Report Overview ### Summary of Alignment & Usability for Achievement First Mathematics | Math #### Math K-2 The materials reviewed for Achievement First Mathematics Grades K-2 meet expectations for Alignment to the CCSSM. In Gateway 1, the materials meet expectations for focus and coherence. In Gateway 2, the materials meet expectations for rigor and practice-content connections. The materials reviewed for Achievement First Mathematics Grades K-2 do not meet expectations for Usability, Gateway 3. 
##### Kindergarten ###### Alignment Meets Expectations ###### Usability Does Not Meet Expectations ##### 1st Grade ###### Alignment Meets Expectations ###### Usability Does Not Meet Expectations ##### 2nd Grade ###### Alignment Meets Expectations ###### Usability Does Not Meet Expectations #### Math 3-5 The materials reviewed for Achievement First Mathematics Grades 3-5 meet expectations for Alignment to the CCSSM. In Gateway 1, the materials meet expectations for focus and coherence. In Gateway 2, the materials meet expectations for rigor and practice-content connections. The materials reviewed for Achievement First Mathematics Grades 3-5 do not meet expectations for Usability, Gateway 3. ##### 3rd Grade ###### Alignment Meets Expectations ###### Usability Does Not Meet Expectations ##### 4th Grade ###### Alignment Meets Expectations ###### Usability Does Not Meet Expectations ##### 5th Grade ###### Alignment Meets Expectations ###### Usability Does Not Meet Expectations #### Math 6-8 The materials reviewed for Achievement First Mathematics Grades 6-8 meet expectations for Alignment to the CCSSM. In Gateway 1, the materials meet expectations for focus and coherence. In Gateway 2, the materials meet expectations for rigor and practice-content connections. The materials reviewed for Achievement First Mathematics Grades 6-8 do not meet expectations for Usability, Gateway 3. ##### 6th Grade ###### Alignment Meets Expectations ###### Usability Does Not Meet Expectations ##### 7th Grade ###### Alignment Meets Expectations ###### Usability Does Not Meet Expectations ##### 8th Grade ###### Alignment Meets Expectations ###### Usability Does Not Meet Expectations
Last edited by Nezshura, Friday, July 31, 2020. 3 editions of Note on estimation of frequency and number found in the catalog. # Note on estimation of frequency and number ## by Stanislav Dornič Published in Stockholm. Written in English. Subjects: • Visual perception. Edition Notes Classifications The Physical Object Statement [By] Stanislav Dornic, Birgitta Berglund and Ulf Berglund. Series Reports from the Psychological Laboratories, the University of Stockholm; no. 255 Contributions Berglund, Birgitta, joint author; Berglund, Ulf, joint author. LC Classifications BF21 .S78 no. 255 Pagination 6 p. Open Library OL4366550M LC Control Number 78459438 Note that you can estimate the median height. The 4th value in the interval is needed. It is estimated as. + × 10 = (to the Using the scale of 1 cm to represent 1 unit on the frequency axis and 2 cm to represent 5 units on the scores axis, use graph paper to draw a frequency polygon to represent the distribution of scores shown in. Given the above shortcomings of flood forecasting using rainfall data, this paper attempts to estimate return periods associated with flood peaks of different magnitudes from recorded historical floods using statistical methods. The selected method is the Gumbel extreme value distribution, which is widely used for flood frequency analysis. In this case, the quarterly frequency of accounting reports is not sufficient. Note, however, that if the equities of the firm are listed and actively traded on a stock exchange, we know the daily value of E_t (as the number of shares issued multiplied by the value of one share). The daily values of A_t can then be estimated from the daily. 1. The number of alleles at a locus 2. The frequency of alleles at a locus 3. The frequency of genotypes at a locus 4.
Transmission of alleles from one generation to the next. Single locus: locus A with two alleles A1 and A2. Derivation of the Hardy–Weinberg principle. Ideal population: 1. Two sexes, and the population consists of sexually mature. If bins_array contains no values, FREQUENCY returns the number of elements in data_array. Remarks Note: If you have a current version of Microsoft 365, then you can simply enter the formula in the top-left cell of the output range, then press ENTER to confirm the formula as a dynamic array formula. Reynolds number dependency: the Strouhal number St = f_s d / U, where f_s is the shedding frequency, d is the diameter and U the inflow speed.
### Note on estimation of frequency and number by Stanislav Dornič The Estimation and Tracking of Frequency (Cambridge Series in Statistical and Probabilistic Mathematics), 1st Edition, by B. Quinn and E. Hannan. In Time Frequency Analysis, Time–Frequency Peak IF Estimation: there is a wide range of applications where we encounter signals comprised of M components with different IF laws f_m(t) and different envelopes a_m(t) in additive noise. It is often desired, from such an observed signal, to determine the number of components M and the IF law of each component. When the transmit pulse h(t) is real, the corresponding g(t, 0) is also real; taking into account that g(mT, 0) = δ_m, it is possible to select l_0 T_s. The estimation algorithm is referred to as delay-and-multiply frequency estimation [12, Section ]; the corresponding block. Frequency estimation using the LMS algorithm; steps for frequency estimation using the LMS algorithm; simulation results of the LMS algorithm; Chapter 3: Mathematical Analysis, Nonlinear Estimation; frequency estimation using the NLS algorithm. Signals used for frequency measurements are usually at a frequency of 1 MHz or higher, with 5 or 10 MHz being common. Frequency signals are usually sine waves, but can also be pulses or square waves. If the frequency signal is an oscillating sine wave, it might look like the one shown in Fig. This signal produces one cycle (360°, or 2π radians). AMT Part V: Fundamental frequency estimation. Figure 3: Inharmonic spectrum of a piano, note C1. The higher-frequency partials clearly deviate from the harmonic positions. Inharmonicity is such that the 25th harmonic will generally be at 26 F_0. A frequency distribution table is a chart that represents values of any given sample and their frequency, i.e. the number of times the values have occurred.
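As a concrete illustration of the frequency distribution table just described, a tally of how often each value occurs can be built in a few lines of Python. This is a generic sketch (the marks data is invented for the example), not code from any of the excerpted sources.

```python
from collections import Counter

# Marks scored by a small sample of students (illustrative data only).
marks = [72, 85, 72, 90, 85, 72, 60, 90, 85, 72]

# Counter tallies how many times each value occurs -- exactly the
# "frequency" column of an ungrouped frequency distribution table.
freq_table = Counter(marks)

for value, frequency in sorted(freq_table.items()):
    print(f"{value:>5} | {frequency}")
```

The sum of the frequency column always equals the sample size, which is a quick sanity check when compiling such a table by hand.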
Through a frequency distribution table, you can easily handle the outcome of a sample through a proper organization of data. The paper concerns the relation between frequency estimates and recognition decisions. Theories postulating that these two measures reflect independent retrieval processes and theories that postulate that frequency estimation and recognition are mutually dependent processes are discussed. Empirical results apparently supporting both positions are also reviewed. Purchase frequency is a metric that shows the average number of times a customer makes a purchase within a set time frame. This provides you with insight on how to structure your marketing to best suit the buying behaviour of your audience. While knowing the number of purchases is useful, it is also important to actually do something with that. A number of calculations useful to builders of stringed musical instruments require the frequency or wavelength of a note as input data. The following table presents the frequencies of all notes in ten octaves to a thousandth of a hertz. The number of times a data value occurs in a data set is known as the frequency of the data. In the above example, the frequency is the number of students who scored various marks, as tabulated. This type of tabular data collection is known as an ungrouped frequency table. What happens if, instead of 20 students, more students took the same test? A parameter is a number that describes the population. Usually its value is unknown. A statistic is a number that can be computed from the sample data without making use of any unknown parameters. In practice, we often use a statistic to estimate an unknown parameter. Nowadays, a variety of approaches to the frequency and phase estimation problem, distinguished primarily by estimation accuracy, computational complexity, and processing latency, have been developed.
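The note-frequency table for instrument builders mentioned above can be regenerated from the standard equal-temperament relation: each semitone multiplies the frequency by the twelfth root of two. A minimal sketch, assuming the common A4 = 440 Hz reference and MIDI note numbering (A4 = note 69); the function name is mine, not from any of the excerpted sources.

```python
# Equal-tempered note frequency from a MIDI note number:
#   f(n) = 440 * 2**((n - 69) / 12)
def note_frequency(midi_note: int, a4_hz: float = 440.0) -> float:
    # Each semitone step scales the frequency by 2**(1/12).
    return a4_hz * 2.0 ** ((midi_note - 69) / 12)

print(note_frequency(69))  # A4: 440.0 Hz
print(note_frequency(60))  # middle C (C4), approximately 261.626 Hz
```

Raising a note by an octave (12 semitones) exactly doubles its frequency, so the ten-octave table cited in the text follows from this one formula.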
One class of approaches is based on the Fast Fourier Transform (FFT), due to its connections with the maximum likelihood estimation (MLE) of the frequency of a single sinusoid in noise. • Estimation • Hypothesis Testing The concepts involved are actually very similar, which we will see in due course. Below, we provide a basic introduction to estimation. Note that the interval estimator (2) is constructed from $$\bar{X}$$, $$z_{\alpha/2}$$, $$\sigma$$, and $$n$$, all of which are known. Tuning frequencies for the equal-tempered scale, A4 = 440 Hz; other tuning choices for A4 exist. The type of estimate that can be prepared. These estimating methods require different amounts of time to complete and produce different levels of accuracy for the estimate. The relationship between the time to complete the estimate and the accuracy of the estimate is shown in the accompanying figure. The different estimating methods are discussed below. Rife and Boorstyn, "Single-Tone Parameter Estimation from Discrete-Time Observations," IEEE Transactions on Information Theory, Sept. 1974. Tretter, "Estimating the Frequency of a Noisy Sinusoid by Linear Regression," IEEE Transactions on Information Theory, 1985. The figure shows the monthly number of housing starts in the United States (in thousands). Housing starts are a leading economic indicator. This means that an increase in the number of housing starts indicates that economic growth is likely to follow and a decline in housing starts indicates that a recession may be on the way. The number of alleles at a locus. The frequency of alleles at the locus. The frequency of genotypes at the locus. It may not be immediately obvious why we need both (2) and (3) to describe the genetic composition of a population, so let me illustrate with two hypothetical populations, with genotype counts for A1A1, A1A2, A2A2: Population 1: 50, 0, Motivation: complex exponentials are eigenfunctions. Why frequency analysis? Complex exponential signals, which are described by a frequency value, are eigenfunctions or eigensignals of LTI systems.
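The FFT-based class of estimators mentioned above can be sketched very compactly: for a single sinusoid, the coarse maximum-likelihood frequency estimate is the location of the periodogram peak (the setting analyzed by Rife and Boorstyn). A minimal NumPy sketch with an invented on-bin test tone, not code from any of the excerpted sources:

```python
import numpy as np

fs = 1000.0                   # sampling rate in Hz (illustrative choice)
n = 1000                      # number of samples, so bin spacing is fs/n = 1 Hz
t = np.arange(n) / fs
true_freq = 50.0
x = np.sin(2 * np.pi * true_freq * t)   # noiseless test tone

# Periodogram peak: the coarse maximum-likelihood frequency estimate.
spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(n, d=1 / fs)
estimate = freqs[np.argmax(spectrum)]

print(estimate)  # 50.0
```

With noise present the peak location is only accurate to within a bin; finer accuracy requires interpolation around the peak or one of the refinement methods discussed in the literature cited above.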
Periodic signals, which are important in signal processing, are sums of complex exponential signals. It was generated from the table of numbers above by plotting the number of trials that have been completed, $$t$$, on the $$x$$-axis and the relative frequency, $$f$$, on the $$y$$-axis. In the beginning (after a small number of trials) the relative frequency fluctuates a lot around the theoretical probability of $$\text{0.5}$$, which is shown. Frequency estimation methods in Python. GitHub Gist: instantly share code, notes, and snippets. $$n$$ is the number of samples of the curve used.
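The fluctuation of the relative frequency around the theoretical probability of 0.5 described above is easy to reproduce by simulation. A hedged sketch with simulated fair coin flips; the seed and trial counts are arbitrary choices:

```python
import random

random.seed(42)  # fixed seed so the run is repeatable

heads = 0
for trial in range(1, 10001):
    heads += random.random() < 0.5   # one fair coin flip
    if trial in (10, 100, 1000, 10000):
        # relative frequency f after t trials; it settles toward 0.5
        print(trial, heads / trial)
```

Early milestones can sit well away from 0.5, while the later ones hug it; that narrowing scatter is exactly the behavior the plotted curve in the text illustrates.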
Nonlinear Processes in Geophysics, an interactive open-access journal of the European Geosciences Union. Nonlin. Processes Geophys., 25, 457-476, 2018 https://doi.org/10.5194/npg-25-457-2018 Research article | 29 Jun 2018 # Stratified Kelvin–Helmholtz turbulence of compressible shear flows Omer San and Romit Maulik • School of Mechanical and Aerospace Engineering, Oklahoma State University, Stillwater, Oklahoma 74078, USA Abstract We study scaling laws of stratified shear flows by performing high-resolution numerical simulations of inviscid compressible turbulence induced by the Kelvin–Helmholtz instability. An implicit large eddy simulation approach is adopted to solve our conservation laws for both two-dimensional (with a spatial resolution of 16,384²) and three-dimensional (with a spatial resolution of 512³) configurations, utilizing different compressibility characteristics such as shocks. For three-dimensional turbulence, we find that both the kinetic energy and density-weighted energy spectra follow the classical Kolmogorov $k^{-5/3}$ inertial scaling. This phenomenon is observed due to the fact that the power density spectrum of three-dimensional turbulence yields the same $k^{-5/3}$ scaling. However, we demonstrate that there is a significant difference between these two spectra in two-dimensional turbulence, since there the power density spectrum also yields a $k^{-5/3}$ scaling. This difference may be assumed to be a reason for the $k^{-7/3}$ scaling observed in the two-dimensional density-weighted kinetic energy spectra for high compressibility, as compared to the $k^{-3}$ scaling traditionally assumed with incompressible flows.
Further inquiries are made to validate the statistical behavior of the various configurations studied through the use of the Helmholtz decomposition of both the kinetic velocity and density-weighted velocity fields. We observe that the scaling results are invariant with respect to the compressibility parameter when the density-weighted definition is used. Our two-dimensional results also confirm that a large inertial range of the solenoidal component with the $k^{-3}$ scaling can be obtained when we simulate with a lower compressibility parameter; however, the compressive spectrum converges to $k^{-2}$ for a larger compressibility parameter. 1 Introduction Turbulence is a highly nonlinear multiscale phenomenon which is ubiquitous in nature. It poses some of the most challenging problems in classical physics as well as in computational mathematics. Understanding the nature of compressible turbulence is of paramount importance. Highly compressible turbulence plays an important role in controlling star formation in dense molecular clouds and is responsible for important design considerations in many engineering applications. Therefore, there have been several investigations into its statistical behavior. Earlier work studied the mechanics of energy transfer and distribution and examined small-scale spectra in compressible turbulence with root-mean-square Mach numbers up to 0.9. Theoretical laws have also been advanced for the statistical behavior of turbulence quantities under the influence of compressibility effects. Other investigations utilized an adaptive mesh refinement (AMR) algorithm, along with a piecewise parabolic approach for numerical dissipation, to obtain scaling tendencies at high Mach numbers for the kinetic energy, density-weighted kinetic energy, and density power spectra. In addition, structure functions of different orders were also studied and compared to the limiting case of incompressibility.
A theoretical justification has also been provided for the presence of an inertial range which is devoid of any effects of molecular viscosity in supersonic turbulence, similar to the classical Richardson–Kolmogorov cascade in homogeneous isotropic incompressible turbulence. Magnetic effects on the statistical behavior of supersonic turbulence have also been studied keenly due to implications for astrophysical processes, for example through two-point correlation function relations. Scaling laws incorporating magnetic effects in hydrodynamic turbulence have also been proposed, for instance in Iroshnikov–Kraichnan theory, where arguments similar to those used in Kolmogorov theory are used to explain statistical properties of small-scale components in velocity and magnetic fields. Extensions to account for the rather tenuous assumption of isotropy in compressible magnetohydrodynamics (MHD) have also been studied. A generalization of the Iroshnikov–Kraichnan and Goldreich–Sridhar spectra to compressible magnetohydrodynamics has been presented, where it is also shown to merge with the MHD shockwave spectrum in the limit of infinite compressibility. A recent review examines both hydrodynamic and magnetohydrodynamic implementations of supersonic compressible turbulence and their statistical quantities. In this work, we follow the vast majority of investigations by utilizing the phenomenological description of turbulence in Fourier space as well as two-point velocity structure functions for the statistical examination of our high-fidelity numerical simulations. One of our goals is to investigate scaling laws using a computational framework with moderately high resolutions. We note that several modified energy spectra and anisotropic behaviors have been recently discussed within the context of Rayleigh–Taylor and Richtmyer–Meshkov instability-induced flows (Zhou, 2017a, b).
In terms of reference scaling behavior, we compare our numerical results for stratified shear layer turbulence against these theories under the assumption of isentropic flow, solving the Euler equations for flows triggered by stratified shear layers in a periodic box domain. We examine the stratified compressible turbulence that emerges from a classical Kelvin–Helmholtz instability (KHI) formulation. Similar problems have been studied extensively in their incompressible versions. In this work, both two- and three-dimensional versions of stratification will be examined for their effects on scaling. It must be noted here that two-dimensional turbulence may be assumed to be an appropriate framework for many geophysical applications, which exhibit extremely high aspect ratios; indeed, incompressible two-dimensional turbulence forms the cornerstone of geostrophic turbulence theory. Astrophysical considerations have also been explored in studies where the effects of magnetohydrodynamic coupling on scaling behavior have been examined. Our focus shall primarily rest on a comparison of the numerically obtained behavior of the density power spectrum, the averaged kinetic energy spectrum and the density-weighted kinetic energy spectrum, along with second- and third-order velocity structure functions, with their theoretical predictions. Some reference scaling laws (in the incompressible limit) we shall be using for comparison are the classical Kolmogorov scaling for isotropic three-dimensional (3-D) turbulence and the Kraichnan scaling for two-dimensional (2-D) isotropic turbulence. A common strategy for the numerical examination of the statistics of highly compressible turbulence is the use of the Eulerian hydrodynamic conservation laws implemented through an implicit large eddy simulation (ILES) methodology.
This is because it is commonly accepted that an ILES formulation of the Euler equations provides a good approximation to the Navier–Stokes equations in the limit of infinite Reynolds numbers . However, two conditions must be enforced in order to satisfy the aforementioned assumption. Firstly, vorticity must be introduced via boundary and/or initial conditions, since the Euler equations are incapable of generating vorticity from irrotational flows. Secondly, an artificial viscosity must be incorporated into the simulation mechanism to mimic the dissipative behavior of the Navier–Stokes equations in the inviscid limit . The ILES mechanism, which supplies artificial dissipation through numerical truncation errors, is our simulation algorithm of choice for the high-fidelity numerical experiments in this investigation. The question we attempt to address through this work concerns the difference between the scaling of purely averaged kinetic energy spectra and that of density-weighted spectra for both two- and three-dimensional compressible turbulence. Our observations suggest a different “packaging” of density in spectral space for the two-dimensional case. This is demonstrated by comparing the differences in density power spectrum behavior between the two- and three-dimensional configurations. It is proposed that the density power spectrum (in other words, the packaging of density at different wavenumbers) may be the reason for the deviation from the ${k}^{-\mathrm{3}}$ scaling of the density-weighted kinetic energy cascade with changing compressibility (higher compressibilities are observed to show ${k}^{-\mathrm{7}/\mathrm{3}}$ scaling) in two-dimensional turbulence, as against the constant ${k}^{-\mathrm{5}/\mathrm{3}}$ cascade in three-dimensional turbulence. Our results are also corroborated by the behavior of the second-order structure function with varying compressibility.
High-fidelity simulation data are generated by utilizing $512^3$ and $16\,384^2$ degrees of freedom for the three- and two-dimensional cases, respectively. We demonstrate that there is no difference in energy spectrum scaling between kinematic and density-weighted velocities in three-dimensional simulations, since both the density power and velocity spectra follow the ${k}^{-\mathrm{5}/\mathrm{3}}$ scaling. However, we demonstrate that the difference becomes pronounced in two-dimensional simulations because the density power spectrum scales with ${k}^{-\mathrm{5}/\mathrm{3}}$, which differs from the scaling of the kinetic energy spectrum. Furthermore, we decompose both the kinetic velocity and density-weighted velocity fields into compressive (curl-free) and solenoidal (divergence-free) components in order to study the effects of compressibility in our two- and three-dimensional setups. Ultimately, it is our aim to link these analyses to nonlinear processes exhibiting very high aspect ratios for astrophysical, heliophysical and plasma physics applications.
2 Compressible turbulence
The governing laws utilized for our numerical experiments are given by the Euler equations, which may be expressed in their dimensionless differential form as $\begin{array}{}\text{(1)}& & \frac{\partial \mathit{\rho }}{\partial t}+\mathrm{\nabla }\cdot \left(\mathit{\rho }\mathbit{u}\right)=\mathrm{0},\text{(2)}& & \frac{\partial \left(\mathit{\rho }\mathbit{u}\right)}{\partial t}+\mathrm{\nabla }\cdot \left(\mathit{\rho }\mathbit{u}\otimes \mathbit{u}+p\mathbit{I}\right)=\mathrm{0},\text{(3)}& & \frac{\partial \left(\mathit{\rho }E\right)}{\partial t}+\mathrm{\nabla }\cdot \left(\mathit{\rho }E\mathbit{u}+p\mathbit{u}\right)=\mathrm{0},\end{array}$ where ρ is the fluid density, $\mathbit{u}=\mathit{\left\{}u,v{\mathit{\right\}}}^{T}\in {\mathbb{R}}^{\mathrm{2}}$ and $\mathbit{u}=\mathit{\left\{}u,v,w{\mathit{\right\}}}^{T}\in {\mathbb{R}}^{\mathrm{3}}$ denote the flow velocity in a Cartesian co-ordinate system in two and three dimensions, respectively, p is the static pressure, and E is the total energy per unit mass. Assuming a perfect gas with a ratio of specific heats γ, the pressure can be determined by an equation of state which closes our coupled governing equations, given by $\begin{array}{}\text{(4)}& p=\mathit{\rho }\left(\mathit{\gamma }-\mathrm{1}\right)\left(E-\frac{\mathrm{1}}{\mathrm{2}}\left(\mathbit{u}\cdot \mathbit{u}\right)\right),\end{array}$ where we have set $\mathit{\gamma }=\mathrm{7}/\mathrm{5}$ in our study. Note that the assumption of the classical equation of state for relating the pressure and total energy of the flow ensures the interaction of solely acoustic and vortical modes . Our computational domain also exhibits periodic boundary conditions in all directions.
## 2.1 Stratified Kelvin–Helmholtz instability
The stratified Kelvin–Helmholtz instability (KHI) test case is a famous problem which manifests itself when there is a velocity difference at the interface between two fluids of different densities (Thomson1871).
It is commonly observed in experiments and numerical simulations, and it is also visible in many natural phenomena, for example in situations with wind flow over bodies of water causing wave formation and in the planet Jupiter's atmosphere between atmospheric bands moving at different speeds . The study of this instability in a benchmark formulation reveals key information about the transition to turbulence for two fluids moving at different speeds. For these practical applications, it is common to choose a double shear layer problem to simulate the formation of KHI in a periodic two-dimensional computational setting with unit side length. This stratified shear layer instability problem is used to demonstrate the evolution of linear perturbations into a transition to nonlinear two-dimensional hydrodynamic turbulence. The instability initially triggers small-scale vortical structures at the sharp density interface, which eventually transition through nonlinear interactions to a completely turbulent field.
## 2.2 Two-dimensional simulations
A two-dimensional implementation of the dual-shear layer KHI problem is devised through our aforementioned unstable perturbed compressible shear layer. This may be implemented through our computational domain, which is a square of unit side length with the following initial conditions: We can observe that the vertical component of the velocity is perturbed using a single-mode sine wave (n=2, L=1) with an amplitude λ=0.01. Our two-dimensional numerical experiments are solved to a final dimensionless time of t=5. We clarify that the 2-D simulation domain for all experiments is set in $\left(x,y\right)\in \left[-\mathrm{0.5},\mathrm{0.5}\right]×\left[-\mathrm{0.5},\mathrm{0.5}\right]$ with $N^2=16\,384^2$ degrees of freedom. Figure 1 represents a schematic expressing the initial conditions of our two-dimensional simulation.
We remark that in this study we perform implicit large eddy simulations (ILES) using a finite-volume framework. Our numerical scheme utilizes fifth-order accurate, weighted essentially non-oscillatory (WENO) reconstructions equipped with Roe's approximate Riemann solver (Roe1981) at the cell interfaces. It is well known that the artificial dissipation mechanism of ILES schemes (arising from the numerical viscosity of upwind-biased state reconstructions) mimics the physical viscosity of the Navier–Stokes equations in the limit of infinite Reynolds numbers. We utilize a parallel approach for the computational solution of our governing laws implemented in the OpenMPI framework. Details about the implementation and the computational performance of our solver may be found in , where weak and strong scaling tests are additionally shown. Our three-dimensional simulations employ a similar approach. Figure 2 describes snapshots in time of the density field for this two-dimensional compressible turbulence test case when α=1.0. One can notice a transition to turbulence once an initial instability has developed. The shearing velocity magnitude given by α controls the compressibility, which is apparent from comparisons with Figs. 3 and 4, where smaller values lead to the formation of much smoother structures and consequently to shock-free fields in the incompressible limit. Evidence from Fig. 4 also shows a delay in the onset of turbulence due to a reduced shearing velocity. Table 1 also lists the mean and maximum Mach number values at the final computational time t=5. It is clear that the case for α=0.25 corresponds to a perfectly subsonic regime with lower compressibility (i.e., a mean Mach number of M=0.15).
Figure 1. The stratified Kelvin–Helmholtz instability problem in a periodic square box of side length L=1.
Our initial condition reads as a single-mode perturbation to the y-component of the velocity to trigger the instability with n=2 and the amplitude λ=0.01. We extend this two-dimensional domain along the z direction to perform our three-dimensional simulations in a triply periodic domain with size L on each side, where we also use an initial perturbation to the z-component of the velocity given by $w=\mathit{\lambda }\mathrm{sin}\left(\mathrm{2}\mathit{\pi }nz/L\right)$.
Figure 2. Time evolution of the density field for 2-D KHI turbulence with α=1.0 demonstrating results at t=1 (a) and t=5 (b) obtained by a grid resolution of $N^2=16\,384^2$.
Figure 3. Time evolution of the density field for 2-D KHI turbulence with α=0.5 demonstrating results at t=1 (a) and t=5 (b) obtained by a grid resolution of $N^2=16\,384^2$.
Figure 4. Time evolution of the density field for 2-D KHI turbulence with α=0.25 demonstrating results at t=1 (a) and t=5 (b) obtained by a grid resolution of $N^2=16\,384^2$.
Figure 5. Time evolution of 2-D KHI turbulence field characteristics with a resolution of $N^2=16\,384^2$, showing normalized root mean square values of velocity u for various α values (a), and compensated energy spectra computed from u at various times for α=1.0 (b).
Figure 5 demonstrates the time evolution characteristics of the 2-D KHI problem. On the left, we illustrate the time series of the domain-integrated velocity amplitude (i.e., the root mean square values of the kinetic velocity) normalized with its initial condition for each α value. It is clear that the instability starts earlier for larger α values. We also demonstrate the evolution of the compensated kinetic energy spectrum on the right for α=1.0. Similar statistical trends are observed at each time. Therefore, we will only focus on the results at the final time t=5 in our statistical analysis presented in the next section.
Table 1. The mean and maximum Mach numbers computed at the final time t=5.
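The Mach statistics of Table 1 follow directly from the primitive fields; below is a minimal sketch assuming the standard ideal-gas sound speed $c=\sqrt{\mathit{\gamma }p/\mathit{\rho }}$ with the γ=7/5 set above (the function names are illustrative, not part of the solver):

```python
import numpy as np

GAMMA = 7.0 / 5.0  # ratio of specific heats used in this study

def mach_number(rho, u, v, p):
    """Local Mach number M = |u|/c with ideal-gas sound speed
    c = sqrt(gamma * p / rho)."""
    c = np.sqrt(GAMMA * p / rho)
    return np.sqrt(u * u + v * v) / c

def mach_statistics(rho, u, v, p):
    """Mean and maximum Mach number over the domain, as reported in Table 1."""
    m = mach_number(rho, u, v, p)
    return float(m.mean()), float(m.max())
```

For instance, with ρ=γ and p=1 the sound speed is unity, so M reduces to the velocity magnitude.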
## 2.3 Three-dimensional simulations
While two-dimensional compressible turbulence investigations are valuable for insight into the physical processes of systems which exhibit extreme aspect ratios , it is well known that the process of energy transfer between scales is fundamentally different when compared to that of three-dimensional flows . Isotropic, homogeneous, incompressible three-dimensional turbulence is characterized by the famous Kolmogorov–Richardson cascade of energy, where the largest vortices continuously inject energy into an inertial cascade which terminates at the Kolmogorov length scale, where viscous effects dissipate this energy. This is particularly applicable for engineering flows, where it has been established that turbulence “decays” in the absence of forcing due to viscous dissipation. In contrast, two-dimensional turbulence exhibits the presence of an inverse energy cascade (given by Kraichnan–Batchelor–Leith theories; ) where energy from the smallest scales is transferred to the largest scales. This has implications for the restoration of local isotropy (since large-scale structures created by the inverse energy cascade affect the amount of enstrophy in the field and thus affect the energy dissipation rate). In the presence of periodic boundary conditions (a subject of future investigations), these newly created large-scale structures may lead to significant alteration in scaling laws.
Figure 6. Time evolution of the density field for 3-D KHI turbulence with α=1.0 demonstrating results at t=1 (a) and t=5 (b) obtained by a grid resolution of $N^3=512^3$.
Figure 7. Time evolution of the density field for 3-D KHI turbulence with α=0.5 demonstrating results at t=1 (a) and t=5 (b) obtained by a grid resolution of $N^3=512^3$.
Figure 8. Time evolution of the density field for 3-D KHI turbulence with α=0.25 demonstrating results at t=1 (a) and t=5 (b) obtained by a grid resolution of $N^3=512^3$.
Our computational domain for the three-dimensional turbulence case is analogous to that of the two-dimensional domain. We utilize a cube set in $\left(x,y,z\right)\in \left[-\mathrm{0.5},\mathrm{0.5}\right]×\left[-\mathrm{0.5},\mathrm{0.5}\right]×\left[-\mathrm{0.5},\mathrm{0.5}\right]$ with $N^3=512^3$ degrees of freedom. Our initial conditions are given by , together with periodic boundary conditions in all directions. We keep our parameters n, L and λ the same as those used in the two-dimensional case.
Figure 9. Time evolution of 3-D KHI turbulence field characteristics with a resolution of $N^3=512^3$, showing normalized root mean square values of velocity u for various α values (a), and compensated energy spectra computed from u at various times for α=1.0 (b).
Figure 6 shows the density field at times t=1 and t=5 for a shearing velocity magnitude of α=1.0. One can observe how the solution domain has transitioned almost entirely to a turbulent field for this case, as against the very visible stratification observed in the lower compressibility simulations given by α=0.5 and α=0.25 shown in Figs. 7 and 8, respectively. Our aim is to quantify the effect of the shearing velocity on the compressibility and scaling laws of these co-designed two- and three-dimensional configurations. Similar to the two-dimensional case, we have plotted the time evolution of the domain-integrated velocity in Fig. 9 between t=0 and t=5. The decay rates in the three-dimensional simulations are substantially higher than those obtained in the two-dimensional simulations. This can be attributed to the smaller number of grid points sampled in each direction. However, the energy spectrum trend is similar and yields a ${k}^{-\mathrm{5}/\mathrm{3}}$ spectrum at each time. In the following section, we thus present a systematic analysis based on data obtained at t=5.
3 Turbulence statistics and scaling exponents
## 3.1 Kinetic energy spectrum
The first statistical measure we investigate is given by the classical kinetic energy spectra. To obtain these spectra, we start with an expression for the spatial kinetic energy in wavenumber space given by $\begin{array}{}\text{(14)}& \mathbit{E}\left(\mathbf{k},t\right)=\frac{\mathrm{1}}{\mathrm{2}}|\stackrel{\mathrm{^}}{\mathbf{u}}\left(\mathbf{k},t\right){|}^{\mathrm{2}},\end{array}$ where $\stackrel{\mathrm{^}}{\mathbf{u}}\left(\mathbf{k},t\right)$ is the Fourier transform of the velocity vector in the wavenumber space. Equation (14) can also be rewritten in terms of velocity components (assuming a two-dimensional Cartesian domain) as $\begin{array}{}\text{(15)}& \mathbit{E}\left(\mathbf{k},t\right)=\frac{\mathrm{1}}{\mathrm{2}}\left(|\stackrel{\mathrm{^}}{u}\left(\mathbf{k},t\right){|}^{\mathrm{2}}+|\stackrel{\mathrm{^}}{v}\left(\mathbf{k},t\right){|}^{\mathrm{2}}\right),\end{array}$ where we compute velocity components $\stackrel{\mathrm{^}}{u}\left(\mathbf{k},t\right)$ and $\stackrel{\mathrm{^}}{v}\left(\mathbf{k},t\right)$ using a fast Fourier transform algorithm . Finally, the spectra can be calculated by integrating over a unit bandwidth (i.e., angle-averaged) in the following manner: $\begin{array}{}\text{(16)}& E\left(k,t\right)=\sum _{k-\frac{\mathrm{1}}{\mathrm{2}}\le |{\mathbf{k}}^{\prime }|<k+\frac{\mathrm{1}}{\mathrm{2}}}\mathbit{E}\left({\mathbf{k}}^{\prime },t\right),\end{array}$ where $k=|\mathbf{k}|=\sqrt{{k}_{x}^{\mathrm{2}}+{k}_{y}^{\mathrm{2}}}$ in 2-D. Extensions to three dimensions are straightforward.
Figure 10. Spherical-averaged energy spectra for 3-D KHI turbulence. (a) Spectra built on using the velocity u, (b) spectra built on using the density-weighted velocity $\mathbit{\omega }=\sqrt{\mathit{\rho }}\mathbit{u}$, (c) compensated spectra built on using the velocity u, and (d) compensated spectra built on using the density-weighted velocity $\mathbit{\omega }=\sqrt{\mathit{\rho }}\mathbit{u}$.
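As a concrete illustration of Eqs. (14)–(16), the unit-bandwidth shell sum can be sketched in NumPy for a doubly periodic 2-D field; the integer-mode binning implements the unit bandwidth, while the 1/N² FFT normalization is a choice of this sketch rather than a detail fixed by the text:

```python
import numpy as np

def angle_averaged_spectrum(u, v):
    """Angle-averaged kinetic energy spectrum, Eqs. (14)-(16), for a
    doubly periodic 2-D field on an N x N grid of unit length.  Modes are
    binned by integer mode number, i.e., k - 1/2 <= |k'| < k + 1/2."""
    n = u.shape[0]
    uh = np.fft.fft2(u) / n**2              # normalized Fourier coefficients
    vh = np.fft.fft2(v) / n**2
    e = 0.5 * (np.abs(uh)**2 + np.abs(vh)**2)   # per-mode energy, Eq. (15)
    kx = np.fft.fftfreq(n, d=1.0 / n)           # integer mode numbers
    kmag = np.sqrt(kx[:, None]**2 + kx[None, :]**2)
    shells = np.rint(kmag).astype(int)          # unit-bandwidth shell index
    spectrum = np.bincount(shells.ravel(), weights=e.ravel(), minlength=n)
    return spectrum[: n // 2]                   # drop shells past the Nyquist mode
```

A single Fourier mode is a convenient sanity check: all of its energy should land in one shell.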
## 3.2 Density-weighted kinetic energy spectrum
The kinetic energy spectrum is generally utilized for characterizing the energy content of scales in incompressible turbulent flows and does not take the localized scale content of the density into consideration. To include these density effects, following Lele (1994) and , we define an energy spectrum built on the density-weighted velocity $\mathbit{\omega }=\sqrt{\mathit{\rho }}\mathbit{u}$, i.e., through using $\begin{array}{}\text{(17)}& \mathbit{E}\left(\mathbf{k},t\right)=\frac{\mathrm{1}}{\mathrm{2}}|\stackrel{\mathrm{^}}{\mathbit{\omega }}\left(\mathbf{k},t\right){|}^{\mathrm{2}},\end{array}$ where we can apply the same angle-averaged rule given by Eq. (16) to obtain one-dimensional spectra.
Figure 11. Transversely averaged energy spectra for 3-D KHI turbulence. An angle-averaged kinetic energy spectrum is first computed at each z plane using a 2-D FFT transform and then followed by a spatial averaging procedure along the z direction. (a) Spectra built on using the velocity u, (b) spectra built on using the density-weighted velocity $\mathbit{\omega }=\sqrt{\mathit{\rho }}\mathbit{u}$, (c) compensated spectra built on using the velocity u, and (d) compensated spectra built on using the density-weighted velocity $\mathbit{\omega }=\sqrt{\mathit{\rho }}\mathbit{u}$.
Figure 10 describes the spherical-averaged energy spectra for the three-dimensional test case. Note here that the spherical average implies that the local energy content in the Fourier domain is integrated over a spherical shell of radius k in three dimensions. One can observe a scaling behavior that corresponds to classical Kolmogorov theory in the infinite Reynolds number limit (i.e., an inertial range with ${k}^{-\mathrm{5}/\mathrm{3}}$ scaling) for both purely kinetic energy spectra and density-weighted kinetic energy spectra. The finer dissipative scales are seen to display a ${k}^{-\mathrm{6}}$ scaling behavior for both these statistical quantities as well.
We have also plotted the compensated energy spectra, in which the predicted scaling laws appear as horizontal lines and can thus be assessed more quantitatively.
Figure 12. Angle-averaged energy spectra for 2-D KHI turbulence. (a) Spectra built on using the velocity u, (b) spectra built on using the density-weighted velocity $\mathbit{\omega }=\sqrt{\mathit{\rho }}\mathbit{u}$, (c) compensated spectra built on using the velocity u, and (d) compensated spectra built on using the density-weighted velocity $\mathbit{\omega }=\sqrt{\mathit{\rho }}\mathbit{u}$.
The data presented in Fig. 10 have been obtained by performing a three-dimensional fast Fourier transform (FFT) procedure. From a practical implementation point of view, we adopt a slightly different approach to compute energy spectra. The main advantage of this procedure is that it is naturally suited to any parallel computing architecture. For an analogy with the two-dimensional test cases, we present transversely averaged energy spectra in Fig. 11, wherein the circular averaging of the energy in the Fourier domain is carried out over different two-dimensional z planes which are then spatially averaged over the depth of the domain. Similar trends to the spherically averaged spectral scaling are observed for this case. However, we note that the obtained spectra are less noisy when using a direct three-dimensional FFT procedure. This can be attributed to the quasi-homogeneity of the flow after the onset of turbulence.
Figure 13. Compensated, ${k}^{\mathrm{4}}\mathbit{E}\left(\mathbf{k},t=\mathrm{5}\right)$, kinetic energy spectra for 2-D KHI turbulence for α=1.0 (a), α=0.5 (b) and α=0.25 (c).
Figure 14. The difference spectra for 2-D KHI turbulence. (a) Difference spectra between the kinetic velocity field and the normalized density-weighted velocity field (i.e., E(k) is obtained from the $\mathbit{g}=\mathbit{u}-\sqrt{\mathit{\rho }}\mathbit{u}/〈\sqrt{\mathit{\rho }}〉$ vector field), and (b) its compensated representation.
We investigate the performance of the same metrics for the two-dimensional test case and obtain the scaling behavior seen in Fig. 12, where a ${k}^{-\mathrm{3}}$ scaling behavior is obtained in accordance with the direct cascade of enstrophy espoused by Kraichnan–Batchelor–Leith (KBL) theory for the inertial range, especially for the lower compressibility ratio. A higher magnitude of α is seen to yield a spectrum flattened towards ${k}^{-\mathrm{7}/\mathrm{3}}$ scaling and also to delay the formation of the ${k}^{-\mathrm{6}}$ cascade in the dissipation range. Figure 12 also shows the spectral scaling obtained from the density-weighted kinetic energy spectra, where scaling behavior corresponding to ${k}^{-\mathrm{7}/\mathrm{3}}$ is seen for all α values. This suggests that the two-dimensional configuration of the test case is affected by the packaging of density content at different scales. The dissipation zone shows a similar behavior using this metric, where a delay in scaling with ${k}^{-\mathrm{6}}$ is obtained by an increase in the magnitude of α. We can conclude that the density-weighted spectrum becomes a more universal representation for various degrees of compressibility. Figure 13 shows the effect of the parameter α on the compressibility of the two-dimensional turbulence case through the use of compensated energy spectra, where the distance from the origin in Fourier space (in other words, k) is used to weight the instantaneous energy content. We only present the compensated energy distribution in the first quadrant of the Fourier space. At α=1.0 one can observe a distinct loss of isotropy in the energy content of the solution field (in spectral space), which corresponds to an enhanced compressibility. In comparison, α=0.5 and α=0.25 display a behavior which is rather isotropic in nature, indicating weak compressibility. To demonstrate the effect of density more clearly, we present the difference spectra for the 2-D KHI turbulence in Fig. 14.
Here, we compute the spectrum of the difference between the velocity u and the normalized density-weighted velocity $\sqrt{\mathit{\rho }}\mathbit{u}/〈\sqrt{\mathit{\rho }}〉$, where $〈\sqrt{\mathit{\rho }}〉$ denotes the spatial average of the square root of the density. The results show a clear inertial range with the ${k}^{-\mathrm{5}/\mathrm{3}}$ scaling. This is a manifestation of the density effect in 2-D KHI turbulence.
Figure 15. Helmholtz decomposition of energy spectra into compressive (curl-free) and solenoidal (divergence-free) parts for 3-D KHI turbulence. (a) Compensated compressive spectra from u, (b) compensated compressive spectra from $\mathbit{\omega }=\sqrt{\mathit{\rho }}\mathbit{u}$, (c) compensated solenoidal spectra from u, and (d) compensated solenoidal spectra from $\mathbit{\omega }=\sqrt{\mathit{\rho }}\mathbit{u}$.
Figure 16. Helmholtz decomposition of energy spectra into compressive (curl-free) and solenoidal (divergence-free) parts for 2-D KHI turbulence. (a) Compensated compressive spectra from u, (b) compensated compressive spectra from $\mathbit{\omega }=\sqrt{\mathit{\rho }}\mathbit{u}$, (c) compensated solenoidal spectra from u, and (d) compensated solenoidal spectra from $\mathbit{\omega }=\sqrt{\mathit{\rho }}\mathbit{u}$.
## 3.3 Helmholtz decomposition
To study the effect of compressibility in more detail, we perform the Helmholtz decomposition to compute energy spectra from the curl-free and divergence-free components of the velocity field. This decomposition has been extensively used in turbulence studies (i.e., see ). In our present work, we investigate the behavior of energy spectra using both the kinematic velocity and density-weighted velocity fields in 2-D and 3-D KHI turbulence problems.
Let v be a vector field in ${\mathbb{R}}^{n}$ (e.g., v could be the kinetic velocity field u or the density-weighted velocity field $\mathbit{\omega }=\sqrt{\mathit{\rho }}\mathbit{u}$); then, v can be decomposed into a curl-free component and a divergence-free component (Aris2012): $\begin{array}{}\text{(18)}& \mathbit{v}=\mathrm{\nabla }\mathit{\varphi }+\mathrm{\nabla }×\mathbit{A},\end{array}$ which can be rewritten as $\begin{array}{}\text{(19)}& \mathbit{v}={\mathbit{v}}^{\mathrm{c}}+{\mathbit{v}}^{\mathrm{s}},\end{array}$ where vc=∇ϕ is the compressive (curl-free) component since the curl of a gradient of any scalar field ϕ is zero, and ${\mathbit{v}}^{\mathrm{s}}=\mathrm{\nabla }×\mathbit{A}$ is the solenoidal (divergence-free) component since the divergence of a curl of any vector field A is zero. Taking the divergence of Eq. (18) yields the following Poisson equation: $\begin{array}{}\text{(20)}& \mathrm{\nabla }\cdot \mathbit{v}={\mathrm{\nabla }}^{\mathrm{2}}\mathit{\varphi },\end{array}$ which can be solved for ϕ efficiently using an FFT procedure since v is provided as a quantity of interest that we would like to decompose into two parts. Once ϕ is computed, the compressive and solenoidal parts can be easily computed as follows: $\begin{array}{}\text{(21)}& & {\mathbit{v}}^{\mathrm{c}}=\mathrm{\nabla }\mathit{\varphi },\text{(22)}& & {\mathbit{v}}^{\mathrm{s}}=\mathbit{v}-{\mathbit{v}}^{\mathrm{c}}.\end{array}$ We note that there would be infinitely many candidates for the compressive component since the multiplication of ϕ by any arbitrary constant after solving the Poisson equation would still yield a curl-free velocity field. However, the energy spectrum scaling behaviors would remain identical for each realization. Figure 15 presents the compensated energy spectra for the 3-D KHI problem using both definitions of the velocity vector field (i.e., the kinematic velocity and the density-weighted velocity).
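Equations (18)–(22) translate directly into a few lines of FFT code; the following is a minimal 2-D sketch for a doubly periodic unit box (axis 0 is taken as x; this pseudo-spectral route is an assumption of the sketch, not a statement of the authors' exact implementation):

```python
import numpy as np

def helmholtz_decompose(u, v):
    """Split a doubly periodic 2-D vector field (u, v) into compressive
    (curl-free) and solenoidal (divergence-free) parts, Eqs. (18)-(22),
    by solving the Poisson equation (20) for phi in Fourier space."""
    n = u.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=1.0 / n)   # wavenumbers for L = 1
    kx, ky = k[:, None], k[None, :]
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                                   # avoid 0/0 at the mean mode
    uh, vh = np.fft.fft2(u), np.fft.fft2(v)
    phi_h = -(1j * kx * uh + 1j * ky * vh) / k2      # -k^2 phi_h = div_h
    uc = np.real(np.fft.ifft2(1j * kx * phi_h))      # v^c = grad(phi), Eq. (21)
    vc = np.real(np.fft.ifft2(1j * ky * phi_h))
    return (uc, vc), (u - uc, v - vc)                # Eq. (22)
```

A gradient field should be returned entirely in the compressive part, and a divergence-free field entirely in the solenoidal part.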
We have obtained a ${k}^{-\mathrm{5}/\mathrm{3}}$ dominant scaling for the solenoidal component in both definitions. However, the compressive component demonstrates an anomalous spectrum, especially when we use the kinetic velocity definition. This anomaly can also be linked to the results of the pressure power spectra that we present in the next section. Figure 16 presents the same analysis for the case of 2-D KHI turbulence. Both compressive and solenoidal components scale with the ${k}^{-\mathrm{5}/\mathrm{3}}$ slope for the density-weighted velocity field. However, there is a clear difference between the results for various values of α when we look at the Helmholtz decomposition of the kinetic velocity field. The solenoidal inertial range scaling becomes ${k}^{-\mathrm{3}}$ for lower α values, which is consistent with Kraichnan theory. However, the scaling flattens and gets closer to ${k}^{-\mathrm{2}}$ for increasing α, which is also consistent with the Kadomtsev–Petviashvili spectrum for acoustic turbulence.
## 3.4 Density power spectrum
Observations on the density power spectrum have played an important role in astrophysics applications . Although it has been established that the density power spectrum exhibits an inertial scaling of ${k}^{-\mathrm{5}/\mathrm{3}}$ , similar to the Kolmogorov energy spectrum, it has also been demonstrated, for a three-dimensional weakly compressible hydrodynamic turbulence setup, that this scaling depends on the flow regime as well as the initial conditions. By studying weakly compressible two-dimensional flows, it has been shown that the density spectrum scales between ${k}^{-\mathrm{1}}$ and ${k}^{-\mathrm{5}}$ for nonuniform and uniform entropy cases, respectively, together with a thorough discussion of state-of-the-art computations and scaling law observations for the density power spectrum.
Figure 17. Spherical-averaged density power spectra for 3-D KHI turbulence (a) and its compensated form (b).
Figure 18. Angle-averaged density power spectra for 2-D KHI turbulence (a) and its compensated form (b).
In order to quantify the effect of the scale content of density alone, we devise a power spectrum that reflects the average packaging of density over different scales at any given time in the simulation. This may be given by the following expression: $\begin{array}{}\text{(23)}& \mathbf{\Gamma }\left(\mathbf{k},t\right)=\frac{\mathrm{1}}{\mathrm{2}}|\stackrel{\mathrm{^}}{\mathit{\rho }}\left(\mathbf{k},t\right){|}^{\mathrm{2}},\end{array}$ followed by angle averaging, which leads to $\begin{array}{}\text{(24)}& \mathrm{\Gamma }\left(k,t\right)=\sum _{k-\frac{\mathrm{1}}{\mathrm{2}}\le |{\mathbf{k}}^{\prime }|<k+\frac{\mathrm{1}}{\mathrm{2}}}\mathbf{\Gamma }\left({\mathbf{k}}^{\prime },t\right).\end{array}$ Observations regarding the difference in scaling behavior of the kinetic energy and density-weighted kinetic energy spectra motivate a comparison of the scaling behavior of the density power spectra for both our two- and three-dimensional test cases. Figure 17 shows the density power spectra for the three-dimensional turbulence test case, where it can be seen that a five-thirds law is followed for the arrangement of density content in the solution field. A dissipation range scaling of ${k}^{-\mathrm{6}}$ can also be observed. It can be seen that the variation of the parameter α does not seem to affect the scaling behavior appreciably. Figure 18 shows a similar examination for the two-dimensional test case, where a considerable difference in scaling behavior is observed. The imposition of two-dimensional turbulence leads to a considerable alteration in the scaling behavior of the density power spectrum, with a ${k}^{-\mathrm{5}/\mathrm{3}}$ scaling observed in the inertial range and a ${k}^{-\mathrm{3}}$ scaling in the dissipation range. In fact, this packaging of density consequently affects the density-weighted kinetic energy spectra described in Fig. 12.
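Equations (23)–(24) reuse the unit-bandwidth shell sum of Eq. (16), applied to the Fourier transform of the density; a hedged NumPy sketch (the 1/N² FFT normalization is again a choice of this sketch):

```python
import numpy as np

def density_power_spectrum(rho):
    """Angle-averaged density power spectrum, Eqs. (23)-(24): per-mode
    0.5*|rho_hat|^2 summed over unit-bandwidth shells k-1/2 <= |k'| < k+1/2."""
    n = rho.shape[0]
    rho_h = np.fft.fft2(rho) / n**2             # normalized Fourier coefficients
    g = 0.5 * np.abs(rho_h)**2                  # per-mode contribution, Eq. (23)
    kx = np.fft.fftfreq(n, d=1.0 / n)           # integer mode numbers
    kmag = np.sqrt(kx[:, None]**2 + kx[None, :]**2)
    shells = np.rint(kmag).astype(int)          # unit-bandwidth shell index
    return np.bincount(shells.ravel(), weights=g.ravel(), minlength=n)[: n // 2]
```

A density field with a single harmonic ripple puts the mean in shell 0 and the ripple energy in its own shell, which makes the binning easy to verify.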
The intercomparison of the two- and three-dimensional statistical quantities suggests that the density power spectrum (i.e., the arrangement of density at different wavenumbers) plays an important role at increased compressibility: in two dimensions, its ${k}^{-\mathrm{5}/\mathrm{3}}$ scaling drives the deviation of the density-weighted kinetic energy spectrum from the ${k}^{-\mathrm{3}}$ scaling associated with two-dimensional incompressible turbulence towards a ${k}^{-\mathrm{7}/\mathrm{3}}$ scaling for α=1.0 in the same test case. In contrast, the ${k}^{-\mathrm{5}/\mathrm{3}}$ density power spectrum of three-dimensional turbulence causes no variation in scaling behavior with increased compressibility and leads to similar scaling behaviors for both the averaged kinetic energy spectra and the averaged density-weighted kinetic energy spectra, as seen in Fig. 10. This is one of the central conclusions of this investigation.
Figure 19. Spherical-averaged pressure power spectra for 3-D KHI turbulence (a) and its compensated form (b).
Figure 20. Angle-averaged pressure power spectra for 2-D KHI turbulence (a) and its compensated form (b).
## 3.5 Pressure power spectrum
Similar to the density power spectrum defined in Eq. (23), the pressure power spectrum can be computed as $\begin{array}{}\text{(25)}& \mathbf{\Pi }\left(\mathbf{k},t\right)=\frac{\mathrm{1}}{\mathrm{2}}|\stackrel{\mathrm{^}}{p}\left(\mathbf{k},t\right){|}^{\mathrm{2}},\end{array}$ and its angle-averaged form reads as $\begin{array}{}\text{(26)}& \mathrm{\Pi }\left(k,t\right)=\sum _{k-\frac{\mathrm{1}}{\mathrm{2}}\le |{\mathbf{k}}^{\prime }|<k+\frac{\mathrm{1}}{\mathrm{2}}}\mathbf{\Pi }\left({\mathbf{k}}^{\prime },t\right).\end{array}$ As discussed in , the pressure spectrum can be expressed as $\mathrm{\Pi }\left(k\right)\propto kE\left(k{\right)}^{\mathrm{2}}$ by considering dimensional arguments. Indeed, this yields a pressure spectrum scaling of ${k}^{-\mathrm{7}/\mathrm{3}}$ for the Kolmogorov regime and a pressure spectrum scaling of ${k}^{-\mathrm{5}}$ for the Kraichnan regime. Figures 19 and 20 demonstrate the pressure power spectra for the 3-D and 2-D KHI problems, respectively.
In the 3-D case, it is clear that our results are consistent with the theoretical estimate of ${k}^{-\mathrm{7}/\mathrm{3}}$ scaling for all values of the compressibility parameter α. However, in 2-D turbulence we only observe ${k}^{-\mathrm{5}}$ scaling for smaller scales (i.e., higher wavenumbers). Particularly for weaker compressibility, given by the α=0.25 case, the ${k}^{-\mathrm{5}}$ scaling starts earlier. Figure 20 clearly illustrates that the pressure power spectrum inertial scaling becomes ${k}^{-\mathrm{5}/\mathrm{3}}$ for stronger compressibility. These results indicate that the pressure power spectrum can be a useful tool for characterizing two-dimensional compressible turbulence.
Figure 21. Second-order velocity structure functions for 3-D KHI turbulence. (a) Longitudinal structure function (${u}_{\parallel }$), (b) transverse structure function (${u}_{\perp }$), (c) compensated form of the longitudinal one, and (d) compensated form of the transverse one.
## 3.6 Velocity structure functions
Statistical inferences about the nature of compressible turbulence may also be drawn through the use of velocity structure functions, which also show scaling tendencies according to the physics of the solution field . A velocity structure function may be expressed as $\begin{array}{}\text{(27)}& {S}^{p}\left(r\right)=〈\left(\mathbit{u}\left(\mathbit{x}+\mathbit{r}\right)-\mathbit{u}\left(\mathbit{x}\right){\right)}^{p}〉,\end{array}$ where the ensemble averaging is taken over all positions x and all orientations of r within the computational domain to yield statistics for the length scale $r=|\mathbit{r}|$. Our choice of p determines the order of the structure function we are examining, and this investigation looks at p=2 for the characterization of turbulence in both two and three dimensions. The second-order structure function has been used to characterize both 2-D (e.g., see ) and 3-D (e.g., see ) turbulent flows.
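The p=2 case of Eq. (27) can be estimated with simple periodic shifts; the sketch below, for brevity, samples only axis-aligned orientations of r on a unit-length periodic grid, whereas the text averages over all orientations:

```python
import numpy as np

def second_order_structure_function(u, v, separations):
    """Estimate S2(r) = <|u(x + r) - u(x)|^2>, i.e., Eq. (27) with p = 2,
    on a periodic N x N grid of unit length.  The ensemble average runs
    over all grid points; only x- and y-oriented offsets are sampled."""
    n = u.shape[0]
    s2 = []
    for r in separations:
        shift = int(round(r * n))   # separation expressed in grid cells
        acc = 0.0
        for axis in (0, 1):         # x- and y-oriented increments
            du = np.roll(u, -shift, axis=axis) - u
            dv = np.roll(v, -shift, axis=axis) - v
            acc += np.mean(du**2 + dv**2)
        s2.append(acc / 2.0)        # average over the two orientations
    return np.array(s2)
```

For a single-mode field the increment statistics can be written down in closed form, which gives a quick correctness check before applying the estimator to turbulence data.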
We note that some researchers have preferred to use the absolute value definition, which might change the results for odd values of $p$ (e.g., see , for a discussion of the various definitions of the structure functions). For the 2-D turbulence setting, a scaling law of $r^{n-1}$ was predicted in , where $n$ refers to the scaling exponent of the energy spectrum (i.e., $E(k) \propto k^{-n}$). In 3-D turbulence, the scaling of $r^{p/3}$ has been established for the $p$th structure function. Both longitudinal and transverse second-order velocity structure functions are computed in the present study. In our assessments, the range $10^{-2} \le r \le 10^{-1}$ is assumed to represent the general vicinity of the inertial range.

Figure 22: Second-order velocity structure functions for 2-D KHI turbulence. (a) Longitudinal structure function, (b) transverse structure function, (c) compensated form of the longitudinal one, and (d) compensated form of the transverse one.

We utilize the high-fidelity data of the previously described numerical experiments for two- and three-dimensional turbulence to obtain structure function statistics at time $t=5$. Figure 21 shows the second-order velocity structure function for the longitudinal and transverse directions for the 3-D test case. One can observe a steadily increasing alignment with $r^{2/3}$ as $\alpha$ decreases, implying weaker compressibility. It is worth mentioning here that Kolmogorov theory dictates a cascade given by $p/3$. Similar trends are observed for both longitudinal and transverse directions, suggesting that a certain degree of isotropy now characterizes the system. For ranges of $r$ below $10^{-2}$, it is observed that both longitudinal and transverse structure functions scale according to $r^{2}$ for the second-order structure function.
We undertake a similar statistical examination for our two-dimensional test case, with second-order longitudinal and transverse structure functions given in Fig. 22, where it is observed that at low $r$ a scaling corresponding to $r^{2}$ holds. This is in accordance with findings in . At larger values of $r$, the $r^{2}$ scaling transitions to an $r^{4/3}$ scaling at relatively higher compressibility (i.e., $\alpha=1.0$) and an $r$ scaling at $\alpha=0.25$. Eventually, it is expected that an $r^{2/3}$ behavior must emerge with perfect incompressibility. The aforementioned observations hold true for both longitudinal and transverse second-order structure functions and are consistent with the definition $S(r) \propto r^{n-1}$. It can be observed that the velocity structure functions for three-dimensional simulations generally obey the prediction of Kolmogorov theory (for lower values of $\alpha$, indicating weak compressibility), in contrast to their two-dimensional counterparts.

## 4 Conclusions

In this investigation, data from high-fidelity numerical experiments are utilized to study the scaling behavior of statistical quantities such as spectra and structure functions. We study two test cases given by the Kelvin–Helmholtz instability problem in two and three dimensions to examine spectral scaling laws for compressible shear layer turbulence. Our spectra are given by the averaged kinetic energy magnitude and the averaged density-weighted kinetic energy magnitude, and it is observed that while both quantities exhibit similar trends in three dimensions, the density-weighted kinetic energy spectra show varying scaling tendencies in two dimensions. This is demonstrated by a flattening of the density-weighted energy spectra, expected to exhibit $k^{-3}$ scaling in the incompressible limit, to $k^{-7/3}$ scaling for higher compressibility. Variations are also seen in the scaling of the dissipation range.
This prompts us to investigate the density power spectrum and the pressure power spectrum for both the two- and three-dimensional cases, where two distinct inertial and dissipation range behaviors can be observed. For the density power spectrum, both the three-dimensional and two-dimensional cases show a $k^{-5/3}$ scaling behavior in the inertial range with a $k^{-6}$ scaling in the dissipation range. This demonstrates that the scaling laws for the kinetic energy and density power spectra coincide with each other only for three-dimensional flows. The pressure power spectrum analysis also demonstrates that the results are less invariant to variations in the compressibility parameter for the two-dimensional KHI problem. The scaling behavior exhibited by the density and pressure power spectra for the two-dimensional test, combined with the trends observed in the energy spectrum and structure function analyses, indicates that nonlinear processes exhibiting extreme aspect ratios may have a fundamentally different set of nonlinear interactions as compared to moderate aspect ratios (which may be classified as three-dimensional). Incorporating the effect of boundary conditions, which inevitably introduces large-scale anisotropy into the scaling tendencies exhibited here, would account for further interesting deviations from the three-dimensional counterparts. This remains a topic of focus for future investigation.

Data availability. All synthetic data generated or analyzed during this study are included in the published article.

Author contributions. Omer San conceived the presented study and performed the computations. Romit Maulik helped in writing the paper. Both authors discussed the results and contributed to the final manuscript.

Competing interests. The authors declare that they have no conflict of interest.

Acknowledgements.
The authors are very grateful to the editor and four anonymous referees for their useful comments and suggestions published on the NPGD website that helped us improve the presentation of this paper. The helpful comments from Bhimsen Shivamoggi (University of Central Florida) and Bohua Sun (Cape Peninsula University of Technology) are also appreciated. All numerical experiments have been performed using the resources of the Oklahoma State University High Performance Computing (OSU-HPCC) facilities. Edited by: Ioulia Tchiguirinskaia Reviewed by: four anonymous referees References Aluie, H.: Scale decomposition in compressible turbulence, Physica D, 247, 54–65, 2013. a Aris, R.: Vectors, tensors and the basic equations of fluid mechanics, Dover Publications, Inc., New York, USA, 2012. a Armstrong, J., Cordes, J., and Rickett, B.: Density power spectrum in the local interstellar medium, Nature, 291, 561–564, 1981. a Arneodo, A., Baudet, C., Belin, F., Benzi, R., Castaing, B., Chabaud, B., Chavarria, R., Ciliberto, S., Camussi, R., and Chilla, F.: Structure functions in turbulence, in various flow configurations, at Reynolds number between 30 and 5000, using extended self-similarity, Europhys. Lett., 34, 411–416, 1996. a Babiano, A., Claude, B., and Sadourny, R.: Structure functions and dispersion laws in two-dimensional turbulence, J. Atmos. Sci., 42, 941–949, 1985. a, b, c Banerjee, S. and Galtier, S.: Exact relation with two-point correlation functions and phenomenological approach for compressible magnetohydrodynamic turbulence, Phys. Rev. E, 87, 013019, https://doi.org/10.1103/PhysRevE.87.013019, 2013. a Batchelor, G. K.: Computation of the Energy Spectrum in Homogeneous Two-Dimensional Turbulence, Phys. Fluids, 12, II–233, 1969. a Bayly, B., Levermore, C., and Passot, T.: Density variations in weakly compressible flows, Phys. Fluids, 4, 945–954, 1992. 
a Bershadskii, A.: Distributed chaos and inertial ranges in turbulence, arXiv preprint arXiv:1609.01617, available at: https://arxiv.org/abs/1609.01617 (last access: 27 June 2018), 2016. a Biskamp, D. and Schwarz, E.: On two-dimensional magnetohydrodynamic turbulence, Phys. Plasmas, 8, 3282–3292, 2001. a Blaisdell, G. A., Mansour, N. N., and Reynolds, W. C.: Compressibility effects on the growth and structure of homogeneous turbulent shear flow, J. Fluid Mech., 256, 443–485, 1993. a Boffetta, G. and Ecke, R. E.: Two-dimensional turbulence, Annu Rev. Fluid Mech., 44, 427–451, 2012. a, b, c Boffetta, G. and Mazzino, A.: Incompressible Rayleigh-Taylor Turbulence, Annu. Rev. Fluid. Mech., 49, 119–143, 2017. a Bos, W. J. and Bertoglio, J. P.: Dynamics of spectrally truncated inviscid turbulence, Phys. Fluids, 18, 071701, https://doi.org/10.1063/1.2219766, 2006. a Clercx, H. J. H. and van Heijst, G. J. F.: Dissipation of coherent structures in confined two-dimensional turbulence, Phys. Fluids, 29, 111103, https://doi.org/10.1063/1.4993488, 2017. a Domaradzki, J. A. and Carati, D.: An analysis of the energy transfer and the locality of nonlinear interactions in turbulence, Phys. Fluids, 19, 085112, https://doi.org/10.1063/1.2772248, 2007. a Donzis, D. A. and Jagannathan, S.: Fluctuations of thermodynamic variables in stationary compressible turbulence, J. Fluid Mech., 733, 221–244, 2013. a Falceta-Gonçalves, D., Kowal, G., Falgarone, E., and Chian, A. C.-L.: Turbulence in the interstellar medium, Nonlin. Processes Geophys., 21, 587–604, https://doi.org/10.5194/npg-21-587-2014, 2014. a Falkovich, G. and Kritsuk, A. G.: How vortices and shocks provide for a flux loop in two-dimensional compressible turbulence, Phys. Rev. Fluids, 2, 092603, https://doi.org/10.1103/PhysRevFluids.2.092603, 2017. a Falkovich, G., Fouxon, I., and Oz, Y.: New relations for correlation functions in Navier–Stokes turbulence, J. Fluid Mech., 644, 465–472, 2010. a Goldreich, P. 
and Sridhar, S.: Magnetohydrodynamic turbulence revisited, Astrophys. J., 485, 680–688, 1997. a Grossmann, S. and Mertens, P.: Structure functions in two-dimensional turbulence, Z. Phys. B. Con. Mat., 88, 105–116, 1992. a Hopfinger, E. J.: Turbulence in stratified fluids: A review, J. Geophys. Res.-Oceans, 92, 5287–5303, 1987. a Hwang, K., Goldstein, M. L., Kuznetsova, M. M., Wang, Y., Viñas, A. F., and Sibeck, D. G.: The first in situ observation of Kelvin-Helmholtz waves at high-latitude magnetopause during strongly dawnward interplanetary magnetic field conditions, J. Geophys. Res.-Space, 117, A08233, https://doi.org/10.1029/2011JA017256, 2012. a Iroshnikov, P. S.: Turbulence of a conducting fluid in a strong magnetic field, Sov. Astron., 7, 566–571, 1964. a Iyer, K. P., Sreenivasan, K. R., and Yeung, P. K.: Reynolds number scaling of velocity increments in isotropic turbulence, Phys. Rev. E, 95, 021101, https://doi.org/10.1103/PhysRevE.95.021101, 2017. a Jagannathan, S. and Donzis, D. A.: Reynolds and Mach number scaling in solenoidally-forced compressible turbulence using high-resolution direct numerical simulations, J. Fluid Mech., 789, 669–707, 2016. a Kadomtsev, B. B. and Petviashvili, V. I.: Acoustic turbulence, Sov. Phys. Dokl., 18, 115–115, 1973. a Kida, S. and Orszag, S. A.: Energy and spectral dynamics in forced compressible turbulence, J. Sci. Comput., 5, 85–125, 1990. a Kida, S., Murakami, Y., Ohkitani, K., and Yamada, M.: Energy and flatness spectra in a forced turbulence, J. Phys. Soc. Jpn, 59, 4323–4330, 1990. a Kolmogorov, A. N.: The local structure of turbulence in incompressible viscous fluid for very large Reynolds numbers, in: Dokl. Akad. Nauk SSSR+, 30, 299–303, 1941. a, b, c Kraichnan, R. H.: Inertial-range spectrum of hydromagnetic turbulence, Phys. Fluids, 8, 1385–1387, 1965. a Kraichnan, R. H.: Inertial ranges in two-dimensional turbulence, Phys. Fluids, 10, 1417–1423, 1967. a, b Kritsuk, A. G., Norman, M. 
L., Padoan, P., and Wagner, R.: The statistics of supersonic isothermal turbulence, Astrophys. J., 665, 416–431, 2007. a, b, c Kuznetsov, E. A. and Sereshchenko, E. V.: Anisotropic characteristics of the Kraichnan direct cascade in two-dimensional hydrodynamic turbulence, JETP Lett+, 102, 760–765, 2015. a Leith, C. E.: Atmospheric predictability and two-dimensional turbulence, J. Atmos. Sci., 28, 145–161, 1971. a Lele, S. K.: Compressibility effects on turbulence, Annu. Rev. Fluid Mech., 26, 211–254, 1994. a, b Lesieur, M., Ossia, S., and Metais, O.: Infrared pressure spectra in two-and three-dimensional isotropic incompressible turbulence, Phys. Fluids, 11, 1535–1543, 1999. a Mac Low, M. and Klessen, R. S.: Control of star formation by supersonic turbulence, Rev. Mod. Phys., 76, 125–194, 2004. a Mac Low, M., Klessen, R. S., Burkert, A., and Smith, M. D.: Kinetic energy decay rates of supersonic and super-Alfvénic turbulence in star-forming clouds, Phys. Rev. Lett., 80, 2754, https://doi.org/10.1103/PhysRevLett.80.2754, 1998. a Maulik, R. and San, O.: Resolution and energy dissipation characteristics of implicit LES and explicit filtering models for compressible turbulence, Fluids, 2, 14, https://doi.org/10.3390/fluids2020014, 2017. a Moin, A. S. and Yaglom, A. M.: Statistical Fluid Mechanics: Mechanics of Turbulence, vol. 2, edited by: Lumley, J., M.I.T. Press, Cambridge, Massachusetts, USA, 1975. a Moura, R. C., Mengaldo, G., Peiró, J., and Sherwin, S. J.: On the eddy-resolving capability of high-order discontinuous Galerkin approaches to implicit LES/under-resolved DNS of Euler turbulence, J. Comput. Phys., 330, 615–623, 2017. a Ottaviani, M.: Scaling laws of test particle transport in two-dimensional turbulence, Europhys. Lett., 20, 111–116, 1992. a Padoan, P. and Nordlund, Å.: The stellar initial mass function from turbulent fragmentation, Astrophys. J., 576, 870–879, 2002. 
a Passot, T., Pouquet, A., and Woodward, P.: The plausibility of Kolmogorov-type spectra in molecular clouds, Astron. Astrophys., 197, 228–234, 1988. a Peltier, W. R. and Caulfield, C. P.: Mixing efficiency in stratified shear flows, Annu. Rev. Fluid Mech., 35, 135–167, 2003. a Press, W. H., Teukolsky, S. A., Vetterling, W. T., and Flannery, B. P.: Numerical recipes in Fortran 90, Cambridge University Press, Cambridge, UK, 1996. a Qiu, X., Ding, L., Huang, Y., Chen, M., Lu, Z., Liu, Y., and Zhou, Q.: Intermittency measurement in two-dimensional bacterial turbulence, Phys. Rev. E, 93, 062226, https://doi.org/10.1103/PhysRevE.93.062226, 2016. a Roe, P. L.: Approximate Riemann solvers, parameter vectors, and difference schemes, J. Comput. Phys., 43, 357–372, 1981. a Sagaut, P. and Cambon, C.: Homogeneous turbulence dynamics, vol. 10, Springer, Cham, Switzerland, 2008. a Shaikh, D. and Zank, G.: The turbulent density spectrum in the solar wind plasma, Mon. Not. R. Astron. Soc., 402, 362–370, 2010. a Shivamoggi, B. K.: Spectral laws for the compressible isotropic turbulence, Phys. Lett. A, 166, 243–248, 1992. a, b, c Shivamoggi, B. K.: Spatial intermittency in the classical two-dimensional and geostrophic turbulence, Ann. Phys.-New York, 270, 263–291, 1998. a Shivamoggi, B. K.: Magnetohydrodynamic turbulence: Generalized formulation and extension to compressible cases, Ann. Phys.-New York, 323, 1295–1303, 2008. a Shivamoggi, B. K.: Compressible turbulence: Multi-fractal scaling in the transition to the dissipative regime, Physica A, 390, 1534–1538, 2011. a Shivamoggi, B. K.: Singularities in fully developed turbulence, Phys. Lett. A, 379, 1887–1892, 2015. a Sun, B.: The temporal scaling laws of compressible turbulence, Mod. Phys. Lett. B, 30, 1650297, https://doi.org/10.1142/S0217984916502973, 2016. a Sun, B.: Scaling laws of compressible turbulence, Appl. Math. Mech., 38, 765–778, 2017. a Sytine, I. V., Porter, D. H., Woodward, P. R., Hodson, S. 
W., and Winkler, K.: Convergence tests for the piecewise parabolic method and Navier-Stokes solutions for homogeneous compressible turbulence, J. Comput. Phys., 158, 225–238, 2000. a Terakado, D. and Hattori, Y.: Density distribution in two-dimensional weakly compressible turbulence, Phys. Fluids, 26, 085105, https://doi.org/10.1063/1.4892460, 2014. a Thomson, W.: Hydrokinetic solutions and observations, Philos. Mag., 42, 362–377, 1871. a Vassilicos, J. C.: Dissipation in turbulent flows, Annu. Rev. Fluid Mech., 47, 95–114, 2015. a Wang, J., Yang, Y., Shi, Y., Xiao, Z., He, X. T., and Chen, S.: Cascade of kinetic energy in three-dimensional compressible turbulence, Phys. Rev. Lett., 110, 214505, https://doi.org/10.1103/PhysRevLett.110.214505, 2013. a Wang, J., Gotoh, T., and Watanabe, T.: Scaling and intermittency in compressible isotropic turbulence, Phys. Rev. Fluids, 2, 053401, https://doi.org/10.1103/PhysRevFluids.2.053401, 2017. a Wang, J., Wan, M., Chen, S., Xie, C., and Chen, S.: Effect of shock waves on the statistics and scaling in compressible isotropic turbulence, Phys. Rev. E, 97, 043108, https://doi.org/10.1103/PhysRevE.97.043108, 2018. a Werne, J. and Fritts, D. C.: Stratified shear turbulence: Evolution and statistics, Geophys. Res. Lett., 26, 439–442, 1999.  a Westernacher-Schneider, J. R. and Lehner, L.: Numerical measurements of scaling relations in two-dimensional conformal fluid turbulence, J. High Energy Phys., 2017, 27, https://doi.org/10.1007/JHEP08(2017)027, 2017. a Westernacher-Schneider, J. R., Lehner, L., and Oz, Y.: Scaling relations in two-dimensional relativistic hydrodynamic turbulence, J. High. Energy Phys., 2015, 1–31, 2015. a Zhou, Y.: Rayleigh–Taylor and Richtmyer–Meshkov instability induced flow, turbulence, and mixing. I, Phys. Rep., 723–725, 1–136, 2017a. a Zhou, Y.: Rayleigh–Taylor and Richtmyer–Meshkov instability induced flow, turbulence, and mixing. II, Phys. Rep., 723–725, 1–160, 2017b. a Zhou, Y., Grinstein, F. 
F., Wachtor, A. J., and Haines, B. M.: Estimating the effective Reynolds number in implicit large-eddy simulation, Phys. Rev. E, 89, 013303, https://doi.org/10.1103/PhysRevE.89.013303, 2014. a
# msmtools.flux.coarsegrain¶

msmtools.flux.coarsegrain(F, sets)

Coarse-grains the flux to the given sets.

Parameters: F ((n, n) ndarray or scipy.sparse matrix) – Matrix of flux values between pairs of states. sets (list of array-like of ints) – The sets of states onto which the flux is coarse-grained.

Notes

The coarse-grained flux is defined as

$fc_{I,J} = \sum_{i \in I,j \in J} f_{i,j}$

Note that if you coarse-grain a net flux, it does not necessarily have the net flux property anymore. If you want to make sure you get a net flux, use to_netflux(coarsegrain(F, sets)).

References

[1] F. Noe, Ch. Schuette, E. Vanden-Eijnden, L. Reich and T. Weikl: Constructing the Full Ensemble of Folding Pathways from Short Off-Equilibrium Simulations. Proc. Natl. Acad. Sci. USA, 106, 19011-19016 (2009)
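A minimal dense-matrix sketch of the formula above (our own illustration; the actual msmtools implementation also supports sparse matrices):

```python
import numpy as np

def coarsegrain(F, sets):
    """Coarse-grain a flux matrix onto the given sets of states:
    Fc[I, J] = sum over i in I, j in J of F[i, j]."""
    F = np.asarray(F)
    m = len(sets)
    Fc = np.zeros((m, m))
    for I, A in enumerate(sets):
        for J, B in enumerate(sets):
            # np.ix_ selects the |A| x |B| sub-block of F
            Fc[I, J] = F[np.ix_(list(A), list(B))].sum()
    return Fc
```

For example, coarse-graining a 4-state flux onto the sets {0, 1} and {2, 3} simply sums the four 2x2 blocks of the matrix.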
# LSI for Gaussian measure in $(\mathbb{R}^d)^{\mathbb{Z}^d}$

I am looking for a reference: does Gaussian measure satisfy the logarithmic Sobolev inequality (LSI) on $(\mathbb{R}^d)^{\mathbb{Z}^d}$? Thanks.

- Can you clarify what space you're looking at? Do you mean the space of functions $\mathbb{Z}^d \to \mathbb{R}^d$, or equivalently a countably infinite product of copies of $\mathbb{R}^d$? – Mark Meckes Sep 7 '12 at 13:17

If so, and you mean standard Gaussian measure, then this is the same as asking about standard Gaussian measure on $\mathbb{R}^\mathbb{N}$, which should be dealt with in most standard references on LSIs. – Mark Meckes Sep 7 '12 at 13:19

It is the case of standard Gaussian measure on a product of countably infinite copies of $\mathbb{R}^d$. Could you please mention any references on this? All I could find is the LSI for standard GM on $\mathbb{R}^d$, but not the product space. – Ahsan Sep 7 '12 at 14:46

As in Mark Meckes's comment, this is equivalent to log-Sobolev for standard Gaussian measure on $\mathbb{R}^\mathbb{N}$, which in turn follows immediately from the finite-dimensional case. In order to show the log-Sobolev inequality $$\int |f|^2 \ln |f| \, d\mu \le \int \|\nabla f\|^2 \, d\mu + \frac{1}{2} \int |f|^2 \, d\mu \, \ln \int |f|^2 \, d\mu$$ it is sufficient to prove it for smooth cylinder functions $f \in L^2(\mathbb{R}^\mathbb{N}, \mu)$, i.e. those which depend on only finitely many coordinates. But for $f$ depending on $n$ coordinates, this is precisely the log-Sobolev inequality for standard Gaussian measure on $\mathbb{R}^n$. (Unlike the classical Sobolev inequality, there is no dimension-dependent constant!)
# LaTeX table non-split cell with diagonal split colour Kind of a weird circumstance, but I have a table cell where I need the text to be in the cell normally (non-split) and the cell colour to be split diagonally. I know how to use slashbox to split the cell, but there doesn't seem to be any way to split the colour without also splitting the text. Basically, I need it to look like this: Any suggestions would be greatly appreciated. • Questions seeking debugging help ("why isn't this code working?") must include the desired behavior, a specific problem or error and the shortest code necessary to reproduce it in the question itself. Questions without a clear problem statement are not useful to other readers. See minimal working example (MWE). Oct 1, 2017 at 1:15 • Possible duplicate of Strike out a table cell. The accepted answer there can be trivially extended to produce the required output. Oct 1, 2017 at 1:17 • @HenriMenke The OP says ". I know how to use slashbox to split the cell, but there doesn't seem to be any way to split the colour without also splitting the text." I don't think the correct duplicate is the one you indicated. Oct 1, 2017 at 2:47 • @CarLaTeX, that duplicate you linked still has the text split diagonally as well. 
Oct 3, 2017 at 20:51 • Sorry, closing vote rectracted Oct 3, 2017 at 20:54 Since you didn't add an MWE, I can only suppose this is what you need: \documentclass{article} \usepackage{makecell} \usepackage{tikz} \usetikzlibrary{matrix} % code (slightly modified) from https://tex.stackexchange.com/a/343392/101651 \tikzset{ diagonal fill/.style 2 args={fill=#2, path picture={% \fill[#1] (path picture bounding box.south west) -| (path picture bounding box.north east) -- cycle;}}, reversed diagonal fill/.style 2 args={fill=#2, path picture={ \fill[#1] (path picture bounding box.north west) |- (path picture bounding box.south east) -- cycle;}} } \begin{document} \begin{table} \centering \begin{tikzpicture} \matrix[matrix of nodes, row sep=-\pgflinewidth, column sep=-\pgflinewidth, column 2/.style={nodes={text width=4.5cm}}, row 1/.style={nodes={minimum height=7ex}}, align=center, text centered, nodes={text width=2cm, text height=1.5ex, text depth=.25ex, minimum height=4ex, }] (mymatr) { |[reversed diagonal fill={green}{red}]|\makecell{Let's\\recap} & \makecell{Van Duck's \\rules}\\ 2& Look at the log\\ 3& Search on \TeX.SE\\ 4& |[diagonal fill={yellow}{orange}]|\color{blue}\bfseries Always add an MWE\\ }; \draw[thick] (mymatr-1-1.north west) -- (mymatr-1-2.north east); \draw (mymatr-1-1.south west) -- (mymatr-1-2.south east); \draw[thick] (mymatr-5-1.south west) -- (mymatr-5-2.south east); \end{tikzpicture} \end{table} \end{document} Credits to ferahfeza for the diagonal fill and reversed diagonal fill styles. With {NiceTabular} of nicematrix. That environment is similar to {tabular} (of nicematrix) but creates PGF/Tikz nodes under the cells, rows and columns. Thus, it's possible to use Tikz to draw whatever you want under the cells of the array. 
\documentclass{article} \usepackage{nicematrix} \usepackage{tikz} \begin{document} \renewcommand{\arraystretch}{1.4} \begin{NiceTabular}{ccc} \CodeBefore \begin{tikzpicture} \fill [red!15] (1-|1) -- (2-|1) -- (2-|2) -- cycle ; \fill [blue!15] (1-|1) -- (1-|2) -- (2-|2) -- cycle ; \end{tikzpicture} \Body Some text & other text & another text \\ Some text & other text & another text \\ Some text & other text & another text \\ \end{NiceTabular} \bigskip \begin{NiceTabular}{ccc}[hvlines] \CodeBefore \begin{tikzpicture} \fill [red!15] (1-|1) -- (2-|1) -- (2-|2) -- cycle ; \fill [blue!15] (1-|1) -- (1-|2) -- (2-|2) -- cycle ; \end{tikzpicture} \Body Some text & other text & another text \\ Some text & other text & another text \\ Some text & other text & another text \\ \end{NiceTabular} \end{document} You need several compilations (because nicematrix uses PGF/Tikz nodes under the hood).
# Find the frequency of exclusive group combinations based on multiple categorical columns in R

To find the frequency of exclusive group combinations in an R data frame, we can use the count function of the dplyr package along with the ungroup function. For example, if we have a data frame called df that contains four grouping columns, say Grp1, Grp2, Grp3, and Grp4, then we can count the unique group combinations in df by using the below command −

count(df,Grp1,Grp2,Grp3,Grp4)%>%ungroup()

## Example 1

Following snippet creates a sample data frame −

Class1<-sample(c("First","Second","Third"),20,replace=TRUE)
Class2<-sample(c("First","Second","Third"),20,replace=TRUE)
Class3<-sample(c("First","Second","Third"),20,replace=TRUE)
Score<-sample(1:50,20)
df1<-data.frame(Class1,Class2,Class3,Score)
df1

## Output

If you execute all the above given snippets as a single program, it generates the following output −

   Class1 Class2 Class3 Score
1   Third  First  Third    40
2   First  First Second    38
3   First Second  Third    25
4   First Second Second     2
5   First  Third  Third    12
6   First Second  First    13
7  Second  Third  Third    31
8   First  First  First    15
9   First  Third  Third    43
10 Second  First Second    28
11  First  First  Third    22
12  Third  Third  First    50
13  First Second Second    39
14  First  First  First    41
15 Second  Third  Third    49
16 Second  First  First    36
17  Third  Third  First    20
18 Second Second Second    19
19  First  Third  First     5
20 Second  First  Third    47

To load the dplyr package and find the frequency of exclusive group combinations for groups Class1, Class2, and Class3 on the above created data frame, add the following code to the above snippet −

Class1<-sample(c("First","Second","Third"),20,replace=TRUE)
Class2<-sample(c("First","Second","Third"),20,replace=TRUE)
Class3<-sample(c("First","Second","Third"),20,replace=TRUE)
Score<-sample(1:50,20)
df1<-data.frame(Class1,Class2,Class3,Score)
library(dplyr)
count(df1,Class1,Class2,Class3)%>%ungroup()

## Output

If you execute
all the above given snippets as a single program, it generates the following output −

   Class1 Class2 Class3 n
1   First  First  First 2
2   First  First Second 1
3   First  First  Third 1
4   First Second  First 1
5   First Second Second 2
6   First Second  Third 1
7   First  Third  First 1
8   First  Third  Third 2
9  Second  First  First 1
10 Second  First Second 1
11 Second  First  Third 1
12 Second Second Second 1
13 Second  Third  Third 2
14  Third  First  Third 1
15  Third  Third  First 2

## Example 2

Following snippet creates a sample data frame −

Grp1<-sample(1:2,20,replace=TRUE)
Grp2<-sample(1:2,20,replace=TRUE)
Grp3<-sample(1:2,20,replace=TRUE)
df2<-data.frame(Grp1,Grp2,Grp3)
df2

The following data frame is created −

   Grp1 Grp2 Grp3
1     2    1    1
2     1    1    1
3     1    2    1
4     2    1    1
5     2    2    2
6     1    2    2
7     2    1    2
8     1    1    2
9     2    2    1
10    1    2    2
11    2    2    2
12    1    1    1
13    2    1    1
14    1    1    2
15    2    2    2
16    1    1    2
17    2    2    2
18    1    2    2
19    2    1    1
20    2    2    2

To find the frequency of exclusive group combinations for groups Grp1, Grp2, and Grp3 on the above created data frame, add the following code to the above snippet −

Grp1<-sample(1:2,20,replace=TRUE)
Grp2<-sample(1:2,20,replace=TRUE)
Grp3<-sample(1:2,20,replace=TRUE)
df2<-data.frame(Grp1,Grp2,Grp3)
count(df2,Grp1,Grp2,Grp3)%>%ungroup()

## Output

If you execute all the above given snippets as a single program, it generates the following output −

  Grp1 Grp2 Grp3 n
1    1    1    1 2
2    1    1    2 3
3    1    2    1 1
4    1    2    2 3
5    2    1    1 4
6    2    1    2 1
7    2    2    1 1
8    2    2    2 5
# A magma with the property (xy)(zt) = x(yz)t Let $$(M,\cdot)$$ be a magma. Does the property $$(x\cdot y)\cdot(z\cdot t) = x\cdot(y\cdot z)\cdot t$$ have a special name? Thanks. • What does the right-hand side mean? It's lacking a pair of parentheses. – Gro-Tsen Dec 6 at 20:11 • If the magma has a unit, or satisfies the identity x*x=x, this is just associativity (regardless of which way you group x(yz)t). If you have a left or right band or rectangular band, the identities are consequences of band identities. I think you have three distinct magma varieties depending on which subset of groupings of x(yz)t you adopt. Starting with the linked article on magmas is a good idea. Gerhard "Searching 'Generalized Associativity' Might Help" Paseman, 2018.12.06. – Gerhard Paseman Dec 6 at 21:17
C. Santa Claus and Robot

time limit per test: 2 seconds
memory limit per test: 256 megabytes
input: standard input
output: standard output

Santa Claus has a Robot which lives on the infinite grid and can move along its lines. Given a sequence of m points p1, p2, ..., pm with integer coordinates, it can do the following: denote its initial location by p0. First, the robot will move from p0 to p1 along one of the shortest paths between them (please note that since the robot moves only along the grid lines, there can be several shortest paths). Then, after it reaches p1, it'll move to p2, again choosing one of the shortest ways, then to p3, and so on, until it has visited all points in the given order. Some of the points in the sequence may coincide; in that case the Robot will visit that point several times according to the sequence order.

While Santa was away, someone gave a sequence of points to the Robot. This sequence is now lost, but the Robot saved the protocol of its unit movements. Please find the minimum possible length of the sequence.

Input

The first line of input contains the only positive integer n (1 ≤ n ≤ 2·10^5), which equals the number of unit segments the robot traveled. The second line contains the movements protocol, which consists of n letters, each being equal to L, R, U, or D. The k-th letter stands for the direction in which the Robot traveled the k-th unit segment: L means that it moved to the left, R — to the right, U — to the top and D — to the bottom. Have a look at the illustrations for better explanation.

Output

The only line of output should contain the minimum possible length of the sequence.

Examples

Input
4
RURD

Output
2

Input
6
RRULDD

Output
2

Input
26
RRRULURURUULULLLDLDDRDRDLD

Output
7

Input
3
RLL

Output
2

Input
4
LRLR

Output
4

Note

The illustrations to the first three tests are given below. The last example illustrates that each point in the sequence should be counted as many times as it is presented in the sequence.
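A common greedy approach (a sketch of ours, not an official editorial): along one shortest-path leg between consecutive sequence points, the robot never uses two opposite directions. So scan the protocol and start a new leg whenever the current move is opposite to a direction already used in the current leg; the number of legs is the answer.

```python
def min_sequence_length(moves: str) -> int:
    """Minimum possible number of sequence points, given the move protocol."""
    opposite = {'L': 'R', 'R': 'L', 'U': 'D', 'D': 'U'}
    segments = 1        # at least one target point
    current = set()     # directions used on the current shortest-path leg
    for m in moves:
        if opposite[m] in current:
            # opposite direction seen: this leg cannot be a shortest path,
            # so the robot must have reached a target and started a new leg
            segments += 1
            current = {m}
        else:
            current.add(m)
    return segments
```

On the sample tests this reproduces the expected answers, e.g. min_sequence_length("RURD") returns 2.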
# Independent Component Analysis

Independent component analysis (ICA) aims to solve the problem of separating signals from their linear mixture. ICA is a special case of blind source separation, where separation is performed without the aid of information (or with very little information) about the source signals or the process of signal mixing. Although the blind source separation problem is underdetermined in general, a useful solution can be obtained under certain assumptions.

The ICA model assumes that there are $n$ independent signals $s_{i}(t)$, $i=1,2,\dots,n$, and some mixing matrix $A$:

$A = \begin{bmatrix} a_{1,1} & a_{1,2} & \dots & a_{1,n} \\ a_{2,1} & a_{2,2} & \dots & a_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n,1} & a_{n,2} & \dots & a_{n,n} \end{bmatrix}$

We can observe only the signals $x_{i}(t)$ that are linear superpositions of the $s_{j}(t)$:

$x_{i}(t)=\sum_{j=1}^{n} a_{i,j}\, s_{j}(t)$

The main difficulty lies in the fact that both $a_{i,j}$ and $s_{j}(t)$ are unknown. To solve the problem, ICA uses the assumptions that the signals in the mixture are statistically independent and have non-Gaussian probability distributions. In general, two random variables $y_{i}$ and $y_{j}$ are statistically independent if information about one variable says nothing about the other. From the mathematical point of view, statistical independence means that the 2-D probability density function $p(y_{i},y_{j})$ is the product of the 1-D probability density functions:

$p(y_{i},y_{j})=p(y_{i})\,p(y_{j})$

For statistically independent signals, the covariance matrix of odd functions $f(y_{i})$ and $g(y_{j})$ is diagonal: all mutual covariances are zero,

$E[f(y_{i})g(y_{j})]-E[f(y_{i})]E[g(y_{j})]=0, \quad i \neq j,$

while the self covariances are non-zero:

$E[f(y_{i})g(y_{i})]-E[f(y_{i})]E[g(y_{i})] \neq 0$
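A compact numpy sketch of the whole pipeline: mix two non-Gaussian sources, whiten the observations, and unmix them with a symmetric FastICA fixed-point iteration using a tanh nonlinearity. This is our own illustration; the tanh contrast is one common choice among several, and all names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
t = np.linspace(0, 8, n)
s1 = np.sign(np.sin(3 * t))            # non-Gaussian source: square wave
s2 = rng.uniform(-1.0, 1.0, n)         # non-Gaussian source: uniform noise
S = np.vstack([s1, s2])
A = np.array([[1.0, 0.5],
              [0.5, 1.0]])             # mixing matrix
X = A @ S                              # observed mixtures x_i(t)

# Center and whiten the observations
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
Xw = E @ np.diag(d ** -0.5) @ E.T @ X

# Symmetric FastICA fixed-point iteration with g = tanh
W = rng.standard_normal((2, 2))
for _ in range(200):
    G = np.tanh(W @ Xw)
    W = G @ Xw.T / n - np.diag((1 - G**2).mean(axis=1)) @ W
    U, _, Vt = np.linalg.svd(W)
    W = U @ Vt                         # re-orthogonalize the unmixing matrix

Y = W @ Xw                             # estimated sources (up to sign/order)
```

Each row of Y should match one of the original sources up to sign and permutation, which can be verified through correlation coefficients.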
1. ## Convergence

Hi, I just want to make sure I've done this right.... I have to find out if this diverges or converges, so I used the ratio test and showed the limit is 1/3, meaning it converges. $\sum_{n=1}^{\infty} \frac{n!}{n^n}$

I'm not sure how I should do this one... $\sum_{n=1}^{\infty} ne^{-n}$

Thanks

2. Hello,

Originally Posted by Benno_11
Hi, I just want to make sure I've done this right.... I have to find out if this diverges or converges, so I used the ratio test and showed the limit is 1/3, meaning it converges. $\sum_{n=1}^{\infty} \frac{n!}{n^n}$

How did you find 1/3?

$\frac{a_{n+1}}{a_n}=\dots=(n+1)\cdot\frac{n^n}{(n+1)^{n+1}}=\left(\frac{n}{n+1}\right)^n=\left(1-\frac{1}{n+1}\right)^n$

Then remember that $\lim_{n\to\infty}\left(1+\frac an\right)^n=e^a$. So here the limit is $e^{-1}=\frac 1e<1$.

Originally Posted by Benno_11
I'm not sure how I should do this one... $\sum_{n=1}^{\infty} ne^{-n}$ Thanks

I'm sure that this writing may enlighten you: $e^{-n}=(e^{-1})^n$

3. haha, what I meant to write for the first one was $\sum_{n=1}^{\infty} \frac{n^3}{3^n}$. Thanks for the help.
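Both conclusions in the thread are easy to sanity-check numerically (a throwaway script, not part of the thread; the identity $\sum_{n\ge 1} n x^n = x/(1-x)^2$ gives the exact value of the second series at $x=e^{-1}$):

```python
import math

# Ratio test for sum n^3 / 3^n: a_{n+1}/a_n approaches 1/3 as n grows.
a = lambda k: k ** 3 / 3 ** k
ratio_at_500 = a(501) / a(500)

# sum n e^{-n} converges; its exact value is e / (e - 1)^2 ~ 0.9207.
partial_sum = sum(n * math.exp(-n) for n in range(1, 200))
closed_form = math.e / (math.e - 1) ** 2
```

At n = 500 the ratio is already within about 0.002 of 1/3, and 200 terms of the second series agree with the closed form to machine precision since the tail decays geometrically.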
# Tag Info

6
As requested in the comments, here is a worked example. The main body deals with minimizing $f(x)$ for a specific problem. At the bottom follows a brief discussion of constraints, then a brief discussion about the general case. Let's solve the Weighted Maximum Cut problem, since this
• Is a relatively straight-forward example
• Is hard classically
• Is a ...

5
For the specific linear function you are interested in, the solution turns out to be trivial: you can take the channel to be $N_{X\rightarrow Y}(\rho) = \operatorname{Tr}(\rho) |\psi\rangle\langle \psi|$ for $|\psi\rangle$ being an eigenvector of $\sigma_Y$ having the largest possible eigenvalue. More generally, however, you can optimize any real-valued ...

4
Short Answer: It is potentially hard (as bRost03 indicates in the comments). To be precise, coNP-hard. Longer Answer: In adiabatic quantum computation, the ground-space of the final Hamiltonian is typically determined by the optimum solution to some constraint satisfaction problem (CSP). If the CSP is perfectly solvable, the ground-space is spanned by (...

4
The proof of the variational theorem (the theorem that the ground state energy is the lowest possible energy you can get from $\frac{\langle \psi|H|\psi\rangle}{\langle \psi | \psi \rangle}$) is very simple: https://en.wikipedia.org/wiki/Variational_method_(quantum_mechanics) If you get a lower energy, it means you don't actually have $\frac{\langle \psi|...

3
If you are looking for a more complete implementation of a quantum variational algorithm in the context of Cirq, I would recommend looking at the second example in the OpenFermion-Cirq notebook found here. It uses a custom ansatz for hydrogen in a minimal basis, but makes a bit more explicit all the required pieces. Another good example, perhaps without ...

2
In the article you mentioned it is said that classical algorithms can beat some cases of (quantum) QAOA, as is proved in this article.
So finding cases where QAOA can still beat classical algorithms and can run on NISQ devices with low-depth circuits is still exciting and promising. The article uses plausible conjectures from complexity theory to ...

2
There is currently no way to check the status of a job in Qiskit Aqua: https://github.com/Qiskit/qiskit-aqua/issues/545 However, it looks like it is a feature that is coming.

2
So in your example, you try to find the quantum circuit representing the Toffoli operation. I would then change my objective/fitness function and compare the unitary matrix representing the operation. You can use a minimization objective like: $$\mathcal{F} = 1-\frac{1}{2^n} |\operatorname{Tr}(U_aU_t^{\dagger})|$$ with $U_a$ the unitary of the ...

1
Let's answer my own question: it is not possible. After some research I ended up computing the "truth table" for the two possible cases:
$b = 0$:
$\vert 00 \rangle\rightarrow\vert 00 \rangle$
$\vert 01 \rangle\rightarrow\vert 10 \rangle$
$\vert 10 \rangle\rightarrow\vert 10 \rangle$
$\vert 11 \rangle\rightarrow\vert 01 \rangle$
$b = 1$:
$\vert 00 \...$
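The minimization objective quoted above can be checked numerically in a few lines (a sketch; the Toffoli matrix and the NumPy usage are my own illustration, not from the original answer): $\mathcal{F} = 1 - \frac{1}{2^n}|\operatorname{Tr}(U_a U_t^{\dagger})|$ is 0 when the ansatz unitary equals the target and grows as they diverge.

```python
import numpy as np

def infidelity(u_ansatz, u_target):
    """1 - |Tr(Ua Ut^dagger)| / 2^n for two n-qubit unitaries."""
    dim = u_target.shape[0]  # dim = 2^n
    return 1.0 - abs(np.trace(u_ansatz @ u_target.conj().T)) / dim

# Toffoli: identity on the 8 basis states except |110> <-> |111>.
toffoli = np.eye(8)
toffoli[[6, 7]] = toffoli[[7, 6]]
```

A perfect ansatz gives 0; the identity ansatz agrees with Toffoli on 6 of 8 diagonal entries, so its infidelity is 1 - 6/8 = 0.25.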
# Reading arbitrarily long lines from a file that is redirected as input

So I have a file with a list of dates that are to be redirected as input to stdin. What the program does is search for valid dates, append them to the output file, and then append all the dates in the input file to the output file. In other words, at the end of the program there would be a list of all the valid dates in the output file and a list of all the dates that were in the input file. An important command line argument is the number of entries to search for. 0 here is the default case where we look through the entire input file. Any value greater than 0 represents the number of entries we have to look at. If we find one that is invalid, we skip it.

Command Example: ./program < input.txt 0 > output.txt

Here are all the relevant functions.

getDates Function

char ** getDates(char **lines, size_t *numOfLines, size_t *lineMax, int numOfEntries)
{
    char *strDate = NULL;
    int allEntries = FALSE, errorStatus = FALSE; // errorStatus is so I can report memory failures.
    if(numOfEntries == 0) {
        allEntries = TRUE; // A flag for the default case zero.
    }
    while((strDate = readLine(stdin, &errorStatus)) != NULL) {
        if(numOfEntries >= 0 || allEntries) {
            if(checkDate(strDate)) {
                fputs(strDate, stdout);
            }
            numOfEntries--;
        }
        /* Keep a record of how many lines we have processed. We will need this
           variable when we decide to print everything that we have in the
           variable lines. */
        lines[*numOfLines] = strDate;
        (*numOfLines)++;
        if((*numOfLines) == (*lineMax)) {
            errorStatus = expand(lines, lineMax); // try to expand.
            if(errorStatus == MEMFAILURE) {
                fprintf(stderr, "Error, couldn't allocate any more memory.");
                return NULL;
            }
        }
    }
    if(errorStatus == MEMFAILURE) {
        fprintf(stderr, "Error, couldn't allocate any more memory.");
        return NULL;
    }
    if(ferror(stdin)) {
        fprintf(stderr, "Error, couldn't completely process the entire file.\n");
        return NULL;
    }
    if(strDate != NULL) {
        free(strDate);
    }
    return lines;
}

readLine Function

char *readLine(FILE *stream, int *errorStatus)
{
    size_t size = 0;
    char *buffer = NULL;
    char *newLineFound = NULL;
    do {
        char *temp = realloc(buffer, size + BUFSIZ); // BUFSIZ is a constant defined in <stdio.h>.
        if(!temp) {
            free(buffer);
            *errorStatus = MEMFAILURE; // an enum constant to record our error.
            return NULL;
        }
        buffer = temp;
        // One small problem. If there is no newline at the end of the file,
        // it won't add whatever content is there.
        if(!fgets(buffer + size, BUFSIZ, stream)) {
            free(buffer);
            return NULL;
        }
        newLineFound = strchr(buffer + size, '\n');
    } while(!newLineFound && (size += BUFSIZ - 1)); // this is to simply update size at the same time that we don't find a newline.
    return buffer;
}

expand Function

int expand(char **lines, size_t *lineMax)
{
    char **tmp = realloc(lines, (*lineMax) * 2 * sizeof(*lines));
    if(!tmp) {
        return MEMFAILURE;
    }
    lines = tmp;
    (*lineMax) *= 2; // Update the size
    return SUCCESS;
}

• What can I do to reduce my code, as I feel that it is getting too bloated?
• What did I forget to do, what didn't I do right, and what did I do that was unnecessary?
• Performance tips are welcome.

What can I do to reduce my code as I feel that it is getting too bloated.

1. No need for the test. Just call free();. It handles free(NULL);

// if(strDate != NULL) {
//     free(strDate);
// }
free(strDate);

2. Minor: Style: I found the vertical spacing excessive, creating challenges seeing the overall flow. YMMV.

What did I forget to do, what didn't I do right and what did I do that was unnecessary.

1. Certain functional error: lines is assigned a value that is not used. Need to pass a "reference".

int expand(char **lines, size_t *lineMax)
{
    char **tmp = realloc(...);
    ...
    lines = tmp;
    ...
    return SUCCESS;
}

2. Functional error (possibly part of your "One small problem." comment). If fgets() returns NULL, code should only dispose of the buffer if this is the first fgets() in readLine(). Or: if the reason for NULL is due to ferror() and not feof().

3. Functional corner case: strchr(buffer... searches the buffer and can get fooled by fgets() reading a null character and then an end-of-line such as "\0\n". No true way to solve this and still use fgets().

newLineFound = strchr(buffer + size, '\n');
} while(!newLineFound && (size += BUFSIZ - 1));

4. Minor: Format to the presentation width.

// avoid
int allEntries = FALSE, errorStatus = FALSE; // errorStatus is so I can report memory failures.
// better
// errorStatus is so I can report memory failures.
int allEntries = FALSE, errorStatus = FALSE;

5. Use bool. It is standard and well defined.

#include <stdbool.h>
// int allEntries = FALSE, errorStatus = FALSE;
bool allEntries = false, errorStatus = false;

6. Not all stderr are unbuffered. Suggest appending '\n' to your print to ensure timely display. Note: either call below may generate the same code with smart compilers. The 2nd has the advantage of no problems with '%'.

// fprintf(stderr, "Error, couldn't allocate any more memory.");
fprintf(stderr, "Error, couldn't allocate any more memory.\n");
// or
fputs("Error, couldn't allocate any more memory.\n", stderr);

7. Confusing code: simplification suggestion.

// char **tmp = realloc(lines, (*lineMax) * 2 * sizeof(*lines));
size_t newsize = *lineMax * 2;
void *tmp = realloc(lines, sizeof(*lines) * newsize);
....
*lineMax = newsize;

8. Lost memory.

char **tmp = realloc(lines, (*lineMax) * 2 * sizeof(*lines));
if(!tmp) {
    free(lines);
    return MEMFAILURE;
}

9. Functional corner case: if *lineMax could ever be 0, returning NULL is OK when the size is 0 and not a MEMFAILURE.

char **tmp = realloc(lines, (*lineMax) * 2 * sizeof(*lines));
// if(!tmp) {
if(!tmp && *lineMax) {
    return MEMFAILURE;
}

10. Functional: To be more resilient, check the size before using the array. getDates() does not control input of the count/size.

// move here
if((*numOfLines) >= (*lineMax)) ...
lines[*numOfLines] = strDate;
(*numOfLines)++;
// if((*numOfLines) == (*lineMax)) ...

Performance tips are welcome.

1. I/O is often a candidate for speed improvements. For this code, if there are no other calls to stdin, I would use fread() and then process the lines myself, reading large blocks.
2. On systems that use '\n' translation, improvement can be had by reading in binary mode.
3. When a file ends without a final '\n', code is probably faster appending one, ensuring the rest of the code does not have to check for that expectation.

GTG

• In number 8, why are you freeing lines and not temp? In number 7, why did you change temp to a single pointer instead of a double pointer? – Luis Averhoff Mar 27 '16 at 1:23
• @Luis Averhoff #8: free(temp) is the same as the unneeded free(NULL);. free(lines) because the code no longer has access to lines. IAC, expand() needs re-work per #1. – chux - Reinstate Monica Mar 27 '16 at 1:33
• @Luis Averhoff #7: tmp is a void * - the universal "we don't care about the type" pointer. It is just a code simplification - minor bloat reduction. – chux - Reinstate Monica Mar 27 '16 at 1:35
• Well, I could make lines a triple pointer (you said I need to pass a reference) but that would degrade code readability. So I guess I could just make expand return NULL if realloc fails or lines if it succeeds. – Luis Averhoff Mar 27 '16 at 1:38
• For number 9, how can *lineMax ever be zero if I initially set it to some value at the start of the program? – Luis Averhoff Mar 27 '16 at 1:42
## Isotopes of nearlattices. (English) Zbl 0585.06001

Summary: The nature of isotopes of a nearlattice is considered. By introducing the notion of a superstandard element n in a nearlattice S, it is shown that for a medial element n, the n-isotope $$S_n$$ is a semilattice which is also a nearlattice when n is neutral and sesquimedial. It is also shown that for a neutral sesquimedial element n, S is distributive iff $$S_n$$ is so.

### MSC:
06A12 Semilattices
# zbMATH — the first resource for mathematics Sufficient conditions for maximally arc-connected digraphs depending on the clique number. (Chinese. English summary) Zbl 07267378 Summary: Interconnection networks are often modeled by digraphs. The arc-connectivity $$\lambda (D)$$ of a digraph $$D$$ is an important measurement for fault tolerance of networks. Let $$\delta (D)$$ be the minimum degree of $$D$$. Then $$\lambda (D) \le \delta (D)$$. A digraph is called maximally arc-connected if $$\lambda (D) = \delta (D)$$. In this paper, we present sufficient conditions for maximally arc-connected digraphs depending on the clique number. ##### MSC: 05C20 Directed graphs (digraphs), tournaments 05C40 Connectivity
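The two quantities in the summary can be illustrated with a small sketch (my own illustration, not from the paper; by Menger's theorem the arc-connectivity $\lambda(D)$ equals the minimum over ordered vertex pairs of the maximum number of arc-disjoint paths, so it can be computed with unit-capacity max-flow and compared against the minimum degree $\delta(D)$):

```python
from collections import deque
from itertools import permutations

def max_flow(n, arcs, s, t):
    # Unit-capacity Edmonds-Karp on the arc multiset.
    cap = {}
    adj = [set() for _ in range(n)]
    for u, v in arcs:
        cap[(u, v)] = cap.get((u, v), 0) + 1
        cap.setdefault((v, u), 0)       # residual back-arc
        adj[u].add(v)
        adj[v].add(u)
    flow = 0
    while True:
        parent = {s: None}              # BFS for an augmenting path
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow
        v = t
        while parent[v] is not None:    # augment by one unit
            u = parent[v]
            cap[(u, v)] -= 1
            cap[(v, u)] += 1
            v = u
        flow += 1

def arc_connectivity(n, arcs):
    # lambda(D): minimum over ordered pairs of max arc-disjoint paths (Menger).
    return min(max_flow(n, arcs, s, t) for s, t in permutations(range(n), 2))

def min_degree(n, arcs):
    out_deg, in_deg = [0] * n, [0] * n
    for u, v in arcs:
        out_deg[u] += 1
        in_deg[v] += 1
    return min(min(out_deg), min(in_deg))
```

A directed 3-cycle has $\lambda = \delta = 1$ and the complete digraph on 3 vertices has $\lambda = \delta = 2$; both are therefore maximally arc-connected, and in every case $\lambda(D) \le \delta(D)$ holds.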
# Groups and dynamics - Misha Belolipetsky Arithmetic Kleinian groups generated by elements of finite order Abstract: We show that up to commensurability there are only finitely many cocompact arithmetic Kleinian groups generated by rotations. The proof is based on a generalised Gromov-Guth inequality and bounds for the hyperbolic and tube volumes of the quotient orbifolds. To estimate the hyperbolic volume we take advantage of known results towards Lehmer's problem. The tube volume estimate requires study of triangulations of lens spaces which may be of independent interest. ## Date: Thu, 03/11/2016 - 10:30 to 11:30 Ross 70
Calculate the MR

Calculate the MR of a doubly reinforced R.C. beam of rectangular section of size $300 mm {\times} 450 mm$ deep, reinforced with 6 No. 20 mm dia. bars on the tension side, and case:

1. 4 No. 20 mm ${\phi}$ on the compression side.
2. 5 No. 20 mm ${\phi}$ on the compression side.

Assume an effective cover of 40 mm on both sides. Use $M_{20}/Fe_{415}$.

Data:

$b=300 \ mm \\ D=450 \ mm \ [\because \text{deep}] \\ d_c=40 \ mm \\ d=D-d_c=450-40=410 \ mm$

$f_{ck}=20 \ N/mm^2, \ f_y=415 \ N/mm^2$

## Case 1:

$Ast=6{\times}\frac{\pi}{4}{\times}20^2=6{\times}314=1884 \ mm^2 \\ Asc=4{\times}\frac{\pi}{4}{\times}20^2=4{\times}314=1256 \ mm^2 \\ \frac{d_c}{d}=\frac{40}{410}=0.097$

From the table, interpolating between $d_c/d=0.05 \ (f_{sc}=355)$ and $d_c/d=0.10 \ (f_{sc}=353)$:

$f_{sc}=353.12 \ N/mm^2 \\ f_{cc}=0.446{\times}20=8.92 \ N/mm^2$

To find the depth of the actual N.A.:

$C_u=T_u \\ (C_{u1}+C_{u2})=(T_{u1}+T_{u2}) \\ (0.36f_{ck}bX_u)+(f_{sc}-f_{cc})Asc=0.87f_yAst \\ (0.36{\times}20{\times}300{\times}X_u)+(353.12-8.92){\times}1256=(0.87{\times}415{\times}1884) \\ \therefore X_u=114.76 \ mm \\ X_{u \max}=0.48d=0.48{\times}410=196.8 \ mm \\ \therefore X_u \lt X_{u \max}$

Hence it is an under-reinforced section.

$M_u=(C_{u1}{\times}L_{a1})+(C_{u2}{\times}L_{a2}) \\ M_u=[(0.36f_{ck}bX_u){\times}(d-0.42X_u)]+[(f_{sc}-f_{cc}){\times}Asc]{\times}(d-d_c) \\ M_u=[(0.36{\times}20{\times}300{\times}114.76){\times}(410-0.42{\times}114.76)]+\bigg[[(353.12-8.92){\times}1256]{\times}(410-40)\bigg] \\ \therefore M_u=249.64 \ kNm$

## Case 2:

$Ast=6{\times}\frac{\pi}{4}{\times}20^2=6{\times}314=1884 \ mm^2 \\ Asc=5{\times}\frac{\pi}{4}{\times}20^2=5{\times}314=1570 \ mm^2 \\ f_{sc}=353.12 \ N/mm^2, \ f_{cc}=8.92 \ N/mm^2$

To find the depth of the actual N.A.:

$(0.36f_{ck}bX_u)+(f_{sc}-f_{cc})Asc=0.87f_yAst \\ (0.36{\times}20{\times}300{\times}X_u)+(353.12-8.92){\times}1570=(0.87{\times}415{\times}1884) \\ \therefore X_u=64.73 \ mm \\ X_{u \max}=0.48d=196.8 \ mm \\ \therefore X_u \lt X_{u \max}$

Hence it is an under-reinforced section.

$M_u=[(0.36{\times}20{\times}300{\times}64.73){\times}(410-0.42{\times}64.73)]+\bigg[[(353.12-8.92){\times}1570]{\times}(410-40)\bigg] \\ \therefore M_u=253.46 \ kNm$
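The arithmetic in both cases can be re-checked with a short script (a sketch using the same rounded bar area of 314 mm² and the interpolated $f_{sc}$ from the worked solution above, not a general design tool):

```python
def moment_of_resistance(n_comp_bars):
    """Ultimate moment (kNm) for the doubly reinforced section worked above."""
    b, d, d_c = 300.0, 410.0, 40.0      # section dimensions, mm
    f_ck, f_y = 20.0, 415.0             # M20 concrete / Fe415 steel, N/mm^2
    bar = 314.0                         # area of one 20 mm bar, mm^2 (rounded)
    ast, asc = 6 * bar, n_comp_bars * bar
    f_sc, f_cc = 353.12, 0.446 * f_ck   # stresses at the compression steel level
    # Equilibrium C_u = T_u gives the neutral-axis depth X_u.
    x_u = (0.87 * f_y * ast - (f_sc - f_cc) * asc) / (0.36 * f_ck * b)
    assert x_u < 0.48 * d               # under-reinforced check
    m_u = (0.36 * f_ck * b * x_u) * (d - 0.42 * x_u) \
        + (f_sc - f_cc) * asc * (d - d_c)
    return m_u / 1e6                    # N*mm -> kNm
```

Evaluating with 4 and 5 compression bars reproduces the hand results of about 249.64 kNm and 253.46 kNm.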
Innate to Pirate?

A lot of planning has been underway in the last week. Most of it has been sifting through a couple of years of brainstorm notes for when I got to this point, trying to remember some of the strategies I had jotted out for this point of time. I do not have a very clear road map of what I need to do; there are far too many unknowns. However, at least this helps me figure out what those unknowns actually are. Still, I think I have enough info now to start moving. My problem is that I tend to freeze up and get all over-analytical when faced with too many risks and variables. It is a trait that served me well back when I was a green software test engineer, but it is not a good trait for a green business manager. I have to face that taking a sub-optimal course of action is usually better than taking no course of action.

The other thing I have read lately which is somewhat disheartening (apart from former indie developer turned personal development blogger Steve Pavlina's bizarre resolution for 2009) is the articles about rates of piracy. There have been a few around lately. Recently right here there was the comment in the Daily GameDev.Net about the rate of piracy of Championship Manager - 90 percent. Then there was the piracy rate article about World of Goo over at 2D Boy's blog - also about 80 to 90 percent. Saying that rate is high is an understatement. It is insane. There was also this great article on piracy over at TweakGuides.com by Koroush Ghazi. It is quite long at ten pages, but it is an excellent read.

The World of Goo article in particular was a big eye opener. World of Goo was a great game, and more tellingly, when compared to the usual laundry list of reasons to pirate, it did everything right. World of Goo was made by just a couple of guys rather than "a big evil soulless corporation". It was an innovative, quirky game with oodles of polish. It had no DRM whatsoever.
When you bought it for PC, you got the Windows and the Mac version, and a guarantee of the Linux version when it is complete. It had a sizeable demo that showcased the reason to buy the game. It was relatively cheap. It won its way into heaps of Top 10 Games of 2008 lists. And apparently, it was pirated with abandon regardless.

Despite this, I have read further justifications by pirates as to why they copied World of Goo. These are illuminating. From memory, some of the main arguments I have read:

• World of Goo was not simultaneously released on Steam in Europe, so Europeans pirated it instead. Of course, the fact that it was also released worldwide from 2D Boy's own website, which is the top hit when Google searching for "World of Goo", was apparently irrelevant here...
• Pirates were not sure whether the game was worth buying from the demo alone - they needed a "bigger demo" by pirating the full game. This category is split into those who claim to have bought the game after finishing their "bigger demo", and those who found after a couple of dozen hours of play that it started to lose appeal and so "wasn't worth it".
• To paraphrase: "lolz - u release it with no DRM - what do u expect?". Damned if you do, damned if you don't.
• The best one: someone thought the two guys who made the game "looked like a couple of creeps", and so he was fully justified in pirating the game.

My concern as someone who wants to sell games to make a living is that there is a culture of piracy out there which cannot be broken by anyone as small as a game developer. Take the European example at face value and assume that these were potential customers on Steam: once that was blocked, apparently their next reaction was to immediately head to The Pirate Bay. Or the case of people downloading a pirated copy as a "full demo". Was this really what they thought, or was heading to a torrent site just innate habit from doing this for every piece of software they have considered?
The DRM comments were also insightful. Many times I have read pirate justifications for other games in that they would have bought the game if they were not treated like a criminal by draconian DRM. But in the reverse case, for games with no DRM, there are people thinking that means the developers just do not care about piracy, so why not pirate it open slather? If this is true, it might be worth slapping any form of DRM to games just to cement the message home. But it is the last one that I think underlines the truth - that mostly, pirates will use any old reason to justify getting stuff without paying. I do not think this comes as a surprise. However the sheer numbers are astounding. I am worried that at this level of saturation, it will start to bleed away the honest customers. If nearly everyone is getting a free ride, a paying customer may start to feel like a mug and join in. Still, World of Goo managed to do quite well, at least according to 2D Boy. However, that was World of friggin' Goo, not just any ordinary game. I would be foolish to say the least if my plan was to mindlessly hope for that kind of critical reaction to my games. It is a bit of a downer. I am hoping that, if I take the motto "be nice to (potential) customers, and they'll be nice to you", then there will be enough nice customers out there to support the business. But it is possible that I am underestimating the sheer weight of piracy out there. I hope not, but just relying on hope may not be enough. I may have to consider other revenue streams such as mobile phone development or indie console channels sooner than expected. Quote: Original post by Trapper Zoid I may have to consider other revenue streams such as mobile phone development I'd not be sure about that, I developed a mobile phone game and it got pirated (!!!). I found it on three websites.. it was only one version (the same) out of 6/7 but TBH it was enough to make me feel.. uhm.. proud! 
:) To think a "famous" mobile crew bought the game, extracted it and uploaded it somewhere.. was indeed cool! I was not so happy when I discovered approx 2500 people downloaded the game from a single (!!!) website. I installed emule and searched for my game.. guess what? I found it! Of course the number of people downloading it via emule was impossible to detect. It was also impossible to know if the game was included in a huge zipfile with other mobile games. If you think a single website generated 2500 downloads, I guess as time passes you can easily reach 5000 downloads, including emule, pirated game packs, three websites and probably something else I wasn't able to find (torrent?). I got approx 0.40€ per download, so it's not much but an additional 2000€ would have been fine. As a result, I'm not going to develop another mobile game if I get paid with royalties. I get the money, you get the game and the royalties.

The original game was sold at 3-4€, AFAIK a rebranded version was sold at 5€. Users are paying more for the same. Another thing that drives me crazy is the linux syndrome aka the idiot user. I can't see how somebody can waste time:
- looking for a game in warez websites
- registering to a website which will fill his mailbox with spam
- installing the cracked game after unlocking/hacking the mobile
- getting crazy because the version is for Nokia but he has a Motorola
- playing the game without sound because the version doesn't match

All that to save 3€!!!! 3€?!?!?!? Do you think my game sucks and isn't worth 3€? I'm fine with that; since I get paid 0.40€ I can directly sell you the game for 0.80€. Of course "Joe the user" only downloads games if they are on the front page of the game section of their operator portal. Because he can't waste time. He can't waste time to buy a game at 0.80€, but he can register to a pirate site, hack the phone, install drivers, upload the game to his mobile, and play the wrong version. WOW.
If you guess why it's the linux syndrome, it's because I've seen people spending one week to get an old printer/scanner to work with linux. Then they come to me and say: you stupid windows user, I get an OS for free while you have to pay Micro$oft (!!!!!!!!!!!!!!!). I don't understand. I plug in my printer, the printer is fine, I spend one week working and I get paid. Hopefully I'll get more money than is needed to buy a Microsoft OS. :D

Undead: My thoughts exactly. ;)

Mr. Zoid: Ok, I wouldn't worry too much about piracy rates. The same happens with music and movies, and still they make a truckload of money. Don't tell me that the devs of World of Goo didn't make a lot of money - they did. I think that piracy should be accepted because human nature is just not going to change, so don't even consider that money you lost, because you were never going to get it in the first place. Those people wouldn't buy it if you pointed a gun to their forehead. What's encouraging is that still many people do buy games and even go as far as donating in large quantities to indie devs (like Toady who lives off donations for Dwarf Fortress). I wouldn't let that stop me from making a game. NOR DECISION PARALYSIS. Geeze. Kids these days!

Quote: Original post by undead
Quote: Original post by Trapper Zoid
I may have to consider other revenue streams such as mobile phone development
I'd not be sure about that, I developed a mobile phone game and it got pirated (!!!). I found it on three websites.. it was only one version (the same) out of 6/7 but TBH it was enough to make me feel.. uhm.. proud! :) To think a "famous" mobile crew bought the game, extracted it and uploaded it somewhere.. was indeed cool!

Yup, same here - but for Nintendo DS. Ferrari Challenge and TrackMania DS, both on torrent sites. Ferrari Challenge was torrentable before it was even sent for manufacture...
Well, that's convenient, Jotaf's point was the one I thought you missed in your list: "I wouldn't have bought the game anyway, so there wasn't any damage to the developer". IMHO, that's just as bogus as the other examples. I wouldn't buy a Ferrari, but if someone handed me one for free, I certainly wouldn't say "no". Could I also say: "Because I wouldn't have bought a Ferrari, I am justified in stealing one."?

Here, the determined pirate would usually say that the examples are not equivalent. When you build a Ferrari, you spend money to make each one, so stealing a Ferrari has a large impact on the producer. They have not only lost an item to sell, but also the costs involved with producing that individual item. The pirate would argue that stealing software does not have the same impact, since copying it does not cost the developer anything. This is definitely wrong. The time spent (often many man-years) in labour to produce the software is what you are paying for when you buy it. The cost of producing the media on which it was distributed, or the pretty box which the media came in, or whatever printed manuals or paraphernalia might have come with it, constitutes a tiny percentage of the total cost of producing the product itself. When one pirates software, they steal the more valuable portion of the product.

Like the Zoid said, these arguments are just excuses to try and justify theft. If someone wouldn't have bought the product, they can play the demo and be satisfied with that. As sung by Jane's Addiction:

Quote: Well, it's just a simple fact / When I want something man, I don't want to pay for it

Actually, there were some other points here that piqued my interest:

Quote: Original post by Jotaf
Mr. Zoid: Ok I wouldn't worry too much about piracy rates. The same happens with music and movies, and still they make a truckload of money.
Don't tell me that the devs of World of Goo didn't make a lot of money - they did. It's wrong to point to the success of the few (or the large corporations) and say that this means that piracy has little effect. It is common knowledge in the game development industry that very few projects break even, let alone make the big$. I'm not trying to say that piracy is solely responsible for this, because it isn't. However, to point to the few examples where titles succeed (despite piracy) as proof that piracy has little effect is a weak argument. We can't all be ID or EA. Quote: Original post by Jotaf I think that piracy should be accepted because human nature is just not going to change, so, don't even consider that money you lost, because you were never going to get it in the first place. Those people wouldn't buy it if you pointed a gun to their forehead. I'm not sure what you mean by "piracy should be accepted because human nature is just not going to change". Do you mean that piracy should be legal? Or that we should expect piracy? I really hope it's not the former, because that leads us down a steep path to some ugly things. Assuming you're talking about developers expecting their games to be pirated, I agree with you there. As Mr. Zoid points out, it seems that people will pirate whether you put DRM in or not. However, at least in the case where the DRM is there, it is slightly more difficult (and I do mean slightly) to pirate and thus you'd see a better demo-to-purchase conversion rate. I really don't see where people get off criticising developers for putting DRM in their products. It would be like going to a store and being offended that they put the really expensive merchandise behind glass or inside locked cabinets. Quote: Original post by Jotaf What's encouraging is that still many people do buy games and even go as far as donating in large quantities to indie devs (like Toady who lives off donations for Dwarf Fortress). 
I wouldn't let that stop me from making a game. NOR DECISION PARALYSIS. Geeze. Kids these days! This is very encouraging, yes. I guess we can all dream about being able to earn a living doing what we enjoy, but for most people, it can only be a dream. Also of note is that if you read Toady's interviews, its not exactly a good living he makes out of those donations. Most of us have to come to terms with the fact that we won't be getting rich from this work. You're probably better off making games for the fun of it alone. Quote: Original post by LachlanL Assuming you're talking about developers expecting their games to be pirated, I agree with you there. As Mr. Zoid points out, it seems that people will pirate whether you put DRM in or not. However, at least in the case where the DRM is there, it is slightly more difficult (and I do mean slightly) to pirate and thus you'd see a better demo-to-purchase conversion rate. This is pure speculation. Spore was rated down a lot on major websites because of its DRM. DRM means bad rep. It's possible there's a sweet spot between having intrusive DRM or not having any that maximizes sales, striking a balance between the bad rep you get for having DRM, and making pirating so easy it hurts your sales; but it's not a linear curve like better DRM = better sales. Quote: I really don't see where people get off criticising developers for putting DRM in their products. It would be like going to a store and being offended that they put the really expensive merchandise behind glass or inside locked cabinets. Except the cabinets don't f*ck up your computer. And when you buy expensive hardware it doesn't self-destruct because you moved it from place to place 3 times. The implementations we've seen of DRM are extremely poor. Quote: This is very encouraging, yes. I guess we can all dream about being able to earn a living doing what we enjoy, but for most people, it can only be a dream. 
Also of note is that if you read Toady's interviews, its not exactly a good living he makes out of those donations. Most of us have to come to terms with the fact that we won't be getting rich from this work. You're probably better off making games for the fun of it alone. That's true, and it's something I forgot to mention. If you've come to terms with the fact that this isn't the way to get stinking-rich then you're a true artist. If you're still complaining about the downsides of that life then this is not for you. I make games as a hobby, like many before me. I never claimed it was a secure way of earning money. I do it because I like doing it. I also like watching TV, but no one would pay me to watch it for 8 hours straight every day. I'm not complaining. Quote: Original post by LachlanL Well, that's convenient, Jotaf's point was the one I thought you missed in your list: "I wouldn't have bought the game anyway, so there wasn't any damage to the developer" From a purely economics standpoint, it is true. Take one person that pirates your game. If the same person is the type that if he couldn't pirate it then he wouldn't even consider buying it, you lose nothing. This is apparent from your example. Good points for discussion from everyone. I might have to collate them all and add my thoughts later into another journal post, as this threaded response makes it a bit confusing. A lot of these points were covered in the TweakGuides article I linked to. It's a good read. Quote: Original post by LachlanL "I wouldn't have bought the game anyway, so there wasn't any damage to the developer" That's a common argument against piracy, but it's bogus for two reasons. Firstly, the pirate obviously put some value on the game, otherwise he would not have spent his time and bandwidth downloading the thing. Secondly, the pirate is right there on the forum posting about how the game sucks. That negative publicity is indeed damaging to the developer. 
I agree with LachlanL and Jotaf. If it's true that people pirate games regardless of DRM, it's also true that including pervasive DRM isn't a viable solution. Adding limitations only affects those who are honest. As I said, some people will do crazy things just to say "hey, I got it for free!".

IMHO there are many things in the industry which are just plain wrong. The relationship between the producer and the user is morbid. The trend is to sell a franchise, not a product. Many successful products get updated every year. I believe this is crazy. Hollywood doesn't produce "drama 2009", "sport movie 2010" or "action movie xmas 2011", nor does the music industry release "latin pop 09" or "true evil black metal winter 2k10 edition". An educated/mature industry shouldn't do something like that. That should be the exception, not the rule.

I suppose we are all tech guys here. I like to implement a new effect or see a new technique. Actually, when I was a teenager I was part of the demoscene, so to me it makes perfect sense to think about the tech side. The industry should be mature enough to understand a game has a lot more to offer than amazing graphics. Think about the Wii or Nintendo DS: they're surely not as powerful as a PS3 or a PSP, but they're selling much better.

The current business model, "bigger is better", is insane. If you need to include 100 characters and 50 levels in your game, you are going to spend a lot of money. But the game mechanics are likely to be the same for all 50 levels. What's the point in doing the same thing 1000 times and paying 70€? Does the user need to repeat the same actions for 25 hours? This leads to another problem: the previews/screenshots are fake, since it's all about graphics. If this isn't fraud, I don't know what to call it. I love what Tim Sweeney has done since the first Unreal, but google for Gears of War screenshots. Needless to say which is the actual game and which is the fake.
IIRC, EA recently had problems because it sold a Wii game using shots from different versions, as they looked better. This isn't a good reason to pirate a game, but the industry should stop concentrating only on franchises and graphics. This is a dead end. We all know only a few games are actually profitable. Of course, if you invest $10 million the break-even point is high. Since the break-even is high, only a few games are profitable. Since the investment is huge, how many companies can survive a commercial failure? Would you invest 10 million in something "new and fresh"? This isn't exactly about piracy, but such a business model is surely much more affected by piracy than another one. The underground music scene is usually less prone to piracy than the latest pop hit.

You buy a game today, and you know next year there's an "update". Nobody is going to play Fifa08 if there's Fifa09 around, but you paid 70 bucks for it. Don't you feel like your money "is gone"? You try a demo, it looks amazing, then you buy the game and discover the game mechanics are the same as the demo and you'll have to repeat the same operations for the next 30 hours. Or you see amazing screenshots, you buy the game, and then you see the graphics look awful compared to what they showed you. Both the industry and the gamers know about that. The relationship is morbid because the industry thinks it's fine to keep releasing updated versions and to base its business model on a few blockbusters which cover the failures of many other products. Users know they're somehow cheated, so they buy a couple of games but pirate the others.

ps: I recently got an Acer Aspire One for 169€ and I installed (my ORIGINAL copy of) the first Unreal Tournament. I'm having so much fun, because the gameplay is still great 10 years later.

Quote: Original post by Jotaf
Quote: Original post by LachlanL
Assuming you're talking about developers expecting their games to be pirated, I agree with you there. As Mr.
Zoid points out, it seems that people will pirate whether you put DRM in or not. However, at least in the case where the DRM is there, it is slightly more difficult (and I do mean slightly) to pirate and thus you'd see a better demo-to-purchase conversion rate.

This is pure speculation. Spore was rated down a lot on major websites because of its DRM. DRM means bad rep. It's possible there's a sweet spot between having intrusive DRM and not having any that maximizes sales, striking a balance between the bad rep you get for having DRM and making pirating so easy it hurts your sales; but it's not a linear curve like better DRM = better sales.

I'll be honest here, I don't know exactly what bad effects DRM products might have had on people's machines. I haven't really looked into that and I haven't bought a game that has a modern DRM product in it yet. If it really has some adverse effects on people's machines then that is really bad and not helping an anti-piracy stance. However, in terms of DRM that doesn't adversely affect the person's machine in any way, I don't really see why there's a problem with it. How can someone logically complain about a company not wanting to have its products stolen? They (the company) put in the work to produce it; shouldn't they have the right to prevent someone from stealing it? Also, Spore might have been rated down for DRM reasons, but didn't it still sell really well? I mean, you might get some people who complain about being made to feel like a criminal, but most will still get the game if they want to play it.

Quote: Quote: I really don't see where people get off criticising developers for putting DRM in their products. It would be like going to a store and being offended that they put the really expensive merchandise behind glass or inside locked cabinets.

Except the cabinets don't f*ck up your computer. And when you buy expensive hardware it doesn't self-destruct because you moved it from place to place 3 times.
The implementations we've seen of DRM are extremely poor.

I agree that a solution that fails when a person changes some part of their hardware has a pretty big flaw. That sort of thing should be avoided if possible.

Quote: Original post by Jotaf
Quote: Original post by LachlanL
Well, that's convenient, Jotaf's point was the one I thought you missed in your list: "I wouldn't have bought the game anyway, so there wasn't any damage to the developer"

From a purely economics standpoint, it is true. Take one person that pirates your game. If the same person is the type that if he couldn't pirate it then he wouldn't even consider buying it, you lose nothing. This is apparent from your example.

Similar to what Trapper said, I disagree with you here. If someone was willing to spend the bandwidth downloading the game (which can be quite considerable with some games these days, even with rips) then they clearly had some motivation to play the game. They most likely wouldn't have bought *every* game they pirated, but even if they only bought 1 in 5 or 1 in 10 (assuming they were unable to pirate the games) then this would be a massive increase in sales compared to the situation we currently have. In a situation where only your game was pirate-proof then sure, it's likely that they'd just copy a different game and play that instead. If all games were pirate-proof then it's likely that they'd have to buy some games if they like playing at all.

Man, it does take Steve Pavlina an awful lot of words to say "I want myself some strange poon, so I'm gonna get myself some strange poon." I'm definitely gonna forward this to Shelly. She's gonna get a big belly-laugh out of this one. "Hey Shelly, can I explore the worlds of polyamory?" "What's that mean?" "It means I wanna get some strange poon." "Ahh, I see. Fuckno."

I think PC gaming is just going to have to shift to advert-driven - if it's "free" then perhaps that would stop some forms of piracy...
though I'm sure someone would come up with an advert-removing patch. The XNA Creators Club may offer small companies and lone gunmen a way of getting some money via the Xbox 360 though - a platform that actually requires hardware alteration to successfully pirate a game on, but open enough to be published on once the game has gone through a peer review process.

Quote: Original post by LachlanL
I'll be honest here, I don't know exactly what bad effects DRM products might have had on people's machines. ... However, in terms of DRM that doesn't adversely affect the person's machine in any way, I don't really see why there's a problem with it.

Nobody complained about plain CD checks, those have been around for years. But they're obsolete now! It's likely you're a programmer. Can you create a DRM scheme that a simple JMP instruction in assembly won't circumvent? The only thing that works is to check with a remote server. And that opens up a whole lot of problems. The "old" DRM doesn't protect games, the "new" one takes away legit buyers' rights. You can't possibly stand up for DRM, it's bad one way or the other.

Quote: Original post by LachlanL
In a situation where only your game was pirate-proof then sure, it's likely that they'd just copy a different game and play that instead. If all games were pirate-proof then it's likely that they'd have to buy some games if they like playing at all.

Just chiming in quickly: I've just finished the World of Goo demo and I'm absolutely astounded. I had so much fun playing it, it's very polished, it's a great idea and it even has replay value. I get a cheque this week, and I'll be purchasing the game without a doubt. Forget about the pirates; like someone said above, you couldn't force them to pay for anything even with a gun to their heads. There ARE people out there who love and respect the effort put into these kinds of games, and I for one am very willing to throw money at these developers in the hope they continue to make games.
I'm looking forward to Friday. I wanna see how this Goo Corporation thing works out, it looks intriguing.

It would be interesting to see the stats for Windows and Linux torrent versions of games, to see if Linux users are of the same mind as the robbers who get the game using Windows. I mean, there is a cross-section of users which use both, but generally Linux users have a different mindset (if I can throw a blanket over all users). I see the comments as to why some users download illegal versions as just a smokescreen to excuse their activities: lies, lies and more damn lies. From what I have seen of people I have known, it becomes a sort of addictive activity where people actually download applications that they may never use, just for the satisfaction of having them. This may not be the case with games, but it is still not _normal_. These people spend hours (or days, as a commenter suggests) searching for applications etc. to download. I have seen a person who had an ebook library so big it rivalled a public library!

Quote: Original post by CmpDev
These people spend hours (or days as a commenter suggest) searching for applications etc to download, I have seen a person who had a ebook library that was so big it rivalled a public library!

Yeah, they act like collectors, but a collector should collect "real" items. I don't understand how somebody can think it's cool to collect something you can copy and copy and copy and copy ad libitum. Without a limited amount of items being produced, there's no reason to collect stuff.
# Can the Albanese map be anything?

Sorry for the vague title. This question is about the Albanese map from the variety $M$ of canonically polarized varieties to the set of abelian varieties. (The variety $M$ is not of finite type...)

Let $A$ be an abelian variety of dimension $g$. For which integers $N$ does there exist an $N$-dimensional subvariety $X$ of $M$ which maps to $A$ under the Albanese map? In words, when does the fibre of $A$ under the Albanese map contain an $N$-dimensional variety? (In such a subvariety I want the Hilbert polynomial of the varieties to be constant.)

The question is even interesting for $g=0$. In this case, I am asking for big families of canonically polarized varieties with zero Albanese. I am sure one can find families of canonically polarized surfaces with trivial Albanese, but I don't know of any explicit examples. Can anyone provide a smart construction?

I only know of the following example (and some of its generalizations).

Example Starting from an abelian surface $A$, one can consider double covers $X_D\to A$ ramified over precisely one smooth ample divisor $D$ on $A$. Varying the ample divisor gives a positive-dimensional family $(X_D)$ of canonically polarized surfaces with Albanese $A$. The Hilbert polynomials of these guys are all different, though, but you can get them to be constant by sticking with ample divisors $D$ on $A$ with the same self-intersection.

- Smooth hypersurfaces of fixed degree $d$ in $\mathbb P^n$ ($n\ge 3$) are simply connected, so they have trivial Albanese, and are of general type for $d>n+1$. The Hilbert polynomial is determined by $d$. To get families of polarized varieties with fixed Albanese, just take products $X\times Y$, where $X$ varies in a family as above and $Y$ is a fixed variety of general type. - This is nice. Thank you. I assume that one takes the variety $Y$ to have Albanese $A$, am I right? But why does such a canonically polarized variety exist?
–  Fabiano Rug Jun 4 '13 at 21:42 To construct $Y$ with Albanese variety $A$ for instance you can take a cover as described in your answer (in this case $\dim Y=g$), or you can take a complete intersection of very ample divisors inside $A$. Of course these are not all the possibilities, just the simplest ones. –  rita Jun 5 '13 at 6:49
# An algebra problem by Mansi Bahuguna

Algebra Level 1

If $$x + \frac{1}{x} = 2$$, what is the value of $$x$$ ?
# Selection Sort

April 20, 2019 2 minutes

As with Bubble Sort, Selection Sort is another sorting algorithm that is based on the approach of compare and swap. It also has $O(n^2)$ complexity, making it unusable for very large datasets.

Selection Sort is based on the method of identifying the smallest (or largest, depending on the sorting requirement) element from the given input dataset and swapping it with the value at the smallest not-yet-sorted index.

Consider the following example: For the given input array {10,8,11,7,21,4}, we start by moving the smallest element, i.e. 4, to the lowest index by swapping it with the value at index 0. In the next step, 8 is replaced with the next smallest element, which is 7, and then 11 with 8, and so on until we have the entire array sorted.

From the above illustration, it can be clearly seen that the algorithm divides the input dataset into two parts or subsets. One of these subsets is always sorted and the other remains unsorted. Elements from the unsorted subset are chosen and appended to the sorted subset until the unsorted subset ceases to exist.

#### Pseudo Code

1. Initialize
• length = number of input elements
2. For every i from 0 to length-1
• min = i
• For every j from i+1 to length-1
• if element[j] < element[min]; min = j
• swap element[i] and element[min]

#### Complexity

For a sequential dataset of size n, the number of comparisons performed to find the minimum element is directly proportional to the number of unsorted elements. So the first pass will take n-1 comparisons, then n-2, then n-3 and so on. Assuming that the swapping operation can be performed in constant time, the overall complexity can be summarized in terms of these steps to identify the minimum value.
Mathematically speaking, the overall complexity can be described as:

$$(n-1) + (n-2) + (n-3) + \cdots + 1 = \frac{n(n-1)}{2} = O(n^2)$$

#### Java Implementation

/**
 * a method to demonstrate the selection sort algorithm
 * @param input the input array that needs to be sorted
 * @return sorted array
 */
public int[] sort(int[] input) {
    int length = input.length;
    for (int i = 0; i < length - 1; i++) {
        // find the index of the smallest element in the unsorted part
        int min = i;
        for (int j = i + 1; j < length; j++) {
            if (input[j] < input[min]) {
                min = j;
            }
        }
        // a single swap per pass moves it to the end of the sorted part
        int temp = input[i];
        input[i] = input[min];
        input[min] = temp;
    }
    return input;
}

A working example of the algorithm with test cases can be found here
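As a quick sanity check on the comparison count above, here is a small instrumented version of the sort (the class and field names are illustrative, not part of the original post) that reports exactly n(n-1)/2 comparisons for an n-element array:

```java
import java.util.Arrays;

public class ComparisonCount {
    static long comparisons;

    // single-swap selection sort, with the inner-loop comparisons counted
    static void selectionSort(int[] a) {
        comparisons = 0;
        for (int i = 0; i < a.length - 1; i++) {
            int min = i;
            for (int j = i + 1; j < a.length; j++) {
                comparisons++;                       // one comparison per inner step
                if (a[j] < a[min]) min = j;
            }
            int t = a[i]; a[i] = a[min]; a[min] = t; // one swap per pass
        }
    }

    public static void main(String[] args) {
        int[] data = {10, 8, 11, 7, 21, 4};          // the array from the walkthrough
        selectionSort(data);
        System.out.println(Arrays.toString(data));   // [4, 7, 8, 10, 11, 21]
        System.out.println(comparisons);             // 15, i.e. 6*5/2
    }
}
```

The count is independent of the input order: the inner loop always scans the whole unsorted suffix, which is why selection sort's best case is no better than its worst case.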
Allocating Servers in Infostations for Bounded Simultaneous Requests

Bertossi, Alan Albert and Pinotti, Maria Cristina and Rizzi, Romeo and Gupta, Phalguni (2002) Allocating Servers in Infostations for Bounded Simultaneous Requests. UNSPECIFIED. (Submitted)

The Server Allocation with Bounded Simultaneous Requests problem arises in infostations, where mobile users going through the coverage area require immediate high-bit-rate communications such as web surfing, file transfer, voice messaging, email and fax. Given a set of service requests, each characterized by a temporal interval and a category, an integer $k$, and an integer $h_c$ for each category $c$, the problem consists in assigning a server to each request in such a way that at most $k$ mutually simultaneous requests are assigned to the same server at the same time, out of which at most $h_c$ are of category $c$, and the minimum number of servers is used. Since this problem is computationally intractable, a $2$-approximation on-line algorithm is exhibited which asymptotically gives a $\left(2-\frac{h}{k}\right)$-approximation, where $h = \min \{h_c\}$. Generalizations of the problem are considered, where each request $r$ is also characterized by a bandwidth rate $w_r$, and the sum of the bandwidth rates of the simultaneous requests assigned to the same server at the same time is bounded, and where each request is also characterized by a gender bandwidth. Such generalizations contain Bin-Packing and Multiprocessor Task Scheduling as special cases, and they admit on-line algorithms providing constant approximations.
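The core constraint in the abstract is easy to experiment with. Below is a hedged sketch (class and method names are invented for illustration, and the category bounds $h_c$ and bandwidth rates are ignored) of the natural on-line first-fit greedy for the plain model: requests arrive ordered by start time, and each server may carry at most k mutually simultaneous requests. This is not the paper's algorithm with its $(2-\frac{h}{k})$ guarantee, only the simplest baseline for the same setting.

```java
import java.util.ArrayList;
import java.util.List;

public class FirstFitServers {
    // Assign each half-open interval [start, end) to the first server that
    // currently holds fewer than k requests still active at its start time.
    // Returns the number of servers used. Categories/bandwidth are ignored.
    static int assign(int[][] requests, int k) {
        List<List<int[]>> servers = new ArrayList<>();
        for (int[] r : requests) {                 // requests sorted by start time
            boolean placed = false;
            for (List<int[]> s : servers) {
                // count requests on this server that overlap r's start
                long active = s.stream().filter(q -> q[1] > r[0]).count();
                if (active < k) {
                    s.add(r);
                    placed = true;
                    break;
                }
            }
            if (!placed) {                         // open a new server
                List<int[]> s = new ArrayList<>();
                s.add(r);
                servers.add(s);
            }
        }
        return servers.size();
    }

    public static void main(String[] args) {
        // five mutually overlapping requests with k = 2 need 3 servers
        int[][] reqs = {{0, 10}, {1, 10}, {2, 10}, {3, 10}, {4, 10}};
        System.out.println(assign(reqs, 2));       // 3
    }
}
```

With k = 1 this degenerates to classical interval partitioning, for which first-fit on start-time-sorted intervals is optimal; the difficulty studied in the paper comes from the per-category caps layered on top of k.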
# metal in the calcium hydrogen basis uses

### Aluminum and Aluminum Alloys Casting Problems

Moisture in the atmosphere dissociates at the molten metal surface, offering a concentration of atomic hydrogen capable of diffusing into the melt. The barrier oxide of aluminum resists hydrogen solution by this mechanism, but disturbances of the melt surface that break the oxide barrier result in rapid hydrogen …

### Periodic Table Element Comparison | Compare Calcium …

Compare Calcium and Hydrogen on the basis of their properties, attributes and periodic table facts. Compare elements on more than 90 properties. All the elements of similar categories show a lot of similarities and differences in their chemical, atomic, physical properties and uses.

### Hydrogen - Periodic Table

Hydrogen is a chemical element with atomic number 1, which means there is 1 proton and 1 electron in the atomic structure. The chemical symbol for Hydrogen is H. With a standard atomic weight of circa 1.008, hydrogen is the lightest element on the periodic

### Hydrogen Fluoride | Uses, Benefits, and Chemical Safety …

Hydrogen fluoride typically refers to a gas, used in the production of refrigerants, high-octane gasoline, aluminum, plastics, electrical components and incandescent light bulbs. When hydrogen fluoride is dissolved in water, it creates hydrofluoric acid, which is used in stainless steel pickling, glass etching, metal coatings, uranium isotope extraction and quartz purification.

### 18.6 Occurrence, Preparation, and Properties of …

Hydrogen carbonates of the alkaline earth metals remain stable only in solution; evaporation of the solution produces the carbonate. Stalactites and stalagmites, like those shown in Figure 1, form in caves when drops of water containing dissolved calcium hydrogen carbonate evaporate to leave a deposit of calcium carbonate.

### Facts About Calcium | Live Science

Properties, sources and uses of the element calcium. Calcium is nature's most renowned structural material.
Indeed, calcium is a necessary component of all living things and is also abundant in

### X class previous year board question Chapter- Metal and non metal …

Name one metal which reacts with very dilute HNO3 to evolve hydrogen gas. [2010 (T-1)] 9. A non-metal X exists in two different forms Y and Z. Y is the hardest natural substance, whereas Z …

### The reactivity of the group 2 metals | Resource | RSC …

Hydrogen burns rapidly with a pop sound. Edexcel Chemistry Topic 3 - Chemical changes Acids 3.11 Explain the general reactions of aqueous solutions of acids with: metals, metal oxides, metal hydroxides, metal carbonates to produce salts 3.12 Describe the

### Blood calcium | Article about blood calcium by The Free …

Calcium reacts with dry hydrogen at 300-400 °C to give the hydride CaH2, an ionic compound in which hydrogen is the anion. Calcium and nitrogen react at 500 °C to give the nitride Ca3N2. The reaction of calcium with ammonia at low temperatures gives the complex ammoniate Ca[NH3]6.

### Calcium gluconate | C12H22CaO14 - PubChem

Parenteral calcium salts (i.e., chloride, glubionate, gluceptate, and gluconate) are indicated in the treatment of hypocalcemia in conditions that require a rapid increase in serum calcium-ion concentration, such as in neonatal hypocalcemia due to "hungry bones" syndrome (remineralization hypocalcemia) following surgery for hyperparathyroidism, vitamin D deficiency, and alkalosis.

### Cas 7789-78-8, CALCIUM HYDRIDE | lookchem

Preparation: Calcium metal is charged into the vessel, where it reacts with hydrogen to generate calcium hydride once the temperature reaches about 300 °C under electric or oil heating. Ca + H2 → CaH2 + 214 kJ·mol-1. Alternatively, in a stream of hydrogen, magnesium can reduce calcium oxide to give calcium hydride; however, since separation of the calcium oxide is difficult, high-purity calcium hydride cannot be obtained this way.
### Properties of Hydrogen | Introduction to Chemistry

Hydrogen is available in different forms, such as compressed gaseous hydrogen, liquid hydrogen, and slush hydrogen (composed of liquid and solid), as well as solid and metallic forms.

The Hydrogen Atom: Many of the hydrogen atom's chemical properties arise from its small size, such as its propensity to form covalent bonds, flammability, and spontaneous reaction with oxidizing elements.

### The Fourth Dimension for Waste Management in The United States: Thermoselect Gasification Technology and The Hydrogen …

Metal hydroxides, approx. 0.003 tons per ton of waste, contain the following [1]: Water, approx. 80% by weight. Metal hydroxides, approx. 20% by weight, consisting largely of zinc, calcium and aluminum with minor amounts of cadmium, copper, iron, lead,

### Chemistry for Kids: Elements - Hydrogen

Kids learn about the element hydrogen and its chemistry including atomic weight, atom, uses, sources, name, and discovery. Plus properties and characteristics of hydrogen. Hydrogen is the first element in the periodic table. It is the simplest possible atom

### CN102225747B - Synthesis method of calcium …

This invention relates to the preparation of inorganic and solid materials, and specifically relates to a synthesis method of calcium borohydride by solid-phase ball milling of reactants at a certain hydrogen pressure. The synthesis method of calcium borohydride by normal

### Acids Bases and Salts Class 10 Notes Science Chapter 2 - …

CBSE Class 10 Science Notes Chapter 2 Acids Bases and Salts Pdf free download is part of Class 10 Science Notes for Quick Revision. Here we have given NCERT Class 10 Science Notes Chapter 2 Acids Bases and Salts. According to the new CBSE Exam Pattern, the MCQ Questions for Class 10 Science pdf carries 20 Marks.

### Hydrogen peroxide Reactions and Physical Properties | …

Hydrogen peroxide is a good oxidant. Hydrogen peroxide can react as an oxidizing agent, reducing agent, disinfectant, or bleach.
The boiling point of H2O2 is higher than that of H2O. Uses of hydrogen peroxide: as a bleaching agent for textiles, paper, pulp, leather, oils, fats

### 10 Calcium Element Facts You Should Know - ThoughtCo

2020/1/17 · Calcium is element atomic number 20 on the periodic table, which means each atom of calcium has 20 protons. It has the periodic table symbol Ca and an atomic weight of 40.078. Calcium isn't found free in nature, but it can be purified into a soft silvery-white alkaline earth metal.

### Binary Hydrides | Introduction to Chemistry

An ionic, or saline, hydride is a hydrogen atom bound to an extremely electropositive metal, generally an alkali metal or an alkaline earth metal (for example, potassium hydride or KH). These types of hydrides are insoluble in conventional solvents, reflecting their non-molecular structures.

### Acid-Base Reactions | Types Of Reactions | Siyavula

Domestic uses: Calcium oxide ($$\text{CaO}$$) is a base (all metal oxides are bases) that is put on soil that is too acidic. Powdered limestone $$(\text{CaCO}_{3})$$ can also be used, but its action is much slower and less effective. These substances can also be

### Calcium carbonate - Essential Chemical Industry

Calcium carbonate, when very finely crushed (less than 2 microns), is used in paints to give a 'matt' finish. Figure 8 Uses of limestone. Calcium carbonate is also used: to make sodium carbonate by the Solvay process; in the blast furnace to make iron; in the

### Determination of Calcium Ion Concentration

calcium ions, changing colour from blue to pink/red in the process, but the dye-metal ion complex is less stable than the EDTA-metal ion complex. As a result, when the calcium ion-PR complex is titrated with EDTA the Ca2+ ions react to form a stronger

### NCERT Class XI Chemistry Chapter 9 - Hydrogen - …

On the basis of the molecular masses of NH3, H2O and HF, their boiling points are expected to be lower than those of the subsequent group member hydrides.
However, due to the higher electronegativity of N, O and F, the magnitude of hydrogen bonding in their

### How to Make Calcium Carbide | Sciencing

Calcium carbide is a chemical compound with numerous industrial applications. When combined with water, it produces acetylene gas, which is used in welding and cutting torches. According to the Hong Kong Trade Development Council, calcium carbide also constitutes a key component of most polyvinyl chloride (PVC) produced in China.

### Class 10 Science Chapter 3 Board Questions of Metal and …

Calcium starts floating because the bubbles of hydrogen gas formed stick to the surface of the metal. Ca + 2H2O → Ca(OH)2 + H2. Magnesium reacts with hot water and starts floating due to the bubbles of hydrogen gas sticking to its surface.
# zbMATH — the first resource for mathematics On property $$M(3)$$ of some complete multipartite graphs. (English) Zbl 1097.05019 Let $$G$$ be a graph and suppose that for each vertex $$v$$ in $$G,$$ there exists a list $$L(v)$$ of $$k$$ colors such that there is a unique proper $$L$$-coloring $$c$$ for $$G$$ (that is, a unique proper vertex coloring $$c$$ where $$c(v)$$ is chosen from $$L(v)).$$ Then $$G$$ is called a uniquely $$k$$-list colorable graph. M. Ghebleh and E.S. Mahmoodian [Ars Comb. 59, 307–318 (2001; Zbl 1066.05063)] have characterized almost all uniquely $$3$$-list colorable complete multipartite graphs. In this paper, some remaining cases are studied, and it is proved that $$K_{1*4,5}$$, $$K_{1*4,4},$$ $$K_{2,2,r}$$ $$(r=4,5,6)$$ have property $$M(3).$$ This leads to an improvement of Ghebleh and Mahmoodian’s characterization. (Here a graph $$G$$ is said to have property $$M(k)$$ if and only if it is not uniquely $$k$$-list colorable. So $$G$$ has property $$M(k)$$ if for any collection of lists assigned to its vertices, each of size $$k$$, either there is no list coloring for $$G$$ or there exist at least two list colorings.) ##### MSC: 05C15 Coloring of graphs and hypergraphs
[E. None. 3.3.3] Consider a CMOS process with the following capacitive parameters for the NMOS transistor: COSO, CODO, COX, CJ, mj, Cjun, mjun, and PB, with the lateral diffusion equal to LD. The MOS transistor M1 is characterized by the following parameters: W, L, AD, PD, AS, PS.

Figure 0.14 Circuit to measure total input capacitance

The obvious question is now how to compute CT. Among Cgb, Csb, Cgs, and Cgd, which of these parasitic capacitances of the MOS transistor contribute to CT? For those that contribute to CT, write down the expression that determines the value of the contribution. Use only the parameters given above. If the transistor goes through different operation regions and this impacts the value of the capacitor, determine the expression of the contribution for each region (and indicate the region).
## Special Issue of ACM TOMACS on Monte Carlo Methods in Statistics

Posted in Books, R, Statistics, University life on December 10, 2012 by xi'an

As posted here a long, long while ago, following a suggestion from the editor (and North America Cycling Champion!) Pierre Lécuyer (Université de Montréal), Arnaud Doucet (University of Oxford) and myself acted as guest editors for a special issue of ACM TOMACS on Monte Carlo Methods in Statistics. (Coincidentally, I am attending a board meeting for TOMACS tonight in Berlin!) The issue is now ready for publication (next February unless I am confused!) and made of the following papers:

* Massive parallelization of serial inference algorithms for a complex generalized linear model, by MARC A. SUCHARD, IVAN ZORYCH, PATRICK RYAN, and DAVID MADIGAN
* Convergence of a Particle-based Approximation of the Block Online Expectation Maximization Algorithm, by SYLVAIN LE CORFF and GERSENDE FORT
* Efficient MCMC for Binomial Logit Models, by AGNES FUSSL, SYLVIA FRÜHWIRTH-SCHNATTER, and RUDOLF FRÜHWIRTH
* Adaptive Equi-Energy Sampler: Convergence and Illustration, by AMANDINE SCHRECK, GERSENDE FORT, and ERIC MOULINES
* Particle algorithms for optimization on binary spaces, by CHRISTIAN SCHÄFER
* Posterior expectation of regularly paved random histograms, by RAAZESH SAINUDIIN, GLORIA TENG, JENNIFER HARLOW, and DOMINIC LEE
* Small variance estimators for rare event probabilities, by MICHEL BRONIATOWSKI and VIRGILE CARON
* Self-Avoiding Random Dynamics on Integer Complex Systems, by FIRAS HAMZE, ZIYU WANG, and NANDO DE FREITAS
* Bayesian learning of noisy Markov decision processes, by SUMEETPAL S. SINGH, NICOLAS CHOPIN, and NICK WHITELEY

Here is the draft of the editorial that will appear at the beginning of this special issue. (All faults are mine, of course!)
Read more »

## more typos in Monte Carlo statistical methods

Posted in Books, Statistics, University life on October 28, 2011 by xi'an

Jan Hanning kindly sent me this email about several difficulties with Chapters 3, Monte Carlo Integration, and 5, Monte Carlo Optimization, when teaching out of our book Monte Carlo Statistical Methods [my replies in italics between square brackets, apologies for the late reply and posting, as well as for the confusion thus created. Of course, the additional typos will soon be included in the typo lists on my book webpage.]:

1. I seem to be unable to reproduce Table 3.3 on page 88 - especially the chi-square column does not look quite right. [No, they definitely are not right: the true χ² quantiles should be 2.70, 3.84, and 6.63, at the levels 0.1, 0.05, and 0.01, respectively. I actually fail to understand how we got this table that wrong...]

2. The second question I have is the choice of the U(0,1) in this Example 3.6. It feels to me that a choice of Beta(23.5,18.5) for p1 and Beta(36.5,5.5) for p2 might give a better representation based on the data we have. Any comments? [I am plainly uncertain about this... Yours is the choice based on the posterior Beta coefficient distributions associated with Jeffreys prior, hence making the best use of the data. I wonder whether or not we should remove this example altogether... It is certainly "better" than the uniform. However, in my opinion, there is no proper choice for the distribution of the pi's because we are mixing there a likelihood-ratio solution with a Bayesian perspective on the predictive distribution of the likelihood-ratio. If anything, this exposes the shortcomings of a classical approach, but it is likely to confuse the students! Anyway, this is a very interesting problem.]

3. My students discovered that Problem 5.19 has the following typos, copying from their e-mail: "x_x" should be "x_i" [sure!].
There are a few "( )"s missing here and there [yes!]. Most importantly, the likelihood/density seems incorrect: the normalizing constant should be the reciprocal of the one shown in the book [oh dear, indeed, the constant in the exponential density did not get to the denominator...]. As a result, all the formulas would differ except the ones in part (a). [They clearly need to be rewritten, sorry about this mess!]

4. I am unsure about the "if and only if" part of Theorem 5.15 [namely, that the likelihood sequence is stationary if and only if the Q function in the E-step has reached a stationary point]. It appears to me that a condition for the "if" part is missing [the "only if" part is a direct consequence of Jensen's inequality]. Indeed, Theorem 1 of Dempster et al. (1977) has an extra condition [note that the original proof of convergence for EM has a flaw, as discussed here]. Am I missing something obvious? [Maybe: it seems to me that, once Q reaches a fixed point, the likelihood L does not change... It is thus tautological, not a proof of convergence! But the theorem says a wee bit more, so this needs investigating. As Jan remarked, there is no symmetry in the Q function...]

5. Should there be an (n-m) in the last term of formula (5.17)? [Yes, indeed! Multiply the last term by (n-m).]

6. Finally, I am a bit confused about the likelihood in Example 5.22 [which is a capture-recapture model]. Assume that Hij=k [meaning animal i is in state k at time j]. Do you assume that you observe Xijr [the capture indicator for animal i at time j in zone r: it is equal to 1 for at most one r] as a Binomial B(n,pr) even for r≠k?
[No, we observe all the Xijr's with r≠k equal to zero.] The nature of the problem seems to suggest that the answer is no. [Indeed, for the other indices, Xijr is always zero.] If that is the case, I do not see where the power on top of (1-pk) in the middle of page 185 comes from. [When the capture indices are zero, they do not contribute to the sum, which explains this condensed formula. Therefore, I do not think there is anything wrong with this over-parameterised representation of the missing variables.]

7. In Section 5.3.4, there seems to be a missing minus sign in the approximation formula for the variance. [Indeed, shame on us for missing the minus sign in the observed information matrix!]

8. I could not find the definition of $\mathbb{N}^*$ in Theorem 6.15. Is it all natural numbers or all integers? Maybe it would help to include it in Appendix B. [Surprising! This is the set of all positive integers; I thought this was standard math notation...]

9. In Definition 6.27, you probably want to say a covering of A and not of X. [Yes, we were already thinking of the next theorem, most likely!]

10. In Proposition 6.33 – all x in A instead of all x in X. [Yes, again, as shown in the proof! Even though it also holds for all x in X.]

Thanks a ton to Jan and to his UNC students (and apologies for leading them astray with those typos!!!)

## Another history of MCMC

Posted in Books, Statistics, University life on April 20, 2011 by xi'an

In the most recent issue of Statistical Science, the special topic is "Celebrating the EM Algorithm's Quandunciacentennial". It contains an historical survey by Martin Tanner and Wing Wong on the emergence of MCMC Bayesian computation in the 1980s. This survey is more focused and more informative than our global history (also to appear in Statistical Science).
In particular, it provides the authors' analysis as to why MCMC was delayed by ten years or so (or even more when considering that a Gibbs sampler as a simulation tool appears in both Hastings' (1970) and Besag's (1974) papers). They dismiss [our] concerns about computing power (I was running Monte Carlo simulations on my Apple IIe by 1986, and a single mean square error curve evaluation for a James-Stein type estimator would then take close to a weekend!) and Markov innumeracy, attributing the reluctance instead to a lack of confidence in the method. This perspective remains debatable as, apart from Tony O'Hagan, who was then fighting against Monte Carlo methods as being un-Bayesian (1987, JRSS D), I do not remember any negative attitude at the time about simulation, and the immediate spread of MCMC methods after Alan Gelfand's and Adrian Smith's presentations of their 1990 paper shows, on the contrary, that the Bayesian community was ready for the move.

Another interesting point made in this historical survey is that Metropolis' and other Markov chain methods were first presented outside the simulation sections of books like Hammersley and Handscomb (1964), Rubinstein (1981) and Ripley (1987), perpetuating the impression that such methods were mostly optimisation or niche-specific methods. This is also why Besag's earlier works (not mentioned in this survey) did not get wider recognition until later. Something I was not aware of is the appearance of iterative adaptive importance sampling (i.e. population Monte Carlo) in the Bayesian literature of the 1980s, with proposals from Herman van Dijk, Adrian Smith, and others. The appendix about Smith et al. (1985), the 1987 special issue of JRSS D, and the computational contents of Valencia 3 (that I sadly missed for being in the Army!) is also quite informative about the perception of computational Bayesian statistics at this time.

A missing connection in this survey is Gilles Celeux and Jean Diebolt's stochastic EM (or SEM).
As early as 1981, with Michel Broniatowski, they proposed a simulated version of EM for mixtures, where the latent variable z was simulated from its conditional distribution rather than replaced with its expectation. This was thus the first half of the Gibbs sampler for mixtures, which we completed with Jean Diebolt about ten years later. (It is also found in Gelman and King, 1990.) These authors did not get much recognition from the community, though, as they focused almost exclusively on mixtures, used simulation to produce a randomness escaping the attraction of local modes rather than targeting the posterior distribution, and did not analyse the Markovian nature of their algorithm until later, with the simulated annealing EM algorithm.

## On-line EM

Posted in Statistics, University life on March 4, 2011 by xi'an

I just attended a local Big'MC seminar where Olivier Cappé gave us the ideas behind the online EM algorithm he developed with Eric Moulines. The method mixes the integrated EM technique we used in the population Monte Carlo paper with Robbins-Monro, to end up with a converging sequence with an optimal speed. The paper appeared in JRSS Series B in 2009, so I cannot say this was a complete surprise. All the less so as this is also the theme of the chapter Olivier wrote for the mixture book. (Soon to be ready!)

## Typo in Example 5.18

Posted in Books, R, Statistics, University life on October 3, 2010 by xi'an

Edward Kao is engaged in a detailed parallel reading of Monte Carlo Statistical Methods and of Introducing Monte Carlo Methods with R. He has pointed out several typos in Example 5.18 of Monte Carlo Statistical Methods, which studies a missing-data phone plan model and its EM resolution. First, the customers in area i should be double-indexed, i.e. $Z_{ij}\sim\mathcal{M}(1,(p_1,\ldots,p_5))$, which implies in turn that $T_i=\sum_{j=1}^{n_i}Z_{ij}$.
Then the summary T should be defined as $\mathbf{T}=(T_1,T_2,\ldots,T_n)$ and $W_5$ as $W_5=\sum_{i=m+1}^nT_{i5},$ given that the first m customers have the fifth plan missing.
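The Celeux-Diebolt stochastic EM scheme discussed above is easy to sketch. Below is a toy implementation of my own (not code from the book) for a two-component Gaussian mixture with known unit variances: the S-step simulates the latent allocations from their conditional distribution instead of taking expectations, and the M-step then works on the completed data.

```python
# Minimal stochastic EM (SEM) sketch for a two-component Gaussian mixture
# with unit variances: the allocations z_i are *simulated* from their
# conditional distribution (S-step) rather than replaced by expectations.
import math
import random

def sem(x, iters=200, seed=1):
    rng = random.Random(seed)
    w, m1, m2 = 0.5, min(x), max(x)        # crude initialisation
    for _ in range(iters):
        # S-step: draw each allocation from its conditional distribution
        z = []
        for xi in x:
            p1 = w * math.exp(-0.5 * (xi - m1) ** 2)
            p2 = (1 - w) * math.exp(-0.5 * (xi - m2) ** 2)
            z.append(1 if rng.random() < p1 / (p1 + p2) else 2)
        # M-step on the completed data
        n1 = z.count(1)
        if 0 < n1 < len(x):                # skip degenerate allocations
            w = n1 / len(x)
            m1 = sum(xi for xi, zi in zip(x, z) if zi == 1) / n1
            m2 = sum(xi for xi, zi in zip(x, z) if zi == 2) / (len(x) - n1)
    return w, m1, m2

# Simulated data: equal-weight mixture of N(0,1) and N(5,1)
rng = random.Random(0)
data = [rng.gauss(0, 1) if rng.random() < 0.5 else rng.gauss(5, 1)
        for _ in range(300)]
w, m1, m2 = sem(data)
```

Unlike EM, the SEM sequence does not converge pointwise: it keeps jittering around the mode, which is precisely the feature that lets it escape local-mode attraction.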
# What schemes support atomic decryption+reencryption

What encryption schemes support this workflow?

Vendor encrypts and publishes information for the client:

1. Client generates a public and private key.
2. Vendor encrypts private information.
3. Vendor publishes ENCRYPT(PRIVATE_INFORMATION, CLIENT_PUBLIC_KEY).

Client re-encrypts for the benefit of a third party:

4. Client generates a re-encryptor using its own private key and the third party's public key.
5. Client publishes CREATE_REENCRYPTOR(CLIENT_PRIVATE_KEY, THIRD_PARTY_PUBLIC_KEY).

Third party decrypts the data:

6. The third party decrypts the data.

Other people:

7. Nobody can decrypt PRIVATE_INFORMATION unless they have the THIRD_PARTY_PRIVATE_KEY or CLIENT_PUBLIC_KEY.

I know that a plain asymmetric scheme supports most of this workflow. However, it violates requirement 7. Is there another scheme that fully supports this workflow?

• Can the client change the published data, or is it trusted? If trusted, the answer is easy: the client decrypts the key-encryption key (KEK) and encrypts it for third parties using their public keys. This assumes the vendor used a KEK to encrypt the message. Oct 15 '20 at 18:54
• I think this is a form of proxy encryption. Why is step 7 a problem? How would somebody without the THIRD_PARTY_PRIVATE_KEY be able to decrypt? Oct 15 '20 at 21:11
• I think William wants the proxy not to be able to decrypt the content. However, someone with CLIENT_PRIVATE_KEY would be able to obtain it, and presumably the vendor is not aware that the client is going to re-encrypt it. It would be possible to have a re-encryptor that is only given the encryption header and changes the encryption from CLIENT_PRIVATE_KEY to THIRD_PARTY_PUBLIC_KEY. The re-encryptor would have the symmetric key, but not the contents, while the caller would have the contents but not CLIENT_PRIVATE_KEY. I don't know if this separation of tasks would be acceptable for Will, though. Oct 16 '20 at 0:23
• What's the meaning of "atomic" in the title? I don't see that it commonly means 6 & 7. Also: must it be possible for things to occur in the order 1/4/5/2/3/6, which would be harder to achieve? (The 1/2/3/4/5/6 order allows the client to embed the decrypted private information into the decryptor.) – fgrieu Oct 16 '20 at 7:19
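For reference, the re-encryption mechanics being asked about can be sketched with a BBS98-style (ElGamal-based) proxy re-encryption scheme. The sketch below is a toy with deliberately tiny, insecure parameters, and it is bidirectional: its re-encryption key is derived from both secret keys, whereas the workflow in the question (client private key plus third-party public key only) corresponds to unidirectional schemes such as the pairing-based AFGH construction. All function names here are mine, for illustration.

```python
# Toy BBS98-style proxy re-encryption over a tiny prime-order group.
# NOT secure: parameters are illustrative only, and this bidirectional
# scheme needs BOTH secret keys to build the re-encryption key, unlike
# the unidirectional (pairing-based) schemes the question calls for.
import random

p = 2039            # safe prime: p = 2q + 1
q = 1019            # order of the subgroup of squares mod p
g = 4               # generator of the order-q subgroup

def keygen(rng):
    sk = rng.randrange(1, q)
    return sk, pow(g, sk, p)              # (private, public = g^sk)

def encrypt(m, pk, rng):
    k = rng.randrange(1, q)
    return (m * pow(g, k, p) % p,         # c1 = m * g^k
            pow(pk, k, p))                # c2 = pk^k = g^(sk*k)

def rekey(sk_from, sk_to):
    return sk_to * pow(sk_from, -1, q) % q  # rk = sk_to / sk_from mod q

def reencrypt(ct, rk):
    c1, c2 = ct
    return (c1, pow(c2, rk, p))           # g^(a*k) -> g^(b*k); c1 untouched

def decrypt(ct, sk):
    c1, c2 = ct
    g_k = pow(c2, pow(sk, -1, q), p)      # recover g^k from c2
    return c1 * pow(g_k, -1, p) % p

rng = random.Random(0)
sk_c, pk_c = keygen(rng)                  # client
sk_t, pk_t = keygen(rng)                  # third party
m = pow(g, 123, p)                        # message encoded as a group element
ct = encrypt(m, pk_c, rng)
ct2 = reencrypt(ct, rekey(sk_c, sk_t))
assert decrypt(ct, sk_c) == m and decrypt(ct2, sk_t) == m
```

Note that the proxy holding the re-encryption key never sees g^k or m here, which is the "re-encrypt without decrypting" property the question is after; achieving it with only CLIENT_PRIVATE_KEY and THIRD_PARTY_PUBLIC_KEY as inputs requires the unidirectional constructions mentioned above.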
# mdframed: two colors, one on the background and one as a text watermark (the colored box)

Is there a way, with mdframed, to combine two colors in one box: one as the background and one for text used as a watermark? (Alternatives to mdframed are welcome, of course.) Is there also a way to refer back to each box with one background color (including labeling them somehow)?

• Welcome to TeX.SX! It is much easier to reproduce your situation and find out what the problem is when you show a compilable minimal working example (MWE), beginning with \documentclass{...} and ending with \end{document}.

The framed package creates three environments: framed, which puts an ordinary frame box around the region; shaded, which shades the region; and leftbar, which places a line at the left side. The environments allow a break at their start (the \FrameCommand enables creation of a title that is "attached" to the environment); breaks are also allowed in the course of the framed/shaded matter. Two drawbacks: the border of the box is missing on the top/bottom if the content is bigger than one page, and figures keep a white background, which makes the figure border invisible on white paper.

The mdframed package (Marco Daniel, version 0.3b, May 1, 2010) develops the facilities of framed by providing breakable framed and coloured boxes: it implements a box environment that automatically breaks boxes across multiple pages (though not on Beamer slides, only in article, book, etc.) and allows the user to customize margins, background color, line color, and so on; thickness of the frame and the distance between text and frame can be easily changed as well. The user may instruct the package to perform its operations using default LaTeX commands, PSTricks, or TikZ (mdframed uses TikZ behind the scenes). From the documentation:

    backgroundcolor=<color>  Sets the color of the background of the environment (default: white).
    fontcolor=<color>        Sets the color of the contents of the environment.

Defaults can also be set globally in the \mdfsetup macro:

    \mdfsetup{backgroundcolor=yellow!40, ...}

Please see the package documentation for more details. Using mdframed, defining the background colour is pretty easy:

    \documentclass{article}
    \usepackage{xcolor}
    \usepackage{mdframed}
    \begin{document}
    test
    \begin{mdframed}[backgroundcolor=blue!20]
    In any right triangle, the area of the square whose side is the hypotenuse
    is equal to the sum of the areas of the squares whose sides are the two legs.
    \end{mdframed}
    \end{document}

There are different ways to define a specific color in LaTeX. Often I just choose a predefined color from the xcolor package or define a color using the RGB color model; personally, I like the red!40!blue notation best (this creates purple with 40% red and 60% blue). In HTML-style notation you must use either 6 or 3 hex digits, so #F999 is not a valid background color. Note that the color package provides both foreground (text, rules, etc.) and background colour management; it uses the device driver configuration mechanisms.

Styles can be predefined and reused, for instance together with amsthm and thmtools:

    \mdfdefinestyle{guidelinestyle}{%
      linecolor=black,
      linewidth=1pt,
      frametitlerule=true,
      frametitlefont=\sffamily\bfseries,
      frametitlebackgroundcolor=gray!20,
      innertopmargin=\topskip,
    }
    \newmdenv[style=guidelinestyle]{asEnv}

For source code, I would combine listings or minted (a LaTeX package that provides syntax highlighting using the Pygments library) with mdframed: use the former to format the code and the latter to format background and frame. The standard way to get a background color is \colorbox, but it does not allow line breaks, so for \mintinline there is no way to get both background color and line breaks. Moreover, if minted is used with breaklines and the last line on a page is broken, in some cases the line can extend beyond the colored background, so it is still safest to use mdframed or tcolorbox for backgrounds; when you just need a background color in simple scenarios, though, the newer approach should be a major improvement.

In my first version of the environment, I had used mdframed to create the shaded box and merely set \leftskip to indent the paragraphs. This works properly with text, but not with figures and equations, which remain centered relative to the page. I also tried working with TikZ directly to get a version that would break, but never could get it to work properly (and the TikZ approach always required two compiles).
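One way to get both a background colour and a text watermark in a single, referenceable box is to switch from mdframed to tcolorbox, whose `enhanced` skin provides a `watermark text` option and whose `auto counter` supports labels. The colour choices and the environment name below are mine; this is a sketch of one possible setup, not the only solution:

```latex
\documentclass{article}
\usepackage{xcolor}
\usepackage[most]{tcolorbox}% loads the skins and breakable libraries

% A breakable box with its own background colour and a text watermark;
% the automatic counter makes each box referenceable via \label/\ref.
\newtcolorbox[auto counter]{refbox}[2][]{%
  enhanced, breakable,
  colback=#2!15, colframe=#2!60!black,
  watermark text={Box \thetcbcounter},% second colour shows through here
  title=Box~\thetcbcounter, #1}

\begin{document}
\begin{refbox}[label={box:blue}]{blue}
A blue box; refer back to it later as Box~\ref{box:blue}.
\end{refbox}

\begin{refbox}[label={box:red}]{red}
A red box, numbered by the same automatic counter.
\end{refbox}
\end{document}
```

The watermark colour defaults to a washed-out version of the frame colour; it can be overridden with `watermark color` if a second, independent colour is wanted.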
# Dynamic Online Learning Applied to Fast Switched-Stub Impedance Tuner for Frequency and Load Impedance Agility in Radar Applications

Publisher: IEEE

Abstract: Switched-stub tuner topologies show promise for use in real-time tuning of high-power amplifiers in radar transmitter arrays. High-power switches can be used to expose or remove different tuning stubs from a series line in real time to adjust power-amplifier load impedance following changes in operating frequency or array scan angle. Dynamic online learning can be applied to enhance optimization of the amplifiers by updating models of tuner performance in real time. This minimizes the experimental queries needed to reoptimize on the fly as the system adjusts its operating frequency or scan angle. Measurement results are presented for optimizing a six-stub tuner in a 2–4 GHz window with varying antenna impedances, using a software-defined radio platform. Results show that significant time savings can be achieved by applying dynamic learning.

Date of Conference: 26-28 May 2020
Date Added to IEEE Xplore: 21 August 2020
Conference Location: Waco, TX, USA

SECTION I.

## Introduction

As spectrum usage by wireless communications has grown even further with the advent of fifth-generation (5G) wireless technology, even heavier demands are being placed on radar systems to proactively share spectrum with communications in real time. Adaptive matching of the transmitter power amplifier is important to maintain high range and power efficiency through changes in operating frequency and array scan angle [1]. High-power tuning technologies have recently been developed using mechanical tuning of resonant-cavity discs [2].
While reconfiguring mechanically tuned devices can require time frames much longer than a typical radar pulse repetition interval (PRI) or coherent processing interval (CPI), new high-power electrical switch technologies are under development [3]. These switches can be used to create switched-stub impedance tuners, permitting tuning of the amplifier around the Smith Chart by changing the tuning stubs that are exposed to the main feedline through switching. The fast optimization of a switched-stub tuner in less than 35 μs is demonstrated in a recent conference paper [4]. The search technique investigates the disposition of each switch one at a time, and concludes when a complete loop of all switches has been performed with no improvement to the output power. The optimization was performed using a software-defined radio platform. Dockendorf demonstrates the use of a look-up table to decrease optimization time in a continuously optimizable evanescent-mode cavity tuner [5]. In the present paper, we introduce dynamic learning to reduce the number of measurement iterations for reoptimization of power-amplifier load impedance as the search progresses. SECTION II. ## Experimental Goals In order to demonstrate the flexibility of the switched-stub tuner in changing operating conditions, experiments were performed in a scenario where both the system frequency and the reflection coefficient presented to the tuner vary. The change in frequency alters the optimal load impedance which, when presented to the amplifier, leads to maximum power output. The frequencies used in this experiment, as in [4], are 2, 2.5, 3, 3.5, and 4 GHz, covering an octave. The change in reflection coefficient presented by the antenna to the tuner represents a system where an antenna array is electrically scanned to another angle, resulting in changes in mutual coupling and apparent antenna reflection coefficient.
The “antenna” reflection coefficients used in this experiment are Γ_ant = 0, 0.5∠45°, 0.3∠120°, 0.65∠240°, and 0.4∠310°. These values were chosen to demonstrate the robustness of the tuner under varying operating conditions, as each of the four Smith Chart quadrants is represented. Fig. 1 shows these five reflection coefficients on the Smith Chart. Previously in [4], switched-stub tuner optimization searches were performed at various operating conditions, and the system had no knowledge of these operating conditions to aid the search. In that case, each search began at the starting point with all switches open (no stubs exposed in the matching network). The searches would complete on the order of tens of microseconds, but there is room for improvement: dynamic online learning for starting-point optimization. Five frequencies and five reflection coefficients provide 25 unique operating conditions for the system, and all switches being left open to start is not optimal at many of these conditions. Fig. 1. Emulated antenna reflection coefficient values Γ_ant presented to the impedance tuner In this experiment, the search demonstrated in [4] was used for real-time optimization of the switched-stub impedance tuner, also described and demonstrated in [4]. Searches were performed in a random order, dynamically populating the system memory with the switched-stub tuner state at which each search converged. To decrease search time, this state is used as the starting point the next time that the same operating frequency and antenna reflection coefficient (corresponding to array scan angle) are used.
The hope is that, unlike in [5], where a continuous, mechanical impedance tuner is used, the previous optimal switched-stub state is very likely to also be the optimum for the next search at the same operating conditions, as the discrete impedance tuner demonstrated in [4] can only tune to 64 different states. Variations in the optimum would likely be due to different mutual coupling scenarios potentially arising from different operating environments if the platform is moving. If a system like this were deployed in a real-life scenario, the antenna reflection coefficients would likely not be known. However, storing the scan angle can provide an equivalent capability to storing the antenna reflection coefficient. SECTION III. ## Measurement Results To test this approach experimentally, optimization searches were conducted using an Ettus X310 software-defined radio (SDR) in conjunction with a host computer. The SDR acts as the (calibrated) transmitter, power meter, and tuner switch controller in the system. Additionally, the search algorithm is stored on the SDR's FPGA for speed. The host computer is used to initiate the searches, as well as to dynamically learn and recall the optimal starting points for each system operating condition. Fig. 2 shows the measurement bench. As seen in Fig. 2, the switched-stub impedance tuner (D) is between the MWT-173 field-effect transistor (FET) (C, which serves as the device under test (DUT)) and the Maury Microwave mechanical impedance tuner (A, which emulates the antenna reflection coefficients). The DC power supplies are used both to bias the FET and to provide power to the six switches on the tuner. Fig. 2.
Measurement setup: A – Maury Microwave tuner (presenting an “antenna” load impedance to the system), B – Ettus X310 software-defined radio, C – power amplifier, D – switched-stub impedance tuner/matching network, E – bias supplies for tuner and amplifier The results for one of the 25 optimization searches using the search of [4] without system memory are shown in Table I, below. The search was performed with a 3.5 GHz operating frequency and an emulated antenna reflection coefficient of Γ_ant = 0.4∠310°. The tuner states are the six-bit binary sequences representing the states of the six switches from input to output of the tuner, where 1 indicates a switch is closed and 0 indicates a switch is open. The decimal version of this number is shown as the “Tuner State” in Table I. Highlighted in Table I is state 60, where the search converged. With dynamic learning, the system remembers this state, and the point is used as the starting point when the same system operating frequency and antenna reflection coefficient are used again. In contrast with the Table I standard search of [4], the results for a search that utilizes dynamic learning for the same operating conditions are shown in Table II. The search converged to state 60 once again. This time, however, state 60 was the starting point, so the search only performed 7 measurements instead of 14, based on the search procedure of [4]. While the improvement from utilizing dynamic learning varies across operating frequency and antenna reflection coefficient, this example demonstrates the impact of utilizing results from previous searches with the stub tuner. Table I Search trajectory without dynamic learning Table II Search trajectory with dynamic learning To compare broader results of using dynamic learning, 25 searches without dynamic learning and 25 searches with dynamic learning were performed, and the search results were compared.
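The memory-augmented procedure contrasted in Tables I and II can be sketched in code. This is a minimal illustration, not the authors' implementation: the power measurement below is a synthetic stand-in for the SDR reading, and the one-switch-at-a-time search follows the description of [4] (toggle each switch in turn, keep a toggle only if output power improves, stop after a full pass with no gain):

```python
import random

def measure_power(state, freq, gamma):
    # Synthetic stand-in for the SDR power-meter reading; the real
    # system measures amplifier output power. Peak placed at state 60.
    random.seed(hash((state, freq, gamma)) % 10**6)
    return -abs(state - 60) + 0.01 * random.random()

def search(freq, gamma, start_state=0):
    # One-switch-at-a-time search per [4]: flip each of the six
    # switches in turn, keep a flip only if measured power improves,
    # and stop after a complete pass with no improvement.
    state = start_state
    best = measure_power(state, freq, gamma)
    measurements = 1
    improved = True
    while improved:
        improved = False
        for bit in range(6):                 # six-bit tuner state
            trial = state ^ (1 << bit)       # flip one switch
            p = measure_power(trial, freq, gamma)
            measurements += 1
            if p > best:
                state, best, improved = trial, p, True
    return state, measurements

memory = {}  # dynamic-learning table: (freq, gamma) -> best known state

def optimize(freq, gamma):
    start = memory.get((freq, gamma), 0)     # all switches open if unseen
    state, n = search(freq, gamma, start)
    memory[(freq, gamma)] = state            # remember for next time
    return state, n
```

With a deterministic measurement, repeating a query at the same (frequency, reflection-coefficient) pair starts at the remembered optimum and terminates after a single confirmation pass, mirroring the reduction in measurement count reported for state 60.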
A summary of notable results from the experiment is shown in Table III. Comparing the two sets of data, the most noteworthy difference is the decrease in measurement count and time taken for the searches with dynamic learning. By using dynamic learning, the average search time improved from 17.8 μs to 13.1 μs. This 26% improvement in search time is because the searches using the previous optima as starting points averaged approximately 2 fewer measurements per search. Also, as seen in the table, 80% of the searches which utilized dynamic learning converged at their starting points (the previous optimum), indicating that, in the vast majority of cases, re-performance of the search may not even be necessary if time does not allow. Of the 20% which did not end at their starting state, the output power of each starting point was within 0.13 dBm of the eventually selected optimal state, with three of these five searches within 0.07 dBm, a difference likely attributable to measurement noise. An important consequence of this result is that if the search converges to the same optimum each time at each set of operating conditions, after an initial search to learn the optimum, searches may no longer need to be performed. It should be noted that these times only include the time for the searches to be performed, which occur solely on the FPGA. Time for the system frequency to change, time for the Maury Microwave tuner to change the emulated antenna reflection coefficient, and time for the host computer to communicate items such as starting point, transmit power level, and operating frequency to the SDR were not considered. SECTION IV. ## Plans for Future Development As noted previously, the test setup used for these experiments uses the host computer to store the system memory.
In a deployed system, the system memory would need to be stored either on the FPGA itself or on a more easily deployable small-board computer, and the communication time required to retrieve the starting tuner state will need to be considered. The dynamic learning approach should be integrated into the deployable system and re-evaluated for time savings, based on the communication overhead. Table III Search statistics from 25 searches with dynamic learning and 25 searches without dynamic learning SECTION V. ## Conclusions Experiments have demonstrated a time savings from using dynamic learning to select an impedance tuner starting state for real-time tuning when changing operating frequency or antenna impedance, which is related to array scan angle. This has significant applications to real-time radar transmitter frequency agility, allowing maximum transmission range to be maintained in a spectrum-sharing scenario. Measurement results show that using the previous search optimum as the starting location for optimizing the switched-stub tuner decreased the search time by 26 percent on average. As transmitters change frequency and scan angle to avoid interference and maintain compatibility, improved speed in optimizing the impedance tuner for optimal range can be achieved. For example, an impedance tuning optimization search may, in many cases, be performed within the timeframe of one PRI or CPI. This allows radar transmitters to quickly optimize range at new operating conditions while avoiding spectral or spatial interference with or from other wireless devices. ### ACKNOWLEDGMENT This material is based upon work supported by the Office of Naval Research under Award No. N00014-19-1-2549. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the Office of Naval Research.
{}
# Does Nelson try to prove PA inconsistent directly? Edward Nelson is known for his serious attempts to show that the Peano axioms, and sometimes even weaker theories, are inconsistent. I wasn't able to find Nelson's papers anywhere, so I wanted to ask a question about the structure of his proofs: Did Nelson's attempts try to, for some statement $\phi$, prove both $\phi$ and $\neg\phi$? Some of you might ask "What other possibility could there be?", and here is one such possibility: PA might just prove the statement "PA is inconsistent". What is the difference? The difference is that the supposed "proof" of a contradiction might have nonstandard length. You can think of this in terms of a different theory, namely $PA+\neg Con(PA)$. This theory shows that PA is inconsistent (because one of its axioms says so), and, as an extension of PA, it can show itself inconsistent. However, the theory itself is still consistent. What the reasoning above shows is that $PA+\neg Con(PA)$ is not $\omega$-consistent. So, slightly restating the question, Did Nelson in his attempts really try to show PA inconsistent, or just $\omega$-inconsistent? Also, if anyone knows a source where I could find Nelson's papers, I'd be thankful; this is the reason I added the "reference-request" tag. Thanks for all feedback! • Google it. He tried to use the proof of Kritchman-Raz of the 2nd incompleteness theorem to actually prove Con(PA) in PA and therefore get a contradiction. It didn't work. – Monroe Eskew Oct 25 '14 at 10:47 • Before, I couldn't find the outline, but now, when I looked it up with the "Kritchman-Raz" keyword, I was able to find it. Thanks! – Wojowu Oct 25 '14 at 11:06 • Related question: Nelson's program to show inconsistency of ZF – Nate Eldredge Oct 26 '14 at 16:14 • For those who may not know, I would like to note that Nelson passed away this September.
– Steven Gubkin Oct 26 '14 at 16:41 • There are still, of course, some online papers here: web.math.princeton.edu/~nelson/papers.html – David Roberts Oct 27 '14 at 0:11 See the discussion here: https://golem.ph.utexas.edu/category/2011/09/the_inconsistency_of_arithmeti.html. As Monroe says, he tried to prove that PA was inconsistent, not just $\omega$-inconsistent.
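The distinction the question draws can be stated compactly; these are standard facts (a sketch, not from the thread), assuming PA is consistent:

```latex
\begin{align*}
&\text{G\"odel II:}\quad \mathrm{PA} \nvdash \mathrm{Con}(\mathrm{PA}),
 \quad\text{so } T := \mathrm{PA} + \neg\mathrm{Con}(\mathrm{PA}) \text{ is consistent;}\\
&T \vdash \exists x\,\mathrm{Prf}_{\mathrm{PA}}\bigl(x,\ulcorner 0=1\urcorner\bigr)
 \quad\text{yet}\quad
 T \vdash \neg\mathrm{Prf}_{\mathrm{PA}}\bigl(\bar n,\ulcorner 0=1\urcorner\bigr)
 \ \text{for each standard } n.
\end{align*}
```

So $T$ is $\omega$-inconsistent but consistent: any proof of $0=1$ in PA that $T$ asserts to exist must be coded by a nonstandard number, which is exactly the nonstandard-length possibility raised in the question.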
{}
limit proving Recommended Posts agghh i hate this kind of maths mainly because i am really bad at it "this kind" being of course the proving of limits using deltas and epsilons can anybody help me or give some advice with this stuff? Cheers Sarah Share on other sites Do you have a specific example in mind? I ask because solving such a problem might be more helpful to you, as opposed to just working out a random epsilon-delta proof. Share on other sites umm ok just like proving the limit theorems for example. ie. lim (f(x)g(x)) = LM, x->a if lim f(x)=L, x->a, and lim g(x)=M, x->a those sort of things.... btw i like your quote Dapthar Share on other sites "umm ok just like proving the limit theorems for example. ie. lim (f(x)g(x)) = LM, x->a if lim f(x)=L, x->a, and lim g(x)=M, x->a" Sure. Apparently the LaTeX module is still down, so I'll just have to write in plain text. Assume as x->a, lim f(x) = L, and lim g(x) = M. We wish to show the following: As x->a, lim (f(x)*g(x)) = LM. First, we need to translate the above condition into epsilons and deltas. (I use e for epsilon, and d for delta). Thus, as x->a becomes |x-a| < d, and lim (f(x)*g(x)) = LM becomes |f(x)*g(x) - LM| < e. Now, given any e > 0, we want to find a d > 0 such that (1) |x-a| < d implies |f(x)*g(x) - LM| < e Keep the above in mind, as it is our goal. Why? If we can do this, we have proven that, given any error e, we can find an x close enough to a such that we can make the difference between f(x)*g(x) and LM smaller than e. This means that 'when x equals a', f(a)*g(a) = LM. We also know that as x->a, lim f(x) = L, and lim g(x) = M. Let's translate this to e's and d's. First, let's deal with as x->a, lim f(x) = L. This translates to, for any e1 > 0, there exists a d1 > 0 such that: (2) |x-a| < d1 implies |f(x) - L| < e1 Now, let's deal with as x->a, lim g(x) = M. Also, for any e2 > 0, there exists a d2 > 0 such that: (3) |x-a| < d2 implies |g(x) - M| < e2 Now, we will use these in a little bit, so remember them.
We want to make (1) less than e, so we're going to do a standard mathematical trick (As one of my professors used to say, "When you're an undergraduate they're tricks, but when you're a graduate student, they're techniques".), adding and subtracting a quantity. So, we get that |f(x)*g(x) - LM| = |f(x)*g(x) - L*g(x) + L*g(x) - LM| Rearranging this a bit, and factoring, we get (4) |f(x)*g(x) - L*g(x) + L*g(x) - LM| = |g(x)*(f(x) - L) + L*(g(x)-M)| Now, recall (2) and (3). They tell us something about |f(x)-L| and |g(x)-M|, specifically, they tell us 'epsilon and delta information' about |f(x)-L| and |g(x)-M|. So, we'd like to use these to learn 'epsilon and delta information' about (4). Thus, we make use of the triangle inequality. Remember that the triangle inequality says that, for all a and b: |a + b| =< |a| + |b| ( =< is 'less than or equal to') So, let's use the triangle inequality to simplify (4). Here our a = g(x)*(f(x) - L), and our b = L*(g(x)-M). So, we get the following: |g(x)*(f(x) - L) + L*(g(x)-M)| =< |g(x)*(f(x) - L)| + |L*(g(x)-M)| Simplifying, we get: |g(x)*(f(x) - L)| + |L*(g(x)-M)| = |g(x)||f(x) - L| + |L||g(x)-M| We can take two approaches from here, the more intuitive, longer one, or the less intuitive, shorter approach. I take the former. We will now use (2) and (3). From these, we know that we can pick any positive e1 and e2, and there exists a positive d1 and d2 such that |x-a| < d1 implies |f(x) - L| < e1 |x-a| < d2 implies |g(x) - M| < e2 Our first guess might be to pick e1 = e2 = e, so let's do that, and call the associated deltas d1 and d2. Note that for both conditions to be true, |x-a| has to be smaller than d1 and d2, so let d3 = min(d1, d2). But wait, we didn't say anything about |g(x)| yet, and if we don't, |g(x)| will still be in our final answer, and we definitely don't want that. Intuitively, we want |g(x)| to 'behave like' |M|. We'll see why this is helpful in a minute.
Recall that: |x-a| < d2 implies |g(x) - M| < e2 Where we pick e2, and get a d2. So, let's pick e2 = 1. Then we get a d2 such that: |x-a| < d2 implies |g(x) - M| < 1 Let's call this d2 by the name d4, and let d5=min(d4,d3) (this ensures that all the conditions we've set up so far will hold when |x-a| < d5). Recall the reverse triangle inequality. For all a1 and b1: |a1| - |b1| =< |a1 - b1| Let's apply this with a1 = g(x) and b1 = M. Thus, we get: |g(x)| - |M| =< |g(x) - M| < 1 Therefore, |g(x)| - |M| < 1 Rearranging a bit, we get |g(x)| < |M| + 1. Now, it's time to put it all together. Thus: |x-a| < d5 implies that |f(x)*g(x) - LM| =< |g(x)||f(x) - L| + |L||g(x)-M| < (|M| + 1) * e1 + |L|*e2 = (|M| + 1) * e + |L|*e Thus, |x-a| < d5 implies that |f(x)*g(x) - LM| < e*(|M| + 1 + |L|) Now, this is almost what we wanted, however, we originally set out to prove that we could find a d such that |x-a| < d implies that |f(x)*g(x) - LM| < e However, this isn't a big problem. Now, we can see that instead of picking e1 = e2 = e, if we just picked e1 = e /(2*(|M|+1)), and e2 = e /(2*|L|), we would be given a d6 and d7 such that: |x - a| < min(d6, d7) implies that (|M| + 1) * e1 + |L|*e2 = (|M| + 1) * e /(2*(|M|+1)) + |L|*e /(2*|L|) = e/2 + e/2 = e. So, we just 'go back' and make this change. Now, if we set d = min(d4, d6, d7), we get that: |x-a| < d implies that |f(x)*g(x) - LM| < e Which is what we originally wanted. If any part of this explanation was unclear, feel free to ask. "btw i like your quote Dapthar" Thanks. Share on other sites Latex module? "Now, let's deal with as x->a, lim g(x) = M. Also, for any e2 > 0, there exists a d2 > 0 such that: (3) |x-a| < d1 implies |g(x) - M| < e2" is that meant to be |x-a| < d2? because down here you have... |x-a| < d1 implies |f(x) - L| < e1 |x-a| < d2 implies |g(x) - M| < e2 you're probably right of course i'm just yeah not sure if i know whats going on there? and...
"Our first guess might be to pick e1 = e1 = e, so let's do tha" is that meant to be e1=e2=e? lol its all so complicated, but i think i am getting there..... i tried this method with another proof, but it involves continuities so i got a bit stuck. "Use the formal definition of limit twice to prove that if f is continuous at L and if lim g(x) = L, x->c then lim f(g(x)) = f(L), x->c" Share on other sites Latex module?? lol LaTeX is the language used to display mathematical symbols in everything from bulletin boards to research papers. However, I guess it sounds a bit odd when mentioned out of context. "Now, let's deal with as x->a, lim g(x) = M. Also, for any e2 > 0, there exists a d2 > 0 such that: (3) |x-a| < d1 implies |g(x) - M| < e2" is that meant to be |x-a| < d2? Yup, you're right, it should be d2. Mistake on my part. "Our first guess might be to pick e1 = e1 = e, so let's do tha" is that meant to be e1=e2=e? Right again, it should be e1=e2=e. I'll edit my earlier post to correct these errors. "i tried this method with another proof, but it involves continuities so i got a bit stuck. "Use the formal definition of limit twice to prove that if f is continuous at L and if lim g(x) = L, x->c then lim f(g(x)) = f(L), x->c"" You just need to translate 'f is continuous at L' into an epsilon-delta condition. Whenever anyone says that a function h(x) is continuous at a point b, it is the exact same thing as saying that as x->b, lim h(x) = h(b), i.e., the limit is what you expect it to be.
Regards Share on other sites k thanks i'll look into it a bit more, though this stuff still makes no sense to me but i'll try Share on other sites I found that it didn't really make sense when I did it, but looking back on it after a year (and after you've had a chance to pick up on all of the little hints), it gets a lot easier. I had the same kind of problems - I think more or less everyone does. Share on other sites Sarah, there is a diagram which helps in understanding the epsilon/delta definition of 'limit' in any calculus text. Have you seen it? Share on other sites "Dapthar, your discussion of epsilon/delta proofs was extremely good. I did not have time to inspect it carefully, but when I want to, it is something that will hold my attention. I could see your careful use of logic." Thanks. I try to write proofs that mimic the thought process one goes through when working out the problem. It usually ends up a bit longer than a normal proof, but hopefully it ends up being a bit clearer. "k thanks i'll look into it a bit more, though this stuff still makes no sense to me but i'll try" Well, if you have any more questions, feel free to ask. Share on other sites lol ok i got another question..... i understand your proof now but i still can't do this question.... "Use the relevant formal definition to prove that: x->1-, lim 1/(x-1)=-infinity" Share on other sites "lol ok i got another question..... i understand your proof now but i still can't do this question.... "Use the relevant formal definition to prove that: x->1-, lim 1/(x-1)=-infinity"" Can you state this in words please, i want to have a go at it. "The limit as x approaches one from the left, of one divided by x minus one equals negative infinity" <---- is that right ?? Share on other sites Looks to be right to me. Share on other sites lol ok i got another question..... i understand your proof now but i still can't do this question....
"Use the relevant formal definition to prove that: x->1-, lim 1/(x-1)=-infinity" As before, you just have to translate the conditions into epsilon-delta statements. After you translate the conditions, you get the following. (Again, I use e and d for epsilon and delta) For all M < 0, there exists a d > 0 such that 0 < 1 - x < d implies 1/(x-1) < M (The lack of absolute value symbols is purposeful.) Since we have a definite function, we can solve for the specific d. Let's work with the 1/(x-1) < M expression. If we are given any M < 0, we want to find a d such that 0 < 1 - x < d. However, 1/(x-1) < M implies that 1/M < x - 1. Multiplying both sides by -1, we get that -1/M > 1 - x, thus if we let d = -1/M, we're done. Thus, now given any M < 0, there exists a d > 0 such that 0 < 1 - x < d implies 1/(x-1) < M. Share on other sites "As before, you just have to translate the conditions into epsilon-delta statements. After you translate the conditions, you get the following. (Again, I use e and d for epsilon and delta) For all M < 0, there exists a d > 0 such that 0 < 1 - x < d implies 1/(x-1) < M (The lack of absolute value symbols is purposeful.) Since we have a definite function, we can solve for the specific d. Let's work with the 1/(x-1) < M expression. If we are given any M < 0, we want to find a d such that 0 < 1 - x < d. However, 1/(x-1) < M implies that 1/M < x - 1. Multiplying both sides by -1, we get that -1/M > 1 - x, thus if we let d = -1/M, we're done. Thus, now given any M < 0, there exists a d > 0 such that 0 < 1 - x < d implies 1/(x-1) < M." How does this show that the limit is negative infinity? I don't even see the infinity symbol. Regards PS: Nice work again by the way. I'm going to follow this argument eventually. Share on other sites How does this show that the limit is negative infinity? I don't even see the infinity symbol. Regards It's one of the big secrets in Mathematics; formal proofs almost never deal with infinity.
Note that the proof mentions "for all M < 0", i.e., I can choose M to be -1, -10, or -1 000 000, thus, 'in the limit, we go to negative infinity'. Infinite limits 'basically' follow the same rules as 'regular' limits, that for any error e, we can provide a d such that if |x-a| < d then |f(x) - L| < e, so 'at a, f(x) equals L', except here, our L is negative infinity. We just had to show that 'we can get arbitrarily close to negative infinity'. "PS: Nice work again by the way. I'm going to follow this argument eventually." Thanks. Share on other sites cheers for that Dapthar Share on other sites "... for any error e, we can provide a d such that if |x-a| < d then |f(x) - L| < e, so 'at a, f(x) equals L'..." Is this the exact definition of limit that you see in calculus books?
In first order logic, there is a difference between writing $\forall \epsilon \exists \delta$ $\exists \delta \forall \epsilon$ Can you explain it to me rapidly? I know I am being a pest, but thank you. Share on other sites Well, the first one says that no matter what epsilon you choose, you can always find a delta. The second one is saying that for one specific delta, there are a load of epsilons. I've heard a really good analogy of this in my Foundations lectures, but I can't seem to remember it offhand. Something to do with brothers and sisters. Perhaps I'll post it later if I can remember it. Share on other sites Well' date=' the first one says that no matter what epsilon you choose, you can always find a delta. The second one is saying that for one specific delta, there are a load of epsilons. [/quote'] Yes I know that answer... I was thinking more along the lines of whether or not epsilon is a function of delta. You know in the one case yes, and the other no, which links somehow to the meaning of function. I never did understand the definition of 'function.' Share on other sites Well, when it comes to deltas and epsilons, we don't consider functions as much as dependent upon; for example, convergence of a sequence: $\forall \epsilon > 0 \exists N \in \mathbb{N} \text{ such that } | a_n - a | < \epsilon \forall n \geq N$ In this case, our N will depend on epsilon; often it's written $N(\epsilon)$. I suppose you can consider it as a function if you wanted. Create an account Register a new account
{}
#### Vol. 314, No. 1, 2021 Moduli of Legendrian foliations and quadratic differentials in the Heisenberg group ### Robin Timsit Vol. 314 (2021), No. 1, 233–251 ISSN: 1945-5844 (e-only) ISSN: 0030-8730 (print) ##### Abstract Our aim is to prove the following result concerning moduli of curve families in the Heisenberg group. Let $\Omega$ be a domain in the Heisenberg group foliated by a family $\Gamma$ of legendrian curves. Assume that there is a quadratic differential $q$ on $\Omega$ such that every curve in $\Gamma$ is a horizontal trajectory for $q$. Let $l_{\Gamma}:\Omega\to\left]0,+\infty\right[$ be the function that associates to a point $p\in\Omega$ the $q$-length of the leaf containing $p$. Then, the modulus of $\Gamma$ is $M_{4}(\Gamma)=\int_{\Omega}\frac{|q|^{2}}{(l_{\Gamma})^{4}}\,dL^{3}.$
{}
## Function results can’t be assigned to

Let’s say we have a “normal” programming language such as Python:

```python
def foo(a, b):
    return a + b

foo(1, 2) = 10  # <- This is obviously not true
```

In my mental model, foo(1, 2) evaluates to 3 first. Therefore, the line in question is the same as 3 = 10, which doesn’t work. More generally, we assume that only variables (or attributes, which are differently scoped variables) can be assigned to. That’s why it’s so surprising to me when I see this code in a tutorial:

```r
seqlengths(gr) <- c(249250621,243199373,198022430)
```

This is just strange. I’ve never seen a language where something that’s not obviously a variable can be mutated via assignment. In R, however, functions have various forms. One of these forms is an assignment form. To quote the R manual:

A special type of function calls can appear on the left hand side of the assignment operator as in

```r
class(x) <- "foo"
```

What this construction really does is to call the function class<- with the original object and the right hand side. This function performs the modification of the object and returns the result which is then stored back into the original variable. (At least conceptually, this is what happens. Some additional effort is made to avoid unnecessary data duplication.)

To elaborate, it explains:

The replacement function has the same name with <- pasted on. Its last argument, which must be called value, is the new value to be assigned. For example, names(x) <- c(“a”,“b”) is equivalent to

```r
*tmp* <- x
x <- "names<-"(*tmp*, value=c("a","b"))
rm(*tmp*)
```

This almost makes sense to me. It’s essentially saying something like this in Python:

```python
# this invalid call
last_name(Benjamin) = "Lee"

# gets translated to
tmp = Benjamin
Benjamin = with_last_name(tmp, "Lee")
```

What stops making sense is the inconsistent use of backticks and quotation marks in escaping names.
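To make that `*tmp*` translation concrete, here is a runnable Python sketch; `with_last_name` and the dict are hypothetical stand-ins, not R’s actual machinery:

```python
def with_last_name(obj, value):
    # hypothetical replacement-style helper; the parameter named `value`
    # mirrors R's convention for `name<-` functions
    updated = dict(obj)        # copy, loosely like R's copy-on-modify
    updated["last_name"] = value
    return updated

benjamin = {"first_name": "Benjamin"}

# R's  last_name(benjamin) <- "Lee"  would desugar, conceptually, to:
tmp = benjamin
benjamin = with_last_name(tmp, value="Lee")
del tmp                        # rm(`*tmp*`)

assert benjamin == {"first_name": "Benjamin", "last_name": "Lee"}
```

The assignment target is never really the function call: the call on the left is rewritten into an ordinary call plus a rebinding of the original variable.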
## Backticks and quotes for name escaping

While the use of backticks for name escaping might be unexpected to Python programmers, it’s normal in Nim. Say you want to define an overload for \$, the stringification function in Nim. You can’t directly assign to that symbol (since it’s not an otherwise valid variable name) so you escape it using backticks. In the R example, you used backticks to escape the name *tmp* since it wouldn’t be valid. Ok so far, but perhaps surprising to some. What goes off the hinges is that you can also use quotes to do escaping sometimes. As pointed out on Stack Overflow, it creates really weird outcomes like this:

```r
> a <- 1:3
> "a[1]" <- 55
> a[1]
[1] 1
> "a[1]"
[1] "a[1]"
> `a[1]`
[1] 55
```

Here, you can use quotes for assignment to an otherwise illegal name but not for accessing it. Author Benjamin D. Lee Benjamin D. Lee is an NIH-OxCam scholar pursuing his doctorate at Oxford University. His research is focused on the computational identification and analysis of novel viruses and virus-like agents to better understand their evolution and origin.
{}
# Let f be the function given by f(x) = |x|. Which of the following are true?

5. Let f be the function given by f(x) = |x|. Which of the following statements about f are true?

   I. f is continuous at x = 0
   II. f is differentiable at x = 0
   III. f has an absolute minimum at x = 0

   (A) I only (B) II only (C) III only (D) I and III only (E) II and III only

6. If f is a continuous function and if F′(x) = f(x) for all real numbers x, then ∫₁³ f(2x) dx =

   (A) 2F(3) − 2F(1) (B) ½F(3) − ½F(1) (C) 2F(6) − 2F(2) (D) F(6) − F(2) (E) ½F(6) − ½F(2)

7. The graphs of the derivatives of the functions f, g, and h are shown above. Which of the functions f, g, or h have a relative maximum on the open interval a < x < b? [The referenced graphs and the answer choices did not survive extraction.]

8. If dy/dt = ky and k is a nonzero constant, then y could be … [answer choices garbled in the source]

9. If f(x) = …, then f′(x) = … [problem garbled in the source]

10. A particle moves along the x-axis with velocity given by v(t) = 3t² + 6t for time t ≥ 0. If the particle is at position x = 2 at time t = 0, what is the position of the particle at t = 1?

    (A) 4 (B) 6 (C) 9 (D) 11 (E) 12

11. (2003, AB-6) Let f be the function defined by

    f(x) = √(x + 1) for 0 ≤ x ≤ 3, and f(x) = 5 − x for 3 < x ≤ 5.

    (a) Is f continuous at x = 3?
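Problem 10 can be checked by hand: the antiderivative of v(t) = 3t² + 6t is t³ + 3t², so with x(0) = 2 the position is x(t) = 2 + t³ + 3t². A quick sketch confirming the arithmetic:

```python
# Position of the particle in problem 10: x'(t) = v(t) = 3t^2 + 6t, x(0) = 2.
# Integrating v gives t^3 + 3t^2, so x(t) = 2 + t^3 + 3t^2.

def x(t):
    """Position at time t, from integrating v(t) = 3t^2 + 6t with x(0) = 2."""
    return 2 + t**3 + 3 * t**2

print(x(1))  # 6 -> answer choice (B)
```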
{}
# For a circular curve of radius 200 m, the coefficient of lateral friction is 0.15 and the design speed is 40 kmph. The equilibrium superelevation (for equal pressure on the inner and outer wheels), in percent, would be:

## Options:

1. 21.3
2. 6.3
3. 7.0
4. 4.6

### Correct Answer: Option 2

This question was previously asked in HPPSC AE Civil 2016 (PWINDIPH) Official Paper.

## Solution:

Concept:

The general equation of superelevation is

$$e + f = \frac{{{V^2}}}{{127\;R}}$$

where

e = rate of superelevation = tan θ
R = radius of the horizontal curve in m
f = coefficient of lateral friction = 0.15
V = design speed in kmph

Equilibrium superelevation: the superelevation which, when provided, imposes equal pressure on the inner and outer tyres of the vehicle, i.e., f = 0. The rate of equilibrium superelevation is therefore

$$e = \frac{{{V^2}}}{{127R}}$$

Calculation:

Given R = 200 m and V = 40 kmph,

$$e = \frac{{{40^2}}}{{127\; \times\; 200}} = 0.0630$$

⇒ e ≈ 6.3%
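The calculation reduces to a single division; a quick check of the arithmetic (variable names are illustrative):

```python
# Equilibrium superelevation: e = V^2 / (127 * R), with f = 0 (equal tyre pressure).
V = 40    # design speed, kmph
R = 200   # curve radius, m

e = V**2 / (127 * R)
print(round(e * 100, 1))  # 6.3 (percent), i.e. option 2
```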
{}
#### Updater through Remote XML: Advice

I am implementing a very simple check routine that goes to my website and reads an XML file. The XML file basically contains all info for my apps and their current status. All I need to do is just read the version node and compare it to the current node and then advise the user to visit the website for the new version. I've looked at ClickOnce and other methods but I would like to implement something simpler. Anyway, has anyone done this before? My XML will look something like this:

<?xml version="1.0" encoding="utf-8"?>
<Apps>
  <App id="0001" title="BigApp">
    <!--This is a comment-->
    <Name>Big Application</Name>
    <LatestVersion>1.0.0.0</LatestVersion>
    <News>
      Version 1.0.0
      New feature

      Version 0.90
      -----------
      * Original release.
    </News>
  </App>
</Apps>

Any advice or recommendations as to how to set up my XML file? I already have the code that goes to the remote server and reads the XML file. I guess I'm just asking if this is the way to go with my simple checker.

AGP

10/21/2007 7:35:48 AM
dotnet.xml 7266 articles. 0 followers.
2 Replies 508 Views

That looks reasonable, maybe splitting the version number into its constituent parts would make comparisons easier but that could also be done in the actual comparison code and you have no guarantee that all version numbers follow the same format. (Or perhaps you do as it's your app.)

--

Joe Fawcett (MVP - XML)
http://joe.fawcett.name

"AGP" <sindizzy.pak@softhome.net> wrote in message news:BjDSi.5455$R95.1461@nlpi070.nbdc.sbc.com...
> I am implementing a very simple check routine that goes to my website and
> reads an XML file. The XML file basically contains all info for my apps and
> their current status. All I need to do is just read the version node and
> compare it to the current node and then advise the user to visit the website
> for the new version. I've looked at ClickOnce and other methods but I would
> like to implement something simpler.
> Anyway, has anyone done this before? My XML will look
> something like this:
>
> <?xml version="1.0" encoding="utf-8"?>
> <Apps>
>   <App id="0001" title="BigApp">
>     <!--This is a comment-->
>     <Name>Big Application</Name>
>     <LatestVersion>1.0.0.0</LatestVersion>
>     <DownloadURL>http://www.mysite/files/MyApp100.exe</DownloadURL>
>     <News>
>       Version 1.0.0
>       New feature
>
>       Version 0.90
>       -----------
>       * Original release.
>     </News>
>   </App>
> </Apps>
>
> Any advice or recommendations as to how to set up my XML file? I already
> have the code that goes to the remote server and reads the XML file. I
> guess I'm just asking if this is the way to go with my simple checker.
>
> AGP

10/23/2007 11:49:31 AM

Thanks for the input. I do the version checking in code and I will attempt to cover all formats of version numbers, so it should work regardless of whether I write 1 or 1.2 or 1.3.5.7.

AGP

"Joe Fawcett" <joefawcett@newsgroup.nospam> wrote in message news:%231yWHrWFIHA.536@TK2MSFTNGP06.phx.gbl...
> That looks reasonable, maybe splitting the version number into its
> constituent parts would make comparisons easier but that could also be
> done in the actual comparison code and you have no guarantee that all
> version numbers follow the same format. (Or perhaps you do as it's your
> app.)
>
> --
>
> Joe Fawcett (MVP - XML)
>
> http://joe.fawcett.name
>
> "AGP" <sindizzy.pak@softhome.net> wrote in message
> news:BjDSi.5455$R95.1461@nlpi070.nbdc.sbc.com...
>> I am implementing a very simple check routine that goes to my website and
>> reads an XML file. The XML file basically contains all info for my apps
>> and their current status. All I need to do is just read the version node
>> and compare it to the current node and then advise the user to visit the
>> website for the new version. I've looked at ClickOnce and other methods
>> but I would like to implement something simpler. Anyway, has anyone done
>> this before?
>> My XML will look something like this:
>>
>> <?xml version="1.0" encoding="utf-8"?>
>> <Apps>
>>   <App id="0001" title="BigApp">
>>     <!--This is a comment-->
>>     <Name>Big Application</Name>
>>     <LatestVersion>1.0.0.0</LatestVersion>
>>     <News>
>>       Version 1.0.0
>>       New feature
>>
>>       Version 0.90
>>       -----------
>>       * Original release.
>>     </News>
>>   </App>
>> </Apps>
>>
>> Any advice or recommendations as to how to set up my XML file? I already
>> have the code that goes to the remote server and reads the XML file. I
>> guess I'm just asking if this is the way to go with my simple checker.
>>
>> AGP

10/24/2007 3:28:47 AM
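As Joe suggests, splitting the version string into its numeric parts makes the comparison robust. A sketch of the idea (shown in Python for brevity; the same tuple comparison ports directly to VB.NET or C#, and the short-form inputs match AGP's "1 or 1.2 or 1.3.5.7" examples):

```python
# Compare dotted version strings by their numeric parts, so "1.10" > "1.9",
# and so short forms like "1" or "1.2" compare sensibly against "1.0.0.0".

def parse_version(s):
    return tuple(int(part) for part in s.split("."))

def update_available(current, latest):
    # Pad the shorter tuple with zeros so "1.2" compares equal to "1.2.0.0".
    a, b = parse_version(current), parse_version(latest)
    n = max(len(a), len(b))
    a += (0,) * (n - len(a))
    b += (0,) * (n - len(b))
    return b > a

print(update_available("0.9", "1.0.0.0"))   # True
print(update_available("1.0.0.0", "1.0"))   # False
```

Plain string comparison would get "1.10" < "1.9" wrong, which is why the parts are compared as integers.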
{}
# How Do I Write A Petition
{}
## Calculus: Early Transcendentals (2nd Edition)

The value of the derivative is $$\left.\frac{dc}{ds}\right|_{s=25}=\frac{1}{5}.$$ Using the definition of the derivative at $s=25$, with $c(s)=2\sqrt{s}-1$, we have $$\left.\frac{dc}{ds}\right|_{s=25}=\lim_{h\to0}\frac{c(25+h)-c(25)}{h}=\lim_{h\to0}\frac{2\sqrt{25+h}-1-(2\sqrt{25}-1)}{h}=\lim_{h\to0}\frac{2\sqrt{25+h}-10}{h}.$$ Multiplying by the conjugate gives $$\lim_{h\to0}\frac{2\sqrt{25+h}-10}{h}\cdot\frac{2\sqrt{25+h}+10}{2\sqrt{25+h}+10}=\lim_{h\to0}\frac{(2\sqrt{25+h})^2-10^2}{h(2\sqrt{25+h}+10)}=\lim_{h\to0}\frac{4(25+h)-100}{h(2\sqrt{25+h}+10)}=\lim_{h\to0}\frac{4h}{h(2\sqrt{25+h}+10)}=\lim_{h\to0}\frac{4}{2\sqrt{25+h}+10}=\frac{4}{2\sqrt{25}+10}=\frac{1}{5}.$$
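The limit can be sanity-checked numerically by evaluating the difference quotient of $c(s)=2\sqrt{s}-1$ for a small $h$ (a numerical check, not a proof):

```python
# Difference quotient for c(s) = 2*sqrt(s) - 1 at s = 25; should approach 1/5.
from math import sqrt

def c(s):
    return 2 * sqrt(s) - 1

h = 1e-6
slope = (c(25 + h) - c(25)) / h
print(round(slope, 6))  # 0.2
```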
{}
# Humans of Simulated New York

04.11.2016 | projects

For our month-long DBRS Labs residency, Fei and I built the beginnings of a tool set for agent-based economic simulation. My gut feeling is that the next phase of this AI renaissance will be around simulation. AlphaGo's recent victory over the Go world champion - a landmark in AI history - resulted from a combination of deep learning and simulation techniques, and we'll see more of this kind of hybrid. Simulation is important to planning, a common task in AI. Here "planning" refers to any task that requires producing a sequence of actions (a "plan") that leads from a starting state to a goal state. An example planning problem might be: I need to get from New York (starting state) to London (goal state) - what's the best way of getting there? A good plan might be: book a flight to London, take a cab to the airport, get on the flight. But there are infinitely many other plans, depending on how detailed you want to get. I could walk and swim to London - it's not a very good plan, but it's a plan nonetheless! To produce such plans, there needs to be a way of anticipating the outcomes of actions. This is where simulation comes in. For example, in deciding how to get to the airport, I have to consider various scenarios - traffic could be bad at this particular hour, maybe there's some chance the cab breaks down, and so on. Simulation is the consideration of these scenarios. So simulation, especially as it relates to planning, is crucial to AI's more interesting applications, such as policy and economic simulation, which seek to understand the implications of policy decisions. Much like machine learning, planning and simulation have a long history and are already used in many different contexts, from shipping logistics to spacecraft. The residency was a great opportunity to begin exploring this space.
## First plan

The general idea was to use a simulation technique called agent-based modeling, in which a system is represented as individual agents - e.g. people - that have predefined behaviors. These agents all interact with one another to produce emergent phenomena - that is, outcomes which cannot be attributed to any individual but rather arise from their aggregate interactions. The whole is greater than the sum of its parts. Originally we aimed to create a literal simulation of New York City and use it to generate narratives of its agents: simulated citizens ("simulants"). We wanted to produce a simulation heavily drawn from real-world data, collating various data sources (census data, market data, employment data, whatever we could scrape together) and unpacking their abstract numbers into simulated lives. Data ostensibly is a compact way of representing very nuanced phenomena - it lossy-compresses a person's life. Using simulation to play out the rich dynamics embedded within the data seemed like a good way to (re-)vivify it. It wouldn't reach the fidelity and honesty of lived experience, but might nevertheless be better than just numbers. Over time, though, we became more interested in using simulation to postulate different world dynamics and see how those played out. What would the world look like if people behaved in this way or had these values? What would happen to this group of people if the government instituted this policy? What happens to labor when technology is productive enough to replace it? What if the world had less structural inequality than it does now?

## Acknowledgements

This speculative direction had many inspirations: Past and speculated initiatives around AI and governance were one - in particular, Allende's Cybersyn, the ambitious, way-before-its-time attempt to manage an economy via networked computation under the control of the workers, and the imagined AI-managed civilization of Iain M. Banks' Culture.
These reached for utopian societies in which many, if not all, aspects were managed by some form of artificial intelligence, presumably involving simulation to understand the societal impacts of its decisions. Another big inspiration was speculative social science fiction such as Ursula K. Le Guin's "Hainish Cycle", which explores how differences in fundamental aspects of humans lead to drastically different societies. From these we aspired to create something which similarly carves out a space to hypothesize alternative worlds and social conditions. We also referred to what could be called "generative narrative" video games. In these games, no story is fixed or preordained; rather, only the dynamics of the game world are defined. Things take on a life of their own. Bay 12 Games' Dwarf Fortress is one of the best examples of this - a meticulously detailed simulation of a colony of dwarves which almost always ends in tragedy. Dwarf Fortress has inspired other games such as RimWorld, which follows the same general structure but on a remote planet. Beyond the pragmatic applications of economic simulation, the narrative aspect produces characters and a society to be invested in and to empathize with. In a similar vein, we looked at management simulation games, such as SimCity/Micropolis, Cities: Skylines (by Paradox Interactive, renowned for their extremely detailed sim games), Roller Coaster Tycoon, and more recently, Block'hood, from which we took direction. These games provide a lot of great design cues for making complex simulations easy to play with. ## Game of Life In both a literal and metaphorical way, these simulation games are essentially Conway's Game of Life writ large. The Game of Life is the prototypical example of how a set of simple rules can lead to emergent phenomena of much greater complexity. Its world is divided into a grid, where each cell in the grid is an agent. Each cell has two possible states: on (alive) or off (dead). 
The rules which govern these agents may be something like:

- a cell dies if it has only one or no neighbors
- a cell dies if it is surrounded by four or more neighbors
- a dead cell becomes alive if it has three neighbors

How the system plays out can vary drastically depending on which cells start alive. Compare the following two - both use the same rules, but with different initial configurations (these gifs are produced from this demo of the Game of Life). This is characteristic of agent-based models: different starting conditions and parameters can lead to fantastically different outcomes.

Interactive and well-designed simulation can also function as an educational tool, especially for something as complex as an economy. The dissociation between our daily experience and the abstract workings of the economy is massive. Trying to think about all its moving parts induces a kind of vertigo, and there is no single position from which the whole can be seen. While our project is not there yet, perhaps it may eventually aid in the cognitive mapping Fredric Jameson calls for in Postmodernism: "a situational representation on the part of the individual subject to that vaster and properly unrepresentable totality which is the ensemble of society's structures as a whole". Bureau d´Études' An Atlas of Agendas attempts this mapping by painstakingly notating nauseatingly sprawling networks of power and influence, but it is still quite abstract, intimidating, and disconnected from our immediate experience.
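The Game of Life rules listed above translate almost line-for-line into code. A minimal sketch of one generation, representing the world as a set of live (x, y) cells (the set representation is just one convenient choice):

```python
from itertools import product

def neighbors(cell):
    """The eight cells adjacent to a given (x, y) cell."""
    x, y = cell
    return {(x + dx, y + dy) for dx, dy in product((-1, 0, 1), repeat=2)
            if (dx, dy) != (0, 0)}

def step(alive):
    """One Game of Life generation, following the rules in the text:
    a live cell with two or three live neighbors survives; a dead cell
    with exactly three live neighbors is born; everything else dies."""
    candidates = alive | {n for cell in alive for n in neighbors(cell)}
    new = set()
    for cell in candidates:
        count = len(neighbors(cell) & alive)
        if cell in alive and count in (2, 3):
            new.add(cell)
        elif cell not in alive and count == 3:
            new.add(cell)
    return new

# A "blinker" oscillates between a horizontal row and a vertical column:
blinker = {(0, 1), (1, 1), (2, 1)}
print(sorted(step(blinker)))  # [(1, 0), (1, 1), (1, 2)]
```

Running `step` repeatedly on different starting sets is all it takes to reproduce the drastically different outcomes described above.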
Nick Srnicek calls for something similar in Accelerationism - Epistemic, Economic, Political (from Speculative Aesthetics): So this is one thing that can help out in the current conjuncture: economic models which adopt the programme of epistemic accelerationism, which reduce the complexity of the world into aesthetic representations, which offer pragmatic purchase on manipulating the world, and which are all oriented toward the political accelerationist goals of building and expanding rational freedom. These can provide both navigational tools for the current world, and representational tools for a future world. Yet another similar concept is "cyberlearning", described in Optimists’ Creed: Brave New Cyberlearning, Evolving Utopias (Circa 2041) (Winslow Burleson & Armanda Lewis) as: Today’s cyberlearning–the tight coupling of cyber-technology with learning experiences offering deeply integrated and personally attentive artificial intelligence–is critical to addressing these global, seemingly intractable, challenges. Cyberlearning provides us (1) access to information and (2) the capacity to experience this information’s implications in diverse and visceral ways. It helps us understand, communicate, and engage productively with multiple perspectives, promoting inclusivity, collaborative decision-making, domain and transdisciplinary expertise, self actualization, creativity, and innovation (Burleson 2005) that has transformative societal impact. We found some encouragement for this approach in Modeling Complex Systems for Public Policies, which was published last year and covers the current state of economic and public policy modeling. In the preface, Scott E. Page writes: ...whether we focus our lens on the forests or students of Brazil or the world writ large, we cannot help but see the inherent complexity. 
We see diverse, purposeful connecting people constructing lives, interacting with institutions, and responding to rules, constraints, and incentives created by policies. These activities occur within complex systems and when the activities aggregate they produce feedbacks and create emergent patterns and functionalities. By definition, complex systems are difficult to describe, explain, and predict, so we cannot expect ideal policies. But we can hope to improve, to do better. (p. 14)

## Early attempts

We had a lot of anxiety in deciding on the simulation's level of detail. There were a few constraints which prevented us from going too crazy, e.g. computational feasibility (whether or not we can run the simulation in a reasonable amount of time) and sensitivity/precision issues (i.e. all the problems of modeling chaotic systems). These practical concerns were offset by a desire to represent all the facets of life we were interested in, which was too ambitious. This tension is best described by Borges' On Exactitude in Science, where a map is so detailed that it directly overlays the terrain it is meant to represent. A large part of modeling's value is that it does not seek a one-to-one representation of its referent: such fidelity is not only impractical, but the point of a model is to capture the essence of a system without too much noisy detail. For us, the details were important, but we also avoided too literal a model to leave some space for the simulation to surprise us. First we tried fairly sophisticated simulants, which each had their own set of utility functions. These utility functions determined, for instance, how important money was to them or how much stress bothered them. Some agents valued material wealth over mental health and were willing to work longer hours, while others valued relaxation. We went way too granular here.
Agents would make a plan for their day, hour-by-hour, involving actions such as relaxing, looking for work, seeing friends, going to work, sleeping, visiting the doctor, and so on. Agents used A* search (a powerful search algorithm) to generate a plan they believed would maximize their utility, then executed on that plan. Some actions might be impossible from their current state - for instance, they might be sick and want to visit the doctor, but not have enough money - but agents could set these desired actions as long-term goals and work towards them. To facilitate developing these kinds of goal-oriented agents, we built the cess framework. Because there is a lot of computation happening here, cess includes support for running a simulation across a cluster. However, even with the distributed computation, modeling agents at this level of detail was too slow for our purposes (we wanted something snappy and interactive), so we abandoned this approach (development of cess will continue separately). ## A simple economy Given that our residency was for only a month, we ended up going with something simple: a conventional agent-based model of a very basic economy, with flexibility in defining the world's parameters. A lot of what we wanted to include had to be left out. A good deal of the final simulation was based on previous work (of which there is plenty) in economic modeling. In particular: In addition to our simulants (the people), we also had firms, which included consumer good firms, capital equipment firms, raw material firms, and hospitals, and the government. The firms use Q-learning, as described in "An agent-based model of a minimal economy", to make production and pricing decisions. Q-learning is a reinforcement learning technique where the agent is "rewarded" for taking certain actions and "punished" for taking others, so that they eventually know how to act under certain conditions. 
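Such a tabular Q-learning update can be sketched in a few lines of Python. This is an illustrative sketch, not the project's actual firm code; the state names, actions, and hyperparameters below are made up:

```python
import random

def q_learning_step(Q, state, actions, reward_fn, next_state_fn,
                    alpha=0.1, gamma=0.9, epsilon=0.1):
    """One tabular Q-learning update: pick an action (epsilon-greedy),
    observe the reward and next state, then nudge Q toward the target."""
    if random.random() < epsilon:
        action = random.choice(actions)          # explore
    else:                                        # exploit the best-known action
        action = max(actions, key=lambda a: Q.get((state, a), 0.0))
    reward = reward_fn(state, action)
    next_state = next_state_fn(state, action)
    best_next = max(Q.get((next_state, a), 0.0) for a in actions)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
    return next_state
```

Run repeatedly against a reward function that pays off when, say, inventory sells at a given margin, a firm's Q-table gradually comes to prefer the production and pricing actions the market rewards.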
Here firms use this to learn when producing more or less is a good idea and what profit margins consumers will tolerate.

We still wanted to start from something resembling our world, at least to make the very tenuous claim that our simulation actually proves anything a tiny bit less tenuous. We gathered individual-level American Community Survey data from IPUMS USA and used that to generate "plausible" simulated New Yorkers. At first we tried trendy generative neural net methods like generative adversarial networks and variational autoencoders, but we weren't able to generate very believable simulants that way. In the end we just learned a Bayes net over the data (I hope this project's lack of neural nets doesn't detract from its appeal 😊), which turned out pretty well. A Bayes net allows us to generate new data that reflects real-world correlations, so we can do things like: given a Chinese, middle-aged New Yorker, what neighborhood are they likely to live in, and what is their estimated income? The result we get back won't be "real" in the sense that the data isn't connected to an actual person, but it will reflect the patterns in the original data. That is to say, it could be a real person. The code we use to generate simulants is available here. This is basically what it does:

>>> from people import generate
>>> year = 2005
>>> generate(year)
{
  'age': 36,
  'employed': <Employed.non_labor: 3>,
  'wage_income': 3236,
  'wage_income_bracket': '(1000, 5000]',
  'industry': 'Independent artists, performing arts, spectator sports, and related industries',
  'industry_code': 8560,
  'neighborhood': 'Greenwich Village',
  'occupation': 'Designer',
  'occupation_code': 2630,
  'puma': 3810,
  'race': <Race.white: 1>,
  'rent': 1155.6864868468731,
  'sex': <Sex.female: 2>,
  'year': 2005
}

This is how we spawned our population of simulants. Because we were also interested in social networks, simulants could become friends with one another.
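The friendship mechanic can be sketched as a logistic homophily rule. In the actual project the coefficients came from a fitted confidant model; the weights and features here are invented purely for illustration:

```python
import math

# Invented homophily weights -- the real project took its coefficients from
# a fitted logistic regression (the confidant model); these numbers are made up.
WEIGHTS = {"intercept": -1.0, "same_race": 1.2, "age_gap": -0.05, "same_sex": 0.3}

def friendship_prob(a, b):
    """Logistic probability that two simulants become friends,
    driven by demographic similarity (homophily)."""
    z = WEIGHTS["intercept"]
    z += WEIGHTS["same_race"] * (a["race"] == b["race"])
    z += WEIGHTS["age_gap"] * abs(a["age"] - b["age"])
    z += WEIGHTS["same_sex"] * (a["sex"] == b["sex"])
    return 1 / (1 + math.exp(-z))
```

Demographically similar simulants then get a higher chance of an edge in the social graph, which is exactly what makes illnesses and job opportunities spread unevenly through it.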
We ripped out the parameters from the logistic regression model (the "confidant model" in the graphic below) described in Social Distance in the United States: Sex, Race, Religion, Age, and Education Homophily among Confidants, 1985 to 2004 (Jeffrey A. Smith, Miller McPherson, Lynn Smith-Lovin, University of Nebraska - Lincoln, 2014) and used that to build out the social graph. This social network determined how illnesses and job opportunities spread. We designed the dynamics of the world so it could model some of the questions posed earlier. For instance, we modeled production and productive technology to explore the idea of automation. Say it takes 10 labor to produce a good, each worker produces 20 labor, and equipment adds an additional 10 labor. Now say your firm wants to produce 10 goods, which requires 100 labor. If you have no equipment, you would need five workers (5*20=100). However, if you have equipment for each worker, you only need four workers (4*(20+10)=120). (In our simulation, each piece of equipment requires one worker to operate it, so you can't just buy 10 pieces of equipment and not hire anyone). To model a more advanced level of automation, we could instead say that each piece of equipment now produces 100 labor, and now to meet that product quota, we only need one worker (1*(20+100)=120). Then we just hit play and see what happens to the world. ## Steering the city As Ava Kofman points out in "Les Simerables", these kinds of simulations embed their creators' assumptions about how the world does or should work. Discussing the dynamics of SimCity, she notes: To succeed even within the game’s fairly broad definition of success (building a habitable city), you must enact certain government policies. An increase in the number of police stations, for instance, always correlates to a decrease in criminal activity; the game’s code directly relates crime to land value, population density, and police stations. 
Adding police stations isn’t optional, it’s the law. Or take the game’s position on taxes: “Keep taxes too high for too long, and the residents may leave your town in droves. Additionally, high-wealth Sims are more averse to high taxes than low- and medium-wealth Sims.” The player’s exploration of utopian possibility is limited by these parameters. The imagination extolled by Wright is only called on to rearrange familiar elements: massive buildings, suburban quietude, killer traffic. You start each city with a blank slate of fresh green land, yet you must industrialize. The landscape is only good for extracting resources, or for being packaged into a park to plop down so as to increase the value of the surrounding real estate. Certain questions are raised (How much can I tax wealthy residents without them moving out?) while others (Could I expropriate their wealth entirely?) are left unexamined.

These assumptions seem inevitable - something has to glue together the interesting parts - but they can be designed to be transparent and mutable. Unlike the assumptions with which we operate daily, these assumptions must be made explicit through code. Keeping all of this in mind, we wanted to make the simulation interactive in such a way that you can alter the fundamental parameters which govern the economy's dynamics. But, while you can tweak the system's numbers, you can't yet change the rules themselves. That's something we'd like to add down the line.

We played around with a few ideas for making the simulation interactive. At first we thought we'd have players create their own characters, specifying attributes such as altruism and frugality. Then we would run the player's simulant through a year of their life and generate a short narrative about what happened. How the simulant behaves depends on the attributes the player input, as well as data-derived environmental factors.
One of our goals was to model structural inequality and oppression, so depending on who you are, you may be, for instance, more or less likely to be hired. World building was a very important component for us. By keeping all player-created individuals as part of the population for future players, the world is gradually shaped to reflect the values of all the people who have interacted with it. (Unfortunately we didn't have time to implement this yet.)

We didn't quite go that route in the end. Because we'll demo the simulation to an audience, we wanted to design for that format - simultaneous participation. In the latest version, players propose and vote on new legislation between each simulated month and try to produce the best outcome (in terms of quality of life) for themselves and/or everyone. We found in Modeling Complex Systems for Public Policies that this approach is called "participative simulation".

### Visualizing disparity

The simulants in the city are meant to visualize structural inequality, as derived from American Community Survey and New York unemployment data (a full list of data sources is available in the project's GitHub repo). We borrowed from Flatland's hierarchy of shapes and made it so that polygon count correlates with economic status. So in our world, pyramids are unemployed, cubes are employed, and spheres are business owners. The simulant shapes are then colored according to Census race categories. It becomes pretty clear that certain colors have more spheres than others.

The city's buildings also convey some information - each rectangular slice represents a different business, and each color corresponds to a different industry (raw material firms, consumer good firms, capital equipment firms, and hospitals). The shifting colors and height of the city becomes an indicator of economic health and priority - as sickness spreads, hospitals spring up accordingly. For other indices, we fall back to basic line charts.
An important one is the quality of life chart, which gives a general indication of how content the citizens are. Quality of life is computed using food satisfaction (whether they can buy as much food - the consumer good in our economy - as they need) and health. In many scenarios, the city collapses under inflation and gradually slouches into destitution. One by one its simulants blink out of existence. It's really hard to strike the balance for a prosperous city. And the way the economy is organized - market-based, with the sole guiding principle of expanding firm profits - is not necessarily conducive to that kind of success.

### Mere speculation

Players can choose from a few baked-in scenarios along the axes of food, technology, and disease:

• food
  • a bioengineered super-nutritional food is available
  • "regular" food is available
  • a blight leaves only poorly nutritious food
• technology
  • hyper-productive equipment is available
  • "regular" technology is available
  • a massive solar flare disables all electronic equipment
• disease
  • disease has been totally eliminated
  • "regular" disease
  • an extremely infectious and severe disease lurks

So people can see how things play out in a vaguely utopian, dystopian, or "neutral" (closer to our world) setting. Sometimes these scenarios play out as you'd expect - the infectious disease scenario wipes out the population in a month or two - but not always. Hyper-productive equipment, for instance, can lead to misery, unless other parameters (such as government) are collectively adjusted by players.

### Where could this go?

These simulations are promising in domains like public policy - with movements like "smart cities", it seems inevitable that this application will become ubiquitous - but their potential is soured by the reality of how policy and technological decisions are made in practice. Technological products tend to reproduce the power dynamics that produced them.
As alluded to in Eden Medina's piece, Cybersyn could have easily been implemented as a top-down control system instead of something which the workers actively participated in and took some degree of ownership over. So it could go either way, really. To me the value of these simulations is as a means to speculate about what the world could be like, to see how much better (or worse) things might be given a few changes in our behavior or our relationships or our environment. We seem to be reaching a high-water mark of stories about dystopia (present and future), and it has been harder for me to remember that "another world is possible". Our project is not yet radical enough in the societies it can postulate, but we hope that these simulations can serve as a reminder that things could be different and provide a compelling vision of a better world to work towards. A big thank you to Amelia Winger-Bearskin and the rest of the DBRS Labs folks for their support and to Jeff Tarakajian for answering our questions about the Census!
# Analyzing unbounded limits: rational function

AP.CALC: LIM‑2 (EU) , LIM‑2.D (LO) , LIM‑2.D.1 (EK) , LIM‑2.D.2 (EK)

## Video transcript

- [Voiceover] Let f of x be equal to negative one over x minus one squared. Select the correct description of the one-sided limits of f at x equals one. And so we can see, we have a bunch of choices where we're approaching x from the right-hand side and we're approaching x from the left-hand side. And we're trying to figure out do we get unbounded on either of those, in the positive, towards positive infinity or negative infinity. And there's a couple of ways to tackle it. The most straightforward, well, let's just consider each of these separately. So we can think about the limit of f of x as x approaches one from the positive direction and limit of f of x as x approaches one, as x approaches one from the left-hand side. This is from the right-hand side. This is from the left-hand side. So I'm just gonna make a table and try out some values as we approach, as we approach one from the different sides, x, f of x, and I'll do the same thing over here. So, we are going to have our x and have our f of x and if we approach one from the right-hand side here, that would be approaching one from above, so we could try 1.1, we could try 1.01. Now f of 1.1 is negative one over 1.1 minus one squared. So see this denominator here is going to be .1 squared. So this is going to be, this is going to be 0.01, and so this is going to be negative 100. So let me just write that down. That's going to be negative 100. So if x is 1.01, well, this is going to be negative one over 1.01 minus one squared. Well, then this denominator this is going to be, this is the same thing as 0.01 squared, which is the same thing as 0.0001, 1/10000.
And so the negative one 1/10000 is going to be negative 10,000. So, let's just write that down, negative 10,000. And so this looks like, as we get closer, 'cause notice, as I'm going here I am approaching one from the positive direction, I'm getting closer and closer to one from above and I'm going unbounded towards negative infinity. So this looks like it is negative infinity. Now we can do the same thing from the left-hand side. I could do 0.9, I could do 0.99. Now 0.9 is actually also going to get me negative 100 'cause 0.9 minus one is going to be negative .1 but then when you square it the negative goes away so you get a .01 and then one divided by that is 100 but you have the negative, so this is also negative 100. And if you don't follow those calculations, I'll do it, let me do it one more time just so you see it clearly. This is going to be negative one over, so now I'm doing x is equal to 0.99, so I'm getting even closer to one, but I'm approaching from below from the left-hand side. So this is going to be 0.99 minus one squared. Well, 0.99 minus one is, is going to be negative 1/100, so this is going to be negative 0.01 squared. When you square it the negative goes away and you're left with 1/10000. So this is going to be 0.0001 and so when you evaluate this you get 10,000. So that, or sorry, you get negative 10,000. So in either case, regardless of which direction we approach from, we are approaching negative infinity. So that is this choice right over here. Now there's other ways you could have tackled this if you just look at, kind of, the structure of this expression here, the numerator is a constant, so that's clearly always going to be positive. Let's ignore this negative for the time being. That negative's out front. This numerator, this one is always going to be positive. 
Down here, we're taking at x equals one, while this becomes zero and the whole expression becomes undefined, but as we approach one, x minus one could be positive or negative as we see over here, but then when we square it, this is going to become positive as well. So the denominator is going to be positive for any x other than one. So positive divided by positive is gonna be positive but then we have a negative out front. So this thing is going to be negative for any x other than one, and it's actually not defined at x equals one. And so you could, from that, you could deduce, well, okay then, we can only go to negative infinity there's actually no way to get positive values for this function.
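The two tables built in the transcript can be reproduced with a short script (a sketch, not part of the lesson):

```python
def f(x):
    # the function from the exercise: f(x) = -1 / (x - 1)^2
    return -1 / (x - 1) ** 2

# approach x = 1 from the right (from above)
for x in [1.1, 1.01, 1.001]:
    print(x, f(x))   # roughly -100, -10000, -1000000

# and from the left (from below); squaring erases the sign of (x - 1)
for x in [0.9, 0.99, 0.999]:
    print(x, f(x))   # roughly -100, -10000, -1000000
```

Both columns blow up toward negative infinity, matching the answer chosen in the video: the denominator is positive for any x other than 1, and the leading negative sign makes the whole expression negative.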
# zbMATH — the first resource for mathematics

A note on tensor products of polar spaces over finite fields. (English) Zbl 0836.51004

The author’s abstract: “A symplectic or orthogonal space admitting a hyperbolic basis over a finite field is tensored with its Galois conjugates to obtain a symplectic or orthogonal space over a smaller field. A mapping between these spaces is defined which takes absolute points to absolute points. It is shown that caps go to caps. Combined with a result of Dye’s one obtains a simple proof of a result due to Blokhuis and Moorehouse that ovoids do not exist on hyperbolic quadrics in dimension ten over a field of characteristic two”.

##### MSC:

51A50 Polar geometry, symplectic spaces, orthogonal spaces

##### Keywords:

polar space; caps; ovoids; hyperbolic quadrics
# How do you solve the quadratic equation by completing the square: y^2 + 16y = 2? Jun 14, 2018 $y = - 8 \pm \sqrt{66}$ #### Explanation: to complete the square you use the formula: $a {x}^{2} + b x + c$ a must equal 1 $c = {\left(\frac{b}{2}\right)}^{2}$ the completed square is: ${\left(x + \frac{b}{2}\right)}^{2}$ Here we go, in your function the y is the general formula's x: ${y}^{2} + 16 y = 2$ ${y}^{2} + 16 y + \underbrace{c = 2 + c}$ we add c to both sides so we don't alter the equation now solve c: $c = {\left(\frac{b}{2}\right)}^{2} = {\left(\frac{16}{2}\right)}^{2} = 64$ ${y}^{2} + 16 y + 64 = 2 + 64$ now complete the square: ${\left(y + 8\right)}^{2} = 66$ Now solve: $\sqrt{{\left(y + 8\right)}^{2}} = \pm \sqrt{66}$ $y + 8 = \pm \sqrt{66}$ $y = - 8 \pm \sqrt{66}$
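As a quick numeric check of both roots (not part of the original answer):

```python
import math

# the two solutions of y^2 + 16y = 2 found by completing the square
roots = (-8 + math.sqrt(66), -8 - math.sqrt(66))

for y in roots:
    # plug back into the original equation; allow for float rounding
    assert abs(y**2 + 16*y - 2) < 1e-9
```

Both roots satisfy the original equation, since $(y+8)^2 = y^2 + 16y + 64 = 66$ in each case.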
# $2\times 2$ matrices over complex numbers

I am trying to solve this problem. If $A$ is a $2 \times 2$ matrix with complex entries, then $A$ is similar over $\Bbb C$ to a matrix of one of the two types $$M= \left[ {\begin{array}{cc} a & 0\\ 0 & b\\ \end{array} } \right],$$ $$M= \left[ {\begin{array}{cc} a & 0 \\ 1 & a \\ \end{array} } \right].$$ Could you please tell me how to start? I have no idea! Thanks

• Hint: use Jordan normal form. – TZakrevskiy Mar 3 '14 at 14:47

If $A$ is diagonalizable (in particular, whenever it has two distinct eigenvalues), it is similar to a matrix of the first type. Otherwise $A$ has a single repeated eigenvalue $a$ with only one $1$-dimensional eigenspace, so its Jordan form is the $2\times 2$ Jordan block for $a$; reversing the order of the Jordan basis turns that block into the lower-triangular matrix of the second type.
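Following the Jordan-form hint, here is a concrete sanity check in plain Python (no libraries) that the lower-triangular type in the problem really is similar to the standard upper Jordan block — the change of basis is just a swap of the two basis vectors:

```python
def matmul(A, B):
    # 2x2 matrix product over plain nested lists
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

a = 5
lower  = [[a, 0], [1, a]]   # the second type in the problem
jordan = [[a, 1], [0, a]]   # the standard (upper) Jordan block
S      = [[0, 1], [1, 0]]   # swap the two basis vectors; S is its own inverse

# S * lower * S^(-1) equals the Jordan block, so the two forms are similar
assert matmul(matmul(S, lower), S) == jordan
```

The same computation works symbolically for any eigenvalue $a$, which is why either convention for the non-diagonalizable type is fine.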
# How do you find the equation for the parabola with the given Vertex (4,-1), point (-2,35)?

Aug 24, 2015

Find the equation of the parabola. Ans: $y = x^2 - 8x + 15$

#### Explanation:

Equation of the parabola: $y = ax^2 + bx + c$. Find a, b, and c.

Vertex (4, -1).
x-coordinate of vertex: $-\frac{b}{2a} = 4$ --> $b = -8a$ (1)
y-coordinate of vertex: $f(4) = 16a + 4b + c = -1$ (2)
The parabola passes through the point (-2, 35), so f(-2) = 35:
$f(-2) = 4a - 2b + c = 35$ (3)
We have 3 equations to find the 3 unknowns a, b, and c.
Substituting (1) into (2): $16a - 32a + c = -1$ --> $c = 16a - 1$ (4)
Substituting (1) and (4) into (3): $4a + 16a + 16a - 1 = 35$ --> $36a = 36$ --> $a = 1$
$b = -8a = -8$
$c = 16a - 1 = 15$
Equation: $y = x^2 - 8x + 15$
Check: x of vertex --> $x = -b/(2a) = 8/2 = 4$ OK
y of vertex: $f(4) = 16 - 32 + 15 = -1$ OK
Given point: $f(-2) = 4 + 16 + 15 = 35$ OK
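A short numeric check (a sketch, independent of the algebra): with vertex $(4,-1)$ the parabola can be written in vertex form $y = a(x-4)^2 - 1$, and the point $(-2, 35)$ then pins down $a$:

```python
# Vertex form: y = a*(x - 4)**2 - 1 must pass through (-2, 35), which fixes a.
a = (35 - (-1)) / ((-2 - 4) ** 2)   # = 36/36

def check(a, b, c):
    """Does y = a x^2 + b x + c have vertex (4, -1) and pass through (-2, 35)?"""
    f = lambda x: a * x**2 + b * x + c
    return -b / (2 * a) == 4 and f(4) == -1 and f(-2) == 35

assert a == 1.0
assert check(a, -8 * a, 16 * a - 1)   # i.e. y = x^2 - 8x + 15
```

Expanding the vertex form with $a = 1$ gives $(x-4)^2 - 1 = x^2 - 8x + 15$, agreeing with the coefficient-by-coefficient solution.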
## Screenshot shortcuts on Linux (Ubuntu, CentOS, RedHat)

This post provides shortcuts for taking screenshots on Linux (including Ubuntu, CentOS, and RedHat). The command is the same for Ubuntu, CentOS, and RedHat. (Check HERE for screenshot shortcuts on Mac.)

• ### Using Gnome Screenshot

• Press PrtScn to take a fullscreen screenshot as a PNG file (normally the screenshot file is saved in the Pictures folder).
• Press Alt+PrtScn to take a screenshot of the active window. This shortcut will create a screenshot of your active window as a PNG file. The file will be saved in your Pictures folder.
• Press Shift+PrtScn to capture a customized screen area. You’ll be able to click and drag a selection box to determine what is captured in the screenshot. A PNG file with the image you captured will be saved in your Pictures folder.
• Press Shift+CTRL+PrtScn to copy the customized-area capture to the clipboard.

The Gnome Screenshot utility allows you to perform some additional screenshot functions, such as adding a delay or a tooltip. Open the Screenshot utility: you can find it in the Accessories folder of your Applications menu.

• Select your screenshot type. You can choose from any of the options outlined above.
• (Ubuntu)
• (RedHat)
• Add a delay. If your screenshot is time-dependent, you can use the Screenshot utility to add a delay before the screenshot is captured. This will allow you to make sure the right content is on the screen.
• (Ubuntu)
• (RedHat)
• Select your effects. You can choose to include your mouse pointer in the screenshot, as well as whether or not you want to add a border to the screenshot. (Ubuntu) (RedHat)

• ### Using GIMP

GIMP (GNU Image Manipulation Program) is a freely distributed program for manipulating images. We can easily optimize images and convert their type using GIMP. It gives designers the power and flexibility to transform images into truly unique creations.
GIMP is a cross-platform application, available for Linux, Windows, macOS, FreeBSD, etc.

Install GIMP: you can get it for free using your Software Center. Open the Software Center, search for “gimp”, and then install the “GIMP Image Editor”. For installing GIMP from the command line on Ubuntu, check my post HERE.

Click the “File” menu and select “Create” → “Screenshot”. The screenshot creation tool will open. This tool is very similar to the Gnome Screenshot utility. Select the type of screenshot you want to take. You can choose from three different types of screenshots: single window, full-screen, or custom selection. If you choose the single window option, you’ll be able to click the window that you want to take a screenshot of. (Ubuntu) (RedHat)

You can add a delay before the screenshot is taken so that you can arrange everything exactly how you want it. If you have single window or custom screenshots selected, you’ll choose your screenshot target after the delay timer runs out. Click “Snap” to take the screenshot. Depending on your settings, the screenshot may be taken immediately. When you’re finished, the screenshot will open in the GIMP editing window.

Save the screenshot. If you don’t want to make any edits to the screenshot, you can save it to your hard drive. Click the “File” menu and select “Export”. Give the screenshot a name and choose where you would like to save it. Click the “Export” button once you are satisfied.

## How to Add Categories and Tags for WordPress Pages

This post provides instructions on how to add Categories and Tags for WordPress Pages. By default, on a WordPress site, there are only Categories and Tags for posts, not for pages. However, once you read through this tutorial, you will be able to add Categories and Tags for both your Posts and your Pages, as well as edit them.
Click “Install Now”.

### Once activated, go to Pages » Add New and you will find post categories and tags now available for your pages too.

That’s it. No complex setup. This plugin just works out of the box. What this plugin does is modify the default category and tag taxonomies and associate them with the Page post type along with the default posts. Let’s say you have a category called “Books” that you use to sort your posts. Using this plugin, you can easily add a page and file it in the same Books category, so your page will appear in the category archive along with your regular posts.

### References

How to Add Categories and Tags for WordPress Pages (September 10th, 2017) — PDF

Categories vs Tags – SEO Best Practices for Sorting your Content (April 5th, 2018)

What’s the difference between Categories and Tags? Categories are meant for broad grouping of your posts. Think of these as general topics or the table of contents for your site. Categories are there to help identify what your blog is really about. It is to assist readers in finding the right type of content on your site. Categories are hierarchical, so you can have sub-categories. Tags are meant to describe specific details of your posts. Think of these as your site’s index words. They are the micro-data that you can use to micro-categorize your content. Tags are not hierarchical. For example, if you have a personal blog where you write about your life, your categories can be something like: Music, Food, Travel, Rambling, and Books. Now when you write a post about something that you ate, you will add it in the Food category. You can add tags like pizza, pasta, steak, etc. One of the biggest differences between tags and categories is that you MUST categorize your post. You are not required to add any tags. If you do not categorize your post, then it will be categorized under the “uncategorized” category.
## [LaTeX] subfigures with captions

This post provides LaTeX code examples for how to generate sub-figures with and without captions.

• Sub-figures with captions

\documentclass{article}
\usepackage{graphicx, caption, subcaption}
\begin{document}
\begin{figure}
\centering % center the figure content
\begin{subfigure}{0.96\textwidth}
\includegraphics[width=\textwidth]{subfig1}
\caption{subfig 1 caption text here}
\end{subfigure}
\hfill
\begin{subfigure}{0.47\textwidth}
\includegraphics[width=\textwidth]{subfig2}
\caption{subfig 2 caption text here}
\end{subfigure}
\hfill
\begin{subfigure}{0.47\textwidth}
\includegraphics[width=\textwidth]{subfig3}
\caption{subfig 3 caption text here}
\end{subfigure}
\caption{the overall fig caption text here}
\label{fig:subfig_example} % Give a unique label
\end{figure}
\end{document}

• Sub-figures without captions

\documentclass{article}
\usepackage{graphicx}
\begin{document}
% For figures without sub-captions
\begin{figure}
\centering % center the figure content
\includegraphics[width=0.96\textwidth]{subfig1}
\hfill
\includegraphics[width=.48\textwidth]{subfig2}
\hfill
\includegraphics[width=.48\textwidth]{subfig3}
% figure caption is below the figure
\caption{figure caption text here}
\label{fig:subfig_example2} % Give a unique label
\end{figure}
\end{document}

## How to annotate / add markup to a screenshot on a Mac

This post introduces how to annotate / add markup to a screenshot on Mac. Adding markup to a screenshot simply means adding things like text, underlines, circles, boxes, and arrows to the screenshot to further highlight or draw attention to certain details within the image. Markup is useful if you want to make quick notes for designers, make corrections to homework, or mark areas of interest within HTML code for website developers. These steps show you how to add markup to screenshots without first saving the screenshot and then having to use image editing software:

1.
Hold the CTRL key while holding the Command and SHIFT keys. Press 3 to capture the full screen, press 4 for a part of the screen, and press 4 followed by the spacebar for a window. This will place the screenshot in the clipboard.

2. Open the Preview application on your Mac and then use the following shortcut to view the image from the clipboard in Preview: Command + N

3. Use the tools within Preview to make changes such as cropping the image, inserting caption text, and adding coloured shapes.

4. To copy the new edited image to the clipboard before pasting it into your email or document, press Command + A to select the full image and then press Command + C.

Reference:

## Install DB Browser for SQLite on Ubuntu 16.04

This post introduces how to install DB Browser for SQLite on Ubuntu 16.04. For Ubuntu and derivatives, @deepsidhu1313 provides a PPA with the latest release here:

Step 1: Add the PPA shown above by issuing the following command in your terminal:

$ sudo add-apt-repository -y ppa:linuxgndu/sqlitebrowser

Step 2: Update the cache using:

$ sudo apt-get update

Step 3: Install the DB Browser for SQLite package by issuing the following command:

$ sudo apt-get install sqlitebrowser

Reference: http://sqlitebrowser.org/

## [LaTeX] Add appendices in an article

This post introduces how to add appendices to an article. The command \appendix is included in all basic class files, so you do not need to include any extra package to add an appendix, unless the journal that you aim at has specific appendix style requirements.
\begin{document}
\section{Your section name here}
\section{Your section name here}
% Activate the appendix in the doc
% from here on sections are numbered with capital letters
\appendix
\section{Appendix A title here}
\subsection{Appendix subsection title here}
\subsection{Appendix subsection title here}
\section{Appendix B title here}
\end{document}

## VPN setup on Ubuntu 16.04 (using Cisco AnyConnect client)

This post introduces how to set up a VPN on Ubuntu 16.04 LTS using the Cisco AnyConnect client.

Step 1: Download the Cisco AnyConnect client. Penn Staters can download it here.

Step 2: Extract the file(s) and install as root. (1) Extract the downloaded file; (2) then cd to the extracted directory where it has an installation .sh file; (3) then issue the following command to install the Cisco AnyConnect client:

$ sudo ./AnyConnectInstall.sh # note your .sh file may have a slightly different name

Step 3: Run the following command:

$ sudo apt-get install openconnect network-manager-openconnect-gnome

We need to issue this command so that a Cisco-compatible VPN shows in the list when we open the Network Manager and add a new VPN.

Step 4: Open the Network Manager.

Step 5: Add a VPN in the Network Manager.

Step 6: Choose Cisco AnyConnect Compatible VPN (openconnect) and click Create.

Step 7: Enter the following info:

• Connection name: Tech Services VPN [Note: you can name this as you wish]
• Gateway: vpn.its.psu.edu [type in your VPN gateway accordingly]

Click Save.

Step 8: Open the Cisco AnyConnect client. Type your VPN address in the "Connect to" textbox, and then enter your username and password. Then you are ready to go:)

References: VPN, CISCO AnyConnect, Linux Cisco VPN client on Ubuntu 16.04 LTS

## Two ways to merge PDF files on Mac (GUI and command line)

This post introduces how to merge PDF files on Mac from the GUI and from the Terminal on Mac OS. (For Ubuntu and Windows users, check out my post HERE for solutions.)

Method 1: GUI — using Preview, which comes with your Mac OS.
Check here for how to combine PDFs and reorder, rotate, and delete pages. If the page is not accessible, check the PDF I linked to in the references.

Method 2: From the Terminal

We will introduce using the gs command. Many people may already have the gs package installed and are already using gs. To check whether your Mac has gs installed, issue the following command in your terminal:

$ which gs

If you see something like “/usr/local/bin/gs”, your OS has gs installed. If you see something like “… command not found”, you will need to install gs first. You can use brew to install it:

$ brew install gs

If you do not have brew installed, check: Install Homebrew.

After you have gs installed, in your terminal, cd to the directory where the PDF files you want to merge are located, and then issue the following command:

$ gs -q -dNOPAUSE -dBATCH -sDEVICE=pdfwrite -sOutputFile=merged.pdf source1.pdf source2.pdf source3.pdf

Then you will see a merged.pdf appear in the same folder where your source PDFs are. Note: You may encounter these two errors in your terminal after you issue the PDF merge command mentioned above, but be assured, the merged.pdf is correct; you can double check if you are worried about that:)

References: How can I combine multiple PDFs using the command line?

## Read the first line of a file from terminal (on Ubuntu and Mac)

This post introduces how to read the first line of a file from the terminal. It works on both Linux (Ubuntu) and Mac OS. For getting the top or bottom 10 files under a directory from the terminal (on Ubuntu and Mac), check here. Note: you do not need to install anything; it is built into your Ubuntu/Mac OS.

Step 1: Open a terminal.
Step 2: cd to the directory where your file (e.g., a .txt file) is located.
Step 3: Issue the following command:

$ head -n 1 example.txt # this will read and print the first line of the file in the terminal
$ head -n 2 example.txt # this will read and print the first two lines of the file
... you got the idea...
***********************************************

Analogously, see the following commands for displaying the last line(s) of a file from the terminal:

$ tail -n 1 example.txt  # this will read and print the last line of the file in the terminal
$ tail -n 2 example.txt  # this will read and print the last two lines of the file

... you got the idea...

## Get top or bottom 10 files under a directory from terminal (on Ubuntu and Mac)

This post gets the top/bottom 10 from the sorted file names in the current directory. For reading the first line of a file from the terminal (on Ubuntu and Mac), check here.

Note: you do not need to install anything; it is built in on your Ubuntu/Mac OS.

Step 1: open a terminal.
Step 2: cd to the directory where your files (e.g., .txt files) are located.
Step 3: issue the following command accordingly:

$ ls | head -10  # the pipe symbol (i.e., |) feeds the output of the ls command into the head command

If you would like to get more information about the files, use the following instead:

$ ls -l | head -10

Similarly, if you would like to get the bottom 10 files in the current directory, issue the following command:

$ ls | tail -10

or, for detailed information about the files, use:

$ ls -l | tail -10

You guessed it: if you would like to get the top/bottom 20, just change the -10 to -20 :) Simple enough, right?

For more commonly used Linux commands, check my other posts here and here.
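The `ls | head` / `ls | tail` pipelines above can also be reproduced in Python when you need the result in a script (a sketch; the directory path is whatever you pass in):

```python
import os

def top_files(directory, n=10, bottom=False):
    """Like `ls | head -n` (or `ls | tail -n` when bottom=True):
    the first or last n entries of the sorted file names in a directory."""
    names = sorted(os.listdir(directory))
    return names[-n:] if bottom else names[:n]
```

`sorted()` mirrors the alphabetical order `ls` uses by default; pass `bottom=True` for the `tail` behaviour.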
{}
# Homology of $S^n - S^k\vee S^\ell$

Does anyone know a good trick for computing the homology groups of a sphere minus the wedge of two spheres of possibly different dimensions, $S^n \setminus (S^k\vee S^\ell)$? Any particular $k$ and $\ell$ is not so bad, but the general case has so many cases. Can one avoid this with some sneaky exact sequence?

By Alexander duality, \begin{align*} \tilde H_i(S^n - S^k \vee S^\ell; \mathbb Z) &\cong \tilde H^{n-i-1}(S^k \vee S^\ell; \mathbb Z)\end{align*} which is $\mathbb Z$ for $i=n-1-k$ or $i=n-1-\ell$ and trivial otherwise (and $\mathbb Z^2$ in the single degree $n-1-k$ when $k=\ell$, since the two contributions coincide).
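For a concrete instance of the duality computation above (assuming the wedge is embedded tamely enough for Alexander duality to apply), take $n=4$, $k=1$, $\ell=2$, using $\tilde H^j(S^1\vee S^2;\mathbb Z)\cong\mathbb Z$ for $j=1,2$:

```latex
\tilde H_i\bigl(S^4 \setminus (S^1 \vee S^2);\mathbb Z\bigr)
  \;\cong\; \tilde H^{\,4-i-1}(S^1 \vee S^2;\mathbb Z)
  \;\cong\;
  \begin{cases}
    \mathbb Z & i = 2 \quad (= n-1-k),\\
    \mathbb Z & i = 1 \quad (= n-1-\ell),\\
    0 & \text{otherwise.}
  \end{cases}
```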
{}
# 0.3 Gravity and mechanical energy (Page 6/9)

$$KE=\frac{1}{2}mv^{2}$$

Consider the $1\ \mathrm{kg}$ suitcase on the cupboard that was discussed earlier. When the suitcase falls, it will gain velocity (fall faster) until it reaches the ground with a maximum velocity. The suitcase will not have any kinetic energy when it is on top of the cupboard because it is not moving. Once it starts to fall it will gain kinetic energy, because it gains velocity. Its kinetic energy will increase until it is a maximum when the suitcase reaches the ground.

A $1\ \mathrm{kg}$ brick falls off a $4\ \mathrm{m}$ high roof. It reaches the ground with a velocity of $8{,}85\ \mathrm{m\cdot s^{-1}}$. What is the kinetic energy of the brick when it starts to fall and when it reaches the ground?

• The mass of the brick $m=1\ \mathrm{kg}$
• The velocity of the brick at the bottom $v_{\mathrm{bottom}}=8{,}85\ \mathrm{m\cdot s^{-1}}$

These are both in the correct units so we do not have to worry about unit conversions.

1. We are asked to find the kinetic energy of the brick at the top and the bottom. From the definition we know that to work out $KE$, we need to know the mass and the velocity of the object, and we are given both of these values.
2. Since the brick is not moving at the top, its kinetic energy is zero.
3. \begin{align*} KE &= \frac{1}{2}mv^{2}\\ &= \frac{1}{2}(1\ \mathrm{kg})(8{,}85\ \mathrm{m\cdot s^{-1}})^{2}\\ &= 39{,}2\ \mathrm{J} \end{align*}

## Checking units

According to the equation for kinetic energy, the unit should be $\mathrm{kg\cdot m^{2}\cdot s^{-2}}$. We can prove that this unit is equal to the joule, the unit for energy.
\begin{align*}
(\mathrm{kg})(\mathrm{m\cdot s^{-1}})^{2} &= (\mathrm{kg\cdot m\cdot s^{-2}})\cdot\mathrm{m}\\
&= \mathrm{N\cdot m} &&(\text{because Force (N)} = \text{mass (kg)} \times \text{acceleration}\ (\mathrm{m\cdot s^{-2}}))\\
&= \mathrm{J} &&(\text{Work (J)} = \text{Force (N)} \times \text{distance (m)})
\end{align*}

We can do the same to prove that the unit for potential energy is equal to the joule:

\begin{align*}
(\mathrm{kg})(\mathrm{m\cdot s^{-2}})(\mathrm{m}) &= \mathrm{N\cdot m}\\
&= \mathrm{J}
\end{align*}

A bullet, having a mass of $150\ \mathrm{g}$, is shot with a muzzle velocity of $960\ \mathrm{m\cdot s^{-1}}$. Calculate its kinetic energy.

• We are given the mass of the bullet, $m=150\ \mathrm{g}$. This is not the unit we want mass to be in. We need to convert to kg.
\begin{align*}
\text{Mass in grams} \div 1000 &= \text{Mass in kg}\\
150\ \mathrm{g} \div 1000 &= 0{,}150\ \mathrm{kg}
\end{align*}

• We are given the initial velocity with which the bullet leaves the barrel, called the muzzle velocity: $v=960\ \mathrm{m\cdot s^{-1}}$.
• We are asked to find the kinetic energy.

1. We just substitute the mass and velocity (which are known) into the equation for kinetic energy:

\begin{align*}
KE &= \frac{1}{2}mv^{2}\\
&= \frac{1}{2}(0{,}150)(960)^{2}\\
&= 69\,120\ \mathrm{J}
\end{align*}

## Kinetic energy

1. Describe the relationship between an object's kinetic energy and its:
   1. mass and
   2. velocity
2. A stone with a mass of $100\ \mathrm{g}$ is thrown up into the air. It has an initial velocity of $3\ \mathrm{m\cdot s^{-1}}$. Calculate its kinetic energy
   1. as it leaves the thrower's hand.
   2. when it reaches its turning point.
3. A car with a mass of $700\ \mathrm{kg}$ is travelling at a constant velocity of $100\ \mathrm{km\cdot hr^{-1}}$. Calculate the kinetic energy of the car.

## Mechanical energy

Mechanical energy is the sum of the gravitational potential energy and the kinetic energy. Mechanical energy, $U$, is simply the sum of gravitational potential energy ($PE$) and the kinetic energy ($KE$). Mechanical energy is defined as:

$$U = PE + KE$$
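The worked kinetic-energy examples above are easy to check numerically; a quick sketch (note the car exercise first needs the speed converted from km·hr⁻¹ to m·s⁻¹ by dividing by 3,6):

```python
def kinetic_energy(mass_kg, velocity_ms):
    """KE = (1/2) m v^2, in joules."""
    return 0.5 * mass_kg * velocity_ms ** 2

# Brick: 1 kg hitting the ground at 8,85 m/s  -> about 39,2 J
brick = kinetic_energy(1.0, 8.85)

# Bullet: 150 g = 0,150 kg at 960 m/s  -> 69 120 J
bullet = kinetic_energy(0.150, 960.0)

# Car exercise: 700 kg at 100 km/hr = (100 / 3,6) m/s  -> about 270 000 J
car = kinetic_energy(700.0, 100.0 / 3.6)
```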
{}
# Repeated pulses and contrast

In this section we investigate the effect of repeating the pulses, meaning that at the starting time of a pulse the magnetization does not equal its equilibrium value but is the result of the previous sequences.

First we deal with FID. We measure the time from the first 90° pulse applied to the equilibrium magnetization. The pulse repetition time is denoted by . For the first pulse, : (1) (2) (3) (4) and for the second pulse, : (5) (6) (7) (8) If we substitute into equations (7) and (8) we find that the dynamics will be the same for all the subsequent pulses, that is, for : (9) (10) (11) (12) From these we can see that in the case of a repeated FID sequence the signal will always be enveloped by the decay. However, this can be modulated with relaxation using an appropriate repetition time: if then the exponential term in (12) will modify the signal depending on the longitudinal relaxation, while with the condition the effect of decay can be eliminated from the signal.

The situation is a bit more exciting in the case of spin echo. We measure time from the first 90° pulse as usual, and denote the time between the 90° and the 180° pulses by $\tau$. For the first sequence: (13) (14) (15) (16) For the second sequence, i.e. for : (17) (18) (19) From these, the exact expression for the echo maximum after several pulses: (20) If we assume the half echo time $\tau$ to be much smaller than the repetition time : (21)

From equation (21) several conclusions can be drawn. 1) If we set the repetition time to be large compared to the longitudinal relaxation and the echo time to be small, meaning and then the exponential terms vanish or become approximately equal to one, and the signal will be proportional simply to the equilibrium value of the magnetization. In this way we can eliminate the effect of relaxation from the signal and observe only the spatial proton density. This method is called "proton density weighting (PD)".
2) If the repetition time remains large but the echo time is of the same order of magnitude as the relaxation, i.e. and then the effect of longitudinal relaxation will still be eliminated but the relaxation will alter the signal, as the exponential term containing it will not vanish. This is called "T2 weighting" in imaging processes. 3) If the echo time is small and the repetition time is approximately the same as the longitudinal relaxation, that is, if and then the effect of transverse relaxation is eliminated but the relaxation modifies the signal. This is called "T1 weighting".

We note here that the above ideas have some practical limitations: the repetition time can of course be as long as we want, but the reduction of is limited by technical reasons. Therefore the condition is sometimes hard to achieve, especially if the transverse relaxation is fast.
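Since the numbered equations did not survive in this copy of the notes, it may help to record the standard closed form the three weighting regimes above correspond to. In common notation (repetition time $T_R$, echo time $T_E = 2\tau$, equilibrium magnetization $M_0$), the textbook spin-echo amplitude reads

```latex
S \;\propto\; M_0\left(1 - e^{-T_R/T_1}\right)e^{-T_E/T_2},
```

so that $T_R \gg T_1$, $T_E \ll T_2$ gives $S \propto M_0$ (proton density weighting); $T_R \gg T_1$, $T_E \sim T_2$ gives $S \propto M_0\,e^{-T_E/T_2}$ (T2 weighting); and $T_E \ll T_2$, $T_R \sim T_1$ gives $S \propto M_0(1-e^{-T_R/T_1})$ (T1 weighting).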
{}
# zbMATH — the first resource for mathematics

Automated generation of search tree algorithms for hard graph modification problems. (English) Zbl 1090.68027

Summary: We present a framework for the automated generation of exact search tree algorithms for NP-hard problems. The purpose of our approach is twofold – rapid development and improved upper bounds. Many search tree algorithms for various problems in the literature are based on complicated case distinctions. Our approach may lead to a much simpler process of developing and analyzing these algorithms. Moreover, using the sheer computing power of machines it may also lead to improved upper bounds on search tree sizes (i.e., faster exact solving algorithms) in comparison with previously developed “hand-made” search trees. Among others, such an example is given with the NP-complete cluster editing problem (also known as correlation clustering on complete unweighted graphs), which asks for the minimum number of edge additions and deletions to create a graph which is a disjoint union of cliques. The hand-made search tree for cluster editing had worst-case size $$O(2.27^k)$$, which now is improved to $$O(1.92^k)$$ due to our new method. (Herein, $$k$$ denotes the number of edge modifications allowed.)

##### MSC:

68P10 Searching and sorting
68W05 Nonnumerical algorithms
68R10 Graph theory (including graph drawing) in computer science
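To illustrate the kind of search tree algorithm being automated here, a minimal hand-made branching algorithm for Cluster Editing might look as follows. This is the classic $O(3^k)$ branching on an induced path $P_3$ (a conflict triple), not the improved $O(1.92^k)$ case distinction from the paper; it is only a sketch of the technique:

```python
from itertools import combinations

def cluster_editing(n, edges, k):
    """Bounded search tree for Cluster Editing: return True iff at most k
    edge additions/deletions turn the graph into a disjoint union of cliques.
    A graph is a cluster graph iff it has no induced P3, so we branch on the
    three ways to destroy a conflict triple (u,v,w): delete uv, delete uw,
    or add vw."""
    E = {frozenset(e) for e in edges}

    def conflict():
        # find u adjacent to both v and w, with v,w non-adjacent
        for u in range(n):
            nb = [v for v in range(n) if frozenset((u, v)) in E]
            for v, w in combinations(nb, 2):
                if frozenset((v, w)) not in E:
                    return u, v, w
        return None

    def solve(budget):
        if budget < 0:
            return False
        t = conflict()
        if t is None:
            return True          # already a disjoint union of cliques
        u, v, w = t
        for edit, present in ((frozenset((u, v)), True),
                              (frozenset((u, w)), True),
                              (frozenset((v, w)), False)):
            E.remove(edit) if present else E.add(edit)   # apply the edit
            found = solve(budget - 1)
            E.add(edit) if present else E.remove(edit)   # undo it
            if found:
                return True
        return False

    return solve(k)
```

Each node of the search tree has at most three children and the budget drops by one per level, which is exactly where the naive $O(3^k)$ bound comes from; the paper's contribution is generating finer case distinctions that shrink this branching factor.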
{}
# Forum: ARM programming with GCC/GNU tools

ARM pessimizer

Author: Paul D. (pderocco) Posted on: 2014-03-19 22:26

I'm using arm-none-eabi-gcc 4.7.3 or 4.8.3 from launchpad.net, compiling for an M4 with the following options:

-mcpu=cortex-m4 -mthumb -mfloat-abi=hard -mfpu=fpv4-sp-d16 -g3 -gdwarf-2 -gstrict-dwarf -O3 -ffunction-sections -fdata-sections -std=gnu99 -fsigned-char -D__VFPV4__

Here's a small code fragment, part of a FIR filter:

i = T00 + T10 + T20 + T30 + T40 + T50 + T60 + T70 + T80 + T90 + TA0 + TB0 + TC0 + TD0 + TE0 + TF0 - ((2 * T00) & -((s >> 0) & 1)) - ((2 * T10) & -((s >> 2) & 1)) - ((2 * T20) & -((s >> 4) & 1)) - ((2 * T30) & -((s >> 6) & 1)) - ((2 * T40) & -((s >> 8) & 1)) - ((2 * T50) & -((s >> 10) & 1)) - ((2 * T60) & -((s >> 12) & 1)) - ((2 * T70) & -((s >> 14) & 1)) - ((2 * T80) & -((s >> 16) & 1)) - ((2 * T90) & -((s >> 18) & 1)) - ((2 * TA0) & -((s >> 20) & 1)) - ((2 * TB0) & -((s >> 22) & 1)) - ((2 * TC0) & -((s >> 24) & 1)) - ((2 * TD0) & -((s >> 26) & 1)) - ((2 * TE0) & -((s >> 28) & 1)) - ((2 * TF0) & -((s >> 30) & 1));

s is an unsigned int containing bits to be filtered. The T* symbols are #defined constants. The compiler cleverly compiles -((s >> #) & 1) into a signed bit-field extract instruction, which picks out the bit, right-justifies it, and propagates it through all 32 bits. For a while, it was sane enough to load the initial constant (the sum of all the T* symbols) into a register, then for each bit, compute the mask, AND each one with the corresponding constant, and subtract it from the register. Then, all of a sudden, some other change prompted it to compute each mask and store it into a local variable on the stack, and then use it later. Since there are actually eight pieces of code like this, the result is huge, memory-intensive, and slow. This code previously ran at about 3x real time; now it's on the edge of underrunning (on a Kinetis K70).
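The branch-free trick in the fragment — `-((s >> j) & 1)` producing an all-ones mask exactly when bit j of `s` is set, so each set bit flips the sign of its tap — can be sanity-checked against an obvious branchy version. A sketch in Python (the tap values are made up; only the identity matters):

```python
def fir_branchless(s, taps):
    """Mirror of the posted C idiom: start from the sum of all taps and
    subtract 2*T for every tap whose (even-indexed) select bit is set."""
    acc = sum(taps)
    for j, t in enumerate(taps):
        acc -= (2 * t) & -((s >> (2 * j)) & 1)   # mask is 0 or all ones
    return acc

def fir_branchy(s, taps):
    """The same sum with an explicit branch: set bit means negated tap."""
    return sum(-t if (s >> (2 * j)) & 1 else t
               for j, t in enumerate(taps))
```

Python's arbitrary-precision integers behave like infinite two's complement under `&`, so the identity holds for negative taps too, just as it does for 32-bit ints in the C original.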
What mechanism would prompt the compiler to do such a dumb thing? Is there any optimization option that relates to this? I've tried both compiler versions, -O1, -O2, -O3 and -Os, tried various "register" declarations, tried a bunch of the -fno-blahblah optimization options listed in the docs, but there are a ton of them. Any ideas?

-- Ciao, Paul D. DeRocco mailto:pderocco@ix.netcom.com

Author: Johann L. (gjlayde) Posted on: 2014-03-20 20:34

You could subscribe to the gcc-help @ gcc.gnu.org mailing list, see http://gcc.gnu.org/lists.html#subscribe and hope that some arm-gcc expert is around. Please keep in mind that it's much more helpful when you provide code that can be compiled, i.e. compose a small test case that passes compilation (e.g. with -c) and does not contain unknown parts (like your private deadbeef.h header or missing definition(s) of T*).

Author: Paul D. (pderocco) Posted on: 2014-03-21 21:39

Johann L. wrote:
> You could subscribe to the gcc-help @ gcc.gnu.org mailing list, see
> http://gcc.gnu.org/lists.html#subscribe
> and hope that some arm-gcc expert is around.

I have asked over there, too. Apologies to those who've seen this same question over there--I don't know how much commonality there is between the two forum memberships.

> Please keep in mind that it's much more helpful when you provide code
> that can be compiled, i.e. compose a small test case that passes
> compilation (e.g. with -c) and does not contain unknown parts (like your

The reason I didn't is that if I wrap that fragment in a function and compile it, it generates glorious, beautiful, efficient code. But when I include it in a much larger function, it turns into a bloody mess. I can fix it by factoring the function into smaller ones, but this is realtime DSP, and I can see that it should be easy to do the whole thing (not just this fragment) in the available register set.
And indeed, it originally did just that, but in the process of adding to my code, I seemed to cross some complexity threshold where the result suddenly went from wonderful to horrible. It looks like a register pressure issue, but I can't imagine what goal it is trying to achieve. It suddenly decided to start computing subexpressions, storing them into invented stack-based temporaries, and then going back and computing the final expression values based on these temporaries. This bumped my stack usage from a modest 16 or 20 bytes (for a few explicit local variables) up to somewhere between 100 and 200 depending upon what other options I fiddled with, and it interspersed dozens and dozens of completely unnecessary loads and stores. Were these "common subexpressions"? Well, some were common to multiple switch cases (the posted fragment was one switch case), but none that would ever actually get used more than once. I tried -fno-gcse, and that didn't help.

Another aspect of the problem is that it seems to want to schedule instructions as though it were compiling for some machine with a really deep pipeline, which the M4 is not. It frequently launches a bunch of loads, and then uses the results, when it could do the same work in fewer registers if it deferred the loading until it needed the data, or even one instruction before it needed the data. Since my data is in 0WS

So I'm just wondering if anyone has seen anything like this before, and knows what optimization knob to twiddle to make it go away. Does the GCC Thumb2 backend have a reputation for being good or bad? I think the x86 backend is amazingly good, and I had good luck with the old ARM7 backend years ago. This Kinetis K70 project is my first Thumb2 experience, and so far the compiler is like Dr. Jekyll and Mr. Hyde.
Author: Lyon (Guest) Posted on: 2014-03-22 07:55

Hi,

You said:
>The reason I didn't is that if I wrap that fragment in a function and
>compile it, it generates glorious, beautiful, efficient code.

a) But did you try to use such a small, efficient function inside a bigger one?
b) Maybe you already know - here I am just asking - did you check the CMSIS 3 library? It has some optimized DSP library functions, including FIR filters - I understand yours could be a special one, but…

Lyon

Author: Lyon (Guest) Posted on: 2014-03-22 08:11

Hi,

Check again this setting: -D__VFPV4__ seems to be for the NEON processor, so some mixed things could happen. CMSIS has a special parameter for that.

Lyon
{}
# Weak Lefschetz theorem for Lef line bundles

I'm studying M. A. A. de Cataldo, L. Migliorini - The Hard Lefschetz Theorem and the topology of semismall maps, Ann. sci. École Norm. Sup., Serie 4 35 (2002) 759-772. The premises are the following.

Let $$f:X\to Y$$ be a proper holomorphic (non-constant) map of irreducible, complex, projective varieties of dimension $$n$$. For every $$k\in\{-\infty,0,\dotsc,\dim X\}$$ define $$Y^k=\{y\in Y\mid\dim f^{-1}(y)=k\}$$ with the convention $$\dim\emptyset=-\infty$$. These spaces $$Y^k$$ are locally closed analytic subvarieties of $$Y$$ whose (disjoint) union is $$Y$$.

Definition 1. A proper holomorphic map $$f:X\to Y$$ is called semismall if $$\dim Y^k+2k\leq\dim Y$$ for any $$k$$. From now on, I assume that all semismall maps are proper and surjective.

Definition 2. A line bundle $$L$$ over $$X$$ is Lef (Lefschetz effettivamente funziona, "Lefschetz actually works") if a positive multiple of $$L$$ is generated by its global sections and the corresponding morphism (onto the image) is semismall; in other words, there exist $$d,N\gg0,\,f:X\to\mathbb{P}^d$$ semismall (onto the image) such that $$L^{\otimes N}\cong~f^{*}\mathcal{O}_{\mathbb{P}^d}(1)$$.

The authors stated the following

Weak Lefschetz theorem for Lef line bundles Let $$L$$ be a Lef line bundle over a smooth, complex, projective variety $$X$$. Assume that $$L$$ admits a global section $$s$$ whose reduced zero locus $$Y$$ is a smooth divisor, and denote by $$i:Y\hookrightarrow X$$ the inclusion. The restriction maps $$i^{*}:H^k(X)\to H^k(Y)$$ are isomorphisms for $$k\in\{0,\dotsc,\dim X-2\}$$ and a monomorphism for $$k=\dim X-1$$.

Proof. The proof can be obtained by a use of the Leray spectral sequence coupled with the theorem on the cohomological dimension of constructible sheaves on affine varieties. [...]
$$\Box$$

I do not know whether this proof is a standard application of some ideas/techniques; in any case, I have no idea how to make it explicit. Can someone give me advice, a hint, a "roadmap", or bibliographical sources?

It is based on a certain vanishing property of $$U= X\backslash Y$$. First you have a long exact sequence (a derived categorical version is given at the end) $$H^k(X,Y;\mathbb{Q})\rightarrow H^k(X,\mathbb{Q}) \rightarrow H^k(Y,\mathbb{Q}) \rightarrow H^{k+1}(X,Y;\mathbb{Q}).$$ Note that $$H^{k}(X,Y;\mathbb{Q})=H^{k}_c(U,\mathbb{Q}_U) \simeq H^{2n-k}(U, \mathbb{Q}_U)$$. The last isomorphism is Poincaré duality, which only requires that $$U$$ is smooth. This is true since $$X$$ is smooth. So if we have $$H^k(U, \mathbb{Q}_U) = 0$$ for $$k>n$$, then we are done.

The vanishing is based on the citation [10] in the article, namely Vanishing and non-vanishing theorems, Astérisque, tome 179-180 (1989), p. 97-112. Let me pick out the key part. Let $$f:X \rightarrow \mathbb{P}^N$$ be the corresponding morphism of $$L$$. Then $$Y$$ is the pullback of a hyperplane $$H \subset \mathbb{P}^N$$ by $$f$$, and hence $$f$$ restricted to $$U$$ is a map into the affine variety $$\mathbb{P}^N\backslash H$$.

Definition 1.1 Let $$g : Y \rightarrow Z$$ be a morphism of analytic varieties. We define $$r(g) = \mathrm{Max}\{\mathrm{dim} \Gamma - \mathrm{dim} g(\Gamma) - \mathrm{codim} \Gamma \}$$, $$\Gamma$$ a closed subvariety of $$Y$$. In the semismall case, $$r(f) = 0$$. Now we have

Lemma 1.2 Assume that there exists a proper surjective morphism $$g$$ from $$U$$ to an affine variety $$W$$. Then $$H^k(U,\mathscr{L})=0$$ for $$k>n+ r(g)$$ and $$\mathscr{L}$$ a local system.

So the vanishing follows.

A derived categorical version: $$Y \overset{i}{\hookrightarrow} X \overset{j}{\hookleftarrow} U=X\backslash Y$$ gives $$\rightarrow j_!j^*\mathbb{Q}_X \rightarrow \mathbb{Q}_X \rightarrow i_*i^* \mathbb{Q}_X \rightarrow j_!j^*\mathbb{Q}_X[1]\rightarrow$$ and apply $$R^0\Gamma = H^0c_*$$, i.e.
taking the hypercohomology, where $$c_*$$ is the pushforward in the derived category to a point. Since $$X$$ is projective, hence proper, $$c_* = c_!$$. So $$R^0\Gamma j_!j^* \mathbb{Q}_X[k] = H^0 c_* j_!j^* \mathbb{Q}_X[k] = H^0 c_!j_!\mathbb{Q}_U[k] = H^0 c_{U,!} \mathbb{Q}_U[k] = H^k_c(U,\mathbb{Q})$$ where $$c_{U,!}$$ is the pushforward with proper support to a point from $$U$$. So we have a long exact sequence $$H^k_c(U,\mathbb{Q}) \simeq H^{2n-k}(U,\mathbb{Q}) \rightarrow H^k(X,\mathbb{Q}) \rightarrow H^k(Y,\mathbb{Q}) \rightarrow H^{k+1}_c(U,\mathbb{Q})$$ where the first isomorphism is Poincaré duality. Note that Poincaré duality only requires that $$U$$ is smooth. This is true if $$X$$ is smooth or if $$Y$$ contains all the singularities. The result follows from the long exact sequence and the vanishing of $$H^k_c(U,\mathbb{Q}) \simeq H^{2n-k}(U,\mathbb{Q})$$.

• You are assuming that $U$ is affine, so this is just the standard Lefschetz theorem. – abx Nov 22, 2020 at 7:11
• @abx This is true since a projective variety excising a hyperplane is affine. Nov 22, 2020 at 7:15
• Of course, but this is not what the OP is asking for. – abx Nov 22, 2020 at 10:14
• @abx Doesn't the OP want an explicit proof? Sorry for my poor English reading ability. Nov 22, 2020 at 10:16
• @Armandoj18eos Oh sure. Thank you! Nov 23, 2020 at 14:09
{}
# Density

Density (symbol: ρ - Greek: rho) is a quality of being dense or, how thick a thing is. The higher an object's density, the stupider it is; a denser object (such as Paris Hilton) will perform more stupid acts in a day than a less dense substance (such as Stephen Hawking). The SI unit of density is the stupidity per action (σ·A⁻¹):

$\rho = \frac{\sigma}{A}$

where:

ρ is the object's density (measured in stupidity per action)
σ is the object's total stupid actions (measured in moves per day)
A is the object's total actions (measured in moves per day)

## Various types of density

See Paris Hilton, Tom Cruise, Oprah Winfrey etc. etc.

## Measurement of density

A common device for measuring density is a paparazzi. The more often a person captures the attention of paparazzi, the denser they are. Fame increases density dramatically.

## Density of substances

An object that performs one stupid action for every non-stupid action would have a density of one (1). The average person has a density of 0.0175 σ·A⁻¹. The densest naturally occurring substance on Earth is Paris Hilton, at about 22,650 σ·A⁻¹.
{}
# Article

Full entry | PDF (0.3 MB)

Keywords: Gini coefficient; finite sums; estimates

Summary: The scope of this note is a self-contained presentation of a mathematical method that enables us to give an absolute upper bound for the difference of the Gini coefficients $\left |G(\sigma _1,\dots ,\sigma _n)-G(\gamma _1,\dots ,\gamma _n)\right |,$ where $(\gamma _1,\dots ,\gamma _n)$ represents the vector of the gross wages and $(\sigma _1,\dots ,\sigma _n)$ represents the vector of the corresponding super-gross wages that is used in the Czech Republic for calculating the net wage. Since (as of June 2019) $\sigma _i=100\cdot \left \lceil 1.34\gamma _i/100\right \rceil$, the study of the above difference seems to be somewhat inaccessible for many economists. However, our estimate based on the presented technique implies that the introduction of the super-gross wage concept does not essentially affect the value of the Gini coefficient as sometimes expected.

References:
[1] Allison, P.D.: Measures of inequality. American Sociological Review, 43, 1978, 865-880, DOI 10.2307/2094626
[2] Atkinson, A.B., Bourguignon, F.: Handbook of Income Distribution. 2015, New York: Elsevier,
[3] Ceriani, L., Verme, P.: The origins of the Gini index: extracts from Variabilità e Mutabilità (1912) by Corrado Gini. The Journal of Economic Inequality, 10, 3, 2012, 1-23,
[4] Genčev, M., Musilová, D., Široký, J.: A Mathematical Model of the Gini Coefficient and Evaluation of the Redistribution Function of the Tax System in the Czech Republic. Politická ekonomie, 66, 6, 2018, 732-750, (in Czech). DOI 10.18267/j.polek.1232
[5] Gini, C.: Variabilità e Mutabilità. Contributo allo Studio delle Distribuzioni e delle Relazioni Statistiche. 1912, Bologna: C. Cuppini,
[6] Lambert, P.J.: The Distribution and Redistribution of Income. 2002, Manchester: Manchester University Press,
[7] Musgrave, R.A., Thin, T.: Income tax progression, 1929--48.
Journal of Political Economy, 56, 1948, 498-514, DOI 10.1086/256742
[8] Plata-Peréz, L., Sánchez-Peréz, J., Sánchez-Sánchez, F.: An elementary characterization of the Gini index. Mathematical Social Sciences, 74, 2015, 79-83, DOI 10.1016/j.mathsocsci.2015.01.002 | MR 3314225
[9] Sen, A.K.: On Economic Inequality. 1997, Oxford: Oxford University Press,
[10] Zenga, M., Polisicchio, M., Greselin, F.: The variance of Gini's mean difference and its estimators. Statistica, 64, 3, 2004, 455-475, MR 2279894
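The super-gross rule quoted in the summary, $\sigma_i = 100\cdot\lceil 1.34\gamma_i/100\rceil$, is easy to experiment with numerically. A sketch (the wage vector is made up, and the Gini formula used is the standard mean-absolute-difference form, not the paper's estimate):

```python
import math

def gini(x):
    """G = sum_{i,j} |x_i - x_j| / (2 n^2 mean(x))."""
    n = len(x)
    mean = sum(x) / n
    diff = sum(abs(a - b) for a in x for b in x)
    return diff / (2 * n * n * mean)

def super_gross(gross):
    """sigma_i = 100 * ceil(1.34 * gamma_i / 100), the CZ rule as of June 2019."""
    return [100 * math.ceil(1.34 * g / 100) for g in gross]

gross = [18250, 24999, 31501, 47800, 92350]   # made-up monthly gross wages
delta = abs(gini(super_gross(gross)) - gini(gross))
```

Since the Gini coefficient is scale-invariant, multiplying all wages by 1.34 changes nothing; only the rounding up to the next multiple of 100 (a perturbation of less than 100 per wage) moves the coefficient, which is why `delta` comes out tiny, in line with the note's conclusion.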
{}
# Diameter of Minimal Separators in Graphs

1 COATI - Combinatorics, Optimization and Algorithms for Telecommunications CRISAM - Inria Sophia Antipolis - Méditerranée , Laboratoire I3S - COMRED - COMmunications, Réseaux, systèmes Embarqués et Distribués

Abstract : We establish general relationships between the topological properties of graphs and their metric properties. For this purpose, we upper-bound the diameter of the {\it minimal separators} in any graph by a function of their sizes. More precisely, we prove that, in any graph $G$, the diameter of any minimal separator $S$ in $G$ is at most $\lfloor\frac{\ell(G)} {2}\rfloor \cdot (|S|-1)$ where $\ell(G)$ is the maximum length of an isometric cycle in $G$. We refine this bound in the case of graphs admitting a {\it distance preserving ordering} for which we prove that any minimal separator $S$ has diameter at most $2 (|S|-1)$. Our proofs are mainly based on the property that the minimal separators in a graph $G$ are connected in some power of $G$. Our result easily implies that the {\it treelength} $tl(G)$ of any graph $G$ is at most $\lfloor \frac{\ell(G)} {2}\rfloor$ times its {\it treewidth} $tw(G)$. In addition, we prove that, for any graph $G$ that excludes an {\it apex graph} $H$ as a minor, $tw(G) \leq c_H \cdot tl(G)$ for some constant $c_H$ only depending on $H$. We refine this constant when $G$ has bounded genus. As a consequence, we obtain a very simple $O(\ell(G))$-approximation algorithm for computing the treewidth of $n$-node $m$-edge graphs that exclude an apex graph as a minor in $O(n m)$-time.
Document type : Reports

Complete list of metadata

Cited literature [33 references]

https://hal.inria.fr/hal-01088423
Contributor : Nicolas Nisse
Submitted on : Friday, December 19, 2014 - 1:36:16 PM
Last modification on : Tuesday, September 10, 2019 - 1:12:47 AM
Long-term archiving on : Saturday, April 15, 2017 - 8:38:23 AM

### File

RR-8639_dec2014.pdf
Files produced by the author(s)

### Identifiers

• HAL Id : hal-01088423, version 2

### Citation

David Coudert, Guillaume Ducoffe, Nicolas Nisse. Diameter of Minimal Separators in Graphs. [Research Report] RR-8639, Inria Sophia Antipolis; I3S; INRIA. 2014, pp.16. ⟨hal-01088423v2⟩
{}
# Thread: covariance, a normal vector (X,Y) and the distribution of Z = X - 2Y + 1 1. ## covariance, a normal vector (X,Y) and the distribution of Z = X - 2Y + 1 hi, I am preparing for a final in my least favorite class and could use a little help with this preparation problem. Initially X and Y were normal with given mean and variance, and it asked for the distribution of the same Z = h(X,Y). I know that's a linear combination and was able to calculate the distribution no problem. Now I am a little stuck on the next part, Assume now that (X,Y) is a normal vector with covariance Cov(X,Y) = 4. What is the distribution of Z = X - 2Y + 1? Compute P( Z > 5). I can get the second part if I am able to find the distribution of Z, which is where I get stumped. Does the information in the beginning about the normal dist. of X, Y help? All I can see in my notes about this is Cov(X,Y) = E(XY) - ux*uy where ux is the mean of x, same for y thanks for any help at all! 2. Z = X - 2Y + 1 $V(aX+bY +c)=a^2V(X)+b^2V(Y)+2abCOV(X,Y)$ so $V(Z)=V(X)+4V(Y)-4COV(X,Y)$
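Since the original means/variances from the first part of the problem are not quoted in the thread, here is a numeric sketch with assumed values ($\mu_X=2$, $\mu_Y=1$, $V(X)=9$, $V(Y)=4$, and $Cov(X,Y)=4$ as stated), using the linear-combination formulas from the reply above:

```python
import math

# assumed inputs (the post does not give the original means/variances)
mu_x, mu_y = 2.0, 1.0
var_x, var_y = 9.0, 4.0
cov_xy = 4.0

# Z = X - 2Y + 1 is again normal, with
mu_z = mu_x - 2 * mu_y + 1                 # E(Z) = 2 - 2 + 1 = 1
var_z = var_x + 4 * var_y - 4 * cov_xy     # V(Z) = 9 + 16 - 16 = 9 (a = 1, b = -2)

# P(Z > 5) = 1 - Phi((5 - mu_z) / sd_z), via the complementary error function
sd_z = math.sqrt(var_z)
p = 0.5 * math.erfc((5 - mu_z) / (sd_z * math.sqrt(2.0)))
```

With these assumed numbers, `p` is $1-\Phi(4/3)\approx 0.091$; the key point is only that a linear combination of jointly normal variables is normal, so the mean and variance determine everything.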
# NCERT Solutions Class 8 Science Chapter 14 Chemical Effects of Electric Current

Often your parents ask you not to touch dysfunctional electrical appliances with wet hands, or to wear slippers while turning electrical appliances on or off. All these precautions are taken to avoid the effects of electric current. In NCERT solutions for class 8 science chapter 14, chemical effects of electric current, you will learn what electric current is, how electricity is used in chemical reactions, and what the applications of the chemical effects of current are. NCERT solutions for class 8 science chapter 14 chemical effects of electric current are given in detail after this article.

Electric current is generated due to moving charges. Materials that conduct electricity, such as most metals, are called conductors of electricity, while those that do not, such as most non-metals, are called non-conductors or insulators. In this chapter 14, chemical effects of electric current, you will learn whether liquids like water and other chemical solutions conduct electricity or not. Some of the important points that you will learn in NCERT solutions for class 8 science chapter 14 chemical effects of electric current are as follows:

• Some liquids can conduct electricity, for example tap water, salt solutions, acid solutions, and base solutions.
• When we pass electricity through a chemical solution, certain reactions take place. These reactions are collectively called the chemical effects of electric current.
• Chemical effects of electric current are used for various applications, e.g. electroplating.
• Electroplating is the deposition of a layer of a desired metal onto another material via electricity.
NCERT class 8 science chapter 14, chemical effects of electric current, presents these points with the help of interesting and fun activities that you can perform. The chapter includes various topics and subtopics; a list of these is as follows:

14.1 Do Liquids Conduct Electricity?
14.2 Chemical Effects of Electric Current
14.3 Electroplating

While preparing the chapter, solve the NCERT exercise of class 8 chapter 14. Having the NCERT solutions ready in hand will be beneficial for exam preparation. The NCERT questions and NCERT solutions for class 8 science chapter 14, chemical effects of electric current, have been provided here.

Try the following activity yourself: which of the following conduct electricity?

• Rainwater
• Distilled water
• Acid
• Base
• Saltwater

## NCERT Solutions For Class 8 Science - Chapter-wise

Chapter 1: Crop Production and Management
Chapter 2: Microorganisms: Friend and Foe
Chapter 3: Synthetic Fibres and Plastics
Chapter 4: Materials: Metals and Non-Metals
Chapter 5: Coal and Petroleum
Chapter 6: Combustion and Flame
Chapter 7: Conservation of Plants and Animals
Chapter 8: Cell: Structure and Function
Chapter 9: Reproduction in Animals
Chapter 10: Reaching the Age of Adolescence
Chapter 11: Force and Pressure
Chapter 12: Friction
Chapter 13: Sound
Chapter 15: Some Natural Phenomena
Chapter 16: Light
Chapter 17: Stars and the Solar System
Chapter 18: Pollution of Air and Water
Show that the change in entropy is

1. Mar 17, 2006, endeavor:

Show that the change in entropy for a cycle of a heat engine is $$\Delta S = \frac{Q_{cold}}{T_{cold}} - \frac{Q_{hot}}{T_{hot}}$$

2. Mar 17, 2006, Hootenanny (Staff Emeritus):

Please show some working or thoughts...

3. Mar 17, 2006, endeavor:

$$W = Q_{in} - Q_{out} = Q_{hot} - Q_{cold}$$ or that $$\Delta S = S_{f} - S_{i}$$ But I'm not sure where to go from here...

4. Mar 17, 2006, Andrew Mason:

You are to assume an isothermal heat transfer from the hot register to the gas and an isothermal flow from the gas to the cold register. The change in entropy of the hot register in the transfer from the hot register to the gas is: $$dS_h = -dQ_h/T_h$$ Similarly, the change in entropy of the cold register in extracting the heat from the gas to the cold register is: $$dS_c = +dQ_c/T_c$$ The total change in entropy of the system (hot register + cold register) is: $$dS_{total} = dS_h + dS_c = dQ_c/T_c - dQ_h/T_h$$ AM
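A quick numerical sanity check of the result (the temperatures and heats below are made up): for a reversible cycle Q_c/T_c = Q_h/T_h, so the total entropy change vanishes, and any extra heat dumped to the cold register makes it positive.

```python
# Made-up numbers for illustration: a reversible engine between 400 K and 300 K.
T_hot, T_cold = 400.0, 300.0          # kelvin
Q_hot = 100.0                          # joules absorbed from the hot reservoir
Q_cold = Q_hot * T_cold / T_hot        # reversible case: Q_c/T_c = Q_h/T_h

delta_S = Q_cold / T_cold - Q_hot / T_hot
print(delta_S)  # 0.0
```

If Q_cold is any larger than the reversible value (an irreversible engine), delta_S comes out positive, as the second law requires.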
# Temporal Segment Networks: Towards Good Practices for Deep Action Recognition

2 Aug 2016 · Limin Wang et al.

Deep convolutional networks have achieved great success for visual recognition in still images. However, for action recognition in videos, the advantage over traditional methods is not so evident. This paper aims to discover the principles to design effective ConvNet architectures for action recognition in videos and to learn these models given limited training samples. Our first contribution is temporal segment network (TSN), a novel framework for video-based action recognition, which is based on the idea of long-range temporal structure modeling. It combines a sparse temporal sampling strategy and video-level supervision to enable efficient and effective learning using the whole action video. The other contribution is our study on a series of good practices in learning ConvNets on video data with the help of temporal segment network. Our approach obtains the state-of-the-art performance on the datasets of HMDB51 ($69.4\%$) and UCF101 ($94.2\%$). We also visualize the learned ConvNet models, which qualitatively demonstrates the effectiveness of temporal segment network and the proposed good practices.

PDF Abstract

## Results from the Paper

| Task | Dataset | Model | Metric | Value | Rank |
|---|---|---|---|---|---|
| Multimodal Activity Recognition | EV-Action | TSN (RGB) | Accuracy | 73.6 | #3 |
| Action Recognition | HMDB-51 | Temporal Segment Networks | Average accuracy of 3 splits | 69.4 | #52 |
| Action Recognition | UCF101 | Temporal Segment Networks | 3-fold Accuracy | 94.2 | #49 |

## Results from Other Papers

| Task | Dataset | Model | Metric | Value | Rank |
|---|---|---|---|---|---|
| Action Classification | Kinetics-400 | TSN | Vid acc@1 | 73.9 | #103 |
| Action Classification | Kinetics-400 | TSN | Vid acc@5 | 91.1 | #78 |
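The sparse temporal sampling idea is easy to sketch: divide the video into K equal segments and draw one snippet from each (randomly during training, the segment center at test time). The following is an illustrative sketch of that strategy, not the authors' released code.

```python
import random

def sample_snippets(num_frames, num_segments=3, train=True, seed=None):
    """Sparse temporal sampling: split the video into equal segments and
    pick one frame index per segment (random in training, center at test)."""
    rng = random.Random(seed)
    seg_len = num_frames / num_segments
    indices = []
    for k in range(num_segments):
        start, end = int(k * seg_len), int((k + 1) * seg_len)
        if train:
            indices.append(rng.randrange(start, max(start + 1, end)))
        else:
            indices.append((start + end) // 2)
    return indices

print(sample_snippets(90, num_segments=3, train=False))  # [15, 45, 75]
```

Each sampled snippet is then passed through the shared ConvNet, and the segment-level predictions are aggregated into a video-level prediction.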
## Ride share blowout sale

A certain ride sharing company sent me the following offer: pay \$20 up front and your rides will be \$4.50 (\$2.50 for the carpool version) for 28 days. Is this a good deal? Let's find out!

## Power loss using Wilcoxon

I know this is an answered question, but I was curious how detrimental it is to use a Wilcoxon rank-sum test in lieu of a Student t-test. This brief simulation assumes that the assumptions for a simple two-sample t-test are met:

## R shiny app of my BV data

Last week, I took a workshop with the one and only Hadley Wickham, learning about data visualization! As part of the course we learned about shiny, and so I turned one of the figures from my recent publication on bacterial vaginosis into a shiny app hosted on shinyapps.io.

## This is way more elegant than I could ever put it

The data may not contain the answer. The combination of some data and an aching desire for an answer does not ensure that a reasonable answer can be extracted from a given body of data.

## How to write in science

passive voice, gerund form, jargon.
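For the ride-share question, the break-even arithmetic is short enough to script (the "usual fare" here is my own assumption, not a figure from the offer):

```python
import math

pass_cost = 20.00    # up-front price of the deal
deal_fare = 4.50     # per-ride price with the deal
usual_fare = 12.00   # assumption: what a typical ride normally costs me

savings_per_ride = usual_fare - deal_fare
break_even_rides = math.ceil(pass_cost / savings_per_ride)
print(break_even_rides)  # rides needed within the 28 days before the deal pays off
```

Under that assumption the pass pays for itself after just a few rides; the real answer depends entirely on what your rides usually cost and how often you ride in a month.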
# Weaponized Riddles

Ian Peleaga was on a quest. It was said arms dealers had a huge operation here in southern Atlanta, and as an FBI agent, it was his job to expose it. He had traced some dealers to an abandoned house at the edge of Douglas. The crime boss had taken a liking to him and wanted him to follow them. Ian was puzzled as to why, but obviously, clues had been left behind for him. Like this house. He and his team rummaged through all the drawers, shelves, and even the shower drain. Nothing. Except for a cryptic note. The mathematician, RomanIan Douglas loved to MIX his coffee. It was obvious it was meant for him. The strange uppercase I highlighting 'Ian'. Ian almost laughed. But he never even remotely liked math. Why math? Eww, numbers. And coffee? He was more of a black tea person. And why was MIX in caps and bolded on the note, almost making a hole through the paper? This made no sense... He was Romanian, his last name coming from one of the mountains there. And here he was in Douglas. But he guessed the Douglas in the clue didn't mean anything. He pondered this for hours. Then he took out his phone and googled a number and 2 words, one word being coffee. He got a result. Smiling to himself, he entered into his GPS **** ****** *** S, *******, GA 31533. What did he find out? What is the address? Interested?

• MIX in roman numerals is 1009. Jun 10 at 0:56
#### Archived

This topic is now archived and is closed to further replies.

# "Cannot convert parameter 1 from 'A' to 'A'" ?

This topic is 5247 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.

## Recommended Posts

Hi everyone. I just added some code, and MSVC++ 6 is flagging up a couple of errors. I made a new class 'CStream' and am trying to pass a parameter 'CStream s' into a function. Here's some code:

int Clowlev::Start() { CStream st; int id=Stream_SetID(st); ... } ... int Clowlev::Stream_SetID(CStream s) { ... }

When I try to compile the code, MSVC gives me the error:

error C2664: 'Stream_SetID' : cannot convert parameter 1 from 'class CStream' to 'class CStream' No copy constructor available for class 'CStream'

I've been able to do something similar with other classes I've made just fine, but this one flags the error. Does anyone know what would cause this? Thanks.

##### Share on other sites

Your second function is passing a CStream object by value, which requires a copy constructor. Normally the compiler will create a default copy constructor for you, but sometimes it can't, depending on the definition of the CStream class.

##### Share on other sites

Hi, thanks for the reply. I tried switching the parameter to CStream *s and calling it with the parameter &st, but it gave me the same error. Is there another way to handle it?

##### Share on other sites

Hmm, it may mean CStream isn't defined yet. Did you include the correct headers?

##### Share on other sites

Yep, all the headers are included, and it detects CStream as a class, just doesn't copy it through a parameter.

##### Share on other sites

The exact same error? You rebuilt everything? Pass by value is one instance where the copy constructor is used, but it isn't the only one. Even if you fix it by passing a pointer, the problem of not being able to copy CStream objects remains... unless you will never do that.
The real solution is to write a copy constructor. Just make sure you copy each member properly; e.g., pointers normally aren't copied since they usually should point to different things, even if the values they point to should be the same. The same goes for references. Edit: Even if you never want to copy CStream objects, it makes sense to declare a copy constructor and make it private to show the intent of your design. [edited by - SpaceRogue on August 5, 2003 10:10:46 PM]

##### Share on other sites

I was just messing around with it, and one of the members of CStream is 'fstream file'. Anyway, if I comment that member out, it works OK, so it must be something with copying over the fstream, I assume? Is there a way to get past this? Thanks.

##### Share on other sites

SpaceRogue - Thanks for the info. I've never had to write a copy constructor before; how would that be done?

##### Share on other sites

int Clowlev::Stream_SetID(CStream s)

you could try

int Clowlev::Stream_SetID(const CStream& s)

The following statement is true. The previous statement is false. Shameless promotion: FreePop: The GPL Populous II clone.

##### Share on other sites

Yes, fstream would be a problem. Copy constructors are just like regular constructors except they take a single parameter of a constant reference to the class. Example:

CStream(const CStream& source) { //Copy source members to current object }

That's it. The trick is to make sure you copy your members properly and to make sure the ones that require initialization (like references, const members, or members that don't have a default constructor) are put in the initialization list and NOT in the function body. The reason is that members are actually constructed before the body begins. This means unless you initialize these particular members in the initialization list, they won't be initialized properly.
Example: A copy constructor for a class containing a constant integer member called a:

MyClass(const MyClass& source) { a=source.a; //Wrong! }

MyClass(const MyClass& source): a(source.a) { //Right! The constant is initialized to the proper value when constructed. }

[edited by - SpaceRogue on August 5, 2003 10:37:33 PM]

##### Share on other sites

Thanks for the help everyone, I'm working around it now, so it's working.
# Central Limit Theorem

March 29, 2018

Pierre-Simon Laplace, 1749-1827

The Central Limit Theorem states that the sums (or averages) of sets of random variables will tend toward having a Normal distribution as N increases. It was the French mathematician Laplace who proved this for a number of general cases in 1810 [2]. Laplace also derived an expression for the standard deviation of the average of a set of random numbers, confirming what Gauss had assumed for his derivation of the Normal distribution (more on this in the Confidence Intervals lesson).

The simplest and most remarkable example of the Central Limit Theorem is the coin toss. If a "true" coin is flipped N times, the probability of q heads occurring is given by P(q) = C(N,q)/2^N, where C(N,q) is the binomial coefficient (Eq. 11, the Binomial Distribution). Fig. 10 plots a histogram of this in comparison to the Normal distribution for 6 coin tosses. There is good agreement between the two distributions, even with only 6 tosses (although the tails of the Normal distribution extend beyond the possible values of q). The French mathematician De Moivre had noticed this agreement in 1733, and used (2/πN)^(1/2) e^(-(2/N)(q - N/2)^2) as an approximation for the cumbersome calculation of Eq. 11 for large N. But he hadn't generalized this to other cases.

Figure 10. Probabilities of q heads in 6 coin tosses.

Another example is the averaging of a random variable, x, uniformly distributed between -0.5 and 0.5 (which might be the range of uncertainty in a measurement). By averaging 2, 3, and 4 of these random variables, the gradual convergence to the Normal distribution can be seen, as shown in Fig. 11.

Figure 11. PDFs of the Averages of Uniformly Distributed Random Variables.

When the magnitude of the PDF is plotted on a linear scale it is not clear what is happening at the tails of the distribution.
This can be corrected by plotting the magnitude on a logarithmic scale, so that the large percentage deviation between the average of 4 uniformly distributed random variables and the Normal distribution can be seen in the tails of the distribution. This is significant in cases where the extreme values of a signal are critical to understanding the behavior of a product under test; an example is fatigue analysis.
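The agreement in Fig. 10 can be reproduced directly: compare the exact fair-coin binomial probabilities for N = 6 with De Moivre's normal approximation (a short sketch; the printed table is my own, not the lesson's figure data):

```python
import math

N = 6  # coin tosses, as in Fig. 10

def binom_prob(q, N):
    """Exact probability of q heads in N fair tosses: C(N, q) / 2^N."""
    return math.comb(N, q) / 2 ** N

def de_moivre(q, N):
    """De Moivre's approximation: sqrt(2/(pi N)) * exp(-(2/N)(q - N/2)^2)."""
    return math.sqrt(2 / (math.pi * N)) * math.exp(-(2 / N) * (q - N / 2) ** 2)

for q in range(N + 1):
    print(q, round(binom_prob(q, N), 4), round(de_moivre(q, N), 4))
```

Even at N = 6 the two columns track each other closely (e.g. 0.3125 exact vs. about 0.326 approximate at q = 3), which is the agreement Fig. 10 illustrates.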
# MAA Found Math 2011 - Week 26 MAA Found Math: Benyavut Jirasutayasuntorn snapped this photo at Temple Israel in Stockton, California. He wrote, "Infinity and beyond. This temple wall depicts the Hebrew letter, 'Aleph'. Mathematically they are a sequence of numbers which are used to represent the cardinality or size of infinite sets." In set theory, the Hebrew aleph glyph is used as the symbol to denote the aleph numbers, which represent the cardinality of infinite sets. This notation was introduced by mathematician Georg Cantor (Source). Send your Found Math photos with a brief description to editor@maa.org Year: 2011 Week: 26
Seminars

3:30 pm, Seminar Hall

Some topics in Submodular Optimization - a survey

Dr. Sambuddha Roy, IBM Research, New Delhi. 26-02-14

Abstract

This survey talk will consider submodular function optimization. A submodular function $f$ is a set function such that for any sets $A$ and $B$, it holds that $f(A) + f(B) \geq f(A \cup B) + f(A \cap B)$. Such functions are ubiquitous in computer science, economics, etc.; for instance, they may be thought of as embodying the notion of "diminishing returns". One may also consider them as discrete analogues of convexity (thus it has been fairly well known that submodular functions can be minimized in polynomial time). Recent years have seen fascinating progress in this area, especially in the context of submodular function maximization subject to various constraints. This talk will discuss some exciting recent developments involving greedy algorithms in the context of submodular maximization.
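To illustrate the kind of greedy algorithm the talk refers to, here is a toy sketch for the classic coverage function f(S) = |union of the chosen sets|, which is monotone submodular (the instance is made up; the (1 - 1/e) guarantee is the standard result for greedy maximization of monotone submodular functions under a cardinality constraint):

```python
def greedy_max_coverage(sets, k):
    """Greedily maximize the coverage function f(S) = |union of S| by
    picking, k times, the set with the largest marginal gain."""
    chosen, covered = [], set()
    for _ in range(k):
        best = max(sets, key=lambda s: len(s - covered))
        if not best - covered:
            break  # no marginal gain left
        chosen.append(best)
        covered |= best
    return chosen, covered

# Toy instance: pick k = 2 sets to cover as much of the universe as possible.
sets = [{1, 2, 3}, {3, 4}, {4, 5}]
chosen, covered = greedy_max_coverage(sets, 2)
print(covered)  # {1, 2, 3, 4, 5}
```

Here the greedy rule first takes {1, 2, 3} (gain 3) and then {4, 5} (gain 2), covering the whole universe; in general greedy is only approximately optimal, but with a provable constant-factor guarantee.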
Galois Groups

An algebraic field is, by definition, a set of elements (numbers) that is closed under the ordinary arithmetical operations of addition, subtraction, multiplication, and division (except for division by zero). For example, the set of rational numbers is a field, whereas the integers are not a field, because they are not closed under the operation of division (i.e., the result of dividing one integer by another is not necessarily an integer). The real numbers also constitute a field, as do the complex numbers.

It is possible to construct other examples of fields by means of extensions. For example, given the field Q of rational numbers, we can augment the set Q with a particular number e that is not in Q, and this automatically implies that every rational function of e is also in the extended field. The set of numbers that can be formed from the rationals and e by means of ordinary arithmetical operations constitute a new field E, which is an extension of the original field Q.

An important special case of a field extension is when the number e is algebraic, i.e., the root of a polynomial with coefficients in the base field. In that case, division of one polynomial in e by any other can always be expressed as a simple polynomial with coefficients in the base field. (Hereafter "polynomial" refers to one with coefficients in the base field.) To prove this, let e be a root of f(e) = 0 for some polynomial f of degree d, and suppose we wish to evaluate the ratio g(e)/h(e) for any two polynomials g and h. Our claim is that this ratio equals a polynomial q(e) of degree no greater than d. This is true if and only if g(e) = h(e)q(e) where q is of degree no greater than d. By applying the identity f(e) = 0 we can reduce both sides of this putative equation to degree no greater than d, and then equate the coefficients of like powers to determine d rational conditions on the d coefficients of q(e), so we are assured of a solution.
A field extension of the rationals Q based on an algebraic number is therefore denoted as Q[e], signifying that the elements are all polynomials in e with rational coefficients. (It's easy to see that these representations are unique, provided e is not in Q.)

An important and interesting attribute of any given field is the set of automorphisms of the field. An automorphism is a one-to-one mapping from the set to itself such that the operations of addition and multiplication are preserved. In other words, if we let M(x) denote the image of x under the mapping M, this mapping is an automorphism if and only if it is one-to-one and satisfies the relations M(x+y) = M(x) + M(y) and M(xy) = M(x)M(y). If there are more than one automorphisms of a field, it's clear the composition of two or more automorphisms is also an automorphism, and these form a group.

For the field Q (i.e., the rational numbers) it's easy to see that the only automorphism is the identity mapping I(x) = x. However, for some fields there exist non-trivial automorphisms. As an example, consider the field Q[√2], which consists of all numbers of the form p + q√2 where p and q are rational numbers. In addition to the trivial identity mapping, this field possesses another automorphism, consisting of the non-trivial mapping J(p + q√2) = p - q√2. To prove that this is an automorphism, we note that it is one-to-one, and we have the relations

J((a + b√2) + (c + d√2)) = (a + c) - (b + d)√2 = J(a + b√2) + J(c + d√2)

and

J((a + b√2)(c + d√2)) = (ac + 2bd) - (ad + bc)√2 = J(a + b√2) J(c + d√2)

Obviously the composition of J with itself yields the identity I, so the group of automorphisms for this field is {I, J}, with the group operation I∘I = I, I∘J = J∘I = J, J∘J = I.

For a slightly more complicated example, consider the field Q[√2, √3], which consists of all numbers of the form q1 + q2√2 + q3√3 + q4√6 where q1, q2, q3, q4 are rational numbers. (For a demonstration that this is indeed a field, see Platonions.) Clearly the additive condition for an automorphism is satisfied by a mapping that negates any one or more of the terms.
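These verifications can also be carried out mechanically. Representing p + q√2 as a pair of rationals (purely as an illustration of the field above), one can check on sample elements that J preserves sums and products:

```python
from fractions import Fraction as F

def mul(x, y):
    """(a + b√2)(c + d√2) = (ac + 2bd) + (ad + bc)√2, with (a, b) pairs."""
    a, b = x
    c, d = y
    return (a * c + 2 * b * d, a * d + b * c)

def J(x):
    """The conjugation automorphism p + q√2 -> p - q√2."""
    p, q = x
    return (p, -q)

x, y = (F(1), F(2)), (F(3), F(-1, 2))
assert J(mul(x, y)) == mul(J(x), J(y))   # J preserves products
assert J((x[0] + y[0], x[1] + y[1])) == (J(x)[0] + J(y)[0], J(x)[1] + J(y)[1])
print("J preserves sums and products on these samples")
```

This is of course only a spot check on particular elements; the general proof is the coefficient computation given above.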
The product of two numbers in this field is given by

(a1 + a2√2 + a3√3 + a4√6)(b1 + b2√2 + b3√3 + b4√6) = (a1b1 + 2a2b2 + 3a3b3 + 6a4b4) + (a1b2 + a2b1 + 3a3b4 + 3a4b3)√2 + (a1b3 + a3b1 + 2a2b4 + 2a4b2)√3 + (a1b4 + a4b1 + a2b3 + a3b2)√6

If we negate the coefficients with indices 2 and 4 in the arguments, the coefficients of the product with indices 2 and 4 (i.e., the coefficients of the square roots of 2 and 6) are negated whereas the others are unchanged. Likewise if we negate the coefficients with indices 2 and 3, the coefficients of the product with indices 2 and 3 are negated. Also, negating the terms in the arguments with indices 3 and 4 negates the corresponding terms in the product. If we denote these three mappings by N, J, and U, along with the identity mapping I, the group of automorphisms of the field is {I, N, J, U}, with the compositions N∘N = J∘J = U∘U = I, N∘J = U, J∘U = N, U∘N = J (the Klein four-group).

Notice that of the four mappings, I and U leave elements of the sub-field Q[√2] unchanged. In other words, I(a) = a and U(a) = a for all a in Q[√2]. Thus the group of automorphisms of the field Q[√2, √3] that leave the sub-field Q[√2] fixed consists of just {I, U}. (Naturally this is isomorphic to the previous group {I, J}, since there is essentially just a single group of order two.) Compare this with the case of the field Q[√2], whose only automorphisms are I and J, and obviously the only one of these that leaves elements of Q[√2] unchanged is the identity mapping I. Hence the group of automorphisms of Q[√2] that leave Q[√2] fixed is simply the identity group I of order 1.

Now, for any given polynomial f(x) with coefficients in a field F (called the coefficient field), there exists an extension field E that contains all the roots of f. To construct this field we merely need to adjoin to the elements of F the roots of f(x). For example, consider the general quadratic polynomial f(x) = x^2 + ax + b, the roots of which are x = (-a ± √D)/2. The discriminant is D = a^2 - 4b, so we need only adjoin the number √D to the set Q of rationals to give the extension field E = Q[√D].
This is the minimal extension field containing the roots of f(x), and it is called the splitting field of f (because f "splits" into linear factors in this field). Of course, if the discriminant D happens to be the square of a rational number, then E = F, but if D is not a square then E is a proper extension of F.

Definition: The Galois group of a polynomial f with respect to the coefficient field F is defined as the group of automorphisms of the splitting field of f that leaves F fixed.

From this definition we can see that the Galois group of a quadratic polynomial relative to the coefficient field Q depends on whether the discriminant is the square of a rational number. If it is, then the splitting field E of the polynomial is simply Q itself, and we've already seen that the group of automorphisms of Q that leave Q fixed is nothing but the identity mapping. Thus, if the discriminant of the quadratic f(x) is a squared rational number, the Galois group of f(x) is the trivial group of order 1. On the other hand, if the discriminant is not the square of a rational number, the splitting field E is more complicated, i.e., it is of the form Q[√D] for some non-square D, and hence has the group of automorphisms {I, J} discussed above; the group of mappings that leave Q fixed is this group of order two.

For any polynomial f(x) = c_d x^d + ... + c_1 x + c_0 with coefficients cj in some field F, we refer to F as the coefficient field, and the splitting field E is some extension of F (or simply F itself, if the polynomial happens to be completely factorable in F[x]). The Galois group of the polynomial is then the group G of automorphisms of E that leave F fixed. Notice that if m is one of these automorphisms (mappings), and if q is a root of f, then m(q) is also a root of f.
This follows immediately from the fact that automorphisms (by definition) preserve sums and products, and the fact that these particular automorphisms leave elements of F unchanged, so we have

m(f(x)) = m(c_d x^d + ... + c_1 x + c_0) = c_d m(x)^d + ... + c_1 m(x) + c_0

Hence m(f(x)) = f(m(x)), and since f(q) = 0 and the zero element is fixed under m, we have f(m(q)) = m(f(q)) = m(0) = 0, and thus m(q) is a root of f. For example, the group of automorphisms of the splitting field of an irreducible quadratic that leaves the coefficients fixed consists of the identity mapping m(a + b√D) = a + b√D and the conjugation mapping m(a + b√D) = a - b√D, and of course with either of these mappings, if q is a root then so is m(q). For polynomials of higher degree, the automorphisms that generate the Galois group are just generalizations of the simple identity and conjugation mappings of quadratics.

The preceding definition of a Galois group is actually a modernized version introduced by Emil Artin in the late 1920's (based on the earlier ideas of Emmy Noether and Richard Dedekind). When Evariste Galois himself first described the Galois group, he did so in terms of permutations of the roots of a given polynomial. The definitions are equivalent, but it is sometimes helpful to return to the original formulation of the subject. According to Galois, for any given polynomial f(x) of degree d with coefficients in a field F, there exists a unique group G of permutations of d entities such that every F-valued rational function of the roots of f is invariant under each of the permutations in G, and conversely, such that every rational function of the roots that is invariant under each of the permutations in G is F-valued. The group G is called the Galois group of the polynomial f relative to the coefficient field F.

To illustrate, consider again the general quadratic polynomial f(x) = x^2 + ax + b where the coefficients a, b are elements of a field F.
Letting r1 and r2 denote the roots of f, we immediately have the F-valued rational functions -(r1 + r2) = a and r1 r2 = b, both of which are fully symmetrical in the roots. It can be shown that every symmetrical rational function of the roots can be expressed as a rational function of these two elementary symmetric functions. If these are the only F-valued rational functions of the roots, then the group of permutations is the fully symmetric group S2 of order two. However, if f(x) factors over the field F, this implies that r1 and r2 are individually rational numbers, although in general they are distinct, so the function r1 is F-valued but it is not invariant under permutations of the roots. Therefore, in this case, the group of the polynomial is not S2; it is a subgroup of S2, namely, the identity group of order 1.

More generally, we see that the coefficients of a polynomial (of any degree) are really nothing but the elementary symmetric functions of the roots, up to sign. For example, the cubic with roots r1, r2, r3 can be expressed as

(x - r1)(x - r2)(x - r3) = x^3 - (r1 + r2 + r3)x^2 + (r1 r2 + r1 r3 + r2 r3)x - r1 r2 r3

In general, for a polynomial of degree d, the coefficient of x^(d-j) is, up to sign, the sum of all products of j distinct roots. It can be shown that every symmetric rational function of the roots can be expressed as a rational function of the coefficients of f(x), so if these coefficients are in the field F, then so are all the fully symmetrical functions of the roots.

In this context, the term "rational function" signifies a function involving only the basic arithmetical operations of addition, subtraction, multiplication, division, and constants that are rational numbers. Also, the term "fully symmetrical" signifies that the function is invariant with respect to every permutation of the d roots. Thus if a rational function of the roots is invariant under every permutation of those roots, we know that the invariant value is in the field F of the coefficients.
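This correspondence between coefficients and elementary symmetric functions is easy to check mechanically, e.g. with arbitrarily chosen roots:

```python
from itertools import combinations
from math import prod

def poly_from_roots(roots):
    """Expand (x - r1)(x - r2)...(x - rd); coefficients, highest power first."""
    coeffs = [1]
    for r in roots:
        coeffs = [a - r * b for a, b in zip(coeffs + [0], [0] + coeffs)]
    return coeffs

def elem_sym(roots, j):
    """j-th elementary symmetric function: sum of products of j distinct roots."""
    return sum(prod(c) for c in combinations(roots, j))

roots = [1, 2, 3]  # arbitrary sample roots
coeffs = poly_from_roots(roots)
print(coeffs)  # [1, -6, 11, -6]  i.e. x^3 - 6x^2 + 11x - 6

# Coefficient of x^(d-j) equals (-1)^j times the j-th elementary symmetric function:
d = len(roots)
for j in range(d + 1):
    assert coeffs[j] == (-1) ** j * elem_sym(roots, j)
```

The assertion at the end is exactly the statement in the text: each coefficient is, up to sign, a sum of products of distinct roots.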
However, there may be other rational functions of the roots of a given polynomial that are elements of F, even though these functions are not fully symmetrical in the roots. To illustrate, consider the quintic f(x) = x^5 - 4x^4 - 15x^3 - 94x^2 - 61x - 22 with coefficients in the set of rational numbers. We immediately know five fully symmetrical functions of the roots of this polynomial that yield rational values, namely the coefficient functions. For example, we have

r1 + r2 + r3 + r4 + r5 = 4

This is obviously invariant under any permutation of the five roots, as are all the other coefficient functions. But the polynomial f(x) happens to factor into two polynomials with rational coefficients, i.e., we have

f(x) = (x^2 + 3x + 11)(x^3 - 7x^2 - 5x - 2)

It follows that, letting r1 and r2 denote the roots of the quadratic factor, we have

r1 + r2 = -3,  r1 r2 = 11

and for the roots of the cubic factor we have

r3 + r4 + r5 = 7,  r3 r4 + r3 r5 + r4 r5 = -5,  r3 r4 r5 = 2

These functions are F-valued, but they are clearly not fully symmetrical in the roots, because they will be made false if we transpose r1 with r3 (for example). They do, however, possess symmetry for the roots {r1, r2} and for the roots {r3, r4, r5}. Consequently the Galois group for this polynomial over the rationals is the product of the symmetric groups S2 and S3, of order 2 and 6 respectively, and so the overall group is of order 12.

In this case of the above quintic we found rational-valued functions of just certain subsets of the roots, so we could immediately infer that the original polynomial was factorable over the rationals. However, even if a polynomial is irreducible over the rationals, it is still possible to have non-symmetrical rational functions of the roots that are elements of F. As an example, consider the quartic polynomial

f(x) = x^4 - 2x^3 + 4x + 2

This is irreducible over the rationals, but there are non-symmetrical rational functions of the roots that yield rational values.
The four roots q1, q2, q3, q4 can be written in terms of two complex numbers u and v and their complex conjugates (the overbars in the original expressions indicating complex conjugation). It is easily verified that several non-symmetrical rational functions of these roots yield rational values, and that these functions all have lesser symmetries: specifically, q1 and q2 can be transposed, and q3 and q4 can be transposed. In addition, the pair q1, q2 can be transposed with the pair q3, q4. Thus, of the 24 possible permutations of these four roots, the Galois group consists only of the subgroup of those permutations that conform to these extra conditions. This limits us to the group of eight permutations shown below.

{abcd}, {bacd}, {abdc}, {badc}, {cdab}, {dcab}, {cdba}, {dcba}

This, therefore, is the Galois group of the preceding polynomial relative to the coefficient field Q.

One of the most important applications of Galois theory (indeed, the reason it was invented) is to provide the criterion for deciding when a polynomial is solvable by means of rational operations and root extractions. This is done by exploiting the correspondence between fields and their respective automorphism groups. In general there is a sequence of sub-fields between the splitting field and the coefficient field, and these correspond to a sequence of subgroups. It is possible to construct, by rational operations and root extractions, an extension from one field to the next only if the group of automorphisms of the larger field that leave the elements of the sub-field fixed is abelian, so solvability by radicals requires that we can decompose the sequence of groups from the splitting field down to the coefficient field into a sequence of abelian steps. From group theory it can be shown that this is possible for the fully symmetric permutation groups of two, three, or four entities (of orders 2, 6, and 24 respectively), but not for the fully symmetric group of five (or more) entities.
It can also be shown that for each degree d there exist polynomials whose Galois group is the fully symmetric group Sd. Consequently, there can be no general algebraic formula (involving just rational operations and root extractions) for the roots of polynomials of degree greater than four.
07 Mar

This will join all lines of a file together. Sometimes I have a list of something in a file, one line per item, and want to convert it to a comma- (or colon-, or tab-) separated line (with no trailing separator, of course) that can be used as a command-line parameter to some other tool.

perl -e '@_=<>; chomp(@_); print join(";",@_);' < data_file
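For comparison, the same join written in Python (my own sketch, not part of the original tip):

```python
# Join the lines of a file with a separator and no trailing delimiter,
# mirroring the perl one-liner above.
def join_lines(path, sep=";"):
    with open(path) as f:
        return sep.join(line.rstrip("\n") for line in f)
```

So `join_lines("data_file")` turns a file holding `a`, `b`, `c` on separate lines into `a;b;c`; pass `sep="\t"` or `sep=":"` for the other variants.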
# Finding Cluster Points/Accumulation Points #### brooklysuse ##### New member Find the set of cluster points for the set A := {(−1)n/n : n ∈ N}. Justify your answer with proof. I believe 0 is a cluster point but I can't figure out how to prove this, or how to prove any other point is not. Any quick help would be appreciated. Thanks. #### Evgeny.Makarov ##### Well-known member MHB Math Scholar Find the set of cluster points for the set A := {(−1)n/n : n ∈ N}. I believe 0 is a cluster point Hmm, I believe $$\displaystyle \frac{(-1)n}{n}=-1$$. #### Opalg ##### MHB Oldtimer Staff member Find the set of cluster points for the set A := {(−1)n/n : n ∈ N}. Justify your answer with proof. I believe 0 is a cluster point but I can't figure out how to prove this, or how to prove any other point is not. Any quick help would be appreciated. Thanks. I guess you mean $A := \{(−1)^n/n : n \in \Bbb{N}\}$. You are correct that $0$ is the only cluster point. To prove it, you will need to use the definition of a cluster point, which is ... ? (Start from there.)
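Not a substitute for the requested proof, but a numerical illustration (my own sketch, assuming the intended set is $A=\{(-1)^n/n\}$) of why 0 is the only cluster point: every neighborhood of 0 contains infinitely many elements of A, while any other point has a neighborhood containing at most finitely many.

```python
# Elements of A = {(-1)^n / n : n in N} and a neighborhood counter.
def elements(N):
    """The first N elements of A."""
    return [(-1) ** n / n for n in range(1, N + 1)]

def count_within(points, c, eps):
    """How many points lie in (c - eps, c + eps), excluding c itself?"""
    return sum(1 for p in points if p != c and abs(p - c) < eps)
```

Shrinking eps around 0 always leaves many elements of A in the punctured neighborhood, while around a nonzero element such as 1/2 a small enough neighborhood contains none.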
A dataset containing CPRs for 41 flights flying over Europe: one on the 4th, 19 on the 5th and 21 on the 6th of Feb 2017.

cprs

Format

A data frame with 53940 rows and 10 variables:

- cpm_id: CPR Message (CPM) line number
- tact_id: TACT Id; TACT (a.k.a. ETFMS) is an NM system
- timestamp_etfms: time of CPM reception by the ETFMS system
- timestamp_track: time of track
- block: block number (ETFMS internal use)
- record: record number (ETFMS internal use)
- entry_node_sac: Entry Node (EN) system area code (SAC). To avoid ambiguity in the exchange of surveillance-related data, each system using the ASTERIX data format gets assigned a unique identifier composed of two values called SAC/SIC. See https://www.eurocontrol.int/services/system-area-code-list
- entry_node_sic: Entry Node (EN) system identification code (SIC). The SIC is allocated nationally by the responsible Air Traffic Services Organisation; it identifies each individual system (surveillance sensor, surveillance data processing system, etc.) within the respective area defined by the SAC. See https://www.eurocontrol.int/services/system-area-code-list
- callsign: callsign for the flight as provided in the FPL (Flight PLan). It may be the registration marking of the aircraft or the ICAO designator for the aircraft operating agency followed by the flight identification
- ICAO location identifier of the Airport of Departure (ADEP)
- ICAO location identifier of the Airport of Destination (ADES)
- eobt: Estimate Take-Off Date and Time (EOBT), the estimated time at which the aircraft will commence movement associated with departure
- longitude: longitude (decimal degrees)
- latitude: latitude (decimal degrees)
- flight_level: flight level of the aircraft.
Flight levels are surfaces of constant atmospheric pressure related to a specific pressure datum, 1013.2 hPa (hectopascals). The expression "flight level times 100" is sometimes, not quite correctly, referred to as altitude in feet.
- track_service: determines whether the CPR is the first (Begin), an intermediate (Continuing) or the last (End) CPR sent by the corresponding system for the relevant flight (Begin_And_End is also possible)
- ssr_code: a 4-digit octal code used in the transponder to identify an aircraft (SSR Mode 3/A). See https://en.wikipedia.org/wiki/Aviation_transponder_interrogation_modes
- track_speed: calculated ground velocity (knots) based on the previous radar position
- calculated heading of the aircraft with respect to magnetic North (decimal degrees)
- climb_rate: climb (positive) or descent (negative) rate (knots)
- track_vertical_mode: a categorical value for the rate of climb; one of CLIMB, DESCENT, LEVEL_FLIGHT or UNDETERMINED
- ifps_id: a unique flight plan identifier assigned by the IFPS system
# Homework Help: Alternating series 1. Apr 2, 2005 ### tandoorichicken I know that a series such as $$\sum_{n=1}^{\infty} \frac{1}{\sqrt{n}}$$ is divergent. Is this also the case for an alternating version of the same series, i.e., $$\sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{\sqrt{n}}$$ ? 2. Apr 2, 2005 ### dextercioby What criteria do you have for alternating series? Daniel. 3. Apr 2, 2005 ### Data As dexter hinted to, look up the alternating series test. A stronger version is Abel's test, and even stronger is the Dirichlet test. 4. Apr 2, 2005 ### tandoorichicken Well, I know about the alternate series test, I am just saying that is it reliable to say that if a particular series does not converge to a certain sum, then a similar series that alternates between positive and negative also will not converge to a specific sum? 5. Apr 2, 2005 ### Data well, it depends on the series. The alternating series test says that $$\sum_{n=0}^\infty (-1)^na_n, \; \mbox{with} \ a_n \geq a_{n+1} \ \forall n \geq 0$$ converges if $\lim_{n \rightarrow \infty} a_n = 0$. So in your specific example, the alternating series converges, even though the series is not absolutely convergent, because $$\frac{1}{\sqrt{n+1}} \leq \frac{1}{\sqrt{n}} \ \forall n \geq 1$$ and $$\lim_{n \rightarrow \infty} \frac{1}{\sqrt{n}} = 0$$ If a series is convergent but not absolutely convergent, we call it conditionally convergent. Conditionally convergent series have some very unintuitive properties, one of which is described here: http://mathworld.wolfram.com/RiemannSeriesTheorem.html 6. Apr 4, 2005 ### dextercioby The sum is approximately $0.6$... Daniel. 7. Apr 4, 2005 ### saltydog Regarding the MathWorld reference: Can anyone explain to me how to calculate these sums? $$\sum_{k=1}^{\infty}\frac{1}{4k(2k-1)}=\frac{1}{2}ln(2)$$ $$\sum_{k=1}^{\infty}\frac{(-1)^{k+1}}{k}=ln(2)$$ 8. 
Apr 4, 2005 ### dextercioby The last is very easy,if u consider the Taylor series of $\ln(1+x)$ around zero...(there's another elegant construction,too). As for the first,write it like that $$S=\frac{1}{2}\sum_{k=1}^{+\infty}\frac{1}{2k(2k-1)}=\frac{1}{2}\sum_{k=1}^{+\infty} \left(\frac{1}{2k-1}-\frac{1}{2k}\right)=\frac{1}{2}\left[\left(1-\frac{1}{2}\right)+\left(\frac{1}{3}-\frac{1}{4}\right)+...\right]= \frac{1}{2}\ln 2$$ ,where i made use of the second sum... Daniel. Last edited: Apr 4, 2005 9. Apr 4, 2005 ### saltydog Oh Jesus. I think you've already told me something like that before with another one. I'll spend some time with it. Thanks.
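A quick numerical sanity check of the closed forms discussed in this thread (my own sketch; for the alternating series the midpoint of consecutive partial sums is used, since the true sum lies between them):

```python
from math import log, sqrt

N = 200_000

# sum_{k>=1} 1/(4k(2k-1)) should equal (1/2) ln 2.
s1 = sum(1.0 / (4 * k * (2 * k - 1)) for k in range(1, N + 1))

# sum_{k>=1} (-1)^(k+1)/k should equal ln 2; the midpoint of two
# consecutive partial sums converges much faster than either one.
s2 = sum((-1) ** (k + 1) / k for k in range(1, N + 1))
s2_mid = s2 + 0.5 * (-1) ** (N + 2) / (N + 1)

# The series from the start of the thread, sum (-1)^(n-1)/sqrt(n),
# which dextercioby says is approximately 0.6.
s3 = sum((-1) ** (n - 1) / sqrt(n) for n in range(1, N + 1))
s3_mid = s3 + 0.5 * (-1) ** N / sqrt(N + 1)
```

With N = 200,000 the two ln 2 identities hold to several decimal places, and the third sum comes out near 0.6049, consistent with the "approximately 0.6" remark above.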
1. Jul 16, 2007

### daniel_i_l

1. The problem statement, all variables and given/known data
You have the function f:R->R and f(x) = f(x+k) where k is in R and k>0. Prove or disprove:
1) If f is continuous at x0 then it's also continuous at x0 + k.
2) If the limit of f at infinity is 0 then f(x)=0 for all x in R.

2. Relevant equations

3. The attempt at a solution
1) Yes: If for every epsilon>0 there's a lambda>0 so that for every x where 0<|x-x0|<lambda => |f(x) - f(x0)| < epsilon, then for every x with 0<|x-(x0+k)|<lambda => 0<|(x-k)-x0|<lambda => |f(x-k) - f(x0)| < epsilon => |f(x) - f(x0+k)| < epsilon.
2) Yes: If we have an x0 in R so that f(x0)>0 then we can choose epsilon=f(x0)/2 and then |f(x0)-0| > epsilon. So for every N>0 we can find an n so that x0+nk > N and so |f(x0+nk)-0| > epsilon, and we've proved that the limit of f at infinity isn't 0 - which contradicts the information given in the question. (If there's some x0 where f(x0)<0 the proof is analogous.) So for all x in R, f(x)=0.
Are those right?
Thanks.

2. Jul 16, 2007

### HallsofIvy

Staff Emeritus
Looks good to me. Congratulations.
# Lesson goal: Combining exponents Previous: Birthday Surprise | Home | Next: Multiplying a polynomial by a monomial One of the coolest advances of doing math on the computer, is the computer's ability to do symbolic math. This means the computer can do algebra, which means properly handling the math for all of the $x$'s, $y$'s, and $z$'s (you know, "the symbols"). We've developed a function called algebra that actually knows how to "do algebra" on a mathematical expression you give it. Let's start with some basic algebra here, which involves combining exponents, when a bunch of variables are raised to some power. You've probably seen problems like this before, ones that call for "simplifying expressions" like: $x^2x^5$ or $z^3x^2z^{-1}$. Try these first, and then try some of your own homework problems! # Now you try. Try programming in some expressions, like $xy^2y^{1/3}$, $zxyyzxz$ or $\frac{1}{2}x^{3/2}y^2x$, to see how the exponents come out. This example will simplify the problem $x^2x^3$ and display the result. Dismiss. Show a friend, family member, or teacher what you've done!
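The lesson's `algebra` function belongs to the site and isn't available here; as a stand-in sketch (my own, hypothetical `combine` helper, not the site's implementation), combining exponents in a product of variable powers is just addition of the exponents per base:

```python
from collections import defaultdict
from fractions import Fraction

# Combine exponents in a product of variable powers:
# x^2 * x^5 -> x^7, since exponents of a common base add.
def combine(factors):
    """factors: iterable of (variable, exponent) pairs, e.g. [('x', 2), ('x', 5)]."""
    exps = defaultdict(Fraction)
    for var, exp in factors:
        exps[var] += Fraction(exp)
    return dict(exps)
```

Using `Fraction` keeps fractional exponents such as 3/2 exact, so a product like $\frac{1}{2}x^{3/2}y^2x$ combines to $x^{5/2}y^2$ (the numeric coefficient is handled separately).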
Lagrangian Dynamics of an inverted Spherical Cart Pendulum

Introduction

I have to come up with a PD-controller for an inverted spherical cart pendulum, so I tried to compute the dynamics of such a pendulum. The spherical cart pendulum is a hybrid between the cart pole and the spherical pendulum. The underlying cart can move in the X-Y plane.

Basics

$$l$$ is the length of the pendulum, $$m_p$$ is the mass of the pendulum, $$m_c$$ is the mass of the cart, $$\theta$$ denotes the polar and $$\phi$$ the azimuthal angle.

As generalized coordinates I use the conversion between spherical and cartesian coordinates: $$x=l\sin(\theta)\cos(\phi)$$ $$y=l\sin(\theta)\sin(\phi)$$ $$z=l\cos(\theta)$$

The generalized coordinates for the pendulum look like this: $$x_p=l\sin(\theta)\cos(\phi)+x$$ $$y_p=l\sin(\theta)\sin(\phi)+y$$ $$z_p=l\cos(\theta)$$ and $$\dot x_p = -l\sin(\phi)\sin(\theta)\dot\phi+l\cos(\phi)\cos(\theta)\dot\theta+\dot x$$ $$\dot y_p = l\sin(\phi)\cos(\theta)\dot\theta+l\sin(\theta)\cos(\phi)\dot\phi+\dot y$$ $$\dot z_p = -l\sin(\theta)\dot \theta$$

The Lagrangian is defined by: $$L=T-V$$ with $$T=T_c+T_p$$ $$T_c=\frac{1}{2}m_c(\dot x^2+\dot y^2)$$ $$T_p=\frac{1}{2}m_p(\dot x_p^2+\dot y_p^2 + \dot z_p^2)$$ and $$V=m_p\cdot g \cdot z_p$$ which results in: $$L=- g l m_{p} \cos{\left(\theta \right)} + 0.5 l^{2} m_{p} \sin^{2}{\left(\theta \right)} \dot{\theta}^{2} + 0.5 m_{c} \dot{x}^{2} + 0.5 m_{c} \dot{y}^{2} + 0.5 m_{p} \left(- l \sin{\left(\phi \right)} \sin{\left(\theta \right)} \dot{\phi} + l \cos{\left(\phi \right)} \cos{\left(\theta \right)} \dot{\theta} + \dot{x}\right)^{2} + 0.5 m_{p} \left(l \sin{\left(\phi \right)} \cos{\left(\theta \right)} \dot{\theta} + l \sin{\left(\theta \right)} \cos{\left(\phi \right)} \dot{\phi} + \dot{y}\right)^{2}$$ and after $$\frac{\partial L}{\partial q_j}-\frac{d}{dt}\frac{\partial L}{\partial \dot q_j}=0$$ I get a set of differential equations $$A=(\ddot \phi, \ddot \theta, \ddot x, \ddot y)^T$$.
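As a sanity check on the velocity expressions above, the position formula can be differentiated numerically along an arbitrary trajectory and compared with the analytic $\dot x_p$ (my own sketch; the trajectory and pendulum length below are made up for testing):

```python
from math import sin, cos

l = 1.3  # pendulum length (arbitrary test value)

# Prescribed test trajectory for the generalized coordinates.
def theta(t): return 0.5 + 0.3 * sin(t)
def phi(t):   return 0.2 * t
def x(t):     return 0.1 * t * t

def x_p(t):
    # x_p = l sin(theta) cos(phi) + x
    return l * sin(theta(t)) * cos(phi(t)) + x(t)

def x_p_dot(t):
    # Analytic velocity from the chain rule, matching the post:
    # xdot_p = -l sin(phi) sin(theta) phidot + l cos(phi) cos(theta) thetadot + xdot
    thetadot = 0.3 * cos(t)
    phidot = 0.2
    xdot = 0.2 * t
    return (-l * sin(phi(t)) * sin(theta(t)) * phidot
            + l * cos(phi(t)) * cos(theta(t)) * thetadot
            + xdot)

def central_diff(f, t, h=1e-6):
    return (f(t + h) - f(t - h)) / (2 * h)
```

The central difference of `x_p` agrees with `x_p_dot` to high precision, which confirms the chain-rule expansion used in the derivation.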
They are all looking good, except one.

Now my problem: the equilibrium point of the inverted pendulum I want to control is at $$\theta=0$$. The differential equation $$\ddot{\phi} =\frac{- 2 l \cos{\left(\theta \right)} \dot{\phi} \dot{\theta} + \sin{\left(\phi \right)} \ddot{x} - \cos{\left(\phi \right)} \ddot{y}}{l \sin{\left(\theta \right)}}$$ with $$\lim_{\theta \to 0}\ddot{\phi}=\infty$$ means that I cannot compute $$\ddot\phi$$ for very small angles $$\theta$$. I know that for $$\ddot x=0$$ and $$\ddot y=0$$, $$\phi$$ is a cyclic coordinate and its conjugate momentum is the angular momentum. How can I interpret $$\ddot \phi$$ in a way that my differential equation makes sense and does not explode into infinity?

Imagine the pendulum swinging along $$y=0$$, with $$x$$ changing. As the pendulum swings directly below the cart, $$\phi$$ instantly changes from 0 to $$\pi$$, so you would expect $$\ddot{\phi}$$ to be infinite - when $$\theta=0$$, $$\phi$$ can take any value without the mass moving. It's similar to gimbal lock (https://en.wikipedia.org/wiki/Gimbal_lock). You probably need to rephrase in a different coordinate system or reference frame - for example, set $$\theta$$ to be the angle from the z-axis in the yz-plane, and $$\phi$$ to be the angle from the z-axis in the xz-plane, and redo the analysis; that shouldn't lead to either blowing up with the pendulum vertical at $$\theta=\phi=0$$ (but will have discontinuities with it horizontal if aligned to the axes, so at $$\theta=\pi/2$$, $$\phi$$ will be similarly undefined).
• I agree. You don't want to work around the point that is the axis about which $\phi$ is measured. +1 – Dale May 18 at 16:47
• Thank you for your answer. So you think that general coordinates like $x=l\cdot \sin(\theta) \cdot \sin(\phi) + x$ $y=l\cdot\cos(\theta) + y$ and $z=l\cdot\sin(\theta)\sin(\phi)$ should work? Because I think that the equations I obtain are looking funky and I cannot solve the equations for the state variable.
May 23 at 14:12 • Maybe try defining $\theta$ from the z-axis, and $\phi$ from the line that's at $x=0$ and is $\theta$ from the axis - so that $x=l\sin\theta \cos \phi$, $y=l\sin\phi$, $z=l\cos\theta \cos\phi$ - I think I've seen that set work in the past (So $\theta$ is from the axis and increasing $\theta$ increases $x$, and $\phi$ is from a line from to (0,0,0) to ($\sin\theta$,0,$\cos\theta$), increasing $\phi$ increases $y$, rather than both angles defined relative to the axis - I think this makes the maths easier than my original suggestion) – sqek May 23 at 15:51 You are modelling this using a simple pendulum. $$\phi$$ describes the angular displacement of the point about the $$z$$-axis. This only makes sense if $$\theta\neq0$$. As $$\theta\to0$$, using conservation laws it makes sense that $$\ddot\phi$$ increases. Your problem is that you are trying to interpret the $$\theta\to0$$ case by considering a real-life physical pendulum bob that can spin about an axis passing through it (i.e. it's not point-like). Since you used a point in your analysis, spinning about an axis passing through the point does not make mathematical sense. Therefore, for the $$\theta=0$$ case, one cannot even talk about $$\phi$$ because a point-like particle does not have angular displacement about an axis passing through that same point (whereas an object that occupies space would). As such, your equation breaks down for $$\theta=0$$. For small angles $$\theta$$ angles, the physical size of the pendulum bob does matter, and, if accounted for mathematically, will give you the correct expression for $$\ddot \phi$$ (which, as you correctly stated, will be related to the angular momentum of the physical pendulum). The takeaway is that your equations are approximations, and in particular the $$\ddot\phi$$ expression becomes considerably less accurate for small angles of $$\theta$$. 
@sqek 's answer is true -- by redefining your coordinate system, you get to 'displace,' if you will, the problem of $$\phi$$ 'losing its meaning' in your coordinate system. • Thank you for your reply. But it seems that even with a different representation, I get weird equations. And sympy cannot solve the system in respect of the state variables. May 23 at 14:16
# Math Help - Theorem on quadrilaterals

Hi, all. Is this a known theorem, meaning you've seen it stated somewhere? A simple quadrilateral whose minimum distance between vertices is 1 and whose maximum distance is sqrt(2) is a unit square.

2. Originally Posted by JakeD
Hi, all. Is this a known theorem, meaning you've seen it stated somewhere? A simple quadrilateral whose minimum distance between vertices is 1 and whose maximum distance is sqrt(2) is a unit square.
Since I have never seen it, it means it is not well known. Did you arrive at that result thyself?

3. Originally Posted by ThePerfectHacker
Since I have never seen it, it means it is not well known. Did you arrive at that result thyself?
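Not a proof of the conjecture, but a quick check (my own sketch) that the unit square does realize exactly these extremes among its pairwise vertex distances:

```python
from itertools import combinations
from math import dist, isclose, sqrt

# Pairwise vertex distances of the unit square: the four sides have
# length 1 (the minimum) and the two diagonals have length sqrt(2)
# (the maximum) -- consistent with the proposed theorem.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
dists = sorted(dist(p, q) for p, q in combinations(square, 2))
```

The interesting direction of the theorem is of course the converse: that no other simple quadrilateral can keep all six pairwise distances inside [1, sqrt(2)] with both bounds attained.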
# need a help to simplify • September 11th 2008, 11:46 PM dimuk need a help to simplify $a_n(G)=\frac{t_n(G)}{(n-1)!}$ and $h_n(G)=\sum _{k=1}^n {n-1 \choose k-1} t_k(G)h_{n-k}(G).$ Then need to remove $t_k(G)$ and obtain $a_n(G)=\frac{1}{(n-1)!} h_n(G)-\sum _{k=1}^n \frac{1}{(n-k)!}h_{n-k}(G)a_k(G).$ • September 12th 2008, 01:05 AM bkarpuz Quote: Originally Posted by dimuk $a_n(G)=\frac{t_n(G)}{(n-1)!}$ and $h_n(G)=\sum _{k=1}^n {n-1 \choose k-1} t_k(G)h_{n-k}(G).$ Then need to remove $t_k(G)$ and obtain $a_n(G)=\frac{1}{(n-1)!} h_n(G)-\sum _{k=1}^n \frac{1}{(n-k)!}h_{n-k}(G)a_k(G).$ Under the condition $h_{0}(G)=1$, I got $ a_{n}(G)=\frac{1}{(n-1)!}h_{n}(G)-\sum\limits_{k=1}^{{\color{magenta}n-1}}\frac{1}{(n-k)!}h_{n-k}(G)a_{k}(G). $ Are there any assumptions we should know? • September 12th 2008, 01:30 AM dimuk need a help to simplify I think I made a mistake. Your answer is correct. Let me know how did u get it. Thanks. • September 12th 2008, 01:44 AM bkarpuz Quote: Originally Posted by dimuk I think I made a mistake. Your answer is correct. Let me know how did u get it. Thanks. It is so simple, first note that $ {n\choose k}=\frac{n!}{k!(n-k)!}, $ then isolate $t_{n}$ and get $ t_{n}=(n-1)! a_{n}. $ Substitute this into the other equation, and get $h_{n}=\sum\limits_{k=1}^{n}\frac{(n-1)!}{(k-1)!(n-k)!}(k-1)!a_{k}h_{n-k}$ ..... $=(n-1)!\sum\limits_{k=1}^{n}\frac{1}{(n-k)!}a_{k}h_{n-k}$ ..... $=(n-1)!\Bigg[\bigg(\sum\limits_{k=1}^{n-1}\frac{1}{(n-k)!}a_{k}h_{n-k}\bigg)+a_{n}h_{0}\Bigg].$ By arranging the last expression, we easily get the desired result under the assumption $h_{0}=1$. (Wink)
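bkarpuz's corrected identity (with the upper limit $n-1$ and $h_0(G)=1$) can be spot-checked with exact rational arithmetic on arbitrary test values of $t_k$ (my own sketch; the values are made up):

```python
from fractions import Fraction
from math import comb, factorial

# Given a_n = t_n/(n-1)! and h_n = sum_{k=1}^n C(n-1,k-1) t_k h_{n-k},
# with h_0 = 1, check that
#   a_n = h_n/(n-1)! - sum_{k=1}^{n-1} h_{n-k} a_k / (n-k)!
t = [None, 3, 1, 4, 1, 5]            # t_1..t_5: arbitrary test values
N = 5
a = [None] + [Fraction(t[n], factorial(n - 1)) for n in range(1, N + 1)]
h = [Fraction(1)]                    # h_0 = 1
for n in range(1, N + 1):
    h.append(sum(comb(n - 1, k - 1) * t[k] * h[n - k] for k in range(1, n + 1)))
```

Because everything is a `Fraction`, the check below is exact rather than approximate.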
Volume 414 - 41st International Conference on High Energy physics (ICHEP2022) - Heavy Ions

J/$\psi$ photoproduction and the production of dileptons via photon-photon interactions in hadronic Pb-Pb collisions measured with ALICE

R. Bailhache

Full text: pdf
Pre-published on: November 21, 2022

Abstract

Photon-photon and photonuclear reactions are induced by the strong electromagnetic field generated in ultra-relativistic heavy-ion collisions. These processes have been extensively studied in ultra-peripheral collisions with impact parameters larger than twice the nuclear radius. In recent years, both the photoproduction of $\rm J/\psi$ vector mesons and the production of dileptons via photon-photon interactions have also been observed in A--A collisions with nuclear overlap. Coherently photoproduced quarkonia can probe the nuclear gluon distributions at low Bjorken-$x$, while the continuum dilepton production could be used to further map the electromagnetic fields produced in heavy-ion collisions and to study possible induced or final-state effects in overlapping hadronic interactions. Both measurements are complementary in constraining the theory behind photon-induced reactions in A--A collisions with nuclear overlap and the potential interaction of the measured probes with the formed and fast-expanding QGP medium. The latest ALICE results on dielectron production at low masses and pair transverse momenta at midrapidity, as well as on coherent $\rm J/\psi$ photoproduction at mid and forward rapidity, are presented for non-ultraperipheral Pb--Pb collisions at $\sqrt{s_{\rm NN}}$ $=$ 5.02 TeV and compared with available models.

DOI: https://doi.org/10.22323/1.414.0453
## Differential and Integral Equations ### Existence of non-topological solutions for a nonlinear elliptic equation arising from Chern-Simons-Higgs theory in a general background metric Kazuhiro Kurata #### Abstract In this paper we show the existence of non-topological 0-vortex and 1-vortex solutions for a nonlinear elliptic equation arising from Chern-Simons-Higgs theory in a general background metric $(g_{\mu\nu})=diag(1, -k(x), -k(x))$ with decay $k(x)=O(|x|^{-l})$ for some $l >2$ at infinity. #### Article information Source Differential Integral Equations, Volume 14, Number 8 (2001), 925-935. Dates First available in Project Euclid: 21 December 2012
# Control Engineering: Why are the Equation Error and Output Error different?

When modelling a system you must first estimate the form of the transfer function (order, time delay, etc.), and then estimate the parameters (coefficients of the transfer function). A common way to estimate these parameters is to formulate a cost function describing the accuracy of the estimates, and then minimise this cost function with some numerical or analytical optimisation method. Two common forms of the cost function are the Equation Error and the Output Error. These are shown for a discrete-time system below:

Output Error: e(k)

u(k) and y(k) are the actual measured system input and output, and $\hat{y(k)}$ is the model output:
$$e(k) = y(k) - \hat{y(k)}$$
$$e(k) = y(k) - \frac{\hat{b_1}z^{-1}}{1+\hat{a_1}z^{-1}}u(k)$$
And the cost function is:
$$J = \sum_{k=1}^{N}e(k)^2$$
This can be solved numerically, e.g. with gradient descent or a genetic algorithm.

Equation Error: E(k)
$$E(k) = y(k) - \hat{y(k)}$$
$$E(k) = y(k) +\hat{a_1}y(k-1)-\hat{b_1}u(k-1)$$
$$J = \sum_{k=1}^{N}E(k)^2$$
This can be solved analytically by differentiating J with respect to each parameter and equating the result to zero, then solving the resulting set of equations.

It is said that the Output Error is better for noisy data but the Equation Error is better otherwise, since it does not suffer from the local-minima problem.

My Question: It seems that by using the Output Error or the Equation Error the cost function has different properties, but to me they seem to be fundamentally the exact same formula, just re-arranged to look different. So why does the cost function end up with different properties? I.e. one is solvable and one is more robust to noise.
This is what I mean by saying they use the exact same formula: The equation error is defined as the actual measured output minus the model output: $$E(k) = y(k) - (-\hat{a_1}y(k-1)+\hat{b_1}u(k-1))$$ Where the model output is: $$\hat{y(k)} = -\hat{a_1}y(k-1)+\hat{b_1}u(k-1)$$ But if you re-arrange this formula so it is in transfer function form: $$\hat{y(k)} = -\hat{a_1}y(k)z^{-1}+\hat{b}z^{-1}u(k)$$ $$\hat{y(k)}(1+\hat{a_1}z^{-1}) = \hat{b}z^{-1}u(k)$$ $$\hat{y(k)}=\frac{\hat{b}z^{-1}u(k)}{(1+\hat{a_1}z^{-1})}$$ Which is exactly the same as the Output Error. So the Output Error formula and the Equation Error formula are the same equation but just re-arranged to look different. So why is the cost function for one solvable and for the other it is not? And why does the result of optimisation result in a more noise-robust model when using the cost function with one and not the other? Sorry if this is posted on the wrong site. There is no stack exchange site for control engineering, so I figured this is the next best place. • You can't mix domains (z and k) in equations, please clarify. – Chu Apr 24 '15 at 16:43 • FYI, there is a more general engineering.stackexchange.com where this might fit better. EE.SE mods can help you migrate the question over there if tht's what you want (use the flag feature to get their help). – The Photon Apr 24 '15 at 18:30 • General Engineering doesn't seem to have as great a following among control systems engineers as Electrical Engineering. This is indeed the best place to post and where it will receive attention. @Chu is right. You are mixing delay operators with indices for difference equations. Rethink your math. – docscience Apr 29 '15 at 15:38
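One practical difference can be made concrete (my own sketch, not from the question): because the equation error is linear in the unknown parameters, minimising its squared sum reduces to solving the normal equations in closed form. The first-order system and numbers below are invented for illustration; with noise-free data the closed-form estimator recovers the true parameters exactly.

```python
import random

# Equation error E(k) = y(k) + a1*y(k-1) - b1*u(k-1) is LINEAR in the
# unknowns (a1, b1), so minimizing J = sum E(k)^2 yields a 2x2 linear
# system (the normal equations) -- no iterative search needed.
random.seed(0)
a1, b1 = -0.7, 2.0                            # true parameters
N = 200
u = [random.uniform(-1, 1) for _ in range(N)]
y = [0.0]
for k in range(1, N):
    y.append(-a1 * y[k - 1] + b1 * u[k - 1])  # noise-free simulation

# Normal equations: set dJ/da1 = 0 and dJ/db1 = 0 and solve by Cramer's rule.
Syy = sum(y[k - 1] ** 2 for k in range(1, N))
Suu = sum(u[k - 1] ** 2 for k in range(1, N))
Syu = sum(y[k - 1] * u[k - 1] for k in range(1, N))
c1 = sum(y[k] * y[k - 1] for k in range(1, N))
c2 = sum(y[k] * u[k - 1] for k in range(1, N))
det = Syy * Suu - Syu ** 2
a1_hat = (-c1 * Suu + Syu * c2) / det
b1_hat = (Syy * c2 - Syu * c1) / det
```

The output error, by contrast, feeds past *model* outputs back through the transfer function, making the cost nonlinear in the parameters, hence the need for iterative search and the different noise behaviour.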
anonymous one year ago http://media.education2020.com/evresources/2004-04-01-02-00_files/i0130000.jpg Find the unknown side length, x. Write your answer in simplest radical form

1. mathstudent55 |dw:1440193477028:dw|
2. mathstudent55 We are looking for x.
3. mathstudent55 What do the circled marks mean? |dw:1440193557796:dw|
4. anonymous @mathstudent55 , the circled marks are called hash marks and lines with the same number of hash marks indicate that these sides are the same length. It saves cluttering up sketches with dimensions. For example, a rectangle might look like |dw:1440193790343:dw|
5. mathstudent55 I believe I knew that. I was asking the poster to see if he/she knew it, so we could work on the problem.
6. anonymous Yeah I knew what they mean but I figured it out I just asked my E2020 teacher for help even though he didn't get it either. Thank you for you help though
7. mathstudent55 Ok. We know those marks mean the two segments are congruent. That means we can now add this to the drawing: |dw:1440194145894:dw|
8. mathstudent55 Now just look at the triangle at the right. |dw:1440194177047:dw|
9. mathstudent55 What kind of triangle is it?
10. mathstudent55 Sorry, gtg. Here is the rest of the solution. This triangle is a right triangle. We know that because the figure shows a right angle. We can use the Pythagorean theorem. $$a^2 + b^2 = c^2$$ |dw:1440194360342:dw|
11. mathstudent55 Using the formula above for our triangle, we get: $$3^2 + 6^2 = x^2$$ $$9 + 36 = x^2$$ $$45 = x^2$$ $$x^2 = 45$$ $$x = \sqrt {45}$$ Now we need simplest radical form. We factor 45 into 2 factors, one being the largest perfect square factor we can find. $$x = \sqrt {9 \times 5}$$ $$x = 3 \sqrt 5$$
12. mathstudent55 Sorry, but gtg. If you have questions, just ask them.
## anonymous 3 years ago Which is the simplified form of 12 times f to the sixth power times g all over 16 times g squared.? A:f to the sixth power over 4 times g squared. B:3 times f to the sixth power all over 4 times g squared. C:3 times f to the sixth power all over 4 times g. D:f to the sixth power over 4 times g. 1. anonymous $\frac{12f^6g}{16g^2}$ 2. anonymous $\frac{12}{16}=\frac{3}{4}$ and $\frac{f^6g}{g^2}=\frac{f^6}{g}$ so your answer should be $\frac{3f^6}{4g}$ 3. anonymous as usual, it is C it is almost always C
#### Vol. 271, No. 1, 2014

ISSN: 1945-5844 (e-only)
ISSN: 0030-8730 (print)

Perturbations of a critical fractional equation

### Eduardo Colorado, Arturo de Pablo and Urko Sánchez

Vol. 271 (2014), No. 1, 65–85

##### Abstract

We deal with the following fractional critical problem: where $\Omega \subset {ℝ}^{N}$ is a regular bounded domain, $0<\alpha <2$ and $N>\alpha$. Under appropriate conditions on the size of $f$, we prove existence and multiplicity of solutions.

##### Keywords
semilinear elliptic equations, fractional Laplacian, critical problem

##### Mathematical Subject Classification 2010
Primary: 35A15, 49J35, 35R11
# Lesson 16: Applying Volume and Surface Area

### Lesson Narrative

In this second lesson on applying surface area and volume to solve problems, students solve more complex real-world problems that require them to choose which of the two quantities is appropriate for solving the problem, or whether both are appropriate for different aspects of the problem. They use previous work on ratios and proportional relationships, thus consolidating their knowledge and skill in that area. When students bring together knowledge of different areas of mathematics to solve a complex problem, they are engaging in MP4.

### Learning Goals

Teacher Facing
• Apply reasoning about surface area and volume of prisms as well as proportional relationships to calculate how much the material to build something will cost, and explain (orally and in writing) the solution method.

### Student Facing
Let's explore things that are proportional to volume or surface area.

### Student Facing
• I can solve problems involving the volume and surface area of children's play structures.

### Glossary Entries
• base (of a prism or pyramid): The word base can also refer to a face of a polyhedron. A prism has two identical bases that are parallel. A pyramid has one base. A prism or pyramid is named for the shape of its base.
• cross section: A cross section is the new face you see when you slice through a three-dimensional figure. For example, if you slice a rectangular pyramid parallel to the base, you get a smaller rectangle as the cross section.
• prism: A prism is a type of polyhedron that has two bases that are identical copies of each other. The bases are connected by rectangles or parallelograms. Here are some drawings of prisms.
• pyramid: A pyramid is a type of polyhedron that has one base. All the other faces are triangles, and they all meet at a single vertex. Here are some drawings of pyramids.
• surface area The surface area of a polyhedron is the number of square units that covers all the faces of the polyhedron, without any gaps or overlaps. For example, if the faces of a cube each have an area of 9 cm2, then the surface area of the cube is $$6 \boldcdot 9$$, or 54 cm2. • volume Volume is the number of cubic units that fill a three-dimensional region, without any gaps or overlaps. For example, the volume of this rectangular prism is 60 units3, because it is composed of 3 layers that are each 20 units3.
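The two glossary quantities can be illustrated for a rectangular prism (a sketch I added; 4 x 5 x 3 is one possible set of dimensions consistent with the "3 layers of 20 units" example, and a cube of side 3 has the 9 cm^2 faces mentioned above):

```python
# Volume and surface area of a rectangular prism with edge lengths l, w, h.
def prism_volume(l, w, h):
    # cubic units filling the region: l*w per layer, h layers
    return l * w * h

def prism_surface_area(l, w, h):
    # two faces each of areas l*w, l*h, w*h
    return 2 * (l * w + l * h + w * h)
```

Note the contrast the lesson is after: volume counts cubic units inside, surface area counts square units covering the faces, and the two scale differently as a shape grows.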
# Exercise 2 | Quantum Theory and Atomic Structure How many electrons can be specified by the following combinations of quantum numbers? - n = 2 - n = 3, l = 2 - n = 3, l = 1, ml = 0 - n = 4, l = 2, ml = -1, ms = ½ - n = 2 ⇒ 2 subshells: 2s and 2p ⇒ 4 orbitals ⇒ 4 x 2 e- = 8 e- - n = 3, l = 2 ⇒ 1 subshell: 3d ⇒ 5 orbitals ⇒ 5 x 2 e- = 10 e- - n = 3, l = 1, ml = 0 ⇒ 1 subshell: 3p + only 1 orbital (ml = 0) ⇒ 1 x 2 e- = 2 e- - n = 4, l = 2, ml = -1, ms = ½ ⇒ 1 subshell: 4d, 1 orbital (ml = -1), 1 e- (ms = ½) ⇒ 1 e-
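The counting rules used above can be mechanized by enumerating the allowed (l, ml, ms) combinations for a given shell (my own sketch, not part of the exercise):

```python
# Count electron states compatible with partially specified quantum
# numbers: l ranges over 0..n-1, ml over -l..l, ms over {+1/2, -1/2}.
def count_states(n, l=None, ml=None, ms=None):
    total = 0
    for L in range(n):
        if l is not None and L != l:
            continue
        for mL in range(-L, L + 1):
            if ml is not None and mL != ml:
                continue
            for mS in (0.5, -0.5):
                if ms is not None and mS != ms:
                    continue
                total += 1
    return total
```

Each `None` leaves a quantum number unspecified, exactly mirroring the four cases of the exercise.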
# Higher order Derivative Ques

• Jan 11th 2010, 02:19 PM

Higher order Derivative Ques
Show that the equation $f''(x) + 4 f'(x) + 4 f(x) = 0$ is satisfied if $f(x) = (3x-5)e^{-2x}$
What is this question asking me to do? I know I will need to find the first and second derivative, but where does the + 4 part fit in?

• Jan 11th 2010, 02:34 PM

Henryt999
Quote:
Show that the equation $f''(x) + 4 f'(x) + 4 f(x) = 0$ is satisfied if $f(x) = (3x-5)e^{-2x}$
What is this question asking me to do? I know I will need to find the first and second derivative, but where does the + 4 part fit in?
Find y'' and y'. Then y'' + 4y' + 4y = 0. So one y'', four of those y' and four y:s must give you 0.
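Not required by the problem, but the claim can be checked numerically by estimating the derivatives with central differences (a sketch I added; the sample points are arbitrary):

```python
from math import exp

# Check f''(x) + 4 f'(x) + 4 f(x) = 0 for f(x) = (3x - 5) e^(-2x),
# approximating the derivatives by central finite differences.
def f(x):
    return (3 * x - 5) * exp(-2 * x)

def residual(x, h=1e-4):
    f1 = (f(x + h) - f(x - h)) / (2 * h)             # ~ f'(x)
    f2 = (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)  # ~ f''(x)
    return f2 + 4 * f1 + 4 * f(x)
```

The residual stays at roundoff-plus-truncation level everywhere, consistent with f being a solution of the ODE (its characteristic root -2 is repeated, which is why the polynomial factor 3x - 5 is allowed).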
# Not Paying liteinvestgroup - liteinvestgroup.com

##### Active Member

History of the company: Lite Invest Group is a company specializing in trading on the currency and stock markets. Our team has been working in the market since January 2009. The idea of creating the project belongs to Alex Konovalov, who at the end of 2008, at a meeting with his old friend Nikolay Rogov, offered him a collaboration to create an investment fund. Lite Invest Group was planned as a project for trust management of assets. By joint efforts, already at the beginning of 2009, a team of 7 independent traders had been assembled.

Start online Jan-26-2010

1. Minimal deposit from an INVESTOR - 12$, maximal - 20 000$.
2. Minimal income which an INVESTOR can get in one business day - 0.1%, maximal - 2.5%.
3. After registration the INVESTOR gets access to his account. If there is a positive balance on a deposit, the INVESTOR gets daily percents credited to the account.
4. Income earned on the INVESTOR's funds is credited to the account every business day, from Monday to Friday, except Saturday and Sunday.
5. There is automatic reinvestment.
6. The size of the daily interest rate is fixed, but don't forget that financial markets are unforeseeable; therefore LIG reserves the right to change the interest rate, so as not to create debt obligations in the case of force-majeure situations.
7. All INVESTORS on the same investment plan get an identical daily interest rate, which does not depend on the size of the INVESTOR's deposit.
8. An INVESTOR will not take a loss even if a trading day was not profitable for LIG.
9. An INVESTOR can invite other people to make a deposit in LIG. For this purpose the INVESTOR uses the special referral link which he can find by logging into his account.
10. LIG pays a commission to the INVESTOR of 5% of every deposit made by a person (referral) invited by him, regardless of how much money is on the INVESTOR's account.
When the referred investor makes his deposit, the commission is credited to the investor's account, but withdrawal of the commission becomes available on the 15th day of the following month. 11. An INVESTOR can withdraw all funds, including income, or part of the funds from the account at any moment, but not earlier than the term foreseen by the investment plan after the deposit is opened. 12. LIG will transfer the client's funds within 3 business days of the request (usually within 24 hours). If fraud is suspected, LIG can demand documents confirming the client's identity. In such cases the transfer will be executed within 3 days from the moment we receive the document, and only on condition that the document is authentic. Referral system: Right after a referral invests funds, and after verification by the administrator, 5% of the deposit sum will be credited to your account, but withdrawal of the commission becomes available on the 15th of the following month. Safety and protection: The project has attracted a high-quality specialist in network security, Bill Borden. The project runs over the protected SSL protocol on a dedicated server, with protection from DDoS attacks and a system to protect accounts from hackers. Bill constantly monitors the technical state of the project and creates new levels of protection as the project develops.
Join Here: Last edited: ##### Active Member My deposit here: Date : 2010-27-01 12:40:15 From/To Account : U3946976 (Lite Invest Group) Amount : -50.0000 Currency : LRUSD Batch : 27833597 Memo : Payment for Lite Invest Group ##### Active Member Date : 2010-29-01 00:26:11 From/To Account : U3946976 (Lite Invest Group) Amount : 1.0000 Currency : LRUSD Batch : 28001763 Memo : #### x66 ##### Fun Poster Date : 2010-31-01 18:40:16 From/To Account : U3946976 (Lite Invest Group) Amount : 2.0000 Currency : LRUSD Batch : 28227405 ##### Active Member Instant withdrawal of some money: Date : 2010-03-02 23:00:08 From/To Account : U3946976 (Lite Invest Group) Amount : 1.8800 Currency : LRUSD Batch : 28549484 Memo : Very convenient project: you can withdraw exactly as much as you need, it pays instantly, and you can always add more money ##### Active Member Date : 2010-04-02 08:52:18 From/To Account : U3946976 (Lite Invest Group) Amount : 8.0000 Currency : LRUSD Batch : 28581604 Memo : #### x66 ##### Fun Poster Paying Date : 2010-04-02 22:26:57 From/To Account : U3946976 (Lite Invest Group) Amount : 5.0000 Currency : LRUSD Batch : 28659187 Memo : ##### Active Member If I need some money, I withdraw it instantly here! Date : 2010-05-02 21:27:14 From/To Account : U3946976 (Lite Invest Group) Amount : 8.0000 Currency : LRUSD Batch : 28770792 Memo : #### x66 ##### Fun Poster Date: 2010-06-02 14:00:38 Batch: 28822264 From Account: U3946976 Amount: \$1.00 #### x66 ##### Fun Poster Date : 2010-08-02 15:07:22 From/To Account : U3946976 (Lite Invest Group) Amount : 1.0000 Currency : LRUSD Batch : 28981644 Memo :
{}
# Understanding the fragility index ## Summary This post reviews the fragility index, a statistical technique proposed by Walsh et al. (2014) to provide an intuitive measure of the robustness of study findings. I show that the distribution of the fragility index can be approximated by a truncated Gaussian whose expectation is directly related to the power of the test (see equation \eqref{eq:fi_power}). This evidence will hopefully clarify the debate around what statistical quantity the fragility index actually represents. Even though the fragility index can provide a conservative estimate of the power of a study, on average, it is a very noisy indicator. In the final sections of this post I provide arguments both in favour of and against the fragility index. To replicate all figures in this post, see this script. ## (1) Background Most published studies have exaggerated effect sizes. There are two main reasons for this. First, scientific research tends to be under-powered. This means that studies do not have enough samples to be able to detect small effects. Second, academic journals incentivize researchers to publish "novel" findings that are statistically significant. In other words, researchers are encouraged to try different variations of statistical tests until something "interesting" is found.[1] As a reminder, the power of a statistical test defines the probability that an effect will be detected when it exists. Power increases with the sample size for consistent tests because more samples lead to tighter inference around some effect size. Most researchers are aware of the idea of statistical power, but a much smaller share understands that it is directly related to effect size bias.
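This link between low power and exaggerated effects can be seen with a short simulation (my own illustrative sketch, not from the papers cited; the numbers are arbitrary): draw the sampling distribution of an effect estimate directly, and then look only at the draws that clear the significance threshold.

```python
import numpy as np
from scipy import stats

np.random.seed(1)
n, true_d, nsim = 20, 0.3, 100000    # 20 obs/group, true effect of 0.3 SDs
se = np.sqrt(2 / n)                  # std. error of a difference in means
t_crit = stats.norm.ppf(0.975) * se  # two-sided 5% rejection threshold

# Sampling distribution of the estimated effect (normal approximation)
dhat = np.random.normal(true_d, se, nsim)
sig_pos = dhat[dhat > t_crit]        # the "significant" positive findings

power = np.mean(np.abs(dhat) > t_crit)
print(f"power ≈ {power:.2f}")
print(f"mean significant effect ≈ {sig_pos.mean():.2f} vs true effect {true_d}")
```

Because any significant estimate must exceed the threshold (here well above the true effect), the published subset is exaggerated by construction, and the exaggeration is worst exactly when power is low.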
Conditional on statistical significance, all measured effects are biased upwards because there is a minimum effect size needed for a test to be considered statistically significant.[2] Evidence that most fields of science are under-powered in practice is plentiful (see here or here). Many disciplines do not even require studies to make power justifications before research begins.[3] Some researchers have attempted to justify the sample sizes of their studies by doing post-hoc (after-the-fact) power calculations using the estimated effect size as the basis for their estimates. This approach is problematic for three reasons. First, post-hoc power using the measured effect size as the assumed effect size is mathematically redundant, since it has a one-to-one mapping with the conventionally calculated p-value. Second, this type of post-hoc power will always show high power for statistically significant results, and low power for statistically insignificant results (this helpful post explains why).[4] Lastly, in underpowered designs, the estimated effect will be noisy, making post-hoc estimates of power equally noisy! The fragility index (FI) is a relatively new statistical technique that uses an intuitive approach to quantify post-hoc robustness in studies with two groups and a binary outcome. Qualitatively, the FI states how many patients in the treatment group would need to have their status swapped from event to non-event in order for the trial to be statistically insignificant.[5] In the original Walsh paper, the FI was calculated for RCTs published in high impact medical journals. They found a median FI of 8 and a 25% quantile of 3. Since the original Walsh paper, the FI has been applied hundreds of times to other areas of medical literature.[6] Most branches of medical literature show a median FI that is in the single digits.
While the FI has its drawbacks (discussed below), this new approach appears to have captured the statistical imagination of researchers in a way that power calculations have not. Being told that a study has 20% power should cause immediate alarm, yet it apparently does not provoke the same level of concern as finding out that an important cancer drug RCT was two coding errors away from being deemed ineffective. The rest of this post is organized as follows: section (2) provides an explicit relationship between the fragility index and the power of the test, section (3) provides an algorithm to calculate the FI using python code, section (4) reviews the criticisms against the FI, and section (5) concludes. ## (2) Binomial proportion fragility index (BPFI) This section examines the distribution of the FI when the test statistic is the normal approximation of a difference in binomial proportions; hereafter referred to as the binomial proportion fragility index (BPFI). While Fisher's exact test is normally used for estimating the FI, the BPFI is easier to study because it has an analytic solution. Two other simplifications will be used in this section for ease of analysis. First, the sample sizes will be the same between groups, and second, a one-sided hypothesis test will be used. Even though the BPFI may not be standard, it is a consistent statistic that is asymptotically normal and is just as valid as using other asymptotic statistics like a chi-squared test.
The notation used will be as follows: the statistic is the difference in proportions, $$d$$, whose asymptotic distribution is a function of the number of samples and the respective binary probabilities ($$\pi_1, \pi_2$$): \begin{align*} p_i &= s_i / n \\ s_i &= \text{Number of events} \\ n &= \text{Number of observations per group} \\ p_i &\overset{a}{\sim} N \Bigg( \pi_i, \frac{\pi_i(1-\pi_i)}{n} \Bigg) \\ d &= p_2 - p_1 \overset{a}{\sim} N\Bigg( \pi_2 - \pi_1, \sum_i V_i(n) \Bigg) \\ d &\overset{a}{\sim} N\Bigg( \pi_d, V_d(n) \Bigg) \\ \pi_d &= \pi_2 - \pi_1 \end{align*} Assume that $$n_1 = n_2 = n$$ and the null hypothesis is $$\pi_2 \leq \pi_1$$. We want to test whether group 2 has a larger event rate than group 1. \begin{align*} n_1 &= n_2 = n \\ H_0 &: \pi_d \leq 0, \hspace{3mm} \pi_1=\pi_2 \\ H_A &: \pi_d > 0 \\ d_0 &\overset{a}{\sim} N\big( 0, \big[ 2 \pi_1(1-\pi_1) \big]/n \big) \hspace{3mm} \big| \hspace{3mm} H_0 \\ d_A &\overset{a}{\sim} N\Big( \pi_d, \big[\pi_d +\pi_1(2-\pi_1) - (\pi_1+\pi_d)^2\big]/n \Big) \hspace{3mm} \big| \hspace{3mm} H_A \\ \hat{\pi}_d &= \frac{\hat{s}_2}{n} - \frac{\hat{s}_1}{n} \\ \hat{\pi}_i &= \hat s_i / n \hspace{3mm} \big| \hspace{3mm} H_A \\ \hat{\pi}_0 &= (\hat s_1 + \hat s_2)/(2n) \hspace{3mm} \big| \hspace{3mm} H_0 \end{align*} Notice that when calculating the variance of the test statistic under the null, the event rate pools the events across both groups.
For a given type-1 error rate target ($$\alpha$$), and corresponding rejection threshold, the power of the test when the null is false can be calculated: \begin{align*} \text{Reject }H_0:& \hspace{3mm} \hat{d} > \sqrt{\frac{2\hat\pi_0(1-\hat\pi_0)}{n}}t_\alpha, \hspace{7mm} t_\alpha = \Phi^{-1}_{1-\alpha/2} \\ P(\text{Reject }H_0 | H_A) &= 1 - \Phi\Bigg( \frac{\sqrt{2 \pi_0(1-\pi_0)}t_\alpha - \sqrt{n}\pi_d }{\sqrt{\pi_d +\pi_1(2-\pi_1) - (\pi_1+\pi_d)^2}} \Bigg) \\ \text{Power} &= \Phi\Bigg( \frac{\sqrt{n}\pi_d - \sqrt{2 \pi_0(1-\pi_0)}t_\alpha }{\sqrt{\pi_1(1-\pi_1)+\pi_2(1-\pi_2)}} \Bigg) \tag{1}\label{eq:power} \\ \end{align*} The formula \eqref{eq:power} shows that increasing $$\pi_d$$, $$n$$, or $$\alpha$$ all increase the power. Figure 1 below shows that the formula to estimate power is a close approximation for reasonable sample sizes. ## Figure 1: Predicted vs Actual power Given that the null has been rejected, the roots of the equation can be solved to find the exact point of statistical insignificance using the quadratic formula. 
\begin{align*} n\hat{d}^2 &= 2\hat\pi_0(1-\hat\pi_0) t_\alpha^2 \hspace{3mm} \longleftrightarrow \\ 0 &= \underbrace{(2n+t_\alpha^2)}_{(a)}\hat{s}_2^2 + \underbrace{2(t_\alpha^2(\hat{s}_1-n)-2n\hat{s}_1)}_{(b)}\hat{s}_2 + \underbrace{\hat{s}_1[2n \hat{s}_1 +t_\alpha^2(\hat{s}_1-2n)]}_{(c)} \\ \hat{\text{FI}} &= \hat{s}_2 - \frac{-b + \sqrt{b^2-4ac}}{2a} \tag{2}\label{eq:fi1} \end{align*} While equation \eqref{eq:fi1} is exact, the FI can be approximated by assuming the variance is constant: \begin{align*} \hat{\text{FI}}_a &= \begin{cases} \hat{s}_2 - \Big(\hat{s}_1 + t_\alpha\sqrt{2n \hat\pi_0(1-\hat\pi_0)}\Big) &\text{ if } n_1 = n_2 \\ \hat{s}_2 - n_2 \Big(\frac{\hat{s}_1}{n_1} + t_\alpha\sqrt{\frac{\hat\pi_0(1-\hat\pi_0)(n_1+n_2)}{n_1n_2}} \Big) &\text{ if } n_1\neq n_2 \tag{3}\label{eq:fi2} \end{cases} \end{align*} As Figure 2 shows below, \eqref{eq:fi2} is very close to \eqref{eq:fi1} for reasonably sized draws ($$n=200$$). ## Figure 2: BPFI and its approximation Next, we can show that the approximation of the BPFI from \eqref{eq:fi2} is equivalent to a truncated Gaussian when conditioning on statistical significance. \begin{align*} \text{pFI}_a &= \text{FI}_a \hspace{2mm} \big| \hspace{2mm} \text{FI}_a > 0 \hspace{2mm} \longleftrightarrow \\ \text{FI}_a &\sim N \big( n\pi_d - t_\alpha\sqrt{2n \pi_0(1-\pi_0)}, n[\pi_1(1-\pi_1) + \pi_2(1-\pi_2)] \big) \\ E[\text{pFI}_a] &= n\pi_d - t_\alpha\sqrt{2n \pi_0(1-\pi_0)} + \sqrt{n[\pi_1(1-\pi_1) + \pi_2(1-\pi_2)]} \frac{\phi\big(-E[\text{FI}_a]/\text{Var}[\text{FI}_a]^{0.5}\big)}{\Phi\big(E[\text{FI}_a]/\text{Var}[\text{FI}_a]^{0.5}\big)} \end{align*} Figure 3 below shows that the truncated Gaussian approximation does a good job at estimating the actual mean of the BPFI.
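The quadratic root in \eqref{eq:fi1} and the constant-variance shortcut in \eqref{eq:fi2} are straightforward to code up. The sketch below is my own (the function names and the example counts are illustrative), assuming $$\hat{s}_2 > \hat{s}_1$$ so that the index is positive:

```python
import numpy as np
from scipy import stats

def bpfi_exact(s1, s2, n, alpha=0.05):
    """Equation (2): solve n*d^2 = 2*pi0*(1-pi0)*t^2 for the boundary value of s2."""
    t2 = stats.norm.ppf(1 - alpha / 2) ** 2
    a = 2 * n + t2                            # quadratic coefficient (a)
    b = 2 * (t2 * (s1 - n) - 2 * n * s1)      # coefficient (b)
    c = s1 * (2 * n * s1 + t2 * (s1 - 2 * n)) # constant term (c)
    root = (-b + np.sqrt(b ** 2 - 4 * a * c)) / (2 * a)
    return s2 - root

def bpfi_approx(s1, s2, n, alpha=0.05):
    """Equation (3): treat the pooled variance as fixed at the observed rates."""
    t_a = stats.norm.ppf(1 - alpha / 2)
    pi0 = (s1 + s2) / (2 * n)
    return s2 - (s1 + t_a * np.sqrt(2 * n * pi0 * (1 - pi0)))

fi1 = bpfi_exact(s1=50, s2=100, n=1000)   # exact root, roughly 29
fi2 = bpfi_approx(s1=50, s2=100, n=1000)  # approximation, roughly 27
```

For 50 vs. 100 events out of 1000, the two estimates land within a couple of counts of each other, and in the same ballpark as the Fisher-exact search used in section (3).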
## Figure 3: Mean of the pFI If the positive BPFI is divided by root-n and the standard deviation under the alternative (a constant), we obtain a monotonic transformation of the fragility index: \begin{align*} E\Bigg[\frac{\text{pFI}_a \big/ \sqrt{n}}{\sqrt{\pi_1(1-\pi_1) + \pi_2(1-\pi_2)} }\Bigg] &= \Phi^{-1}(1-\beta) + \frac{\phi\big(-\Phi^{-1}(1-\beta)\big)}{\Phi\big(\Phi^{-1}(1-\beta)\big)} \tag{4}\label{eq:fi_power} \\ \end{align*} where $$\beta$$ is the type-II error rate (i.e. one minus power). Figure 4 below shows power estimates obtained by solving equation \eqref{eq:fi_power} for $$1-\beta$$ for the statistically significant results. ## Figure 4: Estimating power from FI While the median power estimate is a conservative estimate of the actual value, the empirical variation is tremendous. Why is there so much variation? The answer is simple: the distribution of FIs is similar for different effect sizes, as Figure 5 shows below. ## Figure 5: Distribution of FIs Even though a test may have a power of 75%, it will have a similar distribution of FIs to another test that has only 10% power. This naturally means that there will be significant uncertainty around the true power for any measured FI. ## (3) Calculating the fragility index Consider the classical statistical scenario of a 2x2 table of outcomes, corresponding to two different groups with a binary outcome recorded for each group. For example, a randomized controlled trial (RCT) for a medical intervention usually corresponds to this scenario, where the two groups are the (randomized) treatment and control group and the study records some event indicator associated with a health outcome. Suppose in this trial that the event rate is greater in the treatment group than in the control group, and that this positive difference is statistically significant.
If a patient who was recorded as having an event in the treatment group has their entry "swapped" to a non-event, then the proportions between the groups will narrow, and the result will become less statistically significant by definition. For any test statistic, the FI can be defined as follows: \begin{align*} \text{FI} &= \inf \Big\{ k \in \mathbb{I}^{+} : \hspace{3mm} \text{P-value}\Bigg(\begin{bmatrix} n_{1A}+k & n_{1B}-k \\ n_{2A} & n_{2B} \end{bmatrix} \Bigg) > \alpha \Big\} \end{align*} where $$n_i=n_{iA}+n_{iB}$$ is the total number of samples for group $$i$$, and there are $$n_{iA}$$ events. The code below provides the wrapper function FI_func needed to calculate the fragility index using the methodology as originally proposed. The sample sizes for both groups are fixed, with the event rate being modified for only group 1. The algorithm works by iteratively flipping one patient from event to non-event (or vice-versa) until there is a change in statistical significance. While a naive approach is simply to initialize the contingency table with the original data, a significant speed-up can be gained by estimating the FI with the BPFI as discussed in section 2. Conditional on any starting point, the algorithm converges by applying the following rules: 1. Flip event to non-event in group 1 if event rate is larger in group 1 and current result is statistically significant 2. Flip non-event to event in group 1 if event rate is larger in group 1 and current result is statistically insignificant 3. Flip non-event to event in group 1 if event rate is smaller in group 1 and current result is statistically significant 4. Flip event to non-event in group 1 if event rate is smaller in group 1 and current result is statistically insignificant Why would the direction be changed if the result is insignificant? This occurs when the BPFI initialization has overshot the estimate.
For example, imagine the baseline event rate is 50/1000 in group 1 and 100/1000 in group 2, and the BPFI estimates that insignificance occurs at 77/1000 for group 1. When we apply Fisher's exact test, we find that insignificance actually occurs at 75/1000, and to discover this we need to subtract off events from group 1 until the significance sign changes. In contrast, if the BPFI estimates that insignificance occurs at 70/1000, then when we run Fisher's exact test, we'll find that the results are still significant and will need to add patients to the event category until the significance sign changes. As a final note, there are two other ways to generate variation in the estimate of the FI for a given data point: 1. Which group is considered "fixed" 2. Which statistical test is used To generate the first type of variation, the values of n1A/n1 and n2A/n2 can simply be exchanged. Any function which takes in a 2x2 table and returns a p-value can be used for the second. I have included functions for Fisher's exact and the Chi-squared test.
```python
import numpy as np
import scipy.stats as stats

"""
INPUT
n1A:   Number of patients in group1 with primary outcome
n1:    Total number of patients in group1
n2A:   Number of patients in group2 with primary outcome
n2:    Total number of patients in group2
stat:  Function that takes a contingency table and returns a p-value
n1B:   Can be specified if n1 is None
n2B:   Can be specified if n2 is None
*args: Will be passed into stat
OUTPUT
FI:    The fragility index
ineq:  Whether group1 had a proportion less than or greater than group2
pv_bl: The baseline p-value from the Fisher exact test
pv_FI: The infimum of non-significant p-values
"""
def FI_func(n1A, n1, n2A, n2, stat, n1B=None, n2B=None, alpha=0.05, verbose=False, *args):
    assert callable(stat), 'stat should be a function'
    if (n1B is None) or (n2B is None):
        assert (n1 is not None) and (n2 is not None)
        n1B = n1 - n1A
        n2B = n2 - n2A
    else:
        assert (n1B is not None) and (n2B is not None)
        n1 = n1A + n1B
        n2 = n2A + n2B
    lst_int = [n1A, n1, n2A, n2, n1B, n2B]
    assert all([isinstance(i, int) for i in lst_int])
    assert (n1B >= 0) & (n2B >= 0)

    # Calculate the baseline p-value
    tbl_bl = [[n1A, n1B], [n2A, n2B]]
    pval_bl = stat(tbl_bl, *args)
    # Initialize FI and p-value
    di_ret = {'FI':0, 'pv_bl':pval_bl, 'pv_FI':pval_bl, 'tbl_bl':tbl_bl, 'tbl_FI':tbl_bl}
    # Calculate initial FI guess with the binomial proportion (BPFI)
    dir_hypo = int(np.where(n1A/n1 > n2A/n2, +1, -1))  # Hypothesis direction
    pi0 = (n1A+n2A)/(n1+n2)
    se_null = np.sqrt(pi0*(1-pi0)*(n1+n2)/(n1*n2))
    t_a = stats.norm.ppf(1-alpha/2)
    bpfi = n1*(n2A/n2 + dir_hypo*t_a*se_null)
    init_fi = int(np.floor(max(n1A - bpfi, bpfi - n1A)))
    if pval_bl < alpha:
        FI, pval, tbl_FI = find_FI(n1A, n1B, n2A, n2B, stat, alpha, init_fi, verbose, *args)
    else:
        FI, pval = np.nan, np.nan
        tbl_FI = tbl_bl
    # Update dictionary
    di_ret['FI'] = FI
    di_ret['pv_FI'] = pval
    di_ret['tbl_FI'] = tbl_FI
    return di_ret

# Back-end function to perform the search loop
def find_FI(n1A, n1B, n2A, n2B, stat, alpha, init, verbose=False, *args):
    assert isinstance(init, int), 'init is not an int'
    assert init > 0, 'Initial FI guess is less than zero'
    n1a, n1b, n2a, n2b = n1A, n1B, n2A, n2B
    n1, n2 = n1A + n1B, n2A + n2B

    # (i) Initial guess
    prop_bl = int(np.where(n1a/n1 > n2a/n2, -1, +1))
    n1a = n1a + prop_bl*init
    n1b = n1 - n1a
    tbl_int = [[n1a, n1b], [n2a, n2b]]
    pval_init = stat(tbl_int, *args)

    # (ii) If the result is still significant, keep the direction; otherwise flip it
    dir_prop = int(np.where(n1a/n1 > n2a/n2, -1, +1))
    dir_sig = int(np.where(pval_init < alpha, +1, -1))
    dir_fi = dir_prop * dir_sig

    # (iii) Loop until the significance changes
    dsig = True
    jj = 0
    while dsig:
        jj += 1
        n1a += +1*dir_fi
        n1b += -1*dir_fi
        assert n1a + n1b == n1
        tbl_dsig = [[n1a, n1b], [n2a, n2b]]
        pval_dsig = stat(tbl_dsig, *args)
        dsig = (pval_dsig < alpha) == (pval_init < alpha)
    vprint('Took %i iterations to find FI' % jj, verbose)
    if dir_sig == -1:
        # If we walked in the opposite direction, need to step back by one
        n1a += -1*dir_fi
        n1b += +1*dir_fi
        tbl_dsig = [[n1a, n1b], [n2a, n2b]]
        pval_dsig = stat(tbl_dsig, *args)

    # (iv) Calculate FI
    FI = np.abs(n1a - n1A)
    return FI, pval_dsig, tbl_dsig

# Wrappers for different p-value approaches
def pval_fisher(tbl, *args):
    return stats.fisher_exact(tbl, *args)[1]

def pval_chi2(tbl, *args):
    tbl = np.array(tbl)
    if np.all(tbl[:,0] == 0):
        pval = np.nan
    else:
        pval = stats.chi2_contingency(tbl, *args)[1]
    return pval

def vprint(stmt, verbose):
    if verbose:
        print(stmt)

FI_func(n1A=50, n1=1000, n2A=100, n2=1000, stat=pval_fisher, alpha=0.05)
```

```
{'FI': 25, 'pv_bl': 2.74749805216798e-05, 'pv_FI': 0.057276449223784075, 'tbl_bl': [[50, 950], [100, 900]], 'tbl_FI': [[75, 925], [100, 900]]}
```

As the output above shows, `FI_func` returns the FI and the corresponding table at the value of insignificance.
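An observed FI can also be converted into the power implied by equation \eqref{eq:fi_power} from section (2). The inversion helper below is my own sketch (the `brentq`-based root-finding and the function names are mine, not part of the original script), plugging the observed event rates in as stand-ins for the true ones:

```python
import numpy as np
from scipy import stats, optimize

def rhs(power):
    """Right-hand side of equation (4) as a function of power = 1 - beta."""
    z = stats.norm.ppf(power)
    return z + stats.norm.pdf(z) / stats.norm.cdf(z)

def power_from_fi(fi, n, pi1, pi2):
    """Numerically invert equation (4): scale the FI, then solve for power."""
    target = fi / (np.sqrt(n) * np.sqrt(pi1*(1-pi1) + pi2*(1-pi2)))
    return optimize.brentq(lambda p: rhs(p) - target, 1e-6, 1 - 1e-6)

# Implied power for the example above: FI=25 with rates 50/1000 vs 100/1000
implied = power_from_fi(fi=25, n=1000, pi1=0.05, pi2=0.1)
```

Since `rhs` is monotonic in power, the bracketing root-finder recovers a unique solution; note the inversion fails by construction when the scaled FI falls below the minimum of `rhs`, i.e. for implausibly small positive indices.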
If the groups are flipped, one can show the FI for group 2:

```python
FI_func(n1A=100, n1=1000, n2A=50, n2=1000, stat=pval_fisher, alpha=0.05)
```

```
{'FI': 29, 'pv_bl': 2.74749805216798e-05, 'pv_FI': 0.06028540160669414, 'tbl_bl': [[100, 900], [50, 950]], 'tbl_FI': [[71, 929], [50, 950]]}
```

Notice that the FI is not symmetric. When the baseline results are insignificant, the function will return a np.nan.

```python
FI_func(n1A=71, n1=1000, n2A=50, n2=1000, stat=pval_fisher, alpha=0.05)
```

```
{'FI': nan, 'pv_bl': 0.06028540160669414, 'pv_FI': nan, 'tbl_bl': [[71, 929], [50, 950]], 'tbl_FI': [[71, 929], [50, 950]]}
```

## (4) Criticisms of the FI

There are two main criticisms levelled against the FI: first, that it does not do what it claims to do on a technical level, and second, that it encourages null hypothesis significance testing (NHST). The first argument can be seen in Potter (2019), which shows that the FI is not comparable between studies because it does not quantify how "fragile" the results of a study actually are. Specifically, the paper shows that the FI does not provide evidence as to how likely the null hypothesis is relative to the alternative (i.e. that there is some effect). If there are two identically powered trials with differences in sample sizes, then it must be the case that the trial with the smaller sample size has a larger effect size. By looking at the Bayes factor, Potter shows that for any choice of prior, a smaller trial with a larger effect size is more indicative of an effect existing than a larger trial with a smaller effect size, for a given power.

> Therefore, if the probability model is correct (as in the coin toss example), the small trial provides more evidence for the alternative hypothesis than the large one. It should not be penalized for using fewer events to demonstrate significance. When the probability model holds, the FI incorrectly concludes that the larger trial provides stronger evidence. (Potter 2019)

For example, a study with 100 patients might have a p-value of 1e-6 and a FI of 5, whereas a study with 1000 patients with a p-value of 0.03 might have a FI of 10. In other words, the FI tends to penalize studies for being small, rather than studies that have a weak signal. Furthermore, the fragility index will often come to the opposite conclusion of a Bayes factor analysis.

> Altogether, the FI creates more confusion than it resolves and does not promote statistical thinking. We recommend against its use. Instead, sensitivity analyses are recommended to quantify and communicate robustness of trial results. (Potter 2019)

A second criticism of the FI is that it encourages thinking in the framework of NHST and its associated problems. As Perry Wilson articulates, the FI further entrenches dichotomous thinking when doing statistical inference. For example, if a coin is flipped 100 times and 60 of them are heads, then using a 5% p-value cut-off, the null of an unbiased coin (p-value=0.045) will be rejected. But such a result has a FI of one, since 59 heads would have a p-value of 0.07. However, both results are "unlikely" under the null, so it seems strange to conclude that the initial finding should be discredited because of a FI of one.

## (5) Conclusion

While other papers have shown correlations between the empirical FI and post-hoc power (see here, here, or here), my work is the first (I believe) to show an explicit analytic relationship between the power and the expected value of the FI when using a binomial proportions test. The Potter paper is correct: the FI does not provide insight into the posterior probabilities between studies. Rather, it provides a noisy and conservative estimate of the power. As section (2) showed, unlike other types of post-hoc power analyses, the FI is able to show low power, even for statistically significant results, because using the first moment of the truncated Gaussian explicitly conditions on this significance filter.
However, inverting this formula to estimate the power leads to results that are too noisy in practice to use with any confidence (see Figure 4). I agree with the criticisms of the FI highlighted in section (4), but the method can still be defended on several grounds. First, the FI can be made more comparable between studies by normalizing by the number of samples (known as the fragility quotient, FQ). Second, smaller studies should be penalized in a frequentist paradigm, not because their alternative hypothesis is less likely to be true (which is what the Bayes factor tells us), but rather because the point estimate of the statistic conditional on significance is going to be exaggerated. Lastly, even though the FI does encourage dichotomous thinking, that is a problem of NHST and not the FI per se. To expand on the analogy of the biased coin, if the world's scientists went around flipping every coin they found lying on the sidewalk 100 times and then submitting their "findings" to journals every time they got 60 or more heads, then the world would appear to be festooned with biased coins. The bigger problem is that it is a silly endeavour to look around the world for biased coins. And even though there may be many coins with a slight bias (say a 50.1% chance of heads), the observed (i.e. published) biases would be at least 10% more extreme than what should be reported. This highlights the bigger problem of scientific research and the file drawer problem. I think the best argument in favour of the FI is that it encourages researchers to carry out studies with larger sample sizes. The real reason this should be done is to increase power, but if researchers are motivated because they don't want a small FI, then so be it. Until now, researchers have developed all sorts of mental ju-jitsu techniques to defend their under-powered studies.
Such techniques include the "whatever doesn't kill my p-value makes it stronger" argument.[7] Not to pick on Justin Wolfers, but here is one example of such a sentiment:

> You are suggesting both GDP and happiness are terribly mismeasured. And the worse the measurement is the more that biases the estimated correlation towards zero. So it's amazing that the estimated correlation is as high as 0.8, given that I'm finding that's a correlation between two noisy measures. Noise makes my claim stronger!

Making such a statement against a more intuitive measure like the FI would be harder. As the authors of the original Walsh paper put it:

> The Fragility Index has the merit that it is very simple and may help integrate concerns over smaller samples sizes and smaller numbers of events that are not intuitive. We conclude that the significant results of many RCTs hinge on very few events. Reporting the number of events required to make a statistically significant result nonsignificant (ie, the Fragility Index) in RCTs may help readers make more informed decisions about the confidence warranted by RCT results.

## Footnotes 1. For example, researchers may find that an effect exists, but only for females. This "finding" in hand, the paper has unlimited avenues to engage in post-hoc theorizing about how the absence of a Y chromosome may or may not be related to this. 2. In other words, the distribution of statistically significant effect sizes is truncated. For example, consider the difference in the distribution of income in society conditional on full-time employment, and how that is shifted right compared to the unconditional distribution. 3. In my own field of machine learning, power calculations are almost never done to estimate how many samples a test set will need to establish (statistically) some lower-bound on model performance. 4. Applying any threshold to determine statistical significance will by definition ensure that post-hoc power cannot be lower than 50%. 5.
Note, this means the traditional FI can only be applied to statistically significant studies. A reverse FI, which calculates how many patients would need to be swapped to move from statistical insignificance to significance, has also been proposed. 6. For full disclosure, I am a co-author on two recently published FI papers applied to the pediatric urology literature (see here). 7. As Gelman puts it: "In noisy research settings, statistical significance provides very weak evidence for either the sign or the magnitude of any underlying effect". Written on November 12, 2021
{}
# Question regarding speed of causality and speed of light Recently I came across a question regarding why the speed of light and the speed of causality are the same. Link - Why is the speed of causality equal to the speed of light? I came up with an explanation, but I am not sure if it is right. I thought that we could say that the cause travels in such a way that it must not experience any time, i.e. the proper time for the travelling cause must be zero, which by special relativity happens when something travels at the speed of light. Is there any possibility of this being a proper explanation?
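For reference, the proper-time relation my reasoning leans on (the standard special-relativity formula, not something from the linked thread) is:

```latex
\Delta\tau \;=\; \Delta t\,\sqrt{1-\frac{v^{2}}{c^{2}}} \;\longrightarrow\; 0 \quad \text{as } v \to c
```

so a signal travelling at $c$ accumulates zero proper time between emission and absorption, while anything moving at $v < c$ has $\Delta\tau > 0$ along its worldline.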
{}
## Introduction Bacterial lipids are the key constituent segregating cellular components from the external environment. Bacterial lipids are highly diverse yet there is currently little understanding of the benefits that this diversity provides [1]. Glycerophospholipids are by far the best studied lipids in bacteria and a key branch point for glycerophospholipid biosynthesis is phosphatidic acid (PA), from which a variety of lipids, including phosphatidylglycerol (PG), phosphatidylethanolamine (PE), phosphatidylcholine (PC), diacylglycerol (DAG), and triacylglycerol (TAG), can be made through either the cytidine diphosphate (CDP)-diacylglycerol (DAG) pathway or the Kennedy pathway [2]. PA biosynthesis in bacteria is carried out by a membrane-attached acyltransferase PlsC, the founding member of the large lysophosphatidic acid acyltransferase (LPAAT) family [3, 4]. Aside from phospholipids, the study of bacterial lipid diversity is currently hampered by a lack of knowledge of both the chemical structures of many of these lipids and the identity of genes involved in their synthesis. These have severely hindered our understanding of lipid diversity and their physiological function in bacteria. Once the chemical structure of a lipid is known, analytical strategies can then be devised to detect the lipid in both the natural environment and cell cultures [5]. This can also help to direct studies into the biosynthesis of the lipid, knowledge of which can provide a clearer idea of the likely distribution of the lipid amongst various bacterial classes. A group of poorly studied bacterial lipids are the aminolipids, of which only ornithine lipids have been detected in diverse cultured bacteria since the 1960s [6]. However, it was not until the genes involved in its biosynthesis were elucidated that it became clear how widespread the capacity to produce ornithine lipid really was [7, 8]. Similarly, Sebastian et al. 
[9] found several uncharacterized aminolipids in marine heterotrophic bacteria, one of which was recently determined to be a glutamine-containing aminolipid, often found in the marine roseobacter group [10]. Both ornithine and glutamine lipids play a key role in the adaptation of cosmopolitan marine bacteria (e.g., the marine SAR11 clade and the roseobacter group) to oligotrophic environments [9,10,11]. In this study, we report the characterization and chemical structure of a novel sulfur-containing aminolipid from the marine roseobacter group, using high-resolution accurate-mass spectrometry. This newly identified lipid represents a novel class of sulfur-containing lipids with an aminosulfonate head group. Furthermore, we describe a novel acyltransferase enzyme (SalA), part of the LPAAT family, that is responsible for the biosynthesis of this sulfonolipid. This sulfonolipid appears widespread within the roseobacter group, whose members are key players in marine biogeochemical cycles and important for biofilm formation. Finally, the salA gene is abundant and actively transcribed in marine surface microbial assemblages. ## Materials and methods ### Bacterial strains and cultivation All marine bacteria used in this study were cultivated using either marine broth medium (BD Difco™ 2216), ½YTSS medium containing yeast extract (2 g/L), tryptone (1.25 g/L), and sea salts (20 g/L, Sigma-Aldrich) or a defined marine ammonium mineral salts (MAMS) medium [10]. The MAMS medium contained 30 g/L NaCl, 10 mM glucose, 1 mM K2HPO4, 0.75–7.5 mM NH4Cl, 10 mM HEPES buffer (pH 7.6), 1.36 mM CaCl2, 0.98 mM MgSO4, 7.2 µM FeCl2, 84 µM Na2MoO4, 370 nM ZnCl2, 510 nM MnCl2, 97 nM H3BO3, 1.1 µM CoCl2, 12 nM CuCl2, 100 nM NiCl2, 30 nM thiamine, 160 nM nicotinic acid, 97 nM pyridoxine, 73 nM aminobenzoic acid, 53 nM riboflavin, 84 nM pantothenate, 4.1 nM biotin, 1.5 nM cyanocobalamin, and 11 nM folic acid. All cultures were grown at 30 °C aerobically in a shaker (150 r.p.m) unless stated otherwise.
### Intact polar lipid analysis Lipid extraction from bacterial cultures was carried out using the modified Folch extraction protocol as described previously [10, 12]. Briefly, 1 mL of culture at OD540 ~ 1.0 was collected by centrifugation. Total lipids were then extracted using methanol-chloroform, dried under nitrogen gas, and the pellet re-suspended in 1 mL solvent (95% (v/v) liquid chromatography-mass spectrometry (LC-MS) grade acetonitrile and 5% (v/v) 10 mM ammonium acetate, pH 9.2, in water). These lipids were then analysed by LC-MS using a Dionex 3400RS HPLC with a HILIC BEH amide XP column (2.5 µm, 3.0 × 150 mm, Waters) coupled to an amaZon SL ion trap MS (Bruker) via electrospray ionization (ESI) in both positive (+ve) and negative (−ve) ionization mode. Samples were run on a 15 min gradient from 95% (v/v) acetonitrile/5% (v/v) ammonium acetate (in water, 10 mM, pH 9.2) to 70% (v/v) acetonitrile/30% (v/v) ammonium acetate (in water, 10 mM, pH 9.2), followed by a 5 min isocratic hold at 70% acetonitrile/30% ammonium acetate, with 10 min equilibration between samples. The flow rate was maintained at 150 μL min–1 and the column temperature at 30 °C. The injection volume was 5 μL for each run. Drying conditions were the same for both ionization modes (8 L min–1 drying gas at 300 °C and nebulizing gas pressure of 15 psi). The end cap voltage was 4500 V in positive mode and 3500 V in negative mode, both with a 500 V offset. Data analysis was carried out using the Bruker Compass software package. Unless stated otherwise, base peak chromatograms are presented with an m/z range from 400 to 1000. High resolution MS identification and fragmentation was carried out using either a quadrupole-time-of-flight MS (Q-TOF, Waters Synapt G2-Si) or an Orbitrap Fusion (Thermo Fisher Scientific) by direct infusion and collision induced dissociation (CID). For the Orbitrap Fusion, the resolution was set at 120 K with CID for MSn.
A TriVersa Nanomate nanospray source (Advion, NY) was used and the flow rate was 300 nL min−1. The voltage was set at 1.4 kV and the gas pressure was 0.3 psi. Sheath and sweep gas were set to zero, the cone voltage was 2100 V, and the mass range was from 50 to 1000 Da. MS data were analyzed using Xcalibur (Thermo Fisher Scientific). For the Q-TOF, samples were injected through a Universal NanoFlow Sprayer (Waters) by direct infusion at 200–300 nL min−1 and the cone voltage was 30 V in negative mode ESI. The mass range was set from 50 to 1000 Da and data analyses were carried out in MassLynx (Waters). The most abundant peak in the negative ion spectrum corresponding to the SAL lipid (m/z 656.6) was selected for MSn fragmentation. Spectra were obtained in profile mode and smoothed using a moving mean. Background correction using a linear baseline was applied with a 40% noise cut-off. For accurate mass determination, the centroid of each peak was used. The peak corresponding to C17H33COO− (m/z 281.2480, an 18:1 fatty acid carboxylate anion) was used as a lock mass. Calculation of candidate elemental formulae from the accurate mass considered formulae containing C0–100, H0–100, N0–100, S0–4, and P0–1. A conservative mass error of 100 ppm was assumed. ### Marker-exchange mutagenesis Marker-exchange mutagenesis was carried out as described previously using the suicide vector pK18mobsacB [10]. Briefly, DNA fragments corresponding to an upstream element and a downstream element that flank the target gene were amplified by PCR using high-fidelity Phusion DNA polymerase. A gentamicin (Gm)-resistance cassette was amplified from plasmid p34S-Gm [10, 13]. These fragments, together with the linearized pK18mobsacB vector, were then assembled through Gibson cloning and transformed into competent Escherichia coli DH5α cells. The engineered suicide vector was then extracted from E. coli DH5α and transformed into the conjugation donor strain E.
coli S17.1 λpir before conjugating into Ruegeria pomeroyi DSS-3 as described previously [10]. Transconjugants were then selected on defined MAMS medium containing gentamicin (Gm, 10 μg mL−1). All mutants were confirmed by PCR using the confirmation primers (Supplementary Table 1) and subsequent Sanger sequencing. ### Transposon library of Phaeobacter inhibens DSM 17395 A library of 5500 transposon mutants of Phaeobacter inhibens DSM 17395, which was established at the DSMZ, served as the basis for identifying genes involved in the biosynthesis of the novel SAL lipid. Transposon mutagenesis was performed with the EZ-Tn5<R6Kγori/Kan-2> Tnp Transposome kit (Epicentre, Illumina, CA, USA) and the insertion site of all mutants was determined via arbitrary PCR [14]. Transposon mutants were streaked out three times to eliminate attached wild-type cells. The absence of wild-type cells and the presence of the 65 kb plasmid were validated as described previously [14,15,16]. The transposon integration site of each mutant was also confirmed via sequencing of the amplification PCR product, and stable maintenance of all three extrachromosomal elements was validated via diagnostic PCR [17]. The transposon mutant #1036 of P. inhibens DSM 17395 (PGA1_c01210), which was unable to produce the SAL lipid, was complemented using the salA homologs of Ruegeria pomeroyi DSS-3 (locus tag SPO0716) and P. inhibens DSM 17395 (locus tag PGA1_c01210). Complementation was carried out by PCR amplification of the salA homologs together with a constitutive promoter (~250 bp upstream of the aacC1 gene from plasmid p34S-Gm [13]), which was then cloned into the broad host range vector pBBR1MCS and transformed into the salA mutant of P. inhibens DSM 17395 by conjugation as described previously [9, 10]. The complemented mutants were cultivated using marine broth medium and cells were harvested for lipidomics analysis as described above.
### Biofilm assays To grow biofilms of Phaeobacter inhibens DSM 17395 and the salA mutant, bacterial cells grown to the post-exponential phase were washed and diluted in fresh marine broth medium and inoculated at an OD590 nm of 0.2 into 24-well plates (Corning Incorporated Costar®, New York, NY, USA) containing a sterilized glass coverslip in each well. At each time point (3, 24, and 48 h), biofilms were washed to remove non-adherent bacteria and fixed using formalin 3.5% (v/v) for 20 min. Bacteria were stained using DAPI (5 μg mL−1, Sigma-Aldrich, Darmstadt, Germany) and coverslips were mounted with a drop of Mowiol antifade before observation using confocal laser scanning microscopy (CLSM) (Zeiss LSM 880, Göttingen, Germany). The biovolume and the average thickness of the biofilms were determined using COMSTAT software developed in MATLAB R2017a (MathWorks, Natick, MA, USA) as described previously [18, 19]. To test for statistically significant differences between the wild-type strain and the salA mutant, a t-test was performed using SPSS 13.0 (IBM, Armonk, NY, USA). A crystal violet biofilm assay, adapted from Guillonneau et al. [18], was also performed. Bacterial biofilms were developed in 96-well microtiter plates (Greiner Bio-One, Kremsmünster, Austria) with bacteria in the post-exponential growth phase using marine broth medium. Cells were diluted to a final OD590 nm = 0.1 in each well (n = 4 for both the wild type and the salA mutant) and grown under static conditions at 30 °C. At each time point (3, 24, 48, and 72 h) samples were washed three times with fresh marine broth medium and dried for 30 min at 50 °C. Biofilms were then stained for 15 min with 200 μL crystal violet 0.01% (w/v), rinsed three times with phosphate-buffered saline and dried for 10 min. Biofilm quantification was performed by releasing the stain from the biofilm using absolute ethanol for 10 min at 30 °C with gentle shaking.
The absorbance of the crystal violet in solution was measured at 595 nm. The final absorbance of each sample was calculated by subtracting the blank (i.e., marine broth medium only treated with crystal violet, n = 4). ### Bioinformatics analysis Phylogenetic analysis of 16S rRNA genes from the Rhodobacteraceae was carried out using full-length 16S rRNA genes retrieved from the Integrated Microbial Genomes (IMG) database (https://img.jgi.doe.gov/). Sequence alignments of 16S rRNA genes and LPAAT genes (also retrieved from IMG) were performed using Muscle, and phylogenetic analyses were performed with MEGA7.0 [20] with 500 bootstrap replicates. Sequence alignments were visualized using JalView [21]. To search for SalA homologs in the Tara metagenome/metatranscriptome datasets, we used the Ocean Gene Atlas (OGA) databases OM_RGCv2_metaG (metagenomics) and OM-RGCv2_metaT (metatranscriptomics) with an e-value cut-off of e−40 [22]. Abundance was normalized as a percentage of the median mapped read abundance of genes/transcripts of ten prokaryotic single-copy marker genes [23]. The taxonomic distribution of homologs was displayed using Krona in the OGA interface. The genomes of the marine roseobacters used in this study were downloaded from the NCBI database. These comprised nine strains that were found to produce SAL and two strains (Stappia stellulata DSM 5886 and Dinoroseobacter shibae DFL12) that did not. In order to identify genes potentially involved in SAL synthesis, each gene from the 11 genomes was assigned to an orthologous group using the eggNOG mapper [24]. This program conducts a BLAST search of each sequence against the eggNOG database [25] of orthologous genes, with the query sequence being annotated with the same orthologous group as the best BLAST hit. Orthologous groups that were present in the genomes of all SAL-producing strains but absent from the genomes of S. stellulata and D. shibae were considered to be potentially involved in SAL synthesis.
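The producer/non-producer screen just described reduces to set arithmetic over the orthologous-group assignments. A minimal sketch in Python; the strain-to-OG contents are illustrative toys, except for the two candidate acyltransferase OG accessions named in the text (08UX5 and 05CDD):

```python
# Orthologous-group (OG) sets per strain. Contents are illustrative,
# except the two candidate acyltransferase OGs from the text.
producers = {
    "R. pomeroyi DSS-3":     {"08UX5", "05CDD", "OG_a", "OG_b"},
    "P. inhibens DSM 17395": {"08UX5", "05CDD", "OG_a", "OG_c"},
}
non_producers = {
    "S. stellulata DSM 5886": {"OG_a", "OG_d"},
    "D. shibae DFL12":        {"OG_a", "OG_b"},
}

# "Core" OGs: present in every SAL-producing strain.
core = set.intersection(*producers.values())
# Candidates: core OGs absent from every non-producer.
candidates = core - set.union(*non_producers.values())
print(sorted(candidates))
```

With the real eggNOG assignments this screen yields the 1417 core OGs and 37 candidates reported in the Results.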
Abundance data of SalA homologs from four depths, derived from the Tara metagenomics/metatranscriptomics datasets, were tested for normal distribution using a Shapiro–Wilk test. Significant differences between depths were tested for using a Kruskal–Wallis test followed by a post-hoc Dunn's test with Holm's correction for multiple comparisons. All statistical analysis was performed in RStudio (version 1.3) using R (version 4.0.2). ### In silico homology modeling and docking studies for SalA A SalA homology model was generated using the Phyre2 protein folding prediction server [26], and the lyso-SAL lipid was drawn in MarvinSketch (v19.10.0, 2019, ChemAxon for Mac) and exported as a Mol SDF format file. The homology model was built using the structure of the lysophosphatidic acid acyltransferase PlsC (PDB code 5KYM [4]). The SalA protein model was then imported into Flare (v3.0, Cresset) for docking of the lyso-SAL substrate and energy minimized over 2000 iterations with a cut-off of 0.200 kcal/mol/Å. The lyso lipid was imported as a ligand and energy minimized in Flare before being docked into the active site, and the best scoring pose was selected. ## Results ### A new sulfur-containing aminolipid is found in Ruegeria pomeroyi DSS-3 During LC-MS analysis of lipid extracts from Ruegeria pomeroyi DSS-3 grown on ½YTSS medium, two prominent peaks eluting around 3.5 min were found in both negative and positive ionization mode (Fig. 1). The most prominent ions in the two peaks had m/z values of 656.6 and 672.7 in negative ionization mode, respectively. Other major lipids identified in this bacterium include two phospholipids, PG and PE, and two aminolipids, ornithine lipid (OL) and glutamine lipid (QL) [10]. To elucidate the structure of the new lipids eluting at 3.5 min, the most intense species, at m/z 656.4882, was selected for high resolution MS/MS analysis on a quadrupole-time-of-flight (Q-TOF) mass spectrometer (Fig. 2).
At low collision energy (40 eV) the major species formed corresponded to a neutral loss of 282 mass units. This is consistent with the neutral loss of an 18:1 fatty acid. A second peak at m/z 281.2480 is likely the carboxylate anion of an 18:1 fatty acid. Further fragmentation, at higher collision energies (up to 90 eV), yielded a major ion at m/z 237.2159. This ion likely corresponds to a 16:0 fatty acid present as a ketene, which would be consistent with the fragmentation scheme proposed for ornithine lipids and glutamine lipids [27]. These results therefore suggest a lipid class with a fatty acyl backbone structure similar to that of the aminolipids, such as ornithine and glutamine lipid [10]. The glutamine lipid (QL, [M+H]+ m/z 719.7) and ornithine lipid (OL, [M+H]+ m/z 705.7) eluted at 9.5 and 12.5 min, respectively (Fig. 1). The formation of these novel lipids at ~3.5–4 min was not affected in the olsA or glsB mutants of R. pomeroyi DSS-3 (Supplementary Fig. S1); the olsA and glsB genes in R. pomeroyi DSS-3 are essential for the production of the nitrogen-containing ornithine/glutamine lipids [10]. Prominent peaks at m/z 80 and 81 were apparent in the fragmentation spectrum obtained at 90 eV collision energy (Fig. 2c). The accurate masses of these ions were 79.9568 and 80.9643. Of the candidate formulae within 100 ppm of the measured mass, $${\mathrm{SO}}_3^ -$$ and $${\mathrm{HSO}}_3^ -$$ appear most plausible, with mass errors of 0.182 ppm and 4.194 ppm, respectively. A smaller peak doublet at m/z 63.9611 and 64.9692 was also present in the 90 eV spectrum. These masses are unambiguously assigned to $${\mathrm{SO}}_2^ -$$ (mass error 12.506 ppm) and $${\mathrm{HSO}}_2^ -$$ (mass error 8.08 ppm). Taken together, these results demonstrate the presence of a sulfonate group in the lipid. An ion at m/z 136.0045 corresponded to the deprotonated head group. The mass determined here is larger than that of deprotonated taurine (m/z 124).
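These fragment assignments all reduce to comparing a measured m/z against the theoretical monoisotopic mass of a candidate composition and expressing the difference in ppm. A minimal sketch in Python using standard monoisotopic atomic masses; the electron mass is neglected, which is consistent with the mass errors quoted above for the sulfonate fragments:

```python
# Standard monoisotopic atomic masses (u); electron mass neglected.
MONO = {"C": 12.0, "H": 1.00782503, "N": 14.00307401,
        "O": 15.99491462, "S": 31.97207069}

def exact_mass(comp):
    """Monoisotopic mass of a composition dict, e.g. {"S": 1, "O": 3}."""
    return sum(MONO[el] * n for el, n in comp.items())

def ppm_error(measured, theoretical):
    """Signed mass error of a measurement, in parts per million."""
    return (theoretical - measured) / measured * 1e6

assignments = {               # measured m/z -> candidate composition
    79.9568:  {"S": 1, "O": 3},                          # sulfonate fragment
    80.9643:  {"H": 1, "S": 1, "O": 3},                  # protonated sulfonate
    281.2480: {"C": 18, "H": 33, "O": 2},                # 18:1 carboxylate anion
    136.0045: {"C": 3, "H": 6, "N": 1, "S": 1, "O": 3},  # head group
}
for mz, comp in assignments.items():
    print(f"{mz}: {ppm_error(mz, exact_mass(comp)):.3f} ppm")
```

Running this reproduces the sub-ppm error for the sulfonate fragment and keeps all assignments comfortably within the conservative 100 ppm window stated in Methods.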
Since the head group includes a sulfonate ($${\mathrm{SO}}_3^ -$$) group, the plausible formula most closely corresponding to the accurate mass is C3H6NSO3 (Table 1). This is consistent with the structure being an aminopropane sulfonic acid, although the position of the amino group cannot be unequivocally determined by mass spectrometry (Fig. 2). The proposed fragmentation scheme is presented in Fig. 3. To further confirm the presence of an amino group in the hydrophilic head of this SAL, we cultivated Ruegeria pomeroyi DSS-3 in a chemically defined marine ammonium mineral salts (MAMS) medium using 15N-ammonium as the sole nitrogen source. Indeed, the 15N-labeled SAL was readily observed in the lipid extract, resulting in a shift of m/z from 656.4951 to 657.4903 (Supplementary Fig. S2a), whereas non-nitrogen-containing lipids, such as PG, were not labeled by 15N, as expected (Supplementary Fig. S2b). The incorporation of the 15N isotope into the head group of SAL was confirmed by MSn (Supplementary Fig. S2c, d). We also performed the same MSn analysis on the m/z 672.4875 species as well as the 15N-labeled m/z 673.4852 species. The loss of 282 at MS2 (672.4875→390.2317; 673.4852→391.2285) suggests the R2 fatty acid was C18:1. Therefore, the data suggest that the lipid species eluting immediately after the m/z 656.6 species is likely a hydroxylated SAL; the proposed fragmentation scheme is presented in Supplementary Fig. S3. ### The sulfur-containing aminolipid is found in a range of marine roseobacters To investigate the presence of SAL amongst roseobacters, we selected 16 strains, in addition to R. pomeroyi DSS-3, to obtain wide coverage of the roseobacter group, including the model roseobacter bacterium Phaeobacter inhibens DSM 17395 (Fig. 4). The selected strains included Stappia stellulata, which recent phylogenetic studies indicate is not a member of the Rhodobacteraceae [28] and which therefore served as an outgroup.
These strains were each grown in marine broth overnight before cells were harvested for lipid analysis. SAL was detected in all the strains tested apart from S. stellulata and Dinoroseobacter shibae (Fig. 4a). The separation of these two strains from the remaining roseobacter sequences is in line with previous results showing D. shibae branching deeply within the Rhodobacteraceae phylogeny [29]. ### Comparative genomics to determine genes involved in SAL biosynthesis We then conducted a comparative genomics investigation of the roseobacter strains whose lipid profiles had been analysed. We reasoned that synthesis of the SAL would require an N-acyltransferase activity to acylate aminopropane sulfonic acid, analogous to that mediated by OlsB and GlsB in the synthesis of ornithine and glutamine lipid [8, 10]. We therefore looked for predicted N-acyltransferases that were present in all the strains that produced SAL in marine broth (the “producers”) while being absent from the strains that did not produce SAL (the “non-producers”). We assigned all the genomic sequences from the nine genome-sequenced producer strains and two non-producer strains to orthologous groups (OGs) using the eggNOG-mapper software [24], which provides a consistent pipeline for sequence annotation and OG assignment by comparison to the eggNOG database [25]. We identified a group of 1417 “core” genes that were present in the genomes of all SAL producer strains, of which 1060 were also present in the two non-producers (Fig. 4b). Thirty-seven candidate genes were present in all SAL producer strains but not in the genomes of the non-producers (Fig. 4b), two of which (OG accession numbers 08UX5 and 05CDD) were annotated as potential acyltransferases (Table 2). We therefore generated mutants in these two genes in the two model bacteria, R. pomeroyi DSS-3 and P. inhibens DSM 17395, and screened for the loss of SAL production. The 08UX5 mutant (locus SPO2471) of R.
pomeroyi DSS-3 still produced SAL at the same level as the wild type (data not shown), suggesting that this gene is unlikely to be involved in SAL formation. However, in the 05CDD mutant of P. inhibens DSM 17395 (locus tag PGA1_c01210), SAL formation was completely abolished, suggesting that this gene is indeed responsible for SAL biosynthesis (Fig. 4c). This gene is named salA hereafter. Indeed, when the mutant was complemented with either salA from R. pomeroyi DSS-3 (SPO0716) or P. inhibens DSM 17395 (PGA1_c01210), SAL production was restored (Fig. 4d). SalA is a putative O-acetyltransferase-like protein with a recognized LPAAT (lysophosphatidic acid acyltransferase) domain. Amongst bacterial LPAAT-domain-containing proteins, the best characterized examples are PlsC and OlsA, the enzymes responsible for the final step in the biosynthesis of the anionic phospholipid phosphatidic acid (PA) and of the ornithine/glutamine-containing aminolipids, respectively [3, 10, 30]. The structure of PlsC has recently been solved, showing an in silico docked LPA lipid together with the fatty acid in an acyl carrier protein (ACP) [4]. Multiple sequence alignment of SalA, PlsC, and OlsA shows the presence of two conserved sequence motifs (Fig. 5), representing the catalytic center (HX4/5D) and the substrate co-ordination center (FP[E/S]G[T/V]), respectively. Notably, both PlsC and OlsA have the conserved HX4D motif whereas SalA has an HX5D motif. Interestingly, the key Lys105 reported in PlsC, thought to be responsible for electrostatic interactions via its backbone amide nitrogen with the negatively charged oxygen of the ACP-fatty acid intermediate, is replaced by Arg135 in SalA. The LPA phosphate head group is thought to be coordinated by Arg159 in PlsC; the sequence alignment, however, shows Val189 at the corresponding position in SalA. In order to further investigate the implications of the sequence alignment, we obtained a homology model of SalA.
The model shows the catalytic HX5D motif to be structurally comparable to that of PlsC despite the additional residue, with His109 and Asp115 adjacent to each other, analogous to the arrangement in PlsC (Supplementary Fig. S4). In silico docking of a lyso-SAL lipid molecule into the model demonstrated a possible pose with the lyso-lipid hydroxy group adjacent to His109 (Supplementary Fig. S4) and with Arg135 suggested to coordinate the sulfonate head group. The conformationally flexible alkyl chain was able to adopt many configurations, but the polar head group was docked consistently in the same region. Overall, the data suggest a diversification in function of LPAAT family enzymes during evolution, with SalA representing a novel member of this group. The presence of the unique HX5D motif in SalA allowed us to determine the distribution of SAL biosynthesis in environmental metagenomes and metatranscriptomes (see below). ### SAL production in Phaeobacter inhibens DSM 17395 is involved in biofilm formation We next investigated the role of SAL lipids in the physiology of roseobacters. The loss of SAL lipids had no clear effect on the growth of the bacterium: both the wild type and the salA mutant of Phaeobacter inhibens DSM 17395 had comparable growth rates and reached similar final cell densities in marine broth medium (Supplementary Fig. S5a). An important change of lifestyle for roseobacters is the switch from planktonic growth to biofilm formation, which triggers a particle-associated life strategy that is ecologically relevant for their survival in the natural environment [31]. It has been shown previously that many roseobacters, including Phaeobacter inhibens DSM 17395, are able to form a biofilm, and a 65 kb plasmid in this bacterium was important for biofilm formation [15, 16]. Interestingly, we observed that the salA mutant has a significantly reduced ability to form biofilms when in contact with solid surfaces, such as glass (Fig.
6) and plastics (Supplementary Fig. S5b). Both the biovolume on the glass surface and the thickness of the biofilm are significantly reduced in the salA mutant strain in the early phase of biofilm formation (3 h) and in the later stages (24 and 48 h) of biofilm maturation (Fig. 6). The 65 kb biofilm plasmid was confirmed to be present in the salA mutant (Supplementary Fig. S5c). Thus, the significantly reduced ability of the salA mutant to form biofilms suggests that this lipid may play a key role in roseobacters in their natural environment. ### Distribution of the new acyltransferase SalA in the Tara Oceans metagenomes and metatranscriptomes To better understand the distribution of SAL in environmental microbial assemblages, we searched the Tara Oceans metagenome and metatranscriptome datasets using SalA (locus tag SPO0716 of R. pomeroyi DSS-3) as the query. We experimentally determined the e-value cut-off to be e−40, at which value the search selectively retrieves LPAAT homologs belonging to SalA but not OlsA or PlsC. The environmental SalA homologs obtained from the Tara Oceans metagenome and metatranscriptome datasets were aligned, and the key sequence motifs were manually examined. In particular, the HX5D motif is strictly conserved in all SalA sequences retrieved from the Tara Oceans datasets, providing strong support that these environmental sequences belong to the SalA clade rather than to PlsC or OlsA. On average, between 2 and 4% of microbial cells are estimated to have the potential for SAL biosynthesis; this is comparable to the abundance of the olsA gene but somewhat lower than that of the plcP gene in the same dataset, suggesting SAL biosynthesis is less prevalent than the PlcP-mediated lipid remodeling pathway [9, 10].
This is likely due to the fact that SALs are primarily found in marine roseobacters but not in other dominant marine Alphaproteobacteria, such as the abundant bacterium Pelagibacter ubique of the SAR11 clade, which is capable of PlcP-mediated lipid remodeling [9, 11]. Indeed, the majority (>85%) of the SalA sequences from the Tara Oceans dataset were classified as members of the Rhodobacteraceae in both Tara Oceans metagenomes and metatranscriptomes (Fig. 7), and a thorough search of 120 genome-sequenced Rhodobacteraceae confirmed the wide occurrence of salA in all ten clades of the roseobacters (Supplementary Fig. S6 [32]). ## Discussion Here, we identify a novel aminolipid containing an aminopropane sulfonic acid head group that is widespread amongst marine roseobacters. The presence of a sulfonate group means this SAL lipid also falls into the broad category of sulfonolipids. The most abundant, and arguably one of the best studied, lipids of this type is sulfoquinovosyl diacylglycerol (SQDG), which is present in the membranes of most oxygenic phototrophs [33] as well as some heterotrophic bacteria [34]. SQDG likely plays a structural role in photosynthetic membranes, since crystal structures of photosystem proteins show specific binding of this lipid [35]. Other sulfolipids appear to elicit potent responses when certain organisms are exposed to them. Thus, a sulfolipid produced by a number of copepod species was found to induce toxin production in the dinoflagellate Alexandrium minutum [36], likely as a defense against predation. Conversely, a sulfonolipid produced by the Bacteroidetes bacterium Algoriphagus machipongonensis induced the development of multicellularity in a choanoflagellate [37]. Both examples suggest that sulfolipids are used by the sensing organism as a marker for the presence of another organism with which it interacts (either as a predator or as a symbiont).
The fact that sulfolipids appear to be relatively rare across the tree of life likely makes them well suited to mediate such chemical interactions, where a high degree of specificity is required. Lipids similar to those produced by A. machipongonensis have been described in a number of Bacteroidetes, particularly amongst Cytophaga [38,39,40]. They tend to be localized to the outer membrane and seem to play a role in the gliding motility of these organisms [41, 42]. The sulfonolipids from Bacteroidetes differ from those that we describe here in roseobacters in that they are composed of a base, termed capnine, similar to the sphingoid bases of sphingolipids, which may be N-acylated to form the full sulfonolipid [38]. In this way they are structurally similar to sphingolipids, whereas the SALs of the roseobacter group are more similar to aminolipids such as ornithine lipid and glutamine lipid (Fig. 5). Whether the SAL lipid plays a role in interspecies interactions requires further work. However, we have already observed that this lipid is involved in biofilm formation in Phaeobacter inhibens DSM 17395 (Fig. 6), suggesting that formation of this SAL lipid may play an important role in the adaptation of marine roseobacters to a biofilm lifestyle. A survey of the distribution of SAL among isolates from the roseobacters indicated that the ability to produce this lipid is widely distributed within the group. One strain, D. shibae, taxonomically the most basal of the strains examined, lacked any SAL under the conditions assessed, as did the outgroup strain Stappia stellulata. The absence of SAL in these strains suggests they lack the capacity to produce this lipid, as the other roseobacters examined seem to produce SAL constitutively. However, it is possible that these strains do have the capacity to produce SAL, but only do so under certain conditions. This pattern is observed for ornithine lipid, which is produced constitutively in some bacteria, such as R.
pomeroyi DSS-3 [10], but in others is only produced as a response to P-depletion [11, 43]. Indeed, a close salA homolog was found in the genome of D. shibae (Dshi_0206), but it is absent from S. stellulata. Although we have identified the LPAAT enzyme, SalA, involved in the last step of synthesis of this new sulfur-containing aminolipid, the key steps and genes involved in the synthesis of the lyso-SAL lipid remain to be determined. It is likely that SAL synthesis occurs in a manner analogous to that of ornithine and glutamine lipids. As such, 3-hydroxy fatty acids would be required as a substrate for the first step in SAL synthesis [44]. Such a hypothesis implies that the aminopropane sulfonic acid moiety is also directly produced by the marine roseobacters, since no exogenous supply was provided. The presence of 3-aminopropane sulfonic acid (a.k.a. homotaurine) has been documented in some red algae [45, 46] and unicellular green algae (prasinophytes such as Ostreococcus and Micromonas [47]) but, to the best of our knowledge, never previously in bacteria. However, a hydroxylated form of 2-aminopropane sulfonic acid, cysteinolic acid, has been found in a variety of marine phytoplankton and heterotrophic bacteria, including Ruegeria pomeroyi DSS-3, although its biosynthetic pathway remains to be established [47]. Nevertheless, it is tempting to speculate that 2-aminopropane sulfonic acid is the hydrophilic head group of the new SAL observed in these marine roseobacters, and this certainly warrants further investigation. To sum up, this study describes a new class of lipids, which are an important component of the membranes of a number of marine Rhodobacteraceae. Comparative genomics of SAL-producing strains identified a novel acyltransferase (SalA), which is involved in the production of this lipid.
salA is widely distributed in marine microbial assemblages in the oceans and actively expressed in Tara Oceans metatranscriptomes, and its functional roles, in addition to biofilm formation, in these marine bacteria certainly warrant further investigation.
Let $f: R \rightarrow R[X]$ be the natural map, and let $I$ be an ideal of $R$. Show that $I \in {\rm Spec}(R) \iff I^{e} \in {\rm Spec}(R[X])$. This is an exercise in commutative algebra: Let $$R$$ be a commutative ring and let $$X$$ be an indeterminate; use the extension and contraction notation of 2.41 in conjunction with the natural ring homomorphism $$f: R \rightarrow R[X]$$, and let $$I$$ be an ideal of $$R$$. Show that $$I \in \operatorname{Spec}(R) \Leftrightarrow I^{e} \in \operatorname{Spec}(R[X])$$. My attempted proof is the following: Define a ring homomorphism \begin{aligned} \psi: & R[X] & \longrightarrow &(R / I)[X] \\ & \sum_{i=0}^{n} r_{i} X^{i} & \longmapsto & \sum_{i=0}^{n} \bar{r}_{i} X^{i} \end{aligned} with $$\operatorname{ker} \psi=\left\{\sum_{i=0}^{n} r_{i} X^{i}: r_{i} \in I, \forall i=0, \ldots, n\right\}=I[X]=I R[X]=f(I) R[X]=I^{e}$$ By the first isomorphism theorem, we have $$(R / I)[X] \cong R[X] / I[X]=R[X] / I^{e}$$ Thus $$I \in \operatorname{Spec}(R) \Leftrightarrow R / I$$ is an integral domain $$\Leftrightarrow(R / I)[X]$$ is an integral domain $$\Leftrightarrow R[X] / I^{e}$$ is an integral domain $$\Leftrightarrow I^{e} \in \operatorname{Spec}(R[X])$$ I have several questions: i) Why is $$\operatorname{ker} \psi=I[X]$$, $$I[X]=I R[X]$$, $$I R[X]=f(I) R[X]$$ ii) And why is $$(R / I)[X] \cong R[X] / I[X]$$ iii) Why is $$R / I$$ an integral domain $$\Leftrightarrow(R / I)[X]$$ an integral domain Thank you very much. 1 Answer For the first question: The map $$f:R\rightarrow R[X]$$ is the inclusion, meaning $$a\in R$$ is mapped to the constant polynomial $$a\in R[X]$$. Therefore, $$I=f(I)$$. Now, since $$I\subseteq R$$, the ideal $$I$$ is the same as the ideal $$IR$$. First of all, since $$I$$ is closed under multiplication by elements of $$R$$ and under addition, $$IR\subseteq I$$. Now let $$i\in I$$. Since $$1\in R$$, $$1i=i\in IR$$, and so $$I=IR$$. Therefore $$I[X]=IR[X]=f(I)R[X]$$.
For the second question: We take the following map $$\varphi:R[X]\rightarrow (R/I)[X]$$ given by $$\varphi(\sum_{i=0}^na_ix^i)=\sum_{i=0}^n\bar{a_i}x^i$$. Obviously it's surjective. Remember that a polynomial is zero iff all its coefficients are zero, so: $$\operatorname{Ker}(\varphi)=\{f(x)\in R[X]:\varphi(f)=\bar{0}\}=\left\{\sum_{i=0}^na_ix^i:\bar{a_i}=\bar{0}\ \forall i\right\}=\left\{\sum_{i=0}^na_ix^i:\forall i,\ a_i\in I\right\}=I[X]$$. So we have a surjective map $$\varphi:R[X]\rightarrow (R/I)[X]$$ with kernel $$I[X]$$, so the first isomorphism theorem gives us $$R[X]/I[X]\simeq (R/I)[X]$$ - the idea is that we have a surjective map, and we took all the polynomials that are mapped to zero, and by taking the quotient you now say "all those polynomials are equal to zero in the new ring", which makes your original map also injective and therefore an isomorphism of rings. • Thank you very much. But I don't know why " $R / I$ is an integral domain $\Leftrightarrow(R / I)[X]$ is an integral domain. " Can you help me with this problem? Jul 10, 2021 at 15:57 • math.stackexchange.com/questions/2604247/… – GBA Jul 10, 2021 at 19:14
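Regarding question (iii), which the answer addresses only via a link: the standard argument, sketched here for completeness with $D = R/I$, is that a polynomial ring over an integral domain is again an integral domain because leading coefficients multiply:

```latex
% If D is an integral domain and f, g \in D[X] are nonzero with
%   f = a_m X^m + \dots,  g = b_n X^n + \dots  (a_m, b_n \neq 0),
% then the coefficient of X^{m+n} in fg is a_m b_n \neq 0, so fg \neq 0.
% Hence D[X] has no zero divisors.
% Conversely, D embeds in D[X] as the constant polynomials, and any
% subring of an integral domain is again an integral domain.
\[
fg = a_m b_n\,X^{m+n} + (\text{lower-order terms}), \qquad a_m b_n \neq 0 .
\]
```

Note this argument uses commutativity and the absence of zero divisors in $D$; it is exactly where the hypothesis that $I$ is prime enters.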
SMTLIB - Maple Help SMTLIB (.smtlib) File Format SMTLIB file format Description • SMT-LIB (Satisfiability Modulo Theories LIBrary) is an interface language intended for use by programs designed to solve SMT (Satisfiability Modulo Theories) problems. • It is reminiscent of LISP in design and appearance. • The SMTLIB package provides tools for generating SMTLIB input from Maple expressions. • The general-purpose command Export supports this format. SMT-LIB Logics Supported in Maple • SMT-LIB scripts require the underlying logic to be explicitly specified by name from a list of logics defined in the SMT-LIB standard. • For this reason, those Maple commands which generate SMT-LIB allow the logic to be specified (as a string). The following SMT-LIB logics are supported by these commands. • For more details on these logics, see the SMT-LIB Standard.
QF_UF: Unquantified formulas built over Boolean-valued symbols.
QF_LIA: Unquantified linear integer arithmetic; in essence, Boolean combinations of inequations between linear polynomials over integer variables.
QF_NIA: Quantifier-free integer arithmetic.
QF_LRA: Unquantified linear real arithmetic; in essence, Boolean combinations of inequations between linear polynomials over real variables.
QF_NRA: Quantifier-free real arithmetic.
LIA: Closed linear formulas over linear integer arithmetic.
LRA: Closed linear formulas in linear real arithmetic.
Examples Translate a Boolean expression to SMT-LIB format in the QF_UF logic.
> SMTLIB:-ToString(a xor b implies c, logic = "QF_UF");

    "(set-logic QF_UF) (declare-fun a () Bool) (declare-fun b () Bool) (declare-fun c () Bool) (assert (=> (xor a b) c)) (check-sat) (exit)"    (1)

Translate a polynomial equation to SMT-LIB format in the logic of (possibly nonlinear) integer arithmetic.

> SMTLIB:-ToString(x^3 + x + 1 = 0, logic = "QF_NIA");

    "(set-logic QF_NIA) (declare-fun x () Int) (assert (= (+ (* x x x) x 1) 0)) (check-sat) (exit)"    (2)

References [SMT-LIB Standard] Clark Barrett, Pascal Fontaine, and Cesare Tinelli, The SMT-LIB Standard: Version 2.5, Department of Computer Science, The University of Iowa, 2015.
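The scripts shown in (1) and (2) follow a fixed skeleton: set-logic, declarations, assertion, check-sat, exit. As a rough illustration of that S-expression layout (a hand-rolled Python sketch, not part of Maple or its SMTLIB package), output (1) can be reproduced like this:

```python
# Hypothetical helper that emits an SMT-LIB script in the same shape
# as Maple's SMTLIB:-ToString output. All names here are illustrative.
def smtlib_script(logic, decls, assertion):
    lines = [f"(set-logic {logic})"]
    for name, sort in decls:
        # Nullary declare-fun, i.e. a plain constant of the given sort
        lines.append(f"(declare-fun {name} () {sort})")
    lines.append(f"(assert {assertion})")
    lines += ["(check-sat)", "(exit)"]
    return " ".join(lines)

script = smtlib_script("QF_UF",
                       [("a", "Bool"), ("b", "Bool"), ("c", "Bool")],
                       "(=> (xor a b) c)")
```

The single-space joining mirrors the flat string returned in example (1); a real exporter might emit one command per line instead.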
## Mathematics Seminar Series 2018 Title:     Enumeration formulae for self-dual, self-orthogonal and complementary-dual cyclic additive codes Speaker:    Dr. Anuradha Sharma, Associate Professor, IIIT-Delhi Date:    November 15, 2018 (Thursday) Abstract:    Cyclic additive codes over finite fields form an important family of error-correcting codes and are natural generalizations of cyclic codes. These codes have rich algebraic structures and have nice connections with quantum stabilizer codes. In this talk, we will study their dual codes with respect to three different trace bilinear forms. We will also present enumeration formulae for self-dual, self-orthogonal and complementary-dual cyclic additive codes, which are useful in classifying these three classes of cyclic additive codes up to equivalence. Title:     Perfect Powers in Binary Recurrence Sequences Speaker:    Dr. Shanta Laishram, Associate Professor, Indian Statistical Institute, Delhi Date:    November 9, 2018 (Friday) Title:     The algorithm which took us to the moon Speaker:    Dr. Sanat K Biswas, Assistant Professor, IIIT-Delhi Date:    November 2, 2018 (Friday) Abstract:    This talk is about an estimation algorithm which was developed for the Apollo missions around 50 years ago. Over the years, this algorithm has found its way into almost every discipline of engineering and science. This ubiquitous estimator, the Kalman Filter, is used in a wide range of practical applications, from navigation to stock market price estimation. The mathematics behind the Kalman Filter is the topic of this discussion. I will talk about the Linear and Extended Kalman Filter. Then I will present the modifications that have been done to handle non-linearity in estimation problems.
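As a concrete illustration of the linear Kalman filter the abstract refers to, here is a minimal scalar (one-dimensional) predict/update cycle; the noise parameters and data are illustrative assumptions, not from the talk:

```python
# Minimal 1-D Kalman filter: estimate a constant from noisy readings.
# q (process noise) and r (measurement noise) are made-up values.
def kalman_1d(measurements, x0=0.0, p0=1.0, q=1e-5, r=0.1):
    x, p = x0, p0
    for z in measurements:
        p = p + q               # predict: uncertainty grows
        k = p / (p + r)         # Kalman gain
        x = x + k * (z - x)     # update: blend prediction and measurement
        p = (1 - k) * p         # updated uncertainty
    return x

est = kalman_1d([5.1, 4.9, 5.2, 4.8, 5.0])  # readings scattered around 5
```

With each measurement the gain k shrinks, so later readings nudge the estimate less; the result converges toward the true value as more data arrive.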
Finally, I will discuss a few estimators that are termed "beyond Kalman Filter". Title:     Finite Element Methods for Elliptic Distributed Optimal Control Problems with Pointwise Control and State Constraints Speaker:    Dr. Kamana Porwal, Assistant Professor, Indian Institute of Technology, Delhi Date:    October 23, 2018 (Tuesday) Abstract:    In this talk, we study conforming and nonconforming finite element methods for elliptic distributed optimal control problems with pointwise state and control constraints. The state and control constrained minimization problem is solved for the state variable by reducing it into a fourth order variational inequality, and convergence of the state error is established in the $H^2$-like energy norm. The key ingredients are constraint preserving properties of the interpolation operator and the enriching map. We also discuss post-processing methods to obtain the approximation of the control from the discrete state. Finally, we present numerical results to illustrate the theoretical findings. Title:     What is Logic? Speaker:    Prof. Mihir Chakraborty, Retired Professor, Department of Pure Mathematics, University of Calcutta Date:    October 22, 2018 (Monday) Abstract:    The title should have been "What are logics?" It is an interesting discovery to me that Google underlines with red whenever I write "logics" but does not do so when I write "logic". This would be exactly my topic of discussion. I will give a very broad characterization of the term, following basically Tarski. Then I will develop a few mathematical consequences of this definition and show some typical examples from both the Western as well as Eastern traditions. Finally, I would like to argue with a few examples in favour of the claim that it makes real sense to attach the mark of plurality to the word 'Logic'.
Title:     Models and Algorithms for space efficient algorithms Speaker:    Venkatesh Raman, Institute of Mathematical Sciences Date:    October 11, 2018 (Thursday) Abstract:    Read-only memory is a classical model used to understand space-time tradeoffs. We first look at algorithms for sorting and selection in this model using a small amount of extra space. Then we consider fundamental graph algorithms like BFS and DFS in this model using space considerably less than that used in classical algorithms. For problems like sorting (and BFS, DFS) where some output is desired, we assume that there is a write-only output tape where the output is written. We conclude with a discussion of algorithms in recent models like the "in-place" and "restore" models. Title:     Symmetrically-Normed Ideals and Characterizations of Absolutely Norming Operators Speaker:    Dr. Satish K. Pandey from the University of Waterloo, Canada Date:    October 9, 2018 (Tuesday) Abstract:    We begin by presenting a spectral characterization theorem that settles Chevreau's problem of characterizing the class of absolutely norming operators --- operators that attain their norm on every closed subspace. We next extend the concept of absolutely norming operators to several particular (symmetric) norms and characterize these sets. In particular, we single out three (families of) norms on $\mathcal B(\mathcal H, \mathcal K)$: the "Ky Fan $k$-norm(s)", the "weighted Ky Fan $\pi, k$-norm(s)", and the "$(p,k)$-singular norm(s)", and thereafter define and characterize the set of absolutely norming operators with respect to each of these three norms. We then restrict our attention to the algebra $\mathcal B(\mathcal H)$ of operators on a separable infinite-dimensional Hilbert space $\mathcal H$ and use the theory of symmetrically-normed ideals to extend the concept of norming and absolutely norming from the usual operator norm to arbitrary symmetric norms on $\mathcal B(\mathcal H)$.
In addition, we exhibit the analysis of these concepts and present a constructive method to produce symmetric norm(s) on $\mathcal B(\mathcal H)$ with respect to each of which the identity operator does not attain its norm. Finally, we introduce the notions of "universally symmetric norming operators" and "universally absolutely symmetric norming operators" and characterize these classes. These refer to the operators that are, respectively, norming and absolutely norming with respect to every symmetric norm on $\mathcal B(\mathcal H)$. In effect, we show that an operator in $\mathcal B(\mathcal H)$ is universally symmetric norming if and only if it is universally absolutely symmetric norming, which in turn is possible if and only if it is compact. In particular, this result provides an alternative characterization theorem for compact operators on a separable Hilbert space. Title:     Rectilinear Crossing Number of Uniform Hypergraphs Speaker:    Rahul Gangopadhyay (Ph.D. student, CSE department, IIIT-D) Date:    October 5, 2018 (Friday) Abstract:    A graph is a collection of vertices and edges spanned by these vertices. An embedding of a graph G=(V, E) in the plane is a mapping of its vertices to points in general position in $\mathbb{R}^2$, with the edges joined by simple curves. Given a graph $G=(V, E)$, we define a rectilinear drawing of $G$ as an embedding of $G$ in $\mathbb{R}^2$ such that the vertices are mapped to points in general position in $\mathbb{R}^2$ and the edges are drawn as straight line segments joining the corresponding vertices. Euler's formula gives a condition to check the planarity of a graph: indeed, a simple connected planar graph with $n$ vertices can contain at most $3n-6$ edges. In an embedding of a graph, two edges are said to be crossing if they are vertex disjoint and they intersect. Fáry proved that any planar graph has a rectilinear drawing such that no two of its edges cross.
The crossing number of a graph is defined as the minimum number of crossing pairs of edges among all embeddings of it. The crossing number inequality states that for a simple undirected graph G with |E| > 4|V|, the crossing number is greater than c|E|^3/|V|^2. A lot of research has been done on special graphs (e.g., complete graphs, complete bipartite graphs, complete tripartite graphs, product graphs of cycles) where the structure of the graph is well known. A uniform hypergraph is a natural generalization of a graph. A $k$-uniform hypergraph is a collection $(V, E)$ where each hyperedge in $E$ is a $k$-element subset of $V$. A $d$-dimensional rectilinear drawing of a $d$-uniform hypergraph (also termed a geometric hypergraph) is an embedding of the hypergraph in $\mathbb{R}^d$ where vertices are placed as points in general position in $\mathbb{R}^d$ and hyperedges are drawn as $(d-1)$-simplices. In such an embedding, two hyperedges are said to have a non-trivial intersection if they contain a common point in their relative interiors. Two vertex disjoint non-trivially intersecting hyperedges are said to be crossing. Dey and Edelsbrunner proved that a $3$-uniform geometric hypergraph always contains a non-trivial intersection if it has more than $n^2$ hyperedges. They also proved that a $3$-uniform geometric hypergraph always contains a crossing pair of hyperedges if it has more than $1.5n^2$ hyperedges. Later, Dey and Pach generalized this result to $d$-uniform hypergraphs. They proved that a $d$-uniform geometric hypergraph with $n$ vertices can have at most $O(n^{d-1})$ hyperedges if it does not contain a crossing pair of hyperedges. A $d$-dimensional convex drawing of a $d$-uniform hypergraph is a $d$-dimensional rectilinear drawing of the hypergraph with vertices in convex position in $\mathbb{R}^d$. In this talk, we focus on the complete $d$-partite $d$-uniform hypergraph and the complete $d$-uniform hypergraph.
In particular, we discuss lower and upper bounds on the $d$-dimensional rectilinear crossing number of these hypergraphs. We will also discuss some special embeddings of these hypergraphs (e.g., where vertices are in non-convex position, form a neighborly polytope, or lie on a $d$-dimensional moment curve). We will also focus on embedding the complete $3$-uniform hypergraphs in $\mathbb{R}^3$. We will also establish that the convex crossing number of $K_{n,n}$ is $n^4/6+\Theta(n^5)$. To obtain these results, we use techniques like the Gale transformation, $k$-sets and $k$-edges, the Ham-Sandwich theorem, etc. Title:    Tsirelson's problems and non-closure of the set of quantum correlations Speaker:    Jitendra Prakash Date:    September 24, 2018 (Monday) Abstract:    We consider a bipartite system with two observers, Alice and Bob, who are performing measurements in their labs. There are two models of quantum mechanics which describe the joint lab of Alice and Bob --- the quantum model and the commuting quantum model. Tsirelson's original question asked whether these two models were essentially the same. We show that these two models are different for bipartite systems with five quantum experiments and binary outcomes for each experiment, by using the notion of correlation functions of graphs. (This is a joint work with K. Dykema and V. I. Paulsen.) Title:    A visual-search model observer for detection-localization tasks in nuclear medicine Speaker:    Dr. Anando Sen, Research Investigator, MD Anderson Cancer Center Date:    September 04, 2018 (Tuesday) Abstract:    Model observers are mathematical models intended for performing diagnostic tasks. Scanning observers (based on point-by-point evaluation) have been proposed for detection-localization tasks in medical imaging, but handling anatomical noise with these observers can be challenging. We have introduced visual-search (VS) observers as an alternative.
The VS observer is a two-step process which mimics human perception through an initial search before a more detailed candidate analysis. Both the scanning and VS observers often outperform humans. We propose three additional means of bridging this human-model gap: (1) task equivalence between model and human observer studies, in particular using both functional and anatomical images for the tumor localization decision; (2) introducing inefficiencies into the model that a human might be affected by (e.g. internal noise, background approximation, perceptual thresholds and search noise); (3) moving from a dual-feature to a multi-feature adaptive VS observer that relies less on prior information, particularly the background. The applications studied were SPECT-CT and planar nuclear imaging. Detailed localization receiver operating characteristic (LROC) studies with human and model observers were performed. The area under the LROC curve was used for observer performance evaluation. Results indicate that the VS observer, applied with a combination of the factors mentioned above, can quantitatively match human performance. Title:    Infection spread and stability in random graphs Speaker:    Dr. Ghurumuruhan Ganesan from NYU Abu Dhabi Date:    August 21, 2018 (Tuesday) Abstract:    In the first part of the talk, we discuss infection spread in random geometric graphs where n nodes are distributed uniformly in the unit square centred at the origin and two nodes are joined by an edge if the Euclidean distance between them is less than r_n, the connectivity distance. Assuming that the edge passage times are exponentially distributed with unit mean, we obtain upper and lower bounds for the speed of infection spread in the sub-connectivity regime.
In the second part of the talk, we discuss the convergence rate of sums of locally determinable functionals of Poisson processes and establish bounds for the rates of convergence of spatial averages of such functions, in terms of the radius of determinability. Title:    Planar support for non-piercing regions and applications Speaker:    Rajiv Raman, Assistant Professor (CSE, CB, Applied Mathematics), IIIT-D Date:    August 10, 2018 (Friday) Abstract:    Given a hypergraph H=(X,E), a planar support is a planar graph G on the vertices X, such that for each hyperedge e in E, the induced subgraph of G on the vertices of e is connected. A set S of compact, connected regions in the plane is said to be non-piercing if for any pair of regions A,B in S, the sets A\B and B\A are both connected. Examples of non-piercing regions include disks, unit-height rectangles, homothets of convex sets, etc. Given two families of non-piercing regions R and B, the intersection hypergraph is a hypergraph whose vertex set is the family B of non-piercing regions, and each region r in R defines a hyperedge consisting of all regions in B intersecting the region r. In this talk, I will prove that intersection hypergraphs of non-piercing regions have a planar support that, further, can be computed in polynomial time. This result also has several applications, including unified PTASs for several packing and covering problems on non-piercing regions, as well as coloring hypergraphs of non-piercing regions. Title:    Multi-twisted codes over finite fields and their dual codes Speaker:    Varsha Chauhan, Ph.D. student in Mathematics Date:    July 26, 2018 (Thursday) Abstract:    Aydin and Halilovic (2017) introduced and studied multi-twisted (MT) codes over finite fields, which are generalizations of well-known classes of linear codes, viz. constacyclic codes and quasi-cyclic codes, having rich algebraic structures and containing record-breaking codes.
They also obtained multi-twisted codes with best-known parameters over GF(3), GF(5) and GF(7), and with optimal parameters over GF(7). Apart from this, they proved that the parameters over GF(5) and over GF(3) cannot be attained by constacyclic or quasi-cyclic codes, which suggests that this larger class of multi-twisted codes is a promising place to find codes with better parameters than the current best known linear codes. This motivated us to further study multi-twisted codes over finite fields. Title:    On the structure and distances of repeated-root constacyclic codes Speaker:    Tania Sidana, Ph.D. student in Mathematics Date:    July 24, 2018 (Tuesday) Abstract:    The main aim of coding theory is to construct codes that are easy to encode and decode, can detect and correct many errors, and contain a sufficiently large number of codewords. To study the error-detecting and error-correcting properties of a code with respect to various communication channels, several metrics (e.g. the Hamming metric, Lee metric, Rosenbloom-Tsfasman (RT) metric, symbol-pair metric, etc.) have been introduced and studied in coding theory.
# 4.15.2. Working with Data Sets

The following features mainly work with the Data Set.

## 4.15.2.1. Data Set Sensitivity

Data Sets that require a large number of jobs for the evaluation will usually be the bottleneck of every parameter optimization. This class provides the possibility to estimate the diversity of a set prior to the fitting process. This is done by evaluating multiple smaller, randomly drawn subsets of the original set and reporting their loss function values. The values can then be compared to the full data set's loss. One example where this can be useful is when data sets are somewhat homogeneous. In such cases it can be useful to search for a smaller subset before training, thus reducing the optimization time. A smaller subset is a compromise between size and error in loss function value as compared to the original set. The SubsetScan class can be used as an aid in such cases.

Assuming a Data Set instance ds with reference values, a Job Collection jc that can be used to generate the results needed for the evaluation of our data set, and a parameter interface x are defined:

```python
len(ds)         # 45600
len(ds.jobids)  # 45975

# Our data set is huge, let's see if it can be reduced without sacrificing much accuracy.
# Initialize with DataSet, JobCollection and ParameterInterface:
scan = SubsetScan(ds, jc, x, loss='rmse')
# This attribute stores the loss function value of the initial DataSet ds:
fx0 = scan.fx0
# Decide on the numbers of jobs we would like to consider for a subset:
steps = [100, 500, 1000, 2500, 10000, 25000, 35000, 40000]
# At each step, evaluate n randomly created subsets:
reps_per_step = 20
# Now start the scan:
fx = scan.scan(steps, reps_per_step)
# The result is an array of shape (len(steps), reps_per_step):
assert fx.shape == (8, 20)

# Let's visualize the results:
import matplotlib.pyplot as plt
plt.rcParams.update({'font.size': 20})
dim = fx.shape[-1]
for i in range(dim):
    plt.plot(steps, fx[:, i] / fx0)
plt.ylabel('fx/fx0')
plt.xlabel('Number of jobs in subset')
plt.xscale('log')
plt.tight_layout()
```

Note: If a results dictionary from JobCollection.run has previously been calculated and is available, SubsetScan can also be instantiated without a job collection and parameter interface:

```python
# Initialize with a results dictionary results:
scan = SubsetScan(ds, resultsdict=results, loss='rmse')
```

The resulting figure could look similar to the following, in this case highlighting that the reduction to a subset of 10000 jobs would lead to a relative error of under 5% when compared to the evaluation of the full data set.

Note: This example was created on a data set with only one property and equal weights for each entry. Real applications might not result in such homogeneous behavior.

API

class SubsetScan(data_set: scm.params.core.dataset.DataSet, job_collection: scm.params.core.jobcollection.JobCollection = None, par_interface=None, resultsdict: Dict = None, workers: int = None, use_pipe=True, loss='rmse')

This class helps in the process of identifying a Data Set's sensitivity to the total number of jobs by consecutively evaluating smaller randomly drawn subsets. The resulting loss values can be compared to the one from the complete data set to determine homogeneity and help with size reduction or diversification of the set (see documentation for examples).

__init__(data_set: scm.params.core.dataset.DataSet, job_collection: scm.params.core.jobcollection.JobCollection = None, par_interface=None, resultsdict: Dict = None, workers: int = None, use_pipe=True, loss='rmse')

Initialize a new scan instance.

data_set : DataSet
    The original data set instance. Will be used for subset generation. Reference values have to be present.
job_collection : JobCollection
    Job Collection instance to be used for the results calculation.
par_interface : BaseParameters
    A derived parameter interface instance; the associated engine will be used for the results calculation.
resultsdict : dict({'jobid' : AMSResults}), optional
    Instead of providing a job collection and parameter interface, an already calculated results dictionary can be passed. In this case the initial results calculation will be skipped. The dict should be an output of JobCollection.run().
workers : int
    When calculating the results, determines the number of jobs to run in parallel. Defaults to os.cpu_count()/2.
use_pipe : bool
    When calculating the results, determines whether to use the AMSWorker interface.
loss : Loss, str
    The loss function to be evaluated.

Important: Caution when using loss functions that do not average the error, such as the sum of squares error (sse). To ensure comparability, loss values must be invariant to the data set size.

The fx0 attribute will store the initial data set's loss function value.

scan(steps, reps_per_step=10)
    Start the scan for data set subsets.
    steps : List or Tuple
        A list of integers; each entry represents the number of jobs that the original data set will be randomly reduced to and then evaluated.
    reps_per_step : int
        Repeat every step n times, randomly drawing different entries to generate the subset.
    Returns fx : ndarray
        A 2d array of loss function values with the shape (len(steps), reps_per_step).

makesteps_exp(exponent: float, start: int = 10) → numpy.ndarray
    Generate a number of exponentially increasing subset sizes such that

```python
steps = []
while start <= len(ds.jobids):
    steps.append(int(start))
    start **= exponent
```

plotscan(steps, fx, filepath=None, ylim=None, xlogscale=True, boxwidths=None, backend=None)
    Create a boxplot for the given steps and fx values.
    steps : ndarray
        x values as returned by scan()
    fx : ndarray
        y values as returned by scan()
    filepath : str
        Path where the figure will be stored.
        If None, will plt.show() instead.
    ylim : Tuple[float, float]
        Lower/upper y limits on the plot.
    xlogscale : bool
        Apply logarithmic scaling to the x-axis. Choose depending on the spacing of steps.
    boxwidths : float or sequence of floats
        Use this setting to adjust the box width.
    backend : str
        The matplotlib backend to use.

## 4.15.2.2. Normalization of Data Set Weights

normalize_weights(ds: scm.params.core.dataset.DataSet, resultsdict: Dict[str, scm.plams.interfaces.adfsuite.ams.AMSResults], extractors: List[str] = None, set_best=True, loss='rmse', maxiter=1000, verbose=True)

Normalize a data set's weights by minimizing the standard deviation of all individual contributions to that data set. This is done through the optimization of a weights vector, where each weight is applied to all entries with the same extractor, e.g. for a data set that contains forces and energies, all entries' weights that contain the former extractor will be optimized to one value, and weights of the latter to a different value.

Note: New weights are applied through multiplication with the initial values, in order to preserve the shape, i.e., w_new = w_0 * x. Consider setting relevant initial weights to 1. if this is not the desired behavior.

```python
>>> jc = JobCollection()
>>> ...  # populate the job collection
>>> ds = DataSet()
>>> ...  # populate the data set and calculate the reference values,
>>> ...  # assuming we are adding energies, forces and charges
>>> results = jc.run(interface)  # run all jobs needed for the evaluation of ds
>>> minres = normalize_weights(ds, results, extractors=['energy', 'forces', 'charges'])
>>> minres.x
array([ 2.37, 27.59,  1.  ])
>>> ds.energy()[0].weight
2.37
>>> ds.forces()[0].weight
27.59
>>> ds.charges()[0].weight
1.
```

ds : DataSet
    Data Set instance to be evaluated.
resultsdict : dict
    A {name : AMSResults} dict with results that can be used to evaluate ds.
extractors : List of strings
    List of extractors that should be considered for the minimization.
    Should all be present in the Data Set; will raise an error otherwise. Will use all extractors in the ds by default.
set_best : bool
    Whether to set the weights after the optimization or not.
loss : str
    The ParAMS loss to use.
maxiter : int
    Maximum number of function evaluations.
verbose : bool
    Whether to print the initial and final losses (the standard deviation of the contributions vector).
Returns minres : scipy.optimize.OptimizeResult
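The exponential step generation documented for makesteps_exp above can be sketched as a standalone helper. This is a hypothetical re-implementation of the documented loop, not the ParAMS source; the job count is the one from the example (45975):

```python
# Hypothetical standalone version of the documented makesteps_exp loop:
# grow subset sizes exponentially until they would exceed the job count.
def makesteps_exp(n_jobs, exponent, start=10):
    steps = []
    while start <= n_jobs:
        steps.append(int(start))
        start **= exponent  # raise the (float) size to the given power
    return steps

steps = makesteps_exp(45975, 1.6)  # a handful of rapidly growing sizes
```

Each subsequent size is the previous one raised to `exponent`, so small exponents give a dense scan and large ones a coarse scan of subset sizes.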
# Four charges are arranged at the corners of a square ABCD of side d, as shown in Figure. Find the work required to put together this arrangement.

Since the work done depends on the final arrangement of the charges, and not on how they are put together, we calculate the work needed for one way of putting the charges at A, B, C and D. Suppose first the charge +q is brought to A, and then the charges –q, +q, and –q are brought to B, C and D, respectively. The total work needed can be calculated in steps: (i) Work needed to bring charge +q to A when no charge is present elsewhere: this is zero. (ii) Work needed to bring –q to B when +q is at A. This is given by (charge at B) × (electrostatic potential at B due to charge +q at A) $\quad= -q \times \bigg( \large\frac{q}{4 \pi \epsilon_0 d} \bigg) =-\large\frac{q^2}{4 \pi \epsilon_0 d}$ (iii) Work needed to bring charge +q to C when +q is at A and –q is at B. This is given by (charge at C) × (potential at C due to charges at A and B) $\quad= +q \bigg( \large\frac{+q}{4 \pi \epsilon_0 d \sqrt 2} +\frac{-q}{4 \pi \epsilon_0 d} \bigg)$ $\quad= \large\frac{-q^2}{4 \pi \epsilon_0 d} \bigg( 1- \large\frac{1}{\sqrt 2} \bigg)$ (iv) Work needed to bring –q to D when +q is at A, –q at B, and +q at C. This is given by (charge at D) × (potential at D due to charges at A, B and C) $\quad= -q \bigg( \large\frac{+q}{4 \pi \epsilon_0 d } +\frac{-q}{4 \pi \epsilon_0 d \sqrt 2}+ \frac{+q}{4 \pi \epsilon_0 d} \bigg)$ $\quad= \large\frac{-q^2}{4 \pi \epsilon_0 d} \bigg( 2- \large\frac{1}{\sqrt 2} \bigg)$ Adding the work done in steps (i), (ii), (iii) and (iv), the total work required is $\quad= \large\frac{-q^2}{4 \pi \epsilon_0 d}$$\bigg( 4 -\sqrt 2 \bigg)$ The work done depends only on the arrangement of the charges, and not on how they are assembled. By definition, this is the total electrostatic energy of the charges. (We may try calculating the same work/energy by taking the charges in any other order desired and convince ourselves that the energy will remain the same.)
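The closed-form result can be checked numerically by summing the pairwise interaction energies directly. The sketch below works in units where q²/(4πε₀d) = 1, so the expected total is simply −(4 − √2):

```python
import math

# Verify the assembly energy of the alternating +-q square numerically,
# working in units where q^2 / (4*pi*eps0*d) = 1 (side d = 1, |q| = 1).
corners = [(0, 0), (1, 0), (1, 1), (0, 1)]  # A, B, C, D
charges = [+1, -1, +1, -1]                  # +q, -q, +q, -q

energy = 0.0
for i in range(4):
    for j in range(i + 1, 4):
        r = math.dist(corners[i], corners[j])   # 1 along edges, sqrt(2) on diagonals
        energy += charges[i] * charges[j] / r

expected = -(4 - math.sqrt(2))  # closed form derived in the text
assert abs(energy - expected) < 1e-12
```

The four unit-distance pairs each contribute −1 and the two diagonal pairs each contribute +1/√2, giving −4 + √2, in agreement with the step-by-step assembly above.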