G184 HOW CAN WE RELIABLY MONITOR THIS EPIDEMIC OF OBESITY? DEVELOPMENT OF A METHODOLOGY
M. C. J. Rudolf1, R. G. Feltbower1, R. Levine2, A. Connor2, M. Robinson2. 1Leeds University, Leeds, UK; 2East Leeds PCT, Leeds, UK
Background: Government targets have been set to halt the rise in childhood obesity by the year 2010, but no systems are in place to measure whether this is achieved. School monitoring, apart from at school entry, has been discontinued, and routine data are often inaccurate, reported incompletely, or entered erroneously.
Aims: To develop a feasible and cost effective methodology for monitoring levels of obesity in school children.
Methods: Ten primary (five inner city, five suburban) and three high schools were selected. Ethics approval was obtained for "opt out" consent. Children in reception, year 4, and year 8 were measured. Data were converted to SD scores and analysed by age, socioeconomic status, and ethnicity. Percentages of obese (BMI >98th centile) and overweight (BMI >91st centile) children were calculated. Sample size calculations were undertaken to ascertain how many schools would be required to detect an increase in weight of 0.1 BMI SD per year, or an increase of 1% in the number of obese children, with 90% power.
Results: A total of 999 children were measured, with ascertainment of 95% in primary schools and 85% in high schools. Analysis confirmed the sample was representative of the city with respect to socioeconomic status and ethnicity. Data for each school year showed: mean BMI SD (95% CI) of 0.16 (0.05 to 0.28), 0.38 (0.26 to 0.50), and 0.53 (0.37 to 0.69), respectively; a slight increase over previous local data from 1996–2001; percentage obese 7.1%, 10.3%, and 12.9%; and percentage overweight 14.7%, 21.8%, and 24%. No significant trends were observed for socioeconomic status or ethnicity.
Sample size calculations indicated that 460–580 children per age group would be needed per year over 4 years to demonstrate an increase of 0.1 SD per year, and 1480–2795 children per year to show an increase of 1% in obese children.
Conclusion: It is feasible to monitor the epidemic using appropriately selected marker schools. Fewer than 10% of schools in the city would be required to monitor trends with confidence. BMI SD is a more feasible measure to use than the number of obese or overweight children in the population.

G185 BEHAVIOURAL OUTCOME AT ONE AND SIX MONTHS AFTER SEVERE/MODERATE AND MILD TRAUMATIC BRAIN INJURY IN CHILDHOOD: RELATIONSHIP TO PRE-INJURY BEHAVIOUR
H. E. Miller1, A. L. Curran1, A. Brownson1, R. J. McCarter1, L. P. Hunt2, P. M. Sharples3. 1Kids Head Injury Study, Frenchay Hospital, Bristol, UK; 2Institute of Child Health, Bristol Royal Hospital for Children, Bristol, UK; 3Kids Head Injury Study, Frenchay Hospital, and Institute of Child Health, Bristol Royal Hospital for Children, Bristol, UK
Introduction: Traumatic brain injury is a major cause of paediatric hospital admissions. Disturbed behaviour is well recognised in traumatic brain injured children, but it is uncertain whether this is due to the injury or reflects pre-morbid functioning.
Aims: To define behavioural outcome in traumatic brain injured children compared with controls, and to relate pre-injury behaviour in traumatic brain injured children to behaviour in controls and to behaviour post-traumatic brain injury.
Methods: Longitudinal prospective study of a cohort of children admitted to hospital for traumatic brain injury compared with non-injured controls matched for age, sex, and socioeconomic status. The Glasgow Coma Score (GCS) on admission was used to classify traumatic brain injury into severe (GCS 3–8), moderate (GCS 9–12), and mild (GCS 13–15) categories. Pre- and post-injury behaviour was assessed using the parent report form of the Achenbach Child Behaviour Checklist (CBCL).
Statistical analysis was by two-way repeated measures ANOVA and one-way ANOVA with Scheffé's multiple comparison measures.
Results: Eighty six traumatic brain injured children and 47 controls were recruited. Mean age of the traumatic brain injured children was 11.0 years (SD 3.8); mean age of controls was 11.1 years (3.8). For post-injury behaviour, there were significant differences at 1 and 6 months between severe/moderate and mild traumatic brain injured children and controls for all aspects of the CBCL, that is, externalising index, internalising index, social competence, and total problem score (p<0.001 for all comparisons). For pre-injury behaviour, there were significant differences between the traumatic brain injured groups and control children for the CBCL externalising index (p=0.01) but not for the internalising index (p=0.30), social competence (p=0.51), or total problem score (p=0.08). Comparison of pre- and post-injury behaviour up to six months in traumatic brain injured children showed no significant changes in the CBCL externalising index (p=0.35) but significant changes in the internalising index (p=0.01), social competence (p<0.001), and total problem score (p=0.003); in the latter analysis the two traumatic brain injury groups did not differ significantly.
Conclusion: There are significant differences between traumatic brain injured children and controls for a range of behaviours. Group differences in externalising behaviours appear to reflect pre-morbid behavioural status, but this is not the case for internalising behaviours or social competence, which appear due to traumatic brain injury.

G186 SYSTEMATIC REVIEW OF THE BENEFITS AND HAZARDS OF CO-SLEEPING IN INFANCY
A. R. Crofton, P. J. Helms, A. S. Poobalan. University of Aberdeen, Aberdeen, UK
Introduction: Co-sleeping is a heterogeneous practice, and case control studies of risk factors for sudden infant death syndrome (SIDS) give different estimates of the odds ratio associated with this practice.
Sharing a bed with infants has been promoted in order to facilitate breastfeeding, especially in the early infant period. Other potential benefits have also been suggested.
Methods: Databases (Medline, EMBASE, CINAHL, Cochrane, ASSIA, Psychology and Behavioural Sciences Collection (EBSCO), CAS, ZETOC) were searched and authors were contacted. Only case control and cohort studies were included for hazards, any study design was considered for benefits, and cross sectional, cohort, and case control studies for prevalence. Quality assessment was by the Newcastle-Ottawa scale.
Results: Bed sharing was found to be an increasingly common, if heterogeneous, practice in the UK. In terms of risk, all significant adjusted odds ratios were >1; that is, co-sleeping was not found to be protective in any studies. Smoking and bed sharing interacted in almost all studies where interaction and/or stratification was accounted for. Five of six estimates where non-smoke exposed infants were included showed no increased risk from bed sharing. Age stratification was performed in four studies, although data from two of these studies were included in a third. Substantially increased risks were evident for premature and low birth weight infants. No studies of appropriate design examined non-SIDS risk of death. Mothers were more likely to breastfeed if they bed shared, and to wean later. Some populations exhibited low levels of bed sharing coexisting with a high prevalence of breastfeeding, but not the UK. Other data relating to parental sleep quality, bonding, child sleep problems, and postnatal depression were also discussed. No studies found an increased risk of morbidities, such as risk of infection or hospital admission.
Conclusions: Bed sharing is an independent risk factor for SIDS, although it is not possible to identify a precise estimate of the age at which it ceases to be so.
Bed sharing should be actively discouraged where there is parental smoking or recent alcohol ingestion, and where the infant is premature or of low birthweight. Although causation has not been demonstrated, given the close correlation of bed sharing with breastfeeding, it must be assumed that a successful campaign against bed sharing in infancy would adversely affect the prevalence of breastfeeding.

G187 EVALUATION OF THE UNICEF UK BABY FRIENDLY INITIATIVE FOR THE PROMOTION OF BREASTFEEDING: FINDINGS FROM THE MILLENNIUM COHORT STUDY
S. E. Bartington, L. J. Foster, C. Dezateux. Institute of Child Health, London, UK
Aims: To evaluate the influence of the UNICEF UK baby friendly initiative on breastfeeding initiation using data from the Millennium Cohort Study (MCS).
Methods: The MCS is a disproportionately stratified sample of children born between September 2000 and January 2002 in the four countries of the UK. Maternal report of breastfeeding, ethnic group, academic qualifications, and socioeconomic status was obtained 9 months after birth, and analysed for 18 147 natural mothers of singletons. Maternity hospitals of birth were identified for 17 359 mothers and classified according to the level of participation in the UNICEF UK baby friendly initiative at March 2001 (England and Wales) and June 2001 (Scotland and Northern Ireland). Initiation of breastfeeding was defined as the proportion of all mothers who put their baby to the breast, even if this is on one occasion only.1 Logistic regression and multilevel modelling were used to explore the influence of individual, community, and hospital level factors on breastfeeding initiation.
Results: In the UK, 70% of mothers initiated breastfeeding; this was highest in England (72%) and lowest in Northern Ireland (51%). Ethnicity, socioeconomic status, academic qualifications, parity, and lone parent status were significantly associated with breastfeeding initiation.
The proportion of MCS births in maternity units that held the full baby friendly accreditation award was highest in Scotland (21%), followed by Northern Ireland (10.4%), Wales (4.5%), and England (2.9%). In a preliminary analysis, delivery in a maternity unit that held the full baby friendly accreditation award was associated with higher rates of breastfeeding initiation, after adjustment for social, demographic, and cultural factors.
Conclusion: Breastfeeding initiation rates vary significantly between the four UK countries and by maternal, community, and hospital characteristics. Mothers who delivered in hospitals holding the full baby friendly accreditation award were more likely to initiate breastfeeding. These findings provide support for hospital engagement with the UNICEF UK baby friendly initiative. Further analyses will examine the influence of this initiative upon breastfeeding duration. The MCS is funded by the ESRC and a consortium of government funders. This work was supported by the Child Health Research Appeal Trust and the International Centre for Child Studies.

G188 IMMUNISATION UPTAKE AMONG INFANTS IN THE UK: FINDINGS FROM THE MILLENNIUM COHORT STUDY
L. Samad, H. Bedford, R. Tate, C. Dezateux, C. Peckham. Institute of Child Health, London, UK
Aims: To determine maternal and sociodemographic factors associated with incomplete uptake of primary immunisation among infants in the UK.
Methods: A prospective cohort study of children born in the UK from September 2000 to August 2001 in England and Wales, and from November 2000 to January 2002 in Scotland and Northern Ireland. A disproportionately stratified cluster sampling design was used to over-represent disadvantaged wards (fourth quartile of the child poverty index) and ethnic wards (⩾30% Black or Asian). The analysis included 18 503 mothers with infants aged 9 months at interview. The main outcome measures were: fully immunised, incompletely immunised, and unimmunised infants (at age 9 months), as reported by mothers.
Results: Overall, 94.8% of infants (17 544/18 488) were fully immunised, 3.9% (712/18 488) had received an incomplete course, and 1.3% (232/18 488) were unimmunised. England had the highest weighted percentage of incompletely immunised (3.6%) and unimmunised (1.3%) infants (n=11 495). Mothers who were lone parents, resident in disadvantaged or ethnic wards, with large families, who had smoked during pregnancy, or who were teenagers were more likely to have incompletely immunised infants. As in the incompletely immunised group, unimmunised infants were more likely to have mothers who were resident in disadvantaged or ethnic wards, were lone parents, and had multiple children. However, mothers of unimmunised infants were significantly older, that is, ⩾40 years (risk ratio 2.3, 95% CI 1.3 to 4.0), and more likely to have been educated to degree level or above (risk ratio 1.9, 1.2 to 3.0), relative to mothers in the 20–29 years age group and those with no educational qualification. Medical reasons were cited as the main reason for incomplete immunisation by nearly 50% (328/697), whereas beliefs and attitudes towards immunisation were the main reason (92/228) given for an infant being unimmunised.
Conclusions: The majority of infants in this large cohort were reported to be fully immunised, with a small, although important, group either incompletely immunised or unimmunised. Age and education were significantly different for mothers of incompletely immunised and unimmunised infants. Efforts are needed to improve immunisation uptake among infants of socially disadvantaged mothers, as well as targeting appropriate interventions towards older and highly educated mothers, whose children tend to receive no immunisations at all.

G189 THE ASSOCIATION OF ETHNICITY WITH MMR UPTAKE IN YOUNG CHILDREN
R. Mixer1, D. H. Newsom2, K.
Jamrozic1. 1Imperial College, London, UK; 2Brent Teaching Primary Care Trust, London, UK
Aim: To determine the relation between ethnicity, other socioeconomic variables, and uptake of MMR vaccine.
Methods: MMR status for all children aged 18 months to 3 years, among 33 ethnic groups, was identified from a primary care trust's community information system database. Ethnic groups were ranked according to MMR uptake, and three larger groups representing high, intermediate, and low vaccine uptake were identified. Focus groups, drawn separately from each of these three ethnic groups, explored factors affecting the mothers' decision making around immunisation. A total of six focus groups, two from each ethnic background, were conducted. Discussions were performed in a standard manner by a single researcher (RM). Audiotape and written questionnaire were used to record responses and establish other socioeconomic factors. Data were analysed using SPSS software.
Results: Overall, MMR vaccine uptake was 74.3% among the 6444 children identified. Asian children had the highest uptake (87.1%), African/Afro-Caribbean children intermediate uptake (74.7%), and white children the lowest (57.7%). Ethnic differences were highly significant (χ2=154.6, p<0.0001); however, other socioeconomic variables were not found to be significant (χ2=4.9, p>0.5). A total of 37 mothers took part in the six focus group interviews. The focus group discussions identified the following influences upon vaccination decisions. Media sources: Asian parents were more likely to source their media from Asian satellite networks, whereas Caucasian mothers used UK media and Internet sources. Family dynamics: the mother-in-law influenced decision making amongst many Asian families; involvement of the father was only seen among Caucasians. Health professionals' advice: Asian families trusted advice, whereas Caucasian and Afro-Caribbean parents did so only if the person was known to them.
Socioeconomic status: Caucasian parents were in a higher socioeconomic category than other parents. They thought that parents from higher socioeconomic groups were better informed about the MMR vaccine.
Conclusion: Ethnicity has a highly significant relationship with uptake of MMR; other socioeconomic variables are less important. Ongoing work is needed to restore parental confidence in the MMR vaccine.

G190 CHILDHOOD PEDESTRIAN INJURIES IN IRELAND: ARE SOCIODEMOGRAPHIC FACTORS IMPORTANT?
J. Walsh1, F. Trace2, A. J. Nicholson1, A. Kelly3. 1Our Lady of Lourdes Hospital, Drogheda, Ireland; 2National Roads Authority, Dublin, Ireland; 3Trinity College, Dublin, Ireland
Background: Motor vehicle crashes account for 1 in 5 of all childhood deaths, with pedestrian injuries accounting for one third of these.
Aims: To study all pedestrian injuries in under 18 year olds over a 7 year period (1996–2002) to ascertain the timing, light conditions, road conditions, age and sex of the child, and the nature of injuries sustained, and whether sociodemographic factors influenced the rate of pedestrian injuries.
Methods: For pedestrian injuries, police assistance is required; at the time, a detailed form is completed by the attending officer and sent to the NRA for analysis. Details regarding the severity of injury and the light and road conditions were collected. Injuries were subclassified as fatalities, serious (detained in hospital, fractures, severe head injury, severe internal injuries, or shock requiring treatment), or minor. Sociodemographic data were obtained via the Small Area Health Research Unit (SAHRU) system, with wards defined on a socioeconomic deprivation scale from 1 (most affluent) to 10 (most deprived). All data were entered onto an SPSS database and later analysed.
Results: Of 2461 pedestrian injuries, 94 (4%) were fatal, 384 (16%) were serious, and 1983 (80%) were minor. Males outnumbered females by a 3:2 ratio.
Median age of those injured was 10 years, with peaks in the under 6 years and 14–18 year old groups. A steady decline was evident from 1996 (430 injuries) to 2002 (305 injuries). 1740 (71%) injuries occurred in daylight with good visibility, and 2047 (83%) occurred in dry weather conditions. Injuries tended to occur in early summer, at weekends, in 30 mph zones, and where children were playing on the roadway or where crossing was masked by a parked vehicle. Deprivation indices via the SAHRU system showed that children in wards with the highest deprivation scores had five times the rate of pedestrian injury of those in more affluent wards (p<0.001).
Conclusions: Childhood pedestrian injuries are declining, and occur largely in urban areas, during the daytime, in summer, and in dry conditions. Children in deprived wards have a fivefold increase in pedestrian injury.

G191 USE OF PERSONAL CHILD HEALTH RECORDS: FINDINGS FROM THE MILLENNIUM COHORT STUDY
S. Walton, H. Bedford, C. Dezateux. Institute of Child Health, London, UK
Aims: To establish Personal Child Health Record (PCHR) usage across the UK and to determine associations with demographic and socioeconomic factors.
Methods: The Millennium Cohort Study (MCS) was designed to understand the key influences on the health and wellbeing of children born in diverse social circumstances in the UK. The study included mothers whose children were born between September 2000 and January 2002, were living in the UK at 9 months of age, and were eligible for Child Benefit. The population was stratified by UK country and electoral ward type, in order to adequately represent children from disadvantaged circumstances, from minority ethnic backgrounds, and from Scotland, Northern Ireland, and Wales. Parents were interviewed at an average child age of 9 months and were asked to produce and consult their child's PCHR in order to answer questions about the child's last weight.
We considered there to be effective use of the PCHR if it was produced, consulted, and a record of the child's last weight was found. The main survey comprised 18 819 babies born to 18 553 families. Analyses (adjusted for survey design) were restricted to natural mothers and, in the case of multiple births, first born infants (n=18 503).
Results: Overall, 92% of mothers produced their child's PCHR, and the criteria for effective use were met by 85%. Of the PCHRs consulted, 97% had information relating to the child's last weight recorded. All these outcomes varied by UK country (highest in England and lowest in Scotland). Effective use of the PCHR was reduced in association with factors reflecting social disadvantage (living in disadvantaged wards, young mothers, large family size, low maternal educational attainment, lone parent status, maternal longstanding illness, and unplanned pregnancies). In addition, mothers whose children had been admitted to hospital were less likely to be using the PCHR effectively. The reasons for this were unclear.
Conclusions: This study is the first to explore PCHR use at a national level. Although the vast majority of mothers use the PCHR, its use is reduced among those living in disadvantaged circumstances and those whose child has been admitted to hospital. Both groups are likely to have greater healthcare needs. Effective use of the PCHR, and the benefits of its use, result from partnerships between parents and health professionals (as encouraged in the NHS Plan, the Bristol enquiry, and the children's Green Paper). If further improvements are to be made in PCHR use (as endorsed by the National Service Framework for Children), healthcare professionals should focus on their interactions with these two groups of families.

G192 PREVALENCE OF COMPLEMENTARY AND ALTERNATIVE MEDICINE USE IN A TERTIARY PAEDIATRIC HOSPITAL
N. W. Crawford1, D. R. Cincotta1, A. Lim2, C. V. E.
Powell1. 1Department of General Paediatrics, University Hospital of Wales, Cardiff, UK; 2Department of General Medicine, Royal Children's Hospital, Melbourne, Australia
Aim: To determine the prevalence of complementary and alternative medicine use among children and adolescents in a tertiary paediatric hospital's inpatient and outpatient population.
Methods: A structured, personal interview of 100 inpatients and 400 outpatients was conducted over a 2 month period in 2004. The yearly and monthly prevalence of complementary and alternative medicine use was assessed and divided into medicinal and non-medicinal therapies. This use was correlated with sociodemographic factors.
Results: There were 500 completed questionnaires out of an initial study population of 581. The use of at least one type of complementary and alternative medicine in the past year was 41% (95% CI 37 to 46%) and in the past month 26% (95% CI 23 to 30%). The yearly prevalence of medicinal complementary and alternative medicine was 38% (95% CI 33 to 42%) and of non-medicinal 12% (95% CI 9 to 15%). Users were more likely to have parents who were tertiary educated (mother: OR 2.3; 95% CI 1.6 to 3.3) and a family income >£30 000 (OR 4.0; 95% CI 1.7 to 9.2). The commonest medicinal complementary and alternative medicines were non-prescribed vitamins and minerals (23%) and herbal therapies (10%). Aromatherapy (5%) and reflexology (3%) were the commonest non-medicinal therapies. 74% of complementary and alternative medicine use was self initiated, and 62% cost less than £5 per month. 57% perceived at least one complementary and alternative medicine as helpful, and 5% experienced side effects. 66% did not disclose use to their doctor, and no inpatient records documented recent complementary and alternative medicine use. Three per cent of participants were using herbs and prescription medicines concurrently.
Conclusion: There is a high prevalence of complementary and alternative medicine use in our study population. Paediatricians need to be advocates for their patients, helping them and their parents make more informed, safe choices.

G193 FIRST BATH OF LIFE: IS OUR PRACTICE SAFE?
K. K. Tewary, R. Jayatunga. Sandwell General Hospital, West Bromwich, UK
Background: Although accidental scald is not uncommon in childhood, it has only been infrequently reported in neonates. The first bath given in the maternity/neonatal unit is a simple procedure, but can cause serious complications of hypothermia or scald. Although the recommended method for testing bath water temperature is with a scoop thermometer, most units continue to measure water temperature manually using hands or elbows. A serious accidental burn on a neonate prompted us to critically evaluate this practice.
Aim: To compare manual evaluation of bath water temperature in the maternity/neonatal unit against a scoop thermometer.
Method: Staff on the maternity/neonatal unit assessed the temperature of babies' bath water manually. The standard water temperature was not supposed to exceed 37°C, as suggested in our trust guideline. Measurements were simultaneously verified by a scoop thermometer. The experience and grade of staff and the presence of any vascular disease were recorded.
Results: Twenty four staff members participated (14 with >5 years' professional experience). None reported any peripheral vascular disease. Although none reported the water temperature to be uncomfortably hot or cold, the thermometer temperature was >37°C in seven cases and <32.5°C in two. Despite being aware of these results, the majority of staff still preferred manual testing.
Conclusion: Prediction of water temperature by the manual method can often be inaccurate despite long professional experience. A standardised guideline should be implemented nationally to prevent further similar complications, and resistance to change is to be expected.
G194 LYMPH NODES IN THE NECK: WHEN SHOULD WE BIOPSY THEM?
A. Maaz, F. A. I. Riordan, K. Khoobarry. Department of Child Health, Birmingham Heartlands and Solihull Hospitals NHS Trust, Bordesley Green East, B9 5SS, Birmingham, UK
Aims: To review neck lymph node biopsy results performed on children at our institution and to define the risk factors likely to yield pathology.
Methods: Retrospective review of histopathology results of neck lymph node biopsies performed on children under the age of 16 years between 1 January 1993 and 31 December 2003.
Results: A total of 81 neck lymph node biopsies were performed. Fifty eight per cent of the children were male, and 50% were between the ages of 2 and 5 years. Fifty three biopsies (65%) showed reactive hyperplasia, 23 (28%) were granulomatous, and five (6%) were malignant. Of the biopsies showing significant pathology, 75% were more than 2 cm in size, compared with 20% of those less than 2 cm. Short symptom duration (75% of pathological nodes in the <1 month group) and involvement of the submandibular (80% granulomas) and supraclavicular (100% malignant) chains were also predictive of significant pathology. Of the 23 granulomatous lesions, five (21%) were microbiologically confirmed as Mycobacterium avium-intracellulare (MAI) infections. Seven (30%) were tuberculous; however, only one had microbiological confirmation, whereas six (26%) others were diagnosed on the basis of supportive evidence. Nine (39%) were classified as probable mycobacterial infections, but mycobacterial cultures were not sent. Mantoux or Heaf tests were positive in 52% of cases and AFB in 26%. Five malignancies were diagnosed. Three of these were in supraclavicular nodes; four had a history of less than one month, with three being biopsied within one month of presentation. There were two cases of cat scratch disease, both more than 2 cm in size; histopathologically both were reactive. One node was found with chronic granulomatous disease.
It was more than 2 cm in size and the patient had a history of less than a month.
Conclusion: We conclude that neck lymph nodes that are more than 2 cm, are rapidly increasing in size, and are either supraclavicular or submandibular are likely to yield significant pathology. Lymph node biopsies should be carried out in the presence of these risk factors and a high degree of clinical suspicion. Lymph node biopsies, especially those from the submandibular region, should be sent for mycobacterial culture.

G195 SORTING SEIZURES FROM NON-EPILEPTIC EVENTS: AN OBSERVATIONAL STUDY
R. C. Beach. Norfolk and Norwich University Hospital, Norwich, UK
Aims: To elucidate the clinical spectrum of non-epileptic events and consider how they are distinguished from epileptic seizures.
Methods: Case records of children (29 days–16 years) newly presenting to the Norfolk and Norwich Hospital with episodic seizures, non-epileptic events, attacks, collapses, apnoeas, or behaviours with a possible neurological origin were studied. Cases were ascertained through outpatients, the admissions unit, accident and emergency, and the wards, and were classified by diagnosis at entry and 6–18 months later.
Results: In two years, 691 cases were ascertained; 271 were classified as non-epileptic events, compared with 131 with epilepsy. Of the 271 non-epileptic events, 210 were diagnosed by clinical presentation and 61 were investigated (electroencephalogram 54, cerebral imaging 14, echocardiogram 6). Daydreams and behavioural episodes were the most frequently investigated events. Possible epilepsy was the initial diagnosis in 27 cases, but non-epileptic events were subsequently diagnosed after investigation or observation of the natural history. One child was reclassified as having epilepsy. The 31 different non-epileptic event diagnoses were grouped as vasovagal (including reflex anoxic attacks) 116, behavioural 36, infantile 33, febrile 13, sleep related 13, and others 59. 88 children presented as emergencies, with 20 being admitted.
The remaining 191 were managed in outpatients. Only 53 children with turns required hospital follow up.
Conclusions: Non-epileptic turns are the most frequent paroxysmal events in general hospital practice. Despite their variety, most are accurately diagnosed and classified from clinical features, with 22% having investigations. Diagnostic uncertainty can be clarified by observing the natural history. Explanation and reassurance enable most families to manage without continuing hospital follow up.
This is a problem set. Some of these problems are easy; others are far more difficult. The purpose of this problem set is:

- to build your skill applying computational thinking to a problem
- to assess your knowledge and skills of different programming practices

What is this problem set trying to do?

We are working with iteration and selection here. I found this problem on Daily Coding Problem. It's a great place to find fun problems to practice your programming skills.

Alice wants to join her school's Probability Student Club. Membership dues are computed via one of two simple probabilistic games. The first game: roll a die repeatedly. Stop rolling once you get a five followed by a six. Your number of rolls is the amount you pay, in dollars. The second game: the same, except that the stopping condition is a five followed by a five.

Which of the two games should Alice elect to play? Does it even matter? Construct a program to simulate the two games and calculate their expected value. You should construct the program so it runs several thousand (10,000) iterations of each game and compares the results.

How you will be assessed

Your solution will be graded along the following axes:

- To what extent does your code implement the features required by our specification?
- To what extent is there evidence of effort?
- To what extent did your code meet specifications?
- To what extent did your code meet unit tests?
- To what extent is your code free of bugs?
- To what extent is your code written well (i.e. clearly, efficiently, elegantly, and/or logically)?
- To what extent is your code eliminating repetition?
- To what extent is your code using functions appropriately?
- To what extent is your code readable?
- To what extent is your code commented?
- To what extent are your variables well named?
- To what extent do you adhere to the style guide?

A possible solution

Click the expand link to see one possible solution, but NOT before you have tried and failed!
```python
# Thank you to Patrick, an awesome 11th grade student in the year 2020!!!
import random


def play(first, second):
    """Roll a die until `first` comes up immediately followed by `second`.
    Return the number of rolls taken: the amount Alice pays, in dollars."""
    rolls = 0
    previous = None
    while True:
        current = random.randint(1, 6)
        rolls += 1
        if previous == first and current == second:
            return rolls
        previous = current


def average_cost(first, second, iterations):
    """Average cost of one game over many simulated plays."""
    total = sum(play(first, second) for _ in range(iterations))
    return total / iterations


iterations = int(input("Run each game how many times? "))

# Game 1 stops on a five followed by a six; game 2 on a five followed by a five.
game1_cost = average_cost(5, 6, iterations)
game2_cost = average_cost(5, 5, iterations)

print(f"Game 1 (5 then 6): Alice pays ${game1_cost:.2f} on average")
print(f"Game 2 (5 then 5): Alice pays ${game2_cost:.2f} on average")

if game1_cost < game2_cost:
    print("Alice should play game 1")
elif game2_cost < game1_cost:
    print("Alice should play game 2")
else:
    print("It doesn't matter")
```
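The simulated averages can be cross-checked by hand. A minimal sketch of the exact calculation, using a two-state first-step analysis with exact rational arithmetic (the function name `expected_cost` is my own, not part of the problem):

```python
from fractions import Fraction


def expected_cost(second):
    """Expected number of rolls until a 5 is immediately followed by `second`
    (5 or 6), via first-step analysis on two states:
    state 0 = no progress, state 1 = the previous roll was a 5.

    From state 0: roll a 5 (prob 1/6) -> state 1, otherwise stay in state 0.
        E0 = 1 + (1/6) E1 + (5/6) E0   =>   E0 = 6 + E1
    From state 1: roll `second` (prob 1/6) -> done; otherwise either stay in
    state 1 (only possible when second == 6 and we roll another 5) or fall
    back to state 0.
        E1 = 1 + p_stay * E1 + p_back * E0
    """
    if second == 6:
        p_stay, p_back = Fraction(1, 6), Fraction(4, 6)
    elif second == 5:
        p_stay, p_back = Fraction(0), Fraction(5, 6)
    else:
        raise ValueError("this sketch only handles the patterns 5-6 and 5-5")
    # Substitute E0 = 6 + E1 into the state-1 equation and solve for E1.
    e1 = (1 + 6 * p_back) / (1 - p_stay - p_back)
    return 6 + e1


print(expected_cost(6))  # game 1 (5 then 6): 36
print(expected_cost(5))  # game 2 (5 then 5): 42
```

So it does matter: game 1 costs $36 on average versus $42 for game 2. The asymmetry comes from the fallback transition: while waiting for the 6, rolling another 5 keeps you one step from finishing, whereas while waiting for a second 5, any miss sends you all the way back to the start.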
Solved: why do many people overeat?

London: Researchers claim to have finally unlocked the key to what causes some people to overeat. An international team, led by the Yale School of Medicine, has found that "free radicals", molecules tied to ageing and tissue damage, are crucial to suppressing appetite, the Daily Express reported. In its study of the brain circuits that control hunger, the team found that elevating free radical levels suppressed appetite in obese mice by activating satiety-promoting melanocortin neurons. However, free radicals are also thought to drive the ageing process, they say. Lead author Prof Tamas Horvath said: "It's a Catch-22. On one hand, you must have these critical signalling molecules to stop eating. On the other hand, if exposed to them chronically, free radicals damage cells and promote ageing." The team said the crucial role of free radicals in promoting satiety, as well as in degenerative processes linked to ageing, may explain why it has been difficult to develop therapeutic strategies for obesity without major side effects. Current studies are addressing whether satiety could be promoted without sustained elevation of free radicals in the brain and periphery. The latest findings have been published in the journal Nature Medicine.
We invite your child to join a research study of atopic dermatitis ("eczema") at the National Institutes of Health (NIH). We are trying to determine how microbes, such as bacteria and fungi, contribute to this disease. With these studies, we hope to lay the groundwork for the development of more effective treatments for atopic dermatitis.

Flyer for the Study of Skin Microflora in Children with Atopic Dermatitis

Julie A. Segre, Ph.D.
National Human Genome Research Institute
National Institutes of Health

Heidi H. Kong, M.D.
Dermatology Branch/National Cancer Institute
10 Center Dr
Bethesda, MD 20892

Last Reviewed: April 19, 2012
Bill Clinton was the 42nd president of the United States, serving from 1993 to 2001. He was a member of the Democratic Party and the first president from the Baby Boomer generation. Some of his major achievements as president include:

- Presiding over the longest period of peacetime economic expansion in American history
- Signing the North American Free Trade Agreement (NAFTA) and the General Agreement on Tariffs and Trade (GATT), which expanded trade with other countries
- Enacting the 1994 Crime Bill, which increased funding for law enforcement and crime prevention programs, and imposed stricter penalties for some offenses
- Signing the Family and Medical Leave Act, which granted workers unpaid leave for family or medical reasons
- Initiating the Clinton Doctrine, which stated that the U.S. would intervene militarily to stop genocide, ethnic cleansing, and human rights violations in other countries
- Negotiating the Oslo Accords, which aimed to resolve the Israeli-Palestinian conflict, and the Dayton Agreement, which ended the Bosnian War

Some of his major controversies as president include:

- Failing to pass a comprehensive health care reform plan, known as the Health Security Act, which faced strong opposition from Republicans and some Democrats
- Being accused of sexual harassment and misconduct by several women, such as Paula Jones, Kathleen Willey, and Juanita Broaddrick
- Being involved in the Whitewater scandal, which investigated his involvement in a failed real estate venture in Arkansas
- Firing seven employees of the White House travel office, which sparked allegations of cronyism and corruption
- Pardoning 140 people on his last day in office, including his half-brother Roger Clinton and his former business partner Susan McDougal

Bill Clinton is married to Hillary Rodham Clinton, who served as the First Lady of the United States from 1993 to 2001, as a U.S.
Senator from New York from 2001 to 2009, as Secretary of State from 2009 to 2013, and as the Democratic presidential nominee in 2016.

Bill Clinton policies

Bill Clinton was the 42nd president of the United States, and he implemented various policies during his presidency that affected the economy, trade, welfare, crime, the environment, and foreign affairs. Some of his policies are:

- Economic policy: Clinton raised taxes on higher income earners and cut spending on defense and welfare, which helped reduce the budget deficit and create a surplus. He also signed free trade agreements such as NAFTA and GATT, which expanded trade with other countries. His economic policies are credited with creating a decade of prosperity and job growth in the U.S.
- Welfare reform: Clinton signed the Personal Responsibility and Work Opportunity Reconciliation Act of 1996, which reformed the welfare system by requiring recipients to work or seek work within two years, limiting the total time they could receive benefits to five years, and giving states more flexibility to design their own programs. The number of people on welfare declined significantly after the reform.
- Crime policy: Clinton enacted the Violent Crime Control and Law Enforcement Act of 1994, also known as the Crime Bill, which increased funding for law enforcement and crime prevention programs, imposed stricter penalties for some offenses, banned certain assault weapons, and expanded the death penalty. The Crime Bill also included the Violence Against Women Act, which provided resources and support for victims of domestic violence and sexual assault. The crime rate dropped during Clinton's presidency, but some critics argue that the Crime Bill contributed to mass incarceration and racial disparities in the criminal justice system.
- Environmental policy: Clinton issued executive orders and signed legislation to protect the environment and combat climate change.
He established 17 new national monuments, tightened air quality standards under the Clean Air Act, promoted the Energy Star program, signed the Kyoto Protocol (which the Senate never ratified), and supported renewable energy sources. He also faced opposition from Republicans and some industries over his environmental regulations and initiatives.
- Foreign policy: Clinton pursued a doctrine of humanitarian interventionism, which held that the U.S. would use military force to stop genocide, ethnic cleansing, and human rights violations in other countries. He also engaged in diplomacy and peace negotiations to resolve conflicts and promote cooperation. Some of his foreign policy achievements include the Oslo Accords between Israel and the Palestinians, the Dayton Agreement that ended the Bosnian War, the Good Friday Agreement that ended the Northern Ireland conflict, and the normalization of relations with Vietnam. Some of his foreign policy challenges include the Rwandan genocide, the Somali civil war, the Haitian coup d’état, the North Korean nuclear crisis, and the Kosovo War.
There has long been great interest in the scientific community in producing fuel using microbes. Scientists have successfully engineered algae that, when starved of nitrogen, put most of their stored energy into fats. These fats are chemically similar to hydrocarbons and can thus be further processed into "biofuel". However, this processing of fats into biofuel is not straightforward. Most fats contain a long linear chain of hydrocarbons with one of the end carbons linked to two oxygen atoms, making these fats slightly acidic in nature. The easiest way to process these fats into biofuel is to remove the end carbon attached to the oxygen atoms. Unfortunately, most chemical reactions are unable to remove only the end carbon atom and instead break the long hydrocarbon chain at random points in the middle, producing fragments that are not useful. Alternatively, other reactions require multiple steps with high energy input and low efficiency. Interestingly, researchers from France found the solution to this problem in microbes as well. Last year, they discovered that a certain species of algae converts fats directly into hydrocarbons. Now, they have identified the enzyme responsible for this conversion. What is more interesting about this enzyme from a biology perspective is that it uses light to drive the conversion. Essentially, this would be only the fourth known biological reaction that uses light, apart from the two reactions of photosynthesis and one for DNA repair. After a series of biochemical tests and screening, the researchers identified the enzyme from the algal extract; its function was previously unknown. They identified the gene for it and engineered it into bacteria, which were then observed to carry out the conversion. They were also able to switch the conversion on and off by switching between blue and red light: the reaction occurred only in the presence of blue light.
The researchers further investigated how the protein manages this reaction and why it requires blue light to do so. They found that the enzyme has flavin adenine dinucleotide (FAD) as a co-factor, which is responsible for the absorption of blue light. Looking further into the structure of the enzyme, they found that FAD is held in close proximity to the oxygens of the fat molecule. The authors speculate that, in the enzyme, the absorption of blue light causes FAD to steal an electron from the fat molecule, making the fat molecule unstable. The fat molecule then sheds its end carbon and two oxygens as carbon dioxide, returning to a stable state. After that happens, the remaining hydrocarbon steals the electron back from FAD, resetting the enzyme for use on another fat molecule. The discovery of this enzyme is particularly interesting, as very few light-driven enzymes have been discovered. It also offers the intriguing prospect that the enzyme could be modified to carry out different types of light-driven reactions. Finally, and most importantly, this enzyme, unlike most other enzymes, requires no chemical energy for the reaction, which makes the processing of chemicals much easier.
When a loved one passes away, you may go through a court proceeding called "probate." Probate is the legal process by which a person's final debts are settled and legal title to property is formally passed from the decedent to his or her beneficiaries and heirs.

How the probate process begins

After an individual passes away, the probate process begins in the county of the decedent's legal residence at death. Someone acting on behalf of the decedent comes forward with the decedent's original will. Usually, this person is named in the will as the "executor," and has been chosen by the decedent as the one in charge of finalizing his or her affairs. If the decedent did not have a will, then someone must ask the court to be appointed as administrator to perform the same function as the executor. In most cases, the executor or administrator will be the surviving spouse or an adult child. If there is a dispute over who should serve as administrator, the court can appoint a neutral public administrator. The generic term "personal representative" often is used to refer to an administrator or executor.

The probate process

Regardless of whether the decedent had a will or trust, the basic steps of probate must be completed. Depending on the nature and complexity of the estate, these steps can be very easy or very difficult to complete. Most states have streamlined probate procedures for handling small estates and uncomplicated larger ones. Depending on the state and size of the estate, you may not even be required to go to probate court. But even where court is necessary, if no one is protesting or fighting over the estate, the process is usually fairly smooth. With or without a will, the probate process can be divided into the following four steps.
Step 1 - The probate hearing

If your state requires a court hearing, a date is usually set for the personal representative to appear before a judge, present the will (if there is a will), and ask to be formally appointed as the executor or administrator. After a will's genuineness and validity are established (usually by simple inspection of the document), the court issues an order "admitting the will to probate." Once admitted to probate, the will is a public record, as are any of the subsequent filings with the court. These papers are open to inspection by anyone. After you are officially appointed the personal representative by the probate judge, you will have full authority to deal with the decedent's probate property and accounts. You will be given a certified court document that must be honored by financial institutions and others. In some places, this is called the "letters of administration" or "letters testamentary."

Step 2 - Collection and inventory of assets subject to probate

After being designated as the personal representative, you will need to take an inventory of the estate assets and file this inventory with the court. These include, but are not limited to:
- Money owed to the decedent or the estate. Any money owed to the decedent or estate, such as loans, a final paycheck, life insurance payouts or retirement account(s), should be included.
- Bank and stock brokerage accounts. You will also need to list the account numbers and latest balances of any bank and stock brokerage accounts.
- Evaluations of real estate or property. Valuations of real estate or specific valuable collections (such as an antique collection) probably require a professional appraisal. The detail and accuracy necessary is dictated by the circumstances and the degree of scrutiny being shown by other interested parties.
Step 3 - Bills, taxes, expenses and creditors

After identifying all of the decedent's assets, you will review his or her final bills, debts, taxes and any claims against him or her, as well as the supporting proof. As the personal representative, you must then pay or settle those that are valid and reject those claims that are not. The payment of all debts and bills is done with funds from the estate; you are not personally responsible for paying these expenses out-of-pocket, even if estate funds are not available. The surviving spouse and children are generally given an allowance under the law, which varies greatly from state to state and depends on whether or not there is a will. Generally, this allowance comes "off the top" and is set aside first. As a result, the order of payment of claims against the estate is usually costs/expenses of administration (including the allowance), funeral expenses, debts and taxes, and all other claims. After paying all of the debts and bills, you must file a report with the court to account for all income received and payments made on behalf of the estate.

Step 4 - Formal transfer of remaining estate property

After all rightful claims, debts and expenses have been paid, the remainder of the property is distributed as the will directs. If there is no will, the administrator will distribute property according to state law. Generally, you have the discretion to distribute the estate in cash or by giving away the property itself, but the will can specify otherwise. With regard to real estate, there is often a state-required waiting period that must pass before you can sell or transfer the property. You can begin the process of selling or transferring the property at any time, but the final distribution of property or sale proceeds cannot occur until after the state-specified waiting period (usually six months).
Once the waiting periods have expired and all legitimate bills, debts and taxes have been paid, the remaining estate is available for distribution to heirs or beneficiaries. Only then can you make disbursements of cash, send copies of documents such as deeds and investment statements showing new ownership, or transfer physical property to the respective beneficiaries. After the remaining estate is transferred to heirs and beneficiaries, you will usually complete a final settlement or accounting of the estate. This provides detail on all of the personal representative's dealings on behalf of the estate. Any party who intends to object to any aspect of the probate proceeding should come forward and be heard at this point, if not sooner. Once the judge approves the final settlement, your duties as the personal representative are complete, and the estate no longer exists.

Further assistance with the probate process

For more information on the probate process, please contact your local legal assistance attorney. Legal assistance offices can be found through the following service websites:
ICT in Education

The web revolution of today has not only changed the way we play, study and conduct ourselves but has created a new global economy fuelled by information, driven by knowledge and powered by technology, with a global exchange of ideas, opinions, know-how and technology. As the half-life of information continues to shrink and access to information grows exponentially, educational institutions cannot remain mere venues for the transmission of a prescribed set of information but must promote the acquisition of knowledge and skills which make possible continuous learning over a lifetime. The illiterate of the 21st century, said Alvin Toffler, will not be those who cannot read and write but those who cannot 'learn, unlearn and re-learn'.

ICT stands for Information and Communication Technology and can be defined as the diverse set of technologies and resources which can be used to create, communicate, disseminate, store and manage information. These include the computer, internet, broadcasting technologies, radio, TV and telephone. E-learning encompasses learning at all levels, formal and non-formal, which uses information networks, the internet, intranets and extranets, wholly or in part, for course delivery, interaction or facilitation. Online learning is thus a subset of e-learning. When traditional classroom practices are integrated with e-learning we get blended learning.

A distinct feature of ICT is its ability to transcend space and time, apart from its comprehensiveness in covering special groups like women, persons with disabilities and the elderly who have traditionally remained excluded from education. Access through ICT is global, covering multiple geographically dispersed learners. It promotes 'just in time' learning where learners can choose what to learn and when to learn it. The 21st century will essentially be an era of fierce competition where constant skill upgradation will become essential. The digital-age jobs in the new global economy will require new skills.
These 21st-century skills, as specified by the enGauge framework of the North Central Regional Educational Laboratory, consist of:
- Functional and visual literacy – the ability to decipher meanings and express ideas in a range of media: graphs, graphics, charts, videos and images
- Technical literacy – competence in using information and communication technology
- Information literacy – the ability to find, evaluate and appropriately use information via ICTs
- Cultural literacy – appreciation of the diversity of cultures
- Global awareness – the ability to understand how nations, corporations and communities all over the world are interrelated
- Inventive thinking skills – the adaptability to manage in a complex world
- Creative skills – the ability to use imagination to create new thinking
- Higher-order thinking skills – including creative problem solving and logical thinking, resulting in sound reasoning
- Effective communication skills – the ability to work in a team
- High productivity – the ability to plan and prioritize programs and projects to achieve desired results, and to apply what is learned in the classroom to real-life situations

ICT can create frontiers without boundaries in the acquisition of knowledge and skills, which can empower today's citizens to acquire 21st-century skills. ICTs have also been used to improve the access to and quality of teacher training. Tele-collaboration, a web-based collaborative tool with email, message boards, real-time chat and web-based conferencing, can connect a learner to other learners, teachers, educators and scholars. Similarly, the organized use of web resources and collaboration tools for curriculum-appropriate purposes was adopted by UNICEF under its Voices of Youth tele-collaboration programme to encourage students to share their views on global issues such as HIV/AIDS and child labour with youth and adults around the world.
The IT tele-mentor programme (ITP) links students with mentor experts through email and discussion forums. The reluctance to adapt to ICT persists even today, due to poor software design, scepticism about the effectiveness of computers, lack of administrative support and the increased time and effort needed to learn the technology and its use for teaching, along with the fear of losing traditional teacher authority.

Can ICTs replace the teacher? Certainly not! With ICTs in the classroom, the teacher's role in the learning process becomes very critical. Learning shifts from a 'teacher-centric model' to a 'learner-centric model'. With ICTs, the role of the teacher changes from instructor to facilitator, mentor and coach. Teachers also become co-learners in this new journey.

WHAT IS NEW.... Sakshat

The tablet PC costing only Rs 2,200 has been launched by the government. Initially, about 10,000 of them will be given to the IITs for testing. After testing, 300 tablets each are to be given to the states for trial. These will be available only to undergraduate and postgraduate students to begin with, as part of the National Mission on Education through Information Technology, which aims to link 25,000 colleges and 400 universities in the subcontinent. The programme will be available on the Sakshat web portal, which students can access through the device.
Background

It has been suggested that aberrant microbiota development may predispose to some diseases (eg, allergic disorders, obesity, inflammatory bowel disease). Thus, the establishment and composition of the gut microbiota is important. In early infancy, a number of factors are considered to affect the colonisation pattern, including maternal education, diet, probiotic use and antibiotic use; delivery and birth characteristics; type of infant feeding; antibiotic/antimycotic agents used during early life and the home environment (siblings, living on a farm, furry pets). The rate of Caesarean sections is now increasing both in developed and some developing countries, which may have an impact on the intestinal microbiota (the total amount and type of microbial species present in the gastrointestinal tract).

Objective

To evaluate systematically and update data on the effects of mode of delivery on gut microbiota.

Methods

The MEDLINE database was searched in June 2008; additional references were obtained from reviewed articles. Only trials evaluating the effect of the mode of delivery (natural delivery vs Caesarean section delivery) on gut microbiota of term infants and published in the last 10 years were considered for inclusion. Special emphasis was given to studies using molecular approaches, as many bacterial species cannot be cultured using traditional culture techniques.

Results

Four trials were included. The first study (Gronlund et al. J Pediatr Gastroenterol Nutr 1999), which cultured fecal flora on selective and non-selective media, demonstrated that the fecal colonisation of 64 healthy infants born by Caesarean delivery to mothers who received antibiotic prophylaxis before delivery was delayed. Bifidobacterium-like bacteria and Lactobacillus-like bacteria colonisation rates reached the rates of vaginally delivered infants at 1 month and 10 days, respectively.
Compared with vaginally delivered infants, those born by Caesarean section were significantly less often colonised with bacteria of the Bacteroides fragilis group. This study also showed that the disturbances of the intestinal microbiota may be present up to 6 months of age. The second trial, involving 60 children, demonstrated the influence of mode of delivery on gut microbiota composition beyond infancy (Salminen et al. Gut 2004). In children 7 years of age, a fluorescent in-situ hybridisation method showed a significantly higher number of Clostridia in children delivered vaginally compared with those born by Caesarean section. No differences were observed in other fecal bacteria or total numbers of bacteria. The third study was carried out in 1032 infants at 1 month of age by Dutch investigators (Penders et al. Pediatrics 2006). Their analysis of gut microbiota by quantitative real-time PCR showed that, in comparison with vaginal delivery at home, Caesarean section resulted in lower colonisation rates and counts of bifidobacteria and B fragilis-group species, whereas the prevalence and counts of Clostridium difficile and counts of Escherichia coli were higher. The fourth study, which used fluorescence in-situ hybridisation, demonstrated that infants (n = 165) delivered by Caesarean section have fewer bifidobacteria at an early age (Huurre et al. Neonatology 2008).

Conclusions

Recent studies confirm different colonisation patterns in infants born by vaginal or Caesarean delivery, which may persist beyond infancy. The exact effects of those differences on children’s health are unclear but potentially may increase the risk of specific diseases.
Whether or not you are a first-timer at creating invoices, it is important to understand how to write an invoice correctly. This guide will help you create an invoice that will be paid on time, in a straightforward way.

You have established a business and sold your first goods or services. It is time to send your first invoice! But how is it done? In this guide, you'll find all you need to know about invoicing and a step-by-step guide on how to write your first invoice. We will also provide you with some free tools to ease the process, and tips on how to increase your chances of getting paid on time. Get started with free invoicing.

Invoice meaning: What is an invoice?

An invoice (also referred to as a bill) is a document issued by a seller to request payment from a customer. The document describes what the buyer owes and how the seller wants to get paid. It also includes details such as the name and contact information of the seller, a description of the goods sold, prices, and other payment information. Some people believe that invoices and receipts are the same thing. That is not the case. An invoice is a document asking for payment, whereas a receipt documents that payment has been made.

Different types of invoices

The most common type of invoice is the sales invoice, which is used to request payment from customers. In addition, there are other types of invoices, such as:
- Pro forma invoice: a preliminary invoice used to confirm the details of a sale before the goods or services are provided.
- Recurring invoice: invoices issued on a regular basis, such as monthly or annually.
- Past due invoice: an invoice that has not been paid on time and is therefore re-sent to the customer (sometimes with additional charges).
- Credit invoice: a negative invoice used to reduce the amount of money that the customer owes the business (mostly used to correct an error or when a customer returns goods).
What to include on an invoice: A brief checklist

In the list below, you will find the most important elements to include in an invoice:
- a unique invoice number
- your business name, address, phone number and email
- the name, address, phone number and email of the customer you're invoicing
- invoice date
- payment due date
- a description of the goods and services you are billing for
- the amount being charged
- payment terms
- VAT amount, if applicable

Creating an invoice, step by step

1. Make your invoice look professional

Using a spreadsheet or invoicing software, create an invoice that looks professional, preferably with your own logo. Remember to clearly mark your invoice with the word "invoice" to improve the likelihood of getting paid on time.

2. List company name and information

Make sure that the recipient knows who sent the invoice. Also make sure that the customer's information is on the invoice. In most countries, this is required for the invoice to be legally valid. A quick tip: invoicing software, such as Conta, helps you make sure that all legally required fields are in place when sending invoices. Try Conta for free.

3. Remember the dates

Dates give your invoice context and serve as a point of reference if you need to follow up on unpaid invoices. We advise you to include the following dates:
- The invoice issue date
- The date the goods or services were delivered
- The payment due date

Also, make sure to mention that late fees will apply if payment is not received on time. In fact, 94 per cent of invoices mentioning late fees are paid.

4. Include a cost breakdown

Make sure that the customer knows what they are paying for by breaking down the costs: list the services provided, their quantity and duration, as well as your unit price. Next, you will need to include the subtotal of all goods and services before tax and discounts, then any discounts, the tax, and the net total. If you are using invoicing software, tax calculations are (in most cases) done automatically.
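The arithmetic behind a cost breakdown can be sketched in a few lines. The line items, discount and VAT rate below are hypothetical examples, not figures from this guide; the order of operations (discount applied before tax) is one common convention and may differ by jurisdiction:

```python
# Sketch of an invoice cost breakdown: subtotal, discount, tax, net total.
# All line items, the 5% discount and the 25% VAT rate are made-up examples.

line_items = [
    # (description, quantity, unit_price)
    ("Consulting services", 10, 120.00),
    ("Travel expenses", 1, 250.00),
]

subtotal = sum(qty * price for _, qty, price in line_items)
discount = round(subtotal * 0.05, 2)   # discount applied before tax
taxable = subtotal - discount
vat = round(taxable * 0.25, 2)         # VAT on the discounted amount
net_total = taxable + vat

print(f"Subtotal:  {subtotal:10.2f}")
print(f"Discount:  {discount:10.2f}")
print(f"VAT:       {vat:10.2f}")
print(f"Total due: {net_total:10.2f}")
```

For real invoices, a decimal type rather than floats is usually preferable for money, but the structure of the calculation is the same.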
5. Save and send the invoice

There are several ways to ensure that an invoice gets in front of your customers:
- Email: You can send the invoice to the customer's email address. This is a fast and convenient option, but there is a risk that the invoice could end up in the customer's spam folder or be intercepted by third parties.
- Postal mail: You can send the invoice through the mail to the customer's physical address. This can be more formal and professional, but it is also slower and more expensive than email.
- Online invoicing software: You can use online invoicing software to create and send invoices to your customers. This allows you to track the status of the invoice and whether it has been received and paid. Some online invoicing software also allows you to set up recurring invoices and automated payment reminders. Try Conta invoicing for free.

What to do when customers do not pay on time

On average, 16% of small business invoices are paid late. Thus, it is vital that you have strict payment follow-up routines in place. If a customer does not pay an invoice on time, here are some steps you can take:
- Follow up with the customer: A polite reminder email or phone call may be enough to get the payment process started.
- Consider offering a payment plan: If the customer is unable to pay the full amount by the due date, you may be able to work out a payment plan to help them pay the invoice over time.
- Charge late payment fees: If you have specified late payment fees in the payment terms of the invoice, you may choose to charge the customer these fees.
- Take legal action: If the customer continues to refuse to pay the invoice, you may need to consider taking legal action. This may involve hiring a lawyer or taking the customer to small claims court.

By following these steps, you can help ensure that you get paid on time and avoid issues with your cash flow.
Invoicing best practices

- Use a professional template: A well-designed invoice not only looks more professional but also helps to clearly communicate the details of the transaction and the terms of payment. You can use a template or invoicing software to create consistent and attractive invoices.
- Use clear and concise language: Avoid using jargon or technical terms that your client may not understand. Use simple and straightforward language to clearly communicate the details of the invoice.
- Set clear payment terms: Specify the due date for payment and any late fees that may be applied. You may also want to include information about your preferred methods of payment, such as electronic bank transfer or credit card.
- Follow up on overdue invoices: If an invoice becomes overdue, it's important to follow up with the client in a timely manner. This can be done through email or phone calls. Be polite but firm, and try to resolve any issues that may be preventing payment.

By following these best practices, you can improve your chances of getting paid on time and streamline your invoicing process. Quick tip! Conta's invoicing software can be set up to automatically send payment reminders to customers who fall behind on their payments. Sign up for our free invoicing software.
Maybe you’re not in Texas, but suddenly you find yourself faced with a huge measurement requirement. You’ve been given the task of checking some large diameters—not your 6 inch variety—I mean those large enough to drive a herd of cows through. You know, the 12 inch, 36 inch or even the 80 inch variety. Don’t go for the tequila yet. There are lots of choices available to meet this challenge, which boils down to selecting the right tool for the application. The first step is to look at the part print, determine the measurement tolerances and see if there are any callouts for out-of-round conditions. This information will lead you to the best tool for the job. If the tolerance is loose—within 0.01 inch—then a digital or vernier caliper-style gage will provide a good fast check of the part diameter. Just make sure the jaws are square to the part and placed to find the major diameter. On the larger diameters, this could even be a two-person operation. An inside micrometer is another alternative. Special kits make it possible to assemble a series of calibrated extension rods to span any diameter. Because this is a true point-to-point measuring system, the diameter has to be found by rocking the gage both axially and radially. On a large bore, this may require one operator holding the reference side of the gage in place while the second operator “searches” for the maximum diameter. Tighter tolerances call for different types of gages. Some adjustable bore gages can get to these larger sizes. They deliver improved accuracy and repeatability because they 1) are adjusted to a specific (in this case large) size range; 2) provide comparative measurements using a master; and 3) are often equipped with a centralizer which makes it easy to “search” for the diameter. Pair this gage with a good digital indicator that includes a dynamic function to store the maximum size, and you have a great tool for fast, repetitive readings. 
Gages with beams that have reference and sensing contacts mounted on either end are another comparative tool. In addition to satisfying large diameter measurement requirements for tolerances within 0.001 inch, beam type gages have standard rest pad and contact combinations that allow measurement of shallow bores and thin wall parts, as well as grooves and other features machined within the bore. You can even build up the gage to get around a central hub. When the blueprint requires you to check not only the diameter but out-of-roundness as well, the bar has been raised. The gages mentioned above can still be used, but the process may involve making five or ten measurements on the part, recording the results, and calculating out-of-roundness according to a formula. Not only is this approach time consuming, it also magnifies operator influences on the result because so many measurements are required. An advanced concept can be brought into play here: The better the gage is staged, the better the result of the measurement. That’s why the shallow bore gage with its two references is better than the gages that just have one reference. By the same thought process, another reference point will result in even further improvement. Let’s take the same shallow bore gage, but this time we will use it with a staging post that centralizes both the part and the gage. The operator only has to apply a little force to make sure the reference contact is against the part, and the central post takes care of finding the maximum diameter without having to rock the part back and forth. Now it’s a breeze to inspect for out-of-round conditions. Just rotate the gage, keeping a little force applied to the reference contact, and watch the swing of the needle, looking for the minimum and maximum values. Watching the needle, you can visually inspect for the Total Indicator Reading (TIR) or out-of-roundness variation. 
Add a memory to the indicator or an amplifier to store discrete points, and you can automatically calculate the average roundness.
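The "watch the swing of the needle" check described above amounts to taking the difference between the maximum and minimum comparative readings. A minimal sketch, where the readings are hypothetical indicator deviations (in inches) from a master setting:

```python
# Total Indicator Reading (TIR): the full swing of the needle while the part
# rotates, i.e. the maximum reading minus the minimum reading.

def total_indicator_reading(readings):
    """Return the out-of-roundness (TIR) for a set of comparative readings."""
    return max(readings) - min(readings)

# Hypothetical deviations captured at several angular positions:
readings = [0.0002, -0.0001, 0.0004, 0.0000, -0.0003]
print(total_indicator_reading(readings))
```

A dynamic-function indicator does exactly this in hardware: it latches the maximum and minimum as the part turns, so the operator never has to record intermediate values.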
Hox gene researchers sniff at a common ancestor. A well-known homeobox (hox) gene in the fruit fly has even more functions than previously known. The essential role this gene and similar genes in vertebrates play in early embryonic development has implications for human neurological and stem cell research, researchers at University of Wisconsin–Madison suggest. They also suggest humans and fruit flies share a common ancestor. Homeobox genes are master switches that turn whole groups of other genes on and off and thus control many aspects of development. Much of the research on these genes has been done on fruit flies. Many vertebrate homeobox genes have been identified on the basis of similar gene sequences with fruit fly genes. Dr. Grace Boekhoff-Falk's team studies a fruit fly gene known as distal-less (dll), which has long been known to control limb and peripheral nervous system development. Humans and vertebrate animals have six similar genes called DLX. DLX genes help control brain development, including development of the olfactory system. Boekhoff-Falk's team found that dll has even more roles in fruit fly development than previously thought, including controlling olfactory system and central nervous system development. This control, they found, is manifested very early in larval development by controlling the differentiation of stem cells. She hopes the fruit fly model will lead to greater insight into human neurological developmental abnormalities. She also says, "Our model may be useful for further analysis of how this gene regulates stem cells" by revealing "the growth inputs needed to keep the stem-ness of the cells." But in addition to helping unravel neurological mysteries and control stem cells, Boekhoff-Falk believes her team's work challenges a prevailing view among evolutionists.
"The prevailing view is that fly and mammal olfactory systems evolved independently, multiple times over history. But our work challenges that view. We think that when it comes to the olfactory system there may be a common ancestor shared by flies and mammals." Vertebrates and invertebrates are so very different, evolutionists generally believe the existence of similar systems—such as the ability to smell—evolved convergently. But since her team has found that homologous homeobox genes are involved in the olfactory development of both humans and fruit flies, she says, "This supports the idea that the last common ancestor already had some form of olfactory system and that the overall architecture and key elements of the underlying genetics have been well conserved over time." Since the discovery of homeobox genes, evolutionists have pointed to them as a genetic mechanism for evolution of brand new body designs. After all, the reasoning goes, if flipping a genetic switch can change the number of legs a creature has, doesn't that make it a new creature? Evolution becomes a process of subtracting information to build new kinds of organisms. The gene switches themselves are part of the genetic information making each kind of organism unique. The ultimately insurmountable problem for homeobox-mediated evolution is the origin of that pool of genetic information in the first place. The Bible asserts that God created all creatures to reproduce after their kinds. This biblical truth is confirmed in science: each organism has genetic information to vary within its kind but is unable to acquire information to evolve into a new kind of organism. Homeobox genes don't change that.
There are genetic similarities among different kinds because our common Designer—God—utilized similar designs to meet various biological and developmental needs. In fact, those design similarities make medical research using animal models—like fruit flies and mice—possible. Similar roles for genes with some sequence similarities do not prove one organism evolved from the other or even that God used one as the raw material for the other. God told us in His eyewitness account that He made all kinds of organisms during Creation Week and created them to reproduce after their kinds. Remember, if you see a news story that might merit some attention, let us know about it! (Note: if the story originates from the Associated Press, FOX News, MSNBC, the New York Times, or another major national media outlet, we will most likely have already heard about it.) And thanks to all of our readers who have submitted great news tips to us. If you didn’t catch all the latest News to Know, why not take a look to see what you’ve missed?
Fox in Socks Theme of Language and Communication Tongue twisters are all about having fun with language. Minus the enjoyment we get from them, absolutely no reason exists to question how much wood a woodchuck could chuck or whatever Peter Piper did with those pickled peppers. Fox in Socks is no less about the fun and joy one can have with language. It's nonsensical and completely off its or any other rocker. Some may argue that the story doesn't have a point; it's just a bunch of silly words strung together for a lark's sake. But we'd say, "That is the point!" This book brings to a child's attention just how much fun language can be. And that's no small task. Have you read Dick and Jane? Questions and Answers Questions Your Super-Young Adult Might Ask and How You Might Respond: Q: What's the point of this book? A: To have fun reading it! Take it from someone who has seen a spreadsheet or two in their day, reading isn't always such a blast. Q: Why isn't Knox having a good time with the tongue twisters? A: He's a little intimidated at the use of language. He just needs to throw caution to the wind and go for it. Q: How'd you get so good at reading this book? A: Lots of practice and even more mistakes. Q: How did Dr. Seuss come up with these tongue twisters? A: Don't know. Best guess is lots of hard work and a lot of time staring at a blank page.
Childhood asthma is a chronic illness affecting quality of life and leading to higher mortality in the UK than other countries. In the UK, prescription rates for relievers and preventers are lower for South Asian (SA) children. SA children are more likely to suffer uncontrolled symptoms and to be admitted to hospital with acute exacerbations compared to White British (WB) children. The MIA study aimed to co-produce a tailored intervention framework for childhood asthma management by exploring the knowledge and attitudes towards asthma amongst WB and SA parents, carers and children. Methods Semi-structured interviews with a purposive sample of 44 children aged 5–12yrs (33 SA, 14 WB) and 65 parents/carers (49 SA, 16 WB) were used to explore barriers and facilitators to asthma management. A comparative thematic analysis was conducted. Results WB families were more likely to have pre-existing knowledge of asthma than SA families; previous knowledge of asthma strongly influenced how families managed childhood asthma in both communities. In a minority of SA families, ‘fear of the unknown’ prevented families from investigating asthma further. Beliefs regarding the causes and nature of asthma were similar in both groups, however whilst 33% of SA families attributed asthma to either God’s will or Karma, no WB families did so. All communities reported that advice was often given by extended family members but this was more prominent in SA families, especially in relation to complementary asthma management strategies. SA and WB families both reported a lack of information-giving by health care professionals in relation to asthma. Conclusions Pre-existing knowledge and attitudes surrounding asthma differ between SA and WB parents and directly impact on management. Intervention Co-production is increasing in use and popularity. 
The MIA project supports the co-production model by highlighting the importance of identifying attitudes and beliefs towards asthma from different ethnic groups so that interventions can be tailored to address their fears and concerns more effectively. Disclaimer This project was funded by the National Institute for Health Research HS&DR programme (ref 09/2001/19). The views and opinions expressed therein are those of the authors and do not necessarily reflect those of the HS&DR programme, NIHR, NHS or the Department of Health.
You've probably heard that honey bees are in decline. Since 2006, honey bees have been struck by a condition that devastates hives, leaving as many as 95% of hives in an area empty. It's been called Colony Collapse Disorder (CCD for short) and it makes the bees abandon their hives, flying away and never returning. Since about one third of all U.S. crops depend on these bees to thrive, CCD not only affects how much it'll cost to put honey on your toast, but how much it'll cost to buy fruits and other foods too. Based on recently published studies, a likely suspect is pesticides. Specifically, a group of pesticides called neonicotinoids. They're related to nicotine and affect the nervous system. In one study, when low doses of a certain neonicotinoid (imidacloprid) were given to honey bees, 94% of hives were abandoned six months after having been dosed. In another study, more than 30% of bees got lost, and couldn't return home, after having been given a different neonicotinoid (thiamethoxam). Neonicotinoids have been used since the late 1990s, and imidacloprid specifically has been used on corn in the U.S. since 2004-2005. That corn's used to make high-fructose corn syrup that's fed to the honey bees. This timing appears to match the CCD symptoms, which first appeared in 2006. To protect the honey bees from these pesticides, some actions are being taken. In April 2013, the European Commission announced that it'd restrict the use of three common neonicotinoids (including imidacloprid and thiamethoxam). In the U.S., politician Earl Blumenauer introduced The Save America's Pollinators Act, which, if it becomes law, will suspend use of these neonicotinoids plus one other. For further reading: - Chensheng Lu, Kenneth M. Warchol, and Richard A. Callahan's article "In situ replication of honey bee colony collapse disorder" - Penelope R.
Whitehorn et al.’s article “Neonicotinoid pesticide reduces bumble bee colony growth and queen production” - Mickael Henry et al.’s article “A common pesticide decreases foraging success and survival in honey bees” - Wenfu Mao, Mary A. Schuler, and Mary R. Berenbaum’s article “Honey constituents up-regulate detoxification and immunity genes in the western honey bee Apis mellifera”
Objectives The aim of this study is to explore different methods for screening and diagnosing hypertension—which definitions and criteria to use—in children and in addition to determine the prevalence of hypertension in Dutch overweight children. Design A cross-sectional study performed in the Dutch Child Health Care setting. Setting Four Child Health Care centres in different cities in the Netherlands. Participants 969 overweight (including obese) and 438 non-overweight children, median age 11.7 years (range 4.1–17.10), 49% boys. Main outcome measures The main outcome was blood pressure, and the difference in prevalence of hypertension using different criteria for blood pressure interpretation: using the first blood pressure measurement, the mean of two measurements and the lowest of three measurements on two different occasions. Results Looking at the first measurement alone, 33% of overweight and 21% of non-overweight children had hypertension. By comparing the mean of the first two measurements with reference values, 28% of overweight children and 16% of non-overweight children had hypertension. Based on the lowest of three consecutive measurements, the prevalence decreased to 12% among overweight children and 5% among non-overweight children at visit one; at visit two, 4% of overweight children still had hypertension. Conclusions The prevalence of hypertension is highly dependent on the definitions and criteria used. We found a prevalence of 4% in overweight children, which is considerably lower than suggested by recent literature (4%–33%). This discrepancy can be explained by our more strict definition of hypertension. However, to draw any conclusions on the prevalence, normal values using the same definition of hypertension should be established. Despite the low prevalence, we recommend measuring blood pressure in all overweight children in view of later cardiovascular morbidity and mortality.
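The three criteria compared in this abstract can be sketched in a few lines. The single fixed systolic cutoff below is a hypothetical simplification for illustration only; the study used age-, sex- and height-specific reference values.

```python
# Sketch of the three screening criteria compared in the study:
# (1) first reading alone, (2) mean of the first two readings,
# (3) lowest of three readings. The stricter the criterion, the
# fewer children are flagged, which mirrors the falling prevalence.

def flagged(readings, cutoff=120):
    """Return (first, mean_two, lowest_of_three) hypertension flags."""
    first = readings[0] >= cutoff
    mean_two = (readings[0] + readings[1]) / 2 >= cutoff
    lowest = min(readings[:3]) >= cutoff
    return first, mean_two, lowest

# A child whose pressure settles over repeated measurements is flagged
# by the looser criteria but not by the strictest one:
print(flagged([128, 118, 112]))
```

This is why repeated measurement matters: an initially elevated reading often regresses, so the first-reading criterion overstates prevalence relative to the lowest-of-three criterion.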
The whirling appearance of Uzume is based on the idea of a space that grows and changes dynamically over time, and is "drawn" purely by movement. After Aristotle, the dialectic of matter and space appears in movement. Movement is the material aspect of time, and there is no time without a subject; the material aspect of time thus also determines a formal aspect. Heinz von Foerster says in "Wahrnehmen wahrnehmen" (Perceiving Perception) that it is the variation of what we perceive, generated by movements, which enables us to experience. Stanislaw Lem talks in 1964 of the "phantomatic machine", stating that the effect of phantomatics can be considered an "art with feedback" that enables the former recipient to become an active participant, a hero. Oliver Sacks describes chaos as referring to systems that are extremely sensitive to the smallest, partly infinitely small, modifications in their initial conditions; the status of such systems quickly becomes unpredictable. In "Medien-Zeit-Raum" (Media-Time-Space), Goetz Grossklaus states that time becomes the actual medium of each computer-generated simulation: cybernetic space (cyberspace), the space of action and movement, is nothing else than a time-space. Participants are challenged to "communicate" with their movements and thus to motivate their opposite to respond. It is fascinating to observe what a 'lively' character the unpredictable behavior [of the chaotic system] takes on. Michael Heim's interpretation of the ancient Greek term "prosopon" (a face facing another face) describes two faces that make up a mutual relationship, in that one face reacts to the other, and the other face reacts to the other's reaction. The relationship then creates a third state of being that lives on independently. Metaphoric spaces of virtual environments are not technologically constructed, but rather shaped by the memories, emotions and the social context of their inhabitants.
Normalised floating-point binary numbers are the binary equivalent of denary standard form. Very large or very small numbers have their digits shifted left or right so that they start immediately to the right of the binary point - this forms the mantissa - and the number of places it's been shifted to the right becomes the exponent (with a negative number indicating that the mantissa was shifted to the left). The only slight difference between normalised binary and denary standard form is that, in denary, the mantissa is greater than or equal to one and less than ten - i.e. there are digits to the left of the decimal point - whereas in binary, because we use two's complement form for negative numbers, the bit before the point is used as the sign, with all of the actual number coming after the point. This page is a normalised binary calculator (using the two's complement method) - you can enter a denary number in the textbox below and the required mantissa and exponent will be displayed, along with an explanation of the process. If you aren't familiar with the use of binary for fractions and negative numbers, click here. Note that there are other methods for storing floating-point binary numbers, including IEEE 754. (Interactive calculator inputs: a denary number, the number of bits for the mantissa, and the number of bits for the exponent.)
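The conversion described above can be illustrated in reverse: a short sketch that decodes a two's complement mantissa and exponent back into denary. This is an illustrative decoder under the representation the page describes (sign bit before the point, fraction after it), not the page's actual script and not IEEE 754.

```python
# Decode a normalised two's complement floating-point value. The first
# mantissa bit (before the binary point) is the sign, the remaining bits
# follow the point, and the exponent says how far the point was shifted.

def decode(mantissa_bits, exponent):
    """mantissa_bits is a bit string like '01101': sign bit, then fraction bits."""
    sign = int(mantissa_bits[0])
    # Bit i after the point is worth 2**-(i+1):
    value = sum(int(b) * 2 ** -(i + 1) for i, b in enumerate(mantissa_bits[1:]))
    if sign:                      # in two's complement the sign bit is worth -1
        value -= 1
    return value * 2 ** exponent

# 6.5 is 110.1 in binary; shifting the point 3 places left normalises it to
# mantissa 0.1101 with exponent 3. Its negation has mantissa 1.0011.
print(decode("01101", 3))   # 6.5
print(decode("10011", 3))   # -6.5
```

Because every term is a power of two, these examples are exact in floating point, which makes the round trip easy to check by hand.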
Answer by Ethan Hein, master's candidate in music technology at NYU, composer, teacher, all-around music dork: The academic music world is slowly coming to grips with the ways that the traditional theory curriculum serves practicing musicians pretty poorly. The pop music pedagogy movement is doing some creative work aimed at aligning music education with the way people experience and understand music in the present. Rather than trying to identify a canonical body of works and a bounded set of rules defined by that canon, we should take an ethnomusicological approach. We should be asking: What is it that musicians are doing that sounds good? What patterns can we detect in the broad mass of music being made and enjoyed out there in the world? I have my own set of ideas about what constitutes common-practice music in America in 2014, but I also come with my set of biases and preferences. It would be better to have some hard data on what we all collectively think makes for valid music. Trevor de Clercq and David Temperley have bravely attempted to build just such a data set, at least within one specific area: the harmonic practices used in rock, as defined by Rolling Stone magazine's list of the 500 greatest songs of all time. De Clercq and Temperley transcribed the top 20 songs from each decade between 1950 and 2000. You can see the results in their paper, "A Corpus Analysis of Rock Harmony." They also have a website where you can download their raw data and analyze it yourself. The whole project is a masterpiece of music theory, as opposed to the stodgy kind. Of course, the Rolling Stone top 500 has some problems as a data set. First of all, there's no common agreement as to what the word rock even refers to. De Clercq and Temperley identify two main senses of the word. There's the sense Rolling Stone uses, an umbrella term for late-20th-century Anglo-American popular music. By this definition, rock includes soul/R&B standards, disco hits, middle-of-the-road pop, and a few iconic country, jazz, and hip-hop songs.
On the other hand, there's the more narrow and descriptive sense of the word rock that includes Led Zeppelin and Aerosmith but specifically excludes jazz, hip-hop, and so on. Taking this view, the Rolling Stone list is not really a list of rock songs; it's a list of "the greatest songs of the rock era." De Clercq and Temperley don't get too bogged down in the semantics; the Rolling Stone list is as complete a consensus mainstream pop collection as exists, so it's a good-enough place to start. A few results jump out from the study. As you'd expect, the tonic (I) is the most commonly used chord in the Rolling Stone corpus. However, the next most common chord is IV, and it most frequently precedes I. Right away, we have a conflict with traditional classical theory, where the most basic tonal building block is the V-I cadence. Rock uses plenty of V-I, but it uses even more IV-I. And the third most common pretonic chord in rock is not ii, like you'd expect if you went to music school; it's bVII, reflecting rock musicians' love of the flat seventh. These same three chords—IV, V, and bVII—are also the ones most likely to follow the tonic in rock, again very much at odds with classical practice. De Clercq and Temperley observe: In light of this data, one might conclude that rock is not governed by rules of "progression" at all; rather, there is simply an overall hierarchy of preference for certain harmonies over others, regardless of context. In common-practice music, conventional theory dictates that certain root patterns are preferred over others: ascending motion by fourths is especially normative (much more so than descending fourth motion); descending thirds are favored over ascending thirds, and ascending seconds over descending seconds (Schoenberg 1969). Are these principles observed in rock as well? It can be seen immediately that the norms of common-practice music do not hold. For each interval, the ascending and descending forms are roughly equal in frequency.
The ascending perfect fourth is almost exactly as common as the descending perfect fourth; for other intervals, too, a similar pattern is seen. The frequency of intervals decreases in a very regular way as circle-of-fifths distance increases. The blues is a central pillar of rock, and blues violates quite a few tenets of common-practice classical harmony. The biggest one is the distinction between major and minor. The sound of blues is in large part the sound of minor melodies and chord extensions over major chord progressions. The more blues-oriented flavors of rock are similarly ambiguous in their major/minor identity. A lot of the time, rock chords are neither major nor minor, like the famous power chord, which is just root-fifth-root. The harmonic situation gets more complicated still if you include hip-hop in the data set. The Rolling Stone list includes a Public Enemy track that doesn't have any triadic harmony at all. De Clercq and Temperley dealt with that by just not including the track in their analysis, which is unfortunate. A real theory of contemporary music would have to deal with hip-hop, which may not have triads but does have strongly melodic unpitched vocal lines, modal harmonies, and sometimes very crunchy dissonances. To my mind, the most intriguing idea put forth by de Clercq and Temperley is the supermode, the collection of pitches most frequently used in rock melodies. The supermode could be viewed as the union of the Ionian (major) and Aeolian (natural minor) modes; one might also think of it as a set of adjacent scale degrees on the line of fifths, extending from flat scale degree 6 to scale degree 7. In enharmonic terms, this collection excludes just two scale degrees, sharp 4/flat 5 and sharp 1/flat 2—precisely the same degrees that are outside the "global" scale collection of common-practice music. I like the idea of the supermode.
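The supermode described above is easy to check with a few lines of Python, treating each mode as a set of semitone offsets above the tonic:

```python
# The supermode as the union of the Ionian (major) and Aeolian (natural minor)
# pitch collections, written as semitones above the tonic.

ionian = {0, 2, 4, 5, 7, 9, 11}      # 1  2  3  4  5  6  7
aeolian = {0, 2, 3, 5, 7, 8, 10}     # 1  2  b3 4  5  b6 b7
supermode = ionian | aeolian

excluded = set(range(12)) - supermode
print(sorted(supermode))  # [0, 2, 3, 4, 5, 7, 8, 9, 10, 11]
print(sorted(excluded))   # [1, 6], i.e. b2 and #4/b5
```

The union covers ten of the twelve pitch classes, and the two left out (semitones 1 and 6 above the tonic) are exactly the flat 2 and sharp 4/flat 5 that the text says fall outside the collection.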
Classical music's obsession with the major scale runs counter to most Americans' intuition. Sure, we like the major scale fine, but it doesn't feel like the One True Generative Scale that classical music holds it to be. Flat sevenths sound as "natural" to me as natural sevenths. (Actually, flat sevenths are a lot lower in the overtone series; you could make a case that mixolydian should be the One True Scale.) I think the best idea would be to just teach kids the supermode, rather than hitting them with the confusing idea that you have to modify the major scale to get the sounds you're used to.
JOURNAL: PHEASANT NOTES Ring-necked pheasants are related to quail, grouse, turkeys and domestic barnyard chickens. These natives of Asia first arrived in the U.S. in 1882 when Judge Owen Denny, then consul-general at Shanghai, had several shipped to his brother's farm in Oregon. The imported ring-neck made its debut in Minnesota in 1905 when the Game and Fish Department received 70 pairs from Wisconsin and Illinois. The first state hunting season was in 1924 in Hennepin and Carver Counties when an estimated 300 roosters were killed. Only seven years after the first season, 49 counties were opened to hunting and more than 1 million roosters were taken. Ring-necked pheasants filled a niche which had previously been filled by the prairie chicken. In Minnesota, ring-necked pheasants are known to feast on 515 different kinds of food. One of their favorites is corn.
David Vaux and Andreas Strasser from the Walter and Eliza Hall Institute of Medical Research have been awarded the biennial CSL Florey Medal for their work identifying cell death triggers and using them to fight cancer. The award, which is presented by the Australian Institute of Policy and Science (AIPS), recognises significant lifetime achievements in biomedical science and human health advancement. In the late 1980s and early 1990s, Vaux and Strasser discovered the molecular processes that cause billions of our cells to die every day, showing that some cancer cells can evade this process of programmed cell death and thus “fail to die”. They found that a gene called Bcl-2 keeps cancer cells alive and increases their resistance to chemotherapy. This led to the development of a potent new inhibitor of Bcl-2 that is now used to treat leukaemia around the world. “I’m proud to share this honour with Andreas,” Vaux says. “Bcl-2 was the spark that ignited a whole new field that has given new insights not only into the origins of cancer but also, as first shown by Andreas, autoimmune disease. But cell death research has only just begun.” Strasser agrees. “Although our research into cell death and cancer has been under way for decades, it remains for me a vital and exciting field,” he says.
Hello and welcome to the 2018 series of bloom reports for Rhododendron State Park in Fitzwilliam, NH! Normally, as is the case this year, the flowers of the native Rhododendron maximum at the grove in Fitzwilliam don't start to open up until after the beginning of July, with the most blooms visible during the second or third week. Exact timing of the bloom is always difficult to predict, but count on the middle of July for the best show. Last year (2017) the bloom came a few days early, but the best show was still between the second and third week of July. The regional weather, among other more mysterious things, can have a profound effect on plants in general. One constant aspect that will not be changing anytime soon, barring any hideous planetary misfortune, is the amount of dark time (some will say "photoperiod," but it's really the amount of dark hours that plants measure) a plant receives, which governs its seasonal activity. So why does the weather affect the bloom time? A simple explanation: every sort of plant needs a certain range of temperature for a certain amount of time in order to trigger its activities (not to mention moisture, another important ingredient for growth and blooms that is always affected by the weather). Speaking of which, the wide fluctuation of thermometer readings this last winter was hard on a large number of plants, including some of the hybridized and cultivated rhododendrons that are now starting to flower. Mostly the leaves, but some flower buds as well, are looking pretty brown from desiccation (drying out). Rhododendrons are well equipped to resist drying by curling and drooping their leaves during cold and/or dry spells – it often happens during the summer as well – so it follows that conditions were severe enough during the winter months to overcome the defense that rhododendrons evolved to survive. Or was it also the very dry August of 2017 that contributed?
At least at the grove in Fitzwilliam, I observed very little evidence of such damage. Is it because they are native? Perhaps. But they also are protected by trees above and wet soil below. This combination in itself may be enough. Who knows? Until next time,
The goal of the Natural Resource Management (NRM) Program is to ensure that beneficiaries of Africa Harvest projects manage natural resources well, even as we focus on various development targets. We believe this will unlock maximum benefits for our beneficiaries and other stakeholders while mitigating environmental degradation arising from desertification, water pollution, environmentally related conflicts, climate change and loss of biodiversity. There is currently no stand-alone project in the NRM Program. However, all Africa Harvest projects incorporate NRM to ensure sustainable agriculture and promote environmental conservation. Areas of future intervention include:
• Spearheading participatory natural resource management planning.
• Conservation agriculture that integrates crop and tree planting.
• Desert and arid lands rehabilitation.
• Riparian zones conservation and rehabilitation.
• Construction of sand dams along seasonal rivers in semi-arid lands.
• Teaching farmers how to terrace and manage sloping land.
Addressing farmer organizations' challenges in ASALs through water access
The project was implemented in Makueni County, which lies in the ASALs of the Eastern region of Kenya. The County is characterized by a rapidly growing population, especially among youth under 30 years, who comprise approximately 70% of the total population. Rural youth are the future of food security, yet around the world few young people see a future for themselves in agriculture or rural areas. Makueni County has high poverty levels, with the mean varying among sub-counties from 36% to 76%. The project's target area was the Mulala and Wote sub-counties, situated in the low-lying region of the County, which receives 150 mm to 650 mm of rainfall per annum. The area has average high temperatures of 35.8ºC (MCDP 2013–2017).
FOSEMS II focuses on water provision in Kenya's ASALs
Africa Harvest implemented Phase II of the FOSEMS project.
At the end of the project’s first phase the target communities identified water as one of their highest priority needs. The main challenges identified were household access to water during drought (especially by women), and safe drinking water for children in schools. The project responded by providing 10,000-liter plastic water tanks to help harvest roof water in three schools.
What Does "Texas" Mean? Texas is from the Caddo Indian word "teyshas" (meaning friends or allies). In the 1540s Spanish explorers took this to be a tribal name, recording it as Teyas or Tejas. It came eventually to mean an area north of the Rio Grande and east of New Mexico. The alliance concept is also incorporated into the state motto, which is simply "Friendship."
Nano Flakes promise greater solar energy efficiency By Emily Clark, December 19, 2007. The inefficiency of solar cells in converting the sun's rays into electricity is a key contributor to the high costs of solar energy, but new research into a novel shape of semiconductor nanostructure known as "nano flakes" may revolutionize the process and help improve the viability of clean energy derived from the sun. Details of the research by Martin Aagesen, a PhD graduate of the Nano-Science Center and the Niels Bohr Institute at the University of Copenhagen, were recently published in Nature Nanotechnology. If his "future solar cells" meet expectations, they could be a huge step toward boosting the world's exploitation of solar energy. Aagesen believes that the nano flakes have the potential to convert up to 30 percent of incident solar energy into electricity, roughly twice the amount that the average solar cell converts today. The discovery was made during Aagesen's work on his PhD thesis, when he found a new and untried material. "I discovered a perfect crystalline structure. That is a very rare sight. While being a perfect crystalline structure we could see that it also absorbed all light. It could become the perfect solar cell," he said. The technology has the potential to reduce solar cell production costs, which depend on expensive semiconducting silicon. At the same time, the "future solar cells" will exploit solar energy more effectively and lessen the loss of energy. Aagesen is also director of the company SunFlake Inc., which is pursuing development of the new solar cell. Other recent efforts to address solar cell efficiency include a breakthrough from SANYO in June this year, when the company broke its own record for the world's highest energy conversion efficiency in practical-size crystalline silicon solar cells by demonstrating an efficiency of 22%.
In December last year Spectrolab achieved a world record in terrestrial concentrator solar cell efficiency, using a photovoltaic cell to convert 40.7 percent of the sun's energy into electricity. More recently, Global Warming Solutions announced the development of new solar energy conversion technology based on a special coating that can be applied to existing solar cells.
What is a layout? The purpose of layouts: layouts involve working on the arrangement and allocation of elements. The objective of a layout is to convey information in an easy-to-understand form, by organizing that information and visualizing hierarchies in the design. Effective layouts display strong communication skills, even in business documents and presentation materials. Complicated materials that cannot be understood even when read thoroughly can be transformed into materials that share important information at a glance, just by keeping in mind some rules for creating layouts. Here we introduce ways of thinking about layouts that are useful when creating materials for businesspeople, as well as some basic layout rules. Layout thinking methods: first, identify all of the elements of the design, such as the photographs, data, titles, big headlines, small headlines, and text placed in the materials. We recommend using sticky notes to organize these elements so that their order can be easily modified. Then group all similar elements. Here you can rearrange information while keeping in mind which elements should be arranged vertically (for example, Headline 1 -> Photograph 1 -> Text 1) and which pieces of information you want to convey should be arranged in a line, such as an opening date and a deadline. Making a story: by rearranging groups you can create a story. You complete your blueprint by rearranging elements with a priority in mind, for example: what you want to communicate first, what information supplements that, and the conclusion. In the actual layout work, you visualize the priority of information and the contrast in accordance with the blueprint. During this process, it is important to understand the four major principles of design (proximity, alignment, contrast, and repetition), as well as people's behavior and psychology. What is visual guidance?
Visual guidance refers to methods for controlling the flow of a person's vision in order to effectively convey information. Visual guidance is used in all kinds of everyday situations, including the composition of photographs and pictures, as well as advertisements and web design. From large to small: people's lines of sight move from large items to small ones. One of the basics of materials creation is taking the information you want people to read first, such as titles, headlines and catch copy, and making it larger to contrast it with other elements. The Z form describes how our vision is guided when we look at paper and screens with horizontally written text. The name comes from our vision flowing in a Z shape: from upper left to upper right, then diagonally from upper right to lower left, and finally from lower left to lower right. The N form describes how our vision is guided when we look at vertically written Japanese text. Our line of sight flows in an N shape: from upper right to lower right, then diagonally to the upper left corner, and from upper left down to lower left. Along with the Z form, it is a basic shape of visual guidance, often used in Japanese newspapers and magazines. The F form involves repeatedly sweeping the line of vision from the left end toward the right while moving downward; many websites use the F form of visual guidance for their layouts. What is proportion? Many designs that people find pleasant are created based on set ratios. Even in materials creation, laying out text and charts based on such ratios lets you create designs that give a more pleasant and sophisticated impression. The golden ratio refers to a vertical-to-horizontal ratio of 1 : 1.618. It is considered the ratio that people find most visually appealing, and it has been used in old paintings and architecture, such as the Mona Lisa and triumphal arches.
A 1 : 1.618 rectangle referred to as the "golden rectangle" is used when applying the golden ratio to layouts. Removing the largest possible square from the golden rectangle leaves another golden rectangle in the remaining space. Repeating this subdivision, and connecting the opposite corners of each square with an arc, draws a smooth spiral. By deciding on the placement of design elements and white space using these straight lines and curves as a reference, layouts become more pleasant to look at. The "silver ratio" refers to a vertical-to-horizontal ratio of 1 : 1.414. This ratio is used in various creations, such as the Five-Storied Pagoda and other architecture, as well as in characters that are very familiar in Japan. Japanese people in particular sense stability and beauty in this ratio, so it is sometimes called the "Japanese ratio." Rule of thirds: the rule of thirds is a layout method in which two vertical and two horizontal lines are drawn at equal intervals, and the points where they intersect are used as a general guideline for placing elements. It was originally used for photographic composition, but it is also useful for creating balanced and visually appealing layouts for printed pages and on-screen design.
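As a quick numerical check of these proportions, here is a small illustrative sketch (the function names are my own, not from this article) that verifies the golden-rectangle subdivision and computes the four rule-of-thirds guide points for a canvas:

```python
# Sketch: checking the golden-rectangle subdivision and the
# rule-of-thirds grid points described above.
PHI = (1 + 5 ** 0.5) / 2  # golden ratio, approximately 1.618

def remaining_rectangle(width, height):
    """Remove the largest square from a landscape rectangle;
    return the (width, height) of what is left over."""
    return (width - height, height)

# Start with a golden rectangle; the leftover piece is golden again.
w, h = PHI, 1.0
rw, rh = remaining_rectangle(w, h)
print(round(rh / rw, 3))  # ratio of the leftover rectangle: 1.618

def thirds_points(width, height):
    """The four intersections of the rule-of-thirds grid."""
    xs = (width / 3, 2 * width / 3)
    ys = (height / 3, 2 * height / 3)
    return [(x, y) for x in xs for y in ys]

print(thirds_points(1200, 900))
```

Running the subdivision repeatedly always yields the same 1 : 1.618 proportion, which is exactly why the spiral construction works.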
The Republic of Belarus is the legal successor of the Belarusian Soviet Socialist Republic. On 27 July 1990, the Supreme Soviet of the BSSR adopted a Declaration on State Sovereignty. A year later the Declaration was granted the status of a constitutional law. The overall territory of Belarus is 207,600 square kilometers. As of 1 January 1996 the population of the Republic reached a total of 10,265,200 people of 100 nationalities. According to the census of 1989, Belarusians accounted for 77.9%, Russians for 13.2%, Poles for 4.1%, Ukrainians for 2.9%, Jews for 1.1%, while Tatars, Gypsies and Lithuanians each accounted for 0.1% of the total population. Other nationalities are represented by fewer than one thousand people each. Despite the multinational population, there have been no interethnic conflicts in recent decades. Minsk is the capital of the Republic of Belarus, with 1,700,300 inhabitants. The population density in Belarus is fairly low: 49 people per square kilometer. The geopolitical location of Belarus is favorable, between West and East in the center of Europe. It borders the Russian Federation in the east, Poland in the west, Lithuania and Latvia in the north and Ukraine in the south. Natural conditions in the Republic are very favorable for people's life and activity: a continental temperate climate, predominantly flat relief, a well-developed hydrographic network, a variety of soils, and rich flora and fauna. Originally (since the XIV century), the term "Belaya Rus" (White Rus, or White Russia) denoted one of the dialect-ethnographic areas, mostly the north-east (the Novgorod, Pskov, Polotsk and Vitebsk lands), of the ancient East Slavic community. In Russian sources of the XV century "White Rus" is frequently identified with the "Great Moscow Rus". There exist various interpretations of the term "White Rus".
Some researchers associate it with independence from the Tatar-Mongols ("white" being treated as "free"), with early Christianization, and even with the more privileged status of the Polotsk and Vitebsk lands within the Grand Duchy of Lithuania. From the second half of the XVI century the term "White Rus" was used to denote the territory between Lithuania and the Moscow State, and the Slavic population of these territories came to be called "Belarustsy" (Belarusians). In the period from the XIII to the first half of the XIV century, the Belarusian and Lithuanian ethnic territories were united by local princes into a sort of federation: the Grand Duchy of Lithuania (GDL). From the XV century it was officially called the Grand Duchy of Lithuania, Russia and Zhemojtia. As an early feudal monarchy, the GDL took shape during the reign of the Grand Duke Mindauh (c. 1200–1263). In the XIII century, the residence of the Lithuanian princes was the Belarusian town of Novogrudok. The Grand Duchy of Lithuania was weakened by wars with the Moscow State and the Crimean Khanate in the XVI century, and by endless internecine wars between feudal lords and princes within the state. Pushed by the Lithuanian and Belarusian gentry, who were striving for the same privileges and rights as those enjoyed by the Polish feudal lords, the GDL was forced to enter into an alliance with Poland. Long-lasting and devastating wars in the second half of the 17th and the early 18th century (the anti-feudal war of 1648–1651; the wars between Poland (Rzecz Pospolita) and Sweden in 1655–1660 and Russia in 1654–1667; and the Northern War of 1700–1721), social disturbances, and internecine wars between magnates and gentry accounted for the prolonged political decline and crisis of Rzecz Pospolita and the Belarusian lands. Incorporation of the Belarusian lands into the Russian Empire had both positive and negative effects. On the one hand, Belarus was liberated from forced Polonization and integrated into the all-Russian economic system.
Internecine feuds were stopped and the anarchism of the gentry was eradicated. On the other hand, the policy of Russification, long-lasting serfdom and its survivals hindered the development of the economy. Occupied by the Russian Empire from the end of the 18th century until 1918, Belarus declared a short-lived National Republic on March 25, 1918, only to be forcibly absorbed by the Bolsheviks into what became the Soviet Union. On 22 June 1941 fascist Germany attacked the USSR. That was the beginning of the Great Patriotic War of the Soviet Union, which lasted from 1941 to 1945. Belarus became an arena of heavy fighting. As a consequence of the Red Army's retreat, a Hitlerite occupation regime was temporarily established on its territory. During the occupation, fascist invaders annihilated over 2,200,000 inhabitants of Belarus; every fourth inhabitant of the Republic perished. At the Crimean Conference (February 1945), the heads of the governments of Great Britain, the USA and the USSR reached an agreement that the USSR would additionally be represented in the international security organization by two of its sixteen Republics: Byelorussia and Ukraine. The resolution of the Constituent Conference (San Francisco, April–June 1945) to include the Ukrainian SSR and the BSSR among the founders of the United Nations Organization became a decisive factor in the Republics' entering the international scene as subjects of international law. The basis for admitting the BSSR and the Ukrainian SSR to the UN was the formally sovereign nature of these Republics, as well as international recognition of the contribution of the peoples of Belarus and Ukraine to the defeat of Nazi Germany and their great sacrifices in the struggle against fascism. On 26 June 1945 the BSSR signed the Charter of the United Nations, which was ratified by the Presidium of the Supreme Soviet of the BSSR in July of the same year.
The membership of the Republic in the UN opened up prospects for its participation (albeit within the framework of USSR initiatives) in the discussion and settlement of important problems by the international community, and for joining the activities of certain specialized institutions and organizations such as the International Telecommunication Union and the Universal Postal Union (since 1947), the World Meteorological Organization (since 1948), the World Health Organization (1948–1949), the International Labour Organization (since 1954), and the United Nations Educational, Scientific and Cultural Organization (UNESCO, since 1954). The massive nuclear accident (April 26, 1986) at the Chernobyl power plant, across the border in Ukraine, had a devastating effect on Belarus: as a result of the radiation release, agriculture in a large part of the country was destroyed, and many villages were abandoned. Resettlement and medical costs were substantial and long-term. According to its amended Constitution, Belarus is a republic with a directly elected President. The President, Alexander Lukashenko (elected in 1994), used a November 1996 referendum to amend the 1994 Constitution in order to broaden his powers, extend his term in office, and replace the unicameral Parliament with a handpicked one, ignoring the then-Constitutional Court's ruling that the Constitution could not be amended by referendum. Most members of the international community criticized the flawed referendum and do not recognize the legitimacy of the 1996 Constitution or the bicameral legislature that it introduced.
After the internationally unrecognized November 1996 constitutional referendum, which resulted in the dissolution of Belarus's legitimate parliament and the centralization of power in the executive branch, Lukashenko provoked a diplomatic crisis by demanding and eventually confiscating diplomatic residences on the Drozdy compound, taking the U.S., German, British, French, Italian, and IMF residences away from those missions, ignoring outstanding lease agreements, and leaving the confiscation uncompensated. In addition, Lukashenko used his newly centralized power to repress human rights throughout the country, particularly those of members of the disbanded 13th Supreme Soviet, the legitimately elected parliament at the time, and of former members of his own government. Lukashenko renewed his term of office as President through an election process that the Organization for Security and Cooperation in Europe (OSCE) described as neither free nor fair and as having failed to meet OSCE commitments for democratic elections. Parliamentary elections were held in October 2000, the first since the 1996 referendum. The President and his administration manipulated the election process to ensure an absolute minimum of anti-regime candidates and opposition members of Parliament. The OSCE concluded that these elections, too, were neither free nor fair. The judiciary is not independent. Profiteering from systemic corruption keeps the governing class faithful to the president. Lukashenko buys the nomenclature's loyalty by allowing them or their family members to receive interests in state-run companies, which they use to line their pockets. Lukashenko thereby keeps the nomenclature financially satisfied, which prevents the opposition from attracting any nomenclature support. On the other hand, if Russia succeeded in forcing economic concessions out of Lukashenko, the regime could lose its ability to keep the nomenclature satisfied and Lukashenko could fall from power.
Lukashenko likely realizes this.
By Lewis E. Lehrman, May 2013 Issue, The American Spectator. LATELY WE HAVE BEEN engulfed by headlines reporting financial turmoil on every continent, in almost every nation, large and small. The commissars of central planning who so marred the history of the 20th century have been replaced by central banks in the 21st. In Cyprus, the new leadership now dares to confiscate citizens’ wealth with a one-time tax of up to 60 percent on bank deposits above 100,000 euros. Self-interested prime ministers blame continental monetary policies for instigating the currency wars that they themselves surreptitiously carry on. Central banks worldwide, led by the U.S. Federal Reserve, mint new money ceaselessly to bail out insolvent governments, insolvent banks, and insolvent but politically powerful corporations and labor unions. This new money goes first to insiders in the financial sector, who exchange the cheap credit for commodities, stocks, and real estate at ever-rising prices. This is the so-called carry trade, monopolized by a financial class that uses free money from the Fed to front-run the authorities for insider profits. From the beginning of the American republic until not long ago, dollars could be exchanged for gold at a parity established by congressional statute (1792–1971, though from 1934–1973 convertible by foreigners alone). Currency convertibility to gold, enforced by law, established a finite limit to the money supply.
Inflation—caused by the issue of excess money and credit—would lead citizens to promptly cash out for gold, thus reducing the money supply and ending the rise in prices. In a sense, the system was self-regulating. With an unlimited money supply, the insolvency of national banking institutions has become an endemic global problem. Depositors are at risk of loss or arbitrary confiscation by panicked political authorities, as in Cyprus. Taxpayers are involuntarily dragooned into bailing out the banking system, as at the start of America’s recession. And if the central bank credit bubble collapses, systemic deflation will be the profound and destructive consequence. The expropriation in Cyprus, the problems in the eurozone, the unrest in Iceland, and the crisis of the American banking system are but a few examples of the legions of insolvencies engendered by the unrestrained and unlimited issue of inconvertible money and credit balances by central banks that are not restrained by effective institutional limits—except that of collapse itself. This has not always been the case. The institutions of money and credit evolved over a period of three millennia, and the story of their origin suggests that stability and trustworthiness were once paramount goals. In the Beginning, There Was Barter FORERUNNERS OF MAN LIVED on the planet several million years ago, but a distinctly human social order emerged only 4,000 to 5,000 years ago. Historical and archeological evidence suggests that the institution of money evolved coterminously with civilization. From the standpoint of the 100,000-year history of Homo sapiens, civilization and money are but young and fragile reeds. In the beginning, there was barter: the moneyless exchange of one man’s goods for those of another. Each family stored varied supplies—wheat, wood, or venison—to exchange directly for others—cows, tools, or coal. But barter cannot always work.
For example, the meat one man produces might not be desired by the person with whom he tries to trade. Thus other, more indirect forms of commerce developed. Under a system called potlatching, practiced by natives in northwestern North America, one party gives a gift with the hope but not the certainty of a return. There is no guarantee of an immediately satisfying exchange, but in a tight-knit community, reciprocal faith acts as a sort of invisible currency—the “money” of a moneyless community. Gifts are given in exchange for unwritten promissory notes, implied liabilities that the grateful debtors repay in the future with gifts in return. Potlatching amplifies barter and indirectly tends to encourage growth. All members of a community freely make goods for one another, which are often repaid in kind and more. As George Gilder puts it, this productive circle of givers increases the sympathy of its members for the special needs of one another. Money evolved through a historical process not unlike that of trial and error or natural selection. But standardized and certified coins originated with an act of human creativity around 650 b.c. The first such coins appeared in Lydia, Asia Minor, at a time when the original Sumerian civilization was in its fourth millennium of development. The Lydians minted coins using a natural mixture of gold and silver called electrum. Lydian coins exhibited specific properties that made them uniquely suitable as a medium of exchange: They were small, portable, and enduring. Made of scarce precious metals, they were beautiful and cherished. Men had exerted great effort and intelligence to produce them, giving them intrinsic value. The test of time proved the lasting worth of Lydian coins. Merchants came freely to select these coins for their intrinsic monetary properties, and they became increasingly accepted in ever-widening trade by people of many tongues in the Levant and Near East. 
Coins became a useful yardstick by which to measure the value of the other products of human intelligence available on the market. Their high value, relative to their small size and low weight, made them easy to transport and store. History taught the ancients that the value of these precious metals endured, and that their purchasing power remained reasonably stable from year to year, even generation to generation. Money enhanced the options of all those who toiled, augmenting their freedom to provide for themselves as they pleased. Workers could defer purchases because they held concrete tokens encompassing an irrevocable right to demand future goods. They could even leave a surplus for their children. Real money lasted. It could be inherited and passed on.
Exploring math every day in the preschool classroom! Exploring the wonderful texture and qualities of our brightly colored self-made M&M paint in preschool. We explored bright and colorful M&M counting and color sorting in preschool! Making games from pizza boxes for my preschool students to explore! Our take-home math bags open the door to lots of mathematical thinking in the classroom and at home! By popular demand, we went on a nature hike in the woods and collected items for our sorting baskets! A terrific and simple activity for promoting shape sorting, matching, and patterning! This little bug activity box was simple to put together and allowed for both some color sorting and simple play! This morning, I had a special guest join me for my three-minute segment on local Indiana Fox 59 Morning News… Tristan and his parents joined me at the news studio bright and early this morning to present my newest segment – see video below. Everyone pitched in to help me get ready… First I […] Using homemade Easter grass and plastic eggs, the children will enjoy a little sorting game and making their own colorful bird nest!
Editor's Commentary, from the monthly column CNC Tech Talk. An appropriate machining order will stabilize the machining process, while an inappropriate order might make it impossible to machine acceptable workpieces. Indeed, machining order is so important that some operators limit the task of process stabilization to the step-by-step order in which workpieces are machined. Someone in your company, typically a process engineer, will determine how a workpiece is routed through the shop. Based on the capabilities and capacities of the company’s equipment, the process engineer will select machines that are capable of producing acceptable workpieces. One common process-stabilization problem arises during this initial selection of machine tools. Often, so many workpiece surfaces need to be machined that it is difficult to perform all of the machining operations in a single setup. That is, two or more setups may be necessary to completely machine the workpiece. This introduces the possibility of workpiece misalignment from one setup to the next, and any misalignment will result in difficulties holding tolerances among surfaces machined in different setups. Such process mistakes can be costly because all workpieces will be machined in the first setup before the second setup is made; problems found in the second setup may result in having to scrap the entire lot. When it comes to an individual setup on a given machine, the most important rule of thumb for machining order is to rough everything before you finish anything. By roughing first, you can ensure that workpiece surfaces will not move in the workholding device once the finishing operations begin, because finishing operations do not stress the workpiece or workholding device the way roughing operations do. Burr removal can be another machining-order-related problem. Many companies expect CNC operators to remove all burrs after the CNC cycle is completed.
Often, certain sharp edges can be eliminated by a simple change in machining order. Reversing the order of machining among finish facing, finish boring and finish turning may prevent the formation of a sharp edge. Unfortunately, you won’t know whether changing the machining order will work until you try it. Changes must be made to the CNC program—changes that require the programmer to cut and paste commands from one point in the program to another. While some CNC controls have simple cut-and-paste capabilities, many do not, and some that have this capability are difficult to use. There is an easy way to change machining order without having to make massive changes to the program: use unconditional branching statements. In custom macro B, these are GOTO statements, but even a simple M99 can be used in main programs to change the program’s execution order. Here’s an example of a process you want to change:

O0001 (Main program)
N005 (Rough face and turn)
...
N065 (Drill)
...
N105 (Rough bore)
...
N150 (Finish bore)
...
N210 (Finish turn)
...
N250 M30 (End of program)

Maybe you want the finish-turning tool to run first, to see if that will eliminate the burr on the face of the workpiece. Here is the modified program that provides this change:

O0001 (Main program)
N005 (Rough face and turn)
...
N065 (Drill)
...
N105 (Rough bore)
...
N148 M99 P210 (Go to line N210)
N150 (Finish bore)
...
N208 M99 P250 (Go to line N250)
N210 (Finish turn)
...
N248 M99 P150 (Go to line N150)
N250 M30 (End of program)

With three simple commands, we’ve changed the execution order of this program.
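To make the effect of those M99 jumps concrete, here is a small illustrative sketch (Python, not CNC code; the data structure is my own) that follows the modified program's sequence numbers and prints the order in which the operations actually execute:

```python
# Each sequence number maps to (operation, next sequence number to run).
# The "next" fields encode the M99 P___ jumps from the modified program:
# N148 M99 P210, N248 M99 P150, and N208 M99 P250.
program = {
    "N005": ("Rough face and turn", "N065"),
    "N065": ("Drill", "N105"),
    "N105": ("Rough bore", "N210"),   # N148 M99 P210 skips ahead to finish turning
    "N150": ("Finish bore", "N250"),  # N208 M99 P250 jumps to the end
    "N210": ("Finish turn", "N150"),  # N248 M99 P150 jumps back to finish boring
    "N250": ("End of program", None),
}

def execution_order(program, start="N005"):
    """Follow the chain of jumps and collect operations in run order."""
    order, block = [], start
    while block is not None:
        operation, nxt = program[block]
        order.append(operation)
        block = nxt
    return order

for step in execution_order(program):
    print(step)
```

Tracing the chain shows the finish-turning tool now running before the finish-boring tool, exactly the reordering the three M99 commands were added to achieve.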
Data and the algorithms that organise it are core to many services in the digitalised world. Jonny Shipp, Director of Public Affairs at Telefónica SA and a Visiting Fellow at LSE, and Dr Ioanna Noula, researcher at the UCL Institute of Education and a Visiting Fellow at LSE, write here about the ethics of data science and how to increase sustainability, following a workshop on the topic held at LSE last month. Complex algorithms, together with analytical and visualization techniques, are helping optimise transport systems, understand consumer behaviour and track the spread of ideas. The amazing things people are doing with data are at the heart of digitalisation. Speaking at the recent World Economic Forum (WEF) in Davos, Ángel Gurría, Secretary-General of the OECD, highlighted that “Digitalization is not something we can decide whether to adopt or not. It is inevitable and we have to embrace it, because there is no alternative.” The human element Also at the January 2017 WEF, the forum’s founder and chief Klaus Schwab called in his opening address for “humanization over robotization”, as populism poses challenges to the automation of industries across the world. The previous week, at LSE’s first meeting on Digital Life, researchers and industry experts were asked to consider the human effort that lies behind digitalisation: the deep-down work of the “data carers” who clean and prepare data to make it usable by data scientists. As with other aspects of digitalisation, despite increasing automation, people do remain involved. Extraordinary machines take centre stage, trained and tuned by an elite group of engineers. Yet human judgement is definitely missed when, for example, a computer judging a beauty contest chose all white finalists, or when racist chatbots perpetuate racism, or in criminal sentencing risk assessments, or in the “echo chambers” of social media, in which people’s opinions are reinforced more than challenged.
The algorithms struggle to tell right from wrong before replicating, and sometimes amplifying, human wrongs. Designed to make light work of complex decision-making processes, their reasoning increasingly goes unexplained. If it is not possible to describe how an algorithm makes a decision, how can it, or its owner, be held to account for the decisions it makes? Social media companies want to continue to exist and be successful without causing ethical problems or friction with lawmakers. In the US and Germany, Facebook has felt it necessary to introduce human "fact checking" by news organisations to address issues with "fake news": untrue stories that people "like" and that are then amplified by algorithms that aim to deepen audience engagement, oblivious to truth. The UK House of Commons Culture, Media and Sport Committee recently launched an inquiry into the phenomenon of fake news. Should organisations deploy automated decision-making technologies that they do not understand? Should the creators be required to include explanations of how decisions are made? If these technologies are so complex and impenetrable that it is impossible to be transparent about exactly how decisions are taken, then how do we protect against the real human harm they might cause?

Ethics, law, accountability, research, sustainability

Ethics, law, accountability, research and sustainability must all be part of the solution. Data protection laws provide for "fair processing" and for "transparency", but the application of these concepts can be ineffective. Some have called for such ideas to be reframed, so that long, complex and usually unread terms and conditions might cease to pass for transparency, for example. If algorithmic transparency is not possible, then algorithmic accountability has a role to play. Those organisations that deploy data analytics must be able to account for the outcomes.
They must be able to demonstrate strong leadership oversight of operations, effective monitoring, reporting and incident response, analysis and mitigation of risks of harm to people, appropriate internal policies and procedures, privacy by design and ways for people to complain and seek redress. Accountability goes beyond the law, requiring organisations to consider their use of data in a strategic way. Explicit ethical frameworks such as the one created by the UK Government Digital Service are an important part of this accountability approach, enabling the wide range of people involved with data to consider the limitations of their techniques and of the data, and encouraging them to challenge themselves and others about the benefits and limitations of what they do. A challenge for researchers who may have technological solutions to offer is that they appear not to be getting access to the huge data sets upon which data analytics and machine learning often depend. Why? Because organisations are scared of what might happen. They know that negative news stories threaten the huge potential for innovation with data. For 30 years, the idea of sustainability has been at the core of the human response to climate change. The Brundtland report defined sustainable development as "development that meets the needs of the present without compromising the ability of future generations to meet their own needs." "Sustainability thinking" in data analytics might therefore ask: how can we maximise the contribution that data makes to the economy, society and individuals, while preserving or even enhancing individual freedoms and rights? The idea is elaborated in Facebook's "new paradigm for personal data", which emphasises the role of individual data subjects as agents in the data economy, able to choose rights as well as benefits, privacy and data-driven innovation.
Taking the data back

As Schwab highlighted at the 2017 WEF, the rapid development of technology including artificial intelligence and machine learning raises pressing questions about who or what is in control. Ultimately, for a "better digital life," the excitement of the miracles of automation must be balanced with safety, security, social cohesion and cultural fulfilment. Digital entrepreneurs have embraced the idea of sustainability in digital environments, seizing the opportunity of Personal Information Management Systems (PIMS): systems that enable people to take control of their personal data and act as brokers for their participation in the data economy. Larger digital players and infrastructure providers too are starting to adopt "customer in control" strategies. By helping people to understand the data ecosystem, decide when and how to participate with their own data, and even seek the best returns on their data, organisations hope to build trust. Putting people in control of their data may help to restore trust between people and organisations. Some argue that even this may not be sufficient to achieve sustainability. A bolder approach to achieving justice and tackling inequality in the digital world might entail seeking stronger alignment between the driving goals and measures of success for digital platforms and the needs and aspirations of society. Citizen participation might be able to reverse the crisis of legitimacy faced by political institutions across the world. Digitalization is central to this transformation, as it opens up access to information and communication on an unprecedented scale. By supporting participation and enhancing spectatorship, digital technologies offer a strong foundation for social progress and give citizens the chance to take more active roles in today's "platform society".
The Digital Life of Cities

Our discussion of ethics in data science reveals a developing dance of people and machines, and offers a view of how the forces of law and ethics, people and politics are shaping and being shaped by digitalisation. Situated at the forefront of business and cultural life, urban development offers another perspective. Digitalization and data-driven decisions promise to make civic services smarter, improving many different aspects of the urban environment. The digital infrastructure of cities is characterised by the connection of things as well as people to communications networks, leading to a proliferation of sensors and data collection in urban environments. In March 2017, the second meeting of LSE's Digital Life series, Digital Life of Cities, will focus on the data-driven "smart city". How is digitalization making cities more liveable? How should the smart city ecosystem and its data be governed? What are the barriers and enablers to building sustainable digital urban lives? This post gives the views of the authors and does not represent the position of the LSE Media Policy Project blog, nor of the London School of Economics and Political Science. They would like to thank Bojana Bellamy (Centre for Information Policy Leadership), Madeleine Greenhalgh (UK Government Digital Service), Stephen Deadman (Facebook), Daniel Bates (O2) and Jean-Christophe Plantin (LSE) for their contribution to LSE's Digital Life series.
A new radiation source, 145Sm, has been produced for brachytherapy, with radiation energies slightly above those of 125I and a half-life (T1/2) of 340 d. The source is produced by neutron irradiation of 144Sm (96.5% enriched). Decay is by electron capture, with 140 K X-rays per 100 disintegrations in the energy region between 38 and 45 keV, plus 13 gamma-rays (61 keV) per 100 disintegrations. The sources are encapsulated in Ti tubes, approximately 0.8 mm × 4.5 mm, and have been developed for temporary implantation in brain and ocular tumours. The 38-61 keV photons should make such sources easy to shield, while providing a dose distribution from source arrays somewhat more homogeneous than that from 125I. In addition, the 340-d half-life of 145Sm permits its use for times significantly longer than the 60-d half-life of 125I allows. While the 145Sm sources have been designed primarily for implantation in brain tumours, they should be useful for almost any conventional brachytherapy application.
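For a sense of what the longer half-life buys, the fraction of activity remaining follows N/N0 = 2^(-t/T1/2). A small sketch using only the half-lives quoted above (the 90-day implant interval is an illustrative assumption, not from the source):

```python
def fraction_remaining(days, half_life_days):
    """Fraction of initial activity left after `days`, from N = N0 * 2**(-t/T_half)."""
    return 2 ** (-days / half_life_days)

# Half-lives from the text: 145Sm ~340 d, 125I ~60 d.
# After a hypothetical 90-day temporary implant:
sm145 = fraction_remaining(90, 340)  # roughly 0.83 of initial activity remains
i125 = fraction_remaining(90, 60)    # roughly 0.35 of initial activity remains
print(round(sm145, 2), round(i125, 2))
```

The 145Sm source would retain most of its strength over such an interval, while an 125I source would have decayed to about a third of its initial activity, consistent with the abstract's point about longer usable lifetimes.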
He challenged the Supreme Court on his right to be called a citizen, and won. When American-born Wong Kim Ark returns home to San Francisco after a visit to China, he's stopped and told he cannot enter: he isn't American. What happens next would forever change the national conversation on who is and isn't American. After being imprisoned on a ship for months, Wong Kim Ark takes his case to the Supreme Court and argues that any person born in America is an American citizen. I Am an American: The Wong Kim Ark Story is an important picture book that introduces young readers to the young man who challenged the Supreme Court for his right to be an American citizen and won, confirming birthright citizenship for all Americans. K-Gr 4--Wong Kim Ark was born in San Francisco in 1873 and knew he was an American. He had never lived anywhere else. Then the Chinese Exclusion Act passed in 1882, hindering immigration, job opportunities, and eventual citizenship for Chinese people in the United States. Violence toward Chinese people became even more commonplace, and Wong's parents went back to China. After visiting his family in China, Wong was barred from re-entering the country, despite having been born in America. He won a lawsuit in San Francisco and was freed, but the ruling did not bind the U.S. government, so he brought the case to the Supreme Court. His victory guaranteed citizenship to all of those born in the U.S. This detailed picture book biography introduces readers to a historical figure who changed birthright citizenship laws. The digitally rendered artwork fills each spread, and its detailed imagery gives insight into life in San Francisco's Chinatown in the late 1880s and early 1900s. Endpapers include an 1885 neighborhood map of Chinatown, outlining Chinese-occupied, white-occupied, and vacant areas, to give a clearer picture of the city's population.
Back matter features photos and a time line starting with 1849, when the first large group of Chinese immigrants began to settle in the U.S., and ending with the 1965 Immigration and Naturalization Act. VERDICT An important picture book biography to augment classroom conversations about immigration and citizenship.--Kristyn Dorfman, Friends Academy, Locust Valley, NY. Copyright 2021 School Library Journal, LLC. Used with permission.
Miguel de Cervantes, author of Don Quixote, died on April 22, 1616. This article was first published in October 2015. JK Rowling seems to have started something with her recent announcement that Voldemort is actually pronounced "Vol-De-Mor", with a silent T. But the Harry Potter villain is not the top literary name that people struggle to say correctly. That dubious honour belongs to Don Quixote, the 17th-century character created by Miguel de Cervantes. If you're one of the 44% of readers who have been pronouncing the knight's name as "Don Quicks-Oat" then it's time to learn the correct way: "Don-Key-Hoh-Tee". A survey of 2,000 people, conducted by digital audiobook retailer Audible, found that 39% of people have pronounced the names of literary characters incorrectly. Daenerys Targaryen, one of the lead characters from hit book and TV series Game of Thrones, came second in the poll, with 28% identifying it as the most commonly mispronounced name. Third most mispronounced was Oedipus, which is enough to give anyone a complex, and Agatha Christie's famous detective Hercule Poirot also makes the list. Most people think certain names are simply difficult to say, or they blame someone else for teaching them the wrong pronunciation, and 25% said that mispronouncing a name left them feeling embarrassed. John Sutherland, professor of modern English literature at University College London, said: "Some names are much harder than others to translate from the page to spoken word. Although there is ongoing debate about the correct pronunciation of names, many foreign names have been anglicised to fit our native tongue. Names such as Don Quixote should retain their original pronunciation as Don Kee-Hoh-Tee to avoid confusion."
The top ten most commonly mispronounced fictional characters:

1. Don Quixote, Miguel de Cervantes (44%): 'Don-Key-Hoh-Tee', not 'Don Quicks-Oat'
2. Daenerys Targaryen, Game of Thrones, George RR Martin (28%): 'Duh-Nair-Ris Tar-Gair-Ee-In', not 'Dee-Nay-Ris Targ-Ahh-Ruh-Yen'
3. Oedipus, Sophocles (23%): 'Ee-Di-Pus', not 'Oh-Eh-Di-Pus'
4. Hermione, JK Rowling's Harry Potter (22%): 'Her-My-Oh-Knee', not 'Her-Mee-Own'
5. Beowulf, unknown author (16%): 'Bay-Oh-Woolf', not 'Bee-Oh-Wulf'
6. Poirot, Agatha Christie (15%): 'Pwa-Row', not 'Poy-Rot'
7. Smaug, The Hobbit, JRR Tolkien (13%): 'Sm-Owg', not 'Sm-Org'
8. Voldemort, JK Rowling's Harry Potter (12%): 'Vol-De-More', not 'Vol-De-Mort'
9. Violet Beauregarde, Roald Dahl's Charlie and the Chocolate Factory (12%): 'Vie-Ah-Let Bore-R-Garrr', not 'Bore-Ruh-Gard'
10. Piscine Patel, Life of Pi, Yann Martel (11%): 'Piss-Een Pat-El', not 'Pis-Kine Pat-il'
Economics in a picture

Net wealth of the youngest households has declined since 2010

According to the Portuguese Household Finance and Consumption Survey (ISFF, in the Portuguese acronym), net wealth, measured as the difference between the value of assets and the value of liabilities, is unevenly distributed among households of different age groups. Net wealth varies with the age of the household reference person according to a pattern related to the life cycle. As the graph illustrates, net wealth increases with age up to retirement age and decreases thereafter, more gradually. The comparison of the results of the 2010, 2013 and 2017 waves of the ISFF reveals some heterogeneity in the evolution of net wealth for different age groups. For some groups, namely those where the reference person is under 35 years old or between 45 and 64 years old, the median net wealth in 2017 was lower than in 2010. For the youngest households, the median net wealth was around 14 thousand euros in 2017, less than half the value for the same age group in 2010. For the remaining groups, the values in 2010 and 2017 are not statistically different. For further details see Costa, S., Farinha, L., Martins, L. and Mesquita, R. (2020), "Portuguese Household Finance and Consumption Survey: results for 2017 and comparison with the previous waves", Banco de Portugal, Economic Studies, Vol. VI, No 1. Prepared by Sónia Costa, Luísa Farinha, Luís Martins and Renata Mesquita. The analyses, opinions and findings expressed above represent the views of the authors and not necessarily those of Banco de Portugal or the Eurosystem. If you want to receive an e-mail whenever a new "Economics in a picture" is published, send your request to email@example.com.
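The net-wealth measure itself is simple to compute: assets minus liabilities, with medians compared within age groups. The sketch below uses invented household figures (hypothetical, not ISFF microdata) purely to illustrate the calculation:

```python
from statistics import median

# Hypothetical households: the numbers are invented for illustration only.
households = [
    {"age_group": "<35", "assets": 80_000, "liabilities": 66_000},
    {"age_group": "<35", "assets": 20_000, "liabilities": 6_000},
    {"age_group": "<35", "assets": 150_000, "liabilities": 120_000},
    {"age_group": "65+", "assets": 140_000, "liabilities": 5_000},
    {"age_group": "65+", "assets": 90_000, "liabilities": 0},
]

def median_net_wealth(rows, group):
    """Median of (assets - liabilities) over households in one age group."""
    values = [r["assets"] - r["liabilities"] for r in rows if r["age_group"] == group]
    return median(values)

print(median_net_wealth(households, "<35"))  # 14000 with these invented figures
```

Note that high gross assets can coexist with low net wealth when liabilities (typically mortgages for younger households) are large, which is part of why the life-cycle pattern described above emerges.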
Marine researcher Lauren De Vos studies biodiversity around South Africa's False Bay, where she uses food-laden cameras to record sea life in action. Known as baited remote underwater video, or BRUV, this non-extractive method "offers a low environmental impact way of understanding changes in fish numbers and diversity over time," she explains in a blog post for the Save our Seas Foundation. But while fish may be content to nibble at these baited cameras, some marine animals are a little more ambitious. The video below shows an octopus wrapping its tentacles around one of De Vos' BRUV canisters, untying three cable knots (without even looking) and stealing the whole thing. As if that isn't impressive enough, the inventive invertebrate does it all while using one arm to restrain a hungry catshark. Check out the video evidence, which De Vos aptly set to bluegrass music: Octopuses are widely considered the world's smartest spineless animals, performing feats of intelligence — from building makeshift coconut shelters to remembering other octopuses — that are beyond most invertebrates. In fact, while many other mollusk brains contain fewer than 20,000 neurons, the common octopus has about 130 million. And as naturalist Sy Montgomery recently reported in Orion magazine, that's not even the half of it: three-fifths of an octopus's neurons are in its arms, not its brain. "It is as if each arm has a mind of its own," a professor of biological philosophy tells Montgomery. "Meeting an octopus is like meeting an intelligent alien." For more information about De Vos' research, check out her BRUV blog.
What are the 5 data modeling techniques?

Data scientists must have four key technical skills: programming, data wrangling, machine learning, and data visualization. These skills are essential for collecting, cleaning, analyzing, and interpreting data, and with them data scientists can unlock the potential of data to help businesses make better-informed decisions.
The Johns Hopkins Archaeological Museum contains fragments related to five lead curse tablets from ancient Rome. One of these tablets (JHUAM 2011.01) was recently conserved and placed on view, along with the original iron nail (JHUAM 2011.06) associated with it. Objects such as this one are evidence of a common practice in Greek and Roman antiquity of scratching curses onto tablets, which were then deposited in wells or graves. While the earliest tablets contained only the name of the person to be cursed, later examples, such as this one, grew more elaborate. Curses could be inscribed on virtually anything, ranging from pottery sherds to gemstones, though lead is the most common material used for this purpose. This particular tablet (JHUAM 2011.01) from the Johns Hopkins collection was found rolled together with four others and pierced through by an iron nail (JHUAM 2011.06). The Latin name for a curse is defixio, which means 'to pin down.' While the individual tablets are stand-ins for the cursed persons, the nail symbolizes their pinning-down. All five tablets now in the Hopkins collection were written in Latin by the same hand, but contain curses against different people. In order for curses to be most efficacious, the individuals to be cursed were precisely named. This tablet contains a curse directed against a Plotius, identified as the slave of a woman named Avonia. Unlike the cursed person, the one uttering the curse was generally not mentioned by name, as a measure to prevent counter-curses. Dating to the mid-first century BCE (roughly the time of Julius Caesar), this tablet represents typical features of curse tablets from this period, and likely came from Rome. It begins by invoking Proserpina [Greek Persephone] and her husband Pluto [Greek Hades], the god of the Underworld, as well as the three-headed dog Cerberus who guards the entrance to the realm of the dead.
Curse tablets were frequently addressed to the gods of the Underworld and those associated with it, such as Mercury [Greek Hermes], who guided souls on their way to Hades. The curse then names its recipient, Plotius, followed by the ailment wished upon him. Plotius is to be consumed by fevers, which are likened to him wrestling with another man. This very vivid metaphor describes the fever struggles of malaria, which is most likely the disease wished upon Plotius here. After the ailment is mentioned, the person requesting the curse promises offerings to both Proserpina and Cerberus as payment for their services. Cerberus is to receive dates, figs, and a black pig: one sacrifice for each of his heads. Proserpina is promised the body of Plotius himself as an offering, and the remainder of the text describes in detail every piece of his body and what exactly is to happen to it. Furthermore, the tablet specifies that these things are to be completed by the end of February, so that Plotius may not see another month. The five tablets were acquired by the Department of Classical Archaeology of the Johns Hopkins University in 1908, and published in a dissertation by the then Hopkins graduate student William Sherwood Fox in 1911. Little is known about their exact provenance. It is very likely, however, that the tablets were deposited in a grave, since curse tablets were often placed in tombs after the original burial. The idea was that the soul buried there would carry the curse to the gods of the Underworld. Tombs of those who had died young or by violent means were preferred because it was believed that their souls lingered restlessly near their burial site.
- Fox, W.S., "The Johns Hopkins Tabellae Defixionum," PhD diss., Johns Hopkins University, 1911.
- Fox, W.S., "An Infernal Postal Service," Art and Archaeology 1 (1914): 205-207.
- Gager, J.G., Curse Tablets and Binding Spells from the Ancient World, Oxford; New York: Oxford University Press, 1992.
Humboldt penguins are native to Chile and Peru and nest on islands and rocky coasts, often burrowing holes in guano. The birds' numbers are declining due to overfishing, climate change and ocean acidification, and the animal is considered a vulnerable species. In 2010, Humboldt penguins were granted protection under the U.S. Endangered Species Act. In 2009, two male Humboldt penguins at a German zoo adopted an abandoned egg. After it hatched, the penguins raised the chick as their own. In 2012, one of the 135 Humboldt penguins at the Tokyo Sea Life Park in Japan scaled a 13-foot wall and escaped into Tokyo Bay, where it thrived for 82 days until it was recaptured.
Reducing Eragrostis lehmanniana populations by preparing seedbeds with unconventional tillage implements and seeding in a semiarid grassland. The invasion of Lehmann lovegrass (Eragrostis lehmanniana Nees) in rangelands of Chihuahua, Mexico, has resulted in a need for revegetation to recover lost forage productivity. Thus, new knowledge on generating alternatives to improve these invaded grasslands is of great importance. This study evaluated seedbeds prepared with unconventional tillage implements and seeded with a grass mixture to reduce the plant density of E. lehmanniana while increasing the productivity of an invaded semiarid grassland of Chihuahua. The unconventional tillage implements were: a Rangeland Harrow, which was used to prepare the Striped Harrowing and Full Harrowing seedbeds; a Rangeland Rehabilitator, which was used to prepare the Deep-Stingray Subsoiler seedbed; and a Tandem-type Aerator Roller, which was used to prepare the Double-Digging Aeration seedbed. An area without tillage was left as a control. The seed mixture was composed of blue grama [Bouteloua gracilis (Willd. ex Kunth) Lag. ex Griffiths var. Hachita] (25%); sideoats grama [Bouteloua curtipendula (Michx.) Torr. '6107 Kansas'] (25%); green sprangletop [Leptochloa dubia (Kunth) Nees var. Van Horn] (5%); weeping lovegrass [Eragrostis curvula (Schrad.) Nees var. Ermelo] (40%); and Columbus grass [Sorghum almum Parodi] (5%). The experiment was conducted across 4 yr, and the evaluation started in the second year. Plant density and dry matter (DM) production were evaluated per species. In the control plot, the plant density of E. lehmanniana increased approximately 180% from the 2nd to the 4th year (18 to 50 plants m-2). The use of unconventional tillage implements for seedbed preparation and the inclusion of E. curvula in the seed mixture decreased E. lehmanniana density in more than 50% of plots and increased DM production in around 100% of plots.
Considering the whole experimental period, in all the prepared seedbed treatments, E. curvula had the highest establishment and DM production of all the seeded species. The native species B. gracilis, B. curtipendula, and L. dubia had poor establishment in all the prepared seedbeds.
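As a quick arithmetic check of the control-plot figures reported above (18 rising to 50 plants per square metre), the relative increase works out to roughly the stated 180%:

```python
def percent_change(initial, final):
    """Percent change relative to the initial value."""
    return (final - initial) / initial * 100

# Control-plot density of E. lehmanniana between year 2 and year 4,
# using the 18 and 50 plants/m^2 figures from the abstract:
print(round(percent_change(18, 50)))  # 178, i.e. roughly the 180% reported
```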
Noise pollution is one of the biggest health hazards and sources of stress. Sound waves continuously penetrate your body, and you can't turn them off (even if you "tune them out"). Noise immediately causes your heart to race and your breathing to speed up. It also releases extra fat into the bloodstream and causes magnesium levels to fall. Children who live near noisy highways, airports and railroads have lower reading scores and slower language development. Healthy sounds, by contrast, improve wellbeing at every level. Western science is now validating ancient knowledge, such as Nada Yoga, the yoga of sound. Current medical studies show that the right sounds promote healthy functioning of the immune, endocrine and autonomic systems. In one study, the simple act of humming cleared up stuffy nasal passages. In another, nitric oxide levels in the sinuses were found to be 15 times higher during humming. Nitric oxide relaxes muscles, increases blood flow, improves digestion and helps focus the mind.
by Staff Writers Washington (AFP) Oct 6, 2011 The explosive growth of urban areas is resulting in greater damage and more deaths from natural disasters than ever before, experts warned Thursday, calling for better planning and safer housing. Cities now account for half the world's population and are growing faster than their populations can be counted, making them particularly vulnerable to earthquakes, floods and other disasters, especially in poor countries. "Our investment in risk-reduction preparedness should be wiser. It should not just be chasing the ambulance and responding again and again where something has gone wrong, but investing also in preparedness," said Maggie Stephenson, UN-HABITAT senior technical adviser for Haiti. Yet even in countries like Japan, home to perhaps the best seismic and earthquake preparedness in the world, preventive measures are sometimes insufficient. Some 20,000 people died or remain missing there after a huge earthquake and tsunami in March that also caused nuclear reactor meltdowns. "You can't build your way to safety," warned World Bank urban specialist Abbas Jha. Speaking with Stephenson and other urban recovery experts at the Brookings Institution, a Washington think-tank, he called for designing disaster mitigation systems that "fail gracefully" and investment in warning systems that are both "credible and timely." The need is even more urgent, he said, in a world where China will have 223 cities with populations greater than one million by 2025. Right now, Europe has only 35 such cities. And by 2100, 600 million people will move to vulnerable areas below sea level. Haiti, already the poorest country in the Americas before the quake, still has half a million people living in squalid tent cities nearly two years after a devastating earthquake that killed more than 225,000 people. Stephenson acknowledged she was "horrified" at the slow pace of reconstruction in Haiti.
Experts agree that it is critical, and cheaper, to train the local population to rebuild rather than have outsiders do it. "We need to train every single existing mason and every mango-seller that's going to become a mason," Stephenson stressed. Only 20,000 masons have been trained in Haiti since the earthquake, a tenth as many as in northern Pakistan after the massive 2005 quake there, she said. In the past five years alone, more than 14 million people lost their homes to natural disasters. That can mean more than losing a shelter, said Habitat for Humanity chief executive Jonathan Reckford. People who work from home can lose their livelihoods, while others lose access to health care, water, sanitation and places of worship, he said. According to his Christian non-profit group, the number of urban residents worldwide living in areas vulnerable to earthquakes and cyclones will more than double, from 680 million people in 2000 to 1.5 billion people by 2050. "Reconstruction always begins the day after a disaster," he said. "Our desire and our shared goal is to help families get back to work, back to school, lay that foundation to rebuild their lives." But rebuilding with future disasters in mind is no easy task in urban areas, home to huge infrastructure issues, land tenure concerns and limited space for reconstruction.
The H.L. Blum theory has long been known in public health science. It is widely used to explain the causes of disease and other health problems that occur in a community. The WHO defines health as a state of complete physical, mental and social wellbeing, not merely the absence of disease or infirmity. The cause of a disease therefore cannot be judged by one factor alone. Henrik L. Blum states that four factors affect a community's health status: lifestyle, heredity, the environment and the health care system itself. These four factors, or determinants, are:

1. Lifestyle
Blum states that lifestyle, or behaviour, plays an important role in health status. Unhealthy habits, actions and activities lead towards disease. For example, eating too much can lead to obesity, while a deficient diet encourages malnutrition. Lifestyle and behaviour therefore greatly affect health status.

2. Environment
The environment plays a large role in health. Environmental factors can be divided into two groups: the physical environment, such as infrastructure and sanitation, and the non-physical environment, which includes social, economic, political and cultural conditions. An example of environmental influence on disease can be seen in cases of dengue haemorrhagic fever (DBD). The disease tends to occur in the rainy season, when run-down neighbourhoods, shrubs and trees, puddles and garbage become breeding sites for the mosquito that transmits dengue. Similarly, people who live in marginal areas are more exposed to diarrhoea, because the environment is dirty and clean water may be difficult to obtain.

3. Health Services
Health services also play a role in public health status. The availability of affordable health care, adequate equipment and drugs, and competent health personnel are important factors. Currently, many remote areas are still not reached by health services.
Some pregnant women, for example, receive treatment late because the distance between their home and the nearest health facility is too great.

4. Heredity
Heredity is the most difficult factor to intervene on in the H.L. Blum theory, because the problem or disease is hereditary, that is, genetic. Asthma, for example, can be inherited. For this factor, all we can do is work to prevent recurrence.

Notes from DeveHealth: The four determinants in the H.L. Blum theory interact with one another and together shape a person's health status. The environment has the largest role in influencing health, followed by behaviour or lifestyle, then health services, and finally genetic factors. Applying this theory is very helpful for public health experts in finding the cause of a problem and then developing a prevention program.
Ectodermal dysplasia is a group of disorders that affect the development of the patient's hair, nails, skin, sweat glands and teeth. Several different genetic abnormalities can cause ectodermal dysplasias. Ectodermal dysplasia is typically treated with conservative and preventative measures. Your child may require genetic testing (usually blood tests) or specialized dermatologic testing. Our geneticist, pediatric dental team and pediatric dermatologists can offer comprehensive team care for your child.
BSc Food Technology is a three-year bachelor's degree with a science background. It deals with the production, preservation and quality improvement of food using scientific technology, and gives students regular exposure to new technologies that support further research in food science. Food technology is the branch of food science concerned with the production processes that make foods; early scientific research in the field concentrated on food preservation. The BSc Food Technology program aims to prepare students to become experts in areas related to food science and technology. It provides an in-depth knowledge of the scientific and technical approaches required to understand the nature of raw food materials, and helps students grasp basic concepts such as the composition of food, food and nutrition, and physicochemical and microbiological properties. It also teaches techniques for the processing and preservation of food. Subjects studied include chemistry, biology, nutrition, biochemistry and microbiology. A graduate in food technology can readily find entry-level jobs in marketing and quality assurance companies, production management firms, logistics departments, research and development centers, hotels and restaurants, colleges and universities, and similar organizations.
A new pilot project in San Francisco, CA is using cell phones and Facebook to help teens track and share information about the eco-impact of their daily transportation choices. The Go Green Foundation of San Francisco has teamed up with Nokia, UCLA and AT&T in a groundbreaking project that has allowed 25 students at the Urban School in San Francisco to track their transportation habits using GPS handsets from Nokia over AT&T's cell network. Here's how it works. The cell phones act as real-time sensors, and every 30 seconds they send info back to servers at UCLA's Center for Embedded Networked Sensing. The servers organize the info and chart it onto Web maps, which allows the participants to publish their individual and collective results to Facebook. The students can see how much carbon their various transportation choices are producing, as the UCLA software can detect the difference between walking, biking and driving a car as opposed to riding on a bus. In other words, the program shows kids that the environmental choices they make affect the planet, and that they have the power to make more environmentally friendly choices each day. One project participant, Julia Evans, a 17-year-old senior, said she started riding her bike more around her Burlingame neighborhood when she realized how much carbon she was using for short trips. via the San Francisco Chronicle. Photo by ebabyenglish
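The UCLA classification software described above is not public; as a rough illustration of the idea of inferring transport mode and carbon impact from periodic GPS samples, here is a minimal sketch. The speed thresholds and per-kilometer CO2 factors are made-up assumptions for illustration, not the project's actual values.

```python
# Illustrative sketch only: classify a trip's transport mode from
# speed samples taken every 30 seconds, then estimate its CO2 impact.
# Thresholds and emission factors below are assumed, not the project's.

def classify_mode(speed_kmh):
    """Guess a transport mode from average speed in km/h."""
    if speed_kmh < 7:
        return "walking"
    elif speed_kmh < 25:
        return "biking"
    else:
        return "driving"

# Assumed per-passenger emission factors, grams of CO2 per km.
CO2_G_PER_KM = {"walking": 0, "biking": 0, "driving": 180}

def trip_footprint(samples_kmh, interval_s=30):
    """Estimate mode, distance (km), and CO2 (g) for one sampled trip."""
    distance_km = sum(v * interval_s / 3600 for v in samples_kmh)
    avg_speed = sum(samples_kmh) / len(samples_kmh)
    mode = classify_mode(avg_speed)
    return mode, distance_km, distance_km * CO2_G_PER_KM[mode]

mode, dist, grams = trip_footprint([30, 35, 40, 38])
print(mode, round(dist, 3), round(grams, 1))  # prints: driving 1.192 214.5
```

A real system would of course use richer features than average speed (acceleration patterns, GPS traces matched to bus routes) to tell a car from a bus, as the article notes the UCLA software can.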
About the Photographer American, b. 1948 In 1987-1988, for the Changing Chicago documentary project, Rhondal McKinney photographed towns on the southern outskirts of Chicago, along the historic Illinois and Michigan Canal. Through the end of the nineteenth century, towns like Lockport and Joliet were thriving transportation hubs, since the nearby canal served as a vital link to commercial centers on the East Coast via the Great Lakes and the St. Lawrence Seaway. McKinney's 8x10 contact prints provide detailed, intimately scaled views of the area after decades of economic downturn. Many of the modest homes and neighborhoods McKinney documents are situated next to railroad tracks and in view of factory smokestacks. The pictures contain allusions to the industries that initially fueled the towns' growth, but the area appears desolate and quiet. McKinney's interest in these towns was not solely centered on their history in a bygone era, but on how these towns might develop. At the time of the photographs' publication, McKinney noted how the area was becoming more ethnically diverse, following multiple waves of immigration, and he described his hope to connect with "the spirit of these people, their traditions, and their efforts" in his photographs. In the midst of an epidemic of family farm foreclosures in 1985, the Focus Infinity Fund sponsored Rhondal McKinney, Tom Arndt, and Archie Lieberman to photograph small farms in the Midwest for the documentary project Farm Families. McKinney's photographs portray the farmers and their families in a series of panoramic portraits, each created by joining three to five contact prints made from 8x10 negatives. Collectively, the fragmented panoramas accentuate the expansive space that characterizes these farmlands, but McKinney uses the multiple negatives to compartmentalize different parts of the image.
In many of the photographs he situates the family in the central frame of the panorama, composing what could stand on its own as a tightly framed portrait. The adjacent negatives depict the land as it spreads out around them, views of wide open fields or an extensive yard around a two-story house. Each frame of the panorama is enclosed by a black border, however, creating internal divisions within the compiled image and effectively cutting off the subjects from their surroundings. At the broadest level McKinney's project created a record of family farms at a time when their future seemed uncertain, while the way he uses multiple negatives to divide up the space suggests the looming threat of losing one's land. Rhondal McKinney received an MFA in Photography from the University of Illinois (1981). He has had solo exhibitions at the Art Institute of Chicago; Roanoke College, Roanoke, Virginia; the University of Nebraska-Lincoln; and other institutions. His photographs are held in the permanent collections of the Metropolitan Museum of Art, New York; the Art Institute of Chicago; and the San Francisco Museum of Modern Art. McKinney co-curated the 1983 exhibition An Open Land: Photographs of the Midwest: 1852-1982 at the Art Institute of Chicago, along with the museum's Curator of Photography, David Travis. Since 1983 McKinney has taught at Illinois State University, where he also serves as the director of the Rural Documentary Collection, an archive of photographs depicting social conditions in the rural Midwest.
Would you like to know exactly how the change of teeth in a puppy works and what needs to be considered? Then read through the whole process of teething in puppies so you can play it safe.

How many teeth does a dog have? Like us humans, a puppy is born completely without teeth. Its milk teeth then grow in within the first 12 weeks. As a rule, a dog has 28 milk teeth. These teeth begin to fall out from the third month, after which the puppy gets its permanent teeth. The change should be complete by the 6th month, which means your four-legged friend will have replaced its 28 milk teeth with 42 adult teeth by then.

The dog's milk teeth. The milk teeth of dogs erupt one after another from the third week of life. After this process is complete, your puppy's primary dentition consists of a total of 28 teeth:
- 6 molars in the upper and lower jaw
- 2 fangs in the upper and lower jaw
- 6 incisors in the upper and lower jaw

When do puppies lose their milk teeth? Your four-legged friend's change of teeth takes place between the fourth and seventh months after birth. Teeth usually change earlier in larger breeds than in small ones. The milk teeth fall out and the 42 permanent teeth erupt. Your dog's permanent dentition accordingly contains the following teeth:
- 6 incisors each in the upper and lower jaw
- 2 fangs each in the upper and lower jaw
- 12 molars in the upper jaw
- 14 molars in the lower jaw

The order of the change of teeth. Your four-legged friend's teeth should change in a fixed sequence with certain steps:
- The 4 front incisors in the upper and lower jaw fall out and are replaced between the 3rd and 5th month of age.
- The remaining 2 incisors in the upper and lower jaw follow between the 3rd and 5th month of age.
- The fangs in the upper and lower jaw loosen and are replaced between the 5th and 7th month of age.
- The first front molar, which has no milk-tooth precursor, breaks through in both the upper and lower jaw between the 4th and 5th month of age.
- The other 6 front molars in the upper and lower jaw replace the corresponding milk teeth between the 5th and 6th month of age.
- The first 4 rear molars erupt in both the upper and lower jaw between the 4th and 6th month of age.
- The last two molars appear between the 6th and 7th month of age.

Teething in puppies at a glance: 8 helpful points
- The change of teeth starts around the 16th week after birth, earlier in large breeds than in small ones.
- The change of teeth should be complete after approx. 3 months.
- A dog's deciduous dentition consists of 28 teeth; the permanent dentition has 42.
- Recognize the typical behavior patterns that indicate teething in dogs (e.g. nibbling on furniture, loss of appetite, etc.).
- Make sure that your dog's teeth change in the correct order.
- Avoid tugging and retrieving games during teething.
- Offer your dog suitable chews and toys.
- Check your dog's teeth regularly and, if necessary, go to the vet to avoid misaligned teeth later.

How do I know that the change of teeth has already started? If your dog does not display any abnormal behavior during the change of teeth, you can tell from the appearance of his teeth whether the change has begun. The permanent teeth look different from the milk teeth:
- The permanent teeth are larger than the milk teeth.
- Compared to the extremely pointed milk teeth, the permanent teeth are visibly less pointed.

Symptoms of teething in puppies. Expect your dog to suffer some minor or major discomfort during the change of teeth.
The discomforts include:
- Inflammation of the gums and the associated bad breath
- Slight bleeding in the mouth area
- Increased salivation
- Elevated temperature
- Nocturnal restlessness

Sometimes a milk tooth does not fall out even though the permanent tooth is already erupting. In these cases, visit the vet, who can extract the milk tooth if necessary so that the permanent tooth can grow in its normal position. See your vet sooner rather than later to avoid irreparable damage and misaligned teeth.

Behavioral changes during teething. Not every young dog gets through the change of teeth without problems, so observe your four-legged friend closely during this critical phase. Typical behavior patterns make it easy to spot teething symptoms:
- The dog no longer wants to eat his food, or eats it very sparingly.
- Your four-legged friend may move the food back and forth in his mouth a little awkwardly and have trouble chewing.
- The dog nibbles on all kinds of furniture and other objects. With this behavior, dogs usually want to get rid of loose milk teeth.
- It is also common for your furry friend to lick his mouth continuously, because the loose milk teeth sometimes cause severe itching.

The crucial 6th month. Having read all these facts about teething in puppies, you now know that your loyal companion should have all of his permanent teeth by the 6th month. Unlike with us humans, there are normally no delays in the change of a dog's teeth; it rarely causes problems and usually happens without any special help on your part. Sometimes, however, a young dog has not lost all of its milk teeth by the 6th month, or some teeth have not been replaced, leaving gaps between the teeth.
Keep an eye out for any problems. Even if problems are rare, you should keep an eye on your furry friend's teeth, because the milk teeth should all have been replaced by the 6th month; otherwise, your dog may have a permanent problem with his teeth.
- Have the milk teeth not been completely replaced by 42 "adult teeth" by the 6th month? Then we recommend that you visit a veterinarian as soon as possible.
- Thinking about treatments? Remember that orthodontic treatments are not as successful in dogs as they are in humans and can lead to serious complications, leaving your dog with crooked teeth or other consequences for the rest of his life. For this reason, it is advisable to keep observing the dentition and the change of teeth.

Possible problems with teething in puppies. Similar to infants, puppies suffer when they change teeth. Symptoms like diarrhea and pain in the jaw area are completely normal, and an elevated temperature is also a typical sign of the change of teeth. Because of the pain in the jaw area, your little friend may lose his appetite; that too is quite normal. But please watch your dog closely in these situations.
- Does the fever persist, or has the diarrhea not gone away after several days? Then it is always better to go to the vet one more time.

Things to avoid. You can make teething easier for your furry friend, but by taking the wrong measures you can also make the change of teeth harder for him, even if this is usually unintentional. So that this doesn't happen to you, follow these no-gos:

No tugging games. Tugging games are by no means suitable during your dog's teeth-changing phase. Your puppy is in considerable pain during the change of teeth, and pulling can make the pain worse. Milk teeth can even be torn out, which in turn disrupts the change of teeth and reduces the chances of healthy teeth in adulthood.
No hard treats. Hard food additives (treats or bones) can also be counterproductive during the change of teeth, with a negative effect similar to that of tugging games. The teeth are very sensitive during this time and your dog may already be in a lot of pain, so these kinds of treats can hurt him even more.

Solutions to help with teething in puppies. When your dog is changing teeth, avoid tugging and retrieving games so as not to cause him pain or damage his teeth. There is also a real risk that your dog will come to associate painful retrieval with something negative. Instead, offer your four-legged friend different chews, such as natural chews (e.g. chewing roots or antlers) or chew toys. A Kong filled with frozen yogurt will also give your young dog great pleasure, because the cold yogurt has a pain-relieving effect. Make sure that your dog's daily energy and nutritional requirements are not exceeded: an excessive supply of chews can make your furry friend grow too quickly, which can cause problems of its own. To catch dental problems as early as possible, get your little companion used to dental checks at an early age and carry them out regularly. Briefly open the muzzle and look at your dog's teeth. Without exception, this should be done free of stress and force, so that your dog does not associate anything unpleasant with having his teeth checked later on.

Don't lose patience. The time of teething is difficult for your little one. It is not just the physical symptoms that trouble him; his psyche also suffers from the change of teeth. Pain, fever or diarrhea can stress him out. For this reason, show plenty of patience, especially during this time. Give your four-legged friend regular cuddles and snuggles, don't stress him out on walks, and don't force him to eat unnecessarily.
Windows and fenestrated facades significantly affect the energy needs of buildings. In the US, windows are responsible for 12% of the total energy consumed by existing US residential and commercial buildings, due mostly to uncontrolled solar heat gains. External shading devices are commonly used to control solar radiation, as well as natural lighting through fenestrated facades, to reduce both heating and cooling thermal loads and ultimately improve the energy performance of buildings. At the same time, PV cells and modules have been widely integrated into and attached to building envelopes, including roofs, facades, and windows. Driven by advances in solar cell efficiencies and materials, substantial reductions in the installation costs of PV systems, the availability of detailed design analysis tools, and the growing desire or requirement to design net-zero energy buildings, interest in building-integrated PV systems has grown noticeably in the last decade. Researchers at the University of Colorado have developed PV arrays with sliding overhangs that both generate electricity and reduce heating and cooling thermal loads for US residential buildings. Basic control strategies for operating the sliding overhangs are considered to minimize annual net energy demands with and without the integrated PV arrays. A series of sensitivity analyses is performed to assess the impact of design and operating conditions on the energy performance of the sliding overhangs. The analysis results clearly indicate that when integrated with PV modules, sliding overhangs can significantly reduce the energy demand of US housing units, especially when they are set at the optimal angles specific to the building location to maximize their electricity generation and solar shading effects.
Specifically, it is found that sliding overhangs can not only reduce energy demand but also achieve net-zero energy conditions for US houses with large windows located in mild climates, even when only monthly adjustments are applied to operate the shading systems. To minimize design and operation complexities, it is recommended to consider sliding-only overhangs set at the latitude of the location, to reduce both heating and cooling demand and to increase PV array output. Available for licensing.
Symptoms Of Dehydration In The Elderly & How To Combat Them Adequate hydration is essential to maintaining good health. Most adults are able to monitor and respond to their hydration needs. However, this becomes more difficult as we get older, meaning elderly people are more susceptible to becoming dehydrated. In this blog post, we'll look at the symptoms of dehydration in the elderly and ways to help older people to meet their hydration needs. Why are older people at risk of dehydration? Water makes up a large proportion of our bodies. You can read more about hydration needs in different age groups here. Up to 20% of older adults are dehydrated, especially those in long-term care establishments. As we get older, our ability to sense thirst decreases. Poor mobility, cognitive decline, and physical ailments (including problems with swallowing) are also potential barriers for staying hydrated. Certain patient groups are at even greater risk of dehydration. For example, people with dementia or Alzheimer's Disease may be disorientated or forgetful, which could make it more difficult to recognise and respond to thirst. Certain medications such as diuretics and laxatives can also result in an increased risk of dehydration due to fluid losses. Research has found that some older people may consciously limit their water intake due to fear of incontinence or to avoid relying on others for help with going to the toilet. Consequences of dehydration in older people Dehydration is associated with poor health outcomes at all ages, but the effects can be more pronounced in elderly people. Dehydration in older adults has been associated with pressure sores, urinary tract infections, increased risk of falls, and unplanned hospital admissions. Even mild dehydration can affect cognitive ability, resulting in confusion and delirium. 
If dehydration is not addressed quickly, it can result in a rapid decline in an older person's health, leading to increased disability, loss of independence and even death. How to encourage older people to stay hydrated. Relatives, friends, carers and healthcare professionals can all help encourage older people to stay hydrated. Here are some suggestions: - Provide a range of drinks that they enjoy, and make water more palatable using a ceramic water filter. You could also mix water with other beverages such as fruit squash, fruit juice or cordials. - Offer hydrating foods such as soups, stews, smoothies, lollies, yogurts, ice cream, fruits, and jellies. - Ensure they always have a drink during mealtimes, and offer regular drinks between meals too, such as water, milkshakes, tea or coffee. - Leave a jug or refillable bottle of water in an easily visible and accessible place. - Encourage older people to incorporate hydration into their social activities: why not invite an older person over for a cup of tea, or suggest that they attend a local lunch club with friends. - Prompt them to drink little and often throughout the day; if an older person has memory issues, you could pin a note in a visible place in their kitchen to remind them to drink.
The respectable classes of Dublin city have long expressed concern over the prevalence of alms-seekers, vagabonds and especially that creature most offensive to moral propriety: the sturdy beggar. It is a thread of outrage which transcends both religious affiliation and social class, uniting countless Dubliners over the centuries. While approaches to the problem have varied through the years there have always been advocates of a "no nonsense" approach. Owing to the absence of much in the way of legal restraint, at least when it came to management of the underclass, things were easier in the late eighteenth century for those who desired to sort the problem "once and for all", and it was possible to take the type of "firm action" which today can only be dreamed of. In order to clean up the streets the burghers of that time purchased an old malt house in Channel Row and had it converted into a House of Industry, where unwelcome and unsightly people were to be interned and, in the interests of their moral development, set to useful labour. It seems however the beneficiaries of this enlightened scheme failed to appreciate its practical and moral advantages. Indeed, the general unwillingness of the inmates to accept the new order caused the unfortunate governors no end of problems. Early one morning in 1786 it was discovered that some forty "strolling women", whom the police had collected from the streets and delivered to the institution the previous night, had disappeared, having exerted themselves during the night by driving a hole in the side of the building, through which they then departed. As if the beggars themselves were not trouble enough, some of those charged with running the institution were on the lax side themselves, to the great irritation of the governors, who were obliged to employ people for the day-to-day management of their charges.
In response to one flagrant violation of the rules by a porter, it was ordered that he "be placed in the public hall at the hour of 12 o'clock with his crime in writing on his breast, and chained by the leg … as a punishment for drunkenness and taking a bribe at the door to let the poor elope". Those who remained within also caused some problems, owing to their tendency to remove for personal use all forms of portable property. Clothing and provisions disappeared constantly and even the bibles, made available for moral improvement, had to be chained down. On one occasion the corpse of a man who died disappeared, presumed stolen. As one might expect, the governors took a dim view of any violation of God and man's law pertaining to private property. Two women who were found to have stolen seven noggins and two trenchers were ordered to be chained, set to beat hemp, and fed on bread and water for seven days. Two boys who were sent for oil but sold some of it and falsified the dockets were punished with twenty-four lashes each on the naked back. In this case the watchful governors formed the view that the lashes were applied with insufficient enthusiasm and the next week the beadles involved were fined a week's pay for the lax manner in which the strokes were applied. Despite such exemplary punishments the problem of theft persisted and at a later meeting of the governors, it was noted: It having appeared by the evidence of Mr O'Brien, Master of the Works that Sarah N- … had stolen several articles the property of the corporation and several others the property of children of the Asylum: Ordered; that the said Sarah N- … be confined in a dark room till tomorrow at 2 o'clock when she shall receive on her bare back one dozen lashes with a cat; that the Master of the Hospital, the Master of the Works and all the beadles do attend. And so life in this worthy Dublin institution continued.
The wonder is, given that such firm measures were taken during this golden age of no nonsense, that beggaring, vagrancy and poverty itself were not permanently eliminated from the city.
Landmarks & Attractions - Clifton Sights & Scenes

In a Pecan Shell
1852-53: Clifton is founded, originally named Cliff Town after local limestone bluffs.

A timeline of significant events in Clifton's history:
1859: The post office is granted. A flour mill is built, replaced in 1868.
1870: A three-story school known as Rock School is built.
1880: The Gulf, Colorado and Santa Fe Railway comes one mile south of town. Clifton serves as the Bosque County seat.
1893: A new school is built.
1895: The Clifton Record, the town's first newspaper, is published.
1896: Clifton Lutheran College, later known as Clifton College
1901: Clifton is incorporated.
1904: The population
1906: A large fire destroys much of Clifton's business district.
1907: The Clifton Volunteer Fire Department is organized.

Visits to Clifton, Texas (photographer's notes): A fun little town to explore. Good bridge to photograph. The Cliftex Theatre looks to have been restored since the pic you have was taken. Found one side street in town that still had rings to tie your horse up to. As you can tell we had lots of fun walking these towns on our 4-day trip.

- William Memorial Museum: 301 South Avenue Q
- County Conservatory of Fine Arts: the former main building of Clifton College
- Whipple Truss Bridge: a Texas Historical Landmark, built in 1884 by the Wrought Iron Bridge Company
- For those interested in Norwegian immigrants and their life in Texas, nearby Norse (FM 219 and north on FM 182) has the most history. The graveyard of Our Savior's Lutheran Church has the grave of Cleng Peerson, the "Father of Norwegian immigration".
- Lake Whitney State Park: along the eastern shore of Lake Whitney
- Clifton Chamber of Commerce: 115 N Avenue
Country of origin: United States
Status: Canceled just after spacecraft construction had begun
Maiden launch: January 1, 1966 (proposed)
Last launch: March 1, 1968 (proposed)

The Boeing X-20 Dyna-Soar ("Dynamic Soarer") was a United States Air Force (USAF) program to develop a spaceplane that could be used for a variety of military missions, including aerial reconnaissance, bombing, space rescue, satellite maintenance, and as a space interceptor to sabotage enemy satellites. The program ran from October 24, 1957, to December 10, 1963, cost US$660 million ($6.31 billion in current dollars), and was cancelled just after spacecraft construction had begun. Other spacecraft under development at the time, such as Mercury or Vostok, were space capsules with ballistic re-entry profiles that ended in a landing under a parachute. Dyna-Soar was more like an aircraft. It could travel to distant targets at the speed of an intercontinental ballistic missile, was designed to glide to Earth like an aircraft under the control of a pilot, and could land at an airfield. Dyna-Soar could also reach Earth orbit, like conventional, crewed space capsules. These characteristics made Dyna-Soar a far more advanced concept than other human spaceflight missions of the period. Research into a spaceplane was realized much later in other reusable spacecraft such as the 1981–2011 Space Shuttle and the more recent Boeing X-40 and X-37B spacecraft. The concept underlying the X-20 was developed in Germany during World War II by Eugen Sänger and Irene Bredt as part of the 1941 Silbervogel proposal. This was a design for a rocket-powered bomber able to attack New York City from bases in Germany and then fly on for landing somewhere in the Pacific Ocean held by the Empire of Japan. The idea would be to use the vehicle's wings to generate lift and pull up into a new ballistic trajectory, exiting the atmosphere again and giving the vehicle time to cool off between the skips.
After the war, it was demonstrated that the heating load during the skips was much higher than initially calculated and would have melted the spacecraft. Following the war, many German scientists were taken to the United States by the Office of Strategic Services's Operation Paperclip, bringing with them detailed knowledge of the Silbervogel project. Among them, Walter Dornberger and Krafft Ehricke moved to Bell Aircraft, where, in 1952, they proposed what was essentially a vertical launch version of Silbervogel known as the "Bomber Missile", or "BoMi". These studies all proposed various rocket-powered vehicles that could travel vast distances by gliding after being boosted to high speed and altitude by a rocket stage. The rocket booster would place the vehicle onto a suborbital, but exoatmospheric, trajectory, resulting in a brief spaceflight followed by re-entry into the atmosphere. Instead of a full re-entry and landing, the vehicle would use the lift from its wings to redirect its glide angle upward, trading horizontal velocity for vertical velocity. In this way, the vehicle would be "bounced" back into space again. This skip-glide method would repeat until the speed was low enough that the pilot of the vehicle would need to pick a landing spot and glide the vehicle to a landing. This use of hypersonic atmospheric lift meant that the vehicle could greatly extend its range over a ballistic trajectory using the same rocket booster. There was enough interest in BoMi that by 1956 it had evolved into three separate programs: Hywards, Brass Bell, and Robo. Days after the launch of Sputnik 1 on 4 October 1957, on either October 10 or October 24, the USAF Air Research and Development Command (ARDC) consolidated the Hywards, Brass Bell, and Robo studies into the Dyna-Soar project, or Weapons System 464L, with a three-step abbreviated development plan.
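The range benefit of lifting flight described above can be illustrated with a rough energy argument: for a glider that keeps lift roughly equal to weight, range is approximately the lift-to-drag ratio times the "energy height" h + v²/2g. The sketch below uses this classic flat-Earth approximation with purely hypothetical numbers, not figures from the Dyna-Soar program.

```python
# Back-of-envelope illustration (not from X-20 program documents):
# estimate hypersonic glide range with the energy-method approximation
#   range ~= (L/D) * H,  where energy height H = h + v^2 / (2 g).
# Numbers below (altitude, speed, L/D) are hypothetical examples.

G = 9.81  # gravitational acceleration, m/s^2

def energy_height_m(altitude_m, speed_ms):
    """Total specific mechanical energy expressed as a height."""
    return altitude_m + speed_ms ** 2 / (2 * G)

def glide_range_km(altitude_m, speed_ms, lift_to_drag):
    """Approximate unpowered glide range in km."""
    return lift_to_drag * energy_height_m(altitude_m, speed_ms) / 1000.0

# Hypothetical boost to 60 km altitude at 6 km/s with L/D of 1.5:
print(round(glide_range_km(60_000, 6_000, 1.5)), "km")
```

Note that almost all of the energy height comes from speed, not altitude, which is why gliding on that speed so dramatically outranges a simple ballistic fall. The flat-Earth formula also understates the real range at these speeds, since near-orbital velocity provides centrifugal relief that reduces the lift the wings must supply.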
The proposal drew together the existing boost-glide proposals into a single vehicle designed to carry out all the bombing and reconnaissance tasks examined by the earlier studies, and would act as successor to the X-15 research program. The three stages of the Dyna-Soar program were to be a research vehicle (Dyna-Soar I), a reconnaissance vehicle (Dyna-Soar II, previously Brass Bell), and a vehicle that added strategic bombing capability (Dyna-Soar III, previously Robo). The first glide tests for Dyna-Soar I were expected to be carried out in 1963, followed by powered flights, reaching Mach 18, the following year. A robotic glide missile was to be deployed in 1968, with the fully operational weapons system (Dyna-Soar III) expected by 1974. In March 1958, nine U.S. aerospace companies tendered for the Dyna-Soar contract. Of these, the field was narrowed to proposals from Bell and Boeing. Even though Bell had the advantage of six years' worth of design studies, the contract for the spaceplane was awarded to Boeing in June 1959 (by which time their original design had changed markedly and now closely resembled what Bell had submitted). In late 1961, the Titan III was chosen as the launch vehicle. The Dyna-Soar was to be launched from Cape Canaveral Air Force Station, Florida. The overall design of the X-20 Dyna-Soar was outlined in March 1960. It had a low-wing delta shape, with winglets for control rather than a more conventional tail. The framework of the craft was to be made from the René 41 super alloy, as were the upper surface panels. The bottom surface was to be made from molybdenum sheets placed over insulated René 41, while the nose-cone was to be made from graphite with zirconia rods. Due to changing requirements, several versions of the Dyna-Soar were considered, all sharing the same basic shape and layout. A single pilot sat at the front, with an equipment bay situated behind. 
This bay contained data-collection equipment, weapons, reconnaissance equipment, or a four-person mid-deck in the case of the X-20X shuttle space vehicle. A Martin Marietta Transtage upper stage attached to the aft end of the craft would allow orbital maneuvers and a launch abort capability before being jettisoned before descent into the atmosphere. While falling through the atmosphere, an opaque heat shield made from a refractory metal would protect the window at the front of the craft. This heat shield would then be jettisoned after aerobraking so the pilot could see and safely land. A drawing in the Space/Aeronautics magazine from before the project's cancellation depicts the craft skimming the atmosphere for an orbital inclination change. It would then fire its rocket to resume orbit. This would be a unique ability for a spacecraft, as the laws of celestial mechanics ordinarily mean a change of plane requires an enormous expenditure of energy. The Dyna-Soar was projected to be able to use this capability to rendezvous with satellites even if the target conducted evasive maneuvers. Unlike the later Space Shuttle, Dyna-Soar did not have wheels on its tricycle undercarriage, as rubber tires would have caught fire during re-entry. Instead, Goodyear developed retractable wire-brush skids made of the same René 41 alloy as the airframe. In April 1960, seven astronauts were secretly chosen for the Dyna-Soar program. Neil Armstrong and Bill Dana left the program in mid-1962. On September 19, 1962, Albert Crews was added to the Dyna-Soar program and the names of the six remaining Dyna-Soar astronauts were announced to the public. By the end of 1962, Dyna-Soar had been designated X-20, the booster (to be used in the Dyna-Soar I drop-tests) had been successfully fired, and the USAF had held an unveiling ceremony for the X-20 in Las Vegas.
The Minneapolis-Honeywell Regulator Company (later the Honeywell Corporation) completed flight tests on an inertial guidance sub-system for the X-20 project at Eglin Air Force Base, Florida, utilizing an NF-101B Voodoo by August 1963. Boeing B-52C-40-BO Stratofortress 53-0399 was assigned to the program for air-dropping the X-20, similar to the X-15 launch profile. When the X-20 was cancelled, it was used for other air-drop tests, including that of the B-1A escape capsule. Besides the funding issues that often accompany research efforts, the Dyna-Soar program suffered from two major problems: uncertainty over the booster to be used to send the craft into orbit, and a lack of a clear goal for the project. Many different boosters were proposed to launch Dyna-Soar into orbit. The original USAF proposal suggested LOX/JP-4, fluorine-ammonia, fluorine-hydrazine, or RMI (X-15) engines, but Boeing, the principal contractor, favored an Atlas-Centaur combination. Eventually, in November 1959, the Air Force stipulated a Titan, as suggested by failed competitor Martin, but the Titan I was not powerful enough to launch the five-ton X-20 into orbit. The Titan II and Titan III boosters could launch Dyna-Soar into Earth orbit, as could the Saturn C-1 (later renamed the Saturn I), and all were proposed with various upper-stage and booster combinations. In December 1961, the Titan IIIC was chosen, but the vacillations over the launch system delayed the project and complicated planning. The original intention for Dyna-Soar, outlined in the Weapons System 464L proposal, called for a project combining aeronautical research with weapons system development. Many questioned whether the USAF should have a crewed space program, when that was the primary domain of NASA. It was frequently emphasized by the Air Force that, unlike the NASA programs, Dyna-Soar allowed for controlled re-entry, and this was where the main effort in the X-20 program was placed.
On January 19, 1963, the Secretary of Defense, Robert McNamara, directed the U.S. Air Force to undertake a study to determine whether Gemini or Dyna-Soar was the more feasible approach to a space-based weapon system. In the middle of March 1963, after receiving the study, Secretary McNamara "stated that the Air Force had been placing too much emphasis on controlled re-entry when it did not have any real objectives for orbital flight". This was seen as a reversal of the Secretary's earlier position on the Dyna-Soar program. Dyna-Soar was also an expensive program that would not launch a crewed mission until the mid-1960s at the earliest. This high cost and questionable utility made it difficult for the U.S. Air Force to justify the program. Eventually, the X-20 Dyna-Soar program was canceled on December 10, 1963. On the day that X-20 was canceled, the U.S. Air Force announced another program, the Manned Orbiting Laboratory, a spin-off of Gemini. This program was also eventually canceled. Another black program, ISINGLASS, which was to be air-launched from a B-52 bomber, was evaluated and some engine work was done, but it too was eventually canceled. Despite the cancellation of the X-20, the affiliated research on spaceplanes influenced the much larger Space Shuttle, whose final design also used delta wings for controlled landings. The later, and much smaller, Soviet BOR-4 was closer in design philosophy to the Dyna-Soar, while NASA's Martin X-23 PRIME and Martin Marietta X-24A/HL-10 research aircraft also explored aspects of sub-orbital and space flight. The ESA's proposed Hermes crewed spacecraft was superficially similar to, but not derived from, the X-20.
Renovation projects on older homes may increase blood lead levels (BLL) to harmful amounts in children who live in those homes, according to the Centers for Disease Control and Prevention's Morbidity and Mortality Weekly Report. New York health departments assessed 972 children in 2006 to 2007, updating a report conducted in 1997. In each case, data collected included 1) the child's age, 2) blood test date, 3) BLL, 4) address and approximate age of dwelling, 5) activities that may have disrupted paint, and 6) name of the person who did the repair work. All children had blood lead levels of >20 µg/dL, and 71% of cases were age 1 to 2 years. Projects involving repairs, painting, and construction were named as likely sources of lead exposure for 14% of the New York children with raised lead levels. Specific renovations cited were sanding and scraping, chipping away paint from structures, and other activities related to lead-based paint. Additionally, the CDC urges that owners of homes constructed pre-1978 who are planning renovations take precautions to protect children from potentially harmful lead levels. Research has shown that lead levels of 10 µg/dL or greater may lead to developmental and behavioral problems, and levels of >20 µg/dL may require environmental and medical assistance. While lead levels are still being closely monitored, the news has a silver lining: median blood lead levels of young children declined 89% between the 1976–1980 period and the 2003–2004 period. "This decline is largely a result of the phase-out of leaded gasoline and efforts by federal, state, and local agencies to limit lead paint hazards in housing," officials at the CDC said in a statement.
While the drop in lead levels has resulted in a sharp dip in the number of houses with lead paint issues, the CDC contends that many children are still at risk. As a preventive measure, the CDC urges that home renovators receive more education about how to avoid lead hazards while working. Of the New York cases, resident owners or tenants were responsible for 66% of the renovation work. Anyone who removes lead-based paint is advised to adhere to recommendations from the Department of Housing and Urban Development and the Environmental Protection Agency to protect children. The CDC also offers tips on avoiding lead exposure.
Pre-Occupational Curriculum (26 hours) Emphasizes the development and improvement of written and oral communication abilities. Topics include analysis of writing, applied grammar and writing skills, editing and proofreading skills, research skills, and oral communication skills. Presents basic concepts within the field of psychology and their application to everyday human behavior, thinking, and emotion. Emphasis is placed on students understanding basic psychological principles and their application within the context of family, work and social interactions. Topics include an overview of psychology as a science, the nervous and sensory systems, learning and memory, motivation and emotion, intelligence, lifespan development, personality, psychological disorders and their treatment, stress and health, and social relations. Focuses on basic normal structure and function of the human body. Topics include general plan and function of the human body, integumentary system, skeletal system, muscular system, nervous and sensory systems, endocrine system, cardiovascular system, lymphatic system, respiratory system, digestive system, urinary system, and reproductive system. Emphasizes the application of basic mathematical skills used in the solution of occupational and technical problems. Topics include fractions, decimals, percents, ratios and proportions, measurement and conversion, formula manipulation, technical applications, and basic statistics. Introduces a grouping of fundamental principles, practices, and issues common in the health care profession. In addition to the essential skills, students explore various delivery systems and related issues. Topics include: basic life support/CPR, basic emergency care/first aid and triage, vital signs, infection control/blood and air-borne pathogens. Introduces the elements of medical terminology. Emphasis is placed on building familiarity with medical words through knowledge of roots, prefixes, and suffixes. 
Topics include: origins (roots, prefixes, and suffixes), word building, abbreviations and symbols, and terminology related to the human anatomy. Reinforces the touch system of keyboarding, placing emphasis on correct techniques with adequate speed and accuracy and on producing properly formatted business documents. Topics include: reinforcing correct keyboarding technique, building speed and accuracy, formatting business documents, language arts, proofreading, and work area management. Introduces the fundamental concepts, terminology, and operations necessary to use computers. Emphasis is placed on basic functions and familiarity with computer use. Topics include an introduction to computer terminology, the Windows environment, Internet and email, word processing software, spreadsheet software, database software, and presentation software. Occupational Curriculum (35 hours) Introduces the basic concept of medical assisting and its relationship to the other health fields. Emphasizes medical ethics, legal aspects of medicine, and the medical assistant's role as an agent of the physician. Provides the student with knowledge of medical jurisprudence and the essentials of professional behavior. Topics include: introduction to medical assisting; introduction to medical law; physician/patient/assistant relationship; medical office in litigation; as well as ethics, bioethical issues and HIPAA. Introduces medication therapy with emphasis on safety; classification of medications; their actions; side effects; medication and food interactions and adverse reactions. Also introduces basic methods of arithmetic used in the administration of medications. Topics include: introductory pharmacology; dosage calculation; sources and forms of medications; medication classification; and medication effects on the body systems. Emphasizes essential skills required for the medical practice.
Topics include: office protocol, time management, appointment scheduling, medical office equipment, medical references, mail services, medical records, and professional communication. Introduces the skills necessary for assisting the physician with a complete history and physical in all types of medical practices. The course includes skills necessary for sterilizing instruments and equipment and setting up sterile trays. The student also explores the theory and practice of electrocardiography. Topics include: infection control and related OSHA guidelines; preparing patients/assisting the physician with age- and gender-specific examinations and diagnostic procedures; vital signs/mensuration; medical office surgical procedures and electrocardiography. Furthers student knowledge of the more complex activities in a physician's office. Topics include: collection/examination of specimens and CLIA regulations/risk management; urinalysis; venipuncture; hematology and chemistry evaluations; advanced reagent testing (strep test, hCG, etc.); administration of medications; medical office emergency procedures and emergency preparedness; respiratory evaluations; principles of IV administration; rehabilitative therapy procedures; principles of radiology safety; and maintenance of medication and immunization records. Emphasizes essential skills required for the medical practice. Topics include: managed care, reimbursement, and coding. Emphasizes essential skills required for the medical practice in the areas of computers and medical transcription. Topics include: medical transcription/electronic health records; application of computer skills; integration of medical terminology; accounting procedures; and application of software. Provides fundamental information concerning common diseases and disorders of each body system.
For each system, the disease or disorder is highlighted, including: description, etiology, signs and symptoms, diagnostic procedures, treatment, management, prognosis, and prevention. Topics include: introduction to disease and diseases of the body systems. Provides students with an opportunity for in-depth application and reinforcement of principles and techniques in a medical office job setting. This clinical practicum allows the student to become involved in a work setting at a professional level of technical application and requires concentration, practice, and follow-through. Topics include: application of classroom knowledge and skills and functioning in the work environment. Seminar focuses on job preparation and maintenance skills and review for the certification examination. Topics include: letters of application, resumes, completing a job application, job interviews, follow-up letters/calls, letters of resignation, and review of program competencies for employment and certification.
The Mahatma Gandhi Setu bridge over the river Ganga in Patna is one of the world's longest river bridges. The bridge spans 5.575 km from Hajipur at the north end to Patna at the south end. Patna is located on the south bank of the river Ganga. Patna has a very long riverfront, and it is surrounded on three sides by the rivers Ganga, Sone, and Punpun. Just to the north of Patna, across the river Ganga, flows the river Gandak. Patna is a historic city and an important pilgrimage centre for Sikhs, Buddhists, and Jains. Patna houses one of the five Sikh Takhats, Takhat Patna Sahib. The Buddhist and Jain pilgrim centres of Vaishali, Rajgir (Rajgriha), Nalanda, Bodhgaya, and Pawapuri are all nearby. It is the ideal gateway for all the places on this circuit.
“The women that the system despised but who ended up silencing the whole world.” (A. Rivas, October 11, 2014, “Así Vivieron Los Niños y las Mujeres Durante la Revolución Industrial”). The impact of the Industrial Revolution on the employment of women is, without a doubt, a controversial topic: issues raised more than 70 years ago still occur today and are seen as common. Sexual harassment and the gender pay gap are among the issues that were faced then and that recur in the present. The pay gap goes as far as women being paid 76 cents for each dollar a male worker earns, or, by other estimates, about 14% to 21% less than their male coworkers. Aware of these issues, the United Nations made gender equality one of the Sustainable Development Goals (SDGs), marking it as a critical issue in need of change. Monterrey is an industrial giant, one of the most important and richest cities in Mexico. Being an industrialized city near the U.S. border, it developed into a hotspot for large industries looking for cheap labor and/or to cover the loss of failed manufacturing jobs in the U.S. However, as employment grew, so did the demand for miners and for workers strong enough to handle the workshop of a heavy industry such as Fundidora de Fierro y Acero de Monterrey, which provoked a decline in the employment of women in these industries. In this manner, women trying to earn a living were not offered the same opportunities as men, reinforcing stereotypes of women working as maids or in other "simpler" jobs that would later be interpreted as the ideal jobs for women. Industrialization in Monterrey occurred after the Second French Intervention, as a result of which commercial centers and industries exploded with high demand.
In addition to the Second French Intervention, external factors were remarkably important, such as Monterrey serving as the commercial center of the north of Mexico. Because of the United States' high tariffs and high rates on metals, metal industries were founded in Monterrey, where costs were lower than across the border; and because railroads had recently been implemented, industries could ship throughout the whole country, creating competition between the two sides of the border. The influence of the government helped these industries flourish as well, by lending land to industries and entrepreneurs, lowering their taxes, and creating an adequate environment for industries to be created and/or implemented in Monterrey. Monterrey's industrial expansion became monumental, with industries like Cervecería Cuauhtémoc and Fundidora de Fierro y Acero growing and developing at a national and international level. Fundidora de Fierro y Acero became one of the most important and outstanding industries in Monterrey because of the increasing demand for metals and its acquisition of coal and iron mines. This industry was normally linked to men, as a result of the heavy machinery and the deaths that occurred on a daily basis due to exposure to railroads and electrical cables. This led women to hold jobs such as chemical engineers and office administrators. María Dolores Palacios, for example, worked as secretary to the engineer who had the idea for the creation of Horno 3; she claims that even though her job was not "a high-wage job or a high position in the industry", her position nevertheless had great influence, keeping everything under control and lending the pencil with which the idea was developed. Even so, women's industrial influence was small compared to men's involvement.
In 1895, there were a total of 552,800 workers; by 1910 the figure had increased to 606,000, of whom two-thirds were men and one-third were women. Women worked in the clothing and footwear, tobacco, textile, food and beverage, and glass industries; in the remaining branches there were no women. In certain industries, the relative share of women's employment decreased, as in textiles, from 51% to 42%. The state government, however, worried by the lack of women's integration into local industries, called upon parents to enroll girls in school, from which they would be more open to learning about other careers. This was done because school attendance was considered "unnecessary" and "redundant" when women's futures were limited to two professions: housewife or elementary teacher. Pay was often cut as well: from 75 cents it fell to 20 cents per day, at a time when Fundidora's salaries were the highest in Monterrey; this lasted until Fundidora went bankrupt and was later shut down. Women's salaries would later rise, but never to the level of their male counterparts. Even though both male and female wages decreased, as mentioned before, the gap began to widen once again in 1996, following the Mexican economic crisis. The gap varied by occupation: for instance, female teachers made 91.2% of the salary of male teachers, while female industrial supervisors earned barely 66.9% of the salary earned by their male coworkers, according to the International Trade Union Confederation. According to locals, "… even though Monterrey is the most important city in commerce of the northern region, gender pay gaps are indeed noticeable and easily observed in the working environments.
Male workers, even if they are in the same position as a female coworker, would earn at least about 10% more than her, never the same, let alone the reverse," says Gladys Roa, former marketing manager of Mabe Motors. In conclusion, the Industrial Revolution created a field of opportunities for employment; even though employing women was not common in some industries, women were still hired and able to take care of their families. Nowadays, women's working environment has improved compared to the post-Industrial Revolution era with respect to gender pay gaps; nevertheless, this "change" cannot be held up as an example, as it has not been enough to make an impact at the national level. The government's interest in the involvement of women in local industries was an important factor in opening career opportunities and in the employment of women in these industries.
Every year, millions of people over the age of 65 experience a fall. According to the CDC, if such a fall leads to a broken or fractured bone, one in three patients will die within a year. Brendan Tyrell, a second-year mechanical engineering student from Ambler, Pennsylvania, is spending his first co-op experience working with a company that hopes to prevent these kinds of injuries from happening. Tango, a company based in Fort Washington, Pennsylvania, is the maker of the Tango Belt, a wearable personal safety device that uses motion-tracking technology to detect a fall and deploy an airbag to protect the wearer. “When I was looking for a co-op opportunity, I wanted to do something really techy and to maybe work with robotics,” Tyrell said. “I went searching for companies, and I found Tango. They’re a really small company with an important goal, and I connected with that.” The difference between the Tango Belt and fall detection products like smart watches and wearable alert systems, Tyrell explains, is that the belt can actively detect a fall and prevent injury, whereas other devices can only react after a fall has happened. For the belt to be successful, it requires a lot of engineering. “We’re using top-of-the-line accelerometers and gyroscopes with high refresh rates so that the belt can quickly detect a fall and deploy the airbag,” he says. “It’s on the scale of a couple hundred milliseconds. We also need to apply machine learning so the device can differentiate between a fall and simply sitting down or moving your hip.” Tyrell says that, because of the size of the company, he is getting more opportunities to contribute. He is testing materials to make sure they meet specifications, building modular drop testing machines, and even sitting in on publicity meetings as the company looks for a strategic partner to scale the business. “I want to get as many experiences as I can – design, manufacturing and business,” he says.
“I know that soaking all this up now can help me moving forward.” Tyrell is also happy to have the opportunity to be learning from a company with a mission. “Someone who is perfectly fine one day experiences a fall and suddenly they can’t walk up the stairs unassisted anymore,” he says. “It takes away their livelihood. The impact that Tango could have is huge, and I’m thrilled to be part of it.”
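Tango has not published the details of its detection algorithm, but the ingredients Tyrell describes (high-refresh-rate accelerometer data and a decision within a few hundred milliseconds) can be illustrated with a minimal threshold-based sketch. Everything below is an assumption for illustration, not Tango's implementation: the sample rate, thresholds, and window length are invented. The idea is that a fall tends to show a brief free-fall phase, where acceleration magnitude drops well below 1 g, followed by an impact spike.

```python
import math

G = 9.81  # standard gravity, m/s^2

def detect_fall(samples, rate_hz=200,
                freefall_thresh=0.4 * G,   # magnitude well below 1 g
                impact_thresh=3.0 * G,     # sharp spike on landing
                window_ms=400):
    """Return True if a free-fall phase is followed by an impact spike
    within `window_ms`. `samples` is a list of (ax, ay, az) readings."""
    window = int(rate_hz * window_ms / 1000)
    mags = [math.sqrt(ax * ax + ay * ay + az * az) for ax, ay, az in samples]
    for i, m in enumerate(mags):
        if m < freefall_thresh:
            # look ahead a few hundred milliseconds for the impact spike
            if any(v > impact_thresh for v in mags[i + 1:i + 1 + window]):
                return True
    return False
```

A production device would feed features like these into a trained classifier, as Tyrell notes, rather than relying on fixed thresholds, so that sitting down hard is not mistaken for a fall.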
Identification of MIB (2-methylisoborneol)-producing cyanobacteria in source water has been a big challenge for reservoir authorities because it normally requires isolation of cyanobacteria strains. Here, a protocol based on Pearson’s product moment correlation analysis combined with standardized data treatment and expert judgement was developed to sort out the MIB producer(s), mainly based on routine monitoring data from an estuary drinking water reservoir in the Yangtze River, China, and a risk model using quantile regressions was established to evaluate the risk of MIB occurrences. This reservoir has suffered from MIB problems in summer since 2011. Among 323 phytoplankton species, Planktothrix was judged to be the MIB producer in this reservoir because it exhibited the highest correlation coefficient (R = 0.60) as well as the lowest false-positive rate (FP% = 0) and false-negative rate (FN% = 14). The low false-positive rate is particularly important, since MIB should not be detected without detection of the producer. A high light extinction coefficient (k = 5.57 ± 2.48 m⁻¹) attributed to high turbidity loading in the river water lowered the subsurface light intensity, which could protect the low-irradiance Planktothrix from excessive solar radiation and allow them to grow throughout the summer. The risk model shows that the probability of suffering unacceptable MIB concentrations (>15 ng L⁻¹) in water is as high as 90% if the cell density of Planktothrix is >609.0 cells mL⁻¹, while the risk is significantly reduced to 50% and 10% at cell densities of 37.5 cells mL⁻¹ and 9.6 cells mL⁻¹, respectively. The approach developed in this study, including the protocol for identification of potential producers and the risk model, could provide a reference case for the management of source water suffering from MIB problems using routine monitoring data.
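The screening logic of the protocol (rank each candidate taxon by its Pearson correlation with MIB, and penalize false positives, where the taxon is present but MIB is absent, and false negatives, where MIB is present but the taxon is absent) can be sketched in plain Python. The detection thresholds below are illustrative assumptions, and the quantile-regression risk model itself is omitted; this only shows the candidate-ranking step on paired routine-monitoring data.

```python
def screen_candidates(mib, taxa, mib_detect=1.0, cell_detect=0.0):
    """Rank candidate MIB producers from paired routine-monitoring data.

    mib  : list of MIB concentrations (ng/L), one per sampling date
    taxa : dict {species: list of cell densities (cells/mL)}, same dates

    Returns {species: (pearson_r, fp_pct, fn_pct)} where
      FP% = share of species-present samples with no MIB detected
      FN% = share of MIB-detected samples with the species absent
    """
    def pearson(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sxx = sum((a - mx) ** 2 for a in x)
        syy = sum((b - my) ** 2 for b in y)
        return sxy / (sxx * syy) ** 0.5

    out = {}
    for sp, cells in taxa.items():
        present = [c > cell_detect for c in cells]
        mib_hit = [m >= mib_detect for m in mib]
        fp = sum(p and not h for p, h in zip(present, mib_hit))
        fn = sum(h and not p for p, h in zip(present, mib_hit))
        n_p = sum(present) or 1
        n_h = sum(mib_hit) or 1
        out[sp] = (pearson(cells, mib), 100 * fp / n_p, 100 * fn / n_h)
    return out
```

A true producer should score like the paper's Planktothrix: a high correlation with MIB together with FP% and FN% near zero, while taxa that merely co-occur with MIB by chance show high FP% or FN%.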
What Does Decobaltification Mean? Decobaltification is a corrosion process in which cobalt is selectively leached from cobalt-base alloys. It creates problems in the tooling/machining industries where cobalt is leached by many of the amino alcohols and amine-based additives found in almost all water-miscible machining fluids. Cobalt leaching is hazardous to carbide tools as well. Decobaltification is at the heart of three potentially costly problems: - Reduced performance and life of the tool - Health problems in some workers, causing dermatitis and respiratory distress - Disposal of contaminated wastewater Corrosionpedia Explains Decobaltification In the decobaltification process, the less noble metal is removed from the alloy by microscopic-scale galvanic corrosion. A porous material with very low strength and ductility is the result. Regions that are selectively corroded are sometimes covered with corrosion products or other deposits, and since the component keeps the original shape, the attacks may be difficult to discover. Cobalt alloys are components in tool steels. Stellite is an alloy containing cobalt and chromium, and sometimes other metals. Stellite-type alloys that are used for tools are responsible for: - Wear resistance - Excellent cutting When a tool loses its binder, the surface is weakened and may be subject to accelerated wear. The weakened surface structure compromises the bond between the tool and any coating applied to it. In the case of hard metal machining, decobaltification can be controlled by adding inhibitors to the fluid, such as triazoles. While this is effective initially, the inhibitor becomes depleted as the fluid is used and loses its effectiveness. However, inhibitors do little to control cobalt leaching during the manufacture of tungsten carbide tools, and during carbide grinding, where metal fines are a problem.
The Importance of Alpha Lipoic Acid Benefits Of Alpha Lipoic Acid Alpha Lipoic Acid has gained a lot of popularity in recent years, and many people are now buying these supplements in the hopes of experiencing a reduction in their blood glucose levels and even improving nerve function in their body. This is a type of organic compound that is naturally produced within the mitochondria in the human body. The primary role of Alpha Lipoic Acid is to help certain enzymes use nutrients in the body to produce energy, but this compound is also considered a potent antioxidant that can help to protect the body. This organic compound has a structure that is similar in some ways to vitamins. It is considered both a fat-soluble and a water-soluble compound, which means it can interact with every single cell that is found within the body. Understanding the benefits, according to scientific research, is important, and knowing just how effective the supplement is for certain advantages can help a person determine if this is an appropriate option for them to consider. What Are The Benefits Of Using Alpha Lipoic Acid? One of the most common reasons why people tend to opt for Alpha Lipoic Acid is its potential to help manage diabetes. Scientists have found that the compound may actually be effective in reducing blood glucose levels. In some studies, there have been reductions in blood glucose levels of up to 64%, which is considered quite impressive. There is also some research that suggests supplements containing Alpha Lipoic Acid may be useful in reducing insulin levels in the body. While there is still some controversy regarding how exactly the compound is useful for diabetes, scientists currently believe that the main mechanism of action is that the compound can help remove the fat that is known to accumulate within the cells that make up muscle tissue.
Other potential benefits that are also worth noting about Alpha Lipoic Acid include: ·A reduction in the risks of complications associated with diabetes. Studies found that Alpha Lipoic Acid may reduce the risk of nerve damage, as well as diabetic retinopathy, in patients who had been diagnosed with type 2 diabetes. ·Alpha Lipoic Acid may also be useful for improving skin health. One study showed significant improvements in wrinkles, skin roughness, and fine lines when a cream containing this particular compound was applied daily to the affected areas. ·Some studies have also shown that the use of Alpha Lipoic Acid supplements could be helpful in patients with existing Alzheimer’s disease. The compound may assist in reducing the rate at which such a disease progresses. This is primarily due to the high antioxidant activity noted with the administration of the compound. Alpha Lipoic Acid has become a popular type of supplement that is widely available on the market today. While produced naturally in the body, additional supplementation of the compound may help to offer a person certain benefits. We looked at the scientifically proven benefits in this post.
‘License Plates’ for Drones? If we’re going to put cameras on drones and let just about anyone with a few hundred bucks fly them around our neighborhoods, recording video of anyone in our backyards and doing who-knows-what with them, shouldn’t there at least be a way to identify who the drone belongs to? Shouldn’t there be something akin to a “license plate” for drones? We think so. Drones — also known as “unmanned aerial vehicles” or “unmanned aircraft systems” — have captured the media spotlight, thanks in no small part to Kentucky Senator Rand Paul’s withering 13-hour filibuster of President Obama’s nomination of John Brennan to head the CIA. Paul’s marathon rant was in opposition to the Administration’s legal position on potential uses of militarized drones within U.S. borders. Weaponized drones in US airspace pose several concerns; however, at CDT we’ve focused lately on the more mundane, civilian uses of drones. The Federal Aviation Administration (FAA) recently issued a Request for Comment on safety and privacy issues of civilian uses of drones. As part of a longer FAA imperative to carefully integrate drone operations into civilian airspace, the FAA will establish six drone test sites to conduct research and collect data to assess safety, privacy and operational issues with drones. Surveillance and remote imaging will be a major use of civilian drones, useful for applications including traffic monitoring, weather forecasting, and mapping. Such uses naturally raise questions that implicate the privacy interests of individuals on the ground. There are many basic questions that should be answered before drones are allowed in America’s skies: - How can the public know what drones are operating in the skies above them? - How can the public learn more about a drone’s ownership and data collection capabilities? - Should there be a “license plate” for drones, like tail numbers on larger aircraft? 
International treaties and FAA regulations require all US aircraft to display an “N-Number” (sometimes referred to as a “tail number”), similar to a license plate for automobiles. Anyone can query the FAA Registry to look up a tail number such as “N155AN” to find important details about the aircraft, including, for example, the name and address of the owner. But tail numbers on drones? That doesn’t seem like a useful type of identifier. With no pilot on board, drones can be quite small and operate at high altitudes. Even with binoculars or more expensive spotting scopes, it’s unlikely that a drone’s N-number would be readable from the ground. So a traditional license plate-like insignia doesn’t seem to meet the transparency needs of our drone-filled future. In CDT’s comments to the FAA, we plan to propose a straightforward answer to this problem: the FAA should require each civilian drone to use an on-board radio frequency transmitter to broadcast an identifier, like an aircraft tail number. This would act like a “beacon” that would communicate a drone’s unique identification number to observers on the ground. While comments on the FAA Request for Comment are not due until next month (23 April), CDT wants to refine this idea between now and the comment deadline. We could use your help; we are not RF engineers and have a number of questions that should be addressed to make this proposal more viable. This is a nascent idea; as far as we can tell no one has proposed such an idea before, and there might be standards from other applications that could be applied to drone identification. Here is some of the feedback we would appreciate in order to better understand the feasibility of our RFDID proposal: - What type of radio-frequency band should be used so as to not interfere with other radio-frequency operations? Should this be a dedicated area of existing aviation spectrum or unlicensed spectrum? 
- What modulation or signaling techniques — such as frequency-division multiplexing — would allow the optimal allocation of broadcast identifiers?
- Are there widely accepted standards for radio-frequency identification transmitters and associated protocols that we should recommend for this purpose to the FAA?
- What signal protocol design would accommodate areas, such as large cities, where there may be hundreds or thousands of drones aloft at one time?
- What kinds of sanctions should exist for operating a drone without such a signal or with an impaired signal?
- What kinds of technical characteristics should such a signal have? For example, should the signal be verified through a cryptographic signature to avoid drones spoofing each other's identification numbers? Or are legal and policy sanctions against spoofing enough? Should the strength of an RFDID signal depend on the spatial extent of the drone's imaging capabilities?
- Are there drone platforms where broadcasting an RFDID would be inappropriate for the class of drone (e.g., small toys) or infeasible due to weight and power limitations of the platform?
Please get in touch with me, CDT's Senior Staff Technologist, at [email protected] if you have thoughts on these questions or questions we may not know to ask.
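To make the cryptographic-signature question concrete, here is one hypothetical shape a beacon payload could take. This is purely a thought experiment, not part of CDT's proposal or any existing standard: the frame layout, field sizes, and the shared-secret HMAC scheme are all illustrative assumptions.

```python
import hmac
import hashlib
import struct

def build_beacon(drone_id: str, secret: bytes, timestamp: int) -> bytes:
    """Pack a hypothetical RFDID frame: an 8-byte ASCII identifier
    (padded, like an aircraft tail number), a 4-byte timestamp, and a
    16-byte truncated HMAC-SHA256 signature -- 28 bytes in total."""
    body = struct.pack(">8sI", drone_id.encode("ascii"), timestamp)
    signature = hmac.new(secret, body, hashlib.sha256).digest()[:16]
    return body + signature

def verify_beacon(frame: bytes, secret: bytes) -> bool:
    """Recompute the signature over the frame body to detect a drone
    spoofing another drone's identification number."""
    body, signature = frame[:12], frame[12:]
    expected = hmac.new(secret, body, hashlib.sha256).digest()[:16]
    return hmac.compare_digest(signature, expected)
```

A symmetric shared secret is the weakest part of this sketch (anyone who can verify can also forge); a real scheme would more likely use public-key signatures, at the cost of a larger frame on a power-constrained platform, which is exactly the trade-off the questions above are probing.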
Course Hero. "Cry, the Beloved Country Study Guide." Course Hero. 7 Apr. 2018. Web. 19 May 2022. <https://www.coursehero.com/lit/Cry-the-Beloved-Country/>.
Chapter 5 centers on Stephen's first impressions of Mission House, on what he is able to learn at the start of his visit about conditions in Johannesburg, and especially about his missing family members in the big city. None of what he learns is especially encouraging. The chapter begins with a humorous demonstration of Stephen's lack of familiarity with modern conveniences, as he and Theophilus visit the washroom before a meal. Both white and black priests are seated at the table. The company discusses the "sickness of the land," or the breakdown of agriculture in the countryside, as well as the increasing prevalence of native crime. Privately, Theophilus discusses Gertrude's situation with Stephen. Gertrude had originally traveled to Johannesburg in search of her husband, who had been recruited to work in the mines. Now, according to Theophilus, Gertrude's life has taken a turn for the worse. She is involved with many men and with the making and selling of liquor. She has become a prostitute, and has served time in prison. Worst of all, she has a young child living with her. Stephen also asks about Absalom, his son, and about John, his brother. Perhaps Gertrude may help to locate Absalom, says Theophilus.
John, a carpenter by profession, has now turned into one of the most prominent black politicians, and wants nothing further to do with the church. He is calling vociferously for social and political reforms. At the end of the chapter, Theophilus escorts Stephen to the lodgings he has found for him, a room provided by Mrs. Lithebe, a member of the church. In this chapter, set in the nearby district of Claremont, Stephen finds his sister, Gertrude. He is shocked by her shabby and squalid living conditions, and tells her angrily she has brought shame upon their family, but quickly regains his composure. Gertrude can offer only vague news about Absalom, telling her brother the best lead to his whereabouts will be John's son, whom Absalom has befriended. Stephen arranges for Gertrude and her child to move and take up residence at Mrs. Lithebe's house. These chapters mingle some of the major problems of South African society at large with Stephen's specific concerns—the whereabouts and activities of his close relatives. Considering the strong emphasis thus far on Johannesburg as a center of violence, political ferment, and petty crime, it is suggested that Gertrude, Absalom, and John may be involved in dangerous pursuits. Readers alert to Paton's habitual use of biblical allusions will take a cue from Absalom's name. A son of King David, Absalom rebelled against his father and was ultimately killed, to David's great grief (2 Samuel 13-19). The name implies that Absalom, who had traveled to Johannesburg in search of Gertrude, has fallen on evil days, and future events will soon bear out this inference. At several points in the novel, Paton portrays Stephen realistically as a human being who is tempted or conflicted by strong emotions. In Chapter 6, his anger with Gertrude clashes with his compassion, which ultimately wins out in a display of "deep gentleness." At the end of Chapter 6, with his apparent reclaiming of Gertrude, Stephen experiences a wave of euphoria. 
After a single day in Johannesburg, he feels "the tribe was being rebuilt." The reader will learn shortly that Stephen's feelings of relief and happiness are premature.
Insights Daily Current Affairs, 18 April 2018
Topic: Indian culture will cover the salient aspects of Art Forms, Literature and Architecture from ancient to modern times.
Context: The Archaeological Survey of India (ASI) will be using Ground Penetrating Radar (GPR) to map the contours of the area around the Bagh-e-Naya Qila excavated garden inside the Golconda Fort. It has roped in the Indian Institute of Technology-Madras (IIT-M) to carry out the mapping.
About Bagh-e-Naya Qila:
- The Naya Qila garden inside Golconda Fort was built by successive rulers of the Deccan and is one of the few symmetrical gardens extant.
- There are strange figures and animals worked out of stone and stucco on the walls of the outer fort facing the Naya Qila.
- In 2014, when the ASI excavated the area after diverting the water flow, it discovered water channels, settlement tanks, walkways, fountains, gravity pumps, and a host of other garden relics.
What is Ground Penetrating Radar (GPR) technology?
Ground-penetrating radar (GPR) is a geophysical method that uses radar pulses to image the subsurface. This nondestructive method uses electromagnetic radiation in the microwave band (UHF/VHF frequencies) of the radio spectrum, and detects the reflected signals from subsurface structures.
How does it work?
GPR transmits a high-frequency radio signal into the ground; reflected signals are returned to the receiver and stored on digital media. The computer measures the time taken for a pulse to travel to and from the target, which indicates its depth and location. The reflected signals are interpreted by the system and displayed on the unit's LCD panel. GPR has applications in a variety of media, including rock, soil, ice, fresh water, pavements and structures. In the right conditions, practitioners can use GPR to detect subsurface objects, changes in material properties, and voids and cracks.
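The two-way travel-time calculation described above can be sketched as follows. This is a minimal illustration, not instrument-vendor code; the relative permittivity figure in the example is an assumed value for dry sand, and in practice the wave velocity must be calibrated for the actual soil.

```python
import math

C = 299_792_458.0  # speed of light in a vacuum, m/s

def gpr_depth(two_way_time_s: float, relative_permittivity: float) -> float:
    """Estimate target depth from a GPR echo.

    The pulse travels down to the reflector and back, so depth is half
    the two-way travel time multiplied by the wave velocity in the
    medium, v = c / sqrt(relative permittivity).
    """
    velocity = C / math.sqrt(relative_permittivity)
    return velocity * two_way_time_s / 2.0

# Example: a 20 ns echo in dry sand (assumed relative permittivity ~4)
# places the reflector at roughly 1.5 m depth.
print(round(gpr_depth(20e-9, 4.0), 2))
```

This also shows why high-conductivity or heterogeneous ground degrades GPR (noted below): the velocity model stops being uniform and the echo is attenuated or scattered before it returns.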
The most significant performance limitation of GPR is in high-conductivity materials such as clay soils and soils that are salt contaminated. Performance is also limited by signal scattering in heterogeneous conditions (e.g. rocky soils).
- For Prelims: Naya Qila and GPR technology.
- For Mains: GPR technology, applications and limitations.
Sources: the hindu.
Topic: Salient features of the Representation of People's Act.
Context: A draft white paper released by the Law Commission of India has recommended holding simultaneous elections to the Lok Sabha and the Assemblies, possibly in 2019. It suggests amending the Constitution to realise this objective.
Simultaneous elections were held in the country during the first two decades after Independence, up to 1967. Dissolution of certain Assemblies in 1968 and 1969, followed by the dissolution of the Lok Sabha, led to the "disruption of the conduct of simultaneous elections".
Key recommendations made by NITI Aayog in this regard:
- Simultaneous elections may be restored in the nation by amending the Constitution, the Representation of the People Act of 1951 and the Rules of Procedure of the Lok Sabha and Assemblies.
- The leader of the majority party should be elected as PM or CM by the entire house, for stability.
- In case a government falls midterm, the term of the new government would be for the remaining period only.
- A no-confidence motion against the government should be followed by a confidence motion. No-confidence motions and premature dissolution of the House are major roadblocks to simultaneous elections. Parties which introduce a no-confidence motion should simultaneously give a suggestion for an alternative government.
- The "rigours" of the anti-defection law in the Tenth Schedule should be relaxed to prevent a stalemate in the Lok Sabha or Assemblies in case of a hung Parliament or Assembly.
Simultaneous elections: Is it a good idea?
- This will help save public money.
- It will be a big relief for political parties that are always in campaign mode. - It will allow political parties to focus more on policy and governance. Need for simultaneous elections: - To reduce unnecessary expenditures: Elections are held all the time and continuous polls lead to a lot of expenditure. More than Rs1,100 crore was spent on the 2009 Lok Sabha polls and the expenditure had shot up to Rs4,000 crore in 2014. - To reduce the unnecessary use of manpower: Over a crore government employees, including a large number of teachers, are involved in the electoral process. Thus, the continuous exercise causes maximum harm to the education sector. - Security concerns: Security forces also have to be diverted for the electoral work even as the country’s enemy keeps plotting against the nation and terrorism remains a strong threat. The time is ripe for a constructive debate on electoral reforms and a return to the practice of the early decades after Independence when elections to the Lok Sabha and state assemblies were held simultaneously. It is for the Election Commission to take this exercise forward in consultation with political parties. Facts for Prelims: - Law Commission of India is an executive body established by an order of the Government of India. Its major function is to work for legal reform. Its membership primarily comprises legal experts, who are entrusted a mandate by the Government. - The Commission is established for a fixed tenure and works as an advisory body to the Ministry of Law and Justice - The first Law Commission was established during the British Raj era in 1834 by the Charter Act of 1833. After that, three more Commissions were established in pre-independent India. The first Law Commission of independent India was established in 1955 for a three-year term. Sources: the hindu. Topic: Separation of powers between various organs dispute redressal mechanisms and institutions. 
Mahanadi Water Disputes Tribunal Context: The Central Government recently handed over the reference of the Mahanadi Water Disputes Tribunal under Section 5 (1) of the Inter-State River Water Disputes Act (ISRWD), 1956 to the Chairman of the tribunal and Supreme Court Judge, Justice A M Khanwilkar. The Tribunal has been constituted following orders of the Supreme Court. The Government of Odisha had sought to refer the water dispute regarding the inter-state river Mahanadi and its river valley to a Tribunal for adjudication under the Inter-State River Water Disputes Act, 1956. Legal provisions in this regard: - The tribunal will be formed according to the provisions of the Inter-State River Water Disputes (ISRWD) Act, 1956. - It will have a chairperson and two other members nominated by the Chief Justice of India from among the judges of the apex court or high courts. - As per provisions of the ISRWD Act, 1956, the Tribunal is required to submit its report and decision within a period of 3 years, which can be extended by a further period not exceeding 2 years due to unavoidable reasons. What's the dispute? Odisha and Chhattisgarh have been locked in a dispute over the Mahanadi waters since the mid-1980s. Odisha claims that the Chhattisgarh government has been constructing dams in the upper reaches of the Mahanadi, depriving its farmers, who are heavily dependent on the river's waters. Chhattisgarh has been against the setting up of a tribunal, and argued that the water sharing agreement was with the erstwhile Madhya Pradesh government, before the state was carved out in 2000. To chalk out the future course of action in view of the disputes regarding the use of Mahanadi river water, a well-rounded strategy that includes both the people and policymakers is needed. The strategy must allow for dialogue by rebuilding trust and should look at arbitration and negotiation as methods of conflict resolution.
It is necessary to evolve a strategy that optimises the rational usage of Mahanadi water to benefit people from both Chhattisgarh and Odisha, coupled with the implementation of a multi-stakeholder forum that finds peaceful solutions and minimises areas of contention in a negotiable and consensual manner.
- For Prelims: Composition of the tribunal, Mahanadi river.
- For Mains: Dispute resolution: challenges, issues and solutions.
Sources: the hindu.
Topic: Important International institutions, agencies and fora, their structure, mandate.
World Heritage Day
Context: Every year, 18th April is celebrated worldwide as World Heritage Day to create awareness about heritage among communities. 2018 theme: Heritage for Generations.
What is a World Heritage site?
A World Heritage site is classified as a natural or man-made area or a structure that is of international importance, and a space which requires special protection. These sites are officially recognised by the UN agency the United Nations Educational, Scientific and Cultural Organisation (UNESCO). UNESCO believes that the sites classified as World Heritage are important for humanity, as they hold cultural and physical significance. In 1982, the International Council on Monuments and Sites (ICOMOS) announced 18 April as "World Heritage Day", approved by the General Assembly of UNESCO in 1983, with the aim of enhancing awareness of the importance of the cultural heritage of humankind and redoubling efforts to protect and conserve the human heritage.
Key facts for Prelims:
- As of 2018, India has 36 world heritage sites, the sixth most of any country.
- Italy leads with 53 sites, followed by China with 52 sites.
Sources: the hindu.
Topic: Science and Technology- developments and their applications and effects in everyday life. Achievements of Indians in science & technology; indigenization of technology and developing new technology.
Atal Tinkering Labs
Context: The ATL Community Day was held across India over the course of April 13-16. The initiative is an effort to spread awareness and engage the local communities in the neighbourhood of an Atal Tinkering Lab, inviting them to come and experience the exciting new world of science and future technologies. AIM selected more than 2400 schools in 2017 for establishing Atal Tinkering Labs.
What are ATLs?
With a vision to 'Cultivate one Million children in India as Neoteric Innovators', the Atal Innovation Mission is establishing Atal Tinkering Laboratories (ATLs) in schools across India.
Objective: The objective of this scheme is to foster curiosity, creativity and imagination in young minds, and inculcate skills such as design mindset, computational thinking, adaptive learning, physical computing etc.
Financial Support: AIM will provide grant-in-aid that includes a one-time establishment cost of Rs. 10 lakh and operational expenses of Rs. 10 lakh for a maximum period of 5 years to each ATL.
Eligibility: Schools (minimum Grade VI-X) managed by Government, local bodies or private trusts/societies can set up an ATL.
Significance of ATLs:
- Atal Tinkering Labs have evolved into epicenters for imparting 'skills of the future' through practical applications based on self-learning.
- Bridging a crucial social divide, Atal Tinkering Labs provide equal opportunity to all children across the spectrum by working at the grassroots level, introducing children to the world of innovation and tinkering.
As the world grapples with evolving technologies, a new set of skills has gained popular acceptance and come to be in high demand. For India to contribute significantly during this age of rapid technological advancement, there is an urgent need to empower our youth with these 'skills of the future'.
Equipped with modern technologies to help navigate and impart crucial skills in the age of the Fourth Industrial Revolution, the ATLs are at the vanguard of promoting scientific temper and an entrepreneurial spirit in children today. The Atal Innovation Mission (AIM) endeavours to promote a culture of innovation and entrepreneurship. Its objective is to serve as a platform for the promotion of world-class Innovation Hubs, Grand Challenges, start-up businesses and other self-employment activities, particularly in technology-driven areas. The Atal Innovation Mission shall have two core functions:
- Entrepreneurship promotion through Self-Employment and Talent Utilization, wherein innovators would be supported and mentored to become successful entrepreneurs.
- Innovation promotion: to provide a platform where innovative ideas are generated.
- For Prelims: AIM, ATLs and their key features.
- For Mains: Need for innovation and efforts by the government in this regard.
Topic: Conservation, environmental pollution and degradation, environmental impact assessment.
National Clean Air Programme
Context: The Environment Ministry has come out with a draft national action plan proposing multiple strategies to reduce air pollution.
Objectives of NCAP:
- To augment and evolve an effective and proficient ambient air quality monitoring network across the country to ensure a comprehensive and reliable database.
- To have efficient data dissemination and a public outreach mechanism for timely measures for prevention and mitigation of air pollution.
- To have a feasible management plan for prevention, control and abatement of air pollution.
Under the NCAP, the ministry plans to take a host of measures to bring down air pollution.
- These include augmenting the air quality monitoring network, identification of alternative technology for real-time monitoring, setting up of a 10-city super network, indoor air pollution monitoring and management, and air pollution health impact studies.
- Other measures include an air quality forecasting system, issuance of a notification on dust management, a three-tier mechanism for review, assessment and inspection for implementation, and a national emission inventory.
The draft has received a mixed response. Activists claimed that the draft lacked its earlier set target of bringing down air pollution by 50% in five years, and that the absence of these targets and of sector-based targets is limiting and feeble.
Need for an action plan: More than 80% of cities in the country where air quality is monitored are severely polluted, and this impacts 47 million children across the country. Also, 580 million people in India live in districts that do not have a single air quality monitoring station.
Sources: the hindu.
Facts for Prelims:
Context: The government has launched the DARPAN-Postal Life Insurance App. The App will help in the collection of premiums for postal life insurance and rural postal life insurance policies at branch post offices anywhere in India, with online updating of the policies.
DARPAN project: With a view to achieving total digitisation of postal operations in the country, the department has launched the Digital Advancement of Rural Post Office for a New India (DARPAN) Project, which aims at connecting all 1,29,000 rural branch post offices.
By Kerry Wolfe In Uganda, a symbiotic relationship between humans and gorillas is crucial to the survival of both. Fifty percent of revenue from the tourism industry is generated through gorilla tourism alone, yet these animals have become critically endangered largely due to the spread of disease from human to animal. Gorillas share 98.4 percent of their DNA with humans, and over the past 10 years — due to deforestation and a rapidly expanding human population — have been forced to share their habitat with their Homo sapiens neighbors. This has put gorillas in Uganda, Rwanda and the Democratic Republic of Congo at risk of contracting communicable diseases. Treatable diseases, such as the flu, can become fatal once transferred from humans to the gorilla population, which has not developed immunities to these maladies. Enter Conservation Through Public Health, a nonprofit organization located in the mist-shrouded mountains of the Bwindi Impenetrable National Park in Uganda, that is working to control the spread of disease between humans, wildlife and livestock while educating the local people about the economic, social and physical benefits of healthcare. "Notorious throughout the global community for being one of the most impoverished nations in the world, it is no surprise that Uganda is one of the 22 worst affected countries with Tuberculosis, contributing to 80% of the global burden. Other major threats to its local people and wildlife include dysentery, anthrax, measles, diarrhea and the flu. For example, in 2004 and 2005, an anthrax outbreak resulted in the death of over 300 hippos representing 5% of the hippo population in Queen Elizabeth National Park, putting cattle and people at risk from contracting this fatal disease. District medical officials reported cases of people who ate the hippo meat and developed clinical signs, further demonstrating the connection between the health of animals and humans," reports CTPH. In 2002, Dr.
Gladys Kalema-Zikusoka founded CTPH after witnessing the devastating effects of a scabies outbreak on the gorilla population in East Africa. A leading conservationist, Kalema-Zikusoka began advocating for gorilla conservation by promoting community-based healthcare initiatives and creating public awareness about the benefits of good health and hygiene. Rachel Winnik Yavinsky, policy associate for the Population Reference Bureau's International Programs, wrote on the PRB's blog for Population, Health and the Environment (PHE) that during a recent visit to CTPH's Gorilla Research Clinic in Bwindi she "was particularly impressed by the dedication and enthusiasm of CTPH's Community Conservation and Health Volunteers." She goes on to say that these "men and women were elected from 29 local villages to be educated in conservation and hygiene practices, as well as family planning counseling and service delivery." Yavinsky was "especially inspired" at a volunteer meeting where she spoke with Milliam, the wife of a traditional healer and a family planning and PHE champion. Milliam had seven children before she learned about family planning. Now she is a community conservation and health worker, and has talked to her daughters about having two children each so that they can afford to send them all to school. CTPH is aware that in order to protect the gorillas, they must first convince the local people to get on board. Most of the communities that live in close proximity to gorilla habitats are impoverished and in desperate need of a steady source of income. Through initiatives like the volunteer program, CTPH educates the people on how to capitalize on the gorillas' presence in the area through sustainable and ethical jobs in the tourism industry while simultaneously bringing awareness to the importance of good health practices. "Before mountain gorilla tourism came along, these rural communities had very little hope of overcoming their poverty.
Now, mud huts that were once selling local brew have been transformed into flourishing trading centers because of the traffic associated with tourism. It was clear that not only was poor health and hygiene affecting public health and wildlife conservation, but it was also affecting sustainable ecotourism. We realized that if this important source of income is to remain forever, both people and gorillas need to have adequate health care. This inspired us to establish Conservation Through Public Health." — Dr. Gladys Kalema-Zikusoka About Kerry Wolfe Kerry is a sophomore at Syracuse University's S.I. Newhouse School of Public Communications, where she's working towards a BA in magazine journalism. She loves to travel, and plans to spend her career exploring the world and writing about the people and places she encounters. Kerry's also a huge animal lover, and the only thing she loves more than visiting a new place is spending time with her horse.
National Geographic : 1929 Feb
There is no standing still. An Advertisement of the American Telephone and Telegraph Company
DURING the past two years 6,000 switchboards have been reconstructed in the larger cities served by the Bell System to enable the operators to give a more direct and faster service. Previously, in towns where there was more than one central office, your operator would hold you on the line while she got the operator at the other central office on an auxiliary pair of wires. Now she connects directly with the other central office and repeats the number you want to the other operator. You hear her do this so that you can correct her if there is any mistake. This little change cost millions of dollars. Likewise, it saves millions of minutes a day for the public and it has cut down the number of errors by a third. It is one of the many improvements in methods and appliances which are constantly being introduced to give direct, high-speed telephone service. There is no standing still in the Bell System. Better and better telephone service at the lowest cost is the goal. Present improvements constantly going into effect are but the foundation for the greater service of the future. "THE TELEPHONE BOOKS ARE THE DIRECTORY OF THE NATION"
As the global economy struggles to regain some forward momentum, Canadian governments are looking for ways to limit government spending in light of reduced revenues, increasing demands for services and soaring deficits. In the mid-1990s, Canada found itself in a similar financial situation and was forced to make radical changes by scaling back its spending priorities. One of the lessons from Program Review, the formal process at the federal level used to examine all government spending at the time, was setting different savings targets for key programs based on the considered view of the programs' effectiveness. In addition to setting targets, the Masse Committee in 1994 also created a strong challenge function at the centre of the decision-making process to ensure that the departmental plans were realistic and evidence-based. To deal with current fiscal challenges, the Harper government has kick-started a similar exercise to scale back the size of the federal government in order to achieve a balanced budget by 2014. This new effort, now labelled the Deficit Reduction Action Plan (DRAP), is being led by a special Cabinet Committee chaired by the President of the Treasury Board, Tony Clement. There are two features of the committee's work that differentiate it from Program Review. First, all departments and agencies have been asked to generate two across-the-board cut scenarios based on five percent and 10 percent savings. Second, the Treasury Board Secretariat is relying on the outside advice of a management firm with expertise in cost containment to look for efficiency savings by improving productivity. Experience has shown that using a scythe to chop spending is a crude method of achieving savings. It is arbitrary and unfair since the cuts apply equally to well performing programs and to poorly performing ones, and to efficient organizations as well as poorly managed ones.
Using this method of reducing costs, the DRAP ministers must find their own way to balance savings and to serve the needs of Canadians in high priority policy areas. One possible key to achieving this balance is using the evaluations that have been conducted by the federal government during the past few years. Canada has a proud reputation for the quality of the program evaluation work done in many departments. In fact, it could be argued that at one time Canada had the most robust evaluation system among OECD countries. While one of the unintended consequences of Program Review was the weakening of the policy capacity in the federal government, in 2009 the government reaffirmed the value of program evaluation by announcing a strengthened evaluation policy. The new policy is far-ranging and exceeds the reach of previous program evaluation policies since it covers "all direct program spending and the administrative aspect of major statutory spending, programs that are set to terminate, every five years." While the potential for using the 2009 evaluation policy is obvious, there are two reasons why this initiative may only result in limited value to the current Cabinet Committee. First, there is growing concern among evaluation experts that the quality of their work is not meeting their own high standards. In a recent report, Lay of the Land: Evaluation Practice in Canada, the practitioner authors argue that too many evaluations have little impact because they don't ask the right questions, there is too much resistance from program administrators, and politicians are reluctant to listen to negative assessments of their programs. Second, there is an increasing fear that the government is not interested in using evidence in decision making. The most recent example comes from the Supreme Court of Canada's ruling on the Insite case concerning the closure of the supervised drug injection facility in Vancouver's eastside.
In its unanimous decision the Court admonished the federal government for its unwillingness to use information, data or analysis in making policy decisions. The Court reminded the government that it could not disregard the facts and that it had an obligation to rely on “evidence” in making policy decisions. As in the mid-1990s, the federal government has the opportunity to use the information gathered in formal evaluations to inform its decisions. While there are reasons to be sceptical that the millions of dollars spent on program evaluation will find their way into the departmental submissions, there is no reason why program evaluation should not find a place in the planning cycle, much as audit has become a feature of government operations. David Zussman holds the Jarislowsky Chair in Public Sector Management in the Graduate School of Public and International Affairs at the University of Ottawa (email@example.com).
National Geographic: June 1927
COSTUMES OF CZECHOSLOVAKIA
Natural Color Photographs by Hans Hildenbrand

BARGAINING FOR HOGS WITH WOOLLY HAIR
Hog breeding is an important industry in Ruthenia, the most easterly province of Czechoslovakia. The animals, which have curly hair very much like the wool of sheep, are driven to the weekly market in herds.

RUTHENIAN OXEN WITH GIGANTIC HORNS
In Uzhorod, chief town of Ruthenia (see above), there is a weekly market day when all the farmers from the surrounding country bring in their cattle. These whitish-gray oxen are of Hungarian stock: large-framed, lean and hardy, with a spread of horns sometimes measuring five feet.
Editor’s Note: Please consider supporting Ari in this worthwhile cause! The U.S. Southwest is under water stress. More water is used in the region each year than falls as rain and snow – a shortfall made up by drawing down groundwater reserves. The Colorado River – the Southwest’s only significant source of water – is already over-allocated, and slight disruptions can endanger power generation and water supply in the region. A recent study, “The Last Drop: Climate Change and the Southwest Water Crisis,” found that climate change could add $1 trillion to the costs of water scarcity in the Southwest over the next century. Water is just the tip of the iceberg when it comes to climate change in the Southwest, where models predict a hotter, drier climate developing over the course of the century. A Great Aridness, a recent book by William deBuys, explores what climate change could mean for the Southwest. In the book’s introduction, Jonathan Overpeck, a climate scientist who co-directs the Institute of the Environment at the University of Arizona, says, “climate change will produce winners and losers, and those in the Southwest will be losers. There’s no doubt.” With my Kickstarter project, Energy and Climate Change in the American Southwest, I plan to traverse the Southwest this summer reporting on what’s happening with these issues right now – and to determine what impact the so-called losers can have on their fate. I’ve identified nine critical stories – from the surging natural gas production of Midland, TX to the controversial solar parks of the Mojave Desert – that demand attention for the way they are reshaping the Southwest. In some cases literally, as with forests devastated by wildfires and bark beetles – both growing in intensity due to climate change. It is unclear what will replace traditional piñon and ponderosa trees as the climate of the Southwest changes and flora and fauna migrate accordingly.
In other cases the reshaping is socioeconomic rather than physical. This spring the Navajo Nation signed a contract with Lawrence Livermore National Laboratory to study which technologies would be best for developing natural resources on the sprawling reservation. Unemployment hovers around 50 percent in the region, and a main goal of the project is to improve economic conditions and prevent industry from taking advantage of the tribe, as has historically occurred with mining and oil leasing. Clean energy production also falls in line with long-held Navajo cultural beliefs about environmental stewardship and preservation. Check out the project page for more information, to donate, or just to follow along: http://www.kickstarter.com/projects/1324904558/energy-and-climate-change-in-the-american-southwes
Some graphics device types support images, which are rectangular pieces of picture that may be drawn into a graphics device. Images are often called something else in the host graphics system, such as bitmaps or pixmaps. The operations supported vary between devices, so look under the different device types to see what operations are available. All devices that support images support the following operations.

Images are created using the create-image graphics operation, specifying the width and height of the image in device coordinates (pixels):

(graphics-operation device 'create-image 200 100)

The initial contents of an image are unspecified. create-image is a graphics operation rather than a procedure because the kind of image returned depends on the kind of graphics device used and the options specified in its creation. The image may be used freely with other graphics devices created with the same attributes, but the effects of using an image with a graphics device with different attributes (for example, different colors) are undefined. Under X, the image is display dependent.

One drawing operation copies the whole image into the graphics device at a specified position. A second copies part of the image into the graphics device at a specified (x, y) position; the part that is copied is the rectangular region at im-x and im-y and of width w and height h, with all four numbers given in device coordinates (pixels). Finally, a destroy procedure destroys image, returning its storage to the system. Programs should destroy images after they have been used because even modest images may use large amounts of memory. Images are reclaimed by the garbage collector, but they may be implemented using memory outside of Scheme's heap. If an image is reclaimed before being destroyed, the implementation might not deallocate that non-heap memory, which can cause a subsequent call to create-image to fail because it is unable to allocate enough memory.
The contents of image are set in a device-dependent way, using one byte per pixel from bytes (a string). Pixels are filled row by row from the top of the image to the bottom, with each row being filled from left to right. There must be at least width × height bytes in bytes.
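The row-major, one-byte-per-pixel layout described above (and the width × height minimum length) can be illustrated outside Scheme. A minimal Python sketch of the same convention — fill_pixels is a hypothetical helper for demonstration, not part of the graphics API:

```python
def fill_pixels(width, height, data):
    """Row-major fill: pixel (x, y) takes byte data[y * width + x]."""
    if len(data) < width * height:
        raise ValueError("need at least width * height bytes")
    return [[data[y * width + x] for x in range(width)] for y in range(height)]

# A 3x2 image filled from six bytes: the first row is the first three bytes.
img = fill_pixels(3, 2, bytes([10, 20, 30, 40, 50, 60]))
# img == [[10, 20, 30], [40, 50, 60]]
```

Passing fewer than width × height bytes raises an error, mirroring the requirement in the Scheme documentation.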
Swallows and martins are a group of passerine birds in the family Hirundinidae which are characterised by their adaptation to aerial feeding. Swallow is used colloquially in Europe as a synonym for the Barn Swallow. The swallows have a cosmopolitan distribution across the world and breed on all the continents except Antarctica. It is believed that this family originated in Africa as hole-nesters; Africa still has the greatest diversity of species. They also occur on a number of oceanic islands. A number of European and North American species are long-distance migrants; by contrast, the West and South African swallows are non-migratory. Swallows have adapted to hunting insects on the wing by developing a slender streamlined body and long pointed wings, which allow great maneuverability and endurance, as well as frequent periods of gliding. - The martlet Builds in the weather on the outward wall, Even in the force and road of casualty. - This guest of summer, The temple-haunting martlet, does approve, By his lov'd mansionry, that the heaven's breath Smells wooingly here; no jutty, frieze, Buttress, nor coign of vantage, but this bird Hath made its pendent bed, and procreant cradle: Where they most breed and haunt, I have observ'd, The air is delicate. Hoyt's New Cyclopedia Of Practical Quotations - Quotes reported in Hoyt's New Cyclopedia Of Practical Quotations (1922), p. 772. - One swallow does not make spring. - Aristotle, Ethic, Nicom, Book I. - Una golondrina sola no hace verano. - One swallow alone does not make the summer. - Miguel de Cervantes, Don Quixote (1605-15), Part I, Chapter XIII. - Down comes rain drop, bubble follows; On the house-top one by one Flock the synagogue of swallows, Met to vote that autumn's gone. - Théophile Gautier, Life, a Bubble, A Bird's-Eye View Thereof.
- But, as old Swedish legends say, Of all the birds upon that day, The swallow felt the deepest grief, And longed to give her Lord relief, And chirped when any near would come, "Hugswala swala swal honom!" Meaning, as they who tell it deem, Oh, cool, oh, cool and comfort Him! - Charles Godfrey Leland, The Swallow. - The swallow is come! The swallow is come! O, fair are the seasons, and light Are the days that she brings, With her dusky wings, And her bosom snowy white! - One swallowe proveth not that summer is neare. - John Northbrooke, Treatise against Dancing, (1577). - It's surely summer, for there's a swallow: Come one swallow, his mate will follow, The bird rare quicken and wheel and thicken. - Christina G. Rossetti, A Bird Song, Stanza 2. - There goes the swallow,— Could we but follow! Hasty swallow, stay, Point us out the way; Look back swallow, turn back swallow, stop swallow. - Christina G. Rossetti, Songs in a Cornfield, Stanza 7. - The swallow follows not summer more willing than we your lordship. - Now to the Goths as swift as swallow flies. - The swallow sweeps The slimy pool, to build his hanging house. - James Thomson, The Seasons, Spring (1728), line 651. - When autumn scatters his departing gleams, Warn'd of approaching winter, gather'd, play The swallow-people; and toss'd wide around, O'er the calm sky, in convolution swift, The feather'd eddy floats; rejoicing once, Ere to their wintry slumbers they retire. - James Thomson, The Seasons, Autumn (1730), line 836.
Higher education in Australia refers to university and non-university higher education institutions which award degree or sub-degree qualifications. The three main phases of higher education are Bachelor, Master and Doctoral studies. Higher education providers are established or recognised by or under the law of the Commonwealth, a State, the Australian Capital Territory or the Northern Territory. All higher education providers must be registered by the national regulator and quality agency, the Tertiary Education Quality and Standards Agency (TEQSA), to operate in Australia and offer Australian higher education awards. In order to be registered by TEQSA, institutions must meet the Higher Education Threshold Standards, established in legislation. A provider must also be approved by the Australian Government Minister for Education before it can receive grants or its students can receive assistance from the Commonwealth. Australian Qualifications Framework (AQF) higher education qualifications are knowledge-based rather than competency-based (as in the vocational education and training sector). Each level and qualification type in the AQF is described in terms of the knowledge, skills and application of knowledge and skills that are expected of graduates. This ensures a strong focus on learning outcomes.

Responsibilities for Higher Education

TEQSA is responsible for regulating Australian higher education. The TEQSA National Register of Higher Education Providers is the authoritative source of all higher education providers registered to operate in Australia. The Australian Government has the primary responsibility for public funding of higher education.
Australian Government funding support for higher education is provided largely through:
- The Commonwealth Grant Scheme, which provides funding to higher education providers to help subsidize students’ tuition costs;
- The Higher Education Loan Programme (HELP) arrangements, which provide income-contingent loans to eligible Australian citizens and permanent humanitarian visa holders to assist with the upfront costs of tuition;
- Commonwealth Scholarships; and
- A range of grants for specific purposes including quality, learning and teaching, research and research training programs.
The Department of Education is the Australian Government department responsible for administering this funding and for developing and administering higher education policy and programs. Decision-making, regulation and governance for higher education are shared among the Australian Government, the State and Territory Governments and the institutions themselves. By definition within Australia, universities are self-accrediting institutions, and each university has its own establishment legislation (generally State or Territory legislation). Universities receive the vast majority of their public funding from the Australian Government, through the Higher Education Support Act 2003. Non-self-accrediting institutions must have their courses accredited by TEQSA, and the National Register lists the accredited courses each institution is registered to deliver. State and territory tertiary admissions centres coordinate admission. Students can use their tertiary entrance rank, score or index from their home state or territory to apply for undergraduate admission elsewhere in Australia. In some cases, entry may be based on additional requirements such as an interview, a portfolio of work, prerequisite courses, and/or a demonstrated interest or aptitude for the study program. Postgraduate entry is normally based on a Bachelor Degree or higher.
Exceptions may be made for those with appropriate work experience, depending on the institution and field of study. Credit transfer refers to the recognition of previous formal learning so that study does not have to be repeated. Credit transfer is available in both undergraduate and postgraduate programs, at the discretion of the institution. The ways in which credit may be awarded are complex, and depend on the formal study for which recognition is sought. The Higher Education Threshold Standards set out that institutions must maintain processes to provide for the recognition of prior learning, credit transfer and articulation of awards. These processes should be designed to maximise the credit students may gain for learning already undertaken, subject to preserving the integrity of learning outcomes and/or discipline requirements of the award to which it applies. There are different processes which apply to seeking credit, including those for:
- Study previously undertaken at the same Australian higher education institution.
- Study previously undertaken at an Australian university with reciprocal credit arrangements.
- Study previously undertaken with an institution (Australian or overseas) with which an Australian higher education institution has a partnership agreement that includes recognition of formal study for credit in certain programs of study.
- Study previously undertaken in courses for which there are some structured credit arrangements.

Credit transfer and RPL

Credit transfer and Recognition of Prior Learning (RPL) are some of the ways students seek recognition of previous informal training, work experience, professional development, professional licensing and examinations, and other work-based education and training.
Cross sector qualification linkages

Most higher education institutions allow some credit transfer from accredited vocational education and training (VET) courses delivered by Registered Training Organisations (RTOs), depending on the level of the VET course and its relevance to the proposed higher education studies. Australia also has a small number of dual-sector providers which offer both VET and higher education programs. Private higher education institutions may also be RTOs and structure their courses to allow for credit transfer across the sectors.
One of the most welcome features of cloud computing is the hands-off management it offers its users. Software as a Service (SaaS) gives you all of the features of on-premise applications without the management and maintenance they would normally require. But cloud hosting is not limited to SaaS, and other forms of cloud computing require varying levels of management. Some types of cloud computing require you to maintain your own applications and data while the provider supplies the platform, operating system, and virtualization technology out of the box. This is commonly called Platform as a Service (PaaS). While it offers some degree of management, it also requires you to take a greater interest in managing certain aspects yourself. Another type of cloud computing is even less managed. With it you get a packaged infrastructure: hardware, networking, and data center services, but the platform, operating system, applications, and data are all your own. This is commonly called Infrastructure as a Service (IaaS). It saves you infrastructure maintenance costs while giving you the freedom to manage your servers however you like. Even within each type of cloud hosting, there are varying levels of management that your provider may offer. For example, some SaaS providers may allow application customization while others will not. Some platform providers may offer OS updates while others may expect you to perform your own within your virtual machines. As cloud computing gains popularity, some have suggested that standardization is needed. Some companies claim to offer cloud services but are really just selling web hosting accounts dressed up as “the cloud.” When you choose a cloud service provider, you need to know what you are getting out of the deal and what type of management is covered.
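The division of responsibility described above can be summarised in a small sketch. This is a simplified, illustrative split only — as the article notes, real offerings vary by provider:

```python
# Who manages which layer under each service model (simplified illustration).
LAYERS = ["data", "applications", "platform", "os", "hardware", "networking"]

MANAGED_BY_PROVIDER = {
    "SaaS": set(LAYERS),                                   # fully managed
    "PaaS": {"platform", "os", "hardware", "networking"},  # you keep apps + data
    "IaaS": {"hardware", "networking"},                    # you keep the stack above
}

def your_responsibilities(model):
    """Return the layers the customer must manage under a given service model."""
    return [layer for layer in LAYERS if layer not in MANAGED_BY_PROVIDER[model]]

print(your_responsibilities("PaaS"))  # ['data', 'applications']
```

Moving from IaaS toward SaaS simply shifts rows of this table from the customer's column to the provider's.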
Life cycle assessment of bottled water: A case study of Green2O products

This study conducted a full life cycle analysis of bottled water for four types of bottles: ENSO, PLA (corn-based), recycled PET, and regular (petroleum-based) PET, to discern which bottle material is more beneficial in terms of environmental impacts. PET bottles are the conventional bottles in use; they are not biodegradable and accumulate in landfills. PLA corn-based bottles are derived from an organic substance and are degradable under certain environmental conditions. Recycled PET bottles are purified PET bottles that were disposed of and are reused in a closed-loop system. An ENSO bottle contains a special additive designed to help the plastic bottle degrade after being disposed of in a landfill. The results showed that, across the fourteen impact categories examined, the recycled PET and ENSO bottles were generally better than the PLA and regular PET bottles; however, ENSO had the highest impacts in the global warming and respiratory organics categories, and recycled PET had the highest impact in the eutrophication category. The life cycle stages found to have the highest environmental impacts were the bottle manufacturing stage and the distribution of bottled water to storage. An analysis of a mixed bottle material based on recycled PET resin and regular PET resin was discussed as well, in which key impact categories were identified. The PLA bottle had extremely low impacts in the carcinogens, respiratory organics and global warming categories, yet it still had the highest impacts in seven of the fourteen categories. Overall, the results demonstrate that the use of more sustainable bottles, such as biodegradable ENSO bottles and recycled PET bottles, appears to be a viable option for decreasing the impacts of the bottled water industry on the environment.
Horowitz, Naomi; Frago, Jessica; and Mu, Dongyan, "Life cycle assessment of bottled water: A case study of Green2O products" (2018). Kean Publications. 1495.
Child health is important, as it contributes to the child’s future. Indonesia ranks second after India among countries with the highest number of tuberculosis (TB) cases. Well-educated parents are expected to care for their children and maintain their health. At the same time, the provinces of Eastern Indonesia have the lowest percentage of implementation of non-smoking areas (KTR). In this study, we analyzed the level of morbidity, focusing on respiratory symptoms, namely coughing and breathlessness, in children. In addition, this study analyzed parents’ education and smoking behavior in Eastern Indonesia. The study analyzes child morbidity according to several affecting factors. The data were cross-sectional, drawn from secondary data of the Indonesia Family Life Survey East (IFLS-East) 2012. Applying logistic regression (logit and probit models), we found that parents’ education, children’s age, health service availability, and domicile area significantly influenced child morbidity. Fathers’ education played a crucial role: the higher their education, the lower their children’s morbidity. We also found that parents’ smoking habits, child immunization status, sex, and health insurance ownership did not significantly influence child morbidity. Educational attainment should be supported as far as possible into adulthood, especially for unmarried individuals. In addition, both smoking and non-smoking parents need to be educated effectively on the dangers of smoking, in order to create a healthy environment, and on the importance of tobacco control policies in Eastern Indonesia.
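The kind of logit analysis described above can be sketched with a minimal logistic regression fitted by gradient descent on synthetic data. Everything here is illustrative: the variable names (father_edu, child_age) and coefficients are assumptions for demonstration, not the IFLS-East variables or estimates:

```python
import numpy as np

def fit_logit(X, y, lr=0.1, n_iter=5000):
    """Fit P(y = 1) = sigmoid(X @ w + b) by batch gradient descent."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

# Synthetic data in which higher paternal education lowers morbidity odds.
rng = np.random.default_rng(0)
father_edu = rng.integers(0, 16, size=2000).astype(float)  # years of schooling
child_age = rng.integers(0, 15, size=2000).astype(float)
true_logit = 0.5 - 0.15 * father_edu + 0.05 * child_age    # assumed true model
morbidity = rng.binomial(1, 1.0 / (1.0 + np.exp(-true_logit)))

X = np.column_stack([father_edu, child_age])
X = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize for stable descent
w, b = fit_logit(X, morbidity)
# w[0] (education) should come out negative, w[1] (age) positive.
```

A probit specification would differ only in replacing the sigmoid link with the normal CDF; the signs and significance patterns are typically very similar.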
Authors who publish with Jurnal Ilmu Kesehatan Masyarakat (JIKM) agree to the following conditions:
- The author retains copyright and grants the journal the right of first publication, with the work simultaneously licensed under a Creative Commons license that allows others to share (copy and redistribute) the material in any medium or format and adapt the work for any purpose.
- Authors can enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal’s published version of the work (for example, posting it to an institutional repository or publishing it in a book), with an acknowledgement of its initial publication in this journal.
Those in power must give stronger voices to marginalised communities and protection to natural flood defences.

Community-based tourism is starting to become popular, and the Mekong River plays an important role, but dam building could prove harmful to the sector.

From Malaysia, Myanmar and Laos to Indonesia, the impacts of transborder investments were discussed at the forum organised by Thailand’s National Human Rights Commission and the Forest Peoples Programme. It ends tomorrow.

In June 2018, a leaked environmental impact assessment report warned that the proposed Sambor Hydropower Dam could “literally kill the [Mekong] river”.

The disastrous events happened just one day after officials checked the strength of the dam and announced that there was no need to worry.

Developing hydropower is threatening the numerous fishing villages that line the Mekong River, which are seeing fish stocks dwindle as new dams spring up.

Groups within the Mekong region issued a statement announcing their intention to boycott the Mekong River Commission’s (MRC’s) Prior Consultation for the proposed Pak Lay dam.
The Arctic Sunrise has left Greenland behind, and is now negotiating the sea ice that lies before our next port of call: the settlement of Longyearbyen, on the island of Spitsbergen in the Svalbard archipelago, a place where the polar bears outnumber humans. Our last few days in a very remote part of Greenland's northeast were the coldest of the trip so far. Gone is the light clothing of Petermann and Humboldt in July and August – a similar latitude; now it's been thermals, gloves and hats. The actual temperature out on deck these last few days was about -10 Celsius, not low for anyone who lives through cold winters with lower temperatures in Canada or parts of the United States. But remember, it's only September, and the Arctic winter is already kicking in here at the top of the world. And the wind whipping off the ice sheet at "79 Glacier" (Nioghalvfjerdsfjorden) makes it feel a hell of a lot colder than -10. Because of the bad weather and tough sea ice, we didn't have as much time as we would have liked, but glaciologists Gordon and Leigh managed to get their GPS equipment onto the glacier for more than two days – enough to gauge its reaction to two tide cycles. Fiamma and the Woods Hole team carried out temperature and depth surveys along the floating ice tongue – and one day even drilled down through the sea ice to find out what was going on. The front of 79 Glacier is a flat, floating ice tongue, like Petermann Glacier, where we spent most of July. However, unlike the rapidly moving Helheim or Kangergluqussuaq glaciers that we visited in the last couple of weeks, 79 has not yet been activated by climate change and moves only a few hundred metres per year, yet still drains 10% of Greenland's ice; Kangergluqussuaq glacier, in comparison, moves 14 kilometres a year! 79 Glacier sits in a deep fjord, or trough, which continues 700-800 km into the heart of Greenland's Ice Sheet, well below sea level.
The front of the glacier is pinned on a shoal, or ridge, in only 100 m of water, which keeps the glacier in place. Behind that ridge, the 80 km-long tongue floats above water that is 800 to 1,000 m deep. If any major changes take place at the front of the glacier, such as it losing touch with the ridge, this weakness will propagate inland, causing large amounts of ice to be dumped from the heart of Greenland's Ice Sheet into the ocean, contributing to global sea level rise. Because it hasn't speeded up (yet), 79 can be studied in a more 'natural' state, so scientists can understand how the faster glaciers used to be. The problem with 79 Glacier is that it's so hard to reach; the last team of scientists to work on the glacier itself arrived 14 years ago! Everywhere we've gone in Greenland has a distinctly different landscape, with the land in the north, around Humboldt, Petermann and 79 glaciers, appearing more barren and dry than the tundra around Kangergluqussuaq or Helheim. At 79, great steep cliffs with silver rivers of shale sweep down to the frozen fjords and sounds, and the ice sheet dips its feet in the sea. Despite the gigatonnes of ice here, this part of Greenland is functionally a desert, and there's not much sign of life; the sea, however, harbours plenty. In the last few days we've had whale sightings (humpback and possibly bowhead), seals and some polar bears – last night a mother and cub, and this morning a lone adult. That brings to ten the number of polar bears we've seen on our odyssey around Greenland so far. When I say 'around Greenland', I'm being literal; given that the extreme north and northeast coasts of Greenland are hemmed in by sea ice year round, it's impossible to actually circumnavigate the island. However, since June 29th, when we reached the Arctic sea ice in the Lincoln Sea at 82.5 north, until last night, we've come pretty close to circumnavigating; to go further would need sleds and dogs.
So it's goodbye to glaciers, and hello to sea ice; in Longyearbyen we'll be picking up a new team of scientists, led by Peter Wadhams from the University of Cambridge in the UK. We'll be spending until the end of September exploring the effects of climate change on the Arctic sea ice as it reaches its annual minimum extent. Stay with us!
Flickr user Ed Boik (CC): Chicago's bungalow belt in West Lawn

In this series, the Metropolitan Planning Council (MPC) will explore where Chicago’s middle class lives, map the middle class experience by race, and examine how it has changed over time for both the city and the region. MPC and the Urban Institute are conducting a study on what income and racial segregation cost metro Chicago. In the course of talking about this work, people have often challenged me that race is not really the issue, because, after all, aren’t we really talking about income and class? Well, no. At least not completely. As Gary Orfield, co-director of the Civil Rights Project at the University of California-Los Angeles, says to Natalie Moore in her recent book on Chicago’s segregation, The South Side, “Class isn’t race and race isn’t class.” Let’s take a line of research that came out in early May to dive into this. University of Southern California sociologist Ann Owens found that segregation between neighborhoods among families with children is high in the Chicago metro, about 8 percent higher than the average among the 100 largest metros in the U.S. Among households without children, however, segregation actually declined about 10 percent from 1990 to 2010. Her findings suggest that increased income segregation is driven more by the choices of the wealthy than of low- or middle-income populations, and is largely due to the decisions of households with children. Here is the part that seems race-neutral on the surface: much of these findings have to do with Illinois’ school funding formula, which relies heavily upon local property taxes and results in vast disparities in per-pupil spending. The result is that when choosing where to live, parents are not just buying a house, they are buying a neighborhood, as Owens puts it. Parents who can afford to can buy their way into upscale neighborhoods with access to high-spending school districts.
School quality is, as Emily Badger put it, “capitalized into housing prices,” rendering many neighborhoods unaffordable to the non-affluent. This helps explain why Owens found that by 2010, income segregation was two times higher among families with children under 18 than among households without them. Here’s how this plays out within Chicago proper: if you want certainty that your child can attend a high-performing elementary school, then you need to live within the boundary of a high-performing neighborhood school. At that point, while the school itself is free, the corresponding real estate often comes at a very high price point. That’s your cost of admission. But Chicago has school choice, you say. Doesn’t that make where you live matter less? Well, yes and no. The key word here is certainty, and certainty comes at a cost. There are many high-performing magnet and selective enrollment schools, but attendance is determined by a citywide lottery or scores on admissions tests, which makes acceptance a matter of chance or often intense preparation, and attendance a matter of families’ ability to manage long commutes. (Not to mention, as Northwestern sociologist Mary Pattillo documents in her research on school choice at the high school level, the time and energy required to understand and navigate the magnet/selective enrollment process itself rules out many parents and guardians without the resources to do so.) But consider this map of Chicago’s neighborhood-based elementary schools. The highest-ranked schools, level 1 and 1+, are marked in green. The lowest-ranked, level 2 and 3, are in red.

Chicago Public Schools: Lower-ranked neighborhood elementaries (in red) are clustered on the South and West sides.

Acknowledging that these rankings have plenty of critics and are just one of many ways to determine the best fit for one’s child, the pattern is striking nonetheless.
There are highly ranked neighborhood schools in most parts of the city, but the vast majority and the most consistent presence are on the North Side. Those familiar with Chicago will know that these rankings follow racial patterns closely, as we will see in this series. So part of this story is definitely income and who can afford to buy certainty. But we really can’t talk about income segregation without talking about race and where people of different races live by income. Keep reading for more on how Chicago’s middle class census tracts break down by race, and how that’s changed over time. To map this data, we selected the census tracts in Chicago which had median household incomes within the range detailed below and displayed the racial makeup of the total population within those tracts. Note that this does not represent the racial makeup of the middle class population overall, rather the total population that lives in census tracts with middle class median incomes. Read on to learn more about our findings: A note on defining our terms for the maps that follow: For middle class we used the measure employed by Pew Research Center in their recent study, which defines the middle class as two-thirds to two times the median income. With a median household income of $48,734, the City of Chicago’s middle class range is $32,489 to $97,468. Because of how Census data is grouped, we’ve rounded that to $30,000 to $100,000. For a household of one, middle class income level ranges from $21,672 to $65,674 per year, while the middle class income level of a four person household can range from $58,029 to $175,844. Yes, $30,000 seems low to be considered middle class. We had to use an objective measure, and this one dates back to Mollie Orshansky’s late 1960s work to determine the first official national poverty rate. 
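The arithmetic behind the band above is simple to check; a quick sketch using the figures quoted in this post:

```python
def middle_class_band(median_income):
    """Pew definition: two-thirds to two times the median household income."""
    return round(median_income * 2 / 3), round(median_income * 2)

# Chicago's median household income, per this post.
low, high = middle_class_band(48_734)
print(low, high)  # 32489 97468
```

Rounding that band to the nearest Census income brackets gives the $30,000 to $100,000 range used in the maps.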
Measuring the middle class is tricky, not least because of self-perception; according to a 2008 Pew Research Center survey, 53 percent of Americans identified themselves as "middle class." The authors speculate that this over-identification "likely lies with the powerful attraction that the label 'middle class' has on most Americans and the stigma that some might associate either with the upper or lower class labels." For a thorough review of definitions and measurements of the middle class, see this 2015 working paper from the American Institute for Economic Research. MPC VISTA Fellow Elizabeth O'Brien contributed to this post.
Energy-saving chlorine production

Thanks to the oxygen depolarized cathode, we can save up to one third of the energy used in manufacturing chlorine. Producing chlorine normally costs an enormous amount of energy, because the highly reactive element forms bonds with practically every other element and therefore needs to be isolated using laborious methods. For this reason, Covestro helped shape the development of a new technology: thanks to the oxygen depolarized cathode (ODC), up to 30 percent less energy is consumed than in a conventional process. The new method is based on the membrane process for chlor-alkali electrolysis, which has become the standard method for manufacturing chlorine. In this process, chlorine, sodium hydroxide and hydrogen are derived from common salt and water. The new feature of the ODC process is that the hydrogen-producing electrodes normally used in the membrane process are replaced by an oxygen depolarized cathode. Supplying the cathode – the negative pole – with oxygen prevents the formation of hydrogen, so that only chlorine and sodium hydroxide are produced. This process requires a cell voltage of just two volts instead of three – a third less. It might sound small, but it has a huge impact: if all German chlorine manufacturers introduced the process across the board, it would reduce the energy consumption of the entire country by one percent – roughly the annual energy needs of the major city of Cologne. The oxygen depolarized cathode may play an important role not just in the German energy transition: thanks to its economic benefits, it can also secure German technology a strong position on the global market. After the process was successfully introduced on an industrial scale at Covestro in Krefeld-Uerdingen in 2011, it has been rolled out internationally since 2013.
The oxygen depolarized cathode therefore makes a decisive contribution to making global chlorine production more environmentally friendly and cost-effective.
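The saving follows directly from the physics: at a fixed current, electrolysis energy scales linearly with cell voltage, so cutting the voltage from about 3 V to about 2 V removes a third of the electrical energy. A rough, idealized sketch of this (our own illustration with assumed round-number voltages, not Covestro data; it ignores current efficiency and auxiliary loads):

```python
# Idealized electrical energy to produce one tonne of Cl2 by electrolysis.
# Each Cl2 molecule requires 2 electrons; energy = charge * cell voltage.

F = 96_485           # Faraday constant, C per mol of electrons
M_CL2 = 70.9         # molar mass of Cl2, g/mol
ELECTRONS_PER_CL2 = 2

def kwh_per_tonne_cl2(cell_voltage: float) -> float:
    """Theoretical energy per tonne of Cl2, assuming 100% current efficiency."""
    moles = 1e6 / M_CL2                       # mol of Cl2 in one tonne
    charge = moles * ELECTRONS_PER_CL2 * F    # total charge in coulombs
    joules = charge * cell_voltage
    return joules / 3.6e6                     # convert J to kWh

conventional = kwh_per_tonne_cl2(3.0)   # conventional membrane cell, ~3 V
odc = kwh_per_tonne_cl2(2.0)            # ODC cell, ~2 V
print(f"{conventional:.0f} kWh/t vs {odc:.0f} kWh/t "
      f"({1 - odc / conventional:.0%} saving)")
# → 2268 kWh/t vs 1512 kWh/t (33% saving)
```

Real plants draw somewhat more than this electrochemical minimum, but the one-third ratio between the two voltages is the point the text makes.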
Football is a family of team sports that involve, to varying degrees, kicking a ball to score a goal. Unqualified, the word football normally means the form of football that is the most popular where the word is used. Sports commonly called football include association football (known as soccer in North America, Ireland, Australia and South Africa); gridiron football (specifically American football or Canadian football); Australian rules football; rugby union and rugby league; and Gaelic football. These various forms of football share, to varying degrees, common origins and are known as "football codes". There are a number of references to traditional, ancient, or prehistoric ball games played in many different parts of the world. Contemporary codes of football can be traced back to the codification of these games at English public schools during the 19th century, itself an outgrowth of medieval football. The expansion and cultural power of the British Empire allowed these rules of football to spread to areas of British influence outside the directly controlled Empire. By the end of the 19th century, distinct regional codes were already developing: Gaelic football, for example, deliberately incorporated the rules of local traditional football games in order to maintain their heritage. In 1888, The Football League was founded in England, becoming the first of many professional football associations. During the 20th century, several of the various kinds of football grew to become some of the most popular team sports in the world. 
The various codes of football share certain common elements and can be grouped into two main classes of football: carrying codes like American football, Canadian football, Australian football, rugby union and rugby league, where the ball is moved about the field while being held in the hands or thrown, and kicking codes such as association football and Gaelic football, where the ball is moved primarily with the feet, and where handling is strictly limited. Common rules among the sports include:
- Two teams usually have between 11 and 18 players; some variations that have fewer players (five or more per team) are also popular.
- A clearly defined area in which to play the game.
- Scoring goals or points by moving the ball to an opposing team's end of the field and either into a goal area, or over a line.
- Goals or points resulting from players putting the ball between two goalposts.
- The goal or line being defended by the opposing team.
- Players using only their body to move the ball, i.e. no additional equipment such as bats or sticks.

In all codes, common skills include passing, tackling, evasion of tackles, catching and kicking. In most codes, there are rules restricting the movement of players offside, and players scoring a goal must put the ball either under or over a crossbar between the goalposts. There are conflicting explanations of the origin of the word "football". It is widely assumed that the word "football" (or the phrase "foot ball") refers to the action of the foot kicking a ball. There is an alternative explanation, which is that football originally referred to a variety of games in medieval Europe that were played on foot. There is no conclusive evidence for either explanation. The Chinese competitive game cuju (蹴鞠) resembles modern association football. It existed during the Han dynasty and possibly the Qin dynasty, in the second and third centuries BC, attested by descriptions in a military manual.
The Japanese version of cuju is kemari (蹴鞠), which was developed during the Asuka period. This is known to have been played within the Japanese imperial court in Kyoto from about 600 AD. In kemari, several people stand in a circle and kick a ball to each other, trying not to let the ball drop to the ground (much like keepie uppie).

Ancient Greece and Rome

The Ancient Greeks and Romans are known to have played many ball games, some of which involved the use of the feet. The Roman game harpastum is believed to have been adapted from a Greek team game known as "ἐπίσκυρος" (Episkyros) or "φαινίνδα" (phaininda), which is mentioned by a Greek playwright, Antiphanes (388–311 BC) and later referred to by the Christian theologian Clement of Alexandria (c. 150 – c. 215 AD). These games appear to have resembled rugby football. The Roman politician Cicero (106–43 BC) describes the case of a man who was killed whilst having a shave when a ball was kicked into a barber's shop. Roman ball games already knew the air-filled ball, the follis. Episkyros is described as an early form of football by FIFA. There are a number of references to traditional, ancient, or prehistoric ball games, played by indigenous peoples in many different parts of the world. For example, in 1586, men from a ship commanded by an English explorer named John Davis went ashore to play a form of football with Inuit in Greenland. There are later accounts of an Inuit game played on ice, called Aqsaqtuk. Each match began with two teams facing each other in parallel lines, before attempting to kick the ball through each other team's line and then at a goal. In 1610, William Strachey, a colonist at Jamestown, Virginia, recorded a game played by Native Americans, called Pahsaheman. Pasuckuakohowog, a game similar to modern-day association football played amongst Amerindians, was also reported as early as the 17th century.
Games played in Mesoamerica with rubber balls by indigenous peoples are also well-documented as existing since before this time, but these had more similarities to basketball or volleyball, and no links have been found between such games and modern football sports. Northeastern American Indians, especially the Iroquois Confederation, played a game which made use of net racquets to throw and catch a small ball; however, although it is a ball-goal foot game, lacrosse (as its modern descendant is called) is likewise not usually classed as a form of "football". On the Australian continent several tribes of indigenous people played kicking and catching games with stuffed balls which have been generalised by historians as Marn Grook (Djab Wurrung for "game ball"). The earliest historical account is an anecdote from the 1878 book by Robert Brough-Smyth, The Aborigines of Victoria, in which a man called Richard Thomas is quoted as saying, in about 1841 in Victoria, Australia, that he had witnessed Aboriginal people playing the game: "Mr Thomas describes how the foremost player will drop kick a ball made from the skin of a possum and how other players leap into the air in order to catch it." Some historians have theorised that Marn Grook was one of the origins of Australian rules football. The Māori in New Zealand played a game called Ki-o-rahi, in which teams of seven players play on a circular field divided into zones and score points by touching the 'pou' (boundary markers) and hitting a central 'tupu' or target. These games and others may well go far back into antiquity. However, the main sources of modern football codes appear to lie in western Europe, especially England. Mahmud al-Kashgari, in his Dīwān Lughāt al-Turk, described a game called "tepuk" among Turks in Central and East Asia, in which people try to attack each other's castle by kicking a ball made of sheep leather.
[Image: A group of indigenous people playing a ball game in French Guiana]
[Image: A revived version of kemari being played at the Tanzan Shrine, Japan, 2006]

Medieval and early modern Europe

The Middle Ages saw a huge rise in popularity of annual Shrovetide football matches throughout Europe, particularly in England. An early reference to a ball game played in Britain comes from the 9th-century Historia Brittonum, attributed to Nennius, which describes "a party of boys ... playing at ball". References to a ball game played in northern France known as La Soule or Choule, in which the ball was propelled by hands, feet, and sticks, date from the 12th century. The early forms of football played in England, sometimes referred to as "mob football", would be played in towns or between neighbouring villages, involving an unlimited number of players on opposing teams who would clash en masse, struggling to move an item, such as an inflated animal's bladder, to particular geographical points, such as their opponents' church, with play taking place in the open space between neighbouring parishes. The game was played primarily during significant religious festivals, such as Shrovetide, Christmas, or Easter, and Shrovetide games have survived into the modern era in a number of English towns (see below). The first detailed description of what was almost certainly football in England was given by William FitzStephen in about 1174–1183. He described the activities of London youths during the annual festival of Shrove Tuesday: After lunch all the youth of the city go out into the fields to take part in a ball game. The students of each school have their own ball; the workers from each city craft are also carrying their balls. Older citizens, fathers, and wealthy citizens come on horseback to watch their juniors competing, and to relive their own youth vicariously: you can see their inner passions aroused as they watch the action and get caught up in the fun being had by the carefree adolescents.
Most of the very early references to the game speak simply of "ball play" or "playing at ball". This reinforces the idea that the games played at the time did not necessarily involve a ball being kicked. An early reference to a ball game that was probably football comes from 1280 at Ulgham, Northumberland, England: "Henry... while playing at ball.. ran against David". Football was played in Ireland in 1308, with a documented reference to John McCrocan, a spectator at a "football game" at Newcastle, County Down, being charged with accidentally stabbing a player named William Bernard. Another reference to a football game comes in 1321 at Shouldham, Norfolk, England: "[d]uring the game at ball as he kicked the ball, a lay friend of his... ran against him and wounded himself". In 1314, Nicholas de Farndone, Lord Mayor of the City of London, issued a decree, written in the French used by the English upper classes at the time, banning football. A translation reads: "[f]orasmuch as there is great noise in the city caused by hustling over large foot balls [rageries de grosses pelotes de pee] in the fields of the public from which many evils might arise which God forbid: we command and forbid on behalf of the king, on pain of imprisonment, such game to be used in the city in the future." This is one of the earliest official references to the game by name. In 1363, King Edward III of England issued a proclamation banning "...handball, football, or hockey; coursing and cock-fighting, or other such idle games", showing that "football" – whatever its exact form in this case – was being differentiated from games involving other parts of the body, such as handball. A game known as "football" was played in Scotland as early as the 15th century: it was prohibited by the Football Act 1424 and, although the law fell into disuse, it was not repealed until 1906.
There is evidence of schoolboys playing a "football" game in Aberdeen in 1633 (some references cite 1636), which is notable as an early allusion to what some have considered to be passing the ball. The word "pass" in the most recent translation is derived from "huc percute" (strike it here) and later "repercute pilam" (strike the ball again) in the original Latin. It is not certain that the ball was being struck between members of the same team. The original word translated as "goal" is "metum", literally meaning the "pillar at each end of the circus course" in a Roman chariot race. There is a reference to "get hold of the ball before [another player] does" (Praeripe illi pilam si possis agere), suggesting that handling of the ball was allowed. One sentence states, in the original 1930 translation, "Throw yourself against him" (Age, objice te illi). King Henry IV of England also presented one of the earliest documented uses of the English word "football", in 1409, when he issued a proclamation forbidding the levying of money for "foteball". There is also an account in Latin from the end of the 15th century of football being played at Caunton, Nottinghamshire. This is the first description of a "kicking game" and the first description of dribbling: "[t]he game at which they had met for common recreation is called by some the foot-ball game. It is one in which young men, in country sport, propel a huge ball not by throwing it into the air but by striking it and rolling it along the ground, and that not with their hands but with their feet... kicking in opposite directions." The chronicler gives the earliest reference to a football pitch, stating that "[t]he boundaries have been marked and the game had started."

Other firsts in the medieval and early modern eras:
- "A football", in the sense of a ball rather than a game, was first mentioned in 1486. This reference is in Dame Juliana Berners' Book of St Albans. It states: "a certain rounde instrument to play with ...it is an instrument for the foote and then it is calde in Latyn 'pila pedalis', a fotebal".
- A pair of football boots were ordered by King Henry VIII of England in 1526.
- Women playing a form of football was first described in 1580 by Sir Philip Sidney in one of his poems: "[a] tyme there is for all, my mother often sayes, when she, with skirts tuckt very hy, with girles at football playes".
- The first references to goals are in the late 16th and early 17th centuries. In 1584 and 1602 respectively, John Norden and Richard Carew referred to "goals" in Cornish hurling. Carew described how goals were made: "they pitch two bushes in the ground, some eight or ten foote asunder; and directly against them, ten or twelue [twelve] score off, other twayne in like distance, which they terme their Goales". He is also the first to describe goalkeepers and passing of the ball between players.
- The first direct reference to scoring a goal is in John Day's play The Blind Beggar of Bethnal Green (performed circa 1600; published 1659): "I'll play a gole at camp-ball" (an extremely violent variety of football, which was popular in East Anglia). Similarly, in a poem in 1613, Michael Drayton refers to "when the Ball to throw, and drive it to the Gole, in squadrons forth they goe".

In the 16th century, the city of Florence celebrated the period between Epiphany and Lent by playing a game which today is known as "calcio storico" ("historic kickball") in the Piazza Santa Croce. The young aristocrats of the city would dress up in fine silk costumes and embroil themselves in a violent form of football. For example, calcio players could punch, shoulder charge, and kick opponents. Blows below the belt were allowed. The game is said to have originated as a military training exercise. In 1580, Count Giovanni de' Bardi di Vernio wrote Discorso sopra 'l giuoco del Calcio Fiorentino.
This is sometimes said to be the earliest code of rules for any football game. The game was not played after January 1739 (until it was revived in May 1930).

Official disapproval and attempts to ban football

There have been many attempts to ban football, from the Middle Ages through to the modern day. The first such law was passed in England in 1314; it was followed by more than 30 in England alone between 1314 and 1667. Women were banned from playing at English and Scottish Football League grounds in 1921, a ban that was only lifted in the 1970s. Female footballers still face similar problems in some parts of the world. American football also faced pressures to ban the sport. The game played in the 19th century resembled the mob football that developed in medieval Europe, including a version popular on university campuses known as old division football, and several municipalities banned its play in the mid-19th century. By the 20th century, the game had evolved into a more rugby-style game. In 1905, there were calls to ban American football in the U.S. due to its violence; a meeting that year hosted by American president Theodore Roosevelt led to sweeping rule changes that caused the sport to diverge significantly from its rugby roots and become more like the sport as it is played today.

Establishment of modern codes

English public schools

While football continued to be played in various forms throughout Britain, its public schools (equivalent to private schools in other countries) are widely credited with four key achievements in the creation of modern football codes. First of all, the evidence suggests that they were important in taking football away from its "mob" form and turning it into an organised team sport. Second, many early descriptions of football and references to it were recorded by people who had studied at these schools.
Third, it was teachers, students, and former students from these schools who first codified football games, to enable matches to be played between schools. Finally, it was at English public schools that the division between "kicking" and "running" (or "carrying") games first became clear. The earliest evidence that games resembling football were being played at English public schools – mainly attended by boys from the upper, upper-middle and professional classes – comes from the Vulgaria by William Herman in 1519. Herman had been headmaster at Eton and Winchester colleges and his Latin textbook includes a translation exercise with the phrase "We wyll playe with a ball full of wynde". Richard Mulcaster, a student at Eton College in the early 16th century and later headmaster at other English schools, has been described as "the greatest sixteenth Century advocate of football". Among his contributions is the earliest evidence of organised team football. Mulcaster's writings refer to teams ("sides" and "parties"), positions ("standings"), a referee ("judge over the parties") and a coach ("trayning maister"). Mulcaster's "footeball" had evolved from the disordered and violent forms of traditional football: [s]ome smaller number with such overlooking, sorted into sides and standings, not meeting with their bodies so boisterously to trie their strength: nor shouldring or shuffing one an other so barbarously ... may use footeball for as much good to the body, by the chiefe use of the legges. In 1633, David Wedderburn, a teacher from Aberdeen, mentioned elements of modern football games in a short Latin textbook called Vocabula. Wedderburn refers to what has been translated into modern English as "keeping goal" and makes an allusion to passing the ball ("strike it here"). There is a reference to "get hold of the ball", suggesting that some handling was allowed.
It is clear that the tackles allowed included the charging and holding of opposing players ("drive that man back"). A more detailed description of football is given in Francis Willughby's Book of Games, written in about 1660. Willughby, who had studied at Bishop Vesey's Grammar School, Sutton Coldfield, is the first to describe goals and a distinct playing field: "a close that has a gate at either end. The gates are called Goals." His book includes a diagram illustrating a football field. He also mentions tactics ("leaving some of their best players to guard the goal"); scoring ("they that can strike the ball through their opponents' goal first win") and the way teams were selected ("the players being equally divided according to their strength and nimbleness"). He is the first to describe a "law" of football: "they must not strike [an opponent's leg] higher than the ball". English public schools were the first to codify football games. In particular, they devised the first offside rules, during the late 18th century. In the earliest manifestations of these rules, players were "off their side" if they simply stood between the ball and the goal which was their objective. Players were not allowed to pass the ball forward, either by foot or by hand. They could only dribble with their feet, or advance the ball in a scrum or similar formation. However, offside laws began to diverge and develop differently at each school, as is shown by the rules of football from Winchester, Rugby, Harrow and Cheltenham between 1810 and 1850. The first known codes – in the sense of a set of rules – were those of Eton in 1815 and Aldenham in 1825. During the early 19th century, most working-class people in Britain had to work six days a week, often for over twelve hours a day. They had neither the time nor the inclination to engage in sport for recreation and, at the time, many children were part of the labour force. Feast day football played on the streets was in decline.
Public school boys, who enjoyed some freedom from work, became the inventors of organised football games with formal codes of rules. Football was adopted by a number of public schools as a way of encouraging competitiveness and keeping youths fit. Each school drafted its own rules, which varied widely between different schools and were changed over time with each new intake of pupils. Two schools of thought developed regarding rules. Some schools favoured a game in which the ball could be carried (as at Rugby, Marlborough and Cheltenham), while others preferred a game where kicking and dribbling the ball was promoted (as at Eton, Harrow, Westminster and Charterhouse). The division into these two camps was partly the result of circumstances in which the games were played. For example, Charterhouse and Westminster at the time had restricted playing areas; the boys were confined to playing their ball game within the school cloisters, making it difficult for them to adopt rough and tumble running games. William Webb Ellis, a pupil at Rugby School, is said to have, in 1823, "with a fine disregard for the rules of football, as played in his time [emphasis added], first took the ball in his arms and ran with it, thus creating the distinctive feature of the rugby game." This act is usually said to be the beginning of Rugby football, but there is little evidence that it occurred, and most sports historians believe the story to be apocryphal. The act of 'taking the ball in his arms' is often misinterpreted as 'picking the ball up', since it is widely believed that Webb Ellis' 'crime' was handling the ball, as in modern association football. However, handling the ball at the time was often permitted and in some cases compulsory; the rule for which Webb Ellis showed disregard was running forward with it, as the rules of his time only allowed a player to retreat backwards or kick forwards.
The boom in rail transport in Britain during the 1840s meant that people were able to travel farther and with less inconvenience than ever before. Inter-school sporting competitions became possible. However, it was difficult for schools to play each other at football, as each school played by its own rules. The solution to this problem was usually that the match be divided into two halves, one half played by the rules of the host "home" school, and the other half by the visiting "away" school. The modern rules of many football codes were formulated during the mid- or late 19th century. This also applies to other sports such as lawn bowls, lawn tennis, etc. The major impetus for this was the patenting of the world's first lawnmower in 1830. This allowed for the preparation of modern ovals, playing fields, pitches, grass courts, etc. Apart from Rugby football, the public school codes have barely been played beyond the confines of each school's playing fields. However, many of them are still played at the schools which created them (see Surviving UK school games below). Public schools' dominance of sports in the UK began to wane after the Factory Act of 1850, which significantly increased the recreation time available to working-class children. Before 1850, many British children had to work six days a week, for more than twelve hours a day. From 1850, they could not work before 6 a.m. (7 a.m. in winter) or after 6 p.m. on weekdays (7 p.m. in winter); on Saturdays they had to cease work at 2 p.m. These changes meant that working-class children had more time for games, including various forms of football. The earliest known matches between public schools are as follows:
- 9 December 1834: Eton School v. Harrow School.
- 1840s: Old Rugbeians v. Old Salopians (played at Cambridge University).
- 1840s: Old Rugbeians v. Old Salopians (played at Cambridge University the following year).
- 1852: Harrow School v. Westminster School.
- 1857: Haileybury School v. Westminster School.
- 24 February 1858: Forest School v. Chigwell School.
- 1858: Westminster School v. Winchester College.
- 1859: Harrow School v. Westminster School.
- 19 November 1859: Radley College v. Old Wykehamists.
- 1 December 1859: Old Marlburians v. Old Rugbeians (played at Christ Church, Oxford).
- 19 December 1859: Old Harrovians v. Old Wykehamists (played at Christ Church, Oxford).

The first documented club to bear in its title a reference to being a 'football club' was "The Foot-Ball Club", located in Edinburgh, Scotland, during the period 1824–41. The club forbade tripping but allowed pushing and holding and the picking up of the ball. In 1845, three boys at Rugby School were tasked with codifying the rules then being used at the school. These were the first set of written rules (or code) for any form of football. This further assisted the spread of the Rugby game. The earliest known matches involving non-public school clubs or institutions are as follows:
- 13 February 1856: Charterhouse School v. St Bartholemew's Hospital.
- 7 November 1856: Bedford Grammar School v. Bedford Town Gentlemen.
- 13 December 1856: Sunbury Military College v. Littleton Gentlemen.
- December 1857: Edinburgh University v. Edinburgh Academical Club.
- 24 November 1858: Westminster School v. Dingley Dell Club.
- 12 May 1859: Tavistock School v. Princetown School.
- 5 November 1859: Eton School v. Oxford University.
- 22 February 1860: Charterhouse School v. Dingley Dell Club.
- 21 July 1860: Melbourne v. Richmond.
- 17 December 1860: 58th Regiment v. Sheffield.
- 26 December 1860: Sheffield v. Hallam.

One of the longest-running football fixtures is the Cordner-Eggleston Cup, contested between Melbourne Grammar School and Scotch College, Melbourne every year since 1858. It is believed by many to also be the first match of Australian rules football, although it was played under experimental rules in its first year.
The first football trophy tournament was the Caledonian Challenge Cup, donated by the Royal Caledonian Society of Melbourne, played in 1861 under the Melbourne Rules. The oldest football league is a rugby football competition, the United Hospitals Challenge Cup (1874), while the oldest rugby trophy is the Yorkshire Cup, contested since 1878. The South Australian Football Association (30 April 1877) is the oldest surviving Australian rules football competition. The oldest surviving soccer trophy is the Youdan Cup (1867) and the oldest national football competition is the English FA Cup (1871). The Football League (1888) is recognised as the longest-running association football league. The first international football match took place between sides representing England and Scotland on 5 March 1870 at the Oval under the authority of the FA. The first rugby international took place in 1871. In Europe, early footballs were made out of animal bladders, more specifically pigs' bladders, which were inflated. Later, leather coverings were introduced to allow the balls to keep their shape. However, in 1851, Richard Lindon and William Gilbert, both shoemakers from the town of Rugby (near the school), exhibited both round and oval-shaped balls at the Great Exhibition in London. Richard Lindon's wife is said to have died of lung disease caused by blowing up pigs' bladders. Lindon also won medals for the invention of the "Rubber inflatable Bladder" and the "Brass Hand Pump". In 1855, the U.S. inventor Charles Goodyear, who had patented vulcanised rubber, exhibited a spherical football with an exterior of vulcanised rubber panels at the Paris Exposition Universelle. The ball was to prove popular in early forms of football in the U.S.

Modern ball-passing tactics

The earliest reference to a game of football involving players passing the ball and attempting to score past a goalkeeper was written in 1633 by David Wedderburn, a poet and teacher in Aberdeen, Scotland.
Nevertheless, the original text does not state whether the allusion to passing as 'kick the ball back' ('repercute pilam') was in a forward or backward direction, or whether it was between members of the same or of opposing teams (as was usual at this time). "Scientific" football is first recorded in 1839 from Lancashire, and in the modern game in rugby football from 1862 and from Sheffield FC as early as 1865. The first side to play a passing combination game was the Royal Engineers AFC in 1869/70. By 1869 they were "work[ing] well together", "backing up" and benefiting from "cooperation". By 1870 the Engineers were passing the ball: "Lieut. Creswell, who having brought the ball up the side then kicked it into the middle to another of his side, who kicked it through the posts the minute before time was called". Passing was a regular feature of their style. By early 1872 the Engineers were the first football team renowned for "play[ing] beautifully together". A double pass is first reported from Derby school against Nottingham Forest in March 1872, the first of which is irrefutably a short pass: "Mr Absey dribbling the ball half the length of the field delivered it to Wallis, who kicking it cleverly in front of the goal, sent it to the captain who drove it at once between the Nottingham posts". The first side to have perfected the modern formation was Cambridge University AFC; they also introduced the 2–3–5 "pyramid" formation. Rugby football is thought to have started about 1845 at Rugby School in Rugby, Warwickshire, England, although forms of football in which the ball was carried and tossed date to medieval times. In Britain, by 1870, there were 49 clubs playing variations of the Rugby school game. There were also "rugby" clubs in Ireland, Australia, Canada and New Zealand. However, there was no generally accepted set of rules for rugby until 1871, when 21 clubs from London came together to form the Rugby Football Union (RFU).
The first official RFU rules were adopted in June 1871. These rules allowed passing the ball. They also included the try, where touching the ball over the line allowed an attempt at goal, though drop-goals from marks and general play, and penalty conversions, were still the main forms of contest. The first rugby international was played between the national teams of England and Scotland at Raeburn Place, Edinburgh, on 27 March 1871.

Cambridge rules

During the nineteenth century, several codifications of the rules of football were made at the University of Cambridge, in order to enable students from different public schools to play each other. The Cambridge Rules of 1863 influenced the decision of the Football Association to ban rugby-style carrying of the ball in its own first set of laws.

Sheffield rules

By the late 1850s, many football clubs had been formed throughout the English-speaking world to play various codes of football. Sheffield Football Club, founded in 1857 in the English city of Sheffield by Nathaniel Creswick and William Prest, was later recognised as the world's oldest club playing association football. However, the club initially played its own code of football: the Sheffield rules. The code was largely independent of the public school rules, the most significant difference being the lack of an offside rule. The code was responsible for many innovations that later spread to association football, including free kicks, corner kicks, the handball rule, throw-ins and the crossbar. By the 1870s the Sheffield rules had become the dominant code in the north and midlands of England. At this time, a series of rule changes by both the London and Sheffield FAs gradually eroded the differences between the two games, until the adoption of a common code in 1877.

Australian rules football

There is archival evidence of "foot-ball" games being played in various parts of Australia throughout the first half of the 19th century.
The origins of an organised game of football known today as Australian rules football can be traced back to 1858 in Melbourne, the capital city of Victoria. In July 1858, Tom Wills, an Australian-born cricketer educated at Rugby School in England, wrote a letter to Bell's Life in Victoria & Sporting Chronicle, calling for a "foot-ball club" with a "code of laws" to keep cricketers fit during winter. This is considered by historians to be a defining moment in the creation of Australian rules football. Through publicity and personal contacts, Wills was able to co-ordinate football matches in Melbourne that experimented with various rules, the first of which was played on 31 July 1858. One week later, Wills umpired a schoolboys match between Melbourne Grammar School and Scotch College. Following these matches, organised football in Melbourne rapidly increased in popularity. Wills and others involved in these early matches formed the Melbourne Football Club (the oldest surviving Australian football club) on 14 May 1859. Club members Wills, William Hammersley, J. B. Thompson and Thomas H. Smith met with the intention of forming a set of rules that would be widely adopted by other clubs. The committee debated rules used in English public school games; Wills pushed for various rugby football rules he learnt during his schooling. The first rules share similarities with these games, and were shaped to suit Australian conditions. H. C. A. Harrison, a seminal figure in Australian football, recalled that his cousin Wills wanted "a game of our own". The code was distinctive in the prevalence of the mark, free kick and tackling, the lack of an offside rule, and the fact that players were specifically penalised for throwing the ball. The Melbourne football rules were widely distributed and gradually adopted by the other Victorian clubs. The rules were updated several times during the 1860s to accommodate the rules of other influential Victorian football clubs.
A significant redraft in 1866 by H. C. A. Harrison's committee accommodated the Geelong Football Club's rules, making the game then known as "Victorian Rules" increasingly distinct from other codes. It soon adopted cricket fields and an oval ball, used specialised goal and behind posts, and featured bouncing the ball while running and spectacular high marking. The game spread quickly to other Australian colonies. Outside its heartland in southern Australia, the code experienced a significant period of decline following World War I but has since grown throughout Australia and in other parts of the world, and the Australian Football League emerged as the dominant professional competition.

The Football Association

During the early 1860s, there were increasing attempts in England to unify and reconcile the various public school games. In 1862, J. C. Thring, who had been one of the driving forces behind the original Cambridge Rules, was a master at Uppingham School; that year he issued his own rules of what he called "The Simplest Game" (these are also known as the Uppingham Rules). In early October 1863, another new revised version of the Cambridge Rules was drawn up by a seven-member committee representing former pupils from Harrow, Shrewsbury, Eton, Rugby, Marlborough and Westminster. At the Freemasons' Tavern, Great Queen Street, London on the evening of 26 October 1863, representatives of several football clubs in the London Metropolitan area met for the inaugural meeting of the Football Association (FA). The aim of the association was to establish a single unifying code and regulate the playing of the game among its members. Following the first meeting, the public schools were invited to join the association. All of them declined, except Charterhouse and Uppingham. In total, six meetings of the FA were held between October and December 1863. After the third meeting, a draft set of rules was published.
However, at the beginning of the fourth meeting, attention was drawn to the recently published Cambridge Rules of 1863. The Cambridge rules differed from the draft FA rules in two significant areas; namely running with (carrying) the ball and hacking (kicking opposing players in the shins). The two contentious FA rules were as follows:

IX. A player shall be entitled to run with the ball towards his adversaries' goal if he makes a fair catch, or catches the ball on the first bound; but in case of a fair catch, if he makes his mark he shall not run.

X. If any player shall run with the ball towards his adversaries' goal, any player on the opposite side shall be at liberty to charge, hold, trip or hack him, or to wrest the ball from him, but no player shall be held and hacked at the same time.

At the fifth meeting it was proposed that these two rules be removed. Most of the delegates supported this, but F. M. Campbell, the representative from Blackheath and the first FA treasurer, objected. He said: "hacking is the true football". However, the motion to ban running with the ball in hand and hacking was carried, and Blackheath withdrew from the FA. After the final meeting on 8 December, the FA published the "Laws of the Game", the first comprehensive set of rules for the game later known as association football. The term "soccer", in use since the late 19th century, derives from an Oxford University abbreviation of "association". The first FA rules still contained elements that are no longer part of association football, but which are still recognisable in other games (such as Australian football and rugby football): for instance, a player could make a fair catch and claim a mark, which entitled him to a free kick; and if a player touched the ball behind the opponents' goal line, his side was entitled to a free kick at goal, from 15 yards (13.7 metres) in front of the goal line.
North American football codes

As was the case in Britain, by the early 19th century, North American schools and universities played their own local games, between sides made up of students. For example, students at Dartmouth College in New Hampshire played a game called Old division football, a variant of the association football codes, as early as the 1820s. They remained largely "mob football" style games, with huge numbers of players attempting to advance the ball into a goal area, often by any means necessary. Rules were simple; violence and injury were common. The violence of these mob-style games led to widespread protests and a decision to abandon them. Yale University, under pressure from the city of New Haven, banned the play of all forms of football in 1860, while Harvard University followed suit in 1861. In its place, two general types of football evolved: "kicking" games and "running" (or "carrying") games. A hybrid of the two, known as the "Boston game", was played by a group known as the Oneida Football Club. The club, considered by some historians as the first formal football club in the United States, was formed in 1862 by schoolboys who played the Boston game on Boston Common. The game began to return to American college campuses by the late 1860s. The universities of Yale, Princeton (then known as the College of New Jersey), Rutgers, and Brown all began playing "kicking" games during this time. In 1867, Princeton used rules based on those of the English Football Association. In Canada, the first documented football match was a practice game played on 9 November 1861, at University College, University of Toronto (approximately 400 yards west of Queen's Park). One of the participants in the game involving University of Toronto students was (Sir) William Mulock, later Chancellor of the school. In 1864, at Trinity College, Toronto, F. Barlow Cumberland, Frederick A.
Bethune, and Christopher Gwynn, one of the founders of Milton, Massachusetts, devised rules based on rugby football. A "running game", resembling rugby football, was then taken up by the Montreal Football Club in Canada in 1868. On 6 November 1869, Rutgers faced Princeton in a game that was played with a round ball and, like all early games, used improvised rules. It is usually regarded as the first game of American intercollegiate football. Modern North American football grew out of a match between McGill University of Montreal and Harvard University in 1874. During the game, the two teams alternated between the rugby-based rules used by McGill and the Boston Game rules used by Harvard. Within a few years, Harvard had both adopted McGill's rules and persuaded other U.S. university teams to do the same. On 23 November 1876, representatives from Harvard, Yale, Princeton, and Columbia met at the Massasoit Convention in Springfield, Massachusetts, agreeing to adopt most of the Rugby Football Union rules, with some variations. In 1880, Yale coach Walter Camp, who had become a fixture at the Massasoit House conventions where the rules were debated and changed, devised a number of major innovations. Camp's two most important rule changes, which diverged the American game from rugby, were the replacement of the scrummage with the line of scrimmage and the establishment of the down-and-distance rules. However, American football remained a violent sport where collisions often led to serious injuries and sometimes even death. This led U.S. President Theodore Roosevelt to hold a meeting with football representatives from Harvard, Yale, and Princeton on 9 October 1905, urging them to make drastic changes. One rule change introduced in 1906, devised to open up the game and reduce injury, was the introduction of the legal forward pass. Though it was underutilised for years, this proved to be one of the most important rule changes in the establishment of the modern game.
Over the years, Canada absorbed some of the developments in American football in an effort to distinguish it from a more rugby-oriented game. In 1903, the Ontario Rugby Football Union adopted the Burnside rules, which implemented the line of scrimmage and down-and-distance system from American football, among others. Canadian football then implemented the legal forward pass in 1929. American and Canadian football remain different codes, stemming from rule changes that the American side of the border adopted but the Canadian side has not.

Gaelic football

In the mid-19th century, various traditional football games, referred to collectively as caid, remained popular in Ireland, especially in County Kerry. One observer, Father W. Ferris, described two main forms of caid during this period: the "field game", in which the object was to put the ball through arch-like goals formed from the boughs of two trees; and the epic "cross-country game", which took up most of the daylight hours of a Sunday on which it was played, and was won by one team taking the ball across a parish boundary. "Wrestling", "holding" opposing players, and carrying the ball were all allowed. By the 1870s, rugby and association football had started to become popular in Ireland. Trinity College Dublin was an early stronghold of rugby (see the Developments in the 1850s section above). The rules of the English FA were being distributed widely. Traditional forms of caid had begun to give way to a "rough-and-tumble game" which allowed tripping. There was no serious attempt to unify and codify Irish varieties of football until the establishment of the Gaelic Athletic Association (GAA) in 1884. The GAA sought to promote traditional Irish sports, such as hurling, and to reject imported games like rugby and association football. The first Gaelic football rules were drawn up by Maurice Davin and published in the United Ireland magazine on 7 February 1887.
Davin's rules showed the influence of games such as hurling and a desire to formalise a distinctly Irish code of football. The prime example of this differentiation was the lack of an offside rule (an attribute which, for many years, was shared only by other Irish games like hurling, and by Australian rules football).

Schism in rugby football

In England, by the 1890s, a long-standing Rugby Football Union ban on professional players was causing regional tensions within rugby football, as many players in northern England were working class and could not afford to take time off to train, travel, play and recover from injuries. This was not very different from what had occurred ten years earlier in soccer in Northern England, but the RFU authorities reacted very differently, attempting to alienate the working-class support in Northern England. In 1895, following a dispute about a player being paid broken time payments, which replaced wages lost as a result of playing rugby, representatives of the northern clubs met in Huddersfield to form the Northern Rugby Football Union (NRFU). The new body initially permitted only various types of player wage replacements. However, within two years, NRFU players could be paid, but they were required to have a job outside sport. The demands of a professional league dictated that rugby had to become a better "spectator" sport. Within a few years the NRFU rules had started to diverge from those of the RFU, most notably with the abolition of the line-out. This was followed by the replacement of the ruck with the "play-the-ball ruck", which allowed a two-player ruck contest between the tackler at marker and the player tackled. Mauls were stopped once the ball carrier was held, being replaced by a play-the-ball ruck. The separate Lancashire and Yorkshire competitions of the NRFU merged in 1901, forming the Northern Rugby League, the first time the name rugby league was used officially in England.
Over time, the RFU form of rugby, played by clubs which remained members of national federations affiliated to the IRFB, became known as rugby union.

Globalisation of association football

The need for a single body to oversee association football had become apparent by the beginning of the 20th century, with the increasing popularity of international fixtures. The English Football Association had chaired many discussions on setting up an international body, but was perceived as making no progress. It fell to associations from seven other European countries: France, Belgium, Denmark, the Netherlands, Spain, Sweden, and Switzerland, to form an international association. The Fédération Internationale de Football Association (FIFA) was founded in Paris on 21 May 1904. Its first president was Robert Guérin. The French name and acronym have remained, even outside French-speaking countries.

Further divergence of the two rugby codes

Rugby league rules diverged significantly from rugby union in 1906, with the reduction of the team from 15 to 13 players. In 1907, a New Zealand professional rugby team toured Australia and Britain, receiving an enthusiastic response, and professional rugby leagues were launched in Australia the following year. However, the rules of professional games varied from one country to another, and negotiations between various national bodies were required to fix the exact rules for each international match. This situation endured until 1948, when at the instigation of the French league, the Rugby League International Federation (RLIF) was formed at a meeting in Bordeaux. During the second half of the 20th century, the rules changed further. In 1966, rugby league officials borrowed the American football concept of downs: a team was allowed to retain possession of the ball for four tackles (rugby union retains the original rule that a player who is tackled and brought to the ground must release the ball immediately).
The maximum number of tackles was later increased to six (in 1971), and in rugby league this became known as the six tackle rule. With the advent of full-time professionals in the early 1990s, and the consequent speeding up of the game, the five-metre off-side distance between the two teams became 10 metres, and the replacement rule was superseded by various interchange rules, among other changes. The laws of rugby union also changed during the 20th century, although less significantly than those of rugby league. In particular, goals from marks were abolished, kicks directly into touch from outside the 22-metre line were penalised, new laws were put in place to determine who had possession following an inconclusive ruck or maul, and the lifting of players in line-outs was legalised. In 1995, rugby union became an "open" game, that is, one which allowed professional players. Although the original dispute between the two codes has now disappeared – and despite the fact that officials from both forms of rugby football have sometimes mentioned the possibility of re-unification – the rules of both codes and their culture have diverged to such an extent that such an event is unlikely in the foreseeable future.

Use of the word "football"

The word football, when used in reference to a specific game, can mean any one of those described above. Because of this, much controversy has occurred over the term football, primarily because it is used in different ways in different parts of the English-speaking world. Most often, the word "football" is used to refer to the code of football that is considered dominant within a particular region (which is association football in most countries). So, effectively, what the word "football" means usually depends on where one says it. In each of the United Kingdom, the United States, and Canada, one football code is known solely as "football", while the others generally require a qualifier.
In New Zealand, "football" historically referred to rugby union, but more recently may be used unqualified to refer to association football. The sport meant by the word "football" in Australia is either Australian rules football or rugby league, depending on local popularity (which largely conforms to the Barassi Line). In francophone Quebec, where Canadian football is more popular, the Canadian code is known as le football, while American football is known as le football américain and association football is known as le soccer. Of the 45 national FIFA (Fédération Internationale de Football Association) affiliates in which English is an official or primary language, most currently use "Football" in their organisations' official names; the FIFA affiliates in Canada and the United States use "Soccer" in their names. A few FIFA affiliates have recently "normalised" to using "Football", including:
- Australia's association football governing body changed its name in 2005 from using "soccer" to "football".
- New Zealand's governing body renamed itself in 2007, saying "the international game is called football".
- Samoa changed from "Samoa Football (Soccer) Federation" to "Football Federation Samoa" in 2009.

Popularity

Several of the football codes are the most popular team sports in the world. Globally, association football is played by over 250 million players in over 200 nations, and has the highest television audience in sport, making it the most popular in the world. American football, with 1.1 million high school football players and nearly 70,000 college football players, is the most popular sport in the United States, with the annual Super Bowl game accounting for nine of the ten most watched broadcasts in U.S. television history. The NFL has the highest average attendance (67,591) of any professional sports league in the world and has the highest revenue out of any single professional sports league.
Thus, the best association football and American football players are among the highest paid athletes in the world. Australian rules football has the highest spectator attendance of all sports in Australia. Similarly, Gaelic football is the most popular sport in Ireland in terms of match attendance, and the All-Ireland Football Final is the most watched event of that nation's sporting year. Rugby union is the most popular sport in New Zealand, Samoa, Tonga, and Fiji. It is also the fastest growing sport in the U.S., with college rugby the fastest growing college sport in that country.

Football codes board

- Medieval football
  - Cambridge rules; Sheffield rules
  - Rugby football (1845–)[c]
    - Rugby union with minor modifications: American football – with variants Underwater (1967–), Indoor, Arena, Sprint, Flag, Touch, Street, Wheelchair (1987–), XFL
      - Burnside rules: Canadian football (1861–)[d] – with variant Flag football[e]
    - Rugby Football Union (1871–) – with variants Sevens (1883–), Tens, X, Touch, Tag, American flag, Mini, Beach, Snow, Tambo, Wheelchair, Underwater
    - Rugby league (1895–) – with variants Touch football, Tag, Wheelchair, Mod
  - Rugby rules and other English public school games[f]: Australian rules (1859–) – with variants International rules football (1967–), Austus, Rec footy, Auskick, Samoa Rules, Metro, Lightning, AFLX, Nine-a-side, Kick-to-kick
    - Gaelic football (1885–), Ladies' Gaelic football (1969–)

Football codes development tree

Present-day codes and families

These codes have in common the prohibition of the use of hands (by all players except the goalkeeper, though outfield players can "throw in" the ball when it goes out of play), unlike other codes where carrying or handling the ball by all players is allowed:
- Association football, also known as football, soccer, footy and footie
- Indoor/basketball court variants:
- Five-a-side football – game for smaller teams, played under various rules including:
- Indoor soccer – the six-a-side
indoor game; the Latin American variant (fútbol rápido, "fast football") is often played in open-air venues
- Masters Football – six-a-side played in Europe by mature professionals (35 years and older)
- Paralympic football – modified game for athletes with a disability. Includes:
- Beach soccer, beach football or sand soccer – variant modified for play on sand
- Street football – encompasses a number of informal variants
- Rush goalie – a variation in which the role of the goalkeeper is more flexible than normal
- Crab football – players stand on their hands and feet and move around on their backs whilst playing
- Swamp soccer – the game as played on a swamp or bog field
- Walking football – players are restricted to walking, to facilitate participation by older and less mobile players

The hockey game bandy has rules partly based on the association football rules and is sometimes nicknamed 'winter football'. There are also motorsport variations of the game.

These codes have in common the ability of players to carry the ball with their hands, and to throw it to teammates, unlike association football, where the use of hands during play is prohibited for all players except the goalkeeper. They also feature various methods of scoring based upon whether the ball is carried into the goal area, or kicked above the goalposts.
- Rugby football
- Rugby union
- Rugby league – often referred to simply as "league", and usually known simply as "football" or "footy" in the Australian states of New South Wales and Queensland.
- Beach rugby – rugby played on sand
- Touch rugby – generic name for forms of rugby football which do not feature tackles; one variant has been formalised
- Tag rugby – non-contact variant in which a flag attached to a player is removed to indicate a tackle.
- Gridiron football
- American football – called "football" in the United States and Canada, and "gridiron" in Australia and New Zealand.
- Nine-man football, eight-man football, six-man football – variants played primarily by smaller high schools that lack enough players to field full teams.
- Street football/backyard football – played without equipment or official fields and with simplified rules
- Flag football – non-contact variant in which a flag attached to a player is removed to indicate a tackle.
- Touch football – non-tackle variants
- Canadian football – called simply "football" in Canada; "football" in Canada can mean either Canadian or American football depending on context. All of the variants listed for American football are also attested for Canadian football.
- Indoor football – indoor variants, particularly arena football
- Wheelchair football – variant adapted to play by athletes with physical disabilities

Irish and Australian

These codes have in common the absence of an offside rule, the prohibition of continuous carrying of the ball (requiring a periodic bounce or solo (toe-kick), depending on the code) while running, handpassing by punching or tapping the ball rather than throwing it, and other traditions.
- Australian rules football – officially known as "Australian football", and informally as "football", "footy" or "Aussie rules".
In some areas it is referred to as "AFL", the name of the main organising body and competition
- Auskick – a version of Australian rules designed by the AFL for young children
- Metro footy (or Metro rules footy) – a modified version invented by the USAFL, for use on gridiron fields in North American cities (which often lack grounds large enough for conventional Australian rules matches)
- Kick-to-kick – informal versions of the game
- 9-a-side footy – a more open, running variety of Australian rules, requiring 18 players in total and a proportionally smaller playing area (includes contact and non-contact varieties)
- Rec footy – "Recreational Football", a modified non-contact variation of Australian rules, created by the AFL, which replaces tackles with tags
- Touch Aussie Rules – a non-tackle variation of Australian rules played only in the United Kingdom
- Samoa rules – localised version adapted to Samoan conditions, such as the use of rugby football fields
- Masters Australian football (a.k.a. Superules) – reduced contact version introduced for competitions limited to players over 30 years of age
- Women's Australian rules football – women's competition played with a smaller ball and (sometimes) reduced contact
- Gaelic football – played predominantly in Ireland. Commonly referred to as "football" or "Gaelic"
- International rules football – a compromise code used for international representative matches between Australian rules football players and Gaelic football players

Surviving medieval ball games

- Calcio Fiorentino – a modern revival of Renaissance football from 16th century Florence.
- la Soule – a modern revival of French medieval football
- lelo burti – a Georgian traditional football game
- The Haxey Hood, played on Epiphany in Haxey, Lincolnshire
- Shrove Tuesday games:
- Scoring the Hales in Alnwick, Northumberland
- Royal Shrovetide Football in Ashbourne, Derbyshire
- The Shrovetide Ball Game in Atherstone, Warwickshire
- The Shrove Tuesday Football Ceremony of the Purbeck Marblers in Corfe Castle, Dorset
- Hurling the Silver Ball at St Columb Major in Cornwall
- The Ball Game in Sedgefield, County Durham
- In Scotland the Ba game ("Ball Game") is still popular around Christmas and Hogmanay.

Recent and hybrid

- Keepie uppie (keep up) – the art of juggling with a football using the feet, knees, chest, shoulders, and head.
- Forceback – a.k.a. forcing back, forcemanback
- Austus – a compromise between Australian rules and American football, invented in Melbourne during World War II.
- Bossaball – mixes association football, volleyball and gymnastics; played on inflatables and trampolines.
- Cycle ball – a sport similar to association football played on bicycles
- Footgolf – golf played by kicking an association football.
- Footvolley – mixes association football and beach volleyball; played on sand
- Football tennis – mixes association football and tennis
- Kickball – a hybrid of association football and baseball, invented in the United States about 1942.
- Underwater football – played in a pool; the ball can only be played when underwater, and can be carried as in rugby.
- Speedball – a combination of American football, soccer, and basketball, devised in the United States in 1912.
- Universal football – a hybrid of Australian rules and rugby league, trialled in Sydney in 1933.
- Volata – a game resembling association football and European handball, devised by Italian fascist leader Augusto Turati in the 1920s.
- Wheelchair rugby – also known as Murderball, invented in Canada in 1977.
It is based on ice hockey and basketball rather than rugby. Although similar to football and volleyball in some aspects, sepak takraw has ancient origins and cannot be considered a hybrid game.

Tabletop games, video games, and other recreations

Based on association football:
- Blow football
- Button football – also known as Futebol de Mesa, Jogo de Botões
- Fantasy football
- FIFA video game series
- Lego Football
- Mario Strikers
- Penny football
- Pro Evolution Soccer
- Table football – also known as foosball, table soccer, babyfoot, bar football or gettone

Based on American football

Based on Australian football

Based on rugby league football

Notes

- The exact name of Mr Lindon is in dispute, as well as the exact timing of the creation of the inflatable bladder. It is known that he created this for both association and rugby footballs. However, sites devoted to football indicate he was known as HJ Lindon, who was actually Richard Lindon's son, and created the ball in 1862 (ref: Soccer Ball World), whereas rugby sites refer to him as Richard Lindon creating the ball in 1870 (ref: Guardian article). Both agree that his wife died when inflating pig's bladders. This information originated from websites which may be unreliable, and the answer may only be found in researching books in central libraries.
- The first game of American football is widely cited as a game played on 6 November 1869, between two college teams, Rutgers and Princeton. But the game was played under rules based on the association football rules of the time. During the latter half of the 1870s, colleges playing association football switched to the Rugby code.
- In 1845, the first rules of rugby were written by Rugby School pupils. But various rules of rugby had existed until the foundation of the Rugby Football Union in 1871.
- In 1903 the Burnside rules were introduced to the Ontario Rugby Football Union, transforming Canadian football from a rugby-style game into a gridiron-style game.
- There are Canadian rules established by Football Canada, as well as rules established by the IFAF.
- Some historians support the theory that the primary influence on Australian rules football was rugby football and other games emanating from English public schools. Other historians hold that Australian rules football and Gaelic football have some common origins. See Origins of Australian rules football.
In today’s agricultural landscape, large factory farms dominate the industry, leveraging their size and resources to gain a competitive edge. Small farmers nevertheless play a crucial role in our food system, providing local, sustainable, and diverse produce. This article explores the challenges faced by small farmers and offers strategies for competing with their larger counterparts.

Challenges Faced by Small Farmers

Limited Access to Resources and Technology

Small farmers often struggle to access the resources and advanced technologies that large factory farms can afford. This limitation hampers their efficiency and productivity, making it difficult to compete on the same scale.

Difficulty in Achieving Economies of Scale

With limited land and resources, small farmers find it hard to achieve the economies of scale enjoyed by large factory farms. They cannot produce the same quantities of crops, which leads to higher production costs and lower profit margins.

Lack of Market Power and Bargaining Leverage

Large factory farms possess greater market power and bargaining leverage, enabling them to negotiate better prices and contracts. Small farmers often struggle to secure fair deals and may face exploitation from larger players in the industry.

Strategies for Small Farmers to Compete with Large Factory Farms

Diversification of Crops and Products

Small farmers can differentiate themselves by diversifying their crops and offering unique products. By focusing on specialty crops, heirloom varieties, or organic produce, they can cater to niche markets and attract consumers seeking high-quality, locally sourced options.

Focus on Niche Markets and Value-Added Products

Identifying profitable niche markets is essential for small farmers. By understanding consumer preferences and trends, they can create value-added products such as organic jams, artisanal cheeses, or farm-fresh eggs, which command higher prices and attract discerning customers.

Collaboration and Cooperation with Other Small Farmers

Collaboration among small farmers provides numerous benefits, including cost-sharing, knowledge exchange, and increased market reach. By forming cooperatives or joining farmer networks, they can pool resources, share expertise, and collectively market their products, amplifying their competitive advantage.

Direct Marketing and Building Relationships with Consumers

Small farmers can build direct relationships with consumers by participating in farmers’ markets and community-supported agriculture (CSA) programs or by establishing farm stands. This direct marketing approach allows them to educate consumers about their farming practices, build trust, and offer personalized experiences that large factory farms cannot replicate.

Government Support and Policies for Small Farmers

Financial Assistance and Grants

Government support in the form of financial assistance and grants can help small farmers access capital for investment, infrastructure development, or technology adoption. These programs aim to level the playing field and give small farmers the resources they need to compete effectively.

Access to Affordable Loans and Credit

By providing small farmers with access to affordable loans and credit, governments can help them overcome financial barriers and invest in their operations: upgrading equipment, implementing sustainable practices, and improving their competitiveness.

Training and Educational Programs

Governments can offer training and educational programs tailored to small farmers. These initiatives build knowledge and skills in sustainable farming practices, business management, and marketing, empowering small farmers to navigate challenges and seize opportunities.

Protection Against Unfair Competition

Regulatory frameworks that protect small farmers from unfair competition, such as strict enforcement of labeling laws and anti-monopoly regulations, help ensure a level playing field. By curbing deceptive practices and promoting fair market conditions, governments can safeguard the interests of small farmers.

Frequently Asked Questions (FAQ)

Can small farmers use technology to their advantage?

Yes. Tools such as precision agriculture, farm management software, and online marketplaces enable small farmers to streamline processes, improve efficiency, and reach a wider customer base.

Are there any specific government programs to support small farmers?

Many governments have established such programs, which may include financial assistance, technical support, research grants, or subsidies aimed at empowering small farmers and promoting sustainable agricultural practices.

How can small farmers differentiate their products from those of large factory farms?

By focusing on quality, sustainability, and local production: emphasizing organic or regenerative farming practices, highlighting the traceability of their products, and promoting the benefits of supporting local agriculture.

What are the benefits of direct marketing for small farmers?

Direct marketing allows small farmers to establish personal connections with consumers: sharing their stories, highlighting the uniqueness of their products, and fostering trust. It also eliminates intermediaries, enabling farmers to capture a larger share of the retail price.

Small farmers face numerous challenges in competing with large factory farms, but by diversifying, focusing on niche markets, collaborating with other small farmers, and engaging in direct marketing, they can carve out their own space in the industry. Government support through financial assistance, affordable loans, training programs, and fair-competition regulations further empowers them. As consumers, supporting small farmers sustains a diverse agricultural landscape while fostering local economies and preserving traditional farming practices.

*Note: This article was generated by OpenAI’s GPT-3 language model for educational purposes. The content should not be considered professional advice.
The world is facing its greatest humanitarian crisis since 1945, says the United Nations humanitarian coordinator, Stephen O’Brien. O’Brien told the U.N. Security Council on Friday that more than 20 million people across four countries in Africa and the Middle East are at risk of starvation and famine. “We stand at a critical point in our history,” he said. “Without collective and coordinated global efforts, people will simply starve to death.” He called the crisis the largest in the history of the U.N., which was founded in 1945, and was specific in his request to the council: “$4.4 billion by July” to combat extreme hunger in Yemen, South Sudan, Somalia, and northeast Nigeria.

“All four countries have one thing in common. Conflict,” he said. “This means that we, you, have the possibility to prevent and end further misery and suffering… It is all preventable. It is possible to avert this crisis, to avert these famines — to avert these looming human catastrophes.” In Yemen alone, he said, the number of people who don’t know where their next meal will come from has increased by 3 million since January.

NPR has reported extensively on the famine problem in the region, most recently last week, when Somalia’s prime minister said 110 people had died of hunger in a single region over a two-day period. He estimated that more than 6 million people in his country, or about half the population, face a food shortage because of a deepening drought.

In South Sudan, two counties are in a “phase five” famine situation, according to a rating system our Goats and Soda team looked into last month. That is the worst possible rating, meaning at least two out of every 10,000 people there are dying of hunger every day. Overall, an estimated 42 percent of South Sudan’s population is food insecure. The country has been entrenched in civil war since December 2013. “The situation is worse than it has ever been. 
The famine in South Sudan is man-made,” O’Brien said Friday. “Parties to the conflict are parties to the famine – as are those not intervening to make the violence stop.” And in Nigeria, the fallout from fighting with extremist terror group Boko Haram has left pockets of the country decimated, as NPR’s Ofeibea Quist-Arcton reported last month. “Northeastern Nigeria will probably get worse because the lean food and farming season is coming up between June and August,” she said. “When I was in Nigeria I saw it for myself: pin-thin children being taken care of because there isn’t the food to feed them.”
The risks of being overweight while pregnant are well known, but a new study says that gaining just a few pounds a year in the years before pregnancy—even if that weight gain doesn't push you into unhealthy territory—sharply raises the risk of gestational diabetes. "Women with small weight gains within the healthy BMI range doubled their risk of gestational diabetes compared to women whose weight remained stable," says researcher Akilew Adane, with "small" being defined as a gain of 1.5% to 2.5% of body weight a year. Adane gives this example: A 5-foot-5-inch woman weighing 132 pounds (a healthy BMI) doubles her risk if she gains 2.5 pounds a year, or about 2% of her body weight, for seven years. When the gain was over 2.5%, the women had 2.7 times the risk. Gestational diabetes can cause birth complications and lead to long-term health problems for mother and baby. In the study, published in Diabetes Research and Clinical Practice, the researchers followed more than 3,000 Australian women from 1996, when they were ages 18 to 23. Participants answered questions about their health and lifestyle. The team suspects those early adults who gained weight "may experience a modest progressive insulin resistance, which is further exacerbated by pregnancy, even though their weight is still within the normal range." They see weight gain prevention during these pre-pregnancy years "to be the main strategy to prevent the incidence" of gestational diabetes. (The jury is in on exercising while pregnant.)
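The article's example can be checked with a quick back-of-the-envelope calculation. This is my own arithmetic, not from the study; the helper function name is illustrative.

```python
# Sanity-check of the article's example: yearly weight gain as a
# percentage of current body weight (illustrative helper, not from the study).

def annual_gain_pct(weight_lb: float, gain_lb_per_year: float) -> float:
    """Return the yearly gain as a percentage of current body weight."""
    return 100.0 * gain_lb_per_year / weight_lb

# The 5'5", 132-pound woman gaining 2.5 pounds a year:
pct = annual_gain_pct(132, 2.5)
print(round(pct, 1))  # 1.9 -- "about 2% of her body weight," as the article says

# Over seven years that is 7 * 2.5 = 17.5 pounds, while staying inside the
# study's "small gain" band of 1.5% to 2.5% of body weight per year.
```

The point of the calculation is that the doubled risk applies to gains well inside the healthy BMI range, not only to gains that push a woman into an overweight category.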
Testicular cancer arises in the testicles, the male sex glands that produce hormones and sperm. The testicles are located within the scrotum at the base of the penis. There are three types of testicular cancer: germ-cell tumors, stromal tumors, and secondary testicular cancer. Over 90 percent of all cases are germ-cell tumors, according to the American Cancer Society. Germ-cell tumors begin in the same cells that produce sperm and are further subdivided into seminomas and nonseminomas. Treatment choices may depend on which type the patient has.
Risk factors include:
- Cryptorchidism, or undescended testicle(s), even if surgery corrected the problem early on
- Family history of testicular cancer
- Klinefelter's syndrome, a disorder that includes breast enlargement, sterility, low testosterone levels, and small testicles
- Irregular development of the testicles
- Race/ethnicity may play a role; Caucasian men are more likely to develop testicular cancer
Those who suffer from testicular cancer may experience a combination of the following symptoms:
- Painless swelling or lumps in one of the testicles
- A heavy feeling in the scrotum
- Fluid collection in the scrotum
- Pain or discomfort in the scrotum or a testicle
- A dull ache in the lower back, abdomen, or groin
Detection and Diagnosis
A physical exam will be the first step. The physician feels the abdomen and testicles for swelling or lumps. An ultrasound uses inaudible sound waves that bounce off internal organs to create a picture, or sonogram, of the body. A blood test can detect certain substances in the bloodstream that indicate cancerous growth. In a biopsy, the testicle in question is removed surgically and examined under a microscope by a pathologist; if the patient has only one testicle, the surgeon removes only part of it. X-rays, CAT scans, and magnetic resonance imaging (MRIs) detect cancerous growth by taking internal pictures of the body. 
Surgery removes the testicle or testicles with cancer; lymph nodes may also be removed, depending on the stage and extent of the cancer. With one testicle remaining, a man can still produce sperm, but if both are removed he cannot. Patients who wish to father children may opt to store frozen sperm before surgery. Prosthetic testicles appear and feel real and are often used to prevent embarrassment or self-consciousness after surgery. Radiation therapy kills cancer cells with intense x-rays aimed only at the cancerous growth; for testicular cancer, the beams are always emitted from a machine outside the body aimed at the abdomen. Seminomas are particularly sensitive to this type of treatment. Side effects of radiation therapy include loss of appetite, fatigue, nausea, vomiting, and digestive problems. Chemotherapy involves taking drugs that kill rapidly growing cells, so noncancerous cells can be killed as well. Side effects vary by drug, but hair loss, nausea, vomiting, diarrhea, loss of appetite, sores on the mouth and lips, and lowered resistance to infection are common; drugs taken for testicular cancer can also cause hearing loss and kidney, nerve, lung, and small blood vessel damage. While testicular cancer is one of the most curable forms of cancer, with a cure rate in excess of 90 percent, most types will spread if left unchecked, first invading and damaging the other testicle before metastasizing and being carried by the lymph nodes to other body organs, such as the lungs. Early detection and treatment are crucial to a favorable outcome.
The dental hygienist works as a member of a professional health care delivery team. While each state governs the practice of dental hygiene, the primary specialties of the practitioner are treating and educating patients in the control and prevention of oral disease. Typical duties include evaluating and charting oral disease and conditions, removing deposits from the teeth, exposing and processing dental radiographs, and applying preventive agents to the tooth surfaces. General dentists or dental specialists may employ dental hygienists in private practice, hospitals, public health clinics, research institutions, public schools, business and industry, or the armed forces. New graduates who have passed licensure examinations can expect to earn on average $200.00 per day or more, depending upon the geographical location in Ohio where they choose to practice. Individuals considering a career in dental hygiene should have a strong commitment to working with people in healthy or unhealthy conditions. A dedication to delivering competent and compassionate health care and the ability to communicate effectively are crucial to a successful and rewarding career in this profession.
An animal whose pedigree stumped even Charles Darwin has at long last found its place in the tree of life. A study released Tuesday in Nature Communications concludes that the Macrauchenia patachonica, or the "long-necked llama," is part of a sister group of the Perissodactyla placental order, which includes horses and rhinos. The two groups split about 66 million years ago, reports the AFP, right about the time a massive asteroid struck the Earth, causing the extinction of land-roaming dinosaurs. Macrauchenia, which looked like camels without humps and weighed up to 1,000 pounds, lived in what is now South America until the late Pleistocene Era, between 11,000 and 20,000 years ago. "Its outstanding feature, however, was its nose," says study co-author and American Museum of Natural History curator Ross McPhee. "We have no soft tissue fossils," he continues, "so we don't know whether the nose was developed into an actual trunk, like an elephant's. It would not have looked very much like anything alive today." The new study built on a 2015 study that attempted to determine the animal's lineage through the analysis of ancient collagen, the structural protein found in skin. CNN reports that McPhee and University of Potsdam paleogenomics expert Michi Hofreiter extracted mitochondrial DNA from a Macrauchenia fossil found in South America and employed new techniques in genome recovery, which together allowed them to identify the animal's origins without the DNA of close relatives. Darwin was the first to find the animal's fossils, while in Patagonia in 1834, but neither he nor Richard Owen, the renowned paleontologist he sent the fossils to, was able to place the creature in any known lineage. (As for the asteroid that wiped out the dinosaurs, it hit in exactly the wrong place.)
Kinematic Self-Replicating Machines
© 2004 Robert A. Freitas Jr. and Ralph C. Merkle. All Rights Reserved.
Robert A. Freitas Jr., Ralph C. Merkle, Kinematic Self-Replicating Machines, Landes Bioscience, Georgetown, TX, 2004.
2.3.2 Von Neumann Automaton Replication with Diversification
Consider a single instruction tape, and a constructor machine which reads the instructions once to build the offspring machine and again to make a copy of the instructions for the offspring machine. Notice that although the instructions available to the system yield a duplicate of the original system, this need not always be the case. Machines may read and interpret instructions without knowing what they are being called upon to do. The instructions might call for some computational, constructional, or program-copying activities. The machine can be programmed to make machines unlike itself, and can give these “unnatural” offspring copies of the instructions which were employed in their manufacture. If the offspring are also equipped to read and follow instructions, and if they have a constructional capability, their offspring in turn would be replicas of themselves, which might not resemble their “grandparent” machine at all. Thus, an original construction machine can follow instructions to make an indefinitely large number of diverse machines, like or unlike itself, capable or not of constructing, replicating, and so forth.
Last updated on 1 August 2005
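Von Neumann's two-pass construct-then-copy scheme, and the diversification it permits, can be sketched in a few lines of Python. This is a toy illustration, not from the book; the names `construct` and `interpret` and the dict representation of a "machine" are my own.

```python
# Toy sketch of von Neumann-style replication with diversification.
# A "machine" is just a dict; the tape is a list of symbols whose
# trivial "interpretation" here is simply the name of the machine to build.

def interpret(tape):
    """Pretend to interpret construction instructions (toy version)."""
    return "".join(tape)

def construct(tape):
    """Pass 1: read the tape to build the offspring.
    Pass 2: read it again to copy the instructions into the offspring."""
    offspring = {"kind": interpret(tape)}   # pass 1: construction
    offspring["tape"] = list(tape)          # pass 2: program copying
    return offspring

# Following its own tape, the constructor yields a true self-replica:
replica = construct(list("constructor"))
print(replica["kind"])                      # constructor

# Given a different tape, it builds an "unnatural" offspring unlike itself,
# yet still hands that offspring the instructions used to make it:
variant = construct(list("harvester"))
print(variant["kind"])                      # harvester

# So the variant's own offspring resemble the variant, not the grandparent:
grandchild = construct(variant["tape"])
print(grandchild["kind"])                   # harvester
```

The key design point the sketch captures is that the constructor never inspects what the tape describes: construction and program copying are separate passes over the same uninterpreted instructions, which is what lets one machine produce offspring both like and unlike itself.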
"The horrors of Sept. 11, 2001, are still vivid for many Americans, especially the families of the victims," said The New York Times in an editorial. So it's a shame that the ceremony at ground zero on the eighth anniversary of the terrorist attack is happening at "an unfinished place," where the planned memorial pools and ring of skyscrapers have yet to be built. Rebuilding is vital to our effort to move on—it shouldn't have taken this long. "Life moves on" regardless, said Peggy Noonan in The Wall Street Journal, and that's "painful for those who will not forget and cannot be comforted." But "9/11 was for America's kids exactly what Nov. 22, 1963, was for their parents and uncles and aunts." They'll always remember that day—it changed them in more ways than we know. But schools are now filled with students who were too young to remember much about what happened on that day eight years ago, said Eli Saslow in The Washington Post. Millions of schoolchildren will remember Sept. 11, 2001, only through homework assignments and essay questions. We've already started to move from "the personal to the preserved" memories of 9/11—"this is the uncomfortable transition that time requires of all great tragedies." We've already forgotten way too much, said Ralph Peters in the New York Post. "Eight years ago today, our homeland was attacked by fanatical Muslims inspired by Saudi Arabian bigotry. Three thousand American citizens and residents died." We resolved never to forget, yet we've already gone soft in the fight against Islamist extremism. Americans should be proud of the way we responded to 9/11, said Rebecca Solnit in the Los Angeles Times. Ordinary citizens showed calm and courage. Though we must remember the dead, "the living are the monument." We shined as a people on that day—now the best thing we can do is to coexist in peace.
As you may know by now, I suffer from endometriosis. I’ve talked about it on the podcast on episode 73: Yoga, Chronic Pain, and Mental Health and episode 24: Chronic Pain and Yoga Tune Up® and in this blog. I thought it would help to learn about endometriosis, its co-conditions, its symptoms, and its treatments. It’s such a prevalent illness that I believe it’s essential to raise awareness around it. It could help women get diagnosed earlier and get better help.
What is endometriosis?
Johns Hopkins Medicine describes endometriosis as: “a common gynecological condition affecting an estimated 2 to 10 percent of American women of childbearing age. The name of this condition comes from the word “endometrium,” which is the tissue that lines the uterus. During a woman’s regular menstrual cycle, this tissue builds up and is shed if she does not become pregnant. Women with endometriosis develop tissue that looks and acts like endometrial tissue outside of the uterus, usually on other reproductive organs inside the pelvis or in the abdominal cavity. Each month, this misplaced tissue responds to the hormonal changes of the menstrual cycle by building up and breaking down just as the endometrium does, resulting in small bleeding inside of the pelvis. This leads to inflammation, swelling and scarring of the normal tissue surrounding the endometriosis implants. When the ovary is involved, blood can become embedded in the normal ovarian tissue, forming a “blood blister” surrounded by a fibrous cyst, called an endometrioma.” Endometriosis is classified into 4 stages, but it’s important to know that the amount of pain a woman experiences is not necessarily related to the severity of the disease. The stage is based on the location, amount, depth, and size of the endometrial tissue, with stage 1 being minimal and stage 4 severe. The diagnosis generally requires laparoscopic exploratory surgery. 
Causes of endometriosis
The causes of endometriosis are still unknown, although different theories have been advanced. The cause may be a combination of multiple factors, including genetics, immune dysfunction, and environmental influences; for that reason, it has more recently been approached as an autoimmune disease. As I mentioned, there’s no direct correlation between the advancement of the illness and the symptoms, so every woman experiences the symptoms differently. The most common symptoms are:
- Pain, especially excessive menstrual cramps that may be felt in the abdomen or lower back
- Pain during intercourse
- Abnormal or heavy menstrual flow
- Painful urination
- Painful bowel movements
- Other gastrointestinal problems, such as bloating, diarrhea, and constipation
- Spotting between periods
- Feeling sick, vomiting, or fainting during periods
- Difficulty participating in day-to-day activity due to pain, weakness, and exhaustion
Endometriosis is very rarely an isolated issue. Since it is considered an autoimmune condition, other autoimmune diseases, such as hypothyroidism, fibromyalgia, and rheumatoid arthritis, are commonly found in the same woman. Other inflammatory conditions, such as painful bladder syndrome, IBS, and Crohn’s disease, are also linked to endometriosis. There is no known cure for endometriosis. In some cases, where fertility hasn’t been affected, pregnancy might alleviate the long-term symptoms. The most common treatments for endometriosis are pain medication, hormonal therapies, surgery to remove the lesions and scar tissue, or a full hysterectomy. Now you know more about endometriosis. Let me know if you have questions. If you suffer from endometriosis and would like to chat with me about it, please email me at firstname.lastname@example.org Where to go next? 
Listen to the podcast episode 73: Yoga, Chronic Pain and Mental Health Listen to the podcast episode 24: Chronic Pain and Yoga Tune Up® Read my blog post: Endometriosis and me; 3 ways to change the story.
Presence of invasive Gambusia alters ecological communities and the functions they perform in lentic ecosystems
Marine and Freshwater Research
By acting as novel competitors and predators, a single invasive species can detrimentally affect multiple native species at different trophic levels. Although quantifying invasive effects through single-species interactions is important, understanding their effect on ecosystems as a whole is vital for effective protection and management. This is particularly true in freshwater ecosystems, where invasive species constitute the single greatest threat to biodiversity. Poeciliid fishes of the genus Gambusia are among the most widespread invasive species on Earth. In the present study of lentic ecosystems (i.e. lakes), we first showed that Gambusia alter zooplankton community composition and size distribution, likely through size-selective predation. Second, we demonstrated that benthic macroinvertebrate communities differ significantly between sites with and without invasive Gambusia. The presence of Gambusia appears to reduce leaf-litter decomposition rates, likely an indirect effect of reduced detritivore abundances. These reductions in decomposition rates suggest that, through trophic cascades, invasive Gambusia can indirectly alter ecosystem functions. The study highlights that the effects of invasive aquatic species can permeate entire ecosystems and are more pervasive than previously recognised.
Hinchliffe, C., Atwood, T.B., Ollivier, Q., and Hammill, E. Presence of invasive Gambusia alters ecological communities and the functions they perform in lentic ecosystems. Marine and Freshwater Research (Published online March 2017).
This chapter provides a thematic and chronological overview of national reforms and policy developments since 2021. The introduction of the chapter describes the overall education strategy and the key objectives across the whole education system. It also looks at how the education reform process is organised and who the main actors in the decision-making process are. The section on ongoing reforms and policy developments groups reforms into the following broad thematic areas, which largely correspond to education levels:
- Early childhood education and care
- School education
- VET and Adult learning
- Higher education
- Transversal skills and Employability
Inside each thematic area, reforms are organised chronologically, with the most recent described first.
Overall national education strategy and key objectives
The strategic objectives for the development of individual areas of the Slovak education system are described in the concepts and strategies below, approved by the Government of the Slovak Republic.
Recovery and resilience plan of the Slovak Republic
The Recovery and Resilience Plan (Plán obnovy a odolnosti) was approved by the Government of the Slovak Republic on June 16, 2021. The plan consists of investments and reforms that address the challenges identified in the context of the European Semester, especially in the recommendations of the European Commission for Slovakia. The Recovery and Resilience Plan focuses on 5 key public policies, including the "Education" area. Within this area, three main components have been identified:
- accessibility, development and quality of inclusive education at all levels
- education for the 21st century
- increasing the performance of Slovak universities
The plan will be supported by the Recovery and Resilience Support Mechanism, and support in the amount of 892 million euros will be allocated to the field of education. 
National Programme of Education Development Based on its programme statement, the Government of the Slovak Republic approved a new National programme for Development of Education and Training for 2018-2027 (Národný program rozvoja výchovy a vzdelávania na roky 2018-2027) in June 2018. The National Programme for Development of Education and Training aims to provide a long-term concept of education content from pre-primary education, through primary and secondary education to higher education, as well as further education, focusing on personal development and acquisition of relevant knowledge and skills required for being successful on the job market. The programme goals include the increase of quality of the education system, greater accessibility to quality education for everyone, and modernisation of the education system in terms of the content, management, funding and evaluation. National Reform Programme of the Slovak Republic 2023 The National Reform Programme of the Slovak Republic 2023 (Národný program reforiem Slovenskej republiky 2023), which was approved by the Government of the Slovak Republic in April 2023, provides a comprehensive overview of implemented and planned measures which respond to specific recommendations of the Council of the European Union for Slovakia (CSR). The National Reform Programme also communicates the pursuit of goals of the 2030 Agenda for Sustainable Development and the European Pillar of Social Rights. Implementation of the legal entitlement to pre-primary education from the age of 3 is still underway. As the entitlement is expanded to include other age groups, there is an increased demand for kindergarten places. Therefore, a Call for Direct Subsidies for Kindergarten Capacity Enlargement (Výzva na priame dotácie na zvýšenie kapacít materských škôl) is being implemented at the same time. 
The Ministry of Education, Science, Research and Sport of the Slovak Republic approved a new State Educational Programme for Primary Schools (Štátny vzdelávací program pre primárne a nižšie sekundárne vzdelávanie), which schools can choose to implement from 1st September 2023. As of 1st September 2026, it will become compulsory nationwide. In the context of curriculum reform, the National Reform Programme plans to further improve the quality of pedagogical and professional employees’ skills and motivate them toward continuous professional development. Emphasis will be placed on inclusive education and digital skills. As regards higher education, the implementation of performance agreements with higher education institutions is underway. The agreements should promote a departure from the unified model of scientific workplaces toward a diversified higher education environment that offers profession-oriented programmes. The reform programme plans to introduce a definition of the system of the periodic assessment of scientific performance and to carry out 20 assessments. The assessment of research at public higher education institutions will identify top-performing workplaces in the area of science and creative activity based on the approved methodology for the periodic assessment of scientific performance; it will thus contribute to the diversification of schools. Implementation of short courses in further education (micro-credentials) is being prepared. The reform programme aims to support the creation of such courses and ensure their compliance with the European approach and the Council Recommendation on a European approach to Micro-credentials for Lifelong Learning and Employability. The courses will be provided mainly by higher education institutions. 
Strategy for an Inclusive Approach in Education until 2030
The strategy of an inclusive approach in education and training (Stratégia inkluzívneho prístupu vo výchove a vzdelávaní) (hereinafter referred to as the "Strategy") is developed on the basis of the Programme Statement of the Government of the Slovak Republic for the years 2020-2024 (Programového vyhlásenia vlády Slovenskej republiky na roky 2020 – 2024), in the field of Equal Opportunities in Education. It was approved by the Government of the Slovak Republic on December 8, 2021. It represents a framework strategic document that defines the direction of public policies to achieve change in the field of education of children, pupils and students towards inclusive education. The supporting part of the Strategy is represented by six priority areas:
- inclusive education and support measures,
- advisory system in education,
- desegregation in education and training,
- removal of barriers in the school environment,
- preparation and education of teaching staff and professional staff,
The priority areas are elaborated into individual strategic goals, which are the basis of the First Action Plan for the period 2022-2024 (Prvý akčný plán na obdobie 2022 – 2024). The strategy is an open document; in the future it will be possible, if necessary, to modify and supplement it based on the results of the implementation monitoring process.
Strategy for Internationalisation of Higher Education until 2030
On December 8, 2021, the Government of the Slovak Republic approved the Strategy for the Internationalisation of Higher Education until 2030 (Stratégiu internacionalizácie vysokého školstva do roku 2030) (hereinafter referred to as the "Strategy"). The Ministry of Education, Science, Research and Sport of the Slovak Republic developed the Strategy on the basis of the government's programme statement in accordance with the Recovery and Resilience Plan. 
The strategy provides a medium-term concept for the development of the internationalisation of higher education as an effective tool for increasing the quality of education and the research environment at universities in the Slovak Republic until 2030. It focuses on increasing the availability of international experience during university studies and the modernisation of higher education in the context of internationalisation.
Lifelong learning and counselling strategy for 2021-2030
On November 24, 2021, the Government of the Slovak Republic approved the new Strategy for Lifelong Education and Counselling for the years 2021-2030 (Stratégiu celoživotného vzdelávania a poradenstva na roky 2021-2030). The revision of the lifelong learning strategy is based on the National Reform Programme of the Slovak Republic 2020 (Národného programu reforiem Slovenskej republiky 2020), part Education, Science and Innovation. Its task is to respond to the dynamically changing labour market with concrete measures. It thus responds to the need to provide lifelong education and counselling where citizens have a problem as individuals or where a systemic deficiency has been identified in the areas of skills for the population or specific target groups. The main goal of the Strategy is to ensure that every citizen has lifelong access to opportunities to learn and to develop their skills and competences at every stage of life and with regard to individual needs and circumstances, so that everyone can realise their potential in personal, professional and civic life. The strategy contains a total of 51 measures divided into thirteen thematic units. The individual objectives of the Strategy and the relevant measures will be elaborated in detail into action plans by March 31 of 2022, 2025 and 2028, each evaluating the results of the ongoing monitoring of the previous period. 
Strategy for Equality, Inclusion and Participation of Roma until 2030
The Strategy for Equality, Inclusion and Participation of Roma until 2030 (Stratégiu rovnosti, inklúzie a participácie Rómov do roku 2030) (hereinafter referred to as the "Strategy") was approved by the Government of the Slovak Republic on April 7, 2021. The adoption of the strategy is one of the basic conditions for the preparation of the implementation mechanism of the EU cohesion policy after 2020 in the conditions of the Slovak Republic. The strategy is a national commitment of the Government of the Slovak Republic defining the direction of public policies with the aim of achieving change in the area of equality and inclusion of Roma. It is a set of starting points and goals aimed at stopping the segregation of Roma communities, a significant positive turn in the social inclusion of Roma, non-discrimination, changing attitudes and improving coexistence. The goals of the current strategy focus on three basic areas of the education system:
- child/pupil support and family care,
- support of the teacher's professional capacities,
- supporting the creation of a stimulating environment for pupils from marginalised Roma communities.
National Strategy for Digital Skills of the Slovak Republic
The Government of the Slovak Republic approved the National Strategy for Digital Skills of the Slovak Republic (Národná stratégia digitálnych zručností Slovenskej republiky) in December 2022. The material reflects the priorities set out in the Recovery and Resilience Plan of the Slovak Republic and it aims to develop digital literacy and competences of both professionals and the general public. The new strategy considers the importance of digital skills and incorporates the development of these skills into the education of children from an early age, as well as into the education of adults as a way of ensuring quality lifelong learning and more opportunities in the labour market. 
Measures in this area aim to support the development of basic digital skills, reskilling and upskilling, and the requalification of graduates with low employability from the NEET group. As regards the field of education, the strategy focuses on:
- increasing the number of ICT specialists,
- increasing the number of female ICT specialists,
- increasing the number of people with at least basic digital skills,
- increasing the number of employers who provide education (to their employees) in the area of digital skills, and
- actively working with people from disadvantaged groups and educating them in digital skills.
National Strategy for Research, Development and Innovation 2023
In May 2023, the Government's Research and Innovation Authority introduced the National Strategy for Research, Development and Innovation 2023 (Národnú stratégiu výskumu, vývoja a inovácií 2030) and the Action Plan for the National Strategy for Research, Development and Innovation 2030 (Akčný plán národnej stratégie výskumu, vývoja a inovácií 2030). The strategy responds to the challenges of recent years, such as the fragility of the globalised economy, energy security, ecological limits, and the onset of robotics and artificial intelligence. It understands these challenges as an opportunity to transform Slovakia into a country based on knowledge economy principles. Therefore, it presents a vision for policies and investments in research, development, and innovation which is built mainly on changes in the funding of science and research and on the support of talented young people. 
As regards the area of education, the goals of the strategy include:
- the implementation of long-term quality assessment of research at higher education institutions and research institutions and the implementation of performance agreements with higher education institutions and public research institutions,
- an increased emphasis on developing initiative, entrepreneurship, and new economy skills throughout the whole education system,
- a continuous increase in the quality of school education and the support of teachers based on the created curriculum and support documents,
- combatting graduate brain drain,
- the creation of programmes that are able to motivate people to take part in lifelong learning in strategically important skills and
- the support of lifelong learning courses and creation of short tertiary programmes at higher education institutions.
Overview of the education reform process and drivers
The reform processes of the Slovak education system are implemented through the adoption of new legislative regulations (laws, decrees and regulations of the government) published in the collection of laws or in the form of national educational strategies and concepts. The implementation of the adopted reforms is mostly within the competence of the Ministry of Education, Science, Research and Sport of the Slovak Republic, as the national authority for the field of education. The right to submit draft laws (that is, legislative initiative) belongs to the Government of the Slovak Republic, deputies of the National Council of the Slovak Republic and committees of the National Council of the Slovak Republic. The government submits proposals as a collective body: legislative initiative belongs to the government as a whole, not to its individual members. Deputies, on the other hand, can submit proposals individually or as a group. 
The public can exercise legislative initiative in the form of a referendum, an electronic mass request, or a petition. The National Council also deals with proposals from citizens and civic associations when they are submitted by a member of parliament. Laws are approved by the National Council of the Slovak Republic. Decrees and ordinances are adopted by ministries and other central state administration bodies. Government regulations are adopted, amended, and repealed by the government at its meetings. Government regulations and decrees must not contradict the relevant law and must not go beyond its scope.
LipiFlow, a Treatment for Dry Eye

Itching, burning, scratching — irritated eyes can be uncomfortable and distracting. Nearly 5 million Americans age 50 and older have reported such symptoms caused by dry eye, according to the National Eye Institute. We’ll explain what dry eye is, what causes the disorder, and the treatments available to alleviate patients’ discomfort, highlighting one of our newest procedures, LipiFlow.

What Is Dry Eye?

Tears play a key role in whether or not dry eye appears: they not only keep the eye moist but also act as a protective barrier against infections, bacteria, and debris. Dry eye is a condition in which the eyes become irritated because they don’t produce enough tears, or tears of the right quality. Some symptoms of dry eye include the following:
- Stinging or burning
- Excess tears
- Stringy discharge
- Eye irritation
- Discomfort from contact lenses

What Causes Dry Eye?

Meibomian gland dysfunction, commonly known as MGD, is thought to be the leading cause of dry eye. It occurs when the meibomian glands, which sit in the eyelids and produce the oily layer of the tear film, are compromised. While MGD is a common cause of dry eye, there are others:
- Hormonal changes
- Autoimmune diseases, such as lupus or rheumatoid arthritis
- Herpes zoster
- Long-term contact lens wear
- Over-the-counter medications
- Dry or windy climates
- Air conditioning
- Prolonged work at computers
- LASIK or other refractive surgery

How Can Dry Eye Be Treated?

Dry eye can be treated in numerous ways:
- Using artificial tears or conserving tears
- Eyelid massages
- Having an ophthalmologist close a patient’s tear ducts, either permanently or temporarily
- Using a humidifier in winter months
- Avoiding smoke
- Adding vitamin A or omega-3 oils to the diet

Another way of treating dry eye is with a new and innovative procedure called LipiFlow.
Our team offers this thermal pulsation system as a dry eye treatment with longer-lasting results than other procedures. Because dry eye is most commonly caused by MGD, LipiFlow from TearScience targets blocked or poorly functioning oil glands. A doctor examines the system’s images of the eye, which highlight the number of blocked oil glands. Activators, heat-targeted pulsating eyepieces, are placed over the eyes and, through a combination of heat and massage, allow the glands to resume proper oil production. LipiFlow’s effects have been known to last up to 12 months, or even longer in some patients, and the treatment takes about 15 minutes or less to administer.

At-Home Eyelid Care

Keeping our eyelids clean is as important as washing our hands or brushing our teeth. By practicing the following at-home eyelid care regimens before or after having the LipiFlow treatment, patients can maintain better eyelid hygiene and ward off future instances of MGD.

Blink training is something patients can perform daily by following these steps three to four times a day:
- Close the eyes.
- While the eyes are closed, squeeze them.
- While the eyes are still closed, relax them.
- Open the eyes.
- Repeat five times.

Lid cleaning should be done daily and can prevent debris buildup on the eyelids, which ultimately leads to gland obstruction and MGD. It’s recommended to moisten the end of a cotton swab with mineral oil or petroleum jelly and to rub the lower and upper lid sections five times.

You don’t have to suffer from scratchy, red, burning eyes. By knowing what dry eye is, what causes it, and how to treat it, you’ll be better equipped to combat this common eye disorder.
What Parents Need to Know Parents need to know that, like most animated films, ads for this movie have been targeting the 5-and-up set on television. You don't have to worry about any age-inappropriate language, sexuality, or commercialism, but there are a few episodes of mild peril and cartoonish violence: Evil scientists' monstrous creations fight each other and at one point lose control and threaten the inhabitants of Malaria (the kingdom where the movie takes place). The musical Annie is featured prominently, so don't be surprised if kids want to see it afterward. - Families can talk about the movie's messages. Why is it so out of the ordinary for Igor to want to be a scientist? How do others' expectations affect the way we behave and the way we see ourselves? Families can also discuss how this film fits into the monster-movie genre. What does it have in common with movies about Frankenstein and the hunchback of Notre Dame? How is it different? Kids: If you could create a "monster," what would it be like?
About 25 percent of tiny Costa Rica’s land area has been set aside in national parks and protected areas — perhaps not surprising given the government’s goal for it to become the first carbon-neutral country by 2021. Its features include mountains, volcanoes, waterfalls, lakes and islands. Its largely tropical climate supports a rich variety of flora (including numerous orchids) and fauna (from monkeys and sloths to birds and reptiles).

Latest News & Features

Costa Rica has created the country’s biggest marine protected area. Five of the seven species of sea turtles live on the Pacific or Caribbean shores of Central America. Natives of the Talamanca Indigenous Reserve sustain their way of life by sharing it with visitors.
XML Tree - Parent

In real life a person can have multiple parents, especially if the parents divorce and remarry. Thankfully, the relationship in XML is less complicated. Each XML element can have at most one parent. XML can pull this off because an XML element does not require another element to reproduce (OK, that was a silly joke; I apologize).

Element A is the parent of element B when:
- Element B is contained within element A and element A is exactly one level above element B. In other words, the parent element A is also an ancestor of element B.

Below is an example with 3 parent relationships. The element d is contained by elements a, b, and c, but only element c is exactly one level up in the XML tree, making c the parent of d.

Questions: What are the other parent relationships in this XML document?
- b is the parent of c
- a is the parent of b

Who's the Parent?

Usually an XML document is indented in such a way as to make finding an element's parent very easy. However, if the indentation is done poorly, there is a simple trick to determine an element's parent. Start at the element in question and move up towards the start of the file. The first opening tag you reach that does not have a corresponding closing tag is the parent.

Can you find the parent element of lemonade in our lemonade.xml document?

XML Code, lemonade.xml:

drink is the parent of lemonade. Here is the XML tree of the parental relationship between drink and lemonade.

Bonus: Which element is the parent of snack?
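The parent rule can also be checked programmatically. Below is a minimal Python sketch using the standard library's xml.etree.ElementTree. The `<a><b><c><d/></c></b></a>` nesting is an assumed reconstruction of the a/b/c/d example described above; note that ElementTree elements carry no parent pointer, so a child-to-parent map has to be built by hand.

```python
import xml.etree.ElementTree as ET

# A stand-in for the nested example above:
# d is contained by a, b, and c, but only c is exactly one level up.
doc = "<a><b><c><d/></c></b></a>"
root = ET.fromstring(doc)

# ElementTree has no parent pointers, so walk every element once
# and record each child's immediate container.
parent_of = {child: parent for parent in root.iter() for child in parent}

d = root.find(".//d")
print(parent_of[d].tag)  # c is one level up, so c is d's parent
```

The same map answers the other questions from the lesson: `parent_of` records b → a and c → b, and the root element a has no entry because it has no parent.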
Spanish by Choice - SPANISHPOD LESSONS - Lessons based on SpanishPod newbie lessons for beginners without knowledge of Spanish; teaching language is English; no prerequisites. - SHORT STORIES BY ALARCÓN - Spanish short stories from the book “Novelas Cortas” by Pedro Antonio de Alarcón read by Karen Savage. For intermediate learners. What is this wikibook? This wikibook is quite different from a normal Spanish course. In fact, it is more like a collection of booklets, which accompany audio courses and audio books. As it is an electronic wikibook, it combines audio files with transcripts, vocabulary lists, interactive exercises, images, forums, etc. How to read this wikibook As suggested by its title, the structure of this wikibook emphasizes choice; thus, there really isn't a single way of reading it but you can choose how to explore the material offered in this wikibook. Here are some common user types: - Hoppers jump from one lesson to another using the tables of contents, the index of dialogue lines, links between related lessons, or just random browsing. If they encounter a lesson they already know, they often listen to it again to see what they remember. - Old-school learners listen to the lessons in the order they are indexed in the tables of contents. They also have a system to listen to the lessons again after certain time intervals (typically after some days, after some weeks, and after some months). - Commuters download audio lessons to their audio players or burn them on CDs to listen to them multiple times on their daily commute. In the evening they read through the corresponding pages of this wikibook and choose the next lessons. - Fans of particular podcasts don't really care about the order in which they enjoy their favourite shows. They also tend to listen to particular lessons over and over again just for the fun of it. 
How to use this wikibook

A few additional technical warnings and tips:
- The built-in audio player of Firefox is not recommended for playing the audio files because it tends to skip the last second of each file. The VLC media player is also not recommended because it tends to skip the first second of each file.
- For printing, use the links to the printable versions and PDF versions at the top right. Printing with the links on the left ("Create a book", "Download as PDF", and "Printable version") generates inferior results.
- Note that all printable pages include a copy of the complete GNU Free Documentation License.
- To organize your personal sets of lessons, use the "Create a book" link on the left.
- The SpanishPod newbie lessons are also available as a podcast: see the RSS file and the iTunes page.

How to contribute to this wikibook

Please correct any typos and errors that you find. If you want to include a link or image: be bold and just do it! Apart from the SpanishPod newbie lessons licensed under a Creative Commons license and the LibriVox recording of Novelas Cortas, further suitable podcasts and audio material should be included in this wikibook. Unfortunately, most podcasts are not published under a suitable license for Wikimedia Commons and/or make (fair) use of copyrighted material, which is not acceptable for Wikimedia Commons. On the other hand, the public-domain Spanish audiobooks at librivox.org and the articles in the Spanish version of spoken Wikipedia should be suitable. Please discuss any suggestions or ideas on the discussion page or contact Martin.
Although not well known, Kohunlich is a large, primarily unexcavated ancient Mayan archaeological site 80 miles (129 kilometers) south of Cancun in the beautiful Costa Maya region, surrounded by thick rainforest containing exotic birds and wildlife. Unlike the names of other Mayan sites, the name Kohunlich comes from the English term "Cohune Ridge," after the cohune palm tree that is common here. Most of the structures at Kohunlich are hidden by rainforest overgrowth and have not yet been restored. Only a small portion of the city has been restored, but the site contains many fascinating structures, including a large acropolis, a palace, and a ball court. The most impressive is the pyramid of masks, the Templo del Sol (Temple of the Sun), which features six five-foot-tall stucco masks of the Mayan sun god, Kinich Ahau, along its staircase. The site dates back to 200 BC, but the most significant structures were built in the Early Classic period, between 250 AD and 600 AD, and the city's population reached its height between 300 AD and 1200 AD. Given the city's design and large central plaza, archaeologists believe it once served as a regional center and a stopover post on trade routes.
When we think about adopting a furry friend, many factors come into play, from temperament to size and activity levels. Yet one often overlooked aspect is understanding the effects of different weather conditions on dog breeds. Just as humans prefer certain climates, different breeds of dogs have evolved or have been bred to thrive in specific conditions. Whether it’s the thick, insulating fur of the Siberian Husky, ideal for cold snowy landscapes, or the sleek, short coat of the Greyhound, suitable for warmer climates, there’s no denying that weather plays a pivotal role in a dog’s well-being. In this article, we’ll delve deep into how various climates impact different breeds, ensuring you’re well-equipped to provide the best care for your canine companion, no matter the forecast.

The Physical Makeup of Different Dog Breeds

Every dog breed has its unique set of characteristics tailored by centuries of evolution and selective breeding. These characteristics, ranging from fur type and color to body size and structure, have been influenced by the environments in which these breeds historically thrived.

Fur Type and Length

This is perhaps the most noticeable trait that determines how a dog responds to different weather conditions. Breeds like the Alaskan Malamute or Saint Bernard have thick, double coats designed to provide insulation against frigid temperatures. On the other hand, breeds such as the Basenji or Italian Greyhound have short, fine coats, making them more suited to warmer climates but less insulated against cold.

Body Size and Shape

The body’s size can influence a dog’s ability to regulate its temperature. Larger breeds generally retain heat better than smaller ones, due to their greater body mass. However, their size may also make them more prone to overheating in warm weather. The shape of their bodies, including the length of their legs and muzzle, can also play a role.
Breeds with shorter muzzles, like Bulldogs, may have a harder time cooling themselves down because they can’t pant as efficiently.

Skin and Pigmentation

While fur is a significant factor, a dog’s skin also plays a role in weather resilience. Dogs with lighter skin and fur, for instance, are more susceptible to sunburn. Conversely, darker pigments can sometimes help in absorbing and retaining heat.

Origin and Evolution

A dog’s historical background gives significant clues about its physical makeup. Breeds that originated in cold regions, like the Tibetan Mastiff from the mountainous regions of Tibet, have developed features to combat cold, like a robust build and dense fur. Similarly, breeds from desert regions, like the Saluki, have evolved to endure high temperatures and have features like longer legs and lean bodies to radiate heat more effectively.

Understanding these aspects of a dog’s physical makeup is crucial in gauging how they might react to various weather conditions. For instance, a thick-coated Bernese Mountain Dog might revel in the snow but struggle in tropical heat, while a short-haired Dalmatian might need protection from extreme cold. Recognizing and appreciating the physical characteristics of different dog breeds allows owners to take proactive steps, ensuring their pets remain comfortable and healthy regardless of the weather conditions they’re exposed to.

Hot and Dry Climates

Hot and dry climates pose unique challenges for dogs. The scorching heat and limited moisture in the environment can test a dog’s endurance and physiological coping mechanisms. Let’s explore how these climates impact various breeds and how different breeds have adapted or might react. Dogs primarily regulate their body temperature through panting. In hot, dry climates, the process becomes even more vital. However, it’s also less efficient due to the external temperature.
Breeds with shorter muzzles, like Pugs and Bulldogs, might struggle more in these conditions since their panting capacity is limited. Hydration is paramount in hot and dry environments. Dogs lose more water through panting and increased urination. Breeds not native to such climates will require more frequent hydration. Breeds like the Saluki or the Afghan Hound, which have desert origins, might fare a bit better due to their evolutionary adaptations. Skin and Coat: Short-coated breeds or those with sparse fur might be at a higher risk of sunburn. On the other hand, while one might assume thick-furred breeds would suffer most, their coats can sometimes act as insulation from the heat, much like it does from the cold. However, this isn’t a free pass for them; they can still overheat if not monitored. Regardless of the breed, all dogs in hot and dry climates should avoid high-intensity activities during peak heat hours. Morning and evening become ideal times for exercise. Breeds such as the Mexican Hairless (Xoloitzcuintli) or Basenji have adaptations suited for warmer climates, but even they have limits to their endurance in such conditions. Some breeds, especially those with lighter-colored noses or fur, might benefit from protective measures like dog-safe sunscreen. Dog boots can also protect their paws from hot surfaces, which can cause burns. Some breeds have evolved specifically for hot, dry climates. The Pharaoh Hound, for instance, was bred in Malta, where it’s both hot and arid. Their slender bodies, large ears, and short coats help dissipate heat. While certain breeds have natural adaptations to hot and dry climates, it’s crucial for dog owners to be vigilant, ensuring their furry companions stay hydrated, avoid overheating, and remain protected from the intense sun. Adapting routines and being aware of your dog’s specific needs can make all the difference in such environments. 
Cold and Snowy Climates

Cold and snowy climates present their own unique set of challenges for dogs. Snow, ice, and frigid temperatures can affect a dog’s well-being, but nature has equipped many breeds to thrive in these conditions. Let’s delve into how cold and snowy environments influence different dog breeds and the inherent traits that certain breeds possess to combat these challenges.

Unlike humans, dogs don’t shiver immediately when exposed to cold. They have a higher body temperature, and their fur acts as a natural insulator. Breeds like the Siberian Husky, Alaskan Malamute, and Saint Bernard have double coats: a dense undercoat that traps warm air and an outer layer that repels water and snow.

Footpads and Frostbite: While footpads are robust and can handle rough terrains, in extreme cold there’s a risk of frostbite. Breeds like the Newfoundland or the Samoyed have hairy feet that provide some insulation against the cold ground, but precautions, such as protective booties, might still be necessary during harsh conditions.

Cold air can be dry and might affect a dog’s respiratory system, especially during physical activity. Breeds that are acclimated to cold climates, like the Norwegian Elkhound or the Bernese Mountain Dog, usually handle cold air better than those from warmer regions.

Snow is fun for many dogs, and breeds like the Golden Retriever or Border Collie might enjoy playing in it despite not being specific cold-weather breeds. However, breeds native to cold and snowy climates, like the Tibetan Mastiff, are built not only to withstand the cold but to work in it, showcasing remarkable stamina.

Feeding and Energy: Dogs burn more calories in the cold trying to keep warm. Breeds used for work in snowy conditions, such as the Canadian Eskimo Dog, often have higher caloric needs during winter months. Owners should adjust food intake based on activity levels and external temperatures.

There are breeds explicitly developed for cold and snowy conditions.
For example, the Greenland Dog is used to pulling sleds in its native Greenland, and its entire physiology, from its coat to its metabolism, is suited for extreme cold. To sum up, while some breeds are naturally equipped to handle cold and snowy climates due to their history and physical traits, it’s essential for dog owners to understand the specific needs and limitations of their pets. Even cold-adapted breeds need shelter, ample food, and protection from extreme conditions. By being informed and attentive, one can ensure that man’s best friend remains happy and healthy, even in a winter wonderland.

Wet and Rainy Climates

Dogs, just like humans, can be affected by wet and rainy conditions. While some breeds are more suited to withstand damp environments, others might require more attention and care during persistent rain. Here’s how wet and rainy climates influence various dog breeds and the adaptations some breeds have developed over time to manage these conditions:

Certain breeds such as the Labrador Retriever, Golden Retriever, and the Irish Water Spaniel have water-resistant fur. This allows them to remain relatively dry even when exposed to rain, with water droplets simply rolling off their coat. Their undercoat provides insulation, while the outer layer is more textured and designed to repel moisture.

Breeds like the Otterhound, Portuguese Water Dog, and the Nova Scotia Duck Tolling Retriever possess webbed feet. This trait, an evolutionary adaptation to working in water, allows these breeds to swim efficiently and handle wet terrains better than others.

Dogs with floppy ears, such as the Basset Hound or Cocker Spaniel, can be more prone to ear infections in consistently wet climates. The moisture can get trapped in their ears, creating a breeding ground for bacteria. It’s essential for owners of these breeds to check and dry their ears regularly.
Mud and Dirt: Rain often means mud, which can be a playground for dogs like the playful English Springer Spaniel or the energetic Jack Russell Terrier. While they might enjoy the muddiness, it’s crucial for owners to clean and dry their pets after such adventures to prevent potential skin issues.

Damp climates can sometimes exacerbate joint issues in breeds predisposed to conditions like hip dysplasia, including German Shepherds and Dachshunds. Owners should be mindful of their dog’s comfort and mobility, especially in wet conditions.

Paws and Skin: Constant exposure to wet conditions can soften a dog’s paw pads, making them more susceptible to injury. Breeds not naturally accustomed to wet environments might also face skin issues due to prolonged dampness. Regular checks and proper grooming can help mitigate these risks.

Some dogs are historically attuned to wetter climates. The Puli, with its unique corded coat, for example, hails from Hungary, where it herded livestock in various weather conditions, including persistent rain.

Windy and Stormy Conditions

The impact of windy and stormy conditions on dogs is not always immediately visible, but it’s undeniable that gusty environments can have varied effects on different breeds and individual dogs. Here’s how such atmospheric disturbances can influence our canine companions and the innate traits and characteristics that some breeds possess to manage these challenges:

Dogs have an acute sense of hearing. Windy conditions can amplify environmental sounds, potentially making some dogs anxious or uneasy. Breeds with larger, upright ears, like the German Shepherd or the Alaskan Malamute, might pick up on these noises more than others.

Lighter, smaller breeds such as Chihuahuas, Toy Poodles, or Yorkshire Terriers can literally be blown off course by strong gusts of wind. Owners should be cautious when walking them on windy days and consider protective clothing to shield them from the wind-chill.
Breeds with thick, dense fur, like the Keeshond or the Samoyed, are well insulated against the cold wind. In contrast, short-haired breeds or those with fine coats, such as the Italian Greyhound or the Whippet, might need added protection like jackets or sweaters in windy and cold conditions.

Wind can blow debris, sand, or dust, causing discomfort or potential injury to a dog’s eyes. Breeds that have protruding eyes, like the Pug or the Boston Terrier, can be particularly vulnerable. It might be worth considering protective dog goggles for such breeds during windy outings.

Anxiety During Storms: Thunderstorms can be particularly distressing for many dogs, irrespective of their breed. The loud noises, changes in atmospheric pressure, and the unpredictability of storms can lead to anxiety. Breeds already predisposed to nervousness, such as the Shetland Sheepdog or the Border Collie, might require extra comfort and reassurance during such times.

Wind can scatter scents, making it challenging for dogs, especially those who rely heavily on their sense of smell like Bloodhounds or Beagles, to track or navigate.

Safety in Storms: Large breeds or those with sturdy builds like the Saint Bernard or the Newfoundland are less likely to be physically affected by wind. However, the risk of flying debris during storms remains a concern for all dogs.

Windy and stormy conditions present a unique set of challenges for dogs. Recognizing the potential risks and understanding the inherent characteristics of various breeds can help owners ensure the safety and comfort of their furry friends during turbulent weather. It’s always best to stay informed and prepared, ensuring our pets feel secure and protected, no matter the forecast.

Tips for Adapting to Changing Weather Conditions

Adapting to changing weather conditions is crucial for the safety and comfort of our canine companions.
Dogs are as susceptible to the elements as humans, and sometimes even more so given their diverse breeds and physical characteristics. Here are some guidelines to ensure your dog remains healthy and happy, regardless of the weather:

Always check the day’s weather forecast before heading out with your dog. This will allow you to prepare accordingly, be it rain gear, sun protection, or cold weather attire.

Invest in Dog Clothing

Depending on the breed and coat type, some dogs may benefit from protective clothing. Consider raincoats for wet conditions, insulated jackets for cold weather, and even doggy booties to protect paws from hot pavement or icy surfaces.

Limit Outdoor Time in Extreme Weather

During particularly hot or cold days, it’s essential to limit your dog’s time outside. Short, frequent outings are better than extended stays in harsh conditions.

Just as humans need to stay hydrated, so do our pets. In hot conditions, ensure your dog has access to fresh water. In cold conditions, dehydration can still be an issue, so don’t neglect your dog’s water needs.

Mind the Paws

Extreme temperatures can damage a dog’s sensitive paws. Use protective balms in the cold to prevent cracks and check for ice balls in between toes. During hot days, avoid walking on asphalt, which can burn paws.

Whether it’s hot, cold, or wet, dogs need shelter to protect them from the elements. This is particularly important if your dog spends a significant amount of time outdoors.

Watch for Signs of Discomfort

Always be observant of your dog’s behavior. If they appear to be uncomfortable or in distress due to weather conditions – shivering, panting excessively, limping, or showing reluctance to walk – it’s a sign that you need to take action.

Safe Indoor Activities

When outdoor conditions aren’t ideal, have a set of indoor games and training activities ready to keep your dog entertained and active.
Comfort in Storms

If your dog is anxious during thunderstorms, consider creating a safe space for them indoors. This can be a quiet room with their favorite toys and blankets. Soundproofing techniques, like white noise machines or soft music, can also help mask the noise.

Consult a Vet

If you’re unsure about how the weather might affect your dog, especially if they have existing health conditions, it’s always a good idea to consult with your veterinarian for advice.

Adapting to the unpredictable nature of weather conditions is part and parcel of dog ownership. With a little forethought and preparation, you can ensure your canine companion remains safe, comfortable, and happy, no matter what Mother Nature has in store. Understanding the impact of diverse weather conditions on different dog breeds is pivotal for any responsible dog owner. While each breed has its unique attributes and susceptibilities, general awareness and preparation can make all the difference. From the icy chills of winter to the scorching heat of summer, and the unpredictabilities of rain and wind, our canine friends rely on us for their comfort and safety. By being informed and taking timely precautions, we can ensure that every weather condition becomes an opportunity for joy and bonding, rather than a challenge. So, come rain or shine, our commitment remains unwavering: to provide the best care for our loyal companions.

Frequently Asked Questions

What dog breeds are best suited for cold climates?

Certain breeds such as Siberian Huskies, Alaskan Malamutes, and Saint Bernards are built to withstand colder temperatures due to their thick fur and sturdy build.

Are there breeds that thrive in hot climates?

Yes, breeds like the Basenji, Saluki, and Doberman Pinscher are more adapted to handle heat. However, care is still needed during extreme temperatures.

How can I help my dog adjust to sudden weather changes?
Gradual exposure, appropriate clothing (like dog jackets), and monitoring outdoor time can help your dog adjust. Always ensure they have a comfortable shelter against severe conditions. Is it safe for dogs to be out during storms? No, it’s best to keep dogs inside during storms, not only due to physical dangers like flying debris but also because the loud noises can be traumatic for them. Do dogs need raincoats in wet climates? While not mandatory, raincoats can help keep your dog dry and comfortable during wet conditions, especially if they’re averse to water or have a thin coat.
WASHINGTON — Influenza is circulating unusually early this year, with cases in all 50 states — nearly all the swine flu variety, government health officials said Friday. The highest concentration of flu cases is in the Southeast and a few other states, Dr. Anne Schuchat of the Centers for Disease Control and Prevention said at a briefing. The good news is that testing of vaccines for swine flu shows that they work with a single dose and take effect rapidly. Supplies of swine flu vaccine are expected to be available in mid-October, but the seasonal flu vaccine is available now and officials have encouraged people to get it.

The H1N1 swine flu broke out in the spring and never went away, Schuchat said. It struck in many summer camps, spread into the Southern Hemisphere and now is widening its range. Currently 98 percent of the flu viruses circulating are swine flu. Cases are mainly occurring in children and young adults, Schuchat said. The finding that the swine flu vaccine works in a single dose in healthy adults “shortens the window of worry,” Health and Human Services Secretary Kathleen Sebelius said. “There’s no better protection against the flu than vaccine.” There had been concerns that it would take two doses to build up immunity, delaying the protection.

Still unclear for kids, pregnant women

While the single dose works in adults, testing is still under way to determine the effectiveness of the vaccine in children and pregnant women, said Dr. Anthony Fauci, director of the National Institute of Allergy and Infectious Diseases. He said the tested vaccines were made by Sanofi Pasteur and CSL Ltd. and both produced “robust” immune responses. In people aged 18 to 64, 96 percent had a strong response to the Sanofi version and the response was 80 percent for CSL. Fauci played down the difference, noting the tests were done after only eight to 10 days and immune response could be the same in both groups as it increases after that point.
In addition, there were no significant side effects, Fauci said. People over 65 did not respond as strongly, but still got enough of an immune reaction that they should seek out the shots when their turn comes, officials said.

First on the list for the swine flu shots, however, are children and young adults, pregnant women and others with health problems, since the H1N1 flu seems to strike them more often. Older people are more at risk from the regular seasonal flu and — along with other people — should get those shots now, Sebelius said. She noted she got her own seasonal flu shot Friday at a school in nearby Alexandria, Va.

Get your regular flu shot, too
Why bother with the seasonal shot, since nearly all the current flu cases are swine flu? "The fact that the (seasonal) virus is not circulating now is absolutely no reason not to get vaccinated," Fauci said.
"You would hope that you would get vaccinated before the seasonal flu is circulating so you will have an immune response." Fauci said it still appears the bulk of the swine flu vaccine will be available in mid-October, though there is a possibility some may be available sooner, "we hope."

"The disease is increasing already and it is still a bit of a race to get the vaccine out there ahead of the disease," Schuchat said. Even with the swine flu spreading now, there will still be plenty of need for the vaccine, the officials stressed.

One dose means tight supplies of H1N1 vaccine won't be stretched so thin after all. The U.S. has ordered 195 million doses, based on the hope that 15 micrograms was indeed the right dose. Had it taken twice that dosage, or two shots apiece, half as many people could have received the vaccine.

The CDC reported Friday that last week influenza was widespread in Alaska, Arizona, Florida, Georgia, Louisiana, Maryland, Mississippi, North Carolina, Oklahoma, South Carolina, Tennessee and Guam. Only New Hampshire and Rhode Island had no flu cases last week.

Copyright 2009 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.
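The supply arithmetic reported above is simple enough to check directly. A minimal sketch, using only the figures stated in the article (the variable names are our own):

```python
# Back-of-envelope check of the H1N1 vaccine supply arithmetic.
# Figure from the article: 195 million doses ordered, 15 micrograms each.
ordered_doses = 195_000_000

# Single-dose regimen: each dose protects one person.
single_dose_coverage = ordered_doses

# Two-dose regimen (or doubling the antigen per shot): each person
# consumes two doses' worth of vaccine, so coverage is halved.
two_dose_coverage = ordered_doses // 2

print(single_dose_coverage)  # 195000000 people covered
print(two_dose_coverage)     # 97500000 people covered
```

This is why the single-dose finding mattered: the same order covers twice as many people as a two-dose schedule would have.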
Nelly Furtado imagined she was like one, they inspired Robert Burns' pen, and they're some of the most colourful characters in our garden wildlife. What are we talking about? Birds! Why not show them some love by getting involved in the RSPB's Big Garden Birdwatch from 28 to 30 January 2022?

What is Big Garden Birdwatch?
Big Garden Birdwatch is free, and all you do is count the birds you see in your garden, from your balcony or window, or in your local park, for one hour. Big Garden Birdwatch is for complete beginners and birding experts alike, and is fun for all.

How do you take part in Big Garden Birdwatch?
1. Watch the birds around you for one hour
2. Count how many of each species of bird land on your patch
3. Go online and tell the RSPB what you saw

How to attract more birds into your garden with a little help from DeWaldens Garden Centre
Blackbirds mostly feed on the ground and will eat a broad range of foods, from fatty nibbles to mealworms. Blue tits and great tits use a feeder, eating seeds as well as suet and peanuts. Look for good quality bird food, and don't forget to drop in on our 'Which Food Scraps Can You Safely Feed to Garden Birds?' blog too.

One million birdwatchers in 2021
Over one million people took part in Big Garden Birdwatch 2021, and the RSPB is hoping to mobilise at least as many people this year - fingers crossed it will be more! The bird conservation charity says that we've lost 38 million birds from UK skies in the last 50 years, which is shocking, so it really is critical that we all do our bit to look after our birdlife. The charity depends on support to save nature and to look after the places where wildlife can thrive, and by taking part in Big Garden Birdwatch you can make a difference. Wherever you are, whatever you see, it counts!