Dataset schema: id (int64, 580 to 79M); url (string, 31–175 chars); text (string, 9–245k chars); source (string, 1–109 chars); categories (string, 160 classes); token_count (int64, 3–51.8k)
22,916,190
https://en.wikipedia.org/wiki/Andr%C3%A1s%20S%C3%A1rk%C3%B6zy
András Sárközy (born in Budapest) is a Hungarian mathematician working in analytic and combinatorial number theory, although his first works were in the fields of geometry and classical analysis. He has the largest number of papers co-authored with Paul Erdős (62 in total) and thus an Erdős number of one. He proved the Furstenberg–Sárközy theorem: every sequence of natural numbers with positive upper density contains two members whose difference is a perfect square. He was elected a corresponding member (1998), and a full member (2004), of the Hungarian Academy of Sciences. He received the Széchenyi Prize (2010). He is the father of the mathematician Gábor N. Sárközy. References Living people 1941 births Mathematicians from Budapest Members of the Hungarian Academy of Sciences Number theorists
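Stated formally (the notation below is ours, added for clarity; it is not in the source):

```latex
% Furstenberg–Sárközy theorem: a set of naturals with positive upper
% density contains two elements whose difference is a perfect square.
\[
\limsup_{N \to \infty} \frac{|A \cap \{1, \dots, N\}|}{N} > 0
\;\Longrightarrow\;
\exists\, a, b \in A,\; a \neq b,\; \exists\, n \in \mathbb{N}:\; a - b = n^2 .
\]
```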
András Sárközy
Mathematics
170
19,481,861
https://en.wikipedia.org/wiki/Paul%20Palmer%20%28physicist%29
E. Paul Palmer (1926–2011) was a Brigham Young University physicist who specialized in geophysics. He coined the term "cold fusion". However, he was an early critic of Fleischmann and Pons's claims to have developed a useful method of cold fusion. Palmer served in the US Navy during World War II. He later served as a missionary for the Church of Jesus Christ of Latter-day Saints in the East Central States Mission (primarily Tennessee and Kentucky). He received his bachelor's degree in physics from the University of Utah. Sources New York Times, April 28, 1989 BYU faculty listing https://web.archive.org/web/20080506023241/http://pages.csam.montclair.edu/~kowalski/cf/131history.html http://www.newscientist.com/article/mg12216633.500-science-rocks-reveal-the-signature-of-fusion-at-the-centreof-the-earth-.html Provo Herald obituary for Palmer 1926 births 2011 deaths University of Utah alumni Brigham Young University faculty American geophysicists Cold fusion Latter Day Saints from Utah American Mormon missionaries in the United States
Paul Palmer (physicist)
Physics,Chemistry
260
22,819,492
https://en.wikipedia.org/wiki/Acaulospora
Acaulospora is a genus of fungi in the family Acaulosporaceae. Species in this genus are widespread in distribution, and form arbuscular mycorrhiza and vesicles in roots. Species list A. alpina A. appendicula A. bireticulata A. brasiliensis A. capsicula A. cavernata A. colliculosa A. colombiana A. colossica A. delicata A. denticulata A. dilatata A. elegans A. entreriana A. excavata A. foveata A. gedanensis A. gerdemannii A. jejuensis A. kentinensis A. koreana A. koskei A. lacunosa A. laevis A. longula A. mellea A. morrowiae A. myriocarpa A. nicolsonii A. nivalis A. paulinae A. polonica A. rehmii A. rugosa A. scrobiculata A. sieverdingii A. spinosa A. splendida A. sporocarpia A. taiwania A. terricola A. thomii A. trappei A. tuberculata A. walkeri References External links International Culture Collection of Vesicular Arbuscular Mycorrhizal Fungi Diversisporales Taxa named by James Trappe
Acaulospora
Biology
295
2,181,563
https://en.wikipedia.org/wiki/Nuclear%20material
Nuclear material refers to the metals uranium, plutonium, and thorium, in any form, according to the IAEA. This is differentiated further into "source material", consisting of natural and depleted uranium, and "special fissionable material", consisting of enriched uranium (U-235), uranium-233, and plutonium-239. Uranium ore concentrates are considered a "source material", although these are not subject to safeguards under the Nuclear Non-Proliferation Treaty. According to the Nuclear Regulatory Commission (NRC), there are four types of regulated nuclear material: special nuclear material, source material, byproduct material, and radium. Special nuclear material is plutonium, uranium-233, or uranium whose U-233 or U-235 content is greater than that found in nature. Source material is thorium, or uranium with a U-235 content equal to or less than that found in nature. Byproduct material is radioactive material that is neither source nor special nuclear material; it can be an isotope produced by a nuclear reactor, or the tailings and waste produced when uranium or thorium is extracted from an ore processed primarily for its source material content. Byproduct material can also be discrete sources of radium-226, or discrete sources of accelerator-produced or naturally occurring isotopes that pose a threat greater than or equal to that of a discrete source of radium-226. Radium is also a regulated nuclear material; it is found in nature, is produced by the radioactive decay of uranium, and has a half-life of approximately 1,600 years. Different countries may use different terminology: in the United States of America, "nuclear material" most commonly refers to "special nuclear materials" (SNM), those with the potential to be made into nuclear weapons as defined in the Atomic Energy Act of 1954; these are, again, plutonium-239, uranium-233, and enriched uranium (U-235). Note that the 1980 Convention on the Physical Protection of Nuclear Material definition of nuclear material does not include thorium. The NRC's regulatory process for nuclear materials has five main components:
Developing regulation and guidance for applicants and licensees
Licensing, decommissioning, and certification for applicants seeking to use nuclear materials, operate a nuclear facility, or terminate a permit or license
Oversight of licensee operations and facilities to ensure that licensees comply with safety requirements
Evaluation of operational experience at licensed facilities and licensed activities
Support for regulatory decisions through research, hearings that address concerns, and independent reviews
The United States Department of Energy Office of Environmental Management (EM) manages and dispositions spent nuclear fuel and surplus nuclear materials. The EM Nuclear Materials Program safely and securely manages spent nuclear fuel in its facilities while maintaining an inventory of the materials. The Nuclear Waste Policy Act defines procedures for evaluating and selecting locations for geological repositories to safely store or dispose of radioactive waste. EM also works with the National Nuclear Security Administration (NNSA) to dispose of the surplus, non-pit, weapons-usable plutonium-239. EM and the NNSA together oversee the disposition of 21 metric tons of surplus highly enriched uranium, including about 13.5 metric tons of spent nuclear fuel.
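As a quick numerical illustration of the radium-226 half-life figure quoted above, here is a minimal sketch applying the standard exponential-decay law (the function and constant names are ours, chosen for the example):

```python
HALF_LIFE_RA226_YEARS = 1600  # approximate value quoted in the text

def remaining_fraction(t_years, half_life=HALF_LIFE_RA226_YEARS):
    """Fraction of an initial Ra-226 sample remaining after t_years,
    from the decay law N(t) = N0 * 2 ** (-t / T_half)."""
    return 2 ** (-t_years / half_life)

for t in (100, 1600, 4800):
    print(f"after {t:>4} years, {remaining_fraction(t):.3f} of the sample remains")
```

After one half-life (1,600 years) the fraction is 0.500, and after three half-lives (4,800 years) it is 0.125, as expected.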
See also Tube Alloys Institute of Nuclear Materials Management Material unaccounted for References Nuclear weapons
Nuclear material
Physics
690
75,643,536
https://en.wikipedia.org/wiki/Alternation%20of%20supports
Alternation of supports is a trait of Romanesque architecture (and Early Gothic) in which the supports in a colonnade or arcade are of different types. For example, a periodic change between strong supports (piers) and weak ones (columns) produces a visually obvious alternation. More subtle alternation can result, for example, from variations of the column shafts. An early example of the technique used for a decorative purpose can be found in Hagios Demetrios, a 5th-century Byzantine church in Thessaloniki. The technique became common at the end of the 10th century and appears to be coupled with the use of transverse arches: the arches rested on the tops of the stronger piers. The double-bay system, with side aisles half the width of the nave, required the columns for the aisle vaults to be placed at half the spacing of the transverse arches of the nave; these additional columns sometimes carried a smaller load and thus could be thinner. The use of alternating supports was largely abandoned with the introduction of Gothic architecture and its more malleable pointed arches. There were some notable exceptions, however: for example, the lateral aisles of Notre-Dame de Paris have alternating piers of lower and greater strength that provide a "powerful appeal to the eye and the senses", but had originally fulfilled a structural need, as the heavier piers carried an extra load from the intermediate supports in the buttress system. Also, alternation can be found in some early Gothic designs of sexpartite vaults, where the support for the middle transverse rib carries less load. Geography Alternating supports became popular in Europe in the 11th century (early examples started to appear in the 9th century), with their use gradually transitioning from a decorative function to support for the double-bay system. Italian architecture of the 11th and 12th centuries made active use of the alternating system. Frequently, however, the column and pier alternation was used for purely decorative purposes, most likely following the Byzantine idea found in the Hagios Demetrios. Examples include San Miniato al Monte, San Clemente al Laterano (dedicated in 1128), Santa Maria in Cosmedin (1123), and the Basilica di San Nicola in Bari (1197). The alternation was also used structurally, as in Modena Cathedral (1099-1184), probably as an evolution of the decorative use. The second area of frequent use of the alternation was Germany, the earliest still-standing example being the church of Saint Cyriakus, Gernrode (–1014). St. Michael's Church, Hildesheim (1022), Hildesheim Cathedral (1061), and Gandersheim Abbey (1094) form an 11th-century group of churches in Saxony with alternate supports in the "dactyl" arrangement (one pier, two columns, in repetition). The dactyl pattern was not new to Saxony in the 11th century, as it had been used previously in Gernrode and, likely, in the old Hildesheim Cathedral (852-872). Another group of churches with alternating piers and columns is located in former Lower Lorraine: the Abbey of Echternach (1016-1031), the church in Zyfflich (early 11th century), Susteren Abbey (mid-11th century), and Lobbes Abbey (11th century). The group might also include St Bavo's Cathedral, Ghent. The churches in Lower Lorraine use simple alternation (pier-column) as the basis of the double-bay system, but without galleries. A fully developed double-bay system with galleries can be found in the church of Soignies.
The use of alternating supports was not common in Normandy, with the notable exceptions of Jumièges Abbey (1052-1066) and Lyre Abbey (12th century), the former being an early example of a double-bay transition. References Sources Romanesque architecture Architectural terminology
Alternation of supports
Engineering
804
62,424,009
https://en.wikipedia.org/wiki/School%20belonging
The most commonly used definition of school belonging comes from a 1993 academic article by researchers Carol Goodenow and Kathleen Grady, who describe school belonging as "the extent to which students feel personally accepted, respected, included, and supported by others in the school social environment." The construct of school belonging involves feeling connected with and attached to one's school. It also encompasses involvement and affiliation with one's school community. Conversely, students who do not feel a strong sense of belonging within their school environment are frequently described as being alienated or disaffected. There are a number of terms within educational research that are used interchangeably with school belonging, including school connectedness, school attachment, and school engagement. School belonging is determined by a myriad of factors, including academic achievement and motivation, personal characteristics, social relationships, demographic characteristics, school climate, and participation in extracurricular activities. Research indicates that school belonging has significant implications for students, as it has been consistently linked with academic outcomes, psychological adjustment, well-being, identity formation, mental health, and physical health—it is considered a fundamental aspect of students' development. A sense of belonging to one's school is considered particularly important for adolescents because they are within a period of transition and identity formation, and research has found that school belonging significantly declines during this period. The Psychological Sense of School Membership (PSSM), developed in 1993, is one of the measures used to ascertain the degree to which students feel a sense of school belonging. Students rate the extent to which they agree or disagree with statements such as "People here notice when I'm good at something." In 2003, the Centers for Disease Control and Prevention held an international convention at which the Wingspread Declaration on School Connections, a set of strategies to increase students' sense of belonging and connection with their school, was developed. Prevalence and trajectory Research indicates that many students have deficient feelings of school belonging. The Programme for International Student Assessment (PISA) has investigated school belonging and disaffection in students around the world since 2003. Its most recent collection of data occurred in 2018. Approximately 600,000 students representing 32 million 15-year-olds (aged between 15 years 3 months and 16 years 2 months) from 79 countries and economies participated in PISA 2018. The analyses revealed that a significant proportion of students around the world lack strong feelings of belongingness to school. On average, a third of all students surveyed felt they did not belong to their school. In addition, one in five students reported feeling like an outsider at school and one in six reported feeling lonely. In most education systems, students who were socio-economically disadvantaged reported less belonging to school. On average, students' sense of belonging to school declined by 2% between 2015 and 2018, and the proportion of students who do not feel they belong to school has increased since 2003, indicating a global trend of deterioration in school belonging. School belonging tends to decrease as students grow older, as indicated in several different research studies.
In one study involving students from Latin America, Asia, and Europe, researchers Cari Gillen-O'Neel and Andrew Fuligni found that in childhood, students generally report high levels of school belonging. However, once students transition into middle school and adolescence, their perceptions of school belonging drop significantly. Similarly, a separate study found that students' school belonging decreased in the transition from middle to high school; these students also displayed an increase in depressive symptoms and a decline in social support, which could be considered either causes or consequences of the decline in school belonging. This trend has been replicated in many other studies, suggesting that school belonging declines once students reach adolescence. Determinants A meta-analysis of 51 studies (N = 67,378) by K. Allen and colleagues (2018) identified multiple individual- and social-level factors that influence school belonging. These core themes include academic factors, personal characteristics, social relationships, demographic characteristics, school climate, and extracurricular activities. Many of the determinants of school belonging likely have a reciprocal relationship with a student's sense of belonging; that is, they operate as both antecedents and consequences. Academic factors Research has documented the influence of academic factors (i.e., achievement, motivation, hardiness, and interest in school) on students' school belonging. Academic achievement, or one's skills and competencies in school, has been identified as a substantial predictor of school belonging. For example, research has demonstrated that students' grade point averages (GPAs), a common measure of academic achievement, are positively associated with school belonging: students who have higher GPAs have higher levels of school belonging. Studies have also found several measures of academic motivation to be determinants of students' school belonging. Academic motivation encompasses behaviors such as homework completion, setting goals, expectancy of success, and effort and engagement within the classroom. Carol Goodenow and Kathleen Grady found each of these subsets of academic motivation to be significant predictors of students' perceptions of school belonging. More recent research has replicated these findings, suggesting that academic motivation plays an important role in developing feelings of school belonging. In addition, students' perceived value of school influences their school belonging: when they perceive their assignments and education as instructive, meaningful, and valuable, they are more likely to report greater school belonging. Personal characteristics Personal characteristics refer to students' distinctive qualities, traits, personality, emotions, and attributes, and have been consistently identified as a substantial determinant of school belonging. Personal characteristics can be classified as either positive or negative. Positive personal characteristics such as self-esteem, self-efficacy, positive affect, and effective emotional regulation have been shown to help foster students' sense of school belonging. A study by Xin Ma found that students' self-esteem had the greatest impact on school belonging compared to all other personal factors. Conversely, negative personal characteristics like anxiety, depressive symptoms, heightened stress, negative affect, and mental illness can lower students' perceptions of school belonging.
Emotional instability can further influence school belonging by negatively affecting students' educational experiences. Social relationships Social relationships are involved in developing students' feelings of belonging within a school. There are large, positive correlations between school belonging and positive social relations with peers, teachers, and parents. Support, acceptance, and encouragement from these sources can help students develop the feeling that they connect and identify with their school. Peers Peer relations have been identified as a direct contributor to students' development of school belonging. Positive social relations with peers involve feelings of acceptance, connection, encouragement, academic and social support, trust, closeness, and caring. Such qualities within a peer relationship can significantly facilitate students' feelings of school belonging. When students are rejected or unsupported by their peers, they may experience anxiety, stress, and alienation. This alters their perceptions of belonging at school because the school environment now seems unwelcoming and distressing, making it harder to identify and connect with the school. Parents Relationships with one's parents can have significant implications for students' feelings of school belonging, given that parents typically provide students' first social relationships. Positive parental relations include parents providing academic and social support, healthy communication, encouragement, compassion, acceptance, and safety. Such qualities within parent-child relationships have been shown to foster students' sense of school belonging by influencing their perceived connection with their school environment. Teachers Teachers have been identified as noteworthy contributors to students' feelings of belonging at school. Several academic studies have identified teacher support as the strongest predictor of school belonging compared to support from peers or parents. Teachers can help instill school belonging by developing a safe and healthy classroom climate, providing academic and social support, fostering respect amongst peers, and treating students fairly. Teachers can also promote feelings of school belonging by being friendly, approachable, and making an effort to connect with their students. Teaching practices that seem to promote students' school belonging include scaffolding learning, commending positive behaviors and performance, allowing students autonomy within the classroom, and using academic pressure, such as holding high expectations of students. Demographic characteristics Gender The relationship between gender and school belonging is largely inconclusive because research has produced conflicting results. Several studies have found gender differences in perceptions of school belonging: some research indicates that females possess a higher sense of school belonging compared to males, while other studies have found the opposite effect and conclude that males have higher school belonging than females. Other research has demonstrated that school belonging is not influenced by gender at all. Race and ethnicity Similar to gender, some research on the effect of race and ethnicity on school belonging has found a significant relationship between the two, while other research contradicts these findings.
For example, one study found that Black students experience lower feelings of school belonging compared to white students; however, other research has found the opposite pattern, or has found no significant influence of race on school belonging at all. School climate A school's climate can have significant consequences for students feeling like they belong at school. School climate broadly refers to the feelings associated with a school's environment and quality; it is considered to have physical (e.g. adequacy of buildings), social (e.g. interpersonal relationships), and academic dimensions (e.g. teaching quality). School climate influences school belonging through its support (or lack thereof) of students' feelings of connection with and attachment to their school. One important facet of school climate is school safety, which is how safe students feel at school. It includes variables such as a school's safety policies, use of discipline, bullying prevalence, and fairness. School safety is regarded as an important determinant of school belonging: higher perceptions of school safety are associated with students holding greater feelings of school belonging. Extracurricular activities Research has shown that being involved in extracurricular activities can positively influence students' perceptions of school belonging. For example, researchers Casey Knifsend and Sandra Graham found that students who participated in two extracurricular activities reported greater feelings of school belonging compared to those students who participated in fewer than two. Other studies have replicated this relationship, highlighting the importance of participating in extracurricular activities for developing school belonging. Extracurricular activities may influence school belonging by providing collaborative and long-term interactions between students and their peers. A socio-ecological perspective The many determinants of school belonging can be conceptualised in a socio-ecological model. The Socio-ecological Model of School Belonging, developed by Allen and colleagues (2016) and adapted from Bronfenbrenner's socio-ecological systems theory (1979), is used to describe the school system as a whole and the multiple, dynamic influences on school belonging. The model depicts students at the centre of their school environment. The inner circles describe biological and individual-level characteristics that influence school belonging. These factors include biological traits and personal characteristics such as emotional stability and academic motivation. The microsystem is represented by relationships with others, specifically teachers, peers, and parents. The mesosystem represents the school policy and practices that occur within the day-to-day operations of the school, and the exosystem represents a broader level that may include the wider school community. The macrosystem describes the cultural context of a school, which may be influenced by where the school is geographically located, the external social climate, and other factors such as history, legislation, and government-driven priorities. Consequences Psychological health and adjustment School belonging has numerous consequences for students' psychological health and adjustment. Research has shown that when students feel a greater sense of school belonging, their mental health and well-being improve: they exhibit greater levels of emotional stability, lower levels of depression, reduced stress, and an increase in positive emotions, such as happiness and pride.
Feelings of school belonging have also been shown to predict self-esteem, self-concept, and self-worth. Students who possess school belonging experience more positive life transitions as well, which can have important implications for psychological health and adjustment. On the other hand, students who do not have a strong sense of school belonging are at risk for a number of disadvantageous psychological and mental health outcomes. Students who lack a sense of belonging at school are at significantly greater risk of exhibiting anxiety, depression, negative affect, and suicidal ideation, and of developing mental illness overall. A lack of belonging may also increase their feelings of social rejection and alienation. Academic development and outcomes Feelings of school belonging can have a significant influence on students' academic development and outcomes. School belonging is related to students' expectancy of success, effort in school, and perceived value of school and education. Greater feelings of school belonging have been shown to increase engagement and participation both inside school and within extracurricular activities. Similarly, school belonging is associated with a greater commitment to school. Strong feelings of school belonging have also been shown to improve overall academic performance and achievement, as shown by increases in grade point averages. A sense of belonging at school can also improve academic self-efficacy, or in other words, students' belief in their ability to succeed in school. Research has suggested that school belonging can also reduce the prevalence of negative academic outcomes. Greater feelings of school belonging are associated with decreased misbehavior and misconduct, such as fighting, bullying, and vandalism. School belonging can improve attendance by reducing the frequency of truancy and absenteeism, and it reduces students' likelihood of dropping out of school, thus improving rates of school completion. Conversely, students who lack a sense of school belonging are at greater risk of disengagement from school and potentially dropping out. Physical health School belonging has several implications for students' physical health. Students who possess feelings of school belonging exhibit a reduced risk of stroke and disease. School belonging is also associated with lower mortality rates for students. In addition, perceptions of school belonging have a significant inverse relationship with risk-taking behaviors, including substance and tobacco use and early sexualization: students who have higher levels of school belonging are less likely to engage in risk-taking behaviors. Measures There are a number of measures used to assess school belonging. The most commonly used measures include: Psychological Sense of School Membership (PSSM) The most commonly used measure of school belonging is the Psychological Sense of School Membership (PSSM) scale, developed by Carol Goodenow in 1993. This scale measures students' feelings of belonging and membership within a school setting by having students respond to 18 items regarding their personal feelings and experiences within school. It is designed to be used with students of all ages and nationalities. Students answer the items on a scale ranging from 1 to 5, where 1 indicates "Not at all true" and 5 indicates "Completely true".
The items are intended to measure students' perceptions of acceptance, academic and social support, value, and contentment within their social relationships at school. The following are some examples of items that students respond to: "People here notice when I'm good at something," "Other students take my opinions seriously," and "I feel like a real part of this school." Research has found the PSSM to have high validity and reliability, attesting to its status as a valuable and functional measure of school belonging. Hemingway Measure of Adolescent Connectedness (HMAC) The Hemingway Measure of Adolescent Connectedness (HMAC) was constructed by Michael Karcher in 1999 and has been used in research as a measure of school belonging for adolescents specifically. It contains 74 items on a scale ranging from 1 (Not true at all) to 5 (Very true). It examines adolescents' perceptions of connectedness, or in other words, their involvement with and valuation of both the specific and general social support they receive, across three sub-categories: social connectedness, academic connectedness, and family connectedness. The social connectedness component measures adolescents' feelings of connection towards their friends, neighborhood, and self. Academic connectedness evaluates adolescents' sense of connection towards their school, teachers, peers, and academic self. Finally, the family connectedness component assesses adolescents' feelings of connectedness to their parents, siblings, religion, and ancestry. Items measuring school belonging specifically include: "I feel good about myself when I am at school," "I get along well with the other students in my classes," and "I enjoy being at school." This scale has been found to be generalizable to adolescents across the globe. School Connectedness Scale (SCS) Jill Hendrickson Lohmeier and Steven W. Lee created the School Connectedness Scale (SCS) in 2011 to assess students' peer, adult, and school relationships within three distinct categories: general support (belongingness), specific support (relatedness), and engagement (connectedness). The scale includes 54 self-report items presented on a scale ranging from 1 to 5, where 1 represents "Not at all true" and 5 represents "Completely true". Some items include "Students at my school help each other", "I am very involved in activities at my school, like clubs or teams", "Teachers at my school care about their students", and "I like spending time with my classmates." The SCS has shown generalizability to students from diverse populations, including different ages and ethnicities. School Engagement Instrument (SEI) The School Engagement Instrument (SEI) was designed by James Appleton, Sandra Christenson, Dongjin Kim, and Amy Reschly in 2006 and is commonly used to gauge perceptions of school belonging. It includes 35 items on a four-point scale ranging from "Strongly agree" to "Strongly disagree" that measure students' cognitive and affective engagement within the school environment. The items are categorized into six sub-domains: "future goals and aspirations, control and relevance of schoolwork, extrinsic motivation, family support for learning, peer support for learning, and teacher-student relationships." Items from the SEI include: "Overall, my teachers are open and honest with me," "Students at my school are there for me when I need them," "When I have problems at school, my family/guardian(s) want to know about it," and "What I'm learning in my classes will be important for my future."
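To make the scoring of these Likert-type instruments concrete, here is a minimal sketch of how PSSM-style responses might be aggregated into a single score. The mean-of-items scoring and the reverse-scored items flagged below are illustrative assumptions, not the published scale's specification; consult the original instrument for the actual scoring rules.

```python
def pssm_score(responses, reverse_items=()):
    """Aggregate 18 PSSM-style Likert responses (1-5) into a mean score.

    responses: dict mapping item number (1-18) to the student's rating.
    reverse_items: item numbers that are negatively worded and must be
    reverse-scored (6 - rating); which items these are is assumed here,
    not taken from the published scale.
    """
    if len(responses) != 18:
        raise ValueError("the PSSM has 18 items")
    adjusted = [6 - rating if item in reverse_items else rating
                for item, rating in responses.items()]
    return sum(adjusted) / len(adjusted)

# Hypothetical response set: mostly agreement (4), one neutral answer (3),
# two negatively worded items answered with "disagree" (2).
example = {item: 4 for item in range(1, 19)}
example[5] = 3
example[3] = example[16] = 2
print(f"PSSM score: {pssm_score(example, reverse_items=(3, 16)):.2f}")  # ~3.94
```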
Implications for practice In 2003, the Centers for Disease Control and Prevention (CDC) held an international convention to develop tactics for bolstering students' perceptions of school belonging. The resulting Wingspread Declaration on School Connections identified the following strategies for increasing students' belonging to and connection with their school: Implementing high standards and expectations, and providing academic support to all students. Applying fair and consistent disciplinary policies that are collectively agreed upon and fairly enforced. Creating trusting relationships among students, teachers, staff, administrators, and families. Hiring and supporting capable teachers skilled in content, teaching techniques, and classroom management to meet each learner's needs. Fostering high parent/family expectations for school performance and school completion. Ensuring that every student feels close to at least one supportive adult at school. —"Wingspread Declaration on School Connections", Journal of School Health The CDC later advanced the work of the Wingspread Declaration in 2009 by conducting a comprehensive, systematic review of school belonging and connectedness using sources from expert researchers, the government, educators, and more. This work produced four additional strategies for enhancing students' perception of belonging within school: Adult Support: School staff members can dedicate their time, interest, attention, and emotional support to students. Belonging to a Positive Peer Group: A stable network of peers can improve student perceptions of school. Commitment to Education: Believing that school is important to their future and perceiving that the adults in school are investing in their education can help students stay engaged in their own learning and involved in school activities. School Environment: The physical environment and psychosocial climate can set the stage for positive student perceptions of school. —"School Connectedness: Strategies for Increasing Protective Factors Among Youth", Centers for Disease Control and Prevention Student-level implications for practice Student-level interventions may also increase a sense of school belonging. Research has indicated that social and emotional learning opportunities can increase students' sense of school belonging. Many individual characteristics found to enhance a student's sense of belonging can be taught, offering a preventative mechanism to support school belonging. For example, research suggests that teaching emotional regulation, coping skills, interpersonal skills, and skills related to academic motivation holds promise for supporting a student's sense of school belonging. See also Education in the United States References External links School belonging measures are available from the International Belonging Research Laboratory. Developmental psychology Educational assessment and evaluation Educational environment Education and health Educational research Teaching
School belonging
Biology
4,224
21,257,247
https://en.wikipedia.org/wiki/Arrival%20theorem
In queueing theory, a discipline within the mathematical theory of probability, the arrival theorem (also referred to as the random observer property, ROP, or job observer property) states that "upon arrival at a station, a job observes the system as if in steady state at an arbitrary instant for the system without that job." The arrival theorem always holds in open product-form networks with unbounded queues at each node, and it also holds in more general networks. A necessary and sufficient condition for the arrival theorem to be satisfied in product-form networks is given in terms of Palm probabilities in Boucherie & Dijk, 1997. A similar result also holds in some closed networks. Examples of product-form networks in which the arrival theorem does not hold include reversible Kingman networks and networks with a delay protocol. Mitrani offers the intuition that "The state of node i as seen by an incoming job has a different distribution from the state seen by a random observer. For instance, an incoming job can never see all k jobs present at node i, because it itself cannot be among the jobs already present." Theorem for arrivals governed by a Poisson process For Poisson processes the property is often referred to as the PASTA property (Poisson Arrivals See Time Averages) and states that the probability of the state as seen by an outside random observer is the same as the probability of the state seen by an arriving customer. The property also holds for the case of a doubly stochastic Poisson process, where the rate parameter is allowed to vary depending on the state. Theorem for Jackson networks In an open Jackson network with m queues, write n = (n1, n2, ..., nm) for the state of the network. Suppose π(n) is the equilibrium probability that the network is in state n. Then the probability that the network is in state n immediately before an arrival to any node is also π(n). Note that this theorem does not follow from Jackson's theorem, where the steady state in continuous time is considered. Here we are concerned with particular points in time, namely arrival times. This theorem was first published by Sevcik and Mitrani in 1981. Theorem for Gordon–Newell networks In a closed Gordon–Newell network with m queues, write n = (n1, n2, ..., nm) for the state of the network. For a customer in transit to state n, let α(n) denote the probability that immediately before arrival the customer 'sees' the state of the system to be n. This probability, α(n), is the same as the steady-state probability of state n for a network of the same type with one customer less. It was published independently by Sevcik and Mitrani, and by Reiser and Lavenberg, where the result was used to develop mean value analysis. Notes Queueing theory Probability theorems
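As an illustration of the PASTA property described above, here is a minimal simulation sketch (the parameter values and function names are ours, chosen for the example). It simulates an M/M/1 queue and compares the queue-length distribution seen by arriving customers with the time-average distribution and the known stationary law π(k) = (1 − ρ)ρ^k:

```python
import random
from collections import defaultdict

def mm1_pasta_demo(lam=0.7, mu=1.0, t_end=200_000.0, seed=1):
    """Simulate an M/M/1 queue via competing exponential clocks and
    compare (a) the time-average queue-length distribution with
    (b) the distribution seen by arriving customers (PASTA)."""
    rng = random.Random(seed)
    t, n = 0.0, 0                        # current time, current queue length
    time_in_state = defaultdict(float)   # total time spent in each state
    seen_by_arrivals = defaultdict(int)  # state found by each arriving customer
    arrivals = 0
    while t < t_end:
        rate = lam + (mu if n > 0 else 0.0)  # total event rate in state n
        dt = rng.expovariate(rate)
        time_in_state[n] += dt
        t += dt
        if rng.random() < lam / rate:    # next event is an arrival
            seen_by_arrivals[n] += 1
            arrivals += 1
            n += 1
        else:                            # next event is a departure
            n -= 1
    rho = lam / mu
    for k in range(5):
        print(f"k={k}: time-avg {time_in_state[k] / t:.4f}, "
              f"seen-by-arrivals {seen_by_arrivals[k] / arrivals:.4f}, "
              f"theory {(1 - rho) * rho ** k:.4f}")

mm1_pasta_demo()
```

Under PASTA the two empirical columns should agree up to simulation noise; making the arrival process non-Poisson (e.g. deterministic inter-arrival times) is an easy way to break the agreement.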
Arrival theorem
Mathematics
545
826,216
https://en.wikipedia.org/wiki/Galaxy%20morphological%20classification
Galaxy morphological classification is a system used by astronomers to divide galaxies into groups based on their visual appearance. There are several schemes in use by which galaxies can be classified according to their morphologies, the most famous being the Hubble sequence, devised by Edwin Hubble and later expanded by Gérard de Vaucouleurs and Allan Sandage. However, galaxy classification and morphology are now largely carried out using computational methods and quantitative physical morphology. Hubble sequence The Hubble sequence is a morphological classification scheme for galaxies invented by Edwin Hubble in 1926. It is often known colloquially as the “Hubble tuning-fork” because of the shape in which it is traditionally represented. Hubble's scheme divides galaxies into three broad classes based on their visual appearance (originally on photographic plates): Elliptical galaxies have smooth, featureless light distributions and appear as ellipses in images. They are denoted by the letter "E", followed by an integer n representing their degree of ellipticity on the sky; n is determined by the ratio of the major (a) to minor (b) axes as n = 10 × (1 − b/a). Spiral galaxies consist of a flattened disk, with stars forming a (usually two-armed) spiral structure, and a central concentration of stars known as the bulge, which is similar in appearance to an elliptical galaxy. They are given the symbol "S". Roughly half of all spirals are also observed to have a bar-like structure, extending from the central bulge. These barred spirals are given the symbol "SB". Lenticular galaxies (designated S0) also consist of a bright central bulge surrounded by an extended, disk-like structure but, unlike spiral galaxies, the disks of lenticular galaxies have no visible spiral structure and are not actively forming stars in any significant quantity. These broad classes can be extended to enable finer distinctions of appearance and to encompass other types of galaxies, such as irregular galaxies, which have no obvious regular structure (either disk-like or ellipsoidal). The Hubble sequence is often represented in the form of a two-pronged fork, with the ellipticals on the left (with the degree of ellipticity increasing from left to right) and the barred and unbarred spirals forming the two parallel prongs of the fork on the right. Lenticular galaxies are placed between the ellipticals and the spirals, at the point where the two prongs meet the “handle”. To this day, the Hubble sequence is the most commonly used system for classifying galaxies, both in professional astronomical research and in amateur astronomy. Nonetheless, in June 2019, citizen scientists through Galaxy Zoo reported that the usual Hubble classification, particularly concerning spiral galaxies, may not be supported, and may need updating. De Vaucouleurs system The de Vaucouleurs system for classifying galaxies is a widely used extension to the Hubble sequence, first described by Gérard de Vaucouleurs in 1959. De Vaucouleurs argued that Hubble's two-dimensional classification of spiral galaxies—based on the tightness of the spiral arms and the presence or absence of a bar—did not adequately describe the full range of observed galaxy morphologies. In particular, he argued that rings and lenses are important structural components of spiral galaxies. The de Vaucouleurs system retains Hubble's basic division of galaxies into ellipticals, lenticulars, spirals and irregulars.
To complement Hubble's scheme, de Vaucouleurs introduced a more elaborate classification system for spiral galaxies, based on three morphological characteristics: family (barred SB, weakly barred SAB, or unbarred SA), variety (an inner ring (r), no ring (s), or a transitional form (rs)), and stage (the tightness of the spiral arms, running from a through m). The different elements of the classification scheme are combined — in the order in which they are listed — to give the complete classification of a galaxy. For example, a weakly barred spiral galaxy with loosely wound arms and a ring is denoted SAB(r)c. Visually, the de Vaucouleurs system can be represented as a three-dimensional version of Hubble's tuning fork, with stage (spiralness) on the x-axis, family (barredness) on the y-axis, and variety (ringedness) on the z-axis. Numerical Hubble stage De Vaucouleurs also assigned numerical values to each class of galaxy in his scheme. Values of the numerical Hubble stage T run from −6 to +10, with negative numbers corresponding to early-type galaxies (ellipticals and lenticulars) and positive numbers to late types (spirals and irregulars). Thus, as a rough rule, lower values of T correspond to a larger fraction of the stellar mass contained in a spheroid/bulge relative to the disk. The approximate mapping between the spheroid-to-total stellar mass ratio (MB/MT) and the Hubble stage is MB/MT = (10 − T)²/256, based on local galaxies. Elliptical galaxies are divided into three 'stages': compact ellipticals (cE), normal ellipticals (E) and late types (E+). Lenticulars are similarly subdivided into early (S−), intermediate (S0) and late (S+) types. Irregular galaxies can be of type magellanic irregulars (T = 10) or 'compact' (T = 11). The use of numerical stages allows for more quantitative studies of galaxy morphology. Yerkes (or Morgan) scheme The Yerkes scheme was created by American astronomer William Wilson Morgan. Together with Philip Keenan, Morgan also developed the MK system for the classification of stars through their spectra. The Yerkes scheme uses the spectra of stars in the galaxy, the shape (real and apparent), and the degree of central concentration to classify galaxies. Thus, for example, the Andromeda Galaxy is classified as kS5. See also References External links Galaxies and the Universe – an introduction to galaxy classification Near-Infrared Galaxy Morphology Atlas, T.H. Jarrett The Spitzer Infrared Nearby Galaxies Survey (SINGS) Hubble Tuning-Fork, SINGS Spitzer Space Telescope Legacy Science Project Go to GalaxyZoo.org to try your hand at classifying galaxies as part of an Oxford University open community project Astronomical classification systems Extragalactic astronomy Edwin Hubble
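The numerical stage makes simple quantitative sketches possible. Below is a minimal Python illustration (the function and variable names are ours) of the bulge-mass mapping quoted above:

```python
def spheroid_to_total_mass_ratio(T):
    """Approximate spheroid-to-total stellar mass ratio M_B/M_T for a
    numerical Hubble stage T (nominally -6 to +10, with 11 used for
    'compact' irregulars), using the mapping quoted in the text:
    M_B/M_T = (10 - T)**2 / 256, calibrated on local galaxies."""
    return (10 - T) ** 2 / 256

# Sanity check across the sequence: early types (negative T) should be
# bulge-dominated, late-type spirals and irregulars disk-dominated.
for T, label in [(-6, "compact elliptical"), (0, "lenticular S0"),
                 (3, "Sb-like spiral"), (10, "magellanic irregular")]:
    print(f"T = {T:+3d} ({label}): M_B/M_T ~ {spheroid_to_total_mass_ratio(T):.2f}")
```

The output runs from 1.00 at T = −6 down to 0.00 at T = 10, matching the rule of thumb that lower T means a larger spheroid fraction.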
Galaxy morphological classification
Astronomy
1,261
14,423,438
https://en.wikipedia.org/wiki/Povl%20Ahm
Povl Ahm (26 September 1926 – 15 May 2005) was a structural engineer and former chairman of Ove Arup & Partners. Life Born in Aarhus, Denmark, Ahm attended the Polyteknisk Læreanstalt in Copenhagen, from where he graduated in 1949. Ahm married Birgit Moller in 1953, with whom he had two sons, Carsten Ahm and Peter Ahm. He was a keen sportsman, and a good footballer. He played for the London amateur team Corinthian-Casuals and played in the 1956 Amateur Cup Final at Wembley Stadium. He died of cancer on 15 May 2005. Career He joined the firm Ove Arup and Partners in London in 1952, where he worked on Coventry Cathedral with Basil Spence. In his own words: "It was an architectural concept showing clearly the ecclesiastical functions but without any clear definition of structural concept, for so far no engineer had been involved in the design." Ahm was given great responsibility on this project, working directly with Ove Arup. He also worked on early conceptual design schemes for the Sydney Opera House, and worked on other projects, including Smithfield Market, London and Centre Pompidou, Paris – some of Ove Arup & Partners' most prestigious projects. The architect of Sydney Opera House, Jørn Utzon, later went on to design a house for Ahm in Hertfordshire - a project which avoided the many problems of Sydney Opera House. In 1957 Ahm was made an associate partner of Ove Arup & Partners, and in 1965 he was made a full partner, becoming a director of the firm after its ownership was rearranged in 1977 (the firm was now owned in trust for the staff). By winning the competition to design the Gateshead Viaduct in 1965, Ahm started the firm's new transport group, specialising in bridges. From 1989 to 1992 he was chairman of the firm. He was made a Fellow of the Royal Academy of Engineering in 1981. Ahm was an active member of the Institution of Civil Engineers, acting as a Council Member twice, and becoming Vice Chairman of Registered Engineers for Disaster Relief from 1989 to 1993. From 1992 to 1996 he was chairman of the Association of Consulting Engineers. Notable projects Coventry Cathedral, St Catherine's College, Oxford, 1960 University of Sussex, 1962 44 West Common Way (Ahm House), Harpenden, Hertfordshire, 1963 Gateshead Viaduct, 1965 Centre Pompidou, 1974 British Embassy in Rome, 1975 Danish Embassy in London, 1978 Awards Ahm was awarded the ICE's first gold medal in 1993; the same year he received a CBE for services to engineering. He received an honorary doctorate from University of Warwick in 1994. References Danish civil engineers Corinthian-Casuals F.C. players Structural engineers 1926 births 2005 deaths Fellows of the Royal Academy of Engineering Men's association football players not categorized by position Danish men's footballers Footballers from Aarhus 20th-century Danish engineers 20th-century Danish sportsmen Expatriate men's footballers in England
Povl Ahm
Engineering
627
76,031,128
https://en.wikipedia.org/wiki/Portland%20Women%20in%20Technology
Portland Women in Technology (also known as PDX Women in Tech, PDX WIT, or PDXWIT) was a 501(c)(3) nonprofit organization based in Portland, Oregon with the mission of advancing inclusion in the technology industry. It hosted four to six events per month, ranging from member-driven events to monthly happy hours. In 2021, all formerly in-person PDXWIT events became virtual due to the COVID-19 pandemic. The organization was to be dissolved in April 2024 due to lack of funding. History Founding After attending the Grace Hopper Celebration of Women in Computing in November 2011, Megan Bigelow was inspired but also disheartened that she had to attend a dedicated conference to be surrounded by a large number of women in technology. Bigelow initially posted on LinkedIn about regular happy hour events to continue meeting women in technology. Bigelow eventually founded the organization, along with Kasey Tonsfeldt, in 2012, when Bigelow discovered her salary was 30% less than that of a man with an equivalent job title. The organization became an established nonprofit six years after starting as a community group. Mission statement PDXWIT's original mission statement was "We encourage women, non-binary and underrepresented people to join tech and support and empower them so they'll stay in tech." In 2021, the organization updated its mission statement to "We are building a better tech industry by creating access, dismantling inequities and fueling belonging." The motivations for this update were to avoid focusing on only certain underrepresented populations and to emphasize improving the culture of the tech industry. Executive director changes In 2018, Elizabeth Stock was chosen as PDXWIT's first executive director. In February 2020, founder and board president Megan Bigelow left the organization in order to focus on her personal life. In May 2022, executive director Elizabeth Stock stepped down from the position after four years to prioritize their family. During Stock's tenure, PDXWIT's corporate sponsors increased from 25 to 90 and the operating budget increased from $100,000 to $500,000. Rihana Mungin led the organization in the interim, alongside Dawn Mott and Isabel Rodriguez. In December 2022, Hazel Valdez, who had previously worked with the group from 2017 to 2020, was chosen to be the CEO. Her vision for the organization included collaborating with other community organizations to obtain funding and taking PDXWIT national. Impact Board update PDXWIT's mission for inclusion started with its own board. Upon reflection two years after becoming an established nonprofit, the board realized most of its work had been focused on the challenges facing white women, instead of the BIPOC or LGBTQ-identifying communities it initially aimed to serve. As a result, PDXWIT developed new interview screening criteria, updated its outreach efforts, and shared its interview questions ahead of time. The effect of these changes showed when comparing the 2018 board composition (100% cisgender straight women and 80% white) with the 2020 board composition (80% BIPOC and with LGBTQ representation). State of the Community Survey PDXWIT surveys the technology community to better understand the challenges it faces. Scholarship In 2018, PDXWIT announced it had received a $10,000 grant from The Folley Family Foundation to award scholarships of up to $2,500. The intent of this scholarship is to help cover costs for travel, registration, or per diem for individuals attending tech and women-in-business events.
Events PDXWIT would host four to six events per month. Some events include: marquee events, monthly happy hours, an annual summer soirée, a quarterly hiring event called Get Hired Up, and member-driven events that include discussions and workshops. In 2021, all formerly in-person PDXWIT events became virtual due to the COVID-19 pandemic. Some events, like Get Hired Up, were altered to adjust for virtual gatherings. Community Since its beginnings in 2011, PDXWIT had over 8,000 members, hosted over 400 events, and paired over 1,200 aspiring technology workers with mentors. In 2016, PDXWIT was asked by Oregon Public Broadcasting (OPB) to help write accurate job descriptions. In December 2023, Cambia Health Solutions helped host PDXWIT's Winter Soirée & Awards Ceremony to celebrate those who have helped advance women in technology. PDXWIT raised $2,000 from the event. Dissolution On February 6, 2024, PDXWIT's board announced its vote to shut down the organization due to "falling corporate funding for sponsorship". Lack of sponsorship renewals, failed return on investment in fundraising opportunities, and competition for limited nonprofit funding among corporate entities all contributed to this decision. References External links PDXWIT 2022 State of the Community Survey PDXWIT 2020 State of the Community Survey and its archive Portland, Oregon Non-profit organizations based in Oregon Diversity in computing Women in technology
Portland Women in Technology
Technology
1,052
23,642,182
https://en.wikipedia.org/wiki/Dehalogenation
In organic chemistry, dehalogenation is a set of chemical reactions that involve the cleavage of carbon-halogen bonds; as such, it is the inverse of halogenation. Dehalogenations come in many varieties, including defluorination (removal of fluorine), dechlorination (removal of chlorine), debromination (removal of bromine), and deiodination (removal of iodine). Incentives to investigate dehalogenations include both constructive and destructive goals. Complicated organic compounds such as pharmaceutical drugs are occasionally generated by dehalogenation. Many organohalides are hazardous, so their dehalogenation is one route to their detoxification. Mechanistic and thermodynamic concepts Removal of a halogen atom from an organohalide generates a radical. Such reactions are difficult to achieve and, when they can be achieved, often lead to complicated mixtures. When a pair of halides are mutually adjacent (vicinal), their removal is favored. Such reactions give alkenes in the case of vicinal alkyl dihalides: R(X)CH–CH(X)R' + M → RCH=CHR' + MX2, where M is a reducing metal such as zinc. Most desirable from the perspective of remediation are dehalogenations by hydrogenolysis, i.e. the replacement of a C–X bond by a C–H bond. Such reactions are amenable to catalysis: R–X + H2 → R–H + HX (a rough enthalpy estimate for this reaction appears in the sketch below). The rate of dehalogenation depends on the strength of the bond between the carbon and halogen atoms. The bond dissociation energies of carbon-halogen bonds are, in increasing order: C–I (234 kJ/mol), C–Br (293 kJ/mol), C–Cl (351 kJ/mol), and C–F (452 kJ/mol). Thus, for the same structures, the ease of dehalogenation follows the order C–I > C–Br > C–Cl > C–F. Additionally, the rate of dehalogenation of alkyl halides varies with the steric environment, following the trend tertiary > secondary > primary halides. Applications Since organochlorine compounds are the most abundant organohalides, most dehalogenations entail manipulation of C–Cl bonds. Organic synthesis Of some interest in organic synthesis, electropositive metals react with many organic halides in a metal-halogen exchange: R–X + 2 M → R–M + MX. The resulting organometallic compound is susceptible to hydrolysis: R–M + H2O → R–H + M–OH. Heavily studied examples are found in organolithium chemistry and organomagnesium chemistry. Some illustrative cases follow. Lithium-halogen exchange is essentially irrelevant to remediation, but the method is useful for fine chemical synthesis. Sodium metal has also been used for dehalogenation. Removal of a halogen atom from an aryl halide in the presence of a Grignard reagent and water, forming a new compound, is known as Grignard degradation. Dehalogenation using Grignard reagents is a two-step hydrodehalogenation process: the reaction begins with the formation of the alkyl- or arylmagnesium halide, followed by addition of a proton source to form the dehalogenated product. Egorov and his co-workers have reported dehalogenation of benzyl halides using atomic magnesium in the ³P state at 600 °C; toluene and bibenzyls were produced as the products of the reaction. Morrison and his co-workers also reported dehalogenation of organic halides by flash vacuum pyrolysis using magnesium. With transition metal complexes Many low-valent and electron-rich transition metals effect stoichiometric dehalogenation. The reaction is of practical interest in the context of organic synthesis, e.g. the Cu-promoted Ullmann coupling, and is mainly conducted stoichiometrically. Some metalloenzymes, as well as vitamin B12 and coenzyme F430, are capable of catalytic dehalogenation.
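To illustrate the bond-strength reasoning above, here is a rough thermochemical sketch. The C–X bond energies are those quoted in the text; the H–H, C–H, and H–X values are typical textbook figures and should be treated as assumptions, so the results are only order-of-magnitude estimates:

```python
# Bond dissociation energies in kJ/mol. C-X values are quoted in the text;
# the remaining values are typical textbook figures (assumed, not sourced).
BDE = {
    "C-I": 234, "C-Br": 293, "C-Cl": 351, "C-F": 452,
    "H-H": 436, "C-H": 413,
    "H-I": 298, "H-Br": 366, "H-Cl": 431, "H-F": 567,
}

def hydrodehalogenation_enthalpy(x):
    """Rough reaction enthalpy (kJ/mol) for R-X + H2 -> R-H + H-X,
    estimated as (bonds broken) minus (bonds formed)."""
    broken = BDE[f"C-{x}"] + BDE["H-H"]
    formed = BDE["C-H"] + BDE[f"H-{x}"]
    return broken - formed

for x in ("I", "Br", "Cl", "F"):
    print(f"C-{x}: estimated dH = {hydrodehalogenation_enthalpy(x):+.0f} kJ/mol")
```

All four estimates come out exothermic, consistent with hydrodehalogenation being thermodynamically favorable, with the margin growing in the order I < Br < Cl < F.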
Of great interest are hydrodehalogenations, especially of chlorinated precursors: R–Cl + H2 → R–H + HCl. Further reading
Gotpagar, J.; Grulke, E.; Bhattacharyya, D. Reductive dehalogenation of trichloroethylene: kinetic models and
Hetflejš, J.; Czakkoova, M.; Rericha, R.; Vcelak, J. Catalyzed dehalogenation of delor 103 by sodium hydridoaluminate. Chemosphere 2001, 44, 1521.
Kagoshima, H.; Hashimoto, Y.; Oguro, D.; Kutsuna, T.; Saigo, K. Triphenylphosphine/germanium(IV) chloride combination: a new agent for the reduction of α-bromo carboxylic acid derivatives. Tetrahedron 1998, 39, 1203-1206.
References Halogenation reactions Organic reactions Inorganic reactions Halogens
Dehalogenation
Chemistry
1,001
262,252
https://en.wikipedia.org/wiki/Pyrolysis
Pyrolysis is the process of thermal decomposition of materials at elevated temperatures, often in an inert atmosphere without access to oxygen. Etymology The word pyrolysis is coined from the Greek-derived elements pyro- (from Ancient Greek πῦρ : pûr - "fire, heat, fever") and lysis (λύσις : lúsis - "separation, loosening"). Applications Pyrolysis is most commonly used in the treatment of organic materials. It is one of the processes involved in the charring of wood or the pyrolysis of biomass. In general, pyrolysis of organic substances produces volatile products and leaves char, a carbon-rich solid residue. Extreme pyrolysis, which leaves mostly carbon as the residue, is called carbonization. Pyrolysis is considered one of the steps in the processes of gasification or combustion. Laypeople often confuse pyrolysis gas with syngas. Pyrolysis gas has a high percentage of heavy tar fractions, which condense at relatively high temperatures and, unlike syngas, prevent its direct use in gas burners and internal combustion engines. The process is used heavily in the chemical industry, for example, to produce ethylene, many forms of carbon, and other chemicals from petroleum, coal, and even wood, or to produce coke from coal. It is also used in the conversion of natural gas (primarily methane) into hydrogen gas and solid carbon char, recently introduced on an industrial scale. Aspirational applications of pyrolysis would convert biomass into syngas and biochar, waste plastics back into usable oil, or waste into safely disposable substances. Terminology Pyrolysis is one of the various types of chemical degradation processes that occur at higher temperatures (above the boiling point of water or other solvents). It differs from other processes like combustion and hydrolysis in that it usually does not involve the addition of other reagents such as oxygen (O2, in combustion) or water (in hydrolysis). Pyrolysis produces solids (char), condensable liquids (light and heavy oils and tar), and non-condensable gases. Pyrolysis is different from gasification. In the chemical process industry, pyrolysis refers to a partial thermal degradation of carbonaceous materials that takes place in an inert (oxygen-free) atmosphere and produces gases, liquids, and solids. Pyrolysis can be extended to full gasification, which produces mainly gaseous output, often with the addition of, e.g., steam to gasify residual carbonaceous solids; see steam reforming. Types Specific types of pyrolysis include: Carbonization, the complete pyrolysis of organic matter, which usually leaves a solid residue that consists mostly of elemental carbon. Methane pyrolysis, the direct conversion of methane to hydrogen fuel and separable solid carbon, sometimes using molten metal catalysts. Hydrous pyrolysis, in the presence of superheated water or steam, producing hydrogen and substantial atmospheric carbon dioxide. Dry distillation, as in the original production of sulfuric acid from sulfates. Destructive distillation, as in the manufacture of charcoal, coke and activated carbon. Charcoal burning, the production of charcoal. Tar production by destructive distillation of wood in tar kilns. Caramelization of sugars. High-temperature cooking processes such as roasting, frying, toasting, and grilling. Cracking of heavier hydrocarbons into lighter ones, as in oil refining. Thermal depolymerization, which breaks down plastics and other polymers into monomers and oligomers.
Ceramization, involving the formation of polymer-derived ceramics from preceramic polymers under an inert atmosphere. Catagenesis, the natural conversion of buried organic matter to fossil fuels. Flash vacuum pyrolysis, used in organic synthesis. Other pyrolysis types come from a different classification that focuses on the pyrolysis operating conditions and heating system used, which have an impact on the yield of the pyrolysis products. History Pyrolysis has been used for turning wood into charcoal since ancient times. The ancient Egyptians used the liquid fraction obtained from the pyrolysis of cedar wood in their embalming process. The dry distillation of wood remained the major source of methanol into the early 20th century. Pyrolysis was instrumental in the discovery of many chemical substances, such as phosphorus from ammonium sodium hydrogen phosphate in concentrated urine, oxygen from mercuric oxide, and various nitrates. General processes and mechanisms Pyrolysis generally consists of heating the material above its decomposition temperature, breaking chemical bonds in its molecules. The fragments usually become smaller molecules, but may combine to produce residues with larger molecular mass, even amorphous covalent solids. In many settings, some amounts of oxygen, water, or other substances may be present, so that combustion, hydrolysis, or other chemical processes may occur besides pyrolysis proper. Sometimes those chemicals are added intentionally, as in the burning of firewood, in the traditional manufacture of charcoal, and in the steam cracking of crude oil. Conversely, the starting material may be heated in a vacuum or in an inert atmosphere to avoid chemical side reactions (such as combustion or hydrolysis). Pyrolysis in a vacuum also lowers the boiling point of the byproducts, improving their recovery. When organic matter is heated at increasing temperatures in open containers, the following processes generally occur, in successive or overlapping stages: Below about 100 °C, volatiles, including some water, evaporate. Heat-sensitive substances, such as vitamin C and proteins, may begin to change or decompose even at this stage. At about 100 °C or slightly higher, any remaining water that is merely absorbed in the material is driven off. This process consumes a lot of energy, so the temperature may stop rising until all water has evaporated. Water trapped in the crystal structure of hydrates may come off at somewhat higher temperatures. Some solid substances, like fats, waxes, and sugars, may melt and separate. Between 100 and 500 °C, many common organic molecules break down. Most sugars start decomposing at 160–180 °C. Cellulose, a major component of wood, paper, and cotton fabrics, decomposes at about 350 °C. Lignin, another major wood component, starts decomposing at about 350 °C, but continues releasing volatile products up to 500 °C. The decomposition products usually include water, carbon monoxide and/or carbon dioxide, as well as a large number of organic compounds. Gases and volatile products leave the sample, and some of them may condense again as smoke. Generally, this process also absorbs energy. Some volatiles may ignite and burn, creating a visible flame. The non-volatile residues typically become richer in carbon and form large disordered molecules, with colors ranging between brown and black. At this point the matter is said to have been "charred" or "carbonized" (the approximate temperature windows above are summarized in the sketch below). 
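A minimal sketch of these stages (Python; the thresholds are the approximate figures quoted above, treated as sharp cut-offs purely for illustration; real behavior depends on the material, heating rate, and atmosphere):

    # Illustrative only: maps a temperature to the pyrolysis stage described
    # above. The thresholds are the approximate figures quoted in the text.
    def pyrolysis_stage(temp_c: float) -> str:
        """Rough label for what happens to generic organic matter at temp_c (deg C)."""
        if temp_c < 100:
            return "drying: volatiles, including some water, evaporate"
        if temp_c < 160:
            return "absorbed water driven off; fats, waxes, and sugars may melt"
        if temp_c < 350:
            return "decomposition begins: most sugars break down at 160-180 C"
        if temp_c < 500:
            return "cellulose (~350 C) and lignin (350-500 C) decompose; charring"
        return "carbonization: residue is mostly carbon"

    if __name__ == "__main__":
        for t in (80, 120, 200, 400, 600):
            print(f"{t:>4} C -> {pyrolysis_stage(t)}")
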
At 200–300 °C, if oxygen has not been excluded, the carbonaceous residue may start to burn, in a highly exothermic reaction, often with little or no visible flame. Once carbon combustion starts, the temperature rises spontaneously, turning the residue into a glowing ember and releasing carbon dioxide and/or monoxide. At this stage, some of the nitrogen still remaining in the residue may be oxidized into nitrogen oxides. Sulfur and other elements like chlorine and arsenic may be oxidized and volatilized at this stage. Once combustion of the carbonaceous residue is complete, a powdery or solid mineral residue (ash) is often left behind, consisting of inorganic oxidized materials of high melting point. Some of the ash may leave during combustion, entrained by the gases as fly ash or particulate emissions. Metals present in the original matter usually remain in the ash as oxides or carbonates, such as potash. Phosphorus, from materials such as bone, phospholipids, and nucleic acids, usually remains as phosphates. Safety challenges Because pyrolysis takes place at high temperatures which exceed the autoignition temperature of the produced gases, an explosion risk exists if oxygen is present. Pyrolysis systems therefore require careful temperature control, which can be accomplished with an open-source pyrolysis controller. Pyrolysis also produces various toxic gases, mainly carbon monoxide. The greatest risk of fire, explosion and release of toxic gases comes when the system is starting up or shutting down, operating intermittently, or during operational upsets. Inert gas purging is essential to manage inherent explosion risks. The procedure is not trivial, and failure to keep oxygen out has led to accidents. Occurrence and uses Clandestine chemistry Conversion of CBD to THC can be brought about by pyrolysis. Cooking Pyrolysis has many applications in food preparation. Caramelization is the pyrolysis of sugars in food (often after the sugars have been produced by the breakdown of polysaccharides). The food browns and changes flavor. The distinctive flavors are used in many dishes; for instance, caramelized onion is used in French onion soup. The temperatures needed for caramelization lie above the boiling point of water. Frying oil can easily rise above the boiling point. Putting a lid on the frying pan keeps the water in, and some of it re-condenses, keeping the temperature below browning temperatures for longer. Pyrolysis of food can also be undesirable, as in the charring of burnt food (at temperatures too low for the oxidative combustion of carbon to produce flames and burn the food to ash). Coke, carbon, charcoals, and chars Carbon and carbon-rich materials have desirable properties but are nonvolatile, even at high temperatures. Consequently, pyrolysis is used to produce many kinds of carbon; these can be used for fuel, as reagents in steelmaking (coke), and as structural materials. Charcoal is a less smoky fuel than unpyrolyzed wood. Some cities ban, or used to ban, wood fires; when residents only use charcoal (and similarly treated rock coal, called coke), air pollution is significantly reduced. In cities where people do not generally cook or heat with fires, this is not needed. 
In the mid-20th century, "smokeless" legislation in Europe required cleaner-burning techniques, such as coke fuel and smoke-burning incinerators, as an effective measure to reduce air pollution. The coke-making or "coking" process consists of heating the material in "coking ovens" to very high temperatures (up to ) so that the molecules are broken down into lighter volatile substances, which leave the vessel, and a porous but hard residue that is mostly carbon and inorganic ash. The amount of volatiles varies with the source material, but is typically 25–30% of it by weight. High-temperature pyrolysis is used on an industrial scale to convert coal into coke. This is useful in metallurgy, where the higher temperatures are necessary for many processes, such as steelmaking. Volatile by-products of this process are also often useful, including benzene and pyridine. Coke can also be produced from the solid residue left from petroleum refining. The original vascular structure of the wood and the pores created by escaping gases combine to produce a light and porous material. By starting with a dense wood-like material, such as nutshells or peach stones, one obtains a form of charcoal with particularly fine pores (and hence a much larger pore surface area), called activated carbon, which is used as an adsorbent for a wide range of chemical substances. Biochar is the residue of incomplete organic pyrolysis, e.g., from cooking fires. It is a key component of the terra preta soils associated with ancient indigenous communities of the Amazon basin. Terra preta is much sought by local farmers for its superior fertility and capacity to promote and retain an enhanced suite of beneficial microbiota, compared to the typical red soil of the region. Efforts are underway to recreate these soils through biochar, the solid residue of pyrolysis of various materials, mostly organic waste. Carbon fibers are filaments of carbon that can be used to make very strong yarns and textiles. Carbon fiber items are often produced by spinning and weaving the desired item from fibers of a suitable polymer, and then pyrolyzing the material at a high temperature (from ). The first carbon fibers were made from rayon, but polyacrylonitrile has become the most common starting material. For their first workable electric lamps, Joseph Wilson Swan and Thomas Edison used carbon filaments made by pyrolysis of cotton yarns and bamboo splinters, respectively. Pyrolysis is the reaction used to coat a preformed substrate with a layer of pyrolytic carbon. This is typically done in a fluidized bed reactor heated to . Pyrolytic carbon coatings are used in many applications, including artificial heart valves. Liquid and gaseous biofuels Pyrolysis is the basis of several methods for producing fuel from biomass, i.e. lignocellulosic biomass. Crops studied as biomass feedstock for pyrolysis include native North American prairie grasses such as switchgrass and bred versions of other grasses such as Miscanthus giganteus. Other sources of organic matter as feedstock for pyrolysis include greenwaste, sawdust, waste wood, leaves, vegetables, nut shells, straw, cotton trash, rice hulls, and orange peels. Animal waste, including poultry litter, dairy manure, and potentially other manures, is also under evaluation. Some industrial byproducts are also suitable feedstocks, including paper sludge, distillers grain, and sewage sludge. Among the biomass components, the pyrolysis of hemicellulose happens between 210 and 310 °C. 
The pyrolysis of cellulose starts from 300 to 315 °C and ends at 360–380 °C, with a peak at 342–354 °C. Lignin starts to decompose at about 200 °C and continues until 1000 °C. Synthetic diesel fuel made by pyrolysis of organic materials is not yet economically competitive. Higher efficiency is sometimes achieved by flash pyrolysis, in which finely divided feedstock is quickly heated to between for less than two seconds. Syngas is usually produced by pyrolysis. The low quality of oils produced through pyrolysis can be improved by physical and chemical processes, which might drive up production costs, but may make sense economically as circumstances change. There is also the possibility of integrating with other processes such as mechanical biological treatment and anaerobic digestion. Fast pyrolysis is also being investigated for biomass conversion. Fuel bio-oil can also be produced by hydrous pyrolysis. Methane pyrolysis for hydrogen Methane pyrolysis is an industrial process for "turquoise" hydrogen production from methane by removing solid carbon from natural gas. This one-step process produces hydrogen in high volume at low cost (less than steam reforming with carbon sequestration). No greenhouse gas is released. No deep well injection of carbon dioxide is needed. Only water is released when hydrogen is used as the fuel for fuel-cell electric heavy truck transportation, gas turbine electric power generation, and hydrogen for industrial processes, including producing ammonia fertilizer and cement. Methane pyrolysis operates at around 1065 °C and produces hydrogen from natural gas while allowing easy removal of the carbon (solid carbon is a byproduct of the process). The industrial-quality solid carbon can then be sold or landfilled and is not released into the atmosphere, avoiding emission of greenhouse gas (GHG) or ground water pollution from a landfill. In 2015, a company called Monolith Materials built a pilot plant in Redwood City, CA to study scaling methane pyrolysis using renewable power in the process. A successful pilot project then led to a larger commercial-scale demonstration plant in Hallam, Nebraska in 2016. As of 2020, this plant is operational and can produce around 14 metric tons of hydrogen per day. In 2021, the US Department of Energy backed Monolith Materials' plans for major expansion with a $1B loan guarantee. The funding will help produce a plant capable of generating 164 metric tons of hydrogen per day by 2024. Pilots with gas utilities and biogas plants are underway with companies like Modern Hydrogen. Volume production is also being evaluated in the BASF "methane pyrolysis at scale" pilot plant, by the chemical engineering team at the University of California, Santa Barbara, and in research laboratories such as the Karlsruhe Liquid-metal Laboratory (KALLA). The power consumed for process heat is only one-seventh of that consumed by the water electrolysis method for producing hydrogen. The Australian company Hazer Group was founded in 2010 to commercialise technology originally developed at the University of Western Australia. The company was listed on the ASX in December 2015. It is completing a commercial demonstration project to produce renewable hydrogen and graphite from wastewater, using iron ore as a process catalyst, with technology created by the University of Western Australia (UWA). 
The Commercial Demonstration Plant project is an Australian first and is expected to produce around 100 tonnes of fuel-grade hydrogen and 380 tonnes of graphite each year starting in 2023; it had originally been scheduled to commence in 2022. "10 December 2021: Hazer Group (ASX: HZR) regret to advise that there has been a delay to the completion of the fabrication of the reactor for the Hazer Commercial Demonstration Project (CDP). This is expected to delay the planned commissioning of the Hazer CDP, with commissioning now expected to occur after our current target date of 1Q 2022." The Hazer Group has collaboration agreements with Engie for a facility in France (May 2023), a memorandum of understanding with Chubu Electric and Chiyoda in Japan (April 2023), and an agreement with Suncor Energy and FortisBC to develop a 2,500-tonne-per-annum Burrard-Hazer hydrogen production plant in Canada (April 2022). The American company C-Zero's technology converts natural gas into hydrogen and solid carbon. The hydrogen provides clean, low-cost energy on demand, while the carbon can be permanently sequestered. C-Zero announced in June 2022 that it closed a $34 million financing round led by SK Gas, a subsidiary of South Korea's second-largest conglomerate, the SK Group. SK Gas was joined by two other new investors, Engie New Ventures and Trafigura, one of the world's largest physical commodities trading companies, in addition to participation from existing investors including Breakthrough Energy Ventures, Eni Next, Mitsubishi Heavy Industries, and AP Ventures. Funding was for C-Zero's first pilot plant, which was expected to be online in Q1 2023. The plant may be capable of producing up to 400 kg of hydrogen per day from natural gas with no CO2 emissions. One of the world's largest chemical companies, BASF, has been researching methane pyrolysis for hydrogen for more than 10 years. Ethylene Pyrolysis is used to produce ethylene, the chemical compound produced on the largest scale industrially (>110 million tons/year in 2005). In this process, hydrocarbons from petroleum are heated to around in the presence of steam; this is called steam cracking. The resulting ethylene is used to make antifreeze (ethylene glycol), PVC (via vinyl chloride), and many other polymers, such as polyethylene and polystyrene. Semiconductors The process of metalorganic vapour-phase epitaxy (MOCVD) entails pyrolysis of volatile organometallic compounds to give semiconductors, hard coatings, and other applicable materials. The reactions entail thermal degradation of precursors, with deposition of the inorganic component and release of the hydrocarbons as gaseous waste. Since it is an atom-by-atom deposition, these atoms organize themselves into crystals to form the bulk semiconductor. Raw polycrystalline silicon is produced by the chemical vapor deposition of silane: SiH4 → Si + 2 H2. Gallium arsenide, another semiconductor, forms upon co-pyrolysis of trimethylgallium and arsine: Ga(CH3)3 + AsH3 → GaAs + 3 CH4. Waste management Pyrolysis can also be used to treat municipal solid waste and plastic waste. The main advantage is the reduction in the volume of the waste. In principle, pyrolysis will regenerate the monomers (precursors) of the polymers that are treated, but in practice the process is neither a clean nor an economically competitive source of monomers. In tire waste management, tire pyrolysis is a well-developed technology. Other products from car tire pyrolysis include steel wires, carbon black and bitumen. The area faces legislative, economic, and marketing obstacles. 
Oil derived from tire rubber pyrolysis has a high sulfur content, which gives it high potential as a pollutant; consequently, it should be desulfurized. Alkaline pyrolysis of sewage sludge at a low temperature of 500 °C can enhance H2 production with in-situ carbon capture. The use of NaOH (sodium hydroxide) has the potential to produce an H2-rich gas that can be used for fuel cells directly. In early November 2021, the U.S. State of Georgia announced a joint effort with Igneo Technologies to build an $85 million large electronics recycling plant in the Port of Savannah. The project will focus on lower-value, plastics-heavy devices in the waste stream, using multiple shredders and furnaces with pyrolysis technology. One-stepwise and two-stepwise pyrolysis of tobacco waste Pyrolysis has also been used to mitigate tobacco waste. In one method, tobacco waste was separated into two categories: TLW (tobacco leaf waste) and TSW (tobacco stick waste). TLW was defined as waste from cigarettes and TSW as waste from electronic cigarettes. Both TLW and TSW were dried at 80 °C for 24 hours and stored in a desiccator. Samples were ground so that the contents were uniform. Tobacco waste (TW) also contains inorganic (metal) content, which was determined using an inductively coupled plasma optical emission spectrometer. Thermogravimetric analysis was used to thermally degrade four samples (TLW, TSW, glycerol, and guar gum), monitored under specific dynamic temperature conditions. About one gram each of TLW and TSW was used in the pyrolysis tests. During these tests, CO2 and N2 were used as atmospheres inside a tubular reactor built from quartz tubing. For both the CO2 and N2 atmospheres the flow rate was 100 mL min−1. External heating was provided by a tubular furnace. The pyrogenic products were classified into three phases. The first phase was biochar, a solid residue produced by the reactor at 650 °C. The second phase, liquid hydrocarbons, was collected in a cold solvent trap and sorted using chromatography. The third and final phase, gases, was analyzed using an online micro-GC unit. Two different types of experiments were conducted: one-stepwise pyrolysis and two-stepwise pyrolysis. One-stepwise pyrolysis consisted of a constant heating rate (10 °C min−1) from 30 to 720 °C. In the second step of the two-stepwise pyrolysis test, the pyrolysates from the first step were pyrolyzed in a second heating zone controlled isothermally at 650 °C. The two-stepwise pyrolysis focused primarily on how CO2 affects carbon redistribution when heat is added through the second heating zone. The thermolytic behaviors of TLW and TSW were first examined in both the CO2 and N2 environments. For both TLW and TSW, the thermolytic behaviors were identical at or below 660 °C in the CO2 and N2 environments. Differences between the environments appear when temperatures increase above 660 °C: the residual mass percentages significantly decrease in the CO2 environment compared to those in the N2 environment. This observation is likely due to the Boudouard reaction (CO2 + C ⇌ 2 CO), in which gasification becomes spontaneous when temperatures exceed 710 °C. Since these observations were seen at temperatures lower than 710 °C, they are most likely due to the catalytic capabilities of inorganics in TLW. 
Further investigation with ICP-OES measurements found that a fifth of the residual mass percentage was Ca species. CaCO3 is used in cigarette papers and filter material, leading to the explanation that degradation of CaCO3 releases CO2, which reacts with CaO in a dynamic equilibrium state. This explains the mass decay seen between 660 °C and 710 °C. Differential thermogram (DTG) peaks for TLW were compared with those for TSW. TLW had four distinctive peaks at 87, 195, 265, and 306 °C, whereas TSW had two major drop-offs at 200 and 306 °C with one spike in between. The four peaks indicated that TLW contains more diverse types of additives than TSW. The residual mass percentages of TLW and TSW were further compared; the residual mass of TSW was less than that of TLW in both the CO2 and N2 environments, leading to the conclusion that TSW has higher quantities of additives than TLW. The one-stepwise pyrolysis experiment showed different results for the CO2 and N2 environments. During this process the evolution of five notable gases was observed: hydrogen, methane, ethane, carbon dioxide, and ethylene are all produced once the thermolytic rate of TLW begins to slow at 500 °C and above. Thermolysis begins at the same temperatures in both the CO2 and N2 environments, but the concentrations of hydrogen, ethane, ethylene, and methane produced are higher in the N2 environment than in the CO2 environment. The concentration of CO in the CO2 environment is significantly greater as temperatures increase past 600 °C, and this is due to CO2 being liberated from CaCO3 in TLW. This significant increase in CO concentration is why there are lower concentrations of the other gases in the CO2 environment, due to a dilution effect. Since pyrolysis redistributes the carbon of carbon substrates into three pyrogenic products, the CO2 environment is more effective because the reduction of CO2 into CO allows the oxidation of pyrolysates to form CO. In conclusion, the CO2 environment gives a higher yield of gases than of oil and biochar. When the same process is done for TSW the trends are almost identical, so the same explanations can be applied to the pyrolysis of TSW. Harmful chemicals were reduced in the CO2 environment because CO formation reduced the tar. One-stepwise pyrolysis was not very effective at activating CO2 for carbon rearrangement, owing to the high quantities of liquid pyrolysates (tar). Two-stepwise pyrolysis in the CO2 environment allowed greater concentrations of gases thanks to the second heating zone, which was held isothermally at a consistent 650 °C. More reactions between CO2 and gaseous pyrolysates with longer residence time meant that CO2 could further convert pyrolysates into CO. The results showed that two-stepwise pyrolysis was an effective way to decrease tar content and increase gas concentration, by about 10 wt.%, for both TLW (64.20 wt.%) and TSW (73.71 wt.%). Thermal cleaning Pyrolysis is also used for thermal cleaning, an industrial application to remove organic substances such as polymers, plastics and coatings from parts, products or production components like extruder screws, spinnerets and static mixers. During the thermal cleaning process, at temperatures from , organic material is converted by pyrolysis and oxidation into volatile organic compounds, hydrocarbons and carbonized gas. Inorganic elements remain. 
Several types of thermal cleaning systems use pyrolysis: Molten salt baths are among the oldest thermal cleaning systems; cleaning with a molten salt bath is very fast but entails the risk of dangerous splatters and other potential hazards connected with the use of salt baths, such as explosions or highly toxic hydrogen cyanide gas. Fluidized bed systems use sand or aluminium oxide as the heating medium; these systems also clean very fast, but the medium does not melt or boil, nor emit any vapors or odors; the cleaning process takes one to two hours. Vacuum ovens use pyrolysis in a vacuum, avoiding uncontrolled combustion inside the cleaning chamber; the cleaning process takes 8 to 30 hours. Burn-off ovens, also known as heat-cleaning ovens, are gas-fired and used in the painting, coatings, electric motors and plastics industries for removing organics from heavy and large metal parts. Fine chemical synthesis Pyrolysis is used in the production of chemical compounds, mainly, but not only, in the research laboratory. The area of boron-hydride clusters started with the study of the pyrolysis of diborane (B2H6) at ca. 200 °C. Products include the clusters pentaborane and decaborane. These pyrolyses involve not only cracking (to give H2), but also recondensation. Nanoparticles, zirconia, and oxides can be synthesized using an ultrasonic nozzle in a process called ultrasonic spray pyrolysis (USP). Other uses and occurrences Pyrolysis is used to turn organic materials into carbon for the purpose of carbon-14 dating. Pyrolysis liquids from slow pyrolysis of bark and hemp have been tested for their antifungal activity against wood-decaying fungi, showing potential to substitute for current wood preservatives, although further tests are still required. However, their ecotoxicity is very variable: while some are less toxic than current wood preservatives, other pyrolysis liquids have shown high ecotoxicity, which may cause detrimental effects in the environment. Pyrolysis of tobacco, paper, and additives, in cigarettes and other products, generates many volatile products (including nicotine, carbon monoxide, and tar) that are responsible for the aroma and negative health effects of smoking. Similar considerations apply to the smoking of marijuana and the burning of incense products and mosquito coils. Pyrolysis occurs during the incineration of trash, potentially generating volatiles that are toxic or contribute to air pollution if not completely burned. Laboratory or industrial equipment sometimes gets fouled by carbonaceous residues that result from coking, the pyrolysis of organic products that come into contact with hot surfaces. PAHs generation Polycyclic aromatic hydrocarbons (PAHs) can be generated from the pyrolysis of different solid waste fractions, such as hemicellulose, cellulose, lignin, pectin, starch, polyethylene (PE), polystyrene (PS), polyvinyl chloride (PVC), and polyethylene terephthalate (PET). PS, PVC, and lignin generate significant amounts of PAHs. Naphthalene is the most abundant of the PAHs generated. When the temperature is increased from 500 to 900 °C, the yields of most PAHs increase. With increasing temperature, the percentage of light PAHs decreases and the percentage of heavy PAHs increases. Study tools Thermogravimetric analysis Thermogravimetric analysis (TGA) is one of the most common techniques used to investigate pyrolysis without the limitations of heat and mass transfer. The results can be used to determine mass loss kinetics, as in the sketch below. 
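A hedged illustration of the Kissinger analysis named just below (Python): the DTG peak temperature Tp shifts with heating rate β, and ln(β/Tp²) is linear in 1/Tp with slope −Ea/R; the peak temperatures used here are invented for demonstration only:

    # Sketch of the Kissinger method: fit ln(beta / Tp^2) against 1/Tp;
    # the slope is -Ea/R. Peak temperatures below are invented numbers.
    import numpy as np

    R = 8.314  # gas constant, J mol^-1 K^-1

    beta = np.array([5.0, 10.0, 20.0, 40.0])       # heating rates, K min^-1
    tp_c = np.array([335.0, 345.0, 356.0, 368.0])  # hypothetical DTG peaks, deg C
    tp_k = tp_c + 273.15                           # convert to kelvin

    slope, _ = np.polyfit(1.0 / tp_k, np.log(beta / tp_k**2), 1)
    ea_kj = -slope * R / 1000.0  # apparent activation energy, kJ mol^-1

    print(f"apparent activation energy ~ {ea_kj:.0f} kJ/mol")
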
Activation energies can be calculated using the Kissinger method or the peak analysis–least squares method (PA-LSM). TGA can be coupled with Fourier-transform infrared spectroscopy (FTIR) and mass spectrometry. As the temperature increases, the volatiles generated from pyrolysis can be measured. Macro-TGA In TGA, the sample is loaded before the temperature is increased, and the heating rate is low (less than 100 °C min−1). Macro-TGA can use gram-scale samples to investigate the effects of pyrolysis with mass and heat transfer. Pyrolysis–gas chromatography–mass spectrometry Pyrolysis–gas chromatography–mass spectrometry (Py-GC-MS) is an important laboratory procedure for determining the structure of compounds. Machine learning In recent years, machine learning has attracted significant research interest for predicting yields, optimizing parameters, and monitoring pyrolytic processes. See also Dextrin Gasification Hydrogen Hydrogen production Karrick process Pyrolytic coating Thermal decomposition Torrefaction Wood gas References External links In Situ Catalytic Fast Pyrolysis Technology Pathway National Renewable Energy Laboratory Organic reactions Chemical processes Industrial processes Oil shale technology Biodegradable waste management Waste treatment technology Fire protection
Pyrolysis
Chemistry,Engineering
6,996
53,764,676
https://en.wikipedia.org/wiki/Aspergillus%20microcysticus
Aspergillus microcysticus is a species of fungus in the genus Aspergillus. Aspergillus microcysticus produces aspochalasin A, aspochalasin C, aspochalasin D, and the antibiotic asposterol. Growth and morphology A. microcysticus has been cultivated on both Czapek yeast extract agar (CYA) plates and Malt Extract Agar Oxoid® (MEAOX) plates. References Further reading microcysticus Fungi described in 1955 Fungus species
Aspergillus microcysticus
Biology
132
701,934
https://en.wikipedia.org/wiki/Fermi%27s%20interaction
In particle physics, Fermi's interaction (also the Fermi theory of beta decay or the Fermi four-fermion interaction) is an explanation of beta decay, proposed by Enrico Fermi in 1933. The theory posits four fermions directly interacting with one another (at one vertex of the associated Feynman diagram). This interaction explains beta decay of a neutron by direct coupling of a neutron with an electron, a neutrino (later determined to be an antineutrino) and a proton. Fermi first introduced this coupling in his description of beta decay in 1933. The Fermi interaction was the precursor to the theory for the weak interaction, where the interaction between the proton–neutron and electron–antineutrino is mediated by a virtual W− boson, of which the Fermi theory is the low-energy effective field theory. According to Eugene Wigner, who together with Jordan introduced the Jordan–Wigner transformation, Fermi's paper on beta decay was his main contribution to the history of physics. History of initial rejection and later publication Fermi first submitted his "tentative" theory of beta decay to the prestigious science journal Nature, which rejected it "because it contained speculations too remote from reality to be of interest to the reader." It has been argued that Nature later admitted the rejection to be one of the great editorial blunders in its history, but Fermi's biographer David N. Schwartz has objected that this is both unproven and unlikely. Fermi then submitted revised versions of the paper to Italian and German publications, which accepted and published them in those languages in 1933 and 1934. The paper did not appear at the time in a primary publication in English. An English translation of the seminal paper was published in the American Journal of Physics in 1968. Fermi found the initial rejection of the paper so troubling that he decided to take some time off from theoretical physics and do only experimental physics. This would lead shortly to his famous work with activation of nuclei with slow neutrons. The "tentativo" Definitions The theory deals with three types of particles presumed to be in direct interaction: initially a "heavy particle" in the "neutron state", which then transitions into its "proton state" with the emission of an electron and a neutrino. Electron state The electron field is expanded over its stationary states as ψ = Σ_s ψ_s a_s, where ψ is the single-electron wavefunction and the ψ_s are its stationary states; a_s is the operator which annihilates an electron in state s, acting on the Fock space by reducing the occupation of that state, and its adjoint a_s* is the creation operator for electron state s. Neutrino state Similarly, the neutrino field is expanded as φ = Σ_σ φ_σ b_σ, where φ is the single-neutrino wavefunction and the φ_σ are its stationary states; b_σ is the operator which annihilates a neutrino in state σ, and its adjoint b_σ* is the creation operator for neutrino state σ. Heavy particle state ρ is the operator introduced by Heisenberg (later generalized into isospin) that acts on a heavy particle state; it has eigenvalue +1 when the particle is a neutron, and −1 if the particle is a proton. Therefore, heavy particle states are represented by two-row column vectors, where (1, 0) represents a neutron and (0, 1) represents a proton (in the representation where ρ is given by the usual spin matrix). The operators that change a heavy particle from a proton into a neutron and vice versa are represented by the corresponding spin raising and lowering operators. u_n and v_n are eigenfunctions for a neutron and a proton, respectively, in the state n. 
Hamiltonian The Hamiltonian is composed of three parts: H = H_heavy + H_light + H_int, where H_heavy represents the energy of the free heavy particles, H_light represents the energy of the free light particles, and H_int gives the interaction. H_heavy is built from the energy operators N and P of the neutron and proton respectively, so that if ρ = +1 the heavy-particle energy is N, and if ρ = −1 it is P. H_light = Σ_s H_s N_s + Σ_σ K_σ M_σ, where H_s is the energy of the electron in the state s in the nucleus's Coulomb field and N_s is the number of electrons in that state; M_σ is the number of neutrinos in the state σ, and K_σ the energy of each such neutrino (assumed to be in a free, plane wave state). The interaction part must contain a term representing the transformation of a proton into a neutron along with the emission of an electron and a neutrino (now known to be an antineutrino), as well as a term for the inverse process; the Coulomb force between the electron and proton is ignored as irrelevant to the β-decay process. Fermi proposes two possible forms for H_int: first, a non-relativistic version which ignores spin, and subsequently a version assuming that the light particles are four-component Dirac spinors, but that the speed of the heavy particles is small relative to c and that the interaction terms analogous to the electromagnetic vector potential can be ignored. In the latter version, ψ and φ are four-component Dirac spinors, ψ* represents the Hermitian conjugate of ψ, and the interaction involves a fixed matrix δ. Matrix elements The state of the system is taken to be given by the tuple (ρ, n, N_1, N_2, ..., M_1, M_2, ...), where ρ specifies whether the heavy particle is a neutron or proton, n is the quantum state of the heavy particle, N_s is the number of electrons in state s, and M_σ is the number of neutrinos in state σ. Using the relativistic version of H_int, Fermi gives the matrix element between the state with a neutron in state n and no electrons or neutrinos present in states s and σ, and the state with a proton in state m and an electron and a neutrino present in states s and σ, as an integral taken over the entire configuration space of the heavy particles (apart from the spin variable). The sign is determined by whether the total number of light particles is odd (−) or even (+). Transition probability To calculate the lifetime of a neutron in a state n according to the usual quantum perturbation theory, the above matrix elements must be summed over all unoccupied electron and neutrino states. This is simplified by assuming that the electron and neutrino eigenfunctions ψ_s and φ_σ are constant within the nucleus (i.e., their Compton wavelength is much larger than the size of the nucleus), so that they may be evaluated at the position of the nucleus. According to Fermi's golden rule, the probability of this transition is governed by the squared matrix element and the difference in the energy of the proton and neutron states. Averaging over all positive-energy neutrino spin and momentum directions (with the density of neutrino states eventually taken to infinity), one obtains an expression involving the rest mass μ of the neutrino and the Dirac matrix β. Noting that the transition probability has a sharp maximum for neutrino momenta that conserve energy, the expression simplifies accordingly. Fermi makes three remarks about this function: Since the neutrino states are considered to be free, the upper limit on the continuous β-spectrum corresponds to the neutrino carrying away essentially no kinetic energy. Since the electron energies always include the electron rest energy, in order for β-decay to occur the proton–neutron energy difference must be at least the electron rest energy. The factor formed from the overlap of the initial and final heavy-particle states in the transition probability is normally of magnitude 1, but in special circumstances it vanishes; this leads to (approximate) selection rules for β-decay. 
Forbidden transitions As noted above, when the overlap between the initial and final heavy particle states vanishes, the associated transition is "forbidden" (or, rather, much less likely than in cases where it is closer to 1). If the description of the nucleus in terms of the individual quantum states of the protons and neutrons is accurate to a good approximation, the overlap vanishes unless the neutron state and the proton state have the same angular momentum; otherwise, the total angular momentum of the entire nucleus before and after the decay must be used. Influence Shortly after Fermi's paper appeared, Werner Heisenberg noted in a letter to Wolfgang Pauli that the emission and absorption of neutrinos and electrons in the nucleus should, at the second order of perturbation theory, lead to an attraction between protons and neutrons, analogously to how the emission and absorption of photons leads to the electromagnetic force. He found the form such a force would take, but noted that contemporary experimental data led to a value that was too small by a factor of a million. The following year, Hideki Yukawa picked up on this idea, but in his theory the neutrinos and electrons were replaced by a new hypothetical particle with a rest mass approximately 200 times that of the electron. Later developments Fermi's four-fermion theory describes the weak interaction remarkably well. Unfortunately, the calculated cross-section, or probability of interaction, grows as the square of the energy, roughly as σ ∝ G_F²E². Since this cross section grows without bound, the theory is not valid at energies much higher than about 100 GeV. Here G_F is the Fermi constant, which denotes the strength of the interaction. This eventually led to the replacement of the four-fermion contact interaction by a more complete theory (UV completion): an exchange of a W or Z boson, as explained in the electroweak theory. The interaction could also explain muon decay via a coupling of a muon, electron-antineutrino, muon-neutrino and electron, with the same fundamental strength of the interaction. This hypothesis was put forward by Gershtein and Zeldovich and is known as the Vector Current Conservation hypothesis. In the original theory, Fermi assumed that the form of interaction is a contact coupling of two vector currents. Subsequently, it was pointed out by Lee and Yang that nothing prevented the appearance of an axial, parity-violating current, and this was confirmed by experiments carried out by Chien-Shiung Wu. The inclusion of parity violation in Fermi's interaction was done by George Gamow and Edward Teller in the so-called Gamow–Teller transitions, which described Fermi's interaction in terms of parity-violating "allowed" decays and parity-conserving "superallowed" decays, corresponding to anti-parallel and parallel electron and neutrino spin states respectively. Before the advent of the electroweak theory and the Standard Model, George Sudarshan and Robert Marshak, and also independently Richard Feynman and Murray Gell-Mann, were able to determine the correct tensor structure (vector minus axial vector, V − A) of the four-fermion interaction. Fermi constant The most precise experimental determination of the Fermi constant comes from measurements of the muon lifetime, which is inversely proportional to the square of G_F (when neglecting the muon mass against the mass of the W boson). 
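That inverse-square dependence can be written out explicitly; as a standard textbook supplement (not reconstructed from this article's stripped formulas), the lowest-order muon decay rate in Fermi theory, in natural units and neglecting the electron mass and radiative corrections, is:

    % Lowest-order muon decay rate in Fermi theory (natural units,
    % electron mass and radiative corrections neglected):
    \Gamma_\mu = \frac{1}{\tau_\mu} = \frac{G_F^{2}\, m_\mu^{5}}{192\,\pi^{3}}

Measuring the muon lifetime τ_μ and mass m_μ therefore fixes G_F.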
In modern terms, the "reduced Fermi constant", that is, the constant in natural units, is G_F/(ħc)³ ≈ 1.1663787 × 10⁻⁵ GeV⁻², with G_F/√2 = g²/(8 M_W²) in natural units. Here, g is the coupling constant of the weak interaction, and M_W is the mass of the W boson, which mediates the decay in question. In the Standard Model, the Fermi constant is related to the Higgs vacuum expectation value v by G_F = 1/(√2 v²), giving v ≈ 246 GeV. More directly, approximately (at tree level for the Standard Model), G_F ≈ πα/(√2 M_W²(1 − M_W²/M_Z²)). This can be further simplified in terms of the Weinberg angle using the relation M_W = M_Z cos θ_W between the W and Z bosons, so that G_F ≈ πα/(√2 M_Z² cos²θ_W sin²θ_W). References Interaction Weak interaction
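A quick numeric check of the vacuum-expectation-value relation quoted above (illustrative Python; the input is the reduced Fermi constant, and the printed figure is the standard ~246 GeV):

    # Higgs vacuum expectation value from the reduced Fermi constant,
    # v = (sqrt(2) * G_F)^(-1/2), in natural units.
    import math

    G_F = 1.1663787e-5  # reduced Fermi constant, GeV^-2

    v = 1.0 / math.sqrt(math.sqrt(2.0) * G_F)
    print(f"Higgs vacuum expectation value: v = {v:.2f} GeV")  # ~246.22 GeV
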
Fermi's interaction
Physics
2,256
1,617,962
https://en.wikipedia.org/wiki/Brick%20nog
Brick nog (nogging or nogged, beam filling) is a construction technique in which bricks are used to fill the gaps in a wooden frame. Such walls may then be covered with tile, weatherboards, or rendering, or the brick may remain exposed on the interior or exterior of the building. The technique was developed in England from the late 1400s to early 1500s, emerging from methods such as wattle and daub and lath and plaster construction, with the bricks being laid in horizontal courses or a herringbone pattern. Brick used in this way is rarely mechanically fastened to the adjacent wood members, generally being held in place only by the mortar bonds and friction. It is an integral part of the building structure that can also serve as fireproofing, soundproofing, or the final exposed surface of the assembly. Generally, the term brick infill is used instead of nogging in half-timbered construction, and the word nog or noggin has also come to be used to describe timber bracing pieces between wall studs in timber frame construction. References Bricks
Brick nog
Engineering
222
9,930,380
https://en.wikipedia.org/wiki/Bottle%20wall
A bottle wall is a wall made out of glass or plastic bottles and binding material. Bottle wall construction This is a building construction style which usually uses glass bottles (although mason jars, glass jugs, and other glass containers may be used also) as masonry units and binds them using adobe, sand, cement, stucco, clay, plaster, mortar or any other joint compound. This results in an intriguing stained-glass-like wall. An alternative is to make the bottle wall from glass jugs filled with ink, supported between two windows. Construction Construction materials Although bottle walls can be constructed in many different ways, they are typically made on a foundation that is set into a trench in the earth to add stability to the wall. The trench is filled with a rubble of pea gravel and then filled in with cement. Rebar can be set into the foundation to add structural integrity. Bottle walls range from one to two bottles thick. Primitive mixtures, such as cob or adobe, can be used as mortar to bind the bottles. The mixture is thickly spread on the previous layer of bottles, followed by the next layer, which is pressed into the mixture. Typically, two fingers of separation are used as a means of spacing, although any kind of spacing can be achieved. Bottles can also be duct-taped together to create a window-type effect. Two similar-sized bottles can be taped together at the openings, allowing a passageway for light. This also traps air and creates a small amount of insulation. Filling glass with liquid that will be subjected to freezing and thawing is not a good idea, but is useful if the glass is protected from temperature extremes. Heat sink When the bottles are filled with a (dark) liquid, or other dark material, the wall can function as a thermal mass, absorbing solar radiation during the day and radiating it back into the space at night, thus dampening diurnal temperature swings (a rough worked estimate appears at the end of this article). This may be a pleasant feature for colder climates, but can turn a room into an oven in hotter climates. Binding mixtures A typical mortar mix is 3:1 mason sand to a pozzolan (fly ash) cement mix. Other mixtures could be made from mortar and clay, adobe, cob, sand or cement. Bottle walls are extremely versatile and could be bonded with almost any binder that can endure the local climate. Bottle houses throughout history The use of empty vessels in construction dates back at least to ancient Rome, where many structures used empty amphorae embedded in concrete. This was not done for aesthetic reasons, but to lighten the load of upper levels of structures, and also to reduce concrete usage. This technique was used for example in the Circus of Maxentius. It is believed that the first bottle house was constructed in 1902 by William F. Peck in Tonopah, Nevada. The house was built using 10,000 bottles of J. Hostetter's Stomach Bitters, which consisted of various herbs in a solution of 47% alcohol. The Peck house was demolished in the early 1980s. Around 1905, Tom Kelly built his house in Rhyolite, Nevada, using 51,000 beer bottles bonded with adobe. Kelly chose bottles because trees were scarce in the desert. Most of the bottles were Busch beer bottles collected from the 50 bars in this Gold Rush town. Rhyolite became a ghost town by 1920. In 1925, Paramount Pictures discovered the Bottle House and had it restored for use in a movie. It then became a museum, but tourism was slow, causing it to close. From 1936 to 1954, Lewis Murphy took care of the house and hosted tourists. 
From 1954 to 1969, Tommy Thompson occupied the house. He tried to make repairs to the house with concrete, which, combined with the desert heat, caused many bottles to crack (Kelly had used adobe mud). Knott's Berry Farm in Buena Park, California, has a bottle house, made from over 3,000 whiskey bottles, that it uses as an "Indian Trader" store today. The house is a remake of the Rhyolite Bottle House, replicated from photos taken by Walter Knott in the early 1950s. Another famous bottle house site was built by the self-taught senior citizen Tressa "Grandma" Prisbrey. Located in Simi Valley, California, Bottle Village is lauded by art scholars, the State of California, the National Register of Historic Places, and in exhibitions as a major artistic achievement. Beginning construction in 1956 at age 60, and working until 1981, Tressa "Grandma" Prisbrey transformed her 1/3-acre lot into Bottle Village, an otherworld of shrines, wishing wells, walkways, random constructions, plus 15 life-size structures, all made from found objects placed in mortar. The name "Bottle Village" comes from the structures themselves, made of tens of thousands of bottles unearthed via daily visits to the dump. The Washington Court Bottle House in Ohio was made with 9,963 bottles of all sizes and colors. The builder was a bottle collector and, to display his collection, he had the bottles built into this house, which was on display at Meyer's Modern Tourist Court. In Alexandria, Louisiana, there is a bottle-house gift shop that still stands today. The bottle house was constructed by Drew Bridges, who used bottles from his drugstore. There are about 3,000 bottles used as masonry units, with railroad ties used as the framing structure. The Kaleva Bottle House in Kaleva, Michigan, was built by John J. Makinen, Sr. (1871–1942) using over 60,000 bottles laid on their sides with the bottoms toward the exterior. The bottles were mostly from his company, The Northwestern Bottling Works. The house was completed in 1941, but he died before he could move in. The building was purchased by the Kaleva Historical Museum in 1981 and is listed on the National Register of Historical Places. Boston Hills Pet Memorial Park in Boston, Massachusetts, has a bottle wall from 1942. It is part of a small building used for storage. The Wimberley Bottle House in Wimberley, Texas, was constructed using over 9,000 soda bottles. It was built in the early 1960s as part of a pioneer town, a simulated Old West town set to be a tourist attraction/theme park. The house was modeled after Knott's Berry Farm's bottle house in California. The Heineken WOBO (World Bottle) While on a world tour of Heineken factories in 1960, Alfred Heineken had an epiphany on the Caribbean island of Curaçao, where he saw many bottles littering the beach because the island had no economic means of returning the bottles to the bottling plants from which they had come. He was also concerned with the lack of affordable building materials and the inadequate living conditions plaguing Curaçao's lower class. Envisioning a solution for these problems, he asked Dutch architect N. John Habraken to design what he called "a brick that holds beer." A similar project was the Block-o-beer-bottle developed in 1959 by the East German Radeberger Brewery. Over the next three years, the Heineken WOBO went through a design process. Some of the early designs were of interlocking and self-aligning bottles. 
The idea derived from a belief that the need for mortar would add complexity and expense to the bottle wall's intended simplicity and affordability. Some designs proved to be effective building materials, but were too heavy and slow-forming to be economically produced. Other designs were rejected by Heineken based on aesthetic preferences. In the end, the bottle that was selected was a compromise between the previous designs. The bottle was designed to be interlocking, laid horizontally, and bonded with cement mortar with a silicon additive. The necks were short and fitted into a large recess in the base; the bottles were square in section, with dimpled sides to bond with the mortar. A x shack would take approximately 1,000 bottles to build. In 1963, 100,000 WOBOs were produced in two sizes, 350 and 500 mm. This size difference was necessary in order to bond the bottles when building a wall, in the same way as a half brick is necessary when building with bricks. Unfortunately, most of them were destroyed, and as such they are now very rare and have become collector's items. Only two WOBO structures exist, and they are both on the Heineken estate in Noordwijk, near Amsterdam. The first was a small shed which had a corrugated iron roof and timber supports, where the builder could not work out how to resolve the junction between necks and bases running in the same direction. Later, a timber double garage was renovated with WOBO siding. Alfred Heineken did not develop the WOBO concept further, and the idea never got a chance to materialize. Rinus van den Berg, a Dutch industrial and architectural designer, designed several buildings while working with John Habraken in the 1970s. One design was published in Domus in 1976. A third WOBO structure was made by Dutch architect Gerard Baar in the late 1980s. He used a small batch of WOBOs for the side walls of his garden shed. Bottle house of Ganja The Bottle house of Ganja was built between 1966 and 1967. Wat Pa Maha Chedi Kaew Wat Pa Maha Chedi Kaew is a temple in Thailand that was built by monks out of bottles. See also Adobe Cob Glass brick Earthship Hundertwasser Toilets Tin can wall References Agilitynut. Mar 2000/Feb 2007. Seltzer, Debra Jane. Mar 2007 <https://web.archive.org/web/20070314045052/http://www.agilitynut.com/h/otherbh.html>. The Goat House. Georgia Tech. Mar 2007 <https://web.archive.org/web/20070617181339/http://maven.gtri.gatech.edu/sfi/gradcourses/goathouse/MBWall.html>. Books and publications Pawley, Martin. Building for Tomorrow: Putting Waste to Work. San Francisco: Sierra Club Books, 1982. Earthship Biotecture Earthship Biotecture. Mar 2007 Warmke, Annie & Jay. "Building a Vaulted Strawbale Building." Blue Rock Station Publishing, 2006 Warmke, Annie & Jay. "Building a Plastic Bottle Greenhouse." Blue Rock Station Publishing, 2008 Building materials Bottles Types of wall Sustainable building Recycled building materials
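To put a rough number on the heat-sink behavior described in the Heat sink section above, a back-of-the-envelope sketch (Python); the bottle count, bottle volume, and day-night temperature swing are invented assumptions, and only the specific heat of water is a physical constant:

    # Rough estimate of heat stored by a water-filled bottle wall acting
    # as thermal mass. All inputs except the specific heat are assumptions.
    SPECIFIC_HEAT_WATER = 4186.0  # J kg^-1 K^-1

    n_bottles = 200   # assumed number of bottles in the wall
    volume_l = 0.75   # assumed volume per bottle, litres (~1 kg water each)
    delta_t = 10.0    # assumed day-night temperature swing, K

    mass_kg = n_bottles * volume_l  # ~1 kg per litre of water
    energy_j = mass_kg * SPECIFIC_HEAT_WATER * delta_t

    print(f"stored heat: {energy_j / 1e6:.1f} MJ "
          f"(~{energy_j / 3.6e6:.1f} kWh released back overnight)")
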
Bottle wall
Physics,Engineering
2,152
10,103,794
https://en.wikipedia.org/wiki/Slenderness%20ratio
In architecture, the slenderness ratio, or simply slenderness, is an aspect ratio, the quotient between the height and the width of a building. In structural engineering, slenderness is used to calculate the propensity of a column to buckle. It is defined as λ = l / r, where l is the effective length of the column and r is the least radius of gyration, the latter defined by r = √(I/A), where A is the area of the cross-section of the column and I is the second moment of area of the cross-section. The effective length is calculated from the actual length of the member considering the rotational and relative translational boundary conditions at the ends. Slenderness captures the influence on buckling of all the geometric aspects of the column, namely its length, area, and second moment of area. The influence of the material is represented separately by the material's modulus of elasticity E. (A worked example appears below.) Structural engineers generally consider a skyscraper as slender if the height:width ratio exceeds 10:1 or 12:1. Slim towers require the adoption of specific measures to counter the high wind forces on the vertical cantilever, like including additional structures to endow greater rigidity to the building or diverse types of tuned mass dampers to avoid unwanted swaying. Tall buildings with a high slenderness ratio are sometimes referred to as pencil towers. Examples References External links The Super Slender Revolution Building engineering Skyscrapers
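A minimal worked example of the definitions above (Python; the rectangular cross-section and dimensions are invented for illustration):

    # Slenderness ratio of a rectangular column: lambda = l / r, with
    # r = sqrt(I/A) taken about the weak axis (least radius of gyration).
    import math

    def slenderness(effective_length_m: float, width_m: float, depth_m: float) -> float:
        area = width_m * depth_m                 # cross-sectional area A
        i_min = depth_m * width_m**3 / 12.0      # least second moment of area I
        r = math.sqrt(i_min / area)              # least radius of gyration
        return effective_length_m / r

    # 3 m effective length, 0.2 m x 0.3 m section: r = 0.2/sqrt(12) ~ 0.0577 m
    print(f"lambda = {slenderness(3.0, 0.2, 0.3):.1f}")  # ~52.0
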
Slenderness ratio
Engineering
276
27,926,471
https://en.wikipedia.org/wiki/Athens%20Charter%20%28preservation%29
The Athens Charter for the Restoration of Historic Monuments is a seven-point manifesto adopted at the First International Congress of Architects and Technicians of Historic Monuments in Athens in 1931. Manifesto The Athens Charter for the Restoration of Historic Monuments was produced by the participants of the First International Congress of Architects and Technicians of Historic Monuments. This congress was organized by the International Museums Office and took place in Athens in 1931. The seven points of the manifesto are: to establish organizations for restoration advice; to ensure projects are reviewed with knowledgeable criticism; to establish national legislation to preserve historic sites; to rebury excavations which were not to be restored; to allow the use of modern techniques and materials in restoration work; to place historical sites under custodial protection; and to protect the area surrounding historic sites. See also Venice Charter – Charter for the Conservation and Restoration of Monuments and Sites Florence Charter – by ICOMOS on 15 December 1982 as an addendum to the Venice Charter Barcelona Charter – European Charter for the Conservation and Restoration of Traditional Ships in Operation Building restoration Historic preservation External links The Athens Charter for the Restoration of Historic Monuments The Florence Charter The Barcelona Charter Architectural history Historic preservation Archaeology 1930s in Athens 1931 in international relations International cultural heritage documents Conservation and restoration of cultural heritage 1931 in Greece 1931 documents Proclamations
Athens Charter (preservation)
Engineering
253
4,973,983
https://en.wikipedia.org/wiki/CS%20Camelopardalis
CS Camelopardalis (CS Cam; HD 21291) is a binary star in the reflection nebula VdB 14, in the constellation Camelopardalis. It is a 4th magnitude star, and is visible to the naked eye under good observing conditions. It belongs to a group of stars known as the Camelopardalis R1 association, part of the Cam OB1 association. The near-identical supergiant CE Camelopardalis is located half a degree to the south. As a binary star, CS Cam is designated Struve 385 (STF 385, Σ385). The primary component, CS Camelopardalis A, is a blue-white B-type supergiant with a mean apparent magnitude of 4.21. The star was found to be variable when the Hipparcos data were analyzed, and it was given its variable star designation in 1999. It is classified as an Alpha Cygni type variable star, and its brightness varies from magnitude 4.19 to 4.23. Its companion, CS Camelopardalis B, is a magnitude 8.7 blue giant star located 2.4 arcseconds from the primary. References External links Image CS Camelopardalis Nebula vdB 14 Van Den Bergh 14 and 15 021291 016228 Alpha Cygni variables Binary stars B-type supergiants Camelopardalis Camelopardalis, CS 1035 BD+59 0660
CS Camelopardalis
Astronomy
302
615,487
https://en.wikipedia.org/wiki/Heterogeneous%20Element%20Processor
The Heterogeneous Element Processor (HEP) was introduced by Denelcor, Inc. in 1982. The HEP's architect was Burton Smith. The machine was designed to solve fluid dynamics problems for the Ballistic Research Laboratory. A HEP system, as the name implies, was pieced together from many heterogeneous components: processors, data memory modules, and I/O modules. The components were connected via a switched network. A single processor in a HEP system, called a PEM (Process Execution Module), was rather unconventional (up to sixteen PEMs could be connected): via a "program status word (PSW) queue", up to fifty processes could be maintained in hardware at once. The largest system ever delivered had four PEMs. The eight-stage instruction pipeline allowed instructions from eight different processes to proceed at once. In fact, only one instruction from a given process was allowed to be present in the pipeline at any point in time. Therefore, the full processor throughput of 10 MIPS could only be achieved when eight or more processes were active; no single process could achieve throughput greater than 1.25 MIPS. This type of multithreading classifies the HEP today as a barrel processor, though its designers described it as an MIMD pipelined processor. The hardware implementation of the HEP PEM was emitter-coupled logic. Processes were classified as either user-level or supervisor-level. User-level processes could create supervisor-level processes, which were used to manage user-level processes and perform I/O. Processes of the same class were required to be grouped into one of seven user tasks and seven supervisor tasks. Each processor, in addition to the PSW queue and instruction pipeline, contained instruction memory, 2,048 64-bit general-purpose registers and 4,096 constant registers. Constant registers were differentiated by the fact that only supervisor processes could modify their contents. The processors themselves contained no data memory; instead, data memory modules could be separately attached to the switched network. The HEP memory consisted of completely separate instruction memory (up to 128 MB) and data memory (up to 1 GB). Users saw 64-bit words, but in reality data memory words were 72 bits, with the extra bits used for state (see the next paragraph), parity, tagging, and other uses. The HEP implemented a type of mutual exclusion in which all registers and locations in data memory had associated "empty" and "full" states. Reading from a location set the state to "empty," while writing to it set the state to "full." A programmer could allow processes to halt after trying to read from an empty location or write to a full location, enforcing critical sections; a sketch of these semantics follows below. The switched network between elements resembled, in many ways, a modern computer network. On the network were sets of nodes, each of which had three links. When a packet arrived at a node, it consulted a routing table and attempted to forward the packet closer to its destination. If a node became congested, any incoming packets were passed on without routing. Packets treated in such a manner had their priority level increased; when several packets vied for a single node, a packet with a higher priority level would be routed before ones with lower priority levels. Another component of the switched network was the I/O system, with its own memory and many individual DEC UNIBUS buses attached for disks and other peripherals. The system also had the ability to save the full/empty bits, which were not normally directly visible. 
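A minimal sketch of the full/empty-bit semantics described above, using Python threads as a stand-in for HEP hardware processes; the class and names are invented for illustration and model only the blocking read/write behavior:

    # Illustrative model of a HEP-style full/empty memory cell: a read
    # blocks until the cell is "full" and leaves it "empty"; a write
    # blocks until the cell is "empty" and leaves it "full".
    import threading

    class FullEmptyCell:
        def __init__(self):
            self._cond = threading.Condition()
            self._full = False
            self._value = None

        def write(self, value):
            with self._cond:
                while self._full:          # wait for "empty"
                    self._cond.wait()
                self._value, self._full = value, True
                self._cond.notify_all()

        def read(self):
            with self._cond:
                while not self._full:      # wait for "full"
                    self._cond.wait()
                self._full = False
                self._cond.notify_all()
                return self._value

    # Producer/consumer pair synchronized purely by the cell's state bits.
    cell = FullEmptyCell()
    consumer = threading.Thread(target=lambda: print("read:", cell.read()))
    consumer.start()
    cell.write(42)
    consumer.join()
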
The initial I/O system performance was shown to be woefully inadequate due to the high latency in starting I/O operations. Ron Natalie (from BRL) and Burton Smith designed a new system on napkins at a local steakhouse, built it out of spare parts, and put it into operation in the course of the ensuing week. The HEP's primary application programming language was a unique Fortran variant; in time, C, Pascal, and SISAL were added. Data variables using full/empty bits were denoted by prepending '$' to their names: 'A' would name an ordinary local variable, while '$A' would be a locking full/empty variable. Application deadlock was thus possible; more problematically, omitting the '$' could introduce unintended numerical inaccuracy. The first HEP operating system was HEPOS. Mike Muuss was involved in a Unix port for the Ballistic Research Laboratory; HEPOS itself was not a Unix-like operating system. Although it was known to have poor cost-performance, the HEP received attention due to what were, at the time, several revolutionary features. The HEP had the performance of a CDC 7600-class computer in the Cray-1 era. HEP systems were leased by the Ballistic Research Laboratory (a four-PEM system), Los Alamos, the Argonne National Laboratory (a single PEM), the National Security Agency, and Shoko Ltd (Japan, a single PEM). Germany's Messerschmitt (a three-PEM system) was the only client to buy one outright. Denelcor also delivered a two-PEM system to the University of Georgia in exchange for software assistance (the system had also been offered to the University of Maryland). Messerschmitt was the only client to put the HEP into use for "real" applications; the other clients used it for experimenting with parallel algorithms. The only real application of the BRL system was the preparation of a movie using the BRL-CAD software. Faster and larger designs for the HEP-2 and HEP-3 were started but never completed; the architectural concept would later be embodied in the design code-named Horizon. See also Multithreading (computer architecture) Hyper-threading Cray MTA Tera Computer Company VLIW References Parallel computing Supercomputers
Heterogeneous Element Processor
Technology
1,207
57,276,228
https://en.wikipedia.org/wiki/Estrone%20sulfate%20%28medication%29
Estrone sulfate (E1S) is an estrogen medication and naturally occurring steroid hormone. It is used in menopausal hormone therapy among other indications. As the sodium salt (sodium estrone sulfate), it is the major estrogen component of conjugated estrogens (Premarin) and esterified estrogens (Estratab, Menest). In addition, E1S is used on its own as the piperazine salt estropipate (piperazine estrone sulfate; Ogen). The compound also occurs as a major and important metabolite of estradiol and estrone. E1S is most commonly taken by mouth, but, in the form of Premarin, can also be taken by parenteral routes such as transdermal, vaginal, and injection. Medical uses E1S is used in menopausal hormone therapy among other indications. Pharmacology Pharmacodynamics E1S itself is essentially biologically inactive, with less than 1% of the relative binding affinity of estradiol for the estrogen receptors (ERs), ERα and ERβ. The compound acts as a prodrug of estrone and, more importantly, of estradiol, the latter of which is a potent agonist of the ERs. Hence, E1S is an estrogen. Pharmacokinetics E1S is cleaved by steroid sulfatase (also called estrogen sulfatase) into estrone. Simultaneously, estrogen sulfotransferases transform estrone back into E1S, which results in an equilibrium between the two steroids in various tissues. E1S is thought to serve both as a rapidly acting prodrug of estradiol and as a long-lasting reservoir of estradiol in the body, which serves to greatly extend the duration of estradiol when used as a medication. When estradiol is administered orally, it is subject to extensive first-pass metabolism (95%) in the intestines and liver. A single administered dose of estradiol is absorbed 15% as estrone, 25% as E1S, 25% as estradiol glucuronide, and 25% as estrone glucuronide. Formation of estrogen glucuronide conjugates is particularly important with oral estradiol, as the percentage of estrogen glucuronide conjugates in circulation is much higher with oral ingestion than with parenteral estradiol. Estrone glucuronide can be reconverted back into estradiol, and a large circulating pool of estrogen glucuronide and sulfate conjugates serves as a long-lasting reservoir of estradiol that effectively extends the terminal half-life of oral estradiol. The importance of first-pass metabolism and the estrogen conjugate reservoir in the pharmacokinetics of estradiol is demonstrated by the fact that the terminal half-life of oral estradiol is 13 to 20 hours, whereas with intravenous injection its terminal half-life is only about 1 to 2 hours. Estrogen sulfates like estrone sulfate are about twice as potent as the corresponding free estrogens in terms of estrogenic effect when given orally to rodents. This in part led to the introduction of conjugated estrogens (Premarin), which are primarily estrone sulfate, in 1941. Chemistry E1S, also known as estrone 3-sulfate or as estra-1,3,5(10)-trien-17-one 3-sulfate, is a naturally occurring estrane steroid and a derivative of estrone. It is an estrogen conjugate or ester, and is specifically the C3 sulfate ester of estrone. Salts of E1S include sodium estrone sulfate and estropipate (piperazine estrone sulfate). The logP of E1S is 1.4. References Further reading Estrogens Estrone esters Human drug metabolites Phenol esters Sex hormone esters and conjugates Sulfate esters
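The reservoir effect described above can be illustrated with a toy two-pool model. The following Python sketch is purely qualitative: all rate constants are hypothetical values chosen to show how interconversion with a slowly cleared conjugate pool prolongs exposure, not clinical parameters.

# Toy two-pool model of the estradiol (E2) / conjugate reservoir idea.
def simulate(hours=48.0, dt=0.01, with_reservoir=True):
    e2, conj = 1.0, 0.0                          # normalized pool amounts
    k_elim = 0.5                                 # direct E2 elimination, per hour
    k_to_conj = 0.8 if with_reservoir else 0.0   # E2 -> conjugate (sulfation)
    k_from_conj = 0.1                            # conjugate -> E2 (hydrolysis)
    k_conj_clear = 0.05                          # slow conjugate clearance
    t, series = 0.0, []
    while t < hours:
        de2 = -(k_elim + k_to_conj) * e2 + k_from_conj * conj
        dconj = k_to_conj * e2 - (k_from_conj + k_conj_clear) * conj
        e2 += de2 * dt
        conj += dconj * dt
        t += dt
        series.append((t, e2))
    return series

for label, flag in (("no reservoir", False), ("with reservoir", True)):
    series = simulate(with_reservoir=flag)
    # crude surrogate for terminal persistence: time for E2 to fall below 1%
    t01 = next((t for t, e2 in series if e2 < 0.01), None)
    print(f"{label}: E2 falls below 1% after {t01:.1f} h" if t01 else
          f"{label}: E2 still above 1% at 48 h")

With these illustrative constants, the active pool alone drops below 1% after about 9 hours, while coupling to the reservoir pool roughly doubles that time, mirroring the qualitative difference between intravenous and oral kinetics described above.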
Estrone sulfate (medication)
Chemistry
867
72,912,800
https://en.wikipedia.org/wiki/CIBERSORT
CIBERSORT, also called CIBERSORTx, is a bioinformatics tool used to deconvolute cell type proportions and gene expression profiles from bulk RNA sequencing datasets. It is among the fastest growing software tools in the life sciences. References Biotechnology
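The deconvolution idea can be sketched as follows: given a signature matrix of expected expression per cell type, solve for non-negative mixture weights that best reproduce a bulk expression profile. CIBERSORT itself uses nu-support vector regression against a curated signature matrix; the Python sketch below substitutes plain non-negative least squares on synthetic data, so it illustrates the concept rather than the actual algorithm.

# Reference-based deconvolution sketch (simplified; not the CIBERSORT method).
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

n_genes, n_cell_types = 200, 4
signature = rng.gamma(2.0, 1.0, size=(n_genes, n_cell_types))  # toy signature matrix

true_fractions = np.array([0.5, 0.3, 0.15, 0.05])
bulk = signature @ true_fractions + rng.normal(0, 0.05, n_genes)  # noisy bulk profile

coef, _ = nnls(signature, bulk)   # non-negative mixing coefficients
fractions = coef / coef.sum()     # normalize to cell-type proportions

print("true:     ", true_fractions)
print("estimated:", fractions.round(3))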
CIBERSORT
Chemistry,Biology
57
1,833,842
https://en.wikipedia.org/wiki/WeatherStar
WeatherStar (sometimes rendered Weather Star or WeatherSTAR; "STAR" being an acronym for Satellite Transponder Addressable Receiver) is the technology used by American cable and satellite television network The Weather Channel (TWC) to generate its local forecast segments—branded as Local on the 8s (LOT8s) since 2002 and previously from 1996 to 1998—on cable and IPTV systems nationwide. The hardware takes the form of a computerized unit installed at a cable system's headend. It receives, generates, and inserts local forecasts and other weather information, including weather advisories and warnings, into TWC's national programming. Overview The primary purpose of WeatherStar units is to disseminate weather information for local forecast segments on The Weather Channel. The forecast and observation data – which is compiled from local offices of the National Weather Service (NWS), the Storm Prediction Center (SPC), and The Weather Channel (which began producing in-house forecasts in 2002, replacing the NWS-sourced zone forecasts that were utilized for the STAR's descriptive, regional and extended forecast products) – is received from the vertical blanking interval of the TWC video feed and from data transmitted via satellite; the localized data is then sent to the unit, which inserts the data and accompanying programmed graphics over the TWC feed. The WeatherStar systems are typically programmed to cue the local forecast segments and Lower Display Line (LDL) at given times. The units are programmed with customized segment configurations known as "flavors": pre-determined lengths and product lineups for each local forecast segment, which vary by the time of broadcast and accommodate the inclusion or exclusion of certain products from a segment's product list. (Until the Local on the 8s segments adopted a uniform length, the extended forecast was the only product regularly included in each flavor.) Flavor lengths previously varied commonly between 30 seconds and two minutes, with some running as long as six minutes during the late 1980s and the mid-1990s; in April 2013, the LOT8s segment flavor switched permanently to a uniform one-minute length. Outside of the regularly scheduled full-screen graphical segments, weather data is also inserted over the channel's national feed via the Lower Display Line; the LDL was originally displayed as a text-only overlay over the bottom third of the video feed on older STAR units up to the Weather Star Jr. model, containing no graphical background and only showing current weather observations and monthly precipitation totals for the chosen reporting station. (The text-based LDL was discontinued on active pre-1998 STAR units on March 11, 2010, coinciding with The Weather Channel permanently adding a version of the LDL for the network's national clean feed.) With the release of the Weather Star XL, the LDL was modified to include short-term daypart (and, later, three-day) forecasts for the STAR's home location as well as a semi-translucent background; the later release of the IntelliStar saw the incorporation of additional products into the LDL, including air quality indexes, travel forecasts for three major cities in the region, traffic information and almanac data.
The IntelliStar units' LDL was redesigned on November 12, 2013, expanding it to be displayed throughout all programming on the national feed (including commercial breaks and telecasts of its long-form programs, but not during local ad breaks inserted at the provider level); the LDL was replaced by a rundown/progress bar during the full-screen LOT8s segments, indicating the time remaining for the product currently playing and up to two forecast products scheduled to be played afterwards. A sidebar, which was shown only during the channel's forecast programming and was removed during commercial breaks, was also added and paired with the LDL on the right third of the screen over the channel's high definition simulcast feed and displayed supplementary observation data (including visibility, dew point and barometric pressure data that was previously shown on the LDL), average flight delay times for area airports, air quality forecasts, and historical almanac data. All STAR systems are able to display watches, warnings and advisories issued by the National Weather Service and the Storm Prediction Center for the immediate area where the WeatherStar system's headend is based, which generate a tone as an audible leader to the alert message. Older STAR units up to the WeatherStar 4000 displayed NWS bulletins in the form of a full-screen vertical scroll with differing-colored backgrounds (brown for advisories and red for warnings), which was paired with the Lower Display Line. However, the 4000 introduced a horizontal ticker that was restricted to the bottom third of the TWC video feed; since November 12, 2013, IntelliStar models have displayed alerts over the national feed's headlines ticker placed above the LDL. The systems are also capable of generating multiple scrolling text advertisements that appear at the bottom of the screen during local forecast segments, which are programmed into the administrative menus by a local provider-employed technician. STAR units are also capable of generating advertising tags for overlay on national advertisements seen on the national feed, displaying localized addresses for retailers, and on newer models, tagging products seen during breaks (such as pollen reports). The Weather Channel provides its STAR units to cable and IPTV providers free of charge. Programming and maintenance of all units is handled by engineers employed by each provider, who are able to modify specifications to generate locally specific weather data, program locally specific greetings for LOT8s segment introductions, generate test alerts viewable only by cable company technicians performing silent remote administration tests, and make upgrades and repairs to the unit's software and hardware. Although such incidents are extremely rare, the programmability of STAR units at the headend level can leave systems vulnerable to tampering. One such instance occurred over Mediacom’s Des Moines, Iowa system on July 21, 2022, when the introductory message to a LOT8s segment displayed a racial slur that was tacked onto a default greeting used to open the segment (one of several programmed into all IntelliStar units that are usually modified only to reference the municipality of the STAR unit). TWC parent Allen Media Group (owned and overseen by Black media entrepreneur Byron Allen, and which acquired TWC from a consortium of NBCUniversal, Blackstone and Bain Capital in 2018) stated it would investigate the source of the message, which originated within Mediacom's local headend operations.
History Since its introduction at TWC's launch in May 1982, several generations of the WeatherStar have been used. Two STAR models (the IntelliStar 2xD and IntelliStar 2 Jr.) are currently being used by cable and IPTV providers for generation of local weather information on the channel. Some providers use only one STAR model, the IntelliStar 2xD, as it can output both 720x480 letterboxed SD and 1920x1080 HD. Weather Star I The original WeatherStar system, the Weather Star I, was released upon The Weather Channel's launch. Like subsequent WeatherStar units, it received local weather data from TWC and the National Weather Service, via data encoded in the vertical blanking interval of TWC's video feed, as well as receiving extra data from a subcarrier transmitted above TWC's video and audio signals on its transponder on satellite. The Weather Star I was manufactured and developed for TWC by Salt Lake City, Utah-based Compuvid. A couple of years before TWC was founded, Compuvid had already made a similar product which was installed at the headends of cable television systems owned by TeleCable Corporation, a subsidiary of Landmark Communications, TWC's corporate parent at the time and the channel's founding owner. This system displayed weather conditions, forecasts and announcements via a set of weather sensors locally installed at the cable headend. The Weather Star I was an updated version of this unit, receiving data from both The Weather Channel and the National Weather Service. The Weather Star I, like its two subsequent successors, lacked the ability to generate graphics and was only capable of displaying white text on various backgrounds: purple for the "Latest Observations" (which displayed current weather conditions for the nearest reporting station and others within a radius of the headend location) and "Weather Information" (which displayed random data, usually weather-related trivia, past weather events in the area, or information on upcoming programming) pages, grey for the "36 Hour Forecast" page (a descriptive forecast using the National Weather Service's zone forecast products), brown for scrolling weather advisories, and red for scrolling weather warnings. Until the release of the Weather Star III, The Weather Channel used a single one-minute local forecast sequence featuring each of the three above-mentioned forecast screens. As with all future WeatherStar models, the Weather Star I could key its text over TWC's national video feed, most often to display the current conditions at the bottom of the screen. Even though the Weather Star I met the Federal Communications Commission's Part 15 regulations for emanated RF interference (RFI), it still radiated enough to interfere with VHF channel 2 on the broadcast band, resulting in problems at the cable television system's headend where the Weather Star I unit was installed. This problem was temporarily solved by attaching ferrite chokes to all cables and wires connected to the Weather Star. The Weather Star I was also notorious for frequent text jamming and text garbling issues. Weather Star II The Weather Star II was released in 1984; the unit had improved RF shielding to reduce interference issues and had an improved overall hardware design. Otherwise, the unit was similar in its features to the Weather Star I.
Weather Star III The Weather Star III, released in 1986 as an upgrade to the Weather Star II, was another text-only unit that was essentially identical to the two prior WeatherStar models, though with additional internal improvements and forecast products (and consequently, more local forecast sequences). However, TWC decided to drop one of the products included in the unit, "Weather Information," soon after the introduction of the STAR III. In 2001, the FCC granted The Weather Channel a waiver from complying with its forthcoming requirement for aural tones to accompany broadcast of "scrolled" or "crawled" emergency information, which otherwise went into effect in 2002, for the Weather Star Jr. and Weather Star III. The Weather Star III was capable of generating an aural tone only during the first display of a weather warning, not every time it was shown, as required by the regulations. The waiver, which expired on December 31, 2004, was granted with the understanding that TWC would "replace the Star IIIs in 2003/2004". TWC released an "Audio Weather Alert Enhancement" for the Weather Star Jr. and Weather Star III in June 2004, so that they would emit "a series of audible beeps" every time a tornado warning, flash flood warning or severe thunderstorm warning issued by the National Weather Service was transmitted for insertion over the TWC feed. The Weather Star III was retired completely in December 2004. From 1989 to 1992, The Weather Network and its French language sister network MétéoMedia – the Canadian equivalents of TWC – used Weather Star III units to display local forecasts, shown over a sky-blue background, a colour that TWC's units did not use. WeatherStar 4000 The Weather Star 4000 was the first WeatherStar model capable of displaying graphics. First developed in 1988, it was introduced in early 1990. It was designed and manufactured by Canadian electronics company Applied Microelectronics Institute (now Amirix). The first Star 4000s were programmed to operate in a text-only mode (displayed over stylized graphical backgrounds), similar to the STAR III, but with two improvements: an improved font was introduced, as was a graphical radar product at the end of the local forecast segment, showing precipitation that was occurring in the viewer's local geographic area. The first version of the radar product was just a static (current) image; a second version, added in the fall of 1992, was a loop showing radar data logged during the previous 90 minutes. Within a short time, the Weather Star 4000 began to produce graphically based local forecast segments, including maps for the regional observation and forecast products. Until 1995, the Star 4000's software incorporated a narration track provided by Dan Chandler, which introduced the forecast products presented in each flavor; the tracks could be programmed to play either on certain products or on all products featured during that particular flavor. A customized version of the Weather Star 4000 was used by The Weather Network until 1997, when it switched to a technically different system to disseminate local weather information, known as PMX. Due to the cost of upgrading to more advanced units including the IntelliStar, the Weather Star 4000 remained in use in some smaller communities as late as 2014, although it was already being gradually phased out in some areas in favor of the more recent models at that time.
On June 27, 2023, The Weather Channel quietly introduced a new hour-long block called “Retro 8s LIVE,” which featured a modernized high-definition version of the WeatherSTAR 4000. The block, which preceded another block introduced at the time called Twilight LIVE, cycled through major cities in the United States with weather information and accompanying narration. It aired weekday mornings at 4 AM Eastern Time until it was retired (along with Twilight LIVE) on November 3, 2023. WeatherStar Jr. The Weather Star Jr. was a budget model manufactured by Wegener Communications for cable headends in smaller communities. Released in 1994 following field testing on eight cable systems in various smaller markets, the system was based on Wegener's Series 2450 graphics display platform, and cost US$500 per unit. It featured the same products used by the Star III, but utilized the typeface used by the 4000. The Weather Channel was able to upgrade Weather Star Jr. units to meet the FCC's 2002 deadline requiring broadcasts of "scrolled" or "crawled" emergency information to be accompanied by an aural tone for accessibility reasons. When the change in FCC regulations forced the retirement of the Star III, headends using that unit upgraded to the Weather Star Jr. or more advanced units. WeatherStar XL In the fall of 1998, the Weather Star XL, the fifth-generation system in the WeatherStar fleet, was introduced. The Star XL, an IRIX-based computer unit manufactured by SGI, had significantly more advanced technical capabilities than the 4000; it incorporated modernized graphics (with Akzidenz-Grotesk as the main typeface) and a new set of weather icons that would be used on the channel for eight years after its launch. Its on-screen appearance was originally based on the graphics used in the channel's program introductions, which had debuted shortly beforehand; this look was eventually replaced by a graphics set that closely resembled the original graphical design of the WeatherStar's successor, the IntelliStar. The Star XL was also the first WeatherStar platform to be adapted and modified by The Weather Channel for use on its sister service Weatherscan, a 24-hour local weather channel carried on select cable systems throughout the country (primarily on digital tiers) that launched in 1999; three years later, the Weatherscan XL units would be phased out for use on Weatherscan (and eventually, on TWC in most large and mid-sized markets) and replaced by the newer IntelliStar technology as part of the first trial of the system. The Star XL model had a high manufacturing cost (US$6,500). It was also the first STAR system to utilize Vocal Local, a software function that assembles pre-recorded audio tracks to provide narration of the current temperature and sky conditions, descriptive forecasts and introductions to certain forecast products; it is technologically different from the narration track used in the WeatherStar 4000. The XL, along with the WeatherStar 4000 and WeatherStar Jr. systems, was retired when The Weather Channel discontinued transmission of its analog satellite feed on June 26, 2014. IntelliStar In February 2003, TWC released an advanced model, IntelliStar, initially being rolled out for use on Weatherscan; the "domestic" version intended for use on The Weather Channel was subsequently introduced in early to mid-2004 in the top media markets (including Dallas, Los Angeles, Philadelphia and Pittsburgh).
Initially, its graphics were essentially the same as those seen on the WeatherStar XL (though it used Interstate, which was used by TWC for its on-air graphics package at the time, as the typeface instead of Akzidenz-Grotesk) until December 2006, when the IntelliStar received its own, even more realistic icon set – which was used on TWC's on-air and online forecast content as well. The number of weather products provided by the IntelliStar increased dramatically with the revamp, with the addition of school-day and outdoor activity forecasts, ultraviolet indexes and other health information, and the introduction of more localized maps for forecasts and radar/satellite imagery. However, most of the products were dropped in April 2013, when the channel uniformly reduced its local forecast segments to one minute (instead of varying between one and two minutes, depending on the segment). Some of the data added was also incorporated into the Lower Display Line, which eventually added a tabbed display for each product. Through a content agreement with Traffic Pulse, traffic information (in the form of accident and construction reports, roadway flow and average travel times for local roadways) was also presented by the IntelliStar in markets in which Traffic Pulse provided traffic data, until TWC's agreement with the company expired in 2010. The IntelliStar was officially discontinued on November 16, 2015, being replaced by the IntelliStar 2 and IntelliStar 2 Jr. IntelliStar 2 The IntelliStar 2 (also known internally as the IntelliStar 2 HD) is the seventh-generation WeatherStar system and the first to be capable of generating forecast graphics in both widescreen and high definition (specifically, in the channel's 1080i format). The unit originally did not feature any programmed narration, a Lower Display Line or icon animations. When the system was officially released in July 2010, many of the issues present in the ALPHA trial version were corrected. The fully released version of the IntelliStar 2 features an animated lower display line, and various products including current weather conditions, weather bulletins, three-hour Doppler radar loops for the region and the metropolitan area, a 12-hour forecast graph, and 24-hour descriptive and seven-day forecast graphics. From its release until November 12, 2013, the IntelliStar 2 used a graphics package that differed from the original IntelliStar (before both systems implemented a uniform graphics package, the IntelliStar used graphics based on TWC's 2005 package while the IntelliStar 2 used graphics based on the channel's 2008 graphics). Vocal Local narration is done by TWC meteorologist/storm tracker Jim Cantore, instead of Allen Jackson, who provided the narration track for the first-generation IntelliStar and WeatherStar XL. The system was gradually rolled out to major U.S. cable providers strictly for use on The Weather Channel's HD simulcast feed, and originally did not replace existing operational STAR units used on The Weather Channel's standard definition feed or Weatherscan; as a result, TWC became one of the few channels which by necessity do not have an "autotune to HD" version for providers that utilize set-top boxes allowing HD tuning to standard definition channel positions. IntelliStar 2 Jr. The IntelliStar 2 Jr., a low-cost digital model suitable for smaller cable providers, was developed and released in 2013.
Similar to the first-generation IntelliStar, the unit is capable of operating natively for both analog and digital transmission on cable systems. The Star 2 Jr. later served as a permanent replacement for all analog WeatherStar systems when the analog satellite feed was discontinued on June 26, 2014. IntelliStar 2 xD The IntelliStar 2 xD is a model of the IntelliStar 2 series that was released in late 2014 and early 2015 as a full replacement for the IntelliStar and the original IntelliStar 2. It letterboxes The Weather Channel HD feed for the SD feed and sends the full HD feed to the HD channel. WeatherStar products WS Indicates product is featured on all STAR systems. 3000 Indicates product is featured on WeatherStar 3000. 4000 Indicates product is featured on WeatherStar 4000 systems. XL Indicates product is featured on WeatherStar XL systems. IS Indicates product is featured on IntelliStar systems. IS2 Indicates product is featured on IntelliStar 2, IntelliStar 2 xD, and IntelliStar 2 Jr. systems. Jr Indicates product is featured on WeatherStar Jr. Current products Former products References External links Television technology The Weather Channel 1982 software 1982 establishments in the United States
WeatherStar
Technology
4,350
581,859
https://en.wikipedia.org/wiki/Invertible%20sheaf
In mathematics, an invertible sheaf is a sheaf on a ringed space that has an inverse with respect to tensor product of sheaves of modules. It is the equivalent in algebraic geometry of the topological notion of a line bundle. Due to their interactions with Cartier divisors, they play a central role in the study of algebraic varieties. Definition Let (X, OX) be a ringed space. Isomorphism classes of sheaves of OX-modules form a monoid under the operation of tensor product of OX-modules. The identity element for this operation is OX itself. Invertible sheaves are the invertible elements of this monoid. Specifically, if L is a sheaf of OX-modules, then L is called invertible if it satisfies any of the following equivalent conditions: There exists a sheaf M such that L ⊗ M ≅ OX. The natural homomorphism L ⊗ L∨ → OX is an isomorphism, where L∨ denotes the dual sheaf Hom(L, OX). The functor from OX-modules to OX-modules defined by F ↦ F ⊗ L is an equivalence of categories. Every locally free sheaf of rank one is invertible. If X is a locally ringed space, then L is invertible if and only if it is locally free of rank one. Because of this fact, invertible sheaves are closely related to line bundles, to the point where the two are sometimes conflated. Examples Let X be the affine scheme Spec R. Then an invertible sheaf on X is the sheaf associated to a rank one projective module over R. For example, this includes fractional ideals of algebraic number fields, since these are rank one projective modules over the rings of integers of the number field. The Picard group Quite generally, the isomorphism classes of invertible sheaves on X themselves form an abelian group under tensor product. This group generalises the ideal class group. In general it is written Pic(X), with Pic the Picard functor. Since it also includes the theory of the Jacobian variety of an algebraic curve, the study of this functor is a major issue in algebraic geometry. The direct construction of invertible sheaves by means of data on X leads to the concept of Cartier divisor. See also Vector bundles in algebraic geometry Line bundle First Chern class Picard group Birkhoff-Grothendieck theorem References Geometry of divisors Sheaf theory
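As a concrete worked example (a standard fact, stated here for illustration): on projective n-space over a field k, every invertible sheaf is a Serre twisting sheaf O(d), the group operation is addition of twists, and inverses are given by negating the twist:

\[
  \operatorname{Pic}(\mathbb{P}^n_k) \cong \mathbb{Z}, \qquad
  \mathcal{O}(d) \otimes \mathcal{O}(e) \cong \mathcal{O}(d+e), \qquad
  \mathcal{O}(d)^{\vee} \cong \mathcal{O}(-d),
\]

so the inverse of O(d) in the monoid of invertible sheaves is O(-d), exhibiting the Picard group structure described above.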
Invertible sheaf
Mathematics
487
41,999,062
https://en.wikipedia.org/wiki/Penicillium%20alutaceum
Penicillium alutaceum is a fungus species of the genus of Penicillium. See also List of Penicillium species References alutaceum Fungi described in 1968 Fungus species
Penicillium alutaceum
Biology
43
40,019,369
https://en.wikipedia.org/wiki/Aeroflot%20Flight%208641
Aeroflot Flight 8641 was a Yakovlev Yak-42 airliner on a domestic scheduled passenger flight from Leningrad (now Saint Petersburg) to Kiev (now Kyiv). On 28 June 1982, the flight crashed south of Mazyr, Byelorussian SSR, killing all 132 people on board. The accident was both the first and deadliest crash of a Yakovlev Yak-42, and remains the deadliest aviation accident in Belarus. The cause was a failure of the jackscrew controlling the horizontal stabilizer due to a design flaw. Aircraft and crew The Yakovlev Yak-42 involved in the accident was registered to Aeroflot as СССР-42529 (manufacturer number 11040104, series number 04-01). The aircraft made its maiden flight on 21 April 1981 and was delivered to the Ministry of Civil Aviation on 1 June 1981. At the time of the accident, it had 795 flight hours and 496 takeoff and landing cycles. All 124 passenger seats were filled, 11 by children. The flight crew consisted of: Captain Vyacheslav Nikolaevich Musinskiy Co-pilot Alexander Sergeevich Stigariev Flight engineer Nikolai Semyonovich Vinogradov Navigator-trainee Viktor Ivanovich Kedrov Flight Attendant Anna Nikolaevna Sheykina Flight Attendant Tamara Mikhailovna Vasishcheva Flight Attendant Olga Pavlovna Pavlova Flight Attendant Yury Borisovich Ryabov Sequence of events The aircraft took off from Pulkovo Airport at 9:01 Moscow time, having been delayed one minute because of a late passenger. At 10:45 it entered the zone of the Kiev/Boryspil air traffic control center. The crew started the landing checklist at 10:48:01. At 10:48:58 the crew informed the air traffic controller that they had reached the planned top-of-descent point, and the controller cleared them for descent to FL255 (approximately 7,800 m). The crew confirmed the flight path; no further communications were heard from Flight 8641. At 10:51:20 the autopilot gradually trimmed the horizontal stabilizer 0.3° nose-up for the descent for landing. At 10:51:30 the stabilizer angle sharply increased, exceeding the 2° limit within half a second. The sudden change resulted in a negative g-force of -1.5 g, but the autopilot adjusted the controls to moderate it to -0.6 g. As the stabilizer did not respond to commands and the plane went on diving, the autopilot switched off after 3 seconds. The pilots pulled back on the yoke trying to level out the plane, but it continued into a steep dive; soon it rolled 35° left and the dive steepened to 50°. As it rolled counterclockwise under a load beyond -2 g, the aircraft disintegrated in mid-air at 10:51:50. The wreckage was found on the outskirts of Verbavychi village, southeast of the district center Naroulia (itself a further 18 km southwest of the larger Mazyr, which is often listed as the crash site). Fragments of the plane were scattered across a wide area. All 132 people on board perished. Cause The cause was determined to be a failure of the jackscrew mechanism in the aircraft's tail due to metal fatigue, which resulted from flaws in the Yak-42's design. The investigation concluded that among the causes of the crash were poor maintenance, as well as the control system of the stabilizer not meeting basic aviation standards. Three engineers who signed the jackscrew drawings were convicted. As for the official cause of the crash: "the spontaneous movement of the stabilizer was due to disconnection in flight of the jackscrew assembly due to the almost complete deterioration of the 42M5180-42 thread-nuts due to structural imperfections in the mechanism."
Due to the accident, all Yak-42s were withdrawn from service until the design defect was rectified in October 1984. See also Alaska Airlines Flight 261 – an MD-83 accident in 2000 also resulting from a jackscrew failure American Airlines Flight 96 and Turkish Airlines Flight 981 – similar accidents involving in-flight structural failure due to design flaws, the latter of which contributed to the grounding of the aircraft type involved References 1982 in Belarus Aviation accidents and incidents in 1982 Aviation accidents and incidents in Belarus Airliner accidents and incidents caused by mechanical failure Airliner accidents and incidents caused by design or manufacturing errors Airliner accidents and incidents caused by in-flight structural failure Accidents and incidents involving the Yakovlev Yak-42 8641 Aviation accidents and incidents in the Soviet Union June 1982 events in the Soviet Union Aviation accidents and incidents caused by loss of control 1982 disasters in the Soviet Union 1982 disasters in Belarus
Aeroflot Flight 8641
Materials_science
987
17,511,951
https://en.wikipedia.org/wiki/MobileHCI
The Conference on Mobile Human-Computer Interaction (MobileHCI) is a leading series of academic conferences in Human–computer interaction and is sponsored by ACM SIGCHI, the Special Interest Group on Computer-Human Interaction. MobileHCI has been held annually since 1998 and has been an ACM SIGCHI sponsored conference since 2012. The conference is very competitive, with acceptance rates falling from 25% in 2006 and 21.6% in 2009 to below 20% in 2017. MobileHCI 2011 was held in Stockholm, Sweden, and MobileHCI 2012, which was sponsored by SIGCHI, was held in San Francisco, USA. History The MobileHCI series started in 1998 as a stand-alone Workshop on Human Computer Interaction with Mobile Devices organized by Chris Johnson and held at the University of Glasgow. In the following year the workshop was held in conjunction with the Interact conference and was organized by Stephen Brewster and Mark Dunlop. In 2001 MobileHCI was again organized by Brewster and Dunlop in association with a major conference, this time in conjunction with IHM-HCI in Lille, France. In 2002, MobileHCI was held independently from an associated conference as a stand-alone symposium in Pisa, Italy, organized by Fabio Paternò. In 2003 the conference was organized by Luca Chittaro in Udine, Italy. In 2004 it was again organized by Brewster and Dunlop, this time at the University of Strathclyde. In the following years the conference took place in Austria, Finland, and Singapore. MobileHCI 2008 was organized by Henri Ter Hofte from the Telematica Instituut in the Netherlands. For 2008 the conference's steering committee agreed to award a prize for the most influential paper published at the MobileHCI held ten years earlier. The prize recognises the longevity of the impact that papers from the first MobileHCI have had on the research community. The 2008 prize was awarded to Keith Cheverst for the paper Exploiting Context in HCI Design for Mobile Systems, written together with Tom Rodden, Nigel Davies, and Alan Dix. MobileHCI 2009 was organised by Fraunhofer FIT and the University of Siegen, in cooperation with ACM SIGCHI and ACM SIGMOBILE. The general chair was Prof. Dr. Reinhard Oppermann from Fraunhofer Society FIT, and the program chairs were Dr. Markus Eisenhauer, Prof. Dr. Matthias Jarke, and Prof. Dr. Volker Wulf. The 2009 prize for the most influential paper from ten years earlier was awarded to Albrecht Schmidt for his paper Implicit human-computer interaction through context. The acceptance rate was 24.2% for full papers and 18.5% for short papers. The 12th MobileHCI took place in Lisboa, Portugal, from September 7–10, 2010. The conference's general chairs were Marco de Sá and Luís Carriço from the University of Lisboa. The theme of the conference was a mobile world for all. The acceptance rate was 20% for full papers and 22% overall. MobileHCI 2011 took place in Stockholm, Sweden from 30 August to 2 September 2011. The 13th in the series was chaired by Markus Bylund (Swedish Institute of Computer Science) and Maria Holm (Mobile Life Centre), with Oskar Juhlin and Ylva Fernaeus, also from Mobile Life Centre, as programme chairs. The full paper acceptance rate was 27%, with 23% overall. The Most Influential Paper prize, for papers from MobileHCI 2001, was awarded to Simon Holland for his paper AudioGPS: Spatial Audio Navigation with a Minimal Attention Interface.
In 2018, the conference's steering committee agreed to award a prize for the most impactful paper published in the conference series' 20-year history (the "Impact Award") to the paper by Matthias Böhmer, Brent Hecht, Johannes Schöning, Antonio Krüger and Gernot Bauer on mobile application usage. In 2021, the same paper was honoured with the "Most Influential Paper Award" for the most recent 10 years of the conference series. In 2020, the conference's steering committee agreed to change the name of MobileHCI from Conference on Human-Computer Interaction with Mobile Devices and Services to Conference on Mobile Human-Computer Interaction, to reflect the societal and technological transition in which mobility has become pervasive and central to our lives. Topics In its early years, the conference had a limited number of unspecific topics. The list of topics grew over the years. Topics considered relevant to date are, for example, audio and speech interaction, input and output techniques for mobile technologies, evaluation of mobile devices and services, and multimodal interaction. Examples of topics that emerged in recent years are wearable computing, mobile social networks, and studies on the use of mobile devices by special target groups (e.g. seniors). Workshops Since 2002 workshops have been held prior to the main conference. Workshops focus on specific topics related to the conference's main theme. To participate in a workshop it is often necessary to submit a paper and present it during the workshop. Usually around 20 people participate in a workshop. Besides the presentations there is typically more room for discussion than during the main conference. Successful workshops are often repeated in the following years. Some examples are the workshops on HCI in Mobile Guides, Mobile Interaction with the Real World (MIRW), and Speech in Mobile and Pervasive Environments (SiMPE). Tutorials Tutorial days were held at MobileHCI 2008 and 2009. After more than 10 years of MobileHCI, providing an overview of the state of the art had become more and more challenging. During the tutorial days, a number of well-known researchers in mobile HCI gave overviews of the state of the art and covered many of the relevant topics. The tutorials also introduced the "must read" papers in this domain. The audience varied and included new students starting a PhD in mobile HCI, practitioners wanting a quick survey of the state of the art, and educators wishing to get an overview of mobile HCI for their own teaching. External links Website of the MobileHCI conference series Website of the MobileHCI 2013 conference Website of the MobileHCI 2012 conference Website of the MobileHCI 2011 conference Website of the MobileHCI 2010 conference Website of the workshop HCI in Mobile Guides 2005 Website of the workshop Mobile Interaction with the Real World 2009 Mobile HCI 2009 tutorial day slides Mobile HCI 2008 tutorial day slides Website of the workshop Speech in Mobile and Pervasive Environments Notes and references Computer science conferences Human–computer interaction Association for Computing Machinery
MobileHCI
Technology,Engineering
1,351
386,169
https://en.wikipedia.org/wiki/Spectral%20radius
In mathematics, the spectral radius of a square matrix is the maximum of the absolute values of its eigenvalues. More generally, the spectral radius of a bounded linear operator is the supremum of the absolute values of the elements of its spectrum. The spectral radius is often denoted by ρ(·). Definition Matrices Let λ1, ..., λn be the eigenvalues of a matrix A ∈ C^(n×n). The spectral radius of A is defined as ρ(A) = max{|λ1|, ..., |λn|}. The spectral radius can be thought of as an infimum of all norms of a matrix. Indeed, on the one hand, ρ(A) ≤ ||A|| for every natural matrix norm ||·||; and on the other hand, Gelfand's formula states that ρ(A) = lim_{k→∞} ||A^k||^(1/k). Both of these results are shown below. However, the spectral radius does not necessarily satisfy ||Av|| ≤ ρ(A)||v|| for arbitrary vectors v. To see why, let r > 1 be arbitrary and consider the 2×2 matrix C_r with zero diagonal, upper-right entry 1/r and lower-left entry r. The characteristic polynomial of C_r is λ² − 1, so its eigenvalues are {−1, 1} and thus ρ(C_r) = 1. However, C_r e1 = r e2, so ||C_r e1|| = r > 1 = ρ(C_r)||e1||. As an illustration of Gelfand's formula, note that ||C_r^k||^(1/k) → 1 as k → ∞, since C_r^k = I if k is even and C_r^k = C_r if k is odd. A special case in which ||Av|| ≤ ρ(A)||v|| for all v is when A is a Hermitian matrix and ||·|| is the Euclidean norm. This is because any Hermitian matrix is diagonalizable by a unitary matrix, and unitary matrices preserve vector length. As a result, ||A||2 = ρ(A). Bounded linear operators In the context of a bounded linear operator A on a Banach space, the eigenvalues need to be replaced with the elements of the spectrum of the operator, i.e. the values λ for which A − λI is not bijective. We denote the spectrum by σ(A). The spectral radius is then defined as the supremum of the magnitudes of the elements of the spectrum: ρ(A) = sup{|λ| : λ ∈ σ(A)}. Gelfand's formula, also known as the spectral radius formula, also holds for bounded linear operators: letting ||·|| denote the operator norm, we have ρ(A) = lim_{k→∞} ||A^k||^(1/k). A bounded operator (on a complex Hilbert space) is called a spectraloid operator if its spectral radius coincides with its numerical radius. An example of such an operator is a normal operator. Graphs The spectral radius of a finite graph is defined to be the spectral radius of its adjacency matrix. This definition extends to the case of infinite graphs with bounded degrees of vertices (i.e. there exists some real number C such that the degree of every vertex of the graph is smaller than C). In this case, for the graph G define ℓ²(G) to be the space of functions f : V(G) → R with Σ_{v∈V(G)} |f(v)|² < ∞. Let γ be the adjacency operator of G, given by (γf)(v) = Σ_{(u,v)∈E(G)} f(u). The spectral radius of G is defined to be the spectral radius of the bounded linear operator γ. Upper bounds Upper bounds on the spectral radius of a matrix The following proposition gives simple yet useful upper bounds on the spectral radius of a matrix. Proposition. Let A ∈ C^(n×n) with spectral radius ρ(A) and a consistent matrix norm ||·||. Then for each integer k ≥ 1: ρ(A) ≤ ||A^k||^(1/k). Proof Let (v, λ) be an eigenvector-eigenvalue pair for a matrix A. By the sub-multiplicativity of the matrix norm, we get: |λ|^k ||v|| = ||λ^k v|| = ||A^k v|| ≤ ||A^k|| · ||v||. Since v ≠ 0, we have |λ|^k ≤ ||A^k|| and therefore ρ(A) ≤ ||A^k||^(1/k), concluding the proof. Upper bounds for spectral radius of a graph There are many upper bounds for the spectral radius of a graph in terms of its number n of vertices and its number m of edges. For instance, a classical bound due to Hong states that ρ(G) ≤ √(2m − n + 1) for graphs in which every vertex has degree at least one. Symmetric matrices For real-valued matrices A the inequality ρ(A) ≤ ||A||2 holds in particular, where ||·||2 denotes the spectral norm. In the case where A is symmetric, this inequality is tight: Theorem. Let A ∈ R^(n×n) be symmetric, i.e., A = A^T. Then it holds that ρ(A) = ||A||2. Proof Let (λi, vi), i = 1, ..., n, be the eigenpairs of A. Due to the symmetry of A, all λi and vi are real-valued and the eigenvectors vi are orthonormal.
By the definition of the spectral norm, there exists an x with ||x||2 = 1 such that ||A||2 = ||Ax||2. Since the eigenvectors form a basis of R^n, there exist factors β1, ..., βn such that x = Σi βi vi, which implies that Ax = Σi βi λi vi. From the orthonormality of the eigenvectors it follows that ||Ax||2² = Σi βi² λi² and ||x||2² = Σi βi² = 1. Since x is chosen such that it maximizes ||Ax||2 while satisfying ||x||2 = 1, the values of the βi must be such that they maximize Σi βi² λi² while satisfying Σi βi² = 1. This is achieved by setting βj = 1 for an index j maximizing λj² and βi = 0 otherwise, yielding ||A||2 = max_j |λj| = ρ(A). Power sequence The spectral radius is closely related to the behavior of the convergence of the power sequence of a matrix, as shown by the following theorem. Theorem. Let A ∈ C^(n×n) with spectral radius ρ(A). Then ρ(A) < 1 if and only if lim_{k→∞} A^k = 0. On the other hand, if ρ(A) > 1, then lim_{k→∞} ||A^k|| = ∞. The statement holds for any choice of matrix norm on C^(n×n). Proof Assume that A^k goes to zero as k goes to infinity. We will show that ρ(A) < 1. Let (v, λ) be an eigenvector-eigenvalue pair for A. Since A^k v = λ^k v, we have 0 = (lim_{k→∞} A^k) v = lim_{k→∞} λ^k v, and since v ≠ 0 by hypothesis, we must have lim_{k→∞} λ^k = 0, which implies |λ| < 1. Since this must be true for any eigenvalue λ, we can conclude that ρ(A) < 1. Now, assume the radius of A is less than 1. From the Jordan normal form theorem, we know that for all A ∈ C^(n×n), there exist V, J ∈ C^(n×n) with V non-singular and J block diagonal such that A = V J V^(−1), with J = diag(J_{m1}(λ1), ..., J_{ms}(λs)), where each J_{mi}(λi) is an mi × mi Jordan block with λi on the diagonal and 1 on the superdiagonal. It is easy to see that A^k = V J^k V^(−1) and, since J is block-diagonal, J^k = diag(J_{m1}(λ1)^k, ..., J_{ms}(λs)^k). Now, a standard result on the k-th power of an mi × mi Jordan block states that, for k ≥ mi − 1, the entry on the j-th superdiagonal of J_{mi}(λi)^k is C(k, j) λi^(k−j). Thus, if ρ(A) < 1 then |λi| < 1 for all i, and since each entry of J_{mi}(λi)^k is a polynomial in k times λi raised to a power growing with k, lim_{k→∞} J_{mi}(λi)^k = 0 for all i. Hence lim_{k→∞} J^k = 0, which implies lim_{k→∞} A^k = V (lim_{k→∞} J^k) V^(−1) = 0. On the other side, if ρ(A) > 1, there is at least one element in J that does not remain bounded as k increases, thereby proving the second part of the statement. Gelfand's formula Gelfand's formula, named after Israel Gelfand, gives the spectral radius as a limit of matrix norms. Theorem For any matrix norm ||·|| we have ρ(A) = lim_{k→∞} ||A^k||^(1/k). Moreover, in the case of a consistent matrix norm, ||A^k||^(1/k) approaches ρ(A) from above (indeed, in that case ρ(A) ≤ ||A^k||^(1/k) for all k). Proof For any ε > 0, let us define the two following matrices: A± = A / (ρ(A) ± ε). Thus, ρ(A±) = ρ(A) / (ρ(A) ± ε), so ρ(A+) < 1 < ρ(A−). We start by applying the previous theorem on limits of power sequences to A+: lim_{k→∞} A+^k = 0. This shows the existence of N+ such that, for all k ≥ N+, ||A+^k|| < 1, and therefore ||A^k||^(1/k) < ρ(A) + ε. Similarly, the theorem on power sequences implies that ||A−^k|| is not bounded and that there exists N− such that, for all k ≥ N−, ||A−^k|| > 1, and therefore ||A^k||^(1/k) > ρ(A) − ε. Let N = max(N+, N−). Then, for all k ≥ N, ρ(A) − ε < ||A^k||^(1/k) < ρ(A) + ε, that is, lim_{k→∞} ||A^k||^(1/k) = ρ(A). This concludes the proof. Corollary Gelfand's formula yields a bound on the spectral radius of a product of commuting matrices: if A1, ..., An are matrices that all commute, then ρ(A1 A2 ⋯ An) ≤ ρ(A1) ρ(A2) ⋯ ρ(An). Numerical example Consider the matrix A with rows (9, −1, 2), (−2, 8, 4) and (1, 1, 8), whose eigenvalues are 5, 10, 10; by definition, ρ(A) = 10. For the four most used norms, the values of ||A^k||^(1/k) decrease toward ρ(A) = 10 as k increases (note that, due to the particular form of this matrix, the 1-norm equals the ∞-norm); the original table of values is not reproduced here, but the illustrative computation below regenerates it. Notes and references Bibliography See also Spectral gap The Joint spectral radius is a generalization of the spectral radius to sets of matrices. Spectrum of a matrix Spectral abscissa Spectral theory Articles containing proofs
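The following Python sketch numerically checks Gelfand's formula for the example matrix above; it is an illustrative script, and the choice of powers k is arbitrary.

# Numerical check: ||A^k||^(1/k) -> rho(A) = 10 as k grows, for several norms.
import numpy as np

A = np.array([[9.0, -1.0, 2.0],
              [-2.0, 8.0, 4.0],
              [1.0, 1.0, 8.0]])

rho = max(abs(np.linalg.eigvals(A)))          # spectral radius, equals 10 here
print(f"rho(A) = {rho:.6f}")

norms = {"1-norm": 1, "2-norm": 2, "inf-norm": np.inf, "Frobenius": "fro"}
for k in (1, 2, 5, 10, 50, 100):
    Ak = np.linalg.matrix_power(A, k)
    row = ", ".join(f"{name}: {np.linalg.norm(Ak, ord=o) ** (1.0 / k):.4f}"
                    for name, o in norms.items())
    print(f"k={k:3d}  {row}")

Since all four are consistent matrix norms, each column of output decreases monotonically toward 10 from above, as the theorem predicts.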
Spectral radius
Mathematics
1,306
51,715,905
https://en.wikipedia.org/wiki/Dhanusha%20%28unit%29
Dhanusha is an ancient unit used for measuring height in Jain literature. Modern units One Dhanusha equals 3 meters. References Units of measurement
Dhanusha (unit)
Mathematics
31
33,062,705
https://en.wikipedia.org/wiki/2011%20Nairobi%20pipeline%20fire
The 2011 Nairobi pipeline fire was caused by an explosion secondary to a fuel spill in the Kenyan capital Nairobi on 12 September 2011. Approximately 100 people were killed in the fire and at least 116 others were hospitalized with varying degrees of burns. The incident was not the first such pipeline accident in Kenya, with the Molo fire of 2009 resulting in at least 133 fatalities and hundreds more injured. Causes A fuel tank, located in the industrial Lunga Lunga area of Nairobi and part of a pipeline system operated by the state-owned Kenya Pipeline Company (KPC), had sprung a leak. People in the adjacent densely populated shanty town of Sinai had started to collect leaking fuel when, at about 10 a.m., a massive explosion occurred at the scene. Fire spread to the Sinai area. The cause of the explosion has not yet been determined, but some reports indicate that the fire might have started from a discarded cigarette or when the wind changed, bringing embers from nearby garbage fires. Energy Minister Kiraitu Murungi is reported as saying that the disaster began when a pipeline valve failed under pressure, allowing the oil to leak into the sewer. Selest Kilinda, the managing director of KPC, is reported to have said the spill occurred from two pipelines, and that engineers had already depressurised the Sinai pipeline but not in time to prevent fuel leaking into the sewer. Casualties Early police estimates put the number of fatalities above one hundred; in addition, at least 116 other people were hospitalized with burn injuries. The exact death toll remains uncertain due to some bodies being badly charred or lost in the murky waters of a nearby river. Kenya's Red Cross Disaster Risk Reduction Officer said that the Red Cross would counsel the victims and also would attempt to reconcile the casualty figures with those reported missing. He also reported that most bodies taken to the mortuary were burnt beyond recognition and would require DNA tests to confirm their identities. In November 2011, the Kenya Pipeline Company funded the delivery to the Ministry of Public Health and Sanitation of a computer and software system to facilitate forensic DNA identification of victims. The system, called M-FISys (pronounced like "emphasis," an acronym for the Mass-Fatality Identification System), was developed to identify victims of the World Trade Center Disaster of September 11, 2001. City hospitals were hard pressed by the surge in demand for care, provisions and food, which strained the medical staff complement. The Kenyatta National Hospital has only 22 burn unit beds and considers any incident with more than 60 casualties a 'disaster', requiring it to put disaster plans into action. At least 112 people were admitted with burns, many critical or severe. The long-term treatment required for burn patients means that extra tents have been erected for blood donations. The nearer Mater Hospital admitted three casualties with less than 30% burns into the normal ward and one other casualty with 80–90% burns into the intensive care unit. Responsibility Neither the managing director of the KPC, which operates the pipeline, nor the energy minister Kiraitu Murungi has given any indication of accepting responsibility. Kiraitu Murungi initially said that the KPC would compensate the victims, but later the KPC stated it would not do so as it was "not responsible". In 2008 the KPC had issued an eviction order to nearby residents, but they refused to leave.
In response to protests by students, an inter-ministerial committee was tasked with gathering names to arrange relocation when funds became available. KPC sent representatives to inform the residents of the danger and to make sure holes were not dug. Political impact Prime minister Raila Odinga and vice-president Kalonzo Musyoka visited the scene and various hospitals to console injured victims and to condole with bereaved families. President Mwai Kibaki visited the main Kenyatta National Hospital to sympathize with the injured. The secretary-general of the United Nations, Ban Ki-moon, expressed sorrow and sympathy for the victims, wishing a full and speedy recovery to the survivors, while the United States ambassador to Kenya, Scott Gration, lauded the rescue workers and the personal heroism of the locals. Amnesty International-Kenya said that the failure to relocate people puts the majority of the blame on government officials. Enforcement after the event The National Environment Management Authority (NEMA) said it would act against the KPC for failing to comply with EMCA 1999, and suggested that if the required spill containment measures had been in place at the facility, the oil would not have run off into the drains. NEMA dismissed KPC claims that they had acted sufficiently, saying they had not received the environmental audit that is obligatory under the 2003 Environmental Impact Assessment and Environmental Audit Regulations. The slum had been in that place for approximately 20 years despite the requirement for KPC to keep those areas clear of settlement. NEMA said it would also require KPC to deal with the pollution in the environment, particularly regarding the flora and fauna along the Ngong River into which the storm drain flows. Warnings before the event In 2009, journalist John Ngirachu, writing for the local newspaper Daily Nation, reported that the slums in Sinai, located so near to the pipeline, were a disaster waiting to happen. The permanent secretary to the Ministry of Energy, Patrick Nyoike, had asked the KPC to refurbish the pipelines, but it was reported that the Ministry of Finance declined. References 2010s fires in Africa 2011 fires Fires in Kenya 2010s in Nairobi Pipeline accidents Deaths caused by petroleum looting Industrial fires and explosions 2011 industrial disasters September 2011 events in Africa 2011 disasters in Kenya
2011 Nairobi pipeline fire
Chemistry
1,125
52,147
https://en.wikipedia.org/wiki/Sawtooth%20wave
The sawtooth wave (or saw wave) is a kind of non-sinusoidal waveform. It is so named based on its resemblance to the teeth of a plain-toothed saw with a zero rake angle. A single sawtooth, or an intermittently triggered sawtooth, is called a ramp waveform. The convention is that a sawtooth wave ramps upward and then sharply drops. In a reverse (or inverse) sawtooth wave, the wave ramps downward and then sharply rises. It can also be considered the extreme case of an asymmetric triangle wave. The equivalent piecewise linear function x(t) = t − floor(t), based on the floor function of time t, is an example of a sawtooth wave with period 1. A more general form, in the range −1 to 1 and with period p, is x(t) = 2·(t/p − floor(1/2 + t/p)). This sawtooth function has the same phase as the sine function. While a square wave is constructed from only odd harmonics, a sawtooth wave's sound is harsh and clear and its spectrum contains both even and odd harmonics of the fundamental frequency. Because it contains all the integer harmonics, it is one of the best waveforms to use for subtractive synthesis of musical sounds, particularly bowed string instruments like violins and cellos, since the slip-stick behavior of the bow drives the strings with a sawtooth-like motion. A sawtooth can be constructed using additive synthesis. For period p and amplitude a, the following infinite Fourier series converge to a sawtooth and a reverse (inverse) sawtooth wave: x_sawtooth(t) = (2a/π) Σ_{k=1}^∞ (−1)^(k+1) sin(2πkt/p)/k and x_reverse(t) = (2a/π) Σ_{k=1}^∞ sin(2πkt/p)/k. In digital synthesis, these series are only summed over k such that the highest harmonic, N_max, is less than the Nyquist frequency (half the sampling frequency). This summation can generally be more efficiently calculated with a fast Fourier transform. If the waveform is digitally created directly in the time domain using a non-bandlimited form, such as y = x − floor(x), infinite harmonics are sampled and the resulting tone contains aliasing distortion. (Audio demonstrations of a sawtooth played at 440 Hz (A4), 880 Hz (A5) and 1,760 Hz (A6), in both bandlimited (non-aliased) and aliased versions, accompany the article.) Applications Sawtooth waves are known for their use in electronic music. The sawtooth and square waves are among the most common waveforms used to create sounds with subtractive analog and virtual analog music synthesizers. Sawtooth waves are used in switched-mode power supplies. In the regulator chip the feedback signal from the output is continuously compared to a high-frequency sawtooth to generate a new duty cycle PWM signal on the output of the comparator. In the field of computer science, particularly in automation and robotics, the sawtooth function allows sums and differences of angles to be calculated while avoiding discontinuities at 360° and 0°. The sawtooth wave is the form of the vertical and horizontal deflection signals used to generate a raster on CRT-based television or monitor screens. Oscilloscopes also use a sawtooth wave for their horizontal deflection, though they typically use electrostatic deflection. On the wave's "ramp", the magnetic field produced by the deflection yoke drags the electron beam across the face of the CRT, creating a scan line. On the wave's "cliff", the magnetic field suddenly collapses, causing the electron beam to return to its resting position as quickly as possible.
The current applied to the deflection yoke is adjusted by various means (transformers, capacitors, center-tapped windings) so that the half-way voltage on the sawtooth's cliff is at the zero mark, meaning that a negative current will cause deflection in one direction, and a positive current deflection in the other; thus, a center-mounted deflection yoke can use the whole screen area to depict a trace. The horizontal frequency is 15.734 kHz on NTSC, 15.625 kHz for PAL and SECAM. The vertical deflection system operates the same way as the horizontal, though at a much lower frequency (59.94 Hz on NTSC, 50 Hz for PAL and SECAM). The ramp portion of the wave must appear as a straight line. If it does not, the current is not increasing linearly, and therefore the magnetic field produced by the deflection yoke is not linear either. As a result, the electron beam will accelerate during the non-linear portions. This would result in a television image "squished" in the direction of the non-linearity. Extreme cases will show marked brightness increases, since the electron beam spends more time on that side of the picture. The first television receivers had controls allowing users to adjust the picture's vertical or horizontal linearity. Such controls were not present on later sets as the stability of electronic components had improved. See also List of periodic functions Sine wave Square wave Triangle wave Pulse wave Sound Wave Zigzag References External links Waveforms Fourier series
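To make the additive-synthesis construction above concrete, the following is a minimal Python sketch (the function name and parameter choices are illustrative, not from any particular synthesis library) that sums sine harmonics of the Fourier series only up to the Nyquist frequency, exactly as described for digital synthesis:

import numpy as np

def bandlimited_sawtooth(freq, duration, sample_rate=44100, amplitude=1.0):
    # Sum harmonics of the Fourier series up to the Nyquist frequency,
    # so that, unlike the naive y = x - floor(x) form, no aliasing occurs.
    t = np.arange(int(duration * sample_rate)) / sample_rate
    n_max = int((sample_rate / 2) // freq)  # highest harmonic below Nyquist
    wave = np.zeros_like(t)
    for k in range(1, n_max + 1):
        # (-1)**(k + 1) gives the rising ramp; negating the sum gives the
        # reverse (inverse) sawtooth.
        wave += (-1) ** (k + 1) * np.sin(2 * np.pi * k * freq * t) / k
    return amplitude * (2 / np.pi) * wave

tone = bandlimited_sawtooth(440.0, 1.0)  # one second of A4

At a 44.1 kHz sampling rate this sums 50 harmonics for a 440 Hz tone; the naive time-domain form would instead alias every harmonic above 22.05 kHz back into the audible band.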
Sawtooth wave
Physics
1,039
685,665
https://en.wikipedia.org/wiki/Alexander%20horned%20sphere
The Alexander horned sphere is a pathological object in topology discovered by J. W. Alexander in 1924. It is a particular topological embedding of a two-dimensional sphere in three-dimensional space. Together with its inside, it is a topological 3-ball, the Alexander horned ball, and so is simply connected; i.e., every loop can be shrunk to a point while staying inside. However, the exterior is not simply connected, unlike the exterior of the usual round sphere. Construction The Alexander horned sphere is the particular (topological) embedding of a sphere in 3-dimensional Euclidean space obtained by the following construction, starting with a standard torus: Remove a radial slice of the torus. Connect a standard punctured torus to each side of the cut, interlinked with the torus on the other side. Repeat steps 1–2 on the two tori just added ad infinitum. By considering only the points of the tori that are not removed at some stage, an embedding of the sphere with a Cantor set removed results. This embedding extends to a continuous map from the whole sphere, which is injective (hence a topological embedding, since the sphere is compact) since points in the sphere approaching two different points of the Cantor set will end up in different 'horns' at some stage and therefore have different images. Impact on theory The horned sphere, together with its inside, is a topological 3-ball, the Alexander horned ball, and so is simply connected; i.e., every loop can be shrunk to a point while staying inside. The exterior is not simply connected, unlike the exterior of the usual round sphere; a loop linking a torus in the above construction cannot be shrunk to a point without touching the horned sphere. This shows that the Jordan–Schönflies theorem does not hold in three dimensions, as Alexander had originally thought. Alexander also proved that the theorem does hold in three dimensions for piecewise linear/smooth embeddings. This is one of the earliest examples where the need for distinction between the categories of topological manifolds, differentiable manifolds, and piecewise linear manifolds became apparent. Now consider Alexander's horned sphere as an embedding into the 3-sphere, considered as the one-point compactification of the 3-dimensional Euclidean space R3. The closure of the non-simply connected domain is called the solid Alexander horned sphere. Although the solid horned sphere is not a manifold, R. H. Bing showed that its double (which is the 3-manifold obtained by gluing two copies of the horned sphere together along the corresponding points of their boundaries) is in fact the 3-sphere. One can consider other gluings of the solid horned sphere to a copy of itself, arising from different homeomorphisms of the boundary sphere to itself. This has also been shown to be the 3-sphere. The solid Alexander horned sphere is an example of a crumpled cube; i.e., a closed complementary domain of the embedding of a 2-sphere into the 3-sphere. Generalizations One can generalize Alexander's construction to generate other horned spheres by increasing the number of horns at each stage of Alexander's construction or considering the analogous construction in higher dimensions. Other substantially different constructions exist for constructing such "wild" spheres. Another example, also found by Alexander, is Antoine's horned sphere, which is based on Antoine's necklace, a pathological embedding of the Cantor set into the 3-sphere.
See also Cantor tree surface Wild knot Wild arc, specifically the Fox–Artin arc References Citations Hatcher, Allen, Algebraic Topology, http://pi.math.cornell.edu/~hatcher/AT/ATpage.html External links Zbigniew Fiedorowicz. Math 655 – Introduction to Topology. – Lecture notes Construction of the Alexander sphere rotating animation PC OpenGL demo rendering and expanding the cusp Geometric topology Fractals Eponyms in geometry 1924 introductions
Alexander horned sphere
Mathematics
828
69,934,691
https://en.wikipedia.org/wiki/Igor%20Dzyaloshinskii
Igor Ekhielevich Dzyaloshinskii (Игорь Ехиельевич Дзялошинский; surname sometimes transliterated as Dzyaloshinsky, Dzyaloshinski, Dzyaloshinskiĭ, or Dzyaloshinkiy; 1 February 1931 – 14 July 2021) was a Russian theoretical physicist, known for his research on "magnetism, multiferroics, one-dimensional conductors, liquid crystals, van der Waals forces, and applications of methods of quantum field theory". In particular he is known for the Dzyaloshinskii-Moriya interaction. Biography He was born in Moscow to a Jewish family. His father, Yechiel Moiseevich Dzyaloshinskii (1897–1942), a native of Kalush, Ukraine, died in captivity in early 1942. The first in his family to attend a university, Igor E. Dzyaloshinskii graduated in 1953 from the faculty of physics of Moscow State University. Dzyaloshinskii pursued graduate study at the Institute of Physics of the Russian Academy of Sciences, where he received in 1957 his Russian Candidate of Sciences degree (Ph.D.) with a thesis on weak ferromagnetism under the supervision of Lev Landau. Weak ferromagnetism is "a small spontaneous magnetic moment in certain classes of antiferromagnetic materials". Its explanation involves exchange interactions based upon "concepts of the magnetic symmetry of crystals". In 1962 Dzyaloshinskii received his Russian Doctor of Sciences degree (habilitation). His Russian doctoral thesis dealt with the application of quantum field theory methods in statistical physics. In 1964 he was one of the founding members of the Landau Institute for Theoretical Physics in Moscow. He was until 1972 a professor at the Moscow Institute of Physics and Technology and from 1972 to 1989 at Moscow State University. Between 1958 and 1961, with Alexei Abrikosov and Lev Gor'kov, he published important works on the application of methods of quantum field theory in statistical physics (e.g. the theory of superconductivity) and many-particle theory, about which the three also wrote an outstanding textbook Методы квантовой теории поля в статистической физике, which was published in Russian in 1961 and in English translation as Quantum field theory methods in statistical physics in 1963. Dzyaloshinskii did important research with Lev Pitaevskii in solving "the problem of the van der Waals forces between bodies separated by an absorbing liquid" and with Yury Bychkov and Lev Gor'kov on the "problem of superconducting and charge-density-wave instabilities in 1D conductors". Dzyaloshinskii and Anatoly Larkin in the 1970s published "a solution to the Luttinger-liquid problem that is central to the theory of 1D Fermi systems and to the bosonization technique." In 1991 he immigrated to the United States and soon became a professor at the University of California, Irvine (UCI), where he eventually retired as professor emeritus. In the last years of his career, he did research on violation of time-parity in magneto-optics and the condensed matter physics of Fermi liquids and non-Fermi liquids. Dzyaloshinskii applied diagram methods to finite-temperature transport problems. He conjectured the existence of phase transitions without fixed points of the renormalization group. He was involved in the formulation of the Matsubara formalism (Takeo Matsubara, 1955). Dzyaloshinskii was awarded in 1972 the Lomonosov Prize, in 1975 the Order of the Badge of Honour, in 1981 the Order of the Red Banner of Labour, in 1984 the USSR State Prize, and in 1989 the Landau Prize.
He was elected in 1974 a corresponding member of the Soviet Academy of Sciences, in 1991 an honorary foreign member of the American Academy of Arts & Sciences, in 1996 a fellow of the American Physical Society, and in 2002 a fellow of the American Association for the Advancement of Science. He married in 1960. Upon his death, he was survived by his widow, their daughter, three grandchildren, and two great-grandchildren. Selected publications Articles Gorkov, Abrikosov, & Dzyaloshinski On the application of Quantum field theory methods to problems of quantum statistics at finite temperature, Sov.Phys.JETP, Vol. 9, 1959, p. 636 (JETP, Vol. 36, 1959, p. 900) Books Abrikosov, Gorkov, & Dzyaloshinskii Quantum field theory methods in statistical physics, Prentice Hall 1963, 2nd edition Pergamon Press 1965, new edition Dover 1975 References External links (publication list) 1931 births 2021 deaths Russian theoretical physicists Condensed matter physicists Soviet physicists Jewish American physicists Jewish Russian physicists 20th-century Russian physicists 21st-century Russian physicists 20th-century American physicists 21st-century American physicists American people of Russian-Jewish descent Moscow State University alumni Academic staff of Moscow State University Academic staff of the Moscow Institute of Physics and Technology University of California, Irvine faculty Fellows of the American Academy of Arts and Sciences Fellows of the American Association for the Advancement of Science Fellows of the American Physical Society Recipients of the USSR State Prize Recipients of the Order of the Red Banner of Labour Soviet Jews Scientists from Moscow American people of Ukrainian-Jewish descent
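For reference, the antisymmetric exchange that bears his name is conventionally written, in its standard textbook form (an addition here for context, not a quotation from Dzyaloshinskii's papers), as \mathcal{H}_{\mathrm{DM}} = \mathbf{D}_{ij} \cdot \left( \mathbf{S}_i \times \mathbf{S}_j \right), where \mathbf{S}_i and \mathbf{S}_j are neighbouring spins and \mathbf{D}_{ij} is the Dzyaloshinskii-Moriya vector. A nonzero \mathbf{D}_{ij} favours a small canting of otherwise antiparallel spins, which is the origin of the weak ferromagnetism that Dzyaloshinskii explained.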
Igor Dzyaloshinskii
Physics,Materials_science
1,161
19,943,360
https://en.wikipedia.org/wiki/Peter%20Mark%20Memorial%20Award
The Peter Mark Memorial Award was established in 1979 by the American Vacuum Society "to recognize outstanding theoretical or experimental work by a young scientist or engineer." See also List of physics awards References External links Peter Mark Memorial Award Physics awards Early career awards Awards established in 1979 1979 establishments in the United States
Peter Mark Memorial Award
Technology
59
46,397,207
https://en.wikipedia.org/wiki/Universal%20Test%20Specification%20Language
Universal Test Specification Language (UTSL) is a programming language used to describe ASIC tests in a format that leads to an automated translation of the test specification into executable test code. UTSL is platform-independent: provided that a code-generation interface for a specific platform is available, UTSL code can be translated into the programming language of a specific piece of Automatic Test Equipment (ATE). History The increasing complexity of ASICs leads to more complex test programs with longer development times. Automated test-program generation could simplify and speed up this process. Teradyne Inc. together with Robert Bosch GmbH agreed to develop a concept and a tool chain for automated test-program generation. To achieve this, a tester-independent programming language was required. Hence UTSL, a programming language that enables a detailed description of tests that can be translated into ATE-specific programming languages, was developed. ATE manufacturers need to provide a test program generator that uses the UTSL test description as input and generates ATE-specific test code with optimal resource mapping and best-practice program code. As long as the ATE manufacturer provides a test program generator that can use UTSL as an input, the cumbersome task of translating a test program from one platform to another can be significantly simplified. In other words, the task of rewriting test programs for a specific platform can be replaced by automatically generating the code from the UTSL-based test specification. A prerequisite is that the UTSL description of the tests is sufficiently detailed, defining the test technique as well as all the necessary inputs and outputs. Being platform-independent, UTSL allows engineers to read, analyse and modify the tests in the test specification regardless of the ATE on which the ASIC will be tested. UTSL is based on C#; it allows procedural programming and is class-oriented. Classes contain sub-classes, which in turn have their own sub-classes. UTSL provides a large number of commands and test functions, and it also allows the use of commonly known high-level programming language syntax elements such as "if/then/else", etc. Design UTSL is a C#-like language in which tests are defined as blocks of code. Simple tests, such as forcing a current and measuring a voltage or vice versa, can be written in UTSL and, by means of the ATE-specific code generator, translated into executable test code (see Picture 1). UTSL allows the user to set instrument ranges and clamps in order to guarantee measurement precision and to prevent measurements from exceeding the instrument clamp values. Current UTSL capabilities cover approximately 70% of the required test specification for ASIC testing; for the remaining 30%, one could use the option of writing comments in an informal form, as was done in the past.
UTSL supports language features such as: Flow control - "if/then/else, select/case" Loops - "for, while, for each" Data types - "int, double, bool, string" Numerical operators - "=, +, -, *, /, %, **, --, &, |, <<, >>" Logical operators - "==, <, >, >=, <=, !=, ^" Arrays - "declare, resize, and [] operator" Furthermore, specialized classes for testing were added: Pin and PinList classes - "for the test board specifics" TestEnvironment class - "wafer level vs final testing" SerialPort and SerialDataFrame classes - "for device serial communications" Evaluate class - "data-logs the results and compares the results to the defined limits" UTSL also supports units and scales wherever floating-point numbers are used. This is essential for a language that describes a test program in which values can be returned as "V, mV, uV, A, mA, uA", etc. More complex tests, such as serial communication with an ASIC requiring writes to and reads from registers, can also be implemented using UTSL. The example shows a test in which a certain trim code is written to a register and, based on the trim code, the internal regulator steps in voltage, which is read back (see Picture 2). Additionally, UTSL allows the user to define the state of the instrument, i.e. connected to or disconnected from the pin. References Automatic test equipment Integrated circuits Programming languages
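To illustrate the idea of a tester-independent specification feeding a platform-specific generator, here is a minimal, hypothetical Python sketch; every class, method name and mnemonic below is invented for illustration, since the real UTSL is C#-like and its actual syntax is not reproduced here:

from dataclasses import dataclass

@dataclass
class ForceCurrentMeasureVoltage:
    # One platform-neutral test: force a current into a pin, measure voltage.
    pin: str
    force_current: float   # amperes
    voltage_clamp: float   # instrument clamp, volts
    low_limit: float       # pass/fail limits, volts
    high_limit: float

def generate_test_code(test: ForceCurrentMeasureVoltage, platform: str) -> str:
    # A platform-specific generator maps the same specification onto the
    # instruction set of one particular ATE (here a made-up mnemonic).
    if platform == "tester_a":
        return (f"FIMV {test.pin} I={test.force_current} "
                f"CLAMP={test.voltage_clamp} "
                f"LIMITS=[{test.low_limit},{test.high_limit}]")
    raise NotImplementedError(f"no code generator for {platform!r}")

spec = ForceCurrentMeasureVoltage("VOUT", 1e-3, 5.0, 1.1, 1.3)
print(generate_test_code(spec, "tester_a"))

The point of the sketch is the separation of concerns: the dataclass plays the role of the UTSL test description, while generate_test_code stands in for the manufacturer-supplied test program generator.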
Universal Test Specification Language
Technology,Engineering
916
1,106,531
https://en.wikipedia.org/wiki/Intrusive%20rock
Intrusive rock is formed when magma penetrates existing rock, crystallizes, and solidifies underground to form intrusions, such as batholiths, dikes, sills, laccoliths, and volcanic necks. Intrusion is one of the two ways igneous rock can form. The other is extrusion, such as a volcanic eruption or similar event. An intrusion is any body of intrusive igneous rock, formed from magma that cools and solidifies within the crust of the planet. In contrast, an extrusion consists of extrusive rock, formed above the surface of the crust. Some geologists use the term plutonic rock synonymously with intrusive rock, but other geologists subdivide intrusive rock, by crystal size, into coarse-grained plutonic rock (typically formed deeper in the Earth's crust in batholiths or stocks) and medium-grained subvolcanic or hypabyssal rock (typically formed higher in the crust in dikes and sills). Classification Because the solid country rock into which magma intrudes is an excellent insulator, cooling of the magma is extremely slow, and intrusive igneous rock is coarse-grained (phaneritic). However, the rate of cooling is greatest for intrusions at relatively shallow depth, and the rock in such intrusions is often much less coarse-grained than intrusive rock formed at greater depth. Coarse-grained intrusive igneous rocks that form at depth within the Earth are called abyssal or plutonic while those that form near the surface are called subvolcanic or hypabyssal. Plutonic rocks are classified separately from extrusive igneous rocks, generally on the basis of their mineral content. The relative amounts of quartz, alkali feldspar, plagioclase, and feldspathoid are particularly important in classifying intrusive igneous rocks, and most plutonic rocks are classified by where they fall in the QAPF diagram. Dioritic and gabbroic rocks are further distinguished by whether the plagioclase they contain is sodium-rich, and sodium-poor gabbros are classified by their relative contents of various iron- or magnesium-rich minerals (mafic minerals) such as olivine, hornblende, clinopyroxene, and orthopyroxene, which are the most common mafic minerals in intrusive rock. Rare ultramafic rocks, which contain more than 90% mafic minerals, and carbonatite rocks, containing over 50% carbonate minerals, have their own special classifications. Hypabyssal rocks resemble volcanic rocks more than they resemble plutonic rocks, being nearly as fine-grained, and are usually assigned volcanic rock names. However, dikes of basaltic composition often show grain sizes intermediate between plutonic and volcanic rock, and are classified as diabases or dolerites. Rare ultramafic hypabyssal rocks called lamprophyres have their own classification scheme. Characteristics Intrusive rocks are characterized by large crystal sizes, and as the individual crystals are visible, the rock is called phaneritic. There are few indications of flow in intrusive rocks, since their texture and structure mostly develops in the final stages of crystallization, when flow has ended. Contained gases cannot escape through the overlying strata, and these gases sometimes form cavities, often lined with large, well-shaped crystals. These are particularly common in granites and their presence is described as miarolitic texture. Because their crystals are of roughly equal size, intrusive rocks are said to be equigranular. 
Plutonic rocks are less likely than volcanic rocks to show a pronounced porphyritic texture, in which a first generation of large well-shaped crystals are embedded in a fine-grained ground-mass. The minerals of each have formed in a definite order, and each has had a period of crystallization that may be very distinct or may have coincided with or overlapped the period of formation of some of the other ingredients. Earlier crystals originated at a time when most of the rock was still liquid and are more or less perfect. Later crystals are less regular in shape because they were compelled to occupy the spaces left between the already-formed crystals. The former case is said to be idiomorphic (or automorphic); the latter is xenomorphic. There are also many other characteristics that serve to distinguish plutonic from volcanic rock. For example, the alkali feldspar in plutonic rocks is typically orthoclase, while the higher-temperature polymorph, sanidine, is more common in volcanic rock. The same distinction holds for nepheline varieties. Leucite is common in lavas but very rare in plutonic rocks. Muscovite is confined to intrusions. These differences show the influence of the physical conditions under which crystallization takes place. Hypabyssal rocks show structures intermediate between those of extrusive and plutonic rocks. They are very commonly porphyritic, vitreous, and sometimes even vesicular. In fact, many of them are petrologically indistinguishable from lavas of similar composition. Occurrences Plutonic rocks form 7% of the Earth's current land surface. Intrusions vary widely, from mountain-range-sized batholiths to thin veinlike fracture fillings of aplite or pegmatite. Batholith: a large irregular discordant intrusion Chonolith: an irregularly-shaped intrusion with a demonstrable base Cupola: a dome-shaped projection from the top of a large subterranean intrusion Dike: a relatively narrow tabular discordant body, often nearly vertical Laccolith: concordant body with roughly flat base and convex top, usually with a feeder pipe below Lopolith: concordant body with roughly flat top and a shallow convex base, may have a feeder dike or pipe below Phacolith: a concordant lens-shaped pluton that typically occupies the crest of an anticline or trough of a syncline Volcanic pipe or volcanic neck: tubular, roughly vertical body that may have been a feeder vent for a volcano Sill: a relatively thin tabular concordant body intruded along bedding planes Stock: a smaller irregular discordant intrusive Boss: a small stock See also Ellicott City Granodiorite Guilford Quartz Monzonite Pluton emplacement Norbeck Intrusive Suite Subvolcanic rock Tuolumne Intrusive Suite Volcanic rock Woodstock Quartz Monzonite References Petrology Rocks
Intrusive rock
Physics
1,395
60,672,279
https://en.wikipedia.org/wiki/SKIDA1
Ski/Dach domain-containing protein 1 is a protein that in humans is encoded by the SKIDA1 gene. It is also known as C10orf140 and DLN-1. It has orthologs in vertebrates. It has two domains: the Ski/Sno/Dac domain and a domain of unknown function, DUF4854. It is associated with multiple types of cancer, such as leukemia, ovarian cancer, and colon cancer. It is predicted to be a nuclear protein. It may interact with PRC2. Homologs Orthologs SKIDA1 has orthologs in vertebrate species. The species least related to humans with a SKIDA1 ortholog is the lancelet Branchiostoma belcheri. The clades amphibia and chondrichthyes have at least two species with SKIDA1, but SKIDA1 is not found throughout the clades. No orthologs have been found in lungfish or invertebrate species. Paralogous Domains SKIDA1 shares the Ski/Sno/Dac domain with Ski oncogene (Ski), Ski-like protein (Sno), and dachshund (Dac). It shares DUF4854 with Elongin BC Polycomb Repressive Complex 2 associated Protein (EPOP). Structure In humans, SKIDA1 is located on the reverse strand of chromosome 10 at locus 10p12.31. It contains five exons. Isoforms There is not a consensus on whether humans have one or two SKIDA1 isoforms. NCBI Gene claims there is one, while UniProt claims there are two. It is possible that isoform 2 is recorded in NCBI Gene as DLN-1 (accession BAE93016.1). Isoform 1 is 908 amino acids long, while isoform 2 is 827 amino acids long; isoform 2 is missing amino acids 240-318 of isoform 1. Isoform 1 is predicted to weigh 98 kDa and have an isoelectric point of 8.7, while isoform 2 is predicted to weigh 90 kDa and have an isoelectric point of 7.6. Other mammalian species also have multiple isoforms of SKIDA1, including carnivorans, rodents, and primates. The number of isoforms each species has varies: cheetahs have five recorded isoforms, chimpanzees have three recorded, and brown rats have two recorded. Amino Acid Repeats Human SKIDA1 contains two poly-alanine regions, one poly-histidine region, and one poly-glutamic acid region. It is unknown whether they have any function. The poly-alanine and poly-histidine regions are not highly conserved among orthologs; for example, while they are found in the house mouse ortholog, they are not found in the western lowland gorilla ortholog. The poly-glutamic acid region shows more conservation, and is found abbreviated in species as distantly related from humans as the tire track eel. Domains SKIDA1 contains two domains: Ski/Sno/Dac and DUF4854. The Ski/Sno/Dac domain is at the N-terminus end of the protein. The Ski/Sno/Dac domain is also found in the proteins Ski, Ski-like protein, and dachshund. It is potentially a DNA-binding domain. The other domain, DUF4854, is also found in EPOP, near its C-terminus. However, the DUF4854 found in EPOP is roughly a fifth the size of that in SKIDA1. The C-termini of SKIDA1 (amino acids 844-908) and EPOP (amino acids 313-379) have 52% identity. The C-terminus of EPOP binds to the SUZ12 subunit of Polycomb Repressive Complex 2 (PRC2), suggesting that the C-terminus of SKIDA1 may as well. Regulation Promoter and Transcription Factors In humans, there are five predicted potential promoters. Two align with the second half of the mRNA transcript, suggesting they are not used or only produce an incomplete polypeptide. The promoter that aligns best with the start of the mRNA transcript is potentially bound by many transcription factors, including Transcription factor II B, Nuclear factor Y, Early growth response 1, and Krueppel-like factor 6. It does not contain a TATA box.
Transcript Regulation SKIDA1 is regulated by microRNAs. miR-93 binds to the SKIDA1 3'-UTR. Multiple microRNAs are predicted to bind to the SKIDA1 3'-UTR, including miR-130, miR-301, miR-454, and miR-494. Polypeptide Modification SKIDA1 is SUMOylated at five sites. Additional sites are predicted to be SUMOylated. SKIDA1 is also predicted to be phosphorylated and O-GlcNAcylated. Expression Subcellular Localization SKIDA1 is predicted to be localized primarily in the nucleus and less so in the cytosol. Tissue Expression SKIDA1 is expressed at high levels in the brain, thyroid, and testes. It is expressed at medium to low levels in adipose tissue, lymph nodes, and skeletal muscle. In mice, it is noted to have medium-to-high expression in the olfactory bulb, retina, and salivary gland. Developmental Expression SKIDA1 expression changes during organism development. Expression is low in the zygote, peaks during embryonic development, and is low post-birth. In the house mouse, it is expressed most during organogenesis. In the fetus, its expression is low in the liver but not in other organs. Expression in the adult liver is much higher. In contrast, SKIDA1 expression in the fetal brain is higher than in the adult brain. SKIDA1 in the African clawed frog is expressed faintly in the marginal zone of gastrulae. During neurulation, it is expressed in the brain and cranial neural crest. During tailbud, SKIDA1 expression increases in sensory placodes. By the end of tailbud, neural expression has faded except in the olfactory organ. Function SKIDA1 is predicted to function primarily in the nucleus and also in the cytosol. SKIDA1 knockouts in mice have significant differences from wild-type mice in the skeletal, neurological, reproductive, and immune systems. Other significant differences include affected hearing, an enlarged thymus, and increased pre-weaning mortality. Some, but not all, of these effects were found in heterozygous knockouts. Clinical significance SKIDA1 expression is associated with multiple types of cancer. It is over-expressed in epithelial ovarian cancer cells. Its expression is altered by various cancer-treatment compounds: human alpha-lactalbumin made lethal to tumor cells; oleate salts; metformin; and aspirin. In cell lines of cancerous cells, altered expression is associated with resistance to dasatinib and docetaxel, which are used to treat cancer. Altered methylation of SKIDA1 is associated with human pancreatic cancer, rheumatoid arthritis, and lupus erythematosus. Additionally, SKIDA1 is expressed less in women with Down syndrome compared to their identical twins without Down syndrome. Its expression is dramatically reduced in brains affected by untreated HIV1-associated neurocognitive disorders (HAND) in comparison to healthy brains and brains affected by HAND but treated with antiretrovirals. References Human proteins Genes on human chromosome 10 Proteins Genes Human genes
SKIDA1
Chemistry
1,620
127,511
https://en.wikipedia.org/wiki/DNA%20sequencer
A DNA sequencer is a scientific instrument used to automate the DNA sequencing process. Given a sample of DNA, a DNA sequencer is used to determine the order of the four bases: G (guanine), C (cytosine), A (adenine) and T (thymine). This is then reported as a text string, called a read. Some DNA sequencers can also be considered optical instruments, as they analyze light signals originating from fluorochromes attached to nucleotides. The first automated DNA sequencer, invented by Lloyd M. Smith, was introduced by Applied Biosystems in 1987. It used the Sanger sequencing method, a technology which formed the basis of the "first generation" of DNA sequencers and enabled the completion of the human genome project in 2001. This first generation of DNA sequencers consists essentially of automated electrophoresis systems that detect the migration of labelled DNA fragments. Therefore, these sequencers can also be used in the genotyping of genetic markers where only the length of a DNA fragment(s) needs to be determined (e.g. microsatellites, AFLPs). The Human Genome Project spurred the development of cheaper, high-throughput and more accurate platforms, known as Next Generation Sequencers (NGS), to sequence the human genome. These include the 454, SOLiD and Illumina DNA sequencing platforms. Next generation sequencing machines have increased the rate of DNA sequencing substantially compared with the previous Sanger methods. DNA samples can be prepared automatically in as little as 90 minutes, while a human genome can be sequenced at 15 times coverage in a matter of days. More recent third-generation DNA sequencers such as PacBio SMRT and Oxford Nanopore offer the possibility of sequencing long molecules, compared to short-read technologies such as Illumina SBS or MGI Tech's DNBSEQ. Because of limitations in DNA sequencer technology, the reads of many of these technologies are short compared to the length of a genome; the reads must therefore be assembled into longer contigs. The data may also contain errors, caused by limitations in the DNA sequencing technique or by errors during PCR amplification. DNA sequencer manufacturers use a number of different methods to detect which DNA bases are present. The specific protocols applied in different sequencing platforms have an impact on the final data that is generated. Therefore, comparing data quality and cost across different technologies can be a daunting task. Each manufacturer provides their own ways to report sequencing errors and scores. However, errors and scores between different platforms cannot always be compared directly. Since these systems rely on different DNA sequencing approaches, choosing the best DNA sequencer and method will typically depend on the experiment objectives and available budget. History The first DNA sequencing methods were developed by Gilbert (1973) and Sanger (1975). Gilbert introduced a sequencing method based on chemical modification of DNA followed by cleavage at specific bases, whereas Sanger's technique is based on dideoxynucleotide chain termination. The Sanger method became popular due to its increased efficiency and low radioactivity. The first automated DNA sequencer was the AB370A, introduced in 1986 by Applied Biosystems. The AB370A was able to sequence 96 samples simultaneously, processing 500 kilobases per day and reaching read lengths of up to 600 bases.
This was the beginning of the "first generation" of DNA sequencers, which implemented Sanger sequencing, fluorescent dideoxy nucleotides and polyacrylamide gel sandwiched between glass plates (slab gels). The next major advance was the release in 1995 of the AB310, which utilized a linear polymer in a capillary in place of the slab gel for DNA strand separation by electrophoresis. These techniques formed the base for the completion of the human genome project in 2001. The human genome project spurred the development of cheaper, high-throughput and more accurate platforms known as Next Generation Sequencers (NGS). In 2005, 454 Life Sciences released the 454 sequencer, followed in 2006 by the Solexa Genome Analyzer and Agencourt's SOLiD (Supported Oligo Ligation Detection). Applied Biosystems acquired Agencourt in 2006, and in 2007 Roche bought 454 Life Sciences, while Illumina purchased Solexa. Ion Torrent entered the market in 2010 and was acquired by Life Technologies (now Thermo Fisher Scientific). BGI started manufacturing sequencers in China, under its MGI arm, after acquiring Complete Genomics. These are still the most common NGS systems due to their competitive cost, accuracy, and performance. More recently, a third generation of DNA sequencers was introduced. The sequencing methods applied by these sequencers do not require DNA amplification (polymerase chain reaction – PCR), which speeds up the sample preparation before sequencing and reduces errors. In addition, sequencing data is collected from the reactions caused by the addition of nucleotides in the complementary strand in real time. Two companies introduced different approaches in their third-generation sequencers. Pacific Biosciences sequencers utilize a method called single-molecule real-time (SMRT) sequencing, where sequencing data is produced from the light (captured by a camera) emitted when a fluorescently labelled nucleotide is added to the complementary strand by the polymerase. Oxford Nanopore Technologies is another company developing third-generation sequencers, using electronic systems based on nanopore sensing technologies. Manufacturers of DNA sequencers DNA sequencers have been developed, manufactured, and sold by the following companies, among others. Roche The 454 DNA sequencer was the first next-generation sequencer to become commercially successful. It was developed by 454 Life Sciences and purchased by Roche in 2007. 454 utilizes the detection of pyrophosphate released by the DNA polymerase reaction when adding a nucleotide to the template strand. Roche currently manufactures two systems based on their pyrosequencing technology: the GS FLX+ and the GS Junior System. The GS FLX+ System promises read lengths of approximately 1000 base pairs while the GS Junior System promises 400 base pair reads. A predecessor to GS FLX+, the 454 GS FLX Titanium system, was released in 2008, achieving an output of 0.7G of data per run, with 99.9% accuracy after quality filter, and a read length of up to 700bp. In 2009, Roche launched the GS Junior, a benchtop version of the 454 sequencer with read length up to 400bp, and simplified library preparation and data processing. One of the advantages of 454 systems is their running speed. Manpower can be reduced with automation of library preparation and semi-automation of emulsion PCR. A disadvantage of the 454 system is that it is prone to errors when estimating the number of bases in a long string of identical nucleotides.
This is referred to as a homopolymer error and occurs when there are 6 or more identical bases in a row. Another disadvantage is that the price of reagents is relatively high compared with other next-generation sequencers. In 2013 Roche announced that they would be shutting down development of 454 technology and phasing out 454 machines completely in 2016, as the technology had become noncompetitive. Roche produces a number of software tools which are optimised for the analysis of 454 sequencing data. For example, GS Run Processor converts raw images generated by a sequencing run into intensity values. The process consists of two main steps: image processing and signal processing. The software also applies normalization, signal correction, base-calling and quality scores for individual reads. The software outputs data in Standard Flowgram Format (or SFF) files to be used in data analysis applications (GS De Novo Assembler, GS Reference Mapper or GS Amplicon Variant Analyzer). GS De Novo Assembler is a tool for de novo assembly of whole genomes up to 3GB in size from shotgun reads, alone or combined with paired-end data generated by 454 sequencers. It also supports de novo assembly of transcripts (including analysis) and isoform variant detection. GS Reference Mapper maps short reads to a reference genome, generating a consensus sequence. The software is able to generate output files for assessment, indicating insertions, deletions and SNPs, and can handle large and complex genomes of any size. Finally, the GS Amplicon Variant Analyzer aligns reads from amplicon samples against a reference, identifying variants (linked or not) and their frequencies. It can also be used to detect unknown and low-frequency variants. It includes graphical tools for analysis of alignments. Illumina Illumina produces a number of next-generation sequencing machines using technology acquired from Manteia Predictive Medicine and developed by Solexa. These include the HiSeq, Genome Analyzer IIx, MiSeq and the HiScanSQ, which can also process microarrays. The technology leading to these DNA sequencers was first released by Solexa in 2006 as the Genome Analyzer. Illumina purchased Solexa in 2007. The Genome Analyzer uses a sequencing-by-synthesis method. The first model produced 1G per run. During 2009 the output was increased from 20G per run in August to 50G per run in December. In 2010 Illumina released the HiSeq 2000, with an output of 200 and then 600G per run, which would take 8 days. At its release the HiSeq 2000 provided one of the cheapest sequencing platforms at $0.02 per million bases, as costed by the Beijing Genomics Institute. In 2011 Illumina released a benchtop sequencer called the MiSeq. At its release the MiSeq could generate 1.5G per run with paired-end 150bp reads. A sequencing run can be performed in 10 hours when using automated DNA sample preparation. The Illumina HiSeq uses two software tools to calculate the number and position of DNA clusters to assess the sequencing quality: the HiSeq control system and the real-time analyzer. These methods help to assess whether nearby clusters are interfering with each other. Life Technologies Life Technologies (now Thermo Fisher Scientific) produces DNA sequencers under the Applied Biosystems and Ion Torrent brands. Applied Biosystems makes the SOLiD next-generation sequencing platform, and Sanger-based DNA sequencers such as the 3500 Genetic Analyzer.
Under the Ion Torrent brand, Applied Biosystems produces four next-generation sequencers: the Ion PGM System, Ion Proton System, Ion S5 and Ion S5xl systems. The company is also believed to be developing its new capillary DNA sequencer, called SeqStudio, to be released in early 2018. SOLiD was acquired by Applied Biosystems in 2006. SOLiD applies sequencing by ligation and dual-base encoding. The first SOLiD system was launched in 2007, generating read lengths of 35bp and 3G of data per run. After five upgrades, the 5500xl sequencing system was released in 2010, considerably increasing read length to 85bp, improving accuracy up to 99.99% and producing 30G per 7-day run. The limited read length of the SOLiD has remained a significant shortcoming and has to some extent limited its use to experiments where read length is less vital, such as resequencing and transcriptome analysis and, more recently, ChIP-Seq and methylation experiments. The DNA sample preparation time for SOLiD systems has become much quicker with the automation of sequencing library preparation, such as by the Tecan system. The colour space data produced by the SOLiD platform can be decoded into DNA bases for further analysis; however, software that considers the original colour space information can give more accurate results. Life Technologies has released BioScope, a data analysis package for resequencing, ChIP-Seq and transcriptome analysis. It uses the MaxMapper algorithm to map the colour space reads. Beckman Coulter Beckman Coulter (now Danaher) previously manufactured chain-termination and capillary-electrophoresis-based DNA sequencers under the model name CEQ, including the CEQ 8000. The company now produces the GeXP Genetic Analysis System, which uses dye terminator sequencing. This method uses a thermocycler in much the same way as PCR to denature, anneal, and extend DNA fragments, amplifying the sequenced fragments. Pacific Biosciences Pacific Biosciences produces the PacBio RS and Sequel sequencing systems using a single-molecule real-time sequencing, or SMRT, method. This system can produce read lengths of multiple thousands of base pairs. Higher raw read error rates are corrected using either circular consensus (where the same strand is read over and over again) or optimized assembly strategies. Scientists have reported 99.9999% accuracy with these strategies. The Sequel system was launched in 2015 with an increased capacity and a lower price. Oxford Nanopore Oxford Nanopore Technologies' MinION sequencer applies evolving nanopore sequencing technology to nucleic acid analysis. The device is four inches long and gets power from a USB port. MinION decodes DNA directly as the molecule is drawn, at the rate of 450 bases/second, through a nanopore suspended in a membrane. Changes in electric current indicate which base is present. Initially, the device was 60 to 85 percent accurate, compared with 99.9 percent in conventional machines. Even inaccurate results may prove useful, because the device produces long read lengths. In early 2021, researchers from the University of British Columbia used special molecular tags to reduce the five-to-15 per cent error rate of the device to less than 0.005 per cent, even when sequencing many long stretches of DNA at a time. There are two more product iterations based on the MinION; the first is the GridION, a slightly larger sequencer that processes up to five MinION flow cells at once.
The second is the PromethION, which uses as many as 100,000 pores in parallel and is more suitable for high-volume sequencing. MGI MGI produces high-throughput sequencers for scientific research and clinical applications, such as the DNBSEQ-G50, DNBSEQ-G400, and DNBSEQ-T7, based on a proprietary DNBSEQ technology. It is based upon DNA nanoball sequencing and combinatorial probe anchor synthesis technologies, in which DNA nanoballs (DNBs) are loaded onto a patterned array chip via the fluidic system, and a sequencing primer is later added to the adaptor region of the DNBs for hybridization. The DNBSEQ-T7 can generate short reads at very large scale, up to 60 human genomes per day. The DNBSEQ-T7 was used to generate 150 bp paired-end reads at 30X coverage in whole-genome sequencing of COVID-19 patients, to identify genetic variants predisposing to severe COVID-19 illness. Using a novel technique, researchers from the China National GeneBank sequenced PCR-free libraries on MGI's PCR-free DNBSEQ arrays to obtain, for the first time, true PCR-free whole-genome sequencing. The MGISEQ-2000 was used in single-cell RNA sequencing to study the underlying pathogenesis and recovery in COVID-19 patients, as published in Nature Medicine. Comparison As of December 2019, offerings in DNA sequencing technology show a dominant player, Illumina, followed by PacBio, MGI and Oxford Nanopore. References DNA sequencing Genetics techniques Molecular biology laboratory equipment Scientific instruments
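Throughput figures like "a human genome at 15 times coverage" follow from a simple calculation; below is a minimal Python sketch of the Lander-Waterman expected-coverage estimate (the numbers plugged in are illustrative assumptions, not any vendor's specification):

def expected_coverage(read_count, read_length, genome_size):
    # Average number of reads overlapping each base: C = N * L / G.
    return read_count * read_length / genome_size

# e.g. 600 million 150 bp reads against a roughly 3 Gb human genome:
c = expected_coverage(read_count=600e6, read_length=150, genome_size=3e9)
print(f"expected coverage: {c:.0f}x")  # 30x

The same relation run in reverse gives the number of reads a sequencer must produce to reach a target coverage, which is how per-day genome counts such as those quoted for the DNBSEQ-T7 are derived.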
DNA sequencer
Chemistry,Technology,Engineering,Biology
3,221
17,935,458
https://en.wikipedia.org/wiki/AirLaunch
AirLaunch was an aerospace design and development company headquartered in Kirkland, Washington. The company hoped to provide services for launching payloads into orbit around the Earth. This was to be realized through a method called air launch, in which a rocket is carried to high altitude by an aircraft and then released. The rocket engine is then ignited to launch the rocket (with its payload) into a low Earth orbit (LEO). The principal advantage of a rocket being launched by a high-flying airplane is that it need not fly through the low, dense atmosphere, whose drag requires a considerable amount of extra work and thus extra propellant mass. Another advantage is the ability to launch a payload precisely into any orbital inclination at any time, and from a much wider variety of geographic launch locations. Falcon Small Launch Vehicle On June 14, 2006, the firm, in a DARPA-sponsored test, dropped a dummy payload from the back of a C-17, a record-setting drop for the aircraft type. AirLaunch subsequently carried out upper-stage propulsion development for the QuickReach orbital launch vehicle. The QuickReach vehicle is part of the Air Force and DARPA Falcon Small Launch Vehicle Program. Conclusion of Falcon SLV program According to a DARPA document dated October 2008, the QuickReach phase 2C test firings were completed, and DARPA concluded its SLV program. AirLaunch subsequently ceased operations in November 2008. References External links Private spaceflight companies Space access Aerospace companies of the United States Defunct companies based in Washington (state) Defunct spaceflight companies
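To see why avoiding the dense lower atmosphere matters, consider the standard Tsiolkovsky rocket equation (a textbook relation added here for context, not taken from AirLaunch material): \Delta v = v_e \ln\frac{m_0}{m_f}, where v_e is the effective exhaust velocity and m_0/m_f the ratio of initial to final mass. Since the \Delta v required to reach orbit includes drag and gravity losses on top of orbital velocity, and the required mass ratio m_0/m_f = e^{\Delta v / v_e} falls exponentially with any reduction in \Delta v, a high-altitude start can noticeably reduce the propellant the carrier aircraft must lift.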
AirLaunch
Astronomy
317
69,097,976
https://en.wikipedia.org/wiki/Madeleine%20Akrich
Madeleine Akrich (born 4 March 1959) is a French sociologist of technology. She served as the director of the Center for the Sociology of Innovation at Mines ParisTech from 2003 to 2013. She is known for developing actor–network theory (ANT) with Bruno Latour, Michel Callon, John Law and others. Research Akrich's work concerns the sociology of technology and has been influential in science and technology studies (STS). She developed actor–network theory, a theoretical approach to social analysis, alongside Michel Callon, Bruno Latour, John Law, and others. Akrich primarily studies users' relationships with various technologies, with a focus on technologies of obstetric medicine and, in recent collaboration with Cécile Méadel, online health discussion forums. Script analysis is another STS methodology developed by Akrich. The term "script" is "a metaphor for the 'instruction manual'" she claims is inscribed in an artifact. This is related to Don Norman's concept of affordances, but is more comprehensive, and has been applied both in STS and in adjacent disciplines such as design, internet research and management. In 2016, Akrich received the CNRS Silver Medal. Notable publications Madeleine Akrich, Cécile Méadel and Vololona Rabeharisoa, Se mobiliser pour la santé. Des associations s'expriment, Paris, Presses des mines, 2009. Madeleine Akrich & Cécile Méadel, "De l'interaction à l'engagement: les collectifs électroniques, nouveaux militants dans le champ de la santé," Hermès, n°47, 2007. Madeleine Akrich, Bruno Latour, & Michel Callon (ed.), Sociologie de la traduction : textes fondateurs, Paris, Mines Paris, les Presses, "Sciences sociales," 2006. Madeleine Akrich, Vololona Rabeharisoa, P. Jamet, Cécile Méadel & F. Vincent (ed.), La Griffe de l'ours. Débats et controverses en environnement, Paris, Presses de l'École des Mines, 2002. Madeleine Akrich & Françoise Laborie, De la contraception à l'enfantement. L'offre technologique en question, Paris; Montréal (Québec), l'Harmattan, 1999. Madeleine Akrich & Bernike Pasveer, Comment la naissance vient aux femmes. Les techniques de l'accouchement en France et aux Pays-Bas, Le Plessis-Robinson, Synthélabo, "Les Empêcheurs de penser en rond," 1996. Madeleine Akrich, L. Bibard, Michel Callon et al. (ed.), Ces réseaux que la raison ignore, Paris, l'Harmattan, "Logiques sociales," 1992. Madeleine Akrich, "The De-Scription of Technical Objects" in Shaping Technology / Building Society: Studies in Sociotechnical Change, 1992. References External links Research page on CSI French women sociologists Science and technology studies scholars Actor-network theory Sociologists of science French philosophers of technology 1959 births People from Boulogne-Billancourt Living people Academic staff of Mines Paris - PSL
Madeleine Akrich
Technology
680
25,809,437
https://en.wikipedia.org/wiki/Kenneth%20B.%20Storey
Kenneth B. Storey (born October 23, 1949) is a Canadian scientist whose work draws from a variety of fields including biochemistry and molecular biology. He is a Professor of Biology, Biochemistry and Chemistry at Carleton University in Ottawa, Canada. Storey has a world-wide reputation for his research on biochemical adaptation - the molecular mechanisms that allow animals to adapt to and endure severe environmental stresses such as deep cold, oxygen deprivation, and desiccation. Biography Kenneth Storey studied biochemistry at the University of Calgary (B.Sc. '71) and zoology at the University of British Columbia (Ph.D. '74). Storey is a Professor of Biochemistry, cross-appointed in the Departments of Biology, Chemistry and Neuroscience and holds the Canada Research Chair in Molecular Physiology at Carleton University in Ottawa, Canada. Storey is an elected fellow of the Royal Society of Canada, of the Society for Cryobiology and of the American Association for the Advancement of Science. He has won fellowships and awards for research excellence including the Fry medal from the Canadian Society of Zoologists (2011), the Flavelle medal from the Royal Society of Canada (2010), Ottawa Life Sciences Council Basic Research Award (1998), a Killam Senior Research Fellowship (1993–1995), the Ayerst Award from the Canadian Society for Molecular Biosciences (1989), an E.W.R. Steacie Memorial Fellowship from the Natural Sciences and Engineering Research Council of Canada (1984–1986), and four Carleton University Research Achievement Awards. Storey is the author of over 1200 research articles, the editor of seven books, has given over 500 talks at conferences and institutes worldwide, and organized numerous international symposia. Research Storey's research includes studies of enzyme properties, gene expression, protein phosphorylation, epigenetics, and cellular signal transduction mechanisms to seek out the basic principles of how organisms endure and flourish under extreme conditions. He is particularly known within the field of cryobiology for his studies of animals that can survive freezing, especially the frozen "frog-sicles" (Rana sylvatica) that have made his work popular with multiple TV shows and magazines. Storey's studies of the adaptations that allow frogs, insects, and other animals to survive freezing have made major advances in the understanding of how cells, tissues and organs can endure freezing. Storey was also responsible for the discovery that some turtle species are freeze tolerant: newly hatched painted turtles that spend their first winter on land (Chrysemys picta marginata & C. p. bellii). These turtles are unique as they are the only reptiles, and highest vertebrate life form, known to tolerate prolonged natural freezing of extracellular body fluids during winter hibernation. These advances may aid the development of organ cryopreservation technology. A second area of his research is metabolic rate depression - understanding the mechanisms by which some animals can reduce their metabolism and enter a state of hypometabolism or torpor that allows them to survive prolonged environmental stresses. His studies have identified molecular mechanisms that underlie metabolic arrest across phylogeny and that support phenomena including mammalian hibernation, estivation, and anoxia- and ischemia-tolerance. These studies hold key applications for medical science, particularly for preservation technologies that aim to extend the survival time of excised organs in cold or frozen storage. 
Additional applications include insights into hyperglycemia in metabolic syndrome and diabetes, and anoxic and ischemic damage caused by heart attack and stroke. Furthermore, Storey's lab has created several web based programs freely available for data management, data plotting, and microRNA analysis. Publication links Dr. Kenneth B. Storey is among the top 2% of highly cited scientists in the world. PubMed Google Scholar External links Storey lab website Storey lab research tools Kenneth B. Storey CV References 1949 births Canada Research Chairs Carleton University Academic staff of Carleton University Cryobiology Fellows of the Royal Society of Canada Living people Molecular biologists People from Taber, Alberta University of Calgary alumni
Kenneth B. Storey
Physics,Chemistry,Biology
826
37,196,658
https://en.wikipedia.org/wiki/Database%20encryption
Database encryption can generally be defined as a process that uses an algorithm to transform data stored in a database into "cipher text" that is incomprehensible without first being decrypted. It can therefore be said that the purpose of database encryption is to protect the data stored in a database from being accessed by individuals with potentially "malicious" intentions. The act of encrypting a database also reduces the incentive for individuals to hack the aforementioned database, as "meaningless" encrypted data adds extra steps for hackers attempting to retrieve the data. There are multiple techniques and technologies available for database encryption, the most important of which will be detailed in this article. Transparent/External database encryption Transparent data encryption (often abbreviated as TDE) is used to encrypt an entire database, which therefore involves encrypting "data at rest". Data at rest can generally be defined as "inactive" data that is not currently being edited or pushed across a network. As an example, a text file stored on a computer is "at rest" until it is opened and edited. Data at rest is stored on physical storage media solutions such as tapes or hard disk drives. The act of storing large amounts of sensitive data on physical storage media naturally raises concerns of security and theft. TDE ensures that the data on physical storage media cannot be read by malicious individuals who may intend to steal it. Data that cannot be read is worthless, thus reducing the incentive for theft. Perhaps the most important strength attributed to TDE is its transparency. Given that TDE encrypts all data, it can be said that no applications need to be altered in order for TDE to run correctly. It is important to note that TDE encrypts the entirety of the database as well as backups of the database. The transparent element of TDE has to do with the fact that TDE encrypts at "the page level", which essentially means that data is encrypted when stored and decrypted when it is called into the system's memory. The contents of the database are encrypted using a symmetric key that is often referred to as a "database encryption key". Column-level encryption In order to explain column-level encryption it is important to outline basic database structure. A typical relational database is divided into tables that are divided into columns that each have rows of data. Whilst TDE usually encrypts an entire database, column-level encryption allows for individual columns within a database to be encrypted. It is important to establish that the granularity of column-level encryption causes specific strengths and weaknesses to arise when compared to encrypting an entire database. Firstly, the ability to encrypt individual columns allows for column-level encryption to be significantly more flexible when compared to encryption systems that encrypt an entire database, such as TDE. Secondly, it is possible to use an entirely unique and separate encryption key for each column within a database. This effectively increases the difficulty of generating rainbow tables, which implies that the data stored within each column is less likely to be lost or leaked. The main disadvantage associated with column-level database encryption is speed, or a loss thereof. Encrypting separate columns with different unique keys in the same database can cause database performance to decrease, and additionally decreases the speed at which the contents of the database can be indexed or searched.
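As a concrete illustration of per-column keys, here is a minimal Python sketch using the symmetric Fernet recipe from the third-party cryptography package; the column names and in-memory key storage are illustrative only, since a real deployment would hold keys in a key management system:

from cryptography.fernet import Fernet

# One independent key per sensitive column, as described above.
columns = ["ssn", "salary"]
column_keys = {col: Fernet(Fernet.generate_key()) for col in columns}

def encrypt_cell(column: str, value: str) -> bytes:
    # The returned ciphertext token is what gets written to the database cell.
    return column_keys[column].encrypt(value.encode())

def decrypt_cell(column: str, token: bytes) -> str:
    return column_keys[column].decrypt(token).decode()

token = encrypt_cell("ssn", "123-45-6789")
assert decrypt_cell("ssn", token) == "123-45-6789"

The sketch also makes the trade-off visible: equal plaintexts produce different ciphertexts (Fernet is randomized), which is good for secrecy but means the encrypted column can no longer be indexed or searched directly, matching the speed disadvantage described above.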
Field-level encryption Experimental work is being done on providing database operations (like searching or arithmetical operations) on encrypted fields without the need to decrypt them. Strong encryption is required to be randomized: a different result must be generated each time. This is known as probabilistic encryption. Field-level encryption is weaker than randomized encryption, but it allows users to test for equality without decrypting the data. Filesystem-level encryption Encrypting File System (EFS) It is important to note that traditional database encryption techniques normally encrypt and decrypt the contents of a database. Databases are managed by "Database Management Systems" (DBMS) that run on top of an existing operating system (OS). This raises a potential security concern, as an encrypted database may be running on an accessible and potentially vulnerable operating system. EFS can encrypt data that is not part of a database system, which implies that the scope of encryption for EFS is much wider when compared to a system such as TDE that is only capable of encrypting database files. Whilst EFS does widen the scope of encryption, it also decreases database performance and can cause administration issues, as system administrators require operating system access to use EFS. Due to the issues concerning performance, EFS is not typically used in databasing applications that require frequent database input and output. In order to offset the performance issues, it is often recommended that EFS systems be used in environments with few users. Full disk encryption BitLocker does not have the same performance concerns associated with EFS. Symmetric and asymmetric database encryption Symmetric database encryption Symmetric encryption in the context of database encryption involves a private key being applied to data that is stored and called from a database. This private key alters the data in a way that causes it to be unreadable without first being decrypted. Data is encrypted when saved, and decrypted when opened, given that the user knows the private key. Thus if the data is to be shared through a database, the receiving individual must have a copy of the secret key used by the sender in order to decrypt and view the data. A clear disadvantage related to symmetric encryption is that sensitive data can be leaked if the private key is spread to individuals that should not have access to the data. However, given that only one key is involved in the encryption process, it can generally be said that speed is an advantage of symmetric encryption. Asymmetric database encryption Asymmetric encryption expands on symmetric encryption by incorporating two different types of keys into the encryption method: private and public keys. A public key can be accessed by anyone and is unique to one user, whereas a private key is a secret key that is unique to and only known by one user. In most scenarios the public key is the encryption key whereas the private key is the decryption key. As an example, if Individual A would like to send a message to Individual B using asymmetric encryption, he would encrypt the message using Individual B's public key and then send the encrypted version. Individual B would then be able to decrypt the message using his private key. Individual C would not be able to decrypt Individual A's message, as Individual C's private key is not the same as Individual B's private key.
Asymmetric encryption is often described as being more secure in comparison to symmetric database encryption given that private keys do not need to be shared, as two separate keys handle the encryption and decryption processes. For performance reasons, asymmetric encryption is used in key management rather than to encrypt the data, which is usually done with symmetric encryption.

Key management
The symmetric and asymmetric database encryption discussion above introduced the concept of public and private keys with basic examples in which users exchange keys. The act of exchanging keys becomes impractical from a logistical point of view when many different individuals need to communicate with each other. In database encryption the system handles the storage and exchange of keys. This process is called key management. If encryption keys are not managed and stored properly, highly sensitive data may be leaked. Additionally, if a key management system deletes or loses a key, the information that was encrypted via said key is essentially rendered "lost" as well. The complexity of key management logistics also needs to be taken into consideration. As the number of applications that a firm uses increases, the number of keys that need to be stored and managed increases as well. Thus it is necessary to establish a way in which keys from all applications can be managed through a single channel, which is also known as enterprise key management. Enterprise key management solutions are sold by a great number of suppliers in the technology industry. These systems essentially provide a centralised key management solution that allows administrators to manage all keys in a system through one hub. The introduction of enterprise key management solutions thus has the potential to lessen the risks associated with key management in the context of database encryption, as well as to reduce the logistical troubles that arise when many individuals attempt to manually share keys.

Hashing
Hashing is used in database systems as a method to protect sensitive data such as passwords; however, it is also used to improve the efficiency of database referencing. Inputted data is manipulated by a hashing algorithm, which converts it into a string of fixed length that can then be stored in a database. Hashing systems have two crucially important characteristics. Firstly, hashes are "unique and repeatable". As an example, running the word "cat" through the same hashing algorithm multiple times will always yield the same hash, yet it is extremely difficult to find a different word that returns the same hash that "cat" does. Secondly, hashing algorithms are not reversible. To relate this back to the example provided above, it would be nearly impossible to convert the output of the hashing algorithm back to the original input, which was "cat". In the context of database encryption, hashing is often used in password systems. When a user first creates their password it is run through a hashing algorithm and saved as a hash. When the user logs back into the website, the password that they enter is run through the hashing algorithm and is then compared to the stored hash. Given the fact that hashes are unique, if both hashes match then it is said that the user inputted the correct password. One example of a popular hash function is SHA-256.
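The store-then-compare password flow just described, along with the salted variant introduced in the next section, can be sketched with Python's standard library alone. The snippet is illustrative; the iteration count and choice of PBKDF2 are invented for the example, not prescribed by the text.

import hashlib
import hmac
import os

# Plain SHA-256, as described above: same input, same digest ("unique and repeatable").
stored = hashlib.sha256(b"correct horse").hexdigest()    # saved at signup
attempt = hashlib.sha256(b"correct horse").hexdigest()   # recomputed at login
print(attempt == stored)                                 # True -> password accepted

# Salted variant (see the Salting section below): a random per-user salt is stored
# alongside a deliberately slow hash, making precomputed rainbow tables impractical.
salt = os.urandom(16)
digest = hashlib.pbkdf2_hmac("sha256", b"correct horse", salt, 600_000)
check = hashlib.pbkdf2_hmac("sha256", b"correct horse", salt, 600_000)
print(hmac.compare_digest(digest, check))                # constant-time comparison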
Salting
One issue that arises when using hashing for password management in the context of database encryption is the fact that a malicious user could potentially use a rainbow table (a precomputed input-to-hash lookup table) for the specific hashing algorithm that the system uses. This would effectively allow the individual to reverse the hashes and thus gain access to stored passwords. A solution for this issue is to 'salt' the hash. Salting is the process of hashing more than just the password itself: extra data is combined with the password before it is hashed. The more information that is added to a string that is to be hashed, the more difficult it becomes to collate rainbow tables. As an example, a system may combine a user's email and password into a single hash. This increase in the complexity of a hash means that it is far more difficult and thus less likely for rainbow tables to be generated. This naturally implies that the threat of sensitive data loss is minimised through salting hashes.

Pepper
Some systems incorporate a "pepper" in addition to salts in their hashing systems. Pepper systems are controversial; nonetheless, their use is worth explaining. A pepper is a value that is added to a hashed password that has been salted. This pepper is often unique to one website or service, and it is important to note that the same pepper is usually added to all passwords saved in a database. In theory, the inclusion of peppers in password hashing systems has the potential to decrease the risk of rainbow tables, given the system-level specificity of peppers; however, the real-world benefits of pepper implementation are highly disputed.

Application-level encryption
In application-level encryption, the process of encrypting data is completed by the application that has been used to generate or modify the data that is to be encrypted. Essentially this means that data is encrypted before it is written to the database. This unique approach to encryption allows the encryption process to be tailored to each user based on the information (such as entitlements or roles) that the application knows about its users. According to Eugene Pilyankevich, "Application-level encryption is becoming a good practice for systems with increased security requirements, with a general drift toward perimeter-less and more exposed cloud systems".

Advantages of application-level encryption
One of the most important advantages of application-level encryption is that it has the potential to simplify the encryption process used by a company. If an application encrypts the data that it writes to or modifies in a database, then a secondary encryption tool will not need to be integrated into the system. The second main advantage relates to the overarching theme of theft. Given that data is encrypted before it is written to the server, a hacker would need to have access to the database contents as well as the applications that were used to encrypt and decrypt the contents of the database in order to decrypt sensitive data.

Disadvantages of application-level encryption
The first important disadvantage of application-level encryption is that applications used by a firm will need to be modified to encrypt data themselves. This has the potential to consume a significant amount of time and other resources. Given the nature of opportunity cost, firms may not believe that application-level encryption is worth the investment. In addition, application-level encryption may have a limiting effect on database performance.
If all data in a database is encrypted by a multitude of different applications, then it becomes impossible to index or search data in the database. As a basic illustration: it would be impossible to construct a glossary in a single language for a book that was written in 30 languages. Lastly, the complexity of key management increases, as multiple different applications need to have the authority and access to encrypt data and write it to the database.

Risks of database encryption
When discussing the topic of database encryption it is imperative to be aware of the risks that are involved in the process. The first set of risks relates to key management. If private keys are not managed in an "isolated system", system administrators with malicious intentions may have the ability to decrypt sensitive data using keys that they have access to. The fundamental principle of keys also gives rise to a potentially devastating risk: if keys are lost, then the encrypted data is essentially lost as well, as decryption without keys is almost impossible.

How can encryption be used to secure data in a database?
Encryption can be employed to enhance the security of data stored in a database by converting the information into an unreadable format using an algorithm. The encrypted data can only be accessed and deciphered with a decryption key, ensuring that even if the database is compromised, the information remains confidential. By encrypting sensitive data such as passwords, financial records, and personal information, organizations can safeguard their data from unauthorized access and data breaches. This process mitigates the risk of data theft and supports compliance with data protection regulations. Implementing encryption in a database involves utilizing encryption technologies such as the Advanced Encryption Standard (AES) for data at rest or Transport Layer Security (TLS) for data in transit. Encryption keys must be securely managed to prevent unauthorized decryption of data.

References
Cryptography Data security
Database encryption
Mathematics,Engineering
3,145
15,752,256
https://en.wikipedia.org/wiki/N-Triples
N-Triples is a format for storing and transmitting data. It is a line-based, plain text serialisation format for RDF (Resource Description Framework) graphs, and a subset of the Turtle (Terse RDF Triple Language) format. N-Triples should not be confused with Notation3 which is a superset of Turtle. N-Triples was primarily developed by Dave Beckett at the University of Bristol and Art Barstow at the World Wide Web Consortium (W3C). N-Triples was designed to be a simpler format than Notation3 and Turtle, and therefore easier for software to parse and generate. However, because it lacks some of the shortcuts provided by other RDF serialisations (such as CURIEs and nested resources, which are provided by both RDF/XML and Turtle) it can be onerous to type out large amounts of data by hand, and difficult to read. Usage There is very little variation in how an RDF graph can be represented in N-Triples. This makes it a very convenient format to provide "model answers" for RDF test suites. Implementations As N-Triples is a subset of Turtle and Notation3, by definition all tools which support input in either of those formats will support N-Triples. In addition, some tools like Cwm have specific support for N-Triples. File format Each line of the file has either the form of a comment or of a statement: A statement consists of four parts, separated by whitespace: the subject, the predicate, the object, a full stop which means the termination of a statement Subjects may take the form of a URI or a blank node; predicates must be a URI; objects may be a URI, blank node or a literal. URIs are delimited with less-than and greater-than signs used as angle brackets. Blank nodes are represented by an alphanumeric string, prefixed with an underscore and colon (_:). Literals are represented as printable ASCII strings (with backslash escapes), delimited with double-quote characters, and optionally suffixed with a language or datatype indicator. Language indicators are an at sign followed by an RFC 3066 language tag; datatype indicators are a double-caret followed by a URI. Comments consist of a line beginning with a hash sign. Example The N-Triples statements below are equivalent to this RDF/XML: RDF/XML <rdf:RDF xmlns="http://xmlns.com/foaf/0.1/" xmlns:dc="http://purl.org/dc/terms/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" > <Document rdf:about="http://www.w3.org/2001/sw/RDFCore/ntriples/"> <dc:title xml:lang="en-US">N-Triples</dc:title> <maker> <Person rdf:nodeID="art"> <name>Art Barstow</name> </Person> </maker> <maker> <Person rdf:nodeID="dave"> <name>Dave Beckett</name> </Person> </maker> </Document> </rdf:RDF> N-Triples <http://www.w3.org/2001/sw/RDFCore/ntriples/> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> ↵ <http://xmlns.com/foaf/0.1/Document> . <http://www.w3.org/2001/sw/RDFCore/ntriples/> <http://purl.org/dc/terms/title> "N-Triples"@en-US . <http://www.w3.org/2001/sw/RDFCore/ntriples/> <http://xmlns.com/foaf/0.1/maker> _:art . <http://www.w3.org/2001/sw/RDFCore/ntriples/> <http://xmlns.com/foaf/0.1/maker> _:dave . _:art <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://xmlns.com/foaf/0.1/Person> . _:art <http://xmlns.com/foaf/0.1/name> "Art Barstow". _:dave <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://xmlns.com/foaf/0.1/Person> . _:dave <http://xmlns.com/foaf/0.1/name> "Dave Beckett". (The symbol ↵ is used to indicate a place where a line has been wrapped for legibility. 
N-Triples do not allow lines to be wrapped arbitrarily: the line endings indicate the end of a statement.) N-Quads The related N-Quads superset extends N-Triples with an optional context value at the fourth position. <http://one.example/subject1> <http://one.example/predicate1> <http://one.example/object1> <http://example.org/graph3> . # comments here # or on a line by themselves _:subject1 <http://an.example/predicate1> "object1" <http://example.org/graph1> . _:subject2 <http://an.example/predicate2> "object2" <http://example.org/graph5> . See also Notation3 (N3) Turtle (syntax) TriG (syntax) References External links RDF for Intrepid Unix Hackers: Grepping N-Triples RDF for Intrepid Unix Hackers: Transmuting N-Triples Metadata Computer file formats
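Because the grammar described above is line-based, N-Triples is easy to process programmatically. The sketch below is illustrative: it assumes Python with the third-party rdflib package installed (one of several RDF toolkits; the article does not single any out) and parses two statements drawn from the example.

from rdflib import Graph

nt_data = """
# a comment line
<http://www.w3.org/2001/sw/RDFCore/ntriples/> <http://purl.org/dc/terms/title> "N-Triples"@en-US .
_:art <http://xmlns.com/foaf/0.1/name> "Art Barstow" .
"""

g = Graph()
g.parse(data=nt_data, format="nt")   # "nt" selects the N-Triples parser

for s, p, o in g:
    print(s, p, o)

Each non-comment line yields exactly one (subject, predicate, object) triple, which is what makes the format convenient for line-oriented tools as well.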
N-Triples
Technology
1,379
665,809
https://en.wikipedia.org/wiki/Long%20weekend
A long weekend is a weekend that is at least three days long (i.e. a three-day weekend), due to a public or unofficial holiday occurring on either the following Monday or the preceding Friday. Many countries also have four-day weekends, in which two days adjoining the weekend are holidays. Examples are Good Friday / Easter Monday, and Christmas Day / Boxing Day (e.g. when Christmas Day occurs on a Thursday or Monday).

Four-day "bridge" weekends
In many countries, when a lone holiday occurs on a Tuesday or a Thursday, the day between the holiday and the weekend may also be designated as a holiday, set to be a movable or floating holiday, or work/school may be interrupted by consensus unofficially. This is typically referred to by a phrase involving "bridge" in many languages; for example, in some Spanish-speaking countries the term is puente ("bridge") or simply "fin de semana largo". Four-day bridge weekends are commonplace in non-English speaking countries, but there are only a couple of examples in English-speaking countries:
In the United States, the fourth Thursday of November is Thanksgiving, a public holiday on which most workplaces are closed; many workplaces remain closed the following day to create a four-day weekend.
In Melbourne, Australia, the Melbourne Cup holiday is held on a Tuesday. The Monday is not a public holiday, but many people modify their work arrangements to also have the Monday off and many schools will have a "pupil free day", so it is colloquially referred to as the "Cup Day long weekend".

Europe
In Flanders, the Dutch-speaking part of Belgium, "brugdag" ("bridge" day) is used. In the Netherlands, the term "klemdag" is also used. In France, the bridge idiom faire le pont ("to make the bridge") means taking additional holiday days. For example, if there is already an official holiday on Thursday, one could "faire le pont" on the Friday and thus have a four-day weekend (Thursday through Sunday inclusive). In the German language, a bridge-related term is also used: a day taken off from work to fill the gap between a holiday Thursday (or Tuesday) and the weekend is called a Brückentag ("bridge day") in Germany and Switzerland, and a Fenstertag ("window day") in Austria. Since Ascension Day is a holiday throughout Germany and Corpus Christi is a holiday in large parts of the country (both of these holidays are always on Thursdays), such "bridge days" are fairly common, though always unofficial in character. Italians use the idiom Fare il ponte: literally, "make the bridge". This could be a Thursday–Sunday weekend if the bridge was over a Friday, or a Saturday–Tuesday weekend if the bridge was over a Monday. In Norway, the term "oval weekend" (oval helg in Norwegian) is used. An ordinary weekend is conceived of as "round" (although this is not stated explicitly), and adding extra days off makes it "oval". Norwegians also refer to "inneklemte" (squeezed in) days, which are between a public holiday and a weekend. This is typical for the Friday after Ascension Day, which always falls on a Thursday. It is common not to work on such days, so as to be able to extend the weekend to four days. In Poland, long weekends occur several times a year. The term długi weekend (long weekend) is commonly used in the Polish language. As well as the Easter weekend and the Christmas weekend, there is Corpus Christi weekend (Corpus Christi is always on a Thursday and people usually take Friday off as well), and long weekends may also occur around other holidays.
However, the best-known long weekend is at the beginning of May, when Labour Day (May 1) and Constitution Day (May 3) are both holidays. The weekend can in fact be up to 9 days long (April 28 – May 6) and, taking one to three days off work, Poles often go for short holidays then. Portugal also uses the bridge idiom with the Portuguese word ponte. In Slovenian, the term podaljšan vikend ("prolonged weekend") is used for a three-day weekend. Four-day weekends also happen, because May 1 and May 2 are public holidays (both May Day). By a peculiar coincidence, Christmas Day and Independence Day fall on two consecutive dates. In the United Kingdom, where the majority of public holidays are termed "bank holidays" by statute, five of the eight public holidays in England and Wales always fall on a Monday or a Friday. When a fixed-date holiday in the UK falls on a weekend, the next weekday is normally designated as a substitute holiday. As such, bank holidays normally form an extension of the weekend and are known as "bank holiday weekends": terminology which is also common in some Commonwealth countries and the Republic of Ireland. There is, however, no automatic entitlement to time off on a bank holiday under British labour laws, and thus not everyone benefits from long weekends. If an employee is entitled to time off on a bank holiday, it may count towards their 5.6 weeks-equivalent of statutory annual leave, though many companies offer bank holidays as an addition to employees' contracted annual leave entitlement. In Spain, the bridge becomes a macropuente when the anniversary of the Spanish Constitution of 1978 (December 6) and the Feast of the Immaculate Conception (December 8) fall on a Tuesday and Thursday, respectively. In Sweden, a day between a weekend and a bank holiday is called a klämdag ("squeeze day"). Many Swedes take a vacation day to have a long weekend.

Middle East
In Israel, a "bridge" metaphor is also used, with a term literally meaning "bridge day". In Iran, an Arabic-derived term meaning "between two holidays" is used.

North America
In the United States, the Uniform Monday Holiday Act officially moved federal government observances of many holidays to Mondays, largely at the behest of the travel industry. The resulting long weekends are often termed "three-day weekends". A well-known four-day weekend starts with Thanksgiving, followed by Black Friday.

South America
In Argentina, some national holidays that occur on a Tuesday, Wednesday, Thursday or Friday (sometimes even on a Saturday) are officially moved to the closest Monday in order to create a long weekend. In Brazil, when a holiday occurs on a Tuesday or a Thursday, some sectors of society, such as government and education, turn the day between the holiday and the weekend into a holiday. The four-day or even the three-day weekends are called in Brazilian Portuguese feriados prolongados ("extended holidays") or, in the popular form, feriadão ("big holiday"). The bridge day is usually called "imprensado" ("pressed (in between)") or "enforcado" ("hanged"). To some extent, the term "ponte" is also used. One could also use the verb emendar (splice), saying eu vou emendar o feriado e o fim de semana ("I will splice together the holiday and the weekend"). In Chile, a "sandwich" is a day that falls between two holidays, independently of whether it is a holiday by itself or not. In the latter case, workers may take it off on account of vacation days, an action called "tomarse el sandwich" (lit.: "taking the sandwich").
In formal writing, the term "interferiado" is used instead of "sandwich". In colloquial contexts, these days, almost always a Monday or a Friday, may also be called "San Lunes" or "San Viernes" (lit.: "Saint Monday" and "Saint Friday", respectively).

Asia
In Indonesia, a working day that falls between two holidays, or between a holiday and a weekend, is colloquially termed "Harpitnas" ("Hari Kejepit Nasional", lit. National Clamped/Pinched Day, a play on Hardiknas, National Education Day). This leads some institutions to declare a day off, or some students or employees to unilaterally declare a day off for themselves, thereby creating a long weekend. In Japan, a weekday which falls between two public holidays is legally a public holiday.

See also
Public holiday
List of holidays by country
The Long Week-End
Holiday economics

References

Holidays Weeks Units of time Working time
Long weekend
Physics,Mathematics
1,798
24,282,630
https://en.wikipedia.org/wiki/Parental%20abuse%20by%20children
Child-to-parent violence (CPV), also known as abuse of parents by their children, is a form of domestic violence in which parents are mistreated, most commonly verbally or physically. The repercussions of enduring abuse from one's offspring can be substantial, affecting the physical and mental well-being of parents in both the immediate and prolonged periods. CPV can manifest in diverse forms, encompassing physical, verbal, psychological, emotional, and financial dimensions. The occurrence of parental abuse by adolescents spans a variable age range, with adolescents defined as individuals aged between 12 and 24 years.

Multiple causes of abusive behavior
Many people consider parental abuse to be the result of certain parenting practices, neglect, or the child suffering abuse themselves, but other adolescent abusers have had "normal" upbringings and have not suffered from such situations. Children may be subjected to violence on TV, in movies and in music, and that violence may come to be considered "normal". The breakdown of the family unit, poor or nonexistent relationships with an absent parent, as well as debt, unemployment, and parental drug/alcohol abuse may all be contributing factors to abuse. Some other reasons for CPV according to several experts include:
Aggressive behavioral tendencies
Frustration or inability to deal with problems
Unable or unwilling to learn how to manage behavior
Witnessing other abuses at home
Lack of respect for a parent because of perceived weakness
Lack of consequences for bad behavior
Being abused themselves
Gang culture
Not being able to properly care for a disabled or mentally ill parent(s)
Revenge or punishment
Mental illness
Corporal punishment

History
Parental abuse is a relatively new term. In 1979, Harbin and Madden released a study using the term "parent battery", but juvenile delinquency, which is a major factor, has been studied since the late 19th century. Even though some studies have been done in the United States, Australia, Canada, and other countries, the lack of reporting of adolescent abuse towards parents makes it difficult to accurately determine its extent. Many studies have to rely on self-reporting by adolescents. In 2004, Robinson, of Brigham Young University, published Parent Abuse on the Rise: A Historical Review in the American Association of Behavioral Social Science Online Journal, reporting the results of the 1988 study performed by Evans and Warren-Sohlberg. The study found that 57% of parental abuse was physical, 17% involved a weapon, 5% involved throwing items, and 22% was verbal. Some 82% of the abuse was directed against mothers (five times the rate against fathers), and 11% of the abusers were under the age of 10. The highest rate of abuse happens within families with a single mother. Mothers are usually the primary caregivers; they spend more time with their children than fathers and have closer emotional connections to them. It can also be due to the size and strength of the abuser. Parental abuse can occur in any family and it is not necessarily associated with ethnic background, socio-economic class, or sexual orientation. Numerous studies concluded that gender does not play a role in the total number of perpetrators; however, males are more likely to inflict physical abuse and females are more likely to inflict emotional abuse.
Studies from the United States estimate that violence among adolescents peaks at 15–17 years old. However, a Canadian study done by Barbara Cottrell in 2001 suggests the ages are 12–14 years old. Parental abuse does not happen just inside the home but can also occur in public places, further adding to the humiliation of the parents. Abuse is not only a domestic affair but can be criminal as well. Most teenagers experience a transition in which they try to go from being dependent to independent, but there are some dynamics of parental control that may alter it. There will always be times of resistance toward parental authority. According to the Canadian National Clearinghouse on Family Violence, the abuse generally begins with verbal abuse, but even then, some females can be very physically abusive towards a child who is smaller and more vulnerable than they are, and to cover their abuse, they often lie to the other parent about actual events that led to "severe punishment." The child, adolescent or parent may show no remorse or guilt and feel justified in the behavior, but many times when the child is the one who is being abused, they are very remorseful for being forced to defend themselves, especially when they are not the aggressor. Parents can examine the behavior of their children to determine whether or not it is abusive. Some teenagers can become aggressive as a result of parental abuse, dysfunction, or psychological problems, while some children may have trouble dealing with their emotions. However, children who are abused are not always afforded protection from their abusive parents.

Intervention
Non-violent resistance (NVR) is an approach designed to overcome a child's aggressive, controlling, and self-destructive behaviors. In NVR, parents replace talking with action, not engaging with aggressive or harmful behaviors. With the support of therapists and other counselors, it is possible to identify mental health and other behavioral concerns throughout this process. It has four areas where parents are supported by therapists or other counselors:
De-escalation
Breaking taboos
Taking non-violent actions
Reconciliation gestures
While intervention is an option, it may not always work. There are times when a child, adolescent, or teenager has a mental illness that prevents them from understanding what exactly is happening. Therefore, they act out their emotions the only way they know. This can present itself as violence, emotional abuse, or destructive behavior, such as destroying personal property or self-harm. The United States currently protects abused children using courts, Child Protective Services, and other agencies. The US also has Adult Protective Services, which is provided to abused, neglected, or exploited older adults and adults with significant disabilities. There are no agencies or programs that protect parents from abusive children, adolescents or teenagers, other than giving up their parental rights to the state they live in. Lastly, the quality of family relationships directly influences child-to-parent violence, with power-assertive discipline playing a mediating role in this connection. It appears that the emotional aspect and overall quality of family relationships are pivotal factors in preventing violent behaviors.
See also
Child abuse
Dysfunctional family
Elder abuse
Juvenile delinquency
Parental alienation
Runaway (dependent)
Sibling abuse
Teenage rebellion

References

Further reading
Parentlink - Abuse of parents (retrieved 26 May 2012)
Parenting and Child Health - Health Topics (retrieved 26 May 2012)
Lack of support for parents who live in fear of their teenagers, study shows (retrieved 5 June 2012)

Abuse Domestic violence Parenting
Parental abuse by children
Biology
1,355
78,854,296
https://en.wikipedia.org/wiki/Darboux%20transformation
In mathematics, the Darboux transformation, named after Gaston Darboux (1842–1917), is a method of generating a new equation and its solution from the known ones. It is widely used in inverse scattering theory, in the theory of orthogonal polynomials, and as a way of constructing soliton solutions of the KdV hierarchy. From the operator-theoretic point of view, this method corresponds to the factorization of the initial second order differential operator into a product of first order differential expressions and subsequent exchange of these factors, and is thus sometimes called the single commutation method in mathematics literature. The Darboux transformation has applications in supersymmetric quantum mechanics.

History
The idea goes back to Carl Gustav Jacob Jacobi.

Method
Let $y = y(x, \lambda)$ be a solution of the equation
\[
-y'' + q(x)\,y = \lambda y
\]
and let $y_1 = y(x, \lambda_1)$ be a fixed strictly positive solution of the same equation for some $\lambda = \lambda_1$. Then for $\lambda \neq \lambda_1$,
\[
\hat{y}(x, \lambda) = y'(x, \lambda) - \frac{y_1'(x)}{y_1(x)}\, y(x, \lambda)
\]
is a solution of the equation
\[
-\hat{y}'' + \hat{q}(x)\,\hat{y} = \lambda \hat{y},
\]
where
\[
\hat{q}(x) = q(x) - 2\bigl(\ln y_1(x)\bigr)''.
\]
Also, for $\lambda = \lambda_1$, one solution of the latter differential equation is $1/y_1(x)$, and its general solution can be found by d'Alembert's method:
\[
\hat{y}(x, \lambda_1) = \frac{c_1 + c_2 \int_{x_0}^{x} y_1^2(t)\, dt}{y_1(x)},
\]
where $c_1$ and $c_2$ are arbitrary constants.

Eigenvalue problems
Darboux transformation modifies not only the differential equation but also the boundary conditions. This transformation makes it possible to reduce eigenparameter-dependent boundary conditions to boundary conditions independent of the eigenvalue parameter – one of the Dirichlet, Neumann or Robin conditions. On the other hand, it also allows one to convert inverse square singularities to Dirichlet boundary conditions and vice versa. Thus Darboux transformations relate eigenparameter-dependent boundary conditions with inverse square singularities.

References
Theoretical physics Ordinary differential equations
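A standard worked example (added here for illustration; not part of the article's cited text) checks the formulas in the Method section. Start from the free equation with $q(x) = 0$. The function $y_1(x) = \cosh x$ is strictly positive and satisfies $-y_1'' = -y_1$, so it is a solution at $\lambda_1 = -1$. The transformed potential is then
\[
\hat{q}(x) = 0 - 2\bigl(\ln \cosh x\bigr)'' = -2\,(\tanh x)' = -2\,\operatorname{sech}^2 x,
\]
the one-soliton (Pöschl–Teller) potential, which is how the Darboux transformation seeds the soliton solutions of the KdV hierarchy mentioned in the introduction.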
Darboux transformation
Physics
334
2,759,662
https://en.wikipedia.org/wiki/Tide%20clock
A tide clock is a specially designed clock that keeps track of the Moon's apparent motion around the Earth. Along many coastlines, the Moon contributes the major part (67%) of the combined lunar and solar tides. The exact interval between tides is influenced by the position of the Moon and Sun relative to the Earth, as well as the specific location on Earth where the tide is being measured. Due to the Moon's orbital prograde motion, it takes a particular point on the Earth (on average) 24 hours and 50.5 minutes to rotate under the Moon, so the time between high lunar tides fluctuates between 12 and 13 hours. A tide clock is divided into two roughly 6-hour tidal periods that show the average length of time between high and low tides in a semi-diurnal tide region, such as most areas of the Atlantic Ocean.

Traditional mechanical tide clocks
The bottom of the tide clock dial (6 o'clock position) is marked "low tide" and the top of the tide clock dial (12 o'clock position) is marked "high tide." The left side of the dial is marked "hours until high tide" and has a count-down of hours from 5 to 1. There is one hand on the clock face, and along the left side it points to the number of hours "until" the (lunar) high tide. The right-hand side of the clock is marked "hours until low tide" and has a count-down of hours from 5 to 1. The number pointed to by the hand gives the time "until" the (lunar) low tide. Some tide clocks incorporate time (using a standard quartz movement) and even humidity and temperature in the same instrument. Some tide clocks count down the number of hours from high or low tide, as in "one hour past high or low tide". When the clock reaches the halfway point ("half-tide"), it then counts the hours up to high tide or low tide, as in "one hour until high or low tide". Generally, there is an adjustment knob on the back of the instrument which may be used to set the tide using official tide tables for a specific location at either high or low tide. Tides have an inherent lead or lag, known as the lunitidal interval, that is different at every location, so tidal clocks are set for the time when the local lunar high tide occurs. This is often complicated because the lead or lag varies during the course of the lunar month, as the lunar and solar tides fall into and out of synchronization. The lunar tide and solar tide are synchronized (ebb and flow at the same time) near the full moon and the new moon. The two tides are unsynchronized near the first and last quarter moon (or "half moon"). Also, in addition to the relative position of the moon and the elliptical pattern of the sun, the tide can be affected to some degree by wind and atmospheric pressure. All of these variables have less impact on the tide at the time of the full moon, so this is usually the best time to set a tide clock. If the tide clock is mounted on a moving boat, it will need to be reset more frequently. The best time to set the clock is at the new moon or the full moon, which is also when the clock can most reliably indicate the actual combined tide. A simple tide clock will always be least reliable near the quarter moon. Tide range is the vertical distance between the highest high tide and lowest low tide. The size of the lunar tide compared to the solar tide (which comes once every 12 hours) is generally about 2 to 1, but the actual proportion along any particular shore depends on the location, orientation, and shape of the local bay or estuary.
Along some shorelines, the solar tide is the only important tide, and ordinary 12-hour clocks suffice since the high and low tides come at nearly the same time every day. Because ordinary tidal clocks only track a part of the tidal effect, and because the relative size of the combined effects is different in different places, they are in general only partially accurate for tracking the tides. Consequently, all navigators use tide tables, whether in a booklet, on a computer, or in a digital tide clock. Analog tide clocks are most accurate for use on the Atlantic coasts of America and Europe. This is because along the Atlantic coastline the moon controls the tides predictably, ebbing and flowing on a regular (12- to 13-hour) schedule. However, in other parts of the world, such as along the Pacific Coast, tides can be irregular. The Pacific Ocean is so vast that the moon cannot control the entire ocean at once. The result is that parts of the Pacific Coast can have 3 low tides a day. Similarly, there are areas in the world like the Gulf of Mexico or the South China Sea that have only one high tide a day. Mechanical tide clocks used on the Pacific Coast must be adjusted frequently, often as much as weekly, and are not useful in diurnal areas (those with one tide per day).

Digital tide clocks
Digital tide clocks are not tied to the 24-hour-50.5-minute lunar cycle and can thus track tides beyond the Atlantic coast. Smart digital tide clocks can work across all locations in North America without any adjustments. This is achieved by storing all the variations of tides at numerous locations. Given a particular location and date/time, a digital tide clock can display the previous tide, next tide and current absolute tide height. Thus, they are able to track semi-diurnal, diurnal and mixed diurnal tides.

Public clocks with tide indications
Belgium
Lier. The Zimmer tower's astronomical clock has twelve dials surrounding a central clockface. The dial at position X indicates the tides at Lier: the flag without a pennant, at the top of the dial, indicates high water; the flag with a pennant above indicates rising water, the flag with a pennant below indicates ebbing water. The size of the ships indicates the level of the tide.
France
Fécamp. The clock of 1667 at Fécamp Abbey shows the time of local high tide, and the present state of the sea by means of a disc with a quarter-circle aperture which rotates with the lunar phase, revealing a green background at the syzygies (at new moon and full moon), when the tidal range is most extreme ("spring tides"), and a black background at times of smaller tidal range ("neap tides").
Netherlands
Arnemuiden. The 16th-century church clock at Arnemuiden indicates the time of local high tide as a pointer on a 12-hour clockface.
Maassluis. Jacob Venker's exterior tide clock was installed in 1996. Despite the clock's traditional dial, it is computer-controlled, and accounts for 94 waves in its tidal timekeeping.
United Kingdom
King's Lynn. The south tower of King's Lynn Minster houses a tide clock, a 20th-century restoration of the original installed by Thomas Tue in 1681, which shows the moon phase and the time of local high tide, indicated by a dragon hand. The dial reads "LYNN HIGH TIDE" clockwise, but is to be interpreted as a 24-hour dial, with "L" at the top of the dial as midday and "G" at the bottom of the dial as midnight.
London.
Alunatime at Trinity Buoy Wharf is a tide clock designed by Laura Williams, installed in 2010, which indicates the lunar phase, lunar day and tide cycle using a graphical notation of lights. See also Tide predicting machine References External links Clocks
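The arithmetic a simple tide clock performs can be sketched in a few lines. The example below is illustrative only (plain Python, standard library): it assumes an idealized semi-diurnal lunar tide of exactly half a mean lunar day (12 h 25 min), referenced against one known local high tide, just as a mechanical dial is set from a tide table.

from datetime import datetime, timedelta

HALF_LUNAR_DAY = timedelta(hours=12, minutes=25)   # ~ (24 h 50.5 min) / 2

def time_until_next_high(known_high: datetime, now: datetime) -> timedelta:
    """Time remaining until the next (lunar) high tide."""
    elapsed = now - known_high
    cycles_done = elapsed // HALF_LUNAR_DAY          # completed half-cycles
    next_high = known_high + (cycles_done + 1) * HALF_LUNAR_DAY
    return next_high - now

# Example: dial "set" at a high tide on the morning of a full moon.
known_high = datetime(2024, 6, 21, 9, 30)
print(time_until_next_high(known_high, datetime(2024, 6, 23, 7, 0)))

Like the analog dial it mimics, this drifts away from the true tide between settings, which is why the article recommends resetting at the full or new moon.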
Tide clock
Physics,Technology,Engineering
1,575
78,012,334
https://en.wikipedia.org/wiki/Disputes%20on%20Wikipedia
Disputes on Wikipedia arise from Wikipedians, who are volunteer editors, disagreeing over article content, internal Wikipedia affairs, or alleged misconduct. Disputes often manifest as repeated competing changes to an article, known as "edit wars", where instead of making small changes, edits are "reverted" wholesale. Disputes may escalate into dispute resolution efforts and enforcement. Editors are encouraged to discuss disputes on talk pages, but disputes can lead straight to editing bans, and some editors simply "walk away" from conflict, especially if they do not know how to defend their edits within Wikipedia's complex systems. An early but persistent source of conflict is "proprietary editing", where an editor, who may have started an article, will not allow other editors to make changes to their content or language. Many current conflicts play out in articles about contentious topics, often with two entrenched opposing sides, that reflect debates and conflicts in society, based on ethnic, political, religious, and scientific differences. Dispute resolution efforts have shifted over the years. For content disputes in English Wikipedia, as of 2024, editors most often resort to Requests for Comment, along with specialized discussion structures, such as Articles for Deletion. For alleged user misconduct, some Wikipedias rely on Arbitration Committees as the final word. Disputes, editor behavior, and collaboration on Wikipedia have long been the subject of academic research, especially in the English Wikipedia. A 2023 review identified 217 articles about contributor goals, interactions, and collaboration processes, among them 34 studies of "the causes and impact of conflict, the mechanisms for resolving conflict, and the measurement and prediction of conflict or controversial articles." The review examined numerous studies of editor coordination, especially on Talk pages, as well as algorithmic governance using bots to enforce Wikipedia policies. The review found that research attention peaked in 2012, and overall Wikipedia editing peaked in 2007.

Identification of disputes
As an open collaboration writing project, from the outset Wikipedia expected disagreements among contributors. The point at which disagreements turn into disputes and conflicts is not uniformly defined by Wikipedia communities and the scholars who study them. Conflicts over content within articles often arise among editors, which may result in edit wars. An edit war is a persistent exchange of edits representing conflicting views on a contested article, or as defined by the website's policy: "when editors who disagree about the content of a page repeatedly override each other's edits." Edit wars are prohibited on Wikipedia and editors are encouraged to seek consensus through discussion; however, administrative intervention may be applied if discussion is unfruitful in resolving the conflict. Generally, edit wars are provoked by the presence of highly controversial content, such as abortion or the Israeli–Palestinian conflict, but can also occur due to other disputed matters, such as the nationality of artist Francis Bacon. According to a 2020 study, the longest edit war sequence, with 105 reverts by 20 users, was a 2008 tug-of-war over the biography of Turkey's first president, Mustafa Kemal Atatürk. Researchers also designed an analytical platform, titled Contropedia, to observe and measure protracted editing controversies, such as global warming.
Edit wars may be defined and detected in terms of reverts and mutual re-reverts. In 2004, the community instituted the three revert rule, which was examined in subsequent scholarship. The rule reportedly cut reverts in half. To identify editing disputes, scholars also tried using the number of article revisions, deletion rates between editors, or a tag placed on controversial articles. For example, up to mid-2020, there were in-depth Talk page arguments over 7,425 instances of a dispute tag. In 2012, Yasseri et al. identified disputes through a pattern recognition algorithm and tested it against human evaluations of articles. By avoiding language-based criteria, they stated that their method "makes possible both inter-cultural comparisons and cross-language checks and validation". Accordingly, in a 2014 chapter, Yasseri led a different team to identify the most controversial articles in 10 Wikipedias, including Arabic, Hebrew, and Hungarian. Later research has used other methods, even absent reverts and deletion patterns. A 2021 study claimed 80% accuracy in identifying "conflict-prone discussions" by their structural features, such as back-and-forth commenting by two editors (ABA pattern) before any contributions by a third person. Other features include phrases and pronoun usage that mark the level of politeness or collaboration. De Kock and Vlachos classified disputes with a natural language processing (NLP) model that improved on feature-based models. Many disputes center on the deletion of written content, which can be seen as a kind of gatekeeping. In a comparative study of such network gatekeeping on French and Spanish decolonization cases, it was found that more active editors experience fewer deletions and appear to function within rival camps.

Impact of disputes
Disputes are widely seen as a drain on the Wikipedia community, without adding to useful knowledge, and as creating a competitive and conflict-based culture associated with conventional masculine gender roles. Research has focused on the impoliteness of disputes, which can harm personal identities, "violate boundaries", and diminish voluntarism. Entrenched editor conflicts are said to detract from the quality and purported neutrality of Wikipedia articles. Occasionally, a behind-the-scenes dispute will garner negative media attention as a Wikipedia controversy. For example, after the 2019 ban of a user by the Wikimedia Foundation, media stories covered the internal debate and the resignation of 21 administrators from English Wikipedia. Nonetheless, adversarial editing has been defended by Wikipedia leadership as important for collaboration, and scholars have argued that well-managed friction among editors can benefit the encyclopedia. Controversial topics may also attract editors, as found by a 2017 lab experiment with people exposed to German Wikipedia.

Features of disputes
With civility as a core principle of Wikipedia, user disputes often feature impoliteness. According to a study of disputes on 120 Talk pages, by and large "Wikipedians do not prolong the conflicts." The most common incivility is scorn, ridicule, or condescension, followed by "pointed criticism". Impolite comments got no traction at all (no response) two-fifths of the time. Regardless of the topic area, overt responses were divided: 37 percent of responses to rude conduct were defensive, such as explaining oneself or asking for information about the critic's concern. However, 53.5 percent of the time, people responded offensively.
According to a similar study, personal attacks were immediately reciprocated 26% of the time. Editors use a range of rebuttal tactics, ranging from insults to derailing to counterargument and refutation. Higher-quality rebuttals "correlate to more constructive outcomes". Coordination tactics include asking questions, providing information, supplying context, offering a compromise, and conceding or admitting lack of knowledge. Deferential wording reduces conflict, such as the phrase "by the way" or hedging to signal an openness to compromise. During editing disputes, Wikipedians have been found to adopt five conversational roles: architect (of the discussion structure), content expert, moderator, policy wonk, and wordsmith. The edit-focused roles, of expert and wordsmith, tended to be more successful than the conceptual, organizational roles, such as policy wonk. Indeed, when editors bring up Wikipedia policies during a general content dispute, "wiki-lawyering", they tend to escalate the editorial conflict. Still, researchers found that citing Wikipedia policy, such as Notability, does help settle disputes over the deletion of articles. Editing disputes may go through stages or a life cycle, as David Moats showed for the use of sources in the early days of writing about the Fukushima nuclear accident.

Deletion disputes
Disagreements over the deletion of articles, and other types of encyclopedic content (e.g., categories and lists), are managed through discussion structures. Notably, English Wikipedia has had more than 400,000 Articles for Deletion (AfD) discussions since 2004, though the rate of AfD submissions declined after Wikipedia article creation was restricted in 2017. As of 2018, roughly 64 percent of debates ended in deletion and 24 percent in keeping the article, a ratio that is much lower than in the early years of Wikipedia. Nearly all discussions are "closed" by a Wikipedian administrator. In 2019, researchers Mayfield and Black created an NLP model to forecast AfD outcomes. Consistent with previous research, they found that the first "vote" (i.e., comment) can generate a "herd effect" and predict outcomes 20 percent or more over the baseline. Deletion disputes vary among the language Wikipedias. In English Wikipedia, about 20 percent of AfD comments justify their stance with a policy, compared to less than 3 percent in German and Turkish Wikipedia. Long-time Wikipedians play an outsized role in deletion disputes. Although over 160,000 users had spoken up in AfD discussions, over half the debate comments were made by only 1,218 users. This dominance of veteran editors has increased over time.

Contentious topics
In English and several other Wikipedias, an Arbitration Committee (ArbCom) handles a variety of intractable disputes, including conflicts among users who edit multiple articles within a topic. The Committee itself defines such a situation as a "contentious topic" and its sanctions may apply expansively to all articles within the topic. Disputes within contentious topics are a distinct area of research, based variously on ArbCom cases and on quantifiable variables. Some topics appear to be unavoidably polarizing, such as abortion and climate change, although the level of editor conflict may not match the degree of public debate. In addition, a topic may be contentious in one language Wikipedia and not another. A 2014 study identified Israel, Adolf Hitler, The Holocaust, and God as the most hotly debated articles across 10 languages.
Editors have been found to line up in rival camps over contentious articles and topics. It is unclear how much such editors coordinate outside of the Wikipedia platform, contrary to Wikipedia policy. Apparent editor coordination can be detected through discourse analysis, such as the 2020 study of 1,206 contentious articles that found "contentious Wikipedia articles seem to clearly partition others into friends (those who have the same opinion on a given topic) and enemies." At the same time, Wikipedians can enhance their reputations with successful editing, which can influence other editors into like-minded approaches to a contentious topic. The most reputable editors tend to write lasting content and they are less involved in disputes. In an analysis of 5,414 editor profiles, two types of rival camps were discerned: those whose viewpoints tended to be subsumed and those that tended to be maintained. Those found to "win" an edit war were more likely to ban opposing editors, revert edits, remove competing wikilinks, cite Wikipedia policies, show disrespect, be active in ArbCom proceedings, and especially exert control over cited references. Researchers expressed surprise that Wikipedia policies, designed to ensure balanced viewpoints, were instead leveraged to favor one point of view in contentious articles. Looking at two contentious topics in French Wikipedia, Shroud of Turin and Sigmund Freud, researchers noticed a shift in focus from the editors' conflicting opinions to their disagreements over encyclopedic sources (e.g., are they scientific) and fellow editors (e.g., did they read the sources). Editors argued in adversarial, not collaborative, ways because of personal, non-encyclopedic goals, such as religious commitments, beyond Wikipedia. With Freud, the split among editors could be explained in terms of their competing epistemologies. However, the Shroud of Turin article was vulnerable to the meta-fallacy of bothsideism, according to the case study authors, because the "tenacity" of religious Wikipedians "might simply aim to enable other believers to continue to do so, by illustrating possible lines of argumentative defense, that indeed seem unending". In a case study of two post-colonial topics, Algeria vs. France, and Gran Colombia vs. Spain, scholars found that the most active, presumably reputable, editors suffered the fewest deletions of their writing. Moreover, evidence suggested that fewer deletions were made by those who make use of Talk pages, as recommended by Wikipedia policy. The two ingroups with the most Wikipedians, France and Gran Colombia, were more likely to delete contributions by their presumed opposition from Algeria and Spain, respectively.

Dispute resolution
Soon after its founding, Wikipedia provided avenues to resolve content and conduct disputes. Just as editing disputes are difficult to define precisely, scholars have disagreed about identifying when disputes are resolved. Yasseri et al. categorized articles into three levels of disputation: Consensus, "Sequence of temporary consensuses", and "Never-ending wars". For content disagreements, Wikipedia has experimented with a variety of mechanisms. Experienced editors have been found to reduce reverts by citing Wikipedia policies, especially "Neutral point of view" (NPOV), "Consensus", and "No original research". Editing disagreements may be resolved by argumentation, compromise, and explaining previous discussions.
As of 2024, editors may pursue dispute resolution by requesting a third-party opinion, an informal arrangement intended for two editors in disagreement. If their dispute remains unresolved, another recourse is the Dispute Resolution Noticeboard (DRN). The DRN approach does not offer formal closure or a binding compromise, but many cases are rejected for not pursuing other avenues, so it has become less useful. Of 2,520 DRN cases through mid-2020, there were 237 successful resolutions, 149 failures, and 2,134 (85%) closed without a result. Moreover, editors may submit content disagreements to the Requests for Comment (RfC) system. These requests, circulated to uninvolved editors by a bot, benefit from the RfC's distinctive structure and the imposition of a 30-day deadline. During a seven-year period, English Wikipedia had over 7,300 requests for comment discussions. RfC discussions are often closed with a Wikipedia-style "consensus" on the content dispute. However, a significant number "go stale" because they are ignored by veteran editors or, conversely, the RfCs are overwhelmed with comments and too complex or controversial to be closed. In the past, editors in unresolved content disputes could file for formal mediation by a Mediation Committee, which was discontinued due to inactivity in 2018. Dispute resolution was also provided by informal groups such as the "Mediation Cabal". A 2010 study, cited by Ren et al., found that "mediators can alter the text discussion between conflicting editors (e.g., by striking through some statements), clarify ambiguity, differentiate between personal and substantive arguments, and show the editors how their exchanges could be made more constructive. They can also help manage temporal discontinuities (i.e., when one party is unavailable, the other party may make misattributions), and reduce power differences among editors." For user conduct issues, in 2003, Jimmy Wales created the Arbitration Committee (ArbCom), an overarching authority for binding resolution of conduct disputes. ArbCom cases follow a formal structure, though the committee tends to be flexible and informal as it works toward decisions. More than 500 complaints were submitted to ArbCom from 2004 through 2020. ArbCom examines evidence of misconduct, but its decisions have been criticized for favoring the more socially effective parties.

History of disputes on Wikipedia
One of the first large-scale disputes about Wikipedia was an internal argument over advertising, starting with Larry Sanger and dissent by Spanish editors, which led to a 2002 fork of the Spanish Wikipedia. Edit warring gave rise to the rule against three repeated reverts by the same editor. In 2005–2006, Wikipedians debated whether to display controversial images from the Jyllands-Posten Muhammad cartoons. On internal matters, early disputes included the 2006 userbox controversy, which was resolved partly by placing templates in personal user pages and partly by administration actions by Jimmy Wales. Meanwhile, in its first decade, Wikipedia set up dispute resolution mechanisms, including the Arbitration Committee, and refined policies to govern and reduce disputes. In its second decade, the Wikimedia Foundation funded and tracked research on disputes. Some Wikipedia dispute resolution efforts were disbanded. A Universal Code of Conduct for all Wikipedia organizations is designed to restrain the most egregious actions, some of which may arise from editing disputes.
See also
Ideological bias on Wikipedia
List of edit wars on Wikipedia
List of Wikipedia controversies (including some disputes among Wikipedia editors)

Further reading
Lih, Andrew (March 17, 2009). The Wikipedia Revolution: How a Bunch of Nobodies Created the World's Greatest Encyclopedia. Hyperion.
Tkacz, Nathaniel. Wikipedia and the Politics of Openness. University of Chicago Press, 2020. ISBN 9780226192444.

References

Wikipedia dispute resolution Wikipedia history Wikipedia disputes Dispute resolution Mediation Human–computer interaction Wikipedia content
Disputes on Wikipedia
Engineering
3,555
60,046,932
https://en.wikipedia.org/wiki/Michigan%20Disposal%20%28Cork%20Street%20Landfill%29
Michigan Disposal Service, also known as Kalamazoo City Dump, Kalamazoo City Landfill, Dispose-O-Waste and the Cork Street Landfill, is a 68-acre (27.5 hectare) Superfund site in Kalamazoo, Michigan. Davis Creek is adjacent to the site. It is one of six Superfund sites in the Kalamazoo River watershed. The site opened in 1925 as a privately run facility and operated as a dump and incinerator until 1961, when it was purchased by the City of Kalamazoo. In 1981 it was purchased by Dispose-O-Waste, now known as Michigan Disposal Service. A 1967 solid waste plan commissioned by the Kalamazoo County Road Commission stated that the incinerator had no controls and produced an air pollution problem. The authors further stated that the county's method of waste disposal created water pollution, was detrimental to public health, and was not in compliance with Michigan Act 87 of 1965. The EPA Superfund Record of Decision is dated September 30, 1991. Upon EPA review, the site was found to be leaching antimony, Aroclor 1254, arsenic, barium, chromium, and manganese. The sediment was found to contain arsenic, a number of polycyclic aromatic hydrocarbons (including chrysene), and cadmium.

References
Superfund sites in Michigan Kalamazoo, Michigan Landfills in the United States Incinerators 1925 establishments in Michigan
Michigan Disposal (Cork Street Landfill)
Chemistry
299
1,165,945
https://en.wikipedia.org/wiki/Sanford%20Jackson%20%28biochemist%29
Sanford Jackson was a Canadian biochemist. Jackson graduated from the University of Toronto in chemical engineering and pathological chemistry. He was a research biochemist and biochemist-in-chief at the Toronto Hospital for Sick Children from 1937 to 1974. Jackson was a founding member of the Canadian Society of Clinical Chemists and the Ontario Society of Clinical Chemists. He invented the bilirubinometer, which allowed more accurate measurement of serum bilirubin in infants and children. Jackson died on 4 September 2000 at age 91. References External links Professor Emeritus Sanford Jackson Year of birth missing Canadian biochemists University of Toronto alumni Academic staff of the University of Toronto
Sanford Jackson (biochemist)
Chemistry
130
21,167,095
https://en.wikipedia.org/wiki/Nap
A nap is a short period of sleep, typically taken during daytime hours as an adjunct to the usual nocturnal sleep period. Naps are most often taken as a response to drowsiness during waking hours. A nap is a component of biphasic or polyphasic sleep, terms that describe sleep patterns combining one or more naps with a longer period of sleep. For years, scientists have been investigating the benefits of napping, including the 30-minute nap as well as sleep durations of 1–2 hours. Performance across a wide range of cognitive processes has been tested. Benefits Sara Mednick conducted a study comparing the effects of napping, caffeine, and a placebo. Her results showed that a 60–90-minute nap is more effective than caffeine for memory and cognition. Power nap A power nap, also known as a Stage 2 nap, is a short slumber of 20 minutes or less which terminates before the occurrence of deep slow-wave sleep, intended to quickly revitalize the napper. The power nap is meant to maximize the benefits of sleep versus time. It is used to supplement normal sleep, especially when a sleeper has accumulated a sleep deficit. The greater the sleep deficit, the more effective the nap. Prescribed napping for sleep disorders It has been shown that excessive daytime sleepiness (EDS) can be improved by prescribed napping in narcolepsy. Apart from narcolepsy, it has not been demonstrated that naps are beneficial for EDS in other sleep disorders. Learning and memory Research suggests that shorter, habitual naps after instruction offer the most benefits to learning. For combating the post-lunch dip, the alertness benefits of a nap do not depend on its duration; naps as short as 10 minutes are effective. Napping enhances afternoon alertness, and thereby efficiency, in young adults and adolescents. Additionally, pre-teens who nap regularly during the day demonstrate better sleep at night. In younger children, napping increased drowsiness even while improving memory recall. For students of all ages, napping during the school day showed benefits to reaction time and recall of declarative memory of new information, especially if the naps remain in slow-wave sleep, i.e. less than an hour in length. Cognitive capacity In adults, a causal association has been found between habitual daytime napping and larger brain volume. Brain volume normally declines with age and is associated with neurodegenerative disease. Earlier studies have shown benefits of napping for cognitive performance in healthy adults. Alertness and fatigue The circadian cycle plays a role in the rising demand for daytime naps: sleepiness rises towards the mid-afternoon, hence the best timing for naps is early afternoon. Twenty- to thirty-minute naps are recommended for adults, while young children and elderly people may need longer naps. Research, on the other hand, has shown that the benefits of napping depend on sleep onset and sleep phases rather than time and duration. Negative effects Sleep inertia The state of grogginess, impaired cognition and disorientation experienced when awakening from sleep is known as sleep inertia. This state reduces the speed of cognitive tasks but has no effect on the accuracy of task performance. The effects of sleep inertia rarely last longer than 30 minutes in the absence of prior sleep deprivation. 
Potential health risks A 2016 meta-analysis showed that there may be a correlation between habitual napping for more than an hour and an increased risk for cardiovascular disease, diabetes, metabolic syndrome, or death. There was no effect of napping for as long as 40 minutes per day, but a sharp increase in risk of disease occurred at longer nap times. No causal relationship was established: the link may be to do with people taking a longer nap in response to the pre-existence of other risk factors. Habitual naps are also an indicator of neurological degradation such as dementia in the elderly, as reduction in brain function causes more sleepiness. On sleep disorders For idiopathic hypersomnia, patients typically experience sleep inertia and are unrefreshed after napping. Best practices How long and when a person naps affects sleep inertia and sleep latency: a moderate-length nap taken in the afternoon minimizes both. Research indicates that the degree of sleep inertia a person experiences depends on the duration of the nap. Because sleep inertia possibly results from awakening during slow-wave sleep, it is more likely to happen after a longer nap; sleep inertia is less intense after short naps. Sleep latency is shorter when a nap is taken between 3 and 5 pm, compared with a nap taken between 7 and 9 pm. According to The Sleep Foundation, Psychology Today and Harvard Health Publishing, these are the best practices for napping: Setting up a sleep-friendly environment. Understanding physical needs. Setting an alarm in order to prevent the negative impact of sleep inertia and sleep latency. See also Siesta - a short nap in the early afternoon, often after the midday meal. References External links Sleep
Nap
Biology
1,065
57,071,882
https://en.wikipedia.org/wiki/NGC%204683
NGC 4683 is a barred lenticular galaxy located about 170 million light-years away in the constellation Centaurus. It was discovered by astronomer John Herschel on June 8, 1834. NGC 4683 is a member of the Centaurus Cluster. See also List of NGC objects (4001–5000) References External links Centaurus Barred lenticular galaxies 4683 43182 Centaurus Cluster Astronomical objects discovered in 1834
NGC 4683
Astronomy
88
567,523
https://en.wikipedia.org/wiki/Message%20authentication%20code
In cryptography, a message authentication code (MAC), sometimes known as an authentication tag, is a short piece of information used for authenticating and integrity-checking a message. In other words, it is used to confirm that the message came from the stated sender (its authenticity) and has not been changed (its integrity). The MAC value allows verifiers (who also possess a secret key) to detect any changes to the message content. Terminology The term message integrity code (MIC) is frequently substituted for the term MAC, especially in communications to distinguish it from the use of the latter as media access control address (MAC address). However, some authors use MIC to refer to a message digest, which aims only to uniquely but opaquely identify a single message. RFC 4949 recommends avoiding the term message integrity code (MIC), and instead using checksum, error detection code, hash, keyed hash, message authentication code, or protected checksum. Definitions Informally, a message authentication code system consists of three algorithms: A key generation algorithm selects a key from the key space uniformly at random. A MAC generation algorithm efficiently returns a tag given the key and the message. A verifying algorithm efficiently verifies the authenticity of the message given the same key and the tag. That is, return accepted when the message and tag are not tampered with or forged, and otherwise return rejected. A secure message authentication code must resist attempts by an adversary to forge tags, for arbitrary, select, or all messages, including under conditions of known- or chosen-message attack. It should be computationally infeasible to compute a valid tag of the given message without knowledge of the key, even if, in the worst case, we assume the adversary knows the tag of every message but the one in question. Formally, a message authentication code (MAC) system is a triple of efficient algorithms (G, S, V) satisfying: G (key-generator) gives the key k on input 1^n, where n is the security parameter. S (signing) outputs a tag t on the key k and the input string x. V (verifying) outputs accepted or rejected on inputs: the key k, the string x and the tag t. S and V must satisfy the following: $\Pr[\,k \leftarrow G(1^n),\; V(k, x, S(k, x)) = \text{accepted}\,] = 1.$ A MAC is unforgeable if for every efficient adversary A $\Pr[\,k \leftarrow G(1^n),\; (x, t) \leftarrow A^{S(k,\cdot)}(1^n),\; x \notin \text{Query}(A^{S(k,\cdot)}, 1^n),\; V(k, x, t) = \text{accepted}\,] < \text{negl}(n),$ where $A^{S(k,\cdot)}$ denotes that A has access to the oracle $S(k,\cdot)$, and $\text{Query}(A^{S(k,\cdot)}, 1^n)$ denotes the set of the queries on S made by A, which knows n. Clearly we require that any adversary cannot directly query the string x on S, since otherwise a valid tag can be easily obtained by that adversary. Security While MAC functions are similar to cryptographic hash functions, they possess different security requirements. To be considered secure, a MAC function must resist existential forgery under chosen-message attacks. This means that even if an attacker has access to an oracle which possesses the secret key and generates MACs for messages of the attacker's choosing, the attacker cannot guess the MAC for other messages (which were not used to query the oracle) without performing infeasible amounts of computation. MACs differ from digital signatures as MAC values are both generated and verified using the same secret key. This implies that the sender and receiver of a message must agree on the same key before initiating communications, as is the case with symmetric encryption. 
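The formal triple (G, S, V) above maps directly onto standard library primitives. The following minimal sketch (an illustrative addition, not part of the original text) instantiates it in Python with HMAC-SHA256: key generation draws uniformly random bytes, signing computes the tag, and verification recomputes the tag and compares it in constant time.

```python
import hashlib
import hmac
import secrets

def generate_key(length: int = 32) -> bytes:
    """G: select a key uniformly at random from the key space."""
    return secrets.token_bytes(length)

def sign(key: bytes, message: bytes) -> bytes:
    """S: return a tag for the message under the key (HMAC-SHA256)."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(key: bytes, message: bytes, tag: bytes) -> bool:
    """V: accept iff the tag is valid; compare_digest resists timing attacks."""
    return hmac.compare_digest(sign(key, message), tag)

key = generate_key()
tag = sign(key, b"pay 100 to Alice")
assert verify(key, b"pay 100 to Alice", tag)        # accepted: untampered
assert not verify(key, b"pay 100 to Mallory", tag)  # rejected: modified message
```

Note that both sides hold the same secret key, which is why, as discussed next, such a scheme cannot by itself provide non-repudiation.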
For the same reason, MACs do not provide the property of non-repudiation offered by signatures specifically in the case of a network-wide shared secret key: any user who can verify a MAC is also capable of generating MACs for other messages. In contrast, a digital signature is generated using the private key of a key pair, which is public-key cryptography. Since this private key is only accessible to its holder, a digital signature proves that a document was signed by none other than that holder. Thus, digital signatures do offer non-repudiation. However, non-repudiation can be provided by systems that securely bind key usage information to the MAC key; the same key is in the possession of two people, but one has a copy of the key that can be used for MAC generation while the other has a copy of the key in a hardware security module that only permits MAC verification. This is commonly done in the finance industry. While the primary goal of a MAC is to prevent forgery by adversaries without knowledge of the secret key, this is insufficient in certain scenarios. When an adversary is able to control the MAC key, stronger guarantees are needed, akin to collision resistance or preimage security in hash functions. For MACs, these concepts are known as commitment and context-discovery security. Implementation MAC algorithms can be constructed from other cryptographic primitives, like cryptographic hash functions (as in the case of HMAC) or from block cipher algorithms (OMAC, CCM, GCM, and PMAC). However, many of the fastest MAC algorithms, like UMAC, VMAC and Poly1305-AES, are constructed based on universal hashing. Intrinsically keyed hash algorithms such as SipHash are also by definition MACs; they can be even faster than universal-hashing based MACs. Additionally, the MAC algorithm can deliberately combine two or more cryptographic primitives, so as to maintain protection even if one of them is later found to be vulnerable. For instance, in Transport Layer Security (TLS) versions before 1.2, the input data is split in halves that are each processed with a different hashing primitive (MD5 and SHA-1) then XORed together to output the MAC. One-time MAC Universal hashing and in particular pairwise independent hash functions provide a secure message authentication code as long as the key is used at most once. This can be seen as the one-time pad for authentication. The simplest such pairwise independent hash function is defined by the random key $(a, b)$, and the MAC tag for a message $m$ is computed as $t = (a \cdot m + b) \bmod p$, where $p$ is prime; a small worked sketch of this construction appears after the usage example below. More generally, $k$-independent hash functions provide a secure message authentication code as long as the key is used fewer than $k$ times. Message authentication codes and data origin authentication have also been discussed in the framework of quantum cryptography. By contrast to other cryptographic tasks, such as key distribution, for a rather broad class of quantum MACs it has been shown that quantum resources do not offer any advantage over unconditionally secure one-time classical MACs. Standards Various standards exist that define MAC algorithms. These include: FIPS PUB 113 Computer Data Authentication, withdrawn in 2002, defines an algorithm based on DES. 
FIPS PUB 198-1 The Keyed-Hash Message Authentication Code (HMAC) NIST SP800-185 SHA-3 Derived Functions: cSHAKE, KMAC, TupleHash, and ParallelHash ISO/IEC 9797-1 Mechanisms using a block cipher ISO/IEC 9797-2 Mechanisms using a dedicated hash-function ISO/IEC 9797-3 Mechanisms using a universal hash-function ISO/IEC 29192-6 Lightweight cryptography - Message authentication codes ISO/IEC 9797-1 and -2 define generic models and algorithms that can be used with any block cipher or hash function, and a variety of different parameters. These models and parameters allow more specific algorithms to be defined by nominating the parameters. For example, the FIPS PUB 113 algorithm is functionally equivalent to ISO/IEC 9797-1 MAC algorithm 1 with padding method 1 and a block cipher algorithm of DES. An example of MAC use In this example, the sender of a message runs it through a MAC algorithm to produce a MAC data tag. The message and the MAC tag are then sent to the receiver. The receiver in turn runs the message portion of the transmission through the same MAC algorithm using the same key, producing a second MAC data tag. The receiver then compares the first MAC tag received in the transmission to the second generated MAC tag. If they are identical, the receiver can safely assume that the message was not altered or tampered with during transmission (data integrity). However, to allow the receiver to be able to detect replay attacks, the message itself must contain data that assures that this same message can only be sent once (e.g. time stamp, sequence number or use of a one-time MAC). Otherwise an attacker could – without even understanding its content – record this message and play it back at a later time, producing the same result as the original sender. See also Checksum CMAC HMAC (hash-based message authentication code) MAA MMH-Badger MAC Poly1305 Authenticated encryption UMAC VMAC SipHash KMAC Notes References External links RSA Laboratories entry on MACs Ron Rivest lecture on MACs Message authentication codes Error detection and correction
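As promised above, here is a small worked sketch of the one-time pairwise independent MAC t = (a·m + b) mod p. The prime and the message encoding are illustrative assumptions; the essential points are that the key (a, b) is uniformly random and is used for a single message only.

```python
import secrets

P = 2**127 - 1  # a Mersenne prime; assumed large enough to encode each message as an integer below P

def keygen() -> tuple[int, int]:
    """One-time key (a, b), drawn uniformly at random modulo P."""
    return secrets.randbelow(P), secrets.randbelow(P)

def tag(key: tuple[int, int], m: int) -> int:
    """Pairwise independent MAC: t = (a*m + b) mod P."""
    a, b = key
    return (a * m + b) % P

key = keygen()
m = int.from_bytes(b"example message", "big")
t = tag(key, m)
assert tag(key, m) == t  # a verifier holding the same key recomputes and compares

# Why one-time only: given two tagged messages (m1, t1) and (m2, t2) under the
# same key, an adversary can solve a = (t1 - t2) * pow(m1 - m2, -1, P) % P,
# then recover b, breaking unforgeability. A fresh (a, b) is needed per message.
```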
Message authentication code
Engineering
1,842
14,815,943
https://en.wikipedia.org/wiki/Apolipoprotein%20L1
Apolipoprotein L1 is a protein that in humans is encoded by the APOL1 gene. Two transcript variants encoding two different isoforms have been found for this gene. Species distribution This gene is found only in humans, African green monkeys, and gorillas. Structure The gene that encodes the APOL1 protein is 14,522 base pairs long and found on the human chromosome 22, on the long arm at position 13.1, from base pair 36,253,070 to base pair 36,267,530. The protein is a 398 amino acid protein. It consists of 5 functional domains: S domain - secretory signal; MAD (membrane-addressing domain) - pH sensor and regulator of cell death; BH3 domain - associated with programmed cell death; PFD (pore-forming domain); SRA (serum resistance-associated binding domain) - confers resistance to Trypanosoma brucei. Mutations Two coding variants, G1 and G2, have recently been identified with relevance to human phenotypes. G1 is a pair of non-synonymous single nucleotide polymorphisms (SNPs) in almost complete linkage disequilibrium. G2 is an in-frame deletion of two amino acid residues, N388 and Y389. Function Apolipoprotein L1 (apoL1) is a minor apolipoprotein component of HDL cholesterol which is synthesized in the liver and also in many other tissues, including pancreas, kidney, and brain. APOL1 is found in vascular endothelium, liver, heart, lung, placenta, podocytes, proximal tubules, and arterial cells. The protein has a secreted form that allows it to circulate in the blood. It forms a complex, known as a trypanosome lytic factor (TLF), with high-density lipoprotein 3 (HDL3) particles that also contain apolipoprotein A1 (APOA1) and the hemoglobin-binding, haptoglobin-related protein (HPR). The APOL1 protein acts as the main lytic component in this complex. Once taken up by the trypanosome, the complex is trafficked to acidic endosomes, where the APOL1 protein may insert into the endosomal membrane. If the endosome is then recycled to the plasma membrane, where it encounters neutral pH conditions, APOL1 may form cation-selective channels. APOL1 is a member of a family of apolipoproteins which also includes six other proteins, and it is a member of the BCL2 family of genes, which are involved in autophagic cell death. In fact, an overabundance of APOL1 within a cell results in autophagy. APOL1 may play a role in the inflammatory response. The pro-inflammatory cytokines interferon-γ (IFN-γ) and tumor necrosis factor-α (TNF-α), as well as p53, can increase the expression of APOL1. APOL1 has a role in innate immunity by protecting against infection by Trypanosoma brucei, a parasite transmitted by the tsetse fly. Trypanosomes endocytose the secreted form of APOL1; APOL1 forms pores on the lysosomal membranes of the trypanosomes, which causes an influx of chloride, swelling of the lysosome, and lysis of the trypanosome. Clinical significance African trypanosomiasis (sleeping sickness) Although its intracellular function has not been elucidated, apoL1 circulating in plasma has the ability to kill the trypanosome Trypanosoma brucei that causes sleeping sickness. Recently, two coding sequence variants in APOL1 have been shown to associate with kidney disease in a recessive fashion while at the same time conferring resistance against Trypanosoma brucei rhodesiense. This resistance is due, in part, to decreased binding of the G1 and G2 APOL1 variants to the T. b. rhodesiense virulence factor, serum resistance-associated protein (SRA), as a result of the C-terminal polymorphisms. 
People who have at least one copy of either the G1 or G2 variant are resistant to infection by trypanosomes, but people who have two copies of either variant are at an increased risk of developing a non-diabetic kidney disease. Kidney disease The distribution of the variants most associated with kidney disease risk was analyzed in African populations and found to be more prevalent in western compared to northeastern African populations and absent in Ethiopia, consistent with the reported protection from forms of kidney disease known to be associated with the APOL1 variants. In the Yoruba people of Nigeria (West Africa), the prevalences of the G1 and G2 risk alleles are 40% and 8%, respectively. African nations with high frequencies of APOL1 risk alleles also have large populations of trypanosomes, suggesting that the risk alleles underwent positive selection as a defense mechanism. These variants are found only on African chromosomes and exist in people with recent African ancestry (<10,000 years). Many African Americans are descendants of people of West African nations and consequently also have a high prevalence of APOL1 risk alleles as well as APOL1-associated kidney diseases. The frequency of the risk alleles in African Americans is more than 30%. The existence of these alleles has been shown to increase the risk of developing diseases such as focal segmental glomerulosclerosis (FSGS), hypertension-attributed end-stage kidney disease (ESKD), and HIV-associated nephropathy (HIVAN). The prevalences of the risk alleles in African Americans with these kidney diseases shown in recent studies are 67% in HIVAN, 66% in FSGS, and 47% in hypertension-attributed ESKD. Hispanic populations such as Dominicans and Puerto Ricans demonstrate a mixture of genetic influences that include African ancestry, resulting in a prevalence of the APOL1 variants as well. Studies have also determined the prevalence of each individual allele in FSGS cases. Focal segmental glomerulosclerosis (FSGS) The prevalence of the G1 risk allele in African Americans with FSGS is 52%, versus 18–23% in those without FSGS. The prevalence of the G2 risk allele in African Americans with FSGS is 23%, versus 15% in those without FSGS. FSGS is a kidney disease that affects younger individuals; therefore, its effects are slightly different from the effects of general non-diabetic ESKD. In a recent study, the mean ages of onset of FSGS for African Americans with 2, 1, and 0 APOL1 risk alleles were 32, 36, and 39 years, respectively. APOL1 variants also have a tendency to manifest FSGS at relatively young ages; FSGS begins between the ages of 15 and 39 in 70% of individuals with two APOL1 risk alleles and 42% of individuals with 0 or 1 risk alleles. Pathogenesis Although possession of the APOL1 risk variants increases susceptibility to non-diabetic kidney disease, not all people who possess these variants develop kidney disease, which indicates another factor may initiate progression of kidney disease. Similarly, in HIV-positive patients, although the majority of African-American patients with HIVAN have two APOL1 risk alleles, other as yet unknown factors in the host, including genetic risk variants and environmental or viral factors, may influence the development of this disorder in those with zero or one APOL1 risk allele (Kidney Int. 2012 Aug;82(3):338–43). The African American population has a total lifetime risk of developing FSGS of 0.8%. 
For those with 0 risk alleles, the risk of developing FSGS is 0.2%; it is 0.3% with 1 risk allele and 4.25% with 2 risk alleles, with a 50% chance of developing HIVAN for untreated HIV-infected individuals. People with these allelic variants who develop ESKD begin dialysis at an earlier age than ESKD patients without the risk alleles. On average, those with two risk alleles begin dialysis approximately 10 years earlier than ESKD patients without the risk variants. The mean ages of initiation of dialysis of African American ESKD patients with two risk alleles, one risk allele, or no risk alleles are approximately 48, 53, and 58 years, respectively. Compared to African American ESKD patients, Hispanic ESKD patients with two APOL1 risk variants start dialysis at an earlier age, 41 years. Although the age of initiation of dialysis is also earlier with one risk allele, this effect is seen only in those with the G1 variant. In one study, ~96% of patients with two risk alleles started dialysis before the age of 75, compared to 94% for G1 heterozygotes and 84% for those with no risk alleles. Kidneys from donors carrying two APOL1 variants experience allograft failure more rapidly than kidneys from donors with 0 or 1 variants. Kidney recipients who have copies of the APOL1 risk variants but do not receive kidneys from donors with the risk variants do not have decreased survival rates of the donated kidneys. These observations together suggest that only the genotype of the donor affects allograft survival. References External links Further reading Proteins
Apolipoprotein L1
Chemistry
2,006
63,274,769
https://en.wikipedia.org/wiki/2003%20United%20States%20smallpox%20vaccination%20campaign
The 2003 United States smallpox vaccination campaign was a vaccination program announced by the White House on 13 December 2002 as preparedness for bioterrorism using smallpox virus. The campaign aimed to provide the smallpox vaccine to those who would respond to an attack, establishing Smallpox Response Teams and using DryVax (containing the NYCBOH strain) to mandatorily vaccinate half a million American military personnel, followed by half a million health care worker volunteers by January 2004. The first vaccine was administered to then-President George W. Bush. The campaign ended early in June 2003, with only 38,257 civilian health care workers vaccinated, after several hospitals refused to participate due to the risk of the live virus infecting vulnerable patients and skepticism about the risks of an attack, and after over 50 heart complications were reported by the CDC. That August, the US Institute of Medicine (IOM) criticized the program for its costs and for not considering other bioterrorism control measures such as surveillance. However, the adverse cardiac events, including two deaths, were unlikely to have been caused by the vaccine. A 2005 IOM report noted that some of the problems of the campaign stemmed from administration officials overruling scientific advice on the numbers who should be vaccinated and from a lack of communication by the CDC of the public health need, though it found that the campaign had increased general preparedness for sudden occurrences of infectious diseases like that year's monkeypox outbreak and the 2002–2004 SARS outbreak. References Smallpox vaccines Vaccination in the United States 2003 in the United States Disaster preparedness in the United States Bioterrorism
2003 United States smallpox vaccination campaign
Biology
334
46,581,587
https://en.wikipedia.org/wiki/Ruthenium%20pentafluoride
Ruthenium pentafluoride is the inorganic compound with the empirical formula RuF5. This green volatile solid has rarely been studied but is of interest as a binary fluoride of ruthenium, i.e. a compound containing only Ru and F. It is sensitive toward hydrolysis. Its structure consists of Ru4F20 tetramers, as seen in the isostructural platinum pentafluoride. Within the tetramers, each Ru adopts octahedral molecular geometry, with two bridging fluoride ligands. Ruthenium pentafluoride reacts with iodine to give ruthenium(III) fluoride. References Ruthenium compounds Fluorides Platinum group halides
Ruthenium pentafluoride
Chemistry
156
75,018,143
https://en.wikipedia.org/wiki/Outline%20of%20reptiles
The following outline is provided as an overview of and topical guide to reptiles: Reptile – What type of thing are reptiles? A reptile can be described as all of the following: Lifeform Animal Chordate Vertebrate Amniote Ectotherm Types of reptiles List of reptiles List of largest reptiles List of largest extant lizards Lists of reptiles by region List of U.S. state reptiles Marine reptile List of marine reptiles Reptile classifications List of reptile genera Testudines Crocodilia Squamata Rhynchocephalia Examples of reptiles Alligator Crocodile Lizard Gecko Iguana Hybrid iguana Komodo dragon Snake Python Tortoise Tuatara Turtle History of reptiles Reptile egg fossil History of the study of reptiles 2014 in reptile paleontology 2015 in reptile paleontology 2017 in reptile paleontology 2018 in reptile paleontology 2019 in reptile paleontology 2020 in reptile paleontology 2021 in reptile paleontology 2022 in reptile paleontology 2023 in reptile paleontology Evolutionary history of reptiles Evolution of reptiles Archosauromorpha Lepidosauromorpha Extinct reptiles List of largest extinct lizards Parareptilia Captorhinidae Araeoscelidia Neodiapsida Drepanosauromorpha Younginiformes Ichthyosauromorpha Thalattosauria Lepidosauriformes Characteristics of reptiles Reptile scales Reptile reproduction Reptile incubation Human impact on reptiles Herpetoculture Herping Human uses of reptiles Reptile conservation Reptile centres Reptile organizations Endangered reptiles lists List of least concern reptiles List of data deficient reptiles List of near threatened reptiles List of vulnerable reptiles List of endangered reptiles List of critically endangered reptiles Reptile centres Reptile centre Alice Springs Reptile Centre Armadale Reptile Centre Australian Reptile Park Clyde Peeling's Reptiland Colorado Gators Reptile Park Crocodile Rehabilitation and Research Centre Indian River Reptile and Dinosaur Park Kentucky Reptile Zoo Komodo Indonesian Fauna Museum and Reptile Park Melaka Butterfly and Reptile Sanctuary Reptile Gardens Reptile World Serpentarium Sleeping Turtles Preserve Snakes Down Under Reptile Park and Zoo The Reptile Zoo West Australian Reptile Park Reptile organizations Amphibian and Reptile Conservation Trust British Herpetological Society Friends of Snakes Society International Reptile Rescue Katala Foundation Snake Cell Andhra Pradesh Society for the Study of Amphibians and Reptiles United States Association of Reptile Keepers Reptile publications Books on reptiles Periodicals on reptiles Practical Reptile Keeping Reptiles Scientific journals covering reptiles African Journal of Herpetology Bibliotheca Herpetologica Caribbean Herpetology Chelonian Conservation and Biology Herpetologica Herpetological Conservation and Biology Herpetological Monographs Ichthyology & Herpetology Reptile databases Reptile Database Persons influential in reptile-related activities List of herpetologists See also Bird Outline of birds List of birds External links Reptiles
Outline of reptiles
Biology
606
1,023,011
https://en.wikipedia.org/wiki/Allen%27s%20rule
Allen's rule is an ecogeographical rule formulated by Joel Asaph Allen in 1877, broadly stating that animals adapted to cold climates have shorter and thicker limbs and bodily appendages than animals adapted to warm climates. More specifically, it states that the body surface-area-to-volume ratio for homeothermic animals varies with the average temperature of the habitat to which they are adapted (i.e. the ratio is low in cold climates and high in hot climates). Explanation Allen's rule predicts that endothermic animals with the same body volume should have different surface areas that will either aid or impede their heat dissipation. Because animals living in cold climates need to conserve as much heat as possible, Allen's rule predicts that they should have evolved comparatively low surface area-to-volume ratios to minimize the surface area by which they dissipate heat, allowing them to retain more heat. For animals living in warm climates, Allen's rule predicts the opposite: that they should have comparatively high ratios of surface area to volume. Because animals with low surface area-to-volume ratios would overheat quickly, animals in warm climates should, according to the rule, have high surface area-to-volume ratios to maximize the surface area through which they dissipate heat. In animals Though there are numerous exceptions, many animal populations appear to conform to the predictions of Allen's rule. The polar bear has stocky limbs and very short ears that are in accordance with the predictions of Allen's rule, as does the snow leopard. In 2007, R.L. Nudds and S.A. Oswald studied the exposed lengths of seabirds' legs and found that the exposed leg lengths were negatively correlated with Tmaxdiff (body temperature minus minimum ambient temperature), supporting the predictions of Allen's rule. J.S. Alho and colleagues argued that tibia and femur lengths are highest in populations of the common frog that are indigenous to the middle latitudes, consistent with the predictions of Allen's rule for ectothermic organisms. Populations of the same species from different latitudes may also follow Allen's rule. R.L. Nudds and S.A. Oswald argued in 2007 that there is poor empirical support for Allen's rule, even if it is an "established ecological tenet". They said that the support for Allen's rule mainly draws from studies of single species, since studies of multiple species are "confounded" by the scaling effects of Bergmann's rule and alternative adaptations that counter the predictions of Allen's rule. J.S. Alho and colleagues argued in 2011 that, although Allen's rule was originally formulated for endotherms, it can also be applied to ectotherms, which derive body temperature from the environment. In their view, ectotherms with lower surface area-to-volume ratios would heat up and cool down more slowly, and this resistance to temperature change might be adaptive in "thermally heterogeneous environments". Alho said that there has been a renewed interest in Allen's rule due to global warming and the "microevolutionary changes" that are predicted by the rule. In humans Marked differences in limb lengths have been observed when different portions of a given human population reside at different altitudes. Environments at higher altitudes generally experience lower ambient temperatures. 
In Peru, individuals who lived at higher elevations tended to have shorter limbs, whereas those from the same population who inhabited the more low-lying coastal areas generally had longer limbs and larger trunks. Katzmarzyk and Leonard similarly noted that human populations appear to follow the predictions of Allen's rule (p. 494). There is a negative association between body mass index and mean annual temperature for indigenous human populations (p. 490), meaning that people who originate from colder regions have a heavier build for their height and people who originate from warmer regions have a lighter build for their height. Relative sitting height is also negatively correlated with temperature for indigenous human populations (pp. 487–88), meaning that people who originate from colder regions have proportionally shorter legs for their height and people who originate from warmer regions have proportionally longer legs for their height. In 1968, A.T. Steegman investigated the assumption that Allen's rule caused the structural configuration of the face of human populations adapted to polar climate. Steegman did an experiment that involved the survival of rats in the cold. Steegman said that the rats with narrow nasal passages, broader faces, shorter tails and shorter legs survived the best in the cold. Steegman said that the experimental results had similarities with the Arctic Mongoloids, particularly the Eskimo and Aleut, because these have similar morphological features in accordance with Allen's rule: a narrow nasal passage, relatively large heads, long to round heads, large jaws, relatively large bodies, and short limbs. Allen's rule may have also resulted in wide noses and alveolar and/or maxillary prognathism being more common in human populations in warmer regions, and the opposite in colder regions. Mechanism A contributing factor to Allen's rule in vertebrates may be that the growth of cartilage is at least partly dependent on temperature. Temperature can directly affect the growth of cartilage, providing a proximate biological explanation for this rule. Experimenters raised mice at either 7, 21 or 27 degrees Celsius and then measured their tails and ears. They found that the tails and ears were significantly shorter in the mice raised in the cold in comparison to the mice raised at warmer temperatures, even though their overall body weights were the same. They also found that the mice raised in the cold had less blood flow in their extremities. When they tried growing bone samples at different temperatures, the researchers found that the samples grown in warmer temperatures had significantly more growth of cartilage than those grown in colder temperatures. See also Bergmann's rule, which correlates latitude with body mass in animals Gloger's rule, which correlates humidity with pigmentation in animals References Physiology Ecogeographic rules
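The surface-area-to-volume reasoning behind the rule can be made concrete with a short calculation. The sketch below (an illustrative addition; the shapes and aspect ratios are arbitrary assumptions) compares a compact sphere with progressively more elongated cylinders of the same volume, showing that elongation raises the surface area available for heat dissipation.

```python
import math

def sphere_sa_to_v(volume: float) -> float:
    """Surface-area-to-volume ratio of a sphere of the given volume."""
    r = (3 * volume / (4 * math.pi)) ** (1 / 3)
    return 4 * math.pi * r**2 / volume

def cylinder_sa_to_v(volume: float, aspect: float) -> float:
    """SA:V of a closed cylinder; aspect = height / radius (larger = more limb-like)."""
    r = (volume / (math.pi * aspect)) ** (1 / 3)
    h = aspect * r
    return 2 * math.pi * r * (r + h) / volume

v = 1.0  # one unit of body volume
print(f"sphere (compact body):        SA/V = {sphere_sa_to_v(v):.2f}")
print(f"cylinder, aspect 2 (stocky):  SA/V = {cylinder_sa_to_v(v, 2):.2f}")
print(f"cylinder, aspect 20 (lanky):  SA/V = {cylinder_sa_to_v(v, 20):.2f}")
# The ratio increases down the list: for fixed volume, the elongated shape
# exposes the most surface, matching the rule's warm-climate prediction.
```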
Allen's rule
Biology
1,257
14,776,851
https://en.wikipedia.org/wiki/HOXB9
Homeobox protein Hox-B9 is a protein that in humans is encoded by the HOXB9 gene. Function This gene is a member of the Abd-B homeobox family and encodes a protein with a homeobox DNA-binding domain. It is included in a cluster of homeobox B genes located on chromosome 17. The encoded nuclear protein functions as a sequence-specific transcription factor that is involved in cell proliferation and differentiation. Increased expression of this gene is associated with some cases of leukemia, prostate cancer and lung cancer. Interactions HOXB9 has been shown to interact with BTG2 and BTG1. See also Homeobox References Further reading External links Transcription factors
HOXB9
Chemistry,Biology
145
41,742,377
https://en.wikipedia.org/wiki/Skeet%20%28Newfoundland%29
The noun skeet in Newfoundland and Labrador English is considered to be a pejorative epithet. Though it has never been formally defined in the Dictionary of Newfoundland English, it is used as a stereotype to describe someone who is ignorant, aggressive, and unruly, with a pattern of vernacular use of English, drug and alcohol use, and involvement in petty crime, very similar to the word "chav" used in the UK. From this noun, the adjective "skeety" is derived. History The origin of this use of skeet is unknown. However, it is possible that it is a new use of an old word, coming out of the use of skeet as 'rascal'. There have been some who theorize that the use of the word skeet is linked to the townie versus bayman divide in Newfoundland and Labrador and how it speaks to class, education, and use of vernacular Newfoundland English. Use as pejorative Skeet has been called a pan-provincial slur against rural life. It is linked to stereotypes of those living in outport communities: the use of vernacular Newfoundland English, living in economically poor areas, and lower levels of education. Though vernacular use of English is on the decline in Newfoundland and Labrador, those who continue to speak using non-standard forms of English are often stereotyped as uneducated fishermen from Newfoundland outports. Skeets are characterised as rough around the edges, unintelligent, poorly dressed, and poorly spoken; of equal importance is their connection to petty crime and to drug and alcohol use. The stigma of being from a lower-income area or of dropping out of school is associated with being a skeet, and it is unlikely that an educated or professional person would be associated with the term unless it was used in jest. Phillip Hiscock, associate professor of folklore at Memorial University of Newfoundland, has said that using the word skeet says more about the person using it than the person being referred to. He also claims it is more a reflection of modern post-capitalism culture than a true identity. This use of skeet is virtually unknown outside of the province, though people displaying the same characteristics may be referred to as white trash or trailer trash in some areas of Canada and the United States, chav in the United Kingdom, spide in Northern Ireland, or skanger in Ireland. Sandra Clarke suggests there could be a connection between skeet and Prince Edward Island's skite. Pop culture Bands like Gazeebow Unit, a hip-hop group from Airport Heights, St. John's, Newfoundland and Labrador, play on the skeet stereotype, incorporating and parodying it in their music. Some local Newfoundland and Labrador companies have begun to use the word on some of their products. Depictions of "skeet" characters in entertainment have included the television series Little Dog, and the theatrical feature films How to Be Deadly and Skeet. See also Gazeebow Unit, a Newfoundland rap group with skeet cultural references Ned (Scottish) Chav (United Kingdom) References Further reading 'Not the Cream of the Crop': Using the Word 'Skeet' as Vernacular Speech in Newfoundland. Leslie Pierce, Folklore Department, Memorial University of Newfoundland, 2006. Best of St John's: Best of Local Slang. 
The Scope, 4 January 2012 Hip-hop in a Post-insular Community: Hybridity, Local Language, and Authenticity in an Online Newfoundland Rap Group Sandra Clarke, Journal of English Linguistics, 2007 Anti-social behaviour Canadian slang Culture of Newfoundland and Labrador Fashion aesthetics Newfoundland and Labrador society Class-related slurs Social class subcultures Stereotypes of the working class Canadian youth culture European-Canadian culture Working-class culture in Canada Socioeconomic stereotypes
Skeet (Newfoundland)
Biology
793
26,984,136
https://en.wikipedia.org/wiki/Arruda%E2%80%93Boyce%20model
In continuum mechanics, an Arruda–Boyce model is a hyperelastic constitutive model used to describe the mechanical behavior of rubber and other polymeric substances. This model is based on the statistical mechanics of a material with a cubic representative volume element containing eight chains along the diagonal directions. The material is assumed to be incompressible. The model is named after Ellen Arruda and Mary Cunningham Boyce, who published it in 1993. The strain energy density function for the incompressible Arruda–Boyce model is given by $W = N k_B \theta \sqrt{n}\left[\beta\lambda_{\text{chain}} - \sqrt{n}\,\ln\frac{\sinh\beta}{\beta}\right]$ where $n$ is the number of chain segments, $k_B$ is the Boltzmann constant, $\theta$ is the temperature in kelvins, $N$ is the number of chains in the network of a cross-linked polymer, $\lambda_{\text{chain}} = \sqrt{I_1/3}$ and $\beta = \mathcal{L}^{-1}(\lambda_{\text{chain}}/\sqrt{n})$, where $I_1$ is the first invariant of the left Cauchy–Green deformation tensor, and $\mathcal{L}^{-1}(x)$ is the inverse Langevin function, which can be approximated, for example, by the Padé approximant $\mathcal{L}^{-1}(x)\approx x\,(3 - x^2)/(1 - x^2)$. For small deformations the Arruda–Boyce model reduces to the Gaussian network based neo-Hookean solid model. It can be shown that the Gent model is a simple and accurate approximation of the Arruda–Boyce model. Alternative expressions for the Arruda–Boyce model An alternative form of the Arruda–Boyce model, using the first five terms of the series expansion of the inverse Langevin function, is $W = C_1\left[\tfrac{1}{2}(I_1-3) + \tfrac{1}{20N}(I_1^2-9) + \tfrac{11}{1050N^2}(I_1^3-27) + \tfrac{19}{7000N^3}(I_1^4-81) + \tfrac{519}{673750N^4}(I_1^5-243)\right]$ where $C_1$ is a material constant. The quantity $\sqrt{N}$ can also be interpreted as a measure of the limiting network stretch. If $\lambda_m$ is the stretch at which the polymer chain network becomes locked, we can express the Arruda–Boyce strain energy density as the same series with $N$ replaced by $\lambda_m^2$. We may alternatively express the Arruda–Boyce model in the form $W = C_1\sum_{i=1}^{5}\alpha_i\,\beta^{i-1}\,(I_1^i - 3^i)$ where $\beta = 1/N$ and $\alpha_1 = \tfrac{1}{2}$, $\alpha_2 = \tfrac{1}{20}$, $\alpha_3 = \tfrac{11}{1050}$, $\alpha_4 = \tfrac{19}{7000}$, $\alpha_5 = \tfrac{519}{673750}$. If the rubber is compressible, a dependence on $J = \det\boldsymbol{F}$ can be introduced into the strain energy density; $\boldsymbol{F}$ being the deformation gradient. Several possibilities exist, among which the Kaliske–Rothert extension has been found to be reasonably accurate. With that extension, the Arruda-Boyce strain energy density function can be expressed as $W = D_1\left(\tfrac{J^2-1}{2} - \ln J\right) + C_1\sum_{i=1}^{5}\alpha_i\,\beta^{i-1}\,(\bar{I}_1^i - 3^i)$ where $D_1$ is a material constant and $\bar{I}_1 = I_1 J^{-2/3}$. For consistency with linear elasticity, we must have $D_1 = \kappa/2$, where $\kappa$ is the bulk modulus. Consistency condition For the incompressible Arruda–Boyce model to be consistent with linear elasticity, with $\mu$ as the shear modulus of the material, the following condition has to be satisfied: $2\left.\frac{\partial W}{\partial I_1}\right|_{I_1=3} = \mu$. From the Arruda–Boyce strain energy density function, we have, $\frac{\partial W}{\partial I_1} = C_1\sum_{i=1}^{5} i\,\alpha_i\,\beta^{i-1} I_1^{\,i-1}$. Therefore, at $I_1 = 3$, $\mu = 2C_1\sum_{i=1}^{5} i\,\alpha_i\,\beta^{i-1}\,3^{\,i-1}$. Substituting in the values of $\alpha_i$ leads to the consistency condition $\mu = C_1\left(1 + \tfrac{3}{5N} + \tfrac{99}{175N^2} + \tfrac{513}{875N^3} + \tfrac{42039}{67375N^4}\right)$. Stress-deformation relations The Cauchy stress for the incompressible Arruda–Boyce model is given by $\boldsymbol{\sigma} = -p\,\boldsymbol{1} + 2\frac{\partial W}{\partial I_1}\boldsymbol{B}$ where $p$ is the pressure determined from the boundary conditions and $\boldsymbol{B}$ is the left Cauchy–Green deformation tensor. Uniaxial extension For uniaxial extension in the $\mathbf{n}_1$-direction, the principal stretches are $\lambda_1 = \lambda,\ \lambda_2 = \lambda_3$. From incompressibility $\lambda_1\lambda_2\lambda_3 = 1$. Hence $\lambda_2^2 = \lambda_3^2 = 1/\lambda$. Therefore, $I_1 = \lambda^2 + \tfrac{2}{\lambda}$. The left Cauchy–Green deformation tensor can then be expressed as $\boldsymbol{B} = \lambda^2\,\mathbf{n}_1\otimes\mathbf{n}_1 + \tfrac{1}{\lambda}(\mathbf{n}_2\otimes\mathbf{n}_2 + \mathbf{n}_3\otimes\mathbf{n}_3)$. If the directions of the principal stretches are oriented with the coordinate basis vectors, we have $\sigma_{11} - \sigma_{22} = 2\left(\lambda^2 - \tfrac{1}{\lambda}\right)\frac{\partial W}{\partial I_1}$. If $\sigma_{22} = \sigma_{33} = 0$, we have $\sigma_{11} = 2\left(\lambda^2 - \tfrac{1}{\lambda}\right)\frac{\partial W}{\partial I_1}$. The engineering strain is $\lambda - 1$. The engineering stress is $T_{11} = \sigma_{11}/\lambda = 2\left(\lambda - \tfrac{1}{\lambda^2}\right)\frac{\partial W}{\partial I_1}$. Equibiaxial extension For equibiaxial extension in the $\mathbf{n}_1$ and $\mathbf{n}_2$ directions, the principal stretches are $\lambda_1 = \lambda_2 = \lambda$. From incompressibility $\lambda_1\lambda_2\lambda_3 = 1$. Hence $\lambda_3 = 1/\lambda^2$. Therefore, $I_1 = 2\lambda^2 + \tfrac{1}{\lambda^4}$. The left Cauchy–Green deformation tensor can then be expressed as $\boldsymbol{B} = \lambda^2(\mathbf{n}_1\otimes\mathbf{n}_1 + \mathbf{n}_2\otimes\mathbf{n}_2) + \tfrac{1}{\lambda^4}\,\mathbf{n}_3\otimes\mathbf{n}_3$. If the directions of the principal stretches are oriented with the coordinate basis vectors, we have (taking $\sigma_{33} = 0$) $\sigma_{11} = \sigma_{22} = 2\left(\lambda^2 - \tfrac{1}{\lambda^4}\right)\frac{\partial W}{\partial I_1}$. The engineering strain is $\lambda - 1$. The engineering stress is $T_{11} = \sigma_{11}/\lambda$. Planar extension Planar extension tests are carried out on thin specimens which are constrained from deforming in one direction. For planar extension in the $\mathbf{n}_1$ direction with the $\mathbf{n}_3$ direction constrained, the principal stretches are $\lambda_1 = \lambda,\ \lambda_3 = 1$. From incompressibility $\lambda_1\lambda_2\lambda_3 = 1$. Hence $\lambda_2 = 1/\lambda$. Therefore, $I_1 = \lambda^2 + \tfrac{1}{\lambda^2} + 1$. The left Cauchy–Green deformation tensor can then be expressed as $\boldsymbol{B} = \lambda^2\,\mathbf{n}_1\otimes\mathbf{n}_1 + \tfrac{1}{\lambda^2}\,\mathbf{n}_2\otimes\mathbf{n}_2 + \mathbf{n}_3\otimes\mathbf{n}_3$. If the directions of the principal stretches are oriented with the coordinate basis vectors, we have (taking $\sigma_{22} = 0$) $\sigma_{11} = 2\left(\lambda^2 - \tfrac{1}{\lambda^2}\right)\frac{\partial W}{\partial I_1}$. The engineering strain is $\lambda - 1$. The engineering stress is $T_{11} = \sigma_{11}/\lambda$. Simple shear The deformation gradient for a simple shear deformation has the form $\boldsymbol{F} = \boldsymbol{1} + \gamma\,\mathbf{e}_1\otimes\mathbf{e}_2$ where $\mathbf{e}_1, \mathbf{e}_2$ are reference orthonormal basis vectors in the plane of deformation and the shear deformation is given by $\gamma$. In matrix form, the deformation gradient and the left Cauchy–Green deformation tensor may then be expressed as $\boldsymbol{F} = \begin{bmatrix} 1 & \gamma & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$ and $\boldsymbol{B} = \boldsymbol{F}\boldsymbol{F}^{T} = \begin{bmatrix} 1+\gamma^2 & \gamma & 0 \\ \gamma & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$. Therefore, $I_1 = 3 + \gamma^2$ and the Cauchy stress is given by $\boldsymbol{\sigma} = -p\,\boldsymbol{1} + 2\frac{\partial W}{\partial I_1}\boldsymbol{B}$. Statistical mechanics of polymer deformation The Arruda–Boyce model is based on the statistical mechanics of polymer chains. In this approach, each macromolecule is described as a chain of $n$ segments, each of length $l$. If we assume that the initial configuration of a chain can be described by a random walk, then the initial chain length is $r_0 = l\sqrt{n}$. If we assume that one end of the chain is at the origin, then the probability that a block of size $dx_1\,dx_2\,dx_3$ around the point $(x_1, x_2, x_3)$ will contain the other end of the chain, assuming a Gaussian probability density function, is $p(x_1,x_2,x_3)\,dx_1\,dx_2\,dx_3 = \frac{b^3}{\pi^{3/2}}\,\exp\!\left[-b^2(x_1^2+x_2^2+x_3^2)\right] dx_1\,dx_2\,dx_3$ with $b^2 = \tfrac{3}{2nl^2}$. The configurational entropy of a single chain from Boltzmann statistical mechanics is $s = c - k_B\,b^2 r^2$ where $c$ is a constant and $r$ is the end-to-end distance of the chain. The total entropy in a network of $N$ chains is therefore $\Delta S = -\tfrac{1}{2} N k_B (\lambda_1^2 + \lambda_2^2 + \lambda_3^2 - 3) = -\tfrac{1}{2} N k_B (I_1 - 3)$ where an affine deformation has been assumed. Therefore the strain energy of the deformed network is $W = -\theta\,\Delta S = \tfrac{1}{2} N k_B \theta\,(I_1 - 3)$ where $\theta$ is the temperature. Notes and references See also Hyperelastic material Rubber elasticity Finite strain theory Continuum mechanics Strain energy density function Neo-Hookean solid Mooney–Rivlin solid Yeoh (hyperelastic model) Gent (hyperelastic model) Continuum mechanics Elasticity (physics) Non-Newtonian fluids Rubber properties Solid mechanics Polymer chemistry
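To make the five-term form concrete, the sketch below (an illustrative addition; the material parameters are hypothetical, not taken from the sources above) evaluates the uniaxial engineering stress predicted by the incompressible five-term Arruda–Boyce model using the relations derived above.

```python
# Five-term Arruda-Boyce: W = C1 * sum_i alpha_i * (1/N)^(i-1) * (I1^i - 3^i)
ALPHA = (1/2, 1/20, 11/1050, 19/7000, 519/673750)

def dW_dI1(I1: float, C1: float, N: float) -> float:
    """dW/dI1 = C1 * sum_i i * alpha_i * (1/N)^(i-1) * I1^(i-1)."""
    return C1 * sum((i + 1) * a * I1**i / N**i for i, a in enumerate(ALPHA))

def uniaxial_engineering_stress(stretch: float, C1: float, N: float) -> float:
    """T11 = sigma11 / stretch, with sigma11 = 2*(stretch^2 - 1/stretch)*dW/dI1."""
    I1 = stretch**2 + 2.0 / stretch
    sigma11 = 2.0 * (stretch**2 - 1.0 / stretch) * dW_dI1(I1, C1, N)
    return sigma11 / stretch

# Hypothetical parameters: C1 in MPa, N = segments per chain.
for lam in (1.0, 1.5, 2.0, 3.0, 4.0):
    print(f"stretch {lam:.1f}: T11 = {uniaxial_engineering_stress(lam, C1=0.27, N=26.5):.3f} MPa")
# The response stiffens at large stretch, reflecting the limited chain
# extensibility captured by the higher-order terms of the series.
```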
Arruda–Boyce model
Physics,Chemistry,Materials_science,Engineering
1,066
3,410,840
https://en.wikipedia.org/wiki/Tolu%20balsam
Tolu balsam or balsam of Tolu is a balsam that originates from South America (Colombia, Peru, Venezuela). It is similar to (and frequently confused with) the balsam of Peru. It is tapped from the living trunks of Myroxylon balsamum var. balsamum. The fresh balsam of Tolu is a brownish, sticky, semifluid mass. It gradually becomes a brittle solid, but softens again when it is warm. The balsam contains a fairly large amount of benzyl benzoate and benzyl cinnamate. Collection Balsam of Tolu is obtained by cutting a V-shaped wound on the trunk of Myroxylon balsamum var. balsamum and fixing a calabash there to catch the exuded resin. Uses The resin is still used in certain cough syrup formulas. However, its main use in the modern era is in perfumery, where it is valued for its warm, mellow yet somewhat spicy scent. It is also used as a natural remedy for skin rashes. It is a well-known cause of contact dermatitis, a form of skin allergy. History In 1841, Henri Étienne Sainte-Claire Deville isolated toluene by the dry distillation of tolu balsam. The resin is used in traditional medicine by the people of Central America and South America. It got its name because it was shipped to Europe from Tolú, Colombia. In 1753, Linnaeus described the type specimen using a specimen collected in the province of Cartagena, probably near the town of Tolú, and named it Toluifera balsamum (a synonym of Myroxylon balsamum) in reference to the place of collection. The name of the important hydrocarbon solvent toluene is derived from Tolu balsam. References Resins Perfume ingredients
Tolu balsam
Physics
401
59,392
https://en.wikipedia.org/wiki/Surface%20anatomy
Surface anatomy (also called superficial anatomy and visual anatomy) is the study of the external features of the body of an animal. In birds, this is termed topography. Surface anatomy deals with anatomical features that can be studied by sight, without dissection. As such, it is a branch of gross anatomy, along with endoscopic and radiological anatomy. Surface anatomy is a descriptive science. In the case of human surface anatomy, its subjects are the form and proportions of the human body and the surface landmarks which correspond to deeper structures hidden from view, both in static pose and in motion. In addition, the science of surface anatomy includes the theories and systems of body proportions and related artistic canons. The study of surface anatomy is the basis for depicting the human body in classical art. Some pseudo-sciences such as physiognomy, phrenology and palmistry rely on surface anatomy. Human surface anatomy Surface anatomy of the thorax Knowledge of the surface anatomy of the thorax (chest) is particularly important because it is one of the areas most frequently subjected to physical examination techniques such as auscultation and percussion. In cardiology, Erb's point refers to the third intercostal space on the left sternal border where the S2 heart sound is best auscultated. Some sources include the fourth left interspace. Human female breasts are located on the chest wall, most frequently between the second and sixth ribs. Anatomical landmarks On the trunk of the body in the thoracic area, the shoulder in general is the acromial, while the curve of the shoulder is the deltoid. The back as a general area is the dorsum or dorsal area, and the lower back as the limbus or lumbar region. The shoulderblades are the scapular area and the breastbone is the sternal region. The abdominal area is the region between the chest and the pelvis. The breast is called the mamma or mammary, the armpit as the axilla and axillary, and the navel as the umbilicus and umbilical. The pelvis is the lower torso, between the abdomen and the thighs. The groin, where the thigh joins the trunk, is the inguen and inguinal area. The entire arm is referred to as the brachium and brachial, the front of the elbow as the antecubitis and antecubital, the back of the elbow as the olecranon or olecranal, the forearm as the antebrachium and antebrachial, the wrist as the carpus and carpal area, the hand as the manus and manual, the palm as the palma and palmar, the thumb as the pollex, and the fingers as the digits, phalanges, and phalangeal. The buttocks are the gluteus or gluteal region and the pubic area is the pubis. Anatomists divide the lower limb into the thigh (the part of the limb between the hip and the knee) and the leg (which refers only to the area of the limb between the knee and the ankle). The thigh is the femur and the femoral region. The kneecap is the patella and patellar while the back of the knee is the popliteus and popliteal area. The leg (between the knee and the ankle) is the crus and crural area, the lateral aspect of the leg is the peroneal area, and the calf is the sura and sural region. The ankle is the tarsus and tarsal, and the heel is the calcaneus or calcaneal. The foot is the pes and pedal region, and the sole of the foot the planta and plantar. As with the fingers, the toes are also called the digits, phalanges, and phalangeal area. The big toe is referred to as the hallux. List of features Following are lists of surface anatomical features in humans and other animals. 
Sorted roughly from head to tail, cranial to caudal. Homologues share a bullet point and are separated by commas. Subcomponents are nested. Class in which component occurs in italic. In humans In other animals Head Tentacle Cephalopoda Antler Crest Hood Horn Mane Eye Ear Snout Nose, Trunk Nostril Whiskers Beak Aves only, Mouth Lip not in Aves Philtrum Jaw not in Aves Gums not in Aves Teeth not in Aves, Tusk Tongue Throat Vocal sac Ranidae Vertebral column (extends dorsally) Thorax Udder, Mammary gland Gills Arm Mammalia, Amphibia, Fin Fish, Wing Aves Elbow Hand Fingers (Thumb: Primate) Knee Leg Foot Toe Hoof, Claw, Nail (anatomy), Nail (beak) Webbing Abdomen Pouch Marsupialia Gastro-genitourinary system Vulva (female) Placentalia Penis (male) Amniota Scrotum (male) Boreoeutheria Urogenital papillae Teleostei Cloaca Aves, Elasmobranchii, Reptilia, Amphibia, Monotremata, Sarcopterygii Anus Theria, Teleostei, Invertebrates Skin Vertebrata Feather Aves, Scale, Hair Mammalia, Fur Mammalia Shell Tail See also Anatomy Inspection (medicine) List of images in Gray's Anatomy: XII. Surface anatomy and Surface Markings Palpation Notes References Standring, Susan (2008) Gray's Anatomy: The Anatomical Basis of Clinical Practice, 39th Edition. . Human surface anatomy photos at pp. 947, 1406-1410 Figs. 56.3, 110.12, 110.13, 110.15, 110.22 Further reading Anatomy Human anatomy Human body Human surface anatomy
Surface anatomy
Physics,Biology
1,222
11,504,627
https://en.wikipedia.org/wiki/Second%20wind
Second wind is a phenomenon in endurance sports, such as marathons or road running (as well as other sports), whereby an athlete who is out of breath and too tired to continue (known as "hitting the wall") finds the strength to press on at top performance with less exertion. The feeling may be similar to that of a "runner's high", the most obvious difference being that the runner's high occurs after the race is over. In muscle glycogenoses (muscle GSDs), an inborn error of carbohydrate metabolism impairs either the formation or utilization of muscle glycogen. As such, those with muscle glycogenoses do not need to do prolonged exercise to experience "hitting the wall". Instead, signs of exercise intolerance, such as an inappropriate rapid heart rate response to exercise, are experienced from the beginning of an activity, and some muscle GSDs can achieve second wind within about 10 minutes from the beginning of an aerobic activity such as walking (see Pathology below). In experienced athletes, "hitting the wall" is conventionally believed to be due to the body's glycogen stores being depleted, with "second wind" occurring when fatty acids become the predominant source of energy. The delay between "hitting the wall" and "second wind" reflects the slow speed at which fatty acid metabolism ramps up sufficient ATP (energy) production: fatty acids take approximately 10 minutes, whereas muscle glycogen is considerably faster at about 30 seconds. Some scientists believe the second wind to be a result of the body finding the proper balance of oxygen to counteract the buildup of lactic acid in the muscles. Others claim second winds are due to endorphin production. Heavy breathing during exercise also provides cooling for the body. After some time the veins and capillaries dilate and cooling takes place more through the skin, so less heavy breathing is needed. The increase in the temperature of the skin can be felt at the same time as the "second wind" takes place. Documented experiences of the second wind go back at least 100 years, when it was taken to be a commonly held fact of exercise. The phenomenon has come to be used as a metaphor for continuing on with renewed energy past the point thought to be one's prime, whether in other sports, careers, or life in general. Hypotheses Metabolic switching When non-aerobic glycogen metabolism is insufficient to meet energy demands, physiologic mechanisms utilize alternative sources of energy such as fatty acids and proteins via aerobic respiration. Second-wind phenomena in metabolic disorders such as McArdle's disease are attributed to this metabolic switch, and the same or a similar phenomenon may occur in healthy individuals (see symptoms of McArdle's disease). Lactic acid Muscular exercise, like other cellular functions, requires oxygen to produce ATP and function properly. This normal function is called aerobic metabolism and does not produce lactic acid if enough oxygen is present. During heavy exercise such as long-distance running or any demanding exercise, the body's need for oxygen to produce energy is higher than the oxygen supplied in the blood from respiration. Anaerobic metabolism then takes place in the muscle to some degree, and this less ideal energy production produces lactic acid as a waste metabolite. If the oxygen supply is not soon restored, this may lead to accumulation of lactic acid. 
This is the case even without exercise in people with respiratory disease, impaired circulation of blood to parts of the body, or any other situation in which oxygen cannot be supplied to the tissues involved. Some people's bodies may take more time than others to be able to balance the amount of oxygen they need to counteract the lactic acid. This theory of the second wind posits that, by pushing past the point of pain and exhaustion, runners may give their systems enough time to warm up and begin to use the oxygen to its fullest potential. For this reason, well-conditioned Olympic-level runners do not generally experience a second wind (or they experience it much sooner) because their bodies are trained to perform properly from the start of the race. The idea of a "properly trained" athlete relates to the theory of how an amateur athlete can train his or her body to increase aerobic capacity or aerobic metabolism. A big push in Ironman Triathlon ten years ago introduced the idea of heart rate training and "tricking" one's body into staying in an aerobic metabolic state for longer periods of time. This idea is widely accepted and incorporated into many Ironman Triathlon training programs. Endorphins Endorphins are credited as the cause of the feeling of euphoria and wellbeing found in many forms of exercise, so proponents of this theory believe that the second wind is caused by their early release. Many of these proponents feel that the second wind is very closely related to—or even interchangeable with—the runner's high. Pathology A second wind phenomenon is also seen in some medical conditions, such as McArdle disease (GSD-V) and phosphoglucomutase deficiency (PGM1-CDG/CDG1T/GSD-XIV). Unlike unaffected individuals, who must do long-distance running to deplete their muscle glycogen, individuals with GSD-V cannot draw on their muscle glycogen at all, so second wind is achieved after 6–10 minutes of light-to-moderate aerobic activity (such as walking without an incline). Skeletal muscle relies predominantly on glycogenolysis for the first few minutes as it transitions from rest to activity, as well as throughout high-intensity aerobic activity and all anaerobic activity. In GSD-V, due to a glycolytic block, there is an energy shortage in the muscle cells after the phosphagen system has been depleted. The heart tries to compensate for the energy shortage by increasing heart rate to maximize delivery of oxygen and blood-borne fuels to the muscle cells for oxidative phosphorylation. Exercise intolerance, such as muscle fatigue and pain, an inappropriately rapid heart rate in response to exercise (tachycardia), and heavy (hyperpnea) and rapid breathing (tachypnea), is experienced until sufficient energy is produced via oxidative phosphorylation, primarily from free fatty acids. Oxidative phosphorylation of free fatty acids is more easily achievable for light to moderate aerobic activity (below the aerobic threshold), as high-intensity (fast-paced) aerobic activity relies more on muscle glycogen due to its high ATP consumption. Oxidative phosphorylation of free fatty acids is not achievable with isometric and other anaerobic activity (such as lifting weights), as contracted muscles restrict blood flow, leaving oxygen and blood-borne fuels unable to be delivered to muscle cells adequately for oxidative phosphorylation. The second wind phenomenon in GSD-V individuals can be demonstrated by measuring heart rate during a 12 Minute Walk Test. 
A "third wind" phenomenon is also seen in GSD-V individuals, where after approximately 2 hours, they see a further improvement of symptoms as the body becomes even more fat adapted. Without muscle glycogen, it is important to get into second wind without going too fast, too soon nor trying to push through the pain. Going too fast, too soon encourages protein metabolism over fat metabolism, and the muscle pain in this circumstance is a result of muscle damage due to a severely low ATP reservoir. Aiming for ATP production primarily from fat metabolism rather than protein metabolism is also why the preferred method for getting into second wind is to slowly increase speed during aerobic activity for 10 minutes, rather than to go quickly from the outset and then resting for 10 minutes before resuming. In muscle glycogenoses, second wind is achieved gradually over 6–10 minutes from the beginning of aerobic activity and individuals may struggle to get into second wind within that timeframe if they accelerate their speed too soon or if they try to push through the pain. Understanding the types of activity with which second wind can be achieved and which external factors affect it (such as walking into a headwind, walking on sand, or an icy surface), with practice while paying attention to the sensations in their muscles and using a heart rate monitor to see if their heart rate shoots up too high, individuals can learn how to get into second wind safely to the point where it becomes almost second nature (much like riding a bicycle or driving). Pain killers and muscle relaxants dull the sensations in the muscles that let us know if we are going too fast, so either take them after exercise or be extra mindful about the speed if you have to take them during exercise. Otherwise, individuals might find themselves in a spiral of taking painkillers or muscle relaxants, inadvertently causing muscle damage because they can’t feel the early warning signals that their muscles are giving them, then having to take more because of the increased pain from muscle damage, then causing even more muscle damage while exercising on the increased dosage, which then causes more pain, and so on. Due to the glycolytic block, those with McArdle disease and select other muscle glycogenoses don’t produce enough lactic acid to feel the usual kind of pain that unaffected individuals do during exercise, so the phrase “no pain, no gain” should be ignored; muscle pain and tightness should be recognized as signals to slow down or rest briefly. Going too fast, too soon encourages protein metabolism over fat metabolism. Protein metabolism occurs through amino acid degradation which converts amino acids into pyruvate, the breakdown of protein to maintain the amino acid pool, the myokinase (adenylate kinase) reaction and purine nucleotide cycle. Amino acids are vital to the purine nucleotide cycle as they are precursors for purines, nucleotides, and nucleosides; as well as branch-chained amino acids are converted into glutamate and aspartate for use in the cycle (see Aspartate and glutamate synthesis). Severe breakdown of muscle leads to rhabdomyolysis and myoglobinuria. Excessive use of the myokinase reaction and purine nucleotide cycle leads to myogenic hyperuricemia. 
For McArdle disease (GSD-V), regular aerobic exercise utilizing "second wind" to enable the muscles to become aerobically conditioned, as well as anaerobic exercise (strength training) that follows the activity adaptations so as not to cause muscle injury, helps to improve exercise intolerance symptoms and maintain overall health. Studies have shown that regular low-moderate aerobic exercise increases peak power output, increases peak oxygen uptake (VO2peak), lowers heart rate, and lowers serum CK in individuals with McArdle disease. Regardless of whether the patient experiences symptoms of muscle pain, muscle fatigue, or cramping, the phenomenon of second wind having been achieved is demonstrable by the sign of an elevated heart rate dropping while the same speed is maintained on the treadmill. Inactive patients experienced second wind, demonstrated through relief of typical symptoms and the sign of an elevated heart rate dropping, while performing low-moderate aerobic exercise (walking or brisk walking). Conversely, patients who were regularly active did not experience the typical symptoms during low-moderate aerobic exercise (walking or brisk walking), but still demonstrated second wind by the sign of an elevated heart rate dropping. For the regularly active patients, it took more strenuous exercise (very brisk walking/jogging or bicycling) for them to experience both the typical symptoms and relief thereof, along with the sign of an elevated heart rate dropping, demonstrating second wind. In young children (<10 years old) with McArdle disease (GSD-V), it may be more difficult to detect the second wind phenomenon. They may show a normal heart rate, with normal or above-normal peak cardio-respiratory capacity (VO2max). That said, patients with McArdle disease typically experience symptoms of exercise intolerance before the age of 10 years, with a median symptomatic age of 3 years. Tarui disease (GSD-VII) patients do not experience the "second wind" phenomenon; instead, they are said to be "out of wind". However, they can achieve sub-maximal benefit from lipid metabolism of free fatty acids during aerobic activity following a warm-up. See also "Hitting the wall" McArdle disease (GSD-V) Phosphoglucomutase deficiency (PGM1-CDG/CDG1T/GSD-XIV) Metabolic myopathies Glycogen storage disease Inborn errors of carbohydrate metabolism Tachycardia § sinus (inappropriate rapid heart rate response to exercise) IST § differential diagnosis (inappropriate sinus tachycardia) External links IamGSD - International Association for Muscle Glycogen Storage Disease Training Support - IamGSD resources for "second wind", details and printouts for the 12 MWT, and physical training guidelines in McArdle disease (GSD-V) 12 Minute Walk Test in McArdle Disease - IamGSD Videos. A video of the 12 MWT demonstrating "second wind" using a treadmill and measuring heart rate of an individual with McArdle disease (GSD-V) References Running Sport of athletics terminology Physiology Inborn errors of carbohydrate metabolism
Second wind
Chemistry,Biology
2,821
2,387,525
https://en.wikipedia.org/wiki/Heliodisplay
The Heliodisplay is an air-based display that uses primarily the air already present in the operating environment (room or space). The system, developed by IO2 Technology in 2001, uses a projection unit focused onto multiple layers of air and dry micron-size atomized particles in mid-air, resulting in a two-dimensional display that appears to float. This is similar in principle to the cinematic technique of rear projection, and the image can appear three-dimensional when using appropriate (3D) content. As dark areas of the image may appear invisible, the image may be more realistic than on a projection screen, although it is still not volumetric. The system does, however, allow for multiple viewing and dual viewing (back and front) when combined with two light sources. An oblique viewing angle of about ±30 degrees may be required in various configurations due to the rear-projection requirement. The Heliodisplay can operate as a free-space touchscreen when the equipment is ordered as an interactive unit with embedded sensors. The original prototype of 2001 used a PC that saw the Heliodisplay as a pointing device, like a mouse. With the supplied software installed, one could use a finger, pen, or another object as cursor control and navigate or interact with simple content. As of 2010, no computer or drivers are required. The interactive ("i") version of the Heliodisplay contains an embedded processor that controls these functions internally for single-touch or multiple-touch interactivity, using an equipment-mounted arrangement but without the IR laser field found on the earlier versions. The smaller Heliodisplay version is transportable and about as big as a lunchbox (30 cm x 30 cm x 12 cm), similar to the 2002 version. The larger equipment, such as the systems that project life-size people with image diagonals up to 2.3 m, has the same footprint, about the same size as a sheet of paper. The air-based system is formed by a series of metal plates, and while the original Heliodisplay could run for several hours, current models can operate continuously. 2008-model Heliodisplays use 80 ml to 120 ml of water per hour (most of it for cooling), depending on screen size and user settings, although the medium is primarily air. Various versions of the Heliodisplay work predominantly from the surrounding air (such as in museum environments), with negligible effect on the surrounding space. A tissue paper can be left on the exhaust side of the unit for a 24-hour period without any effect from moisture, in contrast to other mist- or fog-generating equipment that relies more on pumping a liquid or vaporizing it, thereby affecting the surrounding air. The Heliodisplay was invented by Chad Dyner, who built it as a five-inch interactive prototype in 2000-2001 before patenting the free-space display technology. The original system used a CMOS camera and IR laser to track the position of a finger in mid-air and update the projected image, enabling the first co-located display with a mid-air controller interface. IO2 Technology commercialized the original versions, along with improvements over the years, in developing the product line. The Heliodisplay is sold directly worldwide by IO2 Technology, with offices in the Bay Area of Northern California. Models M1 The original M1 units produced by IO2 were advanced prototypes and proofs of concept. These were the first Heliodisplays developed by IO2 Technology and had the properties described above.
However, they had lower fidelity than later systems, although they adopted various ion-discharge plates and were showcased in 2003. This first-generation Heliodisplay supported only a 22" image and utilized an IR light source and an IR camera to track the position of a finger for cursor control of the images. M2 The second-generation M2 Heliodisplay supports a 30" image with 16.7 million colours and a 2000:1 contrast ratio. The interactive M2i version includes virtual touchscreen capability. M3 and M30 The third-generation M3 version, launched on February 28, 2007, has the same basic specifications as the M2 but is said to be much quieter, with improved brightness and clarity and more stable operation thanks to an improved tri-flow system. Apart from displaying at a standard ratio of 4:3, it also displays a 16:9 widescreen ratio. There is also an interactive version called the M3i. The M30 is the updated version of the M3, which fits into the current model numbering system, 30 designating the diagonal screen size. M50 and M100 In late 2007, IO2 Technology introduced two larger Heliodisplays, the M50 and M100. The M50 has a 50" diagonal image, equivalent to displaying a life-size head-and-shoulders person. The M100 has a 100" diagonal image, equivalent to displaying a large full-body person (about 2 meters tall). S and L and XL In 2011 IO2 reintroduced the smaller-format Heliodisplays along with the standard L (large) models, which project an approximately 2-meter-tall image (for life-size person projections). The L models can be placed on the floor as a standing tower, take up slightly more area than a sheet of paper (14 inches in diameter), and weigh around 70 lb (32 kg), allowing them to be moved by one person. Power consumption for the base 2-meter tower version is as energy-efficient as the legacy models, at around 300 watts. The system is based on improvements to the M100; similarly, the S (small) models improve on the legacy M30 in both image and user interface. The XL model is a separate system that supports larger-format images beyond the 2-meter range. All units from 2009 have a simple interface with a single on/off button and power cord. i versions IO2 incorporated various advances into the existing platforms, and most equipment weight was reduced by close to 50%. Current 2.3-meter systems now weigh closer to 38 lb (17.2 kg), along with a 20% reduction in form factor and footprint. Equipment efficiency was improved to over 90%, while still maintaining relatively quiet operation of around 39 dB (as compared to other fan-based technologies). Image recovery time is under 1.0 second in some models, and wireless communication limits the cables to only the power cord. Overall image performance (fidelity) and stability were further improved. References External links The IO2 website "Interactive 3D Display: It's here!" article from OhGizmo.com Sci-fi projections Article from CBC, March 22, 2007 Media Early footage (~2002) Display of a wristwatch A famous clip showing the Heliodisplay's interactive navigation using a map display Display of a car's exterior More recent footage IO2 Technology video page IO2 Technology's YouTube page Multimodal interaction Display technology companies Computer peripherals
Heliodisplay
Technology
1,426
7,148,261
https://en.wikipedia.org/wiki/Ultra%20Electronics
Ultra Electronics Holdings is a British defence and security company. It was listed on the London Stock Exchange and was a constituent of the FTSE 250 Index until it was acquired by Cobham, which is itself owned by Advent International. The company was originally founded as Edward E. Rosen & Co., a manufacturer of headphones and loudspeakers, in 1920. In 1925, a new company, known as Ultra Electric Ltd., was established. During 1930, the firm launched its first all-electric radio receiver; it produced numerous domestic radio receivers around this time. Ultra diversified into aviation during the Second World War, building fuselage elements and engine components. Relaunching itself into the civilian markets following the conflict, Ultra started producing television sets in 1953. In 1961, Ultra's consumer electronics interests became part of Thorn Electrical Industries. During 1977, Ultra Electronics was bought by the Dowty Group and regained its independence via a management buyout in 1993. Into the twenty-first century, it has continued to be an active supplier to the aerospace sector; various companies, including Bombardier Aerospace and Airbus, have chosen to incorporate Ultra Electronics' noise reduction and vibration dampening products onto their aircraft. By 2005, Ultra Electronics was ranked as the 66th biggest aerospace company in the world. In August 2021, the British aerospace and defence company, Cobham, agreed to acquire Ultra Electronics in exchange for £2.6 billion. History Early activities The company that would eventually become Ultra Electronics was started by wireless specialist Teddy Rosen as Edward E. Rosen & Co. during 1920. The firm was initially focused upon the manufacture of high-quality headphones and loudspeakers. During 1923, the company relocated to new premises at Harrow Road, London. In 1925, a new company, known as Ultra Electric Ltd., was formed; the Ultra name had previously been used for one of its products, the first commercial moving-iron loudspeaker. During 1930, Ultra launched its first all-electric radio receiver. During 1931, the firm introduced its first mains-powered wireless set, known as the Ultra Twin Cub. That same year, Ultra received its first order from the aviation industry, placed by the Japanese Kawasaki Company. As a result of further expansion, the company moved to larger premises at Erskine Road, Chalk Farm, NW3 in 1932; three years later, it opened a new factory at Western Avenue, Acton. During the 1930s, Ultra manufactured a wide range of domestic radio receivers including the Blue Fox, Lynx, Panther and Tiger models. In 1939, the company presented to the market a television receiver for the BBC High Definition Television Service, which was transmitted on 405 lines from the studios at Alexandra Palace, north London. During the Second World War, Ultra diversified into aviation; the Short Stirling was the first aircraft to incorporate its products, the company acting as a subcontractor producing tails and bomb doors for the bomber. Ultra produced a wide range of aerostructures for numerous aircraft throughout the conflict. The firm focused solely on wartime demands, only relaunching itself into the civilian market during 1947, although it would continue to have an interest in the military sector during the post-war period. Post-war Ultra continued to manufacture products for the aviation industry after the conflict.
Various engines, including the Armstrong Siddeley Mamba and the Rolls-Royce Avon, incorporated components such as temperature regulators, fuel flow valves, and throttle controls produced by Ultra. Electronic control systems would become a key part of the company's product range. In 1953, Ultra started manufacturing television sets. During 1956, the firm opened a new factory at Gosport for the production of both televisions and radio sets; Ultra acquired rival company Pilot Radio & Television in 1959. During the following year, Ultra reorganised itself, splitting into two divisions, one specialising in domestic radio and television and the other focused on all other electronic products. In 1961, Ultra's consumer electronics interests became part of Thorn Electrical Industries, who continued to manufacture products using the Ultra brand name until 1974. As a result of the acquisition, the remainder of the company became Ultra Electronics Ltd. Amongst its varied product range at this time, it produced the "Jezebel" and "Mini-Jezebel" sonobuoys. In 1962, Ultra developed its Search and Rescue and Homing (SARAH) radio beacon, which would be widely used throughout the world. Various subsystems of Concorde, including the droop nose controls and the full-authority engine controls, incorporated Ultra technologies. During 1977, Ultra Electronics was bought by the Dowty Group. Reemergence In 1993, Ultra was the subject of a management buyout, led by Julian Blogh, of the seven Dowty Group plc companies which formed the Dowty Group Electronic Systems Division; Dowty had previously been acquired by TI Group in 1992. In September 1995, Ultra Electronics received its first major export order from the American government, to supply support equipment for its McDonnell Douglas AV-8B Harrier II fleet. It was floated on the London Stock Exchange in 1996. During the late 1990s, Ultra Electronics began to vigorously promote its active noise control systems, marketed as UltraQuiet: the firm argued that aircraft manufacturers could deploy it to decrease cabin noise, traditionally a prevalent drawback of turboprop-powered aircraft, such as regional airliners, in comparison to their jet-powered counterparts. It also developed further noise reduction technologies during this period. Various companies, including Bombardier Aerospace and Airbus, have chosen to incorporate Ultra Electronics' noise reduction and vibration dampening products onto their aircraft. According to Flight International, since regaining its independence in the 1990s, the corporate strategy of Ultra Electronics appears to have been slanted towards maintaining a diverse product range, avoiding any large exposure to a single market, as well as being intentionally widely dispersed geographically. In 2000, Ultra Electronics acquired Datel Ferranti Group. It also acquired American voice communications provider Audiopack Technologies in 2004. By 2005, Ultra Electronics was ranked as the 66th biggest aerospace company in the world: at this point in time, the American market accounted for around one-third of the business's turnover. In August 2021, the British aerospace and defence company, Cobham, agreed to acquire Ultra Electronics in exchange for £2.6 billion.
A merger enquiry into the anticipated acquisition (Ultra Electronics is a key national security and defence contractor, and the acquirer, Cobham, is American-owned) was completed in January 2022, with a report being passed to the Secretary of State for Business, Energy and Industrial Strategy, Kwasi Kwarteng. In July 2022, Kwarteng announced that the acquisition was cleared to proceed. Operations The company operates under five strategic business units: Maritime, Intelligence & Communications, Precision Control Systems, Energy and Forensic Technology. It has facilities in the UK, North America and Australia. In January 2020, Ultra launched new branding. See also Aerospace industry in the United Kingdom References 1920 establishments in England Aircraft component manufacturers of the United Kingdom Companies listed on the London Stock Exchange Defence companies of the United Kingdom Electronics companies established in 1920 Electronics companies of the United Kingdom Electronics industry in London Manufacturing companies based in London Sonar manufacturers Radio manufacturers
Ultra Electronics
Engineering
1,457
24,083,251
https://en.wikipedia.org/wiki/Polyprenol
Polyprenols are natural long-chain isoprenoid alcohols of the general formula H-(C5H8)n-OH, where n is the number of isoprene units. Any prenol with more than 4 isoprene units is a polyprenol. Polyprenols serve an important function, acting as natural bioregulators, and are found in small quantities in various plant tissues. Dolichols, which are found in all living creatures, including humans, are their 2,3-dihydro derivatives. Sources Live trees are known to contain polyprenols. The needles of conifer trees are one of the richest sources of polyprenols. They are also present in shiitake mushrooms in trace amounts. Research Polyprenols have been studied for more than 30 years. Interest has been strongest in Russia, Europe, Japan, India, and the United States. In the early 1930s, a scientific team at the Forest Technical Academy in St. Petersburg, Russia, led by Fyodor Solodky, the founder of forest biochemistry, and Asney Agranet, began research into the composition of conifer tree needles. They were intrigued by the trees' ability to remain disease-free at temperature extremes of ±40 °C. Development of Solodky's research led Russian scientists to isolate a completely different class of organic substances from the needles, including polyprenols. Functions Polyprenols are low-molecular-weight natural bioregulators (physiologically active compounds), playing a significant modulating role in biosynthesis, a cellular process in plants. What polyprenols are to plants, dolichols are to all living creatures, including man; they are in fact of very similar chemical composition. Dolichols are derivatives of polyprenols in which the alpha isoprene unit is saturated. Through dolichols, the dolichol phosphate cycle occurs. The dolichol phosphate cycle plays a major role in the synthesis of glycoproteins. Secreted, membrane, and intracellular glycoproteins form the basis for the building of membrane receptors, which are used in the production of insulin, adrenaline, estrogen, testosterone and other hormones and enzymes. Dolichols also appear to have an important role in maintaining the correct lipid composition of membranes; decreased levels of dolichols have been connected to higher levels of lipid peroxidation. The dolichol phosphate cycle facilitates the process of cellular membrane glycosylation, that is, the synthesis of glycoproteins that control the interactions of cells, support the immune system and stabilize protein molecules. Among these glycoproteins, P-glycoprotein has been found to create resistance to multiple cancer treatments and keep cancer cells alive. The pharmacological activity of polyprenols takes place in the liver, where they are metabolized into dolichols. Potential medical applications The interest in polyprenols and dolichols is associated with their wide range of demonstrated biological activity and extremely low toxicity. Polyprenols support cellular reparation and spermatogenesis, and have antistress, adaptogenic, antiulcerogenic and wound-healing activity. Dolichols have antioxidant activity and protect cell membranes from peroxidation. Experiments on mice have demonstrated that polyprenols have antiviral activity, in particular against influenza viruses. It has been established that dolichol levels in liver tumor cells are reduced in comparison with healthy hepatic cells. The Australian pharmaceutical company Solagran Limited has been investigating the medical significance of polyprenols.
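Because every polyprenol shares the general formula H-(C5H8)n-OH, its molecular formula and approximate molecular weight follow directly from n. A small Python sketch of that arithmetic (the function name and the choice of n are illustrative only):

# Computes the molecular formula C(5n)H(8n+2)O and approximate molecular
# weight of a polyprenol with n isoprene units, per H-(C5H8)n-OH.
ATOMIC_WEIGHTS = {"C": 12.011, "H": 1.008, "O": 15.999}

def polyprenol_formula(n):
    if n <= 4:
        raise ValueError("a polyprenol has more than 4 isoprene units")
    carbons, hydrogens = 5 * n, 8 * n + 2  # plus a single oxygen
    mass = (carbons * ATOMIC_WEIGHTS["C"]
            + hydrogens * ATOMIC_WEIGHTS["H"]
            + ATOMIC_WEIGHTS["O"])
    return f"C{carbons}H{hydrogens}O", round(mass, 1)

print(polyprenol_formula(11))  # ('C55H90O', 767.3) -- an 11-unit, C55 polyprenol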
References Alcohols Alkene derivatives Polymers Terpenes and terpenoids
Polyprenol
Chemistry,Materials_science
774
58,785,859
https://en.wikipedia.org/wiki/Neomammalian%20brain
The neomammalian brain is one of three aspects of Paul MacLean's triune theory of the human brain. MacLean was an American physician and neuroscientist who formulated his model in the 1960s and published it in his 1990 book The Triune Brain in Evolution. MacLean's three-part theory explores how the human brain has evolved from ancestors over millions of years, and consists of the reptilian, paleomammalian and neomammalian complexes. MacLean proposes that the neomammalian complex is only found in higher-order mammals, for example in the human brain, accounting for increased cognitive abilities such as motor control, memory, improved reasoning and complex decision-making. MacLean's theory explores how, in higher-order mammals, the neomammalian brain works interdependently with the reptilian and paleomammalian complexes to allow sophisticated thought processes to occur. The theory of the neomammalian brain is based on MacLean's extensive research comparing the structural differences between human brains and those of other organisms, including monkeys and a range of reptiles. MacLean's research built upon previous neuroscience researchers' findings, including those of James Papez, and led to the formulation of the triune theory of the human brain and the limbic system, the two major contributions that MacLean made to neuroscience. Paul MacLean Paul Donald MacLean was an American physician and neuroscientist who was born in Phelps, New York, on May 1, 1913, into a Presbyterian minister's family, and he ultimately became a religious man himself. MacLean married Alison Stokes and lived in Mitchellville, Maryland, with their five children Alison, Alexander, David, James and Paul. MacLean died in Potomac, Maryland, in 2007, aged 94. MacLean is famous for his significant contributions to brain research, psychiatry and physiology. He spent a large part of his working life at Yale Medical School and the National Institutes of Health, where through his research he was able to publish neuroscience texts, reports, photographs and audio-visual material on his neurological findings. MacLean spent two years during World War II serving as a medical officer for the Yale Unit, which later became known as the 39th General Hospital. This experience helped shape MacLean's perspective on the impact of post-traumatic stress disorder on soldiers, which would ultimately shape his future studies into the way the human brain functions and how it can be damaged through life experiences, with particular focus on sleeping disorders and other mental health issues, including anxiety and depression. MacLean had a deep fascination with natural human instinct and the role that the brain plays in rational human thinking. MacLean believed that there was a connection between a human's violent actions and rational behaviour. In addition, MacLean coined the idea of the limbic system, the set of brain structures that surround the hypothalamus and are responsible for human emotions, memories and arousal. MacLean's research was based on previous studies by Dr James Papez, a neuroscientist who during the 1930s and 1940s delved into the circuit between the hippocampus, thalamus and cingulum, and how their connection forms the basis for human emotion. MacLean proposed that the limbic system had developed over time in early mammals to control both fight and flight responses.
MacLean's findings and proposals on the limbic system are still questioned and debated by modern-day neuroscience researchers, who have not reached a conclusion on their accuracy. Structure The triune brain is divided into three sections: reptilian, paleomammalian and neomammalian. MacLean proposed that the human skull does not contain just one single brain; according to his triune brain theory, it in fact holds three. These three separate brains work interdependently, interconnected by nerves, and each operates differently with different capacities. Reptilian The reptilian brain was referred to by MacLean as the "R-complex" or the primitive brain. This is the oldest brain in the triune theory and anatomically is made up of the brain stem and the cerebellum. In reptiles, the brain stem and cerebellum dominate and are the control centres for basic function. It has been found that these two parts of the brain are responsible for emotions such as paranoia, obsession and compulsion. Further, they are essential in regulating heart rate, body temperature and spatial orientation. For example, if a human holds their breath and carbon dioxide levels rise, the primitive brain signals the lungs to start breathing again to achieve a state of homeostasis. Paleomammalian The paleomammalian brain is known as the intermediate or "old mammalian" brain. It anatomically consists of the hypothalamus, amygdala and hippocampus. It is responsible for subconscious emotions and behaviours such as fear, joy, fighting and sexual behaviour. The old mammalian brain is found in a large percentage of mammals and is believed to have a strong, intricate connection with the neocortex. MacLean's idea of the "limbic system" is based on the role the paleomammalian brain plays in brain function, as the place from which an individual's judgement of right and wrong stems. MacLean was particularly interested in the role that the limbic system plays in mental health when it translates messages incorrectly, for example, how an individual can enter a state of deep distress when there is no stimulus to cause such a response, relating directly to MacLean's research into the causes of post-traumatic stress disorder. Neomammalian The neomammalian brain consists of the cerebral neocortex, which is found in higher mammals, especially the human brain, and is not found in birds or reptiles. The neomammalian brain's structure is of great complexity and has evolved over time, allowing humans to reach the top of the food chain. The neocortex is made up of grey matter arranged in folds that increase the surface area and memory retention; in humans, the neurons within these folds are roughly 80% excitatory and 20% inhibitory. The arrangement of these folds differs from human to human, and is believed to account for the differing cognitive abilities of individual humans. It has been found by neuroscientists that the cerebral neocortex accounts for roughly 76% of the human brain's total volume. The neocortex is predominantly associated with higher-order brain functions such as motor control, sensory perception and cognition. The neocortex can be divided into two sections: the proisocortex and the true isocortex. The proisocortex is transitional between the true isocortex and the periallocortex, and is found mainly in the cingulate gyrus, insula and subcallosal areas of the brain. The true isocortex is a six-layered cytoarchitecture that is predominantly located in the frontal lobe, parietal lobe, temporal lobe and occipital lobe.
Another unique feature of the neocortex is the way in which the matter is arranged together in columns. In the human brain, the six neocortex layers are 2.5 mm thick and contain thousands of different types of cells. Over many years of research, neuroscientists have struggled to reach an agreed conclusion as to why the neocortex is arranged in such a way; however, many suggest that the columns act as channels for intricate communication between cells and differing layers. This is believed to be another neurological explanation as to why higher-order mammals have such a complex order of thinking in comparison to lower-order mammals, reptiles and birds. Development The neomammalian brain (neocortex) is the newest addition to the human brain. MacLean proposed that as animals evolved over hundreds of millions of years, higher-order animals developed increased cognitive ability for an increased chance of survival, which resulted in an increase in brain size. MacLean firmly believed that the driving force in the development of the neocortex was the development of social behaviours, such as the separation cry between infant and mother during the development phase of offspring. This followed the idea that mammals evolved through learning different methods of survival: as these mammals learnt various methods of survival through particular encounters, their brains developed into far more complex cytoarchitectures. MacLean's model is based on the idea that the larger the brain size, the higher the order of thinking, and thus the greater the cognitive ability. The neomammalian brain is in charge of all "rational thinking". His model follows Charles Darwin's natural selection idea of "survival of the fittest", whereby those mammals that developed characteristics of the neomammalian brain survived and then passed this trait on to their offspring, until a stage was reached where the majority of the population of higher-order mammals attained the survival trait, a process that occurred over millions and millions of years. Archaeologists have discovered, and are still discovering, fossil records that allow comparative anatomy between modern-day Homo sapiens and primate ancestors. The tissue that the human brain is made of decomposes once the organism has died, so old brain tissue cannot be analysed; however, due to the large percentage of the human brain that the neomammalian brain takes up, estimated to be 76%, comparative anatomy shows that Homo sapiens has a much larger cranial size than early primate ancestors. It must be noted that many neuroscientists believe MacLean's triune theory to be false; however, the majority of neuroscientists mutually agree that the features MacLean described for the neomammalian brain are the reason why humans have such a high-level order of thinking. Clinical significance Through comparing the three different sections of MacLean's triune theory, neuroscientists have been able to account for the complexity of the human brain in comparison to reptiles, birds and other lower-order mammals. Animal scientists have dissected a vast array of organisms' brains and through comparison ultimately concluded that the cerebral cortex (neomammalian brain) has a different column structure to that of other organisms.
The discovery of the six-layered neomammalian brain has allowed neuroscientists to research the layers' differing roles, and how each functions interdependently to allow complex thought to occur. The six layers have been separated into three different sections according to the role they play in the survival of a human. Layers one to three are referred to as the supragranular layers and play a vital role in the origin and termination of intercortical connections. Layer one is known as the molecular layer and is made up of very few nerve cells. Layer two is the external granular layer, which is made up of small, dense neurons. Layer three is the pyramidal layer and is made up of larger pyramidal neurons. These three layers are composed of pyramidal cells, cells that have a pyramid-shaped cell body with long dendrites connecting to other cells in neighbouring columns. The second section of the neomammalian brain is the internal granular layer, known as layer four by neuroscientists; this layer is responsible for receiving afferent signals from the thalamus and sends messages to the other layers. For example, layer four would receive messages about external temperature changes. The internal granular layer acts as a medium which receives, processes and then sends signals to other parts of the brain, allowing the body to respond in such a way as to counter the change in environment. The final section is composed of layers five and six and is known as the infragranular layers; it connects the cerebral cortex with the subcortical regions of the brain, which are responsible for long-term memory, motor control and behavioural and emotional responses. Damage to layers five and six can be detrimental to the overall fitness of the mammal, usually resulting in some form of cognitive impairment or loss of cognitive processes. These six layers of the neomammalian brain work interdependently to process neurological messages at an extremely fast and high-quality level. These six layers are only found in the modern-day human brain; however, other higher-order mammals have features of these layers that allow them to have a high cognitive processing ability. References Brain Biology theories
Neomammalian brain
Biology
2,540
27,005,308
https://en.wikipedia.org/wiki/Kouhrang%202%20Hydroelectric%20Power%20Station
The Kouhrang 2 Hydroelectric Power Station is located just south of Chelgard and northwest of Shahrekord in Chaharmahal and Bakhtiari Province, Iran. The power station has an installed capacity of 33.3 MW and uses water diverted to the east from the Kouhrang River, via a small dam and the Kouhrang 2 Tunnel, to produce power. Water from the Kouhrang is stored in a circular dam (Kouhrang 2 Dam) before being sent to the power station. The power station's three generators were commissioned between 2002 and 2004, and the power plant was inaugurated in February 2005. Water discharged from the power station enters the Zayandeh River as part of a larger project to provide water to major cities like Isfahan. The intake for the power plant is located on the Kouhrang River just downstream of the Kouhrang 1 Dam, which also diverts water, via the Kouhrang 1 Tunnel, to near Chelgard and was completed in 1953. The Kouhrang 3 Dam is planned downstream to regulate river flows and divert more water to the Zayandeh via the Kouhrang 3 Tunnel. See also List of power stations in Iran Dams in Iran References Hydroelectric power stations in Iran Earth-filled dams Dams completed in 2005 Dams in Chaharmahal and Bakhtiari Province Interbasin transfer Kuhrang County
Kouhrang 2 Hydroelectric Power Station
Environmental_science
286
49,732,727
https://en.wikipedia.org/wiki/P35%20holin%20family
The PRD1 Phage P35 Holin (P35 Holin) Family (TC# 1.E.5) is a member of Holin Superfamily III. The prototype for this family is the lipid-containing PRD1 enterobacterial phage holin protein P35 (12.8 kDa; TC# 1.E.5.1.1) encoded by gene XXXV (orfT). It is a component of a typical holin-endolysin system which functions to lyse the host bacterial cell. Structure P35 holin (TC# 1.E.5.1.1) has 3 transmembrane segments (TMSs) with 5 positively charged residues between TMSs 1 and 2. It has 4 positively charged residues at the C-terminus. It is therefore thought that the N-terminus is in the periplasm and the C-terminus is in the cytoplasm. Homologues of 109 amino acyl residues (aas), which also have 3 putative TMSs, are encoded in the genomes of Xylella fastidiosa strains. Function The reaction catalyzed by P35 holin is: autolysin (in) → autolysin (out) See also Bacteriophage Phage typing Holin Lysin Transporter Classification Database References Holins Protein families
P35 holin family
Biology
291
16,261,888
https://en.wikipedia.org/wiki/HD%20268835
HD 268835 (or R66), a star of roughly 30 solar masses, is one of two stars that were identified by NASA's Spitzer Space Telescope in the Milky Way's nearest neighbor galaxy, the Large Magellanic Cloud (the other being R 126, or HD 37974), as being encircled by monstrous dust disks that are theorised to be the origin of planets. Significance Both HD 268835 and HD 37974 are classified as hypergiants, very large and very bright. The dust clouds around them surprised astronomers because stars this big were thought to be inhospitable to planet formation, as they have very strong winds that make it difficult or impossible for the dust clouds to "condense" into planets. "We do not know if planets like those in our solar system are able to form in the highly energetic, dynamic environment of these massive stars, but if they could, their existence would be a short and exciting one," said Charles Beichman, an astronomer at NASA's Jet Propulsion Laboratory and the California Institute of Technology, both in Pasadena, California. References Stars in the Large Magellanic Cloud Mensa (constellation) Luminous blue variables B-type hypergiants 268835 022989 CD-70 00273 B(e) stars
HD 268835
Astronomy
279
47,062,017
https://en.wikipedia.org/wiki/Abell%202744%20Y1
Abell 2744 Y1 is a galaxy located in the Abell 2744 galaxy cluster, 13 billion light years away in the Sculptor constellation. It is 2,300 light years in diameter, about one-fiftieth the size of the Milky Way galaxy, yet it produces about 10 times more stars. The galaxy was discovered in July 2014 by an international team led by astronomers from the Instituto de Astrofísica de Canarias (IAC) and La Laguna University (ULL) as part of the Frontier Fields program, with the help of NASA's Spitzer and Hubble Space Telescopes. References Galaxies Dwarf galaxies Sculptor (constellation)
Abell 2744 Y1
Astronomy
125
65,797,150
https://en.wikipedia.org/wiki/Pfizer%E2%80%93BioNTech%20COVID-19%20vaccine
The Pfizer–BioNTech COVID-19 vaccine, sold under the brand name Comirnaty, is an mRNA-based COVID-19 vaccine developed by the German biotechnology company BioNTech. For its development, BioNTech collaborated with the American company Pfizer to carry out clinical trials, logistics, and manufacturing. It is authorized for use in humans to provide protection against COVID-19, caused by infection with the SARS-CoV-2 virus. The vaccine is given by intramuscular injection. It is composed of nucleoside-modified mRNA (modRNA) that encodes a mutated form of the full-length spike protein of SARS-CoV-2, which is encapsulated in lipid nanoparticles. Initial guidance recommended a two-dose regimen, given 21 days apart; this interval was subsequently extended to up to 42 days in the United States, and up to four months in Canada. Clinical trials began in April 2020; by November 2020, the vaccine had met the primary efficacy goals of the phase III clinical trial, with over 40,000 people participating. Interim analysis of study data showed a potential efficacy of 91.3% in preventing symptomatic infection within seven days of a second dose and no serious safety concerns. Most side effects are mild to moderate in severity and resolve within a few days. Common side effects include mild to moderate pain at the injection site, fatigue, and headaches. Reports of serious side effects, such as allergic reactions, remain very rare with no long-term complications documented. The vaccine is the first COVID-19 vaccine to be authorized by a stringent regulatory authority for emergency use and the first to be approved for regular use. In December 2020, the United Kingdom was the first country to authorize its use on an emergency basis. It is authorized for use at some level in the majority of countries. On 23 August 2021, the Pfizer–BioNTech vaccine became the first COVID-19 vaccine to be approved in the US by the Food and Drug Administration (FDA). The logistics of distributing and storing the vaccine present significant challenges due to the requirement for its storage at extremely low temperatures. In August 2022, a bivalent version of the vaccine (Pfizer-BioNTech COVID-19 Vaccine, Bivalent) was authorized for use as a booster dose in individuals aged twelve and older in the US. The following month, the BA.1 version of the bivalent vaccine (Comirnaty Original/Omicron BA.1 or tozinameran/riltozinameran) was authorized as a booster for use in the UK. The same month, the European Union authorized both the BA.1 and the BA.4/BA.5 (tozinameran/famtozinameran) booster versions of the bivalent vaccine. In August 2024, the FDA approved and granted emergency authorization for a monovalent Omicron KP.2 version of the Pfizer–BioNTech COVID-19 vaccine. The approval of Comirnaty (COVID-19 Vaccine, mRNA) (2024-2025 Formula) was granted to BioNTech Manufacturing GmbH. The EUA amendment for the Pfizer-BioNTech COVID-19 Vaccine (2024-2025 Formula) was issued to Pfizer Inc. Medical uses The Pfizer–BioNTech COVID-19 vaccine is used to provide protection against COVID-19, caused by infection with the SARS-CoV-2 virus, by eliciting an immune response to the S antigen. The vaccine is used to reduce morbidity and mortality from COVID-19. The vaccine is supplied in a multidose vial as "a white to off-white, sterile, preservative-free, frozen suspension for intramuscular injection". It must be thawed to room temperature and diluted with normal saline before administration. The initial course consists of two doses.
The World Health Organization (WHO) recommends an interval of three to four weeks between doses. Delaying the second dose by up to twelve weeks increases immunogenicity, even in older adults, against all variants of concern. Authors of the Pitch study think that the optimal interval against the Delta variant is around eight weeks, with longer intervals leaving receptors vulnerable between doses. A third, fourth, or fifth dose can be added in some countries. Effectiveness A test-negative case-control study published in August 2021 found that two doses of the BNT162b2 (Pfizer) vaccine had 93.7% effectiveness against symptomatic disease caused by the Alpha (B.1.1.7) variant and 88.0% effectiveness against symptomatic disease caused by the Delta (B.1.617.2) variant. Notably, effectiveness after one dose of the Pfizer vaccine was 48.7% against Alpha and 30.7% against Delta, similar to the effectiveness provided by one dose of the ChAdOx1 nCoV-19 vaccine. In August 2021, the US Centers for Disease Control and Prevention (CDC) published a study reporting that the effectiveness against infection decreased when the Delta variant became predominant in the US, which may be due to unmeasured and residual confounding related to a decline in vaccine effectiveness over time. Unless indicated otherwise, the following effectiveness ratings are indicative of clinical effectiveness two weeks after the second dose. A vaccine is generally considered effective if the estimate is ≥50% with a >30% lower limit of the 95% confidence interval. Effectiveness is generally expected to slowly decrease over time. In November 2021, Public Health England reported a possible but extremely small reduction in effectiveness against symptomatic disease from the Delta sublineage AY.4.2 at longer intervals after the second dose. Preliminary data suggest that the effectiveness against the Omicron variant starts to decline in about 10 weeks, either after the initial two-dose regimen or after the booster dose. For other variants, the effectiveness of the initial doses starts to decline in about six months. A case-control study in Qatar from 1 January to 5 September 2021 found that effectiveness against infection peaked in the first month after the second dose, followed by a slow decline that accelerated after the fourth month, reaching 20% at months 5 to 7. A similar trajectory was observed against symptomatic disease and against specific variants. Effectiveness against severe disease, hospitalization and death was more robust, peaking in the second month and remaining almost stable through the sixth month, declining thereafter. In October 2021, a phase III trial showed that a booster dose given approximately 11 months after the second dose restored the protective effect to a high efficacy level against symptomatic disease from the Delta variant. In December 2021, Pfizer and BioNTech reported that preliminary data indicated that a third dose of the vaccine would provide a similar level of neutralizing antibodies against the Omicron variant as seen after two doses against other variants. In December 2021, private health insurer Discovery Health, in collaboration with the South African Medical Research Council, reported that real-world data from more than 211,000 cases of COVID-19 in South Africa, of which 78,000 were of the Omicron variant, indicate that effectiveness against the variant after two doses is about 70% against hospital admission and 33% against symptomatic disease. Protection against hospital admission is maintained for all ages and for groups with comorbidities.
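The test-negative case-control design used in several of the studies above estimates effectiveness as one minus the odds ratio of vaccination among people testing positive versus those testing negative. A minimal Python sketch of that calculation, using invented counts rather than data from any study cited here:

# Illustrative only: vaccine effectiveness from a 2x2 test-negative table.
# VE = 1 - odds ratio; the 95% CI uses the standard error of the log odds
# ratio. All counts below are made up for demonstration.
import math

def vaccine_effectiveness(vax_cases, unvax_cases, vax_controls, unvax_controls):
    odds_ratio = (vax_cases * unvax_controls) / (unvax_cases * vax_controls)
    se = math.sqrt(1 / vax_cases + 1 / unvax_cases
                   + 1 / vax_controls + 1 / unvax_controls)
    or_low = math.exp(math.log(odds_ratio) - 1.96 * se)
    or_high = math.exp(math.log(odds_ratio) + 1.96 * se)
    return 1 - odds_ratio, (1 - or_high, 1 - or_low)  # higher OR -> lower VE

ve, (ci_low, ci_high) = vaccine_effectiveness(
    vax_cases=120, unvax_cases=600, vax_controls=900, unvax_controls=540)
print(f"VE = {ve:.1%}, 95% CI ({ci_low:.1%} to {ci_high:.1%})")
# VE = 88.0%, 95% CI (85.0% to 90.4%): "effective" by the >=50% criterion
# with a lower CI bound above 30%, as described above.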
A study of bivalent booster effectiveness against severe COVID-19 outcomes in Finland, conducted from September 2022 to January 2023, showed that the booster reduced the risk of severe COVID-19 outcomes among the elderly. By contrast, among chronically ill 18–64-year-olds, the risk was similar between those who received a bivalent vaccine and those who did not. Among the elderly, a bivalent booster provided the highest protection during the first two months after vaccination, but thereafter signs of waning were observed. Effectiveness was similar among individuals aged 65–79 years and those aged 80 years or more. Specific populations Based on the results of a preliminary study, the U.S. Centers for Disease Control and Prevention (CDC) recommends that pregnant women get vaccinated with the COVID-19 vaccine. A statement by the British Medicines and Healthcare products Regulatory Agency (MHRA) and the Commission on Human Medicines (CHM) reported that the two agencies had concluded that the vaccine is safe and effective in children aged between 12 and 15 years. In May 2021, experts commissioned by the Norwegian Medicines Agency concluded that the Pfizer–BioNTech vaccine was the likely cause of ten deaths of frail elderly patients in Norwegian nursing homes. They said that people with very short life expectancies have little to gain from vaccination, having a real risk of adverse reactions in the last days of life and of dying earlier. A 2021 report by the New South Wales Government (NSW Health) in Australia found that the Pfizer–BioNTech vaccine is safe for those with various forms of immunodeficiency or immunosuppression, though it notes that the data on these groups are limited, due to their exclusion from many of the vaccine's earlier trials held in 2020. It notes that the World Health Organization advises that the vaccine is among the three COVID-19 vaccines (alongside those of Moderna and AstraZeneca) it deems safe to give to immunocompromised individuals, and that expert consensus generally recommends their vaccination. The report states that the vaccines were able to generate an immune response in those individuals, though it also notes that this response is weaker than in those who are not immunocompromised. It recommends that specific patient groups, such as those with cancer, inflammatory bowel disease and various liver diseases, be prioritised in vaccination schedules over patients who do not have these conditions. In September 2021, Pfizer announced that a clinical trial conducted in more than 2,200 children aged 5–11 had generated a "robust" response and that the vaccine is safe in this group. Adverse effects In phase III trials for the vaccine, there were no safety concerns and few adverse events. Most side effects of the Pfizer–BioNTech COVID-19 vaccine are mild to moderate in severity and are gone within a few days. They are similar to those of other adult vaccines and are normal signs that the body is building protection against the virus. During clinical trials, the common side effects affecting more than one in 10 people are (in order of frequency): pain and swelling at the injection site, tiredness, headache, muscle aches, chills, joint pain, fever or diarrhea. Fever is more common after the second dose. The European Medicines Agency (EMA) regularly reviews the data on the vaccine's safety. The safety report published on 8 September 2021 by the EMA was based on over 392 million doses administered in the European Union.
According to the EMA, "the benefits of Comirnaty in preventing COVID-19 continue to outweigh any risks, and there are no recommended changes regarding the use of this vaccine." Rare side effects (that may affect up to one in 1,000 people) include temporary one-sided facial drooping and allergic reactions, such as hives or swelling of the face. Allergy Documented hypersensitivity to polyethylene glycol (PEG) (a very rare allergy) is listed as a contraindication to the Pfizer COVID-19 vaccine. Severe allergic reaction has been observed in approximately eleven cases per million doses of vaccine administered. According to a report by the US Centers for Disease Control and Prevention, 71% of those allergic reactions happened within 15 minutes of vaccination and mostly (81%) among people with a documented history of allergies or allergic reactions. The UK's Medicines and Healthcare products Regulatory Agency (MHRA) advised on 9 December 2020 that people who have a history of "significant" allergic reaction should not receive the Pfizer–BioNTech COVID-19 vaccine. On 12 December, the Canadian regulator followed suit, noting: "Both individuals in the U.K. had a history of severe allergic reactions and carried adrenaline auto injectors. They both were treated and have recovered." Myocarditis In June 2021, Israel's Ministry of Health announced a probable relationship between the second dose and myocarditis in a small group of 16–30-year-old men. Between December 2020 and May 2021, there were 55 cases of myocarditis per 1 million people vaccinated, 95% of which were classified as mild, and most patients spent no more than four days in the hospital. Since April 2021, an increasing number of cases of myocarditis and pericarditis have been reported in the United States, in about 13 per 1 million young people, mostly male and over the age of 16, after vaccination with the Pfizer–BioNTech or the Moderna vaccine. Most affected individuals recover quickly with adequate treatment and rest. Since February 2022, the German Standing Committee on Vaccination has recommended aspiration for COVID-19 vaccination as a precautionary measure. Pharmacology The BioNTech technology for the BNT162b2 vaccine is based on the use of nucleoside-modified mRNA (modRNA) which encodes a mutated form of the full-length spike protein found on the surface of the SARS-CoV-2 virus, triggering an immune response against infection by the virus. Sequence The modRNA sequence of the vaccine is 4,284 nucleotides long. It consists of a five-prime cap; a five-prime untranslated region derived from the sequence of human alpha globin; a signal peptide (bases 55–102) and two proline substitutions (K986P and V987P, designated "2P") that cause the spike to adopt a prefusion-stabilized conformation, reducing the membrane fusion ability, increasing expression and stimulating neutralizing antibodies; a codon-optimized gene of the full-length spike protein of SARS-CoV-2 (bases 103–3879); followed by a three-prime untranslated region (bases 3880–4174) combined from AES and mtRNR1, selected for increased protein expression and mRNA stability, and a poly(A) tail comprising 30 adenosine residues, a 10-nucleotide linker sequence, and 70 other adenosine residues (bases 4175–4284). The sequence contains no uridine residues; they are replaced by 1-methyl-3'-pseudouridylyl.
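The stated coordinates describe a contiguous layout, which can be checked mechanically. A short Python sketch follows; the 1–54 span for the cap and five-prime untranslated region is inferred from the signal peptide starting at base 55, and is an assumption rather than a coordinate given in the text.

# Region map of the BNT162b2 modRNA using the base ranges stated above.
REGIONS = [
    ("five-prime cap and UTR (human alpha globin)", 1, 54),  # inferred
    ("signal peptide", 55, 102),
    ("codon-optimized spike gene with 2P substitutions", 103, 3879),
    ("three-prime UTR (AES and mtRNR1)", 3880, 4174),
    ("poly(A) tail with 10-nucleotide linker", 4175, 4284),
]

def check_contiguous(regions, total_length=4284):
    """Verify the regions tile the 4,284-nucleotide sequence end to end."""
    position = 1
    for name, start, end in regions:
        assert start == position, f"gap or overlap before {name!r}"
        position = end + 1
    return position - 1 == total_length

print(check_contiguous(REGIONS))  # True: the stated ranges are contiguous
for name, start, end in REGIONS:
    print(f"{name}: bases {start}-{end} ({end - start + 1} nt)")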
The 2P proline substitutions in the spike proteins were originally developed for a Middle East respiratory syndrome (MERS) vaccine by researchers at the National Institute of Allergy and Infectious Diseases' Vaccine Research Center, Scripps Research, and Jason McLellan's team (at the University of Texas at Austin, previously at Dartmouth College). Chemistry In addition to the mRNA molecule, the vaccine contains the following inactive ingredients (excipients): ALC-0315, ((4-hydroxybutyl)azanediyl)bis(hexane-6,1-diyl)bis(2-hexyldecanoate) ALC-0159, 2-[(polyethylene glycol)-2000]-N,N-ditetradecylacetamide 1,2-distearoyl-sn-glycero-3-phosphocholine (DSPC) cholesterol dibasic sodium phosphate dihydrate monobasic potassium phosphate potassium chloride sodium chloride sucrose water for injection The first four of these are lipids. The lipids and modRNA together form nanoparticles that act not only as carriers to get the modRNA into human cells, but also as adjuvants. ALC-0159 is a polyethylene glycol conjugate, i.e., a PEGylated lipid. Manufacturing Pfizer and BioNTech are manufacturing the vaccine in their own facilities in the United States and in Europe. The license to distribute and manufacture the vaccine in China was purchased by Fosun, alongside its investment in BioNTech. Manufacturing the vaccine requires a three-stage process. The first stage involves the molecular cloning of DNA plasmids that code for the spike protein by introducing them into Escherichia coli bacteria. For all markets, this stage is conducted in the United States, at a small Pfizer pilot plant in Chesterfield, Missouri (near St. Louis). After four days of growth, the bacteria are killed and broken open, and the contents of their cells are purified over a week and a half to recover the desired DNA product. The DNA is bottled and frozen for shipment. Safely and quickly transporting the DNA at this stage is so important that Pfizer has used its company jet and helicopter to assist. The second stage is being conducted at a Pfizer plant in Andover, Massachusetts, in the United States, and at BioNTech's plants in Germany. The DNA is used as a template to build the desired mRNA strands, which takes about four days. Once the mRNA has been created and purified, it is frozen in plastic bags about the size of a large shopping bag, each of which can hold up to 10 million doses. The bags are placed on trucks which take them to the next plant. The third stage is being conducted at Pfizer plants in Portage, Michigan (near Kalamazoo) in the United States, and Puurs in Belgium. This stage involves combining the mRNA with lipid nanoparticles, then filling vials, boxing vials, and freezing them. Croda International subsidiary Avanti Polar Lipids is providing the requisite lipids. As of November 2020, the major bottleneck in the manufacturing process was combining mRNA with lipid nanoparticles. At this stage, it takes only four days to go from mRNA and lipids to finished vials, but each lot must then spend several weeks in deep-freeze storage while undergoing verification against 40 quality-control measures. Before May 2021, the Pfizer plant in Puurs was responsible for all vials for destinations outside the United States. Therefore, all doses administered in the Americas outside of the United States before that point in time required at least two transatlantic flights (one to take DNA to Europe and one to bring back finished vaccine vials).
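As a back-of-envelope illustration, the stage durations given above can be summed to approximate the roughly 110-day end-to-end time Pfizer reported (discussed below). In this Python sketch the quality-control and logistics figure is an assumption chosen so the total matches that report; it is not a published number.

# Rough timeline sketch of the three-stage process. Durations marked
# "assumed" are illustrative; the others come from the description above.
STEPS_DAYS = {
    "stage 1: E. coli growth": 4,
    "stage 1: DNA purification": 10,            # "a week and a half"
    "stage 2: mRNA synthesis": 4,
    "stage 3: LNP formulation and fill": 4,
    "testing, QA and logistics (assumed)": 88,  # most of the total
}

total = sum(STEPS_DAYS.values())
qa_share = STEPS_DAYS["testing, QA and logistics (assumed)"] / total
print(f"total = {total} days; ~{qa_share:.0%} spent on testing, QA and logistics")
# total = 110 days; consistent with "more than half the days" being
# dedicated to testing and quality assurance.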
In February 2021, BioNTech announced it would increase production by more than 50% to manufacture 2 billion doses in 2021, a figure raised again at the end of March to 2.5 billion doses in 2021. In February 2021, Pfizer revealed that the entire sequence initially took about 110 days on average from start to finish, and that the company was making progress on reducing the time to 60 days. More than half the days in the production process are dedicated to rigorous testing and quality assurance at each of the three stages. Pfizer also revealed that the process requires 280 components and relies upon 25 suppliers located in 19 countries. Vaccine manufacturers normally take several years to optimize the process of making a particular vaccine for speed and cost-effectiveness before attempting large-scale production. Due to the urgency presented by the COVID-19 pandemic, Pfizer and BioNTech began production immediately with the process by which the vaccine had been originally formulated in the laboratory, then started to identify ways to safely speed up and scale up that process.

BioNTech announced in September 2020 that it had signed an agreement to acquire a manufacturing facility in Marburg, Germany, from Novartis to expand its vaccine production capacity. Once fully operational, the facility would produce up to 750 million doses per year, or more than 60 million doses per month. The site became the third BioNTech facility in Europe to produce the vaccine, while Pfizer operates at least four production sites in the United States and Europe. The Marburg facility had previously specialized in cancer immunotherapy for Novartis. By the end of March 2021, BioNTech had finished retrofitting the facility for mRNA vaccine production, retrained its 300 staff, and obtained approval to begin manufacturing. Besides making mRNA, the Marburg facility also performs the step of combining mRNA with lipids to form lipid nanoparticles, then ships the vaccine in bulk to other facilities for fill and finish (i.e., filling and boxing vials). In April 2021, the EMA authorized an increase in batch size and associated process scale-up at Pfizer's plant in Puurs. This increase was expected to have a significant impact on the supply of the vaccine in the European Union.

Logistics

The vaccine is delivered in vials that, once diluted, contain 2.25 mL of vaccine, comprising 0.45 mL of frozen concentrate and 1.8 mL of diluent. According to the vial labels, each vial contains five 0.3 mL doses; however, excess vaccine may be used for one, or possibly two, additional doses. The use of low dead space syringes to obtain the additional doses is preferable, and partial doses within a vial should be discarded. The Italian Medicines Agency officially authorized the use of excess doses remaining within single vials. The Danish Health Authority allows mixing partial doses from two vials. As of 8 January 2021, each vial contains six doses. In the United States, vials are counted as five doses when accompanied by regular syringes and as six doses when accompanied by low dead space syringes.

The vaccine can be stored at 2 to 8 °C (36 to 46 °F) for thirty days before use and at room temperature, up to 25 °C (77 °F), for up to two hours before use. During distribution the vaccine is stored in special containers that maintain temperatures between −90 and −60 °C (−130 and −76 °F). Low-income countries have limited cold chain capacity for ultracold transport and storage of a vaccine. The necessary storage temperatures for the vaccine are much lower than for the similar Moderna vaccine.
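The vial arithmetic described under Logistics above can be made concrete with a short calculation. The sketch below is illustrative only: the dead-space volumes are hypothetical round figures chosen to show why syringe choice changes the dose count, not values taken from any label.

    concentrate_ml = 0.45   # frozen concentrate per vial
    diluent_ml = 1.80       # saline diluent added before use
    dose_ml = 0.30          # volume per administered dose

    diluted_ml = concentrate_ml + diluent_ml        # 2.25 mL per diluted vial
    print(round(diluted_ml / dose_ml, 2))           # 7.5 nominal dose volumes

    # Each draw also loses vaccine to the syringe/needle dead space.
    # The figures below are illustrative assumptions, not label values.
    for dead_space_ml in (0.084, 0.035):            # standard vs. low dead space
        full_doses = int(diluted_ml // (dose_ml + dead_space_ml))
        print(f"dead space {dead_space_ml} mL -> {full_doses} full doses")

With the larger assumed dead space the vial yields five full doses, and with the smaller one it yields six, mirroring the five-versus-six counting rule described above.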
The head of Indonesia's Bio Farma, Honesti Basyir, said purchasing the vaccine was out of the question for the world's fourth-most populous country, given that it did not have the necessary cold chain capability. Similarly, India's existing cold chain network can handle only temperatures between 2 and 8 °C, far above the requirements of the vaccine.

History

Before the COVID-19 vaccines, no vaccine for an infectious disease had ever been produced in less than five years, the modern record having been set in 1967 by Maurice Hilleman's mumps vaccine; the Ebola vaccine also took about five years. As of 2019, no vaccine existed for preventing a coronavirus infection in humans. The SARS-CoV-2 virus, which causes COVID-19, was detected in December 2019. The development of the Pfizer–BioNTech COVID-19 vaccine began on Friday 24 January 2020, when BioNTech founder and CEO Uğur Şahin, browsing his regular websites at his home in Mainz, noted a report in the science section of the Der Spiegel website about a novel respiratory illness that had affected approximately 50 people in Wuhan. He then came across a submission from Hong Kong-based researchers on the website of the medical journal The Lancet, in which they discussed a cluster of pneumonia associated with a coronavirus, and an indication of person-to-person transmission, that had affected a family that had recently returned from Wuhan. The authors of the submission were of the opinion that they were observing the early stages of an epidemic. While not an infectious disease expert, Şahin did some quick calculations based on Wuhan's population and transport links and came to the conclusion that if this virus was capable of person-to-person transmission, it could cause a mortality rate somewhere between 0.3 and 10 out of every 100 infected people, giving a best-case scenario of two million deaths worldwide. This would expose him, his family, and his colleagues to danger. At the time there were 1,000 internationally confirmed cases of the virus. Later that day he sent an email to Helmut Jeggle, chairman of BioNTech, to alert him to his conclusions. The next day he discussed his conclusions with his wife, Özlem Türeci, including his belief that once the virus reached Germany, local schools would be closed by April. During a telephone call with Jeggle that same day he discussed the potential impact of such a virus. Şahin and Türeci had previously identified that the mRNA vaccine technology the company had been developing offered the possibility of being used to create a suitable vaccine. Although the company had a small team which had started developing vaccines for infectious diseases and had been collaborating with Pfizer on a flu vaccine, BioNTech, after 11 years of financial losses totalling more than €400 million, was concentrating its efforts on developing mRNA as a means of fighting cancer. However, realizing the risk and believing that the company's proprietary mRNA technology was now at the stage where they had the tools to create a vaccine, Şahin, after discussing it with his wife, spent that weekend outlining the technical construction of eight possible vaccine candidates based on the company's mRNA platforms. He was assisted in his work by the SARS-CoV-2 genetic sequences having been published on 11 January 2020 by Edward C. Holmes, in association with Zhang Yongzhen, a professor at the Chinese Center for Disease Control and Prevention, on the open-source website Virological.org.
This triggered an urgent international response to prepare for an outbreak and hasten development of preventive vaccines. On Monday 27 January, Şahin had a series of meetings with the company's few infectious disease experts and the leaders of most of the departments to discuss his concerns about the virus and to announce his decision to establish a new project, called 'Lightspeed', that would use all of the company's available resources to develop a vaccine. He also decided that rather than following the traditional method of developing a single prototype, discarding it if it did not work, and starting again, they would develop and test multiple vaccine candidates in parallel and discard the least promising.

BioNTech approaches Pfizer about collaborating

At the board meeting the next day, Şahin received permission for the company and its 1,300 personnel to spend a limited amount of money over the following weeks investigating the development of a vaccine, after which the board would re-evaluate whether to continue. The board then considered whether to build up the capability to fully manufacture, document, sell and distribute any potential vaccine, but decided that this would take too long and that it would be better to partner with a pharmaceutical giant. As the company had been collaborating with Pfizer since 2018 on developing an mRNA vaccine for influenza, Şahin called Pfizer's chief scientific officer, Phil Dormitzer, later that Tuesday to tell him what they were doing and to ask whether Pfizer was interested in collaborating with BioNTech. Dormitzer was lukewarm, as he felt that this new virus could be controlled and confined to China by public health measures, and a few hours later he confirmed on behalf of Pfizer that it was not interested.

Consulting the Paul Ehrlich Institute

Prior to contacting Pfizer, Şahin had contacted Klaus Cichutek at the Paul Ehrlich Institute (PEI) in Langen, Germany's drug regulator, to ask for his assistance in arranging a meeting with a panel of experts to discuss a vaccine development strategy and to determine what needed to be done to receive authorisations to undertake a clinical trial. Taking the Wuhan developments very seriously, the PEI had already initiated a vaccine development programme, was providing emergency advice to other drug makers, and was waiving its administration fees. It was more than willing to assist BioNTech, and came back two days later to say that, provided a detailed briefing dossier could be delivered in time, it would meet with them the next week. Corinna Rosenbaum, the lead project manager on the BioNTech flu project, was asked to prepare what eventually became a 50-page dossier detailing how the company had the expertise and technology to create a safe vaccine. Crucial to the delivery of an mRNA vaccine to its cellular destination, via an injection into a human muscle, was the availability of a suitable wrapper made of lipid nanoparticles to protect it from the body's enzymes. As the company had no experience with these, it approached Acuitas Therapeutics, whose proprietary wrapper technology was already being used in human trials and for which all of the necessary safety data were available; this would assist in gaining PEI approval. This small Canadian company of 25 staff was led by Tom Madden. An advantage of using Acuitas Therapeutics was that its ALC-0315 lipid formulation was already available at Polymun, one of the few companies with the expertise to immediately combine lipids with mRNA.
Polymun was located near Vienna in Austria, an eight-hour drive from BioNTech's headquarters, which would make it easier to transport material between the two companies. On Monday 3 February, Acuitas Therapeutics agreed to assist. With Acuitas Therapeutics on board, the briefing dossier was completed and sent to the PEI late on Tuesday, 4 February, six days after work had commenced on compiling it. On 6 February, Şahin, Türeci and Rosenbaum, together with Tom Madden and Chris Barbosa from Acuitas Therapeutics, met with the PEI, which was happy with what BioNTech proposed; the only point of contention was the PEI's rejection of BioNTech's proposal either to skip toxicology studies altogether or to run them in parallel with clinical trials before human trials could begin. This was important because, while trials had shown that the individual components did not cause any significant issues in humans, there were no safety data on the combination of mRNA and lipids. Toxicology studies on mice or rats normally took five months. At this point the PEI's main concerns were about whether there were any benefits in speeding up the normal process. For the vaccine to work, it needed to deliver a stable, accurate replica of the virus's spike protein so that the body's immune system could recognize and react to the virus if the recipient became infected. In developing a stable replica, the team was assisted by advice from Barney S. Graham, who had been studying the MERS virus, which was approximately 54% identical to the uploaded COVID-19 genetic code. There were two options: one was to reproduce a full likeness of the entire spike protein, which would contain approximately 1,200 amino acids (protein building blocks) but would increase the risk of antibody-dependent enhancement (ADE) complications. The other was to reproduce only the tip of the spike protein, known as the receptor-binding domain (RBD). The RBD option was simpler, as it would contain approximately 200 amino acids, and the risk of ADE would be reduced. Şahin decided that BioNTech would explore both methods.

Development of parallel candidates

BioNTech decided to develop in parallel, in its laboratory in Mainz, 20 possible COVID-19 vaccine permutations in different doses based on all four versions of the synthetic mRNA platforms it had developed: modified mRNA (modRNA), uridine RNA (uRNA), self-amplifying mRNA (saRNA) and trans-amplifying mRNA (taRNA). Using the genetic sequences that were available on Virological.org, a team at BioNTech led by Stephanie Hein used gene synthesis to create DNA hardcopies, which were to be used to create the templates to make the mRNA. These hardcopies each contained up to 4,000 nucleotides, assembled from 50 to 80 smaller building blocks. Once these DNA templates were produced, another team created the actual mRNA vaccine candidates, the first batch of which was produced on 2 March. This was poured into a 50 ml bag, frozen to minus 70 degrees Celsius and dispatched by a waiting car to Polymun to be combined with the lipids, a process that was then followed for the rest of the 20 candidates. Once the first vials containing the lipid-wrapped mRNA candidates were received back in Mainz on 9 March, a team led by Annette Vogel began testing them to determine which candidates, at various dosage amounts, induced the best immune responses, first in glass dishes and then, at a separate location, in mice.
Each of the candidates was tested in three dosages (low, medium and high), with each given to eight mice, whose blood was then sampled and analyzed over the next six weeks. The blood was analyzed by a team led by Lena Kranz and Mathias Vormehr to check whether the mice's T-cells reacted and carried out the required immune response. These tests showed that all 20 candidates produced an immune response in the mice. In parallel, Annette Vogel was also using enzyme-linked immunosorbent assays (ELISA) and a virus neutralisation test (VNT) to determine whether the candidates were inducing sufficient neutralising antibodies. Because of the risk that COVID-19 posed, this testing had to be done in a biosafety level three (BSL-3) laboratory, which BioNTech did not have. The company was able to get around this by creating a vesicular stomatitis virus (VSV) pseudovirus in which the harmful elements were replaced with the isolated spike proteins from SARS-CoV-2. A working prototype pseudovirus test was ready by 10 March. This meant the laboratory security requirements could be downgraded to BSL-1, which the company had onsite. To obtain a return on its investment in 'Project Lightspeed', Helmut Jeggle was of the opinion that the company had to take advantage of the massive demand by being among the first three to market with a vaccine. To do this, BioNTech needed the involvement of either GSK, Johnson & Johnson, Merck, Pfizer or Sanofi, who alone had the financial resources, manufacturing ability and territorial coverage to undertake the massive Phase 3 trials needed to prove to the regulators that the vaccine was safe.

BioNTech reapproaches Pfizer about collaborating

Despite the earlier rebuff from Pfizer, the company still preferred to partner with them; in the meantime it was able to reach what was in effect a licensing agreement with Shanghai-based Fosun on 16 March. On 3 March, Şahin contacted Kathrin Jansen, head of vaccine research and development at Pfizer, who by now was of the opinion that mRNA was the best means of creating a COVID-19 vaccine. She took the idea of a collaboration to Pfizer CEO Albert Bourla. While the two companies had been working together since 2018 on developing an mRNA vaccine for influenza, it was only now that their two chief executives became personally acquainted. After a few phone calls, Bourla agreed that Pfizer would work with BioNTech on the development of BioNTech's COVID-19 vaccine, with work commencing immediately and no formal written legal agreement in place to govern the new collaboration. BioNTech transferred its know-how to Pfizer the next day. Bourla agreed to the 50:50 partnership that Şahin proposed, with each company equally sharing costs and any potential profits. Because of BioNTech's limited financial resources, Pfizer agreed to fund BioNTech's costs, which were expected to be $190 million and which would be paid back. As far as Bourla was concerned, COVID-19 was so important that he had told his staff that they had an "open cheque". On 13 March it was formally announced that BioNTech was collaborating with Pfizer, with a letter of intent being signed on 17 March. However, it was not until January 2021 that the formal commercial agreement between Pfizer and BioNTech for the COVID-19 vaccine was signed. The release of news of the partnership brought BioNTech publicity that resulted in the company receiving letters and telephone calls containing racist views and often death threats. Security was tightened and board members were offered personal protection.
Funding

According to Pfizer, research and development for the vaccine cost close to $1 billion. BioNTech received a $135 million investment from Fosun on 16 March 2020, in exchange for 1.58 million shares in BioNTech and the future development and marketing rights of BNT162b2 in China and surrounding territories. In April 2020, BioNTech signed a partnership with Pfizer and received $185 million, including an equity investment of approximately $113 million. In June 2020, BioNTech received €100 million in financing from the European Commission and European Investment Bank. The Bank's deal with BioNTech started early in the pandemic, when the Bank's staff reviewed its portfolio and identified BioNTech as one of the companies capable of developing a COVID-19 vaccine. The European Investment Bank had already signed a first transaction with BioNTech in 2019. In September 2020, the German government granted BioNTech €375 million for its COVID-19 vaccine development program. Pfizer CEO Albert Bourla said he decided against taking funding from the US government's Operation Warp Speed for the development of the vaccine "because I wanted to liberate our scientists [from] any bureaucracy that comes with having to give reports and agree how we are going to spend the money in parallel or together, etc." Pfizer did enter into an agreement with the US for the eventual distribution of the vaccine, as with other countries.

Clinical trials

Phase I–II

Trials were started in Germany on 23 April 2020, and in the U.S. on 4 May 2020, with four vaccine candidates entering clinical testing. The vaccine candidate BNT162b2 was chosen as the most promising among three others with similar technology developed by BioNTech. Before choosing BNT162b2, BioNTech and Pfizer had conducted Phase I trials on BNT162b1 in Germany and the United States, while Fosun performed a Phase I trial in China. In these Phase I studies, BNT162b2 was shown to have a better safety profile than the other three BioNTech candidates.

Phase II–III

The pivotal Phase II–III trial with the lead vaccine candidate BNT162b2 began in July. Preliminary results from Phase I–II clinical trials on BNT162b2, published in October 2020, indicated potential for its safety and efficacy. During the same month, the European Medicines Agency (EMA) began a periodic review of BNT162b2. The study of BNT162b2 is a continuous-phase trial that was in Phase III as of November 2020. It is a "randomized, placebo-controlled, observer-blind, dose-finding, vaccine candidate-selection, and efficacy study in healthy individuals". The study expanded during mid-2020 to assess efficacy and safety of BNT162b2 in greater numbers of participants, reaching tens of thousands of people receiving test vaccinations in multiple countries in collaboration with Pfizer and Fosun. The Phase III trial assesses the safety, efficacy, tolerability, and immunogenicity of BNT162b2 at a mid-dose level (two injections separated by 21 days) in three age groups: 12–15 years, 16–55 years, or above 55 years. The Phase III results, indicating a 95% efficacy of the developed vaccine, were published on 18 November 2020. For approval in the EU, an overall vaccine efficacy of 95% was confirmed by the EMA. The EMA clarified that the second dose should be administered three weeks after the first dose. At 14 days after dose 1, the cumulative incidence begins to diverge between the vaccinated group and the placebo group. The highest concentration of neutralizing antibodies is reached 7 days after dose 2 in younger adults and 14 days after dose 2 in older adults.
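The 95% figure above follows from the standard efficacy formula: vaccine efficacy is one minus the ratio of attack rates in the vaccinated and placebo arms. The sketch below uses the widely reported case split from the November 2020 readout (8 cases in the vaccine arm versus 162 in the placebo arm) together with approximate arm sizes; it is a back-of-the-envelope reconstruction, not the trial's formal statistical analysis.

    # VE = 1 - (attack rate among vaccinated) / (attack rate among placebo)
    cases_vaccinated = 8        # reported COVID-19 cases, vaccine arm
    cases_placebo = 162         # reported COVID-19 cases, placebo arm
    n_vaccinated = 18_198       # approximate evaluable participants, vaccine arm
    n_placebo = 18_325          # approximate evaluable participants, placebo arm

    attack_vax = cases_vaccinated / n_vaccinated
    attack_placebo = cases_placebo / n_placebo
    ve = 1 - attack_vax / attack_placebo
    print(f"vaccine efficacy ~ {ve:.1%}")   # ~95.0%

Because the two arms were close in size, the ratio of raw case counts (8/162) already gives essentially the same answer.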
The ongoing Phase III trial, which is scheduled to run from 2020 to 2022, is designed to assess the ability of BNT162b2 to prevent severe infection, as well as the duration of immune effect. High antibody activity persists for at least three months after the second dose, with an estimated antibody half-life of 55 days. From these data, one study suggested that antibodies might remain detectable for around 554 days.

Specific populations

Pfizer and BioNTech started a Phase II–III randomized controlled trial in healthy pregnant women 18 years of age and older (NCT04754594). The study will evaluate 30 mcg of BNT162b2 or placebo administered via intramuscular injection in two doses, 21 days apart. The Phase II portion of the study will include approximately 350 pregnant women randomized 1:1 to receive BNT162b2 or placebo at 27 to 34 weeks' gestation. The Phase III portion of this study will assess the safety, tolerability, and immunogenicity of BNT162b2 or placebo among pregnant women enrolled at 24 to 34 weeks' gestation. Pfizer and BioNTech announced on 18 February 2021 that the first participants had received their first dose in this trial. A study published in March 2021 in the American Journal of Obstetrics and Gynecology came to the conclusion that messenger RNA vaccines against the novel coronavirus, such as the Pfizer–BioNTech and Moderna vaccines, were safe and effective at providing immunity against infection to pregnant and breastfeeding mothers. Furthermore, it found that naturally occurring antibodies created by the mother's immune system were passed on to their children via the placenta and/or breastmilk, resulting in passive immunity in the child, effectively giving the child protection against the disease. The study also found that vaccine-induced immunity among the study's participants was stronger, in a statistically significant way, than immunity gained through recovery from a natural COVID-19 infection. In addition, the study reported that the occurrence and intensity of potential side effects in those who were pregnant or lactating was very similar to that expected in non-pregnant populations, remaining generally very minor and well tolerated, mostly comprising injection site soreness, minor headaches, muscle aches or fatigue for a short period of time. In January 2021, Pfizer said it had finished enrolling 2,259 children aged between 12 and 15 years to study the vaccine's safety and efficacy. On 31 March 2021, Pfizer and BioNTech announced from initial Phase III trial data that the vaccine is 100% effective for those aged 12 to 15 years, with trials for those younger still in progress. A research letter published in JAMA reported that the vaccine appeared to be safe for immunosuppressed organ transplant recipients, but that the resulting antibody response was considerably poorer than in the non-immunocompromised population after only one dose. The paper acknowledged the limitation of only reviewing the data following the first dose of a two-dose cycle vaccine. In November 2021, journalist Paul D. Thacker alleged there had been "poor practice" at Ventavia, one of the companies involved in the Phase III evaluation trials of the Pfizer vaccine. The report was enthusiastically embraced by anti-vaccination activists. David Gorski commented that Thacker's article presented facts without necessary context to misleading effect, playing up the seriousness of the noted problems.
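The detectability estimate quoted at the start of this section follows from simple exponential decay: with a 55-day antibody half-life, the titer falls by half every 55 days until it crosses the assay's detection threshold. The sketch below shows the arithmetic; the peak-to-threshold ratio is a hypothetical input chosen only to illustrate how an estimate in the range of 554 days can arise.

    import math

    half_life_days = 55.0   # estimated antibody half-life reported above

    def days_until_undetectable(peak_to_threshold_ratio):
        # Time for the titer to decay from its peak down to the threshold:
        # t = half_life * log2(peak / threshold)
        return half_life_days * math.log2(peak_to_threshold_ratio)

    # A peak roughly 1,000-fold above the detection threshold (hypothetical input)
    print(round(days_until_undetectable(1000)))   # ~548 days, near the cited 554

A ratio of just over 1,000-fold reproduces the 554-day figure, which corresponds to roughly ten half-lives of decay.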
Authorizations

Although jointly developed with Pfizer, Comirnaty is based on BioNTech's proprietary mRNA technology, and BioNTech holds the marketing authorization in the United States, the European Union, the UK, and Canada; expedited licenses such as the US emergency use authorization (EUA) are held jointly with Pfizer in many countries.

Expedited

The United Kingdom's Medicines and Healthcare products Regulatory Agency (MHRA) gave the vaccine "rapid temporary regulatory approval to address significant public health issues such as a pandemic" on 2 December 2020, which it is permitted to do under the Medicines Act 1968. It was the first COVID-19 vaccine to be approved for national use after undergoing large-scale trials, and the first mRNA vaccine to be authorized for use in humans. The United Kingdom thus became the first Western country to approve a COVID-19 vaccine for national use, although the decision to fast-track the vaccine was criticized by some experts. After the United Kingdom, the following countries and regions expedited processes to approve the Pfizer–BioNTech COVID-19 vaccine for use: Argentina, Australia, Bahrain, Canada, Chile, Costa Rica, Ecuador, Hong Kong, Iraq, Israel, Jordan, Kuwait, Malaysia, Mexico, Oman, Panama, the Philippines, Qatar, Saudi Arabia, Singapore, South Korea, the United Arab Emirates, the United States, and Vietnam. The World Health Organization (WHO) authorized it for emergency use. In the United States, an emergency use authorization (EUA) is "a mechanism to facilitate the availability and use of medical countermeasures, including vaccines, during public health emergencies, such as the current COVID-19 pandemic", according to the Food and Drug Administration (FDA). Pfizer applied for an EUA on 20 November 2020, and the FDA approved the application three weeks later, on 11 December 2020. The US Centers for Disease Control and Prevention (CDC) Advisory Committee on Immunization Practices (ACIP) approved recommendations for vaccination of those aged sixteen years or older. Following the EUA issuance, BioNTech and Pfizer continued the Phase III clinical trial to finalize safety and efficacy data, leading to an application for licensure (approval) of the vaccine in the United States. On 10 May 2021, the US FDA also authorized the vaccine for people aged 12 to 15 under an expanded EUA. The FDA recommendation was endorsed by the ACIP and adopted by the CDC on 12 May 2021. In October 2021, the EUA was expanded to include children aged 5 through 11 years of age. In June 2022, the EUA was expanded to include children aged six months through four years of age. In February 2021, the South African Health Products Regulatory Authority (SAHPRA) issued Section 21 emergency use approval for the vaccine. In May 2021, Health Canada authorized the vaccine for people aged 12 to 15. On 18 May 2021, Singapore's Health Sciences Authority authorized the vaccine for people aged 12 to 15. The European Medicines Agency (EMA) followed suit on 28 May 2021. In June 2021, the UK Medicines and Healthcare products Regulatory Agency (MHRA) came to a similar decision and approved the use of the vaccine for people twelve years of age and older.

Standard

In December 2020, the Swiss Agency for Therapeutic Products (Swissmedic) granted temporary authorization for the Pfizer–BioNTech COVID-19 vaccine for regular use, two months after receiving the application, saying the vaccine fully complied with the requirements of safety, efficacy and quality.
This was the first authorization under a standard procedure. In December 2020, the Committee for Medicinal Products for Human Use (CHMP) of the European Medicines Agency (EMA) recommended granting conditional marketing authorization for the Pfizer–BioNTech COVID-19 vaccine under the brand name Comirnaty. The recommendation was accepted by the European Commission the same day. In February 2021, the Brazilian Health Regulatory Agency approved the Pfizer–BioNTech COVID-19 vaccine under its standard marketing authorization procedure. In June 2021, the approval was extended to those aged twelve or over. Pfizer's negotiation process with Brazil (and other Latin American countries) was described as "bullying". The contract prohibits the state of Brazil from publicly discussing the existence or the terms of its agreement with Pfizer–BioNTech without the latter's written consent. Brazil was also restricted from donating or receiving donations of vaccines. In July 2021, the U.S. Food and Drug Administration (FDA) granted priority review designation for the biologics license application (BLA) for the Pfizer–BioNTech COVID-19 vaccine, with a goal date for the decision in January 2022. On 23 August 2021, the FDA approved the vaccine for those aged sixteen years and older. The Pfizer–BioNTech Comirnaty COVID-19 vaccine was authorized in Canada in September 2021 for people aged twelve and older. In July 2022, the FDA approved the vaccine for those aged twelve years and older. In September 2022, the CHMP of the EMA recommended converting the conditional marketing authorizations of the vaccine into standard marketing authorizations. The recommendation covers all existing and upcoming adapted Comirnaty vaccines, including the adapted Comirnaty Original/Omicron BA.1 (tozinameran/riltozinameran) and Comirnaty Original/Omicron BA.4/5 (tozinameran/famtozinameran).

Administration of the first non-clinical doses

The first dose administered outside of a clinical trial was given to 90-year-old Margaret Keenan in the outpatient ward at Coventry University Hospital on 8 December 2020. The vial and syringe used for her injection were subsequently sent for display to the Science Museum in London. The first dose administered outside of a clinical trial in the United States was given to Sandra Lindsay on 14 December 2020.

Further development

Homologous prime-boost vaccination

In July 2021, Israel's Prime Minister announced that the country was rolling out a third dose of the Pfizer–BioNTech vaccine to people over the age of 60, based on data suggesting significantly waning protection against infection over time for those with two doses. The country expanded availability to all Israelis over the age of 12 who were at least five months past their second shot. On 29 August 2021, Israel's coronavirus czar announced that Israelis who had not received a booster shot within six months of their second dose would lose access to the country's green pass vaccine passport. Studies performed in Israel found that a third dose reduced the incidence of serious illness. In August 2021, the United States Department of Health and Human Services (HHS) announced a plan to offer a booster dose eight months after the second dose, citing evidence of reduced protection against mild and moderate disease and the possibility of reduced protection against severe disease, hospitalization, and death.
The US Food and Drug Administration (FDA) and the Centers for Disease Control and Prevention (CDC) authorized the use of an additional mRNA vaccine dose for immunocompromised individuals at that time. Scientists and the WHO noted in August 2021 the lack of evidence on the need for a booster dose for healthy people, and that the vaccine remains effective against severe disease months after administration. In a statement, the WHO and its Strategic Advisory Group of Experts (SAGE) said that, while protection against infection may be diminished, protection against severe disease will likely be retained due to cell-mediated immunity. Research into optimal timing for boosters is ongoing, and a booster given too early may lead to less robust protection. In September 2021, the FDA and CDC authorizations were extended to provide a third shot for other specific groups. In October 2021, the European Medicines Agency (EMA) stated that a booster shot of the vaccine could be given to healthy people aged 18 years and older, at least six months after their second dose. It also stated that people with "severely weakened" immune systems could receive an extra dose of either the Pfizer–BioNTech vaccine or the Moderna vaccine starting at least 28 days after their second dose. The final approval to provide booster shots in the European Union is decided by each national government. In October 2021, the FDA and the CDC authorized the use of either homologous or heterologous vaccine booster doses. In October 2021, the Australian Therapeutic Goods Administration (TGA) provisionally approved a booster dose of Comirnaty for people 18 years of age and older. In January 2022, the FDA expanded the emergency use authorization to provide for the use of a vaccine booster dose for those aged 12 through 15 years of age, and it shortened the waiting period after primary vaccination from six months to five months. In May 2022, the FDA expanded the emergency use authorization to provide for the use of a vaccine booster dose for those aged 5 through 11 years of age. In August 2022, the FDA revoked the emergency use authorization for the monovalent vaccine booster for people aged twelve years of age and older and replaced it with an emergency use authorization for the bivalent vaccine booster dose for the same age group.

Heterologous prime-boost vaccination

In October 2021, the US Food and Drug Administration (FDA) and the Centers for Disease Control and Prevention (CDC) authorized the use of either homologous or heterologous vaccine booster doses. The authorization was expanded to include all adults in November 2021.

Bivalent booster vaccination

In August 2022, the "Pfizer-BioNTech COVID-19 Vaccine, Bivalent (Original and Omicron BA.4/BA.5)" (in short: "COVID-19 Vaccine, Bivalent") received an emergency use authorization from the US Food and Drug Administration (FDA) for use as a booster dose in individuals aged twelve years of age and older. One dose contains 15 mcg of "a nucleoside-modified messenger RNA (modRNA) encoding the viral spike (S) glycoprotein of SARS-CoV-2 Wuhan-Hu-1 strain (Original)" and 15 mcg "of modRNA encoding the S glycoprotein of SARS-CoV-2 Omicron variant lineages BA.4 and BA.5 (Omicron BA.4/BA.5)". The bivalent vaccine authorized in the United States is different from the one authorized for use in the United Kingdom, as the latter contains as its second modRNA component 15 mcg of modRNA encoding the S glycoprotein of the earlier BA.1 variant.
In September 2022, the European Union authorized both the BA.1 and the BA.4/BA.5 booster versions of the bivalent vaccine for people aged twelve years of age and older. While the Omicron BA.1 vaccine had been tested in a clinical study, the Omicron BA.4/BA.5 vaccine was only tested in pre-clinical studies. According to the published presentation, the neutralization responses of the Omicron BA.4/BA.5 monovalent, Omicron BA.1 monovalent, Omicron BA.4/BA.5 bivalent and the original BNT162b2 vaccines were explored in a study with BALB/c mice. In October 2022, the FDA amended the authorization for the bivalent booster to cover people aged five years of age and older. In December 2022, the FDA amended the authorization for the bivalent booster to be used as the third dose in people aged six months through four years of age.

XBB.1.5 monovalent vaccine

In September 2023, the FDA approved an updated monovalent (single-component) Omicron variant XBB.1.5 version of the vaccine (Comirnaty 2023–2024 formula) as a single dose for individuals aged twelve years of age and older, and authorized the Pfizer-BioNTech COVID-19 Vaccine 2023–2024 formula under emergency use for individuals aged six months through 11 years of age. The approvals and emergency authorizations for the bivalent versions of the vaccine were revoked. Health Canada approved the Pfizer-BioNTech Comirnaty Omicron XBB.1.5 subvariant monovalent COVID-19 vaccine in September 2023. The UK Medicines and Healthcare products Regulatory Agency approved the use of the Comirnaty Omicron XBB.1.5 vaccine in September 2023.

JN.1 monovalent vaccine

Comirnaty JN.1 contains bretovameran, an mRNA molecule with instructions for producing a protein from the Omicron JN.1 subvariant of SARS-CoV-2. It is under evaluation in Australia.

KP.2 monovalent vaccine

In August 2024, the FDA approved and granted emergency authorization for a monovalent Omicron KP.2 version of the Pfizer–BioNTech COVID-19 vaccine. In June 2024, the FDA had advised manufacturers of licensed and authorized COVID-19 vaccines that the COVID-19 vaccines (2024–2025 formula) should be monovalent JN.1 vaccines. Based on the further evolution of SARS-CoV-2 and a rise in cases of COVID-19, the agency subsequently determined and advised manufacturers that the preferred JN.1 lineage for the COVID-19 vaccines (2024–2025 formula) is the KP.2 strain. The KP.2 version was also approved for use in the European Union.

Society and culture

About 649 million doses of the Pfizer–BioNTech COVID-19 vaccine, including about 55 million doses in children and adolescents (below 18 years of age), were administered in the EU/EEA from authorization to 26 June 2022.

Brand names

BNT162b2 was the code name during development and testing, tozinameran is the international nonproprietary name (INN), and Comirnaty is the brand name. According to BioNTech, the name Comirnaty "represents a combination of the terms COVID-19, mRNA, community, and immunity". Famtozinameran is the INN for the BA.5 variant component in the bivalent version of the vaccine. Raxtozinameran is the INN for the XBB.1.5 variant version of the vaccine.

Economics

Pfizer reported revenue of $154 million from the Pfizer–BioNTech COVID-19 vaccine in 2020, $36 billion in 2021, and $11.22 billion in 2023. In July 2020, the vaccine development program Operation Warp Speed placed an advance order of $1.95 billion with Pfizer to manufacture 100 million doses of a COVID-19 vaccine for use in the United States if the vaccine was shown to be safe and effective.
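The advance order above implies a simple per-dose price, matching the widely reported figure of about $19.50 per dose; the division below is only that arithmetic.

    order_value_usd = 1.95e9    # advance order value
    doses = 100e6               # doses covered by the order
    print(f"${order_value_usd / doses:.2f} per dose")   # $19.50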
By mid-December 2020, Pfizer had agreements to supply 300 million doses to the European Union, 120 million doses to Japan, 40 million doses (10 million before 2021) to the United Kingdom, 20 million doses to Canada, an unspecified number of doses to Singapore, and 34.4 million doses to Mexico. Fosun also has agreements to supply 10 million doses to Hong Kong and Macau.

Pfizergate investigation

Accounts of how Pfizer got its way into a large deal to provide 1.8 billion doses of its vaccine to the European Union were described by The New York Times as "a striking alignment of political survival and corporate hustle". Shots worth €4 billion were reportedly wasted before the deal was re-negotiated. In early 2023, Belgian prosecutors began investigating European Commission President Ursula von der Leyen and Pfizer CEO Albert Bourla. The case was taken over in 2024 by the European Public Prosecutor's Office, citing "interference in public functions, destruction of SMS, corruption and conflict of interest."

Access

Pfizer has been accused of hindering vaccine equity. In 2021, Pfizer delivered only 39% of the contractually agreed doses to the COVAX programme, a number that equals 1.5% of all vaccines produced by Pfizer. The company sold 67% of its doses to high-income countries and sold none directly to low-income countries. Pfizer actively lobbied against a temporary waiver of intellectual property rights that would have allowed the vaccine to be produced by others without having to pay a royalty fee.

Misinformation

Videos circulated on video-sharing platforms around May 2021 showing people having magnets stick to their arms after receiving the vaccine, purportedly demonstrating the conspiracy theory that vaccines contain microchips; these videos have been debunked.

Notes

References

Further reading

External links

Global Information About Pfizer–BioNTech COVID-19 Vaccine (also known as BNT162b2 or as Comirnaty) by Pfizer
Comirnaty Safety Updates from the European Medicines Agency
Product information from the Centers for Disease Control and Prevention

BioNTech
American COVID-19 vaccines
German COVID-19 vaccines
Pfizer
Products introduced in 2020
RNA vaccines
COVID-19 vaccination in the United States
2020 in biotechnology
2020 in medicine
Withdrawn drugs
Pfizer–BioNTech COVID-19 vaccine
Chemistry
12,537
25,102,062
https://en.wikipedia.org/wiki/Green%20building%20on%20college%20campuses
Green building on college campuses is the purposeful construction of buildings on college campuses that decreases resource usage in both the building process and the future use of the building. The goal is to reduce emissions, energy use, and water use, while creating an atmosphere where students can be healthy and learn. Universities across the country are building to green standards set forth by the U.S. Green Building Council (USGBC). The USGBC is a non-profit organization that promotes sustainability in how buildings are designed and built. This organization created the Leadership in Energy and Environmental Design (LEED) rating system, which is a certification process that provides verification that a building is environmentally sustainable. In the United States, commercial and residential buildings account for 70 percent of electricity use and over 38 percent of emissions. Given the scale of this resource usage and these emissions, the potential for more efficient building practices is dramatic. Since college campuses are where the world's future leaders are being taught, colleges are choosing to construct new buildings to green standards in order to promote environmental stewardship to their students. Colleges across the United States have taken leading roles in the construction of green buildings in order to reduce resource consumption, save money in the long run, and instill the importance of environmental sustainability in their students. It is also a way to motivate a new generation to live sustainably.

Benefits of Green Building on Campuses

Green buildings on college campuses provide benefits to the campus in several different ways. Campuses can benefit from short- and long-term economic advantages. Initially, federal and state governments will sometimes provide tax incentives for buildings constructed to surpass the standards set by the government. There are also long-term savings. According to the USGBC, with an upfront investment of 2% in green building design, the resulting life-cycle savings are 20% of the total construction costs. With many universities lacking funding, this kind of savings could dramatically help the yearly budget. Along with this increase in monetary savings, green building and architecture have been shown to make occupants more productive. Studies have shown a link between improved lighting design and a 27% reduction in the incidence of headaches. Also, students with the most daylighting in their classrooms progressed 20% faster on math tests and 26% faster on reading tests in one year than those with less daylighting. Both of these studies show that better lighting conditions, one of the main features of green buildings, can increase the productivity of a building's occupants. Students at colleges where green buildings are in use will benefit from this increased potential to gain knowledge. The last important benefit of green buildings on college campuses is having the university seen as environmentally sustainable. Students are becoming increasingly aware of the issues the Earth faces with carbon emissions and increased consumption. These students want to attend universities that are striving to reduce their environmental impact. Universities participating in sustainable initiatives, like constructing green buildings, will attract more highly qualified students. Green buildings on campuses benefit both the school and the students.
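The USGBC figures above imply a large multiple on the initial premium. Using a hypothetical construction budget (the $10 million figure below is an assumption for illustration, not a quoted project cost), the arithmetic behind the 2% upfront versus 20% life-cycle savings claim looks like this:

    construction_cost = 10_000_000                   # hypothetical campus building
    green_premium = 0.02 * construction_cost         # 2% upfront -> $200,000
    lifecycle_savings = 0.20 * construction_cost     # 20% of cost -> $2,000,000
    print(f"upfront premium:    ${green_premium:,.0f}")
    print(f"life-cycle savings: ${lifecycle_savings:,.0f}")
    print(f"return multiple:    {lifecycle_savings / green_premium:.0f}x")   # 10x

On these figures, every dollar of green-design premium returns ten dollars over the building's life.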
LEED Rating System

Many institutions in the United States are administering the LEED (Leadership in Energy and Environmental Design) Green Building Rating System. The LEED Rating System has been nationally recognized as the leading method of constructing green buildings. The rating system incorporates the design, construction, and maintenance of the building. LEED promotes a cradle-to-cradle approach with regard to construction and design materials. The rating system is composed of six sections: Site Planning, Water Management, Energy Management, Material Use, Indoor Air Quality, and the Innovation & Design Process. Each section is composed of credits and points, which ultimately determine how "green" the building is constructed, designed, and maintained.

LEED Certification Levels

LEED has four different levels of certification, depending on how many credits and points were obtained through the LEED Rating System. There are 100 possible base points plus an additional 6 points for Innovation in Design and 4 points for Regional Priority. Buildings can qualify for four types of certification:
Certified: 40-49 points
Silver: 50-59 points
Gold: 60-79 points
Platinum: 80 points and above

LEED-NC Application Guide for Multiple Buildings and On-Campus Building Projects (AGMBC)

The USGBC has issued an application guide for administration of the LEED Rating System on college, corporate, or government installations that include multiple buildings. This guide is designed for projects where several buildings will be constructed at once or in phases, or where a single building is constructed in a setting of existing buildings with common ownership. Note, however, that the AGMBC applies to LEED Rating System Versions 2.1 and 2.2. The methods described still apply to new construction on campuses.

Issues with the AGMBC

The Sustainable Sites category is the most challenging category, and it is the most detailed section in the AGMBC. Campus settings sometimes have established property lines through campus but share a common infrastructure between areas (for example, street lighting may encroach on another building's site, or stormwater routes may feed into shared retention areas). A single overall LEED certification sign may not appeal to a college trying to market its dedication to LEED.

Multi-Building Certification Methods

Certifying a new building within a setting of existing buildings that are considered a campus, i.e. there is one owner or common property management and control. Use of a retention pond not on "site" but on campus would still qualify for LEED credit.
Certifying a group of new buildings as a package, where the entire building set will be rated as a package and only one rating received. These buildings may constitute the entire campus or be a subset of an existing campus.
Certifying new buildings where each new building is constructed to a set of standards but will receive an independent rating based on achievement of credits beyond the standards specific to that building. These buildings may constitute the entire campus or be a subset of an existing campus.

Required LEED Levels for Select Colleges

These are ten colleges around the US determined to build for a sustainable future. Each college outlines its commitment in campus sustainability initiatives and mission statements.
Brown University - Requires all new construction to be at least "Silver."
California Polytechnic State University - Requires all new construction to be at least "Certified."
Georgia Institute of Technology - Requires all new construction to be at least "Certified."
Harvard University - Requires all new construction to be at least "Silver."
Massachusetts Institute of Technology - Requires all new construction to be at least "Silver."
Northwestern University - Requires all new construction to be at least "Certified."
Princeton University - Requires all new construction to be at least "Silver."
University of Florida - Requires all new construction to be at least "Gold."
University of North Texas - Requires all new construction to be at least "Silver."
University of Oregon - Requires all new construction to be at least "Certified."
University of Vermont - Requires all new construction to be at least "Silver."
The University of Florida is the only college committed to a minimum of LEED "Gold" certification.

Campus Green Building Techniques

The following methods are becoming more prevalent on campuses around the nation. Because of the large scale of college campuses, the impact of these methods on energy savings and occupant comfort is substantial.
Green roofs - Living, vegetative roofing alternatives; a solution to the heat island effect associated with buildings.
Low-VOC paints - Drastically limit odorous, harmful, or irritating emissions and enhance occupants' comfort.
Compact fluorescent bulbs - Use less energy and give off less heat, saving energy otherwise used to cool the building.
Using recycled content
Buying and using local materials - Local materials have lower transportation costs because less energy is needed to move them.
Tree preservation and relocation
Low-flow plumbing fixtures - Use less water per flush.
Alternative transportation - Campuses use bike transportation, rapid bus transit, and safe pedestrian walkways. Zipcar is also becoming popular on many college campuses.

Sustainable Materials used in Green Building

The following are some examples of sustainable products used in green building. These materials are less harmful to the environment, and nowadays many materials have a "green" substitute.

Division 3: Concrete
PS 4000 Flat Wall Form - An improved tongue-and-groove design simplifies installation on the job and minimizes the problems associated with concrete spillage at the top of the wall. The design provides strength, fire resistance, and dimensional stability, delivering cost-effective, high-performance structures that are safer, quieter, more comfortable, more energy efficient, and more structurally secure and environmentally responsible.
Fly ash - Because fly ash use displaces Portland cement use, it reduces the need for cement production, which is a major energy user and a leading source of "greenhouse gas" emissions. It offers better performance without an increase in cost: it can replace up to 30% by mass of Portland cement, and can add to the concrete's final strength and increase its chemical resistance and durability.

Division 4: Masonry
Cavclear Masonry Mat - A fluid-conducting, non-absorbent polymer mesh made from 100% recycled plastic that is installed full-height in the airspace. It prevents mortar from bridging the airspace and results in a continuous area for drainage and ventilation, ensuring water management and reducing the building's life-cycle costs.
Sealtech Block - Certified "green" with 10% recycled high-strength plastic powder.
Its non-porous surface means decreased permeability, making it water-resistant, and it is stronger than standard concrete block yet 10% lighter, translating into reduced shipping and labor costs.

Division 5: Metals
Maze nails - Made from recycled steel; the scrap steel generated while making nail heads goes right back to the steel mill for re-melting. The nails are galvanized with a dual zinc coating for durability.
Cold-formed metal framing - Lightweight and dimensionally stable. Contains 20-25% recycled material (10-15% post-consumer content, though some manufacturers have in excess of 90% recycled content). Steel studs can even be recycled at the end of a building's life.

Division 6: Woods, Plastics, and Composites
Ecosurfaces - Made from recycled tires. Slip-resistant and weather-resistant, able to withstand extreme temperatures.
Reclaimed lumber - If not reused, the wood would be burned or chipped, and reuse protects old-growth forests. It is durable and aesthetically pleasing; the wood has become stabilized over time, which prevents changes due to humidity.
Engineered wood (glulam) - Provides a significant environmental advantage over solid wood by using fast-growing, small-diameter trees effectively.
Plastic lumber - Makes use of recycled plastic and is an effective replacement for pressure-treated lumber, which also protects timber resources. It will not rot, absorb water, splinter, or crack, and it is resistant to oil, salt, and chemicals.

Division 7: Thermal and Moisture Protection
Concrete roof tiles - Made from an approximate mix of three parts sand to one part cement and 10% water. Limited maintenance is necessary; the tiles are wind-resistant and can last up to 100 years.

Division 8: Openings
GreenScreen PVC-free fabrics - PVC-free construction of polyurethane and a specially designed, pre-stretched polyester core, offered in different levels of visibility (3%, 5%, 10%, and 25%). The elimination of PVC content means the shades contain no VOCs and do not off-gas during the life of the product, and makes it easier and quicker to recycle GreenScreen fabrics and divert them from landfills.

Division 9: Finishes
Marmoleum flooring - Raw materials and energy are used efficiently, waste is recycled wherever possible, and emissions are kept to an absolute minimum. Life-cycle analysis shows that these linoleum products are an ecologically preferred floor covering. Linoleum is an organic product produced from renewable materials: linseed oil, wood flour, jute and ecologically responsible pigments.
Cork floating floor - Highly compressible and resilient; an excellent sound and thermal insulator. Lightweight and buoyant, a natural fire retardant, hypoallergenic, and insect-resistant.
Australian Chestnut flooring - LEED qualification under MR 7 (Certified Wood); the product is certified according to the principles and criteria of the Forest Stewardship Council (FSC), adhering to strict environmental and social standards, and easily meets the E-1 standard for indoor air quality.
Bamboo flooring - Bamboo is not a wood but a type of grass. It is a quickly renewable resource that can be harvested in as little as five years, is very strong and stable (more so than many hardwoods), and is less likely to swell or shrink.

Division 12: Furnishings
Climatex upholstery fabrics, used for climate-control seating - Climatex is a mixture of three fibers to provide seating comfort: pure wool, which is excellent for heat conservation and moisture absorption; polyester, which allows fast humidity transport; and ramie, which offers a cooling effect and great moisture transport.
Division 26: Electrical
Evergreen solar panels - A rigid, double-walled, deep frame with integrated water drainage holes. Low energy, with an energy payback time as rapid as 18 months, and low carbon and lead use.

International Campus Sustainability Organizations

International Sustainable Campus Network

Universities have a leadership role in advancing the knowledge, technology and tools to create a sustainable future. To fulfill this role effectively and with high credibility, they need to include a focus on sustainability in their own operations and facilities. Campus projects, be they educational or corporate campus developments, present interesting sustainability challenges and opportunities. First, their size is at the borderline between single-building projects and small towns, a fruitful scale for innovative energy and transport solutions. Second, they are to a certain degree one-purpose neighborhoods focused on education, research, and the development or distribution of new ideas, products or services. The network's goals are:
Goal 1: sustainable construction, renovation, and operation
Goal 2: sustainable master planning and development, mobility, and community integration
Goal 3: linking facilities, research and education for sustainable development
Partners: Technische Universität Darmstadt, Australian National University, University of California, Berkeley, City of Zurich, Dundalk Institute of Technology, Swiss Federal Institute of Technology in Lausanne (EPFL), Swiss Federal Institute of Technology in Zurich (ETH Zurich), Harvard University, HEEPI, Hosei University, KTH Royal Institute of Technology, Los Angeles Community College District, National University of Singapore, Pontifical Catholic University of Peru, Stanford University, The Sustainability Forum, Tongji University, University of Applied Sciences of Trier-Birkenfeld, University of Copenhagen, University of Zurich – CCRS, University of Gothenburg, University of Luxembourg and Yale University.

International Green Construction Code

The International Green Construction Code (IGCC) is a project of the International Code Council (ICC). As part of its commitment to green and sustainable safety concepts, the Code Council is developing a new set of green codes under the multi-year initiative "IGCC: Safe and Sustainable by the Book." This initiative includes collaboration with the council's closest allies and pre-eminent thought leaders in green building, as well as outreach and feedback from its members and the general public. The IGCC is committed to developing an effective and efficient code that continues the ICC's long tradition of international code guidance.

World Green Building Council

The World Green Building Council is an international organization that facilitates the green building councils of many developed and developing nations. The Council started in 1999 with its first meeting in California, attended by eight members: the U.S. Green Building Council, the Green Building Council of Australia, the Spain Green Building Council, the United Kingdom Green Building Council, the Japan Green Building Council, and representatives from the United Arab Emirates, Russia and Canada. The WorldGBC incorporated in 2002 and operates from Toronto, Canada. There are currently over 15 established GBCs and 35 emerging and prospective countries with GBCs.
Campus Green Building Case Studies

United States

Stanford University: Knight Management Center
Stanford is a leading university in the green movement, and the school is striving to achieve LEED Platinum certification for its new graduate school of business, the Knight Management Center, which is scheduled to open in the winter of 2011. The center will have eight buildings around three quadrangles. According to the principal architect, Stan Boles of Boora Architects in Portland, Oregon, "The orientation of the buildings is narrow in the north-south dimension. They are designed for optimum daylighting, ventilation, and for shading of one another. The exterior walls are designed so that areas of glass are created but shaded by exterior screens to prevent excessive heat gain."
This project aims to:
Reduce overall water usage by at least 40%.
Exceed current energy efficiency standards by at least 40%.
Generate at least 12% of its electricity on site through solar energy.
Use rainwater or re-circulated gray water to reduce potable water use for building sewage conveyance by 80%.
Recycle or salvage 50% to 70% of non-hazardous construction debris.
Use low- or non-volatile organic compound-emitting materials to ensure exceptional indoor air quality.
Stanford's president, John L. Hennessy, said, "One of the biggest global challenges facing us today is the sustainable use of our planet's natural resources. The Graduate School of Business will play a key role in helping us address these challenges by leading the way in its sustainable development of this new campus." Stanford University is taking an active role in constructing green buildings on its campus, and the Knight Management Center will be a prominent example of how a building can be sustainable.

University of California at Santa Barbara: Donald Bren School of Environmental Science & Management
The Donald Bren School of Environmental Science & Management is located at the University of California, Santa Barbara, California. The academic laboratory and classroom facility demonstrates cost-effective, energy-efficient technologies and operations. The concrete and steel frame structure was completed in 2002 and cost approximately $27,500,000. Donald Bren Hall was the first laboratory to receive LEED Platinum accreditation, the highest rating achievable through the US Green Building Council's national rating system, with the following building design features:
Site Protection: Since Donald Bren Hall is located adjacent to the ocean, a strict site protection plan was developed and implemented to ensure all storm water is retained onsite to prevent contamination of local waterways.
Water Efficiency: A separate reclaimed water system was installed to furnish greywater to flush toilets and irrigate the landscape. Waterless urinals were also installed, and it is estimated that each waterless urinal will save approximately 45,000 gallons of water per year.
Energy Efficiency: The design includes a 40 kW rooftop photovoltaic system, natural ventilation linked with a window interlock system for heating, daylighting controls, energy-efficient lighting, a high-efficiency boiler, and a chiller integrated into a virtual chilled water loop. These energy efficiency measures helped the building exceed Title 24 (1998 Standards) by 31%.
Materials Efficiency: 93% of the construction waste generated onsite was diverted from the landfill.
Recycled-content products include 12-20% fly ash in the concrete, glass tiles and countertops, 100% post-consumer recycled content carpet, and tire-derived rubber flooring. Other environmentally preferable products for the interior surface materials included linoleum and natural cork flooring, bamboo cabinetry, and stained concrete flooring.
According to Great Buildings, "The Donald Bren School at the University of California, Santa Barbara takes advantage of a beautiful setting near the Pacific Ocean to become a green building that embraces its environment not only for efficiency, but for experience. With a striking open courtyard, it provides ample opportunity for social interaction that makes the transition between indoors and outdoors much smoother and ephemeral than most buildings. Building Bren Hall with sustainable materials and methods is estimated to have added only 2% to the building cost, which will easily be offset over time by energy savings."

University of North Carolina at Chapel Hill: Botanical Gardens Education Center
The Education Center is located at the University of North Carolina at Chapel Hill. The building consists of three major sections connected by covered breezeways. The central wing welcomes visitors to the education center as they enter the garden through a large breezeway. The east wing offers classrooms for students enrolled in workshops and classes, and the west wing features the Reeves Auditorium. This large multipurpose space is used for lectures, conferences, and special events. The Education Center plans to achieve a LEED Platinum rating, most likely the first ever in North Carolina, with these features:
Site Selection and Design: The Education Center was located with an efficient solar orientation. Also, during the construction process, there was minimal disturbance to grade, and existing vegetation was well protected.
Water Efficiency: The building uses water-efficient native landscaping and low-flow plumbing. Stormwater is conserved and re-used; rainwater cisterns, gardens, and retention swales are also used.
Energy Efficiency: Geothermal wells are used for efficient heating and air-conditioning. Photovoltaic and solar cells have been installed on the building, and natural lighting is used very effectively along with daylight sensors that automatically dim the electric lights when daylight is sufficient.
Materials Efficiency: To minimize transportation costs and carbon dioxide emissions, and to stimulate local economies, all materials were locally and sustainably produced. No wood came from old-growth trees; all the wood came from certified sustainable forests. At least 75% of the construction waste was recycled, and no toxic or off-gassing materials were used.
The new Education Center expresses a sense of place and celebrates relationships between humans and nature through the integration of indoor and outdoor spaces. Open breezeways, comfortable porches, natural light in every room, beautiful native plant landscaping, and educational exhibits inform, delight, and invite visitors to the Conservation Garden. Most of all, the building is a center of learning, teaching both the science and the enjoyment of plants and nature.

University of Florida: James W. Heavener Football Complex
The University of Florida's new football complex, the James W. Heavener Football Complex, was completed in 2008 and received a LEED Platinum rating for the environmental sustainability of the building.
The facility contractor was PPI Construction Management and the architect was RDG Planning and Design. The building includes offices, conference rooms, an atrium to display the football team's accomplishments, and a weight training facility. The complex earned 52 of the 69 available LEED points, which gave the building the Platinum rating. This facility is the first Platinum-rated athletic facility in the United States as well as the first Platinum-rated building in the state of Florida. The $28 million building exceeded the original goal of obtaining a LEED Silver rating.
This building has many features that helped it achieve the Platinum level. The features dealing with water usage reduce the building's indoor water use by 40 percent. Due to all of the facility's energy-saving features, the building exceeds state and national energy requirements by 35 percent. Another interesting fact about the construction of this building is that most of the material used in the construction came from within 500 miles of the University of Florida, which reduced the emissions created from transporting the material. Also, 78 percent of the building debris was recycled. The assistant director of LEED at UF, Bahar Armaghani, said, "Green Buildings are not exclusively concerned with saving money through more efficient technology. They are also investments for the well-being of the people and environment." The University of Florida has taken on an initiative to have all new construction be LEED Gold certified or higher, and with the construction of this facility the school has surpassed its own requirements by achieving the Platinum rating.
Key Features of the Heavener Football Complex:
Occupancy sensors to control lighting
Organic carpet
Paint and flooring made out of recycled materials
Low-flow water fixtures and water-saving shower heads
Dual-flush toilets
Low-e glazing, insulation, and reflective material on glass
Green roof on weight room
100% reclaimed water for irrigation

High Point University School of Education
High Point University, located in High Point, North Carolina, has a LEED-certified building that houses the School of Education. The 31,000-square-foot building houses the education and psychology departments in technologically advanced classrooms, computer labs and offices. It features high-tech educational equipment, such as smart boards, a children's book library, math and science touch screen games, a methods lab designed to look and feel like a real elementary school classroom, a Mac lab and psychology research booths. The School of Education building is setting an example for modern-day energy conservation with features like floor-to-ceiling windows for natural lighting and light sensors in the rooms.
Key Statistics:
Water usage is cut by 30 percent inside the building and by 50 percent in its irrigation system.
Energy usage is decreased by 24 percent.

International

Charles Hostler Student Center
The Charles Hostler Student Center on the campus of the American University of Beirut provides a model for environmentally responsive design that meets the social needs of the campus and the larger region. Situated on Beirut's seafront and main public thoroughfare, the new facility houses competitive and recreational athletic facilities for swimming, basketball, handball, volleyball, squash, exercise and weight training. The space also includes an auditorium with associated meeting rooms, a cafeteria with study space, and underground parking for 200 cars.
Green Building methods:
Organized as building clusters as opposed to a single building, allowing the building forms themselves to redistribute air, activity and shade.
The east–west orientation of the building forms helps to shade exterior courtyards, reducing the amount of southern exposure. The orientation also directs nighttime breezes and daytime sea breezes to cool outdoor spaces.
Green spaces on the rooftops allow for a more pleasing physical and visual integration with the upper campus, providing usable rooftop areas for activities and reducing the amount of exposure to the sun.
Usable program area on the site is increased through shading and ventilation of outdoor spaces.

Dubai International Academic City Phase-III
Dubai International Academic City Phase-III (DIAC Phase-III) comprises four academic buildings and a food court. It has received Silver LEED certification, and is expected to save approximately AED2.3 million per year from reduced energy costs, district cooling demand charges, irrigation water costs, sewage tanker and domestic water costs.
Green building component features:
Heat recovery wheels
Enhanced levels of insulation
Optimization of fresh air through variable speed drives on air handling units
Recessed windows
Significantly low lighting power densities
These features will make this cluster 21.7% more energy efficient than the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) 90.1-2004 standards. The cluster will also consume 30% less water than the standards set by the U.S. Environmental Protection Agency (EPA), as well as 40% less irrigation water. These savings have been achieved by the installation of ultra-low-flow water restrictors in wash basins and dual-flush tanks in wash rooms, as well as additives in the soil for the landscape areas.

See also
"LEED For New Construction". USGBC. Retrieved 2009-11-13.

Notes

References
United States Green Building Council. (2005, October). LEED-NC Application Guide for Multiple Building and On-Campus Building Projects (AGMBC) [Policy Manual].

External links
LEED at the United States Green Building Council
World Green Building Council
Canada Green Building Council
Sustainable Building Alliance
UNEP-SBCI
Second Nature

University and college buildings
Sustainable architecture
Low-energy building
Energy conservation
Sustainable urban planning
Green building on college campuses
Engineering,Environmental_science
5,617
38,688,636
https://en.wikipedia.org/wiki/Sierra%20Club%20v.%20Babbitt
Sierra Club v. Babbitt, 15 F. Supp. 2d 1274 (S.D. Ala. 1998), is a United States District Court for the Southern District of Alabama case in which the Sierra Club and several other environmental organizations and private citizens challenged the United States Fish and Wildlife Service (FWS). Plaintiffs filed an action seeking declaratory and injunctive relief regarding two incidental take permits (ITPs) issued by the FWS for the construction of two isolated high-density housing complexes in habitat of the endangered Alabama beach mouse (Peromyscus polionotus ammobates). The District Court ruled that the FWS must reconsider its decision to allow high-density development on the Alabama coastline that might harm the endangered Alabama beach mouse. The District Court found that the FWS violated both the Endangered Species Act (ESA) and the National Environmental Policy Act (NEPA) by permitting construction on the dwindling beach mouse habitat.

Background information

Endangered Species Act
The ESA of 1973 was signed by President Richard Nixon on December 28, 1973, and provides for the conservation of species that are endangered or threatened throughout all or a significant portion of their range, and the conservation of the ecosystems on which they depend. Under the ESA, species are defined as subspecies, varieties, and (for vertebrates) distinct population segments. The ESA protects endangered and threatened species and their habitats by prohibiting the "take" of listed animals and the interstate or international trade in listed plants and animals, including their parts and products, except under federal permit. Section 9(a)(1) of the ESA sets out the general prohibition on taking listed species. Take is defined as "to harass, harm, pursue, hunt, shoot, wound, kill, trap, capture, or collect or attempt to engage in any such conduct."

Habitat Conservation Plans
In 1982, Congress amended the ESA to allow limited take of listed threatened and endangered species incidental to lawful development projects. This amendment requires the issuance of an Incidental Take Permit (ITP) by either the Secretary of the Interior or the Secretary of Commerce. To mitigate possible take of listed species, Section 10(a) of the ESA requires that parties obtaining an ITP submit a Habitat Conservation Plan (HCP). An HCP is a required part of an application for an ITP, a permit issued under the ESA to private entities, including private citizens, corporations, Tribes, States, and counties, undertaking projects that might result in the destruction of an endangered or threatened species. The HCP lays out the proposed actions, determines the effects of those actions on affected wildlife species and their habitats, and defines measures to minimize and mitigate adverse effects. The FWS and the National Marine Fisheries Service (NMFS) oversee the HCP program.

National Environmental Policy Act
Enacted in 1969, the National Environmental Policy Act (NEPA) was one of the first laws to establish a broad national framework for protecting the environment. NEPA's basic policy is to assure that all branches of government give proper consideration to the environment prior to undertaking any major federal action that could significantly affect the environment. A project is federally controlled when it requires federal licensing, federal funding, or is undertaken by the federal government. When such a project is determined to have significant effects on the human environment, an environmental impact statement (EIS) is required.
An EIS for a proposed project outlines in detail the proposed actions, alternative actions (including no action), and their probable environmental ramifications. The environmental impact statement must cover the plausible alternatives, which are generally determined by the rule of reason.

Major parties

Sierra Club
The Sierra Club was the major petitioner in this case. It was founded in 1892 by John Muir in San Francisco, California, and is one of America's oldest, largest, and most influential grassroots environmental organizations. The Sierra Club's mission is to explore, enjoy, and protect the wild places of the earth; to practice and promote the responsible use of the earth's ecosystems and resources; to educate and enlist humanity to protect and restore the quality of the natural and human environment; and to use all lawful means to carry out those objectives.

Fish and Wildlife Service (FWS)
The FWS was the major respondent in this case. It is a federal government agency within the United States Department of the Interior dedicated to the management of fish, wildlife, and natural habitats. The mission of the agency is "working with others to conserve, protect, and enhance fish, wildlife, plants and their habitats for the continuing benefit of the American people." Under the ESA, the FWS is responsible for protecting endangered and threatened species and their habitats. Under provisions of section 7(a)(2) of the ESA, a federal agency that carries out, permits, licenses, funds, or otherwise authorizes activities that may affect a listed species must consult with the FWS to ensure that its actions are not likely to jeopardize the continued existence of any listed species.
Bruce Babbitt served as the U.S. Secretary of the Interior from 1993 to 2001. As Secretary of the Interior, Babbitt was responsible for overseeing several government agencies, including the FWS.

Facts
The Alabama beach mouse, a sand-colored mouse indigenous to the beaches and sandy fields of southern Alabama, was listed as endangered in 1985 due to the drastic destruction of the species' habitat by residential and commercial development, recreational activity and tropical storms. At the time of listing, 671 acres of beach mouse habitat remained on the Fort Morgan Peninsula on the Alabama coast. The FWS speculated that the remaining habitat might not be adequate to allow the beach mouse population to recover. Since then, the habitat has been further reduced by commercial and residential development, a golf course, and a series of hurricanes. Nevertheless, the FWS permitted two isolated high-density housing complexes within the beach mouse habitat, the Aronov project and the Fort Morgan project, which were brought to suit in this case.
The ESA offers few exceptions for developers that find an endangered species on the land they wish to develop. In order to get permits, the developers must prepare an HCP, which shows the impact on the species and ways to mitigate that impact. The FWS approved the developers' plans and issued permits for the two Fort Morgan developments. It claimed that the permits "will not jeopardize the beach mouse" or harm its critical habitat. However, the FWS did remain concerned over whether the mitigation in the permit plans was to the maximum extent practicable, as required by the ESA.
Arguments
The Sierra Club challenged the issuance of these permits under the ESA and the NEPA, asking the District Court to suspend the permits, which were based on the HCP, until the FWS revised its environmental analysis and permit conditions. The plaintiffs brought three claims: 1) the level of off-site mitigation funding was inadequate and lacked any rational basis; 2) the FWS's offsite mitigation policy was inconsistent; and 3) the FWS's reliance on unnamed sources to pay the additional costs for providing adequate off-site mitigation was arbitrary and capricious.
First, the Sierra Club claimed that the mitigation part of the HCP was not sufficient under the ESA. In response, the FWS stated that its concerns were met when the applicant added mitigation measures. However, the FWS's own field offices had concerns over the impacts of the planned development on the Alabama beach mouse. The regional office stated that the effects of the Fort Morgan project were the largest of any beach mouse HCP to date, yet the project provided the least mitigation. In court, the FWS claimed that mitigation concerns were addressed before the Biological Opinion and that it was going to require additional funds for offsite mitigation. The plaintiffs held that the FWS decision was not rational.
Second, the Sierra Club claimed that the FWS failed to develop standards to determine the appropriate amount of mitigation necessary for the survival of the beach mouse. The FWS decision to grant permits was inconsistent with its own Habitat Conservation Planning Handbook. According to the Handbook, mitigation measures should be as consistent as possible for all HCPs with similarly situated species. The Handbook states that "the Service should not apply inconsistent mitigation policies for the same species, unless differences are based on biological or other good reasons and are clearly explained." The court examined mitigation requirements for other projects within the beach mouse habitat and found no consistency. The court did not find that the FWS followed its own guidelines. What the court did find, however, was that the FWS had no justification for the issuance of the permits.
Third, the Sierra Club also challenged the plan because the sources intended to fund the offsite mitigation efforts remained unnamed. The FWS mentioned that additional funds were needed for mitigation efforts, but did not specify where the funding would come from. The Biological Opinion required additional funds from nonprofit organizations in order to fully mitigate the projects, but never stated how much, from whom, or the likelihood that the funds would ever be acquired. The court agreed and cited the FWS's own documentation, which stated that the "Applicant's offsite mitigation funding would have to be combined with additional funds from a non-profit organization in order to purchase a large tract or several tracts for mitigation purposes." Without a given source of funding or a specific amount to be spent, the court could find no rational basis for issuance of the permits.

Analysis under the National Environmental Policy Act
The FWS failed to prepare an environmental impact statement (EIS) as required by the NEPA. The NEPA requires that federal agencies like the FWS consider the environmental consequences of proposed actions to ensure fully informed and well-considered decisions. A project that may adversely affect an endangered species or its critical habitat is considered to significantly affect the environment, requiring an EIS.
Rather than prepare an EIS, the FWS issued a "finding of no significant impact" for the Fort Morgan developments, concluding its analysis of possible impacts on the Alabama beach mouse. There are four criteria to be considered in determining whether an agency's decision not to prepare an EIS is arbitrary and capricious:
First, the agency must have accurately identified the relevant environmental concern.
Second, once the agency has identified the problem, it must have taken a "hard look" at the problem in preparing the Environmental Assessment (EA).
Third, if a finding of no significant impact is made, the agency must be able to make a convincing case for its finding.
Last, if the agency does find an impact of true significance, preparation of an EIS can be avoided only if the agency finds that changes or safeguards in the project sufficiently reduce the impact to a minimum.
After a careful review of the Administrative Record, the Court was persuaded that many of the important "facts" on which the FWS based its decision appear to be assumptions, presumptions, or conclusions themselves, not facts based on any evidence, documents, or data in the Administrative Record.

Opinion of court
The plaintiffs' motion for preliminary injunction was granted, and the defendant's cross-motion for summary judgment was denied. The District Court remanded the decision to issue the permits to the FWS. It directed the FWS to gather the necessary data and conduct the required scientific analysis in order to determine whether the permits issued meet requirements under the ESA and the NEPA. The court stressed that the FWS must do more than merely go through the motions in performing its duties to protect the Alabama beach mouse from extinction.
The court agreed with the Sierra Club and found that the FWS ignored its initial concerns, failing to determine if the proposed amount of mitigation funding could provide adequate mitigation. The court noted the complete lack of consideration or explanation of the amount of mitigation funding in the plan or permits. Without analysis or consideration, the court concluded that the FWS could not support its decision that the amount of mitigation funding was adequate, and found the issuance of the permits arbitrary and capricious. Additionally, the court held that the FWS's reliance on unnamed sources for offsite mitigation was contrary to the law and unsupported by any factually reliable basis in the Administrative Record.

Significance and subsequent developments
Although environmental groups have challenged a number of incidental take permits in court, judges typically defer to the expert judgment of the FWS. Sierra Club v. Babbitt was one of the few cases where the plaintiffs won. The Chevron deference rule (see Chevron U.S.A., Inc. v. Natural Resources Defense Council, Inc.), in which the Supreme Court holds that courts should defer to agency interpretations of such statutes unless they are unreasonable, was applied in this case. The environmental statutes were clear and unambiguous, but the FWS's interpretation of the laws was not reasonable. This set the stage for regulatory federal agencies being held accountable for noncompliance with environmental statutes and irrational decision-making.

References
Sierra Club litigation
United States District Court for the Southern District of Alabama cases
Endangered species
1998 in the environment
1998 in United States case law
Environment of Alabama
Sierra Club v. Babbitt
Biology
2,717
57,926,868
https://en.wikipedia.org/wiki/List%20of%20informally%20named%20dinosaurs
This list of informally named dinosaurs is a listing of dinosaurs (excluding Aves; birds and their extinct relatives) that have never been given formally published scientific names. This list only includes names that were not properly published ("unavailable names") and have not since been published under a valid name (see list of dinosaur genera for valid names). The following types of names are present on this list:
Nomen nudum, Latin for "naked name": A name that has appeared in print but has not yet been formally published by the standards of the International Commission on Zoological Nomenclature. Nomina nuda (the plural form) are invalid, and are therefore not italicized as a proper generic name would be.
Nomen manuscriptum, Latin for "manuscript name": A name that appears in manuscript but was not formally published. A nomen manuscriptum is equivalent to a nomen nudum for everything except the method of publication and description.
Nomen ex dissertatione, Latin for "dissertation name": A name that appears in a dissertation but was not formally published.
Nicknames or descriptive names given to specimens or taxa by researchers or the press.

A

Alamotyrannus
"Alamotyrannus" ("Ojo Alamo tyrant") is the informal placeholder name given to an as yet undescribed genus or species of tyrannosaurid from the Late Cretaceous period of North America. The fossils of this animal originate from the Ojo Alamo Formation in New Mexico, and they were discovered during the early 2000s. The suggested binomial "Alamotyrannus brinkmani" was created when the paper describing the genus was written in 2013. "Alamotyrannus" lived during the early Maastrichtian. Specimen ACM 7975, a jaw discovered in the Ojo Alamo Formation, New Mexico in 1924, has been tentatively identified as Gorgosaurus libratus but may instead belong to "Alamotyrannus" as per Dalman & Lucas (2013) and McDavid (2022). This specimen has been mentioned in a 2016 publication by Dalman and Lucas as an indeterminate tyrannosaurid without generic attribution, and it is noted that the specimen is under study by the senior author. A photograph taken by McDavid (2022) shows the specimen on display in the Beneski Museum of Natural History.

Alan the Dinosaur
"Alan the Dinosaur" is the name given to a sauropod caudal vertebra (YORYM:2001.9337) found in 1995 in the Saltwick Formation (Middle Jurassic, Aalenian) of Whitby, England. It is the oldest sauropod found in the United Kingdom, dating to 176–172 million years ago. Its name references that of its discoverer, Alan Gurr, and the fact that it is not identifiable to species level. An analysis done in 2015 found that it was a member of Eusauropoda, could be excluded from Diplodocoidea, and was most similar to Cetiosaurus. The fossil of "Alan" is housed in the Yorkshire Museum, where it forms part of the Yorkshire's Jurassic World exhibit, featuring a VR recreation.

Allosaurus robustus
"Allosaurus robustus" is an informal name used for specimen NMV P150070, a theropod astragalus known from the Wonthaggi Formation (Early Cretaceous) of Victoria, Australia. When first studied, it was thought to have belonged to a species of Allosaurus. Samuel Welles challenged this identification, as he thought that the astragalus belonged to an ornithomimid, but the original authors defended their classification. Sometime in the early 2000s, Daniel Chure examined the bone and found that it did not represent a new species of Allosaurus, but could still represent an allosauroid.
At the same time, Yoichi Azuma and Phil Currie noted that the astragalus resembled that of their new genus Fukuiraptor. It may well represent a theropod related to Australovenator, though some argue that it could represent an abelisauroid. A 2019 study strongly supported a megaraptoran affinity for the astragalus. The name "Allosaurus robustus", which originated as a museum label, was first published by Chure in 2000.

Amargastegos
"Amargastegos" is an informal genus of extinct stegosaurid ornithischian dinosaur known from the La Amarga Formation of Argentina, named by Roman Ulansky in 2014 on the basis of MACN N-43 (some dorsal osteoderms, the cervical and caudal vertebrae, and one skull bone); the type species is "A. brevicollum". In 2016, Peter Malcolm Galton and Kenneth Carpenter declared it a nomen nudum, establishing it as an indeterminate stegosaur.

Amphicoelias brontodiplodocus/Barackosaurus
"Barackosaurus" is an informal name created in 2010 for a sauropod found in Kimmeridgian-aged sediments pertaining to the Morrison Formation, Wyoming. It was found in the Dana Quarry, and "Barackosaurus" was supposedly 20 meters long and weighed 20 tons. In 2010, an article was made available, but not formally published, by Henry Galiano and Raimund Albersdorfer, in which they dubbed the Dana Quarry specimens that had already been referred to as "Barackosaurus" as "Amphicoelias brontodiplodocus". The specific name referred to their hypothesis, based on these specimens, that nearly all Morrison diplodocid species are either growth stages or represent sexual dimorphism among members of the genus Amphicoelias, but this analysis was met with skepticism, and the publication itself has been disclaimed by its lead author, who explained that it is "obviously a drafted manuscript complete with typos, etc., and not a final paper. In fact, no printing or distribution has been attempted". As of 2015, the specimens are on display at the Lee Kong Chian Natural History Museum in Singapore.

Andhrasaurus
"Andhrasaurus" is an informal genus of extinct armored ornithischian dinosaur from the Kota Formation of India. The proposed species is "A. indicus". Ulansky (2014) coined the name for skull elements, about 30 osteoderms, and the extremities of vertebrae and limbs, all preserved in the collections of the GSI and assigned to Ankylosauria by Nath et al. (2002). In 2016, Peter Malcolm Galton and Kenneth Carpenter noted that "Andhrasaurus" did not meet ICZN requirements and therefore declared it a nomen nudum, listing it as Thyreophora indet., while noting that the jawbones described by Nath et al. (2002) belong to crocodylomorphs. The dermal armor informally named "Andhrasaurus" was redescribed by Galton (2019), who referred the material to Ankylosauria.

Angeac ornithomimosaur
The "Angeac ornithomimosaur" is an informal name given to an unnamed ornithomimosaur taxon known from the Early Cretaceous (previously thought to be Hauterivian–Barremian in age, but now thought to be Berriasian in age) Angeac-Charente bonebed (part of the stratigraphy of the Aquitaine Basin) near Angeac-Charente in western France. The taxon is toothless and is known from numerous disarticulated remains representing at least 70 individuals and covering almost all of the skeleton; some remains were described by Allain et al. (2014).

Angloposeidon
"Angloposeidon" is the informal name given to a sauropod dinosaur from the Early Cretaceous (Barremian) Wessex Formation of the Isle of Wight in southern England.
It was a possible brachiosaurid but has not been formally named. Darren Naish, a notable vertebrate palaeontologist, has worked with the specimen and has recommended that this name only be used informally and that it not be published. However, he published it himself in his book Tetrapod Zoology Book One from 2010. The remains consist of a single cervical vertebra (MIWG.7306), which indicates it was a very large animal, 20 metres or more in length.

Archaeoraptor
"Archaeoraptor" is the informal generic name for an important fossil from China that was later discovered to have been fabricated from multiple unrelated fossils. The name was created in an article published in National Geographic magazine in 1999, where the magazine claimed that the fossil was a "missing link" between birds and terrestrial theropod dinosaurs. Even prior to this publication there had been severe doubts about the fossil's authenticity. Further scientific study showed it to be a forgery constructed from rearranged pieces of real fossils from different species. Zhou et al. found that the head and upper body actually belong to a specimen of the primitive fossil bird Yanornis, and another 2002 study found that the tail belongs to a small winged dromaeosaur, Microraptor, named in 2000. The legs and feet belong to an as yet unknown animal.

Archbishop
"The Archbishop" is a giant brachiosaurid sauropod dinosaur similar to Brachiosaurus and Giraffatitan. It was long considered a specimen of Brachiosaurus (now Giraffatitan) brancai due to being found in the same formation in Tendaguru, Tanzania. However, the "Archbishop" shows significant differences, including a unique vertebral morphology and a proportionally longer neck, that indicate it is a different, previously unknown genus and species. It was discovered by Frederick Migeod in 1930. "The Archbishop" is a nickname that functions as a placeholder – the specimen currently has no scientific name. The specimen is currently housed in the Natural History Museum in London, and will eventually be re-described by Dr. Michael P. Taylor of Bristol University. In May 2018, Taylor started to work on describing the Archbishop.

Atlantohadros
"Atlantohadros", more commonly known as the "Merchantville hadrosaur", is an informally named hadrosaurid dinosaur that lived in the Merchantville Formation in the northeastern United States. Brown (2021) found "Atlantohadros" to be more derived than Tethyshadros but less derived than Saurolophinae and Lambeosaurinae. The name was intended to be used in that publication, but was cut for unknown reasons; initial versions of Brown (2021) contained the word "Atlantohadros" superimposed over "Merchantville Taxon" in a cladogram; subsequent corrections have erased the genus name entirely. Three specimens were discovered northwest of Freehold near the Manalapan–Marlboro township line in Monmouth County during the 1970s. These are YPM VPPU.021813, a second YPM specimen, and AMNH 13704, with the two YPM specimens possibly belonging to the same individual due to similar weathering, similar size, and recovery from the same horizon. Together these specimens consist of both coracoids, both scapulae, a femur, and a fragmentary proximal tibia, plus a dentary known from a cast of the specimen (the original likely lost in YPM's catalogue) in the adult, as well as a rib, a femur and long bone portions in the juvenile. AMNH 13704 is a partial dentary of a probable perinate.
Scattered bones associated with these include a quadrate, several partial maxilla portions, a partial jugal, skull roof fragments and several rib fragments.

B

Baguasaurus
"Baguasaurus" (meaning "Bagua lizard") is the informal name given to an as yet undescribed genus of lithostrotian sauropod dinosaur from the Late Cretaceous (Campanian–Maastrichtian-aged) Chota Formation of Peru. The proposed holotype, consisting of caudal vertebrae, was first mentioned in a review of the Chota Formation by Mourier et al. (1988), and the name "Baguasaurus" was coined by Larramendi & Molina Pérez (2020), who also published length and mass estimates for the animal.

Balochisaurus
"Balochisaurus" (meaning "Balochi lizard", for the Baloch tribes of Pakistan) is an informal taxon of titanosaurian sauropod dinosaur from the Late Cretaceous of Pakistan. The proposed species is "B. malkani". The discovery was made (along with other dinosaur specimens) in 2001 near Vitariki by a team of paleontologists from the Geological Survey of Pakistan. Described in 2006 by M.S. Malkani, the genus is based on seven tail vertebrae found in the Maastrichtian-age Vitakri Member of the Pab Formation, with additional vertebrae and a partial skull assigned to it. "Balochisaurus" was assigned to the family "Balochisauridae" along with "Marisaurus". It was considered invalid by Wilson, Barrett and Carrano (2011).

Barnes High Sauropod
The "Barnes High sauropod" is the informal name given to MIWG-BP001, an undescribed sauropod dinosaur specimen from the Wessex Formation on the Isle of Wight. It was discovered in the cliffs around Barnes High in 1992 and is currently owned by the privately run, unaccredited Dinosaur Farm Museum near Brighstone; the ownership situation has been described as "complex" and the specimen is currently inaccessible to researchers. It is roughly 40% complete and consists of a "partial postcranial skeleton, including presacral vertebrae, anterior caudal vertebrae, girdle and limb elements", including a largely complete forelimb. It has been suggested to be a brachiosaurid and is possibly synonymous with the earlier named Eucamerotus due to similarities with the vertebrae.

Bayosaurus
"Bayosaurus" is the informal name given to an as yet undescribed genus of theropod dinosaur. The name was coined by paleontologists Rodolfo Coria, Philip J. Currie, and Paulina Carabajal in 2006. It apparently was an abelisauroid from the Turonian Cerro Lisandro Formation of Neuquén, Argentina. The specimen is MCF-PVPH-237, including dorsal and sacral vertebrae, a fragmentary pelvis, and other partial bones, which were discovered in 2000. The name was used in a phylogenetic analysis to indicate the position of MCF-PVPH-237.

Beelemodon
"Beelemodon" is the informal name given to an undescribed theropod genus from the Late Jurassic, possibly belonging to a coelurosaur. The fossils include two teeth found in Wyoming, United States. The name appeared in print in 1997, when paleontologist Robert T. Bakker mentioned it in a symposium for the Academy of Natural Sciences. The teeth are most similar to Compsognathus, but have no unique features and also share similarities with Protarchaeopteryx and dromaeosaurids.

Biconcavoposeidon
"Biconcavoposeidon" is the placeholder name for AMNH FARB 291, five consecutive posterior dorsal vertebrae of a brachiosaurid sauropod from the Late Jurassic Morrison Formation, Wyoming. Not much else is currently known about "Biconcavoposeidon", except that it was discovered in the Bone Cabin quarry in 1898.
Bihariosaurus
"Bihariosaurus" (meaning "Bihor lizard") is an invalid genus of iguanodontian dinosaur from the Early Cretaceous bauxite of Cornet, Romania. The type species, "Bihariosaurus bauxiticus", was named but not described by Marinescu in 1989. It was similar to Camptosaurus, and was an iguanodont. The original publication of the taxon did not include a sufficient description, and the illustrations cannot distinguish it from any other ornithopod.

Biscoveosaurus
"Biscoveosaurus" is the informal name of an ornithopod dinosaur specimen from the early Maastrichtian-age Snow Hill Island Formation of James Ross Island, Antarctica. It comes from the Cape Lamb Member of the formation, the same member as Morrosaurus, another basal ornithopod. As such, it has been suggested that it may be a secondary specimen of that species, but as the holotype of Morrosaurus is fragmentary and does not overlap with the material of "Biscoveosaurus", this cannot yet be tested. The specimen consists of dentaries, teeth, a braincase, parts of the maxillae, forelimb elements, assorted vertebrae, and the pectoral girdle; this makes it unique compared to the other James Ross Island ornithopods, which do not have both cranial and postcranial remains.

C

Capitalsaurus
"Capitalsaurus" is the informal genus name given to a tailbone belonging to a large theropod dinosaur that lived during the Early Cretaceous. It was discovered on 28 January 1898, by construction workers excavating a sewer at the intersection of Washington, D.C.'s First and F Streets SE. The only known specimen, it was assigned two different species designations – Creosaurus potens and Dryptosaurus potens – each of which was eventually overturned. In the 1990s, the paleontologist Peter Kranz asserted that it represented a unique type of dinosaur and assigned it the name "Capitalsaurus". He successfully campaigned through local schools to make "Capitalsaurus" the official dinosaur of Washington, D.C., which became law in 1998. A year later, the district further recognized F Street at the discovery site as Capitalsaurus Court, and designated 28 January 2001 as Capitalsaurus Day.

Changdusaurus
"Changdusaurus" (also known as "Changtusaurus") is the informal name given to a genus of dinosaur from the Late Jurassic Period. It lived in what is now China. "Changdusaurus" is classified as a stegosaurid. The type species was named "Changdusaurus laminoplacodus" by Zhao in 1983, but it has never been formally described, and remains a nomen nudum. One source indicates the fossils have been lost.

Comanchesaurus
"Comanchesaurus" is a nomen ex dissertatione for fossilized remains from the Late Triassic of New Mexico that were initially interpreted as belonging to a theropod dinosaur. The remains, NMMNH P-4569, consist of a partial skeleton including vertebral centra and hindlimb bones, and came from the Norian-age Upper Triassic Bull Canyon Formation of Guadalupe County. Adrian Hunt, in his unpublished dissertation, proposed the name "Comanchesaurus kuesi" for the specimen, but the name was never adopted, and was first referred to in the scientific literature in a 2007 redescription of Late Triassic North American material thought to belong to dinosaurs (Nesbitt, Irmis, and Parker, 2007). In the redescription, the authors found the material to belong to a "possible indeterminate saurischian".
Cryptoraptor
"Cryptoraptor" is a nomen ex dissertatione for fossilized remains from the Late Triassic of New Mexico that were initially interpreted as belonging to a theropod dinosaur. The remains, NMMNH P-17375, consist of a partial skeleton including partial hindlimb and pelvic bones, and came from the Norian-age Upper Triassic Bull Canyon Formation of Quay County. Adrian Hunt, in his unpublished dissertation, proposed the name "Cryptoraptor lockleyi" for the specimen, but the name was never adopted, and was first referred to in the scientific literature in a 2007 redescription of Late Triassic North American material thought to belong to dinosaurs. In the redescription, the authors found the material to belong to an indeterminate archosaur, as no features exclusive to dinosaurs could be identified.

Cryptotyrannus
"Cryptotyrannus" (meaning "secret/hidden tyrant"), more commonly known as the "Merchantville tyrannosauroid", is an informally named tyrannosauroid dinosaur that lived in the Merchantville Formation. It was informally named by Brown (2021), who found it to be the sister taxon of Dryptosaurus, reinstating Dryptosauridae. The name appeared in the initial version of Brown's paper, superimposed over "Merchantville Taxon" in a cladogram; a subsequent correction has erased the name entirely. "Cryptotyrannus" is known from two specimens discovered during the 1970s, the holotype YPM VPPU.021795 and the paratype YPM VPPU.022416. Similar coloration and weathering indicate that these are probably the same individual. These are a partial foot bone and one caudal vertebra; however, a skeletal reconstruction produced for the paper also depicts a hand claw. The foot morphology is consistent with tyrannosaurs, being extremely similar to that of Dryptosaurus aquilunguis. Autapomorphies include a metatarsal IV that is far more gracile and that is triangular, rather than subrectangular, in outline in proximal view. The holotype was once tentatively assigned to "Coelosaurus" antiquus. Shark bites present on the holotype suggest that the specimen's fragmentary nature is due to predation or scavenging by marine predators.

D

Dachongosaurus
"Dachongosaurus" is the informal name given to an undescribed genus of sauropod dinosaur from the Early Jurassic of China. It is known from fossils including at least a partial articulated skeleton from the Dark Red Beds of the Lower Lufeng Series (Sinemurian stage) in Yunnan. Possibly a cetiosaur, the "type species" is "Dachongosaurus yunnanensis", coined by Zhao in 1985. An alternate spelling is "Dachungosaurus". As with other informal names coined by Zhao in 1985 and 1983, nothing has since been published, and the remains may have been redescribed under another name.

Dongshengosaurus
"Dongshengosaurus" is the informal name given to an undescribed genus of iguanodontian dinosaur from the Early Cretaceous of Liaoning, China. The "type species", "D. sinensis", was named by Pan Rui in his 2009 thesis. It is known from a partial juvenile skeleton discovered in the Yixian Formation.

Damalasaurus
"Damalasaurus" (meaning "Damala lizard") is the informal name given to a genus of herbivorous dinosaur from the Early Jurassic. It was a sauropod, though its exact classification within the clade is unknown. Fossils of "Damalasaurus", including a rib, have been found in the Middle Daye Group of Tibet. Species attributed to this genus include "Damalasaurus laticostalis" and "D. magnus", although it is possible that both names refer to the same species.
Duranteceratops
"Duranteceratops" is a purported new taxon of chasmosaurine ceratopsid from the Hell Creek Formation. In 2012, a ceratopsid skull supposedly distinguishable from Triceratops was unearthed in South Dakota by a fossil poacher named John Carter. Though it has yet to be published, according to the Prehistoric Times issue no. 121 from Spring 2017, the specimen is to be named "Duranteceratops".

E

EK troodontid
The "EK troodontid" (specimen SPS 100/44) is an unnamed genus of troodontid dinosaur discovered in Mongolia. In the scientific literature it is referred to as the "EK troodontid", after the Early Cretaceous sediments in which it was found. SPS 100/44 was discovered by Sergei Mikhailovich Kurzanov during the 1979 Soviet-Mongolian Paleontological Expedition. It was found in deposits of the Barunbayaskaya Svita at the Khamareen Us locality, Dornogov (southeastern Gobi Desert), in the Mongolian People's Republic. SPS 100/44 was described by Rinchen Barsbold and colleagues in 1987. Its fossil remains include an incomplete skeleton consisting of the braincase, posterior parts of the lower mandibles, a maxillary fragment with teeth, parts of five cervical vertebrae (cervicals ?2-?6), an articulated right manus with a partial semilunate, left manus phalanx I-1, the distal end of the left femur, and fragmentary left and right pedes. Barsbold pointed out that the specimen was smaller and from older sediments than other known troodontids, but it had some features of the skull that could have made it a juvenile. Barsbold also cited the high degree of fusion of the bones of the skull and the unusual foot morphology as indications that it might be an adult of an unknown taxon. Barsbold took the conservative position and did not name this specimen, because it was not complete enough to rule out the possibility that it was a juvenile of a known genus of troodontid. Barsbold also noted that the naturally articulated manus of SPS 100/44 showed no signs of an opposable third digit, as was suggested for Troodon by Russell and Seguin in 1982. Turner and colleagues, in 2007, found the EK troodontid to be a distinct basal genus of troodontid, in a polytomy with Jinfengopteryx and a clade of more derived troodontids.

Eoplophysis
"Eoplophysis" is a genus of stegosaur known from the Middle Jurassic Cornbrash Formation, Sharp's Hill Formation, and Chipping Norton Formation of England. It was originally named Omosaurus vetustus by the renowned German paleontologist Friedrich von Huene. The holotype, OUM J.14000, is a right femur of a juvenile individual from the Middle Jurassic (upper Bathonian) Cornbrash Formation of Oxfordshire, England, although it was probably reworked from the slightly older Forest Marble Formation in view of its eroded nature. Because of the renaming of Omosaurus, an occupied name, as Dacentrurus, O. vetustus was renamed Dacentrurus vetustus in 1964. In the 1980s, researcher Peter Malcolm Galton reviewed all known stegosaur material from the Bathonian of England and concluded that Omosaurus vetustus was valid and should be tentatively referred to Lexovisaurus. However, the species was later considered a nomen dubium in subsequent reviews of Stegosauria. In their alpha-taxonomic review of stegosaurs, Susannah Maidment and her colleagues noted that OUM J.14000 shares characters present in both sauropods and stegosaurs, but that it lacks synapomorphies exclusive to Stegosauria, and assigned it as Dinosauria indet.
Nevertheless, the amateur paleontologist Roman Ulansky coined the new genus "Eoplophysis" ("Dawn Armed Form") for O. vetustus, noting differences with the femora of other stegosaurs.

Eugongbusaurus
"Eugongbusaurus" is the informal name (nomen nudum) proposed for a neornithischian found in the Oxfordian-age Shishugou Formation of Xinjiang, China. The intended type species, "Gongbusaurus" wucaiwanensis, was described by Dong Zhiming in 1989 for two partial skeletons as a second species of the poorly known tooth taxon Gongbusaurus. The fragmentary skeleton IVPP 8302, the type specimen for the new species, included a partial lower jaw, three tail vertebrae, and a partial forelimb. A second specimen, IVPP 8303, consisted of two hip vertebrae, eight tail vertebrae, and two complete hind limbs. Dong considered it to be a strong runner. He assigned the genus Gongbusaurus to the Hypsilophodontidae, a paraphyletic grade of small herbivorous bipedal dinosaurs. Because dinosaur teeth are generally not distinctive enough to hold a name, it is unsurprising that other paleontologists have suggested removing "G." wucaiwanensis from Gongbusaurus and giving it its own genus. The possible replacement name "Eugongbusaurus" leaked out accidentally and remains informal.

F

Fendusaurus
"Fendusaurus" is a nomen ex dissertatione proposed by Fedak (2006) for FGM 998GF13-II, which includes a skull. Other specimens referred to "Fendusaurus" are FGM998GF13-I, FGM998GF13-III, FGM998GF69, FGM998GF9, and FGM998GF18, all found by a crew from Princeton University. All the specimens include femora and coracoids, and although each shows slightly different features, the differences are credited to intra-specific variation. Known specimens of "Fendusaurus" were previously classified as cf. Ammosaurus. The femora and coracoids also help identify different individuals, and Timothy J. Fedak, the describer of the specimens, found that each block represented about one individual. "Fendusaurus" is known from the Early Jurassic (Hettangian) McCoy Brook Formation of Wasson Bluff, Nova Scotia. It is the first non-avian dinosaur from Nova Scotia. As five specimens of "Fendusaurus" are from the McCoy Brook Formation, the formation is the richest prosauropod site in North America. The formation is also similar to other formations of North America and Asia, as it lacks any remains presently assigned to Anchisaurus. Fedak places "Fendusaurus" as a genus of the family Massospondylidae. The specimens of "Fendusaurus" include mostly crushed vertebrae, along with appendicular elements. They are distinguishable from Anchisaurus by the morphology of both the ilium and sacral vertebrae. However, in some specimens, the morphology of the femora and coracoids is quite different, which led Fedak to speculate that more than one species may have been present. "Fendusaurus", according to Fedak, can be distinguished from all closely related sauropodomorphs by the extreme elongation of the cervical vertebrae; a four-vertebra sacrum that includes a dorsosacral and a caudosacral; the elongate postacetabular process of the ilium; and an expanded anterior distal process of the tibia.

Ferganastegos
"Ferganastegos" is a dubious genus of stegosaur from the Middle Jurassic (Callovian) Balabansai Formation of the Fergana Valley, Kyrgyzstan. The holotype of "Ferganastegos callovicus", IGB 001, consists of four posterior dorsal vertebrae. Although Averianov et al.
did not consider the vertebrae diagnostic to genus, the freelance Russian dinosaur enthusiast and amateur paleontologist Roman Ulansky decided that the differences between IGB 001 and other stegosaurs were sufficient to warrant a binomial for IGB 001, "Ferganastegos callovicus" ("Callovian roof from Fergana Valley"), despite the fact that he did not examine the material himself. Other researchers still contend that the material is not diagnostic and that the genus is a nomen dubium.

Ferropectis
"Ferropectis" is a nodosaurid ankylosaur from the Late Cretaceous (Cenomanian) Eagle Ford Group in Texas that was named in a 2018 dissertation by Matt Clemens. The intended type species is "Ferropectis brysorum", and in the phylogenetic analysis it was placed as the sister taxon to Borealopelta in a clade including Hungarosaurus, Europelta, and Pawpawsaurus.

Francoposeidon
"Francoposeidon" (meaning "French earthquake god") is the informal name given to an as yet undescribed genus of turiasaurian sauropod dinosaur from the Early Cretaceous (Hauterivian)-aged Angeac-Charente bonebed of France. The proposed type species is "F. charantensis", and the remains consist of "a braincase, some skull bones, teeth, cervical, dorsal and caudal vertebrae, chevrons, pelvic girdle and all the limb bones", alongside isolated teeth, belonging to at least 7 individuals. The length of the femur indicates that "Francoposeidon" was one of the largest known sauropods discovered in Europe.

Futabasaurus
"Futabasaurus" is an informal name for a genus of theropod dinosaur from the Late Cretaceous of Japan, known only from a partial shin bone that was discovered in the Coniacian-age Ashizawa Formation of the Futaba Group. It was first mentioned as "Futaba-ryu" by Hasegawa et al. (1987), and the name was coined by David Lambert in 1990 as a conversion from the Japanese nickname "Futaba-ryu" for an undescribed theropod. Dong Zhiming and coauthors briefly discussed the fossil shin bone it was based on that same year, publishing a photograph. They considered the bone to belong to an indeterminate tyrannosaurid. If the specimen is eventually described and named, it will require a different name, because the name Futabasaurus has since been used for a genus of plesiosaur.

G

Gadolosaurus
"Gadolosaurus" is an informal name given to PIN no. 3458/5, an unnamed juvenile hadrosauroid dinosaur specimen from the Bayan Shireh Formation of Baishan Tsav, Mongolia. The name "Gadolosaurus" was first used in a 1979 book by Japanese paleontologist Tsunemasa Saito, in a caption for a photo of the specimen. This specimen represents an individual that was only about a meter long (39 inches). The specimen was part of a Soviet exhibition of fossils in Japan. Apparently, the name comes from a Japanese phonetic translation of the Cyrillic word gadrosavr, or hadrosaur, and was never meant by the Russians to establish a new generic name. Despite the only name ever applied to it being merely a mistranslation of gadrosavr, this specimen has appeared in many popular dinosaur books, with varying identifications. Donald F. Glut in 1982 reported it as either an iguanodont or hadrosaur, with no crest or boot on the ischium (both features characteristic of the crested lambeosaurine duckbills), and suggested it could be the juvenile of a previously named genus like Tanius or Shantungosaurus.
David Lambert in 1983 classified it as an iguanodont, but changed his mind by 1990, when it was listed as a synonym of Arstanosaurus without comment. What may be the same animal is mentioned but not named by David B. Norman and Hans-Dieter Sues in a 2000 book on Mesozoic reptiles from Mongolia and the former USSR; this material, from the Soviet-Mongolian expeditions of the 1970s, had been listed as Arstanosaurus in the Russian Academy of Sciences, and was found in the Cenomanian-age Bayan Shireh Formation of Baishin Tsav. Averianov, Lopatin, and Tsogtbaatar in 2022 provided a preliminary description of this specimen and its taxonomic position, finding that the specimen may represent a juvenile of a novel taxon that was closely related to, but more derived than, the contemporary hadrosauroid Gobihadros. Gallimimus mongoliensis "Gallimimus mongoliensis" is an informal name Rinchen Barsbold used for a nearly complete skeleton (IGM 100/14) known from the Bayan Shireh Formation, but since it differs from Gallimimus in some details, Yoshitsugu Kobayashi and Barsbold proposed in 2006 that it probably belongs to a different genus. It was recently included in a phylogenetic analysis, which recovered it as closely related to Tototlmimus. Gspsaurus "Gspsaurus" (a nomen manuscriptum) is a titanosaurian sauropod dinosaur from the Late Cretaceous Vitakri Member of the Pab Formation of the Sulaiman Basin of Pakistan. It has been suggested to be synonymous with "Maojandino", another invalid taxon proposed by Malkani. The intended holotype, MSM-79-19 and MSM-80-19, consisting of parts of the skull, including a rostrum, was discovered in 2001, and parts of the holotype were initially referred to "Marisaurus jeffi". Grusimimus "Grusimimus" (or "Tsurumimus") is an informal name for an undescribed genus of ornithomimid from the Early Cretaceous (Hauterivian–Barremian)-aged Shinekhudag Formation of Mongolia. Known from a skeleton including all regions except the skull, "Grusimimus" was given an invalid name in 1997 by Rinchen Barsbold, who also suggested the species name "tsuru". The specimen (GIN 960910KD) was found in 1996 and examined by Barsbold before he suggested the informal name, a nomen nudum. An abstract and poster were presented on the taxon by Kobayashi & Barsbold in 2002, and the former published a thesis paper on the specimen (referred to as "Ornithomimosauria indet.") which found the taxon to be close to Harpymimus phylogenetically but possibly more derived. A recent phylogenetic analysis recovered "Grusimimus" as closely related to Beishanlong and Garudimimus. H Hanwulosaurus "Hanwulosaurus" is the informal name given to an as-yet undescribed genus of dinosaur from the Late Cretaceous. It was an ankylosaur around long, which is long for an ankylosaur. Its fossils were found in Inner Mongolia, China. Much of a skeleton, including a complete skull, vertebrae, ribs, a scapula, an ulna, femora, bones from the shin, and armor, was discovered; this may be the most complete ankylosaurian skeleton yet found in Asia, according to early reports. Zhao Xijin, who has studied it, suggests that it may belong to its own subgroup within the Ankylosauria. The name first surfaced in news reports in 2001. Haute Moulouya Sauropod The "Haute Moulouya Sauropod", also known as NHMUK PV R36834, originally consisted of two complete cervical vertebrae recovered from the Lower Jurassic sediments of the Haute Moulouya Basin, central Morocco. 
This material was initially identified as belonging to an early member of Eusauropoda; if so, it would be the oldest member of the group. Additional material, SNSB-BSPG 2014 I 106, consisting of dorsal vertebrae and a pubis fragment, had previously been recovered. A recent revision suggests that both specimens belong to the same taxon, which likely comes from a higher stratigraphic level (likely late Pliensbachian) and represents a valid, more basal taxon related to Amygdalodon, though other analyses still recover it as a eusauropod, in a polytomy with Barapasaurus. Heilongjiangosaurus "Heilongjiangosaurus" (meaning "Heilongjiang lizard") is the informal name given to an as-yet undescribed genus of duckbilled dinosaur from the Late Cretaceous. It was possibly a lambeosaurine, and may in fact be the same animal as Charonosaurus. The fossils were found in Maastrichtian-age rocks in Heilongjiang, China. As a nomen nudum, it is unclear what material it was intended to be based on, but it might be connected to the nomen nudum "Mandschurosaurus" jiainensis, informally named in a 1983 publication. The "type species" is "H. jiayinensis", and it was coined in 2001 in a faunal list by Li and Jin. Hironosaurus "Hironosaurus" (meaning "Hirono lizard") is the informal name given to an as-yet undescribed genus of dinosaur from the Late Cretaceous. Found in Hirono, Fukushima, Japan, it was probably a type of hadrosaur, although no subfamily identification has been made. The fossils are quite fragmentary, and consist of teeth and a vertebra, possibly from the tail. Since the fossils have never been fully described in a scientific paper, "Hironosaurus" is considered a nomen nudum. It was first mentioned by Hisa in an obscure 1988 publication and was later (1990) brought to a wider audience by David Lambert. Dong Zhiming, Y. Hasegawa, and Y. Azuma regarded the material as belonging to a hadrosaurid, but lacking any characteristics that would allow more precise identification (thus indeterminate). Hisanohamasaurus "Hisanohamasaurus" (meaning "Hisano-hama lizard") is the informal name given to an as yet undescribed genus of dinosaur from the Late Cretaceous. It is a nomen nudum known only from teeth that first appeared in a general-audience dinosaur book by David Lambert in 1990. Although initially identified as a diplodocid, it was later re-identified as a nemegtosaurid similar to Nemegtosaurus. As its name suggests, its fossils were found in Japan. The location is part of Iwaki, Fukushima. I Ikqaumishan "Ikqaumishan" is an informal genus of titanosaurian dinosaurs from the Late Cretaceous (Maastrichtian) Vitakri Formation of Pakistan described by Malkani (2023) in Scientific Research Publishing, a known predatory publisher. The assigned fossil material includes multiple humeri. Caudal vertebrae and osteoderms found nearby may also be referable to "Ikqaumishan". The intended type species is "Ikqaumishan smqureshi." Imrankhanhero "Imrankhanhero" is an informal genus of titanosaurian dinosaurs from the Late Cretaceous (Maastrichtian) Vitakri Formation of Pakistan described by Malkani (2023) in Scientific Research Publishing, a known predatory publisher. The assigned fossil material includes a humerus, a femur, fibulae, a tibia, and a metatarsal. Caudal vertebrae found nearby may also be referable to "Imrankhanhero". The intended type species is "Imrankhanhero zilefatmi." 
Imrankhanshaheen "Imrankhanshaheen" is an informal genus of titanosaurian dinosaurs from the Late Cretaceous (Maastrichtian) Vitakri Formation of Pakistan described by Malkani (2024) in Scientific Research Publishing, a known predatory publisher. The proposed holotype includes a braincase, vertebrae, a humerus, ulnae, a radius, metacarpals, a tibia, fibulae, ribs, girdle bones, and osteoderms. The intended type species is "Imrankhanshaheen masoombushrai." J Jeholraptor "Jeholraptor" is the informal replacement genus name given to the microraptorine Sinornithosaurus haoiana—resulting in the new combination "Jeholraptor" haoiana—by Gregory S. Paul in the third edition of The Princeton Field Guide to Dinosaurs in 2024. The S. haoiana fossil is known from the Early Cretaceous (Barremian) upper Yixian Formation of China. The specimen, which is nearly complete, is about long and was probably close to in weight. Paul suggested that, due to similarities in the quadratojugal, "Jeholraptor" may have been a close relative of Wulong. Jiangjunmiaosaurus "Jiangjunmiaosaurus" (meaning "temple of the general lizard") is an informal name created by an anonymous author in 1987 for a possible chimaera of Monolophosaurus and Sinraptor. Paul (1988) tentatively placed "Jiangjunmiaosaurus" within Allosauridae and commented on the nasal ridges and orbital horn combining to form low, rugose-surfaced crests, and mentioned that "other excellent bones" may also be referable to "Jiangjunmiaosaurus". Jindipelta "Jindipelta" (Lei et al., 2019; in press) is the currently informal name given to an ankylosaur from the Zhumapu Formation in China. It is known from a partial skeleton found in Cenomanian rocks, and the intended type species is "J. zouyunensis". The name was first announced in the 2019 SVP abstract book, alongside the megalosauroid Yunyangosaurus. Julieraptor "Julieraptor" is the nickname of a dromaeosaurid fossil found in the Judith River Formation, Montana in 2002. Parts of the same skeleton were illegally excavated and nicknamed Sid Vicious in 2006, and the poacher responsible subsequently served jail time for the theft. Bob Bakker therefore also nicknamed the specimen "Kleptoraptor". The skeleton was arranged to be sold to the Royal Ontario Museum. It is known from an almost complete skeleton that is missing most of the skull, most of the tail vertebrae, part of the femur, some back and neck vertebrae, and one claw, but includes a well-preserved braincase. K Kagasaurus "Kagasaurus" (meaning "Kaga lizard") is the informal name given to an as yet undescribed genus of dinosaur from the Early Cretaceous. It was a theropod which lived in what is now Japan. The name was coined by Hisa in 1988, but the taxon is known from only two teeth. Since "Kagasaurus" has never been formally described, it is considered a nomen nudum. Unlike "Kitadanisaurus" and "Katsuyamasaurus", it is unlikely that "Kagasaurus" is synonymous with Fukuiraptor, and it may instead be a dromaeosaurid. Katsuyamasaurus "Katsuyamasaurus" is an informal name for a genus of intermediate theropod known from the Early Cretaceous (Barremian) of the Kitadani Formation, Japan. Known from a single middle caudal vertebra and an ulna, the taxon was informally called "Katsuyama-ryu", until Lambert (1990) made it into an invalid genus name, "Katsuyamasaurus". The caudal vertebra was suggested to belong to an ornithopod by Chure (2000), and Olshevsky (2000) suggested the material was a synonym of Fukuiraptor. 
However, the ulna differs from that of Fukuiraptor, and the large olecranon suggests the taxon falls outside Maniraptoriformes. Khanazeem "Khanazeem" is an informal genus of titanosaurian sauropod from the Late Cretaceous Vitakri Formation of Pakistan. The holotype is a partial skeleton and consists of a dentary with teeth, caudal vertebrae, femora, humeri, and tibiae. The intended type species is "Khanazeem saraikistani" and was first mentioned by Malkani (2022). Khetranisaurus "Khetranisaurus" (meaning "Khetran lizard", for the Khetran people of Pakistan) is an informal taxon of titanosaurian sauropod from the Late Cretaceous of Balochistan, western Pakistan (also spelled "Khateranisaurus" in some early reports). The proposed species is "K. barkhani", described by M. Sadiq Malkani in 2006, and it is based on a tail vertebra found in the Maastrichtian-age Vitakri Member of the Pab Formation. It was assigned to "Pakisauridae" (used as a synonym of Titanosauridae), along with "Pakisaurus" and "Sulaimanisaurus". It was considered invalid by Wilson, Barrett and Carrano (2011). Koreanosaurus "Koreanosaurus" (meaning "Korean lizard") is the informal name given to an as-yet unnamed genus of dinosaur from the Early Cretaceous (Aptian-Albian). It was a possible dromaeosaur (or similar theropod) which was discovered in the Gugyedong Formation of South Korea, although at times it has been referred to the Tyrannosauridae, Hypsilophodontidae and Hadrosauridae. Based solely on DGBU-78 (= DGBU-1978B), a femur, the name was coined by Kim in 1979, but by 1993 Kim decided that it was a species of Deinonychus, and created the informal name "D. koreanensis". Kim et al. (2005) referred the specimen to Eumaniraptora based on a proximolateral ridge, a shelf-like posterior trochanter, and the absence of an accessory trochanter and mediodistal crest. The presence of a large fourth trochanter was noted to be similar to Adasaurus and Velociraptor. Kunmingosaurus "Kunmingosaurus" is an informally named primitive sauropod which lived during the Early Jurassic. Its fossils were found in Yunnan, China in 1954. The type and only species is "Kunmingosaurus wudingensis", invalidly coined by Zhao in 1985. It is known from fossils found in the Fengjiahe Formation (or the Lower Lufeng Series), including pelvic, hind limb, and vertebral material. L Lancanjiangosaurus "Lancanjiangosaurus" (alternative spelling "Lanchanjiangosaurus"; meaning "Lancangjiang lizard", named after the Lancangjiang River of China) is the informal name given to an as yet undescribed genus of sauropod dinosaur from the Middle Jurassic. The "type species", "L. cachuensis", was coined by Zhou in 1983, but remains a nomen nudum. It is known from the Dapuka Group of Tibet. Lijiagousaurus "Lijiagousaurus" (meaning "Lijiagou lizard") is the informal name given to an as yet undescribed genus of herbivorous iguanodontian dinosaur from the Late Cretaceous of what is now Sichuan, China. It has not been formally described yet, but a formal publication is forthcoming from Chinese paleontologist Ouyang Hui. "Lijiagousaurus" was only briefly mentioned in the Chongqing Natural History Museum guidebook (2001) and is thus a nomen nudum. The holotype consists of hindlimb bones, a scapula, an ischium and other fragments. Likhoelesaurus "Likhoelesaurus" (meaning "Li Khole lizard") is the name given to an as yet undescribed genus of archosauriform, either a dinosaur or a rauisuchian, from the Late Triassic of what is now southern Africa. 
The name was coined by Ellenberger in 1970, and the "type species" is "Likhoelesaurus ingens". It is named after the town in Lesotho where the fossils were found. The only fossils recovered have been teeth, from the late Carnian–early Norian-age Lower Elliot Formation. Ellenberger (1972) regarded the genus as a giant carnosaur, and Kitching and Raath (1984) treated it as possibly referable to Basutodon. Knoll listed "Likhoelesaurus" as possibly a rauisuchian. Lopasaurus "Lopasaurus" (meaning "Alberto Lopa's lizard") is the name given to an as yet undescribed genus of dromaeosaurid theropod, possibly belonging to Unenlagiinae due to its similarity to Buitreraptor, Neuquenraptor and Pamparaptor, from the Late Cretaceous (Maastrichtian)-aged Serra da Galga Formation in the Ponto 1 do Price site of Brazil. The intended holotype, a partial right metatarsus preserving metatarsals II, III and IV, was discovered by Alberto Lopa during the 1950s, but the fossil was lost shortly after the death of Llewellyn Ivor Price in 1980 and has not been located since. "Lopasaurus" was briefly mentioned by Brum et al. in their description of Ypupiara lopai, where it was tentatively referred to Unenlagiinae. Brum et al. (2021) did not refer "Lopasaurus" to Ypupiara, which was found in the same formation. M Magulodon "Magulodon" is the name given to an as yet undescribed genus of dinosaur from the Early Cretaceous (Aptian to Albian stages, approximately 112 million years ago). It was a possible ornithischian, either an ornithopod or basal ceratopsian, which was discovered in the Arundel Formation of Maryland, United States. The type species, "Magulodon muirkirkensis", was coined by Kranz in 1996. It is a tooth taxon, based solely on a single tooth. Since it has not been formally described, it is also a nomen nudum. It was considered to be an indeterminate specimen in a paper which cited the intended type specimen but avoided using the name to prevent taxonomic clutter. Maltaceratops "Maltaceratops" is the informal name given to an as yet undescribed genus of centrosaurine ceratopsian from the Late Cretaceous (Campanian-aged) Judith River Formation of Montana. The proposed type species is "M. hammondorum", and the proposed holotype is a possible skull. It had been previously nicknamed the "Malta new taxon". Mangahouanga "Mangahouanga" (named after the stream of the same name), or "Joan Wiffen's theropod", is an informal name given to a theropod discovered in the Tahora Formation, New Zealand by Joan Wiffen, who considered it to be a possible megalosaurid in 1975. The vertebra was described by Molnar in 1981, and was regarded as an indeterminate theropod by Agnolin et al. in 2010. The name "Mangahouanga" was coined by Molina-Pérez & Larramendi (2016) and no species name was given. They estimated it to reach up to long and weigh up to ; it is represented by a single vertebra. Maojandino "Maojandino" is an informally named taxon of titanosaurid sauropod dinosaur from the Late Cretaceous Maastrichtian stage of Pakistan. The intended type species is "Maojandino alami." Marisaurus "Marisaurus" (meaning "Mari lizard", for the Mari tribe of Pakistan) is an informal taxon of titanosaurian sauropod from the Late Cretaceous of Balochistan, western Pakistan. The type species is "M. jeffi", described by Muhammad Sadiq Malkani in 2004, and it is based on tail vertebrae found in the Maastrichtian-age Vitakri Member of the Pab Formation. 
Much additional material, including a partial skull, many vertebrae, and a few hindlimb bones, was referred to this genus. "Marisaurus" was assigned to "Balochisauridae" with "Sulaimanisaurus", although the family was used as a synonym of Saltasauridae. It was considered invalid by Wilson, Barrett and Carrano (2011). Maroccanoraptor "Maroccanoraptor" is an informal name suggested for a supposed unenlagiine theropod from the Kem Kem Formation of Morocco; however, it lacks the requirements to become a valid taxon, thus leaving it a naked name. The intended type species is "M. elbegiensis", first described by Singer (2015) on the basis of a single coracoid. The fossil was later suggested to belong to a non-dinosaurian crocodyliform. Megacervixosaurus "Megacervixosaurus" (meaning "big neck lizard") is the informal name given to an as yet undescribed genus of herbivorous dinosaur from the Late Cretaceous Zonggo Formation of Tibet. It was a titanosaur sauropod which lived in what is now China. The type species, "Megacervixosaurus tibetensis", was coined by Chinese paleontologist Zhao Xijin in 1983. "Megacervixosaurus" has never been formally described, and remains a nomen nudum. Megapleurocoelus "Megapleurocoelus" is an informally named sauropod belonging to Flagellicaudata, from the Kem Kem Formation of Morocco; however, it lacks the requirements to become a valid taxon, thus leaving it a naked name. The intended type species is "M. menduckii", first described by Singer (2015), and the holotype is JP Cr376, a single centrum from a dorsal vertebra. Microcephale "Microcephale", also known as "Mycocephale" (meaning "tiny head"), is the informal name of a genus of very small pachycephalosaurid dinosaur, otherwise known as the "North American dwarf species", which lived during the Late Cretaceous. Its fossils were found in the late Campanian-age Dinosaur Park Formation, in Alberta, Canada. Not much is known about this dinosaur, as it has not yet been fully described; it is therefore a nomen nudum. The fossils of "Microcephale", including tiny skull caps, were first mentioned by paleontologist Paul Sereno in 1997, in a list of pachycephalosaurids. These skull caps measure less than 5 cm (2 in) each. No potential species name was given. Microdontosaurus "Microdontosaurus" (meaning "tiny-toothed lizard") is the name given to an as yet undescribed genus of sauropod dinosaur from China. It was named from fossils from the Middle Jurassic-age Dapuka Group of Tibet. The intended type species is "M. dayensis." As with other informal names created by Zhao in 1985 or 1983, it has not been used since then, and may have been redescribed under another name. Microvenator chagyabi "Microvenator chagyabi" is the informal name given to an as yet undescribed species of theropod dinosaur, likely belonging to Coelurosauria, from the Early Cretaceous Lura Formation of Tibet, China. It was coined by Zhao (1985) and the proposed holotype consists of a specimen including teeth. Mifunesaurus "Mifunesaurus" (meaning 'Mifune lizard') is a nomen nudum given to an extinct non-avian, non-maniraptoriform tetanuran theropod dinosaur from the Late Cretaceous (Cenomanian; ~96 Ma) Kabu Formation of Japan. The intended holotype of "Mifunesaurus", stored at the Mifune Dinosaur Museum with the tooth on display, consists only of a few bones, among which are a tibia, a phalanx, a metatarsus and a single tooth (the tooth catalogued as YNUGI 10003; the rest of the skeleton catalogued as MDM 341), discovered by N. & K. Wasada in 1979. 
The genus was informally coined by Hisa in 1985 and no epithet was given. The known tooth was too thick to be the tooth of a ceratosaurid, and too tall to belong to an abelisaurid, which suggests, based on its shape, that "Mifunesaurus" was probably a megalosauroid or a carnosaur. Mitchell ornithopod The "Mitchell ornithopod" is the informal nickname of an ornithopod dinosaur discovered near Mitchell, Oregon, the first described dinosaur from Oregon but not the first discovered; a hadrosaurid sacrum was discovered in the Late Cretaceous (Campanian)-aged Cape Sebastian Sandstone near Cape Sebastian during the 1960s and excavated in 1994 by Dave Taylor, but the remains of the Cape Sebastian ornithopod were not prepared for peer review and described until 2019, merely weeks after the Mitchell ornithopod was described. The single known bone, F118B00, was a toe bone, specifically the third phalanx of the central digit of the right hind foot, and was discovered by Gregory Retallack in 2015 while on an annual field trip with his students, in a layer of the Albian-aged Hudspeth Shale Formation; in 2021, Gloria Carr discovered another bone, this time a vertebra, that likely belonged to the same species of ornithopod. No excavation was required: the bone was found resting on the ground, and Retallack immediately knew it was different from the various marine fossils scattered nearby. The bone was described in 2018 by Gregory Retallack, Jessica Theodor, Edward Davis, Samantha Hopkins and Paul Barrett. It was part of a bloated carcass swept out into the ocean, likely originating from Idaho, although further studies, such as Strommer (2021), dispute this claim and suggest it may have been deposited by a mudflow. The bone was later compared to more complete remains of other ornithopods, and the "Mitchell ornithopod" bone most closely matched those of hadrosaurs and iguanodonts, although it was likely a basal ornithopod. Retallack believes that the bone belonged to a new genus, although there are not sufficient remains to support this claim. Moshisaurus Hisa (1985) used "Moshisaurus" (or "Moshi-ryu") for the incomplete sauropod humerus NSM PV17656, from the Early Cretaceous Miyako Group of Japan. Dong et al. (1990) and Hasegawa et al. (1991) referred it to Mamenchisaurus, but Azuma & Tomida (1998) and Barrett et al. (2002) assigned it to Sauropoda indet. N Newtonsaurus "Newtonsaurus" is an informally named genus erected for the theropod dinosaur species Zanclodon cambrensis. The species is based on the specimen BMNH R2912, an external mold of a dentary, which was discovered in the Late Triassic (Rhaetian) aged beds of the Lilstock Formation near Bridgend, Wales in 1898 and described by Edwin Tulley Newton in 1899. The taxon was reassigned to ?Megalosaurus by Molnar in 1990, which was followed by Peter Galton in publications in 1998 and 2005. The species is considered to be a nomen dubium, as it has no diagnostic features, and is considered to be a coelophysoid-grade theropod outside Averostra based on the low interdental plates and possession of only a single meckelian foramen. It has alternatively been suggested to possibly represent another indeterminate predatory archosaur. The name "Newtonsaurus" was coined in 1999 by Stephan Pickering, in reference to the describer. 
Paleontologists have avoided using the name "Newtonsaurus" since its appearance in 1999 in private publications, although "Zanclodon" cambrensis and Megalosaurus cambrensis have both been used for this taxon. Ngexisaurus "Ngexisaurus" is the informal name given to an as yet undescribed genus of theropod dinosaur, likely belonging to Avetheropoda, from the Middle Jurassic Dapuka Group of Tibet, China. The type species, "Ngexisaurus dapukaensis", was coined by Zhao in 1983. A synonym of "Ngexisaurus" coined by Zhao (1985) is "Megalosaurus" dapukaensis, and Fossilworks lists "M." dapukaensis as a megalosaurid tetanuran separate from "Ngexisaurus" proper. Nicksaurus "Nicksaurus" is an informally named titanosaurian sauropod dinosaur from the Late Cretaceous red muds of the Vitakri Formation of the Sulaiman Basin, Pakistan. The dinosaur shared a habitat with other sauropod dinosaurs including Khetranisaurus, Sulaimanisaurus, Pakisaurus, Gspsaurus, Saraikimasoom, and Maojandino. The intended type species is "Nicksaurus razashahi" and was first used by Malkani (2019). Nurosaurus "Nurosaurus" (Nur-o-saw-rus, meaning "Nur lizard") is the informal name for a genus of sauropod dinosaur. It is known from a large partial skeleton that was presented as soon-to-be-described by Zhiming Dong in 1992, when he gave the proposed binomial "Nurosaurus qaganensis". It was discovered in the Qagannur Formation of Inner Mongolia, southeast of Erenhot. The deposit is younger than the Psittacosaurus-bearing Guyang Group, but is still Early Cretaceous. It was found alongside the plates and scapula of a stegosaur. The foot of "Nurosaurus" is notable for a stress fracture present on the first phalanx of the fourth digit of the left foot, which was the first identified fracture of its kind; such fractures have since been identified on the phalanges and metatarsals of Apatosaurus, Barosaurus, Brachiosaurus, Camarasaurus, and Diplodocus. O Oharasisaurus "Oharasisaurus" is the name given to an as yet undescribed genus of somphospondylian sauropod, possibly belonging to the Euhelopodidae, from the Early Cretaceous Kuwajima Formation (Facies III layer) of Japan. The name "Oharasisaurus" was coined by Larramendi & Molina-Pérez (2020) and the holotype, a tooth, was first mentioned by Matsuoka (2000). Orcomimus "Orcomimus" (pronounced or-coh-mEYEm-us) is the name given to an as yet undescribed genus of dinosaur from the Late Cretaceous period 66 million years ago. The dinosaur was an ornithomimid which lived in what is now South Dakota, in the United States. The name was coined by Michael Triebold in 1997, but it has never been formally described and is currently a nomen nudum. "Orcomimus" was a bipedal theropod, but the dinosaur is known from only a pelvis and a hindlimb. "Orcomimus" is thought to be relatively advanced compared to other ornithomimids of its time, although this is hard to determine from the limited material. It may be referable to one of the ornithomimosaur species currently known from the Hell Creek Formation, where the holotype of "Orcomimus" was found. Oshanosaurus "Oshanosaurus" (meaning "Oshan lizard") is the informal name given to an as yet undescribed genus of sauropod dinosaur from the Early Jurassic period of Yunnan, China. Its fossils were found in the Lower Lufeng Series. The intended "type species", "Oshanosaurus youngi", was coined by Zhao in 1985. 
It has sometimes been associated with heterodontosaurids, which appears to be due to the juxtaposition of a species of Dianchungosaurus (formerly thought to be a heterodontosaurid) in the text of Zhao (1985). In 1971 Zhao Xijin discovered a dinosaur fossil at Dianchung in Eshan County, giving it the informal name "Oshanosaurus youngi". In their 2019 popular book Dinosaur Facts and Figures: The Theropods, Molina-Pérez and Larramendi suggested that it belonged to the theropod Eshanosaurus, but without elaboration. Osteoporosia "Osteoporosia" is an informally named theropod, belonging to either Carcharodontosauridae or Megaraptora, from the Kem Kem Formation of Morocco; however, it lacks the requirements to become a valid taxon, thus leaving it a naked name. The intended type species is "O. gigantea", first described by Singer (2015), and the holotype is JP Cr340, a tooth, with an indeterminate posterior or dorsal neural arch also known. A 2019 theropod faunal list found "Osteoporosia" to be a possible synonym of Sauroniops pachytholus. Otogosaurus "Otogosaurus" is an informally named sauropod from Inner Mongolia, China. The supposed type species is "Otogosaurus sarulai". It is known from partial postcranial remains, including a tibia long, and several footprints. It is named after Otog Banner in Inner Mongolia, where it was discovered, and Sarula, the girl who discovered the fossils. Despite sometimes being presented as a valid taxon, often with citations to Zhao (2004) or Zhao & Tan (2004), no such source has been located by scholars, so the name remains informal until a paper is discovered. P Pakisaurus "Pakisaurus" (meaning "Pakistan lizard") is an informal taxon of titanosaurian sauropod from the Late Cretaceous of Balochistan, western Pakistan, and also Gujarat, India. The proposed species is "P. balochistani", and it was named by M. Sadiq Malkani in 2006, based on isolated tail vertebrae found in the Maastrichtian-age Vitakri Member of the Pab Formation. In 2023, a femur discovered in the Lameta Formation of India was assigned to "Pakisaurus". It was considered invalid by Wilson, Barrett and Carrano (2011) during their description of a Jainosaurus cf. septentrionalis skeleton. "Anokhadino mirliaquati" was synonymised with "Pakisaurus balochistani" by Malkani (2019). Paw Paw scuteling The "Paw Paw scuteling" is the name used for a juvenile nodosaurid discovered in 1990 in the Paw Paw Formation of northern Fort Worth, Texas. It was discovered by John C. Maurice, the 12-year-old son of fossil collector John M. Maurice. The specimen consists of a partial skeleton including a third of the backbone, part of the skull, and both leg and arm elements. It is one of two or three nodosaurs known from the formation, alongside Pawpawsaurus and Texasetes, and one of the very few known specimens of a baby nodosaur. Some phylogenetic analyses have recovered it as sister to Niobrarasaurus. Although taxonomically indeterminate due to its life stage and fragmentary nature, it is often used in phylogenetic analyses for determining the taxonomic affinity of other nodosaur genera. Podischion "Podischion" is an informal genus of hadrosaurid dinosaur known from a skeleton discovered in 1911 on the Red Deer River in Alberta by a crew led by Barnum Brown. The remains were tentatively named "Podischion", which was not mentioned in published literature until Dingus & Norell (2010). It is possible that the skeleton represents an individual of Hypacrosaurus. 
Q Qaikshaheen "Qaikshaheen" is an informal genus of titanosaurian dinosaurs from the Late Cretaceous (Maastrichtian) Vitakri Formation of Pakistan described by Malkani (2023) in Scientific Research Publishing, a known predatory publisher. The proposed holotype specimen includes fragmentary cervical and dorsal vertebrae, partial pectoral and pelvic girdles, humeri, femora, a tibia, and fibulae. Other bones, including several vertebrae, ribs, a humerus, ulnae, metacarpals, metatarsals, a femur, and a partial pelvic girdle, were also referred. The intended type species is "Qaikshaheen masoomniazi." R Ronaldoraptor "Ronaldoraptor", also known as the "Mitrata" oviraptorid, is an undescribed oviraptorid from Mongolia and has been listed as "Oviraptor sp." The name was first used by Luis Rey in 2003, in his book A Field Guide to Dinosaurs: The Essential Handbook for Travelers in the Mesozoic, where he drew an illustration, captioning it "Ronaldoraptor". "Ronaldoraptor" may have been closely related to Citipati osmolskae. Rutellum "Rutellum" is the pre-Linnaean name given to a dinosaur specimen from the Late Jurassic (Oxfordian)-aged Coralline Oolite Formation. It was a sauropod, possibly a cetiosaurid, which lived in what is now England. The specimen (OU 1352), called "Rutellum impicatum", was described in 1699 by Edward Lhuyd alongside specimen OU 1358, which is now believed to be a Megalosaurus tooth crown, and is notable as the earliest named entity that is recognizable as a dinosaur. It was based on a tooth collected from Caswell, near Witney, Oxfordshire. Because "Rutellum impicatum" was named before 1758 (the official starting date for zoological nomenclature according to the ICZN), it is not considered a part of modern biological nomenclature. S Sabinosaurus "Sabinosaurus" or "Sabinosaurio" is a name used for PASAC-1, a partial skeleton of a hadrosaur that was discovered in the Sabinas Basin in Mexico in 2001. It was initially described as Kritosaurus sp. by Jim Kirkland and colleagues (2006), but considered an indeterminate saurolophine by Prieto-Márquez (2014). This skeleton, about 20% larger than other known specimens at around long, has a distinctively curved ischium and represents the largest well-documented North American saurolophine. The nasal bones are incomplete in the skull remains from this material. Saldamosaurus "Saldamosaurus" is an informal genus of stegosaurid dinosaur known from a complete braincase discovered in the Early Cretaceous Saldam Formation of Siberia, Russia. The type species, "Saldamosaurus tuvensis", was named in 2014, but according to Galton and Carpenter (2016) it did not meet the requirements of the International Code of Zoological Nomenclature and is hence a nomen nudum. Saltillomimus "Saltillomimus" is an informal name for an ornithomimid theropod from the Late Cretaceous (late Campanian) of the Cerro del Pueblo Formation in Mexico. It is known from SEPCP 16/237, a partial tail, most of a hindlimb, and forelimb bones, discovered in 1998, and the possible juvenile specimen SEPCP 16/221, a partial leg and hip bone, that was given the name "Saltillomimus rapidus" by Martha Carolina Aguillón Martinez in 2010. A skeletal reconstruction was put on display in 2014 at the Museo del Desierto, which served to highlight its robust thighs and unusual hips that combine primitive and advanced features seen in ornithomimosaurs from both Asia and North America. 
Named in Martinez's 2010 thesis, the taxon name is an invalid nomen ex dissertatione. Sanchusaurus "Sanchusaurus" (meaning "lizard from Sanchu") or "Sanchu-ryu" is an informal name for a possible ornithomimosaur dinosaur from the Early Cretaceous period of Asia. It is only known from a partial tail vertebra, found in Nakasato, Japan. Dong (1990) considered it synonymous with Gallimimus, but the large discrepancy in both age and location between the two species renders this opinion untenable. The genus has not been formally described and is considered a nomen nudum. It was first mentioned by Hisa in 1985. In 2006, it was shown that the animal was not fully grown and that the characters of the tail vertebra are not unique to ornithomimosaurs. Saraikimasoom "Saraikimasoom" (meaning 'innocent one') is an invalidly named genus of titanosaur dinosaur from the Vitakri Formation in Pakistan. The intended type species, "Saraikimasoom vitakri", was described by Sadiq Malkani in 2015, in a paper describing multiple Pakistani dinosaurs, such as "Gspsaurus", "Nicksaurus" and "Maojandino". "Saraikimasoom" is currently recognised as a nomen manuscriptum. Saraikisaurus "Saraikisaurus" (meaning "Saraiki lizard") is an invalid genus name proposed for a putative reptile found in the Late Cretaceous (Maastrichtian)-aged Vitakri Formation of Pakistan and possibly also the Lameta Formation of India. The intended type species is "Saraikisaurus minhui", known from the proposed holotype—a fragmentary dentary (GSP/MSM-157-16)—and a referred specimen—an incomplete vertebra (GSP/MSM-64-15). Malkani initially interpreted the dentary as belonging to a basal pterodactyloid and created the further monotypic family "Saraikisauridae" and subfamily "Saraikisaurinae" to house it. The name "Saraikisaurus" was first proposed by M. Sadiq Malkani at a 2013 conference. A later endeavor to describe it in 2015 was not peer-reviewed. In 2021, Malkani attempted to formally describe "Saraikisaurus" and other taxa in Scientific Research Publishing, a known predatory publisher. In 2024, he reinterpreted the specimen as instead belonging to a noasaurid theropod, redescribing it as a novel taxon in another unreviewed paper on ResearchGate. He assigned the fragmentary vertebra to this genus on the basis of compatible size and preservation styles, and apparent similarities to the corresponding bones in Laevisuchus. Shake-N-Bake theropod The "Shake-N-Bake theropod" is an undescribed species of coelophysoid from the Kayenta Formation, known from the partial skeleton MCZ 8817 within the collection of the Harvard Museum of Natural History. Shansaraiki "Shansaraiki" (meaning "respected Saraiki peoples") is an informal genus of theropod that was probably an abelisaur. The holotype was found in the Shalghara locality of the Late Cretaceous Vitakri Formation of Pakistan and consists of GSP/MSM-140-3 (symphysis), GSP/MSM-5-3 (mid-ramus with partial teeth bases) and GSP/MSM-57-3 (dorsal vertebrae), although these may belong to separate specimens as they were found apart from each other. The intended type species is "Shansaraiki insafi" and was first mentioned by Malkani (2022). Siamodracon "Siamodracon" is an invalidly named genus of stegosaurid dinosaur known from a single dorsal vertebra found in Thailand's Phu Kradung Formation. The type species, "Siamodracon altispinax", was named by Ulansky in 2014. According to Galton and Carpenter (2016), it did not meet the requirements of the International Code of Zoological Nomenclature. 
"Siamodracon" was the first thyreophoran dinosaur discovered in South East Asia. Sidormimus "Sidormimus" is an informal genus of noasaurid discovered in the Elrhaz Formation in Niger. It was discovered in 2000 by Chris Sidor and it was immediately described by Lyon on the Project Exploration website, with a photograph of the holotype. During the same year, on the National Geographic website, the same photograph of the holotype was labelled "Dogosaurus". It has also been referred to as the "Gadoufaoua noasaurid". In 2005, Sidor himself confirmed that "Sidormimus" was the Elrhaz noasaurid. "Sidormimus" has been mentioned by Paul Sereno three times. "Sidormimus" is known from a partial post cranial skeleton. Its neck and ribs were exposed when the holotype was discovered. Sinopeltosaurus "Sinopeltosaurus" is a dubious genus of extinct thyreophoran ornithischian dinosaur described by Roman Ulansky. The type and only species is "S. minimus" of the lower Jurassic Lufeng Formation of Yunnan China, based on an articulated set of ankle bones. The specimen is FMNH CUP 2338, and includes the distal tibia and fibula, distal tarsals, most metatarsals, and some phalanges. FMNH CUP 2338 was described in 2008 by Randall Irmis and Fabian Knoll, as one of the few definitive specimens of Ornithischia from the Early Jurassic based on features of the ankle and pes. In 2016, Peter Malcolm Galton and Kenneth Carpenter identified it as a nomen dubium, and listed it as Ornithischia indet., possible Thyreophora indet. Ulansky variously referred to it as "Sinopeltosaurus minimus" or "Sinopelta minima"; Galton and Carpenter, as the first revisers under ICZN, made the former official. Skaladromeus "Skaladromeus" or the "Kaiparowits ornithopod" is an ornithopod from the Kaiparowits Formation named in a 2012 thesis by Clint Boyd. The intended type species is "Skaladromeus goldenii". Sousatitan "Sousatitan" is the name given to an as yet undescribed genus of titanosaurian sauropod dinosaur from the Early Cretaceous-aged Rio Piranhas Formation of Brazil. The intended holotype consists of a left fibula, and "Sousatitan" was coined by Ghilardi et al. (2016). Stegotitanus "Stegotitanus" is the informal replacement genus name given to the stegosaur Stegosaurus ungulatus—resulting in the new combination "Stegotitanus" ungulatus—by Gregory S. Paul in the third edition of The Princeton Field Guide to Dinosaurs in 2024. Stegosaurus ungulatus fossils are known from the Late Jurassic (Kimmeridgian) upper Morrison Formation of Wyoming, US. "Stegotitanus" was one of the largest stegosaurs, at about long and in weight. Suciasaurus A fossil theropod (possibly a tyrannosaur) nicknamed "Suciasaurus rex" was discovered in 2012 at Sucia Island State Park in San Juan County of the U.S. State of Washington. It was the first dinosaur discovered in Washington state. The finding was announced when Burke Museum paleontologists published a discovery paper in PLoS ONE. Prompted by a petition from students at an elementary school at Parkland, near Tacoma, the Washington State Legislature introduced a bill in 2019 to make it the official state dinosaur. A renewed push came in 2021, though House Republicans, like Minority leader J. T. Wilcox, called it low priority versus the ongoing COVID-19 pandemic, and eventually the bill failed to pass, though in 2023 it passed. 
Sugiyamasaurus "Sugiyamasaurus" (meaning "Sugiyama lizard") is the informal name given to a few spatulate teeth belonging to a titanosauriform, possibly Fukuititan, which lived in Japan during the Early Cretaceous. The name was first printed by David Lambert in 1990 in the Dinosaur Data Book, and also appears in Lambert's Ultimate Dinosaur Book and in many on-line lists of dinosaurs. Since it has not been formally described, "Sugiyamasaurus" is a nomen nudum. Remains were found near Katsuyama City and were initially referred to Camarasauridae, but might belong to Fukuititan because they were unearthed in the same quarry as the Fukuititan material. Sulaimanisaurus "Sulaimanisaurus" (meaning "Sulaiman lizard", for the Sulaiman foldbelt) is an informal taxon of titanosaurian sauropod from the Late Cretaceous of Balochistan, western Pakistan (also spelled "Sulaimansaurus" in some early reports). The proposed species is "S. gingerichi", described by M. Sadiq Malkani in 2006, and it is based on seven tail vertebrae, found in the Maastrichtian-age Vitakri Member of the Pab Formation. Four additional tail vertebrae have been assigned to it. It was considered to be related to "Pakisaurus" and "Khetranisaurus" in the family "Pakisauridae" (used as a synonym of Titanosauridae). It was considered invalid by Wilson, Barrett and Carrano (2011). T Teihivenator "Teihivenator" ("strong hunter") is an improperly named taxon of tyrannosauroid coelurosaur from the Navesink Formation of New Jersey. It was suggested to contain the species "T." macropus, originally classified as a species of Dryptosaurus (= "Laelaps", a name preoccupied by a mite). It was suggested as a separate genus in 2017 by Chan-gyu Yun. The name "Teihivenator" is invalid because the publication naming it is online-only, which means that a ZooBank registration is required to be present in the article when published. However, the ZooBank registration was only added after initial publication, meaning that it fails the requirements for a validly published taxon. In 2017, a preprint paper by Chase Brownstein concluded that the remains of L. macropus are a mixture of tyrannosauroid and ornithomimid elements with no distinguishing characteristics, rendering the species a chimera and a nomen dubium. In 2018, Brownstein stated that a tibia of L. macropus catalogued as specimen AMNH FARB 2550 represents a tyrannosauroid that probably was distinct from Dryptosaurus, but not sufficiently so to base a taxon on. That Which Cannot Be Named "That Which Cannot Be Named" is the name given by Darren Naish to an undescribed associated skeleton of a small coelurosaur from the Wessex Formation of the Isle of Wight. The specimen is in private ownership and is currently inaccessible to researchers. It has been suggested that the specimen is possibly a tyrannosauroid. Tiantaisaurus "Tiantaisaurus", alternatively spelled "Tiantaiosaurus", is the name given to a specimen of therizinosaur from the Aptian-age Laijia Formation of Zhejiang, China. According to correspondence through the Dinosaur Mailing List, the former name (from a 2012 study) was the one intended to be used for an official description. After being discovered in 2005, it was first named in an unpublished manuscript written in 2007. The given species name was "T. sifengensis". The specimen consists of an ischium, an astragalus, a tibia, a femur, an incomplete pubis and ilium, and a large number of vertebrae from across the body. 
Tobasaurus "Tobasaurus" (meaning "Toba City lizard") is the informal name given to an as yet undescribed genus of euhelopodid sauropod dinosaur from the Early Cretaceous (Hauterivian–Barremian-aged) Matsuo Group of Japan. The proposed holotype is a partial skeleton (mostly limb bones), and "Tobasaurus" grew up to when fully grown. It is the inspiration for the Vivosaur "Toba" in the video game Fossil Fighters. Tonouchisaurus "Tonouchisaurus" (meaning "Tonouchi lizard") is the informal name given to an as yet undescribed genus of coelurosaurian dinosaur from the Early Cretaceous Period of Mongolia. The suggested "type species", "Tonouchisaurus mongoliensis", was first informally mentioned in a Japanese news article. It was notably small: less than in length. The specimen informally dubbed "Tonouchisaurus mongoliensis" is based on limb material, and the manual and pedal remains were initially reported to incorporate a complete didactyl manus and complete pes; Rinchen Barsbold therefore initially interpreted "Tonouchisaurus" as a tyrannosauroid, but he later noted that the manus is actually tridactyl and that the pes has a sub-arctometatarsalian condition. U Ubirajara "Ubirajara" (meaning "Lord of the Spear") is an informal genus of compsognathid theropod known from the Early Cretaceous Crato Formation of Brazil; it was discovered in 1995 and was named in 2020 in an "In Press" article that was later withdrawn due to the specimen having been illegally smuggled from Brazil to Germany. It is considered a nomen manuscriptum. Utetitan "Utetitan" is the informal name given to specimens of the titanosaur Alamosaurus from the Late Cretaceous (Maastrichtian) lower North Horn Formation of Utah, US, by Gregory S. Paul in the third edition of The Princeton Field Guide to Dinosaurs in 2024. Other titanosaurian bones from the upper Black Peaks Formation of Texas, US, may also be referable to this taxon. "Utetitan" was reportedly about long and in weight. The intended type species is "Utetitan zellaguymondeweyae." V Vectensia In 1982 Justin Delair informally named the genus "Vectensia" based on specimen GH 981.45, an armour plate. Like the holotype of Polacanthus it was found at Barnes High, but reportedly in an older layer of the Lower Wessex Formation. Blows in 1987 tentatively referred it to Polacanthus. Vitakridrinda "Vitakridrinda" is an informally named genus of abelisaurid theropod dinosaur from the Late Cretaceous of Balochistan, western Pakistan. The intended type species is "V. sulaimani". The discovery was made (along with other dinosaur specimens) near Vitakri by a team of palaeontologists from the Geological Survey of Pakistan, in rocks from the Maastrichtian-age Vitakri Member of the Pab Formation. Informally named in an abstract by M.S. Malkani in 2004 (to which Malkani [2006] attributes the name), it is based on partial remains including two thigh bones and a tooth. A partial snout and braincase were originally referred to the holotype, and additional vertebrae may also belong to this genus. However, the snout was later reclassified as a new genus of mesoeucrocodylian, Induszalim, while the braincase was later referred to Gspsaurus. Thomas Holtz gave a possible length of 6 meters (19.7 feet). Vitakrisaurus "Vitakrisaurus" is an informally named genus of noasaurid theropod dinosaur represented by only one known species, "Vitakrisaurus saraiki", which is the intended type species. 
It lived in the Late Cretaceous period, approximately 70 million years ago, during the Maastrichtian, in what is today the Indian subcontinent. Its fossils were found in Pakistan's Vitakri Formation. The holotype specimen, MSM-303-2, is a right foot with a seemingly tridactyl form and robust phalanges. It may belong to Noasauridae due to similarities with the foot of Velocisaurus, although inconsistencies within its brief description and a lack of comparison with other theropods within the article make formal classification difficult. The generic name references the Vitakri Member of the Pab Formation and combines this with the Greek suffix "saurus", meaning "reptile". The specific name honours the Saraiki people, who primarily live in southern Pakistan. However, like most dinosaur taxa named by M. Sadiq Malkani, it is probably a nomen nudum. Some authors consider "Vitakrisaurus" to be the same animal as "Vitakridrinda". W White Rock spinosaurid "White Rock spinosaurid" is the nickname of a giant spinosaur from the Vectis Formation of the Isle of Wight described in 2022. Its remains are so fragmentary that the describers refrained from naming it, but considered the name "Vectispinus". With vertebrae comparable in dimensions to Spinosaurus, it was likely among the largest theropods, with a length exceeding . X Xinghesaurus "Xinghesaurus" was the name given to a species of sauropod dinosaur, possibly a titanosauriform, in 2009, in the guidebook for the dinosaur expo "Miracle of Deserts", written by Hasegawa et al. No species name was given for the genus. Based on the skeletal mount, "Xinghesaurus" was likely around long and weighed around . Y Yibinosaurus "Yibinosaurus" (meaning "Yibin lizard") is the informal name given to an as yet undescribed genus of herbivorous dinosaur from the Early Jurassic. It was a sauropod which lived in what is now Sichuan, China. The suggested "type species", "Yibinosaurus zhoui", is briefly mentioned in the Chongqing Natural History Museum guidebook (2001) as under description by Chinese paleontologist Ouyang Hui. It was coined as a nomen ex dissertatione by Ouyang (2003), and is based on a specimen referred to Gongxianosaurus sp. nov. by Luo and Wang (1999). Yuanmouraptor "Yuanmouraptor" is an informally named carnosaur from Yuanmou County, China. It lived during the Middle Jurassic, between 174 and 163 million years ago, and it is known from ZLJ0115, a complete, articulated skull on display at an unknown Chinese museum (possibly the Lufeng Dinosaur Museum), alongside a reconstructed skeleton of "Yuanmouraptor". "Yuanmouraptor" was briefly mentioned in a 2014 guide book, and Hendrickx et al. (2019) classified it as a metriacanthosaurid. Yunxianosaurus "Yunxianosaurus" is the provisional name for a genus of titanosaurian dinosaurs from the Late Cretaceous of what is now Hubei, China. The type species, "Yunxianosaurus hubeinensis", was proposed by Chinese paleontologist Li Zhengqi in 2001. The fossils of "Yunxianosaurus" were found near Nanyang Prefecture. Li stated that the name "Yunxianosaurus" was a temporary label for ease of description, but that further field work and study of the fossils would be required before the genus could be given an official name. Z Zamyn Khondt oviraptorid The Zamyn Khondt oviraptorid is a nickname for the oviraptorid specimen IGM or GIN 100/42. 
Since the type skull of Oviraptor is so poorly preserved and crushed, the skull of IGM 100/42 has become the quintessential depiction of that dinosaur, even appearing in scientific papers with the label Oviraptor philoceratops. However, this distinctive-looking, tall-crested species has more features of the skull in common with Citipati than it does with Oviraptor and it may represent a second species of Citipati or possibly an entirely new genus, pending further study. See also List of dinosaur genera References External links Theropod Database Blog post clarifying sauropod nomina nuda from Zhao (1985) Lists of prehistoric reptiles Dinosaur-related lists Lists of prehistoric animal genera (alphabetic) Nomina nuda
List of informally named dinosaurs
Biology
20,460
2,155,690
https://en.wikipedia.org/wiki/List%20of%20hash%20functions
This is a list of hash functions, including cyclic redundancy checks, checksum functions, and cryptographic hash functions. Cyclic redundancy checks Adler-32 is often mistaken for a CRC, but it is not: it is a checksum. Checksums Universal hash function families Non-cryptographic hash functions Keyed cryptographic hash functions Unkeyed cryptographic hash functions See also Hash function security summary Secure Hash Algorithms NIST hash function competition Key derivation functions (category) References List Checksum algorithms Cryptography lists and comparisons
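The distinction noted above between Adler-32 and a true CRC can be illustrated directly. The following minimal Python sketch (an illustration, not part of the original list) uses the standard-library zlib module, which implements both functions:

import zlib

data = b"The quick brown fox jumps over the lazy dog"

# Adler-32 is built from two 16-bit modular sums, while CRC-32 is the
# remainder of polynomial division over GF(2); both yield 32-bit values
# but are entirely different algorithms.
print(f"Adler-32: {zlib.adler32(data):#010x}")
print(f"CRC-32:   {zlib.crc32(data):#010x}")

Because Adler-32 is a sum-based checksum, it detects fewer error patterns than a CRC of the same width, particularly for short inputs, which is why the classification above separates the two.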
List of hash functions
Technology
109
29,090,677
https://en.wikipedia.org/wiki/Observations%20of%20daily%20living
Observations of daily living (ODLs) are cues that people attend to in the course of their everyday life that inform them about their health. ODLs are different from signs, symptoms, and clinical indicators in that they are defined by the patient, and are not necessarily directly mapped to biomedical models of disease and illness. Examples of ODLs include observations about sleep patterns, exercise behavior (for example, as recorded by activity trackers), nutritional intake, attitudes and moods, alertness at work or in class, and environmental features such as clutter in the living or working space. Not all patient-generated data constitute ODLs. For example, a patient with diabetes may record their blood glucose levels every day at home, generating data to share with their clinician. That kind of patient-generated data is crucial to inform clinical decision making, but does not constitute ODLs. ODLs are typically defined by patients and their families because they are meaningful to them, and help them self-manage their health and make decisions about it. ODLs may well complement biomedical indicators and inform medical decision making by providing a more complete and holistic view of the patient as a whole person, provided they are properly integrated in clinical workflows and supported by health information technologies. See also Chronic care management Personal health record Self care Telehealth References Self-care Health informatics Living arrangements
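As a concrete illustration of the distinction drawn above, the following Python sketch models an ODL record next to a clinical measurement; all field names here are hypothetical assumptions chosen for illustration, not a published schema:

from dataclasses import dataclass
from datetime import date

@dataclass
class ODLRecord:
    observed_on: date
    category: str   # patient-chosen, e.g. "sleep", "mood", "clutter"
    value: str      # free-form; not coded against a biomedical terminology
    note: str = ""  # optional context supplied by the patient or family

# A clinical, patient-generated measurement, by contrast, is unit-bearing
# and maps onto a biomedical model (hypothetical dictionary layout):
glucose_reading = {"measure": "blood glucose", "value": 5.4, "unit": "mmol/L"}

odl = ODLRecord(date(2024, 1, 5), "sleep", "restless, woke twice before 3 am")

The point of the contrast is that the ODL carries patient-defined meaning, while the glucose reading is interpretable within a clinical workflow.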
Observations of daily living
Biology
279
1,717,863
https://en.wikipedia.org/wiki/Ram-air%20intake
A ram-air intake is any intake design which uses the dynamic air pressure created by vehicle motion, or ram pressure, to increase the static air pressure inside of the intake manifold on an internal combustion engine, thus allowing a greater mass flow through the engine and hence increasing engine power. Design features The ram-air intake works by reducing the intake air velocity by increasing the cross-sectional area of the intake ducting. When gas velocity goes down, the static pressure increases. The increased pressure in the air box will ultimately have a positive effect on engine output, as more oxygen will enter the cylinder during each engine cycle. Ram-air systems are used on high-performance vehicles, most often on motorcycles and performance cars. The 1990 Kawasaki Ninja ZX-11 C1 model used a ram-air intake, the very first on any production motorcycle. Ram-air was a feature on some cars in the sixties. It fell out of favor in the seventies, but has recently made a comeback. While ram-air intakes may increase the volumetric efficiency of an engine, they can be difficult to combine with carburetors, which rely on a venturi-engineered pressure drop to draw fuel through the main jet. As the pressurised ram-air may kill this venturi effect, the carburetor needs to be designed to take this into account, or, alternatively, the engine may need fuel injection. At low (subsonic) speeds, however, increases in static pressure are limited to a few percent. Aircraft Pitot sensors are used to measure ram pressure which, along with static pressure, is used to estimate the airspeed of an aircraft. See also Air filter Booster Ramjet Supercharger Turbocharger Diffuser (automotive) References Engine technology Automotive technologies Motorcycle engines
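The "few percent" figure for subsonic speeds stated above follows from the dynamic-pressure relation q = ½ρv². The following minimal Python sketch, assuming ideal loss-free pressure recovery at sea-level air density (an upper bound that a real airbox will not reach), shows the magnitudes involved:

RHO_SEA_LEVEL = 1.225  # air density at sea level, kg/m^3
P_ATM = 101_325.0      # standard atmospheric pressure, Pa

def ram_pressure_rise(speed_m_s: float) -> float:
    """Ideal dynamic pressure q = 1/2 * rho * v^2 available at a given speed."""
    return 0.5 * RHO_SEA_LEVEL * speed_m_s ** 2

for kmh in (100, 200, 300):
    v = kmh / 3.6  # convert km/h to m/s
    q = ram_pressure_rise(v)
    print(f"{kmh} km/h: {q:6.0f} Pa ({100 * q / P_ATM:.1f}% of ambient)")

At 100 km/h the ideal gain is roughly 0.5% of ambient pressure, and even at 300 km/h it is only about 4%, consistent with the limitation described above.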
Ram-air intake
Technology
356
63,156,542
https://en.wikipedia.org/wiki/International%20Photography%20Hall%20of%20Fame%20and%20Museum
The International Photography Hall of Fame and Museum in St. Louis, Missouri, honors those who have made great contributions to the field of photography. History In 1977, the first Hall of Fame and Museum opened in Santa Barbara, California, as a part of the Brooks Institute of Photography. A few years later, in 1983, the museum moved to Oklahoma City, and in 2013 it moved to its current location, St. Louis, Missouri. The IPHF is the first organization worldwide that recognizes significant contributors to the artistic craft and science of photography. In addition to an extensive collection of photographs and cameras, IPHF offers lectures and other educational opportunities covering all aspects of photography, past and present, for people of all ages. Hall of Fame inductees The IPHF inducts artists and individuals who have changed the industry with their photography or inventions. IPHF has more than 70 inductees and archives more than 30,000 images. Each year a nominating committee selects inductees based on their contributions to the art or science of photography and their impact on the history of photography. 1966 Inductees William Henry Fox Talbot 1968 Inductees George Eastman Mathew B. Brady 1971 Inductees Alfred Stieglitz 1973 Inductees George W. Harris 1974 Inductees Edward Steichen 1976 Inductees Robert Capa 1978 Inductees Erich Salomon 1979 Inductees Brassai Gertrude Kasebier Peter Henry Emerson 1980 Inductees Adolf Fassbender Pirie MacDonald Victor Hasselblad 1982 Inductees William Henry Jackson 1984 Inductees Ansel Adams August Sander Bill Brandt Dorothea Lange Edward Weston Eugene Atget Imogen Cunningham James Van Der Zee Oskar Barnack Paul Strand Walker Evans William Eugene Smith Yasuzo Nojima 1986 Inductees André Kertész Clarence White Diane Arbus Josef Sudek Timothy O'Sullivan 1989 Inductees Paul Lindwood Gittings 1991 Inductees Dr. Edwin Herbert Land 2000 Inductees Berenice Abbott 2001 Inductees Henri Cartier-Bresson Lewis Hine 2002 Inductees Carleton Watkins Gordon Parks Helmut Gernsheim 2003 Inductees Andre Adolphe-Eugene Disderi Peter Dombrovskis 2004 Inductees Frederick Scott Archer Robert Frank Ruth Bernhard 2005 Inductees Beaumont Newhall Harold Edgerton Manuel Alvarez Bravo 2006 Inductees Arnold Newman Richard Avedon 2007 Inductees Roger Fenton 2013 Inductees Yousuf Karsh 2016 Inductees Annie Leibovitz Ernst Haas Graham Nash John Knoll Ken Burns Margaret Bourke-White Sebastiao Salgado Steve Jobs Thomas Knoll Willard S. Boyle 2017 Inductees Anne Geddes Cindy Sherman Edward Curtis Ernest H. Brooks II Harry Benson James Nachtwey Jerry Uelsmann Kenny Rogers Ryszard Horowitz William Eggleston 2018 Inductees Joe Rosenthal Joel Bernstein John Sexton John Loengard Susan Meiselas Walter Iooss Jr. 2019 Inductees Bruce Davidson Elliott Erwitt Julia Margaret Cameron Mary Ellen Mark Olivia Parker Paul Nicklen Ralph Gibson Steve McCurry Tony Vaccaro 2020 Inductees Robert Adams Lynsey Addario Alfred Eisenstaedt Hiro Jay Maisel Duane Michals Carrie Mae Weems Henry Diltz 2021 Inductees Dawoud Bey Larry Burrows Philip-Lorca diCorcia David Douglas Duncan Sally Mann Pete Souza Joyce Tenneson Joel Sartore 2022 Inductees Edward Burtynsky Chester Higgins Graciela Iturbide Helen Levitt Danny Lyon Sarah Moon 2023 Inductees Nan Goldin Vivian Maier Bea Nettles Matika Wilbur Collection The IPHF collection focuses on photographic works from the 19th century to the present. 
In addition to photographs, the museum has a large collection of cameras and darkroom and studio tools dating back to the late 1800s. The entire collection consists of more than 6,000 historical cameras and photography tools and 30,000 photographs. Some of the 19th-century photographic tools include magic lanterns, a Praxinoscope Theatre, and an Edison Projecting Kinetoscope. Within the collection can be found a wide variety of photographic memorabilia, from historic manuals on processes and techniques to monographs of notable photographers. Exhibitions Retrospective, Phil Borges, October–December 2004 Alaska Wild, December 2004 – January 2005 In Plain Sight, Beaumont Newhall, January–April 2005 Stopping Time, Harold Edgerton, January–April 2005 Mestizjae, Manuel Alvarez Bravo, January–April 2005 Photography of Hugh Scott, The Oklahoma City National Memorial, 10 Years Remembering, April–July 2005 An Itinerant Eye, James Walden, July–December 2005 A Life In Photography, Arnold Newman, July–December 2005 Nicholas Orzio's Occupied Japan, Nicholas Orzio, February–May 2017 Vivian Maier, Vivian Maier, February–May 2018 Cabbagetown, Oraien Catledge, January–April 2019 40th Year Anniversary: Nanjing-St. Louis Sister City: Retrospective, April–July 2019 Moment By Moment, John Loengard, July–September 2019 2019 Hall of Fame Induction and Awards Exhibition, November 2019–March 2020 References Awards established in 1965 Halls of fame in Missouri Museums in St. Louis Photography awards Photography museums and galleries in the United States Science and technology halls of fame
International Photography Hall of Fame and Museum
Technology
1,080
43,967,340
https://en.wikipedia.org/wiki/Criminal%20%282016%20film%29
Criminal is a 2016 American action thriller film directed by Ariel Vromen and written by Douglas Cook and David Weisberg. The film is about a convict who is implanted with a dead CIA agent's memories to finish an assignment. The film stars Kevin Costner, Gary Oldman, and Tommy Lee Jones, in the second collaboration among all three following the 1991 film JFK. The film also features Alice Eve, Antje Traue and Gal Gadot, with the death of Ryan Reynolds's character early in the film setting the plot in motion. Principal photography began on September 4, 2014, in London. The film was produced by Campbell-Grobman Films and Millennium Films and was released on April 15, 2016. It received generally negative reviews from critics and was a financial disappointment, grossing $38.8 million against its $31.5 million budget. Plot Spanish industrialist-turned-anarchist Xavier Heimdahl arranges for his associate Jan Strook, a hacker known as "the Dutchman", to create a wormhole program that would allow its owner to bypass the computer codes protecting the world's nuclear defenses. The Dutchman panics and attempts to hand his secret over to CIA agent Bill Pope. Although Pope gets the Dutchman to a safe house and recovers the money to pay him for his services, he is caught by Heimdahl's men and tortured to death before he can tell anyone where he hid the Dutchman. Desperate to find the Dutchman, Pope's supervisor Quaker Wells contacts Dr. Micah Franks, who has developed a treatment that could theoretically plant the memory patterns of a dead person onto a living one. Franks requests that they "graft" Pope's knowledge into the brain of convict Jerico Stewart, whose frontal lobe was damaged through childhood abuse, leaving him effectively a sociopath. After the operation, Jerico escapes custody and fakes his death. He steals a maintenance van and goes to Pope's house, where he holds Pope's widow Jillian hostage while he looks for the money. As time goes on, he experiences memory flashes of Pope's past, but all he can determine is that the bag of money was hidden behind a bookshelf, without identifying where it or the Dutchman is kept. The CIA learns that the Dutchman is planning to sell the program to the Russians, believing that the CIA has betrayed him. The CIA is nonetheless able to find Jerico after he contacts Dr. Franks for medication using Pope's CIA codes. Jerico is beginning to develop emotions and draw on Pope's experience. As Jerico attempts to retrace the route Pope took to hide the Dutchman, Heimdahl creates a distraction at the airport that draws Wells' attention, allowing Heimdahl's accomplice and lover Elsa to try to capture Jerico, killing his CIA guards before Jerico escapes by driving a taxi off a bridge. Jerico retreats to the Popes' house, where he encounters Jillian and explains the situation to her. Although she initially fears him, Jillian comes to accept Jerico as she sees him bonding with her daughter, Emma, allowing Jerico to stay the night. The next morning, Jerico realizes through a conversation with Jillian that the bag is hidden in the rare books collection at the University of London where she works. He attempts to retrieve the bag but is captured by Heimdahl and Elsa once he has found it. Heimdahl threatens to kill Jillian and Emma unless Jerico takes him to the Dutchman.
With the CIA and a Russian strike team now seeking the Dutchman, Jerico, who has now recalled that Pope hid the Dutchman in Jillian's office at the university, escapes Elsa using an improvised nitroglycerine bomb, returning to the office to provide a hurried explanation to the Dutchman. Elsa finds them before they can escape, shooting Jerico in the shoulder and killing the Dutchman, but Jerico gets the upper hand and bludgeons her to death with a lamp. Jerico steals an ambulance and takes the flash drive containing the wormhole program to the airfield where Heimdahl is attempting an escape. Jerico saves Jillian and Emma, even as Heimdahl shoots him. As Heimdahl's plane takes off, Jerico reveals to Wells that he had the Dutchman reprogram the wormhole so that it would target the source of the next transmission. This results in Heimdahl unwittingly destroying his own plane when he tries to fire a missile at the airfield. A few months later, Jerico is shown on the beach where Pope and Jillian had their honeymoon. He is initially unresponsive, showing nothing but automatic reflexes and responses. With all other options exhausted, Wells and Franks take Jillian and Emma to see him. The sight of Pope's family confirms that some part of Pope exists in Jerico as he responds with a nose-tap, which was Pope and Jillian's way of saying "I love you". Witnessing this, Quaker reflects that he will offer Jerico a job. Cast In addition, Ryan Reynolds appears in the opening minutes of the film as CIA agent Bill Pope. Lara Decaro appears as Pope's daughter Emma. Production Development On June 20, 2013, it was announced that Millennium Films had acquired the script for Criminal, written by Douglas Cook and David Weisberg, an action film in which a dead CIA operative's memories, secrets, and skills are implanted into a dangerous criminal, who is sent on a government mission. J.C. Spink, Chris Bender, Matt O'Toole and Mark Gill were initially announced as producers, with Boaz Davidson later joining the production. On September 13, Millennium set Ariel Vromen to direct the film. Casting On June 17, 2014, Kevin Costner was cast to play a dangerous criminal with a dead CIA operative's skills, secrets, and memories implanted into him to finish a job. On July 10, Gary Oldman was in talks to join the film to play the CIA chief. On July 23, Tommy Lee Jones joined the film to play a neuroscientist who transplants the memories to the criminal, while Oldman's role was also confirmed. On August 4, Ryan Reynolds was added to the cast. On August 7, Alice Eve joined the cast. On August 11, Jordi Mollà joined the film in the villain role of Hagbardaka Heimdahl, who wants the dead CIA agent's secrets now implanted in the criminal's brain. On August 12, Gal Gadot signed on to star in the film as Reynolds' character's wife. On September 26, Antje Traue joined the film to play the villain's accomplice. Filming Principal photography on the film began on September 4, 2014, in London. Some actors and crew members were also spotted filming scenes for the film on King's Road in Kingston. From September 22 to 25, filming took place in Yateley, Hampshire, where actors were spotted filming car-crash and helicopter chase scenes at Blackbushe Airport. Filming was also done at Croydon College in Croydon, with the college building used as medical research labs and the CIA operations centre.
In October 2014, Connect 2 Cleanrooms installed a cleanroom in Surrey Quays Road, London, for the scene where Tommy Lee Jones' character operates on Kevin Costner's. On October 23, aerial drone filming was undertaken featuring Costner in a car chase scene on White's Row in East London. Some filming also took place at the SOAS University of London library. Filming also took place at Pinewood Studios. Music On December 9, 2014, it was announced that Haim Mazar had signed on to compose the music of the film. However, on June 10, 2015, it was announced that Brian Tyler and Keith Power had taken over scoring duties on the film, replacing Mazar. Release The film was to be released on January 22, 2016, in the United States, but in August 2015 the release was pushed back to April 15, 2016. Reception Box office Criminal grossed $14.7 million domestically (United States and Canada), and $24.1 million in other territories, for a worldwide total of $38.8 million, against a budget of $31.5 million. In the United States and Canada, the film was released alongside The Jungle Book and Barbershop: The Next Cut, and was projected to gross $9–12 million from 2,683 theaters in its opening weekend. The film ended up grossing just $5.8 million in its opening weekend, below expectations and among the worst wide-release openings of Costner's career, finishing 6th at the box office. Critical response In his review, Empire magazine's John Nugent wrote: "We can but pray that scientists invent a procedure to remove the memory of ever watching this film in the first place", and awarded the film 1 star out of 5. Writing for The Daily Telegraph, Tim Robey called it "wanton, low-down entertainment" and awarded it 2 stars out of 5. In his review for the BBC, Mark Kermode placed it fifth in his mid-year list of the Worst Films of 2016. References External links 2016 films 2016 action thriller films 2010s spy thriller films American action thriller films American spy thriller films Films about brain transplantation British action thriller films British spy thriller films British chase films American chase films 2010s English-language films Films scored by Brian Tyler Films about amnesia Films about the Central Intelligence Agency Films about computing Films about consciousness Films about terrorism Films directed by Ariel Vromen Films set in Hampshire Films set in London Films shot in Hampshire Films shot in London Summit Entertainment films Techno-thriller films Films with screenplays by Douglas S. Cook Films with screenplays by David Weisberg 2016 drama films Films shot at Pinewood Studios 2010s American films 2010s British films English-language action thriller films English-language spy thriller films
Criminal (2016 film)
Technology
2,083
67,222,715
https://en.wikipedia.org/wiki/Cerium%28III%29%20sulfide
Cerium(III) sulfide, also known as cerium sesquisulfide, is an inorganic compound with the formula Ce2S3. It is the sulfide salt of cerium(III) and exists as three polymorphs with different crystal structures. Its high melting point (comparable to silica or alumina) and chemically inert nature have led to occasional examination of potential use as a refractory material for crucibles, but it has never been widely adopted for this application. The distinctive red colour of two of the polymorphs (α- and β-Ce2S3) and aforementioned chemical stability up to high temperatures have led to some limited commercial use as a red pigment (known as cerium sulfide red). Synthesis The oldest syntheses reported for cerium(III) sulfide follow a typical rare earth sesquisulfide formation route, which involves heating the corresponding cerium sesquioxide to 900–1100 °C in an atmosphere of hydrogen sulfide: Ce2O3 + 3 H2S → Ce2S3 + 3 H2O Newer synthetic procedures utilise less toxic carbon disulfide gas for sulfurisation, starting from cerium dioxide which is reduced by the CS2 gas at temperatures of 800–1000 °C: 6 CeO2 + 5 CS2 → 3 Ce2S3 + 5 CO2 + SO2 Polymorphs Ce2S3 exists in three polymorphic forms: α-Ce2S3 (orthorhombic, burgundy colour), β-Ce2S3 (tetragonal, red colour), γ-Ce2S3 (cubic, black colour). They are analogous to the crystal structures of the likewise trimorphic Pr2S3 and Nd2S3. Following the synthetic procedures given above will yield mostly the α- and β-polymorphs, with the proportion of α-Ce2S3 increasing at lower temperatures (~700–900 °C) and with longer reaction times. The α-form can be irreversibly transformed into β-Ce2S3 by vacuum heating at 1200 °C for 7 hours. γ-Ce2S3 is then obtained by sintering β-Ce2S3 powder via hot pressing at an even higher temperature (1700 °C). α polymorph The α polymorph of cerium(III) sulfide contains both 7-coordinate and 8-coordinate cerium ions, with monocapped and bicapped trigonal prismatic coordination geometry, respectively. The sulfide ions are 5-coordinate. Two thirds of them adopt a square pyramidal geometry and one third adopt a trigonal bipyramidal geometry. γ polymorph The γ polymorph of cerium(III) sulfide adopts a cation-deficient form of the Th3P4 structure. 8 out of the 9 metal positions in the structure are occupied by cerium in γ-Ce2S3, with the remainder as vacancies. This composition can be represented by the formula Ce2.67S4. The cerium ions are 8-coordinate while the sulfide ions are 6-coordinate (distorted octahedral). Reactions Cerium(III) sulfide has been reported to react with bismuth compounds to form superconducting crystalline materials of the M(O,F)BiS2 family (for M = Ce). The reaction of Ce2S3 with Bi2S3 and Bi2O3 in a sealed tube at 950 °C gives the parent compound CeOBiS2: 3 Ce2S3 + Bi2S3 + 2 Bi2O3 → 6 CeOBiS2 This material is superconducting on its own, but the properties can be enhanced if it is doped with fluoride by including BiF3 in the reaction mixture. Applications Refractory material Cerium(III) and cerium(IV) sulfides were first investigated in the 1940s as part of the Manhattan Project, where they were considered, but eventually not adopted, as advanced refractory materials. Their suggested application was as the material in crucibles for the casting of uranium and plutonium metal.
Although the sulfide's properties (high melting point, large negative ΔfG°, and chemical inertness) are suitable and cerium is a relatively common element (66 ppm, about as much as copper), the danger of the traditional H2S-based production route and the difficulty in controlling the formation of the resulting Ce2S3/CeS solid mixture meant that the compound was ultimately not developed further for such applications. Pigment and other uses The main non-research use of cerium(III) sulfide is as a specialty inorganic pigment. The strong red hues of α- and β-Ce2S3, the non-prohibitive cost of cerium, and chemically inert behaviour up to high temperature are the factors which make the compound desirable as a pigment. Regarding other applications, the γ-Ce2S3 polymorph has a band gap of 2.06 eV and a high Seebeck coefficient, and it has therefore been proposed as a high-temperature semiconductor for thermoelectric generators. A practical implementation has not been demonstrated so far. References Sesquisulfides Cerium(III) compounds Refractory materials Inorganic pigments
Cerium(III) sulfide
Physics,Chemistry
1,113
75,099,468
https://en.wikipedia.org/wiki/Anne%20van%20den%20Nouweland
Anne van den Nouweland is a Dutch-American game theorist specializing in cooperative game theory, the game-based formation of complex networks, and their application in the design of communication networks. She works as a professor of economics at the University of Oregon. Education and career Van den Nouweland studied mathematics as an undergraduate at Nijmegen University in the Netherlands, graduating in 1984, and earned a master's degree there in 1989. Her master's thesis research applied intuitionism to the understanding of the Riemann–Stieltjes integral, supervised by Arnoud van Rooij and Wim Veldman. After two more years as a teaching assistant in the mathematics department at Nijmegen, she moved to the econometrics department at Tilburg University, also in the Netherlands, completing her Ph.D. there in 1993. Her doctoral dissertation, Games and Graphs in Economic Situations, was supervised by Stef Tijs. After completing her doctorate, she stayed on at Tilburg as an assistant professor and member of the CentER for Economic Research. She moved to the University of Oregon in 1996, was tenured there as an associate professor in 2001, and was promoted to full professor in 2007. Book Van den Nouweland is the coauthor of Social and Economic Networks in Cooperative Game Theory (with Marco Slikker, Kluwer Academic Publishers, 2001). References External links Home page Year of birth missing (living people) Living people Dutch emigrants to the United States 21st-century Dutch economists Dutch mathematicians Dutch women economists Dutch women mathematicians American economists American mathematicians American women economists American women mathematicians Game theorists Radboud University Nijmegen alumni Tilburg University alumni Academic staff of Tilburg University University of Oregon faculty
Anne van den Nouweland
Mathematics
357
3,655,571
https://en.wikipedia.org/wiki/Eastern%20Arabic%20numerals
The Eastern Arabic numerals, also called Indo-Arabic numerals (known as Arabic-Indic digits in Unicode), are the symbols used to represent numerical digits in conjunction with the Arabic alphabet in the countries of the Mashriq (the east of the Arab world) and the Arabian Peninsula, with a variant, the Persian numerals, used in other countries on the Iranian plateau and in Asia. The early Hindu–Arabic numeral system used a variety of shapes. It is unknown when the Western Arabic numeral shapes diverged from those of Eastern Arabic numerals; it is considered that 1, 2, 3, 4, 5, and 9 are related in both versions, but 6, 7 and 8 are from different sources. Origin The numeral system originates from an ancient Indian numeral system, which was re-introduced during the Islamic Golden Age in the book On the Calculation with Hindu Numerals, written by the Persian mathematician and engineer al-Khwarizmi, whose name was Latinized as Algoritmi. Other names These numbers are known as arqām hindiyyah ("Indian numerals") in Arabic. They are sometimes also called Indic numerals or Arabic–Indic numerals in English. However, that usage is sometimes discouraged as it can lead to confusion with Indian numerals, used in Brahmic scripts of the Indian subcontinent. Numerals Each numeral in the Persian variant has a different Unicode code point even if it looks identical to its Eastern Arabic numeral counterpart. However, the variants used with Urdu, Sindhi, and other languages of South Asia are not encoded separately from the Persian variants. Written numerals are arranged with their lowest-value digit to the right, with higher-value positions added to the left. That is identical to the arrangement used for Western Arabic numerals, even though Arabic script is read from right to left. Columns of numbers are usually arranged with the decimal points aligned. Negative signs are written to the right of magnitudes. In-line fractions are written with the numerator on the left and the denominator on the right of the fraction slash. The Arabic decimal separator (U+066B) or the comma is used as the decimal mark, as in 3.14159265358. The Arabic thousands separator (U+066C), a quote mark, or the Arabic comma (U+060C) may be used as a thousands separator, as in 1,000,000,000. Contemporary use Eastern Arabic numerals are in predominant use over Western Arabic numerals in many countries to the east of the Arab world, notably Iran and Afghanistan. In Arabic-speaking Asia, as well as Egypt and Sudan, both types of numerals are in use (and are often employed alongside each other), though Western Arabic numerals are increasingly used, including in Saudi Arabia. The United Arab Emirates uses both Eastern and Western Arabic numerals. In Pakistan, Western Arabic numerals are more extensively used digitally. Eastern numerals continue to see use in Urdu publications and newspapers, as well as signboards. In the Maghreb, only Western Arabic numerals are commonly used. In medieval times, these areas used a slightly different set (from which, via Italy, Western Arabic numerals derive). The Thaana writing system used for the Maldivian language adopted its first nine letters (haa, shaviyani, noonu, raa, baa, lhaviyani, kaafu, alifu, and vaavu) from Perso-Arabic digits. See also Arabic numerals Abjad numerals Notes References Numerals
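Because the Arabic-Indic digits (U+0660–U+0669) and their Persian variants (U+06F0–U+06F9) each occupy a contiguous Unicode block, converting between Western and Eastern digit shapes reduces to a fixed offset mapping. The following is a minimal Python sketch of that idea, added here for illustration (not part of any standard library for this purpose):

```python
# Map Western digits 0-9 onto the contiguous Unicode blocks for
# Arabic-Indic digits (U+0660-U+0669) and Persian digits (U+06F0-U+06F9).
WESTERN = "0123456789"
ARABIC_INDIC = "".join(chr(0x0660 + i) for i in range(10))
PERSIAN = "".join(chr(0x06F0 + i) for i in range(10))

to_arabic_indic = str.maketrans(WESTERN, ARABIC_INDIC)
to_persian = str.maketrans(WESTERN, PERSIAN)

print("1439".translate(to_arabic_indic))  # ١٤٣٩
print("1439".translate(to_persian))       # ۱۴۳۹
```

Note that only the digit shapes change; as described above, the ordering of digits (lowest value on the right) is the same as for Western Arabic numerals.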
Eastern Arabic numerals
Mathematics
782
38,034,777
https://en.wikipedia.org/wiki/D-Sight
D-Sight is a company that specializes in decision support software and associated services in the domains of project prioritization, supplier selection and collaborative decision-making. It was founded in 2010 as a spin-off from the Université Libre de Bruxelles (ULB). Their headquarters are located in Brussels, Belgium. Software products D-Sight has developed different software products that all aim at supporting complex decision-making processes. All products are distributed under the software-as-a-service model. The products are used in a wide variety of industries, such as energy and natural resources, chemicals and pharmaceuticals, and the NGO and public sectors. D-Sight Portfolio D-Sight Portfolio is a Project Portfolio Management (PPM) platform focused on early-stage decision-making. It allows users to: Collect and centralize data for project requests and build the business case Prioritize project proposals and evaluate the ranking Allocate resources to those proposals that add the most value to the organization, thereby optimizing the project portfolio D-Sight Sourcing D-Sight Sourcing is a strategic sourcing platform to standardize and justify the supplier selection process. D-Sight CDM D-Sight Collaborative Decision-Making (CDM) is a decision-making software product that offers a structured approach to data-based group decisions. Methodology The methodology used in these platforms is multi-criteria decision analysis (MCDA). Rather than looking at one single determinant to make decisions, MCDA methods consider multiple factors. They integrate both quantitative and qualitative information, and allow users to make informed rather than purely intuitive decisions. D-Sight's software products implement more specifically the Preference Ranking Organization Method for Enrichment Evaluation (PROMETHEE) and geometrical analysis for interactive decision aid (GAIA), multi-attribute utility theory (MAUT) and the analytic hierarchy process (AHP). References Decision support systems
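To illustrate the kind of computation a PROMETHEE-based tool performs (a generic textbook sketch, not D-Sight's actual implementation; the example project names and weights are invented), the following Python fragment computes PROMETHEE II net outranking flows with the simplest "usual" preference function, where any strictly positive difference on a criterion counts as full preference:

```python
import numpy as np

def promethee_ii(scores, weights):
    """Net outranking flows (PROMETHEE II).

    scores:  (n_alternatives, n_criteria) array, higher is better.
    weights: per-criterion weights summing to 1.
    Returns the net flow phi for each alternative; rank by descending phi.
    """
    n = scores.shape[0]
    phi_plus = np.zeros(n)   # how strongly each alternative outranks the rest
    phi_minus = np.zeros(n)  # how strongly it is outranked
    for a in range(n):
        for b in range(n):
            if a == b:
                continue
            # "Usual" preference function: full preference on every criterion
            # where alternative a strictly beats alternative b.
            pref = sum(w for j, w in enumerate(weights) if scores[a, j] > scores[b, j])
            phi_plus[a] += pref / (n - 1)
            phi_minus[b] += pref / (n - 1)
    return phi_plus - phi_minus

# Three hypothetical project proposals scored on cost-adjusted value,
# strategic fit and risk (all rescaled so that higher is better).
projects = np.array([[0.7, 0.9, 0.4],
                     [0.8, 0.5, 0.6],
                     [0.3, 0.6, 0.9]])
print(promethee_ii(projects, [0.5, 0.3, 0.2]))
```

Real PROMETHEE implementations support several preference functions with per-criterion indifference and preference thresholds; the ranking idea, however, is exactly this pairwise comparison aggregated into net flows.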
D-Sight
Technology
383
40,384,839
https://en.wikipedia.org/wiki/Tetrahedral%20cupola
In 4-dimensional geometry, the tetrahedral cupola is a polychoron bounded by one tetrahedron and a parallel cuboctahedron, connected by 10 triangular prisms and 4 triangular pyramids. Related polytopes The tetrahedral cupola can be sliced off from a runcinated 5-cell, on a hyperplane parallel to a tetrahedral cell. The cuboctahedron base passes through the center of the runcinated 5-cell, so the tetrahedral cupola contains half of the tetrahedron and triangular prism cells of the runcinated 5-cell. The cupola can be seen in the A2 and A3 Coxeter plane orthogonal projections of the runcinated 5-cell. See also Tetrahedral pyramid (5-cell) References External links Segmentochora: tetaco, tet || co, K-4.23 4-polytopes
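As a quick tally of the counts given above (an illustrative check, not stated in the article itself):

$$N_{\text{cells}} = \underbrace{1}_{\text{tetrahedron}} + \underbrace{1}_{\text{cuboctahedron}} + \underbrace{10}_{\text{prisms}} + \underbrace{4}_{\text{pyramids}} = 16,$$

and since every vertex of a cupola lies on one of its two parallel cells, the vertex count is $N_{\text{vertices}} = 4 + 12 = 16$.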
Tetrahedral cupola
Mathematics
190
42,447,237
https://en.wikipedia.org/wiki/Reverse%20Transcription%20Loop-mediated%20Isothermal%20Amplification
Reverse transcription loop-mediated isothermal amplification (RT-LAMP) is a one-step nucleic acid amplification method used to multiply specific sequences of RNA. It is used to diagnose infectious diseases caused by RNA viruses. It combines LAMP DNA detection with reverse transcription, making cDNA from RNA before running the reaction. RT-LAMP does not require thermal cycles (unlike PCR) and is performed at a constant temperature between 60 and 65 °C. RT-LAMP is used in the detection of RNA viruses (groups III, IV, and V in the Baltimore virus classification system), such as the SARS-CoV-2 virus and the Ebola virus. Applications RT-LAMP is used to test samples for the presence of specific viral RNA sequences, which are identified by comparing them against a large external database of reference sequences. Detection of the SARS-CoV-2 virus The RT-LAMP technique is being promoted as a cheaper and easier alternative to RT-PCR for the early diagnosis of people who are infectious with COVID-19. There are open-access test designs (including the recombinant proteins), which make it legally possible for anyone to produce a test. In contrast to classic rapid tests by lateral flow, RT-LAMP allows the early diagnosis of the disease by testing for the viral RNA. The tests can be done without prior RNA isolation, detecting the virus directly from swabs or saliva. Detection of non-human viruses One example use case of RT-LAMP was an experiment to detect a new duck Tembusu-like virus, BYD virus, named after the region, Baiyangdian, where it was first isolated. Another application of this method was in a 2013 experiment to detect an Akabane virus using RT-LAMP. The experiment, done in China, isolated the virus from aborted calf fetuses. Detection of body fluids RT-LAMP is also being used in forensic serology to identify body fluids. Researchers have done experiments to show that this method can effectively identify certain body fluids. Knowing there would be limitations, Su et al. came to the conclusion that RT-LAMP was only able to identify blood. Methodology Reverse transcription A specific sequence of the cDNA is detected by 4 LAMP primers. Two of them are inner primers (FIP and BIP), which serve as the base for the Bst enzyme to copy the template into new DNA. The outer primers (F3 and B3) anneal to the template strand and help the reaction to proceed. As in the case of RT-PCR, the RT-LAMP procedure starts by making DNA from the sample RNA. This conversion is performed by a reverse transcriptase, an enzyme derived from retroviruses that is capable of making such a conversion. This DNA derived from RNA is called cDNA, or complementary DNA. The FIP primer is used by the reverse transcriptase to build a single strand of copy DNA. The F3 primer binds to this side of the template strand as well, and displaces the previously made copy. Amplification This displaced, single-stranded copy carries the target sequence together with primer-derived ends. The primers are designed so that their ends bind to a sequence on the strand itself, forming a loop. The BIP primer binds to the other end of this single strand and is used by the Bst DNA polymerase to build a complementary strand, making double-stranded DNA. The B3 primer binds to this end and displaces, once again, this newly generated single-stranded DNA molecule. This new single strand that has been released will act as the starting point for the LAMP cycling amplification.
This single-stranded DNA has a dumbbell-like structure, as the ends fold back and bind to themselves, forming two loops. The DNA polymerase and the FIP or BIP primers keep amplifying this strand and the LAMP reaction product is extended. This cycle can be started from either the forward or backward side of the strand using the appropriate primer. Once this cycle has begun, the strand undergoes self-primed DNA synthesis during the elongation stage of the amplification process. This amplification takes place in less than an hour, under isothermal conditions between 60 and 65 °C. Read out The read-out of RT-LAMP tests is frequently colorimetric. Two of the common approaches are based on measuring either pH or magnesium ions. The amplification reaction causes the pH to fall and Mg2+ levels to drop. This can be detected by indicators such as phenol red, for pH, and hydroxynaphthol blue (HNB), for magnesium. Another option is to use SYBR Green I, a DNA-intercalating dye. Advantages and disadvantages This method is particularly advantageous because it can all be done quickly in one step. The sample is mixed with the primers, reverse transcriptase and DNA polymerase, and the reaction takes place at a constant temperature. The required temperature can be achieved using a simple hot water bath. PCR requires thermocycling; RT-LAMP does not, making it more time-efficient and very cost-effective. This inexpensive and streamlined method can be more readily used in developing countries that do not have access to high-tech laboratories. A disadvantage of this method is generating the sequence-specific primers. For each LAMP assay, primers must be specifically designed to be compatible with the target DNA. This can be difficult, which discourages researchers from using the LAMP method in their work. There is, however, a free software tool called Primer Explorer, developed by Fujitsu in Japan, which can aid in the selection of these primers. See also Loop-mediated isothermal amplification References External links LAMP Primer Explorer MorphoCatcher, a tool for design of species-specific primers Scholia page for RT-LAMP Open access protocols for RT-LAMP to detect SARS-CoV-2 Molecular biology techniques RNA
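Primer design ultimately rests on base-pairing: a primer anneals to the template strand whose reverse complement it matches. Below is a minimal Python sketch of that check (an illustration only; real primer-design tools such as Primer Explorer also account for melting temperature, secondary structure and loop geometry, and the function names here are hypothetical):

```python
# Watson-Crick pairing: A<->T, C<->G.
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq: str) -> str:
    """Return the reverse complement of a DNA sequence (5'->3')."""
    return seq.translate(COMPLEMENT)[::-1]

def anneals_to(primer: str, template_region: str) -> bool:
    # A primer binds a template region if it equals that region's
    # reverse complement (a perfect match is assumed for simplicity).
    return primer == reverse_complement(template_region)

print(reverse_complement("ATGC"))   # GCAT
print(anneals_to("GCAT", "ATGC"))   # True
```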
Reverse Transcription Loop-mediated Isothermal Amplification
Chemistry,Biology
1,234
22,631,535
https://en.wikipedia.org/wiki/List%20of%20software%20for%20nuclear%20engineering
With the decreased cost and increased capabilities of computers, nuclear engineering has incorporated computer software, from computer codes to mathematical models, into all facets of the field. There is a wide variety of fields associated with nuclear engineering, but computers and associated software are used most often in design and analysis. Neutron kinetics, thermal-hydraulics, and structural mechanics are all important in this effort. Each software package needs to be tested and verified before use. The codes can be separated by use and function. Most of the software is written in C and Fortran. Monte Carlo Radiation Transport Geant4 (CERN) McCARD (KAIST) MCNP (LANL) OpenMC PHITS (JAEA) SCALE (KENO V and KENO VI) (ORNL) Serpent (VTT) TRIPOLI-4 (CEA) Transmutation, fuel depletion ACAB code Activation and transmutation calculations for nuclear applications ORIP_XXI code Isotope transmutation simulations ORILL Code 1D transmutation, fuel depletion (burn-up) and radiological protection code FISPACT-II Multiphysics, inventory and source-term code MURE Serpent-MCNP utility for Reactor Evolution VESTA Monte Carlo depletion interface code Reactor Systems Analysis Particle Accelerators and High Voltage Machines Magnetic Fusion Research Toolkit PyNE The Nuclear Engineering Toolkit Deterministic Radiation Transport CASMO5 (Studsvik) HELIOS-2 (Studsvik) SCALE (ORNL) MPACT (ORNL) THOR nTRACER (Seoul National University) Steady-state Reactor Analysis SIMULATE5 Spatial Kinetics PARCS SIMULATE-3K NESTLE Thermal-Hydraulics ATHLET (GRS, Gesellschaft für Anlagen- und Reaktorsicherheit) TRACE (NRC) SPACE (KEPCO) RELAP5-3D (Idaho National Laboratory) GOTHIC (Numerical Advisory Solutions) CATHARE (CEA) FLICA-4 (CEA) RETRAN (RETRAN-02 and RETRAN-3D) VIPRE-01 PROTO-FLO PROTO-HX PROTO-HVAC PROTO-Sprinkler Computational Fluid Dynamics CFX (ANSYS) FLUENT (ANSYS) StarCD (Siemens) STAR-CCM+ (Siemens) LOGOS COBRA-TF TransAT code_saturne (EDF) neptune_cfd (EDF) Trio_CFD (CEA) Severe Accident ATHLET-CD (GRS) MELCOR (Sandia National Laboratories) MAAP (EPRI) ASTEC (IRSN and GRS) Many codes are supported by the U.S. Nuclear Regulatory Commission (NRC). These include SCALE, PARCS, TRACE (formerly RELAP5 and TRAC-B), MELCOR, and many others. http://www.nrc.gov/about-nrc/regulatory/research/safetycodes.html See also Safety code (nuclear reactor) Computational science Computational physics Computer simulation List of software for nanostructures modeling References External links http://www.min.uc.edu/nuclear/current_research/sinema-research/codes-of-interest https://www.nrc.gov/about-nrc/regulatory/research/safetycodes.html http://www.oecd-nea.org/tools/abstract/list http://www.ne.anl.gov/codes/ http://www.irsn.fr/EN/Research/Scientific-tools/Computer-codes/Pages/Computer-codes-2624.aspx https://www.oecd-nea.org/tools/abstract/list/category/* Nuclear technology Physics software
List of software for nuclear engineering
Physics
786
1,465,844
https://en.wikipedia.org/wiki/Deutschlandsender%20Herzberg/Elster
The Deutschlandsender III was a 500 kilowatt longwave transmitter, erected in 1938/39 near Herzberg, Brandenburg, in Germany. Used for the Deutschlandsender radio broadcasts, its guyed mast was the tallest construction in Europe and the second tallest in the world. Construction The Deutschlandsender III used a tall guyed steel lattice mast of triangular cross-section. This was used as a mast radiator and was therefore mounted on a steatite insulator. At the top of the mast was a lens-like electrical lengthening structure. Because the mast was under high voltage during transmission, the aircraft warning lighting was realized in a very unconventional manner: on small poles near the mast, multiple rotating searchlights were mounted, which illuminated the lens-like structure at the top. It was planned to expand the facility into a circular group antenna, for which ten further masts were to be built on a circle around the central mast. In 1944, construction of a backup antenna began on the location of the planned mast No. 9, in the form of a triangle antenna carried by three masts. This antenna could not be completed as a result of the war. On 21 April 1945 the transmitter was severely damaged by Allied bombing. The mast remained unimpaired, but it was dismantled by the Soviet occupation troops, a task that lasted from July 1946 to 23 December 1947. The other parts of the facility were dismantled in 1959, when waterworks were built on the former station area. Nevertheless, there are still some remnants of the base visible at the location. It is unknown what happened to the mast after it was dismantled. It is sometimes claimed that it was rebuilt in Ukraine, as "Kiev" was scrawled on the containers the components were transported in. See also List of masts External links http://www.skyscraperpage.com/diagrams/?b45271 Former radio masts and towers Radio masts and towers in Germany Buildings and structures in Elbe-Elster Demolished buildings and structures in Germany History of telecommunications in Germany Towers completed in 1939 Lost objects Elbe-Elster Land 1939 establishments in Germany
Deutschlandsender Herzberg/Elster
Physics
457
1,575,837
https://en.wikipedia.org/wiki/B-type%20main-sequence%20star
A B-type main-sequence star (B V) is a main-sequence (hydrogen-burning) star of spectral type B and luminosity class V. These stars have from 2 to 16 times the mass of the Sun and surface temperatures between 10,000 and 30,000 K. B-type stars are extremely luminous and blue. Their spectra have strong neutral helium absorption lines, which are most prominent at the B2 subclass, and moderately strong hydrogen lines. Examples include Regulus, Algol A and Acrux. History This class of stars was introduced with the Harvard sequence of stellar spectra and published in the Revised Harvard Photometry catalogue. The definition of B-type stars was the presence of non-ionized helium lines with the absence of singly ionized helium in the blue-violet portion of the spectrum. All of the spectral classes, including the B type, were subdivided with a numerical suffix that indicated the degree to which they approached the next classification. Thus B2 is 1/5 of the way from type B (or B0) to type A. Later, however, more refined spectra showed lines of ionized helium for stars of type B0. Likewise, A0 stars also show weak lines of non-ionized helium. Subsequent catalogues of stellar spectra classified the stars based on the strengths of absorption lines at specific frequencies, or by comparing the strengths of different lines. Thus, in the MK classification system, the spectral class B0 has the line at wavelength 439 nm being stronger than the line at 420 nm. The Balmer series of hydrogen lines grows stronger through the B class, then peaks at type A2. The lines of ionized silicon are used to determine the sub-class of the B-type stars, while magnesium lines are used to distinguish between the temperature classes. Properties Type-B stars do not have a corona and lack a convection zone in their outer atmosphere. They have a higher mass-loss rate than smaller stars such as the Sun, and their stellar wind has velocities of about 3,000 km/s. The energy generation in main-sequence B-type stars comes from the CNO cycle of thermonuclear fusion. Because the CNO cycle is very temperature sensitive, the energy generation is heavily concentrated at the center of the star, which results in a convection zone about the core. This results in a steady mixing of the hydrogen fuel with the helium byproduct of the nuclear fusion. Many B-type stars have a rapid rate of rotation, with an equatorial rotation velocity of about 200 km/s. Be and B[e] stars Spectral objects known as "Be stars" are massive yet non-supergiant entities that notably have, or had at some time, one or more Balmer lines in emission; the hydrogen-related emission series projected out by these stars is of particular scientific interest. Be stars are generally thought to feature unusually strong stellar winds, high surface temperatures, and significant attrition of stellar mass as the objects rotate at a curiously rapid rate, all of this in contrast to many other main-sequence star types. Objects known as B[e] stars are distinct from Be stars in having unusual neutral or low-ionization emission lines that are considered to have 'forbidden mechanisms', something denoted by the use of the square brackets. In other words, these particular stars' emissions appear to undergo processes not normally allowed under first-order perturbation theory in quantum mechanics. The definition of a B[e] star can include blue giants and blue supergiants.
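To make the luminosity claim above concrete, a rough Stefan–Boltzmann estimate can be written for a mid-range B dwarf (the values T ≈ 20,000 K and R ≈ 4 R☉ are illustrative assumptions, not figures from this article):

$$\frac{L}{L_{\odot}} = \left(\frac{R}{R_{\odot}}\right)^{2}\left(\frac{T_{\mathrm{eff}}}{T_{\odot}}\right)^{4} \approx 4^{2}\left(\frac{20\,000\ \mathrm{K}}{5\,772\ \mathrm{K}}\right)^{4} \approx 16 \times 144 \approx 2300,$$

i.e. a luminosity of order a few thousand Suns, consistent with the statement that B-type stars are extremely luminous.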
Spectral standard stars The revised Yerkes Atlas system (Johnson & Morgan 1953) listed a dense grid of B-type dwarf spectral standard stars; however, not all of these have survived to this day as standards. The "anchor points" of the MK spectral classification system among the B-type main-sequence dwarf stars, i.e. those standard stars that have remained unchanged since at least the 1940s, are Thabit (B0 V), Haedus (B3 V), and Alkaid (B3 V). Besides these anchor standards, the seminal review of MK classification by Morgan & Keenan (1973) listed "dagger standards" of Paikauhale (B0 V), Omega Scorpii (B1 V), 42 Orionis (B1 V), 22 Scorpii (B3 V), Rho Aurigae (B5 V), and 18 Tauri (B8 V). The Revised MK Spectra Atlas of Morgan, Abt, & Tapscott (1978) further contributed the standards Acrab (B2 V), 29 Persei (B3 V), HD 36936 (B5 V), and HD 21071 (B7 V). Gray & Garrison (1994) contributed two B9 V standards: Omega Fornacis and HR 2328. The only published B4 V standard is 90 Leonis, from Lesh (1968). There has been little agreement in the literature on the choice of a B6 V standard. Chemical peculiarities Some of the B-type stars of stellar class B0–B3 exhibit unusually strong lines of non-ionized helium. These chemically peculiar stars are termed helium-strong stars. They often have strong magnetic fields in their photosphere. In contrast, there are also helium-weak B-type stars with understrength helium lines and strong hydrogen spectra. Other chemically peculiar B-type stars are the mercury-manganese stars with spectral types B7-B9. Planets B-type stars known to have planets include the main-sequence B-type stars HIP 78530 and HD 129116. See also Herbig Ae/Be star Star count References Star types
B-type main-sequence star
Astronomy
1,182
53,843,628
https://en.wikipedia.org/wiki/Chemical%20bonding%20of%20water
Water (H2O) is a simple triatomic bent molecule with C2v molecular symmetry and a bond angle of 104.5° between the central oxygen atom and the hydrogen atoms. Despite being one of the simplest triatomic molecules, its chemical bonding scheme is nonetheless complex, as many of its bonding properties such as bond angle, ionization energy, and electronic state energies cannot be explained by one unified bonding model. Instead, several traditional and advanced bonding models such as the simple Lewis and VSEPR structures, valence bond theory, molecular orbital theory, isovalent hybridization, and Bent's rule are discussed below to provide a comprehensive bonding model for water, explaining and rationalizing the various electronic and physical properties and features manifested by its peculiar bonding arrangements. Lewis structure and valence bond theory The Lewis structure of water describes the bonds as two sigma bonds between the central oxygen atom and the two peripheral hydrogen atoms, with oxygen having two lone pairs of electrons. Valence bond theory suggests that water is sp3 hybridized, in which the 2s atomic orbital and the three 2p orbitals of oxygen are hybridized to form four new hybridized orbitals, which then participate in bonding by overlapping with the hydrogen 1s orbitals. As such, the predicted shape and bond angle of sp3 hybridization is tetrahedral and 109.5°. This is in approximate agreement with the true bond angle of 104.45°. The difference between the predicted bond angle and the measured bond angle is traditionally explained by the electron repulsion of the two lone pairs occupying two sp3 hybridized orbitals. While valence bond theory is suitable for predicting the geometry and bond angle of water, its prediction of electronic states does not agree with the experimentally measured reality. In the valence bond model, the two sigma bonds are of identical energy and so are the two lone pairs, since they both reside in the same bonding and nonbonding orbitals, thus corresponding to two energy levels in the photoelectron spectrum. In other words, if water were formed from two identical O-H bonds and two identical sp3 lone pairs on the oxygen atom as predicted by valence bond theory, then its photoelectron spectrum (PES) would have two peaks, one for the two O-H bonds and the other for the two sp3 lone pairs. However, the photoelectron spectrum of water reveals four different energy levels that correspond to the ionization energies of the two bonding and two nonbonding pairs of electrons at 12.6 eV, 14.7 eV, 18.5 eV, and 32.2 eV. This suggests that neither the two O-H bonds nor the two sp3 lone pairs are degenerate in energy. Molecular orbital treatment Simple In contrast to localizing electrons within their atomic orbitals in valence bond theory, the molecular orbital approach considers electrons to be delocalized across the entire molecule. The simple MO diagram of water is shown on the right. Following simple symmetry treatments, the 1s orbitals of the hydrogen atoms are premixed as a1 and b1. Orbitals of the same symmetry and similar energy levels can then be mixed to form a new set of molecular orbitals with bonding, nonbonding, and antibonding characteristics. In the simple MO diagram of water, the 2s orbital of oxygen is mixed with the premixed hydrogen orbitals, forming a new bonding (2a1) and antibonding orbital (4a1). Similarly, the 2p orbital (b1) and the other premixed hydrogen 1s orbitals (b1) are mixed to make bonding orbital 1b1 and antibonding orbital 2b1. The two remaining 2p orbitals are unmixed.
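The energetic effect of mixing two orbitals of the same symmetry, used repeatedly in this treatment, can be stated compactly (a standard two-level textbook result, added here for illustration). For two interacting orbitals with energies ε₁ < ε₂ and interaction matrix element H₁₂, the mixed levels are

$$E_{\mp} = \frac{\varepsilon_{1}+\varepsilon_{2}}{2} \mp \sqrt{\left(\frac{\varepsilon_{2}-\varepsilon_{1}}{2}\right)^{2} + H_{12}^{2}},$$

and in the weak-interaction limit the stabilization of the lower combination reduces to the second-order perturbation form

$$\Delta E \approx \frac{H_{12}^{2}}{\varepsilon_{2}-\varepsilon_{1}},$$

which is the quantitative content of the statement below that the amount of mixing is inversely proportional to the initial energy difference between the orbitals.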
While this simple MO diagram does not provide four different energy levels as experimentally determined from the PES, the two bonding orbitals are nonetheless distinctly different, thus differentiating the energy levels of the bonding electrons. Hybridized To further distinguish the electron energy differences between the two nonbonding orbitals, orbital mixing can be further performed between the 2p (3a1) orbital on oxygen and the antibonding 4a1 orbital, since they are of the same symmetry and close in energy level. Mixing these two orbitals affords two new sets of orbitals, as shown on the right boxed in red. Significant mixing of these two orbitals results in both energy changes and changes in the shape of the molecular orbital. There is now significant sp-hybridization character that was previously not present in the simple MO diagram. Consequently, the two nonbonding orbitals are now at different energies, providing the four distinct energy levels consistent with the PES. Alternatively, instead of mixing the 3a1 nonbonding orbital with the 4a1 antibonding orbital, one can also mix the 3a1 nonbonding orbital with the 2a1 bonding orbital to produce a similar MO diagram of water. This alternative MO diagram can also be derived by performing the Walsh diagram treatment, adjusting the bonding geometry from linear to bent. In addition, these MO diagrams can be generated from the bottom up by first hybridizing the oxygen 2s and 2p orbitals (assuming sp2 hybridization) and then mixing orbitals of the same symmetry. For simple molecules, their MO diagrams can be generated pictorially without extensive knowledge of point group theory or the use of reducible and irreducible representations. Note that the sizes of the atomic orbitals in the final molecular orbitals differ from the sizes of the original atomic orbitals; this is due to the different mixing proportions between the oxygen and hydrogen orbitals, since their initial atomic orbital energies are different. In other words, when two orbitals mix, the amount the orbitals mix is inversely proportional to the initial difference in energy of the orbitals. Therefore, orbitals which are initially close in energy mix (i.e. interact) more than orbitals which are initially far apart in energy. When two orbitals of different energy mix (i.e. interact), the low-energy combination resembles more the initial low-energy orbital; the higher-energy combination resembles more the initial high-energy orbital. When two orbitals can interact and they are of the same initial energy, the two resultant combination orbitals are derived equally from the two initial orbitals (second-order perturbation theory). In addition, while valence bond theory predicts that water is sp3 hybridized, the prediction from MO theory is more complex. Since the 2pz orbital is not involved at all in interactions with the hydrogen atoms and becomes an unhybridized lone pair (nO(π)), one would argue that water is sp2 hybridized. This would be true under the idealized assumption that s and p character are evenly distributed between the two O-H bonds and the O lone pair (nO(σ)). However, this prediction (120° bond angles) is inconsistent with the bond angle of water being 104.5°. The actual hybridization of water can be explained via the concept of isovalent hybridization or Bent's rule. In short, s character is accumulated in lone pair orbitals, because s character is energy-lowering relative to p character, and lone pair electrons are closely held with unshared electron density.
In contrast, bonding pairs are localized further away and their electron density is shared with another atom, so additional s character does not lower their energy quite as effectively. Hence, comparatively more p character is distributed into the bonding orbitals. Isovalent hybridization and Bent's rule Isovalent hybridization refers to advanced or second-order atomic orbital mixing that does not produce simple sp, sp2, and sp3 hybridization schemes. For molecules with lone pairs, the bonding orbitals are isovalent hybrids, since different fractions of s and p orbitals are mixed to achieve optimal bonding. Isovalent hybridization is used to explain the bond angles of molecules that are inconsistent with the generalized simple sp, sp2 and sp3 hybridization schemes. For molecules containing lone pairs, the true hybridization depends on the amount of s and p character of the central atom, which is related to its electronegativity. "According to Bent's rule, as the substituent electronegativities increase, orbitals of greater p character will be directed towards those groups. By the above discussion, this will decrease the bond angle. In predicting the bond angle of water, Bent's rule suggests that hybrid orbitals with more s character should be directed towards the very electropositive lone pairs, while that leaves orbitals with more p character directed towards the hydrogens. This increased p character in those orbitals decreases the bond angle between them to less than the tetrahedral 109.5°." Molecular orbital theory versus valence bond theory Molecular orbital theory versus valence bond theory has been a topic of debate since the early to mid 1900s. Despite continued heated debate on which model more accurately depicts the true bonding scheme of molecules, scientists now view MO and VB theories as complementary. With the development of modern high-speed computers and advanced molecular modeling programs, both MO and VB theories are used widely today, though for generally different purposes. In general, MO theory can accurately predict the ground-state energy of the system, the energies of the different electronic states of bonding and nonbonding orbitals, and magnetic and ionization properties in a straightforward manner. On the other hand, VB theory is traditionally useful for predicting bond angles and for drawing mechanisms. Modern valence bond theory can provide the same electronic information obtained by MO theory, though the process is more complicated. In addition, modern VB theory can also predict excited-state energies, which MO theory cannot easily achieve. Both theories are equally important in understanding chemical bonding: while neither theory is completely comprehensive, the two together nonetheless provide an in-depth model for chemical bonds. In the words of Roald Hoffmann: "Taken together, MO and VB theories constitute not an arsenal, but a tool kit... Insistence on a journey... equipped with one set of tools and not the other puts one at a disadvantage. Discarding any one of the two theories undermines the intellectual heritage of chemistry." See also Valence bond theory Molecular orbital theory Isovalent hybridization Bent's rule Linus Pauling Hans Bethe Roald Hoffmann References External links The Rules of Walsh Walsh Diagrams Constructing Walsh Diagrams Constructing MO Diagrams Water chemistry Chemical bonding
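The connection between p character and bond angle invoked above can be made quantitative with Coulson's directionality theorem (a standard result, applied here as an illustration; the numbers are derived, not taken from the article). For two equivalent sp^λ hybrids separated by an angle θ,

$$1 + \lambda^{2}\cos\theta = 0 \quad\Longrightarrow\quad \lambda^{2} = -\frac{1}{\cos\theta},$$

so for the experimental angle θ = 104.5° (cos θ ≈ −0.250) one finds λ² ≈ 4.0: the O-H bonding hybrids are approximately sp⁴, i.e. about 20% s character each, with the remaining s character concentrated in the σ lone pair, exactly the redistribution described by Bent's rule.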
Chemical bonding of water
Physics,Chemistry,Materials_science
2,103
73,471,646
https://en.wikipedia.org/wiki/White%20etching%20cracks
White etching cracks (WECs), also known as white structure flaking or brittle flaking, are a type of rolling contact fatigue (RCF) damage that can occur in bearing steels under certain conditions, such as hydrogen embrittlement, high stress, inadequate lubrication, and high temperature. WEC damage is characterised by the presence of white areas of microstructural alteration in the material, which can lead to the formation of small cracks that can grow and propagate over time, eventually leading to premature failure of the bearing. WECs have been observed in a variety of applications, including wind turbine gearboxes, automotive engines, and other heavy machinery. The exact mechanism of WEC formation is still a subject of research, but it is believed to be related to a combination of microstructural changes, such as phase transformations and grain boundary degradation, and cyclic loading. Cause White etching cracks (WECs), first reported in 1996, are cracks that can form in the microstructure of bearing steel, leading to the development of a network of branched white-etching cracks. They are usually observed in bearings that have failed due to rolling contact fatigue or accelerated rolling contact fatigue. These cracks can significantly shorten the operating life and reduce the reliability of bearings, both in the wind power industry and in several other industrial applications. The exact cause of WECs and their significance in rolling bearing failures have been the subject of much research and discussion. Ultimately, the formation of WECs appears to be influenced by a complex interplay between material, mechanical, and chemical factors; hydrogen embrittlement, high stresses from sliding contact, inclusions, electrical currents, and temperature have all been identified as potential drivers of WECs. Hydrogen embrittlement One of the most commonly quoted potential causes of WECs is hydrogen embrittlement caused by an unstable equilibrium between material, mechanical, and chemical aspects, which occurs when hydrogen atoms diffuse into the bearing steel, causing micro-cracks to form. Hydrogen can come from a variety of sources, including the hydrocarbon lubricant or water contamination, and it is often used in laboratory tests to reproduce WECs. The generation of hydrogen from lubricants has been attributed to three primary factors: decomposition of lubricants through catalytic reactions with a fresh metal surface, breakage of molecular chains within the lubricant due to shear on the sliding surface, and thermal decomposition of lubricants caused by heat generation during sliding. Hydrogen generation is influenced by lubricity, wear width, and the catalytic reaction of a fresh metal surface. Stress localisation Stresses higher than anticipated can also accelerate rolling contact fatigue, which is a known precursor to WECs. WECs commence below the surface during the initial phases of their formation, particularly at non-metallic inclusions. As the sliding contact period extended, these cracks propagated from the subsurface region to the contact surface, ultimately leading to flaking. Furthermore, there was an observable rise in the extent of microstructural modifications near the cracks, suggesting that the presence of the crack is a precursor to these alterations. The direction of sliding on the bearing surface played a significant role in WEC formation. When the traction force opposed the direction of over-rolling (referred to as negative sliding), it consistently led to the development of WECs.
Conversely, when the traction force aligned with the over-rolling direction (positive sliding), WECs did not manifest. The magnitude of sliding exerted a dominant influence on WEC formation. Tests conducted at a sliding-to-rolling ratio (SRR) of -30% consistently resulted in the generation of WECs, while no WECs were observed in tests at -5% SRR. Furthermore, the number of WECs appeared to correlate with variations in contact severity, including changes in surface roughness, rolling speed, and lubricant temperature. Electrical current One of the primary causes of WECs is the passage of electrical current through the bearings. Both alternating current (AC) and direct current (DC) can lead to the formation of WECs, albeit through slightly different mechanisms. In general, hydrogen generation from lubricants can be accelerated by electric current, potentially accelerating WEC formation. Under certain conditions, when the current densities are low (less than 1 mA/mm2), electrical discharges can significantly shorten the lifespan of bearings by causing WECs; such WECs can develop in under 50 hours. Electrostatic sensors prove to be useful for detecting these critical discharges early on, which are associated with failures induced by WECs. The analysis revealed that different reaction layers form in the examined areas, depending on the electrical polarity. In the case of AC, the rapid change in polarity involves the creation of a plasma channel through the lubricant film in the bearing, leading to a momentary, intense discharge of energy. The localised heating and rapid cooling associated with these discharges can cause changes in the microstructure of the steel, leading to the formation of WEAs and WECs. On the other hand, DC can cause a steady flow of electrons through the bearing. This can lead to the electrochemical dissolution of the metal, a process known as fretting corrosion. The constant flow of current can also cause local heating, leading to thermal gradients within the bearing material. These gradients can cause stresses that lead to the formation of WECs. Microstructure WECs are sub-surface networks of white cracks within local microstructural changes, characterised by an altered microstructure known as white etching area (WEA). The term "white etching" refers to the white appearance of the altered microstructure of a polished and etched steel sample in the affected areas. The WEA is formed by amorphisation (phase transformation) of the martensitic microstructure due to friction at the crack faces during over-rolling, and these areas appear white under an optical microscope due to their low etching response to the etchant. The microstructure of WECs consists of ultra-fine, nano-crystalline, carbide-free ferrite, or ferrite with a very fine distribution of carbide particles, that exhibits a high degree of crystallographic misorientation. WEC propagation is mostly transgranular and does not follow a particular cleavage plane. Researchers have observed three distinct types of microstructural alterations near the generated cracks: uniform white etching areas (WEAs), thin elongated regions of dark etching areas (DEAs), and mixed regions comprising both light and dark etching areas with some misshaped carbides. During repeated stress cycles, the position of the crack constantly shifts, leaving behind an area of intense plastic deformation composed of ferrite, martensite, austenite (due to austenitization) and carbide nano-grains, i.e., WEAs.
Microstructure WECs are subsurface networks of white cracks surrounded by local microstructural changes known as white etching areas (WEAs). The term "white etching" refers to the white appearance of the altered microstructure of a polished and etched steel sample in the affected areas. The WEA is formed by amorphisation (phase transformation) of the martensitic microstructure due to friction at the crack faces during over-rolling, and these areas appear white under an optical microscope because of their low response to the etchant. The microstructure of WECs consists of ultra-fine, nano-crystalline, carbide-free ferrite, or ferrite with a very fine distribution of carbide particles, exhibiting a high degree of crystallographic misorientation. WEC propagation is mostly transgranular and does not follow a particular cleavage plane. Researchers have observed three distinct types of microstructural alteration near the generated cracks: uniform white etching areas (WEAs), thin elongated regions of dark etching areas (DEAs), and mixed regions comprising both light and dark etching areas with some misshapen carbides. During repeated stress cycles, the position of the crack constantly shifts, leaving behind an area of intense plastic deformation composed of nano-grains of ferrite, martensite, austenite (due to austenitization) and carbides, i.e., WEAs. The microscopic displacement of the crack plane in a single stress cycle accumulates to form micron-sized WEAs during repeated stress cycles. After the initial development of a fatigue crack around inclusions, the faces of the crack rub against each other during cycles of compressive stress. This results in the creation of WEAs through localised intense plastic deformation. It also causes partial bonding of the opposing crack faces and material transfer between them. Consequently, upon release of the stress, the WEC reopens at a slightly different location compared to its previous position. Furthermore, it is recognised that WEA is one of the phases that can arise from different processes and is generally observed as the result of a phase transformation in rolling contact fatigue. WEA is harder than the surrounding matrix. Additionally, WECs are caused by stresses higher than anticipated and occur due to bearing rolling contact fatigue as well as accelerated rolling contact fatigue. WECs in bearings are accompanied by white etching matter (WEM), which forms asymmetrically along the cracks. There are no significant microstructural differences between the untransformed material adjacent to the cracking and the parent material, although WEM exhibits variable carbon content and increased hardness compared to the parent material. A study in 2019 suggested that WEM may initiate ahead of the crack, challenging the conventional crack-rubbing mechanism. Testing for WEC The triple disc rolling contact fatigue (RCF) rig is a specialised testing apparatus used in the field of tribology and materials science to evaluate the fatigue resistance and durability of materials subjected to rolling contact. This rig is designed to simulate the conditions encountered in various mechanical systems, such as rolling bearings, gears, and other components exposed to repeated rolling and sliding motions. The rig typically consists of three discs or rollers arranged in a specific configuration; these discs can represent the interacting components of interest, such as a rolling bearing. The rig also allows precise control over the loading conditions, including the magnitude of the load, the contact pressure, and the contact geometry. The PCS Instruments Micro-pitting Rig (MPR) is a specialised testing instrument used in tribology and mechanical engineering to study micro-pitting, a type of surface damage that occurs in lubricated rolling and sliding contact systems. The MPR is designed to simulate real-world operating conditions by subjecting test specimens, often gears or rolling bearings, to controlled rolling and sliding contact under lubricated conditions. Impact Offshore wind turbines are subject to challenging environmental conditions, including corrosive saltwater, high wind forces, and potential electrical currents. These conditions can contribute to bearing failures and affect the reliability and maintenance of wind turbines. Several factors can lead to bearing failures, such as corrosion, fatigue, wear, improper lubrication, and high electric currents, underlining the need for improved materials and designs to ensure the longevity and performance of bearings in offshore wind turbines. WECs negatively affect the reliability of bearings, not only in the wind industry but also in various other industrial applications such as electric motors, paper machines, industrial gearboxes, pumps, ship propulsion systems, and the automotive sector. 60% of wind turbine failures are linked to WECs.
In October 2018, a workshop on WECs was organised in Düsseldorf by a junior research group funded by the German Federal Ministry of Education and Research (BMBF). Representatives from academia and industry gathered to discuss the mechanisms behind WEC formation in wind turbines, focusing on the fundamental material processes causing this phenomenon. Further reading References Fracture mechanics Materials degradation Mechanical failure modes Metallurgy Tribology Friction
White etching cracks
Physics,Chemistry,Materials_science,Technology,Engineering
2,162
595,929
https://en.wikipedia.org/wiki/S-Adenosyl%20methionine
S-Adenosyl methionine (SAM), also known under the commercial names of SAMe, SAM-e, or AdoMet, is a common cosubstrate involved in methyl group transfers, transsulfuration, and aminopropylation. Although these anabolic reactions occur throughout the body, most SAM is produced and consumed in the liver. More than 40 methyl transfers from SAM are known, to various substrates such as nucleic acids, proteins, lipids and secondary metabolites. It is made from adenosine triphosphate (ATP) and methionine by methionine adenosyltransferase. SAM was first discovered by Giulio Cantoni in 1952. In bacteria, SAM is bound by the SAM riboswitch, which regulates genes involved in methionine or cysteine biosynthesis. In eukaryotic cells, SAM serves as a regulator of a variety of processes including DNA, tRNA, and rRNA methylation; immune response; amino acid metabolism; transsulfuration; and more. In plants, SAM is crucial to the biosynthesis of ethylene, an important plant hormone and signaling molecule. Structure S-Adenosyl methionine consists of the adenosyl group attached to the sulfur of methionine, providing it with a positive charge. It is synthesized from ATP and methionine by the enzyme S-adenosylmethionine synthetase through the following reaction:
ATP + L-methionine + H2O → phosphate + diphosphate + S-adenosyl-L-methionine
The sulfonium functional group present in S-adenosyl methionine is the center of its peculiar reactivity. Depending on the enzyme, S-adenosyl methionine can be converted into one of three products: an adenosyl radical, which is converted to deoxyadenosine (AdO) in the classic radical SAM (rSAM) reaction, which also cogenerates methionine; S-adenosyl homocysteine, with release of a methyl radical; or methylthioadenosine (SMT), with release of a homoalanine radical. Biochemistry SAM cycle The reactions that produce, consume, and regenerate SAM are called the SAM cycle. In the first step of this cycle, the SAM-dependent methylases (EC 2.1.1) that use SAM as a substrate produce S-adenosyl homocysteine as a product. S-Adenosyl homocysteine is a strong negative regulator of nearly all SAM-dependent methylases despite their biological diversity. This is hydrolysed to homocysteine and adenosine by S-adenosylhomocysteine hydrolase (EC 3.3.1.1), and the homocysteine is recycled back to methionine through transfer of a methyl group from 5-methyltetrahydrofolate by one of the two classes of methionine synthases (i.e. cobalamin-dependent (EC 2.1.1.13) or cobalamin-independent (EC 2.1.1.14)). This methionine can then be converted back to SAM, completing the cycle. In the rate-limiting step of the SAM cycle, MTHFR (methylenetetrahydrofolate reductase) irreversibly reduces 5,10-methylenetetrahydrofolate to 5-methyltetrahydrofolate. Radical SAM enzymes A large number of enzymes cleave SAM reductively to produce radicals: the 5′-deoxyadenosyl 5′-radical, the methyl radical, and others. These enzymes are called radical SAMs. They all feature an iron-sulfur cluster at their active sites. Most enzymes with this capability share a region of sequence homology that includes the motif CxxxCxxC or a close variant. This sequence provides three cysteinyl thiolate ligands that bind to three of the four metals in the 4Fe-4S cluster; the fourth Fe binds the SAM. The radical intermediates generated by these enzymes perform a wide variety of unusual chemical reactions.
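Because the CxxxCxxC motif is a plain sequence pattern, it can be screened for with a one-line regular expression. A minimal sketch in Python; the example protein sequence is invented for illustration, and real motif searches would use curated profile models rather than a bare regex.

import re

# Scan a protein sequence for the radical SAM motif CxxxCxxC (x = any residue).
MOTIF = re.compile(r"C[A-Z]{3}C[A-Z]{2}C")

seq = "MKLVCAGHCEKCLLDPQRSTCNIMCGHCAWE"  # invented example sequence
for match in MOTIF.finditer(seq):
    print(match.start(), match.group())  # prints: 4 CAGHCEKC, then 20 CNIMCGHC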
Examples of radical SAM enzymes include spore photoproduct lyase, activases of pyruvate formate lyase and anaerobic sulfatases, lysine 2,3-aminomutase, and various enzymes of cofactor biosynthesis, peptide modification, metalloprotein cluster formation, tRNA modification, lipid metabolism, etc. Some radical SAM enzymes use a second SAM as a methyl donor. Radical SAM enzymes are much more abundant in anaerobic bacteria than in aerobic organisms. They can be found in all domains of life and are largely unexplored. A recent bioinformatics study concluded that this family of enzymes includes at least 114,000 sequences and 65 unique reactions. Deficiencies in radical SAM enzymes have been associated with a variety of diseases including congenital heart disease, amyotrophic lateral sclerosis, and increased viral susceptibility. Polyamine biosynthesis Another major role of SAM is in polyamine biosynthesis. Here, SAM is decarboxylated by adenosylmethionine decarboxylase (EC 4.1.1.50) to form S-adenosylmethioninamine. This compound then donates its n-propylamine group in the biosynthesis of polyamines such as spermidine and spermine from putrescine. SAM is required for cellular growth and repair. It is also involved in the biosynthesis of several hormones and neurotransmitters that affect mood, such as epinephrine. Methyltransferases are also responsible for the addition of methyl groups to the 2′ hydroxyls of the first and second nucleotides next to the 5′ cap in messenger RNA. Therapeutic uses Osteoarthritis pain As of 2012, the evidence was inconclusive as to whether SAM can mitigate the pain of osteoarthritis; the clinical trials that had been conducted were too small to allow generalization. Liver disease The SAM cycle has been closely tied to the liver since 1947, because people with alcoholic cirrhosis of the liver accumulate large amounts of methionine in their blood. While multiple lines of evidence from laboratory tests on cells and animal models suggest that SAM might be useful to treat various liver diseases, as of 2012 SAM had not been studied in any large randomized placebo-controlled clinical trials that would allow an assessment of its efficacy and safety. Depression A 2016 Cochrane review concluded that for major depressive disorder, "Given the absence of high quality evidence and the inability to draw firm conclusions based on that evidence, the use of SAMe for the treatment of depression in adults should be investigated further." A 2020 systematic review found that it performed significantly better than placebo, and had similar outcomes to other commonly used antidepressants (imipramine and escitalopram). Anti-cancer treatment SAM has recently been shown to play a role in epigenetic regulation. DNA methylation is a key regulator in epigenetic modification during mammalian cell development and differentiation. In mouse models, excess levels of SAM have been implicated in erroneous methylation patterns associated with diabetic neuropathy. SAM serves as the methyl donor in cytosine methylation, which is a key epigenetic regulatory process. Because of this impact on epigenetic regulation, SAM has been tested as an anti-cancer treatment. In many cancers, proliferation is dependent on having low levels of DNA methylation. In vitro addition of SAM in such cancers has been shown to remethylate oncogene promoter sequences and decrease the production of proto-oncogenes. In cancers such as colorectal cancer, aberrant hypermethylation can inhibit the promoter regions of tumor-suppressing genes; at the same time, colorectal cancers (CRCs) are, on the whole, characterized by global hypomethylation together with promoter-specific DNA methylation. Pharmacokinetics Oral SAM achieves peak plasma concentrations three to five hours after ingestion of an enteric-coated tablet (400–1000 mg). The half-life is about 100 minutes.
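Given the roughly 100-minute half-life quoted above, a first-order estimate of how quickly absorbed SAM clears is straightforward. A back-of-the-envelope Python sketch only, assuming single-compartment exponential elimination and ignoring the slow absorption phase of enteric-coated tablets:

import math

HALF_LIFE_MIN = 100.0  # approximate plasma half-life quoted above

def fraction_remaining(t_min):
    """Fraction of the peak plasma level remaining t_min minutes after the peak."""
    return math.exp(-math.log(2) * t_min / HALF_LIFE_MIN)

for t in (100, 200, 300):
    print(t, round(fraction_remaining(t), 3))  # 0.5, 0.25, 0.125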
Availability in different countries In Canada, the UK, and the United States, SAM is sold as a dietary supplement under the marketing name SAM-e (also spelled SAME or SAMe). It was introduced in the US in 1999, after the Dietary Supplement Health and Education Act was passed in 1994. It was introduced as a prescription drug in Italy in 1979, in Spain in 1985, and in Germany in 1989. As of 2012, it was sold as a prescription drug in Russia, India, China, Italy, Germany, Vietnam, and Mexico. Adverse effects Gastrointestinal disorders, dyspepsia and anxiety can occur with SAM consumption. Long-term effects are unknown. SAM is a weak DNA-alkylating agent. Another reported side effect of SAM is insomnia; therefore, the supplement is often taken in the morning. Other reports of mild side effects include lack of appetite, constipation, nausea, dry mouth, sweating, and anxiety/nervousness, but in placebo-controlled studies, these side effects occur at about the same incidence in the placebo groups. Interactions and contraindications Taking SAM at the same time as some drugs may increase the risk of serotonin syndrome, a potentially dangerous condition caused by having too much serotonin. These drugs include, but are not limited to, dextromethorphan (Robitussin), meperidine (Demerol), pentazocine (Talwin), and tramadol (Ultram). SAM can also interact with many antidepressant medications, including tryptophan and the herbal medicine Hypericum perforatum (St. John's wort), increasing the potential for serotonin syndrome or other side effects, and may reduce the effectiveness of levodopa for Parkinson's disease. SAM can increase the risk of manic episodes in people who have bipolar disorder. Toxicity A 2022 study concluded that SAMe could be toxic. Jean-Michel Fustin of Manchester University said that the researchers found that excess SAMe breaks down into adenine and methylthioadenosine in the body, both of which produce the paradoxical effect of inhibiting methylation. This harmful effect was found in laboratory mice and in in vitro tests on human cells. See also DNA methyltransferase SAM-I riboswitch SAM-II riboswitch SAM-III riboswitch SAM-IV riboswitch SAM-V riboswitch SAM-VI riboswitch List of investigational antidepressants References External links Alpha-Amino acids Coenzymes Dietary supplements Biology of bipolar disorder Psychopharmacology Sulfonium compounds
S-Adenosyl methionine
Chemistry
2,290
36,685,603
https://en.wikipedia.org/wiki/IBM%20270x
270x is a generic name for a family of IBM non-programmable communications controllers used with System/360 and System/370 computers. The family consisted of the following devices: IBM 2701 Data Adapter Unit IBM 2702 Transmission Control IBM 2703 Transmission Control The 2701 and 2702 were announced simultaneously with System/360 in 1964; the 2703 was announced a year later. The 270x series was superseded by the IBM 3704 and 3705 communications controllers in 1972. 2701 The 2701 supported up to four start-stop or synchronous communications lines. It had two multiplexor channel interfaces for connection to one or two host computers. The synchronous adapter originally supported the Synchronous Transmit-Receive (STR) protocol, and later Binary Synchronous Communications (BISYNC) when it was introduced in 1967, in half duplex mode at speeds of up to 40,800 bits per second (bit/s). The 2701 could also have "data acquisition and control adapters" for direct control of external equipment. Initially the 2701 supported the following devices: IBM 1009 Data Transmission Unit IBM 1013 Card Transmission Terminal IBM 7701 Magnetic Tape Transmission Terminal IBM 7702 Magnetic Tape Transmission Terminal IBM 7710 Data Communication Unit IBM 7711 Data Communication Unit IBM 7740 Communication Control System IBM 7750 Programmed Transmission Control Remote System/360 with 2701 Serial synchronous terminals IBM 1030 Data Collection System IBM 1050 Data Communication System IBM 1060 Data Communication System IBM 1070 Process Communication System AT&T 83B2 Type Selective Calling Terminals Western Union Plan 115A Outstations Common Carrier TWX Stations European Teleprinters Parallel data devices Contact sense terminals Contact operate terminals Later the IBM 2740 and IBM 2741 Communication Terminals, and the IBM 2260/2848 were added. 2702 The 2702 could accommodate up to 31 communication lines, but at a slower speed than the 2701. The System/360 Configurator indicates that the 2702 supported start-stop lines only. Initially the 2702 supported the following terminals: IBM 1030 Data Collection System IBM 1050 Data Communication System IBM 1060 Data Communication System IBM 1070 Process Communication System AT&T 83B2 Type Selective Calling Terminals Western Union Plan 115A Outstations Common Carrier TWX Stations European Teleprinters Later the IBM 2740 and IBM 2741 Communication Terminals, the IBM 1032 Digital Time Unit, and a second channel interface were added. 2703 The 2703 supported up to 176 half-duplex start-stop or Binary Synchronous communication lines. The maximum speed of one line was 2400 bit/s but the total aggregate line speed was limited. By 1970 the maximum line speed had been raised to 4800 bit/s. The 2703 attached to a single multiplexer channel; each communication line occupied a subchannel. It had a four or eight byte buffer per line to reduce data transfer to and from the host computer. The IBM 2712 Remote Multiplexer allowed up to fourteen slow speed devices to be multiplexed over one high speed line to a 2703.
As of 1967 the 2703 supported the following devices: IBM 1030 Data Collection System IBM 1050 Data Communication System IBM 1060 Data Communication System IBM 1070 Process Communication System IBM 2741 and 2740 Communications Terminals AT&T 83B2 Type Selective Calling Terminals Western Union Plan 115A Outstations Common Carrier TWX Stations Remote System/360 via 2701 with 2701 or 2703 IBM 2780 Data Transmission Terminal IBM 1130 Computing System with Synchronous Communications Adapter (SCA) Clones Many companies produced clones of 270x controllers, such as the Memorex 1270, introduced in 1970, and devices from NCR-Comten. References External links Component Description: IBM 2701 Data Adapter Unit IBM 2703 Transmission Control Component Description 270x
IBM 270x
Engineering
806
44,777,384
https://en.wikipedia.org/wiki/Ford%20NAA%20tractor
The Ford NAA tractor (also known as the Ford NAA) is a tractor that was introduced by Ford as an entirely new model in 1953 and dubbed the Golden Jubilee. The NAA designation was a reference to the first three digits of the serial number style used starting with this tractor. It was designed as a replacement for the Ford N-Series tractors. Larger than the 8N, the Golden Jubilee featured live hydraulics, 50th-anniversary Golden Jubilee badging, an overhead-valve "Red Tiger" four-cylinder engine and streamlined styling, but just as significantly, it was the first tractor Ford built after losing its court battle with Harry Ferguson in 1952 over the patents the Irish inventor held on the Ferguson System three-point hitch. Engine Below the NAA's new hood was a 134-cu.in., overhead-valve, gas-burning inline four-cylinder engine producing 32 hp. Ford's British Fordson tractors were readily available with diesel engines, but in the States, diesels were still uncommon. A kerosene-burning NAA, known as the NAB, was an option but found few buyers. Transmission A four-speed transmission was standard on the NAA, and auxiliary gearing was available. Hydraulics The NAA's Solid System hydraulics relied on an engine-driven hydraulic pump rather than the PTO-driven pump that was standard issue on the N tractors (this meant that the hydraulics could be operated without the PTO being engaged), and a live PTO was optional. Other changes The NAA is also slightly larger than its predecessors: four inches longer, four inches higher and 100 pounds heavier at 2,840 pounds. For 1954, the NAA was carried over, sans the Golden Jubilee badging (which is popular with collectors today), with only a gear ratio change. In late 1954, Ford introduced its three-digit number series tractors, which further improved upon the NAA. The 600 incorporated improved brakes and wheel seals as well as an ASAE standard PTO. The 700 was a row-crop tractor that could be ordered with either a tricycle or wide front end. References External links Tractor Data for NAA Ford tractors Tractors
Ford NAA tractor
Engineering
448
26,143,254
https://en.wikipedia.org/wiki/2-%28Dicyanomethylene%29croconate
2-(Dicyanomethylene)croconate is a divalent anion with chemical formula (C8N2O4)2− or ((N≡C−)2C=)(C5O4)2−. It is one of the pseudo-oxocarbon anions, as it can be described as a derivative of the croconate oxocarbon anion through the replacement of one oxygen atom by a dicyanomethylene group =C(−C≡N)2. The anion was synthesized and characterized by A. Fatiadi in 1980, by hydrolysis of croconate violet treated with potassium hydroxide. It gives an orange solution in water. See also Croconate violet, 1,3-bis(dicyanomethylene)croconate Croconate blue, 1,2,3-tris(dicyanomethylene)croconate 1,2-Bis(dicyanomethylene)squarate 1,3-Bis(dicyanomethylene)squarate References Oxyanions Cyclopentenes
2-(Dicyanomethylene)croconate
Chemistry
230
19,146,762
https://en.wikipedia.org/wiki/Nutrient%20agar
Nutrient agar is a general-purpose solid medium supporting growth of a wide range of non-fastidious organisms. It typically contains (mass/volume):
0.5% peptone - this provides organic nitrogen
0.3% beef extract/yeast extract - the water-soluble content of these contributes vitamins, carbohydrates, nitrogen, and salts
1.5% agar - this gives the mixture solidity
0.5% sodium chloride - this gives the mixture proportions similar to those found in the cytoplasm of most organisms
distilled water - water serves as a transport medium for the agar's various substances
pH adjusted to neutral (6.8) at 25 °C.
Nutrient broth has the same composition, but lacks agar. These ingredients are combined and boiled for approximately one minute to ensure they are mixed, and then sterilized by autoclaving, typically at 121 °C for 15 minutes. They are then cooled to around 50 °C and poured into Petri dishes, which are covered immediately. Once the dishes hold solidified agar, they are stored upside down and are often refrigerated until used. Inoculation takes place on warm dishes rather than cool ones: if refrigerated for storage, the dishes must be rewarmed to room temperature prior to inoculation. A worked batch calculation for this composition is sketched below. See also Plate count agar Bacteria Bacterial growth References Further reading Lapage S., Shelton J. and Mitchell T., 1970, Methods in Microbiology, Norris J. and Ribbons D., (Eds.), Vol. 3A, Academic Press, London. MacFaddin J. F., 2000, Biochemical Tests for Identification of Medical Bacteria, 3rd Ed., Lippincott, Williams and Wilkins, Baltimore. Downes F. P. and Ito K., (Ed.), 2001, Compendium of Methods for the Microbiological Examination of Foods, 4th Ed., American Public Health Association, Washington, D.C. American Public Health Association, Standard Methods for the Examination of Dairy Products, 1978, 14th Ed., Washington D.C. Microbiological media
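The mass/volume percentages above translate directly into a batch recipe (1% w/v = 1 g per 100 mL). A minimal Python sketch of the arithmetic for a one-litre batch; the quantities follow from the composition listed above, and the ingredient labels are just strings:

recipe_percent_wv = {
    "peptone": 0.5,
    "beef/yeast extract": 0.3,
    "agar": 1.5,
    "sodium chloride": 0.5,
}
volume_mL = 1000  # one litre of medium
for ingredient, pct in recipe_percent_wv.items():
    grams = pct / 100 * volume_mL  # w/v percent to grams
    print(f"{ingredient}: {grams:.1f} g")
# peptone: 5.0 g, beef/yeast extract: 3.0 g, agar: 15.0 g, sodium chloride: 5.0 g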
Nutrient agar
Biology
432
56,103,252
https://en.wikipedia.org/wiki/Padmakar%E2%80%93Ivan%20index
In chemical graph theory, the Padmakar–Ivan (PI) index is a topological index of a molecule, used in biochemistry. The Padmakar–Ivan index is a generalization, introduced by Padmakar V. Khadikar and Iván Gutman, of the concept of the Wiener index, introduced by Harry Wiener. The Padmakar–Ivan index of a graph G is the sum over all edges uv of G of the number of edges which are not equidistant from u and v. Let G be a graph and e = uv an edge of G. Here n_eu(e|G) denotes the number of edges lying closer to the vertex u than to the vertex v, and n_ev(e|G) is the number of edges lying closer to the vertex v than to the vertex u. The Padmakar–Ivan index of a graph G is then defined as
PI(G) = Σ_{e=uv ∈ E(G)} [n_eu(e|G) + n_ev(e|G)].
The PI index is important in the study of quantitative structure–activity relationships and in classification models used in the chemical and biological sciences, engineering, and nanotechnology. Examples Closed-form expressions for the PI index of dendrimer nanostars have been derived in the literature; the figure and formula given in the original source are not reproduced here, but a small computational example follows below. References Mathematical chemistry Cheminformatics Graph invariants
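The edge-counting definition above is easy to compute directly for small graphs. A minimal sketch in Python using networkx, assuming a connected graph; the distance from an edge to a vertex is taken as the smaller of the two endpoint distances, matching the definition given above:

import networkx as nx

def pi_index(G):
    """Padmakar-Ivan index: for each edge uv, count the edges strictly closer
    to u than to v, plus those strictly closer to v than to u. Equidistant
    edges, including uv itself, are not counted."""
    dist = dict(nx.all_pairs_shortest_path_length(G))
    total = 0
    for u, v in G.edges():
        n_u = n_v = 0
        for x, y in G.edges():
            d_u = min(dist[x][u], dist[y][u])  # distance from edge (x, y) to u
            d_v = min(dist[x][v], dist[y][v])  # distance from edge (x, y) to v
            if d_u < d_v:
                n_u += 1
            elif d_v < d_u:
                n_v += 1
        total += n_u + n_v
    return total

print(pi_index(nx.cycle_graph(6)))  # 24

For the 6-cycle each edge is equidistant from its opposite edge (and from itself), leaving four contributing edges per edge, so the six edges give 24, in line with a hand count.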
Padmakar–Ivan index
Chemistry,Mathematics
225
25,010
https://en.wikipedia.org/wiki/Proton%E2%80%93proton%20chain
The proton–proton chain, also commonly referred to as the p–p chain, is one of two known sets of nuclear fusion reactions by which stars convert hydrogen to helium. It dominates in stars with masses less than or equal to that of the Sun, whereas the CNO cycle, the other known reaction, is suggested by theoretical models to dominate in stars with masses greater than about 1.3 solar masses. In general, proton–proton fusion can occur only if the kinetic energy (temperature) of the protons is high enough to overcome their mutual electrostatic repulsion. In the Sun, deuteron-producing events are rare. Diprotons are the much more common result of proton–proton reactions within the star, and diprotons almost immediately decay back into two protons. Since the conversion of hydrogen to helium is slow, the complete conversion of the hydrogen initially in the core of the Sun is calculated to take more than ten billion years. Although sometimes called the "proton–proton chain reaction", it is not a chain reaction in the normal sense. In most nuclear reactions, a chain reaction designates a reaction that produces a product, such as neutrons given off during fission, that quickly induces another such reaction. The proton–proton chain is, like a decay chain, a series of reactions. The product of one reaction is the starting material of the next reaction. There are two main chains leading from hydrogen to helium in the Sun. One chain has five reactions, the other chain has six. History of the theory The theory that proton–proton reactions are the basic principle by which the Sun and other stars burn was advocated by Arthur Eddington in the 1920s. At the time, the temperature of the Sun was considered to be too low to overcome the Coulomb barrier. After the development of quantum mechanics, it was discovered that tunneling of the wavefunctions of the protons through the repulsive barrier allows for fusion at a lower temperature than the classical prediction. In 1939, Hans Bethe attempted to calculate the rates of various reactions in stars. Starting with two protons combining to give a deuterium nucleus and a positron, he found what we now call Branch II of the proton–proton chain. But he did not consider the reaction of two ³He nuclei (Branch I), which we now know to be important. This was part of the body of work in stellar nucleosynthesis for which Bethe won the Nobel Prize in Physics in 1967. The proton–proton chain The first step in all the branches is the fusion of two protons into a deuteron. As the protons fuse, one of them undergoes beta plus decay, converting into a neutron by emitting a positron and an electron neutrino (though a small number of deuterium nuclei are produced by the "pep" reaction, see below):
p + p → ²H + e⁺ + νe
The positron will annihilate with an electron from the environment into two gamma rays. Including this annihilation and the energy of the neutrino, the net reaction
p + p + e⁻ → ²H + νe
(which is the same as the PEP reaction, see below) has a Q value (released energy) of 1.442 MeV. The relative amounts of energy going to the neutrino and to the other products are variable.
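The quoted Q value can be checked against standard particle rest energies (m_p ≈ 938.272 MeV, m_d ≈ 1875.613 MeV, m_e ≈ 0.511 MeV); this is a back-of-the-envelope verification, not a figure taken from the cited literature:

Q_{p+p} = (2m_p - m_d - m_e)c^2 \approx 1876.544 - 1875.613 - 0.511 \approx 0.420\ \mathrm{MeV}
Q_{\mathrm{total}} = Q_{p+p} + 2m_e c^2 \approx 0.420 + 1.022 = 1.442\ \mathrm{MeV}

where the 2m_e c² term is the energy released by the positron–electron annihilation, and 0.420 MeV is the maximum energy shared by the neutrino and the other products.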
This is the rate-limiting reaction and is extremely slow because it is initiated by the weak nuclear force. The average proton in the core of the Sun waits 9 billion years before it successfully fuses with another proton. It has not been possible to measure the cross-section of this reaction experimentally because it is so low, but it can be calculated from theory. After it is formed, the deuteron produced in the first stage can fuse with another proton to produce the stable, light isotope of helium, ³He:
²H + ¹H → ³He + γ
This process, mediated by the strong nuclear force rather than the weak force, is extremely fast by comparison to the first step. It is estimated that, under the conditions in the Sun's core, each newly created deuterium nucleus exists for only about one second before it is converted into helium-3. In the Sun, each helium-3 nucleus produced in these reactions exists for only about 400 years before it is converted into helium-4. Once the helium-3 has been produced, there are four possible paths to generate ⁴He. In the p–p I branch, helium-4 is produced by fusing two helium-3 nuclei; the p–p II and p–p III branches fuse ³He with pre-existing ⁴He to form beryllium-7, which undergoes further reactions to produce two helium-4 nuclei. About 99% of the energy output of the sun comes from the various p–p chains, with the other 1% coming from the CNO cycle. According to one model of the sun, 83.3 percent of the ⁴He produced by the various branches is produced via branch I, while branch II produces 16.68 percent and branch III 0.02 percent. Since half the neutrinos produced in branches II and III are produced in the first step (synthesis of a deuteron), only about 8.35 percent of neutrinos come from the later steps (see below), and about 91.65 percent are from deuteron synthesis. However, another solar model from around the same time gives only 7.14 percent of neutrinos from the later steps and 92.86 percent from the synthesis of deuterium nuclei. The difference is apparently due to slightly different assumptions about the composition and metallicity of the sun. There is also the extremely rare p–p IV (Hep) branch. Other even rarer reactions may occur. The rate of these reactions is very low due to very small cross-sections, or because the number of reacting particles is so low that any reactions that might happen are statistically insignificant. The overall reaction is:
4 ¹H → ⁴He + 2e⁺ + 2νe
releasing 26.73 MeV of energy, some of which is lost to the neutrinos. The p–p I branch
³He + ³He → ⁴He + 2 ¹H + 12.859 MeV
The complete chain releases a net energy of 26.732 MeV, but 2.2 percent of this energy (0.59 MeV) is lost to the neutrinos that are produced. The p–p I branch is dominant at temperatures of 10 to 18 MK. Below 10 MK, the p–p chain proceeds at a slow rate, resulting in a low production of ⁴He. The p–p II branch
³He + ⁴He → ⁷Be + γ
⁷Be + e⁻ → ⁷Li + νe (0.861 MeV / 0.383 MeV)
⁷Li + ¹H → 2 ⁴He
The p–p II branch is dominant at temperatures of 18 to 25 MK. Note that the energies in the second reaction above are the energies of the neutrinos that are produced by the reaction: 90 percent of the neutrinos produced in the reaction of ⁷Be to ⁷Li carry an energy of 0.861 MeV, while the remaining 10 percent carry 0.383 MeV. The difference is whether the lithium-7 produced is in the ground state or an excited (metastable) state, respectively. The total energy released going from ⁷Be to stable ⁷Li is about 0.862 MeV, almost all of which is lost to the neutrino if the decay goes directly to the stable lithium. The p–p III branch
³He + ⁴He → ⁷Be + γ
⁷Be + ¹H → ⁸B + γ
⁸B → ⁸Be + e⁺ + νe
⁸Be → 2 ⁴He
The last three stages of this chain, plus the positron annihilation, contribute a total of 18.209 MeV, though much of this is lost to the neutrino. The p–p III chain is dominant if the temperature exceeds 25 MK. The p–p III chain is not a major source of energy in the Sun, but it was very important in the solar neutrino problem because it generates very high energy neutrinos (up to 14.06 MeV). The p–p IV (Hep) branch This reaction is predicted theoretically, but it has never been observed due to its extreme rarity in the Sun. In this reaction, helium-3 captures a proton directly to give helium-4, with an even higher possible neutrino energy (up to about 18.8 MeV):
³He + ¹H → ⁴He + e⁺ + νe
The mass–energy relationship gives about 19.8 MeV for the energy released by this reaction plus the ensuing annihilation, some of which is lost to the neutrino. Energy release Comparing the mass of the final helium-4 atom with the masses of the four protons reveals that 0.7 percent of the mass of the original protons has been lost. This mass has been converted into energy, in the form of kinetic energy of produced particles, gamma rays, and neutrinos released during each of the individual reactions. The total energy yield of one whole chain is 26.73 MeV. Energy released as gamma rays will interact with electrons and protons and heat the interior of the Sun. Kinetic energy of fusion products (e.g. of the two protons and the ⁴He from the p–p I reaction) also adds energy to the plasma in the Sun. This heating keeps the core of the Sun hot and prevents it from collapsing under its own weight, as it would if the Sun were to cool down. Neutrinos do not interact significantly with matter and therefore do not heat the interior and thereby help support the Sun against gravitational collapse. Their energy is lost: the neutrinos in the p–p I, p–p II, and p–p III chains carry away 2.0%, 4.0%, and 28.3% of the energy in those reactions, respectively. From these figures, the amount of energy lost to neutrinos and the amount of "solar luminosity" coming from the three branches can be estimated (the comparison table in the original source is not reproduced here; the sketch below reworks the calculation). "Luminosity" here means the amount of energy given off by the Sun as electromagnetic radiation rather than as neutrinos. The starting figures used are the ones mentioned higher in this article, and the calculation concerns only the 99% of the power and neutrinos that come from the p–p reactions, not the 1% coming from the CNO cycle.
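A minimal Python sketch of that bookkeeping, using only the branch shares and neutrino-loss fractions quoted above; it illustrates the arithmetic behind the omitted comparison table and is not a solar model:

Q_MEV = 26.73  # energy released per completed chain

branches = {
    # branch: (share of 4He production, fraction of energy lost to neutrinos)
    "p-p I":   (0.8330, 0.020),
    "p-p II":  (0.1668, 0.040),
    "p-p III": (0.0002, 0.283),
}

luminous = 0.0
for name, (share, nu_loss) in branches.items():
    contribution = share * Q_MEV * (1.0 - nu_loss)  # MeV emitted as light, weighted by share
    luminous += contribution
    print(f"{name}: {contribution:.3f} MeV per chain")

lost_pct = 100.0 * (1.0 - luminous / Q_MEV)
print(f"total: {luminous:.2f} of {Q_MEV} MeV as light ({lost_pct:.1f}% lost to neutrinos)")

Summing the weighted contributions gives roughly 26.1 of 26.73 MeV emitted as light, i.e. an overall neutrino loss of about 2.3%, consistent with the dominance of the mildly lossy p–p I branch.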
The PEP reaction A deuteron can also be produced by the rare pep (proton–electron–proton) reaction (electron capture):
p + e⁻ + p → ²H + νe
In the Sun, the frequency ratio of the pep reaction versus the p–p reaction is 1:400. However, the neutrinos released by the pep reaction are far more energetic: while neutrinos produced in the first step of the p–p reaction range in energy up to 0.42 MeV, the pep reaction produces sharp-energy-line neutrinos of 1.44 MeV. Detection of solar neutrinos from this reaction was reported by the Borexino collaboration in 2012. Both the pep and p–p reactions can be seen as two different Feynman representations of the same basic interaction, where the electron passes to the right side of the reaction as a positron. This is represented in the figure of proton–proton and electron-capture reactions in a star, available at the NDM'06 web site.
See also CNO cycle Triple-alpha process References External links Nuclear fusion reactions Proton
Proton–proton chain
Chemistry
2,582
61,244,353
https://en.wikipedia.org/wiki/Steiner-Optik
Steiner-Optik (also rendered as Steiner Optics) is a manufacturer of optical equipment for the military, hunting and marine sectors. The company is headquartered in Bayreuth, northern Bavaria, and has been part of the Beretta Group since 2008. Steiner manufactures products for the civilian market as well as for the defense industry. Its product range includes binoculars for military and police use, rifle scope sights and spotting scopes for hunting, seafaring, outdoor use and ornithology. Every year 200,000 to 250,000 binoculars are produced, of which 80% are exported. History The company was founded in 1947 by Karl Steiner, and its first product was the Steinette camera. In 1955, the company changed focus to the production of binoculars. In 1965, Steiner was awarded a contract with the West German Bundeswehr, which it supplied with the service binoculars called Steiner 8×30 FERO-D12 Bundeswehr Fernglas (German Army Binoculars) between 1966 and 1972. Steiner was the first company to produce nitrogen-filled binoculars. In 1989, Steiner-Optik received what was, by its own account, the world's largest order for military binoculars up to that time, comprising the delivery of 72,000 M22 7×50 binoculars to the US Army. Other innovations by Steiner-Optik included the first binoculars with a bearing compass and the first binoculars with laser protection filters. Product range Binoculars Wildlife SkyHawk 4.0 Blue Horizons Safari UltraSharp Navigator Pro Commander Commander Global Observer Ranger Extreme Nighthunter LRF 1700 Hunting rifle scopes Ranger Ranger BC Nighthunter Tactical rifle scopes T5Xi M series Red Dot Sights MRS MPS Lasers DBAL series (including AN/PEQ-15A DBAL-A2) OTAL series CQBL-1 SBAL series Night vision devices AN/PVS-21 See also Swarovski Optik References External links www.steiner.de Optics manufacturing companies Telescope manufacturers manufacturing companies of Germany companies based in Bavaria Bayreuth
Steiner-Optik
Astronomy
402