id int64 39 79M | url stringlengths 32 168 | text stringlengths 7 145k | source stringlengths 2 105 | categories listlengths 1 6 | token_count int64 3 32.2k | subcategories listlengths 0 27 |
|---|---|---|---|---|---|---|
35,360,069 | https://en.wikipedia.org/wiki/Wave%20surface | In mathematics, Fresnel's wave surface, found by Augustin-Jean Fresnel in 1822, is a quartic surface describing the propagation of light in an optically biaxial crystal. Wave surfaces are special cases of tetrahedroids which are in turn special cases of Kummer surfaces.
In projective coordinates (w:x:y:z) the wave surface is given by
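A commonly quoted form of the quartic (reconstructed here on the assumption that $a$, $b$, $c$ denote the principal wave velocities of the crystal) is

$$\frac{a^2x^2}{x^2+y^2+z^2-a^2w^2}+\frac{b^2y^2}{x^2+y^2+z^2-b^2w^2}+\frac{c^2z^2}{x^2+y^2+z^2-c^2w^2}=0,$$

which clears denominators to give a quartic polynomial equation in $(w:x:y:z)$.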
Wave surfaces are used in the treatment of conical refraction.
References
Fresnel, A. (1822), "Second supplément au mémoire sur la double réfraction" (signed 31 March 1822, submitted 1 April 1822), in H. de Sénarmont, É. Verdet, and L. Fresnel (eds.), Oeuvres complètes d'Augustin Fresnel, Paris: Imprimerie Impériale (3 vols., 1866–70), vol. 2 (1868), pp. 369–442, especially pp. 369 (date présenté), 386–8 (eq. 4), 442 (signature and date).
External links
Fresnel wave surface
Algebraic surfaces
Complex surfaces
Waves | Wave surface | [
"Physics"
] | 241 | [
"Waves",
"Physical phenomena",
"Motion (physics)"
] |
43,432,873 | https://en.wikipedia.org/wiki/Membrane%20estrogen%20receptor | Membrane estrogen receptors (mERs) are a group of receptors which bind estrogen. Unlike nuclear estrogen receptors, which mediate their effects via slower genomic mechanisms, mERs are cell surface receptors that rapidly alter cell signaling via modulation of intracellular signaling cascades.
Nuclear estrogen receptors such as ERα and ERβ become mERs through palmitoylation, a post-translational modification that enhances ER association with caveolin-1 to enable trafficking of ERs to the membrane or membrane caveolae. Other putative mERs include GPER (GPR30), GPRC6A, ER-X, ERx and Gq-mER.
Structure-function relationship
In mice and humans, ERβ localization in the plasma membrane occurs after palmitoylation on cysteine 418. Dimerization of mERs appears necessary for their function in rapid cell signaling.
Signaling mechanisms
G-protein coupled receptors
Various electrophysiological studies support E2 signaling via GPCRs. mERs are thought to activate G-protein coupled receptors to regulate L-type Ca2+ channels and activate protein kinase A (PKA), protein kinase C (PKC), and mitogen activated protein kinase (MAPK) signaling cascades.
Activation of Gq-coupled mERs (Gq-mERs) has been demonstrated to rapidly increase membrane excitability in various neuronal cell types by desensitizing GABAB receptor coupling to G protein-coupled inwardly rectifying K+ channels (GIRKs).
mGluRs
Localization of mERs in caveolae allows them to be held in close proximity to specific receptors such as mGluRs. Various studies have demonstrated mER's ability to activate mGluR signaling, even in the absence of glutamate. ER/mGluR signaling is thought to be highly relevant for female motivational behavior. Interestingly, modification of caveolin expression appears to alter the nature of ER-mGluR interactions.
Clinical significance
Membrane estrogen receptors have been implicated in reproductive, cardiovascular, neural, and immune function, and in diseases including cancer, neurodegenerative disease, and cardiovascular disorders.
Cancer
GPER1 pathways modify local inflammation and strengthen cellular immune responses in breast cancer and melanoma, making it a strong prognostic marker.
Neurodegenerative disease
mERs have a demonstrated neuroprotective effect against neurodegenerative disorders like Parkinson's disease, which is thought to underlie the lower incidence of the disorder in women compared to men.
Cardiovascular disorders
mERβ has been demonstrated to mitigate cardiac cell pathology caused by angiotensin II. Activation of mER but not nuclear ER signaling in vascular epithelial cells promotes protection against vascular injury in mice. Striatin, a scaffolding protein that links mERs to membrane caveolae, is necessary for this effect.
Addiction
Propensity to addiction appears to be mediated by sex hormones such as estrogen. In neural reward circuitry, nuclear ERs are not commonly expressed, and mERs have been demonstrated to act on mGluR5 to facilitate psychostimulant-induced behavioral and neurochemical effects.
See also
Membrane steroid receptor
Estrogen receptor
References
G protein-coupled receptors
Human proteins
Human female endocrine system | Membrane estrogen receptor | [
"Chemistry"
] | 673 | [
"G protein-coupled receptors",
"Signal transduction"
] |
43,435,961 | https://en.wikipedia.org/wiki/PreQ1-III%20riboswitch | PreQ1-III riboswitches are a class of riboswitches that bind pre-queuosine1 (PreQ1), a precursor to the modified nucleoside queuosine.
PreQ1-III riboswitches are the third class of riboswitches to be discovered that sense this ligand, and are structurally distinct from preQ1-I and preQ1-II riboswitches.
Most sequenced examples of preQ1-III riboswitches are obtained from uncultivated metagenome samples, but the few examples in cultivated organisms are present in strains that are known to or suspected to be Faecalibacterium prausnitzii, a species of Gram-positive Clostridia.
Known examples of preQ1-III riboswitches are found upstream of queT genes, which are expected to encode transporters of a queuosine derivative. The other two known classes of preQ1 riboswitches are also commonly found upstream of queT genes.
The atomic-resolution structure of a preQ1-III riboswitch has been solved by X-ray crystallography.
References
Cis-regulatory RNA elements
Riboswitch | PreQ1-III riboswitch | [
"Chemistry"
] | 265 | [
"Biochemistry stubs",
"Molecular and cellular biology stubs"
] |
43,437,293 | https://en.wikipedia.org/wiki/School-based%20family%20counseling | School-based family counseling (SBFC) is an integrated approach to mental health intervention that focuses on both school and family in order to help children overcome personal problems and succeed at school. SBFC is practiced by a wide variety of mental health professionals, including: psychologists, social workers, school counselors, psychiatrists, and marriage and family therapists, as well as special education teachers. What they all share in common is the belief that children who are struggling in school can be best helped by interventions that link family and school. SBFC is typically practiced at the school site, but may be based in a community mental health agency that works in close collaboration with schools.
The Need for SBFC
Family problems, such as marital discord, divorce, financial difficulties, child abuse and neglect, life-threatening illness, a sibling in a gang, and poor parenting skills are associated with a wide variety of children's problems, e.g. delinquency, depression, suicide attempts, and substance abuse. These family problems can have a negative effect on children's learning and school behavior. However, there is research showing that healthy families that cope effectively with their problems help children succeed at school.
Traditionally trained school counselors and school psychologists may lack the family counseling training necessary to help students who are experiencing problems at home. If school personnel recommend that a parent seek counseling from a community agency for family problems, the parent and family may not go because of the stigma associated with therapy or because of restrictions imposed by managed care.
SBFC reduces the stigma associated with therapy by emphasizing that counseling for family members has an educational goal: helping the student to succeed at school. Parents, guardians, and family members are approached as partners with the SBFC professional, all working together to promote school success. The SBFC professional is an advocate for the child, the family, and the school.
Some of the problems SBFC approaches have been used to address are: bullying and cyber-bullying, depression, marital problems, school violence, grief and loss, trauma, life-threatening illness, school crises, learning disorders, immigrant families, suicide, and school suspension.
Some examples of large SBFC programs are: "The Copper River Project" in Copper River District Alaskan schools; and "Place2Be" - a SBFC program based in over 200 British schools.
Origins
The earliest pioneer of SBFC was Alfred Adler, the Austrian psychiatrist who developed 30 guidance clinics attached to schools in Vienna in the 1920s. Through these guidance clinics Adler and his colleagues counseled parents and teachers (often both together in large meetings where both groups were present) on how to help children overcome problems at home and school. This Adlerian home-school approach to counseling was strength-based with its emphasis on helping children develop Social Interest.
Later Developments
With the advent of World War II, the Vienna guidance clinics closed. The psychiatrist Rudolf Dreikurs, who worked with Adler, emigrated to the USA in the 1930s and popularized the Adlerian approach to home-school intervention through books like Children the Challenge (for parents), Maintaining Sanity in the Classroom (for teachers), and Discipline Without Tears (for parents and teachers). In the USA, during the 1950s, 1960s, and 1970s, the mental health professions developed somewhat independently of each other, with the result that children having difficulty at school would typically be seen by a school counselor or school psychologist, while children having difficulty at home would typically be seen by a community-based mental health professional. Beginning in the 1970s, the mental health literature began to show an increasing emphasis on linking home and school interventions. By 2000 there existed a substantial literature on the integration of family and school counseling approaches.
Strengths of SBFC
SBFC is a strength-based approach to counseling that emphasizes working with parents and guardians as partners. It emphasizes integrating intervention (remedial) and prevention approaches at school and in the family. This emphasis on working collaboratively with parents and guardians in order to help their children succeed in school is appealing to families because of its educational focus. It also allows counselors to hold interventions with students that connect school preparation with future career options, which is critical in an ever-developing technological work economy. SBFC is also a culturally sensitive counseling approach because it reduces the stigma associated with the mental health professions. This approach is practiced by many different mental health professionals and educators.
Evidence-based Support for SBFC
Evidence-based support for the effectiveness of SBFC comes from numerous randomized control group studies employing combined school and family interventions. This research indicates positive effects at post-test and follow-up for problem behaviors at home and at school, for Latino, African American, Native American, and Thai children and families alike. The challenges of working with low income families are well known in the mental health literature.
However, SBFC programs such as the Center for Child & Family Development SBFC program, the Families and Schools Together (FAST) program, and the Linking the Interests of Families and Teachers (LIFT) program were designed precisely to engage these families. There are several randomized control group studies demonstrating the effectiveness of SBFC with low income families.
SBFC Challenges and Solutions in Low-Income Communities
The SBFC approach focuses on reducing family and school problems (such as school violence, trauma, and other challenges experienced by immigrants and low-income families) and increasing family and school resources in order to strengthen child well-being. An important part of SBFC is eliciting the support of the family. This section describes common challenges experienced by school mental health professionals when working with low income communities and how SBFC is designed to address these challenges.
Challenge: Developing Parent Support
SBFC requires family members' participation. This can be a challenge for students whose family members are not available to attend workshops and trainings, either because they cannot take time away from work or because they do not support or relate to the principles of the SBFC approach. Students seeking counseling to navigate the difficulties of the immigrant family experience often have family members who work long hours and cannot give time to the trainings of the SBFC method of counseling. Immigrant families rarely have the time, or the language literacy, to help their children navigate school institutions, which can create a disconnect between school counseling programs and immigrant households. The processes of acculturation and assimilation are slow and delicate; their complications extend to a lack of understanding of the Western practice of multicultural progressive school counseling. People in immigrant households may have traumas and experiences that make them feel lost, depressed, stuck with regard to life choices, and alienated from all formal institutions of mainstream society, including schools. School counseling is often stigmatized by immigrant families because school counselors often approach immigrant students with English-only career tests and one-size-fits-all resource guides. Immigrant students are often subject to ethnic stereotypes by school counselors; for example, Hispanic students may be regarded as physical laborers rather than scholars, while Asian immigrant students are held to higher academic standards and assumed to be intelligent. Such ethnic stereotypes held by school counselors undermine the intended impact of the SBFC approach.
SBFC Solutions
SBFC addresses these challenges in a number of ways. First, school transformation is a fundamental SBFC premise. Schools must change to accommodate the needs of parents and families and to avoid marginalization of immigrant and low-income parents (Strickland, 2020). A common SBFC approach to involving parents is for the SBFC practitioner to keep flexible hours that (for example) permit working from 12–8 pm on a weekday and every second Saturday. Further, many SBFC practitioners trained in working with families understand the need to address barriers. This may result in the SBFC practitioner visiting the family in the home to establish a relationship, explore barriers, and make a plan for moving forward based on the family's needs. This flexibility makes it easier for working parents to meet with the SBFC practitioner. In situations where it is impossible for the parents to meet with the SBFC practitioner, the practitioner will reach out to the parents by phone, by internet (email and Zoom), and by letter. Research on immigrant parents shows that they value schools reaching out to them if it is done in a respectful and culturally sensitive manner. Second, it is a hallmark of the SBFC approach to treat parents/guardians with respect and as equal partners with the SBFC practitioner. The family is viewed as a source of strength for children, and even in dysfunctional families there are family members willing to advocate for the educational success of a child. Assessing family strengths is basic to the SBFC approach. Third, the parents/guardians are approached – not for therapy – but for consultation on how to help their child succeed at school.
This educational approach, rather than a therapy approach, is very appealing to immigrant and low-income families who may not regard therapy as a solution and who are not comfortable with traditional Eurocentric approaches to therapy that emphasize individualism and assertiveness. Schools have a long history of marginalizing immigrant and low-income parents, but approaches like SBFC that emphasize respect, caring, cultural humility, and reaching out to parents can increase parent involvement in schools. School personnel trained in SBFC may be in the best position to implement it because the focus in a school setting is on improving student functioning for success, as opposed to addressing mental health issues, which is frequently stigmatized in immigrant communities. Most parents are willing to attend counseling sessions at a school rather than family therapy at a community health clinic. Fourth, multicultural competence is an essential part of the SBFC approach (Gerrard, 2020, pp. 51–59). For example, when working with Latino/a immigrant families, SBFC practitioners understand the role of familia and educacion: the valuing of family and education is a core part of the culture, and parent involvement may be expressed through home support rather than school visits. Parent home support for children's academic performance and well-being includes: support for doing homework, having a quiet place to study, emphasizing respect for teachers, discussing future plans, and excusing chores so that more time can be spent on homework.
Challenge: School violence
School violence is often a response to the absence of family members, which can be correlated with an economy demanding extremely long work hours for low-income household survival. Low-income families that are constantly working can be less active participants in their children's schools and upbringing. Studies note that many low-income communities lack mentors who can stress the value of schooling, which in turn correlates with student disengagement and an increase in acting out and violent behavior. Lack of engagement among community members can translate into a lack of parent involvement in schooling institutions, which can impede the counselor-teacher partnership the SBFC approach urges. Low-income communities' high population turnover rates, lack of strong, well-funded institutions, and displacement of people from their neighborhoods not only increase school violence among students but can also create a disconnection between households and local institutions. Students in low-income communities with high crime rates are more prone to commit violence in their local schools, which can instill fear in many students; yet students generally stay silent when approached by school administration to speak about the issue. Violence in schools is often normalized by parents because of constant violence in their communities, which then disrupts possible connections between counselors and parents. The high rates of school violence within low-income urban schools can lead to traumatic stress, which is often neglected by low-income and ethnic minority families and further undermines the SBFC method of counseling.
SBFC Solutions
SBFC practitioners address school violence in a number of ways. As a general strategy, SBFC practitioners collaborate with school administrators and teachers to increase student engagement. Student engagement refers to: having a sense of belonging and being a part of the school; experiencing teacher support and caring; having friends at school; and experiencing fair and effective discipline. SBFC strategies to promote student engagement include: promoting a positive school climate; strengthening school organization and infrastructure; and facilitating student interactions. Studies show that increasing student engagement reduces bullying and school dropout, and creates “caring schools” that are family-friendly and offer a safe place in communities where violence is prevalent.
Student engagement falls largely in the area of prevention within the SBFC meta-model and framework. It is a vital construct for prevention and intervention efforts that target dropout, bullying, and the behaviors associated with students' disengagement from school (academic failure, chronic absenteeism, behavioral issues). Student engagement is the central construct in student dropout: it is a slow process of disengagement that contributes to students leaving school. There are disproportionate rates of student dropout, chronic absenteeism, and behavior-related suspensions for Black and Latino/a students. Focusing on student engagement through the SBFC systemic model provides an infrastructure of protective factors that can mitigate negative influences and prevent involvement in school or community violence. Implementation of student engagement interventions through the SBFC model, focusing on school prevention and family prevention, has been found to be highly associated with student outcomes. SBFC practitioners following the SBFC meta-model focus their interventions on school prevention and family intervention, which largely includes a “whole-school” approach engaging multiple stakeholders, most importantly families. SBFC also places importance on developing community resources to reduce school problems. Community violence is a complex problem and not easily solved – but SBFC practitioners view community intervention as part of their approach to helping children, families and schools.
Challenge: Trauma and mental health access
Low-income households often do not validate students' traumatic experiences and can show little interest in attending training with school counselors. Being low-income can produce various traumas that most families tend to ignore, creating a culture of mental health neglect and a lack of proper self-care within these households. A 2008 survey showed that the mental health needs of the poor are often unmet due to lack of insurance coverage. A 2015 study found that 48% of whites received mental health services, compared to 31% of African Americans and Hispanics and 22% of Asians. Mental health neglect is compounded by low-income communities often being misdiagnosed and having their trauma and mental healthcare misunderstood. SBFC approaches can also fail to account for ethnic and cultural attitudes toward mental health. For example, Mexican American families were found to have a lower rate of mental health problems due to the strong cultural belief in natural healing in comparison to traditional psychiatric services.
SBFC Solutions
Although SBFC practitioners often offer parent education training at school and community sites, parent consultation is the form of counseling most frequently provided in SBFC. Parent consultation can also be provided by phone or internet (Zoom) for parents who do not want to visit the school or community counseling center. While many low-income parents are not interested in participating in “therapy”, many are very interested in seeing their children succeed at school and welcome the opportunity to speak with an SBFC practitioner who treats them with respect and approaches them as equal partners in helping their child succeed (Gerrard, Carter, & Ribera, 2020). Parents who do not feel comfortable attending a parent education workshop at their child's school may welcome meeting with the SBFC practitioner at the school, in a home visit, or through brief weekly phone calls. In rare situations where contact between the parents and the SBFC counselor is not feasible, SBFC practitioners may use a family systems counseling approach with students that helps them relate more constructively to their family and strengthen family relationships.
Most SBFC programs based in schools, such as the Center for Child & Family Development Mission Possible program, the Families and Schools Together (FAST) program, the Linking the Interests of Families and Teachers (LIFT) program, and the Place2Be program were developed especially to reach low-income families and are free for students and families, making mental health services accessible to low income families. The educational focus of SBFC services also makes mental health counseling more accessible to these same families.
Barriers to Entry for SBFC
The development of SBFC programs requires both cross-disciplinary and cross-cultural thinking and a willingness to set aside mental health professional "turf" issues. Graduates of mental health academic programs who have not been exposed to other mental health professions may develop a “silo” approach to mental health that leads to competition and “turf” battles with professionals from other disciplines. Strategies for SBFC practitioners to overcome interprofessional barriers include: using discipline-inclusive language (e.g. SBFC practitioner rather than SBFC counselor, SBFC social worker, etc.); becoming familiar with the SBFC literature in other mental health professions; fostering a collaborative relationship with members of other mental health disciplines; gathering evidence-based support for your SBFC program; and developing support from administrators leading the organization in which you are practicing an SBFC approach.
Examples of Books on School-Based Family Counseling
Boyd-Franklin, L. & Hafer Bry, B. (2000). Reaching out in family therapy. New York, NY: The Guilford Press.
Dreikurs, R.; Cassel, P. (1965). Discipline without tears. New York, NY: Harper and Row.
Fine, Marvin J. & Carlson, C. (Eds.). (1992). Family-school intervention: A systems perspective. New York, NY: Allyn and Bacon.
Gerrard, B. & Soriano, M. (Eds.) (2013). School-based family counseling: Transforming family-school relationships. Phoenix, AZ: CreateSpace.
Gerrard, B., Carter, M. & Ribera, D. (Eds.). (2019). School-based family counseling: An interdisciplinary practitioner's guide. London, UK: Routledge.
Hinckle, J. & Wells, M. (1995). Family counseling in the schools. Greensboro, NC: ERIC/CASS Publications.
Laundy, K. C. (2015). Building School-Based Collaborative Mental Health Teams: A Systems Approach to Student Achievement. Camp Hill, PA: TPI Press.
Miller, L. D. (Ed.) (2002). Integrating school and family counseling: Practical solutions. Alexandria, VA: American Counseling Association.
Palmatier, Larry L. (1998). Crisis Counseling For A Quality School Community: A family perspective. New York, NY: Taylor & Francis.
Sherman, R., Shumsky, A. & Roundtree, Y. (1994) Enlarging the Therapeutic Circle. New York, NY: Brunner/Mazel.
Sheridan, S. & Kratochwill, T. (2008). Conjoint behavioral consultation: Promoting family-based connections and interventions. New York, NY: Springer.
Shute, R. & Slee, P. (Eds.). (2016). Mental health and wellbeing through schools: The way forward. London, UK: Routledge.
Steele W. & Raider M. (1991). Working With Families in Crisis: School-based intervention. New York, NY: The Guilford Press.
Walsh, W. & Giblen, N. (Eds.) (1988). Family counseling in school settings. Springfield, IL: Charles C. Thomas.
Walsh, W. & Williams, G. (1997) Schools and Family Therapy: Using Systems Theory and Family Therapy in the Resolution of School Problems. New York, NY: Charles C. Thomas.
Citations
Counseling
Clinical psychology
School counseling | School-based family counseling | [
"Biology"
] | 3,996 | [
"Behavioural sciences",
"Behavior",
"Clinical psychology"
] |
43,440,252 | https://en.wikipedia.org/wiki/Digital%20buffer | A digital buffer (or a logic buffer) is an electronic circuit element used to copy a digital input signal and isolate it from any output load. For the typical case of using voltages as logic signals, a logic buffer's input impedance is high, so it draws little current from the input circuit, to avoid disturbing its signal.
The digital buffer is important in data transmission between connected systems. Buffers are used in registers (data storage device) and buses (data transferring device). To connect to a shared bus, a tri-state digital buffer should be used, because it has a high impedance ("inactive" or "disconnected") output state (in addition to logic low and high).
Functionality
A voltage buffer amplifier transfers a voltage from a high output impedance circuit to a second circuit with low input impedance. Directly connecting a low impedance load to a power source draws current according to Ohm's law. The high current affects the source. Buffer inputs are high impedance. A buffered load effectively does not affect the source circuit. The buffer's output current is generated within the buffer. In this way, a buffer provides isolation between a power source and a low impedance. The buffer does not intentionally amplify or attenuate the input signal, and so may be called a unity gain buffer.
A digital buffer is a type of voltage buffer amplifier that is only concerned about digital logic levels, and thus may be non-linear. It may also act as a level shifter, with output voltages differing from the input voltages. One case of this is an inverting buffer which translates an active-high signal to an active-low one (or vice versa).
Types
Single input voltage buffer
Inverting buffer
This buffer's output state is the opposite of the input state. If the input is high, the output is low, and vice versa. Graphically, an inverting buffer is represented by a triangle with a small circle at the output, with the circle signifying inversion. The inverter is a basic building block in digital electronics. Decoders, state machines, and other sophisticated digital devices often include inverters.
Non-inverting buffer
This kind of buffer performs no inversion and makes no decisions. A single-input digital buffer differs from an inverter in that it does not invert or alter its input signal in any way: it simply reads an input and outputs the corresponding value. A HIGH input produces a HIGH output, and a LOW input produces a LOW output. In other words, Q will be HIGH if and only if A is HIGH.
Tri-state digital buffer
Unlike the single input digital buffer which has only one input, the tri-state digital buffer has two inputs: a data input and a control input. (A control input is analogous to a valve, which controls the data flow.) When the control input is active, the output value is the input value, and the buffer is not different from the single input digital buffer.
Active high tri-state digital buffer
An active-high tri-state digital buffer is a buffer that is in an active state that transmits its data input to the output only when its control input voltage is high (logic 1). But when the control input is low (logic 0), the output is high impedance (abbreviated as "Hi-Z"), as if the part had been removed from the circuit.
Active low tri-state digital buffer
It is basically the same as the active-high digital buffer, except that the buffer is active when the control input is in a low state.
Inverting tri-state digital buffer
Tri-State digital buffers also have inverting varieties in which the output is the inverse of the input.
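The behavior of all these variants can be summarized in a short model (a Python sketch for illustration; a real design would use a hardware description language):

```python
# Minimal behavioral model of tri-state buffers. 'Z' denotes the
# high-impedance (disconnected) output state; data inputs are 0 or 1.

def tri_state(data, enable, active_high=True, inverting=False):
    """Return the buffer output for the given data and control inputs."""
    active = enable if active_high else not enable
    if not active:
        return 'Z'              # output effectively removed from the circuit
    return (1 - data) if inverting else data

# Active-high buffer drives its input through when enable is 1 ...
assert tri_state(1, 1) == 1
# ... and goes high-impedance when enable is 0.
assert tri_state(1, 0) == 'Z'
# Active-low variant is enabled by a 0 on the control input.
assert tri_state(1, 0, active_high=False) == 1
# Inverting variant drives the complement of its input.
assert tri_state(1, 1, inverting=True) == 0
```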
Application
Single input voltage buffers are used in many places for measurements including:
In strain gauge circuitry to measure deformations in structures like bridges, airplane wings and I-beams in buildings.
In temperature measurement circuitry for boilers and in high altitude aircraft in a cold environment.
In control circuits for aircraft, people movers in airports, subways and in many different production operations.
Tri-state voltage buffers are used widely to transmit onto shared buses, since a bus can only transmit one input device's data at a time. The high-impedance output state effectively temporarily disconnects that input device from the bus, since at most only one device should actively drive the bus's shared wires.
References
Electronic circuits | Digital buffer | [
"Engineering"
] | 949 | [
"Electronic engineering",
"Electronic circuits"
] |
43,441,652 | https://en.wikipedia.org/wiki/Hydrogenics | Hydrogenics is a developer and manufacturer of hydrogen generation and fuel cell products based on water electrolysis and proton-exchange membrane (PEM) technology. Hydrogenics is divided into two business units: OnSite Generation and Power Systems. Onsite Generation is headquartered in Oevel, Belgium and had 73 full-time employees as of December 2013. Power Systems is based in Mississauga, Ontario, Canada, with a satellite facility in Gladbeck, Germany. It had 62 full-time employees as of December 2013. Hydrogenics maintains operations in Belgium, Canada and Germany with satellite offices in the United States, Indonesia, Malaysia and Russia.
Business overview
OnSite Generation
The OnSite Generation business segment is based on water electrolysis technology, which involves the decomposition of water into oxygen (O2) and hydrogen gas (H2) by passing an electric current through a liquid electrolyte. The resultant hydrogen gas is then captured and used for industrial gas applications and hydrogen fueling applications, and is used to store renewable and surplus energy in the form of hydrogen gas. Hydrogenics' HySTAT electrolyzer products can be used both indoors and outdoors.
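For reference, the overall water-splitting reaction driving these electrolyzer products is:

2H2O -> 2H2 + O2

with the required energy supplied as the electric current passed through the electrolyte.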
Power Systems
The Power Systems business segment is based on PEM fuel cell technology, which transforms chemical energy resulting from the electrochemical reaction of hydrogen and oxygen into electrical energy. Its HyPM products can handle electrical power outputs ranging from 1 kilowatt to 1 megawatt. The company also develops and delivers hydrogen generation products based on PEM water electrolysis.
Power to Gas
Power-to-Gas is an energy conversion and storage process that uses the excess power generated by wind turbines, solar power, or biomass power plants to produce hydrogen by electrolysis; the hydrogen can be stored directly or combined with carbon dioxide to produce methane. The resulting gas can then be held in existing infrastructure, including power and natural gas grids. This allows for seasonally adjusted storage of significant amounts of power and the provision of CO2-neutral fuels in the form of the resulting renewable gas.
History
Hydrogenics was founded in 1988 under the name Traduction Militech Translation Inc. and changed its name to Hydrogenics in 1990. In 1995, it entered the fuel cell technology development business.
In 2002, Hydrogenics acquired EnKAT GmbH, which formed its Hydrogenics Europe division. It also acquired Greenlight Power Technologies, Inc., a competing fuel cell testing business, in 2003. A year later, in 2004, the company acquired Stuart Energy, a manufacturer of hydrogen-generation products based on alkaline electrolyte technology.
In 2007, Hydrogenics narrowed the focus of its fuel cell activities by exiting the fuel cell testing business and working more on forklift power and backup power markets. That same year, Heliocentris partnered with Hydrogenics and SMA Solar Technologie to incorporate Hydrogenics' fuel cell power modules into stationary backup power systems.
In September 2010, Hydrogenics formed an alliance with CommScope Inc., a Hickory, North Carolina–based multinational telecommunications company. Per the alliance, CommScope invested US$8.5 million in Hydrogenics as part of a joint product development program.
Hydrogenics signed a Memorandum of Understanding (MoU) with Iwatani Corporation, a Japanese industrial energy company, in April 2012. The companies began to collaborate on hydrogen solutions in the Japanese energy market, including utility-scale hydrogen energy storage, hydrogen generation and fuelling, fuel cell integration, and industrial hydrogen generation. Later that month Hydrogenics and Enbridge Inc. entered into a joint venture to develop utility-scale energy storage beginning in Ontario. Under the agreement, hydrogen produced during periods of excess renewable generation will be injected into Enbridge's existing natural gas pipeline network. In June 2013, Hydrogenics announced that its Power-to-Gas facility was operational with the first direct injection of hydrogen into a gas pipeline.
Hydrogenics entered into a joint venture with South Korea–based Kolon Water & Energy to provide power generation in that country in June 2014.
In 2019, Hydrogenics was acquired in large part by Cummins and incorporated into its New Power division. Hydrogenics is now owned 81% by Cummins and 19% by Air Liquide. The company has since been renamed Accelera.
Projects
In June 2000, General Motors and Hydrogenics released their codeveloped HydroGen1, a vehicle powered by a first generation proton exchange membrane fuel cell system. The following year, in October, the two companies developed low-pollution technology to power cars and trucks.
In December 2002, Natural Resources Canada (NRCan) selected Hydrogenics to develop a next-generation hybrid fuel cell bus; Hydrogenics integrated its vehicle-to-grid technology into a 12.5 meter New Flyer Inverno 40i transit bus. Hydrogenics' FC Hybrid Tecnobus midibus was exhibited in Europe in 2005.
In January 2010, Hydrogenics began development of a next-generation power system to be used for surface mobility applications on the moon for the Canadian Space Agency. The system includes an electrolyzer that produces both hydrogen and oxygen using solar power, and a fuel cell system that can be used for mobility, auxiliary, and life support systems. Heliocentris and FAUN Umwelttechnik collaborated with Hydrogenics to develop a hybrid waste disposal vehicle for BSR (Berliner Stadtreinigung) in August of that year.
In July 2012, Hydrogenics joined a consortium with EU members to build the world's largest steady state hydrogen storage facility in the Puglia region of Italy. The system is part of the R&D smart grid project "INGRID."
In April 2013, Hydrogenics won a contract to supply a 1 megawatt hydrogen energy storage system to German utility E.ON in Hamburg. The system will use electrolyzers based on Hydrogenics' proton exchange membrane (PEM) technology for hydrogen production and use excess power generated from regional renewable energy sources, primarily wind energy. In November the first of E.ON's P2G facilities provided by Hydrogenics became operational. The Falkenhagen facility uses wind-powered electrolysis equipment to transform water to hydrogen, which is then mixed with natural gas.
In February 2014, Hydrogenics was awarded two projects with the United Kingdom government. Hydrogenics will provide its technology to build hydrogen fuel stations throughout the UK.
In July 2014, Hydrogenics was selected as a Preferred Respondent for a power-to-gas project in Ontario by the Independent Electricity System Operator (IESO), a corporation responsible for operating the electricity market and directing the operation of the bulk electrical system in the province of Ontario, Canada.
See also
Power to gas
References
External links
Official site
Sustainable energy
Fuel cell manufacturers
Electrolysis
Energy companies of Canada
Membrane technology
Technology companies of Canada
Manufacturing companies based in Mississauga
Cummins | Hydrogenics | [
"Chemistry"
] | 1,393 | [
"Electrolysis",
"Electrochemistry",
"Membrane technology",
"Separation processes"
] |
50,628,081 | https://en.wikipedia.org/wiki/Indium%20aluminium%20nitride | Indium aluminium nitride (InAlN) is a direct bandgap semiconductor material used in the manufacture of electronic and photonic devices. It is part of the III-V group of semiconductors, being an alloy of indium nitride and aluminium nitride, and is closely related to the more widely used gallium nitride. It is of special interest in applications requiring good stability and reliability, owing to its large direct bandgap and ability to maintain operation at temperatures of up to 1000 °C., making it of particular interest to areas such as the space industry. InAlN high-electron-mobility transistors (HEMTs) are attractive candidates for such applications owing to the ability of InAlN to lattice-match to gallium nitride, eliminating a reported failure route in the closely related aluminium gallium nitride HEMTs.
InAlN is grown epitaxially by metalorganic chemical vapour deposition or molecular beam epitaxy in combination with other semiconductor materials such as gallium nitride, aluminium nitride and their associated alloys to produce semiconductor wafers, which are then used as the active component in semiconductor device manufacture. InAlN is an especially difficult material to grow epitaxially due to the widely different properties of aluminium nitride and indium nitride, and the resulting narrow window for optimised growth can lead to contamination (i.e. to produce indium gallium aluminium nitride) and poor crystal quality, at least when compared to AlGaN. Similarly, device fabrication techniques optimised for AlGaN devices may require adjustment to account for the different material properties of InAlN.
References
Indium compounds
III-V semiconductors
Aluminium compounds
Nitrides | Indium aluminium nitride | [
"Chemistry"
] | 355 | [
"Semiconductor materials",
"III-V semiconductors"
] |
50,630,753 | https://en.wikipedia.org/wiki/Post-column%20oxidation%E2%80%93reduction%20reactor | A post-column oxidation-reduction reactor is a chemical reactor that performs derivatization to improve the quantitative measurement of organic analytes. It is used in gas chromatography (GC), after the column and before a flame ionization detector (FID), to make the response factor of the detector uniform for all carbon-based species.
The reactor contains catalysts that convert all of the carbon atoms of organic molecules in GC column effluents into methane before reaching the FID. As a result, all carbon atoms are detected equally, and calibration standards for each individual compound are therefore not needed. It can improve the response of the FID to many compounds with poor or low response, including carbon monoxide (CO), carbon dioxide (CO2), hydrogen cyanide (HCN), formamide (CH3NO), formaldehyde (CH2O), and formic acid (CH2O2).
History
The concept of using a post-column catalytic reactor to enhance the response of the FID was first developed for the reduction of carbon dioxide and carbon monoxide to methane using a nickel catalyst. The reaction device, often referred to as a methanizer, is limited to the conversion of carbon dioxide and carbon monoxide to methane, and the catalysts are poisoned by sulfur and ethylene among others.
Using a combustion reactor prior to the reduction reactor allows other carbon-containing chemicals to benefit from enhancement in FID detection. In the combustion step, all carbon is converted to carbon dioxide, allowing it to be converted to methane for FID detection regardless of its original chemical form.
Operating principle
Chemical reactions
The reactor operates by converting organic analytes after GC separation into methane prior to detection by FID. The oxidation and reduction reactions occur sequentially, wherein the organic compound is first combusted to produce carbon dioxide, which is subsequently reduced to methane. The following reactions illustrate the oxidation/reduction process for formic acid.
HCO2H + 1/2O2 <=> CO2 + H2O
CO2 + 4H2 <=> CH4 + 2H2O
The reactions are fast compared to the time scales typical of gas chromatography, resulting in manageable peak broadening and tailing. Because all carbon reaches the detector as CH4, elements other than carbon are not ionized in the flame and thus do not contribute to the FID signal.
Effect on the FID
Only the CHO+ ions formed from the ionization of carbon compounds are detected. Thus, the non-methane byproducts of the reactions are not detected by the FID.
Since every compound passes through the catalyst bed in the reactor, certain substances that might be harmful, or that could negatively affect the efficiency and durability of the FID, are converted into safer forms. For instance, cyanide is catalytically changed into methane, water, and nitrogen.
Advantages and disadvantages
Advantages
The reactor ensures uniform sensitivity to most organic molecules, leading to consistent and reliable detection across a wide range of analytes.
By eliminating the need for multiple calibrations and standards, the reactor increases the accuracy of quantification, thereby reducing errors and enhancing the reliability of analytical results.
Reduction in calibration requirements decreases the cost of ownership and saves time, making the analytical process more efficient.
The reactor enables the quantification of complex mixtures even when standards are not available (provided retention times are known or can be estimated), thereby expanding the applicability of gas chromatography.
Unlike traditional methanizers, which primarily convert CO and CO2, oxidation-reduction reactors can convert a broader range of organic compounds to methane, leading to a more comprehensive response and improved sensitivity for a wide variety of analytes.
These reactors are more resistant to poisoning by compounds containing nitrogen and oxygen, which ensures consistent performance even in the presence of interfering substances.
Compared to packed column versions of methanizers, oxidation-reduction reactors typically produce sharper peaks, which enhances resolution and improves the quality of chromatographic separation.
Disadvantages
Cost of reactor and replacements. (Each replacement unit costs ~ $6000)
The addition of dead volume causes an increase in peak broadening depending on the GC column flow rates and molecule types (5-10% broadening is typical)
Heteroatoms and oxidation of PolyArc transfer lines increase this broadening.
Susceptible to sulfur, silicon, and halogen poisoning.
Requires constant feed of hydrogen.
Cannot be regenerated in-house; must be shipped back to the manufacturer for a replacement.
Cannot be used with cryogenic oven temperatures, or for GCxGC.
Poor response for species containing C-F bonds.
May contribute to power overload on older GC models, preventing the use of other heated components such as valve boxes or auxiliary detectors.
Benefit over methanizers
Comprehensive Conversion: The reactor converts all organic compounds to methane, whereas traditional methanizers typically only convert CO and CO2. This comprehensive conversion results in a more uniform response and more sensitive detection for a wider range of organic species.
Operation and data analysis
The PolyArc reactor needs hydrogen and air, which are both gases used in any existing FID setup. Software for capturing and analyzing FID signals remains applicable, and no extra software is necessary for the device. Gas flows to the device are controlled using an external control box that must be calibrated manually for the desired flows of air and hydrogen. The detector's overall response can be analyzed either by an external or an internal standard method.
In the external standard method, the FID signal is correlated to the concentration of carbon separately from the analysis. In practice, this entails the injection of any carbon species at varying amounts to create a plot of signal (i.e. peak area) versus injected carbon amount (e.g. moles of carbon). The user should take care to account for any sample splitting, adsorption, inlet discrimination, and leaks. The data should form a line with a slope, m, and an intercept, b. The inverse of this line can be used to determine the amount of carbon in any subsequent injection from any compound.
This is different from a typical FID calibration where this procedure would need to be completed for each compound to account for the relative response differences. The calibration should be examined periodically to account for catalyst deactivation and other sources of detector drift.
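A minimal numerical sketch of this external standard workflow (all numbers invented for illustration):

```python
import numpy as np

# Hypothetical calibration injections: moles of carbon injected vs. FID peak area.
carbon_moles = np.array([1e-9, 2e-9, 4e-9, 8e-9])
peak_area    = np.array([0.52, 1.01, 2.05, 4.11])   # arbitrary area units

# Fit the line described above: signal = m * carbon + b.
m, b = np.polyfit(carbon_moles, peak_area, 1)

# Invert the line to quantify carbon in a later injection of ANY compound,
# since the reactor makes the detector response uniform per carbon atom.
unknown_area = 1.55
unknown_carbon = (unknown_area - b) / m
print(f"injected carbon: {unknown_carbon:.2e} mol")
```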
In the internal standard method, the sample is doped with a known amount of some organic molecule and the amount of all other species can be derived from their relative response to the internal standard (IS). The IS can be any organic molecule and should be chosen for ease of use and compatibility with the compounds in the mixture. For example, one could add 0.01 g of methanol as the IS to 0.9 g of gasoline. The 1 wt% mixture of methanol/gasoline is then injected, and the concentration of all other species can be determined from their relative response to methanol on a carbon basis (a numerical sketch of this arithmetic is given below).
The effects of injection-to-injection variability resulting from different injection volumes, varying split ratios, and leaks are eliminated with the internal standard method. However, inlet discrimination caused by adsorption, reaction, or preferential vaporization in the inlet can lead to accuracy issues when the internal standard is influenced differently than the analyte.
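Under a uniform per-carbon response, the internal standard arithmetic reduces to scaling peak areas by carbon counts (a sketch with invented numbers; "toluene" is just a hypothetical analyte):

```python
# Internal standard (IS) quantification on a carbon basis. With a uniform
# per-carbon response, moles of carbon scale with peak area:
#     nC_analyte = nC_IS * (area_analyte / area_IS)

MW_METHANOL = 32.04           # g/mol, 1 carbon atom per molecule
mass_is_g   = 0.01            # grams of methanol added as the IS

nC_is = mass_is_g / MW_METHANOL * 1            # moles of carbon from the IS

area_is      = 0.80                            # methanol peak area (hypothetical)
area_toluene = 5.60                            # analyte peak area (hypothetical)

nC_toluene = nC_is * (area_toluene / area_is)  # moles of carbon in the analyte
n_toluene  = nC_toluene / 7                    # toluene has 7 carbons per molecule
print(f"toluene: {n_toluene:.2e} mol")
```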
Any non-carbon species that would not be detected in a traditional FID setup (e.g. water, nitrogen, ammonia) will not be detected with PolyArc/FID. This detector can be paired with other detectors that give complementary information such as the mass spectrometer or thermal conductivity detector.
References
Chromatography | Post-column oxidation–reduction reactor | [
"Chemistry"
] | 1,580 | [
"Chromatography",
"Separation processes"
] |
50,632,213 | https://en.wikipedia.org/wiki/Tokamak%20sawtooth | A sawtooth is a relaxation that is commonly observed in the core of tokamak plasmas, first reported in 1974. The relaxations occur quasi-periodically and cause a sudden drop in the temperature and density in the center of the plasma. A soft-xray pinhole camera pointed toward the plasma core during sawtooth activity will produce a sawtooth-like signal. Sawteeth effectively limit the amplitude of the central current density. The Kadomtsev model of sawteeth is a classic example of magnetic reconnection. Other repeated relaxation oscillations occurring in tokamaks include the edge localized mode (ELM) which effectively limits the pressure gradient at the plasma edge and the fishbone instability which effectively limits the density and pressure of fast particles.
Kadomtsev model
An often cited description of the sawtooth relaxation is that by Kadomtsev. The Kadomtsev model uses a resistive magnetohydrodynamic (MHD) description of the plasma. If the amplitude of the current density in the plasma core is high enough that the central safety factor q0 is below unity, a linear m = 1 eigenmode will be unstable, where m is the poloidal mode number. This instability may be the internal kink mode, resistive internal kink mode or tearing mode. The eigenfunction of each of these instabilities is a rigid displacement of the region inside the q = 1 surface. The mode amplitude will grow exponentially until it saturates, significantly distorting the equilibrium fields, and enters the nonlinear phase of evolution. In the nonlinear evolution, the plasma core inside the q = 1 surface is driven into a resistive reconnection layer. As the flux in the core is reconnected, an island grows on the side of the core opposite the reconnection layer. The island replaces the core when the core has completely reconnected, so that the final state has closed nested flux surfaces and the center of the island is the new magnetic axis. In the final state, the safety factor is greater than unity everywhere. The process flattens the temperature and density profiles in the core.
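For orientation, the safety factor q used throughout this description measures the magnetic field-line pitch on a flux surface; in the standard large-aspect-ratio, circular-cross-section approximation (a textbook formula, not specific to the Kadomtsev model),

q(r) ≈ (r B_t) / (R B_p)

where r is the minor radius of the flux surface, R the major radius, and B_t and B_p the toroidal and poloidal magnetic field components. A central safety factor q0 below unity is the trigger condition for the m = 1 instability.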
After a relaxation, the flattened temperature and safety factor profiles become peaked again as the core reheats on the energy confinement time scale, and the central safety factor drops below unity again as the current density resistively diffuses back into the core. In this way, the sawtooth relaxation occurs repeatedly, with an average period set by this reheating and resistive diffusion cycle.
The Kadomtsev picture of sawtoothing in a resistive MHD model was very successful at describing many properties of the sawtooth in early tokamak experiments. However as measurements became more accurate and tokamak plasmas got hotter, discrepancies appeared. One discrepancy is that relaxations caused a much more rapid drop in the central plasma temperature of hot tokamaks than predicted by the resistive reconnection in the Kadomtsev model. Some insight into fast sawtooth crashes was provided by numerical simulations using more sophisticated model equations and by the Wesson model. Another discrepancy found was that the central safety factor was observed to be significantly less than unity immediately after some sawtooth crashes. Two notable explanations for this are incomplete reconnection and rapid rearrangement of flux immediately after a relaxation.
Wesson model
The Wesson model offers an explanation for fast sawtooth crashes in hot tokamaks. Wesson's model describes a sawtooth relaxation based on the non-linear evolution of the quasi-interchange (QI) mode. The nonlinear evolution of the QI does not involve much reconnection, so it does not have Sweet-Parker scaling, and the crash can proceed much faster in high temperature, low resistivity plasmas given a resistive MHD model. However, more accurate experimental methods for measuring profiles in tokamaks were developed later. It was found that the profiles during sawtoothing discharges are not necessarily flat with q close to unity, as needed by Wesson's description of the sawtooth. Nevertheless, Wesson-like relaxations have been observed experimentally on occasion.
Numerical simulation
The first results of a numerical simulation that provided verification of the Kadomtsev model were published in 1976. This simulation demonstrated a single Kadomtsev-like sawtooth relaxation. In 1987 the first results of a simulation demonstrating repeated, quasi-periodic sawtooth relaxations were published. Results from resistive MHD simulations of repeated sawtoothing generally give reasonably accurate crash times and sawtooth period times for smaller tokamaks with relatively small Lundquist numbers.
In large tokamaks with larger Lundquist numbers, sawtooth relaxations are observed to occur much faster than predicted by the resistive Kadomtsev model. Simulations using two-fluid model equations or non-ideal terms in Ohm's law besides the resistive term, such as the Hall and electron inertia terms, can account for the fast crash times observed in hot tokamaks. These models can allow much faster reconnection at low resistivity.
Giant sawteeth
Large, hot tokamaks with significant populations of fast particles sometimes see so-called "giant sawteeth". Giant sawteeth are much larger relaxations and may cause disruptions. They are a concern for ITER. In hot tokamaks, under some circumstances, minority hot particle species can stabilize the sawtooth instability. The central safety factor drops well below unity during the long period of stabilization, until instability is triggered, and the resulting crash is very large.
References
Plasma phenomena
Science and technology in the Soviet Union | Tokamak sawtooth | [
"Physics"
] | 1,113 | [
"Plasma phenomena",
"Physical phenomena",
"Plasma physics"
] |
26,511,501 | https://en.wikipedia.org/wiki/Landscape%20evolution%20model | A landscape evolution model is a physically-based numerical model that simulates changing terrain over the course of time. The change in, or evolution of, terrain, can be due to: glacial or fluvial erosion, sediment transport and deposition, regolith production, the slow movement of material on hillslopes, more intermittent events such as rockfalls, debris flows, landslides, and other surface processes. These changes occur in response to the land surface being uplifted above sea-level (or other base-level) by surface uplift, and also respond to subsidence. A typical landscape evolution model takes many of these factors into account.
Landscape evolution models are used primarily in the field of geomorphology. As they improve, they are beginning to be consulted by land managers to aid in decision making, most recently in the area of degraded landscapes.
The earliest landscape evolution models were developed in the 1970s. In those models, flow of water across a mesh was simulated, and cell elevations were changed in response to calculated erosional power. Modern landscape evolution models can leverage graphics processing units and other acceleration hardware and software to run more quickly.
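In the same spirit as those early models, here is a minimal sketch of a 1-D stream-power erosion law (all parameter values invented for illustration; this is not any particular published code):

```python
import numpy as np

# 1-D detachment-limited stream-power model: dz/dt = U - K * A**m * S**n,
# with uplift U, erodibility K, drainage area A growing downstream, slope S.
n_nodes, dx, dt, steps = 100, 100.0, 50.0, 2000   # grid (m) and time (yr) setup
U, K, m_exp, n_exp = 1e-3, 1e-5, 0.5, 1.0         # invented parameter values

z = np.linspace(100.0, 0.0, n_nodes)              # initial ramp; outlet at the end
A = (np.arange(1, n_nodes + 1) * dx) ** 2         # crude drainage-area proxy, A ~ x^2

for _ in range(steps):
    S = np.maximum(-np.diff(z) / dx, 0.0)         # downstream slope on each link
    erosion = K * A[:-1] ** m_exp * S ** n_exp    # stream-power incision rate
    z[:-1] += dt * (U - erosion)                  # uplift everywhere but the outlet
    z[-1] = 0.0                                   # fixed base level

print(f"relief after {steps * dt:.0f} yr: {z.max() - z.min():.1f} m")
```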
See also
Hillslope evolution
SIBERIA
CAESAR-Lisflood
LANDIS II, an open-source forest landscape model that simulates future forests
pyBadlands
Community Surface Dynamics Modeling System
References
Geomorphology models
Mathematical modeling | Landscape evolution model | [
"Mathematics"
] | 289 | [
"Applied mathematics",
"Mathematical modeling"
] |
26,520,106 | https://en.wikipedia.org/wiki/Lie%20point%20symmetry | Lie point symmetry is a concept in advanced mathematics. Towards the end of the nineteenth century, Sophus Lie introduced the notion of Lie group in order to study the solutions of ordinary differential equations (ODEs). He showed the following main property: the order of an ordinary differential equation can be reduced by one if it is invariant under one-parameter Lie group of point transformations. This observation unified and extended the available integration techniques. Lie devoted the remainder of his mathematical career to developing these continuous groups that have now an impact on many areas of mathematically based sciences. The applications of Lie groups to differential systems were mainly established by Lie and Emmy Noether, and then advocated by Élie Cartan.
Roughly speaking, a Lie point symmetry of a system is a local group of transformations that maps every solution of the system to another solution of the same system. In other words, it maps the solution set of the system to itself. Elementary examples of Lie groups are translations, rotations and scalings.
The Lie symmetry theory is a well-known subject. It deals with continuous symmetries, as opposed to, for example, discrete symmetries. The literature for this theory can be found, among other places, in these notes.
Overview
Types of symmetries
Lie groups and hence their infinitesimal generators can be naturally "extended" to act on the space of independent variables, state variables (dependent variables) and derivatives of the state variables up to any finite order. There are many other kinds of symmetries. For example, contact transformations let coefficients of the transformation's infinitesimal generator depend also on first derivatives of the coordinates. Lie-Bäcklund transformations let them involve derivatives up to an arbitrary order. The possibility of the existence of such symmetries was recognized by Noether. For Lie point symmetries, the coefficients of the infinitesimal generators depend only on the coordinates $Z$.
Applications
Lie symmetries were introduced by Lie in order to solve ordinary differential equations. Another application of symmetry methods is to reduce systems of differential equations, finding equivalent systems of differential equations of simpler form. This is called reduction. In the literature, one can find the classical reduction process, and the moving frame-based reduction process. Also symmetry groups can be used for classifying different symmetry classes of solutions.
Geometrical framework
Infinitesimal approach
Lie's fundamental theorems underline that Lie groups can be characterized by elements known as infinitesimal generators. These mathematical objects form a Lie algebra of infinitesimal generators. Deduced "infinitesimal symmetry conditions" (defining equations of the symmetry group) can be explicitly solved in order to find the closed form of symmetry groups, and thus the associated infinitesimal generators.
Let $Z = \{z_1, \ldots, z_n\}$ be the set of coordinates on which a system is defined, where $n$ is the cardinality of $Z$. An infinitesimal generator $\delta$ in the field $\mathbb{R}(Z)$ of rational functions is a linear operator $\delta : \mathbb{R}(Z) \to \mathbb{R}(Z)$ that has $\mathbb{R}$ in its kernel and that satisfies the Leibniz rule:

$\delta(f_1 f_2) = f_1\,\delta(f_2) + f_2\,\delta(f_1).$

In the canonical basis of elementary derivations $\left\{\frac{\partial}{\partial z_1}, \ldots, \frac{\partial}{\partial z_n}\right\}$, it is written as:

$\delta = \sum_{i=1}^{n} \xi_{z_i}\, \frac{\partial}{\partial z_i}$

where $\xi_{z_i}$ is in $\mathbb{R}(Z)$ for all $i$ in $\{1, \ldots, n\}$.
Lie groups and Lie algebras of infinitesimal generators
Lie algebras can be generated by a generating set of infinitesimal generators as defined above. To every Lie group, one can associate a Lie algebra. Roughly, a Lie algebra is an algebra constituted by a vector space equipped with Lie bracket as additional operation. The base field of a Lie algebra depends on the concept of invariant. Here only finite-dimensional Lie algebras are considered.
Continuous dynamical systems
A dynamical system (or flow) is a one-parameter group action. Let us denote by $\mathcal{D}$ such a dynamical system, more precisely, a (left-)action of a group $G$ on a manifold $M$:

$\mathcal{D} : G \times M \to M, \qquad (g, z) \mapsto \mathcal{D}(g, z)$

such that for all point $z$ in $M$:

$\mathcal{D}(e, z) = z$ where $e$ is the neutral element of $G$;

for all $g, h$ in $G$, $\mathcal{D}(g, \mathcal{D}(h, z)) = \mathcal{D}(gh, z)$.

A continuous dynamical system is defined on a group $G$ that can be identified with $\mathbb{R}$, i.e. the group elements are continuous.
Invariants
An invariant, roughly speaking, is an element that does not change under a transformation.
Definition of Lie point symmetries
In this paragraph, we consider precisely expanded Lie point symmetries i.e. we work in an expanded space meaning that the distinction between independent variable, state variables and parameters are avoided as much as possible.
A symmetry group of a system is a continuous dynamical system defined on a local Lie group acting on a manifold . For the sake of clarity, we restrict ourselves to n-dimensional real manifolds where is the number of system coordinates.
Lie point symmetries of algebraic systems
Let us define algebraic systems used in the forthcoming symmetry definition.
Algebraic systems
Let $F = \{f_1, \ldots, f_k\}$ be a finite set of rational functions over the field $\mathbb{R}$, where each $f_i = p_i / q_i$ and $p_i$, $q_i$ are polynomials in $\mathbb{R}[Z]$, i.e. in variables $Z = \{z_1, \ldots, z_n\}$ with coefficients in $\mathbb{R}$. An algebraic system associated to $F$ is defined by the following equalities and inequalities:

$f_i(z) = 0 \quad \text{and} \quad q_i(z) \neq 0, \qquad 1 \le i \le k.$

An algebraic system defined by $F$ is regular (a.k.a. smooth) if the system is of maximal rank $k$, meaning that the Jacobian matrix $\left(\partial f_i / \partial z_j\right)$ is of rank $k$ at every solution $z$ of the associated semi-algebraic variety.
Definition of Lie point symmetries
The following theorem (see th. 2.8 in ch.2 of ) gives necessary and sufficient conditions so that a local Lie group is a symmetry group of an algebraic system.
Theorem. Let $G$ be a connected local Lie group of a continuous dynamical system acting in the n-dimensional space $\mathbb{R}^n$. Let $F : \mathbb{R}^n \to \mathbb{R}^k$ with $k \le n$ define a regular system of algebraic equations:

$f_i(z) = 0, \qquad 1 \le i \le k.$

Then $G$ is a symmetry group of this algebraic system if, and only if,

$\delta(f_i)(z) = 0 \quad \text{for } 1 \le i \le k \quad \text{whenever} \quad f_1(z) = \cdots = f_k(z) = 0$

for every infinitesimal generator $\delta$ in the Lie algebra $\mathfrak{g}$ of $G$.
Example
Consider the algebraic system defined on a space of 6 variables, namely with:
The infinitesimal generator
is associated to one of the one-parameter symmetry groups. It acts on 4 variables, namely and . One can easily verify that and . Thus the relations are satisfied for any in that vanishes the algebraic system.
Lie point symmetries of dynamical systems
Let us define systems of first-order ODEs used in the forthcoming symmetry definition.
Systems of ODEs and associated infinitesimal generators
Let $\frac{d}{dt}$ be a derivation w.r.t. the continuous independent variable $t$. We consider two sets $X = \{x_1, \ldots, x_k\}$ and $\Theta = \{\theta_1, \ldots, \theta_l\}$. The associated coordinate set is defined by $Z = \{t\} \cup X \cup \Theta$ and its cardinal is $n = 1 + k + l$. With these notations, a system of first-order ODEs is a system where:

$\frac{dx_i}{dt} = f_{x_i}(Z), \qquad 1 \le i \le k,$

and the set $\{f_{x_1}, \ldots, f_{x_k}\}$ specifies the evolution of state variables of ODEs w.r.t. the independent variable. The elements of the set $X$ are called state variables, these of $\Theta$ parameters.
One can associate also a continuous dynamical system to a system of ODEs by resolving its equations.
An infinitesimal generator $\delta$ is a derivation that is closely related to systems of ODEs (more precisely, to continuous dynamical systems). For the link between a system of ODEs, the associated vector field and the infinitesimal generator, see section 1.3 of. The infinitesimal generator associated to a system of ODEs, described as above, is defined with the same notations as follows:

$\delta = \frac{\partial}{\partial t} + \sum_{i=1}^{k} f_{x_i}(Z)\, \frac{\partial}{\partial x_i}.$
Definition of Lie point symmetries
Here is a geometrical definition of such symmetries. Let $\mathcal{D}$ be a continuous dynamical system and $\delta_{\mathcal{D}}$ its infinitesimal generator. A continuous dynamical system $\mathcal{S}$ is a Lie point symmetry of $\mathcal{D}$ if, and only if, $\mathcal{S}$ sends every orbit of $\mathcal{D}$ to an orbit. Hence, the infinitesimal generator $\delta_{\mathcal{S}}$ satisfies the following relation based on Lie bracket:

$[\delta_{\mathcal{D}}, \delta_{\mathcal{S}}] = \lambda\, \delta_{\mathcal{D}}$

where $\lambda$ is any constant of $\delta_{\mathcal{D}}$ and $\delta_{\mathcal{S}}$, i.e. $\delta_{\mathcal{D}}(\lambda) = \delta_{\mathcal{S}}(\lambda) = 0$. These generators are linearly independent.

One does not need the explicit formulas of $\mathcal{D}$ in order to compute the infinitesimal generators of its symmetries.
Example
Consider Pierre François Verhulst's logistic growth model with linear predation, where the state variable $x$ represents a population. The parameter $a$ is the difference between the growth and predation rate and the parameter $b$ corresponds to the receptive capacity of the environment:

$\frac{dx}{dt} = (a - bx)x, \qquad \frac{da}{dt} = 0, \qquad \frac{db}{dt} = 0.$
The continuous dynamical system associated to this system of ODEs is:

$\mathcal{D}\big(\lambda, (t, x, a, b)\big) = \left(t + \lambda,\ \frac{a\,x\,e^{a\lambda}}{a - bx + bx\,e^{a\lambda}},\ a,\ b\right).$
The independent variable $t$ varies continuously; thus the associated group can be identified with $\mathbb{R}$.
The infinitesimal generator associated to this system of ODEs is:

$\delta_{\mathcal{D}} = \frac{\partial}{\partial t} + x(a - bx)\frac{\partial}{\partial x}.$
The following infinitesimal generators belong to the 2-dimensional symmetry group of $\mathcal{D}$:

$\delta_1 = \frac{\partial}{\partial t}, \qquad \delta_2 = x\,\frac{\partial}{\partial x} - b\,\frac{\partial}{\partial b}.$
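These bracket relations can be checked mechanically. Below is a minimal SymPy sketch (an illustration, not taken from the cited literature; the coefficient-tuple representation over the coordinates (t, x, a, b) is an assumed encoding) verifying that both generators commute with the system field, i.e. the symmetry relation holds with λ = 0:

```python
# Lie bracket check: vector fields as coefficient tuples over (t, x, a, b).
import sympy as sp

t, x, a, b = sp.symbols('t x a b')
coords = (t, x, a, b)

def bracket(X, Y):
    """Lie bracket [X, Y] of vector fields given as coefficient tuples."""
    return tuple(sp.simplify(
        sum(X[i] * sp.diff(Y[j], coords[i]) - Y[i] * sp.diff(X[j], coords[i])
            for i in range(len(coords))))
        for j in range(len(coords)))

delta_D = (1, x * (a - b * x), 0, 0)   # system field d/dt + x(a - bx) d/dx
delta_1 = (1, 0, 0, 0)                 # time translation
delta_2 = (0, x, 0, -b)                # scaling x -> s*x, b -> b/s

print(bracket(delta_1, delta_D))       # (0, 0, 0, 0)
print(bracket(delta_2, delta_D))       # (0, 0, 0, 0)
```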
Software
There exist many software packages in this area. For example, the package liesymm of Maple provides some Lie symmetry methods for PDEs. It manipulates integration of determining systems and also differential forms. Despite its success on small systems, its integration capabilities for solving determining systems automatically are limited by complexity issues. The DETools package uses the prolongation of vector fields for searching Lie symmetries of ODEs. Finding Lie symmetries for ODEs, in the general case, may be as complicated as solving the original system.
References
Lie groups
Symmetry | Lie point symmetry | [
"Physics",
"Mathematics"
] | 1,755 | [
"Lie groups",
"Mathematical structures",
"Algebraic structures",
"Geometry",
"Symmetry"
] |
36,337,411 | https://en.wikipedia.org/wiki/Structural%20integrity%20and%20failure | Structural integrity and failure is an aspect of engineering that deals with the ability of a structure to support a designed structural load (weight, force, etc.) without breaking and includes the study of past structural failures in order to prevent failures in future designs.
Structural integrity is the ability of an item—either a structural component or a structure consisting of many components—to hold together under a load, including its own weight, without breaking or deforming excessively. It assures that the construction will perform its designed function during reasonable use, for as long as its intended life span. Items are constructed with structural integrity to prevent catastrophic failure, which can result in injuries, severe damage, death, and/or monetary losses.
Structural failure refers to the loss of structural integrity, or the loss of load-carrying structural capacity in either a structural component or the structure itself. Structural failure is initiated when a material is stressed beyond its strength limit, causing fracture or excessive deformations; one limit state that must be accounted for in structural design is ultimate failure strength. In a well designed system, a localized failure should not cause immediate or even progressive collapse of the entire structure.
Introduction
Structural integrity is the ability of a structure to withstand an intended load without failing due to fracture, deformation, or fatigue. It is a concept often used in engineering to produce items that will serve their designed purposes and remain functional for a desired service life.
To construct an item with structural integrity, an engineer must first consider a material's mechanical properties, such as toughness, strength, weight, hardness, and elasticity, and then determine the size and shape necessary for the material to withstand the desired load for a long life. Since members must neither break nor bend excessively, they must be both stiff and tough. A very stiff material may resist bending, but unless it is sufficiently tough, it may have to be very large to support a load without breaking. On the other hand, a highly elastic material will bend under a load even if its high toughness prevents fracture.
Furthermore, each component's integrity must correspond to its individual application in any load-bearing structure. Bridge supports need a high yield strength, whereas the bolts that hold them need good shear and tensile strength. Springs need good elasticity, but lathe tooling needs high rigidity. In addition, the entire structure must be able to support its load without its weakest links failing, as this can put more stress on other structural elements and lead to cascading failures.
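For the simplest sizing case, an axially loaded member, this trade-off reduces to comparing the working stress against the material's strength. The Python sketch below (with illustrative numbers only, not values from any design code or standard) computes a factor of safety for a circular rod:

```python
# Factor-of-safety check for an axially loaded circular rod (illustrative).
import math

def axial_safety_factor(load_N, diameter_m, yield_strength_Pa):
    area = math.pi * (diameter_m / 2.0) ** 2    # cross-sectional area
    stress = load_N / area                      # axial stress sigma = F / A
    return yield_strength_Pa / stress           # factor of safety

# 50 kN on a 20 mm steel rod (yield ~ 250 MPa) -> FoS ~ 1.57
print(round(axial_safety_factor(50e3, 0.020, 250e6), 2))
```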
History
The need to build structures with integrity goes back as far as recorded history. Houses needed to be able to support their own weight, plus the weight of the inhabitants. Castles needed to be fortified to withstand assaults from invaders. Tools needed to be strong and tough enough to do their jobs.
In ancient times there were no mathematical formulas to predict the integrity of a structure. Builders, blacksmiths, carpenters, and masons relied on a system of trial and error (learning from past failures), experience, and apprenticeship to make safe and sturdy structures. Historically, safety and longevity were ensured by overcompensating, for example, using 20 tons of concrete when 10 tons would do. Galileo was one of the first to take the strength of materials into account in 1638, in his treatise Dialogues of Two New Sciences. However, mathematical ways to calculate such material properties did not begin to develop until the 19th century. The science of fracture mechanics, as it exists today, was not developed until the 1920s, when Alan Arnold Griffith studied the brittle fracture of glass.
Starting in the 1940s, the infamous failures of several new technologies made a more scientific method for analyzing structural failures necessary. During World War II, over 200 welded-steel ships broke in half due to brittle fracture, caused by stresses created from the welding process, temperature changes, and by the stress concentrations at the square corners of the bulkheads. In the 1950s, several De Havilland Comets exploded in mid-flight due to stress concentrations at the corners of their squared windows, which caused cracks to form and the pressurized cabins to explode. Boiler explosions, caused by failures in pressurized boiler tanks, were another common problem during this era, and caused severe damage. The growing sizes of bridges and buildings led to even greater catastrophes and loss of life. This need to build constructions with structural integrity led to great advances in the fields of material sciences and fracture mechanics.
Types of failure
Structural failure can occur from many types of problems, most of which are unique to different industries and structural types. However, most can be traced to one of five main causes.
The first is that the structure is not strong and tough enough to support the load, due to either its size, shape, or choice of material. If the structure or component is not strong enough, catastrophic failure can occur when the structure is stressed beyond its critical stress level.
The second type of failure is from fatigue or corrosion, caused by instability in the structure's geometry, design or material properties. These failures usually begin when cracks form at stress points, such as squared corners or bolt holes too close to the material's edge. These cracks grow as the material is repeatedly stressed and unloaded (cyclic loading), eventually reaching a critical length and causing the structure to suddenly fail under normal loading conditions.
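Crack growth of this kind is often modelled with the Paris law, da/dN = C(ΔK)^m, where ΔK is the stress-intensity-factor range per load cycle. The sketch below (all constants assumed, with a in metres and stresses in MPa so that ΔK is in MPa·√m) integrates it to estimate the number of cycles until a crack reaches a critical length:

```python
# Minimal Paris-law integration: da/dN = C * dK**m, dK = Y * dsigma * sqrt(pi*a).
import math

def cycles_to_critical(a0=0.001, a_crit=0.02, dsigma=100.0,
                       C=1e-11, m=3.0, Y=1.0, block=1000):
    a, N = a0, 0
    while a < a_crit:
        dK = Y * dsigma * math.sqrt(math.pi * a)  # stress-intensity range
        a += C * dK ** m * block                  # crack growth over one block
        N += block
    return N

print(cycles_to_critical())   # cycles for the crack to reach critical length
```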
The third type of failure is caused by manufacturing errors, including improper selection of materials, incorrect sizing, improper heat treating, failing to adhere to the design, or shoddy workmanship. This type of failure can occur at any time and is usually unpredictable.
The fourth type of failure is from the use of defective materials. This type of failure is also unpredictable, since the material may have been improperly manufactured or damaged from prior use.
The fifth cause of failure is from lack of consideration of unexpected problems. This type of failure can be caused by events such as vandalism, sabotage, or natural disasters. It can also occur if those who use and maintain the construction are not properly trained and overstress the structure.
Notable failures
Bridges
Dee bridge
The Dee Bridge was designed by Robert Stephenson, using cast iron girders reinforced with wrought iron struts. On 24 May 1847, it collapsed as a train passed over it, killing five people. Its collapse was the subject of one of the first formal inquiries into a structural failure. This inquiry concluded that the design of the structure was fundamentally flawed, as the wrought iron did not reinforce the cast iron, and that the casting had failed due to repeated flexing.
First Tay Rail Bridge
The Dee bridge disaster was followed by a number of cast iron bridge collapses, including the collapse of the first Tay Rail Bridge on 28 December 1879. Like the Dee bridge, the Tay collapsed when a train passed over it, killing 75 people. The bridge failed because it was constructed from poorly made cast iron, and because designer Thomas Bouch failed to consider wind loading on it. Its collapse resulted in cast iron being replaced by steel construction, and a complete redesign in 1890 of the Forth Railway Bridge, which became the first bridge in the world entirely made of steel.
First Tacoma Narrows Bridge
The 1940 collapse of the original Tacoma Narrows Bridge is sometimes characterized in physics textbooks as a classic example of resonance, although this description is misleading. The catastrophic vibrations that destroyed the bridge were not due to simple mechanical resonance, but to a more complicated oscillation between the bridge and winds passing through it, known as aeroelastic flutter. Robert H. Scanlan, a leading contributor to the understanding of bridge aerodynamics, wrote an article about this misunderstanding. This collapse, and the research that followed, led to an increased understanding of wind/structure interactions. Several bridges were altered following the collapse to prevent a similar event occurring again. The only fatality was a dog.
I-35W Bridge
The I-35W Mississippi River bridge (officially known simply as Bridge 9340) was an eight-lane steel truss arch bridge that carried Interstate 35W across the Mississippi River in Minneapolis, Minnesota, United States. The bridge was completed in 1967, and its maintenance was performed by the Minnesota Department of Transportation. The bridge was Minnesota's fifth–busiest, carrying 140,000 vehicles daily. The bridge catastrophically failed during the evening rush hour on 1 August 2007, collapsing to the river and riverbanks beneath. Thirteen people were killed and 145 were injured. Following the collapse, the Federal Highway Administration advised states to inspect the 700 U.S. bridges of similar construction after a possible design flaw in the bridge was discovered, related to large steel sheets called gusset plates which were used to connect girders together in the truss structure. Officials expressed concern about many other bridges in the United States sharing the same design and raised questions as to why such a flaw would not have been discovered in over 40 years of inspections.
Buildings
Thane building collapse
On 4 April 2013, a building collapsed on tribal land in Mumbra, a suburb of Thane in Maharashtra, India. It has been called the worst building collapse in the area: 74 people died, including 18 children, 23 women, and 33 men, while more than 100 people survived.
The building was under construction and did not have an occupancy certificate for its 100 to 150 low- to middle-income residents; its only occupants were the site construction workers and their families. The building was reported to have been illegally constructed because standard practices were not followed for safe, lawful construction, land acquisition and resident occupancy.
By 11 April, a total of 15 suspects were arrested including builders, engineers, municipal officials, and other responsible parties. Governmental records indicate that there were two orders to manage the number of illegal buildings in the area: a 2005 Maharashtra state order to use remote sensing and a 2010 Bombay High Court order. Complaints were also made to state and municipal officials.
On 9 April, the Thane Municipal Corporation began a campaign to demolish illegal buildings in the area, focusing on "dangerous" buildings, and set up a call center to accept and track the resolutions of complaints about illegal buildings. The forest department, meanwhile, promised to address encroachment of forest land in the Thane District.
Savar building collapse
On 24 April 2013, Rana Plaza, an eight-storey commercial building, collapsed in Savar, a sub-district in the Greater Dhaka Area, the capital of Bangladesh. The search for the dead ended on 13 May with the death toll of 1,134. Approximately 2,515 injured people were rescued from the building alive.
It is considered to be the deadliest garment-factory accident in history, as well as the deadliest accidental structural failure in modern human history.
The building contained clothing factories, a bank, apartments, and several other shops. The shops and the bank on the lower floors immediately closed after cracks were discovered in the building. Warnings to avoid using the building after cracks appeared the day before had been ignored. Garment workers were ordered to return the following day and the building collapsed during the morning rush-hour.
Sampoong Department Store collapse
On 29 June 1995, the five-story Sampoong Department Store in the Seocho District of Seoul, South Korea collapsed resulting in the deaths of 502 people, with another 1,445 being trapped.
In April 1995, cracks began to appear in the ceiling of the fifth floor of the store's south wing due to the presence of an air-conditioning unit on the weakened roof of the poorly built structure. On the morning of 29 June, as the number of cracks in the ceiling increased dramatically, store managers closed the top floor and shut off the air conditioning, but failed to shut the building down or issue formal evacuation orders as the executives themselves left the premises as a precaution.
Five hours before the collapse, the first of several loud bangs was heard emanating from the top floors, as the vibration of the air conditioning caused the cracks in the slabs to widen further. Amid customer reports of vibration in the building, the air conditioning was turned off, but the cracks in the floors had already grown to 10 cm wide. At about 5:00 p.m. local time, the fifth-floor ceiling began to sink, and at 5:57 p.m., the roof gave way, sending the air conditioning unit crashing through into the already-overloaded fifth floor.
Ronan Point
On 16 May 1968, the 22-story residential tower Ronan Point in the London Borough of Newham collapsed when a relatively small gas explosion on the 18th floor caused a structural wall panel to be blown away from the building. The tower was constructed of precast concrete, and the failure of the single panel caused one entire corner of the building to collapse. The panel was able to be blown out because there was insufficient reinforcement steel passing between the panels. This also meant that the loads carried by the panel could not be redistributed to other adjacent panels, because there was no route for the forces to follow. As a result of the collapse, building regulations were overhauled to prevent disproportionate collapse and the understanding of precast concrete detailing was greatly advanced. Many similar buildings were altered or demolished as a result of the collapse.
Oklahoma City bombing
On 19 April 1995, the nine-story concrete framed Alfred P. Murrah Federal Building in Oklahoma was struck by a truck bomb causing partial collapse, resulting in the deaths of 168 people. The bomb, though large, caused a significantly disproportionate collapse of the structure. The bomb blew all the glass off the front of the building and completely shattered a ground floor reinforced concrete column (see brisance). At second story level a wider column spacing existed, and loads from upper story columns were transferred into fewer columns below by girders at second floor level. The removal of one of the lower story columns caused neighbouring columns to fail due to the extra load, eventually leading to the complete collapse of the central portion of the building. The bombing was one of the first to highlight the extreme forces that blast loading from terrorism can exert on buildings, and led to increased consideration of terrorism in structural design of buildings.
Versailles wedding hall
The Versailles wedding hall, located in Talpiot, Jerusalem, is the site of the worst civil disaster in Israel's history. At 22:43 on Thursday night, 24 May 2001, during the wedding of Keren and Asaf Dror, a large portion of the third floor of the four-story building collapsed, killing 23 people. The bride and the groom survived.
World Trade Center Towers 1, 2, and 7
In the September 11 attacks, two commercial airliners were deliberately crashed into the Twin Towers of the World Trade Center in New York City. The impact, explosion and resulting fires caused both towers to collapse within less than two hours. The impacts severed exterior columns and damaged core columns, redistributing the loads that these columns had carried. This redistribution of loads was greatly influenced by the hat trusses at the top of each building. The impacts dislodged some of the fireproofing from the steel, increasing its exposure to the heat of the fires. Temperatures became high enough to weaken the core columns to the point of creep and plastic deformation under the weight of higher floors. The heat of the fires also weakened the perimeter columns and floors, causing the floors to sag and exerting an inward force on exterior walls of the building. WTC Building 7 also collapsed later that day; the 47-story skyscraper collapsed within seconds due to a combination of a large fire inside the building and heavy structural damage from the collapse of the North Tower.
Champlain Towers
On 24 June 2021, Champlain Towers South, a 12-story condominium building in Surfside, Florida partially collapsed, causing dozens of injuries and 98 deaths. The collapse was captured on video. One person was rescued from the rubble, and about 35 people were rescued on 24 June from the uncollapsed portion of the building. Long-term degradation of reinforced concrete-support structures in the underground parking garage, due to water penetration and corrosion of the reinforcing steel, has been considered as a factor in—or the cause of—the collapse. The issues had been reported in 2018 and noted as "much worse" in April 2021. A $15 million program of remedial works had been approved at the time of the collapse.
First Congregational Church, New London, Connecticut
On 24 January 2024, the spire of this Gothic-revival stone church collapsed, bringing down the roof and irretrievably damaging the structure.
Aircraft
Repeat structural failures on the same type of aircraft occurred in 1954, when two de Havilland Comet C1 jet airliners crashed due to decompression caused by metal fatigue, and in 1963–64, when the vertical stabilizer on four Boeing B-52 bombers broke off in mid-air.
Other
Warsaw Radio Mast
On 8 August 1991 at 16:00 UTC, the Warsaw radio mast, the tallest man-made object ever built before the erection of Burj Khalifa, collapsed as a consequence of an error in exchanging the guy-wires on the highest stock. The mast first bent and then snapped at roughly half its height. Its collapse destroyed a small mobile crane belonging to Mostostal Zabrze. As all workers had left the mast before the exchange procedures, there were no fatalities, in contrast to the similar collapse of the WLBT Tower in 1997.
Hyatt Regency walkway
On 17 July 1981, two suspended walkways through the lobby of the Hyatt Regency in Kansas City, Missouri, collapsed, killing 114 and injuring more than 200 people at a tea dance. The collapse was due to a late change in design, altering the method in which the rods supporting the walkways were connected to them, and inadvertently doubling the forces on the connection. The failure highlighted the need for good communication between design engineers and contractors, and rigorous checks on designs and especially on contractor-proposed design changes. The failure is a standard case study on engineering courses around the world, and is used to teach the importance of ethics in engineering.
See also
Structural analysis
Structural robustness
Catastrophic failure
Earthquake engineering
Porch collapse
Forensic engineering
Progressive collapse
Seismic performance
Serviceability failure
Structural fracture mechanics
Collapse zone
Engineering disasters
Tofu-dreg project
Urban search and rescue
List of structural failures and collapses
References
Notes
Citations
Bibliography
Feld, Jacob; Carper, Kenneth L. (1997). Construction Failure. John Wiley & Sons. .
Lewis, Peter R. (2007). Disaster on the Dee. Tempus.
Petroski, Henry (1994). Design Paradigms: Case Histories of Error and Judgment in Engineering. Cambridge University Press. .
Scott, Richard (2001). In the Wake of Tacoma: Suspension Bridges and the Quest for Aerodynamic Stability. ASCE Publications. .
Solid mechanics
Materials science
Building engineering
Mechanical engineering
Building defects
Structural engineering
Engineering failures | Structural integrity and failure | [
"Physics",
"Materials_science",
"Technology",
"Engineering"
] | 3,816 | [
"Structural engineering",
"Systems engineering",
"Solid mechanics",
"Applied and interdisciplinary physics",
"Reliability engineering",
"Building engineering",
"Technological failures",
"Materials science",
"Engineering failures",
"Construction",
"Civil engineering",
"Mechanics",
"Mechanical... |
52,078,022 | https://en.wikipedia.org/wiki/Toi%20%28programming%20language%29 | Toi is an imperative, type-sensitive language that provides the basic functionality of a programming language. The language was designed and developed from the ground-up by Paul Longtine. Written in C, Toi was created with the intent to be an educational experience and serves as a learning tool (or toy, hence the name) for those looking to familiarize themselves with the inner-workings of a programming language.
Specification
Types
0 VOID - Null, no data
1 ADDR - Address type (bytecode)
2 TYPE - A `type` type
3 PLIST - Parameter list
4 FUNC - Function
5 OBJBLDR - Object builder
6 OBJECT - Object/Class
7 G_PTR - Generic pointer
8 G_INT - Generic integer
9 G_FLOAT - Generic double
10 G_CHAR - Generic character
11 G_STR - Generic string
12 S_ARRAY - Static array
13 D_ARRAY - Dynamic array
14 H_TABLE - Hashtable
15 G_FIFO - Stack
Runtime
Runtime context definition
The runtime context keeps track of an individual thread's metadata, such as:
The operating stack
The operating stack where current running instructions push/pop to.
refer to STACK DEFINITION
Namespace instance
Data structure that holds the references to variable containers, also providing the interface for Namespace Levels.
refer to NAMESPACE DEFINITION
Argument stack
Arguments to function calls are pushed on to this stack, flushed on call.
refer to STACK DEFINITION, FUNCTION DEFINITION
Program counter
An interface around bytecode to keep track of traversing line-numbered instructions.
refer to PROGRAM COUNTER DEFINITION
This context gives definition to an 'environment' where code is executed.
Namespace definition
A key part to any operational computer language is the notion of a 'Namespace'.
This notion of a 'Namespace' refers to the ability to declare a name, along with
needed metadata, and call upon the same name to retrieve the values associated
with that name.
In this definition, the namespace will provide the following key mechanisms:
Declaring a name
Assigning a name to a value
Retrieving a name's value
Handle a name's scope
Implicitly move in/out of scopes
The scope argument is a single byte, where the format is as follows:
Namespace|Scope
0000000 |0
Scopes are handled by referencing to either the Global Scope or the Local Scope.
The Local Scope is denoted by '0' in the scope argument when referring to names,
and this scope is initialized when evaluating any new block of code. When a different block of code is called, a new scope is added as a new Namespace level. Namespace levels act as context switches within function contexts. For example, the local namespace must be 'returned to' if that local namespace context needs to be preserved on return. Pushing 'Namespace levels' ensures that for every n function calls, you can traverse n instances of previous namespaces. For example, take this namespace level graphic, where each Level is a namespace instance:
Level 0: Global namespace, LSB == '1'.
Level 1: Namespace level, where Local Level is at 1, LSB == '0'.
When a function is called, another namespace level is created and the local
level increases, like so:
Level 0: Global namespace, LSB == '1'.
Level 1: Namespace level.
Level 2: Namespace level, where Local Level is at 2, LSB == '0'.
Global scope names (LSB == 1 in the scope argument) are persistent through the runtime, as they hold all function definitions, objects, and names declared in the global scope. The "Local Level" is where references that have a scope argument of '0' resolve when accessing names.
The Namespace argument refers to which Namespace the variable exists in.
When the namespace argument equals 0, the current namespace is referenced.
The global namespace is 1 by default, and any other namespaces must be declared by using the DENS instruction (opcode FD, described in the bytecode specification below).
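As a concrete illustration of the scope byte described above (a sketch, not part of the Toi sources), the seven namespace bits sit above a single global/local bit in the LSB:

```python
# Pack/unpack the single-byte scope argument: 7 namespace bits, LSB = scope
# (1 for global, 0 for local), matching the "Namespace|Scope" layout above.
def pack_scope(namespace, is_global):
    return ((namespace & 0x7F) << 1) | (1 if is_global else 0)

def unpack_scope(byte):
    return byte >> 1, bool(byte & 0x01)   # -> (namespace, is_global)

print(unpack_scope(pack_scope(1, True)))  # (1, True): the global namespace
```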
Variable definition
Variables in this definition provide the following mechanisms:
Provide a distinguishable area of typed data
Provide a generic container around typed data, to allow for labeling
Declare a set of fundamental datatypes, and methods to:
Allocate the proper space of memory for the given data type,
Deallocate the space of memory a variables data may take up, and
Set in place a notion of ownership
For a given variable V, V defines the following attributes
V -> Ownership
V -> Type
V -> Pointer to typed space in memory
Each variable then can be handled as a generic container.
In the previous section, the notion of Namespace levels was introduced. Much
like how names are scoped, generic variable containers must communicate their
scope in terms of location within a given set of scopes. This is what is called
'Ownership'. In a given runtime, variable containers can exist in the following
structures: A stack instance, Bytecode arguments, and Namespaces
The concept of ownership differentiates variables existing on one or more of the
structures. This is set in place to prevent accidental deallocation of variable
containers that are not copied, but instead passed as references to these
structures.
Function definition
Functions in this virtual machine are a pointer to a set of instructions in a
program with metadata about parameters defined.
Object definition
In this paradigm, objects are units that encapsulate a separate namespace and
collection of methods.
Bytecode spec
Bytecode is arranged in the following order:
<opcode>, <arg 0>, <arg 1>, <arg 2>
Where the <opcode> is a single byte denoting which subroutine to call with the
following arguments when executed. Different opcodes have different argument
lengths, some having 0 arguments, and others having 3 arguments.
Interpreting Bytecode Instructions
A bytecode instruction is a single-byte opcode, followed by at maximum 3
arguments, which can be in the following forms:
Static (single byte)
Name (single word)
Address (depending on runtime state, usually a word)
Dynamic (size terminated by NULL, followed by (size)*bytes of data)
i.e. FF FF 00 <0xFFFF bytes of data>,
01 00 <0x1 bytes of data>,
06 00 <0x6 bytes of data>, etc.
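The dynamic-argument encoding above can be read back with a short routine like the following Python sketch (an illustration written for this document; the little-endian ordering of the size bytes is an assumption based on the examples above):

```python
# Best-effort reader for one dynamic argument: size bytes (assumed little-
# endian) run until a NULL terminator, then that many payload bytes follow.
def read_dynamic(buf, pos):
    size, shift = 0, 0
    while buf[pos] != 0x00:          # accumulate size bytes up to the NULL
        size |= buf[pos] << shift
        shift += 8
        pos += 1
    pos += 1                         # skip the NULL terminator
    return bytes(buf[pos:pos + size]), pos + size

print(read_dynamic(b"\x06\x00hello!", 0))   # (b'hello!', 8)
```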
Below is the specification of all the instructions with a short description for
each instruction, and instruction category:
Opcode
Keywords:
TOS - 'Top Of Stack' The top element
TBI - 'To be Implemented'
S<[variable]> - Static Argument.
N<[variable]> - Name.
A<[variable]> - Address Argument.
D<[variable]> - Dynamic bytecode argument.
Hex | Mnemonic | arguments - description
Stack manipulation
These subroutines operate on the current working stack.
10 POP S<n> - pops the stack n times.
11 ROT - rotates top of stack
12 DUP - duplicates the top of the stack
13 ROT_THREE - rotates top three elements of stack
Variable management
20 DEC S<scope> S<type> N - declare variable of type
21 LOV S<scope> N - loads reference variable on to stack
22 STV S<scope> N - stores TOS to reference variable
23 CTV S<scope> N D<data> - loads constant into variable
24 CTS D<data> - loads constant into stack
Type management
Types are in the air at this moment. I'll detail what types there are when
the time comes
30 TYPEOF - pushes type of TOS on to the stack TBI
31 CAST S<type> - Tries to cast TOS to <type> TBI
Binary Ops
OPS take the two top elements of the stack, perform an operation and push
the result on the stack.
40 ADD - adds
41 SUB - subtracts
42 MULT - multiplies
43 DIV - divides
44 POW - power, TOS^TOS1 TBI
45 BRT - base root, TOS root TOS1 TBI
46 SIN - sine TBI
47 COS - cosine TBI
48 TAN - tangent TBI
49 ISIN - inverse sine TBI
4A ICOS - inverse cosine TBI
4B ITAN - inverse tangent TBI
4C MOD - modulus TBI
4D OR - or's TBI
4E XOR - xor's TBI
4F NAND - and's TBI
Conditional Expressions
Things for comparison, < > = ! and so on and so forth.
Behaves like Arithmetic instructions, besides NOT instruction. Pushes boolean
to TOS
50 GTHAN - Greater than
51 LTHAN - Less than
52 GTHAN_EQ - Greater than or equal to
53 LTHAN_EQ - Less than or equal to
54 EQ - Equal to
55 NEQ - Not equal to
56 NOT - Inverts TOS if TOS is boolean
57 OR - Boolean OR
58 AND - Boolean AND
Loops
60 STARTL - Start of loop
61 CLOOP - Conditional loop. If TOS is true, continue looping, else break
6E BREAK - Breaks out of loop
6F ENDL - End of loop
Code flow
These instructions dictate code flow.
70 GOTO A<addr> - Goes to address
71 JUMPF A<n> - Goes forward <n> lines
72 IFDO - If TOS is TRUE, do until done, if not, jump to done
73 ELSE - Chained with an IFDO statement, if IFDO fails, execute ELSE
block until DONE is reached.
74 JTR - jump-to-return. TBI
75 JTE - jump-to-error. Error object on TOS TBI
7D ERR - Start error block, uses TOS to evaluate error TBI
7E DONE - End of block
7F CALL N - Calls function, pushes return value on to STACK.
Generic object interface. Expects object on TOS
80 GETN N<name> - Returns variable associated with name in object
81 SETN N<name> - Sets the variable associated with name in object
Object on TOS, Variable on TOS1
82 CALLM N<name> - Calls method in object
83 INDEXO - Index an object, uses argument stack
84 MODO S<OP> - Modify an object based on op. [+, -, *, /, %, ^ .. etc.]
F - Functions/classes
FF DEFUN NS<type> D<args> - Un-funs everything. no, no- it defines a
function. D is its name, S<type> is
the return value, D<args> is the args.
FE DECLASS ND<args> - Defines a class.
FD DENS S - Declares namespace
F2 ENDCLASS - End of class block
F1 NEW S<scope> N - Instantiates class
F0 RETURN - Returns from function
Special Bytes
00 NULL - No-op
01 LC N<name> - Calls OS function library, i.e. I/O, opening files, etc. TBI
02 PRINT - Prints whatever is on the TOS.
03 DEBUG - Toggle debug mode
0E ARGB - Builds argument stack
0F PC S - Primitive call, calls a primitive subroutine. A list of primitive subroutines providing methods to tweak objects this bytecode set cannot touch is TBI. Uses argstack.
Compiler/Translator/Assembler
Lexical analysis
Going from code to bytecode is what this section is all about. First off, an abstract notation for the code will be broken down into a binary tree, as so:

      <node>
        /\
       /  \
      /    \
   <arg>  <next>
A <node> can be an argument of its parent node, or the next instruction.
Instruction nodes are nodes that will produce an instruction, or multiple based
on the bytecode interpretation of its instruction. For example, this line of
code:
int x = 3
would translate into:
            def
            /\
           /  \
          /    \
         /      \
        /        \
      int        set
      /\          /\
     /  \        /  \
   null  'x'   'x'  null
               /\
              /  \
            null  3
Functions are expressed as individual binary trees. The root of any file is
treated as an individual binary tree, as this is also a function.
The various instruction nodes are as follows:
def <type> <name>
Define a named space in memory with the type specified
See the 'TYPES' section under 'OVERVIEW'
set <name> <value>
Set a named space in memory with value specified
Going from Binary Trees to Bytecode
The various instruction nodes within the tree will call specific functions that take the arguments specified and use lookahead and lookbehind to formulate the correct bytecode equivalent.
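As a hypothetical illustration of such an emitter (the opcode values come from the table in this document, but the argument encodings below are simplified placeholders, not the real wire format), the `int x = 3` tree could be flattened into DEC and CTV instructions:

```python
# Hypothetical emitter for the `int x = 3` tree above.
TYPE_G_INT = 8     # G_INT from the TYPES table
LOCAL_SCOPE = 0x00

def emit_def(type_code, name):
    return [0x20, LOCAL_SCOPE, type_code, name]   # DEC S<scope> S<type> N

def emit_set_const(name, value):
    return [0x23, LOCAL_SCOPE, name, value]       # CTV S<scope> N D<data>

# Walk the tree: def(int, 'x') followed by set('x', 3).
bytecode = emit_def(TYPE_G_INT, "x") + emit_set_const("x", 3)
print(bytecode)   # [32, 0, 8, 'x', 35, 0, 'x', 3]
```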
Developer's Website
The developer of the language, Paul Longtine, operates a publicly available website and blog called banna.tech, named after his online alias 'banna'.
References
Specification | Toi (programming language) | [
"Engineering"
] | 2,676 | [
"Software engineering",
"Programming language topics"
] |
52,084,413 | https://en.wikipedia.org/wiki/Vector%20quantity | In the natural sciences, a vector quantity (also known as a vector physical quantity, physical vector, or simply vector) is a vector-valued physical quantity.
It is typically formulated as the product of a unit of measurement and a vector numerical value (unitless), often a Euclidean vector with magnitude and direction.
For example, a position vector in physical space may be expressed as three Cartesian coordinates with SI unit of meters.
In physics and engineering, particularly in mechanics, a physical vector may be endowed with additional structure compared to a geometrical vector.
A bound vector is defined as the combination of an ordinary vector quantity and a point of application or point of action.
Bound vector quantities are formulated as a directed line segment, with a definite initial point besides the magnitude and direction of the main vector.
For example, a force on the Euclidean plane has two Cartesian components in SI unit of newtons and an accompanying two-dimensional position vector in meters, for a total of four numbers on the plane (and six in space).
A simpler example of a bound vector is the translation vector from an initial point to an end point; in this case, the bound vector is an ordered pair of points in the same position space, with all coordinates having the same quantity dimension and unit (length and meters).
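In code, a bound vector is naturally modelled as a pair (point of application, vector). The Python sketch below is only an illustration of that bookkeeping; it tracks units in comments rather than enforcing them with a quantity library:

```python
# Minimal bookkeeping for a planar bound vector: point plus vector components.
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class BoundVector:
    point: Tuple[float, float]    # point of application, in metres
    vector: Tuple[float, float]   # components, e.g. in newtons for a force

# A force on the plane: four numbers in total, as noted above.
f = BoundVector(point=(1.0, 2.0), vector=(0.0, -9.8))
print(f.point, f.vector)
```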
A sliding vector is the combination of an ordinary vector quantity and a line of application or line of action, over which the vector quantity can be translated (without rotations).
A free vector is a vector quantity having an undefined support or region of application; it can be freely translated with no consequences; a displacement vector is a prototypical example of free vector.
Aside from the notion of units and support, physical vector quantities may also differ from Euclidean vectors in terms of metric.
For example, an event in spacetime may be represented as a position four-vector, with coherent derived unit of meters: it includes a position Euclidean vector and a timelike component, ct (involving the speed of light c).
In that case, the Minkowski metric is adopted instead of the Euclidean metric.
Vector quantities are a generalization of scalar quantities and can be further generalized as tensor quantities.
Individual vectors may be ordered in a sequence over time (a time series), such as position vectors discretizing a trajectory.
A vector may also result from the evaluation, at a particular instant, of a continuous vector-valued function (e.g., the pendulum equation).
In the natural sciences, the term "vector quantity" also encompasses vector fields defined over a two- or three-dimensional region of space, such as wind velocity over Earth's surface.
Pseudo vectors and bivectors are also admitted as physical vector quantities.
See also
List of vector quantities
Vector representation
References
Vectors (mathematics and physics) | Vector quantity | [
"Physics",
"Mathematics"
] | 565 | [
"Quantity",
"Vector physical quantities",
"Physical quantities"
] |
49,492,438 | https://en.wikipedia.org/wiki/International%20Conference%20on%20Green%20Chemistry | The International IUPAC Conferences on Green Chemistry (ICGCs) gather several hundred scientists, technologists, and experts from all over the world with the aim of exchanging and disseminating new ideas, discoveries, and projects on green chemistry and sustainable development. Since the mid-twentieth century, an increasingly general consensus has acknowledged that these subjects play a unique role in mapping the way ahead for the progress of humankind. Typical topics discussed in these IUPAC Conferences are:
bio-based renewable chemical resources, bio-inspired materials and nanomaterials, bio-based polymers;
polymer composites and natural surfactants;
green solvents, catalysts, and synthetic methodologies (e.g., microwaves, ultrasounds, solid state synthesis), biocatalysis and biotransformations;
biofuels and chemistry for improved energy harvesting;
materials for sustainable construction and cultural heritage;
pollution prevention;
metrics, evaluation, education, and communication of green chemistry.
History
In 2006 the International Union of Pure and Applied Chemistry (IUPAC) promoted the organization of the 1st International IUPAC Conference on Green-Sustainable Chemistry (ICGC-1). This conference, started in collaboration with the German Chemical Society (GDCh), was a major acknowledgement by IUPAC of the relevance of green chemistry. The Special Topic Issue on Green Chemistry in Pure and Applied Chemistry and the starting of a Subcommittee on Green Chemistry, operating in the IUPAC Division of Organic and Biomolecular Chemistry, were two important landmarks towards that acknowledgement.
ICGC-1 registered the presence of over 450 participants from 42 countries and proceedings were published in Pure and Applied Chemistry. This Conference then became a biannual appointment that continuously attracted several hundreds scientists and technologists from academia, research institutes, and industries.
On 14 July 2017, IUPAC established the Interdivisional Committee on Green Chemistry for Sustainable Development ICGCSD that supersedes the former Subcommittee on Green Chemistry and has the aim to assist IUPAC in initiating, promoting, and coordinating the work of the Union in the area of green and sustainable chemistry.
ICGCSD will continue to organize the ICGC series. The 8th Conference, the next in the series, will take place in Bangkok, Thailand, in September 2018.
List of Conferences
References
Chemistry conferences
Environmental chemistry
Environmental conferences
Green chemistry
Waste minimisation | International Conference on Green Chemistry | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 477 | [
"Green chemistry",
"Chemical engineering",
"Environmental chemistry",
"nan"
] |
49,492,887 | https://en.wikipedia.org/wiki/Jones%E2%80%93Dole%20equation | The Jones–Dole equation, or Jones–Dole expression, is an empirical expression that describes the relationship between the viscosity of a solution and the concentration of solute within the solution (at a fixed temperature and pressure). The Jones–Dole equation is written as

$\frac{\eta}{\eta_0} = 1 + A\sqrt{C} + BC$
where
η is the viscosity of the solution (at a fixed temperature and pressure),
η0 is the viscosity of the solvent at the same temperature and pressure,
A is a coefficient that describes the impact of charge–charge interactions on the viscosity of a solution (it is usually positive) and can be calculated from Debye–Hückel theory,
B is a coefficient that characterises the solute–solvent interactions at a defined temperature and pressure,
C is the solute concentration.
The Jones–Dole B coefficient is often used to classify ions as either structure-makers (kosmotropes) or structure-breakers (chaotropes) according to their supposed strengthening or weakening of the hydrogen-bond network of water. The Jones–Dole expression works well up to about 1 M, but at higher concentrations breaks down, as the viscosity of all solutions increase rapidly at high concentrations.
The large increase in viscosity as a function of solute concentration seen in all solutions above about 1 M is the effect of a jamming transition at a high concentration. As a result, the viscosity increases exponentially as a function of concentration and then diverges at a critical concentration. This has been referred to as the "Mayonnaise effect", as the viscosity of mayonnaise (essentially a solution of oil in water) is extremely high because of the jamming of micrometer-scale droplets.
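In practice the expression is straightforward to evaluate. The Python helper below is a minimal sketch, and the coefficient values in the example call are illustrative placeholders rather than measured data for any particular salt:

```python
# Evaluate the Jones-Dole expression eta/eta0 = 1 + A*sqrt(C) + B*C.
import math

def relative_viscosity(C, A, B):
    """Jones-Dole relative viscosity; reliable only up to roughly 1 M."""
    return 1.0 + A * math.sqrt(C) + B * C

print(relative_viscosity(0.1, A=0.006, B=0.08))   # ~1.01 (placeholder values)
```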
References
Equations | Jones–Dole equation | [
"Chemistry",
"Mathematics"
] | 356 | [
"Mathematical objects",
"Physical chemistry stubs",
"Equations"
] |
49,495,238 | https://en.wikipedia.org/wiki/Peierls%20substitution | The Peierls substitution method, named after the original work by Rudolf Peierls, is a widely employed approximation for describing tightly-bound electrons in the presence of a slowly varying magnetic vector potential.
In the presence of an external magnetic vector potential $\mathbf{A}$, the translation operators, which form the kinetic part of the Hamiltonian in the tight-binding framework, are simply

$t_{ij} \to t_{ij}\, e^{i\theta_{ij}},$

and in the second quantization formulation

$\hat{H} = -\sum_{i,j} t_{ij}\, e^{i\theta_{ij}}\, \hat{c}_i^{\dagger} \hat{c}_j.$

The phases are defined as

$\theta_{ij} = \frac{q}{\hbar} \int_{\mathbf{R}_i}^{\mathbf{R}_j} \mathbf{A}(\mathbf{r}) \cdot d\mathbf{r}.$
Properties
The number of flux quanta per plaquette $\Phi_{\square}/\phi_0$ is related to the lattice curl of the phase factor, and the total flux through the lattice is $\Phi = \phi_0 N_{\Phi}$, with $N_{\Phi}$ the total number of flux quanta and $\phi_0 = hc/e$ being the magnetic flux quantum in Gaussian units.

The flux quanta per plaquette is related to the accumulated phase of a single particle state, surrounding a plaquette:

$\sum_{\square} \theta_{ij} = 2\pi \frac{\Phi_{\square}}{\phi_0},$

where the sum runs over the bonds enclosing the plaquette.
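These phase sums are easy to realize numerically. The following Python sketch is an illustration (not from the cited literature) using the Landau gauge A = (0, Bx, 0), in which each hop in y at column m carries phase 2παm, so the accumulated phase around any plaquette is 2πα with α the flux per plaquette in units of the flux quantum:

```python
# Peierls phases on an open Lx-by-Ly square lattice, Landau gauge A = (0, B*x, 0).
import numpy as np

def hofstadter_hamiltonian(Lx, Ly, alpha, t=1.0):
    N = Lx * Ly
    H = np.zeros((N, N), dtype=complex)
    idx = lambda m, n: m * Ly + n
    for m in range(Lx):
        for n in range(Ly):
            if m + 1 < Lx:     # hop in x: no Peierls phase in this gauge
                H[idx(m + 1, n), idx(m, n)] = -t
            if n + 1 < Ly:     # hop in y: phase 2*pi*alpha*m
                H[idx(m, n + 1), idx(m, n)] = -t * np.exp(2j * np.pi * alpha * m)
    return H + H.conj().T      # add Hermitian-conjugate hops

H = hofstadter_hamiltonian(8, 8, alpha=0.25)   # quarter flux quantum/plaquette
```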
Justification
Here we give three derivations of the Peierls substitution, each one is based on a different formulation of quantum mechanics theory.
Axiomatic approach
Here we give a simple derivation of the Peierls substitution, which is based on The Feynman Lectures (Vol. III, Chapter 21). This derivation postulates that magnetic fields are incorporated in the tight-binding model by adding a phase to the hopping terms and show that it is consistent with the continuum Hamiltonian. Thus, our starting point is the Hofstadter Hamiltonian:
The translation operator can be written explicitly using its generator, that is the momentum operator. Under this representation its easy to expand it up to the second order,
and in a 2D lattice . Next, we expand up to the second order the phase factors, assuming that the vector potential does not vary significantly over one lattice spacing (which is taken to be small)
Substituting these expansions to relevant part of the Hamiltonian yields
Generalizing the last result to the 2D case, the we arrive to Hofstadter Hamiltonian at the continuum limit:
where the effective mass is and .
Semi-classical approach
Here we show that the Peierls phase factor originates from the propagator of an electron in a magnetic field due to the dynamical term appearing in the Lagrangian. In the path integral formalism, which generalizes the action principle of classical mechanics, the transition amplitude from site at time to site at time is given by
where the integration operator, denotes the sum over all possible paths from to and is the classical action, which is a functional that takes a trajectory as its argument. We use to denote a trajectory with endpoints at . The Lagrangian of the system can be written as
where is the Lagrangian in the absence of a magnetic field. The corresponding action reads
Now, assuming that only one path contributes strongly, we have
Hence, the transition amplitude of an electron subject to a magnetic field is the one in the absence of a magnetic field times a phase.
Another derivation
The Hamiltonian is given by

$\hat{H} = \frac{\hat{\mathbf{p}}^2}{2m} + U(\hat{\mathbf{r}}),$

where $U(\hat{\mathbf{r}})$ is the potential landscape due to the crystal lattice. The Bloch theorem asserts that the solution to the problem $\hat{H}\psi_{\mathbf{k}} = E(\mathbf{k})\psi_{\mathbf{k}}$ is to be sought in the Bloch sum form

$\psi_{\mathbf{k}}(\mathbf{r}) = \frac{1}{\sqrt{N}} \sum_{\mathbf{R}} e^{i\mathbf{k}\cdot\mathbf{R}}\, \phi_{\mathbf{R}}(\mathbf{r}),$

where $N$ is the number of unit cells, and the $\phi_{\mathbf{R}}$ are known as Wannier functions. The corresponding eigenvalues $E(\mathbf{k})$, which form bands depending on the crystal momentum $\mathbf{k}$, are obtained by calculating the matrix element

$E(\mathbf{k}) = \langle \psi_{\mathbf{k}} | \hat{H} | \psi_{\mathbf{k}} \rangle$

and ultimately depend on material-dependent hopping integrals

$t_{ij} = \langle \phi_{\mathbf{R}_i} | \hat{H} | \phi_{\mathbf{R}_j} \rangle.$
In the presence of the magnetic field the Hamiltonian changes to
where is the charge of the particle. To amend this, consider changing the Wannier functions to
where . This makes the new Bloch wave functions
into eigenstates of the full Hamiltonian at time , with the same energy as before. To see this we first use to write
Then when we compute the hopping integral in quasi-equilibrium (assuming that the vector potential changes slowly)
where we have defined , the flux through the triangle made by the three position arguments. Since we assume is approximately uniform at the lattice scale - the scale at which the Wannier states are localized to the positions - we can approximate , yielding the desired result,
Therefore, the matrix elements are the same as in the case without magnetic field, apart from the phase factor picked up, which is denoted the Peierls phase factor. This is tremendously convenient, since then we get to use the same material parameters regardless of the magnetic field value, and the corresponding phase is computationally trivial to take into account. For electrons () it amounts to replacing the hopping term with
References
Electronic structure methods
Electronic band structures | Peierls substitution | [
"Physics",
"Chemistry",
"Materials_science"
] | 885 | [
"Electron",
"Quantum chemistry",
"Quantum mechanics",
"Computational physics",
"Electronic structure methods",
"Electronic band structures",
"Computational chemistry",
"Condensed matter physics"
] |
49,497,304 | https://en.wikipedia.org/wiki/Eukaryotic%20initiation%20factor%204F | Eukaryotic initiation factor 4F (eIF4F) is a heterotrimeric protein complex that binds the 5' cap of messenger RNAs (mRNAs) to promote eukaryotic translation initiation. The eIF4F complex is composed of three non-identical subunits: the DEAD-box RNA helicase eIF4A, the cap-binding protein eIF4E, and the large "scaffold" protein eIF4G. The mammalian eIF4F complex was first described in 1983, and has been a major area of study into the molecular mechanisms of cap-dependent translation initiation ever since.
Function
eIF4F is important for recruiting the small ribosomal subunit (40S) to the 5' cap of mRNAs during cap-dependent translation initiation. Components of the complex are also involved in cap-independent translation initiation; for instance, certain viral proteases cleave eIF4G to remove the eIF4E-binding region, thus inhibiting cap-dependent translation.
Structure
Structures of eIF4F components have been solved individually and as partial complexes by a variety of methods, but no complete structure of eIF4F is currently available.
Subunits
In mammals, the eIF4E•G•A trimeric complex can be directly purified from cells, while only the two-subunit eIF4E•G complex can be purified from yeast cells. This suggests that the interaction between eIF4A and eIF4G in yeast is not as stable. eIF4E binds the m7G 5' cap and the eIF4G scaffold, connecting the mRNA 5' terminus to a hub of other initiation factors and mRNA. The interaction of eIF4G•A is thought to guide the formation of a single-stranded RNA landing pad for the 43S preinitiation complex (43S PIC) via eIF4A's RNA helicase activity.
The eIF4F proteins interact with a number of different binding partners, and there are multiple genetic isoforms of eIF4A, eIF4E, and eIF4G in the human genome. In mammals, eIF4F is bridged to the 40S ribosomal subunit by eIF3 via eIF4G, while budding yeast lacks this connection. Interactions between eIF4G and PABP are thought to mediate the circularization of mRNA particles, forming a "closed-loop" thought to stimulate translation.
Approximate molecular weight for human proteins.
In addition to the major proteins encompassing the eIF4F trimer, the eIF4F complex functionally interacts with proteins including eIF4B and eIF4H. The unusual isoform of eIF4G, eIF4G2 or DAP5, also appears to perform a non-canonical translation function.
Regulation
The eIF4E subunit of eIF4F is an important target of mTOR signaling through the eIF4E binding protein (4E-BP). Phosphorylation of 4E-BPs by mTOR prevents their binding to eIF4E, freeing eIF4E to bind eIF4G and participate in translation initiation.
See also
Eukaryotic translation
Eukaryotic initiation factor
5' cap
References
Protein biosynthesis
Gene expression
Protein complexes
RNA-binding proteins | Eukaryotic initiation factor 4F | [
"Chemistry",
"Biology"
] | 698 | [
"Protein biosynthesis",
"Gene expression",
"Molecular genetics",
"Biosynthesis",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
45,256,442 | https://en.wikipedia.org/wiki/Semi-simplicity | In mathematics, semi-simplicity is a widespread concept in disciplines such as linear algebra, abstract algebra, representation theory, category theory, and algebraic geometry. A semi-simple object is one that can be decomposed into a sum of simple objects, and simple objects are those that do not contain non-trivial proper sub-objects. The precise definitions of these words depends on the context.
For example, if G is a finite group, then a nontrivial finite-dimensional representation V over a field is said to be simple if the only subrepresentations it contains are either {0} or V (these are also called irreducible representations). Now Maschke's theorem says that any finite-dimensional representation of a finite group is a direct sum of simple representations (provided the characteristic of the base field does not divide the order of the group). So in the case of finite groups with this condition, every finite-dimensional representation is semi-simple. Especially in algebra and representation theory, "semi-simplicity" is also called complete reducibility. For example, Weyl's theorem on complete reducibility says a finite-dimensional representation of a semisimple compact Lie group is semisimple.
A square matrix (in other words, a linear operator T : V → V with V a finite-dimensional vector space) is said to be simple if its only invariant subspaces under T are {0} and V. If the field is algebraically closed (such as the complex numbers), then the only simple matrices are of size 1-by-1. A semi-simple matrix is one that is similar to a direct sum of simple matrices; if the field is algebraically closed, this is the same as being diagonalizable.
These notions of semi-simplicity can be unified using the language of semi-simple modules, and generalized to semi-simple categories.
Introductory example of vector spaces
If one considers all vector spaces (over a field, such as the real numbers), the simple vector spaces are those that contain no proper nontrivial subspaces; therefore, the one-dimensional vector spaces are exactly the simple ones. It is a basic result of linear algebra that any finite-dimensional vector space is the direct sum of simple vector spaces; in other words, all finite-dimensional vector spaces are semi-simple.
Semi-simple matrices
A square matrix or, equivalently, a linear operator T on a finite-dimensional vector space V is called semi-simple if every T-invariant subspace has a complementary T-invariant subspace. This is equivalent to the minimal polynomial of T being square-free.
For vector spaces over an algebraically closed field F, semi-simplicity of a matrix is equivalent to diagonalizability. This is because such an operator always has an eigenvector; if it is, in addition, semi-simple, then it has a complementary invariant hyperplane, which itself has an eigenvector, and thus by induction is diagonalizable. Conversely, diagonalizable operators are easily seen to be semi-simple, as invariant subspaces are direct sums of eigenspaces, and any eigenbasis for this subspace can be extended to an eigenbasis of the full space.
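As an illustration, semi-simplicity of a concrete matrix can be tested by checking diagonalizability over the complex numbers. A minimal sketch using Python with SymPy follows; the example matrices are arbitrary illustrative choices, not drawn from any particular source:

```python
# Over an algebraically closed field, semi-simple = diagonalizable,
# which SymPy can test directly. The example matrices are illustrative.
from sympy import Matrix

A = Matrix([[0, -1],
            [1,  0]])  # eigenvalues +i and -i: semi-simple over C
N = Matrix([[0, 1],
            [0, 0]])   # nilpotent Jordan block: not semi-simple

# is_diagonalizable() allows complex eigenvalues by default, so it
# detects semi-simplicity even when no real eigenvectors exist.
print(A.is_diagonalizable())  # True
print(N.is_diagonalizable())  # False (minimal polynomial x**2 is not square-free)
```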
Semi-simple modules and rings
For a fixed ring R, a nontrivial R-module M is simple if it has no submodules other than 0 and M. An R-module M is semi-simple if every R-submodule of M is an R-module direct summand of M (the trivial module 0 is semi-simple, but not simple). For an R-module M, M is semi-simple if and only if it is the direct sum of simple modules (the trivial module is the empty direct sum). Finally, R is called a semi-simple ring if it is semi-simple as an R-module. As it turns out, this is equivalent to requiring that any finitely generated R-module M is semi-simple.
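For the rings Z/nZ this definition can be checked mechanically: the ideals of Z/nZ correspond to the divisors d of n, and the ideal generated by d has a complementary direct summand exactly when gcd(d, n/d) = 1, so Z/nZ is semi-simple precisely when n is square-free. A short sketch of that check (an illustration, not a proof):

```python
# Z/nZ is semi-simple as a ring iff every ideal is a direct summand.
# The ideal generated by a divisor d has a complement iff gcd(d, n/d) == 1,
# which holds for all divisors exactly when n is square-free.
from math import gcd

def is_semisimple_Zn(n):
    divisors = [d for d in range(1, n + 1) if n % d == 0]
    return all(gcd(d, n // d) == 1 for d in divisors)

for n in [4, 6, 12, 30]:
    print(n, is_semisimple_Zn(n))
# square-free n (6, 30) -> True; 4 and 12 -> False
```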
Examples of semi-simple rings include fields and, more generally, finite direct products of fields. For a finite group G, Maschke's theorem asserts that the group ring R[G] over some ring R is semi-simple if and only if R is semi-simple and |G| is invertible in R. Since the theory of modules of R[G] is the same as the representation theory of G on R-modules, this fact is an important dichotomy: it causes modular representation theory, i.e., the case when the characteristic of R does divide |G|, to be more difficult than the case when it does not, which holds in particular when R is a field of characteristic zero.
By the Artin–Wedderburn theorem, a unital Artinian ring R is semisimple if and only if it is (isomorphic to) a finite product $M_{n_1}(D_1) \times \dots \times M_{n_k}(D_k)$, where each $D_i$ is a division ring and $M_{n_i}(D_i)$ is the ring of $n_i$-by-$n_i$ matrices with entries in $D_i$.
An operator T is semi-simple in the sense above if and only if the subalgebra generated by the powers (i.e., iterations) of T inside the ring of endomorphisms of V is semi-simple.
As indicated above, the theory of semi-simple rings is much easier than that of general rings. For example, any short exact sequence $0 \to M' \to M \to M'' \to 0$ of modules over a semi-simple ring must split, i.e., $M \cong M' \oplus M''$. From the point of view of homological algebra, this means that there are no non-trivial extensions. The ring Z of integers is not semi-simple: Z is not the direct sum of nZ and Z/n.
Semi-simple categories
Many of the above notions of semi-simplicity are recovered by the concept of a semi-simple category C. Briefly, a category is a collection of objects and maps between such objects, the idea being that the maps between the objects preserve some structure inherent in these objects. For example, R-modules and R-linear maps between them form a category, for any ring R.
An abelian category C is called semi-simple if there is a collection of simple objects, i.e., ones with no subobject other than the zero object 0 and the object itself, such that any object X is the direct sum (i.e., coproduct or, equivalently, product) of finitely many simple objects. It follows from Schur's lemma that the endomorphism ring $\operatorname{End}(X) = \operatorname{Hom}(X, X)$ of any object in a semi-simple category is a product of matrix rings over division rings, i.e., semi-simple.
Moreover, a ring R is semi-simple if and only if the category of finitely generated R-modules is semisimple.
An example from Hodge theory is the category of polarizable pure Hodge structures, i.e., pure Hodge structures equipped with a suitable positive definite bilinear form. The presence of this so-called polarization causes the category of polarizable Hodge structures to be semi-simple.
Another example from algebraic geometry is the category of pure motives of smooth projective varieties over a field k modulo an adequate equivalence relation $\sim$. As was conjectured by Grothendieck and shown by Jannsen, this category is semi-simple if and only if the equivalence relation is numerical equivalence. This fact is a conceptual cornerstone in the theory of motives.
Semisimple abelian categories also arise from a combination of a t-structure and a (suitably related) weight structure on a triangulated category.
Semi-simplicity in representation theory
One can ask whether the category of finite-dimensional representations of a group or a Lie algebra is semisimple, that is, whether every finite-dimensional representation decomposes as a direct sum of irreducible representations. The answer, in general, is no. For example, the representation of the group $(\mathbb{R}, +)$ given by
$$x \mapsto \begin{pmatrix} 1 & x \\ 0 & 1 \end{pmatrix}$$
is not a direct sum of irreducibles. (There is precisely one nontrivial invariant subspace, the span of the first standard basis element $e_1$.) On the other hand, if $G$ is compact, then every finite-dimensional representation $\pi$ of $G$ admits an inner product with respect to which $\pi$ is unitary, showing that $\pi$ decomposes as a sum of irreducibles. Similarly, if $\mathfrak{g}$ is a complex semisimple Lie algebra, every finite-dimensional representation of $\mathfrak{g}$ is a sum of irreducibles. Weyl's original proof of this used the unitarian trick: every such $\mathfrak{g}$ is the complexification of the Lie algebra of a simply connected compact Lie group $K$. Since $K$ is simply connected, there is a one-to-one correspondence between the finite-dimensional representations of $K$ and of $\mathfrak{g}$. Thus, the just-mentioned result about representations of compact groups applies. It is also possible to prove semisimplicity of representations of $\mathfrak{g}$ directly by algebraic means, as in Section 10.3 of Hall's book.
See also: fusion categories (which are semisimple).
See also
A semisimple Lie algebra is a Lie algebra that is a direct sum of simple Lie algebras.
A semisimple algebraic group is a linear algebraic group whose radical of the identity component is trivial.
Semisimple algebra
Semisimple representation
References
External links
MathOverflow:Are abelian non-degenerate tensor categories semisimple?
Linear algebra
Representation theory
Ring theory
Algebraic geometry | Semi-simplicity | [
"Mathematics"
] | 1,921 | [
"Ring theory",
"Fields of abstract algebra",
"Algebraic geometry",
"Linear algebra",
"Representation theory",
"Algebra"
] |
45,257,067 | https://en.wikipedia.org/wiki/Recombinant%20human%20parathyroid%20hormone | Recombinant human parathyroid hormone, sold under the brand name Preotact among others, is an artificially manufactured form of the parathyroid hormone used to treat hypoparathyroidism (under-active parathyroid glands). Recombinant human parathyroid hormone is used in the treatment of osteoporosis in postmenopausal women at high risk of osteoporotic fractures. A significant reduction in the incidence of vertebral fractures has been demonstrated. It is used in combination with calcium and vitamin D supplements.
The most common side effects include sensations of tingling, tickling, pricking, or burning of the skin (paraesthesia); low blood calcium; headache; high blood calcium; and nausea.
Recombinant human parathyroid hormone (Preotact) was approved for medical use in the European Union in April 2006. Recombinant human parathyroid hormone (Natpara) was approved for medical use in the United States in January 2015, and in the European Union (as Natpar) in February 2017.
Medical uses
Recombinant human parathyroid hormone (Natpara) is indicated as an adjunct to calcium and vitamin D to control hypocalcemia in people with hypoparathyroidism. Recombinant human parathyroid hormone (Natpar) is indicated as adjunctive treatment of adults with chronic hypoparathyroidism who cannot be adequately controlled with standard therapy alone.
Recombinant human parathyroid hormone (Preotact) is indicated for the treatment of osteoporosis in postmenopausal women at high risk of fractures, but the marketing authorization has been withdrawn at the manufacturer's request.
Contraindications
Parathyroid hormone treatment should not be initiated in patients:
with hypersensitivity to PTH or excipients
who have received radiation therapy to the skeleton
with pre-existing hypercalcemia and other disturbances in the metabolism of phosphate or calcium
with metabolic bone diseases other than primary osteoporosis (including hyperparathyroidism and Paget's disease)
with unexplained elevations of bone-specific alkaline phosphatase
with severe chronic kidney disease
with severe liver impairment
Adverse effects
The most common side effects include too high or too low blood calcium levels, which can lead to headache, diarrhea, vomiting, paraesthesia (unusual sensations like pins and needles), hypoaesthesia (reduced sense of touch), and high calcium levels in the urine.
In the US, the FDA label for parathyroid hormone contains a black box warning for osteosarcoma (a malignant bone tumor).
Interactions
Parathyroid hormone is a natural peptide that is not metabolised in the liver. It is not protein bound and has a low volume of distribution, therefore no specific drug-drug interactions are suspected. From the knowledge of the mechanism of action, combined use of Preotact and cardiac glycosides may predispose patients to digitalis toxicity if hypercalcemia develops.
Undesirable effects
Hypercalcemia and/or hypercalciuria reflect the known pharmacodynamic actions of parathyroid hormone in the gastrointestinal tract, the kidney and the skeleton, and is therefore an expected undesirable effect. Nausea is another commonly reported adverse reaction to the use of parathyroid hormone.
Pharmacodynamic properties
Mechanism of action
Preotact contains recombinant human parathyroid hormone which is identical to the full-length native 84-amino acid polypeptide.
Physiological actions of parathyroid hormone include stimulation of bone formation by direct effects on bone-forming cells (osteoblasts), indirect increase of the intestinal absorption of calcium, and increased tubular reabsorption of calcium and excretion of phosphate by the kidney.
Pharmacodynamic effects
The skeletal effects of parathyroid hormone depend upon the pattern of systemic exposure. Transient elevations in parathyroid hormone levels after subcutaneous injection of Preotact stimulate new bone formation on trabecular and cortical bone surfaces by preferential stimulation of osteoblastic activity over osteoclastic activity.
Effects on serum calcium concentrations
Parathyroid hormone is the principal regulator of serum calcium homeostasis. In response to subcutaneous doses of Preotact (100 micrograms), serum total calcium levels increase gradually and reach peak concentration approximately 6 to 8 hours after dosing. In general, serum calcium levels return to normal within 24 hours.
Clinical efficacy
In an 18-month double-blind, placebo-controlled study, the effect of Preotact on fracture incidence was studied in 2532 women with postmenopausal osteoporosis. Approximately 19% of patients had a prevalent vertebral fracture at baseline, and the mean lumbar T-score was −3.0 in both the active and placebo arms.
Compared to the placebo group, there was a 61% relative risk reduction of a new vertebral fracture at month 18 for the women in the Preotact group.
In the total population, 48 women had to be treated for a median of 18 months to prevent one or more new vertebral fractures; for patients who already had a fracture, the number needed to treat was 21.
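The quoted figures can be cross-checked with the standard relationship between number needed to treat (NNT) and absolute risk reduction, NNT = 1/ARR. In the sketch below, the per-arm fracture rates are back-calculated from the quoted NNT and relative risk reduction; they are inferences, not values taken from the trial report:

```python
# Back-of-the-envelope consistency check (illustrative only; the per-arm
# fracture rates are inferred from the quoted NNT and RRR, not sourced).
nnt_total = 48   # number needed to treat, total population
rrr = 0.61       # relative risk reduction at month 18

arr = 1 / nnt_total          # absolute risk reduction implied by the NNT
placebo_rate = arr / rrr     # implied placebo-arm fracture rate
treated_rate = placebo_rate - arr

print(f"absolute risk reduction: {arr:.2%}")           # ~2.1%
print(f"implied placebo rate:    {placebo_rate:.2%}")  # ~3.4%
print(f"implied treated rate:    {treated_rate:.2%}")  # ~1.3%
```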
Effect on bone mineral density
In the same study mentioned above, Preotact increased bone mineral density in the lumbar spine after 18 months of treatment by 6.5%, compared with a reduction of 0.3% in the placebo group. The difference was statistically significant. The increase in bone mineral density in the hip was also statistically significant compared to placebo, but only around 1.0% at study endpoint. Continued treatment up to 24 months led to a continued increase in bone mineral density.
Pharmacokinetics
Absorption
Subcutaneous administration of parathyroid hormone into the abdomen produces a rapid increase in plasma parathyroid hormone levels, which reach a peak 1 to 2 hours after dosing. The mean half-life is approximately 1.5 hours. The absolute bioavailability of 100 micrograms of Preotact after subcutaneous administration in the abdomen is 55%.
Distribution
The volume of distribution at steady-state following intravenous administration is approximately 5.4 liters. Intersubject variability is about 40%.
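The quoted parameters (1.5-hour half-life, 55% bioavailability, 5.4-litre volume of distribution) are enough to sketch a textbook one-compartment model with first-order absorption. In the sketch below, the absorption rate constant ka is an assumed value, chosen only so that the simulated peak lands in the reported 1-to-2-hour window; it is not a published parameter:

```python
import numpy as np

# One-compartment model with first-order absorption (Bateman function).
# F, V and the half-life are from the text; ka is an ASSUMED value tuned
# so that tmax falls in the reported 1-2 h window.
F = 0.55               # absolute bioavailability
dose_ug = 100.0        # subcutaneous dose, micrograms
V = 5.4                # volume of distribution, litres
ke = np.log(2) / 1.5   # elimination rate constant from the 1.5 h half-life
ka = 1.8               # 1/h, assumed absorption rate constant

t = np.linspace(0, 12, 241)  # hours after dosing
conc = F * dose_ug * ka / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

print(f"tmax ~ {t[conc.argmax()]:.1f} h, Cmax ~ {conc.max():.1f} ug/L")
```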
Biotransformation
Parathyroid hormone is efficiently removed from the blood by a receptor-mediated process in the liver and is broken down into smaller peptide fragments. The fragments derived from the amino-terminus are further degraded within the cell, while the fragments derived from the carboxy-terminus are released back into the blood and cleared by the kidney. These carboxy-terminal fragments are thought to play a role in the regulation of parathyroid hormone activity. Under normal physiological conditions, full-length parathyroid hormone constitutes only 5-30% of the circulating forms of the molecule, while 70-95% is present as carboxy-terminal fragments. Following administration of Preotact, carboxy-terminal fragments make up about 60-90% of the circulating forms of the molecule. Intersubject variability in systemic clearance is about 15%.
Elimination
Parathyroid hormone is metabolised in the liver and to a lesser extent in the kidney. It is not excreted from the body in its intact form. Circulating carboxy-terminal fragments are filtered by the kidney, but are subsequently broken down into even smaller fragments during tubular reuptake. No studies have so far been performed in patients with severe hepatic impairment. The pharmacokinetics of parathyroid hormone in patients with severe chronic kidney disease (creatinine clearance of less than 30 ml/min) has not been investigated either.
Pharmaceutical particulars
Preotact is delivered in a two-chamber glass ampoule. One chamber contains the active substance in the form of a white powder (with the excipients mannitol, citric acid monohydrate, NaCl, NaOH and HCl); the other contains the solvent, water for injection. The powder is mixed with the solvent when the ampoule is inserted into the injection device.
See also
Teriparatide, another parathyroid hormone
References
Hormones of calcium metabolism
Parathyroid hormone receptor agonists
Drugs developed by Takeda Pharmaceutical Company
Drugs acting on the musculoskeletal system
Receptor agonists
Withdrawn drugs
Orphan drugs | Recombinant human parathyroid hormone | [
"Chemistry"
] | 1,759 | [
"Receptor agonists",
"Neurochemistry",
"Drug safety",
"Withdrawn drugs"
] |
45,258,329 | https://en.wikipedia.org/wiki/Flood%20arch | A flood arch is a small supplemental arch bridge provided alongside a main bridge. It provides extra capacity for floodwater.
The space beneath a flood arch is normally dry and often carries a towpath or similar. In some cases it borders on the shallow edge of a river, but this does not carry substantial flow in normal conditions. A bridge with multiple arches across a flowing river would instead be termed a viaduct.
For some bridges, flood arches were added after the first bridge had been constructed, often after initial flooding.
References
Arches and vaults
Bridge components | Flood arch | [
"Technology"
] | 110 | [
"Bridge components",
"Components"
] |
48,230,267 | https://en.wikipedia.org/wiki/Oriented%20structural%20straw%20board | Oriented structural straw board (OSSB) is an engineered board that is made by splitting straw and formed by adding methylene diphenyl diisocyanate (MDI) and then hot compressing layers of straw in specific orientations. Research and development for OSSB panels began in the mid 1980s and was spearheaded by the Alberta Research Council, Canada (today AITF).
Uses
OSSB can replace wood oriented strand board (OSB) and particle board in structural and non-structural applications, such as interior and exterior walls for house construction, furniture and interior decoration.
Because OSSB panels are formaldehyde-free, they are also used for applications where air quality is a concern, such as kindergartens, hospitals, bedrooms, and hotels.
Manufacturing
OSSB panel manufacturing starts with careful selection of straw fibres, which are then cut, cleaned, split and dried. Splitting straws allows resin to coat what would otherwise be the inside of a hollow straw. Producing split straw of sufficient length was the key technical innovation making OSSBs possible. OSSB is thus sometimes referred to as oriented split straw board. Formaldehyde free resin is added to the straw and the fibres are oriented for strength and appearance, and shaped into a mat through directional mat forming. The mat is then pressed between heated belts, water is vaporized, transferring heat into the straw. The heat cures the adhesive and causes a series of physical and chemical changes to the pressurized raw materials, which harden the final product.
Properties
OSSB panels have high structural strength, load-bearing capacity and stability in both directions, as well as superior workability and excellent nail-holding properties on all sides. Treated OSSB panels are more water resistant than treated traditional wood panel boards because they have no internal gaps or voids. OSSB panels are also highly earthquake resistant.
The resin used to manufacture OSSB is p-MDI, which does not emit formaldehyde or other volatile organic compounds (VOCs). The raw material can be treated with various borate compounds, which are toxic to termites, beetles, molds and fungi, and toxic to mammals only at higher doses.
See also
Oriented strand board
References
Composite materials | Oriented structural straw board | [
"Physics"
] | 447 | [
"Materials",
"Composite materials",
"Matter"
] |
48,231,632 | https://en.wikipedia.org/wiki/British%20Mid-Ocean%20Ridge%20Initiative | The British Mid-Ocean Ridge Initiative (the BRIDGE Programme) was a multidisciplinary scientific investigation of the creation of the Earth's crust in the deep oceans. It was funded by the UK's Natural Environment Research Council (NERC) from 1993 to 1999.
Mid-Ocean ridges
Mid-Ocean ridges are active volcanic mountain ranges snaking through the depths of the Earth's oceans. They occur where the edges of the Earth's tectonic plates are separating, allowing mantle rock to rise to the seafloor and harden, creating new crust. The addition of this crust can cause ocean basins to widen perpendicular to the ridge. This seafloor spreading is the engine of continental drift. At intervals along the mid-ocean ridges super-heated mineral-rich fluids are vented from the seabed. These hydrothermal vents are populated by animal and bacterial species not found elsewhere on Earth.
BRIDGE investigated the geological setting of the ridge, the geochemistry of vent fluids, and ways in which biological communities survive in this apparently hostile environment. To achieve this the programme developed novel deep-ocean technologies for deployment from surface ships and manned submersibles. It also conducted experimental research into the mechanical and chemical nature of the rocks and underlying crust in these active volcanic regions. The scale of the investigation ranged from extensive regional studies mapping unexplored seafloor to microscopic and chemical analyses at individual vent sites. To achieve the programme's objectives work was focused at five contrasting locations: the Mid-Atlantic Ridge at 24–30°N; the Mid-Atlantic Ridge at 36–39°N; Iceland and the Reykjanes Ridge to its south west; the Scotia back-arc basin (SW Atlantic); and the Lau basin (SW Pacific). Intensive localised studies were made within these areas.
Background to BRIDGE
The idea for a British mid-ocean ridge research programme was developed by Professors Joe Cann of Leeds University and Roger Searle of Durham University after they attended a meeting in Oregon in 1987 where the idea for a US mid-ocean ridge research programme (the RIDGE Program) was being developed.
In the UK researchers in many disciplines were already studying mid-ocean ridges but it was felt this research could be better integrated to produce new multidisciplinary approaches yielding results of wider significance.
The 'BRIDGE' branding of research commenced before research council funding was sought for a formal programme. BRIDGE was mentioned by name in The Independent newspaper in February 1989. By this time the community of researchers in this field were referring to themselves as the BRIDGE Consortium. Deep-ocean science cruises were being identified as BRIDGE cruises by 1990. The first BRIDGE newsletter appeared in 1991.
Once the idea of BRIDGE was in place an application for funding was made to the Natural Environment Research Council. This was successful and full funding commenced in 1993 for a programme that would run until 1999. The final budget was £13M.
Aims
To invest in British mid-ocean ridge research so that both human skills and instrument resources were increased
To use both existing capabilities and newly developed instruments to solve some of the fundamental scientific problems pertaining to mid-ocean ridges
To expand UK mid-ocean ridge research to involve a wider range of skills and new techniques
To seek both direct and indirect commercial benefits from mid-ocean ridge research
To liaise with other national programmes to maximise the benefits of British activities
This last aim was achieved directly and by participation in the international InterRidge network.
Objectives
To undertake the crucial observations, experiments and modelling aimed at solving fundamental scientific problems
To conduct the basic surveys necessary to site both regional and local studies
To develop new marine instrumentation for use in experiments
To acquire access to the survey vehicles and instruments necessary for undertaking the science
To attract scientists from diverse disciplines to participate in mid-ocean ridge research
To consult the UK marine instrumentation community to refine requirements for, and capabilities of, new instruments
To seek active involvement of the UK biotechnology community in mid-ocean ridge research
To construct and update plans that would enable these aims and objectives to be met
Scientific problems
From the wide range of scientific problems that could be addressed by mid-ocean ridge research, BRIDGE identified six that were of most relevance to UK research.
How does the three-dimensional structure of mid-ocean ridges, and especially their segmentation by transform faults and similar features, relate to the physical properties and dynamics of the underlying Earth's mantle?
Can the geochemistry of the lavas erupted at mid-ocean ridges give insights into the scale and origin of heterogeneities in the underlying mantle?
What is the nature of the magmatic plumbing system within the crust and upper mantle below mid-ocean ridges?
How does the rate of flow and geochemical composition of the black smoker hydrothermal vent fluids vary with time, and what causes this variation? Can this help us to understand more about the origin of ore deposits found on land?
How do the bacteria that live around the black smoker hot springs survive the high temperatures and the toxic environment? Can these capabilities be exploited in biotechnology?
How do the chemical and biological processes at the black smoker vent fields affect the global flux of chemical species in and out of the ocean? Are the nutrient levels of the oceans partly controlled from the mid-ocean ridges?
Programme structure
Science
The scientific aims and objectives of the programme were directed by an international steering committee which met twice a year. The programme held a series of annual funding rounds to which scientists and engineers in the field submitted research proposals. Following a peer-review assessment of each proposal by independent referees the steering committee ranked the most highly rated proposals on their scientific merit and contribution to the programme's objectives. This short-list was then recommended to NERC for funding.
Management
From 1993 to 1995 programme management (day-to-day administration and budget oversight) was undertaken by NERC head office in Swindon. A separate Science Coordinator role (incorporating, among other duties, responsibility for expanding the BRIDGE Consortium, organising national conferences and publishing the newsletter) was based at Leeds University where the BRIDGE Chief Scientist, Joe Cann, was chairman of Earth Sciences.
In 1995 NERC began contracting out programme management for their large programmes. BRIDGE programme management absorbed the science coordination role and a new programme manager was appointed, based at Leeds University. The Leeds BRIDGE office was the programme hub until the end of March 1999 after which the conclusion of the programme was administered by NERC.
Programme content
BRIDGE funded 44 research projects: 4 multidisciplinary; 15 geology; 6 biology; 11 studies of the hydrothermal environment at vent fields (9 of the ocean floor and 2 of the overlying water column); and 8 engineering projects to develop the required technologies. More than 200 scientists in 28 research centres around the UK contributed to this programme. There were 26 BRIDGE deep-ocean research cruises to the North Atlantic, SW Atlantic, SE Pacific, SW Pacific and Indian oceans, 18 of which were directly funded by the programme.
Results
To discuss and publicise the programme's results BRIDGE organised its own science conferences at Durham University (1991), the Institute of Oceanographic Sciences Deacon Laboratory (IOSDL), Wormley (1992), Leeds University (1993), Oxford University (1994), the Geological Society of London (1994, 1995 and 1997), Cambridge University (1996), Southampton Oceanography Centre (1997) and Bristol University (1998). In addition BRIDGE science was reported at other meetings nationally and internationally, for example: at the Royal Society meeting Mid-Ocean Ridges: Dynamics of Processes Associated with Creation of New Ocean Crust (1996), the 1996 British Association for the Advancement of Science annual science festival at Birmingham in a BRIDGE session entitled Abyssal Inferno: Seafloor volcanoes, hot vents and exotic life at the mid-ocean ridges, at Geoscience 98, Keele University (1998), at the meeting Technology for Deep-Sea Geological Investigations at the Geological Society of London (1998) and at meetings of the American Geophysical Union.
Three of the BRIDGE conferences resulted in books published by the Geological Society of London, presenting in greater detail the science reported at the meetings.
Throughout the programme rapid publication of results was effected through The BRIDGE Newsletter. In style this was an academic journal (but without peer review) comprising BRIDGE science results together with conference announcements, meeting reports, cruise reports, updates from the mid-ocean ridge programmes of other nations and general news items of relevance to this field of research. It was published twice a year in spring and autumn. The first issue of eight stapled sheets appeared in August 1991 but after NERC funding commenced it was commercially printed and bound. By issue 10, in April 1996, it had grown to 100 pages and was being distributed to more than 600 researchers and interested parties in 20 countries. The last newsletter, No. 17, was produced in autumn 1999 as a magazine called The Fiery Deep, Exploring a New Earth summarising the programme and its results to that time. On 16 November 1999 at the Natural History Museum, London these results were presented to invited guests at a formal end of programme meeting.
As the programme ended, Joe Cann reported, "As a result of the BRIDGE initiative, several groups of UK scientists are at the forefront of international research in mid-ocean ridge science. The areas of expertise of these scientists range from marine geophysics and geodynamics, physical and chemical oceanography, to marine biology." "Every area had success. Here are a few examples. We found new pools of molten rock below the ocean floor where none was expected. We discovered large fields of hot springs, where the wisdom of the time said there should be none. We followed the strange lifecycle of the blind shrimp that live around hot springs in the Atlantic. We made sonar images of the first of a family of enormous faults that slice through the ocean floor, bringing deep rock to the surface. We showed how the flow of one of the big, hot spring fields was affected by scientific drilling. We traced the relationships between animals in hot spring communities up and down the Atlantic. We built new instruments, too, that can operate in these hostile regions".
Legacy
In addition to the results of the researches, which are still quoted, the BRIDGE Programme left an interdisciplinary community of deep-ocean scientists with a proven track record of collaboration and new equipment for working at depths of over 3,500 metres.
BRIDGE equipment
BRIDGE had purchased for the UK research fleet a Simrad multibeam echosounder for mapping the seafloor from a surface ship. To increase detail in any geographical areas of interest it also funded upgrades to the existing UK Towed Ocean Bottom Instrument (TOBI), which made 3D images of the seabed as it was towed 300m above the ocean floor. TOBI was modified to increase its resolution, to add a gyrocompass and to add a three component magnetometer for measuring the magnetic field of the seafloor rock over which it was towed.
The BRIDGE Towed instrument (BRIDGET) was developed for hunting and studying the plumes of warm, mineral-rich fluids rising into the water column from vent fields. This "hot-spring sniffer" was towed at depth behind a ship in areas where vent fields were suspected to occur and fed geochemical data back to the ship in real time.
Once fields had been detected the fluids venting from the sea-floor could be studied directly using the MEDUSA instrument. Deployed by a deep submergence vehicle, this could be placed over individual vents for extended time periods to record the characteristics of the fluids as they emerge. At the BRIDGE programme's close six MEDUSA instruments had been built with BRIDGE funding, three more were constructed for the Geological Survey of Japan, and the next generation was being developed for various US agencies including NASA.
For examining the rock of the mid-ocean ridge a new deep ocean drill, the BRIDGE Drill, was developed which marked the core as it drilled. The marking of the core allowed the original north-south orientation of the core to be known after it had been removed. This permitted the magnetic alignment of the rock from which the sample was taken to be determined, providing information on sea-bed movements that had taken place after the rock had formed.
For study of the dispersal of animals found at the vent fields, the biologists developed a Planktonic Larval Sampler for Molecular Analysis (PLASMA). This was designed to take samples of water to catch the dispersing larvae of animals living around the vents. PLASMA could be left on the sea-bed in the vicinity of a vent field for up to a year if required, sampling at programmed intervals and preserving any larvae for DNA analysis after the recovery of the equipment.
BRIDGE data archive
BRIDGE collected and compiled: multibeam bathymetry, sonar imagery, seismic data, electromagnetic data, gravimetry, petrology (including rock sections, cores, sediments and analytical data), chemical and physical oceanography (samples and analytical data), macro- and microbiology (specimens, film and analytical data); numerical models and audiovisual records. For the benefit of future researchers a BRIDGE data archive was lodged with the UK's National Oceanography Centre at Southampton.
Notes
References
BRIDGE Newsletter page references
External links
BRIDGE Data Archive
BRIDGE Drill
Natural Environment Research Council
Oceanography
Research projects | British Mid-Ocean Ridge Initiative | [
"Physics",
"Environmental_science"
] | 2,680 | [
"Oceanography",
"Hydrology",
"Applied and interdisciplinary physics"
] |
31,157,948 | https://en.wikipedia.org/wiki/Serial%20module | In abstract algebra, a uniserial module M is a module over a ring R, whose submodules are totally ordered by inclusion. This means simply that for any two submodules N1 and N2 of M, either or . A module is called a serial module if it is a direct sum of uniserial modules. A ring R is called a right uniserial ring if it is uniserial as a right module over itself, and likewise called a right serial ring if it is a right serial module over itself. Left uniserial and left serial rings are defined in a similar way, and are in general distinct from their right-sided counterparts.
An easy motivating example is the quotient ring $\mathbb{Z}/n\mathbb{Z}$ for any integer $n > 1$. This ring is always serial, and is uniserial when n is a prime power.
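This can be verified directly: the submodules of $\mathbb{Z}/n\mathbb{Z}$ correspond to the divisors of n, and they form a chain under inclusion exactly when any two divisors are comparable under divisibility, which happens exactly when n is a prime power. A brute-force sketch in Python:

```python
# Z/nZ is uniserial iff its ideals -- one per divisor of n -- are
# totally ordered by inclusion, i.e. any two divisors are comparable
# under divisibility. That holds exactly when n is a prime power.
def is_uniserial_Zn(n):
    divisors = [d for d in range(1, n + 1) if n % d == 0]
    return all(a % b == 0 or b % a == 0 for a in divisors for b in divisors)

for n in [7, 8, 9, 12, 16, 30]:
    print(n, is_uniserial_Zn(n))
# prime powers (7, 8, 9, 16) -> True; 12 and 30 -> False
```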
The term uniserial has been used differently from the above definition: for clarification see below.
A partial alphabetical list of important contributors to the theory of serial rings includes the mathematicians Keizo Asano, I. S. Cohen, P. M. Cohn, Yu. Drozd, D. Eisenbud, A. Facchini, A. W. Goldie, Phillip Griffith, I. Kaplansky, V. V. Kirichenko, G. Köthe, H. Kupisch, I. Murase, T. Nakayama, P. Příhoda, G. Puninski, and R. Warfield.
Following the common ring theoretic convention, if a left/right dependent condition is given without mention of a side (for example, uniserial, serial, Artinian, Noetherian) then it is assumed the condition holds on both the left and right. Unless otherwise specified, each ring in this article is a ring with unity, and each module is unital.
Properties of uniserial and serial rings and modules
It is immediate that in a uniserial R-module M, all submodules except M and 0 are simultaneously essential and superfluous. If M has a maximal submodule, then M is a local module. M is also clearly a uniform module and thus is directly indecomposable. It is also easy to see that every finitely generated submodule of M can be generated by a single element, and so M is a Bézout module.
It is known that the endomorphism ring EndR M is a semilocal ring which is very close to a local ring in the sense that EndR M has at most two maximal right ideals. If M is assumed to be Artinian or Noetherian, then EndR M is a local ring.
Since rings with unity always have a maximal right ideal, a right uniserial ring is necessarily local. As noted before, a finitely generated right ideal can be generated by a single element, and so right uniserial rings are right Bézout rings. A right serial ring R necessarily factors in the form $R = e_1R \oplus \dots \oplus e_nR$, where each $e_i$ is an idempotent element and $e_iR$ is a local, uniserial module. This indicates that R is also a semiperfect ring, which is a stronger condition than being a semilocal ring.
Köthe showed that the modules of Artinian principal ideal rings (which are a special case of serial rings) are direct sums of cyclic submodules. Later, Cohen and Kaplansky determined that a commutative ring R has this property for its modules if and only if R is an Artinian principal ideal ring. Nakayama showed that Artinian serial rings have this property on their modules, and that the converse is not true.
The most general result, perhaps, on the modules of a serial ring is attributed to Drozd and Warfield: it states that every finitely presented module over a serial ring is a direct sum of cyclic uniserial submodules (and hence is serial). If additionally the ring is assumed to be Noetherian, the finitely presented and finitely generated modules coincide, and so all finitely generated modules are serial.
Being right serial is preserved under direct products of rings and modules, and preserved under quotients of rings. Being uniserial is preserved for quotients of rings and modules, but never for products. A direct summand of a serial module is not necessarily serial, as was proved by Puninski, but direct summands of finite direct sums of uniserial modules are serial modules.
It has been verified that Jacobson's conjecture holds in Noetherian serial rings.
Examples
Any simple module is trivially uniserial, and likewise semisimple modules are serial modules.
Many examples of serial rings can be gleaned from the structure sections above. Every valuation ring is a uniserial ring, and all Artinian principal ideal rings are serial rings, as is illustrated by semisimple rings.
More exotic examples include the upper triangular matrix ring $T_n(D)$ over a division ring D, and the group ring $F[G]$ for a finite field F of prime characteristic p and a group G having a cyclic normal p-Sylow subgroup.
Structure
This section will deal mainly with Noetherian serial rings and their subclass, Artinian serial rings. In general, rings are first broken down into indecomposable rings. Once the structure of these rings is known, the decomposable rings are direct products of the indecomposable ones. Also, for semiperfect rings such as serial rings, the basic ring is Morita equivalent to the original ring. Thus if R is a serial ring with basic ring B, and the structure of B is known, the theory of Morita equivalence gives that $R \cong \operatorname{End}(P_B)$, where P is some finitely generated progenerator of B. This is why the results are phrased in terms of indecomposable, basic rings.
In 1975, Kirichenko and Warfield independently and simultaneously published analyses of the structure of Noetherian, non-Artinian serial rings. The results were the same however the methods they used were very different from each other. The study of hereditary, Noetherian, prime rings, as well as quivers defined on serial rings were important tools. The core result states that a right Noetherian, non-Artinian, basic, indecomposable serial ring can be described as a type of matrix ring over a Noetherian, uniserial domain V, whose Jacobson radical J(V) is nonzero. This matrix ring is a subring of Mn(V) for some n, and consists of matrices with entries from V on and above the diagonal, and entries from J(V) below.
Artinian serial ring structure is classified in cases depending on the quiver structure. It turns out that the quiver structure for a basic, indecomposable, Artinian serial ring is always a circle or a line. In the case of the line quiver, the ring is isomorphic to the upper triangular matrices over a division ring (note the similarity to the structure of Noetherian serial rings in the preceding paragraph). A complete description of structure in the case of a circle quiver is beyond the scope of this article, but can be found in the references. To paraphrase the result as it appears there: A basic Artinian serial ring whose quiver is a circle is a homomorphic image of a "blow-up" of a basic, indecomposable, serial quasi-Frobenius ring.
A decomposition uniqueness property
Two modules U and V are said to have the same monogeny class, denoted $[U]_m = [V]_m$, if there exist a monomorphism $U \to V$ and a monomorphism $V \to U$. The dual notion can be defined: the modules are said to have the same epigeny class, denoted $[U]_e = [V]_e$, if there exist an epimorphism $U \to V$ and an epimorphism $V \to U$.
The following weak form of the Krull-Schmidt theorem holds. Let $U_1, \ldots, U_n, V_1, \ldots, V_t$ be $n + t$ non-zero uniserial right modules over a ring R. Then the direct sums $U_1 \oplus \dots \oplus U_n$ and $V_1 \oplus \dots \oplus V_t$ are isomorphic R-modules if and only if $n = t$ and there exist two permutations $\sigma$ and $\tau$ of $1, 2, \ldots, n$ such that $[U_i]_m = [V_{\sigma(i)}]_m$ and $[U_i]_e = [V_{\tau(i)}]_e$ for every $i = 1, 2, \ldots, n$.
This result, due to Facchini, has been extended to infinite direct sums of uniserial modules by Příhoda in 2006. This extension involves the so-called quasismall uniserial modules. These modules were defined by Nguyen Viet Dung and Facchini, and their existence was proved by Puninski. The weak form of the Krull-Schmidt Theorem holds not only for uniserial modules, but also for several other classes of modules (biuniform modules, cyclically presented modules over serial rings, kernels of morphisms between indecomposable injective modules, couniformly presented modules.)
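In combinatorial terms, the two permutations in the theorem exist precisely when the two families share the same multiset of monogeny classes and the same multiset of epigeny classes, and the two matchings are allowed to differ. A toy sketch, in which the class labels are hypothetical stand-ins for actual uniserial modules:

```python
# Toy illustration of the weak Krull-Schmidt condition: sigma and tau
# exist iff the multisets of monogeny classes agree and the multisets of
# epigeny classes agree. The (monogeny, epigeny) labels are stand-ins.
from collections import Counter

U = [("m1", "e1"), ("m2", "e2"), ("m1", "e2")]
V = [("m1", "e2"), ("m1", "e1"), ("m2", "e2")]  # same classes, paired differently

def weak_ks_isomorphic(U, V):
    same_mono = Counter(m for m, _ in U) == Counter(m for m, _ in V)
    same_epi = Counter(e for _, e in U) == Counter(e for _, e in V)
    return len(U) == len(V) and same_mono and same_epi

print(weak_ks_isomorphic(U, V))  # True: the two direct sums are isomorphic
```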
Notes on alternate, similar and related terms
Right uniserial rings can also be referred to as right chain rings or right valuation rings. This latter term alludes to valuation rings, which are by definition commutative, uniserial domains. By the same token, uniserial modules have been called chain modules, and serial modules semichain modules. The notion of a catenary ring has "chain" as its namesake, but it is in general not related to chain rings.
In the 1930s, Gottfried Köthe and Keizo Asano introduced the term Einreihig (literally "one-series") during investigations of rings over which all modules are direct sums of cyclic submodules. For this reason, uniserial was used to mean "Artinian principal ideal ring" even as recently as the 1970s. Köthe's paper also required a uniserial ring to have a unique composition series, which not only forces the right and left ideals to be linearly ordered, but also requires that there be only finitely many ideals in the chains of left and right ideals. Because of this historical precedent, some authors include the Artinian condition or finite composition length condition in their definitions of uniserial modules and rings.
Expanding on Köthe's work, Tadashi Nakayama used the term generalized uniserial ring to refer to an Artinian serial ring. Nakayama showed that all modules over such rings are serial. Artinian serial rings are sometimes called Nakayama algebras, and they have a well-developed module theory.
Warfield used the term homogeneously serial module for a serial module with the additional property that for any two finitely generated submodules A and B, $A/J(A) \cong B/J(B)$, where $J(-)$ denotes the Jacobson radical of the module. In a module with finite composition length, this has the effect of forcing the composition factors to be isomorphic, hence the "homogeneous" adjective. It turns out that a serial ring R is a finite direct sum of homogeneously serial right ideals if and only if R is isomorphic to a full n × n matrix ring over a local serial ring. Such rings are also known as primary decomposable serial rings.
Notes
Textbooks
Primary Sources
Module theory
Ring theory | Serial module | [
"Mathematics"
] | 2,295 | [
"Fields of abstract algebra",
"Ring theory",
"Module theory"
] |
31,159,552 | https://en.wikipedia.org/wiki/Minor%20loop%20feedback | Minor loop feedback is a classical method used to design stable robust linear feedback control systems using feedback loops around sub-systems within the overall feedback loop. The method is sometimes called minor loop synthesis in college textbooks, some government documents.
The method is suitable for design by graphical methods and was used before digital computers became available. In World War II this method was used to design gun laying control systems. It is still used now, but not always referred to by name. It is often discussed within the context of Bode plot methods. Minor loop feedback can be used to stabilize opamps.
Example
Telescope position servo
This example is slightly simplified (no gears between the motor and the load) from the control system for the Harlan J. Smith Telescope at the McDonald Observatory. In the figure there are three feedback loops: the current control loop, the velocity control loop and the position control loop. The last is the main loop; the other two are minor loops. Considered on its own, without the minor loop feedback, the forward path has three unavoidable phase-shifting stages. The motor inductance and winding resistance form a low-pass filter with a bandwidth around 200 Hz. Acceleration to velocity is an integrator, and velocity to position is an integrator. This would give a total phase shift of 180 to 270 degrees. Simply connecting position feedback would almost always result in unstable behaviour.
Current control loop
The innermost loop regulates the current in the torque motor. This type of motor creates torque that is nearly proportional to the rotor current, even if it is forced to turn backward. Because of the action of the commutator, there are instances when two rotor windings are simultaneously energized. If the motor were driven by a voltage-controlled voltage source, the current would roughly double, as would the torque. By sensing the current with a small sensing resistor (RS) and feeding that voltage back to the inverting input of the drive amplifier, the amplifier becomes a voltage-controlled current source. With constant current, when two windings are energized, they share the current and the variation of torque is on the order of 10%.
Velocity control loop
The next innermost loop regulates motor speed. The voltage signal from the Tachometer (a small permanent magnet DC generator) is proportional to the angular velocity of the motor. This signal is fed back to the inverting input of the velocity control amplifier (KV). The velocity control system makes the system 'stiffer' when presented with torque variations such as wind, movement about the second axis and torque ripple from the motor.
Position control loop
The outermost loop, the main loop, regulates load position. In this example, position feedback of the actual load position is provided by a rotary encoder that produces a binary output code. The actual position is compared to the desired position by a digital subtractor that drives a DAC (digital-to-analog converter), which in turn drives the position control amplifier (KP). Position control allows the servo to compensate for sag and for slight position ripple caused by gears (not shown) between the motor and the telescope.
Synthesis
The usual design procedure is to design the innermost subsystem (the current control loop in the telescope example) using local feedback to linearize and flatten the gain. Stability is generally assured by Bode plot methods. Usually, the bandwidth is made as wide as possible. Then the next loop (the velocity loop in the telescope example) is designed. The bandwidth of this sub-system is set to be a factor of 3 to 5 less than the bandwidth of the enclosed system. This process continues with each loop having less bandwidth than the bandwidth of the enclosed system. As long as the bandwidth of each loop is less than the bandwidth of the enclosed sub-system by a factor of 3 to 5, the phase shift of the enclosed system can be neglected, i.e. the sub-system can be treated as simple flat gain. Since the bandwidth of each sub-system is less than the bandwidth of the system it encloses, it is desirable to make the bandwidth of each sub-system as large as possible so that there is enough bandwidth in the outermost loop. The system is often expressed as a Signal-flow graph and its overall transfer function can be computed from Mason's Gain Formula.
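The 3-to-5 bandwidth rule can be illustrated numerically. If each closed minor loop is approximated as a first-order lag, the phase it contributes at the crossover frequency of the loop that encloses it stays small, which is what justifies treating it as flat gain. A minimal sketch follows; the bandwidth figures are illustrative choices, not the actual telescope servo values:

```python
import numpy as np

# Approximate each closed minor loop by a first-order lag 1/(1 + j*f/fc).
# With each enclosing loop's bandwidth a factor of ~4 below the loop it
# encloses, the inner loop contributes only a small phase lag at the
# outer crossover, so it can be treated as flat gain there.
def first_order_phase_lag_deg(f, fc):
    return np.degrees(np.arctan(f / fc))

f_current = 200.0            # Hz, innermost (current) loop bandwidth
f_velocity = f_current / 4   # 50 Hz velocity loop bandwidth
f_position = f_velocity / 4  # 12.5 Hz position loop bandwidth

print(first_order_phase_lag_deg(f_velocity, f_current))   # ~14 degrees
print(first_order_phase_lag_deg(f_position, f_velocity))  # ~14 degrees
```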
References
External links
Li, Yunfeng and Roberto Horowitz. "Mechatronics of Electrostatic Microactuators for Computer Disk Drive Dual-Stage Servo Systems." IEEE/ASME Transactions on Mechatronics, Vol. 6 No. 2. June 2001.
Dawson, Joel L. "Feedback Systems." MIT.
Large Telescope Conference 1971, contains full text of Dittmar's presentation.
Control theory | Minor loop feedback | [
"Mathematics"
] | 969 | [
"Applied mathematics",
"Control theory",
"Dynamical systems"
] |
32,140,327 | https://en.wikipedia.org/wiki/Bernstein%27s%20theorem%20%28approximation%20theory%29 | In approximation theory, Bernstein's theorem is a converse to Jackson's theorem. The first results of this type were proved by Sergei Bernstein in 1912.
For approximation by trigonometric polynomials, the result is as follows:
Let f: [0, 2π] → C be a 2π-periodic function, and assume r is a natural number and 0 < α < 1. If there exist a number C(f) > 0 and a sequence of trigonometric polynomials $\{P_n\}_{n \ge n_0}$ such that
$$\deg P_n = n, \qquad \sup_{0 \le x \le 2\pi} |f(x) - P_n(x)| \le \frac{C(f)}{n^{r+\alpha}},$$
then $f = P_{n_0} + \varphi$, where $\varphi$ has a bounded r-th derivative which is α-Hölder continuous.
See also
Bernstein's lethargy theorem
Constructive function theory
References
Theorems in approximation theory | Bernstein's theorem (approximation theory) | [
"Mathematics"
] | 153 | [
"Theorems in approximation theory",
"Theorems in mathematical analysis",
"Mathematical analysis",
"Mathematical analysis stubs"
] |
32,144,185 | https://en.wikipedia.org/wiki/International%20Society%20for%20Computational%20Biology%20Student%20Council | The International Society for Computational Biology Student Council (ISCB-SC) is a dedicated section of the International Society for Computational Biology created in 2004. It is composed by students and young researchers from all levels (undergraduates, postgraduates, and postdoctoral researchers) in the fields of bioinformatics and computational biology. The organisation promotes the development of the students' community worldwide by organizing different events including symposia, workshops, webinars, internship coordination and hackathons. A special focus is made on the development of soft skills in order to develop potential in bioinformatics and computational biology students around the world.
The ISCB-SC is composed of Regional Student Groups (RSGs) which are located in various regions across the globe. Since its foundation, the ISCB-SC has gained representation in most continents through over thirty RSGs which collectively represent more than 1,000 students and researchers around the world.
History
The ISCB Student Council was officially established in 2004 by a vote from the ISCB Board of Directors at the ISMB/ECCB meeting. The concept of the student council was first proposed by Manuel Corpas in late 2003. The establishment of the student council was the first move of the ISCB to include the new generation of computational biologists into the then newly emerging field.
The first meeting had about 30 students and eight members of the ISCB leadership at the Intelligent Systems for Molecular Biology (ISMB) 2005 conference in Detroit, Michigan.
The first official Student Council Symposium (SCS) was held later that year at the European Conference on Computational Biology (ECCB) meeting in Madrid, Spain. The first SCS had approximately 100 attendees and was the beginning of a long-running series of meetings organized by and highlighting students. Other than the first instance of SCS, all later editions have been held as satellite meetings of ISMB. In 2012, the SC started a second set of symposia that takes place at ECCB when it is not co-located with ISMB; the European Student Council Symposium (ESCS) regularly has more attendees than SCS when both events are held in the same year. Recently, the SC symposia have also been held alongside other ISCB conferences, as in the case of the Latin American Student Council Symposium (LA-SCS), which has been held every two years alongside the Latin American ISCB Conference since 2014, and the ISCB Student Council Symposium in Africa (SCS-Africa), which has been held alongside the ISCB Africa ASBCB conference since 2015.
Regional Student Groups
In 2006, the student council created the Regional Student Group (RSG) initiative to tailor the efforts of the council to the local community. RSGs are locally managed groups established in various regions. Each RSG operates either under the aegis of a higher education institution or as an independent organisation sub-affiliated to a local computational biology or bioinformatics organisation. RSGs are provided support from both the student council and other local groups in their region, and often organize multi-RSG events such as BeNeLuxFra. The global network of the student council includes over 25 active RSGs operating across five continents.
The RSG committee was set up in 2008 to oversee and coordinate the activities of RSGs across the world. The team comprises an RSG chair, a co-chair and five vice-chairs corresponding to geographical regions (Asia-Pacific, Europe, Africa, Latin America and USA-Canada). The RSG chair is required to have previous experience as a president or secretary of an RSG. Currently, around 2,000 students and researchers actively participate in RSG-related events.
The ISCB-SC continuously looks for PhD students or postdocs who may be interested in starting a new RSG in a region where the student council has no presence yet. The student council also supports existing RSGs by advising them on how to grow, in terms of the events and projects they face as well as economically. There are 3 funding calls every year where RSGs can submit proposals containing different types of events which will be evaluated by the continental RSG chairs and the Executive Committee. Since the creation of the Student Council F1000Research channel in 2016, achievements by RSGs throughout each year are discussed in editorial articles.
The ISCB-SC runs an internship program in which researchers throughout the world can offer a place for a student from a developing nation to join his/her group for a period of time. This program aims to give students from developing nations the opportunity to have an experience in a working environment from first class research groups and interact with expert Principal Investigators around the world.
Press
As a worldwide organization, the SC organizes symposia, workshops and different events across the globe. After more than ten years of existence, many new experiences have been gained on how to deal with different obstacles that arise when trying to tackle complex objectives. Learning experiences are published as articles that aim to help new RSGs, as well as other student organizations, including bioinformatics communities, to organize their own events and strengthen their growth.
Publications about the Student Council
The Student Council is well recognized by members of the scientific community, and its members are also recognized as outstanding members of their community. The Student Council also assists in efforts to improve the quality of public reference material.
Finally, highlights from all major symposia from both the Student Council and the Regional Student Groups have been periodically published over the years (2007, 2008, 2009, 2010, 2011, 2012, 2014, 2015, 2016 and 2017).
Student Council Outreach Publications
Since its conception, the ISCB Student Council has not only advocated for the students in the ISCB and beyond but, also, the SC has made much effort to educate new students.
In 2014, the ISCB Student Council began publishing a collection of articles in the PLOS journals that covers details about the development of the SC and how to advance in the field. As a collective effort marking a decade of history, Student Council leaders from different regions of the world came together and published 12 articles as part of the "Stories from the road: ISCB Student Council Collection", summarizing the experience and lessons learned through those years. These articles range in topic from starting and expanding your scientific network, and dealing with the frustration of failure, to disseminating science to the public. In particular, the article "Explain Bioinformatics to Your Grandmother!", which aims to simplify the answer to the question "What is bioinformatics?" for non-scientists, has accumulated more than 17,000 views (as of 2017) and constitutes one of the most read articles in PLOS Computational Biology.
The Student Council also has its own article channel on F1000Research which features publications highlighting the activities of the council.
In addition to the collections in F1000 and PLOS, the Student Council has publications that have appeared in BMC Bioinformatics and other journals.
Events
The student council organizes several symposia each year to coincide with the major ISCB conferences:
The Student Council Symposium (SCS) is held each year directly preceding ISMB (or ISMB/ECCB). The symposium has been organised each year since 2005.
The European Student Council Symposium (ESCS) is a biennial symposium that takes place preceding the ECCB (when ECCB is not co-located with ISMB). It has been held since 2010.
The Latin American Student Council Symposium (LA-SCS) is held in even-numbered years directly preceding ISCB-LA. The LA-SCS has been held every other year since 2014.
The African Student Council Symposium (SCS Africa) is held preceding the ISCB Africa ASBCB Conference on Bioinformatics since 2015.
The Asian Student Council Symposium (ASCS) is held preceding the ISCB-Asia Conference since 2022.
In addition to the main ISCB-SC events that accompany the main ISCB conference(s), RSGs also hold events on a regular basis; these include annual events in Argentina, Germany and the UK.
Committees
Every year, ISCB and SC members elect new leadership positions. The positions are filled by students and/or postdoctoral members of the SC. The key mission of the SC's Executive Team is to lead the sustainable development of the organisation and its RSGs and to manage the coordination of all activities.
In addition, the SC aims to coordinate and integrate the efforts of all members and volunteers who contribute their time. For better organizational structure, the SC has established several committees that are chaired by SC members and managed by the executive team. Below is the list of the SC's committees, with a description of each committee's responsibilities:
References
Computational biology
Biology societies
Bioinformatics organizations
Cybernetics
Computer science institutes
Biological research institutes
Biochemistry research institutes
Com b
Student organizations established in 2004
Scientific organizations established in 2004 | International Society for Computational Biology Student Council | [
"Chemistry",
"Biology"
] | 1,809 | [
"Bioinformatics organizations",
"Biochemistry organizations",
"Biochemistry research institutes",
"Bioinformatics",
"Computational biology"
] |
32,145,358 | https://en.wikipedia.org/wiki/Eclipse%20season | An eclipse season is a period, roughly every six months, when eclipses occur. Eclipse seasons are the result of the axial parallelism of the Moon's orbital plane (tilted five degrees to the Earth's orbital plane), just as Earth's weather seasons are the result of the axial parallelism of Earth's tilted axis as it orbits around the Sun. During the season, the "lunar nodes" – the line where the Moon's orbital plane intersects with the Earth's orbital plane – align with the Sun and Earth, such that a solar eclipse is formed during the new moon phase and a lunar eclipse is formed during the full moon phase.
Two (or occasionally three) eclipse seasons occur each year. Each season lasts about 35 days and repeats just short of six months (173 days) later, so two full eclipse seasons always fall within a calendar year. Either two or three eclipses happen each eclipse season. During the eclipse season, the Moon is at a low ecliptic latitude (less than around 1.5° north or south), hence the Sun, Moon, and Earth become aligned straightly enough (in syzygy) for an eclipse to occur. Eclipse seasons occur about 38 times within a saros period (6,585.3 days).
The type of each solar eclipse (whether total or annular, as seen from the sublunar point) depends on the apparent sizes of the Sun and Moon, which are functions of the distances of Earth from the Sun and of the Moon from Earth, respectively, as seen from Earth's surface. These distances vary because both the Earth and the Moon have elliptic orbits.
If both orbits were coplanar (i.e. on the same plane) with each other, then two eclipses would happen every lunar month (29.53 days), assuming the Earth had a perfectly circular orbit centered around the Sun, and the Moon's orbit was also perfectly circular and centered around the Earth. A lunar eclipse would occur at every full moon, a solar eclipse every new moon, and all solar eclipses would be the same type.
Details
An eclipse season is the only time when the Sun (from the perspective of the Earth) is close enough to one of the Moon's nodes to allow an eclipse to occur. During the season, whenever there is a full moon a lunar eclipse may occur and whenever there is a new moon a solar eclipse may occur. If the Sun is close enough to a node, then a "full" eclipse [total or annular solar, or total lunar] will occur. Each season lasts from 31 to 37 days, and seasons recur about every 6 months (173 days). At least two (one solar and one lunar, in any order), and at most three eclipses (solar, lunar, then solar again, or vice versa), will occur during every eclipse season. This is because it is about 15 days (a fortnight) between a full moon and a new moon and vice versa. If there is an eclipse at the very beginning of the season, then there is enough time (30 days) for two more eclipses.
In other words, because the eclipse season (34 days long on average) is longer than the synodic month (one lunation, the time for the Moon to return to a particular phase, about 29.5 days), the Moon will be new or full at least two, and up to three, times during the season. Eclipse seasons occur slightly shy of six months apart (successively every 173.31 days, half of an eclipse year), the time it takes the Sun to travel from one node to the next along the ecliptic. If the last eclipse of an eclipse season occurs at the very beginning of a calendar year, a total of seven eclipses can occur in that year, since there is still time before the end of the calendar year for two full eclipse seasons, each having up to three eclipses.
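A back-of-the-envelope sketch of this arithmetic follows; the synodic-month and eclipse-year constants are the standard round values, used here purely for illustration rather than quoted from a precise ephemeris:

```python
# Illustrative constants (standard round values, not a precise ephemeris)
SYNODIC_MONTH = 29.530589   # days between successive new (or full) moons
ECLIPSE_YEAR = 346.620076   # days for the Sun to return to the same lunar node

season_spacing = ECLIPSE_YEAR / 2          # ~173.31 d between eclipse seasons
print(f"eclipse seasons repeat every {season_spacing:.2f} days")

# A ~34-day season spans 34/29.53 ~ 1.15 lunations, so it always contains at
# least two syzygies (one new and one full moon, ~14.77 d apart) and, if the
# first falls near the start of the season, a third one ~29.53 d later.
season_length = 34.0
syzygy_spacing = SYNODIC_MONTH / 2
max_syzygies = int(season_length // syzygy_spacing) + 1
print(f"up to {max_syzygies} syzygies fit in a {season_length:.0f}-day season")
```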
Examples: Part 1 out of 4
Visual sequence of two particular eclipse seasons
In each sequence below, each eclipse is separated by a fortnight. The first and last eclipse in each sequence is separated by one synodic month. See also Eclipse cycles.
(The two eclipse seasons above share similarities (lunar or solar centrality and gamma of each eclipse in the same column) because they are a half saros apart.)
22-year chart of eclipses (1999–2020) demonstrating seasons
The penumbral lunar eclipse of November 29–30, 2020 was followed by the solar eclipse of December 14, 2020.
Examples: Parts 2–4 out of 4
Each of the remaining three parts presented the visual sequence of two further particular eclipse seasons, laid out as in Part 1: within each sequence, successive eclipses are separated by a fortnight, and the first and last eclipse in each sequence are separated by one synodic month. In each part, the two eclipse seasons shown share similarities (lunar or solar centrality and the gamma of each eclipse in the same column) because they are a half saros apart. See also Eclipse cycles.
See also
Eclipse cycle
Ecliptic
Lunar node
Lunar phase
Orbit of the Moon
Orbital inclination
Syzygy
References
External links
NASA Eclipse Web Site
Sun-centered animation of Moon inclination UNL Astronomy
Season
Lunar eclipses
Time in astronomy
Units of time | Eclipse season | [
"Physics",
"Astronomy",
"Mathematics"
] | 1,229 | [
"Time in astronomy",
"Physical quantities",
"Time",
"Units of time",
"Astronomical events",
"Quantity",
"Spacetime",
"Units of measurement",
"Eclipses"
] |
32,148,595 | https://en.wikipedia.org/wiki/Ars%20operon | In molecular biology, the ars operon is an operon found in several bacterial taxon. It is required for the detoxification of arsenate, arsenite, and antimonite. This system transports arsenite and antimonite out of the cell. The pump is composed of two polypeptides, the products of the arsA and arsB genes. This two-subunit enzyme produces resistance to arsenite and antimonite. Arsenate, however, must first be reduced to arsenite before it is extruded. A third gene, arsC, expands the substrate specificity to allow for arsenate pumping and resistance. ArsC is an approximately 150-residue arsenate reductase that uses reduced glutathione (GSH) to convert arsenate to arsenite with a redox active cysteine residue in the active site. ArsC forms an active quaternary complex with GSH, arsenate, and glutaredoxin 1 (Grx1). The three ligands must be present simultaneously for reduction to occur.
ArsA and ArsB
ArsA and ArsB form an anion-translocating ATPase. The ArsB protein is distinguished by its overall hydrophobic character, in keeping with its role as a membrane-associated channel. Sequence analysis reveals the presence of 13 putative transmembrane (TM) regions.
ArsC
The arsC protein structure has been solved. It belongs to the thioredoxin superfamily fold which is defined by a beta-sheet core surrounded by alpha-helices. The active cysteine residue of ArsC is located in the loop between the first beta-strand and the first helix, which is also conserved in the Spx protein and its homologues.
The arsC family also comprises the Spx proteins which are Gram-positive bacterial transcription factors that regulate the transcription of multiple genes in response to disulphide stress.
ArsD and ArsR
ArsD is a trans-acting repressor of the arsRDABC operon that confers resistance to arsenicals and antimonials in Escherichia coli. It possesses two-pairs of vicinal cysteine residues, Cys(12)-Cys(13) and Cys(112)-Cys(113), that potentially form separate binding sites for the metalloids that trigger dissociation of ArsD from the operon. However, as a homodimer it has four vicinal cysteine pairs. The ArsD family consists of several bacterial arsenical resistance operon trans-acting repressor ArsD proteins.
ArsR is a trans-acting regulatory protein. It acts as a repressor on the arsRDABC operon when no arsenic is present in the cell. When arsenic is present in the cell ArsR will lose affinity for the operator and RNA polymerase can transcribe the arsDCAB genes.
ArsD and ArsR work together to regulate the ars operon.
ArsD is also an arsenic chaperone encoded by the arsRDABC operon of Escherichia coli. It transfers trivalent metalloids to ArsA, the catalytic subunit of an As(III)/Sb(III) efflux pump. Interaction with ArsD increases the affinity of ArsA for arsenite, thus increasing its ATPase activity at lower concentrations of arsenite and enhancing the rate of arsenite extrusion.
References
Protein families
Operons | Ars operon | [
"Chemistry",
"Biology"
] | 723 | [
"Operons",
"Protein families",
"Gene expression",
"Protein classification"
] |
25,105,276 | https://en.wikipedia.org/wiki/Spinodal | In thermodynamics, the limit of local stability against phase separation with respect to small fluctuations is clearly defined by the condition that the second derivative of Gibbs free energy is zero.
The locus of these points (the inflection point within a G-x or G-c curve, Gibbs free energy as a function of composition) is known as the spinodal curve. For compositions within this curve, infinitesimally small fluctuations in composition and density will lead to phase separation via spinodal decomposition. Outside of the curve, the solution will be at least metastable with respect to fluctuations. In other words, outside the spinodal curve some careful process may obtain a single-phase system. Inside it, only processes far from thermodynamic equilibrium, such as physical vapor deposition, will enable one to prepare single-phase compositions. The locus of coexisting compositions, defined by the common tangent construction, is known as the binodal, or coexistence, curve, which denotes the minimum-energy equilibrium state of the system. Increasing temperature results in a decreasing difference between mixing entropy and mixing enthalpy, and thus the coexisting compositions come closer. The binodal curve forms the basis for the miscibility gap in a phase diagram. The free energy of mixing changes with temperature and concentration, and the binodal and spinodal meet at the critical or consolute temperature and composition.
Criterion
For binary solutions, the thermodynamic criterion which defines the spinodal curve is that the second derivative of free energy with respect to density or some composition variable is zero.
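Written out, the criterion is that the second composition derivative of the Gibbs energy of mixing vanishes at constant temperature and pressure. As a worked illustration under an assumed model (the regular-solution free energy, which the text above does not specify), the spinodal can be solved in closed form:

```latex
% Spinodal criterion for a binary mixture at constant T and P
\[
\left( \frac{\partial^{2} \Delta G_{\mathrm{mix}}}{\partial x^{2}} \right)_{T,P} = 0
\]
% Assumed regular-solution model (illustrative, not from the text):
% \Delta G_mix = \Omega x(1-x) + RT [ x \ln x + (1-x) \ln(1-x) ]
\[
\frac{\partial^{2} \Delta G_{\mathrm{mix}}}{\partial x^{2}}
  = -2\Omega + RT \left( \frac{1}{x} + \frac{1}{1-x} \right) = 0
\quad \Longrightarrow \quad
T_{\mathrm{sp}}(x) = \frac{2\Omega\, x(1-x)}{R}
\]
% The two spinodal branches meet at the critical (consolute) point
% x = 1/2, T_c = \Omega / (2R), where binodal and spinodal coincide.
```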
Critical point
Extrema of the spinodal in a temperature vs composition plot coincide with those of the binodal curve, and are known as critical points. The spinodal itself can be thought of as a line of pseudocritical points, with the correlation function taking a scaling form with non-classical critical exponents. Strictly speaking, a spinodal is defined as a mean field theoretic object. As such, the spinodal does not exist in real systems, but one can extrapolate to infer the existence of a pseudospinodal that exhibits critical-like behavior such as critical slowing down.
Isothermal liquid-liquid equilibria
In the case of ternary isothermal liquid-liquid equilibria, the spinodal curve (obtained from the Hessian matrix) and the corresponding critical point can be used to help the experimental data correlation process.
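For completeness, a sketch of the Hessian condition referred to above: with two independent mole fractions x1 and x2 (the third fixed by closure), the spinodal is the locus where the determinant of the Hessian of the molar Gibbs energy vanishes.

```latex
% Ternary isothermal spinodal: limit of stability where the Hessian of
% the molar Gibbs energy becomes singular.
\[
\det
\begin{pmatrix}
\dfrac{\partial^{2} G}{\partial x_{1}^{2}} &
\dfrac{\partial^{2} G}{\partial x_{1}\,\partial x_{2}} \\[2ex]
\dfrac{\partial^{2} G}{\partial x_{2}\,\partial x_{1}} &
\dfrac{\partial^{2} G}{\partial x_{2}^{2}}
\end{pmatrix}
= 0 \qquad (T,\ P\ \text{constant})
\]
```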
References
Thermodynamics | Spinodal | [
"Physics",
"Chemistry",
"Mathematics"
] | 521 | [
"Thermodynamics",
"Dynamical systems"
] |
25,110,709 | https://en.wikipedia.org/wiki/Nuclear%20magnetic%20resonance | Nuclear magnetic resonance (NMR) is a physical phenomenon in which nuclei in a strong constant magnetic field are disturbed by a weak oscillating magnetic field (in the near field) and respond by producing an electromagnetic signal with a frequency characteristic of the magnetic field at the nucleus. This process occurs near resonance, when the oscillation frequency matches the intrinsic frequency of the nuclei, which depends on the strength of the static magnetic field, the chemical environment, and the magnetic properties of the isotope involved; in practical applications with static magnetic fields up to ca. 20 tesla, the frequency is similar to VHF and UHF television broadcasts (60–1000 MHz). NMR results from specific magnetic properties of certain atomic nuclei. High-resolution nuclear magnetic resonance spectroscopy is widely used to determine the structure of organic molecules in solution and study molecular physics and crystals as well as non-crystalline materials. NMR is also routinely used in advanced medical imaging techniques, such as in magnetic resonance imaging (MRI). The original application of NMR to condensed matter physics is nowadays mostly devoted to strongly correlated electron systems. It reveals large many-body couplings by fast broadband detection and should not be confused with solid state NMR, which aims at removing the effect of the same couplings by Magic Angle Spinning techniques.
The most commonly used nuclei are 1H and 13C, although isotopes of many other elements can be studied by high-field NMR spectroscopy as well. In order to interact with the magnetic field in the spectrometer, the nucleus must have an intrinsic angular momentum and nuclear magnetic dipole moment. This occurs when an isotope has a nonzero nuclear spin, meaning an odd number of protons and/or neutrons (see Isotope). Nuclides with even numbers of both have a total spin of zero and are therefore not NMR-active.
In its application to molecules the NMR effect can be observed only in the presence of a static magnetic field. However, in the ordered phases of magnetic materials, very large internal fields are produced at the nuclei of magnetic ions (and of close ligands), which allow NMR to be performed in zero applied field. Additionally, radio-frequency transitions of nuclear spin I > 1/2 with large enough electric quadrupolar coupling to the electric field gradient at the nucleus may also be excited in zero applied magnetic field (nuclear quadrupole resonance).
In the dominant chemistry application, the use of higher fields improves the sensitivity of the method (the signal-to-noise ratio scales approximately as the 3/2 power of the magnetic field strength) and the spectral resolution. Commercial NMR spectrometers employing liquid-helium-cooled superconducting magnets with fields of up to 28 tesla have been developed and are widely used.
It is a key feature of NMR that the resonance frequency of nuclei in a particular sample substance is usually directly proportional to the strength of the applied magnetic field. It is this feature that is exploited in imaging techniques; if a sample is placed in a non-uniform magnetic field then the resonance frequencies of the sample's nuclei depend on where in the field they are located. This effect serves as the basis of magnetic resonance imaging.
The principle of NMR usually involves three sequential steps:
The alignment (polarization) of the magnetic nuclear spins in an applied, constant magnetic field B0.
The perturbation of this alignment of the nuclear spins by a weak oscillating magnetic field, usually referred to as a radio frequency (RF) pulse. The oscillation frequency required for significant perturbation is dependent upon the static magnetic field (B0) and the nuclei of observation.
The detection of the NMR signal during or after the RF pulse, due to the voltage induced in a detection coil by precession of the nuclear spins around B0. After an RF pulse, precession usually occurs with the nuclei's Larmor frequency and, in itself, does not involve transitions between spin states or energy levels.
The two magnetic fields are usually chosen to be perpendicular to each other as this maximizes the NMR signal strength. The frequencies of the time-signal response by the total magnetization (M) of the nuclear spins are analyzed in NMR spectroscopy and magnetic resonance imaging. Both use applied magnetic fields (B0) of great strength, usually produced by large currents in superconducting coils, in order to achieve dispersion of response frequencies and of very high homogeneity and stability in order to deliver spectral resolution, the details of which are described by chemical shifts, the Zeeman effect, and Knight shifts (in metals). The information provided by NMR can also be increased using hyperpolarization, and/or using two-dimensional, three-dimensional and higher-dimensional techniques.
NMR phenomena are also utilized in low-field NMR, NMR spectroscopy and MRI in the Earth's magnetic field (referred to as Earth's field NMR), and in several types of magnetometers.
History
Nuclear magnetic resonance was first described and measured in molecular beams by Isidor Rabi in 1938, by extending the Stern–Gerlach experiment, and in 1944, Rabi was awarded the Nobel Prize in Physics for this work. In 1946, Felix Bloch and Edward Mills Purcell expanded the technique for use on liquids and solids, for which they shared the Nobel Prize in Physics in 1952.
Russell H. Varian filed the patent "Method and means for correlating nuclear properties of atoms and magnetic fields" on October 21, 1948; it was accepted on July 24, 1951. Varian Associates developed the first NMR unit, called NMR HR-30, in 1952.
Purcell had worked on the development of radar during World War II at the Massachusetts Institute of Technology's Radiation Laboratory. His work during that project on the production and detection of radio frequency power and on the absorption of such RF power by matter laid the foundation for his discovery of NMR in bulk matter.
Rabi, Bloch, and Purcell observed that magnetic nuclei, like and , could absorb RF energy when placed in a magnetic field and when the RF was of a frequency specific to the identity of the nuclei. When this absorption occurs, the nucleus is described as being in resonance. Different atomic nuclei within a molecule resonate at different (radio) frequencies in the same applied static magnetic field, due to various local magnetic fields. The observation of such magnetic resonance frequencies of the nuclei present in a molecule makes it possible to determine essential chemical and structural information about the molecule.
The improvements of the NMR method benefited from the development of electromagnetic technology and advanced electronics and their introduction into civilian use. Originally as a research tool it was limited primarily to dynamic nuclear polarization, by the work of Anatole Abragam and Albert Overhauser, and to condensed matter physics, where it produced one of the first demonstrations of the validity of the BCS theory of superconductivity by the observation by Charles Slichter of the Hebel-Slichter effect. It soon showed its potential in organic chemistry, where NMR has become indispensable, and by the 1990s improvement in the sensitivity and resolution of NMR spectroscopy resulted in its broad use in analytical chemistry, biochemistry and materials science.
In the 2020s zero- to ultralow-field nuclear magnetic resonance (ZULF NMR), a form of spectroscopy that provides abundant analytical results without the need for large magnetic fields, was developed. It is combined with a special technique that makes it possible to hyperpolarize atomic nuclei.
Theory of nuclear magnetic resonance
Nuclear spins and magnets
All nucleons, that is neutrons and protons, composing any atomic nucleus, have the intrinsic quantum property of spin, an intrinsic angular momentum analogous to the classical angular momentum of a spinning sphere. The overall spin of the nucleus is determined by the spin quantum number S. If the numbers of both the protons and neutrons in a given nuclide are even, then S = 0, i.e. there is no overall spin. Then, just as electrons pair up in nondegenerate atomic orbitals, so do even numbers of protons or even numbers of neutrons (both of which are also spin-1/2 particles and hence fermions), giving zero overall spin.
However, an unpaired proton and unpaired neutron will have a lower energy when their spins are parallel, not anti-parallel. This parallel spin alignment of distinguishable particles does not violate the Pauli exclusion principle. The lowering of energy for parallel spins has to do with the quark structure of these two nucleons. As a result, the spin ground state for the deuteron (the nucleus of deuterium, the 2H isotope of hydrogen), which has only a proton and a neutron, corresponds to a spin value of 1, not of zero. On the other hand, because of the Pauli exclusion principle, the tritium isotope of hydrogen must have a pair of anti-parallel spin neutrons (of total spin zero for the neutron spin-pair), plus a proton of spin 1/2. Therefore, the tritium total nuclear spin value is again 1/2, just like that of the simpler, abundant hydrogen isotope, the 1H nucleus (the proton). The NMR absorption frequency for tritium is also similar to that of 1H. In many other cases of non-radioactive nuclei, the overall spin is also non-zero and may have a contribution from the orbital angular momentum of the unpaired nucleon. For example, the 27Al nucleus has an overall spin value S = 5/2.
A non-zero spin is associated with a non-zero magnetic dipole moment, μ, via the relation μ = γS, where γ is the gyromagnetic ratio. Classically, this corresponds to the proportionality between the angular momentum and the magnetic dipole moment of a spinning charged sphere, both of which are vectors parallel to the rotation axis whose length increases proportional to the spinning frequency. It is the magnetic moment and its interaction with magnetic fields that allows the observation of NMR signal associated with transitions between nuclear spin levels during resonant RF irradiation or caused by Larmor precession of the average magnetic moment after resonant irradiation. Nuclides with even numbers of both protons and neutrons have zero nuclear magnetic dipole moment and hence do not exhibit NMR signal. For instance, 18O is an example of a nuclide that produces no NMR signal, whereas 13C, 35Cl, and 37Cl are nuclides that do exhibit NMR spectra. The last two nuclei have spin S > 1/2 and are therefore quadrupolar nuclei.
Electron spin resonance (ESR) is a related technique in which transitions between electronic rather than nuclear spin levels are detected. The basic principles are similar but the instrumentation, data analysis, and detailed theory are significantly different. Moreover, there is a much smaller number of molecules and materials with unpaired electron spins that exhibit ESR (or electron paramagnetic resonance (EPR)) absorption than those that have NMR absorption spectra. On the other hand, ESR has much higher signal per spin than NMR does.
Values of spin angular momentum
Nuclear spin is an intrinsic angular momentum that is quantized. This means that the magnitude of this angular momentum is quantized (i.e. S can only take on a restricted range of values), and also that the x, y, and z-components of the angular momentum are quantized, being restricted to integer or half-integer multiples of ħ, the reduced Planck constant. The integer or half-integer quantum number associated with the spin component along the z-axis or the applied magnetic field is known as the magnetic quantum number, m, and can take values from +S to −S, in integer steps. Hence for any given nucleus, there are a total of 2S + 1 angular momentum states.
The z-component of the angular momentum vector (Sz) is therefore Sz = mħ. The z-component of the magnetic moment is simply: μz = γSz = γmħ.
Spin energy in a magnetic field
Consider nuclei with a spin of one-half, like 1H, 13C or 19F. Each nucleus has two linearly independent spin states, with m = 1/2 or m = −1/2 (also referred to as spin-up and spin-down, or sometimes α and β spin states, respectively) for the z-component of spin. In the absence of a magnetic field, these states are degenerate; that is, they have the same energy. Hence the number of nuclei in these two states will be essentially equal at thermal equilibrium.
If a nucleus with spin is placed in a magnetic field, however, the two states no longer have the same energy as a result of the interaction between the nuclear magnetic dipole moment and the external magnetic field. The energy of a magnetic dipole moment μ in a magnetic field B0 is given by:
E = −μ · B0.
Usually the z-axis is chosen to be along B0, and the above expression reduces to:
E = −μz B0,
or alternatively:
E = −γ m ħ B0.
As a result, the different nuclear spin states have different energies in a non-zero magnetic field. In less formal language, we can talk about the two spin states of a spin-1/2 nucleus as being aligned either with or against the magnetic field. If γ is positive (true for most isotopes used in NMR) then m = 1/2 ("spin up") is the lower energy state.
The energy difference between the two states is: ΔE = γħB0,
and this results in a small population bias favoring the lower energy state in thermal equilibrium. With more spins pointing up than down, a net spin magnetization along the magnetic field B0 results.
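For a sense of scale, the sketch below evaluates this population bias for protons; the 9.4 T field and 298 K temperature are illustrative assumptions, not values taken from the text:

```python
import math

h = 6.62607015e-34        # Planck constant, J s
k_B = 1.380649e-23        # Boltzmann constant, J/K
gamma_1H = 267.52218e6    # proton gyromagnetic ratio, rad s^-1 T^-1

B0 = 9.4                  # tesla (a common "400 MHz" magnet)
T = 298.0                 # kelvin

delta_E = gamma_1H * (h / (2 * math.pi)) * B0   # Zeeman splitting dE = gamma*hbar*B0, J
ratio = math.exp(-delta_E / (k_B * T))          # Boltzmann ratio N_upper / N_lower
excess = (1 - ratio) / (1 + ratio)              # fractional spin excess (~3e-5 here)
print(f"dE = {delta_E:.3e} J, population excess = {excess:.2e}")
```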
Precession of the spin magnetization
A central concept in NMR is the precession of the spin magnetization around the magnetic field at the nucleus, with the angular frequency ω = γB, where ω = 2πν relates to the oscillation frequency ν and B is the magnitude of the field. This means that the spin magnetization, which is proportional to the sum of the spin vectors of nuclei in magnetically equivalent sites (the expectation value of the spin vector in quantum mechanics), moves on a cone around the B field. This is analogous to the precessional motion of the axis of a tilted spinning top around the gravitational field. In quantum mechanics, ω is the Bohr frequency of the ⟨Sx⟩ and ⟨Sy⟩ expectation values. Precession of non-equilibrium magnetization in the applied magnetic field B0 occurs with the Larmor frequency ν = γB0/2π, without change in the populations of the energy levels because energy is constant (time-independent Hamiltonian).
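The Larmor relation can be evaluated directly; the field strengths in this minimal sketch are illustrative choices, and the gyromagnetic ratio is the proton value (other nuclei differ):

```python
import math

gamma_1H = 267.52218e6   # proton gyromagnetic ratio, rad s^-1 T^-1
for B0 in (1.5, 9.4, 21.1):              # tesla; illustrative field strengths
    nu = gamma_1H * B0 / (2 * math.pi)   # Larmor frequency nu = gamma*B0/(2*pi)
    print(f"B0 = {B0:5.1f} T  ->  nu(1H) = {nu / 1e6:7.1f} MHz")
```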
Magnetic resonance and radio-frequency pulses
A perturbation of nuclear spin orientations from equilibrium will occur only when an oscillating magnetic field is applied whose frequency νrf sufficiently closely matches the Larmor precession frequency νL of the nuclear magnetization. The populations of the spin-up and -down energy levels then undergo Rabi oscillations, which are analyzed most easily in terms of precession of the spin magnetization around the effective magnetic field in a reference frame rotating with the frequency νrf. The stronger the oscillating field, the faster the Rabi oscillations or the precession around the effective field in the rotating frame. After a certain time on the order of 2–1000 microseconds, a resonant RF pulse flips the spin magnetization to the transverse plane, i.e. it makes an angle of 90° with the constant magnetic field B0 ("90° pulse"), while after a twice longer time, the initial magnetization has been inverted ("180° pulse"). It is the transverse magnetization generated by a resonant oscillating field which is usually detected in NMR, during application of the relatively weak RF field in old-fashioned continuous-wave NMR, or after the relatively strong RF pulse in modern pulsed NMR.
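As a rough illustration of the pulse-length arithmetic: the 90° pulse duration follows from the nutation rate γB1 around the RF field. The B1 amplitude below is an assumed value, not one quoted in the text:

```python
import math

gamma_1H = 267.52218e6   # proton gyromagnetic ratio, rad s^-1 T^-1
B1 = 5.9e-4              # assumed RF field amplitude, tesla (~0.59 mT)

t_90 = (math.pi / 2) / (gamma_1H * B1)   # time to precess 90 deg around B1
print(f"t_90 = {t_90 * 1e6:.1f} microseconds")   # ~10 us, within the 2-1000 us range above
```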
Chemical shielding
It might appear from the above that all nuclei of the same nuclide (and hence the same γ) would resonate at exactly the same frequency but this is not the case. The most important perturbation of the NMR frequency for applications of NMR is the "shielding" effect of the shells of electrons surrounding the nucleus. Electrons, similar to the nucleus, are also charged and rotate with a spin to produce a magnetic field opposite to the applied magnetic field. In general, this electronic shielding reduces the magnetic field at the nucleus (which is what determines the NMR frequency). As a result, the frequency required to achieve resonance is also reduced.
This shift in the NMR frequency due to the electronic molecular orbital coupling to the external magnetic field is called chemical shift, and it explains why NMR is able to probe the chemical structure of molecules, which depends on the electron density distribution in the corresponding molecular orbitals. If a nucleus in a specific chemical group is shielded to a higher degree by a higher electron density of its surrounding molecular orbitals, then its NMR frequency will be shifted "upfield" (that is, a lower chemical shift), whereas if it is less shielded by such surrounding electron density, then its NMR frequency will be shifted "downfield" (that is, a higher chemical shift).
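Chemical shifts are conventionally reported in parts per million of the spectrometer frequency, using the standard definition delta = (nu − nu_ref) / nu_ref × 10^6 (not written out in the text above). The frequencies in this sketch are made-up illustrative numbers for a 400 MHz proton spectrum:

```python
def chemical_shift_ppm(nu_hz: float, nu_ref_hz: float) -> float:
    """Chemical shift in ppm relative to a reference resonance."""
    return (nu_hz - nu_ref_hz) / nu_ref_hz * 1e6

nu_ref = 400.000000e6        # reference (e.g., TMS) resonance, Hz (assumed)
nu_peak = nu_ref + 2920.0    # a peak 2920 Hz downfield of the reference (assumed)
print(f"delta = {chemical_shift_ppm(nu_peak, nu_ref):.2f} ppm")   # ~7.30 ppm
```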
Unless the local symmetry of such molecular orbitals is very high (leading to "isotropic" shift), the shielding effect will depend on the orientation of the molecule with respect to the external field (B0). In solid-state NMR spectroscopy, magic angle spinning is required to average out this orientation dependence in order to obtain frequency values at the average or isotropic chemical shifts. This is unnecessary in conventional NMR investigations of molecules in solution, since rapid "molecular tumbling" averages out the chemical shift anisotropy (CSA). In this case, the "average" chemical shift (ACS) or isotropic chemical shift is often simply referred to as the chemical shift.
Radiation Damping
In 1949, Suryan first suggested that the interaction between a radiofrequency coil and a sample's bulk magnetization could explain why experimental observations of relaxation times differed from theoretical predictions. Building on this idea, Bloembergen and Pound further developed Suryan's hypothesis by mathematically integrating the Maxwell–Bloch equations, a process through which they introduced the concept of "radiation damping."
Radiation damping (RD) in Nuclear Magnetic Resonance (NMR) is an intrinsic phenomenon observed in many high-field NMR experiments, especially relevant in systems with high concentrations of nuclei like protons or fluorine. RD occurs when transverse bulk magnetization from the sample, following a radio frequency pulse, induces an electromagnetic field (emf) in the receiver coil of the NMR spectrometer. This generates an oscillating current and a non-linear induced transverse magnetic field which returns the spin system to equilibrium faster than other mechanisms of relaxation.
RD can result in line broadening and measurement of a shorter spin-lattice relaxation time (T1). For instance, a sample of water in a 400 MHz NMR spectrometer will have an apparent relaxation time of around 20 ms under radiation damping, whereas its intrinsic T1 is hundreds of milliseconds. This effect is often described using modified Bloch equations that include terms for radiation damping alongside the conventional relaxation terms. The longitudinal relaxation time of radiation damping (TRD) is given by equation [1].
TRD = 2 / (γ μ0 M0 η Q),  with Q = 2π ν0 L / R    [1]
where γ is the gyromagnetic ratio, μ0 is the magnetic permeability, M0 is the equilibrium magnetization per unit volume, η is the filling factor of the probe, which is the ratio of the probe coil volume to the sample volume enclosed, Q is the quality factor of the probe, and ν0, L, and R are the resonance frequency, inductance, and resistance of the coil, respectively. The line broadening due to radiation damping can be quantified by measuring TRD and using equation [2].
ΔνRD = 1 / (π TRD)    [2]
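As a numerical illustration of equations [1] and [2] as reconstructed above (the SI form of the damping rate), the following sketch evaluates TRD for a set of assumed probe parameters; M0, η and Q are illustrative guesses, not measured values:

```python
import math

mu0 = 4 * math.pi * 1e-7     # vacuum permeability, T m/A
gamma = 267.52218e6          # proton gyromagnetic ratio, rad s^-1 T^-1
M0 = 3.2e-2                  # equilibrium magnetization per unit volume, A/m (assumed)
eta = 0.5                    # probe filling factor (assumed)
Q = 250.0                    # probe quality factor (assumed)

T_rd = 2.0 / (mu0 * gamma * M0 * eta * Q)   # equation [1], seconds
dnu = 1.0 / (math.pi * T_rd)                # equation [2], line broadening in Hz
print(f"T_RD = {T_rd * 1e3:.1f} ms, broadening = {dnu:.0f} Hz")
```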
Radiation damping in NMR is influenced significantly by system parameters. It is notably more prominent in systems where the NMR probe possesses a high quality factor () and a high filling factor , resulting in a strong coupling between the probe coil and the sample. The phenomenon is also impacted by the concentration of the nuclei within the sample and their magnetic moments, which can intensify the effects of radiation damping. The strength of the magnetic field is inversely proportional to the lifetime of RD. The impact of radiation damping on NMR signals is multifaceted. It can accelerate the decay of the NMR signal faster than intrinsic relaxation processes would suggest. This acceleration can complicate the interpretation of NMR spectra by causing broadening of spectral lines, distorting multiplet structures, and introducing artifacts, especially in high-resolution NMR scenarios. Such effects make it challenging to obtain clear and accurate data without considering the influence of radiation damping.
To mitigate these effects, various strategies are employed in NMR spectroscopy, stemming mainly from hardware or software. Hardware modifications, including RF feed-circuits and Q-factor switches, reduce the feedback loop between the sample magnetization and the electromagnetic field induced by the coil. Other approaches, such as specially designed selective pulse sequences, manage the fields induced by radiation damping. All of these approaches aim to control and limit the disruptive effects of radiation damping during NMR experiments, and they eliminate RD to a fairly large extent.
Overall, understanding and managing radiation damping is crucial for obtaining high-quality NMR data, especially in modern high-field spectrometers where the effects can be significant due to the increased sensitivity and resolution.
Relaxation
The process of population relaxation refers to nuclear spins that return to thermodynamic equilibrium in the magnet. This process is also called T1, "spin-lattice" or "longitudinal magnetic" relaxation, where T1 refers to the mean time for an individual nucleus to return to its thermal equilibrium state of the spins. After the nuclear spin population has relaxed, it can be probed again, since it is in the initial, equilibrium (mixed) state.
The precessing nuclei can also fall out of alignment with each other and gradually stop producing a signal. This is called T2, "spin-spin" or transverse relaxation. Because of the difference in the actual relaxation mechanisms involved (for example, intermolecular versus intramolecular magnetic dipole-dipole interactions), T1 is usually (except in rare cases) longer than T2 (that is, slower spin-lattice relaxation, for example because of smaller dipole-dipole interaction effects). In practice, the value of T2*, which is the actually observed decay time of the observed NMR signal, or free induction decay (to 1/e of the initial amplitude immediately after the resonant RF pulse), also depends on the static magnetic field inhomogeneity, which may be quite significant. (There is also a smaller but significant contribution to the observed FID shortening from the RF inhomogeneity of the resonant pulse.) In the corresponding FT-NMR spectrum, meaning the Fourier transform of the free induction decay, the width of the NMR signal in frequency units is inversely related to the T2* time. Thus, a nucleus with a long T2* relaxation time gives rise to a very sharp NMR peak in the FT-NMR spectrum for a very homogeneous ("well-shimmed") static magnetic field, whereas nuclei with shorter T2* values give rise to broad FT-NMR peaks even when the magnet is shimmed well. Both T1 and T2 depend on the rate of molecular motions as well as the gyromagnetic ratios of both the resonating and their strongly interacting, next-neighbor nuclei that are not at resonance.
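The inverse relation between T2* and linewidth can be made concrete using the standard full width at half maximum of a Lorentzian line, 1/(π T2*); the T2* values in this sketch are illustrative:

```python
import math

# FWHM of the Lorentzian line from an exponentially decaying FID: 1/(pi*T2*)
for T2_star in (2.0, 0.2, 0.02):    # seconds (illustrative values)
    fwhm = 1.0 / (math.pi * T2_star)
    print(f"T2* = {T2_star * 1e3:6.0f} ms  ->  linewidth = {fwhm:6.2f} Hz")
```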
A Hahn echo decay experiment can be used to measure the dephasing time, as shown in the animation. The size of the echo is recorded for different spacings of the two pulses. This reveals the decoherence that is not refocused by the 180° pulse. In simple cases, an exponential decay is measured which is described by the T2 time.
NMR spectroscopy
NMR spectroscopy is one of the principal techniques used to obtain physical, chemical, electronic and structural information about molecules due to the chemical shift of the resonance frequencies of the nuclear spins in the sample. Peak splittings due to J- or dipolar couplings between nuclei are also useful. NMR spectroscopy can provide detailed and quantitative information on the functional groups, topology, dynamics and three-dimensional structure of molecules in solution and the solid state. Since the area under an NMR peak is usually proportional to the number of spins involved, peak integrals can be used to determine composition quantitatively.
Structure and molecular dynamics can be studied (with or without "magic angle" spinning (MAS)) by NMR of quadrupolar nuclei (that is, those with spin S > 1/2) even in the presence of magnetic "dipole-dipole" interaction broadening (or simply, dipolar broadening), which is always much smaller than the quadrupolar interaction strength because it is a magnetic vs. an electric interaction effect.
Additional structural and chemical information may be obtained by performing double-quantum NMR experiments for pairs of spin-1/2 nuclei or for quadrupolar nuclei such as 2H. Furthermore, nuclear magnetic resonance is one of the techniques that has been used to design quantum automata, and also to build elementary quantum computers.
Continuous-wave (CW) spectroscopy
In the first few decades of nuclear magnetic resonance, spectrometers used a technique known as continuous-wave (CW) spectroscopy, where the transverse spin magnetization generated by a weak oscillating magnetic field is recorded as a function of the oscillation frequency or static field strength B0. When the oscillation frequency matches the nuclear resonance frequency, the transverse magnetization is maximized and a peak is observed in the spectrum. Although NMR spectra could be, and have been, obtained using a fixed constant magnetic field and sweeping the frequency of the oscillating magnetic field, it was more convenient to use a fixed frequency source and vary the current (and hence magnetic field) in an electromagnet to observe the resonant absorption signals. This is the origin of the counterintuitive, but still common, "high field" and "low field" terminology for low frequency and high frequency regions, respectively, of the NMR spectrum.
As of 1996, CW instruments were still used for routine work because the older instruments were cheaper to maintain and operate, often operating at 60 MHz with correspondingly weaker (non-superconducting) electromagnets cooled with water rather than liquid helium. One radio coil operated continuously, sweeping through a range of frequencies, while another orthogonal coil, designed not to receive radiation from the transmitter, received signals from nuclei that reoriented in solution. As of 2014, low-end refurbished 60 MHz and 90 MHz systems were sold as FT-NMR instruments, and in 2010 the "average workhorse" NMR instrument was configured for 300 MHz.
CW spectroscopy is inefficient in comparison with Fourier analysis techniques (see below) since it probes the NMR response at individual frequencies or field strengths in succession. Since the NMR signal is intrinsically weak, the observed spectrum suffers from a poor signal-to-noise ratio. This can be mitigated by signal averaging, i.e. adding the spectra from repeated measurements. While the NMR signal is the same in each scan and so adds linearly, the random noise adds more slowly, proportional to the square root of the number of spectra added (see random walk). Hence the overall signal-to-noise ratio increases as the square root of the number of spectra measured. However, monitoring an NMR signal at a single frequency as a function of time may be better suited for kinetic studies than pulsed Fourier-transform NMR spectroscopy.
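The square-root growth of signal-to-noise with the number of averaged scans is easy to demonstrate numerically; everything in this sketch (signal shape, noise level) is an arbitrary illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1000)
signal = np.sin(2 * np.pi * 50.0 * t)        # fixed, repeatable "NMR" signal

for n_scans in (1, 16, 256):
    scans = signal + rng.normal(0.0, 5.0, size=(n_scans, t.size))  # noisy copies
    avg = scans.mean(axis=0)                 # coherent average of n scans
    noise = avg - signal                     # residual noise after averaging
    snr = signal.std() / noise.std()
    print(f"{n_scans:4d} scans: SNR = {snr:6.2f}")   # grows ~ sqrt(n_scans)
```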
Fourier-transform spectroscopy
Most applications of NMR involve full NMR spectra, that is, the intensity of the NMR signal as a function of frequency. Early attempts to acquire the NMR spectrum more efficiently than simple CW methods involved illuminating the target simultaneously with more than one frequency. A revolution in NMR occurred when short radio-frequency pulses began to be used, with a frequency centered at the middle of the NMR spectrum. In simple terms, a short pulse of a given "carrier" frequency "contains" a range of frequencies centered about the carrier frequency, with the range of excitation (bandwidth) being inversely proportional to the pulse duration, i.e. the Fourier transform of a short pulse contains contributions from all the frequencies in the neighborhood of the principal frequency. The restricted range of the NMR frequencies for most light spin-1/2 nuclei made it relatively easy to use short (1–100 microsecond) radio frequency pulses to excite the entire NMR spectrum.
Applying such a pulse to a set of nuclear spins simultaneously excites all the single-quantum NMR transitions. In terms of the net magnetization vector, this corresponds to tilting the magnetization vector away from its equilibrium position (aligned along the external magnetic field). The out-of-equilibrium magnetization vector then precesses about the external magnetic field vector at the NMR frequency of the spins. This oscillating magnetization vector induces a voltage in a nearby pickup coil, creating an electrical signal oscillating at the NMR frequency. This signal is known as the free induction decay (FID), and it contains the sum of the NMR responses from all the excited spins. In order to obtain the frequency-domain NMR spectrum (NMR absorption intensity vs. NMR frequency) this time-domain signal (intensity vs. time) must be Fourier transformed. Fortunately, the development of Fourier transform (FT) NMR coincided with the development of digital computers and the digital fast Fourier transform (FFT). Fourier methods can be applied to many types of spectroscopy.
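A minimal sketch of the FID-to-spectrum step: Fourier-transforming a synthetic decaying sinusoid (the offset frequency and decay constant are made-up values) recovers a peak at the resonance offset, with a width set by the decay:

```python
import numpy as np

dt = 1e-4                          # dwell time, s (10 kHz spectral width, assumed)
t = np.arange(0.0, 1.0, dt)
offset_hz, T2_star = 440.0, 0.1    # resonance offset and decay constant (assumed)
fid = np.exp(2j * np.pi * offset_hz * t) * np.exp(-t / T2_star)   # synthetic FID

spectrum = np.fft.fftshift(np.fft.fft(fid))                # frequency-domain signal
freqs = np.fft.fftshift(np.fft.fftfreq(t.size, d=dt))
peak = freqs[np.argmax(np.abs(spectrum))]
print(f"peak found at {peak:.1f} Hz")      # ~440 Hz; width ~ 1/(pi * T2_star)
```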
Richard R. Ernst was one of the pioneers of pulsed NMR and won a Nobel Prize in chemistry in 1991 for his work on Fourier Transform NMR and his development of multi-dimensional NMR spectroscopy.
Multi-dimensional NMR spectroscopy
The use of pulses of different durations, frequencies, or shapes in specifically designed patterns or pulse sequences allows production of a spectrum that contains many different types of information about the molecules in the sample. In multi-dimensional nuclear magnetic resonance spectroscopy, there are at least two pulses: one leads to the directly detected signal and the others affect the starting magnetization and spin state prior to it. The full analysis involves repeating the sequence with the pulse timings systematically varied in order to probe the oscillations of the spin system point by point in the time domain. Multidimensional Fourier transformation of the multidimensional time signal yields the multidimensional spectrum. In two-dimensional nuclear magnetic resonance spectroscopy (2D-NMR), there will be one systematically varied time period in the sequence of pulses, which will modulate the intensity or phase of the detected signals. In 3D-NMR, two time periods will be varied independently, and in 4D-NMR, three will be varied.
There are many such experiments. In some, fixed time intervals allow (among other things) magnetization transfer between nuclei and, therefore, the detection of the kinds of nuclear–nuclear interactions that allowed for the magnetization transfer. Interactions that can be detected are usually classified into two kinds. There are through-bond and through-space interactions. Through-bond interactions relate to structural connectivity of the atoms and provide information about which ones are directly connected to each other, connected by way of a single other intermediate atom, etc. Through-space interactions relate to actual geometric distances and angles, including effects of dipolar coupling and the nuclear Overhauser effect.
Although the fundamental concept of 2D-FT NMR was proposed by Jean Jeener from the Free University of Brussels at an international conference, this idea was largely developed by Richard Ernst, who won the 1991 Nobel prize in Chemistry for his work in FT NMR, including multi-dimensional FT NMR, and especially 2D-FT NMR of small molecules. Multi-dimensional FT NMR experiments were then further developed into powerful methodologies for studying molecules in solution, in particular for the determination of the structure of biopolymers such as proteins or even small nucleic acids.
In 2002 Kurt Wüthrich shared the Nobel Prize in Chemistry (with John Bennett Fenn and Koichi Tanaka) for his work with protein FT NMR in solution.
Solid-state NMR spectroscopy
This technique complements X-ray crystallography in that it is frequently applicable to molecules in an amorphous or liquid-crystalline state, whereas crystallography, as the name implies, is performed on molecules in a crystalline phase. In electronically conductive materials, the Knight shift of the resonance frequency can provide information on the mobile charge carriers. Though nuclear magnetic resonance is used to study the structure of solids, extensive atomic-level structural detail is more challenging to obtain in the solid state. Due to broadening by chemical shift anisotropy (CSA) and dipolar couplings to other nuclear spins, without special techniques such as MAS or dipolar decoupling by RF pulses, the observed spectrum is often only a broad Gaussian band for non-quadrupolar spins in a solid.
Professor Raymond Andrew at the University of Nottingham in the UK pioneered the development of high-resolution solid-state nuclear magnetic resonance. He was the first to report the introduction of the MAS (magic angle sample spinning; MASS) technique that allowed him to achieve spectral resolution in solids sufficient to distinguish between chemical groups with either different chemical shifts or distinct Knight shifts. In MASS, the sample is spun at several kilohertz around an axis that makes the so-called magic angle θm (which is ~54.74°, where 3cos²θm − 1 = 0) with respect to the direction of the static magnetic field B0; as a result of such magic angle sample spinning, the broad chemical shift anisotropy bands are averaged to their corresponding average (isotropic) chemical shift values. Correct alignment of the sample rotation axis as close as possible to θm is essential for cancelling out the chemical-shift anisotropy broadening. There are different angles for the sample spinning relative to the applied field for the averaging of electric quadrupole interactions and paramagnetic interactions, correspondingly ~30.6° and ~70.1°. In amorphous materials, residual line broadening remains since each segment is in a slightly different environment, therefore exhibiting a slightly different NMR frequency.
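The magic angle itself is fixed by the condition 3cos²θm − 1 = 0, as a one-line computation confirms:

```python
import math

# Solve 3*cos(theta)**2 - 1 = 0, i.e. theta = arccos(1/sqrt(3))
theta_m = math.degrees(math.acos(1.0 / math.sqrt(3.0)))
print(f"magic angle = {theta_m:.2f} degrees")   # ~54.74, as quoted above
```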
Line broadening or splitting by dipolar or J-couplings to nearby 1H nuclei is usually removed by radio-frequency pulses applied at the 1H frequency during signal detection. The concept of cross polarization developed by Sven Hartmann and Erwin Hahn was utilized in transferring magnetization from protons to less sensitive nuclei by M.G. Gibby, Alex Pines and John S. Waugh. Then, Jake Schaefer and Ed Stejskal demonstrated the powerful use of cross polarization under MAS conditions (CP-MAS) and proton decoupling, which is now routinely employed to measure high resolution spectra of low-abundance and low-sensitivity nuclei, such as carbon-13, silicon-29, or nitrogen-15, in solids. Significant further signal enhancement can be achieved by dynamic nuclear polarization from unpaired electrons to the nuclei, usually at temperatures near 110 K.
Sensitivity
Because the intensity of nuclear magnetic resonance signals and, hence, the sensitivity of the technique depends on the strength of the magnetic field, the technique has also advanced over the decades with the development of more powerful magnets. Advances made in audio-visual technology have also improved the signal-generation and processing capabilities of newer instruments.
As noted above, the sensitivity of nuclear magnetic resonance signals is also dependent on the presence of a magnetically susceptible nuclide and, therefore, either on the natural abundance of such nuclides or on the ability of the experimentalist to artificially enrich the molecules, under study, with such nuclides. The most abundant naturally occurring isotopes of hydrogen and phosphorus (for example) are both magnetically susceptible and readily useful for nuclear magnetic resonance spectroscopy. In contrast, carbon and nitrogen have useful isotopes but which occur only in very low natural abundance.
Other limitations on sensitivity arise from the quantum-mechanical nature of the phenomenon. For quantum states separated by energy equivalent to radio frequencies, thermal energy from the environment causes the populations of the states to be close to equal. Since incoming radiation is equally likely to cause stimulated emission (a transition from the upper to the lower state) as absorption, the NMR effect depends on an excess of nuclei in the lower states. Several factors can reduce sensitivity, including:
Increasing temperature, which evens out the Boltzmann population of states. Conversely, low temperature NMR can sometimes yield better results than room-temperature NMR, providing the sample remains liquid.
Saturation of the sample with energy applied at the resonant radiofrequency. This manifests in both CW and pulsed NMR; in the first case (CW) this happens by using too much continuous power that keeps the upper spin levels completely populated; in the second case (pulsed), each pulse (that is at least a 90° pulse) leaves the sample saturated, and four to five times the (longitudinal) relaxation time (5T1) must pass before the next pulse or pulse sequence can be applied. For single pulse experiments, shorter RF pulses that tip the magnetization by less than 90° can be used, which loses some intensity of the signal but allows for shorter recycle delays. The optimum there is called the Ernst angle, after the Nobel laureate (see the sketch after this list). Especially in solid-state NMR, or in samples containing very few nuclei with spin (diamond with the natural 1% of carbon-13 is especially troublesome here), the longitudinal relaxation times can be of the order of hours, while for proton NMR they are often in the range of one second.
Non-magnetic effects, such as electric-quadrupole coupling of spin-1 and spin-3/2 nuclei with their local environment, which broaden and weaken absorption peaks. 14N, an abundant spin-1 nucleus, is difficult to study for this reason. High-resolution NMR instead probes molecules using the rarer 15N isotope, which has spin-1/2.
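Following up on the Ernst angle mentioned in the list above: the flip angle that maximizes the steady-state signal for a given repetition time TR and relaxation time T1 satisfies cos(αE) = exp(−TR/T1). A minimal sketch with assumed TR and T1 values:

```python
import math

def ernst_angle_deg(TR: float, T1: float) -> float:
    """Optimal flip angle (degrees) from cos(alpha_E) = exp(-TR/T1)."""
    return math.degrees(math.acos(math.exp(-TR / T1)))

# Illustrative timing choices, not values from the text:
print(f"TR = 1.0 s, T1 = 1 s  ->  alpha_E = {ernst_angle_deg(1.0, 1.0):.1f} deg")
print(f"TR = 0.1 s, T1 = 1 s  ->  alpha_E = {ernst_angle_deg(0.1, 1.0):.1f} deg")
```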
Isotopes
Many isotopes of chemical elements can be used for NMR analysis.
Commonly used nuclei:
1H, the most commonly used spin-1/2 nucleus in NMR investigations, has been studied using many forms of NMR. Hydrogen is highly abundant, especially in biological systems. It is the nucleus providing the strongest NMR signal (apart from 3H, which is not commonly used due to its instability and radioactivity). Proton NMR has a narrow chemical-shift range but gives sharp signals in solution state. Fast acquisition of quantitative spectra (with peak integrals in stoichiometric ratios) is possible due to the short relaxation time. The 1H nucleus has provided the sole diagnostic signal for clinical magnetic resonance imaging (MRI).
2H, a spin-1 nucleus, is commonly utilized to provide a signal-free medium in the form of deuterated solvents for proton NMR, to avoid signal interference from hydrogen-containing solvents in measurement of NMR of solutes. It is also used in determining the behavior of lipids in lipid membranes and other solids or liquid crystals, as it is a relatively non-perturbing label which can selectively replace 1H. Alternatively, 2H can be detected in media specially labeled with 2H. Deuterium resonance is commonly used in high-resolution NMR spectroscopy to monitor drift of the magnetic field strength (lock) and to monitor the homogeneity of the external magnetic field.
3He is very sensitive to NMR. It exists at a very low concentration in natural helium and can be purified from 4He. It is used mainly in studies of endohedral fullerenes, where its chemical inertness is beneficial to ascertaining the structure of the entrapping fullerene.
11B is more sensitive than 10B and yields sharper signals. The nuclear spin of 10B is 3 and that of 11B is 3/2. Quartz tubes must be used, because borosilicate glass interferes with measurement.
13C, a spin-1/2 nucleus, is widely used, despite its relative paucity in naturally occurring carbon (approximately 1.1%). It is stable to nuclear decay. Since there is a low percentage of 13C in natural carbon, spectrum acquisition on samples which have not been enriched in 13C takes a long time. It is frequently used for labeling of compounds in synthetic and metabolic studies. It has low sensitivity and a moderately wide chemical shift range, and yields sharp signals. The low natural percentage makes it useful by preventing spin–spin couplings and makes the spectrum appear less crowded. Slow relaxation of 13C nuclei not bonded to hydrogen means that spectra are not integrable unless long acquisition times are used.
14N, spin-1, is a medium-sensitivity nucleus with a wide chemical shift range. Its large quadrupole moment interferes with the acquisition of high-resolution spectra, limiting its usefulness to smaller molecules and functional groups with a high degree of symmetry, such as the head-groups of lipids.
15N, spin-1/2, is relatively commonly used. It can be used for isotopically labeling compounds. It is very insensitive but yields sharp signals. Its low percentage in natural nitrogen, together with its low sensitivity, requires high concentrations or expensive isotope enrichment.
17O, spin-5/2, has low sensitivity, very low natural abundance (0.037%), and a wide chemical shift range (up to 2000 ppm). Its quadrupole moment causes line broadening. It is used in metabolic and biochemical studies of chemical equilibria.
19F, spin-1/2, is relatively commonly measured. It is sensitive, yields sharp signals, and has a wide chemical shift range.
31P, spin-1/2, makes up 100% of natural phosphorus. It has medium sensitivity and a wide chemical shift range, and yields sharp lines. Spectra tend to have a moderate level of noise. It is used in biochemical studies and in coordination chemistry with phosphorus-containing ligands.
35Cl and 37Cl, spin-3/2, give broad signals. 35Cl is significantly more sensitive and is preferred over 37Cl despite its slightly broader signal. Organic chlorides yield very broad signals, so use is limited to inorganic and ionic chlorides and very small organic molecules.
43Ca, spin-7/2, has a relatively small quadrupole moment, is moderately sensitive, and has very low natural abundance. It is used in biochemistry to study calcium binding to DNA, proteins, etc.
195Pt, spin-1/2, is used in studies of catalysts and complexes.
Other nuclei are usually used in the studies of their complexes and chemical bonding, or to detect the presence of the element.
Applications
NMR is extensively used in medicine in the form of magnetic resonance imaging. NMR is widely used in organic chemistry and industrially mainly for analysis of chemicals. The technique is also used to measure the ratio between water and fat in foods, monitor the flow of corrosive fluids in pipes, or to study molecular structures such as catalysts.
Medicine
The application of nuclear magnetic resonance best known to the general public is magnetic resonance imaging for medical diagnosis and magnetic resonance microscopy in research settings. However, it is also widely used in biochemical studies, notably in NMR spectroscopy such as proton NMR, carbon-13 NMR, deuterium NMR and phosphorus-31 NMR. Biochemical information can also be obtained from living tissue (e.g. human brain tumors) with the technique known as in vivo magnetic resonance spectroscopy or chemical shift NMR microscopy.
These spectroscopic studies are possible because nuclei are surrounded by orbiting electrons, which are charged particles that generate small, local magnetic fields that add to or subtract from the external magnetic field, and so will partially shield the nuclei. The amount of shielding depends on the exact local environment. For example, a hydrogen bonded to an oxygen will be shielded differently from a hydrogen bonded to a carbon atom. In addition, two hydrogen nuclei can interact via a process known as spin–spin coupling, if they are on the same molecule, which will split the lines of the spectra in a recognizable way.
As one of the two major spectroscopic techniques used in metabolomics, NMR is used to generate metabolic fingerprints from biological fluids to obtain information about disease states or toxic insults.
Chemistry
The aforementioned chemical shift came as a disappointment to physicists who had hoped that the resonance frequency of each nuclear species would be constant in a given magnetic field. But about 1951, chemist S. S. Dharmatti pioneered a way to determine the structure of many compounds by studying the peaks of nuclear magnetic resonance spectra. It can be a very selective technique, distinguishing among many atoms within a molecule or collection of molecules of very similar type but which differ only in terms of their local chemical environment. NMR spectroscopy is used to unambiguously identify known and novel compounds, and as such, is usually required by scientific journals for identity confirmation of synthesized new compounds. See the articles on carbon-13 NMR and proton NMR for detailed discussions.
A chemist can determine the identity of a compound by comparing the observed nuclear precession frequencies to known or predicted frequencies. Further structural data can be elucidated by observing spin–spin coupling, a process by which the precession frequency of a nucleus can be influenced by the spin orientation of a chemically bonded nucleus. Spin–spin coupling is easily observed in NMR of hydrogen-1 ( NMR) since its natural abundance is nearly 100%.
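The scale of these frequency comparisons is easy to check numerically. The following minimal Python sketch computes a proton Larmor frequency and converts a chemical-shift difference into Hz; the 9.4 T field and the 1.2 ppm separation are illustrative assumptions, not values tied to any particular instrument.

```python
import math

GAMMA_1H = 267.522e6  # 1H gyromagnetic ratio in rad s^-1 T^-1 (CODATA value)

def larmor_frequency_hz(b0_tesla, gamma=GAMMA_1H):
    """Larmor precession frequency: nu0 = gamma * B0 / (2 pi)."""
    return gamma * b0_tesla / (2.0 * math.pi)

def shift_separation_hz(delta_ppm, nu0_hz):
    """A chemical-shift difference in ppm expressed in Hz; scales with B0."""
    return delta_ppm * 1e-6 * nu0_hz

nu0 = larmor_frequency_hz(9.4)  # protons in a 9.4 T magnet
print(f"1H Larmor frequency at 9.4 T: {nu0 / 1e6:.1f} MHz")   # ~400 MHz

# Two hypothetical protons 1.2 ppm apart resonate ~480 Hz apart here,
# while a three-bond 1H-1H J-coupling of ~7 Hz is field independent.
print(f"1.2 ppm separation: {shift_separation_hz(1.2, nu0):.0f} Hz")
```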
Because the nuclear magnetic resonance timescale is rather slow, compared to other spectroscopic methods, changing the temperature of a T2* experiment can also give information about fast reactions, such as the Cope rearrangement or about structural dynamics, such as ring-flipping in cyclohexane. At low enough temperatures, a distinction can be made between the axial and equatorial hydrogens in cyclohexane.
An example of nuclear magnetic resonance being used in the determination of a structure is that of buckminsterfullerene (often called "buckyballs", composition C60). This now famous form of carbon has 60 carbon atoms forming a sphere. The carbon atoms are all in identical environments and so should experience the same internal magnetic field. Unfortunately, buckminsterfullerene contains no hydrogen, and so 13C nuclear magnetic resonance has to be used. 13C spectra require longer acquisition times since carbon-13 is not the common isotope of carbon (unlike hydrogen, where 1H is the common isotope). However, in 1990 the spectrum was obtained by R. Taylor and co-workers at the University of Sussex and was found to contain a single peak, confirming the unusual structure of buckminsterfullerene.
Battery
Nuclear magnetic resonance (NMR) is a powerful analytical tool for investigating the local structure and ion dynamics in battery materials. NMR provides unique insights into the short-range atomic environments within complex electrochemical systems such as batteries. Electrochemical processes rely on redox reactions, in which 7Li or 23Na are often involved. Accordingly, their NMR spectra are affected by the electronic structure of the material, which makes NMR an essential technique for probing the behavior of battery components during operation.
Applications of NMR in Battery Research
Electrodes and Structural Transformations: During charge and discharge cycles, the materials in the anodes and cathodes undergo local structural transformations. These changes can be monitored using NMR by analyzing the signal's line shape, line intensity, and chemical shift. These transformations are often not captured by X-ray diffraction techniques (providing long-range information), making NMR indispensable for understanding the underlying mechanisms of energy storage.
Metal Dendrite Formation: One of the challenges in lithium and sodium-based batteries is the formation of metal dendrites, which can lead to short circuits and catastrophic battery failure. In Situ NMR allows researchers to observe the formation of lithium or sodium dendrites in real time during battery cycling. Varying the cycling rates can also quantify the effect on dendrite formation, aiding in the development of strategies to suppress dendrite growth and reduce the risk of short circuits.
Solid Electrolytes and Interfaces: Solid electrolytes, a key focus of next-generation battery research, often suffer from limited ion diffusion rates. NMR techniques can measure diffusivity in solid electrolytes, helping researchers understand how to enhance ion conductivity. Furthermore, NMR is used to study the Solid Electrolyte Interface (SEI), a layer that forms on the electrode surface and thus influences battery stability. Solid-state NMR (ssNMR) is particularly valuable for characterizing the composition and ion dynamics within the SEI layer due to its nondestructive testing capabilities.
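As an illustration of such a diffusion measurement, the sketch below evaluates the standard Stejskal–Tanner expression for pulsed-field-gradient (PFG) echo attenuation; the gradient strength, timings, and diffusion coefficient are hypothetical values chosen to resemble a slow solid-electrolyte diffusor.

```python
import math

GAMMA_7LI = 103.977e6  # 7Li gyromagnetic ratio in rad s^-1 T^-1

def echo_attenuation(d_coeff, g, delta, big_delta, gamma=GAMMA_7LI):
    """Stejskal-Tanner relation: S/S0 = exp(-b * D), with
    b = (gamma * g * delta)^2 * (Delta - delta / 3)."""
    b = (gamma * g * delta) ** 2 * (big_delta - delta / 3.0)
    return math.exp(-b * d_coeff)

# Hypothetical PFG parameters: 10 T/m gradient, 2 ms gradient pulses,
# 100 ms diffusion time, D = 1e-12 m^2/s.
ratio = echo_attenuation(d_coeff=1e-12, g=10.0, delta=2e-3, big_delta=100e-3)
print(f"S/S0 = {ratio:.2f}")  # ~0.65, a readily measurable attenuation
```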
In Situ and Ex Situ NMR Techniques
NMR technology can be divided into two main experimental approaches in battery research: In Situ NMR and Ex Situ NMR. Each offers unique advantages depending on the research goals.
In Situ NMR: In situ NMR enables real-time observation of chemical and structural changes in batteries while they are operating. This is particularly important for studying transient species that only exist under working conditions, such as certain intermediate reaction products. In situ NMR has become a critical tool for understanding processes like lithium and sodium plating and dendrite formation during battery cycling.
Ex Situ NMR: Ex situ NMR is used after the battery has been disassembled, allowing for high-resolution analysis of battery components. It is often employed to study a wide range of nuclei, including 1H, 2H, 6Li, 7Li, 13C, 15N, 17O, 19F, 25Mg, 29Si, 31P, 51V, 133Cs. Many of these nuclei are quadrupolar or present in low abundance, making them difficult to detect. However, ex situ NMR benefits from better sensitivity and narrower linewidths, which can be further improved by employing larger sample volumes, higher magnetic fields, or magic angle spinning (MAS).
Purity determination (w/w NMR)
While NMR is primarily used for structural determination, it can also be used for purity determination, provided that the structure and molecular weight of the compound are known. This technique requires the use of an internal standard of known purity. Typically this standard will have a high molecular weight to facilitate accurate weighing, but relatively few protons so as to give a clear peak for later integration, e.g. 1,2,4,5-tetrachloro-3-nitrobenzene. Accurately weighed portions of the standard and sample are combined and analysed by NMR. Suitable peaks from both compounds are selected, and the purity of the sample is determined via the following equation:
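$$ \text{Purity}_{\mathrm{spl}}\ (\%\ \mathrm{w/w}) = \frac{w_{\mathrm{std}}}{w_{\mathrm{spl}}} \times \frac{n[\mathrm{H}]_{\mathrm{spl}}}{n[\mathrm{H}]_{\mathrm{std}}} \times \frac{MW_{\mathrm{spl}}}{MW_{\mathrm{std}}} \times P $$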
Where:
wstd: weight of internal standard
wspl: weight of sample
n[H]std: the integrated area of the peak selected for comparison in the standard, corrected for the number of protons in that functional group
n[H]spl: the integrated area of the peak selected for comparison in the sample, corrected for the number of protons in that functional group
MWstd: molecular weight of standard
MWspl: molecular weight of sample
P: purity of internal standard
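A minimal worked example of this relation in Python follows; the weights, integrals, and standard purity below are hypothetical numbers (the standard's molecular weight corresponds to 1,2,4,5-tetrachloro-3-nitrobenzene).

```python
def purity_wt_pct(w_std, w_spl, nH_std, nH_spl, mw_std, mw_spl, p_std):
    """w/w purity (%) of a sample by internal-standard qNMR.

    nH_std and nH_spl are integrated peak areas already divided by the
    number of protons giving rise to the chosen peak; p_std is the
    purity (%) of the internal standard."""
    return (w_std / w_spl) * (nH_spl / nH_std) * (mw_spl / mw_std) * p_std

# Hypothetical run: 20.1 mg of standard (MW 260.89, 99.5 % pure)
# weighed against 19.8 mg of sample (MW 180.16).
print(purity_wt_pct(w_std=20.1, w_spl=19.8,
                    nH_std=1.000, nH_spl=1.405,
                    mw_std=260.89, mw_spl=180.16,
                    p_std=99.5))  # ~98 % w/w
```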
Non-destructive testing
Nuclear magnetic resonance is extremely useful for analyzing samples non-destructively. Radio-frequency magnetic fields easily penetrate many types of matter and anything that is not highly conductive or inherently ferromagnetic. For example, various expensive biological samples, such as nucleic acids, including RNA and DNA, or proteins, can be studied using nuclear magnetic resonance for weeks or months before using destructive biochemical experiments. This also makes nuclear magnetic resonance a good choice for analyzing dangerous samples.
Segmental and molecular motions
In addition to providing static information on molecules by determining their 3D structures, one of the remarkable advantages of NMR over X-ray crystallography is that it can be used to obtain important dynamic information. This is due to the orientation dependence of the chemical-shift, dipole-coupling, or electric-quadrupole-coupling contributions to the instantaneous NMR frequency in an anisotropic molecular environment. When the molecule or segment containing the NMR-observed nucleus changes its orientation relative to the external field, the NMR frequency changes, which can result in changes in one- or two-dimensional spectra or in the relaxation times, depending on the correlation time and amplitude of the motion.
Data acquisition in the petroleum industry
Another use for nuclear magnetic resonance is data acquisition in the petroleum industry for petroleum and natural gas exploration and recovery. Initial research in this domain began in the 1950s; however, the first commercial instruments were not released until the early 1990s. A borehole is drilled into rock and sedimentary strata, into which nuclear magnetic resonance logging equipment is lowered. Nuclear magnetic resonance analysis of these boreholes is used to measure rock porosity, estimate permeability from the pore size distribution, and identify pore fluids (water, oil and gas). These instruments are typically low-field NMR spectrometers.
NMR logging, a subcategory of electromagnetic logging, measures the induced magnetic moment of hydrogen nuclei (protons) contained within the fluid-filled pore space of porous media (reservoir rocks). Unlike conventional logging measurements (e.g., acoustic, density, neutron, and resistivity), which respond to both the rock matrix and fluid properties and are strongly dependent on mineralogy, NMR-logging measurements respond to the presence of hydrogen. Because hydrogen atoms primarily occur in pore fluids, NMR effectively responds to the volume, composition, viscosity, and distribution of these fluids, for example oil, gas or water. NMR logs provide information about the quantities of fluids present, the properties of these fluids, and the sizes of the pores containing these fluids. From this information, it is possible to infer or estimate:
The volume (porosity) and distribution (permeability) of the rock pore space
Rock composition
Type and quantity of fluid hydrocarbons
Hydrocarbon producibility
The basic core and log measurement is the T2 decay, presented as a distribution of T2 amplitudes versus time at each sample depth, typically from 0.3 ms to 3 s. The T2 decay is further processed to give the total pore volume (the total porosity) and pore volumes within different ranges of T2. The most common volumes are the bound fluid and free fluid. A permeability estimate is made using a transform such as the Timur-Coates or SDR permeability transforms. By running the log with different acquisition parameters, direct hydrocarbon typing and enhanced diffusion are possible.
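As a sketch of how such transforms are applied, the functions below implement commonly quoted forms of the Timur–Coates and SDR estimates; the coefficients and input values are illustrative defaults that in practice are re-calibrated against core data for each formation.

```python
def timur_coates_perm_md(phi_pu, ffi, bvi, c=10.0):
    """Timur-Coates estimate, k (mD) = (phi/C)^4 * (FFI/BVI)^2.
    phi_pu is NMR total porosity in porosity units (%); FFI and BVI are
    the free- and bound-fluid volumes obtained by partitioning the T2
    distribution at a cutoff (only their ratio matters here)."""
    return (phi_pu / c) ** 4 * (ffi / bvi) ** 2

def sdr_perm_md(phi_frac, t2lm_ms, a=4.0):
    """SDR estimate, k (mD) = a * phi^4 * T2lm^2, with porosity as a
    fraction and T2lm the logarithmic mean of the T2 distribution in ms."""
    return a * phi_frac ** 4 * t2lm_ms ** 2

# Hypothetical clean-sandstone interval:
print(timur_coates_perm_md(phi_pu=24.0, ffi=0.18, bvi=0.06))  # ~300 mD
print(sdr_perm_md(phi_frac=0.24, t2lm_ms=150.0))              # ~300 mD
```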
Flow probes for NMR spectroscopy
Real-time applications of NMR in liquid media have been developed using specifically designed flow probes (flow cell assemblies) which can replace standard tube probes. This has enabled techniques that incorporate the use of high-performance liquid chromatography (HPLC) or other continuous-flow sample introduction devices. These flow probes have been used in various online process-monitoring applications, such as monitoring chemical reactions and the degradation of environmental pollutants.
Process control
NMR has now entered the arena of real-time process control and process optimization in oil refineries and petrochemical plants. Two different types of NMR analysis are utilized to provide real-time analysis of feeds and products in order to control and optimize unit operations. Time-domain NMR (TD-NMR) spectrometers operating at low field (2–20 MHz for 1H) yield free induction decay data that can be used to determine absolute hydrogen content values, rheological information, and component composition. These spectrometers are used in mining, polymer production, cosmetics and food manufacturing as well as coal analysis. High-resolution FT-NMR spectrometers operating in the 60 MHz range with shielded permanent magnet systems yield high-resolution NMR spectra of refinery and petrochemical streams. The variation observed in these spectra with changing physical and chemical properties is modeled using chemometrics to yield predictions on unknown samples. The prediction results are provided to control systems via analogue or digital outputs from the spectrometer.
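The chemometric step can be sketched with a partial least squares (PLS) regression, a workhorse of this kind of spectral calibration. The snippet below uses scikit-learn's PLSRegression on simulated stand-in spectra; the data, sizes, and component count are all illustrative assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in data: 200 spectra of 500 points each, with a stream property
# (e.g. an octane-like number) encoded linearly in the spectra plus noise.
X = rng.normal(size=(200, 500))
loadings = rng.normal(size=500)
y = X @ loadings + rng.normal(scale=0.1, size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Calibrate on known samples, then predict held-out "unknowns".
model = PLSRegression(n_components=10)
model.fit(X_train, y_train)
print("R^2 on held-out spectra:", model.score(X_test, y_test))
```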
Earth's field NMR
In the Earth's magnetic field, NMR frequencies are in the audio frequency range, or the very low frequency and ultra low frequency bands of the radio frequency spectrum. Earth's field NMR (EFNMR) is typically stimulated by applying a relatively strong dc magnetic field pulse to the sample and, after the end of the pulse, analyzing the resulting low frequency alternating magnetic field that occurs in the Earth's magnetic field due to free induction decay (FID). These effects are exploited in some types of magnetometers, EFNMR spectrometers, and MRI imagers. Their inexpensive portable nature makes these instruments valuable for field use and for teaching the principles of NMR and MRI.
An important feature of EFNMR spectrometry compared with high-field NMR is that some aspects of molecular structure can be observed more clearly at low fields and low frequencies, whereas other aspects observable at high fields are not observable at low fields. This is because:
Electron-mediated heteronuclear J-couplings (spin–spin couplings) are field independent, producing clusters of two or more frequencies separated by several Hz, which are more easily observed in a fundamental resonance of about 2 kHz. "Indeed it appears that enhanced resolution is possible due to the long spin relaxation times and high field homogeneity which prevail in EFNMR."
Chemical shifts of several ppm are clearly separated in high-field NMR spectra, but have separations of only a few millihertz at proton EFNMR frequencies, so are usually not resolved; the arithmetic sketch below illustrates the scales involved.
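A few lines of arithmetic make these scales concrete; the 50 µT field is a typical mid-latitude magnitude taken here as an assumption.

```python
import math

GAMMA_1H = 267.522e6  # rad s^-1 T^-1
B_EARTH = 50e-6       # ~50 microtesla

f0 = GAMMA_1H * B_EARTH / (2 * math.pi)
print(f"1H EFNMR frequency: {f0:.0f} Hz")            # ~2.1 kHz

# A 1 ppm chemical-shift difference at this carrier is only ~2 mHz,
# whereas a heteronuclear J-coupling of several Hz is field independent.
print(f"1 ppm at Earth's field: {f0 * 1e-6 * 1e3:.1f} mHz")
```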
Zero field NMR
In zero field NMR all magnetic fields are shielded such that magnetic fields below 1 nT (nanotesla) are achieved and the nuclear precession frequencies of all nuclei are close to zero and indistinguishable. Under those circumstances the observed spectra are no longer dictated by chemical shifts but primarily by J-coupling interactions, which are independent of the external magnetic field. Since inductive detection schemes are not sensitive at very low frequencies, on the order of the J-couplings (typically between 0 and 1000 Hz), alternative detection schemes are used. Specifically, sensitive magnetometers turn out to be good detectors for zero field NMR. A zero magnetic field environment does not provide any polarization, hence it is the combination of zero field NMR with hyperpolarization schemes that makes zero field NMR desirable.
Quantum computing
NMR quantum computing uses the spin states of nuclei within molecules as qubits. NMR differs from other implementations of quantum computers in that it uses an ensemble of systems; in this case, molecules.
Magnetometers
Various magnetometers use NMR effects to measure magnetic fields, including proton precession magnetometers (PPM) (also known as proton magnetometers), and Overhauser magnetometers.
SNMR
Surface magnetic resonance (or magnetic resonance sounding) is based on the principle of nuclear magnetic resonance (NMR) and measurements can be used to indirectly estimate the water content of saturated and unsaturated zones in the earth's subsurface. SNMR is used to estimate aquifer properties, including quantity of water contained in the aquifer, porosity, and hydraulic conductivity.
Makers of NMR equipment
Major NMR instrument makers include Thermo Fisher Scientific, Magritek, Oxford Instruments, Bruker, Spinlock SRL, General Electric, JEOL, Kimble Chase, Philips, Siemens AG, and formerly Agilent Technologies (who acquired Varian, Inc.).
See also
Benchtop NMR spectrometer
Larmor equation (Not to be confused with Larmor formula).
Least-squares spectral analysis
Liquid nitrogen
NMR crystallography
NMR spectra database
Nuclear magnetic resonance in porous media
Nuclear quadrupole resonance (NQR)
Protein dynamics
Rabi cycle
Relaxometry
Spin echo
Structure-based assignment
References
Further reading
K.V.R. Chary, Girjesh Govil (2008) NMR in Biological Systems: From Molecules to Human. Springer. .
The Feynman Lectures on Physics Vol. II Ch. 35: Paramagnetism and Magnetic Resonance
External links
Tutorial
NMR/MRI tutorial
NMR Library NMR Concepts
NMR Course Notes
Downloadable NMR exercises as PowerPoint (english/german) and PDF (german only) files
Animations and simulations
A free interactive simulation of NMR principles
Interactive simulation on the Bloch sphere
Video
introduction to NMR and MRI
Richard Ernst, NL – Developer of multidimensional NMR techniques Freeview video provided by the Vega Science Trust.
'An Interview with Kurt Wuthrich' Freeview video by the Vega Science Trust (Wüthrich was awarded a Nobel Prize in Chemistry in 2002 "for his development of nuclear magnetic resonance spectroscopy for determining the three-dimensional structure of biological macromolecules in solution").
The Nobel Prize Winner - Documentary about Richard R. Ernst by Lukas Schwarzenbacher and Susanne Schmid (Swiss German with English subtitles)
Other
Spotlight on nuclear magnetic resonance: a timeless technique
Scientific techniques
Articles containing video clips
Biomagnetics | Nuclear magnetic resonance | [
"Physics",
"Chemistry",
"Biology"
] | 12,507 | [
"Biomagnetics",
"Nuclear magnetic resonance",
"Nuclear physics"
] |
25,113,859 | https://en.wikipedia.org/wiki/Remote%20visual%20inspection | Remote Visual Inspection or Remote Digital Video Inspection, also known as RVI or RDVI, is a form of visual inspection which uses visual aids including video technology to allow an inspector to look at objects and materials from a distance because the objects are inaccessible or are in dangerous environments. RVI is also a specialty branch of nondestructive testing (NDT).
Purposes
Technologies include, but are not limited to, rigid or flexible borescopes, videoscopes, fiberscopes, push cameras, pan/tilt/zoom cameras and robotic crawlers. These technologies are commonly used where distance, angle of view and limited lighting may impair direct visual examination, or where access is limited by time, financial constraints or atmospheric hazards.
RVI/RDVI is commonly used as a predictive maintenance or regularly scheduled maintenance tool to assess the "health" and operability of fixed and portable assets. RVI/RDVI enables greater inspection coverage, inspection repeatability and data comparison.
The "remote" portion of RVI/RDVI refers to the characterization of the operator not entering the inspection area due to physical size constraints or potential safety issues related to the inspection environment.
Applications
Typical applications for RVI include:
Aircraft engines (turbofan, turbojet, turboshaft)
Aircraft fuselage
Turbines for power generation (steam and gas)
Process piping (oil and gas, pharmaceutical, food preparation)
Nuclear Power Stations - contaminated areas
Any areas where it is too dangerous, small or costly to view directly
References
Nondestructive testing
Maintenance
Tests | Remote visual inspection | [
"Materials_science",
"Engineering"
] | 306 | [
"Maintenance",
"Nondestructive testing",
"Materials testing",
"Mechanical engineering"
] |
25,114,007 | https://en.wikipedia.org/wiki/ASCE%20Library | ASCE Library is an online full-text civil engineering database providing the contents of peer-reviewed journals, proceedings, e-books, and standards published by the American Society of Civil Engineers. The Library offers free access to abstracts of academic journal articles, proceedings papers, e-books, and standards, as well as many e-book chapters. Access to the content is available either by subscription or pay-per-view for individual articles or chapters. E-books and standards can be purchased and downloaded in their entirety. Most references cited by journal articles and proceedings papers in the library are linked to original sources using CrossRef. Linking provides researchers with the ability to link from reference citations to the bibliographic records of other scientific and technical publishers' articles. ASCE also offers librarians usage statistics that are compliant with the COUNTER Code of Practice for Journals and Databases. All articles are in PDF (Portable Document Format) and many journal articles are also available in HTML format.
History
ASCE Journals first appeared online in the Fall of 2000. The online collection was designated ASCE Research Library in the Fall of 2004 with the addition of ASCE Proceedings papers. In June 2012, the platform migrated from Scitation, to Literatum managed by Atypon and the site was renamed ASCE Library. In June 2013, e-books and standards were added with the ability to download individual book chapters as well as complete books. In 2016, ASCE added access to premium content from Civil Engineering Magazine.
Coverage
The ASCE Library offers online access to more than 150,000 technical and professional papers. It encompasses the full text of papers published in 35 journals (as of 2019) from 1983 to the present, conference proceedings from 2000 to the present, and full text of ASCE standards and e-books. The ASCE Library is supplemented by Civil Engineering Database (CEDB), a free bibliographic database offering records of all publications by American Society of Civil Engineers since 1872. CEDB is updated at the end of each month. The update includes the journal content for the following month and all other content published in the previous month.
Journals
Coverage includes the following Journals:
ASCE OPEN: Multidisciplinary Journal of Civil Engineering
International Journal of Geomechanics
Journal of Architectural Engineering
Journal of Aerospace Engineering
Journal of Bridge Engineering
Journal of Composites for Construction
Journal of Performance of Constructed Facilities
Journal of Civil Engineering Education
Journal of Construction Engineering & Management
Journal of Computing in Civil Engineering
Journal of Cold Regions Engineering
Journal of Environmental Engineering
Journal of Engineering Mechanics
Journal of Energy Engineering
Journal of Geotechnical & Geoenvironmental Engineering
Journal of Hazardous, Toxic & Radioactive Waste
Journal of Hydrologic Engineering
Journal of Hydraulic Engineering
Journal of Irrigation & Drainage Engineering
Journal of Infrastructure Systems
Journal of Legal Affairs and Dispute Resolution in Engineering and Construction
Journal of Management in Engineering
Journal of Materials in Civil Engineering
Journal of Pipeline Systems Engineering and Practice
Journal of Structural Engineering
Journal of Surveying Engineering
Journal of Transportation Engineering, Part A: Systems
Journal of Transportation Engineering, Part B: Pavements
Journal of Urban Planning and Development
Journal of Water Resources Planning and Management
Journal of Waterway, Port, Coastal, and Ocean Engineering
Natural Hazards Review
Practice Periodical on Structural Design & Construction
ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems, Part A: Civil Engineering; Part B: Mechanical Engineering
Proceedings
Includes more than 750 proceedings titles and 70,000 technical papers, as well as a complete archive of conference proceedings papers from 2000 to present (selected earlier conference proceedings are also included).
See also
List of academic databases and search engines
References
External links
American Society of Civil Engineers
ASCE Publication Homepage
ASCE Library web site
Journals
Proceedings
E-books
Standards
Civil engineering
American Society of Civil Engineers
Online databases | ASCE Library | [
"Engineering"
] | 749 | [
"Construction",
"Civil engineering",
"American Society of Civil Engineers",
"Civil engineering organizations"
] |
39,252,459 | https://en.wikipedia.org/wiki/Semiconductor%20Bloch%20equations | The semiconductor Bloch equations (abbreviated as SBEs) describe the optical response of semiconductors excited by coherent classical light sources, such as lasers. They are based on a full quantum theory, and form a closed set of integro-differential equations for the quantum dynamics of microscopic polarization and charge carrier distribution. The SBEs are named after the structural analogy to the optical Bloch equations that describe the excitation dynamics in a two-level atom interacting with a classical electromagnetic field. As the major complication beyond the atomic approach, the SBEs must address the many-body interactions resulting from Coulomb force among charges and the coupling among lattice vibrations and electrons.
Background
The optical response of a semiconductor follows if one can determine its macroscopic polarization P as a function of the electric field E that excites it. The connection between P and the microscopic polarization p_k is given by
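$$ \mathbf{P} = \sum_{\mathbf{k}} \left[ d_{vc}\, p_{\mathbf{k}} + \mathrm{c.c.} \right] , $$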
where the sum involves the crystal momenta k of all relevant electronic states. In semiconductor optics, one typically excites transitions between a valence and a conduction band. In this connection, d_vc is the dipole matrix element between the conduction and valence band and p_k defines the corresponding transition amplitude.
The derivation of the SBEs starts from a system Hamiltonian that fully includes the free-particles, Coulomb interaction, dipole interaction between classical light and electronic states, as well as the phonon contributions. Like almost always in many-body physics, it is most convenient to apply the second-quantization formalism after the appropriate system Hamiltonian is identified. One can then derive the quantum dynamics of relevant observables by using the Heisenberg equation of motion
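$$ i\hbar \frac{\partial}{\partial t} \langle \hat{O} \rangle = \left\langle \left[ \hat{O}, \hat{H} \right] \right\rangle . $$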
Due to the many-body interactions within the Hamiltonian H, the dynamics of the observable couples to new observables and the equation structure cannot be closed. This is the well-known BBGKY hierarchy problem that can be systematically truncated with different methods such as the cluster-expansion approach.
At the operator level, the microscopic polarization is defined by an expectation value for a single electronic transition between a valence and a conduction band. In second quantization, conduction-band electrons are defined by the fermionic creation and annihilation operators a†_{c,k} and a_{c,k}, respectively. An analogous identification, i.e., a†_{v,k} and a_{v,k}, is made for the valence-band electrons. The corresponding electronic interband transition then becomes
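$$ p_{\mathbf{k}} = \langle a^{\dagger}_{v,\mathbf{k}}\, a_{c,\mathbf{k}} \rangle , \qquad p^{\star}_{\mathbf{k}} = \langle a^{\dagger}_{c,\mathbf{k}}\, a_{v,\mathbf{k}} \rangle , $$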
that describe transition amplitudes for moving an electron from the conduction to the valence band (the p_k term) or vice versa (the p*_k term). At the same time, an electron distribution f^e_k follows from
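$$ f^{e}_{\mathbf{k}} = \langle a^{\dagger}_{c,\mathbf{k}}\, a_{c,\mathbf{k}} \rangle . $$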
It is also convenient to follow the distribution of electronic vacancies, i.e., the holes f^h_k,
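$$ f^{h}_{\mathbf{k}} = 1 - \langle a^{\dagger}_{v,\mathbf{k}}\, a_{v,\mathbf{k}} \rangle , $$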
that are left to the valence band due to optical excitation processes.
Principal structure of SBEs
The quantum dynamics of optical excitations yields a set of integro-differential equations that constitute the SBEs:
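$$ i\hbar \frac{\partial}{\partial t} p_{\mathbf{k}} = \tilde{\epsilon}_{\mathbf{k}}\, p_{\mathbf{k}} - \left[ 1 - f^{e}_{\mathbf{k}} - f^{h}_{\mathbf{k}} \right] \Omega_{\mathbf{k}} + i\hbar \left. \frac{\partial}{\partial t} p_{\mathbf{k}} \right|_{\mathrm{corr}} , $$
$$ \hbar \frac{\partial}{\partial t} f^{e}_{\mathbf{k}} = 2\, \mathrm{Im} \left[ \Omega_{\mathbf{k}}\, p^{\star}_{\mathbf{k}} \right] + \hbar \left. \frac{\partial}{\partial t} f^{e}_{\mathbf{k}} \right|_{\mathrm{corr}} , $$
$$ \hbar \frac{\partial}{\partial t} f^{h}_{\mathbf{k}} = 2\, \mathrm{Im} \left[ \Omega_{\mathbf{k}}\, p^{\star}_{\mathbf{k}} \right] + \hbar \left. \frac{\partial}{\partial t} f^{h}_{\mathbf{k}} \right|_{\mathrm{corr}} . $$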
These contain the renormalized Rabi energy
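$$ \Omega_{\mathbf{k}} = d_{vc}\, E(t) + \sum_{\mathbf{k}' \neq \mathbf{k}} V_{|\mathbf{k} - \mathbf{k}'|}\, p_{\mathbf{k}'} , $$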
as well as the renormalized carrier energy
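$$ \tilde{\epsilon}_{\mathbf{k}} = \epsilon_{\mathbf{k}} - \sum_{\mathbf{k}' \neq \mathbf{k}} V_{|\mathbf{k} - \mathbf{k}'|} \left( f^{e}_{\mathbf{k}'} + f^{h}_{\mathbf{k}'} \right) , $$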
where ε_k corresponds to the energy of free electron–hole pairs and V_{|k−k'|} is the Coulomb matrix element, given here in terms of the carrier wave vector k.
The symbolically denoted correction terms (the |_corr contributions above) stem from the hierarchical coupling due to many-body interactions. Conceptually, p_k, f^e_k, and f^h_k are single-particle expectation values, while the hierarchical coupling originates from two-particle correlations such as polarization-density correlations or polarization-phonon correlations. Physically, these two-particle correlations introduce several nontrivial effects such as screening of the Coulomb interaction, Boltzmann-type scattering of f^e_k and f^h_k toward Fermi–Dirac distributions, excitation-induced dephasing, and further renormalization of energies due to correlations.
All these correlation effects can be systematically included by solving also the dynamics of two-particle correlations. At this level of sophistication, one can use the SBEs to predict optical response of semiconductors without phenomenological parameters, which gives the SBEs a very high degree of predictability. Indeed, one can use the SBEs in order to predict suitable laser designs through the accurate knowledge they produce about the semiconductor's gain spectrum. One can even use the SBEs to deduce existence of correlations, such as bound excitons, from quantitative measurements.
The presented SBEs are formulated in momentum space, since the carrier's crystal momentum follows from ħk. An equivalent set of equations can also be formulated in position space. However, the correlation computations, especially, are much simpler to perform in momentum space.
Interpretation and consequences
The dynamics of p_k shows a structure where an individual p_k is coupled to all other microscopic polarizations p_k' due to the Coulomb interaction V_{|k−k'|}. Therefore, the transition amplitude p_k is collectively modified by the presence of other transition amplitudes. Only if one sets V_{|k−k'|} to zero does one find isolated transitions within each k state that follow exactly the same dynamics as the optical Bloch equations predict. Therefore, already the Coulomb interaction among the p_k produces a new solid-state effect compared with optical transitions in simple atoms.
Conceptually, p_k is just a transition amplitude for exciting an electron from the valence to the conduction band. At the same time, the homogeneous part of the p_k dynamics yields an eigenvalue problem that can be expressed through the generalized Wannier equation. The eigenstates of the Wannier equation are analogous to the bound solutions of the hydrogen problem of quantum mechanics. These are often referred to as exciton solutions and they formally describe Coulombic binding of oppositely charged electrons and holes.
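In the low-density limit, this eigenvalue problem takes the standard form
$$ \epsilon_{\mathbf{k}}\, \phi_{\lambda}(\mathbf{k}) - \sum_{\mathbf{k}'} V_{|\mathbf{k} - \mathbf{k}'|}\, \phi_{\lambda}(\mathbf{k}') = E_{\lambda}\, \phi_{\lambda}(\mathbf{k}) , $$
with eigenfunctions φ_λ(k) and eigenenergies E_λ.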
However, a real exciton is a true two-particle correlation because one must then have a correlation between an electron and a hole. Therefore, the appearance of exciton resonances in the polarization does not signify the presence of excitons, because p_k is a single-particle transition amplitude. The excitonic resonances are a direct consequence of the Coulomb coupling among all transitions possible in the system. In other words, the single-particle transitions themselves are influenced by the Coulomb interaction, making it possible to detect an exciton resonance in the optical response even when true excitons are not present.
Therefore, it is often customary to call the optical resonances excitonic resonances instead of exciton resonances. The actual role of excitons in the optical response can only be deduced from the quantitative changes they induce in the linewidth and energy shift of the excitonic resonances.
The solutions of the Wannier equation produce valuable insight to the basic properties of a semiconductor's optical response. In particular, one can solve the steady-state solutions of the SBEs to predict optical absorption spectrum analytically with the so-called Elliott formula. In this form, one can verify that an unexcited semiconductor shows several excitonic absorption resonances well below the fundamental bandgap energy. Obviously, this situation cannot be probing excitons because the initial many-body system does not contain electrons and holes to begin with. Furthermore, the probing can, in principle, be performed so gently that one essentially does not excite electron–hole pairs. This gedanken experiment illustrates nicely why one can detect excitonic resonances without having excitons in the system, all due to virtue of Coulomb coupling among transition amplitudes.
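The connection between Coulomb-coupled transition amplitudes and below-gap resonances can be made concrete numerically: diagonalizing the homogeneous (low-density) part of the polarization dynamics on a k-grid and summing Elliott-type oscillator strengths reproduces a discrete excitonic peak below the band edge. Everything in the sketch below — the 1D grid, the regularized Coulomb-like matrix element, and all parameters — is a dimensionless illustrative model, not a quantitative description of any material.

```python
import numpy as np

# Dimensionless model: parabolic pair energy eps_k = k^2 on a 1D grid,
# with a softened attractive Coulomb-like matrix element.
nk = 300
k = np.linspace(0.01, 10.0, nk)
dk = k[1] - k[0]
eps = k ** 2

# Model matrix element V_|k - k'|, regularized to avoid the k = k' pole;
# the k' = k term is excluded, as in the sums above.
V0, kappa = 0.5, 0.3
V = V0 * dk / ((k[:, None] - k[None, :]) ** 2 + kappa ** 2)
np.fill_diagonal(V, 0.0)

# Homogeneous part of the p_k equation: eps_k p_k - sum_{k' != k} V p_k'
H = np.diag(eps) - V
E, phi = np.linalg.eigh(H)  # H is real and symmetric

# Elliott-type spectrum: each eigenstate lambda contributes with
# oscillator strength |sum_k phi_lambda(k)|^2, Lorentzian-broadened
# by a constant dephasing gamma.
omega = np.linspace(-1.0, 3.0, 1000)
gamma = 0.05
strength = np.abs(phi.sum(axis=0)) ** 2
alpha = (strength[None, :] * gamma /
         ((omega[:, None] - E[None, :]) ** 2 + gamma ** 2)).sum(axis=1)

# Bound ("excitonic") eigenvalues should appear below the band edge
# (omega = 0) even though no populations exist in this linear limit.
print("lowest eigenvalues:", np.round(E[:3], 3))
print("absorption maximum at omega =", omega[np.argmax(alpha)])
```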
Extensions
The SBEs are particularly useful when solving the light propagation through a semiconductor structure. In this case, one needs to solve the SBEs together with the Maxwell's equations driven by the optical polarization. This self-consistent set is called the Maxwell–SBEs and is frequently applied to analyze present-day experiments and to simulate device designs.
At this level, the SBEs provide an extremely versatile method that describes linear as well as nonlinear phenomena such as excitonic effects, propagation effects, semiconductor microcavity effects, four-wave-mixing, polaritons in semiconductor microcavities, gain spectroscopy, and so on. One can also generalize the SBEs by including excitation with terahertz (THz) fields that are typically resonant with intraband transitions. One can also quantize the light field and investigate quantum-optical effects that result. In this situation, the SBEs become coupled to the semiconductor luminescence equations.
See also
Absorption
Semiconductor-luminescence equations
Elliott formula
Quantum-optical spectroscopy
Optical Bloch equations
Wannier equation
Gain spectroscopy of semiconductors
Semiconductor laser theory
Nonlinear theory of semiconductor lasers
Further reading
References
Eponymous equations of physics
Semiconductor analysis
Quantum mechanics | Semiconductor Bloch equations | [
"Physics"
] | 1,708 | [
"Quantum mechanics",
"Theoretical physics",
"Eponymous equations of physics",
"Equations of physics"
] |
39,253,041 | https://en.wikipedia.org/wiki/Klyne%E2%80%93Prelog%20system | In stereochemistry, the Klyne–Prelog system (named for William Klyne and Vladimir Prelog) for describing conformations about a single bond offers a more systematic means to unambiguously name complex structures, where the torsional or dihedral angles are not found to occur in 60° increments. Klyne notation views the placement of the substituent on the front atom as being in regions of space called anti/syn and clinal/periplanar relative to a reference group on the rear atom. A plus (+) or minus (−) sign is placed at the front to indicate the sign of the dihedral angle. Anti or syn indicates the substituents are on opposite sides or the same side, respectively. Clinal substituents are found within 30° of either side of a dihedral angle of 60° (from 30° to 90°), 120° (90°–150°), 240° (210°–270°), or 300° (270°–330°). Periplanar substituents are found within 30° of either 0° (330°–30°) or 180° (150°–210°). Juxtaposing the designations produces the following terms for the conformers of butane (see Alkane stereochemistry for an explanation of conformation nomenclature): gauche butane is syn-clinal (+sc or −sc, depending on the enantiomer), anti butane is anti-periplanar, and eclipsed butane is syn-periplanar.
References
Stereochemistry
Chemical nomenclature | Klyne–Prelog system | [
"Physics",
"Chemistry"
] | 347 | [
"Stereochemistry",
"Space",
"Stereochemistry stubs",
"nan",
"Spacetime"
] |
40,549,362 | https://en.wikipedia.org/wiki/Graft-versus-tumor%20effect | Graft-versus-tumor effect (GvT) appears after allogeneic hematopoietic stem cell transplantation (HSCT). The graft contains donor T cells (T lymphocytes) that can be beneficial for the recipient by eliminating residual malignant cells. GvT may develop after recognition of tumor-specific or recipient-specific alloantigens. It can lead to remission or immune control of hematologic malignancies. The effect applies in myeloid and lymphoid leukemias, lymphoma, multiple myeloma and possibly breast cancer. It is closely linked with graft-versus-host disease (GvHD), as the underlying principle of alloimmunity is the same. CD4+CD25+ regulatory T cells (Treg) can be used to suppress GvHD without loss of the beneficial GvT effect.
The biology of the GvT response is still not fully understood, but it is probable that the reaction involves polymorphic minor histocompatibility antigens expressed either specifically on hematopoietic cells or more widely on a number of tissue cells, or tumor-associated antigens. This response is mediated largely by cytotoxic T lymphocytes (CTL), but natural killer (NK) cells can also act as separate effectors, particularly in T-cell-depleted HLA-haploidentical HSCT.
Graft-versus-leukemia
Graft-versus-leukemia (GvL) is a specific type of GvT effect. As the name indicates, GvL is a reaction against the leukemic cells of the host. GvL requires genetic disparity because the effect depends on the principle of alloimmunity. GvL is part of the reaction of the graft against the host. Whereas graft-versus-host disease (GvHD) has a negative impact on the host, GvL is beneficial for patients with hematopoietic malignancies. After HSC transplantation, both GvL and GvHD develop. The interconnection of these two effects can be seen by comparing leukemia relapse after HSC transplantation with the development of GvHD: patients who develop chronic or acute GvHD have a lower chance of leukemia relapse. When transplanting a T-cell-depleted stem cell graft, GvHD can be partially prevented, but at the same time the GvL effect is also reduced, because T cells play an important role in both effects. The possibilities of using the GvL effect in the treatment of hematopoietic malignancies are limited by GvHD. The ability to induce GvL but not GvHD after HSCT would be very beneficial for these patients. There are some strategies to suppress GvHD after transplantation or to enhance GvL, but none of them provides an ideal solution to this problem. For some forms of hematopoietic malignancy, for example acute myeloid leukemia (AML), the essential cells during HSCT are, besides the donor's T cells, the NK cells, which interact through KIR receptors. NK cells are among the first cells to repopulate the host's bone marrow, which means they play an important role in transplant engraftment. For their role in the GvL effect, their alloreactivity is required. Because KIR and HLA genes are inherited independently, the ideal donor can have compatible HLA genes and, at the same time, KIR receptors that induce an NK-cell alloreaction; this occurs with most unrelated donors. When transplanting HSCs in AML, T cells are usually selectively depleted to prevent GvHD, while NK cells support the GvL effect and help prevent leukemia relapse. When a non-T-cell-depleted transplant is used, cyclophosphamide is given after transplantation to prevent GvHD or transplant rejection. Other strategies currently used clinically for suppressing GvHD and enhancing GvL include optimization of the transplant conditioning or donor lymphocyte infusion (DLI) after transplantation. However, none of these provides satisfactory universal results, so other options are still being investigated. One possibility is the use of cytokines. Granulocyte colony-stimulating factor (G-CSF) is used to mobilize HSCs and mediate T-cell tolerance during transplantation. G-CSF can help to enhance the GvL effect and suppress GvHD by reducing levels of LPS and TNF-α. Using G-CSF also increases levels of Tregs, which can also help prevent GvHD. Other cytokines can also be used to prevent or reduce GvHD without eliminating GvL, for example KGF, IL-11, IL-18 and IL-35.
See also
Graft-versus-host disease
Hematopoietic stem cell transplantation
References
Transplantation medicine
Immunology | Graft-versus-tumor effect | [
"Biology"
] | 1,058 | [
"Immunology"
] |
40,558,499 | https://en.wikipedia.org/wiki/CompEx | CompEx (meaning Competency in Ex atmospheres) is a global certification scheme for electrical and mechanical craftspersons and designers working in potentially explosive atmospheres. The scheme is operated by JTLimited, UK and is accredited by UKAS to ISO/IEC 17024.
The scheme was created by EEMUA (Engineering Equipment and Materials Users' Association) to satisfy the general competency requirements of BS EN 60079 (IEC 60079), parts 10, 14 and 17. The requirements are currently explicitly detailed in IEC 60079 Part 14 Annex A, detailing knowledge/skills and competency requirements for responsible persons, operatives and designers.
The scheme is broken down to twelve units covering different actions and hazardous area concepts.
In 2017, CompEx units 01–04 were introduced for the NEC standards (NEC 500 and NEC 505), along with Ex "f" foundation courses. These are provided by Global EX Solutions, via Eaton.
See also
ATEX directive
Electrical Equipment in Hazardous Areas
References
Further reading
External links
https://www.compex.org.uk/
Electrical safety
Explosion protection | CompEx | [
"Chemistry",
"Engineering"
] | 227 | [
"Explosion protection",
"Combustion engineering",
"Explosions"
] |
41,980,300 | https://en.wikipedia.org/wiki/Azaborane | Azaborane usually refers to a borane cluster in which BH vertices are replaced by N or NR (R typically stands for H or an organic substituent). Like many of the related boranes, these clusters are polyhedra and can be classified as closo-, nido-, arachno-, etc.
Within the context of Wade's rules, NR is a 4-electron vertex, and N is a 3-electron vertex. Prominent examples include charge-neutral nido- and closo-azaboranes.
Azaboranes can also refer to simpler compounds including iminoboranes (RB=NR', where R and R' stand typically for H or organic substituent) and borazines.
See also
Carborane
References
Boron–nitrogen compounds
Cluster chemistry
Boranes | Azaborane | [
"Chemistry"
] | 184 | [
"Cluster chemistry",
"Organometallic chemistry"
] |
41,985,381 | https://en.wikipedia.org/wiki/Chemical%20shift%20index | The chemical shift index or CSI is a widely employed technique in protein nuclear magnetic resonance spectroscopy that can be used to display and identify the location (i.e. start and end) as well as the type of protein secondary structure (beta strands, helices and random coil regions) found in proteins using only backbone chemical shift data. The technique was invented by David S. Wishart in 1992 for analyzing 1Hα chemical shifts and was later extended by him in 1994 to incorporate 13C backbone shifts. The original CSI method makes use of the fact that 1Hα chemical shifts of amino acid residues in helices tend to be shifted upfield (i.e. towards the right side of an NMR spectrum) relative to their random coil values and downfield (i.e. towards the left side of an NMR spectrum) in beta strands. Similar kinds of upfield and downfield trends are also detectable in backbone 13C chemical shifts.
Implementation
The CSI is a graph-based technique that essentially employs an amino acid-specific digital filter to convert every assigned backbone chemical shift value into a simple three-state (-1, 0, +1) index. This approach generates a more easily understood and much more visually pleasing graph of protein chemical shift values. In particular, if the upfield 1Hα chemical shift (relative to an amino acid-specific random coil value) of a certain residue is > 0.1 ppm, then that amino acid residue is assigned a value of -1. Similarly, if the downfield 1Hα chemical shift of a certain amino acid residue is > 0.1 ppm then that residue is assigned a value of +1. If an amino acid residue's chemical shift is not shifted downfield or upfield by a sufficient amount (i.e. <0.1 ppm), it is given a value of 0. When this 3-state index is plotted as a bar graph over the full length of the protein sequence, simple inspection can allow one to identify beta strands (clusters of +1 values), alpha helices (clusters of -1 values), and random coil segments (clusters of 0 values). A list of the amino acid-specific random coil chemical shifts for CSI calculations is given in Table 1. An example of a CSI graph for a small protein is shown in Figure 1 with the arrows located above the black bars indicating locations of the beta strands and the rectangular box indicating the location of a helix.
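A minimal sketch of this digital filter in Python follows; the random-coil reference shifts must be supplied from published tables (such as Table 1), the clustering thresholds follow the simple rules quoted under Performance below, and all function and variable names are illustrative.

```python
def csi_index(ha_shifts, random_coil, threshold=0.10):
    """Map each residue's 1Ha shift to -1 (upfield by > 0.1 ppm),
    +1 (downfield by > 0.1 ppm) or 0, relative to the amino
    acid-specific random coil value (all shifts in ppm).

    ha_shifts: list of (residue_type, observed_shift) pairs.
    random_coil: dict mapping residue_type -> random coil 1Ha shift."""
    index = []
    for aa, shift in ha_shifts:
        secondary = shift - random_coil[aa]
        if secondary <= -threshold:
            index.append(-1)   # upfield: helix-like
        elif secondary >= threshold:
            index.append(+1)   # downfield: strand-like
        else:
            index.append(0)
    return index

def secondary_structure(index):
    """Runs of >= 3 consecutive +1 -> strand (E); runs of >= 4
    consecutive -1 -> helix (H); everything else coil (C)."""
    ss = ['C'] * len(index)
    i = 0
    while i < len(index):
        j = i
        while j < len(index) and index[j] == index[i]:
            j += 1
        if index[i] == +1 and j - i >= 3:
            ss[i:j] = ['E'] * (j - i)
        elif index[i] == -1 and j - i >= 4:
            ss[i:j] = ['H'] * (j - i)
        i = j
    return ''.join(ss)
```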
Performance
Using only 1Hα chemical shifts and simple clustering rules (clusters of 3 or more vertical bars for beta strands and clusters of 4 or more vertical bars for alpha helices), the CSI is typically 75-80% accurate in the identification of secondary structures. This performance depends partly on the quality of the NMR data set as well as the technique (manual or programmatic) used to identify the protein secondary structures. As noted above, a consensus CSI method that filters upfield/downfield chemical shift changes in 13Cα, 13Cβ, and 13C' atoms in a similar manner to 1Hα shifts has also been developed. The consensus CSI combines the CSI plots from backbone 1H and 13C chemical shifts to generate a single CSI plot. It can be up to 85-90% accurate.
History
The link between protein chemical shifts and protein secondary structure (specifically alpha helices) was first described by John Markley and colleagues in 1967. With the development of modern 2-dimensional NMR techniques, it became possible to measure more protein chemical shifts. As more peptides and proteins were assigned in the early 1980s, it soon became obvious that amino acid chemical shifts were sensitive not only to helical conformations but also to β-strand conformations. Specifically, the secondary 1Hα chemical shifts of all amino acids exhibit a clear upfield trend on helix formation and an obvious downfield trend on β-sheet formation. By the early 1990s, a sufficient body of 13C and 15N chemical shift assignments for peptides and proteins had been collected to determine that similar upfield/downfield trends were evident for essentially all backbone 13Cα, 13Cβ, 13C', 1HN and (weakly) 15N chemical shifts. It was these rather striking chemical shift trends that were exploited in the development of the chemical shift index.
Limitations
The CSI method is not without some shortcomings. In particular, its performance drops if chemical shift assignments are mis-referenced or incomplete. It is also quite sensitive to the choice of random coil shifts used to calculate the secondary shifts and it generally identifies alpha helices (>85% accuracy) better than beta strands (<75% accuracy) regardless of the choice of random coil shifts. Furthermore, the CSI method does not identify other kinds of secondary structures, such as β-turns. Because of these shortcomings, a number of alternative CSI-like approaches have been proposed. These include: 1) a prediction method that employs statistically derived chemical shift/structure potentials (PECAN); 2) a probabilistic approach to secondary structure identification (PSSI); 3) a method that combines secondary structure predictions from sequence data and chemical shift data (PsiCSI), 4) a secondary structure identification approach that uses pre-specified chemical shift patterns (PLATON) and 5) a two-dimensional cluster analysis method known as 2DCSi. The performance of these newer methods is generally slightly better (2-4%) than the original CSI method.
Utility
Since its original description in 1992, the CSI method has been used to characterize the secondary structure of thousands of peptides and proteins. Its popularity is largely due to the fact that it is easy to understand and can be implemented without the need for specialized computer programs. Even though the CSI method can be easily performed manually, a number of commonly used NMR data processing programs such as NMRView, NMR structure generation web servers such as CS23D as well as various NMR data analysis web servers such as RCI, Preditor and PANAV have incorporated the CSI method into their software.
See also
Chemical Shift
Random Coil Index
Protein NMR
Protein Chemical Shift Re-Referencing
Protein secondary structure
Protein Chemical Shift Prediction
NMR
Nuclear magnetic resonance spectroscopy
Protein nuclear magnetic resonance spectroscopy
Protein
References
External links
CSI calculations via RCI webserver
CSI calculations via Preditor webserver
Stand-alone CSI program for Linux/Unix
Chemical shift rereferencing for CSI calculations by Shiftcor
Chemical shift rereferencing for CSI calculations by PANAV
Nuclear magnetic resonance
Nuclear magnetic resonance software
Protein methods
Protein structure
Biophysics
Scientific techniques | Chemical shift index | [
"Physics",
"Chemistry",
"Biology"
] | 1,344 | [
"Biochemistry methods",
"Applied and interdisciplinary physics",
"Nuclear magnetic resonance",
"Nuclear magnetic resonance software",
"Protein methods",
"Protein biochemistry",
"Biophysics",
"Structural biology",
"Nuclear physics",
"Protein structure"
] |
33,735,571 | https://en.wikipedia.org/wiki/Killer%20activation%20receptor | Killer Activation Receptors (KARs) are receptors expressed on the plasma membrane (cell membrane) of natural killer cells (NK cells). KARs work together with Killer Inhibitory Receptors (abbreviated as KIRs in the text), which inactivate KARs in order to regulate the functions of NK cells on host or transformed cells. These receptors have a broad binding specificity and are able to broadcast opposite signals. It is the balance between these competing signals that determines whether the cytotoxic activity of the NK cell and apoptosis of the distressed cell occur.
Killer Inhibitory Receptor vs. Killer-cell Immunglobulin-like Receptors
There is sometimes confusion regarding the KIR acronym, which has come to be used in parallel both for the killer-cell immunoglobulin-like receptors (KIRs) and for the killer inhibitory receptors. The killer-cell immunoglobulin-like receptors include both activating and inhibitory receptors. Killer inhibitory receptors include both immunoglobulin-like receptors and C-type lectin-like receptors.
Killer Activation Receptors vs. Killer Inhibitory Receptors
KARs and KIRs have some morphological features in common, such as being transmembrane proteins. The similarities are specially found in the extracellular domains.
The differences between KARs and KIRs tend to be in the intracellular domains, which can contain tyrosine-based activation or inhibitory motifs (called ITAMs and ITIMs, respectively).
At first, it was thought that there was only one KAR and one KIR receptor present on the NK cell, known as the two-receptor model. In the last decade, many different KARs and KIRs, such as NKp46 or NKG2D, have been discovered, creating the opposing-signals model. NKG2D is activated by the cell-surface ligands MICA and ULBP2. Even though KARs and KIRs are receptors with antagonistic effects on NK cells, they have some structural characteristics in common: both are usually transmembrane proteins, and the extracellular domains of these proteins tend to have similar molecular features and are responsible for ligand recognition.
The opposing functions of these receptors are due to differences in their intracellular domains. KAR proteins possess positively charged transmembrane residues and short cytoplasmic tails that contain few intracellular signaling domains. In contrast, KIR proteins usually have long cytoplasmic tails.
Because the KAR chains cannot mediate signal transduction in isolation, a common feature of such receptors is the presence of noncovalently linked subunits that contain immunoreceptor tyrosine-based activation motifs (ITAMs) in their cytoplasmic tails. ITAMs consist of a conserved sequence of amino acids, including two Tyr-x-x-Leu/Ile elements (where x is any amino acid) separated by six to eight amino acid residues. When an activating ligand binds to an activation receptor complex, the tyrosine residues in the ITAMs of the associated chain are phosphorylated by kinases, and a signal that promotes natural cytotoxicity is conveyed to the interior of the NK cell. ITAMs thus facilitate signal transduction. The subunits that carry them are accessory signaling molecules such as CD3ζ, the γc chain, or one of two adaptor proteins called DAP10 and DAP12. All of these molecules possess negatively charged transmembrane domains.
A common feature of members of all KIR is the presence of immunoreceptor tyrosine-based inhibition motifs (ITIMs) in their cytoplasmic tails. ITIMs are composed of the sequence Ile/Val/Leu/Ser-x-Tyr-x-x-Leu/Val, where x denotes any amino acid, and are essential to the signaling functions of these molecules. When an inhibitory receptor is stimulated by the binding of MHC class I, kinases and phosphatases are recruited to the receptor complex. In this way, ITIMs counteract the effect of the kinases engaged by activating receptors and inhibit signal transduction within the NK cell.
Types of Killer Activation Receptors
Based on their structure, there are three different groups of KARs. The first group of receptors is called the Natural Cytotoxicity Receptors (NCRs), which includes only activating receptors. The two other classes are Natural Killer Group 2 (NKG2), which includes both activating and inhibitory receptors, and the subset of KIRs which do not have an inhibitory role.
The three receptors included in the NCR class are NKp46, NKp44 and NKp30. The crystal structure of NKp46, which is representative of all three NCRs, has been determined. It has two C2-set immunoglobulin domains, and it is probable that the binding site for its ligand is near the interdomain hinge.
There are two NKG2-class receptors: NKG2D and CD94/NKG2C. NKG2D, which does not bind to CD94, is a homodimeric lectin-like receptor. CD94/NKG2C consists of a complex formed by the CD94 protein, a C-type lectin molecule, bound to the NKG2C protein. CD94 can bind to five classes of NKG2 (A, B, C, E and H), but the pairing can trigger either an activating or an inhibitory response, depending on the NKG2 molecule (CD94/NKG2A, for example, is an inhibitory complex).
Most KIRs have an inhibitory function; however, a few KIRs with an activating role also exist. One of these activating KIRs is KIR2DS1, which has an Ig-like structure, like KIRs in general.
Finally, there is CD16, a low affinity Fc receptor (FcγRIII) which contains N-glycosylation sites; therefore, it is a glycoprotein.
Killer Activation Receptors are associated with signaling intracellular chains. In fact, these intracellular domains determine the opposite functions of activation and inhibitory receptors. Activation receptors are associated with an accessory signaling molecule (for instance, CD3ζ) or with an adaptor protein, which can be either DAP10 or DAP12. All of these signaling molecules contain immunoreceptor tyrosine-based activated motifs (ITAMs), which are phosphorylated and consequently facilitate signal transduction.
Each of these receptors has a specific ligand, although some receptors that belong to the same class, such as NCR, recognize similar molecules.
How do they work?
KARs can detect a specific type of molecule: MICA and MICB. These MHC class I-related molecules of human cells are associated with cellular stress: this is why MICA and MICB appear on infected or transformed cells but are not very common on healthy cells. KARs recognize MICA and MICB when they are present in large numbers and become engaged. This engagement activates the natural killer cell to attack the transformed or infected cell, which it can do in different ways: the NK cell can kill the target cell directly, it can secrete cytokines such as IFN-γ, or it can do both.
There are other, less common ligands, such as carbohydrate domains, which are recognized by a group of receptors called C-type lectins (so named because they have calcium-dependent carbohydrate-recognition domains).
In addition to lectins, other molecules are implicated in the activation of NK cells, namely CD2 and CD16. CD16 works in antibody-mediated recognition.
Finally, there is a group of proteins related to activation in an as yet unknown way: NKp30, NKp44 and NKp46.
These ligands activate the NK cell; however, before activation occurs, Killer Inhibitory Receptors (KIRs) recognize certain molecules on the MHC class I of the host cell and become engaged with them. These molecules are typical of healthy cells, but some of them are repressed in infected or transformed cells. For this reason, when the host cell is really infected, the proportion of KARs engaged with their ligands is larger than the proportion of KIRs engaged with MHC class I molecules. When this happens, the NK cell is activated and the host cell is destroyed. On the other hand, if more KIRs are engaged with MHC class I molecules than KARs with ligands, the NK cell is not activated and the suspect host cell remains alive.
KARs and KIRs: their role in cancer
One way by which NK cells are able to distinguish between normal and infected or transformed cells is by monitoring the amount of MHC class I molecules cells have on their surface. In infected and tumor cells, the expression of MHC class I decreases.
In cancers, a Killer Activation Receptor (KAR), located on the surface of the NK cell, binds to certain molecules which only appear on cells that are undergoing stress. In humans, this KAR is called NKG2D and the molecules it recognizes are MICA and MICB. This binding provides a signal which induces the NK cell to kill the target cell.
Then, Killer Inhibitory Receptors (KIRs) examine the surface of the tumor cell in order to determine the level of MHC class I molecules it has. If KIRs bind sufficiently to MHC class I molecules, the “killing signal” is overridden to prevent the killing of the cell. However, if KIRs are not sufficiently engaged with MHC class I molecules, killing of the target cell proceeds.
References
Further reading
Immunology
Lymphocytes
Receptors | Killer activation receptor | [
"Chemistry",
"Biology"
] | 2,091 | [
"Immunology",
"Receptors",
"Signal transduction"
] |
33,738,639 | https://en.wikipedia.org/wiki/Scenario%20testing | Scenario testing is a software testing activity that uses scenarios: hypothetical stories to help the tester work through a complex problem or test system. The ideal scenario test is a credible, complex, compelling or motivating story; the outcome of which is easy to evaluate. These tests are usually different from test cases in that test cases are single steps whereas scenarios cover a number of steps.
History
Cem Kaner coined the phrase scenario test by October 2003. He commented that one of the most difficult aspects of testing was maintaining step-by-step test cases along with their expected results. His paper attempted to find a way to reduce the re-work of complicated written tests and incorporate the ease of use cases.
A few months later, Hans Buwalda wrote about a similar approach he had been using that he called "soap opera testing". Like television soap operas, these tests were both exaggerated in activity and condensed in time. The key to both approaches was to avoid step-by-step testing instructions with expected results and instead replace them with a narrative that gave freedom to the tester while confining the scope of the test.
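The difference between a single-step test case and a narrative scenario can be sketched in code. In the Python example below, the Account class and its behaviour are hypothetical, invented purely for illustration; the scenario test tells a small, credible story whose final outcome is easy to evaluate.

```python
class Account:
    """Hypothetical banking API used only to illustrate the two styles."""
    def __init__(self, balance=0):
        self.balance = balance
    def deposit(self, amount):
        self.balance += amount
    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

def test_case_single_step():
    # Classic test case: one step, one expected result.
    acct = Account(balance=100)
    acct.withdraw(40)
    assert acct.balance == 60

def test_scenario_overdrawn_customer():
    # Scenario test: several steps of one story, evaluated at the end.
    acct = Account()
    acct.deposit(50)           # customer opens an account with $50
    acct.withdraw(30)          # pays a bill
    try:
        acct.withdraw(40)      # then tries to overdraw
    except ValueError:
        pass                   # the overdraft must be refused...
    assert acct.balance == 20  # ...leaving the balance intact

test_case_single_step()
test_scenario_overdrawn_customer()
print("both tests passed")
```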
Methods
System scenarios
In this method, only those sets of realistic user activities that cover several components in the system are used as scenario tests. Development of system scenarios can be done using:
Story lines
State transitions
Business verticals
Implementation story from customers
Use-case and role-based scenarios
In this method the focus is on how a user uses the system with different roles and environment.
See also
Test script
Test suite
Session-based testing
References
Software testing | Scenario testing | [
"Engineering"
] | 315 | [
"Software engineering",
"Software testing"
] |
33,739,475 | https://en.wikipedia.org/wiki/Mobilities | Mobilities is a contemporary paradigm in the social sciences that explores the movement of people (human migration, individual mobility, travel, transport), ideas (see e.g. meme) and things (transport), as well as the broader social implications of those movements. Mobility can also be thought of as the movement of people through social classes (social mobility) or income levels (income mobility).
A mobility "turn" (or transformation) in the social sciences began in the 1990s in response to the increasing realization of the historic and contemporary importance of movement on individuals and society. This turn has been driven by generally increased levels of mobility and new forms of mobility where bodies combine with information and different patterns of mobility. The mobilities paradigm incorporates new ways of theorizing about how these mobilities lie "at the center of constellations of power, the creation of identities and the microgeographies of everyday life." (Cresswell, 2011, 551)
The mobility turn arose as a response to the way in which the social sciences had traditionally been static, seeing movement as a black box and ignoring or trivializing "the importance of the systematic movements of people for work and family life, for leisure and pleasure, and for politics and protest" (Sheller and Urry, 2006, 208). Mobilities emerged as a critique of contradictory orientations toward both sedentarism and deterritorialisation in social science. People had often been seen as static entities tied to specific places, or as nomadic and placeless in a frenetic and globalized existence. Mobilities looks at movements and the forces that drive, constrain and are produced by those movements.
Several typologies have been formulated to clarify the wide variety of mobilities. Most notably, John Urry divides mobilities into five types: mobility of objects, corporeal mobility, imaginative mobility, virtual mobility and communicative mobility. Later, Leopoldina Fortunati and Sakari Taipale proposed an alternative typology taking the individual and the human body as a point of reference. They differentiate between ‘macro-mobilities’ (consistent physical displacements), ‘micro-mobilities’ (small-scale displacements), ‘media mobility’ (mobility added to the traditionally fixed forms of media) and ‘disembodied mobility’ (the transformation in the social order). The categories are typically considered interrelated, and therefore they are not exclusive.
Scope
While mobilities is commonly associated with sociology, contributions to the mobilities literature have come from scholars in anthropology, cultural studies, economics, geography, migration studies, science and technology studies, and tourism and transport studies. (Sheller and Urry, 2006, 207)
The eponymous journal Mobilities provides a list of typical subjects which have been explored in the mobilities paradigm (Taylor and Francis, 2011):
Mobile spatiality and temporality
Sustainable and alternative mobilities
Mobile rights and risks
New social networks and mobile media
Immobilities and social exclusions
Tourism and travel mobilities
Migration and diasporas
Transportation and communication technologies
Transitions in complex systems
Origins
Sheller and Urry (2006, 215) place mobilities in the sociological tradition by defining the primordial theorist of mobilities as Georg Simmel (1858–1918). Simmel's essays, "Bridge and Door" (Simmel, 1909 / 1994) and "The Metropolis and Mental Life" (Simmel, 1903 / 2001) identify a uniquely human will to connection, as well as the urban demands of tempo and precision that are satisfied with mobility.
The more immediate precursors of contemporary mobilities research emerged in the 1990s (Cresswell 2011, 551). Historian James Clifford (1997) advocated for a shift from deep analysis of particular places to the routes connecting them. Marc Augé (1995) considered the philosophical potential of an anthropology of "non-places" like airports and motorways that are characterized by constant transition and temporality. Sociologist Manuel Castells outlined a "network society" and suggested that the "space of places" is being surpassed by a "space of flows." Feminist scholar Caren Kaplan (1996) explored questions about the gendering of metaphors of travel in social and cultural theory.
The contemporary paradigm under the moniker "mobilities" appears to originate with the work of sociologist John Urry. In his book, Sociology Beyond Societies: Mobilities for the Twenty-First Century, Urry (2000, 1) presents a "manifesto for a sociology that examines the diverse mobilities of peoples, objects, images, information and wastes; and of the complex interdependencies between, and social consequences of, these diverse mobilities."
This is consistent with the aims and scope of the eponymous journal Mobilities, which "examines both the large-scale movements of people, objects, capital, and information across the world, as well as more local processes of daily transportation, movement through public and private spaces, and the travel of material things in everyday life" (Taylor and Francis, 2011).
In 2006, Mimi Sheller and John Urry published an oft-cited paper that examined the mobilities paradigm as it was just emerging, exploring its motivations, theoretical underpinnings, and methodologies. Sheller and Urry specifically focused on automobility as a powerful socio-technical system that "impacts not only on local public spaces and opportunities for coming together, but also on the formation of gendered subjectivities, familial and social networks, spatially segregated urban neighborhoods, national images and aspirations to modernity, and global relations ranging from transnational migration to terrorism and oil wars" (Sheller and Urry, 2006, 209). This was further developed by the journal Mobilities (Hannam, Sheller and Urry, 2006).
Mobilities can be viewed as an extension of the "spatial turn" in the arts and sciences in the 1980s, in which scholars began "to interpret space and the spatiality of human life with the same critical insight and interpretive power as have traditionally been given to time and history (the historicality of human life) on one hand, and to social relations and society (the sociality of human life) on the other" (Sheller and Urry, 2006, 216; Engel and Nugent, 2010, 1; Soja, 1999 / 2005, 261).
Engel and Nugent (2010) trace the conceptual roots of the spatial turn to Ernst Cassirer and Henri Lefebvre (1974), although Fredric Jameson appears to have coined the epochal usage of the term for the 1980s paradigm shift. Jameson (1988 / 2003, 154) notes that the concept of the spatial turn "has often seemed to offer one of the more productive ways of distinguishing postmodernism from modernism proper, whose experience of temporality -- existential time, along with deep memory -- it is henceforth conventional to see as dominant of the high modern."
For Oswin & Yeoh (2010), mobility seems inextricably intertwined with late modernity and the end of the nation-state. The sense of mobility prompts us to think about migratory and tourist flows, as well as the infrastructure necessary for that displacement to take place.
P. Vannini (2012) opted to see mobility as a projection of existing cultural values, expectancies and structures that denote styles of life. Mobility, after all, would not only generate effects on people's behaviour but also produce specific styles of life. Vannini explains convincingly that on Canada's coast the values of islanders defy the hierarchical order of populated cities from many perspectives. Islanders prioritize the social cohesion and trust of their communities over the alienation of mega-cities. There is a clear physical isolation that marks the boundaries between urbanity and rurality. Nonetheless, from another view, this ideological dichotomy between authenticity and alienation leads residents to commercialize their spaces to outsiders. Although the tourism industry is adopted in these communities as a form of activity, many locals have historically migrated from urban populated cities.
Mobilities and transportation geography
The intellectual roots of mobilities in sociology distinguish it from traditional transportation studies and transportation geography, which have firmer roots in mid 20th century positivist spatial science.
Cresswell (2011, 551) presents six characteristics distinguishing mobilities from prior approaches to the study of migration or transport:
Mobilities often links science and social science to the humanities.
Mobilities often links across different scales of movement, while traditional transportation geography tends to focus on particular forms of movement at only one scale (such as local traffic studies or household travel surveys).
Mobilities encompasses the movement of people, objects, and ideas, rather than narrowly focusing on areas like passenger modal shift or freight logistics.
Mobilities considers both motion and "stopping, stillness and relative immobility."
Mobilities incorporates mobile theorization and methodologies to avoid the privileging of "notions of boundedness and the sedentary."
Mobilities often embraces the political and differential politics of mobility, as opposed to the apolitical, "objective" stance often sought by researchers associated with engineering disciplines.
Mobilities can be seen as a postmodern descendant of modernist transportation studies, with the influence of the spatial turn corresponding to a "post-structuralist agnosticism about both naturalistic and universal explanations and about single-voiced historical narratives, and to the concomitant recognition that position and context are centrally and inescapably implicated in all constructions of knowledge" (Cosgrove, 1999, 7; Warf and Arias, 2009).
Despite these ontological and epistemological differences, Shaw and Hesse (2010, 207) have argued that mobilities and transport geography represent points on a continuum rather than incompatible extremes. Indeed, traditional transport geography has not been wholly quantitative any more than mobilities is wholly qualitative. Sociological explorations of mobility can incorporate empirical techniques, while model-based inquiries can be tempered with richer understandings of the meanings, representations and assumptions inherently embedded in models.
Shaw and Sidaway (2010, 505) argue that even as research in the mobilities paradigm has attempted to reengage transportation and the social sciences, mobilities shares a fate similar to traditional transportation geography in still remaining outside the mainstream of the broader academic geographic community.
Theoretical underpinnings of mobilities
Sheller and Urry (2006, 215-217) presented six bodies of theory underpinning the mobilities paradigm:
The prime theoretical foundation of mobilities is the work of early 20th-century sociologist Georg Simmel, who identified a uniquely human "will to connection," and provided a theoretical connection between mobility and materiality. Simmel focused on the increased tempo of urban life, that "drives not only its social, economic, and infrastructural formations, but also the psychic forms of the urban dweller." Along with this tempo comes a need for precision in timing and location in order to prevent chaos, which results in complex and novel systems of relationships.
A second body of theory comes from the science and technology studies which look at mobile sociotechnical systems that incorporate hybrid geographies of human and nonhuman components. Automobile, rail or air transport systems involve complex transport networks that affect society and are affected by society. These networks can have dynamic and enduring parts. Non-transport information networks can also have unpredictable effects on encouraging or suppressing physical mobility (Pellegrino 2012).
A third body of theory comes from the postmodern conception of spatiality, with the substance of places being constantly in motion and subject to constant reassembly and reconfiguration (Thrift 1996).
A fourth body of theory is a "recentring of the corporeal body as an affective vehicle through which we sense place and movement, and construct emotional geographies". For example, the car is "experienced through a combination of senses and sensed through multiple registers of motion and emotion″ (Sheller and Urry 2006, 216).
A fifth body of theory incorporates how topologies of social networks relate to how complex patterns form and change. Contemporary information technologies and ways of life often create broad but weak social ties across time and space, with social life incorporating fewer chance meetings and more networked connections.
Finally, the last body of theory is the analysis of complex transportation systems that are "neither perfectly ordered nor anarchic." For example, the rigid spatial coupling, operational timings, and historical bindings of rail contrast with unpredictable environmental conditions and ever-shifting political winds. And, yet, "change through the accumulation of small repetitions...could conceivably tip the car system into the postcar system."
Mobilities methodologies
Mimi Sheller and John Urry (2006, 217-219) presented seven methodological areas often covered in mobilities research:
Analysis of the patterning, timing and causation of face-to-face co-presence
Mobile ethnography - participation in patterns of movement while conducting ethnographic research
Time-space diaries - subjects record what they are doing, at what times and in what places
Cyber-research - exploration of virtual mobilities through various forms of electronic connectivity
Study of experiences and feelings
Study of memory and private worlds via photographs, letters, images and souvenirs
Study of in-between places and transfer points like lounges, waiting rooms, cafes, amusement arcades, parks, hotels, airports, stations, motels, harbors
See also
Bicycle
Congestion
Home care
Hypermobility (travel)
Pedestrian
Public transport
Private transport
Transportation engineering
References
Social sciences
Space
Motion (physics) | Mobilities | [
"Physics",
"Mathematics"
] | 2,835 | [
"Physical phenomena",
"Motion (physics)",
"Space",
"Mechanics",
"Geometry",
"Spacetime"
] |
33,742,206 | https://en.wikipedia.org/wiki/David%20Boger | David Vernon Boger (born 13 November 1939) is an Australian chemical engineer.
In 2017, Boger was elected a member of the National Academy of Engineering for discoveries and fundamental research on elastic and particulate fluids and their application to waste minimization in the minerals industry.
Life
He graduated from Bucknell University with a B.S., where he studied with Robert Slonaker, and from the University of Illinois with an M.S. and Ph.D.
He teaches at Monash University, the University of Melbourne, and the University of Florida. He is one of three inaugural Laureate Professors at the University of Melbourne.
Work
Boger is known for his studies of non-Newtonian fluids (which behave both as liquids and solids) which have improved the understanding of how this group of fluids flow and led to major financial and environmental benefits. Boger discovered 'perfect' non-Newtonian fluids, which are elastic and have constant viscosity and are now known as Boger fluids, which enabled him to explain how non-Newtonian fluids behave. He was able to apply his ideas to improve the disposal of "red mud", a toxic waste produced during the manufacture of aluminium from bauxite and a major environmental problem. His findings have also led to improved inks for industrial inkjet printers, insecticide chemicals that spread evenly on leaves and reduced drag in oil pipelines.
He was elected to the Fellowship of the Australian Academy of Science in 1993 and served on the Council of the Australian Academy of Science from 1999 to 2002. He was awarded the Matthew Flinders Medal and Lecture in 2000. In 2007 he was elected Fellow of the Royal Society and of the Royal Society of Victoria.
Honours and awards
2005 Prime Minister's Prize for Science
2000 Matthew Flinders Medal and Lecture by the Australian Academy of Science
2017 Elected to the US National Academy of Engineering
2024 Appointed Companion of the Order of Australia
References
1939 births
Living people
Australian chemical engineers
Fellows of the Royal Society
People from Kutztown, Pennsylvania
Bucknell University alumni
Grainger College of Engineering alumni
Academic staff of Monash University
Academic staff of the University of Melbourne
University of Florida faculty
Companions of the Order of Australia
Fellows of the Australian Academy of Technological Sciences and Engineering
Fellows of the Australian Academy of Science
Chemical engineering academics | David Boger | [
"Chemistry"
] | 455 | [
"Chemical engineering academics",
"Chemical engineers"
] |
33,742,973 | https://en.wikipedia.org/wiki/ND%20experiment | Neutral Detector (ND) is a detector for particle physics experiments created by a team of physicists at the Budker Institute of Nuclear Physics, Novosibirsk, Russia.
Experiments with the ND were conducted from 1982 to 1987 at the e+e− storage ring VEPP-2M in the energy range 2E=0.5-1.4 GeV.
Physics
At the beginning of the 1980s, the leading cross sections of electron-positron annihilation into final states with charged particles were measured in the energy range 2E=0.5-1.4 GeV, while processes with neutral particles in the final state were less studied. To investigate the radiative decays of the ρ, ω, and φ mesons and other processes involving photons, π0, and η mesons, the ND was constructed. Its distinguishing features are defined by the specially designed electromagnetic calorimeter based on NaI(Tl) scintillation counters.
List of published analyses
Radiative decays
Rare decays of the ρ, ω, and φ mesons
Search for rare decays
Light scalars in φ-meson radiative decays
Non-resonant electron-positron annihilation into hadrons
Test of QED processes
(virtual Compton scattering)
Analyses of other processes
Measurement of the ω-meson parameters
Upper limits on the electron widths of scalar and tensor mesons
Search for
Detector
Based on the goals of the physics program, the ND consists of:
Electromagnetic calorimeter
168 rectangular NaI(Tl) scintillation counters
total mass of NaI(Tl) is 2.6 t
solid angle coverage is 65% of 4π sr
minimum thickness is 32 cm, or 12 radiation lengths
energy resolution for photons is σ/E = 4%/E^{1/4} (E in GeV)
Charged particle coordinate system
3 layers of coaxial cylindrical 2-d wire proportional chambers in the center of the detector
solid angle coverage is 80% of 4π sr
angular resolution is 0.5° in the azimuthal and 1.5° in the polar direction
surrounded by the 5-mm thick plastic scintillation counter for trigger
Flat (shower) coordinate 2-d wire proportional chambers
2 layers of flat 2-d wire proportional chambers.
angular resolution is 2° in the azimuthal and 3.5° in the polar direction for 0.5 GeV photons
Iron absorber & anti-coincidence counters
The electromagnetic calorimeter is covered by the 10-cm thick iron absorber and plastic scintillation anti-coincidence counters.
Results
Data collected with the ND experiment corresponds to an integrated luminosity of 19 pb−1.
Results of the experiments with ND are presented in the references and are included in the PDG Review.
See also
References
External links
Budker Institute of Nuclear Physics (BINP)
VEPP-2M
NOVOSIBIRSK-ND experiment record on INSPIRE-HEP
Particle detectors
Particle experiments
Experimental particle physics
Particle physics facilities
Budker Institute of Nuclear Physics | ND experiment | [
"Physics",
"Technology",
"Engineering"
] | 610 | [
"Particle detectors",
"Measuring instruments",
"Experimental physics",
"Particle physics",
"Experimental particle physics"
] |
33,744,020 | https://en.wikipedia.org/wiki/AS/NZS%201200 | The AS/NZS 1200 standard is a joint Australian/New Zealand Standard, for the requirements of pressure equipment which aims to promote safety and uniformity throughout Australia and New Zealand.
History
This Standard was originally published in 1931 as CB1 - SAA Boiler Code. It was subsequently issued in 1963 and 1967. In 1972, the Standard was issued under the designation AS 1200. The 2nd, 3rd and 4th editions were published in 1978, 1981 and 1988 respectively. In 1994 the Standard was jointly revised and issued under the designation AS/NZS 1200. The latest, 6th edition was published in 2000.
Abstract
This Standard is a 'parent' document which sets out basic requirements and good practice for the design, materials, manufacture, examination, testing, installation, conformity assessment, commissioning, operation, inspection, maintenance, repair, alteration and disposal of pressure equipment (boilers, pressure vessels and pressure piping) but excluding gas cylinders, blast furnaces, pipelines, fire extinguishers, storage tanks to name a few.
As a 'parent' document this Standard specifies specific requirements to pressure equipment by making reference to a range of Australian, New Zealand and other Standards.
Requirements and Compliance
Equipment to be used in Australia or New Zealand must meet the following requirements:
Compliance with the Standards listed in this Standard, including international Standards, or those equivalently agreed between all parties.
Compliance with all applicable legal jurisdictions.
Compliance with the contract documents.
When a selected Standard (local or international) listed in this Standard is used, it shall be used in its entirety, except where compliance with more appropriate requirements of other Standards is agreed. Irrespective of the Standard used, the Standard should be regarded as setting only minimum requirements and is not necessarily exhaustive; judgement therefore needs to be exercised to ensure all relevant matters are covered.
Pressure Equipment Standards
AS/NZS 1200 is a 'parent' Standard that specifies the requirements and Standards specific to pressure equipment. The following provides a sample of some of the Standards applicable to pressure equipment under this Standard.
Conformity Assessment
Australia
National
Pressure equipment is to be in accordance with AS/NZS 1200 per the Australian Government legislation as stipulated in Regulations 4.05(2)(d) and 4.51 (4)(b) of the Occupational Health and Safety (Safety Standards) Regulations 1994.
Queensland
Some pressure equipment (excluding pressure piping) is regulated in the state of Queensland as per Schedule 4 (13) of the Workplace Health and Safety Regulation 2008. This Schedule refers to certain criteria being met per AS 4343, whose scope is to classify hazard levels of pressure equipment to AS/NZS 1200.
New South Wales
Pressure equipment is regulated in the state of New South Wales as per Clause 94 (a) of the Occupational Health and Safety Regulation 2001. This Clause refers to AS 4343 and AS 1210, which are main Standards referenced in the 'parent' Standard AS/NZS 1200.
New Zealand
Pressure equipment is to be in accordance with AS/NZS 1200 under New Zealand legislation as stipulated in section 3.4.1(1) of the Approved Code of Practice (ACOP) for Pressure Equipment (Excluding Boilers). This ACOP supports the requirements of the Health and Safety in Employment (Pressure Equipment, Cranes and Passenger Ropeways) Regulations 1999.
References
Standards of Australia and New Zealand
Pressure vessels | AS/NZS 1200 | [
"Physics",
"Chemistry",
"Engineering"
] | 676 | [
"Structural engineering",
"Chemical equipment",
"Physical systems",
"Hydraulics",
"Pressure vessels"
] |
33,744,682 | https://en.wikipedia.org/wiki/Kinematic%20wave | In gravity and pressure driven fluid dynamical and geophysical mass flows such as ocean waves, avalanches, debris flows, mud flows, flash floods, etc., kinematic waves are important mathematical tools to understand the basic features of the associated wave phenomena.
These waves are also applied to model the motion of highway traffic flows.
In these flows, mass and momentum equations can be combined to yield a kinematic wave equation. Depending on the flow configuration, the kinematic wave can be linear or non-linear, according to whether the wave phase speed is a constant or a variable. A kinematic wave can be described by a simple partial differential equation with a single unknown field variable (e.g., the flow or wave height, h) in terms of the two independent variables, namely the time (t) and the space (x), with some parameters (coefficients) containing information about the physics and geometry of the flow. In general, the wave can be advecting and diffusing. However, in simple situations, the kinematic wave is mainly advecting.
Kinematic wave for debris flow
Non-linear kinematic waves for debris flow can be written as follows, with complex non-linear coefficients:

$$\frac{\partial h}{\partial t} + c\,\frac{\partial h}{\partial x} = \frac{\partial}{\partial x}\!\left(D\,\frac{\partial h}{\partial x}\right),$$

where $h$ is the debris flow height, $t$ is the time, $x$ is the downstream channel position, $c = c\left(h, \partial p/\partial x\right)$ is the pressure-gradient- and depth-dependent nonlinear variable wave speed, and $D = D\left(h, \partial p/\partial x\right)$ is a flow-height- and pressure-gradient-dependent variable diffusion term.
This equation can also be written in the conservative form:

$$\frac{\partial h}{\partial t} + \frac{\partial F}{\partial x} = 0,$$

where $F = F\left(h, \partial p/\partial x\right)$ is the generalized flux that depends on several physical and geometrical parameters of the flow, the flow height and the hydraulic pressure gradient. For $F = h^2/2$ this equation reduces to Burgers' equation.
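As a minimal numerical illustration (not taken from the references; a constant wave speed is assumed, whereas real debris-flow models use the nonlinear, depth-dependent coefficients described above), a first-order upwind scheme advects a flow-height pulse downstream:

```python
import numpy as np

# Minimal sketch: linear kinematic wave  h_t + c h_x = 0  solved with a
# first-order upwind scheme. A constant wave speed c is an assumption;
# debris-flow models use nonlinear c(h, dp/dx) and a diffusion term.
c, L, nx = 2.0, 100.0, 400         # wave speed [m/s], domain [m], grid points
dx = L / nx
dt = 0.8 * dx / c                  # CFL-stable time step
x = np.arange(nx) * dx
h = np.exp(-0.05 * (x - 20.0)**2)  # initial flow-height pulse centred at 20 m

for _ in range(200):
    # Upwind difference (c > 0): information travels downstream only.
    h[1:] -= c * dt / dx * (h[1:] - h[:-1])

print("pulse peak has advected to x =", x[np.argmax(h)], "m")  # ~60 m
```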
References
Further reading
Fluid dynamics
Oceanographical terminology | Kinematic wave | [
"Chemistry",
"Engineering"
] | 350 | [
"Piping",
"Chemical engineering",
"Fluid dynamics"
] |
33,746,146 | https://en.wikipedia.org/wiki/Partnership%20for%20Observation%20of%20the%20Global%20Ocean | The Partnership for Observation of the Global Ocean (POGO), which was founded in 1999, is a consortium of major oceanographic institutions around the world, represented by their directors. POGO's goal is to promote global operational oceanography, the implementation of a Global Ocean Observing System, and the importance of ocean observations for society. As of 2023, POGO has 56 member organizations. The current chair is Captain Francisco Arias Isaza (INVEMAR, Colombia).
It is supported from annual dues subscribed by the members, as well as by grants from charitable foundations. The establishing funds for POGO were provided by the Alfred P. Sloan and Richard Lounsbery foundations.
POGO provides a forum (at the annual meetings and intersessionally) for members to meet with peers and senior officials of partner organisations to discuss issues of mutual concern. To ease the shortage of trained observers of the ocean in developing countries, it has developed a suite of programmes in capacity building, and works with relevant partner organisations in the marine field (SCOR, IOC, GOOS, GEO). It engages in outreach activities to the general public such as hosting exhibits at international events such as Expo 2012 Yeosu Korea, UNFCCC COP Meetings and AGU-ASLO-TOS Ocean Sciences Meetings.
History
In March 1999, the Directors of Scripps Institution of Oceanography, Woods Hole Oceanographic Institution, and the Southampton Oceanography Centre in the UK, convened a planning meeting in the headquarters of the Intergovernmental Oceanographic Commission of the United Nations Education, Science and Culture Organisation (IOC-UNESCO) in Paris. This meeting confirmed the value of creating a new partnership and defined the initial mission statement and terms of reference.
Scripps Institution of Oceanography hosted the first formal meeting in early December 1999, which included senior officials from 17 institutions in 12 countries (as well as representatives of the IOC, the Scientific Committee for Oceanic Research (SCOR) of the International Council for Science (ICSU), the Committee on Earth Observation Satellites (CEOS) and several international scientific programs. At this meeting, there was agreement on an initial work plan, including development of an advocacy plan for observing systems; participation in processes to secure governmental commitments to fund ocean observing systems; a data interchange demonstration pilot project; and establishment of a clearinghouse for information exchange among POGO members, as well as the broader community.
POGO Capacity Building
The Nippon Foundation - POGO Centre of Excellence in Observational Oceanography (currently hosted by the Ocean Frontier Institute, Canada). An annual program in which ten scientists from developing countries are supported to study for ten months in an intensive programme related to ocean observations.
The POGO-SCOR Visiting Fellowship programme, for scientists from developing countries to spend up to three months in a major oceanographic institution. The programme is carried out in conjunction with POGO's partner organisation SCOR.
The NF-POGO Ocean Training Partnership programme, under which early-career scientists participate in major oceanographic cruises, and spend time at a participating major oceanographic institute before and after the cruise to experience cruise preparation and data analysis. (2008+).
Under POGO capacity-building schemes, roughly 1,300 early-career scientists from over 90 countries have received advanced training. Former scholars or alumni of NF-POGO training become members of the NF-POGO Alumni Network for Oceans (NANO).
Activities
In its São Paulo Declaration of 2001, POGO drew attention to the world imbalance between the Northern and Southern Hemispheres in the capacity to observe the oceans, resulting in its establishment of a capacity-building programme (above). It also underlined the relative scarcity of ocean observations in the Southern Hemisphere compared with the Northern Hemisphere, and POGO member JAMSTEC organised a circumnavigation of the Southern Hemisphere, the BEAGLE Expedition, using its ship RV Mirai, at a cost estimated to be around $35M. More recently, selected Antarctic Expeditions of the Alfred Wegener Institute have been labelled POGO Expeditions. POGO also supports the Southern Ocean Observing System.
Around the time POGO was being started, the Argo programme was also beginning.
The GEO Secretariat was established during the early years of POGO. Oceans did not appear among the nine societal-benefit areas around which GEO was structured at that time. POGO advocated for a greater prominence of ocean observing activities within GEO, which led to the creation of a new Ocean Task (SB01, Oceans and Society: Blue Planet) in the 2012-2015 GEO Work Plan. This was expanded and further developed into what is now known as the GEO Blue Planet Initiative.
POGO contributed to OceanObs'09 in Venice in 2009, as well as participating in the post-Venice Framework for Ocean Observing Committee. POGO was also a sponsor of the OceanObs'19 conference in Honolulu, USA.
POGO member institutions have been driving the establishment of OceanSites (coordinated, deep-ocean, multi-disciplinary time-series reference sites), which has made significant progress in recent years.
The idea for an "International Quiet Ocean Experiment" first came up during one of the POGO Annual Meetings. With seed funding from the Sloan Foundation, the idea was further developed in partnership with SCOR. An Open Science Meeting was convened under the auspices of SCOR and POGO at IOC-UNESCO, Paris, in August–September 2011, to develop a Science Plan for the project, which could last up to ten years.
Members
POGO currently has 56 member institutions across 31 countries. A full list of current members can be found on the POGO website.
Secretariat
The Secretariat is hosted by Plymouth Marine Laboratory in the UK, with a satellite office hosted by the University of Algarve in Portugal.
References
External links
GEO Blue Planet
NF-POGO Alumni Network for Oceans
Oceanography
International scientific organizations | Partnership for Observation of the Global Ocean | [
"Physics",
"Environmental_science"
] | 1,195 | [
"Oceanography",
"Hydrology",
"Applied and interdisciplinary physics"
] |
43,444,725 | https://en.wikipedia.org/wiki/E.%20T.%20S.%20Appleyard | Edgar Thomas Snowden Appleyard (14 June 1904 – 15 June 1939) was a physicist and pioneer in the fields of thin films and superconductivity.
Biography
He was born on 14 June 1904, the son of Edgar Snowden Appleyard and Elizabeth Whitehead of Huddersfield, England.
Appleyard attended Almondbury Grammar School and was then admitted to Cambridge as a King's College scholar. In the Natural Sciences Tripos he selected physics as one of the key science subjects on which to focus. He spent several years on research in the Cavendish Laboratory. In 1929, at the University of Bristol's H.H. Wills Physics Laboratory, Appleyard received an appointment to a George Wills research associateship. He held a Rockefeller fellowship at the University of Chicago for the 1931–1932 academic year.
Appleyard died on 15 June 1939 from injuries caused by a fall.
Noteworthy collaborators
H. W. B. Skinner
John J. Hopfield
A.C.B. Lovell
A. D. Misener
Heinz London
Research interests
Excitation of polarized light
Preparation of Schumann plates
Thin metal films: Conductivity, Resistance
Superconductivity
Select publications
Appleyard, E. T. S. "Electronic Structure of the a-X Band System of N2." Physical Review 41.2 (1932): 254.
Appleyard, E. T. S. "Discussion of the papers by Finch, Appleyard and Lennard-Jones." Proceedings of the Physical Society 49.4S (1937): 151.
References
1904 births
1939 deaths
Alumni of King's College, Cambridge
Cavendish Laboratory
People associated with the University of Bristol
University of Chicago fellows
English physicists
Superconductivity
Accidental deaths from falls | E. T. S. Appleyard | [
"Physics",
"Materials_science",
"Engineering"
] | 352 | [
"Physical quantities",
"Superconductivity",
"Materials science",
"Condensed matter physics",
"Electrical resistance and conductance"
] |
43,447,946 | https://en.wikipedia.org/wiki/Scanning%20Habitable%20Environments%20with%20Raman%20and%20Luminescence%20for%20Organics%20and%20Chemicals | Scanning Habitable Environments with Raman and Luminescence for Organics and Chemicals (SHERLOC) is an ultraviolet Raman spectrometer that uses fine-scale imaging and an ultraviolet (UV) laser to determine fine-scale mineralogy, and detect organic compounds designed for the Perseverance rover as part of the Mars 2020 mission. It was constructed at the Jet Propulsion Laboratory with major subsystems being delivered from Malin Space Science Systems and Los Alamos National Laboratory.
SHERLOC has a calibration target with possible Mars suit materials, and it will measure how they change over time in the Martian surface environment.
Goals
According to a 2017 Universities Space Research Association (USRA) report:
Construction
There are three locations on the rover where SHERLOC components are located. The SHERLOC Turret Assembly (STA) is mounted at the end of the rover arm. The STA contains spectroscopy and imaging components. The SHERLOC Body Assembly (SBA) is located on the rover chassis and acts as the interface between the STA and the Mars 2020 rover. The SBA deals with command and data handling, along with power distribution. The SHERLOC Calibration Target (SCT) is located on the front of the rover chassis and hold spectral standards.
SHERLOC consists of both imaging and spectroscopic elements. It has two imaging components consisting of heritage hardware from the MSL MAHLI instrument. The Wide Angle Topographic Sensor for Operations and eNgineering (WATSON) is a built-to-print re-flight that can generate color images over multiple scales. The other, the Autofocus Context Imager (ACI), acts as the mechanism that allows the instrument to get a contextual image of a sample and to autofocus the laser spot for the spectroscopic part of the SHERLOC investigation.
For spectroscopy, it utilizes a NeCu laser to generate UV photons (248.6 nm) which can generate characteristic Raman and fluorescence photons from a scientifically interesting sample. The deep UV laser is co-boresighted to a context imager and integrated into an autofocusing/scanning optical system that allows correlation of spectral signatures to surface textures, morphology and visible features. The context imager has a spatial resolution of 30 μm and is currently designed to operate in the 400-500 nm wavelength range.
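As a note on the underlying arithmetic (a generic Raman relation, not a SHERLOC-specific algorithm), the wavelength of a Stokes-scattered photon follows from the excitation wavelength and the Raman shift expressed in wavenumbers. The 1085 cm−1 shift used below, typical of a carbonate band, is purely illustrative.

```python
# Minimal sketch: scattered wavelength for 248.6 nm deep-UV excitation.
def raman_scattered_wavelength_nm(laser_nm: float, shift_cm1: float) -> float:
    laser_cm1 = 1e7 / laser_nm            # excitation energy in wavenumbers
    return 1e7 / (laser_cm1 - shift_cm1)  # Stokes-shifted wavelength

print(raman_scattered_wavelength_nm(248.6, 1085.0))  # ~255.5 nm
```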
Results from Mars
Over the course of three years, SHERLOC and WATSON have successfully collected spectra and images of minerals and organics on the surface of Mars. Using WATSON and ACI images, researchers confirmed that the Jezero Crater floor consists of aqueously altered mafic material with various igneous origins. In addition, WATSON has been used to collect selfies of the Perseverance rover and the Ingenuity helicopter. The rover also successfully sealed and stored the first two rock samples from Mars. From these data we now know that the rocks derived from a volcanic environment, and that liquid water present in Mars's past formed salts that SHERLOC has detected.
See also
Composition of Mars
Curiosity rover
Exploration of Mars
Geology of Mars
List of rocks on Mars
Mars Science Laboratory
MOXIE
PIXL
Scientific information from the Mars Exploration Rover mission
Timeline of Mars Science Laboratory
References
External links
Mars 2020 Mission - Home Page - NASA/JPL
Scientific instruments
Mars 2020 instruments
Raman spectroscopy
Spectrometers | Scanning Habitable Environments with Raman and Luminescence for Organics and Chemicals | [
"Physics",
"Chemistry",
"Technology",
"Engineering"
] | 688 | [
"Spectrum (physical sciences)",
"Scientific instruments",
"Measuring instruments",
"Spectrometers",
"Spectroscopy"
] |
43,450,635 | https://en.wikipedia.org/wiki/Refractive%20index%20and%20extinction%20coefficient%20of%20thin%20film%20materials | A. R. Forouhi and I. Bloomer deduced dispersion equations for the refractive index, n, and extinction coefficient, k, which were published in 1986 and 1988. The 1986 publication relates to amorphous materials, while the 1988 publication relates to crystalline. Subsequently, in 1991, their work was included as a chapter in The Handbook of Optical Constants. The Forouhi–Bloomer dispersion equations describe how photons of varying energies interact with thin films. When used with a spectroscopic reflectometry tool, the Forouhi–Bloomer dispersion equations specify n and k for amorphous and crystalline materials as a function of photon energy E. Values of n and k as a function of photon energy, E, are referred to as the spectra of n and k, which can also be expressed as functions of the wavelength of light, λ, since E = hc/λ. The symbol h is the Planck constant and c, the speed of light in vacuum. Together, n and k are often referred to as the "optical constants" of a material (though they are not constants since their values depend on photon energy).
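Since the discussion moves freely between photon energy E and wavelength λ, the conversion E = hc/λ is used throughout; a one-line version (with hc expressed in eV·nm, a standard physical constant) reads:

```python
# E = hc / lambda, with hc ≈ 1239.84 eV·nm.
def photon_energy_eV(wavelength_nm: float) -> float:
    return 1239.84 / wavelength_nm

print(photon_energy_eV(620.0))  # ~2.0 eV, a visible (red) photon
```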
The derivation of the Forouhi–Bloomer dispersion equations is based on obtaining an expression for k as a function of photon energy, symbolically written as k(E), starting from first principles quantum mechanics and solid state physics. An expression for n as a function of photon energy, symbolically written as n(E), is then determined from the expression for k(E) in accordance to the Kramers–Kronig relations which states that n(E) is the Hilbert transform of k(E).
The Forouhi–Bloomer dispersion equations for n(E) and k(E) of amorphous materials are given as:

$$k(E) = \frac{A\,(E - E_g)^2}{E^2 - B\,E + C}, \qquad n(E) = n(\infty) + \frac{B_0\,E + C_0}{E^2 - B\,E + C}.$$
The five parameters A, B, C, Eg, and n(∞) each have physical significance. Eg is the optical energy band gap of the material. A, B, and C depend on the band structure of the material. They are positive constants such that 4C − B^2 > 0. Finally, n(∞), a constant greater than unity, represents the value of n at E = ∞. The parameters B0 and C0 in the equation for n(E) are not independent parameters, but depend on A, B, C, and Eg. They are given by:

$$B_0 = \frac{A}{Q}\left(-\frac{B^2}{2} + E_g B - E_g^2 + C\right), \qquad C_0 = \frac{A}{Q}\left(\left(E_g^2 + C\right)\frac{B}{2} - 2 E_g C\right),$$

where

$$Q = \frac{1}{2}\sqrt{4C - B^2}.$$
Thus, for amorphous materials, a total of five parameters are sufficient to fully describe the dependence of both n and k on photon energy, E.
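A minimal numerical sketch of evaluating these amorphous-material equations follows (Python; the parameter values are arbitrary illustrations satisfying 4C − B^2 > 0, not constants for any real film):

```python
import numpy as np

def forouhi_bloomer_amorphous(E, A, B, C, Eg, n_inf):
    """Evaluate the amorphous Forouhi-Bloomer n(E) and k(E).

    E is the photon energy in eV; A, B, C, Eg, n_inf are the five
    model parameters, with 4C - B**2 > 0.
    """
    Q = 0.5 * np.sqrt(4.0 * C - B**2)
    B0 = (A / Q) * (-B**2 / 2.0 + Eg * B - Eg**2 + C)
    C0 = (A / Q) * ((Eg**2 + C) * B / 2.0 - 2.0 * Eg * C)
    denom = E**2 - B * E + C
    return n_inf + (B0 * E + C0) / denom, A * (E - Eg)**2 / denom

# Illustrative (made-up) parameters over a 190-1000 nm spectral range.
E = np.linspace(1.24, 6.53, 200)   # eV; E = 1239.84 / lambda(nm)
n, k = forouhi_bloomer_amorphous(E, A=0.06, B=7.0, C=13.0, Eg=1.6, n_inf=1.5)
print(n[0], k[0], n[-1], k[-1])
```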
For crystalline materials, which have multiple peaks in their n and k spectra, the Forouhi–Bloomer dispersion equations can be extended as follows:

$$k(E) = \sum_{i=1}^{q} \frac{A_i\,(E - E_{g_i})^2}{E^2 - B_i\,E + C_i}, \qquad n(E) = n(\infty) + \sum_{i=1}^{q} \frac{B_{0_i}\,E + C_{0_i}}{E^2 - B_i\,E + C_i}.$$
The number of terms in each sum, q, is equal to the number of peaks in the n and k spectra of the material. Every term in the sum has its own values of the parameters A, B, C, Eg, as well as its own values of B0 and C0. Analogous to the amorphous case, the terms all have physical significance.
Characterizing thin films
The refractive index (n) and extinction coefficient (k) are related to the interaction between a material and incident light, and are associated with refraction and absorption (respectively). They can be considered as the "fingerprint of the material". Thin film material coatings on various substrates provide important functionalities for the microfabrication industry, and the n, k, as well as the thickness, t, of these thin film constituents must be measured and controlled to allow for repeatable manufacturing.
The Forouhi–Bloomer dispersion equations for n and k were originally expected to apply to semiconductors and dielectrics, whether in amorphous, polycrystalline, or crystalline states. However, they have been shown to describe the n and k spectra of transparent conductors, as well as metallic compounds. The formalism for crystalline materials was found to also apply to polymers, which consist of long chains of molecules that do not form a crystallographic structure in the classical sense.
Other dispersion models that can be used to derive n and k, such as the Tauc–Lorentz model, can be found in the literature. Two well-known models—Cauchy and Sellmeier—provide empirical expressions for n valid over a limited measurement range, and are only useful for non-absorbing films where k=0. Consequently, the Forouhi–Bloomer formulation has been used for measuring thin films in various applications.
In the following discussions, all variables of photon energy, E, will be described in terms of wavelength of light, λ, since experimentally variables involving thin films are typically measured over a spectrum of wavelengths. The n and k spectra of a thin film cannot be measured directly, but must be determined indirectly from measurable quantities that depend on them. Spectroscopic reflectance, R(λ), is one such measurable quantity. Another, is spectroscopic transmittance, T(λ), applicable when the substrate is transparent. Spectroscopic reflectance of a thin film on a substrate represents the ratio of the intensity of light reflected from the sample to the intensity of incident light, measured over a range of wavelengths, whereas spectroscopic transmittance, T(λ), represents the ratio of the intensity of light transmitted through the sample to the intensity of incident light, measured over a range of wavelengths; typically, there will also be a reflected signal, R(λ), accompanying T(λ).
The measurable quantities, R(λ) and T(λ) depend not only on n(λ) and k(λ) of the film, but also on film thickness, t, and n(λ) and k(λ) of the substrate. For a silicon substrate, the n(λ) and k(λ) values are known and are taken as a given input. The challenge of characterizing thin films involves extracting t, n(λ) and k(λ) of the film from the measurement of R(λ) and/or T(λ). This can be achieved by combining the Forouhi–Bloomer dispersion equations for n(λ) and k(λ) with the Fresnel equations for the reflection and transmission of light at an interface to obtain theoretical, physically valid, expressions for reflectance and transmittance. In so doing, the challenge is reduced to extracting the five parameters A, B, C, Eg, and n(∞) that constitute n(λ) and k(λ), along with film thickness, t, by using a nonlinear least squares regression analysis fitting procedure. The fitting procedure entails an iterative improvement of the values of A, B, C, Eg, n(∞), t, in order to reduce the sum of the squares of the errors between the theoretical R(λ) or theoretical T(λ) and the measured spectrum of R(λ) or T(λ).
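The regression step can be sketched as follows. For brevity this toy version fits the normal-incidence Fresnel reflectance of a semi-infinite (bulk) medium rather than a full thin-film stack, so the film thickness t drops out; the "measured" data are synthetic and every parameter value is an illustrative assumption.

```python
import numpy as np
from scipy.optimize import least_squares

def fb_nk(E, A, B, C, Eg, n_inf):
    # Amorphous Forouhi-Bloomer n(E) and k(E); requires 4C - B**2 > 0.
    Q = 0.5 * np.sqrt(4.0 * C - B**2)
    B0 = (A / Q) * (-B**2 / 2.0 + Eg * B - Eg**2 + C)
    C0 = (A / Q) * ((Eg**2 + C) * B / 2.0 - 2.0 * Eg * C)
    d = E**2 - B * E + C
    return n_inf + (B0 * E + C0) / d, A * (E - Eg)**2 / d

def model_R(E, p):
    # Normal-incidence Fresnel reflectance of a semi-infinite medium,
    # standing in for the full thin-film stack model described in the text.
    n, k = fb_nk(E, *p)
    return ((n - 1.0)**2 + k**2) / ((n + 1.0)**2 + k**2)

E = np.linspace(1.3, 6.5, 300)                 # photon energies in eV
p_true = [0.08, 7.5, 18.0, 1.8, 1.6]           # "unknown" film parameters
rng = np.random.default_rng(0)
R_meas = model_R(E, p_true) + 1e-4 * rng.standard_normal(E.size)

# Bounds keep 4C - B**2 positive throughout the iterative refinement.
fit = least_squares(lambda p: model_R(E, p) - R_meas,
                    x0=[0.05, 7.0, 17.0, 1.5, 1.5],
                    bounds=([1e-3, 0.0, 16.5, 0.1, 1.0],
                            [1.0, 8.0, 50.0, 3.0, 3.0]))
print("recovered A, B, C, Eg, n(inf):", fit.x)
```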
Besides spectroscopic reflectance and transmittance, spectroscopic ellipsometry can also be used in an analogous way to characterize thin films and determine t, n(λ) and k(λ).
Measurement examples
The following examples show the versatility of using the Forouhi–Bloomer dispersion equations to characterize thin films using a tool based on near-normal incident spectroscopic reflectance. Near-normal spectroscopic transmittance is also used when the substrate is transparent. The n(λ) and k(λ) spectra of each film are obtained along with film thickness, over a wide range of wavelengths from deep ultraviolet to near infrared wavelengths (190–1000 nm).
In the following examples, the notation for theoretical and measured reflectance in the spectral plots is expressed as "R-theor" and "R-meas", respectively.
Below are schematics depicting the thin film measurement process:
The Forouhi–Bloomer dispersion equations in combination with Rigorous Coupled-Wave Analysis (RCWA) have also been used to obtain detailed profile information (depth, CD, sidewall angle) of trench structures. In order to extract structure information, polarized broadband reflectance data, Rs and Rp, must be collected over a large wavelength range from a periodic structure (grating), and then analyzed with a model that incorporates Forouhi–Bloomer dispersion equations and RCWA. Inputs into the model include grating pitch and n and k spectra of all materials within the structure, while outputs can include Depth, CDs at multiple locations, and even sidewall angle. The n and k spectra of such materials can be obtained in accordance with the methodology described in this section for thin film measurements.
Below are schematics depicting the measurement process for trench structures. Examples of trench measurements then follow.
Example 1: Amorphous silicon on oxidized silicon substrate (a-Si/SiO2/Si-Sub)
Example 1 shows one broad maximum in the n(λ) and k(λ) spectra of the a-Si film, as is expected for amorphous materials. As a material transitions toward crystallinity, the broad maximum gives way to several sharper peaks in its n(λ) and k(λ) spectra, as demonstrated in the graphics.
When the measurement involves two or more films in a stack of films, the theoretical expression for reflectance must be expanded to include the n(λ) and k(λ) spectra, plus thickness, t, of each film. However, the regression may not converge to unique values of the parameters, due to the non-linear nature of the expression for reflectance. So it is helpful to eliminate some of the unknowns. For example, the n(λ) and k(λ) spectra of one or more of the films may be known from the literature or previous measurements, and held fixed (not allowed to vary) during the regression. To obtain the results shown in Example 1, the n(λ) and k(λ) spectra of the SiO2 layer were held fixed, while the other parameters (n(λ) and k(λ) of a-Si, plus the thicknesses of both a-Si and SiO2) were allowed to vary.
Example 2: 248 nm photoresist on silicon substrate (PR/Si-Sub)
Polymers such as photoresist consist of long chains of molecules which do not form a crystallographic structure in the classic sense. However, their n(λ) and k(λ) spectra exhibit several sharp peaks rather than the broad maximum expected for non-crystalline materials. Thus, the measurement results for a polymer are based on the Forouhi–Bloomer formulation for crystalline materials. Most of the structure in the n(λ) and k(λ) spectra occurs in the deep UV wavelength range, and thus, to properly characterize a film of this nature, the measured reflectance data in the deep UV range must be accurate.
The figure shows a measurement example of a photoresist (polymer) material used for 248 nm micro-lithography. Six terms were used in the Forouhi–Bloomer equations for crystalline materials to fit the data and achieve the results.
Example 3: Indium tin oxide on glass substrate (ITO/Glass-Sub)
Indium tin oxide (ITO) is a conducting material with the unusual property that it is transparent, so it is widely used in the flat panel display industry. Reflectance and transmittance measurements of the uncoated glass substrate were needed in order to determine the previously unknown n(λ) and k(λ) spectra of the glass. The reflectance and transmittance of ITO deposited on the same glass substrate were then measured simultaneously, and analyzed using the Forouhi–Bloomer equations.
As expected, the k(λ) spectrum of ITO is zero in the visible wavelength range, since ITO is transparent. The behavior of the k(λ) spectrum of ITO in the near-infrared (NIR) and infrared (IR) wavelength ranges resembles that of a metal: non-zero in the NIR range of 750–1000 nm (difficult to discern in the graphics since its values are very small) and reaching a maximum value in the IR range (λ > 1000 nm). The average k value of the ITO film in the NIR and IR range is 0.05.
Example 4: Multi-spectral analysis of germanium (40%)–selenium (60%) thin films
When dealing with complex films, in some instances the parameters cannot be resolved uniquely. To constrain the solution to a set of unique values, a technique involving multi-spectral analysis can be used. In the simplest case, this entails depositing the film on two different substrates and then simultaneously analyzing the results using the Forouhi–Bloomer dispersion equations.
For example, a single measurement of reflectance of Ge40Se60 on silicon in the 190–1000 nm range does not provide unique n(λ) and k(λ) spectra of the film. However, this problem can be solved by depositing the same Ge40Se60 film on another substrate, in this case oxidized silicon, and then simultaneously analyzing the measured reflectance data to determine:
Thickness of the Ge40Se60 film on the silicon substrate as 34.5 nm,
Thickness of the Ge40Se60 film on the oxidized silicon substrate as 33.6 nm,
Thickness of SiO2 (with n and k spectra of SiO2 held fixed), and
n and k spectra of the Ge40Se60 film in the 190–1000 nm range.
Example 5: Complex trench structure
The trench structure depicted in the adjacent diagram repeats itself in 160 nm intervals, that is, it has a pitch of 160 nm. The trench is composed of poly-silicon, SiO2 and Si3N4 layers.
Accurate n and k values of these materials are necessary in order to analyze the structure. Often a blanket area on the trench sample with the film of interest is present for the measurement. In this example, the reflectance spectrum of the poly-silicon was measured on a blanket area containing the poly-silicon, from which its n and k spectra were determined in accordance with the methodology described in this article that uses the Forouhi–Bloomer dispersion equations. Fixed tables of n and k values were used for the SiO2 and Si3N4 films.
Combining the n and k spectra of the films with Rigorous Coupled-Wave Analysis (RCWA), the critical parameters of the structure (depth, CDs at multiple locations, and sidewall angle) were determined.
References
Light
Thin films
Semiconductors
Refraction
Metrology | Refractive index and extinction coefficient of thin film materials | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 3,024 | [
"Physical phenomena",
"Refraction",
"Physical quantities",
"Electromagnetic spectrum",
"Materials",
"Nanotechnology",
"Materials science",
"Electronic engineering",
"Planes (geometry)",
"Thin films",
"Electrical resistance and conductance",
"Spectrum (physical sciences)",
"Semiconductors",
... |
43,451,826 | https://en.wikipedia.org/wiki/Hybrid%20plasmonic%20waveguide | A hybrid plasmonic waveguide is an optical waveguide that achieves strong light confinement by coupling the light guided by a dielectric waveguide and a plasmonic waveguide. It is formed by separating a medium of high refractive index (usually silicon) from a metal surface (usually gold or silver) by a small gap.
History
Dielectric waveguides use total internal reflection to confine light in a high index region. They can guide light over a long distance with very low loss, but their light confinement ability is limited by diffraction. Plasmonic waveguides, on the other hand, use surface plasmons to confine light near a metal surface. The light confinement ability of plasmonic waveguides is not limited by diffraction, and, as a result, they can confine light to very small volumes. However, these guides suffer significant propagation loss because of the presence of metal as part of the guiding structure. The hybrid plasmonic waveguide was designed to combine these two wave-guiding schemes and achieve high light confinement without suffering large loss. Many variations of this structure have since been proposed to improve the light confinement ability or to reduce fabrication complexity.
Principle of operation
The operation of hybrid plasmonic waveguides can be explained using the concept of mode coupling. The most commonly used hybrid plasmonic waveguide consists of a silicon nanowire placed very near a metal surface, separated by a low index region. The silicon waveguide supports a dielectric waveguide mode, which is mostly confined in the silicon. The metal surface supports a surface plasmon mode, which is confined near the metal surface. When these two structures are brought close to each other, the dielectric waveguide mode supported by the silicon nanowire couples to the surface plasmon mode supported by the metal surface. As a result of this mode coupling, light becomes highly confined in the region between the metal and the high index region (the silicon nanowire).
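This coupling picture can be made concrete with a generic two-mode coupled-mode model (a textbook construction, not a calculation from the literature cited here; the effective indices n_d and n_sp and the coupling strength kappa are arbitrary illustrative values):

```python
import numpy as np

# The hybrid modes are eigenvectors of a 2x2 system that mixes a
# dielectric waveguide mode (index n_d) with a surface plasmon mode
# (index n_sp) through a coupling strength kappa.
n_d, n_sp, kappa = 2.40, 2.20, 0.08

H = np.array([[n_d,   kappa],
              [kappa, n_sp]])
vals, vecs = np.linalg.eigh(H)

for n_eff, v in zip(vals, vecs.T):
    frac_dielectric = v[0]**2 / (v[0]**2 + v[1]**2)
    print(f"hybrid mode: n_eff = {n_eff:.4f}, "
          f"dielectric fraction = {frac_dielectric:.2f}")
```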
Applications
Hybrid plasmonic waveguides provide large confinement of light at a lower loss compared to many previously reported plasmonic waveguides. They are also compatible with silicon photonics technology, and can be integrated with silicon waveguides on the same chip. Similar to a slot waveguide, they can also confine light in the low index medium. The combination of these attractive features has stimulated worldwide research activity on the application of this new guiding scheme. Some notable examples of such applications are compact lasers, electro-optic modulators, biosensors, polarization control devices, and thermo-optic switches.
References
Photonics
Plasmonics | Hybrid plasmonic waveguide | [
"Physics",
"Chemistry",
"Materials_science"
] | 565 | [
"Plasmonics",
"Surface science",
"Condensed matter physics",
"Nanotechnology",
"Solid state engineering"
] |
28,346,382 | https://en.wikipedia.org/wiki/Organelle%20biogenesis | Organelle biogenesis is the biogenesis, or creation, of cellular organelles in cells. Organelle biogenesis includes the process by which cellular organelles are split between daughter cells during mitosis; this process is called organelle inheritance.
Discovery
Following the discovery of cellular organelles in the nineteenth century, little was known about their function and synthesis until the development of electron microscopy and subcellular fractionation in the twentieth century. This allowed experiments on the function, structure, and biogenesis of these organelles to commence.
Mechanisms of protein sorting and retrieval have been found to give organelles their characteristic composition. It is known that cellular organelles can come from preexisting organelles; however, it is a subject of controversy whether organelles can be created without a preexisting one.
Process
Several processes are known to have developed for organelle biogenesis. These can range from de novo synthesis to the copying of a template organelle; the formation of an organelle 'from scratch' and using a preexisting organelle as a template to manufacture an organelle, respectively. The distinct structures of each organelle are thought to be caused by the different mechanisms of the processes which create them and the proteins that they are made up of. Organelles may also be 'split' between two cells during the process of cellular division (known as organelle inheritance), where the organelle of the parent cell doubles in size and then splits with each half being delivered to their respective daughter cells.
The process of organelle biogenesis is known to be regulated by specialized transcription networks that modulate the expression of the genes that code for specific organellar proteins. In order for organelle biogenesis to be carried out properly, the specific genes coding for the organellar proteins must be transcribed properly and the translation of the resulting mRNA must be successful. In addition to this, the process requires the transfer of polypeptides to their site of function, guided by signaling peptides. If proteins are not directed to their respective sites of subcellular function, a defective organelle that fails to fulfill its tasks within the cell properly may result.
Several metabolic diseases are known to be caused by a fault in the process of organelle biogenesis. These may include mitochondrial biogenesis defects, peroxisome biogenesis disorders, and lysosomal storage disorders.
References
Biochemistry
Cell biology
Organelles | Organelle biogenesis | [
"Chemistry",
"Biology"
] | 482 | [
"Biochemistry",
"Cell biology",
"nan"
] |
29,811,008 | https://en.wikipedia.org/wiki/Transition%20metal%20oxo%20complex | A transition metal oxo complex is a coordination complex containing an oxo ligand. Formally O2–, an oxo ligand can be bound to one or more metal centers, i.e. it can exist as a terminal or (most commonly) as bridging ligands. Oxo ligands stabilize high oxidation states of a metal. They are also found in several metalloproteins, for example in molybdenum cofactors and in many iron-containing enzymes. One of the earliest synthetic compounds to incorporate an oxo ligand is potassium ferrate (K2FeO4), which was likely prepared by Georg E. Stahl in 1702.
Reactivity
Olation and acid-base reactions
A common reaction exhibited by metal-oxo compounds is olation, the condensation process that converts low molecular weight oxides to polymers with M-O-M linkages. Olation often begins with the deprotonation of a metal-hydroxo complex. It is the basis for mineralization and the precipitation of metal oxides. For the oxides of d0 metals, VV, NbV, TaV, MoVI, and WVI, the olation process affords polyoxometallates, a large class of molecular metal oxides.
Oxygen-atom transfer
Metal oxo complexes are intermediates in many metal-catalyzed oxidation reactions. Oxygen-atom transfer is a common reaction of particular interest in organic chemistry and biochemistry. Some metal-oxo complexes are capable of transferring their oxo ligand to organic substrates. One example of this type of reactivity is found in the enzyme superfamily of molybdenum oxotransferases.
In water oxidation catalysis, metal oxo complexes are intermediates in the conversion of water to O2.
Hydrogen-atom abstraction
Transition metal-oxo complexes are also capable of abstracting hydrogen atoms from strong C–H, N–H, and O–H bonds. Cytochrome P450 contains a high-valent iron-oxo species which is capable of abstracting hydrogen atoms from strong C–H bonds.
Molecular oxides
Some of the longest known and most widely used oxo compounds are oxidizing agents such as potassium permanganate (KMnO4) and osmium tetroxide (OsO4). Compounds such as these are widely used for converting alkenes to vicinal diols and alcohols to ketones or carboxylic acids. More selective or gentler oxidizing reagents include pyridinium chlorochromate (PCC) and pyridinium dichromate (PDC). Metal oxo species are capable of catalytic oxidations of various types, including asymmetric oxidations. Some metal-oxo complexes promote C-H bond activation, converting hydrocarbons to alcohols.
Metalloenzymes
Iron(IV)-oxo species
Iron(IV)-oxo compounds are intermediates in many biological oxidations:
Alpha-ketoglutarate-dependent hydroxylases activate O2 by oxidative decarboxylation of ketoglutarate, generating Fe(IV)=O centers, i.e. ferryl, that hydroxylate a variety of hydrocarbon substrates.
Cytochrome P450 enzymes, which use a heme cofactor, insert ferryl oxygen into saturated C–H bonds, epoxidize olefins, and oxidize aromatic groups.
Methane monooxygenase (MMO) oxidizes methane to methanol via oxygen atom transfer from an iron-oxo intermediate at its non-heme diiron center. Much effort is aimed at reproducing reactions with synthetic catalysts.
Molybdenum/tungsten oxo species
The oxo ligand (or analogous sulfido ligand) is nearly ubiquitous in molybdenum and tungsten chemistry, appearing in the ores containing these elements, throughout their synthetic chemistry, and also in their biological role (aside from nitrogenase). The biologically transported species and starting point for biosynthesis is generally accepted to be the oxometallates MoO42− or WO42−. All Mo/W enzymes, again except nitrogenase, are bound to one or more molybdopterin prosthetic groups. The Mo/W centers generally cycle between hexavalent (M(VI)) and tetravalent (M(IV)) states. Although there is some variation among these enzymes, members from all three families involve oxygen atom transfer between the Mo/W center and the substrate. Representative reactions from each of the three structural classes are:
Sulfite oxidase: SO32− + H2O → SO42− + 2 H+ + 2 e−
DMSO reductase: H3C–S(O)–CH3 (DMSO) + 2 H+ + 2 e− → H3C–S–CH3 (DMS) + H2O
Aldehyde ferredoxin oxidoreductase: R–CHO + H2O → R–CO2H + 2 H+ + 2 e−
The three different classes of molybdenum cofactors are shown in the adjacent figure. The biological use of tungsten mirrors that of molybdenum.
Oxygen-evolving complex
The active site for the oxygen-evolving complex (OEC) of photosystem II (PSII) is a Mn4O5Ca centre with several bridging oxo ligands that participate in the oxidation of water to molecular oxygen. The OEC is proposed to utilize a terminal oxo intermediate as a part of the water oxidation reaction. This complex is responsible for the production of nearly all of earth's molecular oxygen. This key link in the oxygen cycle is necessary for much of the biodiversity present on earth.
The "oxo wall"
The term "oxo wall" is a theory used to describe the fact that no terminal oxo complexes are known for metal centers with octahedral symmetry and d-electron counts beyond 5.
Oxo compounds for the vanadium through iron triads (Groups 3-8) are well known, whereas terminal oxo compounds for metals in the cobalt through zinc triads (Groups 9-12) are rare and invariably feature metals with coordination numbers lower than 6. This trend holds for other metal-ligand multiple bonds. Claimed exceptions to this rule have been retracted.
The iridium oxo complex Ir(O)(mesityl)3 may appear to be an exception to the oxo-wall rule, but it is not, because the complex is not octahedral. The trigonal symmetry reorders the metal d-orbitals below the degenerate M–O π* pair. In three-fold symmetric complexes, multiple M–O bonding is allowed for as many as 7 d-electrons.
Terminal oxo ligands are also rather rare for the titanium triad, especially zirconium and hafnium and are unknown for group 3 metals (scandium, yttrium, and lanthanum).
See also
Metal-ligand multiple bond
Oxide
Polyoxometalate
Metallate
Oxophilic
Dioxygen complex
References
Ligands
Transition metal oxides | Transition metal oxo complex | [
"Chemistry"
] | 1,500 | [
"Ligands",
"Coordination chemistry"
] |
29,817,345 | https://en.wikipedia.org/wiki/Spatial%20bifurcation | Spatial bifurcation is a form of bifurcation theory. The classic bifurcation analysis is referred to as an ordinary differential equation system, which is independent on the spatial variables. However, most realistic systems are spatially dependent. In order to understand spatial variable system (partial differential equations), some scientists try to treat with the spatial variable as time and use the AUTO package get a bifurcation results.
Weakly nonlinear analysis does not provide substantial insight into the nonlinear problem of pattern selection. To understand the pattern selection mechanism, the method of spatial dynamics is used, which has been found to be an effective way of exploring the multiplicity of steady-state solutions.
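As a concrete sketch of treating the spatial variable as time, consider the steady states of the scalar reaction-diffusion equation u_t = D u_xx + u − u³: they satisfy D u″ + u − u³ = 0, a planar dynamical system in the coordinate x. In the Python example below, the Allen–Cahn-type nonlinearity and the value D = 1 are illustrative choices, not taken from a particular study; the integration follows the heteroclinic orbit joining the uniform states u = −1 and u = +1, which corresponds to a stationary front pattern:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Steady states of u_t = D*u_xx + u - u**3 satisfy D*u'' + u - u**3 = 0.
# Treating the spatial coordinate x as "time" gives a 2D dynamical system
# for (u, du/dx):
D = 1.0

def spatial_ode(x, y):
    u, v = y                      # v = du/dx
    return [v, (u**3 - u) / D]

# Start on the heteroclinic orbit joining u = -1 and u = +1
# (u = 0 with slope 1/sqrt(2D)); the exact orbit is u = tanh(x/sqrt(2D)).
sol = solve_ivp(spatial_ode, [0.0, 8.0], [0.0, 1.0 / np.sqrt(2 * D)],
                rtol=1e-10, atol=1e-12, dense_output=True)

for x in np.linspace(0.0, 8.0, 5):
    u, _ = sol.sol(x)
    print(f"x = {x:4.1f}   u = {u:+.6f}   exact = {np.tanh(x/np.sqrt(2*D)):+.6f}")
```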
See also
Spatial ecology
Spatial pattern
References
Bifurcation theory | Spatial bifurcation | [
"Mathematics"
] | 145 | [
"Bifurcation theory",
"Mathematical analysis",
"Mathematical analysis stubs",
"Dynamical systems"
] |
29,820,802 | https://en.wikipedia.org/wiki/Coronal%20radiative%20losses | In astronomy and in astrophysics, for radiative losses of the solar corona, it is meant the energy flux radiated from the external atmosphere of the Sun (traditionally divided into chromosphere, transition region and corona), and, in particular, the processes of production of the radiation coming from the solar corona and transition region, where the plasma is optically-thin. On the contrary, in the chromosphere, where the temperature decreases from the photospheric value of 6000 K to the minimum of 4400 K, the optical depth is about 1, and the radiation is thermal.
The corona extends much further than a solar radius from the photosphere and looks very complex and inhomogeneous in X-ray images taken by satellites, such as those from the X-Ray Telescope (XRT) on board Hinode.
The structure and dynamics of the corona are dominated by the solar magnetic field. There is strong evidence that even the heating mechanism, responsible for its high temperature of millions of degrees, is linked to the magnetic field of the Sun.
The energy flux irradiated from the corona changes in active regions, in the quiet Sun and in coronal holes; part of the energy is irradiated outwards, but approximately the same amount of the energy flux is conducted back towards the chromosphere, through the steep transition region. In active regions the energy flux is about 10^7 erg cm^-2 s^-1, in the quiet Sun it is roughly 8×10^5 – 10^6 erg cm^-2 s^-1, and in coronal holes 5×10^5 – 8×10^5 erg cm^-2 s^-1, including the losses due to the solar wind.
The required power is a small fraction of the total flux irradiated from the Sun, but this energy is enough to maintain the plasma at a temperature of millions of degrees, since the density is very low and the processes of radiation are different from those occurring in the photosphere, as shown in detail in the next section.
Processes of radiation of the solar corona
The electromagnetic waves coming from the solar corona are emitted mainly in the X-rays. This radiation is not visible from the Earth because it is filtered by the atmosphere. Before the first rocket missions, the corona could be observed only in white light during the eclipses, while in the last fifty years the solar corona has been photographed in the EUV and X-rays by many satellites (Pioneer 5, 6, 7, 8, 9, Helios, Skylab, SMM, NIXT, Yohkoh, SOHO, TRACE, Hinode).
The emitting plasma is almost completely ionized and very light: its density is about 10^-16 – 10^-14 g/cm³. Particles are so isolated that almost all the photons can leave the Sun's surface without interacting with the matter above the photosphere: in other words, the corona is transparent to the radiation and the emission of the plasma is optically thin. The Sun's atmosphere is not the only example of an X-ray source, since hot plasmas are present everywhere in the Universe, from stellar coronae to thin galactic halos. These stellar environments are the subject of X-ray astronomy.
In an optically-thin plasma the matter is not in thermodynamic equilibrium with the radiation, because collisions between particles and photons are very rare, and, as a matter of fact, the root mean square velocity of photons, electrons, protons and ions is not the same: we should define a temperature for each of these particle populations. The result is that the emission spectrum does not fit the spectral distribution of blackbody radiation, but depends only on those collisional processes which occur in a very rarefied plasma.
While the Fraunhofer lines coming from the photosphere are absorption lines, principally emitted from ions which absorb photons of the same frequency as the transition to an upper energy level, coronal lines are emission lines produced by metal ions which have been excited to a superior state by collisional processes. Many spectral lines are emitted by highly ionized atoms, like calcium and iron, which have lost most of their external electrons; these emission lines can be formed only at certain temperatures, and therefore their identification in solar spectra is sufficient to determine the temperature of the emitting plasma.
Some of these spectral lines are forbidden under terrestrial laboratory conditions: collisions between particles can excite ions to metastable states; in a dense gas these ions immediately collide with other particles and so they de-excite with an allowed transition to an intermediate level, while in the corona it is more probable that the ion remains in its metastable state, until it encounters a photon of the same frequency as the forbidden transition to the lower state. This photon induces the ion to emit with the same frequency by stimulated emission. Forbidden transitions from metastable states are often called satellite lines.
Spectroscopy of the corona allows the determination of many physical parameters of the emitting plasma. Comparing the intensity in lines of different ions of the same element, temperature and density can be measured with a good approximation: the different states of ionization are regulated by the Saha equation.
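As an illustration of how the Saha equation links ionization state, temperature and density, the following Python sketch computes the ionization fraction of a pure hydrogen plasma; the density value is illustrative, and real coronal diagnostics rely on many ionization stages of heavier elements:

```python
import numpy as np
from scipy.constants import m_e, k as k_B, h, e as q_e

def saha_rhs(T, chi_eV, g_ratio):
    """Right-hand side S(T) of the Saha equation n_e*n_II/n_I = S(T), in m^-3."""
    return (2.0 * g_ratio * (2.0 * np.pi * m_e * k_B * T / h**2) ** 1.5
            * np.exp(-chi_eV * q_e / (k_B * T)))

def ionization_fraction(T, n_total, chi_eV=13.6, g_ratio=0.5):
    """Fraction x = n_II/(n_I + n_II) for pure hydrogen, where n_e = x*n_total."""
    s = saha_rhs(T, chi_eV, g_ratio) / n_total
    return 0.5 * (-s + np.sqrt(s**2 + 4.0 * s))   # positive root of x^2/(1-x) = s

n = 1e16   # total hydrogen number density in m^-3 (illustrative, coronal-like)
for T in (6e3, 2e4, 1e5, 1e6):
    print(f"T = {T:8.0f} K   ionization fraction = {ionization_fraction(T, n):.6f}")
```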
The Doppler shift gives a good measurement of the velocities along the line of sight but not in the perpendicular plane.
The line width should depend on the Maxwell–Boltzmann distribution of velocities at the temperature of line formation (thermal line broadening), but it is often larger than predicted.
The broadening can be due to pressure broadening, when collisions between particles are frequent, or it can be due to turbulence: in this case the line width can be used to estimate the macroscopic velocity on the Sun's surface, but with great uncertainty.
The magnetic field can be measured thanks to the line splitting due to the Zeeman effect.
Optically-thin plasma emission
The most important processes of radiation for an optically-thin plasma are:
the emission in resonance lines of ionized metals (bound-bound emission);
the radiative recombinations (free-bound radiation) due to the most abundant coronal ions;
for very high temperatures above 10 MK, the bremsstrahlung (free-free emission).
Therefore, the radiative flux can be expressed as the sum of three terms:

L_R = n_e n_i ( Σ_lines h ν_ij C_ij + α_rec + B_ff )

where n_e is the number of electrons per unit volume, n_i the ion number density, h the Planck constant, ν_ij the frequency of the emitted radiation corresponding to the energy jump ΔE_ij = h ν_ij, C_ij the coefficient of collisional de-excitation relative to the ion transition, α_rec the radiative losses for plasma recombination and B_ff the bremsstrahlung contribution.
The first term is due to the emission in every single spectral line. With a good approximation, the number of occupied states at the superior level, n_j, and the number of states at the inferior energy level, n_i, are given by the equilibrium between collisional excitation and spontaneous emission

n_j A_ji = n_e n_i C_ij

where A_ji is the transition probability of spontaneous emission.
The second term is calculated as the energy emitted per unit volume and time when free electrons are captured by ions to recombine into neutral atoms (dielectronic capture).
The third term is due to the electron scattering by protons and ions because of the Coulomb force: every accelerated charge emits radiation according to classical electrodynamics. This effect gives an appreciable contribution to the continuum spectrum only at the highest temperatures, above 10 MK.
Taking into account all the dominant radiation processes, including satellite lines from metastable states, the emission of an optically-thin plasma can be expressed more simply as

L_R = n_e n_i P(T)

where P(T) depends only on the temperature. All the radiation mechanisms require collision processes and basically depend on the squared density (n_e n_i ≈ n²). The integral of the squared density along the line of sight, EM = ∫ n² dl, is called the emission measure and is often used in X-ray astronomy.
The function P(T) has been modeled by many authors, but with differences that depend strongly upon the assumed elemental abundances of the plasma, and of course on the atomic parameters and their estimation.
In order to calculate the radiative flux from an optically-thin plasma in a convenient analytic form, Rosner et al. (1978) suggested a formula for P(T) (erg cm³ s^-1) as follows:

P(T) = 10^-21.85 (10^4.3 < T < 10^4.6 K)
P(T) = 10^-31 T² (10^4.6 < T < 10^4.9 K)
P(T) = 10^-21.2 (10^4.9 < T < 10^5.4 K)
P(T) = 10^-10.4 T^-2 (10^5.4 < T < 10^5.75 K)
P(T) = 10^-21.94 (10^5.75 < T < 10^6.3 K)
P(T) = 10^-17.73 T^-2/3 (10^6.3 < T < 10^7 K)
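A direct transcription of this piecewise fit into Python, together with the n²-scaling of the losses, is shown below; the coefficients follow the reconstruction above, and the density and temperature values are merely illustrative:

```python
import numpy as np

def rosner_P(T):
    """Piecewise power-law radiative loss function P(T) in erg cm^3 s^-1
    (after Rosner, Tucker & Vaiana 1978), valid for 10^4.3 K < T < 10^7 K."""
    logT = np.log10(T)
    if   4.3  <= logT < 4.6:  return 10**(-21.85)
    elif 4.6  <= logT < 4.9:  return 10**(-31.0) * T**2
    elif 4.9  <= logT < 5.4:  return 10**(-21.2)
    elif 5.4  <= logT < 5.75: return 10**(-10.4) * T**-2
    elif 5.75 <= logT < 6.3:  return 10**(-21.94)
    elif 6.3  <= logT <= 7.0: return 10**(-17.73) * T**(-2.0 / 3.0)
    raise ValueError("T outside the range of the fit")

# Radiative losses per unit volume, L = n_e * n_i * P(T) ~ n^2 * P(T)
n = 1e9       # electron density in cm^-3 (illustrative active-region value)
T = 2e6       # coronal temperature in K
L = n * n * rosner_P(T)
print(f"P(T) = {rosner_P(T):.2e} erg cm^3 s^-1,  L = {L:.2e} erg cm^-3 s^-1")
```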
See also
Spectral lines
Spectroscopy
Plasma physics
X-ray astronomy
Sun
Corona
Photosphere
Chromosphere
Solar transition region
Solar wind
Nanoflares
References
Bibliography
Sun
Emission spectroscopy
Space plasmas
X-ray astronomy | Coronal radiative losses | [
"Physics",
"Chemistry",
"Astronomy"
] | 1,740 | [
"Space plasmas",
"Spectrum (physical sciences)",
"Emission spectroscopy",
"Astrophysics",
"X-ray astronomy",
"Spectroscopy",
"Astronomical sub-disciplines"
] |
29,821,546 | https://en.wikipedia.org/wiki/Cladding%20%28fiber%20optics%29 | Cladding in optical fibers is one or more layers of materials of lower refractive index in intimate contact with a core material of higher refractive index.
The cladding causes light to be confined to the core of the fiber by total internal reflection at the boundary between the core and cladding. Light propagation within the cladding is suppressed in most fibers. However, some fibers can support cladding modes in which light propagates through the cladding as well as the core. Depending upon the number of modes that are supported, they are referred to as multi-mode fibers or single-mode fibers. The discovery that transmission through fibers could be improved by applying a cladding was made in 1953 by the Dutch scientist Bram van Heel.
History
The fact that transmission through fibers could be improved by applying a cladding was discovered in 1953 by van Heel, who used it to demonstrate image transmission through a bundle of optical fibers. Early cladding materials included oils, waxes, and polymers. Lawrence E. Curtiss at the University of Michigan developed the first glass cladding in 1956, by inserting a glass rod into a tube of glass with a lower refractive index, fusing the two together, and drawing the composite structure into an optical fiber.
Modes
A cladding mode is a mode that is confined to the cladding of an optical fiber by virtue of the fact that the cladding has a higher refractive index than the surrounding medium, which is either air or the primary polymer overcoat. These modes are generally undesired. Modern fibers have a primary polymer overcoat with a refractive index that is slightly higher than that of the cladding, so that light propagating in the cladding is rapidly attenuated and disappears after only a few centimeters of propagation. An exception to this is double-clad fiber, which is designed to support a mode in its inner cladding, as well as one in its core.
Advantages
In the production of glass fibers, there will inevitably be surface irregularities (e.g., pores and cracks) that will scatter light when struck and lessen the total travel distance of the light. The inclusion of a glass cladding greatly reduces the attenuation caused by these surface irregularities. This is because the light scatters less at a glass/glass interface than it would at a glass/air interface in a fiber without cladding. The two primary factors that allow for this are the smaller change in index of refraction between two glass surfaces, and the fact that surface irregularities on the cladding do not interfere with the light beams. The inclusion of glass cladding is also an improvement over just applying a polymer coating, as glass will typically be stronger, more homogeneous, and cleaner. Additionally, the inclusion of a cladding layer also allows for the usage of smaller glass fiber cores. Most glass fibers have a cladding that raises the total outer diameter to 125 microns.
Effect on numerical aperture
The numerical aperture of a multimode optical fiber is a function of the indices of refraction of the cladding and the core:

NA = √(n_core² − n_cladding²)

The numerical aperture allows for the calculation of the acceptance angle of incidence at the fiber interface, which gives the maximum angle at which incident light can enter the core and maintain total internal reflection:

θ_max = arcsin(NA / n₀)

where n₀ is the index of refraction of the medium from which the light enters (n₀ ≈ 1 for air). By combining both of these equations it can be seen how θ_max is a function of n_core and n_cladding, where n_core is the index of refraction of the core and n_cladding is the index of refraction of the cladding.
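These two relations are easy to evaluate; the Python sketch below uses illustrative index values for a typical silica fiber (not data for any specific product):

```python
import numpy as np

def numerical_aperture(n_core, n_cladding):
    """NA from the core and cladding refractive indices."""
    return np.sqrt(n_core**2 - n_cladding**2)

def acceptance_angle(n_core, n_cladding, n_outside=1.0):
    """Maximum half-angle (degrees) of the cone of light accepted by the fiber."""
    return np.degrees(np.arcsin(numerical_aperture(n_core, n_cladding) / n_outside))

# Illustrative indices for a typical silica fiber
n_core, n_clad = 1.48, 1.46
print(f"NA        = {numerical_aperture(n_core, n_clad):.3f}")   # ~0.24
print(f"theta_max = {acceptance_angle(n_core, n_clad):.1f} deg") # ~14 degrees
```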
Recent developments
Due to the relatively greater transmission of light they offer, fiber optic cores and claddings are usually made from highly purified silica glass. Certain impurities can be added to impart various properties, such as increasing transmission distance or improving fiber flexibility. There has been significant work done in improving these properties within the last several years.
References
Fiber optics
Optics | Cladding (fiber optics) | [
"Physics",
"Chemistry"
] | 770 | [
"Applied and interdisciplinary physics",
"Optics",
" molecular",
"Atomic",
" and optical physics"
] |
35,371,721 | https://en.wikipedia.org/wiki/Adams%20decarboxylation | The Adams decarboxylation is a chemical reaction that involved the decarboxylation of coumarins which have carboxylic acid group in the third position. The decarboxylation is achieved by aqueous solution of sodium bisulfite, heat and a concentrated solution of sodium hydroxide.
References
Organic reactions
Name reactions | Adams decarboxylation | [
"Chemistry"
] | 73 | [
"Name reactions",
"Organic reactions"
] |
35,398,475 | https://en.wikipedia.org/wiki/Resonating%20valence%20bond%20theory | In condensed matter physics, the resonating valence bond theory (RVB) is a theoretical model that attempts to describe high-temperature superconductivity, and in particular the superconductivity in cuprate compounds. It was first proposed by an American physicist P. W. Anderson and Indian theoretical physicist Ganapathy Baskaran in 1987. The theory states that in copper oxide lattices, electrons from neighboring copper atoms interact to form a valence bond, which locks them in place. However, with doping, these electrons can act as mobile Cooper pairs and are able to superconduct. Anderson observed in his 1987 paper that the origins of superconductivity in doped cuprates was in the Mott insulator nature of crystalline copper oxide. RVB builds on the Hubbard and t-J models used in the study of strongly correlated materials.
In 2014, evidence that fractional particles can occur in quasi-two-dimensional magnetic materials was found by EPFL scientists, lending support to Anderson's theory of high-temperature superconductivity.
Description
The physics of Mott insulators is described by the repulsive Hubbard model Hamiltonian:

H = −t Σ_{⟨i,j⟩,σ} (c†_{iσ} c_{jσ} + c†_{jσ} c_{iσ}) + U Σ_i n_{i↑} n_{i↓}
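As a minimal numerical illustration of this Hamiltonian, the Python sketch below exactly diagonalizes the two-site Hubbard model at half filling, restricted to the S_z = 0 sector. The matrix signs follow one common fermionic ordering convention (the spectrum is convention-independent), and t = 1, U = 4 are arbitrary illustrative values:

```python
import numpy as np

# Two-site repulsive Hubbard model at half filling, S_z = 0 sector,
# basis {|up,down>, |down,up>, |updown,0>, |0,updown>}.
t, U = 1.0, 4.0

H = np.array([[0.0, 0.0, -t,  -t ],
              [0.0, 0.0,  t,   t ],
              [-t,   t,   U,  0.0],
              [-t,   t,  0.0,  U ]])

E = np.linalg.eigvalsh(H)
E_exact = 0.5 * (U - np.sqrt(U**2 + 16.0 * t**2))   # known ground-state energy

print("spectrum     :", np.round(E, 6))
print("ground state :", E[0])
print("closed form  :", E_exact)
```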
In 1971, Anderson first suggested that this Hamiltonian can have a non-degenerate ground state that is composed of disordered spin states. Shortly after the high-temperature superconductors were discovered, Anderson and Kivelson et al. proposed a resonating valence bond ground state for these materials, written as

|RVB⟩ = Σ_C |C⟩

where |C⟩ represents a covering of a lattice by nearest-neighbor dimers. Each such covering is weighted equally. In a mean field approximation, the RVB state can be written in terms of a Gutzwiller projection, and displays a superconducting phase transition per the Kosterlitz–Thouless mechanism. However, a rigorous proof for the existence of a superconducting ground state in either the Hubbard or the t-J Hamiltonian is not yet known. Further, the stability of the RVB ground state has not yet been confirmed.
References
High-temperature superconductors
Correlated electrons
Condensed matter physics
Theoretical physics | Resonating valence bond theory | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 432 | [
"Theoretical physics",
"Phases of matter",
"Materials science",
"Condensed matter physics",
"Correlated electrons",
"Matter"
] |
35,402,112 | https://en.wikipedia.org/wiki/Myticin | Myticin is a cysteine-rich peptide produced in three isoforms, A, B and C, by Mytilus galloprovincialis (Mediterranean mussel), which are found primarily in marine habitats. Myticin is also produced in other species of Mytilus (Mytilus spp.), though the properties of Myticin in Mytilus galloprovincialis is understood to a greater extent. Isoforms A and B show antibacterial activity against Gram-positive bacteria, while isoform C is additionally active against the fungus Fusarium oxysporum and bacterium Escherichia coli (streptomycin resistant strain D31). Myticin-prepro is the precursor peptide.
The mature molecule, named myticin, consists of 40 residues, with four intramolecular disulphide bridges, an N-terminal signal peptide and a cysteine array in the primary structure different from that of previously characterised cysteine-rich antimicrobial peptides. The first 20 amino acids are a putative signal peptide, and the antimicrobial peptide sequence is a 36-residue C-terminal extension. Such a structure suggests that myticins are synthesised as prepro-proteins that are then processed by various proteolytic events before storage in the hemocytes as the active peptide. Myticin precursors are expressed mainly in the haemocytes.
Role of Myticin in Mytilus galloprovincialis
AMPs (antimicrobial peptides) play a significant role in the innate immune defenses exhibited by bivalves. In marine organisms, AMPs are the main factor in the innate immune response, which helps to protect them against pathogenic microorganisms in their environment. The innate immune response is thought to be nonspecific, though there is limited research in this area. It is unclear whether invertebrates such as bivalves have an immune system similar to that of vertebrates; however, myticin is expressed in the hemocytes of mussels, and recent studies have suggested that this molecule is activated after injury to the mussel's tissues.
Isoform C
Isoform C is the most widely studied isoform of myticin, most likely due to its abundance and diversity. Myticin C has shown a wide diversity in Mytilus galloprovincialis, with individuals expressing unique sequences of the peptide compared to genes that are not immune-related. It has been shown to be an active defense mechanism against many organisms, including fish rhabdovirus. Isoform C has also been shown to have antiviral properties against OsHV-1, which is one of the most important and devastating bivalve pathogens. Additionally, a modified version was shown to have antiviral activity against HSV-1 and HSV-2 in humans, demonstrating the potential of myticin C in human applications.
References
Antimicrobial peptides
Protein families | Myticin | [
"Chemistry",
"Biology"
] | 604 | [
"Biochemistry stubs",
"Protein families",
"Protein stubs",
"Protein classification"
] |
26,520,744 | https://en.wikipedia.org/wiki/Z-tube | The Z-tube is an experimental apparatus for measuring the tensile strength of a liquid.
It consists of a Z-shaped tube with open ends, filled with a liquid, and set on top of a spinning table. If the tube were straight, the liquid would immediately fly out one end or the other of the tube as it began to spin. By bending the ends of the tube back towards the center of rotation, a shift of the liquid away from center will result in the liquid level in one end of the tube rising and thus increasing the pressure in that end of the tube, and consequently returning the liquid to the center of the tube. By measuring the rotational speed and the distance from the center of rotation to the liquid level in the bent ends of the tube, the pressure reduction inside the tube can be calculated.
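A short Python sketch of that calculation is shown below; the radius and rotation rates are illustrative, and rigid-body rotation of the liquid is assumed. Integrating dP/dr = ρ ω² r from the free surface inward to the axis gives the pressure at the rotation axis:

```python
import math

# P_axis = P_atm - rho * omega**2 * r**2 / 2
rho = 1000.0          # water density, kg/m^3
P_atm = 101325.0      # atmospheric pressure, Pa
r = 0.10              # distance from axis to liquid level in the bent end, m (illustrative)

for rpm in (5000, 10000, 20000):
    omega = rpm * 2.0 * math.pi / 60.0            # angular speed, rad/s
    P_axis = P_atm - rho * omega**2 * r**2 / 2.0  # pressure at the rotation axis
    print(f"{rpm:6d} rpm -> P_axis = {P_axis / 101325.0:8.1f} atm")
```

Negative values of P_axis correspond to the liquid being placed under tension.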
Negative pressures (i.e., less than zero absolute pressure, in other words tension) have been reported using water processed to remove dissolved gases. Tensile strengths up to 280 atmospheres have been reported for water in glass.
References
Fluid mechanics
Materials science
Scientific instruments | Z-tube | [
"Physics",
"Materials_science",
"Technology",
"Engineering"
] | 221 | [
"Applied and interdisciplinary physics",
"Materials science",
"Scientific instruments",
"Measuring instruments",
"Civil engineering",
"nan",
"Fluid mechanics"
] |
26,521,667 | https://en.wikipedia.org/wiki/Adatanserin | Adatanserin (WY-50,324, SEB-324) is a mixed 5-HT1A receptor partial agonist and 5-HT2A and 5-HT2C receptor antagonist. It was under development by Wyeth as an antidepressant but was ultimately not pursued.
Adatanserin has been shown to be neuroprotective against ischemia-induced glutamatergic excitotoxicity, an effect which appears to be mediated by blockade of the 5-HT2A receptor.
Synthesis
2-Chloropyrimidine (1) reacts with piperazine (2), forming 2-(1-piperazinyl)pyrimidine (3). Treatment with the phthalimide derivative N-(2-bromoethyl)phthalimide (4) in an alkylation reaction produces (5), which is deprotected using hydrazine to give the primary amine (6). Amide formation with the acid chloride of 1-adamantanecarboxylic acid yields adatanserin.
See also
Flibanserin
References
5-HT1A agonists
5-HT2A antagonists
5-HT2C antagonists
Abandoned drugs
Adamantanes
Carboxamides
Piperazines
Aminopyrimidines | Adatanserin | [
"Chemistry"
] | 286 | [
"Drug safety",
"Abandoned drugs"
] |
26,526,105 | https://en.wikipedia.org/wiki/Sony%20Ericsson%20Yari | Yari (U100i) (Kita in the Philippines) is a slider phone from Sony Ericsson as a successor to the Sony Ericsson F305. It was unveiled on December 14, 2009. It is a phone meant for entertainment (e.g. games, music). It is available in two colors: Cranberry Red and Achromatic Black. It is the lowest end model for the Sony Ericsson entertainment series. It has three related phones, the Aino, Satio and Vivaz. It has motion sensing functions as well as an "MSN"-like chat function while sending text messages.
References
External links
Official website
Yari
Mobile phones introduced in 2009 | Sony Ericsson Yari | [
"Technology"
] | 145 | [
"Mobile computer stubs",
"Mobile technology stubs"
] |
26,527,265 | https://en.wikipedia.org/wiki/Zeta%20function%20%28operator%29 | The zeta function of a mathematical operator is a function defined as
for those values of s where this expression exists, and as an analytic continuation of this function for other values of s. Here "tr" denotes a functional trace.
The zeta function may also be expressible as a spectral zeta function in terms of the eigenvalues λ_n of the operator A by

ζ_A(s) = Σ_n λ_n^(−s).
It is used in giving a rigorous definition to the functional determinant of an operator, which is given by

det A = exp(−ζ′_A(0))
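For a finite-dimensional positive-definite operator the construction can be checked directly, since ζ′_A(0) = −Σ_n ln λ_n and the zeta-regularized determinant reduces to the ordinary determinant. In the Python sketch below, the random symmetric positive-definite matrix is just a stand-in for an operator with discrete spectrum:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M @ M.T + 4.0 * np.eye(4)          # symmetric positive definite "operator"

lam = np.linalg.eigvalsh(A)            # eigenvalues lambda_n

def zeta(s):
    """Spectral zeta function: sum over eigenvalues of lambda**(-s)."""
    return np.sum(lam ** (-s))

# d/ds lambda**(-s) = -log(lambda) * lambda**(-s), so zeta'(0) = -sum log(lambda)
zeta_prime_0 = -np.sum(np.log(lam))

print("zeta(1)          :", zeta(1.0))
print("exp(-zeta'(0))   :", np.exp(-zeta_prime_0))
print("np.linalg.det(A) :", np.linalg.det(A))   # agrees with the line above
```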
The Minakshisundaram–Pleijel zeta function is an example, when the operator is the Laplacian of a compact Riemannian manifold.
One of the most important motivations for Arakelov theory is the zeta functions for operators with the method of heat kernels generalized algebro-geometrically.
See also
Quillen metric
References
Functional analysis
Zeta and L-functions | Zeta function (operator) | [
"Mathematics"
] | 174 | [
"Mathematical analysis",
"Functions and mappings",
"Functional analysis",
"Mathematical analysis stubs",
"Mathematical objects",
"Mathematical relations"
] |
26,527,624 | https://en.wikipedia.org/wiki/Hilbert%20spectroscopy | Hilbert Spectroscopy uses Hilbert transforms to analyze broad spectrum signals from gigahertz to terahertz frequency radio. One suggested use is to quickly analyze liquids inside airport passenger luggage.
References
Spectroscopy
Signal processing
Security technology | Hilbert spectroscopy | [
"Physics",
"Chemistry",
"Astronomy",
"Technology",
"Engineering"
] | 44 | [
"Spectroscopy stubs",
"Telecommunications engineering",
"Molecular physics",
"Spectrum (physical sciences)",
"Computer engineering",
"Signal processing",
"Instrumental analysis",
"Astronomy stubs",
"Molecular physics stubs",
"Spectroscopy",
"Physical chemistry stubs"
] |
26,530,198 | https://en.wikipedia.org/wiki/DNA%20damage-binding%20protein | DNA damage-binding protein or UV-DDB is a protein complex that is responsible for repair of UV-damaged DNA. This complex is composed of two protein subunits, a large subunit DDB1 (p127) and a small subunit DDB2 (p48). When cells are exposed to UV radiation, DDB1 moves from the cytosol to the nucleus and binds to DDB2, thus forming the UV-DDB complex. This complex formation is highly favorable and it is demonstrated by UV-DDB's binding preference and high affinity to the UV lesions in the DNA. This complex functions in nucleotide excision repair, recognising UV-induced (6-4) pyrimidine-pyrimidone photoproducts and cyclobutane pyrimidine dimers.
Structure
The helical domain at the N-terminus of DDB2 binds to UV-damaged DNA with high affinity to form the UV-DDB complex. This helical binding interaction allows the protein to bind immediately after detecting UV-damaged DNA. DNA binds to DDB2 only when damaged by UV radiation. Binding with high affinity to a helical domain of DDB2 in the dimer form, UV-DDB, is facilitated by the N-terminal alpha-helical paddle and beta wings of the DDB2 subunit. Both the alpha-helical fold and the beta-wing loops form a "winged helix" motif. The dimerized complex acts as a scaffold for DNA damage repair pathways and allows other proteins to detect, interact with, and repair UV-damaged DNA.
DDB2
DDB2 is a protein part of the CUL4A–RING ubiquitin ligase (CRL4) complex. It was thought that DDB2 only acts to recognize legions of UV damaged DNA. It has been found that DDB2 plays a role in promoting chromatin unfolding. This role is independent of DDB2's role in the CRL4 complex.
Damage sensor role
UV-DDB is not only responsible for the repair of damaged DNA; it can also function as a damage sensor. In base excision repair, UV-DDB stimulates OGG1 and APE1 activities. During DNA damage, the proteins OGG1 and APE1 encounter difficulty in repairing the lesions in DNA wrapped around a nucleosome. Additionally, histones make the DNA inaccessible because of the way they cause DNA to coil and wrap into chromatin. UV-DDB plays a role in identifying the damaged sites within the chromatin, thereby allowing access by base excision repair proteins. When UV-DDB is recruited to these damaged sites, it recognizes the OGG1-AP DNA complex and further accelerates the turnover of glycosylases.
References | DNA damage-binding protein | [
"Chemistry"
] | 597 | [
"Biochemistry stubs",
"Protein stubs"
] |
26,532,574 | https://en.wikipedia.org/wiki/UBC%20Okanagan%20Digital%20Microfluidics | The UBC Okanagan Digital Microfluidics Research Group is an interdisciplinary research group at the University of British Columbia Okanagan that develops integrated devices for biochip applications. Lab-on-a-chip digital microfluidic devices are fabricated in digital architectures that merge micrometre-scale electrical circuitry with applications requiring dynamic fluid control, as voltage actuation signals from patterned electrodes are used to direct and actuate fluid flow within the chips. The structures are not application-specific. Fluid actuation signals for droplet mixing, splitting, and routing are set by the control software and can be reconfigured as needed and in real-time (unlike continuous-flow microfluidic structures incorporating micropumps, microvalves, and microchannels which are fabricated as permanent application-specific structures).
External links
UBC Okanagan Digital Microfluidics
Duke University Digital Microfluidics
Microfluidics | UBC Okanagan Digital Microfluidics | [
"Materials_science"
] | 196 | [
"Microfluidics",
"Microtechnology"
] |
45,268,288 | https://en.wikipedia.org/wiki/OpenStudio | OpenStudio is a suite of free and open-source software applications for building energy analysis used in building information modeling. OpenStudio applications run on Microsoft Windows, Macintosh, and Linux platforms. Its primary application is a plugin for proprietary SketchUp, that enables engineers to view and edit 3D models for EnergyPlus simulation software.
OpenStudio was first released in April 2008 by the National Renewable Energy Laboratory (NREL), a part of the U.S. Department of Energy. NREL reports an average of 700 OpenStudio downloads per month. Google's strategist for SketchUp remarked that "OpenStudio is lauded around our office as one of the most complicated plug-ins ever written for SketchUp".
OpenStudio was designed to work with SketchUp, because many architects already use SketchUp for building designs. The integration allows architects to analyze a design's energy performance before beginning construction.
The first private organization selected by NREL to conduct OpenStudio courses was Performance Systems Development, a New York-based training institute. Courses are conducted for building professionals, software developers, and utility administrators. Under this contract, Harshul Singhal and Chris Balbach teach OpenStudio to engineers on a regular basis. From May 2018, Harshul Singhal started teaching OpenStudio through The Energy Simulation Academy (TESA), another private organization selected by NREL to conduct such training.
Features
OpenStudio includes a Sketchup Plug-in and other associated applications:
The Sketchup Plug-in allows users to create 3D geometry needed for EnergyPlus using the existing drawing tools.
RunManager manages simulations and workflows and gives users access to the output files through a graphical interface.
ResultsViewer enables browsing, plotting, and comparing EnergyPlus output data, especially time series.
Sketchup Plugin
The OpenStudio Sketchup Plug-in allows users to use the standard SketchUp tools to create and edit EnergyPlus zones and surfaces. It allows SketchUp to view EnergyPlus input files in 3D. The plug-in allows users to mix EnergyPlus simulation content with decorative content.
The plug-in adds the building energy simulation capabilities of EnergyPlus to the SketchUp environment. Users can launch an EnergyPlus simulation of the model and view the results without leaving SketchUp.
The Plug-in allows engineers to:
Create and edit EnergyPlus zones and surfaces
Launch EnergyPlus and view the results without leaving SketchUp
Match interzone surface boundary conditions
Search for surfaces and subsurfaces by object name
Add internal gains and simple outdoor air for load calculations
Add the ideal HVAC system for load calculations
Set and change default constructions
Add daylighting controls and illuminance map
See also
Building information modeling
SketchUp
Trimble Navigation
References
OpenStudio Online Training, The Energy Simulation Academy
External links
OpenStudio Suite, U.S. DOE
OpenStudio Plug-in, U.S. DOE
Project homepage
GitHub project
OpenStudio Online Training
2008 software
Building information modeling
Cross-platform free software
Free computer-aided design software | OpenStudio | [
"Engineering"
] | 625 | [
"Building engineering",
"Building information modeling"
] |
45,272,885 | https://en.wikipedia.org/wiki/Standard%20cubic%20centimetres%20per%20minute | Standard cubic centimeters per minute (SCCM) is a unit used to quantify the flow rate of a fluid. 1 SCCM is identical to 1 cm³STP/min. Another expression of it would be Nml/min. These standard conditions vary according to different regulatory bodies. One example of standard conditions for the calculation of SCCM is = 0 °C (273.15 K) and = 1.01 bar (14.72 psia) and a unity compressibility factor = 1 (i.e., an ideal gas is used for the definition of SCCM). This example is for the semi-conductor-manufacturing industry.
Conversion to mass flowrate and molar flowrate
For conversion purposes, it is useful to think of one SCCM as the mass flow rate of one cubic centimeter per minute of a fluid, typically a gas, at a density defined at some standard temperature, T_std, and pressure, p_std.
To convert one SCCM to the measure of mass flow rate in the SI system, kg/s, one relies on the fundamental relationship between mass flow rate and volumetric flow rate (see volumetric flow rate),

ṁ = ρ_std V̇_std

where ρ_std is the density at the standard conditions, and an equation of state such as

ρ_std = p_std M / (Z_std R T_std)

with M being the fluid molecular weight, Z_std the fluid compressibility factor at standard conditions, and R the universal gas constant. By including in the above relationship between ṁ and V̇_std the units of measurement and their conversion one obtains

ṁ [kg/s] = ρ_std [kg/m³] × V̇_std [SCCM] × (10^-6 m³/cm³) × (1 min / 60 s)

where ṁ is in kg/s, V̇_std is in SCCM, and ρ_std is in kg/m³. Thereafter, by replacing ρ_std with the above equation of state one obtains

ṁ [kg/s] = (p_std M / (Z_std R T_std)) × V̇_std [SCCM] × 10^-6/60
Using this last relationship, one can convert a mass flow rate in the more familiar unit of kg/s to SCCM and vice versa with

V̇_std [SCCM] = ṁ [kg/s] × (Z_std R T_std / (p_std M)) × 60/10^-6

and

ṁ [kg/s] = V̇_std [SCCM] × (p_std M / (Z_std R T_std)) × 10^-6/60

With this conversion from SCCM to kg/s, one can then use available unit calculators to convert kg/s to other units, such as g/s of the CGS system, or slug/s.
Based on the above formulas, the relationship between SCCM and molar flow rate ṅ in kmol/s is given by

ṅ [kmol/s] = V̇_std [SCCM] × (p_std / (Z_std R T_std)) × 10^-6/60

and

V̇_std [SCCM] = ṅ [kmol/s] × (Z_std R T_std / p_std) × 60/10^-6
Conversion examples
For some usage examples, consider the conversion of 1 SCCM to kg/s of a gas of molecular weight M, where M is in kg/kmol. Furthermore, consider standard conditions of 101325 Pa and 273.15 K, and assume the gas is an ideal gas (i.e., Z_std = 1). Using the unity bracket method (see conversion of units) one obtains:

1 SCCM = (101325 × M / (8314 × 273.15)) × (10^-6/60) kg/s ≈ M × 7.44×10^-10 kg/s
Considering nitrogen, which has a molecular weight of 28 kg/kmol, 1 SCCM of nitrogen in kg/s is given by:

1 SCCM N₂ ≈ 28 × 7.44×10^-10 kg/s ≈ 2.08×10^-8 kg/s
To do the same for 1 SCCM of helium, which has a molecular weight of 4 kg/kmol, one obtains:

1 SCCM He ≈ 4 × 7.44×10^-10 kg/s ≈ 2.97×10^-9 kg/s
Notice that 1 SCCM of helium is less in kg/s than one SCCM of nitrogen.
To convert 50 SCCM of nitrogen with the above considerations one does

50 SCCM N₂ ≈ 50 × 2.08×10^-8 kg/s ≈ 1.04×10^-6 kg/s
To convert 1 SCCM to kmol/s one does

1 SCCM = (101325 / (8314 × 273.15)) × (10^-6/60) kmol/s ≈ 7.44×10^-10 kmol/s
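These conversions are easy to script. The following Python sketch reproduces the worked examples, using the standard conditions defined earlier in the article:

```python
R = 8314.0          # universal gas constant, J/(kmol K)
P_STD = 101325.0    # standard pressure, Pa
T_STD = 273.15      # standard temperature, K

def sccm_to_kg_per_s(v_sccm, M, Z=1.0):
    """Mass flow in kg/s for v_sccm SCCM of a gas of molecular weight M (kg/kmol)."""
    rho_std = P_STD * M / (Z * R * T_STD)          # density at standard conditions, kg/m^3
    return rho_std * v_sccm * 1e-6 / 60.0          # 1 cm^3/min = 1e-6 m^3 per 60 s

def sccm_to_kmol_per_s(v_sccm, Z=1.0):
    """Molar flow in kmol/s for v_sccm SCCM (molecular weight divides out)."""
    return P_STD / (Z * R * T_STD) * v_sccm * 1e-6 / 60.0

print(f"1 SCCM N2 : {sccm_to_kg_per_s(1, 28):.3e} kg/s")    # ~2.08e-08
print(f"1 SCCM He : {sccm_to_kg_per_s(1, 4):.3e} kg/s")     # ~2.97e-09
print(f"50 SCCM N2: {sccm_to_kg_per_s(50, 28):.3e} kg/s")   # ~1.04e-06
print(f"1 SCCM    : {sccm_to_kmol_per_s(1):.3e} kmol/s")    # ~7.44e-10
```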
Related units of flow measurement
A unit related to the SCCM is the SLM or SLPM, which stands for standard litre per minute. Their conversion is

1 SLM = 1000 SCCM

and

1 SCCM = 0.001 SLM
Another unit is the SCFM which stands for standard cubic feet per minute.
Yet another unit related to SCCM (and SLM) is the PCCM (and PLM), which stands for perfect cubic centimeter per minute (perfect litre per minute). One PCCM is one SCCM when the gas is ideal. In other words, one PCCM is exactly the same as one SCCM if and only if Z_std = 1 in the above relationships.
References
External links
Mass flow rate calculator (including SCCM)
Fluid dynamics
Units of flow | Standard cubic centimetres per minute | [
"Chemistry",
"Mathematics",
"Engineering"
] | 731 | [
"Chemical engineering",
"Quantity",
"Piping",
"Units of flow",
"Units of measurement",
"Fluid dynamics"
] |
45,273,533 | https://en.wikipedia.org/wiki/Integral%20Molten%20Salt%20Reactor | The integral molten salt reactor (IMSR) is a nuclear power plant design targeted at developing a commercial product for the small modular reactor (SMR) market. It employs molten salt reactor technology which is being developed by the Canadian company Terrestrial Energy.
The IMSR is based closely on the denatured molten salt reactor (DMSR), a reactor design from Oak Ridge National Laboratory. In addition, it incorporates some elements found in the small modular advanced high temperature reactor (SmAHTR), a later design from the same laboratory. The IMSR belongs to the DMSR class of molten salt reactors (MSR) and hence is a "burner" reactor that employs a liquid fuel rather than a conventional solid fuel. This liquid contains the nuclear fuel as well as serving as the primary coolant.
In 2016, Terrestrial Energy engaged in a pre-licensing design review for the IMSR with the Canadian Nuclear Safety Commission and entered the second phase of this process in October 2018 after successfully completing the first stage in late 2017.
The company claims it will have its first commercial IMSRs licensed and operating in the 2020s.
Design
The integral molten salt reactor (IMSR) integrates into a compact, sealed and replaceable nuclear reactor unit, called the IMSR Core-unit. The Core-unit comes in a single size designed to deliver 440 megawatts of thermal power. If used to generate electricity, the notional capacity is 195 megawatts electrical. The unit includes all the primary components of the nuclear reactor that operate on the liquid molten fluoride salt fuel: moderator, primary heat exchangers, pumps and shutdown rods. The Core-unit forms the heart of the IMSR system. In the Core-unit, the fuel salt is circulated between the graphite core and heat exchangers. The Core-unit itself is placed inside a surrounding vessel called the guard vessel. The entire Core-unit module can be lifted out for replacement. The guard vessel that surrounds the Core-unit acts as a containment vessel. In turn, a shielded silo surrounds the guard vessel.
The IMSR belongs to the denatured molten salt reactor (DMSR) class of molten salt reactors (MSR).
It is designed to have all the safety features associated with the Molten Salt class of reactors including low pressure operation (the reactor and primary coolant is operated near normal atmospheric pressure), the inability to lose primary coolant (the fuel is the coolant), the inability to suffer a meltdown accident (the fuel operates in an already molten state) and the robust chemical binding of the fission products within the primary coolant salt (reduced pathway for accidental release of fission products).
The design uses standard assay low-enriched uranium fuel, with less than 5% U235 with a simple converter (also known as a "burner") fuel cycle objective (as do most operating power reactors today). The proposed fuel is in the form of uranium tetrafluoride (UF4) blended with carrier salts. These salts are also fluorides, such as lithium fluoride (LiF), sodium fluoride (NaF) and/or beryllium fluoride (BeF2). These carrier salts increase the heat capacity of the fuel and lower the fuel's melting point.
The fuel salt blend also acts as the primary coolant for the reactor.
The IMSR is a thermal-neutron reactor moderated by vertical graphite tubular elements.
The molten salt fuel-coolant mixture flows upward through these tubular elements where it goes critical.
After heating up in this moderated core the liquid fuel flows upward through a central common chimney and is then pulled downward by pumps through heat exchangers positioned inside the reactor vessel.
The liquid fuel then flows down the outer edge of the reactor core to repeat the cycle.
All the primary components, heat exchangers, pumps etc. are positioned inside the reactor vessel.
The reactor’s integrated architecture avoids the use of external piping for the fuel that could leak or break.
The piping external to the reactor vessel contain two additional salt loops in series: a secondary, nonradioactive coolant salt, followed by another (third) coolant salt.
These salt loops act as additional barriers to any radionuclides, as well as improving the system's heat capacity. It also allows easier integration with the heat sink end of the plant; either process heat or power applications using standard industrial grade steam turbine plants are envisioned by Terrestrial Energy.
The IMSR Core-unit is designed to be completely replaced after a 7-year period of operation. During operation, small fresh fuel/salt batches are periodically added to the reactor system. This online refueling process does not require the mechanical refueling machinery required for solid fuel reactor systems.
Many of these design features are based on two previous molten salt designs from Oak Ridge National Laboratory (ORNL) – the ORNL denatured molten salt reactor (DMSR) from 1980 and the solid fuel/liquid salt cooled, small modular advanced high temperature reactor (SmAHTR), a 2010 design. The DMSR, as carried into the IMSR design, proposed to use molten salt fuel and graphite moderator in a simplified converter design using LEU, with periodic additions of LEU fuel. Most previous proposals for molten salt reactors all bred more fuel than needed to operate, so were called breeders. Converter or "burner" reactors like the IMSR and DMSR can also utilize plutonium from existing spent fuel as their makeup fuel source. The more recent SmAHTR proposal was for a small, modular, molten salt cooled but solid TRISO fuelled reactor.
Replaceable core-unit
The design uses a replaceable Core-unit. When the graphite moderator's lifetime exposure to neutron flux causes it to start distorting beyond acceptable limits, rather than remove and replace the graphite moderator, the entire IMSR Core-unit is replaced as a unit. This includes the pumps, pump motors, shutdown rods, heat exchangers and graphite moderator, all of which are either inside the vessel or directly attached to it. To facilitate a replacement, the design employs two reactor silos in the reactor building, one operating and one idle or with a previous, empty, spent Core-unit in cool-down. After 7 years of operation, the Core-unit is shut down and cools in place to allow short-lived radionuclides to decay. After that cool-down period, the spent Core-unit is lifted out and eventually replaced.
Simultaneously, a new Core-unit is installed and activated in the second silo. This entails connection to the secondary (coolant) salt piping, placement of the containment head and biological shield and loading with fresh fuel salt. The containment head provides double containment (the first being the sealed reactor vessel itself). The new Core-unit can now start its 7 years of power operations.
The IMSR vendor accumulates sealed, spent IMSR Core-units and spent fuel salt tanks in onsite, below grade silos. This operational mode reduces uncertainties with respect to long service life of materials and equipment, replacing them by design rather than allowing age-related issues such as creep or corrosion to accumulate.
Online refueling
The IMSR employs online fueling. While operating, small fresh fuel salt batches are periodically added to the reactor system. As the reactor uses circulating liquid fuel this process does not require complex mechanical refueling machinery. The reactor vessel is never opened, thereby ensuring a clean operating environment. During the 7 years, no fuel is removed from the reactor; this differs from solid fuel reactors which must remove fuel to make room for any new fuel assemblies, limiting fuel utilization.
Safety
Nuclear power reactors have three fundamental safety requirements: control, cooling, and containment.
Control
Nuclear reactors require control over the critical nuclear chain reaction. As such, the design must provide for exact control over the reaction rate of the core, and must enable reliable shut-down when needed. Under routine operations, the IMSR relies on intrinsic stability for reactivity control; there are no control rods. This behavior is known as negative power feedback: the reactor is self-stabilizing in power output and temperature, and is characterized as a load-following reactor. Reactor power is controlled by the amount of heat removed from the reactor. Increased heat removal results in a drop in fuel salt temperature, which increases reactivity and in turn power. Conversely, reducing heat removal will increase reactor temperature at first, lowering reactivity and subsequently reducing reactor power. If all heat removal is lost, the reactor power will drop to a very low level.
As backup (and shutdown method for maintenance), the IMSR employs shutdown rods filled with neutron absorber. These rods are normally held out of the critical region by the upward pressure of the pumped salt in circulation but will drop into place to stop criticality if pumped circulation is lost due to a power outage or pump failure.
As with other molten salt reactors, the reactor can also be shut down by draining the fuel salt from the Core-unit into storage tanks.
A failsafe backup is provided in the form of meltable cans, filled with a liquid neutron absorbing material that will permanently shut down the reactor in the event of a severe overheating event.
Cooling
A nuclear reactor is a thermal power system—it generates heat, transports it and eventually converts it to mechanical energy in a heat engine, in this case a steam turbine.
Such systems require that the heat is removed, transported and converted at the same rate it is generated.
A fundamental issue for nuclear reactors is that even when the nuclear fission process is halted, heat continues to be generated at significant levels by the radioactive decay of the fission products for days or months.
This is known as decay heat and is the major safety driver behind the cooling of nuclear reactors, because this decay heat must be removed.
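The magnitude of decay heat can be estimated with a Way-Wigner-type approximation. In the Python sketch below, the 0.066 coefficient and the -0.2 exponent are an engineering rule of thumb accurate only to tens of percent, and applying it to a 440 MWth core after seven years of operation is an illustration, not vendor data:

```python
# Way-Wigner style decay-heat approximation as a fraction of pre-shutdown power P0:
#   P(t)/P0 ~= 0.066 * (t**-0.2 - (t + T_op)**-0.2)
# with t = time since shutdown and T_op = time at power, both in seconds.
P0 = 440e6                       # thermal power before shutdown, W (illustrative)
T_op = 7 * 365.25 * 86400.0      # seven years of operation, s

for label, t in [("1 minute", 60.0), ("1 hour", 3600.0),
                 ("1 day", 86400.0), ("1 month", 30 * 86400.0)]:
    frac = 0.066 * (t**-0.2 - (t + T_op)**-0.2)
    print(f"{label:>8} after shutdown: {100*frac:5.2f} %  (~{P0*frac/1e6:6.2f} MW)")
```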
For conventional light water reactors the flow of cooling water must continue in all foreseeable circumstances, otherwise damage and melting of the (solid) fuel can result.
Light water reactors operate with a volatile coolant, requiring high pressure operation and depressurization in an emergency.
The IMSR instead uses liquid fuel at low pressure.
IMSR does not rely on bringing coolant to the reactor or depressurizing the reactor, using instead passive cooling.
Heat continuously dissipates from the Core-unit.
During normal operation, heat loss is reduced by the moderate temperature of the reactor vessel in normal operation, combined with the stagnant air between the Core-unit and guard vessel, which only allows radiant heat transfer. Radiant heat transfer is a strong function of temperature; any increase in the temperature of the Core-unit will rapidly increase heat loss.
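The strong temperature dependence follows from the Stefan-Boltzmann law. In the Python sketch below, the emissivity, the temperatures, and the neglect of view-factor geometry are illustrative simplifications, not design data:

```python
SIGMA = 5.670e-8     # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiant_flux(T_hot_C, T_cold_C, emissivity=0.8):
    """Net radiant heat flux (W/m^2) between two gray surfaces,
    ignoring geometry and view factors (illustrative simplification)."""
    Th, Tc = T_hot_C + 273.15, T_cold_C + 273.15
    return emissivity * SIGMA * (Th**4 - Tc**4)

# Radiation from the Core-unit vessel to its surroundings: a modest rise
# in vessel temperature sharply increases the heat rejected.
for T_vessel in (600.0, 700.0, 800.0):
    q = radiant_flux(T_vessel, 100.0)
    print(f"vessel at {T_vessel:5.1f} C: {q/1000:6.1f} kW/m^2")
```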
Upon shutdown of the primary salt pumps, the reactor passively drops power to a very small level.
It can still heat up slowly by the small but constant decay heat as previously described. Due to the large heat capacity of the graphite and the salts, this increase in temperature is slow.
The higher temperatures slowly increase thermal radiant heat loss, and subsequent heat loss from the guard vessel itself to the outside air.
Low pressure nitrogen flows by natural convection over the outside of the guard vessel, transporting heat to the metal reactor building roof.
This roof provides the passive heat loss required, acting as a giant radiator to the outside air.
As a result, heat loss is increased while decay heat naturally drops; an equilibrium is reached where temperatures peak and then drop.
The thermal dynamics and inertia of the entire system of the Core-unit in its containment silo is sufficient to absorb and disperse decay heat.
In the long term, as decay heat dissipates almost completely, and the plant is still not recovered, the reactor would increase power to the level of the heat loss to the internal reactor vessel auxiliary cooling system (IRVACS), and stay at that low power level (and normal temperature) indefinitely.
In the event that the low pressure nitrogen coolant leaks from the IRVACS, natural air will offer a similar cooling capability, albeit with minor nuclear activation of the argon in the air.
The molten salts are excellent heat transfer fluids, with volumetric heat capacities close to water, along with high thermal conductivity.
Containment
All molten salt reactors have features that contribute to containment safety. These mostly have to do with the properties of the salt itself. The salts are chemically inert. They do not burn and are not combustible. The salts have low volatility (high boiling points around 1400 °C), allowing a low operating pressure of the core and cooling loops. This provides a large margin above the normal operating temperature of some 600 to 700 °C. This makes it possible to operate at low pressures without risk of coolant/fuel boiling (an issue with water cooled reactors).
The high chemical stability of the salt precludes energetic chemical reactions, such as hydrogen gas generation/detonation and sodium combustion, that can challenge the design and operations of other reactor types. The fluoride salt reacts with many fission products to produce chemically stable, non-volatile fluorides, such as cesium fluoride. Similarly, the majority of other high-risk fission products, such as iodine, dissolve into the fuel salt, bound up as iodide salts. However, for the MSRE "of the order of one-fourth to one-third of the iodine has not been adequately accounted for." There is some uncertainty as to whether this is a measurement error, as the concentrations are small and other fission products also had similar accounting problems. See liquid fluoride thorium reactor and molten salt reactor for more information.
The IMSR also has multiple physical containment barriers. It uses a sealed, integral reactor unit, the Core-unit. The Core-unit is surrounded by the guard vessel on its side and bottom, itself surrounded by a gas-tight structural steel and concrete silo. The Core-unit is covered up from the top by a steel containment head which is itself covered by thick round steel and concrete plates. The plates serve as radiation shield and provide protection against external hazards such as explosions or aircraft crash penetration. The reactor building provides an additional layer of protection against such external hazards, as well as a controlled, filtered-air confinement area.
Most molten salt reactors use a gravity drain tank as an emergency storage reservoir for the molten fuel salt. The IMSR deliberately avoids this drain tank. The IMSR design is simpler and eliminates the bottom drain line and the accompanying risks from low-level vessel penetrations. The result is a more compact, robust design with fewer parts and fewer failure scenarios. The salt can, however, be drained from the reactor by pumping it out from the top.
Relative to light water reactors, the scale and capital cost of the containment building are significantly reduced, as there is no need to deal with the phase-change risk associated with a water-based coolant.
Economics
The economics of conventional nuclear reactors are dominated by capital cost, primarily the cost to build and finance the construction of the facility. Uranium costs are relatively low; however, conventional fuel fabrication is a significant operating cost.
Due to the dominance of capital cost, most nuclear power reactors have sought to reduce cost per watt by increasing the total power output of the reactor system. However, this often leads to very large projects that are difficult to finance, manage, and standardize.
Terrestrial Energy states that they have made progress on this by producing a more compact, efficient reactor system, with a greater safety allowance compared to traditional systems, as well as avoiding complex fuel fabrication processes.
As molten salts have a low vapor pressure and a high volumetric heat capacity, the reactor and containment can be compact and low pressure. This allows for more modularity in construction.
The higher operating temperature of molten salts improves thermodynamic efficiency. The IMSR produces around 40% more electricity than a comparably sized water-cooled SMR, and hence around 40% more revenue from the same reactor size, which has a large impact on the economics of the reactor. The design is also able to extract more energy from the same quantity of fuel before it is considered "spent."
Safety approach
A large part of the cost of traditional nuclear power reactors is related to safety, and the resulting quality and regulatory requirements that can drive costs up.
The IMSR approach is to rely on inherent and passive safety features rather than complex active systems, potentially reducing costs in this important area while still increasing the safety profile.
For control, inherent reactor power control by reactivity feedback is used, rather than a reactor control system that actively positions control rods.
For cooling, an always-on, passive cooling system based on heat loss enables safety-grade decay heat removal. Unlike conventional reactors, the IMSR decay cooling mechanism does not require backup electric power.
For containment, the salt properties provide a key difference with water-cooled reactors. The salts have low vapor pressures and high boiling points, and are chemically stable. High pressures and hydrogen threats are thereby eliminated from the containment design, reducing the required containment volume, design pressure, and attendant costs. The high cesium retention of the salt reduces the available source term in an accident, further reducing the fundamental risk profile.
Efficiency
Conventional nuclear reactors, such as pressurized and boiling water reactors, use water as a coolant. Due to water's high vapor pressure at elevated temperatures, they are limited to operating at a relatively low temperature, usually near 300 °C. This limits the thermodynamic efficiency, typically to around 32–34%. In other words, water-cooled power reactors generate 32–34 watts of electricity for every 100 watts of reactor power.
The higher thermal stability and low vapor pressure of the salt allow operation at higher temperatures. The IMSR provides final heat at temperatures of around 550–600 °C, which results in an efficiency in the 45–48% range.
The IMSR produces around 1.4 times more electricity per unit reactor heat output compared to conventional commercial reactors. Thus it generates some 40% more revenue from the same reactor power. This has a large impact on the project economics. In addition, the higher temperature of the IMSR allows for the use of more compact, lower-cost turbine systems, already in common use with coal fired power stations, as opposed to conventional nuclear power plants that usually need specialized low-temperature turbines that are not used anywhere else. This helps to further lower the capital cost.
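A quick arithmetic check of the quoted ratio, using the mid-points of the efficiency ranges stated above; the Carnot figures are generic thermodynamic bounds, not vendor data.

```python
# Check of the "~1.4x electricity per unit heat" claim using the mid-points
# of the efficiency ranges quoted in the text, plus generic Carnot bounds.

def carnot(t_hot_c: float, t_cold_c: float = 30.0) -> float:
    """Ideal Carnot efficiency for source/sink temperatures in Celsius."""
    return 1.0 - (t_cold_c + 273.15) / (t_hot_c + 273.15)

lwr, imsr = 0.33, 0.46  # mid-points of the quoted 32-34 % and 45-48 % ranges
print(f"practical ratio: {imsr / lwr:.2f}x")        # ~1.39x, i.e. ~40 % more
print(f"Carnot bound at 300 C: {carnot(300):.2f}")  # ~0.47
print(f"Carnot bound at 600 C: {carnot(600):.2f}")  # ~0.65
```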
Nuclear efficiency is the amount of nuclear fuel used per unit of electricity generated. While uranium is relatively cheap, fuel costs in a traditional nuclear facility are significant due to the high cost of fuel fabrication. The IMSR avoids most of the expensive fabrication process, and as such its fuel cost is expected to be negligible.
Modularity
A key cost driver is in the nature of the equipment used. Standardized, manufactured components are lower cost than specialized, or even custom components.
Molten salts have high volumetric heat capacity, a low vapor pressure and no hydrogen generation potential, so there is no need for large-volume, high-pressure vessels for the reactor and containment or other equipment areas. This reduces the size of the Core-unit and containment compared to water-cooled reactors. Similarly, molten salt heat exchangers used are more compact than the large steam generators employed in PWRs.
The compact Core-unit forms the basis of the modularity of the IMSR system. Core-units are identical and small enough to be fabricated in a controlled indoor environment.
Reactor pressure
High pressure is a cost driver for any component, as it increases both quality requirements and required materials (thickness). Large, high-pressure components require heavy welds and forgings that have limited availability. A typical operating pressure for a pressurized water reactor (PWR) is over 150 atmospheres. For the IMSR, due to the low vapor pressure and high boiling point of the salt, the Core-unit operates at or near atmospheric pressure (apart from a few atmospheres from the hydrostatic weight of the salt). This is despite the higher operating temperature. The result is lighter, thinner components that are easier to manufacture and modularize.
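A rough sketch of why pressure drives component thickness, using the thin-walled-cylinder hoop-stress relation t = p·r/σ; the vessel radius and allowable stress are illustrative assumptions, not vendor figures.

```python
# Thin-walled-cylinder hoop-stress sketch of why pressure drives wall
# thickness: t = p * r / sigma_allow. Radius and allowable stress are
# illustrative assumptions, not vendor figures.

ATM = 101_325.0  # Pa

def wall_thickness_m(pressure_pa: float, radius_m: float,
                     allowable_stress_pa: float = 100e6) -> float:
    """Minimum wall thickness for a thin-walled cylinder at the stress limit."""
    return pressure_pa * radius_m / allowable_stress_pa

for label, p_atm in [("PWR vessel, ~155 atm", 155), ("IMSR vessel, ~3 atm", 3)]:
    t = wall_thickness_m(p_atm * ATM, radius_m=2.0)
    print(f"{label}: wall ~ {t * 100:.1f} cm")
```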
Other markets
Various non-electric applications have a large market demand for energy: steam reforming, paper and pulp production, chemicals and plastics, etc. Water-cooled conventional reactors are unsuitable for most of these markets due to their low operating temperature of around 300 °C, and are too large to match single-point industrial heat needs. The IMSR's smaller size and higher operating temperature (around 700 °C in the reactor, up to 600 °C delivered) could potentially open up new markets in these process heat applications. In addition, cogeneration, the production of both heat and electricity, is also potentially attractive.
Licensing
Terrestrial Energy was founded in Canada in 2013 with the objective of commercialising the IMSR, and is currently working to license (in both Canada and the USA) an IMSR design with a thermal power capacity of 400 MW (equivalent to 190 MW electrical).
As standard industrial grade steam turbines are proposed, cogeneration, or combined heat and power, is also possible.
In 2016, Terrestrial Energy engaged in a pre-licensing design review for the IMSR with the Canadian Nuclear Safety Commission (CNSC).
It successfully completed the first stage of this process in late 2017, and entered the second phase of the design review in October 2018.
Terrestrial Energy claims it will have its first commercial IMSRs licensed and operating in the 2020s.
On August 15, 2019, the CNSC and the United States Nuclear Regulatory Commission signed a joint memorandum of cooperation (MOC) aimed at enhancing technical reviews of advanced reactor and small modular reactor technologies. As part of the MOC, the agencies undertook in May 2022 a joint review of Terrestrial Energy's Postulated Initiating Events (PIE) analysis and methodology for the IMSR®. This work is foundational for further regulatory safety reviews and for the regulatory program to prepare the license applications required to operate IMSR® plants in Canada and the United States.
In 2023 the CNSC completed phase 2 of a Vendor Design Review and declared that there were no fundamental barriers to licensing the IMSR design. However, this decision is non-binding, and Terrestrial Energy still needs a site and construction license to proceed.
See also
References
Further reading
Nuclear power
Nuclear power reactor types
Molten salt reactors | Integral Molten Salt Reactor | [
"Physics"
] | 4,546 | [
"Power (physics)",
"Physical quantities",
"Nuclear power"
] |
48,249,035 | https://en.wikipedia.org/wiki/Corey%E2%80%93Link%20reaction | In organic chemistry, the Corey–Link reaction is a name reaction that converts a 1,1,1-trichloro-2-keto structure into a 2-aminocarboxylic acid (an alpha amino acid) or other acyl functional group with control of the chirality at the alpha position. The reaction is named for E.J. Corey and John Link, who first reported the reaction sequence.
Process
The first stage of the process is the reduction of the carbonyl to give a 1,1,1-trichloro-2-hydroxy structure. The original protocol used catecholborane with a small amount of one enantiomer of CBS catalyst (a Corey–Itsuno reduction). The choice of chirality of the catalyst thus gives selectivity for one or the other stereochemistry of the alcohol, which subsequently controls the stereochemistry of the amino substituent in the ultimate product.
This 2-hydroxy structure then undergoes a Jocic–Reeve reaction using azide as the nucleophile. The multistep reaction mechanism begins with deprotonation of the alcohol, followed by the oxygen-anion attacking the adjacent trichloromethyl position to form an epoxide. The azide then opens this ring by an SN2 reaction to give a 2-azido structure whose stereochemistry is inverted compared to the original 2-hydroxy. The oxygen, having become attached to the first carbon of the chain during the epoxide formation, simultaneously displaces a second chlorine atom there to form an acyl chloride. An additional nucleophilic reactant, such as hydroxide or an alkoxide, then triggers an acyl substitution to produce a carboxylic acid or ester. Various other nucleophiles can be used to generate other acyl functional groups. This sequence of steps gives a 2-azido compound, which is then reduced to the 2-amino compound in a separate reaction, typically a Staudinger reaction.
Bargellini reaction
The Bargellini reaction involves the same type of dichloroepoxy intermediate, formed by a different method, that reacts with a single structure containing two nucleophilic groups. It thus gives products such as morpholinones or piperazinones, alpha-amino esters or amides in which the amine is tethered to the acyl substituent group.
References
Name reactions
Chemical synthesis of amino acids | Corey–Link reaction | [
"Chemistry"
] | 520 | [
"Name reactions"
] |
48,253,250 | https://en.wikipedia.org/wiki/TBC%20domain | The TBC domain is an evolutionarily conserved protein domain found in all eukaryotes. It is approximately 180 to 200 amino acids long. The domain is named for its initial discovery in the proteins Tre-2, Bub2, and Cdc16.
TBC family members act as GTPase-activating proteins (GAPs) for small GTPases in regulating the cell cycle. For example, Rab activity is modulated in part by GAPs, and many RabGAPs share a Tre2/Bub2/Cdc16 (TBC) domain architecture, suggesting that TBC domain-containing proteins may behave similarly.
Examples
USP6 and CDC16 contain TBC domains. In addition, all proteins in the TBC family contain this domain.
Functions
TBC domains mainly function as specific Rab GAPs (GTPase-activating proteins), and TBC proteins can be used as tools to inactivate specific membrane trafficking events. GAPs increase GTPase activity by contributing residues to the active site and promoting conversion from the GTP-bound to the GDP-bound form. Such activity of TBC proteins does not always require a close physical interaction, although few TBC proteins have shown clear GAP activity towards the Rabs they bind. Rab families contribute to defining organelles and to controlling the specificity and rate of transport through individual pathways. Therefore, TBC Rab-GAPs are essential regulators of intracellular and membrane transport, as well as central participants in signal transduction. Nevertheless, not all TBC proteins may have a primary role as Rab-GAPs and, conversely, not all Rab-GAPs contain a TBC domain. The fact that this family has been poorly studied complicates matters further.
Evolution and research
Phylogenetic analysis has provided insight into the evolution of the TBC family. ScrollSaw was implemented as a recent strategy to overcome the poor resolution between TBC genes obtained with standard phylogenetic strategies during initial reconstructions. Significantly, the TBC gene complement is nearly always smaller than the Rab cohort in any individual genome, suggesting Rab/TBC coevolution. Twenty-one putative TBC sub-classes were identified, grouped into seven robust and two moderately supported clades.
Moreover, there has also been systematic analysis to identify the target Rabs of TBC proteins. It was at first based on the physical interaction between the TBC domain and its substrate Rab. For instance, Barr and his coworkers found a specific interaction between RUTBC3/RabGAP-5 and Rab5A that activates the GTPase activity of the Rab5 isoform. Other research has shown that, among other important aspects, the TBC–Rab interaction alone is insufficient to determine the target of TBC proteins. A second approach to identifying the target Rabs of TBC proteins has been to investigate their in vitro GAP activity, yet similar discrepancies between the findings of different investigators can be found in the literature and may be attributable to differences between in vitro methods. In addition, research has shown that TBC proteins are associated with some human diseases. For example, dysfunction of TBC1D1 and TBC1D4 directly affects insulin action and glucose uptake, causing overweight or leanness, because these two family members regulate insulin-stimulated GLUT4 translocation to the plasma membrane in mammals. Furthermore, many TBC proteins have been shown to be associated with cancer, but the exact mechanism by which they are associated with this illness remains largely unknown. Therefore, much research remains to be done on this topic.
References
External links
Protein domains | TBC domain | [
"Biology"
] | 753 | [
"Protein domains",
"Protein classification"
] |
48,258,779 | https://en.wikipedia.org/wiki/Chromatophore%20%28bacteria%29 | In some forms of photosynthetic bacteria, a chromatophore is a pigmented(coloured), membrane-associated vesicle used to perform photosynthesis. They contain different coloured pigments.
Chromatophores contain bacteriochlorophyll pigments and carotenoids. In purple bacteria, such as Rhodospirillum rubrum, the light-harvesting proteins are intrinsic to the chromatophore membranes. However, in green sulfur bacteria, they are arranged in specialised antenna complexes called chlorosomes.
References
Photosynthesis
Organelles | Chromatophore (bacteria) | [
"Chemistry",
"Biology"
] | 125 | [
"Bacteria stubs",
"Photosynthesis",
"Bacteria",
"Biochemistry",
"Chemical process stubs"
] |
46,605,227 | https://en.wikipedia.org/wiki/Shifted%20force%20method | The net electrostatic force acting on a charged particle with index contained within a collection of particles is given as:
where $\vec{r}$ is the spatial coordinate, $j$ is a particle index, $r_{ij}$ is the separation distance between particles $i$ and $j$, $\hat{r}_{ij}$ is the unit vector from particle $j$ to particle $i$, $F$ is the force magnitude, and $q_i$ and $q_j$ are the charges of particles $i$ and $j$, respectively. With the electrostatic force being proportional to $r_{ij}^{-2}$, individual particle-particle interactions are long-range in nature, presenting a challenging computational problem in the simulation of particulate systems. To determine the net forces acting on particles, the Ewald or Lekner summation methods are generally employed. One alternative and usually computationally faster technique based on the notion that interactions over large distances (e.g. > 1 nm) are insignificant to the net forces acting in certain systems is the method of spherical truncation. The equations for basic truncation are:
$$F(r_{ij}) = \begin{cases} \dfrac{q_i q_j}{r_{ij}^{2}} & r_{ij} \le R_c \\[4pt] 0 & r_{ij} > R_c \end{cases}$$
where $R_c$ is the cutoff distance. Simply applying this cutoff method introduces a discontinuity in the force at $r_{ij} = R_c$ that results in particles experiencing sudden impulses when other particles cross the boundary of their respective interaction spheres. In the particular case of electrostatic forces, as the force magnitude is large at the boundary, this unphysical feature can compromise simulation accuracy. A way to correct this problem is to shift the force to zero at $R_c$, thus removing the discontinuity. This can be accomplished with a variety of functions, but the most simple/computationally efficient approach is to simply subtract the value of the electrostatic force magnitude at the cutoff distance as such:
$$F_{\mathrm{SF}}(r_{ij}) = \begin{cases} \dfrac{q_i q_j}{r_{ij}^{2}} - \dfrac{q_i q_j}{R_c^{2}} & r_{ij} \le R_c \\[4pt] 0 & r_{ij} > R_c \end{cases}$$
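A minimal sketch of the two schemes in code, in reduced units (unit charges, no unit prefactors) with an arbitrary cutoff, showing the discontinuity that plain truncation leaves at the cutoff and its removal by the force shift:

```python
# Plain truncation vs. the shifted-force (SF) scheme for the pairwise force
# magnitude above, in reduced units (q_i = q_j = 1). Cutoff is arbitrary.

def coulomb(r: float) -> float:
    return 1.0 / r ** 2  # q_i * q_j / r^2 with unit charges

def truncated(r: float, r_cut: float) -> float:
    """Plain spherical truncation: discontinuous at r = r_cut."""
    return coulomb(r) if r <= r_cut else 0.0

def shifted_force(r: float, r_cut: float) -> float:
    """SF scheme: subtract F(R_c) so the force reaches zero at the cutoff."""
    return coulomb(r) - coulomb(r_cut) if r <= r_cut else 0.0

R_CUT = 10.0
for r in (9.0, 9.99, 10.01):
    print(f"r = {r:5.2f}: truncated = {truncated(r, R_CUT):.6f}, "
          f"shifted = {shifted_force(r, R_CUT):.6f}")
# The truncated force jumps from ~0.01 to 0 across the cutoff; the SF force
# goes to zero continuously (its first derivative remains discontinuous).
```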
As mentioned before, the shifted force (SF) method is generally suited for systems that do not have net electrostatic interactions that are long-range in nature. This is the case for condensed systems that show electric-field screening effects. Note that anisotropic systems (e.g. interfaces) may not be accurately simulated with the SF method, although an adaptation of the SF method for interfaces has recently been suggested. Additionally, note that certain system properties (e.g. energy-dependent observables) will be more greatly influenced by the use of the SF method than others. It is not safe to assume, without reasonable argument, that the SF method can be used to accurately determine a certain property for a given system. If the accuracy of the SF method needs to be tested, this may be done by testing for convergence (i.e., showing that simulation results do not significantly change with increasing cutoff) or by comparing with results obtained through other electrostatics techniques (such as Ewald) that are known to perform well. As a rough rule of thumb, results obtained with the SF method tend to be sufficiently accurate when the cutoff is at least five times larger than the distance of the near-neighbor interactions.
With the SF method, a discontinuity is still present in the derivative of the force, and for ionic liquids it may be preferable to further alter the force equation so as to remove that discontinuity.
References
Molecular dynamics
Computational chemistry
Molecular modelling | Shifted force method | [
"Physics",
"Chemistry"
] | 614 | [
"Molecular physics",
"Computational physics",
"Molecular dynamics",
"Computational chemistry",
"Theoretical chemistry",
"Molecular modelling"
] |
46,606,018 | https://en.wikipedia.org/wiki/Faithfully%20flat%20descent | Faithfully flat descent is a technique from algebraic geometry, allowing one to draw conclusions about objects on the target of a faithfully flat morphism. Such morphisms, that are flat and surjective, are common, one example coming from an open cover.
In practice, from an affine point of view, this technique allows one to prove some statement about a ring or scheme after faithfully flat base change.
"Vanilla" faithfully flat descent is generally false; instead, faithfully flat descent is valid under some finiteness conditions (e.g., quasi-compact or locally of finite presentation).
Faithfully flat descent is a special case of Beck's monadicity theorem.
Idea
Given a faithfully flat ring homomorphism $A \to B$, faithfully flat descent is, roughly, the statement that to give a module or an algebra over $A$ is to give a module or an algebra over $B$ together with a so-called descent datum (or data). That is to say, one can descend objects (or even statements) on $B$ to $A$ provided some additional data.
For example, given some elements $f_1, \dots, f_r$ generating the unit ideal of $A$, $B = \prod_i A[f_i^{-1}]$ is faithfully flat over $A$. Geometrically, $\operatorname{Spec}(B) = \coprod_i \operatorname{Spec}(A[f_i^{-1}])$ is an open cover of $\operatorname{Spec}(A)$, and so descending a module from $B$ to $A$ would mean gluing modules $M_i$ on the $A[f_i^{-1}]$ to get a module on $A$; the descent datum in this case amounts to the gluing data, i.e., how the $M_i, M_j$ are identified on the overlaps $\operatorname{Spec}(A[f_i^{-1}, f_j^{-1}])$.
Affine case
Let $A \to B$ be a faithfully flat ring homomorphism. Given an $A$-module $M$, we get the $B$-module $N = M \otimes_A B$ and, because $A \to B$ is faithfully flat, we have the inclusion $M \hookrightarrow M \otimes_A B$. Moreover, we have the isomorphism $\varphi : N \otimes_A B \simeq B \otimes_A N$ of $B \otimes_A B$-modules that is induced by the isomorphism $B \otimes_A B \simeq B \otimes_A B,\ x \otimes y \mapsto y \otimes x$, and that satisfies the cocycle condition:
$$\varphi^1 = \varphi^0 \circ \varphi^2$$
where the $\varphi^i : N \otimes_A B \otimes_A B \simeq B \otimes_A B \otimes_A N$ are given as the base changes of $\varphi$ along the maps $d^i : B \otimes_A B \to B \otimes_A B \otimes_A B$:
$$\varphi^i = \varphi \otimes_{d^i} (B \otimes_A B \otimes_A B)$$
with $d^i$ the map inserting $1$ in the $i$-th spot (defined precisely below). Note the isomorphisms $\varphi^i$ are determined only by $\varphi$ and do not involve $M$.
Now, the most basic form of faithfully flat descent says that the above construction can be reversed; i.e., given a $B$-module $N$ and a $B \otimes_A B$-module isomorphism $\varphi : N \otimes_A B \simeq B \otimes_A N$ such that $\varphi^1 = \varphi^0 \circ \varphi^2$, the invariant submodule:
$$M = \{ n \in N \mid \varphi(n \otimes 1) = 1 \otimes n \} \subseteq N$$
is such that $M \otimes_A B = N$.
Here is the precise definition of a descent datum. Given a ring homomorphism $A \to B$, we write:
$$d^i : B^{\otimes n} \to B^{\otimes n+1}$$
for the map given by inserting $1$ in the $i$-th spot; i.e., $d^0$ is given as $b_1 \otimes \cdots \otimes b_n \mapsto 1 \otimes b_1 \otimes \cdots \otimes b_n$, $d^1$ as $b_1 \otimes \cdots \otimes b_n \mapsto b_1 \otimes 1 \otimes b_2 \otimes \cdots \otimes b_n$, etc. We also write $- \otimes_{d^i} B^{\otimes n+1}$ for tensoring over $B^{\otimes n}$ when $B^{\otimes n+1}$ is given the module structure by $d^i$.
Now, given a $B$-module $N$ with a descent datum $\varphi$, define $M$ to be the kernel of
$$d^0 - \varphi \circ d^1 : N \to B \otimes_A N.$$
Consider the natural map
$$M \otimes_A B \to N, \quad x \otimes b \mapsto xb.$$
The key point is that this map is an isomorphism if $A \to B$ is faithfully flat. This is seen by considering the following commutative diagram:
$$\begin{array}{cccccc}
0 \to & M \otimes_A B & \to & N \otimes_A B & \rightrightarrows & (B \otimes_A N) \otimes_A B \\
 & \downarrow & & \downarrow & & \downarrow \\
0 \to & N & \to & B \otimes_A N & \rightrightarrows & B \otimes_A B \otimes_A N
\end{array}$$
where the top row is exact by the flatness of B over A and the bottom row is the Amitsur complex, which is exact by a theorem of Grothendieck. The cocycle condition ensures that the above diagram is commutative. Since the second and the third vertical maps are isomorphisms, so is the first one.
The foregoing can be summarized simply as follows: given a faithfully flat ring homomorphism $A \to B$, the functor $M \mapsto M \otimes_A B$ is an equivalence from the category of $A$-modules to the category of pairs $(N, \varphi)$ consisting of a $B$-module $N$ and a descent datum $\varphi$ on it.
Zariski descent
The Zariski descent refers simply to the fact that a quasi-coherent sheaf can be obtained by gluing those on a (Zariski-)open cover. It is a special case of a faithfully flat descent but is frequently used to reduce the descent problem to the affine case.
In detail, let $\operatorname{Qcoh}(X)$ denote the category of quasi-coherent sheaves on a scheme $X$. Then Zariski descent states that, given quasi-coherent sheaves $F_i$ on open subsets $U_i$ with $X = \bigcup_i U_i$ and isomorphisms $\varphi_{ij} : F_i|_{U_i \cap U_j} \simeq F_j|_{U_i \cap U_j}$ such that (1) $\varphi_{ii} = \operatorname{id}$ and (2) $\varphi_{ik} = \varphi_{jk} \circ \varphi_{ij}$ on $U_i \cap U_j \cap U_k$, there exists a unique quasi-coherent sheaf $F$ on $X$ such that $F|_{U_i} \simeq F_i$ in a compatible way (i.e., the isomorphism $F|_{U_j} \simeq F_j$ restricts to $F|_{U_i \cap U_j} \simeq F_j|_{U_i \cap U_j}$).
In fancier language, Zariski descent states that, with respect to the Zariski topology, $\operatorname{Qcoh}$ is a stack; i.e., a category equipped with a functor $p$ to the category of (relative) schemes that has an effective descent theory. Here, let $\operatorname{Qcoh}$ denote the category consisting of pairs $(U, F)$ of a (Zariski-)open subset $U$ and a quasi-coherent sheaf $F$ on it, and $p$ the forgetful functor $(U, F) \mapsto U$.
Descent for quasi-coherent sheaves
There is a succinct statement for the major result in this area (the prestack of quasi-coherent sheaves over a scheme $S$ means that, for any $S$-scheme $X$, each $X$-point of the prestack is a quasi-coherent sheaf on $X$): the prestack of quasi-coherent sheaves is a stack with respect to the fpqc topology; in other words, descent of quasi-coherent sheaves along a faithfully flat, quasi-compact morphism is effective.
The proof uses Zariski descent and the faithfully flat descent in the affine case.
Here "quasi-compact" cannot be eliminated.
Example: a vector space
Let $F$ be a finite Galois field extension of a field $k$. Then, for each vector space $V$ over $F$,
$$V \otimes_k F \simeq \prod_{\sigma} V,$$
where the product runs over the elements $\sigma$ in the Galois group of $F/k$.
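As a concrete instance (a standard example, not taken from the statement above): for $k = \mathbb{R}$ and $F = \mathbb{C}$, the Galois group is $\{\operatorname{id}, \sigma\}$ with $\sigma$ complex conjugation, and for a $\mathbb{C}$-vector space $V$ the displayed isomorphism reads
$$V \otimes_{\mathbb{R}} \mathbb{C} \simeq V \times V^{\sigma}, \qquad v \otimes z \mapsto (z v, \bar{z} v),$$
where $V^{\sigma}$ denotes $V$ with the scalar action twisted by $\sigma$ (i.e., $z \cdot v = \bar{z} v$); both sides have real dimension $2 \dim_{\mathbb{R}} V$.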
Specific descents
fpqc descent
Étale descent
Étale descent is a consequence of faithfully flat descent.
Galois descent
See also
Amitsur complex
Hilbert scheme
Quot scheme
Notes
References
(a detailed discussion of a 2-category)
Algebraic geometry | Faithfully flat descent | [
"Mathematics"
] | 1,045 | [
"Fields of abstract algebra",
"Algebraic geometry"
] |
46,613,633 | https://en.wikipedia.org/wiki/CoGeNT | The CoGeNT experiment has searched for dark matter. It uses a single germanium crystal (~100 grams) as a cryogenic detector for WIMP particles. CoGeNT has operated in the Soudan Underground Laboratory since 2009.
Results
Their first announcement was an excess of events recorded after 56 days. Juan Collar, who presented the results to a conference at the University of California, was quoted: "If it's real, we're looking at a very beautiful dark-matter signal".
This signal conflicts with other searches that have failed to find any evidence, such as XENON and LUX, but appears to confirm results from DAMA.
They observed an annual modulation in the event rate that could indicate light dark matter.
The annual modulation has continued to be seen in 3 years of data.
However more recent work has shown that the excess of events attributed to a tentative dark matter signal was in fact due to an underestimated background from surface events. After accounting for this background there is no evidence for a signal in data from the CoGeNT experiment and no tension with null results from other experiments.
References
External links
CoGeNT Dark Matter Experiment website
Experiments for dark matter search | CoGeNT | [
"Physics"
] | 237 | [
"Dark matter",
"Experiments for dark matter search",
"Unsolved problems in physics"
] |
52,085,407 | https://en.wikipedia.org/wiki/Einstein%27s%20static%20universe | Einstein's static universe, aka the Einstein universe or the Einstein static eternal universe, is a relativistic model of the universe proposed by Albert Einstein in 1917.
Shortly after completing the general theory of relativity, Einstein applied his new theory of gravity to the universe as a whole. Assuming a universe that was static in time, and possessed of a uniform distribution of matter on the largest scales, Einstein was led to a finite, static universe of spherical spatial curvature.
To achieve a consistent solution to the Einstein field equations for the case of a static universe with a non-zero density of matter, Einstein found it necessary to introduce a new term to the field equations, the cosmological constant. In the resulting model, the radius R and density of matter ρ of the universe were related to the cosmological constant Λ according to Λ = 1/R² = κρ/2, where κ is the Einstein gravitational constant.
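This relation can be recovered from a standard derivation (not spelled out in the source) using the Friedmann equations for pressureless dust in a closed universe, in units with $c = 1$ and $\kappa = 8\pi G$:
$$\frac{\ddot a}{a} = -\frac{4\pi G}{3}\rho + \frac{\Lambda}{3}, \qquad \left(\frac{\dot a}{a}\right)^{2} = \frac{8\pi G}{3}\rho - \frac{1}{a^{2}} + \frac{\Lambda}{3}.$$
Setting $\dot a = \ddot a = 0$ and $a = R$ for a static universe gives $\Lambda = 4\pi G \rho = \kappa\rho/2$ from the first equation, and substituting into the second yields $1/R^{2} = \frac{8\pi G}{3}\rho + \frac{\Lambda}{3} = 4\pi G \rho = \Lambda$.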
Following the discovery by Edwin Hubble of a linear relation between the redshifts of the galaxies and their distance in 1929, Einstein abandoned his static model of the universe and proposed expanding models such as the Friedmann–Einstein universe and the Einstein–de Sitter universe. In both cases, he set the cosmological constant to zero, declaring it "no longer necessary ... and theoretically unsatisfactory". In many Einstein biographies, it is claimed that Einstein referred to the cosmological constant in later years as his "biggest blunder". The astrophysicist Mario Livio has recently cast doubt on this claim, suggesting that it may be exaggerated.
References
General relativity
Albert Einstein | Einstein's static universe | [
"Physics"
] | 330 | [
"General relativity",
"Relativity stubs",
"Theory of relativity"
] |
52,085,968 | https://en.wikipedia.org/wiki/C8H5NO3 | {{DISPLAYTITLE:C8H5NO3}}
The molecular formula C8H5NO3 (molar mass: 163.13 g/mol, exact mass: 163.0269 u) may refer to:
N-Hydroxyphthalimide
Isatoic anhydride
Molecular formulas | C8H5NO3 | [
"Physics",
"Chemistry"
] | 69 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
52,093,617 | https://en.wikipedia.org/wiki/Bregans%20lens | The Bregans lens is a large lens, mounted in a gilt wooden frame, with a focal length of 1,580 mm. Another smaller lens acts as a condenser and can be positioned by means of a sliding mechanism along a supporting track. Beyond the condenser is a small adjustable metal plate for holding specimens. The wooden mount on a small table fitted with castors, dated 1767, is the work of the Florentine artisan Francesco Spighi; the metal parts are signed by Gaspero Mazzeranghi. The maker of the lens, Benedetto Bregans, donated it to Cosimo III de' Medici in 1697. The instrument was used some time later by Giuseppe Averani and Cipriano Targioni for experiments on the combustion of diamonds and other precious stones. In 1814, Humphry Davy — on a visit to Florence with Michael Faraday — used the lens to repeat Averani's experiments. In 1860, Giovanni Battista Donati mounted the lens on a tube (inv. 582) for use as a starlight condenser to observe the absorption bands of stellar spectra.
References
Microscopes | Bregans lens | [
"Chemistry",
"Technology",
"Engineering"
] | 238 | [
"Microscopes",
"Measuring instruments",
"Microscopy"
] |
52,094,170 | https://en.wikipedia.org/wiki/Elastic%20and%20inelastic%20collisions%20apparatus | The elastic and inelastic collisions apparatus is a large apparatus to study elastic and inelastic collisions.
It consists of a large frame carrying two beams from which two rows of six and two wooden balls, respectively, are suspended from pairs of strings. The instrument was often used with two elastic balls (of ivory) or inelastic balls (of wet clay), of equal or different mass. By changing the parameters of the experiments such as height of fall and mass, one could conduct a systematic investigation of collision-related phenomena. For example, when the row of balls is struck by one of the outermost balls, the row of balls remains motionless and the impulse is fully transmitted to the ball at the opposite end, which rebounds. As it falls back, the cycle continues in the opposite direction. This apparatus was illustrated by Jean-Antoine Nollet in Leçons de physique expérimentale (Paris, 1743–1748). In his description, Nollet claims to have merely altered and developed a model previously used by Edme Mariotte. The most sophisticated devices for studying elastic and inelastic collisions were built by Willem Jacob 's Gravesande and Petrus van Musschenbroek.
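The two ball materials map onto the two textbook collision regimes; a small sketch follows, with masses and speeds as arbitrary illustrative values.

```python
# Head-on 1-D collision outcomes from conservation of momentum (plus kinetic
# energy in the elastic case). Masses and speeds are illustrative values.

def collide(m1: float, u1: float, m2: float, u2: float, elastic: bool = True):
    """Return (v1, v2) after a head-on collision of two balls."""
    if elastic:  # ivory balls: momentum and kinetic energy both conserved
        v1 = ((m1 - m2) * u1 + 2 * m2 * u2) / (m1 + m2)
        v2 = ((m2 - m1) * u2 + 2 * m1 * u1) / (m1 + m2)
        return v1, v2
    v = (m1 * u1 + m2 * u2) / (m1 + m2)  # wet clay: balls move off together
    return v, v

print(collide(1.0, 1.0, 1.0, 0.0, elastic=True))   # (0.0, 1.0): full transfer
print(collide(1.0, 1.0, 1.0, 0.0, elastic=False))  # (0.5, 0.5): stick together
```

For equal masses the elastic case transfers the impulse completely, which is exactly the behavior seen when the row of suspended balls is struck from one end.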
The instrument is held in the Lorraine collections of the Museo Galileo in Florence.
References
Bibliography
External links
Machines | Elastic and inelastic collisions apparatus | [
"Physics",
"Technology",
"Engineering"
] | 266 | [
"Physical systems",
"Machines",
"Mechanical engineering"
] |
52,094,190 | https://en.wikipedia.org/wiki/Mechanical%20paradox | The mechanical paradox is an apparatus for studying physical paradoxes. It consists of a trapezoidal veneered wooden frame with two brass rails, and a pair of brass cones joined at their bases by a wooden disk which rests on the rails. When the double cone is placed at the low end of the frame, it automatically starts to roll upward, giving the impression of escaping the universal law of the gravitational force.
The device's unintuitive behavior is due to the fact that the natural motion of bodies depends on that of their center of gravity, which has a natural tendency to descend. Since the rails diverge, the center of gravity of the double cone—when placed on the rotation axis at its maximum diameter—does not rise when the entire body seems to be moving upward; rather, the center is shifting downward. In its travel, the resting-points of the double cone on the rails converge toward its two apexes. As a result, the distance of the center of gravity from the horizontal plane decreases as the double cone rises.
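This geometric argument can be sketched numerically. In a small-angle idealization with rail slope α, rail-divergence half-angle β and cone half-angle γ, the centre of gravity sits at roughly (track rise) + (contact radius), and descends whenever tan α < tan β · tan γ, the classical condition for the uphill roll. All dimensions below are illustrative assumptions.

```python
import math

# Idealized small-angle model of the double cone on diverging, rising rails:
# CG height ~ (track rise) + (contact radius), with the contact radius
# shrinking as the rails spread. All dimensions are assumed for illustration.

ALPHA = math.radians(3)    # upward slope of the rails
BETA = math.radians(10)    # half-angle of rail divergence (plan view)
GAMMA = math.radians(25)   # half-angle of each cone
R_MAX = 0.05               # cone radius at the joined bases, m
W0 = 0.01                  # rail half-separation at the low end, m

def cg_height_m(x: float) -> float:
    """Approximate CG height after rolling a horizontal distance x uphill."""
    half_sep = W0 + x * math.tan(BETA)           # rails spread with distance
    contact_radius = R_MAX - half_sep * math.tan(GAMMA)
    return x * math.tan(ALPHA) + contact_radius

for x in (0.0, 0.1, 0.2, 0.3):
    print(f"x = {x:.1f} m -> CG height ~ {cg_height_m(x) * 100:.2f} cm")
# The heights decrease monotonically even though the cone moves "uphill".
```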
An example of the instrument is held in the Lorraine collections of the Museo Galileo in Florence, Italy.
It was first published by William Leybourne in 1694.
Bibliography
External links
Machines
Physical paradoxes | Mechanical paradox | [
"Physics",
"Technology",
"Engineering"
] | 253 | [
"Physical systems",
"Machines",
"Mechanical engineering"
] |
52,095,250 | https://en.wikipedia.org/wiki/Stanley%20decomposition | In commutative algebra, a Stanley decomposition is a way of writing a ring in terms of polynomial subrings. They were introduced by .
Definition
Suppose that a ring $R$ is a quotient of a polynomial ring $k[x_1, \dots]$ over a field by some ideal. A Stanley decomposition of $R$ is a representation of $R$ as a direct sum (of vector spaces)
$$R = \bigoplus_\alpha x_\alpha k[X_\alpha]$$
where each $x_\alpha$ is a monomial and each $X_\alpha$ is a finite subset of the generators.
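For instance (a standard example, not taken from the source above): the ring $k[x, y]/(xy)$ has vector-space basis $\{x^i : i \ge 0\} \cup \{y^j : j \ge 1\}$, which yields the Stanley decomposition
$$k[x, y]/(xy) = 1 \cdot k[x] \oplus y \cdot k[y].$$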
See also
Rees decomposition
Hironaka decomposition
References
Commutative algebra | Stanley decomposition | [
"Mathematics"
] | 115 | [
"Fields of abstract algebra",
"Commutative algebra"
] |
52,095,463 | https://en.wikipedia.org/wiki/Rees%20decomposition | In commutative algebra, a Rees decomposition is a way of writing a ring in terms of polynomial subrings. They were introduced by .
Definition
Suppose that a ring $R$ is a quotient of a polynomial ring $k[x_1, \dots]$ over a field by some homogeneous ideal. A Rees decomposition of $R$ is a representation of $R$ as a direct sum (of vector spaces)
$$R = \bigoplus_\alpha \eta_\alpha k[\theta_{f_\alpha}, \dots, \theta_d]$$
where each $\eta_\alpha$ is a homogeneous element, the $d$ elements $\theta_i$ are a homogeneous system of parameters for $R$, and
$$\eta_\alpha k[\theta_{f_\alpha+1}, \dots, \theta_d] \subseteq k[\theta_1, \dots, \theta_{f_\alpha}].$$
See also
Stanley decomposition
Hironaka decomposition
References
Commutative algebra | Rees decomposition | [
"Mathematics"
] | 146 | [
"Fields of abstract algebra",
"Commutative algebra"
] |
37,802,425 | https://en.wikipedia.org/wiki/North%20Pacific%20Intermediate%20Water | North Pacific Intermediate Water (NPIW) is cold, moderately low salinity water mass that originates in the mixed water region (MWR) between the Kuroshio and Oyashio waters just east of Japan. Examination of NPIW at stations just east of the MWR indicates that the mixed waters in the MWR are the origin of the newest NPIW. The new NPIW ‘‘formed’’ in the MWR is a mixture of relatively fresh, recently ventilated Oyashio water coming from the subpolar gyre, and more saline, older Kuroshio water. The mixing process results in a salinity minimum and also in rejuvenation of the NPIW layer in the subtropical gyre due to the Oyashio input.
Properties and Formation
The North Pacific Intermediate Water is a well-defined salinity minimum, located in the North Pacific subtropical gyre. It occurs at a depth range of 300-800 meters and is confined to a narrow density range. NPIW forms when low-salinity, high-oxygen subpolar water is overrun by warm, saline subtropical waters. The NPIW occurs where the cold, low-saline Oyashio and warm, saline Kuroshio currents meet. The Oyashio water is freshened in the Okhotsk Sea and transported to the North Pacific subtropical gyre by thermohaline circulation. Some studies have found that Gulf of Alaska Intermediate Water additionally contributes to the subpolar waters on the eastern side of the gyre.
The density of the NPIW in the MWR, about 26.6 – 26.9 σθ, is slightly higher than the apparent later winter surface density of the subpolar water. The low salinity and high oxygen at the density of the NPIW are attained in the subpolar gyre, through vertical diffusion in the open North Pacific and direct ventilation in the Okhotsk Sea as a result of sea ice formation.
According to recent studies, the NPIW is formed along three paths between the Kuroshio Extension and the subarctic front. The formation time scale is estimated to be 1 – 1.5 years based on observations by floating instruments and the residence time of NPIW is estimated to be about 20 years. Since the formation time is much less than the residence time, the properties of the NPIW may change on an annual basis as a result of the large changes in the Oyashio transport.
References
Shimizu, Y., T. Iwao, I. Yasuda, S. Ito, T. Watanabe, K. Uehara, N. Shikama, and T. Nakano, 2004: Formation process of North Pacific intermediate water revealed by profiling floats set to drift on 26.7 σθ isopycnal surface. J. Oceanogr., 60, 453–462.
Talley, L., Y. Nagata, M. Fujimura, T. Kono, D. Inagake, M. Hirai, and K. Okuda, 1995: North Pacific intermediate water in the Kuroshio/Oyashio mixed water region. J. Phys. Oceanogr.,25, 475–501.
Oceanography
Water masses
Pacific Ocean | North Pacific Intermediate Water | [
"Physics",
"Chemistry",
"Environmental_science"
] | 696 | [
"Hydrology",
"Applied and interdisciplinary physics",
"Oceanography",
"Chemical oceanography",
"Water masses"
] |
37,802,754 | https://en.wikipedia.org/wiki/Non-proteinogenic%20amino%20acids | In biochemistry, non-coded or non-proteinogenic amino acids are distinct from the 22 proteinogenic amino acids (21 in eukaryotes), which are naturally encoded in the genome of organisms for the assembly of proteins. However, over 140 non-proteinogenic amino acids occur naturally in proteins and thousands more may occur in nature or be synthesized in the laboratory. Chemically synthesized amino acids can be called unnatural amino acids. Unnatural amino acids can be synthetically prepared from their native analogs via modifications such as amine alkylation, side chain substitution, structural bond extension cyclization, and isosteric replacements within the amino acid backbone. Many non-proteinogenic amino acids are important:
intermediates in biosynthesis,
in post-translational formation of proteins,
in a physiological role (e.g. components of bacterial cell walls, neurotransmitters and toxins),
natural or man-made pharmacological compounds,
present in meteorites or used in prebiotic experiments (such as the Miller–Urey experiment),
might be important neurotransmitters, such as γ-aminobutyric acid, and
can play a crucial role in cellular bioenergetics, such as creatine.
Definition by negation
Technically, any organic compound with an amine (–NH2) and a carboxylic acid (–COOH) functional group is an amino acid. The proteinogenic amino acids are a small subset of this group that possess a central carbon atom (α- or 2-) bearing an amino group, a carboxyl group, a side chain and an α-hydrogen, in the levo conformation, with the exception of glycine, which is achiral, and proline, whose amine group is a secondary amine and which is consequently frequently referred to as an imino acid for traditional reasons, albeit it is not an imine.
The genetic code encodes 20 standard amino acids for incorporation into proteins during translation. However, there are two extra proteinogenic amino acids: selenocysteine and pyrrolysine. These non-standard amino acids do not have a dedicated codon, but are added in place of a stop codon when a specific sequence is present, UGA codon and SECIS element for selenocysteine, UAG PYLIS downstream sequence for pyrrolysine. All other amino acids are termed "non-proteinogenic".
There are various groups of amino acids:
20 standard amino acids
22 proteinogenic amino acids
over 80 amino acids created abiotically in high concentrations
about 900 are produced by natural pathways
over 118 engineered amino acids have been placed into protein
These groups overlap, but are not identical. All 22 proteinogenic amino acids are biosynthesised by organisms and some, but not all, of them also are abiotic (found in prebiotic experiments and meteorites). Some natural amino acids, such as norleucine, are misincorporated translationally into proteins due to infidelity of the protein-synthesis process. Many amino acids, such as ornithine, are metabolic intermediates produced biosynthetically, but not incorporated translationally into proteins. Post-translational modification of amino acid residues in proteins leads to the formation of many proteinaceous, but non-proteinogenic, amino acids. Other amino acids are solely found in abiotic mixes (e.g. α-methylnorvaline). Over 30 unnatural amino acids have been inserted translationally into protein in engineered systems, yet are not biosynthetic.
Nomenclature
In addition to the IUPAC numbering system to differentiate the various carbons in an organic molecule, by sequentially assigning a number to each carbon, including those forming a carboxylic group, the carbons along the side-chain of amino acids can also be labelled with Greek letters, where the α-carbon is the central chiral carbon possessing a carboxyl group, a side chain and, in α-amino acids, an amino group – the carbon in carboxylic groups is not counted. (Consequently, the IUPAC names of many non-proteinogenic α-amino acids start with 2-amino- and end in -ic acid.)
Natural non-L-α-amino acids
Most natural amino acids are α-amino acids in the L configuration, but some exceptions exist.
Non-alpha
Some non-α-amino acids exist in organisms. In these structures, the amine group is displaced further from the carboxylic acid end of the amino acid molecule. Thus a β-amino acid has the amine group bonded to the second carbon away, and a γ-amino acid has it on the third. Examples include β-alanine, GABA, and δ-aminolevulinic acid.
The reason why α-amino acids are used in proteins has been linked to their frequency in meteorites and prebiotic experiments. An initial speculation on the deleterious properties of β-amino acids in terms of secondary structure turned out to be incorrect.
D-amino acids
Some amino acids possess the opposite absolute chirality; such chemicals are not available from the normal ribosomal transcription and translation machinery. Most bacterial cell walls are formed by peptidoglycan, a polymer composed of amino sugars crosslinked with short oligopeptides bridged between each other. The oligopeptide is non-ribosomally synthesised and contains several peculiarities, including D-amino acids, generally D-alanine and D-glutamate. A further peculiarity is that the former is racemised by PLP-binding enzymes (encoded by alr or the homologue dadX), whereas the latter is racemised by a cofactor-independent enzyme (murI). Some variants exist: in Thermotoga spp. D-lysine is present, and in certain vancomycin-resistant bacteria D-serine is present (vanT gene).
Without a hydrogen on the α-carbon
All proteinogenic amino acids have at least one hydrogen on the α-carbon. Glycine has two hydrogens, and all others have one hydrogen and one side-chain. Replacement of the remaining hydrogen with a larger substituent, such as a methyl group, distorts the protein backbone.
In some fungi α-aminoisobutyric acid is produced as a precursor to peptides, some of which exhibit antibiotic properties. This compound is similar to alanine, but possesses an additional methyl group on the α-carbon instead of a hydrogen. It is therefore achiral. Another compound similar to alanine without an α-hydrogen is dehydroalanine, which possesses a methylene sidechain. It is one of several naturally occurring dehydroamino acids.
Twin amino acid stereocentres
A subset of L-α-amino acids are ambiguous as to which of two ends is the α-carbon. In proteins a cysteine residue can form a disulfide bond with another cysteine residue, thus crosslinking the protein. Two crosslinked cysteines form a cystine molecule. Cysteine and methionine are generally produced by direct sulfurylation, but in some species they can be produced by transsulfuration, where the activated homoserine or serine is fused to a cysteine or homocysteine forming cystathionine. A similar compound is lanthionine, which can be seen as two alanine molecules joined via a thioether bond and is found in various organisms. Similarly, djenkolic acid, a plant toxin from jengkol beans, is composed of two cysteines connected by a methylene group. Diaminopimelic acid is both used as a bridge in peptidoglycan and is used a precursor to lysine (via its decarboxylation).
Prebiotic amino acids and alternative biochemistries
In meteorites and in prebiotic experiments (e.g. Miller–Urey experiment) many more amino acids than the twenty standard amino acids are found, several of which are at higher concentrations than the standard ones. It has been conjectured that if amino acid based life were to arise elsewhere in the universe, no more than 75% of the amino acids would be in common. The most notable anomaly is the lack of aminobutyric acid.
Straight side chain
The genetic code has been described as a frozen accident, and the reason why there is only one standard amino acid with a straight chain, alanine, could simply be redundancy with valine, leucine and isoleucine. However, straight-chained amino acids are reported to form much more stable alpha helices.
Chalcogen
Serine, homoserine, O-methylhomoserine and O-ethylhomoserine possess a hydroxymethyl, hydroxyethyl, O-methylhydroxymethyl and O-methylhydroxyethyl side chain; whereas cysteine, homocysteine, methionine and ethionine possess the thiol equivalents. The selenol equivalents are selenocysteine, selenohomocysteine, selenomethionine and selenoethionine. Amino acids with the next chalcogen down are also found in nature: several species such as Aspergillus fumigatus, Aspergillus terreus, and Penicillium chrysogenum in the absence of sulfur are able to produce and incorporate into protein tellurocysteine and telluromethionine.
Expanded genetic code
Roles
In cells, especially autotrophs, several non-proteinogenic amino acids are found as metabolic intermediates. However, despite the catalytic flexibility of PLP-binding enzymes, many amino acids are synthesised as keto acids (such as 4-methyl-2-oxopentanoate to leucine) and aminated in the last step, thus keeping the number of non-proteinogenic amino acid intermediates fairly low.
Ornithine and citrulline occur in the urea cycle, part of amino acid catabolism (see below).
In addition to primary metabolism, several non-proteinogenic amino acids are precursors or the final production in secondary metabolism to make small compounds or non-ribosomal peptides (such as some toxins).
Post-translationally incorporated into protein
Despite not being encoded by the genetic code as proteinogenic amino acids, some non-standard amino acids are nevertheless found in proteins. These are formed by post-translational modification of the side chains of standard amino acids present in the target protein. These modifications are often essential for the function or regulation of a protein; for example, in γ-carboxyglutamate the carboxylation of glutamate allows for better binding of calcium cations, and in hydroxyproline the hydroxylation of proline is critical for maintaining connective tissues. Another example is the formation of hypusine in the translation initiation factor EIF5A, through modification of a lysine residue. Such modifications can also determine the localization of the protein, for example, the addition of long hydrophobic groups can cause a protein to bind to a phospholipid membrane.
There is some preliminary evidence that aminomalonic acid may be present, possibly by misincorporation, in protein.
Toxic analogues
Several non-proteinogenic amino acids are toxic due to their ability to mimic certain properties of proteinogenic amino acids, such as thialysine. Some non-proteinogenic amino acids are neurotoxic by mimicking amino acids used as neurotransmitters (that is, not for protein biosynthesis), including quisqualic acid, canavanine and azetidine-2-carboxylic acid.
Cephalosporin C has an α-aminoadipic acid (homoglutamate) backbone that is amidated with a cephalosporin moiety. Penicillamine is a therapeutic amino acid, whose mode of action is unknown.
Naturally-occurring cyanotoxins can also include non-proteinogenic amino acids. Microcystin and nodularin, for example, are both derived from ADDA, a β-amino acid.
Not amino acids
Taurine is an amino sulfonic acid and not an amino carboxylic acid; however, it is occasionally considered as such because the amounts required to suppress the auxotrophy in certain organisms (such as cats) are closer to those of "essential amino acids" (amino acid auxotrophy) than of vitamins (cofactor auxotrophy).
The osmolytes, sarcosine and glycine betaine are derived from amino acids, but have a secondary and quaternary amine respectively.
See also
Dicarboxylic acid
Notes
References
Amino acids | Non-proteinogenic amino acids | [
"Chemistry"
] | 2,685 | [
"Amino acids",
"Biomolecules by chemical classification"
] |
37,804,480 | https://en.wikipedia.org/wiki/DNA%20polymerase%20IV | DNA polymerase IV is a prokaryotic polymerase that is involved in mutagenesis and is encoded by the dinB gene. It exhibits no 3′→5′ exonuclease (proofreading) activity and hence is error prone. In E. coli, DNA polymerase IV (Pol 4) is involved in non-targeted mutagenesis. Pol IV is a Family Y polymerase expressed by the dinB gene that is switched on via SOS induction caused by stalled polymerases at the replication fork. During SOS induction, Pol IV production is increased tenfold and one of the functions during this time is to interfere with Pol III holoenzyme processivity. This creates a checkpoint, stops replication, and allows time to repair DNA lesions via the appropriate repair pathway. Another function of Pol IV is to perform translesion synthesis at the stalled replication fork like, for example, bypassing N2-deoxyguanine adducts at a faster rate than transversing undamaged DNA. Cells lacking dinB gene have a higher rate of mutagenesis caused by DNA damaging agents.
Replication bypass of 8-oxoguanine
Reactive oxygen species are produced continuously during normal metabolism and these damage DNA. DNA polymerase IV can catalyze translesion synthesis across a variety of DNA damages including 8-oxoguanine, a major oxidative damage with high mutagenic potential. Upon chromosome duplication by replicative polymerases, unrepaired 8-oxoguanine tends to mispair with A, so that during the next round of replication a G:C to T:A transversion mutation is produced (G:C → 8-oxoG:C → 8-oxoG:A → T:A). However, when DNA polymerase IV intervenes to bypass the damage, it preferentially incorporates the correct nucleotide CTP opposite 8-oxoguanine with high efficiency, thus avoiding potential mutations (G:C → 8-oxoG:C → 8-oxoG:C → GC).
References
DNA replication
EC 2.7.7 | DNA polymerase IV | [
"Biology"
] | 444 | [
"Genetics techniques",
"DNA replication",
"Molecular genetics"
] |
37,804,627 | https://en.wikipedia.org/wiki/DNA%20polymerase%20V | DNA Polymerase V (Pol V) is a polymerase enzyme involved in DNA repair mechanisms in bacteria, such as Escherichia coli. It is composed of a UmuD' homodimer and a UmuC monomer, forming the UmuD'2C protein complex. It is part of the Y-family of DNA Polymerases, which are capable of performing DNA translesion synthesis (TLS). Translesion polymerases bypass DNA damage lesions during DNA replication - if a lesion is not repaired or bypassed the replication fork can stall and lead to cell death. However, Y polymerases have low sequence fidelity during replication (prone to add wrong nucleotides). When the UmuC and UmuD' proteins were initially discovered in E. coli, they were thought to be agents that inhibit faithful DNA replication and caused DNA synthesis to have high mutation rates after exposure to UV-light. The polymerase function of Pol V was not discovered until the late 1990s when UmuC was successfully extracted, consequent experiments unequivocally proved UmuD'2C is a polymerase. This finding lead to the detection of many Pol V orthologs and the discovery of the Y-family of polymerases.
Function
Pol V functions as a TLS (translesion DNA synthesis) polymerase in E. coli as part of the SOS response to DNA damage. When DNA is damaged, regular DNA synthesis polymerases are unable to add dNTPs onto the newly synthesized strand. DNA Polymerase III (Pol III) is the regular DNA polymerase in E. coli. As Pol III stalls, unable to add nucleotides to the nascent DNA strand, the cell is at risk of replication fork collapse and cell death. Pol V's TLS function depends on association with other elements of the SOS response; most importantly, Pol V translesion activity is tightly dependent on the formation of RecA nucleoprotein filaments. Pol V can use TLS on lesions that block replication or on miscoding lesions, which modify bases and lead to incorrect base pairing. However, it is unable to synthesize through 5′ → 3′ backbone nicks. Pol V also lacks exonuclease activity, rendering it unable to proofread its synthesis and making it error prone.
SOS Response
SOS response in E. coli attempts to alleviate the effect of a damaging stress in the cell. The role of Pol V in SOS response triggered by UV-radiation is described as follows:
Pol III stalls at lesion site.
DNA replication helicase DnaB continues to expand the replication fork creating single stranded DNA (ssDNA) segments ahead of from the lesion.
ssDNA binding proteins (SSBs) stabilize ssDNA.
RecA recruited and loaded onto ssDNA by RecFOR replacing SSBs. Formation of RecA nucleoprotein filament (RecA*).
RecA functions through mediator proteins to activate Pol V (see Regulation).
Pol V accesses 3'-OH of nascent DNA strand and extends strand past the lesion site.
Pol III resumes elongation.
Regulation
Pol V is only expressed in the cell during the SOS response. It is very tightly regulated at different levels of protein expression and under different mechanisms, to avoid its activity unless absolutely necessary for survival of the cell. Pol V's strict regulation stems from its poor replication fidelity: Pol V is highly mutagenic, and it is used as a last resort among DNA repair mechanisms. As such, the expression of the UmuD'2C complex takes 45–50 minutes after exposure to UV radiation.
Transcriptional regulation
Transcription of the SOS response genes is negatively regulated by the LexA repressor. LexA binds to the promoter of the UmuDC operon and inhibits gene transcription. DNA damage in the cell leads to the formation of RecA*. RecA* interacts with LexA and stimulates its proteolytic activity, which leads to the autocleavage of the repressor freeing the operon for transcription. The UmuDC operon is transcribed and translated into UmuC and UmuD.
Post-translational regulation
The formation of the UmuD'2C complex is limited by the formation of UmuD' from UmuD. UmuD consists of a polypeptide of 139 amino acid residues that forms a stable tertiary structure, but it needs to be post-translationally modified to reach its active form. UmuD has self-proteolytic activity that is activated by RecA; the cleavage removes 24 amino acids at the N-terminus, turning UmuD into UmuD'. UmuD' can form a homodimer and associate with UmuC to form the active UmuD'2C complex.
Functional regulation
UmuD'2C complex is inactive unless associated with RecA*. Pol V directly interacts with RecA* at the 3' tip of the nucleoprotein filament; this is the site of the nascent DNA strand where Pol V restarts DNA synthesis. Additionally, it has been shown that the REV1/REV3L/REV7 pathway is necessary for the TLS synthesis mediated by DNA polymerase V.
References
External links
DNA replication
EC 2.7.7
EC 3.4.21 | DNA polymerase V | [
"Biology"
] | 1,093 | [
"Genetics techniques",
"DNA replication",
"Molecular genetics"
] |
37,805,597 | https://en.wikipedia.org/wiki/Speciality%20chemicals | Specialty chemicals (also called specialties or effect chemicals) are particular chemical products which provide a wide variety of effects on which many other industry sectors rely. Some of the categories of speciality chemicals are adhesives, agrichemicals, cleaning materials, colors, cosmetic additives, construction chemicals, elastomers, flavors, food additives, fragrances, industrial gases, lubricants, paints, polymers, surfactants, and textile auxiliaries. Other industrial sectors such as automotive, aerospace, food, cosmetics, agriculture, manufacturing, and textiles are highly dependent on such products.
Speciality chemicals are materials used on the basis of their performance or function. Consequently, in addition to "effect" chemicals, they are sometimes referred to as "performance" chemicals or "formulation" chemicals. They can be unique molecules or mixtures of molecules known as formulations. The physical and chemical characteristics of the single molecules or of the formulated mixtures, and the composition of the mixtures, influence the performance of the end product. In commercial applications, the companies providing these products more often than not supply targeted customer service and innovative, individual technical solutions for their customers. This is a differentiating component of the service provided by speciality chemical producers when they are compared to the other sub-sectors of the chemical industry, such as fine chemicals, commodity chemicals, petrochemicals and pharmaceuticals.
In the USA the speciality chemical manufacturers are members of the Society of Chemical Manufacturers and Affiliates (SOCMA). In the United Kingdom such companies are members of the British Association for Chemical Specialties (BACS). SOCMA state that "Specialty chemicals differ from commodity chemicals in that each one may have only one or two uses, while commodities may have dozens of different applications for each chemical. While commodity chemicals make up most of the production volume (by weight) in the global marketplace, specialty chemicals make up most of the diversity (number of different chemicals) in commerce at any given time."
Manufacturing speciality chemicals
The specialty chemicals industry is a sector within the broader chemical industry that produces a diverse range of high-value chemicals and materials used in various applications. These chemicals, also known as performance or effect chemicals, are formulated to provide specific functions, enhance product performance, or meet specific customer requirements. Specialty chemicals are used in a wide array of industries, including automotive, aerospace, agriculture, construction, electronics, food and beverage, personal care, pharmaceutical, and textile.
Specialty chemicals are usually manufactured in batch chemical plants using batch processing techniques. A batch process is one in which a defined quantity of product is made from a fixed input of raw materials during a measured period of time. The batch process most often consists of introducing accurately measured amounts of starting materials into a vessel, followed by a series of processes involving mixing, heating, cooling, further chemical reactions, distillation, crystallization, separation, drying, packaging etc., taking place at predetermined and scheduled intervals. The manufacturing processes are supported by activities such as quality testing, storage, warehousing and logistics of the products, as well as the management of by-products and waste streams by recycling, treatment and disposal. For the next batch the equipment may be cleaned and the above processes repeated.
Most specialty chemicals are organic chemicals that are used in a wide range of everyday products used by consumers and industry. It is a consumer-driven sector, and as such the specialty chemical industry has to be innovative and entrepreneurial. In contrast to the production of commodity chemicals, which are usually made in large-scale single-product manufacturing units to achieve economies of scale, specialty manufacturing units are required to be flexible because the products, raw materials, processes, operating conditions and equipment mix may change on a regular basis to respond to the needs of customers.
In the United Kingdom there are many speciality chemical companies that are members of the British Association for Chemical Specialties (BACS) and also the Chemical Industries Association (CIA). Most of these companies have their manufacturing units based in the North of England; in the Northeast of England, for example, many are members of the Northeast of England Process Industry Cluster (NEPIC).
The global specialty chemical market
The specialty chemical market was valued at US$627.7 billion in 2021 and is expected to grow to $886.2 billion by 2030, a CAGR of 4.3% during 2022–2030. These speciality products are marketed as pesticides, speciality polymers, electronic chemicals, surfactants, construction chemicals, industrial cleaners, flavors and fragrances, speciality coatings, printing inks, water-soluble polymers, food additives, paper chemicals, oil field chemicals, plastic adhesives, adhesives and sealants, cosmetic chemicals, water management chemicals, catalysts and textile chemicals.
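As a quick sanity check on the growth figure, the compound annual growth rate can be recomputed from the two endpoint values. The snippet below is an illustrative sketch only; the 8-year compounding convention for 2022–2030 is an assumption, not taken from the cited market report:

```python
# CAGR = (end_value / start_value) ** (1 / periods) - 1
start, end = 627.7, 886.2        # market size in US$ billions (2021 and 2030)
periods = 8                      # assumed annual compounding periods, 2022-2030
cagr = (end / start) ** (1 / periods) - 1
print(f"CAGR ~ {cagr:.1%}")      # prints ~4.4%, close to the quoted 4.3%
```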
The world's top five specialty chemicals segments in 2012 were specialty polymers, industrial and institutional (I&I) cleaners, construction chemicals, electronic chemicals, and flavors and fragrances. These segments had a market share of about 36%. The ten largest segments accounted for 62% of total annual specialty chemicals sales.
Specialty chemical companies
The specialty chemical market is complex and each specialty chemicals business segment comprises many sub-segments, each with individualized product, market and competitive profiles. This has given rise to a wide range of business needs and opportunities; consequently there are a large number of speciality chemical companies around the world. Many of these companies are SMEs with their own niche products and sometimes a technology focus. The common stock of over 400 speciality chemical companies from around the world is identified by Bloomberg, a provider of global business and financial information. There are many more privately owned speciality chemical companies that are not quoted on the global stock markets.
In 2010, the 10 largest European speciality chemical companies were BASF, AkzoNobel, Clariant, Evonik, Cognis, Kemira, Lanxess, Rhodia, Wacker and Croda. By definition, speciality chemicals are produced in relatively small quantities but they represent 28 per cent of EU chemical sales.
The 10 largest USA speciality chemical companies are The Lubrizol Corporation, Huntsman, Ashland, Chemtura, Rockwood, Albemarle, Cabot, W. R. Grace, Ferro Corporation, and Cytec Industries.
The emergence of India as a manufacturer and supplier of speciality chemicals has had a major impact on the global speciality chemical industry. In India, many speciality chemical companies are members of national organisations such as the Indian Chemical Council (ICC) and the Indian Speciality Chemical Manufacturers' Association (ISCMA). The wide capability of these companies extends into all sectors and sub-sectors of the speciality chemical market.
The United Kingdom has 1300 speciality chemical companies, which have an annual turnover of £11.2 billion. The products of these UK companies are sold globally and contribute significantly to the UK's export trade. With over £30 billion of exports, the chemical industry is the last remaining net-exporting manufacturing industry in the UK, and speciality chemicals make up a significant proportion of this. The products include dyestuffs, paints, explosives, adhesives, flavors and fragrances, photographic chemicals, unrecorded media and various industrial specialities. Because speciality chemical manufacturers, unlike commodity chemical manufacturers, are less dependent on large-scale infrastructure, speciality chemical companies can be found in almost all regions of the UK. Some 80% of the United Kingdom chemical industry is based in the north of the country, and consequently there are concentrations of speciality chemical companies in Yorkshire and in the membership of the Northeast of England Process Industry Cluster (NEPIC).
Product categories
Specialty chemicals can be broadly categorized into the following segments:
Adhesives and sealants: These chemicals are used to bond materials together and provide leak-proof sealing in various applications, such as automotive, construction, and packaging.
Agrochemicals: Chemicals used in agriculture, including fertilizers, pesticides, herbicides, and fungicides, to enhance crop yield and protect plants from pests and diseases.
Catalysts: Substances that increase the rate of chemical reactions without being consumed in the process, widely used in the production of petrochemicals, polymers, and other chemical processes.
Coatings: Chemical formulations applied on surfaces to provide protection, appearance enhancement, or specific functional properties like corrosion resistance, water repellency, and UV protection.
Electronic chemicals: High-purity chemicals used in the production of electronic components, such as semiconductors, integrated circuits, and printed circuit boards.
Flavors and fragrances: Chemicals that impart taste and aroma to food and beverage products, as well as personal care and household products.
Food additives: Substances added to food products to enhance their taste, texture, appearance, or preservation.
Personal care ingredients: Chemicals used in the formulation of cosmetics, toiletries, and other personal care products, such as emulsifiers, surfactants, and moisturizing agents.
Pharmaceuticals: Active ingredients and excipients used in the production of prescription and over-the-counter drugs.
Polymers: Large molecules made up of repeating units, used in the production of plastics, elastomers, and resins, with applications across various industries.
Surfactants: Compounds that lower surface tension between two liquids or between a liquid and a solid, used in detergents, emulsions, and foaming agents.
Textile chemicals: Chemicals used in the processing and finishing of textiles, including dyes, pigments, and fabric softeners.
See also
Fine chemicals
Commodity chemicals
Petrochemicals
Northeast of England Process Industry Cluster
Chemical industry
Commercial classification of chemicals
References
Products of chemical industry | Speciality chemicals | [
"Chemistry",
"Engineering"
] | 1,981 | [
"Chemical engineering",
"Products of chemical industry"
] |
37,806,131 | https://en.wikipedia.org/wiki/C17H27N3O17P2 | {{DISPLAYTITLE:C17H27N3O17P2}}
The molecular formula C17H27N3O17P2 (molar mass: 607.354 g/mol) may refer to:
Uridine diphosphate N-acetylgalactosamine
Uridine diphosphate N-acetylglucosamine
Molecular formulas | C17H27N3O17P2 | [
"Physics",
"Chemistry"
] | 83 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
37,809,826 | https://en.wikipedia.org/wiki/Interacting%20particle%20system | In probability theory, an interacting particle system (IPS) is a stochastic process on some configuration space given by a site space, a countably-infinite-order graph and a local state space, a compact metric space . More precisely IPS are continuous-time Markov jump processes describing the collective behavior of stochastically interacting components. IPS are the continuous-time analogue of stochastic cellular automata.
Among the main examples are the voter model, the contact process, the asymmetric simple exclusion process (ASEP), the Glauber dynamics and in particular the stochastic Ising model.
IPS are usually defined via their Markov generator, giving rise to a unique Markov process using Markov semigroups and the Hille–Yosida theorem. The generator again is given via so-called transition rates cΛ(η, ξ) > 0, where Λ ⊂ G is a finite set of sites and η, ξ ∈ Ω with ηi = ξi for all i ∉ Λ. The rates describe exponential waiting times of the process to jump from configuration η into configuration ξ. More generally, the transition rates are given in the form of a finite measure cΛ(η, dξ) on S^Λ.
The generator L of an IPS has the following form. First, the domain of L is a subset of the space of "observables", that is, the set of real-valued continuous functions on the configuration space Ω. Then for any observable f in the domain of L, one has
Lf(η) = ΣΛ ∫ cΛ(η, dξ) [f(ξ) − f(η)],
where the sum runs over the finite sets Λ ⊂ G.
For example, for the stochastic Ising model we have S = {−1, +1}, cΛ = 0 unless Λ = {i} for some i ∈ G, and
c{i}(η, dξ) = exp(−β Σj∼i ηj ηi) δη^i(dξ),
where the sum runs over the neighbors j of i and η^i is the configuration equal to η except that it is flipped at site i. β is a new parameter modeling the inverse temperature.
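The following minimal sketch (not part of the article) simulates these exponential-clock dynamics with the Gillespie algorithm; a finite one-dimensional torus stands in for the infinite site space, and the lattice size, inverse temperature and time horizon are illustrative assumptions:

```python
import math
import random

random.seed(0)
N, beta, horizon = 100, 0.5, 50.0                 # sites, inverse temperature, time
eta = [random.choice((-1, 1)) for _ in range(N)]  # random initial configuration

t = 0.0
while t < horizon:
    # flip rates c_{i}(eta) = exp(-beta * eta_i * (eta_{i-1} + eta_{i+1}))
    rates = [math.exp(-beta * eta[i] * (eta[i - 1] + eta[(i + 1) % N]))
             for i in range(N)]
    total = sum(rates)
    t += random.expovariate(total)                # exponential waiting time
    u, acc = random.random() * total, 0.0
    for i, r in enumerate(rates):                 # pick a site with prob. rate/total
        acc += r
        if u <= acc:
            eta[i] = -eta[i]                      # jump to the flipped configuration
            break

print("magnetization per site:", sum(eta) / N)
```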
The Voter model
The voter model (usually in continuous time, but there are discrete versions as well) is a process similar to the contact process. In this process η(x) is taken to represent a voter's attitude on a particular topic. Voters reconsider their opinions at times distributed according to independent exponential random variables (this gives a Poisson process locally – note that there are in general infinitely many voters so no global Poisson process can be used). At times of reconsideration, a voter chooses one neighbor uniformly from amongst all neighbors and takes that neighbor's opinion. One can generalize the process by allowing the picking of neighbors to be something other than uniform.
Discrete time process
In the discrete time voter model in one dimension, ηt(x) represents the state of particle x at time t. Informally each individual is arranged on a line and can "see" other individuals that are within a radius, r. If more than a certain proportion, θ, of these people disagree then the individual changes her attitude, otherwise she keeps it the same. Durrett and Steif (1993) and Steif (1994) show that for large radii there is a critical value θc such that if θ > θc most individuals never change, and if θ < θc then in the limit most sites agree. (Both of these results assume the probability of η0(x) = 1 is one half.)
This process has a natural generalization to more dimensions, some results for this are discussed in Durrett and Steif (1993).
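A minimal simulation of the one-dimensional discrete-time model described above (illustrative only; the radius, threshold proportion and periodic boundary are assumptions for the sketch, not taken from Durrett and Steif):

```python
import random

random.seed(1)
N, r, theta, steps = 200, 3, 0.5, 50      # ring size, radius, threshold, time steps
state = [random.randint(0, 1) for _ in range(N)]

for _ in range(steps):
    nxt = state[:]
    for x in range(N):
        # the 2r individuals that x can "see" (periodic boundary)
        nbrs = [state[(x + d) % N] for d in range(-r, r + 1) if d != 0]
        disagree = sum(1 for s in nbrs if s != state[x]) / len(nbrs)
        if disagree > theta:              # too many neighbours disagree: switch
            nxt[x] = 1 - state[x]
    state = nxt

print("fraction holding opinion 1:", sum(state) / N)
```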
Continuous time process
The continuous time process is similar in that it imagines each individual has a belief at a time and changes it based on the attitudes of its neighbors. The process is described informally by Liggett (1985, 226), "Periodically (i.e., at independent exponential times), an individual reassesses his view in a rather simple way: he chooses a 'friend' at random with certain probabilities and adopts his position." A model was constructed with this interpretation by Holley and Liggett (1975).
This process is equivalent to a process first suggested by Clifford and Sudbury (1973) where animals are in conflict over territory and are equally matched. A site is selected to be invaded by a neighbor at a given time.
References
Lattice models
Self-organization
Complex systems theory
Spatial processes
Markov models | Interacting particle system | [
"Physics",
"Materials_science",
"Mathematics"
] | 778 | [
"Self-organization",
"Lattice models",
"Computational physics",
"Condensed matter physics",
"Statistical mechanics",
"Dynamical systems"
] |
25,115,911 | https://en.wikipedia.org/wiki/Marchenko%E2%80%93Pastur%20distribution | In the mathematical theory of random matrices, the Marchenko–Pastur distribution, or Marchenko–Pastur law, describes the asymptotic behavior of singular values of large rectangular random matrices. The theorem is named after soviet mathematicians Volodymyr Marchenko and Leonid Pastur who proved this result in 1967.
If X denotes an m × n random matrix whose entries are independent identically distributed random variables with mean 0 and variance σ² < ∞, let
Yn = (1/n) X Xᵀ
and let λ1, λ2, ..., λm be the eigenvalues of Yn (viewed as random variables). Finally, consider the random measure
μm(A) = (1/m) #{λj ∈ A}, A ⊂ ℝ,
counting the number of eigenvalues in the subset A of ℝ.
Theorem. Assume that m, n → ∞ so that the ratio m/n → λ ∈ (0, +∞). Then μm → μ (in weak* topology in distribution), where
μ(A) = (1 − 1/λ) 1{0 ∈ A} + ν(A) if λ > 1, and μ(A) = ν(A) if 0 ≤ λ ≤ 1,
and
dν(x) = (1/(2πσ²λ)) · (√((λ₊ − x)(x − λ₋)) / x) · 1{λ₋ ≤ x ≤ λ₊} dx,
with
λ± = σ²(1 ± √λ)².
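The statement can be checked numerically. A small Monte Carlo sketch (assuming NumPy; matrix sizes are illustrative) samples the eigenvalues of Yn = (1/n)XXᵀ for i.i.d. standard Gaussian entries and compares the empirical spectrum with the predicted support [λ₋, λ₊]:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, sigma2 = 500, 2000, 1.0              # ratio lambda = m/n = 0.25
X = rng.standard_normal((m, n))
evals = np.linalg.eigvalsh(X @ X.T / n)    # spectrum of Y_n = (1/n) X X^T

lam = m / n
lo = sigma2 * (1 - lam ** 0.5) ** 2        # lambda_minus
hi = sigma2 * (1 + lam ** 0.5) ** 2        # lambda_plus
print(f"predicted support [{lo:.3f}, {hi:.3f}]")
print(f"observed range    [{evals.min():.3f}, {evals.max():.3f}]")
print(f"mean eigenvalue    {evals.mean():.3f}")  # first moment -> sigma^2
```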
The Marchenko–Pastur law also arises as the free Poisson law in free probability theory, having rate 1/λ and jump size σ²λ.
Moments
For each k ≥ 1, its k-th moment is
∫ x^k dμ(x) = σ^(2k) Σ (from r = 0 to k − 1) (λ^r / (r + 1)) C(k, r) C(k − 1, r),
where C(a, b) denotes the binomial coefficient.
Some transforms of this law
The Stieltjes transform is given by
s(z) = (σ²(1 − λ) − z + √((z − σ²(1 + λ))² − 4λσ⁴)) / (2λσ²z)
for complex numbers z of positive imaginary part, where the complex square root is also taken to have positive imaginary part. It satisfies the quadratic equation
λσ²z s(z)² + (z − σ²(1 − λ)) s(z) + 1 = 0.
The Stieltjes transform can be repackaged in the form of the R-transform, which is given by
R(z) = σ² / (1 − λσ²z).
The S-transform is given by
S(z) = 1 / (σ²(1 + λz)).
The η-transform of the law can also be written in terms of a random variable satisfying the Marchenko–Pastur law.
For exact analysis of high dimensional regression in the proportional asymptotic regime, a convenient simplified form of this transform is often used.
Certain functions of a random variable satisfying the Marchenko–Pastur law show up in the limiting bias and variance, respectively, of ridge regression and other regularized linear regression problems, and closed-form expressions for them can be derived from the transforms above.
Application to correlation matrices
For the special case of correlation matrices, we know that σ² = 1 and λ = m/n. This bounds the probability mass over the interval defined by
λ± = (1 ± √λ)².
Since this distribution describes the spectrum of random matrices with mean 0, the eigenvalues of correlation matrices that fall inside of the aforementioned interval could be considered spurious or noise. For instance, obtaining a correlation matrix of 10 stock returns calculated over a 252 trading days period would render λ = 10/252 ≈ 0.04 and λ₊ = (1 + √(10/252))² ≈ 1.43. Thus, out of 10 eigenvalues of said correlation matrix, only the values higher than 1.43 would be considered significantly different from random.
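A short numerical check of this example (assuming NumPy; the 10 assets and 252 observations follow the figures above, and the simulated returns are pure noise by construction):

```python
import numpy as np

m, n = 10, 252                             # assets, trading days
lam = m / n
lam_plus = (1 + lam ** 0.5) ** 2           # upper Marchenko-Pastur edge (sigma^2 = 1)
print(f"lambda_plus = {lam_plus:.2f}")     # ~1.44, i.e. the ~1.43 threshold above

# eigenvalues of a pure-noise correlation matrix cluster below lambda_plus
returns = np.random.default_rng(0).standard_normal((n, m))
corr = np.corrcoef(returns, rowvar=False)
print(np.sort(np.linalg.eigvalsh(corr))[::-1].round(2))
```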
See also
Wigner semicircle distribution
Tracy–Widom distribution
References
Link to free-access pdf of Russian version
Link to free download Another free access site
Probability distributions
Random matrices | Marchenko–Pastur distribution | [
"Physics",
"Mathematics"
] | 502 | [
"Random matrices",
"Functions and mappings",
"Probability distributions",
"Mathematical objects",
"Matrices (mathematics)",
"Mathematical relations",
"Statistical mechanics"
] |
25,118,486 | https://en.wikipedia.org/wiki/Covering%20groups%20of%20the%20alternating%20and%20symmetric%20groups | In the mathematical area of group theory, the covering groups of the alternating and symmetric groups are groups that are used to understand the projective representations of the alternating and symmetric groups. The covering groups were classified in : for , the covering groups are 2-fold covers except for the alternating groups of degree 6 and 7 where the covers are 6-fold.
For example the binary icosahedral group covers the icosahedral group, an alternating group of degree 5, and the binary tetrahedral group covers the tetrahedral group, an alternating group of degree 4.
Definition and classification
A group homomorphism from D to G is said to be a Schur cover of the finite group G if:
the kernel is contained both in the center and the commutator subgroup of D, and
amongst all such homomorphisms, this D has maximal size.
The Schur multiplier of G is the kernel of any Schur cover and has many interpretations. When the homomorphism is understood, the group D is often called the Schur cover or Darstellungsgruppe.
The Schur covers of the symmetric and alternating groups were classified in (Schur 1911). The symmetric group of degree n ≥ 4 has
Schur covers of order 2⋅n! There are two isomorphism classes if n ≠ 6 and one isomorphism class if n = 6.
The alternating group of degree n has one isomorphism class of Schur cover, which has order n! except when n is 6 or 7, in which case the Schur cover has order 3⋅n!.
Finite presentations
Schur covers can be described by means of generators and relations. The symmetric group Sn has a presentation on n − 1 generators ti for i = 1, 2, ..., n − 1 and relations
titi = 1, for 1 ≤ i ≤ n − 1
ti+1titi+1 = titi+1ti, for 1 ≤ i ≤ n − 2
tjti = titj, for j ≥ i + 2.
These relations can be used to describe two non-isomorphic covers of the symmetric group. One covering group 2⋅Sn⁻ has generators z, t1, ..., tn−1 and relations:
zz = 1
titi = z, for 1 ≤ i ≤ n − 1
ti+1titi+1 = titi+1ti, for 1 ≤ i ≤ n − 2
tjti = titjz, for j ≥ i + 2.
The same group 2⋅Sn⁻ can be given the following presentation using the generators z and si given by ti or tiz according as i is odd or even:
zz = 1
sisi = z, for 1 ≤ i ≤ n − 1
si+1sisi+1 = sisi+1siz, for 1 ≤ i ≤ n − 2
sjsi = sisjz, for j ≥ i + 2.
The other covering group 2⋅Sn⁺ has generators z, t1, ..., tn−1 and relations:
zz = 1, zti = tiz, for 1 ≤ i ≤ n − 1
titi = 1, for 1 ≤ i ≤ n − 1
ti+1titi+1 = titi+1tiz, for 1 ≤ i ≤ n − 2
tjti = titjz, for j ≥ i + 2.
The same group 2⋅Sn⁺ can be given the following presentation using the generators z and si given by ti or tiz according as i is odd or even:
zz = 1, zsi = siz, for 1 ≤ i ≤ n − 1
sisi = 1, for 1 ≤ i ≤ n − 1
si+1sisi+1 = sisi+1si, for 1 ≤ i ≤ n − 2
sjsi = sisjz, for j ≥ i + 2.
Sometimes all of the relations of the symmetric group are expressed as (titj)^mij = 1, where mij are non-negative integers, namely mii = 1, mi,i+1 = 3, and mij = 2, for j ≥ i + 2. The presentation of 2⋅Sn⁻ becomes particularly simple in this form: (titj)^mij = z, and zz = 1. The group 2⋅Sn⁺ has the nice property that its generators all have order 2.
Projective representations
Covering groups were introduced by Issai Schur to classify projective representations of groups. A (complex) linear representation of a group G is a group homomorphism from the group G to a general linear group, while a projective representation is a homomorphism from G to a projective linear group. Projective representations of G correspond naturally to linear representations of the covering group of G.
The projective representations of alternating and symmetric groups are the subject of the book by Hoffman and Humphreys (1992).
Integral homology
Covering groups correspond to the second group homology group, H2(G, Z), also known as the Schur multiplier. The Schur multipliers of the alternating groups An (in the case where n is at least 4) are the cyclic groups of order 2, except in the case where n is either 6 or 7, in which case there is also a triple cover. In these cases, then, the Schur multiplier is the cyclic group of order 6, and the covering group is a 6-fold cover.
H2(An, Z) = 0 for n ≤ 3
H2(An, Z) = Z/2Z for n = 4, 5
H2(An, Z) = Z/6Z for n = 6, 7
H2(An, Z) = Z/2Z for n ≥ 8
For the symmetric group, the Schur multiplier vanishes for n ≤ 3, and is the cyclic group of order 2 for n ≥ 4:
H2(Sn, Z) = 0 for n ≤ 3
H2(Sn, Z) = Z/2Z for n ≥ 4
Construction of double covers
The double covers can be constructed as spin (respectively, pin) covers of faithful, irreducible, linear representations of An and Sn. These spin representations exist for all n, but are the covering groups only for n ≥ 4 (n ≠ 6, 7 for An). For n ≤ 3, Sn and An are their own Schur covers.
Explicitly, Sn acts on the n-dimensional space Rn by permuting coordinates (in matrices, as permutation matrices). This has a 1-dimensional trivial subrepresentation corresponding to vectors with all coordinates equal, and the complementary (n − 1)-dimensional subrepresentation (of vectors whose coordinates sum to 0) is irreducible for n ≥ 2. Geometrically, this is the symmetries of the (n − 1)-simplex, and algebraically, it yields maps Sn → O(n − 1) and An → SO(n − 1) expressing these as discrete subgroups (point groups). The special orthogonal group SO(n − 1) has a 2-fold cover by the spin group Spin(n − 1), and restricting this cover to An and taking the preimage yields a 2-fold cover 2⋅An → An. A similar construction with a pin group yields the 2-fold cover of the symmetric group: Pin±(n − 1) → O(n − 1). As there are two pin groups, there are two distinct 2-fold covers of the symmetric group, 2⋅Sn⁺ and 2⋅Sn⁻, also called S̃n and Ŝn.
Construction of triple cover for n = 6, 7
The triple covering of A6, denoted 3⋅A6, and the corresponding triple cover of S6, denoted 3⋅S6, can be constructed as symmetries of a certain set of vectors in a complex 6-space. While the exceptional triple covers of A6 and A7 extend to extensions of S6 and S7, these extensions are not central and so do not form Schur covers.
This construction is important in the study of the sporadic groups, and in much of the exceptional behavior of small classical and exceptional groups, including: construction of the Mathieu group M24, the exceptional covers of the projective unitary group U4(3) and the projective special linear group L3(4), and the exceptional double cover of the group of Lie type G2(4).
Exceptional isomorphisms
For low dimensions there are exceptional isomorphisms with the map from a special linear group over a finite field to the projective special linear group.
For n = 3, the symmetric group is SL(2, 2) ≅ PSL(2, 2) and is its own Schur cover.
For n = 4, the Schur cover of the alternating group is given by SL(2, 3) → PSL(2, 3) ≅ A4, which can also be thought of as the binary tetrahedral group covering the tetrahedral group. Similarly, GL(2, 3) → PGL(2, 3) ≅ S4 is a Schur cover, but there is a second non-isomorphic Schur cover of S4 contained in GL(2, 9) – note that 9 = 3² so this is an extension of scalars of GL(2, 3). In terms of the above presentations, GL(2, 3) ≅ Ŝ4 (a brute-force check of this case is sketched after this list).
For n = 5, the Schur cover of the alternating group is given by SL(2, 5) → PSL(2, 5) ≅ A5, which can also be thought of as the binary icosahedral group covering the icosahedral group. Though PGL(2, 5) ≅ S5, GL(2, 5) → PGL(2, 5) is not a Schur cover as the kernel is not contained in the derived subgroup of GL(2, 5). The Schur cover of PGL(2, 5) is contained in GL(2, 25) – as before, 25 = 5², so this extends the scalars.
For n = 6, the double cover of the alternating group is given by SL(2, 9) → PSL(2, 9) ≅ A6. While PGL(2, 9) is contained in the automorphism group PΓL(2, 9) of PSL(2, 9) ≅ A6, PGL(2, 9) is not isomorphic to S6, and its Schur covers (which are double covers) are not contained in nor a quotient of GL(2, 9). Note that in almost all cases Sn ≅ Aut(An), with the unique exception of A6, due to the exceptional outer automorphism of A6. Another subgroup of the automorphism group of A6 is M10, the Mathieu group of degree 10, whose Schur cover is a triple cover. The Schur covers of the symmetric group S6 itself have no faithful representations as a subgroup of GL(d, 9) for d ≤ 3. The four Schur covers of the automorphism group PΓL(2, 9) of A6 are double covers.
For n = 8, the alternating group A8 is isomorphic to SL(4, 2) = PSL(4, 2), and so SL(4, 2) → PSL(4, 2), which is 1-to-1, not 2-to-1, is not a Schur cover.
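The n = 4 case admits a short computational verification. The following minimal brute-force sketch in Python (illustrative only, not from the sources) enumerates GL(2, 3), confirms that it is a double cover of PGL(2, 3) ≅ S4 via its action on the four points of the projective line over F3, and confirms that every matrix lifting a transposition is an involution, consistent with the identification GL(2, 3) ≅ Ŝ4 above:

```python
from itertools import product

P, I2 = 3, (1, 0, 0, 1)

def mmul(a, b):
    """Multiply 2x2 matrices over F_3, stored row-major as (a, b, c, d)."""
    return ((a[0]*b[0] + a[1]*b[2]) % P, (a[0]*b[1] + a[1]*b[3]) % P,
            (a[2]*b[0] + a[3]*b[2]) % P, (a[2]*b[1] + a[3]*b[3]) % P)

G = [m for m in product(range(P), repeat=4) if (m[0]*m[3] - m[1]*m[2]) % P]
assert len(G) == 48 == 2 * 24              # |GL(2,3)| = 2|S4|: a double cover

pts = [(1, 0), (1, 1), (1, 2), (0, 1)]     # the four points of P^1(F_3)

def act(m, v):
    """Image of the projective point spanned by v, in normalized form."""
    w = ((m[0]*v[0] + m[1]*v[1]) % P, (m[2]*v[0] + m[3]*v[1]) % P)
    s = w[0] if w[0] else w[1]             # first nonzero coordinate;
    return ((w[0]*s) % P, (w[1]*s) % P)    # in F_3 every nonzero s is its own inverse

perm = {m: tuple(act(m, v) for v in pts) for m in G}
assert len(set(perm.values())) == 24       # the image PGL(2,3) has order 24 = |S4|

# transpositions = permutations of the 4 points fixing exactly two of them
lifts = [m for m, p in perm.items() if sum(p[i] == pts[i] for i in range(4)) == 2]
assert lifts and all(mmul(m, m) == I2 for m in lifts)
print("GL(2,3) is a double cover of S4 with involutive transposition lifts")
```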
Properties
Schur covers of finite perfect groups are superperfect, that is, both their first and second integral homology vanish. In particular, the double covers of An for n ≥ 4 are superperfect, except for n = 6, 7, and the six-fold covers of An are superperfect for n = 6, 7.
As stem extensions of a simple group, the covering groups of An are quasisimple groups for n ≥ 5.
References
Finite groups
Permutation groups | Covering groups of the alternating and symmetric groups | [
"Mathematics"
] | 2,230 | [
"Mathematical structures",
"Algebraic structures",
"Finite groups"
] |
25,122,527 | https://en.wikipedia.org/wiki/Combustion%20chemical%20vapor%20deposition | Combustion chemical vapor deposition (CCVD) is a chemical process by which thin-film coatings are deposited onto substrates in the open atmosphere.
History
In the 1980s initial attempts were made to improve the adhesion of metal–plastic composites in dental ceramics using flame-pyrolytically deposited silicon dioxide (SiO2). The silicoater process derived from these studies provided a starting point in the development of CCVD processes. This process was continually developed and new applications for flame-pyrolytically deposited SiO2 layers were found. At this time, the name "Pyrosil" was coined for these layers. Newer and ongoing studies deal with deposition of other materials (vide infra).
Principles and procedure
In the CCVD process, a precursor compound, usually a metal-organic compound or a metal salt, is added to the burning gas. The flame is moved closely above the surface to be coated. The high energy within the flame converts the precursors into highly reactive intermediates, which readily react with the substrate, forming a firmly adhering deposit. The microstructure and thickness of the deposited layer can be controlled by varying process parameters such as speed of substrate or flame, number of passes, substrate temperature and distance between flame and substrate. CCVD can produce coatings with orientation from preferred to epitaxial, and can produce conformal layers less than 10 nm thick. Thus, the CCVD technique is a true vapor deposition process for making thin-film coatings.
The CCVD coating process has the ability to deposit thin films in the open atmosphere using inexpensive precursor chemicals in solution, leading to continuous, production-line manufacturing. It does not require post-deposition treatment, e.g., annealing. The throughput potential is high. Coatings can be deposited at substantial temperatures; for example, alpha-alumina was deposited on Ni-20Cr at temperatures between 1050 and 1125 °C. A 1999 review article summarizes the various oxide coatings that had been deposited to date, which included Al2O3, Cr2O3, SiO2, CeO2, some spinel oxides (MgAl2O4, NiAl2O4), and yttria-stabilized zirconia (YSZ).
Remote combustion chemical vapor deposition (r-CCVD)
The so-called remote combustion chemical vapor deposition is a newer variant of the classical CCVD process. It likewise uses flames to deposit thin films; however, this method is based on other chemical reaction mechanisms and offers further possibilities for depositing layer systems which are not practicable by means of CCVD, e.g. titanium dioxide.
Applications
Pros and cons
Pros:
Cost-effective, partly because no devices for generation and maintenance of a vacuum are needed
Flexible in use due to various implementations
Cons:
Fewer layer materials compared to some low-pressure methods, limited primarily to oxides; the exceptions are some precious metals such as silver, gold and platinum
Limited to layer materials for which suitable precursors are available; however, this is the case for most metals
See also
Chemical vapor deposition
Atomic layer deposition, a more precise and conformal coating technology
Physical vapor deposition, the deposition of materials from vapor without chemical reactions
Plasma-enhanced chemical vapor deposition
References
Chemical vapor deposition
Coatings
Thin film deposition | Combustion chemical vapor deposition | [
"Chemistry",
"Materials_science",
"Mathematics"
] | 674 | [
"Thin film deposition",
"Coatings",
"Thin films",
"Chemical vapor deposition",
"Planes (geometry)",
"Solid state engineering"
] |
31,167,063 | https://en.wikipedia.org/wiki/Sievert%20chamber | A Sievert chamber is a type of ionization chamber used in radiation dose measurements. It was invented by Professor Rolf Maximilian Sievert in Sweden in the years 1920-40.
See also
Geiger–Müller tube
Ionization chamber
Dosimetry
References
Particle detectors | Sievert chamber | [
"Physics",
"Technology",
"Engineering"
] | 55 | [
"Nuclear and atomic physics stubs",
"Particle detectors",
"Measuring instruments",
"Nuclear physics"
] |
31,170,218 | https://en.wikipedia.org/wiki/Energy%20Catalyzer | The Energy Catalyzer (also called E-Cat) is a claimed cold fusion reactor devised by inventor Andrea Rossi with support from the late physicist Sergio Focardi. An Italian patent, which received a formal but not a technical examination, describes the apparatus as a "process and equipment to obtain exothermal reactions, in particular from nickel and hydrogen". Rossi and Focardi said the device worked by infusing heated hydrogen into nickel powder, transmuting it into copper and producing excess heat. An international patent application received an unfavorable international preliminary report on patentability in 2011 because it was adjudged to "offend against the generally accepted laws of physics and established theories".
The device has been the subject of demonstrations and tests several times, and commented on by various academics and others. No independent tests have ever been made, and no peer-reviewed tests of the device have ever been published. Steve Featherstone wrote in Popular Science that by the summer of 2012 Rossi's "outlandish claims" for the E-Cat seemed "thoroughly debunked".
Demonstrations
Invited guests attended several demonstrations in Bologna in 2011. The device has not been independently verified. Of a January demonstration, Discovery Channel analyst Benjamin Radford wrote that "If this all sounds fishy to you, it should," and that "In many ways cold fusion is similar to perpetual motion machines. The principles defy the laws of physics, but that doesn't stop people from periodically claiming to have invented or discovered one." According to Phys.org (11 August 2011), the demonstrations held from January to April 2011 had several flaws that compromised their credibility and Rossi had refused to perform tests that could verify his claims.
University of Bologna researchers have attended some E-Cat demonstrations, but only as observers. On 5 November 2011, the University of Bologna clarified that its researchers had not been involved in the demonstrations and that none of those took place at the university. Rossi had signed a contract with the university, but the contract was terminated and no research was done because Rossi did not make the first payment.
Skeptic Ian Bryce speculated that the E-Cat was misconnected during demonstrations, and that the power attributed to fusion is supplied to the device through the earth wire. Dick Smith offered Rossi one million dollars to demonstrate that the E-Cat system worked as claimed, while the power through the earth wire was also being measured, which Rossi refused. Peter Thieberger, a senior physicist at Brookhaven National Laboratory, said it would be very difficult for this misconnection to happen by accident and that the issue could only be cleared with a fully independent test.
On 28 October 2011 the unit was "customer tested" and was said to release 2,635 kWh during five and a half hours of self-sustained mode, an average power of 479 kilowatts – just under half the promised power of one megawatt. Independent observers were not allowed to watch the measurements or make their own, and the plant remained connected to a power supply during the test allegedly to supply power to the fans and the water pumps.
After working with Rossi, Sergio Focardi concluded that nuclear fusion reactions happen inside the Energy Catalyzer. Focardi states that the nuclear process is facilitated by a secret additive, known only by Rossi and not by him. According to Focardi, the process would be much less intense without this additive. Rossi and Focardi are then reported to have been unable to find a peer-reviewed scientific journal that would publish their paper describing how they claim the Energy Catalyzer operates. Their paper appears only in Rossi's self-published blog, Journal of Nuclear Physics.
In May 2013 a non-peer-reviewed paper describing "results obtained from evaluations of the operation of the E-Cat HT in two test runs" was submitted to the arXiv digital archive. Although the authors of the paper wrote that they were not in control of all of the aspects of the process, they concluded that, even by the most conservative of measurements, the device produced excess heat with a resulting energy density that was at least one order of magnitude, and possibly several, higher than any other conventional energy source. The test was partly funded by the Swedish energy research consortium, Elforsk. Elforsk stated on their website that the results were very remarkable, but that it was highly questionable to speculate whether nuclear transformation had occurred when no access had been provided to the reactants. In a response to the original manuscript archived on arXiv, commentators criticized the testing as not truly independent, described the report as having "characteristics more typically found in pseudo‐scientific texts", and stated that "The authors seem to jump to conclusions fitting pre‐conceived ideas where alternative explanations are possible." Astrophysicist Ethan Siegel commented at ScienceBlogs saying Rossi did not allow the reactants or products to be measured on this occasion. In the previous tests there were not enough 62Ni and 64Ni (the only two nickel isotopes which can fuse with hydrogen), at 3.6% and 0.9% respectively, in the reactants to explain the 10% copper output; these isotope levels are typical of natural copper, rather than of a fusion by-product. According to Siegel, Rossi also refused to unplug the machine while it was operating despite it being an easy way to surreptitiously power the device. He also added that the supposedly independent testers had to rely on data supplied by Rossi.
In October 2014 a non-peer-reviewed paper by the same authors as the May 2013 report describes results from evaluations in March 2014 of an upgraded version of the E-Cat which runs at higher temperatures. Unlike previous demonstrations, the test was carried out with monitoring equipment and in a laboratory not supplied by Rossi, and was run over an extended duration (32 days). However, as with the previous report, the authors were not in full control of the process; Rossi intervened during the insertion of the fuel charge, start up of the reactor, shut down of the reactor, and extraction of the spent fuel. Overall, the total excess heat measured was calculated to be well beyond that possible by any conventional, non-nuclear source. In this report, they present analyses of samples of spent fuel, concluding from the isotopes found that "nuclear reactions are therefore indicated to be present in the run process, which however is hard to reconcile with the fact that no radioactivity was detected outside the reactor during the run." Following fuel and ash isotopic analysis, the authors speculate as to isotopes of especially nickel and lithium being part of the reaction, in particular transmutation of 58Ni and 60Ni to 62Ni, and from 7Li to 6Li, through some unknown process.
Particle physicist Tommaso Dorigo commented on the 2014 test, called the isotopic measurements "startling" but he expressed deep concern about Rossi being involved in collecting the spent fuel, that the testers may have "overlooked some simple trick" and that "given the extraordinary nature of the claim… this constitutes a major flaw, which totally invalidates any conclusions one might otherwise draw."
Astrophysicist Ethan Siegel was highly critical of the test, stating that the testers were not independent, that Rossi could have tampered with the fuel samples, that the 'open calorimeter' setup used was inappropriate, and that "it’s relatively easy to fake the amount of energy being drawn through a power cord if there is a hookup to an external source."
On 31 January 2019, Rossi's company released a new product (E-Cat SK) via live video stream. The product is reported as currently available to be leased by factories as a source of heat. After viewing the video, Tom Casten noted that "The E-Cat demonstration makes giant claims of scientific breakthroughs with no validation". Similarly, the Australian physicist and aerospace engineer Ian Bryce noted that, in the video demonstration, the "inputs, outputs, and measurement points are not defined, making the results largely meaningless", that the nuclear reaction purportedly occurring within the E-Cat SK would "release much deadly radiation. Yet the meters show zero ionizing radiation and no neutrons. Fortunate for the bystanders!" and concludes, regarding Rossi's E-Cat cold fusion device, "there is no real doubt about it being a fake".
Reactions to the claims
Theoretical astrophysicist Ethan Siegel and nuclear physicist Peter Thieberger have pointed out that the claims for the E-Cat are incompatible with the fundamentals of nuclear physics. In particular, the Coulomb barrier for the claimed fusion reaction is so high that it is insurmountable anywhere in the known universe, including the interior of stars. The reaction also would create gamma radiation that would have penetrated the few inches of shielding apparently provided by the E-Cat, inducing acute radiation syndrome in persons in the vicinity of the purported demonstrations. Given numerous other scientific inconsistencies – such as the ratio of isotopes in the supposed copper "fusion product" being identical to that in natural copper – the authors argued that it is now time "for the E-Cat's proponents to provide the provable, testable, reproducible science that can answer these straightforward physics objections."
Peter Ekström, lecturer at the Department of Nuclear Physics at Lund University in Sweden, concluded in May 2011, "I am convinced that the whole story is one big scam, and that it will be revealed in less than one year." He cited the unlikelihood of a chemical reaction being strong enough to overcome the Coulomb barrier, the lack of gamma rays, the lack of explanation for the origin of the extra energy, the lack of the expected radioactivity after fusing a proton with 58Ni, the unexplained occurrence of 11% iron in the spent fuel, the 10% copper in the spent fuel having the same isotopic ratios as natural copper, and the lack of any unstable copper isotope in the spent fuel as if the reactor only produced stable isotopes. Kjell Aleklett, physics professor at Uppsala University, said the percentage of copper was too high for any known reaction of nickel, and the copper had the same isotopic ratio as natural copper. He also stated, "Known chemical reactions cannot explain the amount of energy measured. A nuclear reaction can explain the amount of energy, but the knowledge we have today says that this reaction cannot take place." Scientific skeptic James Randi, discussing the E-Cat in the context of previous cold fusion claims, suggested that it will eventually be proven to be a fraud.
Other cold fusion advocates have been more supportive of the claims. For example, in 2011 Dennis M. Bushnell, Chief Scientist at NASA Langley Research Center, described LENR (low-energy nuclear reactions) as a "promising" technology and praised the work of Rossi and Focardi.
Theoretical nuclear physicist Yeong E. Kim of Purdue University has proposed a potential theoretical explanation of the reported results of the device, but has stated that, for confirmation of this theory, "it is very important to carry out Rossi-type experiments independently." Kim had previously put forward this theory to explain the results of the now-discredited Fleischman and Pons cold fusion experiment in 1989.
Steve Featherstone wrote in Popular Science that by the summer of 2012 Rossi's "outlandish claims" for the E-Cat seemed "thoroughly debunked" and that Rossi "looked like a con man clinging to his story to the bitter end."
Patents
An application in 2008 to patent the device internationally received an unfavorable preliminary report on patentability at the World Intellectual Property Organization from the European Patent Office, noting that the description of the device was based on "general statements and speculations" and citing "numerous deficiencies in both the description and in the evidence provided to support its feasibility" as well as incompatibilities with "generally accepted laws of physics and established theories." The patent application was published on 15 October 2009.
On 6 April 2011 an application was approved by the Italian Patent and Trademark Office, which issued a patent for the invention, valid only in Italy. Under then-current Italian law, the examination of the application was more formal and less technical than for the corresponding PCT application.
In March 2014 the US Patent Office replied to Rossi's US patent application with a provisional decision to reject it, saying "The specification is objected to as inoperable. Specifically there is no evidence in the corpus of nuclear science to substantiate the claim that nickel will spontaneously ionize hydrogen gas and therefore 'absorb' the resulting proton".
Lawsuit
In January 2014 a newly formed company, Industrial Heat LLC, announced that it had acquired rights to Rossi's E-Cat technology. In April 2016, Rossi filed a lawsuit in the USA against Industrial Heat, alleging that he was not paid an $89 million licensing fee due after a one-year test period of an E-Cat unit. Industrial Heat's comment on the lawsuit was that after three years of effort they were unable to reproduce Rossi's E-Cat test results.
On 5 July 2017 the parties settled; the terms of the settlement were not released.
See also
Brilliant Light Power
References
Fringe physics
Discovery and invention controversies
Cold fusion
Italian inventions | Energy Catalyzer | [
"Physics",
"Chemistry"
] | 2,713 | [
"Nuclear fusion",
"Cold fusion",
"Nuclear physics"
] |
31,170,355 | https://en.wikipedia.org/wiki/C-terminal%20telopeptide | The C-terminal telopeptide (CTX), also known as carboxy-terminal collagen crosslinks, is the C-terminal telopeptide of fibrillar collagens such as collagen type I and type II. It is used as a biomarker in the serum to measure the rate of bone turnover. It can be useful in assisting clinicians to determine a patient's nonsurgical treatment response as well as evaluate a patient's risk of developing complications during healing following surgical intervention. The test used to detect the CTX marker is called the Serum CrossLaps, and it is more specific to bone resorption than any other test currently available.
Biomarker discovery
In the early 2000s, a link between bisphosphonate use and impaired bone physiology was noted. The strong inhibition of osteoclast function precipitated by bisphosphonate therapy can lead to inhibition of normal bone turnover, leading to impaired wound healing following trauma (such as dental surgery) or even spontaneous non-healing bone exposure. Because bisphosphonates are preferentially deposited in bone with high turnover rates, it is possible that the levels of bisphosphonate within the jaw bones are selectively elevated.
With the advent of implant dentistry, more dental patients are undergoing therapies in the oral cavity that involve bone healing, such as surgical implant placement and bone grafting procedures. In order to evaluate the risk of osteonecrosis for a patient taking bisphosphonates, use of the CTX biomarker was introduced in 2000 by Rosen.
Use as a biomarker
Although a number of surrogate biomarkers exist for measuring the metabolic products of bone resorption, the serum CTX marker was chosen because it is both highly correlated to bone turnover rate and already available for detection in a laboratory test carried out by a major lab testing corporation.
The CTX test measures for the presence and concentration of a crosslink peptide sequence of type I collagen, found, among other tissues, in bone. This specific peptide sequence relates to bone turnover because it is the portion that is cleaved by osteoclasts during bone resorption, and its serum levels are therefore proportional to osteoclastic activity at the time the blood sample is drawn. Serum levels in healthy patients not taking bisphosphonates tends to hover above 300 pg/mL.
Patients who are placed on a 6-month drug holiday exhibit marked improvements in their serum CTX values; in one study, patients showed an improvement of 155.3 pg/mL over 6 months or a rate of 25.9 pg/mL each month.
Initially, urinary CTX levels were sought, but this proved to offer no greater value than urinary NTX values: both tests suffered from large spontaneous fluctuations unrelated to therapy or intervention, and were therefore largely unreliable. In contrast, the monoclonal antibody test for detecting serum CTX levels shows minimal spontaneous fluctuation yet a marked response to antiresorptive therapy, making the serum CTX assay both highly sensitive and specific.
See also
N-terminal telopeptide
References
Biomarkers
Bones
Chemical pathology
Collagens
Octapeptides | C-terminal telopeptide | [
"Chemistry",
"Biology"
] | 666 | [
"Biochemistry",
"Chemical pathology",
"Biomarkers"
] |
31,172,470 | https://en.wikipedia.org/wiki/Law%20of%20Maximum | The Law of Maximum also known as Law of the Maximum is a principle developed by Arthur Wallace which states that total growth of a crop or a plant is proportional to about 70 growth factors. Growth will not be greater than the aggregate values of the growth factors. Without the correction of the limiting growth factors, nutrients, waters and other inputs are not fully or judicially used resulting in wasted resources.
Applications
The factors range from 0 for no growth to 1 for maximum growth. Actual growth is calculated as the product of all the growth factors. For example, if three factors had a value of 0.5, the actual growth would be:
0.5 × 0.5 × 0.5 = 0.125, which is 12.5% of optimum.
If each of the three factors had a value of 0.9 the actual growth would be:
0.9 × 0.9 × 0.9 = 0.729, which is 72.9% of optimum.
Hence the need to achieve maximal value for each factor is critical in order to obtain maximal growth.
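The same calculation in a short Python sketch (illustrative only):

```python
from math import prod

def law_of_maximum(factors):
    """Predicted growth: the product of the individual growth factors (each 0..1)."""
    return prod(factors)

print(f"{law_of_maximum([0.5, 0.5, 0.5]):.3f}")  # 0.125 -> 12.5% of optimum
print(f"{law_of_maximum([0.9, 0.9, 0.9]):.3f}")  # 0.729 -> 72.9% of optimum
```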
Demonstrations of "Law of the Maximum"
The following demonstrates the Law of the Maximum. For the various crops listed below, one, two or three factors were limiting while all the other factors were 1. When two or three factors were simultaneously limiting, the predicted growth (obtained by multiplying together the growth values measured when each factor was limiting individually) was similar to the actual growth.
Growth Factors
A. Adequacy of Nutrients
B. Non-nutrient elements and nutrients excesses that cause toxicities (stresses)
C. Interactions of the nutrients
D. Soil Conditioning requirement and physical processes
E. Additional biology
F. Weather factors
G. Management
External links
Law of the Maximum, in Handbook of soil science by Malcolm E. Sumner
References
Computational biology | Law of Maximum | [
"Biology"
] | 368 | [
"Computational biology"
] |
31,172,772 | https://en.wikipedia.org/wiki/UniPROBE | The Universal PBM Resource for Oligonucleotide-Binding Evaluation (UniPROBE) is database of DNA-binding proteins determined by protein-binding microarrays.
See also
Protein microarray
DNA-binding domain
References
External links
Official website
Biological databases
Microarrays
Proteomics | UniPROBE | [
"Chemistry",
"Materials_science",
"Biology"
] | 63 | [
"Biochemistry methods",
"Genetics techniques",
"Microtechnology",
"Microarrays",
"Bioinformatics",
"Molecular biology techniques"
] |
40,562,160 | https://en.wikipedia.org/wiki/Convention%20on%20Assistance%20in%20the%20Case%20of%20a%20Nuclear%20Accident%20or%20Radiological%20Emergency | The Convention on Assistance in the Case of a Nuclear Accident or Radiological Emergency is a 1986 treaty of the International Atomic Energy Agency (IAEA) whereby states have agreed to provide notification to the IAEA of any assistance that they can provide in the case of a nuclear accident that occurs in another state that has ratified the treaty. Along with the Convention on Early Notification of a Nuclear Accident, it was adopted in direct response to the April 1986 Chernobyl disaster.
The Convention was concluded and signed at a special session of the IAEA general conference on 26 September 1986; the special session was called because of the Chernobyl disaster, which had occurred five months before. Significantly, the Soviet Union and the Ukrainian SSR—the states that were responsible for the Chernobyl disaster—both signed the treaty at the conference and quickly ratified it. It was signed by 68 states and the Convention entered into force on 26 February 1987 after the third ratification.
As of 2016, there are 112 states that have ratified or acceded to the Convention, plus the European Atomic Energy Community, the Food and Agriculture Organization, the World Health Organization, and the World Meteorological Organization. The states that have signed the Convention but not ratified it are Afghanistan, Côte d'Ivoire, Democratic Republic of the Congo, Holy See, Niger, North Korea, Sierra Leone, Sudan, Syria, and Zimbabwe. The states that have ratified the Convention but have since denounced it and withdrawn from the agreement are Bulgaria, Hungary, Mongolia, and Poland.
References
External links
Convention on Assistance in the Case of a Nuclear Accident or Radiological Emergency, IAEA information page.
Text of the Convention.
Signatures and ratifications.
Aftermath of the Chernobyl disaster
International Atomic Energy Agency treaties
Treaties concluded in 1986
Treaties entered into force in 1986
1986 in Austria
Treaties of Albania
Treaties of Algeria
Treaties of Argentina
Treaties of Armenia
Treaties of Australia
Treaties of Austria
Treaties of Bangladesh
Treaties of the Byelorussian Soviet Socialist Republic
Treaties of Belgium
Treaties of Bolivia
Treaties of Bosnia and Herzegovina
Treaties of Botswana
Treaties of Brazil
Treaties of Burkina Faso
Treaties of Cameroon
Treaties of Canada
Treaties of Chile
Treaties of the People's Republic of China
Treaties of Colombia
Treaties of Costa Rica
Treaties of Croatia
Treaties of Cuba
Treaties of Cyprus
Treaties of the Czech Republic
Treaties of Czechoslovakia
Treaties of Denmark
Treaties of Egypt
Treaties of El Salvador
Treaties of Estonia
Treaties of Finland
Treaties of France
Treaties of Gabon
Treaties of West Germany
Treaties of East Germany
Treaties of Greece
Treaties of Guatemala
Treaties of Iceland
Treaties of India
Treaties of Indonesia
Treaties of Iran
Treaties of Ba'athist Iraq
Treaties of Ireland
Treaties of Israel
Treaties of Italy
Treaties of Japan
Treaties of Jordan
Treaties of Kazakhstan
Treaties of South Korea
Treaties of Kuwait
Treaties of Laos
Treaties of Latvia
Treaties of Lebanon
Treaties of the Libyan Arab Jamahiriya
Treaties of Liechtenstein
Treaties of Lithuania
Treaties of Luxembourg
Treaties of Malaysia
Treaties of Mali
Treaties of Mauritania
Treaties of Mauritius
Treaties of Mexico
Treaties of Monaco
Treaties of Montenegro
Treaties of Morocco
Treaties of Mozambique
Treaties of the Netherlands
Treaties of New Zealand
Treaties of Nicaragua
Treaties of Nigeria
Treaties of Norway
Treaties of Oman
Treaties of Pakistan
Treaties of Panama
Treaties of Paraguay
Treaties of Peru
Treaties of the Philippines
Treaties of Portugal
Treaties of Qatar
Treaties of Moldova
Treaties of Romania
Treaties of the Soviet Union
Treaties of Saint Vincent and the Grenadines
Treaties of Saudi Arabia
Treaties of Senegal
Treaties of Serbia and Montenegro
Treaties of Singapore
Treaties of Slovakia
Treaties of Slovenia
Treaties of South Africa
Treaties of Spain
Treaties of Sri Lanka
Treaties of Sweden
Treaties of Switzerland
Treaties of Tajikistan
Treaties of Thailand
Treaties of North Macedonia
Treaties of Tunisia
Treaties of Turkey
Treaties of the Ukrainian Soviet Socialist Republic
Treaties of the United Arab Emirates
Treaties of the United Kingdom
Treaties of Tanzania
Treaties of the United States
Treaties of Uruguay
Treaties of Vietnam
Treaties entered into by the European Atomic Energy Community
Treaties of Yugoslavia
Treaties extended to Aruba
Treaties extended to the Netherlands Antilles
Treaties entered into by the World Health Organization
Treaties of Lesotho
Treaties entered into by the Food and Agriculture Organization
Treaties entered into by the World Meteorological Organization | Convention on Assistance in the Case of a Nuclear Accident or Radiological Emergency | [
"Chemistry",
"Technology"
] | 793 | [
"Nuclear accidents and incidents",
"Aftermath of the Chernobyl disaster",
"Environmental impact of nuclear power",
"Radioactivity"
] |
40,562,466 | https://en.wikipedia.org/wiki/Joint%20Convention%20on%20the%20Safety%20of%20Spent%20Fuel%20Management%20and%20on%20the%20Safety%20of%20Radioactive%20Waste%20Management | The Joint Convention on the Safety of Spent Fuel Management and on the Safety of Radioactive Waste Management is a 1997 International Atomic Energy Agency (IAEA) treaty. It is the first treaty to address radioactive waste management on a global scale.
Content
The states that ratify the Convention agree to be governed by the convention's provisions on the storage of nuclear waste, including transport and the location, design, and operation of storage facilities.
The Convention implements meetings of the state parties that review the states' implementation of the convention. The Fourth Review Meeting was held in 2012. A summary report from the meeting, and links to the national reports from the participating countries, is available on the IAEA website.
Creation and state parties
The convention was concluded in Vienna, Austria, on 29 September 1997 and entered into force on 18 June 2001. It was signed by 42 states. As of March 2016, it has 71 state parties plus the European Atomic Energy Community. Lebanon and the Philippines have signed the convention but have not ratified it.
The following are the parties to the convention. States in bold have at least one nuclear power plant in operation.
References
1997 in Austria
2001 in the environment
International Atomic Energy Agency treaties
Radioactive waste
Treaties concluded in 1997
Treaties entered into force in 2001
Waste treaties
Treaties of Albania
Treaties of Argentina
Treaties of Armenia
Treaties of Australia
Treaties of Austria
Treaties of Belarus
Treaties of Belgium
Treaties of Bosnia and Herzegovina
Treaties of Botswana
Treaties of Brazil
Treaties of Bulgaria
Treaties of Canada
Treaties of Chile
Treaties of the People's Republic of China
Treaties of Croatia
Treaties of Cyprus
Treaties of the Czech Republic
Treaties of Denmark
Treaties of Estonia
Treaties of Finland
Treaties of France
Treaties of Gabon
Treaties of Georgia (country)
Treaties of Germany
Treaties of Ghana
Treaties of Greece
Treaties of Hungary
Treaties of Iceland
Treaties of Indonesia
Treaties of Ireland
Treaties of Italy
Treaties of Kazakhstan
Treaties of Japan
Treaties of South Korea
Treaties of Kyrgyzstan
Treaties of Latvia
Treaties of Lebanon
Treaties of Lithuania
Treaties of Luxembourg
Treaties of North Macedonia
Treaties of Malta
Treaties of Mauritania
Treaties of Mauritius
Treaties of Moldova
Treaties of Montenegro
Treaties of Morocco
Treaties of the Netherlands
Treaties of Nigeria
Treaties of Norway
Treaties of Oman
Treaties of Peru
Treaties of Poland
Treaties of Portugal
Treaties of Romania
Treaties of Russia
Treaties of Saudi Arabia
Treaties of Senegal
Treaties of Slovakia
Treaties of Slovenia
Treaties of South Africa
Treaties of Spain
Treaties of Sweden
Treaties of Switzerland
Treaties of Tajikistan
Treaties of Ukraine
Treaties of the United Arab Emirates
Treaties of the United Kingdom
Treaties of the United States
Treaties of Uruguay
Treaties of Uzbekistan
Treaties entered into by the European Atomic Energy Community
Treaties extended to Hong Kong
Treaties of Vietnam | Joint Convention on the Safety of Spent Fuel Management and on the Safety of Radioactive Waste Management | [
"Chemistry",
"Technology"
] | 508 | [
"Radioactive waste",
"Environmental impact of nuclear power",
"Radioactivity",
"Hazardous waste"
] |
40,564,571 | https://en.wikipedia.org/wiki/Amplituhedron | In mathematics and theoretical physics (especially twistor string theory), an amplituhedron is a geometric structure introduced in 2013 by Nima Arkani-Hamed and Jaroslav Trnka. It enables simplified calculation of particle interactions in some quantum field theories. In planar N = 4 supersymmetric Yang–Mills theory, also equivalent to the perturbative topological B model string theory in twistor space, an amplituhedron is defined as a mathematical space known as the positive Grassmannian.
Amplituhedron theory challenges the notion that spacetime locality and unitarity are necessary components of a model of particle interactions. Instead, they are treated as properties that emerge from an underlying phenomenon.
The connection between the amplituhedron and scattering amplitudes is a conjecture that has passed many non-trivial checks, including an understanding of how locality and unitarity arise as consequences of positivity. Research has been led by Nima Arkani-Hamed. Edward Witten described the work as "very unexpected" and said that "it is difficult to guess what will happen or what the lessons will turn out to be".
Description
When subatomic particles interact, different outcomes are possible. The evolution of the various possibilities is called a "tree", and the probability amplitude of a given outcome is called its scattering amplitude. According to the principle of unitarity, the sum of the probabilities (the squared moduli of the probability amplitudes) for every possible outcome is 1.
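For illustration, with $A_i$ denoting the probability amplitude of outcome $i$ (notation assumed here, not taken from the article), the unitarity condition just described reads:

```latex
% Unitarity: the squared moduli of the amplitudes for all
% possible outcomes of a given interaction sum to one.
\sum_{i} |A_i|^2 = 1
```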
The on-shell scattering process "tree" may be described by a positive Grassmannian, a structure in algebraic geometry analogous to a convex polytope, which generalizes the idea of a simplex in projective space. A polytope is the n-dimensional analogue of a 3-dimensional polyhedron; the values being calculated in this case are scattering amplitudes, and so the object is called an amplituhedron.
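A minimal sketch of the positivity condition behind the positive Grassmannian: a k × n matrix represents a point of the non-negative Grassmannian when all of its k × k minors (its Plücker coordinates) are non-negative. The function below is illustrative only, written for this explanation rather than drawn from any amplituhedron software:

```python
# Sketch: test whether a k-by-n matrix represents a point of the
# non-negative Grassmannian by checking that all of its maximal
# (k x k) minors -- the Pluecker coordinates -- are >= 0.
from itertools import combinations
import numpy as np

def is_totally_nonnegative(matrix: np.ndarray, tol: float = 1e-12) -> bool:
    k, n = matrix.shape
    # Every choice of k columns yields one Pluecker coordinate,
    # computed as a k x k determinant.
    for cols in combinations(range(n), k):
        if np.linalg.det(matrix[:, list(cols)]) < -tol:
            return False
    return True

# Example: a 2x4 matrix whose six 2x2 minors are all positive,
# hence a point of the non-negative Grassmannian Gr(2, 4).
C = np.array([[1.0, 1.0, 0.0, -1.0],
              [0.0, 1.0, 1.0,  1.0]])
print(is_totally_nonnegative(C))  # True
```

Running this check on random matrices quickly shows how restrictive the positivity condition is; most matrices fail it.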
Using twistor theory, Britto–Cachazo–Feng–Witten recursion (BCFW recursion) relations involved in the scattering process may be represented as a small number of twistor diagrams. These diagrams effectively provide the recipe for constructing the positive Grassmannian, i.e. the amplituhedron, which may be captured in a single equation. The scattering amplitude can thus be thought of as the volume of a certain polytope, the positive Grassmannian, in momentum twistor space.
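Schematically, and following the conventions usually attributed to the Arkani-Hamed–Trnka construction (the symbols $Z$, $C$, $Y$, and $\Omega$ below are assumptions modeled on the literature, not formulas quoted from this article), the tree amplituhedron and the form encoding the amplitude can be written:

```latex
% Sketch (assumed notation): for fixed external data Z -- an n x (k+4)
% matrix with positive maximal minors -- the tree amplituhedron is the
% image of the non-negative Grassmannian under Y = C Z:
\mathcal{A}_{n,k} \;=\; \bigl\{\, Y = C\,Z \;:\; C \in \mathrm{Gr}_{\geq 0}(k, n) \,\bigr\}
\;\subset\; \mathrm{Gr}(k,\, k+4)
% The scattering amplitude is then encoded in the canonical form
% \Omega(\mathcal{A}_{n,k}), a differential form with logarithmic
% singularities precisely on the boundary of the amplituhedron.
```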
When the volume of the amplituhedron is calculated in the planar limit of N = 4 D = 4 supersymmetric Yang–Mills theory, it describes the scattering amplitudes of particles described by this theory.
The twistor-based representation provides a recipe for constructing specific cells in the Grassmannian which assemble to form a positive Grassmannian, i.e., the representation describes a specific cell decomposition of the positive Grassmannian.
The recursion relations can be resolved in many different ways, each giving rise to a different representation, with the final amplitude expressed as a sum of on-shell processes in different ways as well. Therefore, any given on-shell representation of scattering amplitudes is not unique, but all such representations of a given interaction yield the same amplituhedron.
The twistor approach is relatively abstract. While amplituhedron theory provides an underlying geometric model, that geometric space is not physical spacetime and is itself best understood as an abstraction.
Implications
The twistor approach simplifies calculations of particle interactions. In a conventional perturbative approach to quantum field theory, such interactions may require the calculation of thousands of Feynman diagrams, most describing off-shell "virtual" particles which have no directly observable existence. In contrast, twistor theory provides an approach in which scattering amplitudes can be computed in a way that yields much simpler expressions. Amplituhedron theory calculates scattering amplitudes without referring to such virtual particles. This undermines the case for even a transient, unobservable existence for such virtual particles.
The geometric nature of the theory suggests in turn that the nature of the universe, in both classical relativistic spacetime and quantum mechanics, may be described with geometry.
Calculations can be done without assuming the quantum mechanical properties of locality and unitarity. In amplituhedron theory, locality and unitarity arise as a direct consequence of positivity. They are encoded in the positive geometry of the amplituhedron, via the singularity structure of the integrand for scattering amplitudes. Arkani-Hamed suggests this is why amplituhedron theory simplifies scattering-amplitude calculations: in the Feynman-diagrams approach, locality is manifest, whereas in the amplituhedron approach, it is implicit.
See also
Associahedron
Wilson loop
References
External links
Grassmannian Geometry of Scattering Amplitudes Workshop, December 8–12, 2014
N = 4 D = 4 super Yang–Mills theory from nLab
2013 in science
Gauge theories
Geometry
Quantum gravity
Scattering theory | Amplituhedron | [
"Physics",
"Chemistry",
"Mathematics"
] | 1,043 | [
"Scattering theory",
"Unsolved problems in physics",
"Quantum gravity",
"Scattering",
"Geometry",
"Physics beyond the Standard Model"
] |